How to find the Laurent expansion for $\frac{\exp\left(\frac{1}{z^{2}}\right)}{z-1}$ about $z=0$?
I want to find the Laurent expansion for $\frac{\exp\left(\frac{1}{z^{2}}\right)}{z-1}$ about $z=0$,
I've tried to apply this formula $\frac{1}{1-\omega}=\sum_{n=0}^{\infty }\omega^{n}$ and the usual Taylor series of the exponential function, but I don't know how to continue:
$$\begin{align}f(z)&=\frac{1}{z-1}\exp\left(\frac{1}{z^{2}}\right)\\ &=-\frac{1}{1-z}\exp\left(\frac{1}{z^{2}}\right)\\&=-\left (\sum_{n=0}^{\infty }z^{n} \right )\left ( \sum_{n=0}^{\infty}\frac{1}{n!z^{2n}} \right )\end{align}$$ Thanks in advance.
Ps: I tried applying a Cauchy product, but I think this is not appropriate.
Edit 1: If it is useful at the end of the text, the authors say that the Laurent expansion is:
$\sum_{k=-\infty }^{\infty }a_{k}z^{k}$ with $a_{k}=-e$ if $k\geq 0$, and $a_{k}=-e+1+\frac{1}{1!}+\frac{1}{2!}+\cdots+\frac{1}{(j-1)!}$ if $k=-2j$ or $k=-2j+1$, where $j=1,2,\dots$
sequences-and-series complex-analysis complex-numbers taylor-expansion laurent-series
Magic Unicorn
Starting with your $=-\left (\sum\limits_{m=0}^{\infty }z^{m} \right )\left ( \sum\limits_{n=0}^{\infty}\frac{1}{n!z^{2n}} \right )$ changing one of the $n$ to $m$, you can say the coefficient of $z^k$ is
$-\sum\limits_{n=0}^{\infty} \frac1{n!} =-e$ when $k\le 0$
$-\sum\limits_{n=k/2}^{\infty} \frac1{n!} =\sum\limits_{n=0}^{(k-2)/2} \frac1{n!}-e$ when $k\gt 0$ and even
$-\sum\limits_{n=(k+1)/2}^{\infty} \frac1{n!} =\sum\limits_{n=0}^{(k-1)/2} \frac1{n!}-e$ when $k\gt 0$ and odd
But that looks wrong to me: I do not think $$\cdots -e z^{-5} -e z^{-4} -e z^{-3} -e z^{-2} -e z^{-1} -e z^{0}+ \\(1-e)z^1 +(1-e)z^2 +(2-e)z^3 +(2-e)z^4+\left(\frac52-e\right)z^5+\cdots$$ converges when $|z| \le 1$.
Meanwhile for the same question asked elsewhere, a suggested answer was in effect $$z^{-1}+z^{-2}+2 z^{-3}+2 z^{-4}+\frac{5 }{2}z^{-5}+\frac{5}{2}z^{-6}+\cdots$$ but I do not think that converges either when $|z|\le 1$
Henry
$\begingroup$ The series you wrote for $\frac1{1-z}$ fails to converge for $|z|>1$. $\endgroup$
– Mark Viola
$\begingroup$ @MarkViola - You may be correct if you say it fails to converge for $|z|\le 1$ $\endgroup$
– Henry
$\begingroup$ Henry, $\frac1{1-z}=\sum_{n=0}^\infty z^n$ for $|z|<1$, and fails to converge for $|z|\ge 1$. $\endgroup$
$\begingroup$ @MarkViola I am more worried that $\cdots -e z^{-5} -e z^{-4} -e z^{-3} -e z^{-2} -e z^{-1} -e z^{0}$ does not converge than I am about $(1-e)z^1 +(1-e)z^2 +(2-e)z^3 +(2-e)z^4+(\frac52-e)z^5+\cdots$ $\endgroup$
$\begingroup$ I've posted a complete solution that includes the Laurent series for $|z|>1$ and the Laurent series for $0<|z|<1$. $\endgroup$
First, we can write two series for $\frac1{z-1}$ in the two regions $|z|<1$ and $|z|>1$ as
$$\frac1{z-1}=\begin{cases} -\sum_{n=0}^\infty z^n&,|z|<1\\\\ \sum_{n=1}^\infty z^{-n}&,|z|>1\tag1 \end{cases}$$
Second, the Laurent series for $e^{1/z^2}$ for $0<|z|$ is given by
$$e^{1/z^2}=\sum_{n=0}^\infty \frac{a_n}{(n/2)!}\,z^{-n}\tag2$$
where $a_n$ is the sequence such that
$$a_n=\begin{cases} 1&,n\,\text{even}\\\\ 0&,n\,\text{odd} \end{cases}$$
Putting $(1)$ and $(2)$ together reveals
$$\frac{e^{1/z^2}}{z-1}= \begin{cases} -\sum_{m=0}^\infty z^m \sum_{n=0}^\infty \frac{a_n}{(n/2)!}\,z^{-n}&,0<|z|<1\tag3\\\\ \sum_{m=1}^\infty z^{-m}\sum_{n=0}^\infty \frac{a_n}{(n/2)!}\,z^{-n}&,1<|z| \end{cases} $$
For $|z|>1$, the Laurent series of $\frac{e^{1/z^2}}{z-1}$ can be written
$$\begin{align} \frac{e^{1/z^2}}{z-1}&=\sum_{m=1}^\infty z^{-m}\sum_{n=0}^\infty \frac{a_n}{(n/2)!}\,z^{-n}\\\\ &=\sum_{n=0}^\infty \frac{a_n}{(n/2)!}\,\sum_{m=1}^\infty z^{-(n+m)}\\\\ &\overbrace{=}^{p=n+m}\sum_{n=0}^\infty \frac{a_n}{(n/2)!}\sum_{p=n+1}^\infty\,z^{-p}\\\\ &=\sum_{p=1}^\infty\left(\sum_{n=0}^{p-1} \frac{a_n}{(n/2)!}\right)\,z^{-p} \end{align}$$
For $0<|z|<1$, the Laurent series of $\frac{e^{1/z^2}}{z-1}$ can be written
$$\begin{align} \frac{e^{1/z^2}}{z-1}&=-\sum_{m=0}^\infty z^{m}\sum_{n=0}^\infty \frac{a_n}{(n/2)!}\,z^{-n}\\\\ &=-\sum_{n=0}^\infty \frac{a_n}{(n/2)!}\sum_{m=0}^\infty z^{m-n}\\\\ &\overbrace{=}^{p=m-n}-\sum_{n=0}^\infty \frac{a_n}{(n/2)!}\sum_{p=-n}^\infty z^{p}\\\\ &=-\sum_{n=0}^\infty \frac{a_n}{(n/2)!}\left(\sum_{p=-n}^{0} z^{p}+\sum_{p=1}^\infty z^{p}\right)\\\\ &=-e \sum_{p=1}^\infty z^{p}-\sum_{n=0}^\infty \frac{a_n}{(n/2)!}\sum_{p=0}^{n} z^{-p}\\\\ &=-e \sum_{p=1}^\infty z^{p}-\sum_{p=0}^{\infty}\left(\sum_{n=p}^\infty \frac{a_n}{(n/2)!} \right)z^{-p}\\\\ &=-e \sum_{p=0}^\infty z^{p}-\sum_{p=1}^{\infty}\left(\sum_{n=p}^\infty \frac{a_n}{(n/2)!} \right)z^{-p} \end{align}$$
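As a quick numerical sanity check (an editorial addition, not part of the original answer), the closed-form coefficients for $0<|z|<1$ can be compared with values of the Cauchy integral $c_k=\frac{1}{2\pi i}\oint_{|z|=r}\frac{f(z)}{z^{k+1}}\,dz$ computed on a circle of radius $r\in(0,1)$. A minimal Python sketch:

```python
import numpy as np
from math import factorial

# Laurent coefficients of f(z) = exp(1/z^2)/(z - 1) on 0 < |z| < 1,
# compared with the closed form derived above:
#   c_k = -e                             for k >= 0,
#   c_{-p} = -sum_{j >= ceil(p/2)} 1/j!  for p >= 1.
def laurent_coeff(f, k, r=0.9, N=4096):
    theta = 2 * np.pi * np.arange(N) / N
    z = r * np.exp(1j * theta)
    # c_k = (1/2pi) * integral of f(z) z^{-k} dtheta  (trapezoidal rule)
    return np.mean(f(z) * z ** (-k))

f = lambda z: np.exp(1.0 / z**2) / (z - 1.0)

for k in range(-6, 4):
    if k >= 0:
        exact = -np.e
    else:
        p = -k
        exact = -sum(1.0 / factorial(j) for j in range((p + 1) // 2, 40))
    print(f"k={k:+d}  numeric={laurent_coeff(f, k).real:+.6f}  formula={exact:+.6f}")
```

The two columns agree to many decimal places; for instance $c_0=-e\approx-2.718$ and $c_{-1}=1-e\approx-1.718$.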
Mark Viola
$\begingroup$ Thanks, but I don't understand why you took that series for $\exp(\frac{1}{z^{2}})$. $\endgroup$
– Magic Unicorn
$\begingroup$ @BrigitteEliana What is it that you don't understand? $\endgroup$
$\begingroup$ Yes, I don't understand how you determined that the Laurent series for $\exp(\frac{1}{z^{2}})$ is the one given in equation (2)? $\endgroup$
$\begingroup$ The Laurent expansion for $|z|>0$ of $e^{1/z^2}$ is given by $$e^{1/z^2}=\sum_{n=0}^\infty \frac1{n!}\,\frac1{z^{2n}}$$Now, note that all of the terms are of even inverse powers of $z$. That is, all of the coefficients of odd inverse powers are equal to $0$. So, we define $a_n$ to be $1$ when $n$ is even and $0$ when $n$ is odd. Then, $\sum_{n=0}^\infty \frac1{n!}\,\frac1{z^{2n}}=\sum_{n=0}^\infty \frac{a_n}{(n/2)!}\,\frac1{z^n}$. Is that clear now? $\endgroup$
$\begingroup$ Yes, thanks for your help. $\endgroup$
Square triangular number
In mathematics, a square triangular number (or triangular square number) is a number which is both a triangular number and a square number. There are infinitely many square triangular numbers; the first few are:
0, 1, 36, 1225, 41616, 1413721, 48024900, 1631432881, 55420693056, 1882672131025 (sequence A001110 in the OEIS)
For squares of triangular numbers, see squared triangular number.
Explicit formulas
Write Nk for the kth square triangular number, and write sk and tk for the sides of the corresponding square and triangle, so that
$N_{k}=s_{k}^{2}={\frac {t_{k}(t_{k}+1)}{2}}.$
Define the triangular root of a triangular number N = n(n + 1)/2 to be n. From this definition and the quadratic formula,
$n={\frac {{\sqrt {8N+1}}-1}{2}}.$
Therefore, N is triangular (n is an integer) if and only if 8N + 1 is square. Consequently, a square number M² is also triangular if and only if 8M² + 1 is square, that is, there are numbers x and y such that x² − 8y² = 1. This is an instance of the Pell equation x² − ny² = 1 with n = 8. All Pell equations have the trivial solution x = 1, y = 0 for any n; this is called the zeroth solution, and indexed as (x0, y0) = (1,0). If (xk, yk) denotes the kth nontrivial solution to any Pell equation for a particular n, it can be shown by the method of descent that
${\begin{aligned}x_{k+1}&=2x_{k}x_{1}-x_{k-1},\\y_{k+1}&=2y_{k}x_{1}-y_{k-1}.\end{aligned}}$
Hence there are an infinity of solutions to any Pell equation for which there is one non-trivial one, which holds whenever n is not a square. The first non-trivial solution when n = 8 is easy to find: it is (3,1). A solution (xk, yk) to the Pell equation for n = 8 yields a square triangular number and its square and triangular roots as follows:
$s_{k}=y_{k},\quad t_{k}={\frac {x_{k}-1}{2}},\quad N_{k}=y_{k}^{2}.$
Hence, the first square triangular number, derived from (3,1), is 1, and the next, derived from 6 × (3,1) − (1,0) = (17,6), is 36.
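The passage from Pell solutions to square triangular numbers is easy to automate; a short Python sketch (not part of the article):

```python
# Solutions of x^2 - 8y^2 = 1 from the recurrences
#   x_{k+1} = 2*x_k*x_1 - x_{k-1},  y_{k+1} = 2*y_k*x_1 - y_{k-1},
# starting from (x_0, y_0) = (1, 0) and (x_1, y_1) = (3, 1),
# converted via s_k = y_k, t_k = (x_k - 1)/2, N_k = y_k^2.
x_prev, y_prev = 1, 0
x, y = 3, 1
for k in range(1, 7):
    s, t, N = y, (x - 1) // 2, y * y
    assert x * x - 8 * y * y == 1 and N == t * (t + 1) // 2 == s * s
    print(f"k={k}: (x_k, y_k)=({x}, {y})  N_k={N}  s_k={s}  t_k={t}")
    x_prev, x = x, 2 * x * 3 - x_prev
    y_prev, y = y, 2 * y * 3 - y_prev
```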
The sequences Nk, sk and tk are the OEIS sequences OEIS: A001110, OEIS: A001109, and OEIS: A001108 respectively.
In 1778 Leonhard Euler determined the explicit formula[1][2]: 12–13
$N_{k}=\left({\frac {\left(3+2{\sqrt {2}}\right)^{k}-\left(3-2{\sqrt {2}}\right)^{k}}{4{\sqrt {2}}}}\right)^{2}.$
Other equivalent formulas (obtained by expanding this formula) that may be convenient include
${\begin{aligned}N_{k}&={\tfrac {1}{32}}\left(\left(1+{\sqrt {2}}\right)^{2k}-\left(1-{\sqrt {2}}\right)^{2k}\right)^{2}\\&={\tfrac {1}{32}}\left(\left(1+{\sqrt {2}}\right)^{4k}-2+\left(1-{\sqrt {2}}\right)^{4k}\right)\\&={\tfrac {1}{32}}\left(\left(17+12{\sqrt {2}}\right)^{k}-2+\left(17-12{\sqrt {2}}\right)^{k}\right).\end{aligned}}$
The corresponding explicit formulas for sk and tk are:[2]: 13
${\begin{aligned}s_{k}&={\frac {\left(3+2{\sqrt {2}}\right)^{k}-\left(3-2{\sqrt {2}}\right)^{k}}{4{\sqrt {2}}}},\\t_{k}&={\frac {\left(3+2{\sqrt {2}}\right)^{k}+\left(3-2{\sqrt {2}}\right)^{k}-2}{4}}.\end{aligned}}$
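Euler's closed form can also be evaluated directly in floating point and rounded to the nearest integer; a sketch (not part of the article) that reproduces the values above, keeping in mind that double precision is only adequate for moderate k:

```python
from math import sqrt

# N_k from Euler's explicit formula; exact integers are recovered by rounding.
def N_explicit(k):
    a = (3 + 2 * sqrt(2)) ** k
    b = (3 - 2 * sqrt(2)) ** k
    return round(((a - b) / (4 * sqrt(2))) ** 2)

print([N_explicit(k) for k in range(7)])
# [0, 1, 36, 1225, 41616, 1413721, 48024900]
```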
Pell's equation
The problem of finding square triangular numbers reduces to Pell's equation in the following way.[3]
Every triangular number is of the form t(t + 1)/2. Therefore we seek integers t, s such that
${\frac {t(t+1)}{2}}=s^{2}.$
Rearranging, this becomes
$\left(2t+1\right)^{2}=8s^{2}+1,$
and then letting x = 2t + 1 and y = 2s, we get the Diophantine equation
$x^{2}-2y^{2}=1,$
which is an instance of Pell's equation. This particular equation is solved by the Pell numbers Pk as[4]
$x=P_{2k}+P_{2k-1},\quad y=P_{2k};$
and therefore all solutions are given by
$s_{k}={\frac {P_{2k}}{2}},\quad t_{k}={\frac {P_{2k}+P_{2k-1}-1}{2}},\quad N_{k}=\left({\frac {P_{2k}}{2}}\right)^{2}.$
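The same numbers can be produced from the Pell numbers, as in this small sketch (not part of the article):

```python
# Pell numbers: P_0 = 0, P_1 = 1, P_k = 2*P_{k-1} + P_{k-2}, then
#   s_k = P_{2k}/2,  t_k = (P_{2k} + P_{2k-1} - 1)/2,  N_k = (P_{2k}/2)^2.
P = [0, 1]
while len(P) < 16:
    P.append(2 * P[-1] + P[-2])

for k in range(1, 7):
    s = P[2 * k] // 2
    t = (P[2 * k] + P[2 * k - 1] - 1) // 2
    print(f"k={k}: N_k={s * s}  s_k={s}  t_k={t}")
```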
There are many identities about the Pell numbers, and these translate into identities about the square triangular numbers.
Recurrence relations
There are recurrence relations for the square triangular numbers, as well as for the sides of the square and triangle involved. We have[5]: (12)
${\begin{aligned}N_{k}&=34N_{k-1}-N_{k-2}+2,&{\text{with }}N_{0}&=0{\text{ and }}N_{1}=1;\\N_{k}&=\left(6{\sqrt {N_{k-1}}}-{\sqrt {N_{k-2}}}\right)^{2},&{\text{with }}N_{0}&=0{\text{ and }}N_{1}=1.\end{aligned}}$
We have[1][2]: 13
${\begin{aligned}s_{k}&=6s_{k-1}-s_{k-2},&{\text{with }}s_{0}&=0{\text{ and }}s_{1}=1;\\t_{k}&=6t_{k-1}-t_{k-2}+2,&{\text{with }}t_{0}&=0{\text{ and }}t_{1}=1.\end{aligned}}$
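These recurrences are the most convenient way to tabulate the sequences exactly; a short sketch (not part of the article) that also cross-checks the defining property:

```python
# N_k, s_k, t_k from the recurrences above, with a consistency check.
N, s, t = [0, 1], [0, 1], [0, 1]
for k in range(2, 12):
    N.append(34 * N[-1] - N[-2] + 2)
    s.append(6 * s[-1] - s[-2])
    t.append(6 * t[-1] - t[-2] + 2)
    assert N[k] == s[k] ** 2 == t[k] * (t[k] + 1) // 2

print(N)  # matches the Nk column of the table in the "Numerical data" section
```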
Other characterizations
All square triangular numbers have the form b²c², where b/c is a convergent to the continued fraction expansion of √2.[6]
A. V. Sylwester gave a short proof that there are an infinity of square triangular numbers:[7] If the nth triangular number n(n + 1)/2 is square, then so is the larger 4n(n + 1)th triangular number, since:
${\frac {{\bigl (}4n(n+1){\bigr )}{\bigl (}4n(n+1)+1{\bigr )}}{2}}=4\,{\frac {n(n+1)}{2}}\,\left(2n+1\right)^{2}.$
As the product of three squares, the right hand side is square. The triangular roots tk are alternately simultaneously one less than a square and twice a square if k is even, and simultaneously a square and one less than twice a square if k is odd. Thus,
49 = 7² = 2 × 5² − 1,
288 = 17² − 1 = 2 × 12², and
1681 = 41² = 2 × 29² − 1.
In each case, the two square roots involved multiply to give sk: 5 × 7 = 35, 12 × 17 = 204, and 29 × 41 = 1189.
Additionally:
$N_{k}-N_{k-1}=s_{2k-1};$
36 − 1 = 35, 1225 − 36 = 1189, and 41616 − 1225 = 40391. In other words, the difference between two consecutive square triangular numbers is the square root of another square triangular number.
The generating function for the square triangular numbers is:[8]
${\frac {1+z}{(1-z)\left(z^{2}-34z+1\right)}}=1+36z+1225z^{2}+\cdots $
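The expansion of the generating function can be checked with a computer algebra system; a SymPy sketch (not part of the article):

```python
import sympy as sp

z = sp.symbols('z')
gf = (1 + z) / ((1 - z) * (z**2 - 34 * z + 1))
print(sp.series(gf, z, 0, 5))
# coefficients 1, 36, 1225, 41616, 1413721, ... (i.e. N_1, N_2, N_3, ...)
```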
Numerical data
As k becomes larger, the ratio tk/sk approaches √2 ≈ 1.41421356, and the ratio of successive square triangular numbers approaches (1 + √2)⁴ = 17 + 12√2 ≈ 33.970562748. The table below shows values of k between 0 and 11, which comprise all square triangular numbers up to 10¹⁶.
k Nk sk tk tk/sk Nk/Nk−1
0 0 0 0
1 1 1 1 1
2 36 6 8 1.33333333 36
3 1225 35 49 1.4 34.027777778
4 41616 204 288 1.41176471 33.972244898
5 1413721 1189 1681 1.41379310 33.970612265
6 48024900 6930 9800 1.41414141 33.970564206
7 1631432881 40391 57121 1.41420118 33.970562791
8 55420693056 235416 332928 1.41421144 33.970562750
9 1882672131025 1372105 1940449 1.41421320 33.970562749
10 63955431761796 7997214 11309768 1.41421350 33.970562748
11 2172602007770041 46611179 65918161 1.41421355 33.970562748
See also
• Cannonball problem, on numbers that are simultaneously square and square pyramidal
• Sixth power, numbers that are simultaneously square and cubical
Notes
1. Dickson, Leonard Eugene (1999) [1920]. History of the Theory of Numbers. Vol. 2. Providence: American Mathematical Society. p. 16. ISBN 978-0-8218-1935-7.
2. Euler, Leonhard (1813). "Regula facilis problemata Diophantea per numeros integros expedite resolvendi (An easy rule for Diophantine problems which are to be resolved quickly by integral numbers)". Mémoires de l'Académie des Sciences de St.-Pétersbourg (in Latin). 4: 3–17. Retrieved 2009-05-11. According to the records, it was presented to the St. Petersburg Academy on May 4, 1778.
3. Barbeau, Edward (2003). Pell's Equation. Problem Books in Mathematics. New York: Springer. pp. 16–17. ISBN 978-0-387-95529-2. Retrieved 2009-05-10.
4. Hardy, G. H.; Wright, E. M. (1979). An Introduction to the Theory of Numbers (5th ed.). Oxford University Press. p. 210. ISBN 0-19-853171-0. Theorem 244
5. Weisstein, Eric W. "Square Triangular Number". MathWorld.
6. Ball, W. W. Rouse; Coxeter, H. S. M. (1987). Mathematical Recreations and Essays. New York: Dover Publications. p. 59. ISBN 978-0-486-25357-2.
7. Pietenpol, J. L.; Sylwester, A. V.; Just, Erwin; Warten, R. M. (February 1962). "Elementary Problems and Solutions: E 1473, Square Triangular Numbers". American Mathematical Monthly. Mathematical Association of America. 69 (2): 168–169. doi:10.2307/2312558. ISSN 0002-9890. JSTOR 2312558.
8. Plouffe, Simon (August 1992). "1031 Generating Functions" (PDF). University of Quebec, Laboratoire de combinatoire et d'informatique mathématique. p. A.129. Archived from the original (PDF) on 2012-08-20. Retrieved 2009-05-11.
External links
• Triangular numbers that are also square at cut-the-knot
• Weisstein, Eric W. "Square Triangular Number". MathWorld.
• Michael Dummett's solution
Estimating mean of Normal with unknown variance and then predict the future observation
I am trying to estimate the population mean of 9 observations when the variance is unknown. I marginalized the posterior and understand that the t-distribution would give me the distribution of the population mean. I am stuck at this point. Normally, if I had to estimate something I would generate 1000 or more random samples of the given distribution and then generate point or interval estimates for its values. But the t-distribution has confused me. MATLAB's tpdf generates only 8 samples, but when I sum them up they do not add up to 1, which looks weird, so is it generating actual values? If these are actual values, then where is the distribution? How do I estimate the mean from it (substitute these values in the standardization formula to find values of the mean?).
PS: I have been studying stats recently and though I understand the mathematical part of it, I feel miserable when doing simulation in MATLAB. So I would appreciate any pointers towards learning the computational side of it.
EDIT: I understand the mathematical or derivation part of it. It is the computational simulation that confuses me. I use tpdf for the t distribution, but it needs data and degrees of freedom, and then how do I go about finding the point estimate of the mean in MATLAB? Also, tpdf needs to be translated towards my data values.
probability self-study bayesian normal-distribution t-distribution
Sudh
$\begingroup$ self-study?! Please add the tag. And explain more clearly why the $t$ distribution confuses you. It has a mean that you can use as an estimator and you can produce a credible interval by using the t pdf. $\endgroup$ – Xi'an Apr 3 '15 at 7:18
$\begingroup$ Why should they add to 1? $\endgroup$ – Glen_b -Reinstate Monica Apr 3 '15 at 8:33
$\begingroup$ Shouldn't a probability distribution integrate to 1? $\endgroup$ – Sudh Apr 3 '15 at 15:41
$\begingroup$ Values sampled from a distribution are not (usually) probabilities, and even if they were, a set of sampled probabilities do not form a probability distribution. Consider if they did - then the first value (itself a sample of size 1) would have to be "1" to make that initial "distribution" integrate to 1, so all subsequent values would have to be "0". You seem to have a very mistaken notion of what's going on, but there are several errors at once, so it's difficult to start untangling your notions. $\endgroup$ – Glen_b -Reinstate Monica Apr 4 '15 at 0:23
Quoting from our Bayesian Essentials with R book,
if $\mathscr{D}_n$ denotes a normal $\mathscr{N}\left(\mu,\sigma^{2}\right)$ sample of size $n$, if $\mu$ has a prior equal to a $\mathscr{N}\left(0,\sigma^{2}\right)$ distribution, and $\sigma^{-2}$ an exponential $\mathscr{E}(1)$ distribution, the posterior is given by \begin{align*} \pi((\mu,\sigma^2)|\mathscr{D}_n) &\propto \pi(\sigma^2)\times\pi(\mu|\sigma^2)\times f(\mathscr{D}_n|\mu,\sigma^2)\\ & \propto (\sigma^{-2})^{1/2+2}\, \exp\left\{-(\mu^2 + 2)/2\sigma^2\right\}\\ & \times (\sigma^{-2})^{n/2}\,\exp \left\{-\left(n(\mu-\overline{x})^2 + s^2 \right)/2\sigma^2\right\} \\ &\propto (\sigma^2)^{-(n+5)/2}\exp\left\{-\left[(n+1) (\mu-n\bar x/(n+1))^2+(2+s^2)\right]/2\sigma^2\right\}\\ &\propto (\sigma^2)^{-1/2}\exp\left\{-(n+1)[\mu-n\bar x/(n+1)]^2/2\sigma^2\right\}\,.\\ &\times (\sigma^2)^{-(n+2)/2-1}\exp\left\{-(2+s^2)/2\sigma^2\right\}\,. \end{align*} Therefore, the posterior on $\theta$ can be decomposed as the product of an inverse gamma distribution on $\sigma^2$, $$\mathscr{IG}((n+2)/2,[2+s^2]/2)$$ which is the distribution of the inverse of a gamma $$\mathscr{G}((n+2)/2,[2+s^2]/2)$$ random variable and, conditionally on $\sigma^2$, a normal distribution on $\mu$, $$\mathscr{N} (n\bar x/(n+1),\sigma^2/(n+1)).$$ The marginal posterior in $\mu$ is then a Student's $t$ distribution $$ \mu|\mathscr{D}_n \sim \mathscr{T}\left(n+2,n\bar x\big/(n+1),(2+s^2)\big/(n+1)(n+2)\right)\,, $$ with $n+2$ degrees of freedom, a location parameter proportional to $\bar x$ and a scale parameter almost proportional to $s$.
From this distribution, you get the expectation $n\bar x/(n+1)$ that acts as your point estimator of $\mu$. And a credible interval on $\mu$ $$\left(n\bar x/(n+1)-((2+s^2)/(n+1)(n+2))^{1/2}q_{n+2}(\alpha),n\bar x/(n+1)+((2+s^2)/(n+1)(n+2))^{1/2}q_{n+2}(\alpha)\right)$$ where $q_{n+2}(\alpha)$ is the $t_{n+2}$ quantile.
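For the computational question, no random sampling is required: one only plugs the sample summaries into the Student $t$ posterior above. A minimal Python sketch (an editorial addition, not from the original thread; the nine data values are made up, and $s^2$ here means $\sum_i(x_i-\bar x)^2$, as in the derivation); the same steps translate to MATLAB (e.g., with tinv for the quantile).

```python
import numpy as np
from scipy.stats import t

x = np.array([4.1, 5.3, 3.8, 6.0, 4.9, 5.5, 4.2, 5.8, 5.1])  # hypothetical sample of 9
n = len(x)
xbar = x.mean()
s2 = ((x - xbar) ** 2).sum()                 # s^2 = sum of squared deviations

df = n + 2                                   # degrees of freedom of the posterior
loc = n * xbar / (n + 1)                     # posterior expectation = point estimate of mu
scale = np.sqrt((2 + s2) / ((n + 1) * (n + 2)))

q = t.ppf(0.975, df)                         # t quantile for a 95% credible interval
print("point estimate:", loc)
print("95% credible interval:", (loc - q * scale, loc + q * scale))
```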
Xi'an
$\begingroup$ Hey thanks, probably I wasn't clear enough. I understand the derivation of it. It is the computational part that confuses me. I mean if I have say 10 data samples. Now how do I estimate population mean using matlab? $\endgroup$ – Sudh Apr 3 '15 at 15:44
$\begingroup$ You have all the statistical elements above, thus only to replace the three parameters of the Student't density with the values obtained from your sample. If this is a matlab question, it should be asked on Stack Overflow, not here. $\endgroup$ – Xi'an Apr 3 '15 at 19:37
$\begingroup$ Not specifically matlab but any computational method. How would it be done in practice? For example, matlab can generate random numbers from a t distribution based on degree of freedom provided but how would that correspond to my data? I mean [5 6 8] and [56 54 57] both have 2 degree of freedom but their means are in very different range. $\endgroup$ – Sudh Apr 4 '15 at 0:25
$\begingroup$ Why would you need a random generator? If I have $n=10$ observations with $\bar x=2.3$ and $s^2=312$, the posterior on $\mu$ is a $t(12,2.09,2.38)$ distribution. End of the story. $\endgroup$ – Xi'an Apr 4 '15 at 8:41
\begin{document}
\DeclareGraphicsExtensions{.jpg,.pdf,.mps,.png}
\title{
Upper and lower bounds for topological indices on unicyclic graphs}
\author[\'{A}lvaro Mart\'{\i}nez-P\'erez]{\'{A}lvaro Mart\'{\i}nez-P\'erez} \address{ Facultad CC. Sociales de Talavera, Avda. Real F\'abrica de Seda, s/n. 45600 Talavera de la Reina, Toledo, Spain} \email{[email protected]}
\author[Jos\'e M. Rodr{\'\i}guez]{Jos\'e M. Rodr{\'\i}guez$^{(1)}$} \address{Departamento de Matem\'aticas, Universidad Carlos III de Madrid, Avenida de la Universidad 30, 28911 Legan\'es, Madrid, Spain} \email{[email protected]} \thanks{$^{(1)}$ Corresponding author.}
\date{\today}
\begin{abstract} The aim of this paper is to obtain new inequalities for a large family of topological indices restricted to unicyclic graphs and to characterize the set of extremal unicyclic graphs with respect to them. This family includes variable first Zagreb, variable sum exdeg, multiplicative second Zagreb and Narumi-Katayama indices. Our main results provide upper and lower bounds for these topological indices on unicyclic graphs, fixing or not the maximum degree or the number of pendant vertices. \end{abstract}
\maketitle{}
{\it Keywords: Variable first Zagreb index, variable sum exdeg index, multiplicative second Zagreb index, Narumi-Katayama index, unicyclic graphs.}
{\it 2010 AMS Subject Classification numbers: 05C07, 92E10.}
\section{Introduction}
A topological descriptor is a single number that represents a chemical structure in graph-theoretical terms via the molecular graph. They play a significant role in mathematical chemistry especially in the QSPR/QSAR investigations. A topological descriptor is called a topological index if it correlates with a molecular property. Topological indices are used to understand physicochemical properties of chemical compounds, since they capture some properties of a molecule in a single number. Hundreds of topological indices have been introduced and studied, starting with the seminal work by Wiener \cite{Wi}. The \emph{Wiener index} of $G$ is defined as $$ W(G)=\sum_{\{ u,v\}\subseteq V(G)} d(u,v), $$ where $\{ u,v\}$ runs over every pair of vertices in $G$.
Topological indices based on end-vertex degrees of edges have been used over 40 years. Among them, several indices are recognized to be useful tools in chemical researches. Probably, the best know such descriptor is the Randi\'c connectivity index ($R$) \cite{R}.
Two of the main successors of the Randi\'c index are the first and second Zagreb indices, denoted by $M_1$ and $M_2$, respectively, and introduced by Gutman et al. in \cite{GT} and \cite{GRT}. They are defined as $$ M_1(G) = \sum_{u\in V(G)} d_u^2, \qquad M_2(G) = \sum_{uv\in E(G)} d_u d_v , \qquad $$ where $uv$ denotes the edge of the graph $G$ connecting the vertices $u$ and $v$, and $d_u$ is the degree of the vertex $u$. See the recent surveys on the Zagreb indices \cite{AGMM}, \cite{BDFG} and \cite{GMM}.
Along the paper, we will denote by $m$ and $n$, the cardinality of the sets $E(G)$ and $V(G)$, respectively.
Mili\v{c}evi\'c and Nikoli\'c defined in \cite{MN} the \emph{variable first and second Zagreb indices} as $$ M_1^{\alpha}(G) = \sum_{u\in V(G)} d_u^{\alpha}, \qquad M_2^{\alpha}(G) = \sum_{uv\in E(G)} (d_u d_v)^\alpha , $$ with $\alpha \in \mathbb{R}$.
Note that $M_1^{0}$ is $n$, $M_1^{1}$ is $2m$, $M_1^{2}$ is the first Zagreb index $M_1$, $M_1^{-1}$ is the inverse degree index $ID$ \cite{Faj}, $M_1^{3}$ is the forgotten index $F$, etc.; also, $M_2^{0}$ is $m$, $M_2^{-1/2}$ is the usual Randi\'c index, $M_2^{1}$ is the second Zagreb index $M_2$, $M_2^{-1}$ is the modified second Zagreb index \cite{NKMT}, etc.
The concept of variable molecular descriptors was proposed as a new way of characterizing heteroatoms in molecules (see \cite{R2}, \cite{R3}), but also to assess the structural differences (e.g., the relative role of carbon atoms of acyclic and cyclic parts in alkylcycloalkanes \cite{RPL}). The idea behind the variable molecular descriptors is that the variables are determined during the regression so that the standard error of estimate for a particular studied property is as small as possible.
In the paper of Gutman and Tosovic \cite{Gutman8}, the correlation abilities of $20$ vertex-degree-based topological indices occurring in the chemical literature were tested for the case of standard heats of formation and normal boiling points of octane isomers. It is remarkable to realize that the variable second Zagreb index $M_2^\alpha$ with exponent $\alpha = -1$ (and to a lesser extent with exponent $\alpha = -2$) performs significantly better than the Randi\'c index ($R=M_2^{-1/2}$).
The variable second Zagreb index is used in the structure-boiling point modeling of benzenoid hydrocarbons \cite{NMTJ}. Various properties and relations of these indices are discussed in several papers (see, e.g., \cite{AP}, \cite{LZhao}, \cite{LL}, \cite{SDGM}, \cite{ZWC}, \cite{ZZ}).
Several authors attribute the beginning of the study of unicyclic graphs to Dantzig's book on linear programming (1963), in which unicyclic graphs (called \emph{pseudotrees} there) arise in the solution of certain network flow problems \cite{Dantzig}. Since then, the study of unicyclic graphs is a main topic in graph theory. For instance, unicyclic graphs form graph-theoretic models of functions and occur in several algorithmic problems.
Although only about 1000 benzenoid hydrocarbons are known, the number of possible benzenoid hydrocarbons is huge. For instance, the number of possible benzenoid hydrocarbons with 35 benzene rings is $5.85 \times 10^{21}$ \cite{NGJ}. Therefore, the modeling of their physico-chemical properties is very important in order to predict properties of currently unknown species. The main reason for using topological indices is to obtain predictions of some property of molecules (see, e.g., \cite{Gutman7}, \cite{Estrada3}, \cite{Gutman8}, \cite{RPL}). Therefore, given some fixed parameters, a natural problem is to find the graphs that minimize (or maximize) the value of a topological index on the set of graphs satisfying the restrictions given by the parameters (see, e.g., \cite{BE1}, \cite{BE2}, \cite{Cruz}, \cite{Das4}, \cite{Du2}, \cite{Du3}, \cite{Edwards}, \cite{Gutman32}). The aim of this paper is to obtain new inequalities for a large family of topological indices restricted to unicyclic graphs, fixing or not the maximum degree or the number of pendant vertices, and to characterize the extremal unicyclic graphs with respect to them. This family includes variable first Zagreb, variable sum exdeg, multiplicative second Zagreb and Narumi-Katayama indices.
Throughout this work, $G=(V (G),E (G))$ denotes a (non-oriented) finite connected simple (without multiple edges and loops) non-trivial ($E(G) \neq \emptyset$) graph. Note that the connectivity of $G$ is not an important restriction, since if $G$ has connected components $G_1,G_2,\dots,G_r,$ then we have either $I(G) = I(G_1) + I(G_2) + \cdots + I(G_r)$ or $I(G) = I(G_1) I(G_2) \cdots I(G_r)$ for every index $I$ in this paper; furthermore, every molecular graph is connected. If $G_1$ and $G_2$ are isomorphic graphs, we write $G_1 = G_2$.
\section{Schur convexity}
Given two $n$-tuples ${\bf{x}}=(x_1,\dots, x_n)$, ${\bf{y}}=(y_1,\dots ,y_n)$ with $x_1\geq x_2\geq \cdots \geq x_n$ and $y_1\geq y_2 \geq \cdots \geq y_n$, then ${\bf{x}}$ \emph{majorizes} ${\bf{y}}$ (and we write ${\bf{x}}\succ {\bf{y}}$ or ${\bf{y}}\prec {\bf{x}}$) if \[\sum_{i=1}^k x_i \geq \sum_{i=1}^k y_i,\] for $1\leq k \leq n-1$ and \[\sum_{i=1}^n x_i = \sum_{i=1}^n y_i.\]
A function $\Phi \colon \mathbb{R}^n \to \mathbb{R}$ is called \emph{Schur-convex} if $\Phi({\bf{x}})\geq \Phi({\bf{y}})$ for all ${\bf{x}}\succ {\bf{y}}$. Similarly, the function is \emph{Schur-concave} if $\Phi({\bf{x}})\leq \Phi({\bf{y}})$ for all ${\bf{x}}\succ {\bf{y}}$. We say that $\Phi$ is \emph{strictly Schur-convex} (respectively, \emph{strictly Schur-concave}) if $\Phi({\bf{x}})> \Phi({\bf{y}})$ (respectively, $\Phi({\bf{x}})< \Phi({\bf{y}})$) for all ${\bf{x}}\succ {\bf{y}}$ with ${\bf{x}} \neq {\bf{y}}$.
If $$ \Phi({\bf{x}})=\sum_{i=1}^n f(x_i), $$ where $f$ is a convex (respectively, concave) function defined on a real interval, then $\Phi$ is Schur-convex (respectively, Schur-concave). If $f$ is strictly convex (respectively, strictly concave), then $\Phi$ is strictly Schur-convex (respectively, strictly Schur-concave).
Thus, $$M_1^{\alpha}(G) = \sum_{u\in V(G)} d_u^{\alpha},$$ is strictly Schur-convex if $\alpha\in (-\infty,0)\cup (1,\infty)$ and strictly Schur-concave if $\alpha \in (0,1)$.
\section{Unicyclic graphs}
A \emph{unicyclic} graph is a graph containing exactly one cycle \cite[p.41]{Harary}. If $G$ is a unicyclic graph with $n$ vertices, then $G$ has $n$ edges.
Given $n\geq 3$, let $S_{2n}$ be the set of $n$-tuples ${\bf{x}} = (x_1, x_2, \dots ,x_{n-1},x_n)$ with $x_i\in \mathbb{Z}^+$ such that $x_1 \geq x_2 \geq \cdots \geq x_{n} $ and $\sum_{i=1}^{n} x_i = 2n$.
\begin{remark} \label{r:31} Consider any unicyclic graph $G$ with $n$ vertices $v_1,\dots, v_n,$ ordered in such a way that if ${\bf{x}}={\bf{x}}_{_{G}}=(x_1,\dots,x_n)$ is the $n$-tuple where $x_i$ is the degree of the vertex $v_i$, then $x_{i}\geq x_{i+1}$ for every $1\leq i \leq n-1$. By the handshaking lemma, we have that ${\bf{x}}\in S_{2n}$. \end{remark}
Given any function $f : [1,\infty) \rightarrow \mathbb{R}$, let us define the index $$ I_f(G) = \sum_{u\in V(G)} f(d_u). $$ Besides, if $f$ takes positive values, then we can define the index $$ II_f(G) = \prod_{u\in V(G)} f(d_u). $$
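For readers who wish to experiment numerically, note that $I_f$ and $II_f$ depend only on the degree sequence of $G$. The following Python sketch (an illustration only, not part of the formal development; the helper names are ours) evaluates both indices from a list of vertex degrees.
\begin{verbatim}
import math

# I_f(G) = sum of f(d_u) and II_f(G) = product of f(d_u) over the vertices,
# so both can be computed from the degree sequence alone.
def I_f(degrees, f):
    return sum(f(d) for d in degrees)

def II_f(degrees, f):
    return math.prod(f(d) for d in degrees)

# Example: M_1 = I_f with f(t) = t^2 for the degree sequences
# (2,2,2,2,2) of C_5 and (4,2,2,1,1) of the triangle with two
# pendant edges attached to one of its vertices.
print(I_f([2, 2, 2, 2, 2], lambda t: t**2))   # 20
print(I_f([4, 2, 2, 1, 1], lambda t: t**2))   # 26
\end{verbatim}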
\begin{lemma} \label{l:min_max uni} If $G$ is a unicyclic graph with $n \ge 4$ vertices, then \[ (2,\dots,2) \prec {\bf{x}}_{_{G}} \! \prec (n-1,2,2,1,\dots, 1).\] \end{lemma}
\begin{proof} First of all, note that $(2,\dots,2)$ and $(n-1,2,2,1,\dots, 1)$ belong to $S_{2n}$.
Let us consider \emph{${\bf{x}}={\bf{x}}_{_{G}}=(x_1,\dots,x_n)$}. Since $G$ contains a cycle, we have $x_1\geq x_2\geq x_3\geq 2$.
Seeking for a contradiction assume that $\sum_{i=1}^k x_i<2k$ for some $1\leq k\leq n-1$. Thus, $x_k=1$ and $$ \sum_{i=1}^n x_i < 2k+n-k = n+k < 2n, $$ a contradiction. Hence, $$ \sum_{i=1}^k x_i \ge 2k = \sum_{i=1}^k 2, $$ for every $1\leq k\leq n-1$ and $$ (2,\dots,2) \prec {\bf{x}} . $$
Since $$ \sum_{i=k+1}^n x_i \ge \sum_{i=k+1}^n 1 = n-k, $$ for any $3\leq k\leq n-1$, we have $$ \sum_{i=1}^k x_i = 2n -\sum_{i=k+1}^n x_i \leq n+k = n - 1 +2+2 +\sum_{i=4}^k 1 , $$ for every $3\leq k\leq n-1$ (where, as usual, we assume the convention $\sum_{i=4}^3 1 = 0$).
If $k=2$, then we have $$ \begin{aligned} \sum_{i=3}^n x_i & = x_3 + \sum_{i=4}^n x_i \geq 2 + \sum_{i=4}^n 1 = n-1, \\ x_1 + x_2 & = 2n - \sum_{i=3}^n x_i \leq 2n-(n-1) = n-1+2. \end{aligned} $$
If $k=1$, then we have $$ \begin{aligned} \sum_{i=2}^n x_i & = x_2 +x_3+ \sum_{i=3}^n x_i \geq 2 + 2 + \sum_{i=4}^n 1 = n+1, \\ x_1 & = 2n - \sum_{i=2}^n x_i \leq 2n-(n+1) = n-1. \end{aligned} $$
Therefore, $$ {\bf{x}} \prec (n-1,2,2,1,\dots, 1). $$ \end{proof}
\begin{remark} \label{r:x} If $G$ is a unicyclic graph with $3$ vertices, then $G=C_3$, and we have $I_f(G) = 3f(2)$ and $II_f(G) = f(2)^3$. So, it suffices to deal with graphs with at least $4$ vertices. \end{remark}
\begin{theorem} \label{t:Ifuni} If $G$ is a unicyclic graph with $n \ge 4$ vertices and $f : [1,\infty) \rightarrow \mathbb{R}$ is a convex function, then \[ nf(2) \leq I_f(G) \leq f(n-1) + 2f(2) +(n-3)f(1), \] and both inequalities are attained.
\end{theorem}
\begin{theorem} \label{t:If2uni} If $G$ is a unicyclic graph with $n \ge 4$ vertices and $f : [1,\infty) \rightarrow \mathbb{R}$ is a concave function, then \[f(n-1) + 2f(2) +(n-3)f(1)\leq I_f(G) \leq n f(2), \] and both inequalities are attained.
\end{theorem}
In a similar way, we obtain the following results, since $$ \log II_f(G) = \sum_{u\in V(G)} \log f(d_u), $$ and the logarithm is an increasing function.
\begin{theorem} \label{t:IIfuni} If $G$ is a unicyclic graph with $n \ge 4$ vertices and $f : [1,\infty) \rightarrow \mathbb{R}^+$ is a function such that $\log f$ is convex, then \[f(2)^n \leq II_f(G) \leq f(n-1) f(2)^2 f(1)^{n-3} \] and both inequalities are attained.
\end{theorem}
\begin{theorem} \label{t:IIf2uni} If $G$ is a unicyclic graph with $n \ge 4$ vertices and $f : [1,\infty) \rightarrow \mathbb{R}^+$ is a function such that $\log f$ is concave, then \[f(n-1) f(2)^2 f(1)^{n-3} \leq II_f(G) \leq f(2)^n, \] and both inequalities are attained.
\end{theorem}
Recall that a vertex in a graph is \emph{pendant} if it has degree $1$. An edge is \emph{pendant} if it contains a pendant vertex.
Let $U_n^3$ be the unicyclic graph obtained from the cycle $C_3$ by attaching $n - 3$ pendant edges to the same vertex on $C_3$. Note that $U_n^3$ is the unique graph with degree sequence $(n-1,2,2,1,\dots, 1)$.
Since $t^{\alpha}$ is strictly convex if $\alpha\in (-\infty,0)\cup (1,\infty)$, Theorem \ref{t:Ifuni} allows to obtain the following result.
\begin{theorem} \label{t:m1uni} If $G$ is a unicyclic graph with $n \ge 4$ vertices and $\alpha\in (-\infty,0)\cup (1,\infty)$, then \[ n 2^\alpha \le M_1^{\alpha}(G) \leq (n-1)^\alpha + 2^{\alpha+1} + n-3 . \] Moreover, the lower bound is attained if and only if $G$ is the cycle graph and the upper bound is attained if and only if $G=U_n^3$. \end{theorem}
The bounds in Theorem \ref{t:m1uni} are proved when $n\ge 7$ with a different argument in \cite{ZZ}.
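As a quick numerical illustration of Theorem \ref{t:m1uni} (a sketch only, not part of the proofs), one can evaluate $M_1^{\alpha}$ on a few unicyclic degree sequences with $n=6$: those of $C_6$, of the graph obtained from $C_5$ by attaching one pendant vertex, and of $U_6^3$.
\begin{verbatim}
n = 6
sequences = {
    "C_6":          [2, 2, 2, 2, 2, 2],
    "C_5+pendant":  [3, 2, 2, 2, 2, 1],
    "U_6^3":        [5, 2, 2, 1, 1, 1],
}
for alpha in (2, -1):
    lower = n * 2**alpha
    upper = (n - 1)**alpha + 2**(alpha + 1) + n - 3
    for name, deg in sequences.items():
        value = sum(d**alpha for d in deg)
        assert lower - 1e-9 <= value <= upper + 1e-9
        print(alpha, name, value, (lower, upper))
\end{verbatim}
The extremal values are attained exactly by $C_6$ and $U_6^3$, as the theorem predicts.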
Since $t^{\alpha}$ is strictly concave if $\alpha \in (0,1)$, Theorem \ref{t:If2uni} allows to obtain the following result.
\begin{theorem} \label{t:m1bisuni} If $G$ is a unicyclic graph with $n \ge 4$ vertices and $\alpha\in (0,1)$, then \[ (n-1)^\alpha + 2^{\alpha+1} + n-3 \le M_1^{\alpha}(G) \leq n 2^\alpha . \] Moreover, the lower bound is attained if and only if $G=U_n^3$ and the upper bound is attained if and only if $G$ is the cycle graph. \end{theorem}
The bounds in Theorem \ref{t:m1bisuni} are proved when $n\ge 7$ with a different argument in \cite{ZZ}.
Theorem \ref{t:m1uni} has the following consequences.
\begin{corollary} \label{l:m1uni} If $G$ is a unicyclic graph with $n \ge 4$ vertices, then the following inequalities hold: $$ \begin{aligned} 4n & \leq M_1(G)\leq n^2-n+6, \\ 8n & \leq F(G)\leq (n-1)^3+n+13, \\ \frac{n}2 & \leq ID(G)\leq \frac1{n-1} +n-2. \end{aligned} $$ Moreover, each lower bound is attained if and only if $G=C_n$, and each upper bound is attained if and only if $G=U_n^3$. \end{corollary}
\begin{corollary} If $G$ is a unicyclic graph with $n$ vertices and $\alpha<1$, then $$ M_1^{\alpha}(G) = O(n). $$ \end{corollary}
In 2011, Vuki\v{c}evi\'c \cite{Vuki4} proposed the following topological index (and named it as the \emph{variable sum exdeg index}) for predicting the octanol-water partition coefficient of certain chemical compounds $$ SEI_a(G) = \sum_{uv\in E(G)} \big(a^{d_u} + a^{d_v}\big) = \sum_{u\in V(G)} d_u a^{d_u} , $$ where $a \neq 1$ is a positive real number. Among the set of $102$ topological indices \cite{http} proposed by the International Academy of Mathematical Chemistry \cite{http0} (respectively, among the discrete Adriatic indices \cite{VG}), the best topological index for predicting the octanol-water partition coefficient of octane isomers has $0.29$ (respectively $0.36$) coefficient of determination. The variable sum exdeg index allows to obtain the coefficient of determination $0.99$, for predicting the aforementioned property of octane isomers \cite{Vuki4}. Therefore, it is interesting to study the mathematical properties of the variable sum exdeg index. Vuki\v{c}evi\'c initiated the mathematical study of $SEI_a$ in \cite{Vuki5}.
If we define $f(t)=t a^{t}$, then $f''(t)=2 a^{t}\log a + t a^{t}(\log a)^2$. Hence, $f$ is strictly convex on $[1,\infty)$ if either $a>1$ or $a\le e^{-2}$, and Theorem \ref{t:Ifuni} allows to obtain the following result.
\begin{theorem} \label{t:SEI1} If $G$ is a unicyclic graph with $n \ge 4$ vertices and $a>1$ or $0<a\le e^{-2}$, then \[ n 2a^2 \le SEI_a(G) \leq (n-1)a^{n-1} + 4a^{2} + (n-3)a . \] Moreover, the lower bound is attained if and only if $G$ is the cycle graph and the upper bound is attained if and only if $G=U_n^3$. \end{theorem}
Theorem \ref{t:SEI1} was proved recently in \cite{DimitrovAli}.
The \emph{Narumi-Katayama index} is defined in \cite{NK} as $$ NK(G) = \prod_{u\in V (G)} d_u . $$ The \emph{multiplicative second Zagreb index} or \emph{modified Narumi-Katayama index} $$ NK^*(G) = \prod_{uv\in E (G)} d_u d_v = \prod_{u\in V (G)} d_u^{d_u} $$ was introduced in \cite{multZ2} and \cite{GSG}.
Since $t \log t$ is a strictly convex function and $\log t$ is a strictly concave function, theorems \ref{t:IIfuni} and \ref{t:IIf2uni} imply, respectively, the following results.
\begin{theorem} \label{t:p2uni} If $G$ is a unicyclic graph with $n \ge 4$, then \[ 4^{n} \leq NK^*(G)\leq 16(n-1)^{n-1}. \] Moreover, the lower bound is attained if and only if $G=C_n$ and the upper bound is attained if and only if $G=U_n^3$. \end{theorem}
\begin{theorem} \label{t:p1uni} If $G$ is a unicyclic graph with $n \ge 4$, then \[ 4(n-1) \leq NK(G)\leq 2^{n}. \] Moreover, the lower bound is attained if and only if $G=U_n^3$ and the upper bound is attained if and only if $G=C_n$. \end{theorem}
Theorems \ref{t:p2uni} and \ref{t:p1uni} were proved in \cite{GSG} and \cite{GG}, respectively, with different arguments.
\section{Unicyclic graphs with maximum degree $\Delta$}
Let $S_{2n}^\Delta$ be the set of $n$-tuples ${\bf{x}}\in S_{2n}$ such that $x_1=\Delta$. Note that if $G$ is a unicyclic graph with $n$ vertices and maximum degree $\Delta$ and ${\bf{x}}_{_G}$ is its degree sequence, then ${\bf{x}}_{_G}\in S_{2n}^\Delta$.
If $G$ is a unicyclic graph with $n $ vertices and maximum degree 2, then $G$ is the cycle $C_n$.
\begin{lemma}\label{l:min uni D} If $G$ is a unicyclic graph with $n \ge 4$ vertices and maximum degree $\Delta\geq 3$ and ${\bf{y}}=(y_1,y_2,\dots, y_n)$ is such that \begin{itemize}
\item $y_1=\Delta$,
\item $y_{j}= 2$ for every $1<j\leq n-\Delta+2$,
\item $y_{j}=1$ for every $n-\Delta+2< j \leq n,$ \end{itemize} then \[ \bf{y} \prec {\bf{x}}_{_{G}}.\] \end{lemma}
\begin{proof} First, note that ${\bf{y}} \in S_{2n}^\Delta$. Suppose \emph{${\bf{x}}={\bf{x}}_{_{G}}=(x_1,\dots,x_n)$}. Since $G$ contains a cycle, we have $\Delta=x_1\geq x_2\geq x_3\geq 2$.
Seeking for a contradiction assume that $\sum_{i=1}^k x_i<\Delta+2(k-1)$ for some $2\leq k\leq n-\Delta+2$. Thus, $x_k=1$ and $$ \sum_{i=1}^n x_i < \Delta+2k-2+n-k = \Delta+n+k-2 \leq 2n, $$ a contradiction. Hence, $$ \sum_{i=1}^k x_i \ge \Delta+2k-2, $$ for every $2\leq k\leq n-\Delta+2$, and it is immediate to check that $$ {\bf{y}} \prec {\bf{x}}. $$ \end{proof}
As usual, we denote by $\lfloor t \rfloor$ the lower integer part of $t\in\mathbb{R}$, i.e., the greatest integer less than or equal to $t$.
\begin{lemma}\label{l:max uni D} Let $G$ be a unicyclic graph with $n \ge 4$ vertices and maximum degree $\Delta\geq 3$ and $q=\big\lfloor \frac{n}{\Delta-1}\big\rfloor$.
If $q=1$ or $n=2\Delta-2$, let $s=n-\Delta+1$ and then \[ {\bf{x}}_{_{G}} \! \prec {\bf{z}}=(\Delta,s,2,1,\dots,1) .\]
If $q\geq 2$ and $n \neq 2\Delta-2$, let $r=n-q(\Delta-1)+1$ and ${\bf{z}}=(z_1,z_2,\dots, z_n)$ be such that \begin{itemize}
\item $z_{j}=\Delta$, for every $1\leq j\leq q$,
\item $z_{j}= r$ if $j=q+1$,
\item $z_{j}=1$ for every $q+1< j \leq n$, \end{itemize} and then \[ {\bf{x}}_{_{G}} \prec \bf{z} .\] \end{lemma}
\begin{proof} Suppose $q=1$ or $n=2\Delta-2$. Note that $(\Delta,s,2,1,\dots,1),{\bf{z}}\in S_{2n}^\Delta$. Since for any $k$ with $3\leq k\leq n-1$, $$ \sum_{i=k+1}^n x_i \ge \sum_{i=k+1}^n 1 = n-k, $$ we have that $$ \sum_{i=1}^k x_i = 2n -\sum_{i=k+1}^n x_i \leq n+k = \Delta+s+2+\sum_{i=4}^k 1, $$ for every $3\leq k\leq n-1$. Therefore, since $\Delta=x_1\geq x_2\geq x_3\geq 2$, it is readily seen that $$ {\bf{x}} \prec (\Delta,s,2,1,\dots, 1). $$
Suppose $q\geq 2$ and $n \neq 2\Delta-2$. Note that ${\bf{z}}\in S_{2n}^\Delta$. Since for any $k$ with $q+1\leq k\leq n-1$, $$ \sum_{i=k+1}^n x_i \ge \sum_{i=k+1}^n 1 = n-k, $$ we have that $$ \sum_{i=1}^k x_i = 2n -\sum_{i=k+1}^n x_i \leq n+k = q\Delta+r+\sum_{i=q+2}^k 1, $$ for every $q+1\leq k\leq n-1$. Since $\Delta=x_1\geq x_i$ for every $i>1$, it is immediate to check that $$ {\bf{x}} \prec {\bf{z}}. $$ \end{proof}
For any $n\geq 4$ and $3\leq \Delta\leq n-1$, let $\mathcal{H}_n^\Delta$ be the set of graphs obtained from the cycle $C_{k}$ with $3 \le k \le n-\Delta+2$ by attaching to the same vertex of the cycle $\Delta-2$ path graphs with lengths $m_1,m_2,\dots,m_{\Delta-2}\ge 0$ satisfying $k+m_1+m_2+\dots +m_{\Delta-2} = n$. Note that $G \in \mathcal{H}_n^\Delta$ if and only if it is a unicyclic graph with degree sequence ${\bf{y}}$.
Let $\mathcal{K}_n^\Delta$ be the set of unicyclic graphs with degree sequence ${\bf{z}}$. We show now that $\mathcal{K}_n^\Delta \neq \emptyset$. Let $q=\big\lfloor \frac{n}{\Delta-1}\big\rfloor$. If $q=1$ or $n=2\Delta-2$, then let $K_n^\Delta$ be the graph obtained from the cycle $C_3$ by attaching $\Delta-2$ pendant vertices to some vertex and $n-1-\Delta$ to other. Note that if $q=1$ or $n=2\Delta-2$, then $n-1-\Delta\leq \Delta-2$ and $\mathcal{K}_n^\Delta =\{K_n^\Delta\}$. If $q>1$, $n \neq 2\Delta-2$ and $r=n-q(\Delta-1)+1=1$, then $q \neq 2$; let $K_n^\Delta$ be the graph obtained from the cycle $C_q$ by attaching $\Delta-2$ pendant vertices to each vertex. If $q>1$, $n \neq 2\Delta-2$ and $r=n-q(\Delta-1)+1\geq 2$, let $K_n^\Delta$ be the graph obtained from the cycle $C_{q+1}$ by attaching $\Delta-2$ pendant vertices to each vertex on the cycle except one and $r-2$ pendant vertices to this last vertex. Thus, $K_n^\Delta \in \mathcal{K}_n^\Delta \neq \emptyset$ in any case.
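The construction above is easy to automate; the following sketch (an illustration only) builds the degree sequence ${\bf{z}}$ of Lemma \ref{l:max uni D} and checks that it has the right length, degree sum $2n$, and maximum degree $\Delta$.
\begin{verbatim}
# Degree sequence z from the maximizing lemma, for given n and Delta.
def z_sequence(n, Delta):
    q = n // (Delta - 1)
    if q == 1 or n == 2 * Delta - 2:
        z = [Delta, n - Delta + 1, 2] + [1] * (n - 3)
    else:
        r = n - q * (Delta - 1) + 1
        z = [Delta] * q + [r] + [1] * (n - q - 1)
    return sorted(z, reverse=True)

for n in range(4, 12):
    for Delta in range(3, n):
        z = z_sequence(n, Delta)
        assert sum(z) == 2 * n and len(z) == n and max(z) == Delta
\end{verbatim}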
Therefore, by lemmas \ref{l:min uni D} and \ref{l:max uni D}, we obtain the following.
\begin{theorem}\label{t:Ifuni_minD} If $G$ is a unicyclic graph with $n \ge 4$ vertices, maximum degree $\Delta\geq 3$ and $f : [1,\infty) \rightarrow \mathbb{R}$ is a convex function, then \[ I_f(G) \geq f(\Delta)+(n-\Delta+1)f(2)+(\Delta-2)f(1), \] and the inequality is attained if and only if $G \in \mathcal{H}_n^\Delta$. \end{theorem}
\begin{theorem}\label{t:Ifuni_maxD} If $G$ is a unicyclic graph with $n \ge 4$ vertices and maximum degree $\Delta\geq 3$, $q=\big\lfloor \frac{n}{\Delta-1}\big\rfloor$ and $f : [1,\infty) \rightarrow \mathbb{R}$ is a convex function, then \begin{itemize}
\item if $q=1$ or $n=2\Delta-2$, \[ I_f(G)\leq f(\Delta)+f(n-\Delta+1)+f(2)+(n-3)f(1), \]
\item if $q>1$ and $n \neq 2\Delta-2$, \[ I_f(G)\leq q f(\Delta)+f(n-q(\Delta-1)+1)+(n-q-1)f(1), \] \end{itemize} and the inequalities are attained if and only if $G \in \mathcal{K}_n^\Delta$. \end{theorem}
\begin{theorem}\label{t:If2uni_maxD} If $G$ is a unicyclic graph with $n \ge 4$ vertices, maximum degree $\Delta\geq 3$ and $f : [1,\infty) \rightarrow \mathbb{R}$ is a concave function, then \[ I_f(G)\leq f(\Delta)+(n-\Delta+1)f(2)+(\Delta-2)f(1), \] and the inequality is attained if and only if $G \in \mathcal{H}_n^\Delta$. \end{theorem}
\begin{theorem}\label{t:If2uni_minD} If $G$ is a unicyclic graph with $n \ge 4$ vertices, maximum degree $\Delta\geq 3$, $q=\big\lfloor \frac{n}{\Delta-1}\big\rfloor$ and $f : [1,\infty) \rightarrow \mathbb{R}$ is a concave function, then \begin{itemize}
\item if $q=1$ or $n=2\Delta-2$, \[ I_f(G)\geq f(\Delta)+f(n-\Delta+1)+f(2)+(n-3)f(1), \]
\item if $q>1$ and $n \neq 2\Delta-2$, \[ I_f(G)\geq q f(\Delta)+f(n-q(\Delta-1)+1)+(n-q-1)f(1), \] \end{itemize} and the inequalities are attained if and only if $G \in \mathcal{K}_n^\Delta$. \end{theorem}
\begin{theorem} \label{t:IIfuni_D_min} If $G$ is a unicyclic graph with $n \ge 4$ vertices and maximum degree $\Delta\geq 3$ and $f : [1,\infty) \rightarrow \mathbb{R}^+$ is a function such that $\log f$ is convex, then \[ II_f(G) \geq f(\Delta) f(2)^{n-\Delta+1}f(1)^{\Delta-2}, \] and the inequality is attained if and only if $G \in \mathcal{H}_n^\Delta$.
\end{theorem}
\begin{theorem} \label{t:IIfuni_D_max} If $G$ is a unicyclic graph with $n \ge 4$ vertices and maximum degree $\Delta\geq 3$, $q=\big\lfloor \frac{n}{\Delta-1}\big\rfloor$ and $f : [1,\infty) \rightarrow \mathbb{R}^+$ is a function such that $\log f$ is convex, then \begin{itemize}
\item if $q=1$ or $n=2\Delta-2$, $$II_f(G) \leq f(\Delta) f(n-\Delta+1)f(2)f(1)^{n-3},$$
\item if $q>1$ and $n \neq 2\Delta-2$, $$II_f(G) \leq f(\Delta)^q f(n-q(\Delta-1)+1) f(1)^{n-q-1},$$ \end{itemize} and both inequalities are attained if and only if $G \in \mathcal{K}_n^\Delta$.
\end{theorem}
\begin{theorem} \label{t:IIf2uni_D_max} If $G$ is a unicyclic graph with $n \ge 4$ vertices and maximum degree $\Delta\geq 3$ and $f : [1,\infty) \rightarrow \mathbb{R}^+$ is a function such that $\log f$ is concave, then \[ II_f(G) \leq f(\Delta) f(2)^{n-\Delta+1}f(1)^{\Delta-2}, \] and the inequality is attained if and only if $G \in \mathcal{H}_n^\Delta$.
\end{theorem}
\begin{theorem} \label{t:IIf2uni_D_min} If $G$ is a unicyclic graph with $n \ge 4$ vertices and maximum degree $\Delta\geq 3$, $q=\big\lfloor \frac{n}{\Delta-1}\big\rfloor$ and $f : [1,\infty) \rightarrow \mathbb{R}^+$ is a function such that $\log f$ is concave, then \begin{itemize}
\item if $q=1$ or $n=2\Delta-2$, $$II_f(G) \geq f(\Delta) f(n-\Delta+1)f(2)f(1)^{n-3},$$
\item if $q>1$ and $n \neq 2\Delta-2$, $$II_f(G) \geq f(\Delta)^q f(n-q(\Delta-1)+1) f(1)^{n-q-1},$$ \end{itemize} and both inequalities are attained if and only if $G \in \mathcal{K}_n^\Delta$.
\end{theorem}
Since $t^{\alpha}$ is strictly convex if $\alpha\in (-\infty,0)\cup (1,\infty)$ and strictly concave if $\alpha\in (0,1)$, theorems \ref{t:Ifuni_minD}, \ref{t:Ifuni_maxD}, \ref{t:If2uni_maxD} and \ref{t:If2uni_minD} allow to obtain the following results.
\begin{theorem} \label{t:m1uni_D_min} If $G$ is a unicyclic graph with $n \ge 4$ vertices and maximum degree $\Delta\geq 3$ and $\alpha\in (-\infty,0)\cup (1,\infty)$, then \[ M_1^{\alpha}(G) \ge \Delta^\alpha + (n-\Delta+1)2^\alpha + \Delta-2 . \] Moreover, the lower bound is attained if and only if $G \in \mathcal{H}_n^\Delta$. \end{theorem}
Note that Theorem \ref{t:m1uni_D_min} generalizes \cite[Theorem 4.1]{GDA}.
\begin{theorem} \label{t:m1uni_D_max} If $G$ is a unicyclic graph with $n \ge 4$ vertices and maximum degree $\Delta\geq 3$, $q=\big\lfloor \frac{n}{\Delta-1}\big\rfloor$ and $\alpha\in (-\infty,0)\cup (1,\infty)$, then \begin{itemize}
\item if $q=1$ or $n=2\Delta-2$, $$M_1^{\alpha}(G)\leq \Delta^\alpha+(n-\Delta+1)^\alpha+2^\alpha+n-3,$$
\item if $q>1$ and $n \neq 2\Delta-2$, $$M_1^{\alpha}(G)\leq q\Delta^\alpha+\big(n-q(\Delta-1)+1\big)^\alpha+n-q-1.$$ \end{itemize} Moreover, the upper bound is attained if and only if $G \in \mathcal{K}_n^\Delta$. \end{theorem}
\begin{theorem} \label{t:m1bisuni_D_max} If $G$ is a unicyclic graph with $n \ge 4$ vertices and maximum degree $\Delta\geq 3$ and $\alpha\in (0,1)$, then \[ M_1^{\alpha}(G) \le \Delta^\alpha + (n-\Delta+1)2^\alpha + \Delta-2 . \] Moreover, the upper bound is attained if and only if $G \in \mathcal{H}_n^\Delta$. \end{theorem}
\begin{theorem} \label{t:m1bisuni_D_min} If $G$ is a unicyclic graph with $n \ge 4$ vertices and maximum degree $\Delta\geq 3$, $q=\big\lfloor \frac{n}{\Delta-1}\big\rfloor$ and $\alpha\in (0,1)$, then \begin{itemize}
\item if $q=1$ or $n=2\Delta-2$, $$M_1^{\alpha}(G)\geq \Delta^\alpha+(n-\Delta+1)^\alpha+2^\alpha+n-3,$$
\item if $q>1$ and $n \neq 2\Delta-2$, $$M_1^{\alpha}(G)\geq q\Delta^\alpha+\big(n-q(\Delta-1)+1\big)^\alpha+n-q-1.$$ \end{itemize} Moreover, the lower bound is attained if and only if $G \in \mathcal{K}_n^\Delta$. \end{theorem}
Theorems \ref{t:m1uni_D_min} and \ref{t:m1uni_D_max} have the following consequences.
\begin{corollary} \label{c:m1uni_D_min} If $G$ is a unicyclic graph with $n \ge 4$ vertices and maximum degree $\Delta\geq 3$, then the following inequalities hold: $$ \begin{aligned}
& M_1(G)\geq \Delta^2+4n-3\Delta+2, \\
& F(G)\geq \Delta^3+8n-7\Delta+6, \\
& ID(G)\geq \frac{1}{\Delta}+\frac12 (n+\Delta-3). \end{aligned} $$ Moreover, each lower bound is attained if and only if $G \in \mathcal{H}_n^\Delta$. \end{corollary}
\begin{corollary} \label{c:m1uni_D_max} If $G$ is a unicyclic graph with $n \ge 4$ vertices and maximum degree $\Delta\geq 3$ and $q=\big\lfloor \frac{n}{\Delta-1}\big\rfloor$, then the following inequalities hold: \begin{itemize}
\item If $q=1$ or $n=2\Delta-2$, $$ \begin{aligned}
& M_1(G)\leq \Delta^2+(n-\Delta+1)^2+n+1, \\
& F(G)\leq \Delta^3+(n-\Delta+1)^3+n+5, \\
& ID(G)\leq \frac{1}{\Delta}+\frac{1}{n-\Delta+1}+n-\frac52. \end{aligned} $$
\item If $q>1$ and $n \neq 2\Delta-2$, $$ \begin{aligned}
& M_1(G)\leq q\Delta^2+(n-q(\Delta-1)+1)^2+n-q-1, \\
& F(G)\leq q\Delta^3+(n-q(\Delta-1)+1)^3+n-q-1, \\
& ID(G)\leq \frac{q}{\Delta}+\frac{1}{n-q(\Delta-1)+1}+ n-q-1. \end{aligned} $$ \end{itemize} Moreover, each upper bound is attained if and only if $G \in \mathcal{K}_n^\Delta$. \end{corollary}
Since $t a^{t}$ is strictly convex on $[1,\infty)$ if $a>1$ or $a\le e^{-2}$, theorems \ref{t:Ifuni_minD} and \ref{t:Ifuni_maxD} allow to obtain the following results.
\begin{theorem} \label{t:SEI2} If $G$ is a unicyclic graph with $n \ge 4$ vertices and maximum degree $\Delta\geq 3$ and $a>1$ or $0<a\le e^{-2}$, then \[ SEI_a(G) \ge \Delta\, a^\Delta + (n-\Delta+1)\,2a^{2} + (\Delta-2)\,a . \] Moreover, the lower bound is attained if and only if $G \in \mathcal{H}_n^\Delta$. \end{theorem}
\begin{theorem} \label{t:SEI3} If $G$ is a unicyclic graph with $n \ge 4$ vertices and maximum degree $\Delta\geq 3$, $q=\big\lfloor \frac{n}{\Delta-1}\big\rfloor$ and $a>1$ or $0<a\le e^{-2}$, then \begin{itemize}
\item if $q=1$ or $n=2\Delta-2$, $$SEI_a(G) \leq \Delta\, a^\Delta + (n-\Delta+1)\,a^{n-\Delta+1} + 2 a^2 + (n-3)\,a,$$
\item if $q>1$ and $n \neq 2\Delta-2$, $$SEI_a(G) \leq q\,\Delta\, a^\Delta +\big(n-q(\Delta-1)+1\big)\,a^{n-q(\Delta-1)+1}+(n-q-1)\,a.$$ \end{itemize} Moreover, the upper bound is attained if and only if $G \in \mathcal{K}_n^\Delta$. \end{theorem}
Since $t \log t$ is a strictly convex function and $\log t$ is a strictly concave function, theorems \ref{t:IIfuni_D_min}, \ref{t:IIfuni_D_max}, \ref{t:IIf2uni_D_max} and \ref{t:IIf2uni_D_min} imply the following results.
\begin{theorem} \label{t:p2uni_D_min} If $G$ is a unicyclic graph with $n \ge 4$ and maximum degree $\Delta\geq 3$, then \[ NK^*(G)\geq \Delta^\Delta\, 4^{n-\Delta+1}. \] Moreover, the lower bound is attained if and only if $G \in \mathcal{H}_n^\Delta$. \end{theorem}
\begin{theorem} \label{t:p2uni_D_max} If $G$ is a unicyclic graph with $n \ge 4$ and maximum degree $\Delta\geq 3$ and $q=\big\lfloor \frac{n}{\Delta-1}\big\rfloor$, then \begin{itemize}
\item if $q=1$ or $n=2\Delta-2$, $$NK^*(G)\leq 4\Delta^\Delta(n-\Delta+1)^{n-\Delta+1},$$
\item if $q>1$ and $n \neq 2\Delta-2$, $$NK^*(G)\leq \Delta^{q\Delta} (n-q(\Delta-1)+1)^{n-q(\Delta-1)+1}.$$ \end{itemize} Moreover, the upper bound is attained if and only if $G \in \mathcal{K}_n^\Delta$. \end{theorem}
\begin{theorem} \label{t:p1uni_D_max} If $G$ is a unicyclic graph with $n \ge 4$ and maximum degree $\Delta\geq 3$, then \[ NK(G)\leq \Delta\, 2^{n-\Delta+1}. \] Moreover, the upper bound is attained if and only if $G \in \mathcal{H}_n^\Delta$. \end{theorem}
\begin{theorem} \label{t:p1uni_D_min} If $G$ is a unicyclic graph with $n \ge 4$ and maximum degree $\Delta\geq 3$, and $q=\big\lfloor \frac{n}{\Delta-1}\big\rfloor$, then \begin{itemize}
\item if $q=1$ or $n=2\Delta-2$, $$NK(G)\geq 2\Delta(n-\Delta+1),$$
\item if $q>1$ and $n \neq 2\Delta-2$, $$NK(G)\geq \Delta^q(n-q(\Delta-1)+1).$$ \end{itemize} Moreover, the lower bound is attained if and only if $G \in \mathcal{K}_n^\Delta$. \end{theorem}
\section{Unicyclic graphs with $p$ pendant vertices}
Given $n\geq 3$, let $S_{2n,p}$ be the set of $n$-tuples ${\bf{x}}\in S_{2n}$ such that $x_j=1$ if and only if $j>n-p$. Note that if $G$ is a unicyclic graph with $n$ vertices and $p$ pendant vertices and ${\bf{x}}_{_G}$ is its degree sequence, then ${\bf{x}}_{_G}\in S_{2n,p}$.
\begin{lemma}\label{l:min_max uni_p} If $G$ is a unicyclic graph with $n \ge 4$ vertices and $1\leq p\leq n-3$ pendant vertices, $m=\big\lfloor \frac{2n-p}{n-p}\big\rfloor$, $t=2n-p-m(n-p)$, ${\bf{a}}=(a_1,a_2,\dots, a_n)$ is such that \begin{itemize}
\item $a_j=m+1$ for every $1\leq j \leq t$,
\item $a_{j}= m$ for every $t<j \leq n-p$,
\item $a_{j}=1$ for every $n-p< j \leq n,$ \end{itemize} and ${\bf{b}}=(b_1,b_2,\dots, b_n)$ is such that \begin{itemize}
\item $b_1=p+2$,
\item $b_{j}= 2$ for every $1<j \leq n-p$,
\item $b_{j}=1$ for every $n-p< j \leq n,$ \end{itemize} then \[ {\bf{a}} \prec {\bf{x}}_{_{G}} \prec {\bf{b}}.\] \end{lemma}
\begin{proof} First, note that ${\bf{a}},{\bf{b}}\in S_{2n,p}$, $m \ge 2$ and $0 \le t \le n-p$.
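Indeed, the coordinates of ${\bf{a}}$ and of ${\bf{b}}$ are non-increasing, equal to $1$ exactly when $j>n-p$, and, by the definition of $t$,
\[
\sum_{j=1}^{n}a_j=t(m+1)+(n-p-t)m+p=m(n-p)+t+p=2n,
\qquad
\sum_{j=1}^{n}b_j=(p+2)+2(n-p-1)+p=2n;
\]
moreover $m\ge 2$ since $\frac{2n-p}{n-p}=2+\frac{p}{n-p}\ge 2$.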
Suppose ${\bf{x}}={\bf{x}}_{_{G}}=(x_1,\dots,x_n)\in S_{2n,p}$. Seeking a contradiction, assume that $$\sum_{i=1}^{k}x_i< k(m+1)$$ for some $k\leq t$. Then, $x_k\leq m$ and $$\sum_{i=1}^{n}x_i< k(m+1) +(n-p-k)m+p\leq 2n,$$ a contradiction. Therefore, $$\sum_{i=1}^{k}x_i\geq k(m+1)$$ for every $k\leq t$.
Now assume that $$\sum_{i=1}^{k}x_i< t(m+1)+(k-t)m=km+t$$ for some $t<k\leq n-p$. Then, $x_k\leq m$ and $$\sum_{i=1}^{n}x_i< km+t +(n-p-k)m+p=2n,$$ a contradiction. Therefore, $$\sum_{i=1}^{k}x_i\geq t(m+1)+(k-t)m$$ for every $t<k\leq n-p$ and $$\sum_{i=1}^{k}x_i\geq \sum_{i=1}^{k}a_i$$ for every $1 \le k < n$. Thus, \[{\bf{a}} \prec {\bf{x}}_{_{G}}. \]
Since $x_2\geq \cdots \ge x_{n-p}\ge 2$ and $x_j=1$ if $j>n-p$, \[\sum_{i=k+1}^{n}x_i\geq \sum_{i=k+1}^{n}b_i\] for every $1 \le k < n$. Thus, \[\sum_{i=1}^{k}x_i=2n-\sum_{i=k+1}^{n}x_i\leq 2n-\sum_{i=k+1}^{n}b_i =\sum_{i=1}^{k}b_i\] for every $1 \le k < n$ and \[{\bf{x}}_{_{G}} \prec {\bf{b}}. \] \end{proof}
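To illustrate Lemma \ref{l:min_max uni_p} with a concrete case (an example added for the reader's convenience), take $n=6$ and $p=3$, so that $m=\big\lfloor \frac{2n-p}{n-p}\big\rfloor=3$ and $t=2n-p-m(n-p)=0$, hence ${\bf{a}}=(3,3,3,1,1,1)$ and ${\bf{b}}=(5,2,2,1,1,1)$. Both extremes are realized by unicyclic graphs with $3$ pendant vertices: the triangle with one pendant vertex attached to each cycle vertex has degree sequence ${\bf{a}}$, while the triangle with the three pendant vertices attached to the same cycle vertex has degree sequence ${\bf{b}}$. An intermediate graph, such as the triangle with two pendant vertices on one cycle vertex and one on another, has degree sequence ${\bf{x}}_{_{G}}=(4,3,2,1,1,1)$, and one checks directly that ${\bf{a}} \prec {\bf{x}}_{_{G}} \prec {\bf{b}}$.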
Lemma \ref{l:min_max uni_p} has the following consequences.
\begin{theorem} \label{t:Ifuni_p} If $G$ is a unicyclic graph with $n \ge 4$ vertices and $1\leq p\leq n-3$ pendant vertices, $m=\big\lfloor \frac{2n-p}{n-p}\big\rfloor$, $t=2n-p-m(n-p)$ and $f : [2,\infty) \rightarrow \mathbb{R}$ is a convex function, then \[tf(m+1)+(n-p-t)f(m)+pf(1) \leq I_f(G) \leq f(p+2) + (n-p-1)f(2) +pf(1), \] and both inequalities are attained. \end{theorem}
\begin{proof} Since $$ I_f(G) = pf(1) + \sum_{u\in V(G), d_u \ge 2} f(d_u) , $$ and $f$ is a convex function on $[2,\infty)$, Lemma \ref{l:min_max uni_p} gives the inequalities. \end{proof}
\begin{theorem} \label{t:If2uni_p} If $G$ is a unicyclic graph with $n \ge 4$ vertices and $1\leq p\leq n-3$ pendant vertices, $m=\big\lfloor \frac{2n-p}{n-p}\big\rfloor$, $t=2n-p-m(n-p)$ and $f : [2,\infty) \rightarrow \mathbb{R}$ is a concave function, then \[f(p+2) + (n-p-1)f(2) +pf(1) \leq I_f(G) \leq tf(m+1)+(n-p-t)f(m)+pf(1) , \] and both inequalities are attained. \end{theorem}
In a similar way, we obtain the following results.
\begin{theorem} \label{t:IIfuni_p} If $G$ is a unicyclic graph with $n \ge 4$ vertices and $1\leq p\leq n-3$ pendant vertices, $m=\big\lfloor \frac{2n-p}{n-p}\big\rfloor$, $t=2n-p-m(n-p)$ and $f : [2,\infty) \rightarrow \mathbb{R}^+$ is a function such that $\log f$ is convex, then \[f(m+1)^tf(m)^{n-p-t}f(1)^p \leq II_f(G) \leq f(p+2) f(2)^{n-p-1} f(1)^{p}, \] and both inequalities are attained.
\end{theorem}
\begin{theorem} \label{t:IIf2uni_p} If $G$ is a unicyclic graph with $n \ge 4$ vertices and $1\leq p\leq n-3$ pendant vertices, $m=\big\lfloor \frac{2n-p}{n-p}\big\rfloor$, $t=2n-p-m(n-p)$ and $f : [2,\infty) \rightarrow \mathbb{R}^+$ is a function such that $\log f$ is concave, then \[f(p+2) f(2)^{n-p-1} f(1)^{p} \leq II_f(G) \leq f(m+1)^tf(m)^{n-p-t}f(1)^p, \] and both inequalities are attained.
\end{theorem}
Therefore, if $G$ is a unicyclic graph with $n \ge 4$ vertices and $1\leq p\leq n-3$ pendant vertices, the corresponding sharp upper and lower bounds for $M_1^\alpha(G)$, $M_1(G)$, $F(G)$, $ID(G)$, $SEI_a(G)$, $NK^*(G)$ and $NK(G)$ can be easily computed as in the previous sections.
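For instance, applying Theorem \ref{t:Ifuni_p} to the convex function $f(t)=t^2$ gives the sharp bounds
\[
t(m+1)^2+(n-p-t)m^2+p \,\le\, M_1(G) \,\le\, (p+2)^2+4(n-p-1)+p ,
\]
and the other indices are obtained in the same way by choosing $f$ appropriately.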
The lower bound obtained in this way for $M_1(G)$ is proved in \cite{GKJA} with different arguments.
We now state only the inequalities for $SEI_a(G)$, since in this case we obtain them for a larger range of values of the parameter $a$.
\begin{theorem} \label{t:SEI5} If $G$ is a unicyclic graph with $n \ge 4$ vertices and $1\leq p\leq n-3$ pendant vertices, $a>1$ or $0<a\le e^{-1}$, $m=\big\lfloor \frac{2n-p}{n-p}\big\rfloor$ and $t=2n-p-m(n-p)$, then \[ t(m+1)a^{m+1}+(n-p-t)m a^{m} +p a \leq SEI_a(G) \leq (p+2)a^{p+2} + (n-p-1)2a^2 +pa, \] and both inequalities are attained. \end{theorem}
\begin{proof} If we define $f(t)=t a^{t}$, then $f''(t)=2 a^{t}\log a + t a^{t}(\log a)^2=a^{t}\log a\,(2+t\log a)$. Hence, $f$ is strictly convex on $[2,\infty)$ if either $a>1$ or $0<a\le e^{-1}$. Therefore, Theorem \ref{t:Ifuni_p} gives the inequalities. \end{proof}
\end{document} | arXiv |
\begin{document}
\title{Flat traces for a random partially expanding map}
\date\today \author{Luc Gossart} \address{Institut Fourier, 100, rue des maths BP74 38402 Saint-Martin d'Heres France\footnote{2010 Mathematics Subject Classification. 37D30 Partially hyperbolic systems and dominated splittings, 37E10 Maps of the circle, 60F05 Central limit and other weak theorems, 37C30 Zeta functions, (Ruelle-Frobenius) transfer operators, and other functional analytic techniques in dynamical systems. }}
\maketitle \begin{abstract}
We consider the skew-product of an expanding map $E$ on the circle $\ma T$ with an almost surely $\mc C^k$ random perturbation $\tau=\tau_0+\delta\tau$ of a deterministic function $\tau_0$:
\[\fonction{F}{\ma T \times \ma R}{\ma T \times \ma R}{(x,y)}{(E(x), y+\tau(x))} .\]
The associated transfer operator $\mc L:u \in C^k (\ma T \times \ma R) \mapsto u\circ F$ can be decomposed with respect to frequency in the $y$ variable into a family of operators acting on functions on the circle:
\[\fonction{\mc L_{\xi}}{C^k(\ma T)}{C^k(\ma T)}{u}{e^{i\xi\tau}u\circ E}.\]
We show that the flat traces of $\mc L^n_{\xi}$ behave as normal distributions in the semiclassical limit $n, \xi\to\infty$ up to the Ehrenfest time $n\leq c_k\log\xi$. \end{abstract}
\section{Introduction}
This paper focuses on the distribution of the flat traces of iterates of the transfer operator of a simple example of partially expanding map. It is motivated by the Bohigas-Giannoni-Schmidt \cite{bohigas1984characterization} conjecture in quantum chaos (see below).\newline In chaotic dynamics, the transfer operator is an object of primary importance, linked to the asymptotics of the correlations. The collection of poles of its resolvent, called the Ruelle-Pollicott spectrum, can be defined as the spectrum of the transfer operator in appropriate Banach spaces (see \cite{ruelle1976zeta} for analytic expanding maps, and \cite{kitaev1999fredholm}, \cite{blank2002ruelle}, \cite{baladi2007anisotropic}, \cite{baladi2008dynamical}, \cite{gouezel2006banach}, \cite{faure2008semi} for the construction of the spaces for Anosov diffeomorphisms).\newline
The study of the Ruelle spectrum for Anosov flows is more difficult because of the flow direction, which is neither contracting nor expanding. Dolgopyat has shown in particular in \cite{dolgopyat1998decay} the exponential decay of correlations for the geodesic flow on negatively curved surfaces, and Liverani \cite{liverani2004contact} generalized this result to all $\mc C^4$ contact Anosov flows. His method involved the construction of anisotropic Banach spaces in which the generating vector field has a spectral gap, and no longer relied on the symbolic dynamics that prevented one from taking advantage of the smoothness of the flow. Tsujii \cite{tsujii2010quasi} constructed appropriate Hilbert spaces for the transfer operator of $\mc C^r$ contact Anosov flows, $r\geq 3$, and gave explicit upper bounds for the essential spectral radii in terms of $r$ and the expansion constants of the flow. Butterley and Liverani \cite{butterley2007smooth} and later Faure and Sjöstrand \cite{faure2011upper} constructed good spaces for Anosov flows, without the contact hypothesis. Weich and Bonthonneau defined in \cite{bonthonneau2017ruelle} the Ruelle spectrum for the geodesic flow on negatively curved manifolds with a finite number of cusps. Dyatlov and Guillarmou \cite{dyatlov2016pollicott} handled the case of open hyperbolic systems. A simple example of Anosov flow is the suspension of an Anosov diffeomorphism, or the suspension semi-flow of an expanding map. Pollicott showed exponential decay of correlations in this setting under a weak condition in \cite{pollicott1985rate}, and Tsujii constructed suitable spaces for the transfer operator and gave an upper bound on its essential spectral radius in \cite{tsujii2008decay}.\newline
In this article we study a closely related discrete time model, the skew product of an expanding map of the circle. It is a particular case of the compact group extensions studied in \cite{dolgopyat2002mixing}, which are partially hyperbolic maps with compact leaves in the neutral direction that are isometric to each other. Dolgopyat showed in \cite{dolgopyat2002mixing} that the correlations generically decay rapidly for compact group extensions, and exponentially in the particular case of expanding maps. In our setting of a skew-product of an expanding map of the circle, Faure \cite{faure2011semiclassical} has shown, using semi-classical methods, an upper bound on the essential spectral radius of the transfer operator under a condition shown to be generic by Nakano, Tsujii and Wittsten \cite{nakano2016partial}. De Simoi, Liverani, Poquet and Volk \cite{de2017fast} and de Simoi and Liverani \cite{de2016statistical} \cite{de2018limit} studied fast-slow dynamical systems, which generalize $\ma T$-extensions of circle expanding maps. The roof function, which depends on two variables, is multiplied by a small amplitude $\varepsilon$, and the authors obtained results about the statistical properties for long times and small $\varepsilon$. Arnoldi, Faure, and Weich \cite{arnoldi2017asymptotic} and Faure and Weich \cite{faure2017global} studied the case of some open partially expanding maps, iteration function schemes, for which they found an explicit bound on the essential spectral radius of the transfer operator in a suitable space, and obtained a Weyl law (upper bound on the number of Ruelle resonances outside the essential spectral radius). Naud \cite{naud2016rate} studied a model close to the one presented in this paper, in the analytic setting, in which the transfer operator is trace-class, and used the trace formula, in the deterministic and random cases, to obtain a lower bound on the spectral radius of the transfer operator. In the more general framework of random dynamical systems, in which the transfer operator changes randomly at each iteration, Nakano and Wittsten \cite{nakano2015spectra} showed exponential decay of correlations for the skew product of an expanding map of the circle.\newline
Semiclassical analysis describes the link between quantum dynamics and the associated classical dynamics in a symplectic manifold. The transfer operator happens to be a Fourier integral operator, and the semi-classical approach has thus proven to be useful. The famous Bohigas-Giannoni-Schmidt \cite{bohigas1984characterization} conjecture of quantum chaos states that for quantum systems whose associated classical dynamics is chaotic, the spectrum of the Hamiltonian shows the same statistics as that of a random matrix (GUE, GOE or GSE according to the symmetries of the system) (see also \cite{gutzwiller2013chaos} and \cite{giannoni1991chaos}). We are analogously interested in investigating the possible links between the Ruelle-Pollicott spectrum and the spectrum of random matrices/operators. As a first step, we try to get information about the spectrum using a trace formula. More useful results could follow from the use of a global normal form as obtained by Faure-Weich in \cite{faure2017global}. \subsection{Expanding map}
Let us consider a smooth orientation preserving expanding map $E: \ma T \rightarrow \ma T$ on the circle $\ma T = \ma R/\ma Z$, that is, satisfying $E'>1$, of degree $l$, and let us call \[m:=\inf E'>1\] and \[M:=\sup E'.\]
\subsection{Transfer operator}\label{1.2} Let us fix a function $\tau \in C^k(\ma T)$ for some $k\geq 0$. We are interested in the partially expanding dynamical system on $\ma T \times \ma R$ defined by \begin{equation} \label{def_FF} F(x,y)=\left(E(x),y+\tau(x)\right)\ \end{equation}
We introduce the transfer operator \[\fonction{\mc L_{\tau}}{\mc C^k(\ma T \times \ma R)}{\mc C^k(\ma T \times \ma R)}u{u\circ F}.\] \subsection{Reduction of the transfer operator} Due to the particular form of the map $F$, the Fourier modes in $y$ are invariant under $\mc L_\tau$: if for some $\xi\in\ma R$ and some $v\in \ma C^k(\ma T)$, \[u(x,y) = v(x)e^{i \xi y},\] then \[\mc L_{\tau} u(x,y)= e^{i\xi\tau(x)} v(E(x)) e^{i\xi y}.\]
Given $\xi \geq 0$ and a function $\tau$, let us consequently consider the transfer operator $\mc L_{\xi,\tau}$ defined on functions $v \in C^k( \ma T)$ by \[\forall x\in \ma T,\ \mc L_{\xi,\tau} v(x):=e^{i\xi\tau(x)}v(E(x)).\]
\subsection{Spectrum and flat trace}
In appropriate spaces, the transfer operator has a discrete spectrum outside a small disk, the eigenvalues are called Ruelle resonances. It is in general not trace-class, but one can define its flat trace (see Appendix \ref{appendice Ruelle spectrum} for a more precise discussion about Ruelle resonances, flat trace and their relationship). \begin{lemma}[Trace formula, \cite{atiyah1967lefschetz}, \cite{guillemin1977lectures}] For any $\mc C^0$ function $\tau$ on $\ma T$, the flat trace of $\mc L^n_{\xi,\tau}$ is well defined and \begin{equation}\label{trace}\mathrm{Tr}^\flat\mc L^n_{\xi,\tau}= \sum_{x,E^n(x)=x}\frac{e^{i\xi\tau_x^n}}{{(E^n)'(x)}-1}, \end{equation} where $\tau^n_x$ denotes the Birkhoff sum: For a function $\phi \in C(\ma T)$ and a point $x\in \ma T$ we define \begin{equation}\label{birkhoff}\phi_x^n := \sum\limits_{k=0}^{n-1}\phi(E^k(x)).\end{equation}
\end{lemma} \subsection{Gaussian random fields}
We define our random functions on the circle by means of their Fourier coefficients. We are only interested in $\mc C^0$ functions. We will denote by $\mc N(0,\sigma^2)$ (respectively $\mc N_{\ma C}(0,\sigma^2)$) the real (respectively complex) centered Gaussian law of variance $\sigma^2$, with respective densities \[ \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2\sigma^2}x^2} \text{ and } \frac{1}{\sigma\pi} e^{-\frac{1}{\sigma^2}|z|^2}.\] With these conventions, a random variable of law $\mc N_{\ma C} (0,\sigma^2)$ has independent real and imaginary parts of law $\mc N(0,\frac{\sigma^2} 2)$, and the variance of its modulus is consequently $\sigma^2$. \begin{defi} We will call centered stationary Gaussian random fields on $\ma T$ the real random distributions $\tau$ whose Fourier coefficients $\left(c_p(\tau)\right)_{p\geq 1}$ are independent complex centered Gaussian random variables, with variances growing at most polynomially, such that $c_0(\tau)$ is a real centered Gaussian variable independent of the $c_p(\tau),\ p\geq 1$. The negative coefficients are necessarily given by \[c_{-p}(\tau)=\overline{c_p(\tau)}.\] \end{defi} The Gaussian fields are in general defined as distributions if their Fourier coefficients have variances with polynomial growth and the decay of the variances of the coefficients gives sufficient conditions for the regularity of the field. \begin{lemma}\label{regularite}
If $\ma E[|c_p(\tau)|^2]$ has polynomial growth, $\tau=\sum_p c_p(\tau) e^{2i\pi p\cdot}$ almost surely defines a distribution: almost surely \[\forall \phi= \sum c_p(\phi) e^{2i\pi p\cdot}\in\mc C^\infty(\ma T),\ \langle\tau,\phi\rangle:=\sum_p \overline{c_p(\tau)}c_p(\phi)<\infty.\] Let $k\in\ma N$. If for some $\eta>0$ \begin{equation}\label{ck}
\ma E\left[\abs{c_p(\tau)}^2\right]= O\left(\frac 1{p^{2k+2+\eta}}\right). \end{equation} Then $\tau$ is almost surely $\mc C^k$. \end{lemma} \begin{proof} See appendix \ref{Borel-Cantelli}. \end{proof} In what follows we will always assume that (\ref{ck}) is satisfied, at least for $k=0$, so that our random fields are random variables on $\mc C^0(\ma T)$. This will ensure the existence of flat traces.
\subsection{Result}
If $x$ is a periodic point, let us write its prime period \[l_x:=\min\{k\geq1,E^k(x)=x\}.\]
Let us define for every $n\in\ma N$: \begin{equation}
\label{def_An} A_n:=\left(\sum\limits_{E^n(x)=x}\frac {l_x}{((E^n)'(x)-1)^2}\right)^{-\frac 1 2} \end{equation}
\begin{theorem}\label{main} Let $k\in\ma N$. Let $\tau_0\in\mc C^k(\ma T)$. Let \[\delta\tau=\sum_{p\in\ma Z}c_p e^{2i\pi p\cdot}\]
be a centered Gaussian random field, such that $\ma E[|c_p|^2]=O(p^{-2-\nu})$ for some $\nu>0$. This way, $\delta\tau$ is a.s. $\mc C^0$. If
\begin{equation}\label{condition} \exists\epsilon>0, \exists C>0, \forall p\in\ma Z^*, \ma E\left[\left|c_p\right|^2\right]\geq \frac C{p^{2k+2+\epsilon}},\end{equation}
then one has the convergence in law of the flat traces
\begin{equation}\label{theorem}
A_n\mathrm{Tr}^\flat\left(\mc L^n_{\xi, \tau_0+\delta\tau}\right)\longrightarrow\mc N_{\ma C}(0,1) \end{equation} as $n$ and $\xi$ go to infinity, under the constraint \begin{equation}\label{cond2} \exists 0<c<1, \forall n,\xi,\ n\leq c\frac{\log\xi}{\log l+(k+\frac 12+\frac\epsilon2)\log M}. \end{equation} Note that condition (\ref{condition}) is compatible with $\tau$ being $\mc C^k$, by Lemma \ref{regularite}. \end{theorem}
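For instance, the hypotheses of Theorem \ref{main} hold for the illustrative choice $\ma E\left[|c_p|^2\right]=|p|^{-(2k+2+\epsilon)}$ for $p\neq 0$: condition (\ref{condition}) holds with $C=1$, the bound $\ma E[|c_p|^2]=O(p^{-2-\nu})$ holds with $\nu=2k+\epsilon$, and by Lemma \ref{regularite} (applied with $\eta=\epsilon$) such a field is almost surely $\mc C^k$.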
\begin{remark} The statement implies that the convergence still holds if we multiply $\delta\tau$ by an arbitrarily small number $\eta>0$. For instance for $\tau_0 = 0$, \[A_n\mathrm{Tr}^\flat\left(\mc L^n_{\xi, 0}\right)\longrightarrow\infty\] at exponential speed, uniformly in $\xi$, but if $\delta\tau$ is an irregular enough Gaussian field in the sense of (\ref{condition}), then for any $\eta>0$ and $c<1$ holds \begin{equation*}
A_n\mathrm{Tr}^\flat\left(\mc L^n_{\xi, \eta \cdot\delta\tau}\right)\longrightarrow\mc N_{\ma C}(0,1) \end{equation*} under condition (\ref{cond2}). \end{remark} \begin{remark}
Condition (\ref{cond2}) means that time $n$ is smaller than a constant times the Ehrenfest time $\log \xi$, and this constant decreases with the regularity $k$ of the field $\delta \tau$. \end{remark}
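Explicitly, condition (\ref{cond2}) reads $n\leq c_k\log\xi$ with
\[
c_k=\frac{c}{\log l+\left(k+\frac 12+\frac\epsilon2\right)\log M},
\]
which is the constant appearing in the abstract; it indeed decreases when the regularity $k$ increases.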
\subsection{Sketch of proof}
The proof is based around the following arguments: \begin{enumerate}
\item Note first that the convergence (\ref{theorem}) is satisfied if all the phases appearing in (\ref{trace}) are independent and uniformly distributed. \begin{remark}
For the sake of simplicity, in this sketch of proof we will state pairwise independence for the phases in (\ref{trace}), while in fact we must group them by orbits, since Birkhoff sums $\phi^n_x$ are the same along an orbit; but this changes little in the problem. For instance, this simplification would remove the factor $l_x$ corresponding to this multiplicity in the definition (\ref{def_An}) of $A_n$.
\end{remark}The convergence can be deduced from the standard proof of the central limit theorem showing pointwise convergence of the characteristic function. However, here, since the periodic points are dense in $\ma T$, requiring independence of the values $(\delta\tau(x))_{E^n(x)=x}$ would lead to very bad regularity of the field (it is not hard to see that it would be almost surely nowhere locally bounded).
\item We fix a Gaussian field $\delta\tau=\sum c_pe^{2i\pi p\cdot}$ fulfilling the hypothesis of Theorem \ref{main} and start by constructing an auxiliary field with the same law and show that it satisfies the convergence (\ref{theorem}). This is sufficient since the convergence in law only involves the law of the random field.
\item For each $j\geq 1$, we construct a smooth random field $\delta\tau_j$, such that for any pair of periodic points $x\neq y$ of period $j$, $\delta\tau_j(x)$ and $\delta\tau_j(y)$ are independent.
Since by (\ref{trace}) $\mathrm{Tr}^\flat(\mc L^n_{\xi,\tau})$ only involves points of period $n$, the phases appearing at time $n$, for the function $\delta\tau_n$, in $\mc L^n_{\xi,\delta\tau_n}$ are consequently all independent random variables on $S^1.$
If moreover $\xi$ is large enough, the variables $\xi\left(\delta\tau_n\right)^n_x$ are Gaussian with large variances, so $\xi\left(\delta\tau_n\right)^n_x\mod 2\pi$ (and therefore the phases $e^{i\xi\left(\delta\tau_n\right)^n_x}$) are close to being uniform. Thus, the convergence (\ref{theorem}) should hold for $\mathrm{Tr}^\flat(\mc L^n_{\xi,\delta\tau_n})$ under a certain relation between $n$ and $\xi$ that will be explained in item (8).
\item An important point is that if the phases $(e^{i\xi(\delta\tau_n)^n_x})_{\{x\in\ma T,E^n(x)=x\}}$ are independent and close to be uniform, then adding to $\delta\tau_n$ an independent field will not change this fact, as the following lemma suggests:
\begin{lemma}\label{indep} Let $X,X'$ be real independent random variables such that $e^{iX}, e^{iX'}$ are uniform on $S^1$. Let $Y,Y'$ be real random variables such that $X$ and $ X'$ are independent of both $Y$ and $Y'$. Then $e^{i(X+Y)}$ and $e^{i(X'+Y')}$ are still independent uniform random variables on $S^1$. \end{lemma} Note that no independence between $Y$ and $Y'$ is needed. See appendix \ref{annexe3} for the proof.
\item Using this analogy, if the fields $\delta\tau_j$ are chosen independent, it should follow that the convergence (\ref{theorem}) holds for $\mathrm{Tr}^\flat\left(\mc L^n_{\xi,\sum_{j\geq 1}\delta\tau_j}\right)$ for large $\xi$. \item The fields $\delta\tau_j$ are almost surely smooth. However, because the distance between periodic points decreases as $M^{-j}$ according to Lemma \ref{periodic}, if we want to be sure that $\sum_j\delta\tau_j$ is $\mc C^k$, and $\ma E[\delta\tau_j(x)\delta\tau_j(y)]=0$ for all $x\neq y$ of period $j$, let us see that we need to impose an exponential decay of the standard deviation (independent of the point $x$): \begin{equation}\label{heuristique ecart type}
\sqrt{\ma E[|\delta\tau_j(x)|^2]}\approx M^{-j(k+\frac 12+\varepsilon)} \end{equation} for some $\varepsilon>0.$ This can be deduced heuristically from the fact (see Definition \ref{defi_covariance} below) that \begin{equation}\label{covariance sketch}
\ma E[\delta\tau_j(x)\delta\tau_j(y)]=\sum_p \ma E[|c_p(\delta\tau_j)|^2] e^{ip(x-y)}=:K_j(x-y)
\end{equation}and the uncertainty principle: a localisation of $K_j$ at a scale $M^{-j}$ implies non negligible coefficients $\ma E[|c_p(\delta\tau_j)|^2]$ for $p$ of order $M^j$. Let us for instance assume that the Fourier coefficients $\ma E[|c_p(\delta\tau_j)|^2]$ of $K_j$ write \begin{equation}
\ma E[|c_p(\delta\tau_j)|^2]=\alpha_j^2f\left(\frac p{M^j}\right)^2 \end{equation} for some amplitudes $\alpha_j$ to determine and some positive Schwartz function $f:\ma R\longrightarrow\ma R.$ Then, since
\[\delta\tau_j=\sum_p \sqrt{\ma E[|c_p(\delta\tau_j)|^2]} \zeta_{j,p}e^{2i\pi p\cdot}\] for i.i.d. $\mc N(0,1)$ random variables $\zeta_{j,p}$, roughly, \[\begin{split}
\sup|\delta\tau_j^{(k)}|&\approx\alpha_j\sum_p |p|^k f\left(\frac p{M^j}\right)\\
&=\alpha_j M^{j(k+1)}\frac1{M^j}\sum_p\frac{|p|^k}{M^{jk}}f\left(\frac p{M^j}\right)\\
&\sim C \alpha_j M^{j(k+1)}. \end{split}\] (The second line involved a Riemann sum.) Consequently, with those approximations, choosing $\alpha_j = M^{-j(k+1+\varepsilon)}$ gives a $\mc C^k$ function $\sum_{j\geq 1}\delta\tau_j$. Then, \[\begin{split}
\ma E[|\delta\tau_j(x)|^2]&\underset{(\ref{covariance sketch})}=\sum_p\ma E[|c_p(\delta\tau_j)|^2]\\
&\underset{\phantom{(\ref{covariance})}}= \sum_p\alpha_j^2f\left(\frac p{M^j} \right)^2\\
&\underset{\phantom{(\ref{covariance})}}=\alpha_j^2M^j\frac1{M^j}\sum_pf\left(\frac p{M^j} \right)^2\\
&\underset{\phantom{(\ref{covariance})}}\sim C \alpha_j^2M^j = M^{-j(2k+1+2\varepsilon)} \end{split}\] as announced. \item This condition, together with (\ref{condition}) can easily be shown to imply that the Fourier coefficients $\tilde c_p$ of $\sum_{j\geq1}\delta\tau_j$ satisfy
\[\ma E[|\tilde c_p|^2]\leq C\ma E[|c_p|^2].\]
This allows us to define a field $\delta\tau_0$, which we choose independent of the other $\delta\tau_j$, by \[\ma E[|c_p(\delta\tau_0)|^2]= C\ma E[|c_p|^2]-\ma E[|\tilde c_p|^2],\] so that $\frac1C\sum_{j\geq0}\delta\tau_j$ has the same law as $\delta\tau$ and still satisfies the convergence (\ref{theorem}) for $\xi$ large enough, by item (4) of this sketch. \item To get an idea of the origin of the relation (\ref{cond2}) between $n$ and $\xi$, let us assume that we want all the arguments $\xi(\delta\tau_n)^n_x$ in $\mathrm{Tr}^\flat(\mc L^n_{\xi,\delta\tau_n})$ to go uniformly to infinity in order to get approximate uniformity of the phases and thus convergence towards a Gaussian law. Note that for any $x$, \begin{equation}\label{tac}
\ma P\left[\frac{\left|(\delta\tau_n)^n_x\right|}{ \sqrt{\ma E[|(\delta\tau_n)^n_x|^2]}}\leq\epsilon\right]\underset{\epsilon\to0}=O(\epsilon). \end{equation}Let $(C_n)$ be a sequence going to infinity.
(\ref{tac}) implies \[\begin{split}
\ma P\left[\bigcap_{E^n(x)=x}\left\{\xi(\delta\tau_n)^n_x> C_n\right\}\right]&\ \ \ \ =1-\ma P\left[\exists x,E^n(x)=x,\xi(\delta\tau_n)^n_x\leq C_n \right]\\
&\ \ \ \ \geq 1- \sum_{E^n(x)=x}\ma P\left[\xi(\delta\tau_n)^n_x\leq C_n \right]\\
&\underset{\mathrm{Lemma } \ref{periodic}}= 1-(l^n-1) \ma P\left[\xi(\delta\tau_n)^n_x\leq C_n \right]\\
&\ \ \ \underset{(\ref{tac})}\geq 1-Cl^n \frac{C_n}{\xi\sqrt{\ma E[|(\delta\tau_n)^n_x|^2]}} \end{split}\]
if $x$ denotes any point and $\xi\gg\frac{C_n}{\sqrt{\ma E[|(\delta\tau_n)^n_x|^2]}}.$ By independence \[\begin{split}
\sqrt{\ma E[|(\delta\tau_n)^n_x|^2]}&\underset{\phantom{(\ref{heuristique ecart type})}}=\left(\sum_{k=0}^{n-1} {\ma E[|\delta\tau_n(E^k(x))|^2]}\right)^{\frac12}\\
&\underset{(\ref{heuristique ecart type})}\approx \sqrt n M^{n(k+\frac12+\varepsilon)} \end{split}\] for some $\varepsilon>0.$ Thus \[\ma P\left[\xi(\delta\tau_n)^n_x\to\infty \mathrm{\ uniformly}\ \mathrm{w.r.t.\ }x\mathrm{\ s.t.\ }E^n(x)=x\right]\to1\] for $\xi\gg l^n M^{n(k+\frac 12+\varepsilon)}, $ which gives (\ref{cond2}).
\end{enumerate}
\section{Numerical experiments}
We consider an example with the non linear expanding map
\begin{equation}
\label{ex_E}
E(x) = 2x + 0.9/(2\pi) \sin (2\pi (x+0.4) )
\end{equation}
plotted on Figure \ref{fig:dessin_E}.
In Figure \ref{fig:dessin2}, we have the histogram of
the modulus $ S = \left| A_n\mathrm{Tr}^\flat\left(\mc L^n_{\xi, \tau_0+\delta\tau}\right) \right| $ obtained from a sample of $10^4$ random functions $\delta \tau$.
We compare the histogram with the function $ C S \exp (-S^2) $ in red, i.e. the radial distribution of a Gaussian function, obtained from the prediction of Theorem \ref{main}.
We took $n=11$, $\xi = 2\cdot10^6$, $\tau_0 = \cos(2 \pi x)$. We also observe a good agreement for the (uniform) distribution of the arguments, which is not represented here.
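For readers who wish to reproduce this experiment, a minimal Python sketch is given below. It is not the code used for the figures: the truncation $P$ of the Fourier series of $\delta\tau$, the law $\ma E[|c_p|^2]=p^{-2.5}$ (corresponding to $k=0$, $\epsilon=0.5$), and the values of $n$, $\xi$ and of the sample size are illustrative choices. The script computes the $l^n-1$ fixed points of $E^n$ by bisection on a lift, evaluates the sum (\ref{trace}) and the normalisation (\ref{def_An}), and collects the moduli $S=\left| A_n\mathrm{Tr}^\flat\left(\mc L^n_{\xi, \tau_0+\delta\tau}\right) \right|$.
\begin{verbatim}
# Minimal sketch (illustrative parameters, not the authors' code).
import numpy as np

rng = np.random.default_rng(0)

def E_lift(x):                       # lift of E to the real line, Eq. (ex_E)
    return 2*x + 0.9/(2*np.pi)*np.sin(2*np.pi*(x + 0.4))

def E_prime(x):
    return 2 + 0.9*np.cos(2*np.pi*(x + 0.4))

def fixed_points(n, l=2, tol=1e-13):
    """Fixed points of E^n in [0,1): solve E_lift^n(x) - x = k by bisection."""
    def g(x):
        y = x
        for _ in range(n):
            y = E_lift(y)
        return y - x
    k0 = int(np.ceil(g(0.0)))
    pts = []
    for k in range(k0, k0 + l**n - 1):      # l^n - 1 fixed points
        a, b = 0.0, 1.0
        while b - a > tol:                  # g is strictly increasing
            c = 0.5*(a + b)
            if g(c) < k:
                a = c
            else:
                b = c
        pts.append(0.5*(a + b))
    return np.array(pts)

def orbit_and_derivative(x, n):
    """(x, E x, ..., E^{n-1} x) mod 1 and (E^n)'(x)."""
    orb, y = np.empty(n), x
    for j in range(n):
        orb[j] = y % 1.0
        y = E_lift(y)
    return orb, np.prod(E_prime(orb))

def circle_dist(a, b):
    d = abs(a - b) % 1.0
    return min(d, 1.0 - d)

n, xi, P, samples = 8, 2e6, 400, 1000       # illustrative values
pts = fixed_points(n)
data = [orbit_and_derivative(x, n) for x in pts]
deriv = np.array([d for _, d in data])
lx = np.array([min(j for j in range(1, n + 1)
                   if circle_dist(orb[j % n], orb[0]) < 1e-8)
               for orb, _ in data])
A_n = 1.0/np.sqrt(np.sum(lx/(deriv - 1.0)**2))          # Eq. (def_An)

tau0 = lambda x: np.cos(2*np.pi*x)
p = np.arange(1, P + 1, dtype=float)
allorbs = np.array([orb for orb, _ in data])            # all periodic points
phase = np.exp(2j*np.pi*allorbs.reshape(-1, 1)*p)       # reused for all samples
base = tau0(allorbs).sum(axis=1)                        # Birkhoff sums of tau_0

S = []
for _ in range(samples):
    # delta tau with E|c_p|^2 = p^{-2.5}, i.e. k = 0, eps = 0.5 (illustrative)
    c = (rng.normal(size=P) + 1j*rng.normal(size=P))*np.sqrt(0.5*p**(-2.5))
    dvals = 2*np.real(phase @ c).reshape(allorbs.shape)
    birkhoff = base + dvals.sum(axis=1)
    tr = np.sum(np.exp(1j*xi*birkhoff)/(deriv - 1.0))   # Eq. (trace)
    S.append(abs(A_n*tr))

S = np.array(S)
print("mean |S| =", S.mean(), " mean |S|^2 =", (S**2).mean())
\end{verbatim}
If the convergence (\ref{theorem}) holds and the approximation is accurate, the histogram of $S$ should be close to the density $2Se^{-S^2}$, so that the empirical mean of $S$ should be close to $\frac{\sqrt\pi}{2}\approx 0.886$ and that of $S^2$ close to $1$.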
\begin{figure}
\caption{Graph of the expanding map $E(x)$ in Eq.(\ref{ex_E})}
\label{fig:dessin_E}
\end{figure}
\begin{figure}
\caption{In blue, the histogram of $ S = \left| A_n\mathrm{Tr}^\flat\left(\mc L^n_{\xi, \tau_0+\delta\tau}\right) \right| $ for $n=11$, $\xi = 2\cdot10^6$, $\tau_0 = \cos(2 \pi x)$ and a sample of $10^4$ random functions $\delta \tau$. The histogram is well fitted by $ C S \exp (-S^2) $ in red, as predicted by Theorem \ref{main}.}
\label{fig:dessin2}
\end{figure}
\section{Proof of theorem \ref{main}} A stationary centered Gaussian random field is characterized by its covariance function: \begin{defi}\label{defi_covariance}
Let $\tau=\sum_{p\in\ma Z}c_pe^{2i\pi p\cdot}$ be a stationary centered Gaussian random field, satisfying \[\ma E[|c_p|^2]=O\left(\frac{1}{p^{2+\eta}}\right)\] for some $\eta>0,$ so that $\tau$ is almost surely $\mc C^0$ according to Lemma \ref{regularite}. Let us define its covariance function $K$ by \begin{equation}\label{covariance}
K(x):=\sum_p\ma E[|c_p|^2]e^{2i\pi p x}. \end{equation} For any pair of points $(x,y)\in\ma T^2,$ we have \begin{equation}\label{equation covariance}
\ma E\left[{\tau(x)}\tau(y)\right]=K(x-y). \end{equation} \end{defi} \begin{proof}[Proof of the last statement]
Remark from Appendix \ref{Borel-Cantelli} that the condition $\ma E[|c_p|^2]=O\left(\frac{1}{p^{2+\eta}}\right)$ implies that $\tau$ is almost surely equal to its Fourier series. Thus, \[\begin{split}
\ma E\left[{\tau(x)}\tau(y)\right]&=\sum_{p,q\in\ma Z}\ma E[c_p(\tau)c_q(\tau)]e^{2i\pi(px+qy)}\\
&=\sum_{p\in\ma Z}\left(\ma E[\abs{c_p}^2]e^{2i\pi p(x-y)}+\ma E[{c_p}^2]e^{2i\pi p(x+y)}\right) \end{split}\] from the independence relationships of the Fourier coefficients. Now, \[\ma E[{c_p}^2] = \mathbb E [(\mathrm{Re}(c_p))^2]-\ma E[(\mathrm{Im}(c_p))^2]+2i\ma E[(\mathrm{Re}(c_p))(\mathrm{Im}(c_p))] =0.\] \end{proof} \subsection{Definition of a Gaussian field satisfying Theorem \ref{main}} Let us fix a random centered Gaussian field $\delta\tau=\sum_{p\in\ma Z}c_pe^{2i\pi p\cdot}$ satisfying the hypothesis of Theorem \ref{main}.
Let us define the Gaussian fields mentioned in step (3) of the sketch of proof. Let $K_{\text{init}}\in\mc C^\infty_c(\ma R)$ be a smooth function supported in $\left[-\frac 13,\frac13\right]$, with non-negative Fourier transform, satisfying \footnote{To construct such a function, take a non-zero even function $g\in\mc C^\infty_c(\ma R)$ supported in $\left[-\frac 16,\frac16\right]$. $g$ has a real Fourier transform. Then $g*g\in\mc C^\infty_c(\ma R)$ is supported in $\left[-\frac 13,\frac13\right]$ and its Fourier transform is $\hat g^2\geq 0$. Moreover $g*g(0)=\int \hat g^2>0$, so that dividing $g*g$ by this value gives (\ref{K(0)}). }
\begin{equation}
\label{K(0)}
K_{\text{init}}(0)=1.
\end{equation}
Let $k\geq0$ be the integer involved in Theorem \ref{main} giving the regularity of the field. Let $\epsilon>0$ be the constant appearing in Theorem \ref{main} and define for any integer $j\geq 1$ \begin{equation}
\label{kj}K_j(x) = \frac 1 {M^{j(2k+1+\epsilon)}} K_{\text{init}}(M^jx). \end{equation}
The Fourier transform of $K_j$ is given by \begin{equation}\label{Fourier}\widehat{K_j}(\xi) = \frac 1 {M^{j(2k+2+\epsilon)}} \widehat{K_{\text{init}}}\left(\frac\xi{M^j}\right)\geq0\end{equation}
The functions $K_j$, for all $j\geq 1$, are supported in $\left[-\frac 13,\frac13\right]$ and can then be seen as functions on the circle $\ma T$ by trivially periodizing them. Let $c_{p,j}$, for $p\geq 0, j\geq 1$ be independent centered Gaussian random variables of respective variances $\hat K_j(2\pi p)$, and let us write \[\delta\tau_j=\sum_p c_{p,j}e^{2i\pi p\cdot},\] where $c_{-p,j}:=\overline{c_{p,j}},p\geq 1$. Note that, since $K_j$ is smooth for all $j,$ the variances $\hat K_j(2\pi p)$ of $c_{p,j}$ decay rapidly with $p$ (for fixed $j$), and therefore, each $\delta\tau_j$ is almost surely smooth by Lemma \ref{lemme Alejandro}.
\begin{lemma}
$\sum_{j\geq1}\delta\tau_j$ is a centered Gaussian random field $\sum \tilde c_p e^{2i\pi p\cdot}$ and \[\ma E\left[|\tilde c_p|^2\right]=O\left(\ma E\left[|c_p|^2\right]\right).\]
\end{lemma}
\begin{proof}
We have seen in Eq.(\ref{Fourier}) that
\[\widehat{K_j}(\xi) = \frac 1 {M^{j(2k+2+\epsilon)}} \widehat{K_{\text{init}}}(\frac\xi{M^j}).\]
Since $K_{\text{init}}$ is smooth, there exists a constant $C>0$ such that
\[\forall\xi\in\ma R, \widehat{K_{\text{init}}}(\xi)\leq \frac C{\left\langle\xi\right\rangle^{2k+2+\frac\epsilon 2}},\]
with the usual notation $\left\langle\xi\right\rangle=\sqrt{1+\xi^2}\geq\abs\xi$.
Thus, \[\begin{split}
\ma E\left[|c_{p,j}|^2\right]&=\frac{1}{M^{j(2k+2+\epsilon)}}\widehat{K_{\text{init}}}(\frac{2\pi p}{M^j})\\
&\leq \frac{C}{M^{j\frac\epsilon 2}}\frac1{\abs{2\pi p}^{2k+2+\frac\epsilon2}}. \end{split}\] Consequently, since by independence
\[\ma E\left[\left|\tilde c_p\right|^2\right] = \ma E\left[\left|\sum_{j\geq1}c_{p,j}\right|^2\right]=\sum_{j\geq1}\ma E\left[\left|c_{p,j}\right|^2\right], \]
\[\ma E\left[\left|\tilde c_p\right|^2\right] =O\left(\frac{1}{\abs{2\pi p}^{2k+2+\frac\epsilon 2}}\right)\underset{(\ref{condition})}=O(\ma E\left[|c_p|^2\right]).\]
\end{proof} Thus, fixing a constant $C$ such that
\[C\ma E\left[|c_p|^2\right]\geq \ma E\left[\left|\tilde c_p\right|^2\right] ,\] we can define a random Gaussian field $\delta\tau_0=\sum_{p\in\ma Z}c_{p,0}e^{2i\pi p\cdot}$ with coefficients $c_{p,0}$ independent from the $c_{p,j}$ such that
\[\ma E\left[|c_{p,0}|^2\right]=C\ma E\left[|c_p|^2\right]-\ma E\left[\left|\tilde c_p\right|^2\right].\] This way $\frac 1 C \sum_{j\geq0}\delta\tau_j$ and $\delta\tau$ have the same law. By this we mean that their Fourier coefficients have the same laws. By our hypothesis, the Fourier series converge almost surely normally, thus for any finite subset $\{x_k\}_k$ of $\ma T$, $(\frac 1 C\sum\delta\tau_j(x_k))_k$ and $(\delta\tau(x_k))_k$ have the same law. Therefore, the laws of $\mathrm Tr^\flat(\mc L^n_{\xi,\tau_0+\delta\tau})$ and $\mathrm Tr^\flat(\mc L^n_{\xi,\tau_0+\frac1 C\sum\delta\tau_j})$ are the same, and the convergence of Theorem \ref{main} is equivalent to \begin{equation}
\label{convergence bis}
A_n\mathrm Tr^\flat(\mc L^n_{\xi,\tau_0+\sum\delta\tau_j})\overset{\mc L}\longrightarrow\mc N_{\ma C}(0,1) \end{equation} under condition (\ref{condition}). (The constant $\frac1C$ can be 'absorbed' in $\xi$ up to the replacement of $\tau_0$ by $C\tau_0$ that has no consequence.) In the rest of the paper we will show (\ref{convergence bis}) and will write \begin{equation} \label{defi tau}
\tau:=\tau_0+\sum_{j\geq 0}\delta\tau_j. \end{equation} \subsection{New expression for $\mathrm{Tr}^\flat(\mc L^n_{\xi,\tau})$}
We will write the set of periodic orbits of (non primitive) period $n$ as \begin{equation}\label{Pern}
\mathrm{Per}(n):=\left\{\{x,E(x),\cdots,E^{n-1}(x)\}, E^n(x)=x, x \in \ma T\right\}, \end{equation} and the set of periodic orbits of primitive period $n$ as \begin{equation}\label{Pm}
\mathcal{P}_n:=\left\{\{x,E(x),\cdots,E^{n-1}(x)\}, n =\min\{k\in\ma N^*,E^k(x)=x\} , x \in \ma T \right\}. \end{equation} This way, $\mathrm{Per}(n)$ is the disjoint union \begin{equation}\label{prime}
\mathrm{Per}(n)=\coprod_{m|n}\mc P_m. \end{equation} Let us rewrite the sum $\mathrm{Tr}^\flat(\mc L^n_{\xi,\tau})$, where $\tau $ is given by (\ref{defi tau}). We know from (\ref{trace}) that \[\begin{split}
\mathrm{Tr}^\flat(\mc L^n_{\xi,\tau})&= \sum\limits_{E^n(x)=x}\frac{e^{i\xi\tau_x^n}}{{(E^n)'(x)}-1}\\
&=\sum\limits_{E^n(x)=x}\frac{e^{i\xi\tau_x^n}}{{e^{J^n_x}-1}}, \end{split}\] where $J(x) = \log (E'(x))>0$ and $J_x^n$ is the Birkhoff sum as defined in (\ref{birkhoff}). If $f^n_O$ stands for the Birkhoff sum $f^n_x$ for any $x\in O$ , let us write \begin{equation} \label{Tr_F}
\mathrm{Tr}^\flat(\mc L^n_{\xi,\tau})= \sum_{m|n}m\sum_{O\in \mc P_m}\frac{e^{i\xi\tau^n_O}}{e^{J^n_O}-1}. \end{equation}
For $O\in \mathrm{Per}(n)$, we can write \[\tau^n_O = (\delta\tau_n)^n_O + \sum_{j\neq n}(\delta\tau_j)^n_O + (\tau_0)^n_O.\] Since the covariance function $K_n$ is supported in $\left[-\frac 1{3 M^n},\frac1{3M^n}\right]$, we deduce from Lemma \ref{periodic} and (\ref{equation covariance}) that the values taken by $\delta\tau_n$ at different periodic points of period dividing $n$, which have law $\mc N(0, K_n(0))$ are independent random variables.\newline
Thus, for $n\in\ma N$, $m|n$ and $O\in\mc P_m$, $(\delta\tau_n)^m_O$ is a centered Gaussian random variable of variance $mK_n(0)$, and $(\delta\tau_n)^n_O=\frac nm(\delta\tau_n)^m_O$ has variance $(\frac{n}{m})^2mK_n(0) = \frac{n^2}{m}K_n(0)$.
\begin{defi} \label{C} We say that two families of real random variables $(X_O^n)_{\begin{subarray}{l}n\geq 1 \\O\in \mathrm{Per}(n)\end{subarray} }$ and $(Y_O^n)_{\begin{subarray}{l}n\geq 1 \\O\in \mathrm{Per}(n)\end{subarray} }$ satisfy condition (C) if \begin{enumerate}
\item for every $m|n$, and $O\in\mc P_m$, $X^n_O$ has law $\mc N(0,\frac {n^2} m K_n(0))$,
\item for every $O'\neq O\in\mathrm{Per}(n)$ and every $O''\in\mathrm{Per}(n)$, $X^n_O $ is independent of $X^n_{O'}$ and $Y^n_{O''}$. \end{enumerate} \end{defi}
Writing $X^n_O = (\delta\tau_n)^n_O $ and $Y^n_O = \sum_{j\neq n}(\delta\tau_j)^n_O + (\tau_0)^n_O$, we have obtained \begin{lemma}\label{orbit} There exist families of random variables $(X^n_O), (Y^n_O)$ satisfying condition (C) of Definition (\ref{C}) such that for every $n\geq 1$ and $O\in\mathrm{Per}(n)$\begin{equation} \label{tau_On}
\tau^n_O= X^n_O+Y^n_O \end{equation} \end{lemma} In order to adapt the proof of lemma \ref{indep}, we want to show that for large $\xi$, the random variables $e^{i\xi (X^n_O+Y^n_O)}, O\in\mathrm{Per}(n)$ are close to being independent and uniform on $S^1$.
\begin{remark} \label{Char_function} We have \begin{equation} \label{Tr_F2}
\mathrm{Tr}^\flat(\mc L^n_{\xi,\tau}) \underset{(\ref{Tr_F}),(\ref{tau_On})}= \sum_{m|n}m\sum_{O\in \mc P_m}\frac{e^{i\xi(X_O^n + Y_O^n)}}{e^{J^n_O}-1}. \end{equation}
Our aim is to approximate the characteristic function of $A_n\mathrm{Tr}^\flat(\mc L^n_{\xi,\tau})$ which is the expectation of \begin{multline}
\label{BE2} \exp\left(iA_n \left( \mu \mathrm{Re}(\mathrm{Tr}^\flat(\mc L^n_{\xi,\tau})) + \nu \mathrm{Im}(\mathrm{Tr}^\flat(\mc L^n_{\xi,\tau})) \right)\right) =\\
\prod\limits_{m|n}\prod\limits_{O\in\mc P_m} \exp\left[i\frac{mA_n}{e^{J_O^n}-1} \left(
\mu\cos(\xi(X_O^n + Y_O^n))+ \nu\sin(\xi(X_O^n + Y_O^n)) \right)\right] \end{multline}
for fixed, $\mu,\nu\in\ma R.$ The right hand side of (\ref{BE2}) can be written as
\[\prod\limits_{m|n}\left(\prod\limits_{O\in\mc P_m} f_O\left(e^{i\xi(X^n_{O}+Y^n_{O})}\right)\right)^m\] for some continuous functions $f_O:S^1\longrightarrow\ma C$ (depending on $\mu,\nu$): \[f_O(z)=\exp\left[i\frac{A_n}{e^{J_O^n}-1}\left(\mu\mathrm{Re}(z)+\nu\mathrm{Im}(z)\right)\right]\] In the next Lemma we first consider indicator functions on $S^1$ for $f_O$. \end{remark}
\begin{lemma}\label{2.5}
Let $(X_O^n)_{\begin{subarray}{l}n\geq 1 \\O\in Per(n)\end{subarray} }$ and $(Y_O^n)_{\begin{subarray}{l}n\geq 1 \\O\in Per(n)\end{subarray} }$ be two families of real random variables satisfying condition (C) of Definition (\ref{C}). Assume that $n$ and $\xi$ satisfy (\ref{cond2}). Then there is a constant $C>0$ such that for every $n\in\ma N$ and all real numbers $(\alpha_O)_{O\in\mathrm{Per}(n)},(\beta_O)_{O\in\mathrm{Per}(n)} $ such that \[\forall O\in\mathrm{Per}(n),\ 0<\beta_O-\alpha_O<2\pi,\] and all complex numbers $(\lambda_O)_{O\in\mathrm{Per}(n)}$, if $A_O:=e^{i]\alpha_O,\beta_O[}\subset S^1\subset\ma C$ and $\mathds 1_{A_O}:S^1\rightarrow\ma C$ is the characteristic function of $A_O$, we have \begin{equation}
\label{complique}
\left|\frac{\ma E\left[\prod\limits_{m|n}\left(\prod\limits_{O\in\mc P_m}\lambda_O\mathds 1_{A_O}\left(e^{i\xi(X^n_{O}+Y^n_{O})}\right)\right)^m\right]}{\prod\limits_{m|n}\prod\limits_{O\in\mc P_m}\lambda_O^m \left( \frac{\beta_O-\alpha_O}{2\pi} \right)}-1\right|\leq C\xi^{c-1}n^{-\frac12}\left(\underset{(\ref{cond2})}\to0\right).
\end{lemma}
\begin{remark} In this expression, we compare the law of the family of random variables $\left(e^{i\xi(X_O^n+Y_O^n)}\right)_{O\in\mathrm{Per}(n)}$ to the uniform law on the torus of dimension $\# \mathrm{Per}(n)$. The proof of this lemma is given in the next subsection.
\end{remark}
\subsection{A normal law of large variance on the circle is close to uniform} We will need the following lemma, which evaluates how much the law $\mc N(0,1)\mod \frac {2\pi} t$ differs from the uniform law on the circle $\mathbb{R}/(\frac{2\pi}{t} \mathbb{Z})$ for large values of $t$. \begin{lemma}\label{3} There exists a constant $C>0$ such that for all real numbers $\alpha,\beta$ such that $0<\beta-\alpha<2\pi$ and every real number $t\geq 1$,
\[\left|\int_{\ma R}\sum_{k\in\ma Z}\mathds 1_{\frac{\alpha+2k\pi}t\leq x\leq\frac{\beta+2k\pi}t}e^{-\frac{x^2}{2}}\frac{dx}{\sqrt{2\pi}}-\frac{\beta-\alpha}{2\pi}\right|\leq\frac C t (\beta-\alpha).\]
\begin{figure}
\caption{As $t$ goes to infinity, the red area converges to $\frac{\beta-\alpha}{2\pi}$ with speed $O(\frac {\beta-\alpha} t).$}
\label{fig:dessin}
\end{figure} \end{lemma} \begin{proof}
By the mean value inequality, if $|x-y|\leq 1$, then
\[\left|e^{-\frac{x^2}2}-e^{-\frac{y^2}2}\right|\leq |x-y| f(y)\]
for the $L^1$ function \[f(y):=\sup_{|u-y|\leq 1} |u|e^{-\frac{u^2}2}\]
Let us then write for $u\in[\frac \alpha t,\frac \beta t]$, $u_k:=u+\frac{2k\pi}{t}$ and $I_k:=[u_k,u_{k+1}]$. We have just seen that for $t\geq 2\pi$, for all $y\in I_k$,
\[\left|e^{-\frac{u_k^2}2}-e^{-\frac{y^2}2}\right|\leq \frac C t f(y).\] Integrating over $y\in I_k$ of length $\frac{2\pi}{t}$ and summing over $k\in\ma Z$ yields
\[\left|\frac{2\pi}{t}\sum_{k\in\ma Z}e^{-\frac{u_k^2}2}-\sqrt{2\pi}\right|\leq \frac Ct\] (The value of the constant $C$ changes at each line, but it depends neither on $t$, nor on $\alpha,\beta$.) Averaging over $u\in[\frac\alpha t,\frac\beta t]$ gives
\[\left|\frac{2\pi}{\beta-\alpha}\sum_{k\in\ma Z}\int_{\frac\alpha t}^{\frac\beta t}\exp\left(-\frac{(u-2k\pi)^2}2\right)du-\sqrt{2\pi}\right|\leq\frac Ct.\] Consequently,
\[\left|\int_{\ma R}\sum_{k\in\ma Z}\mathds 1_{\frac{\alpha+2k\pi}t\leq x\leq\frac{\beta+2k\pi}t}e^{-\frac{x^2}{2}}\frac{dx}{\sqrt{2\pi}}-\frac{\beta-\alpha}{2\pi}\right|\leq\frac Ct(\beta-\alpha)\] \end{proof} \begin{proof}[Proof of lemma \ref{2.5}]
Let us denote by $E$ the expectation
\[E:=\ma E\left[\prod\limits_{m|n}\left(\prod\limits_{O\in\mc P_m}\lambda_O\mathds 1_{A_O}\left(e^{i\xi(X_O^n+Y_O^n)}\right)\right)^m\right].\] If we write $\ma P_X$, $\ma P_Y$ and $\ma P_{X,Y}$ for the probability laws of the variables $(\xi X^n_O)_{O\in\mathrm{Per}(n)}$, $(\xi Y^n_O)_{O\in\mathrm{Per}(n)}$ and $(\xi X^n_O)_{O\in\mathrm{Per}(n)}\cup(\xi Y^n_O)_{O\in\mathrm{Per}(n)}$ respectively, then condition (C) of Definition (\ref{C}) implies \begin{multline}
d\ma P_{X,Y}((x_O)_{O\in\mathrm{Per}(n)},(y_O)_{O\in\mathrm{Per}(n)}) =\\ \prod\limits_{m|n}\prod\limits_{O\in\mc P_m}e^{-\frac{{x_O}^2}{2\sigma^2_{n,\xi}}}\frac{dx_O}{\sigma_{n,\xi}\sqrt{2\pi}}\otimes d\ma P_Y((y_O)_{O\in\mathrm{Per}(n)}). \end{multline} with the variance $\sigma^2_{n,\xi}:=\xi^2\frac{n^2}mK_n(0)$. We have \begin{multline}
E=\int_{\ma R^{2\#\mathrm{Per}(n)}}\prod\limits_{{O\in\mathrm{Per}(n)}}\left(\sum_{k\in\ma Z}\lambda_O^m\mathds 1_{]\alpha_O+2k\pi,\beta_O+2k\pi[}(x_O+y_O)\right)\\d\ma P_{X,Y}((x_O)_{O\in\mathrm{Per}(n)},(y_O)_{O\in\mathrm{Per}(n)}). \end{multline} Thus, writing $u_O=\frac{x_O}{\sigma_{n,\xi}}$ for $O\in\mc P_m$, \begin{multline}
E=\int_{\ma R^{\#\mathrm{Per}(n)}}\prod\limits_{m|n}\prod\limits_{O\in\mc P_m}\left(\int_{\ma R}\sum_{k\in\ma Z}\lambda_O^m\mathds 1_{\left]\frac{\alpha_O-y_O+2k\pi}{\sigma_{n,\xi}},\frac{\beta_O-y_O+2k\pi}{\sigma_{n,\xi}}\right [}(u_O)e^{-\frac{u_O^2}2}\frac{du_O}{\sqrt{2\pi}}\right)\\d\ma P_{Y}((y_O)_{O\in\mathrm{Per}(n)}) . \end{multline} Let us write for $O\in\mathrm{Per}(n)$ \[I_O = \int_{\ma R}\sum_{k\in\ma Z}\lambda_O^m\mathds 1_{\left]\frac{\alpha_O-y_O+2k\pi}{\sigma_{n,\xi}},\frac{\beta_O-y_O+2k\pi}{\sigma_{n,\xi}}\right [}(u_O)e^{-\frac{u_O^2}2}\frac{du_O}{\sqrt{2\pi}}.\] Lemma \ref{3} yields \[I_O=\lambda_O^m\frac{\beta_O-\alpha_O}{2\pi}\left(1+\epsilon_O\right),\] where \[\begin{split}
\exists C>0,\abs{\epsilon_O}&\leq\frac{C}{\sigma_{n,\xi}}\\
&\leq\frac{C}{\xi\sqrt{nK_n(0)}}. \end{split}\] Let us remark that for every finite family $\{x_k\}_k\subset\ma R$, expanding the product and factorizing after applying the triangle inequality give
\[\left|\prod_k(1+x_k)-1\right|\leq\prod_k(1+|x_k|)-1.\] Thus, \[\begin{split}
\left|\frac{\prod\limits_{m|n}\prod\limits_{O\in\mc P_m}I_O}{\prod\limits_{m|n}\prod\limits_{O\in\mc P_m}\lambda_O^m\left(\frac{\beta_O-\alpha_O}{2\pi}\right)}-1\right|&=\left|\prod\limits_{m|n}\prod\limits_{O\in\mc P_m}(1+\epsilon_O)-1\right|\\
&\leq \left(\left(1+\frac{C}{\xi(nK_n(0))^{1/2}}\right)^{\#\mathrm{Per}(n) }-1\right). \end{split}\] From Lemma \ref{periodic} we have $\#\mathrm{Per}(n) \leq l^n$.
Using hypothesis (\ref{cond2}) we can bound the prefactor: \[\begin{split}\left(1+\frac{C}{\xi(nK_n(0))^{1/2}}\right)^{l^n}-1&\underset{(\ref{kj}),(\ref{K(0)})}=\left(1+\frac{CM^{n(k+\frac12+\frac\epsilon2)}}{\xi\sqrt n}\right)^{l^n}-1\\ &\leq\exp(l^n\frac{CM^{n(k+\frac12+\frac\epsilon2)}}{\xi\sqrt n})-1\\ &\leq C'\frac{l^nM^{n(k+\frac12+\frac\epsilon2)}}{\xi\sqrt n}
\end{split}\]
for some $C'>0$ for $n$ and $\xi$ large enough and satisfying (\ref{cond2}) since
\begin{equation}\label{tendvers0}
l^n\frac{CM^{n(k+\frac12+\frac\epsilon2)}}{\xi\sqrt n}\underset{(\ref{cond2})}{\leq}C\xi^{c-1}n^{-\frac12}\to 0.
\end{equation} \end{proof} \subsection{End of proof} We can now easily extend Lemma \ref{2.5} from characteristic functions to step functions. \begin{corollary}\label{cor} Assume that $n$ and $\xi$ satisfy (\ref{cond2}). For any families $(X_O^n)_{\begin{subarray}{l}n\geq 1 \\O\in \mathrm{Per}(n)\end{subarray} }$ and $(Y_O^n)_{\begin{subarray}{l}n\geq 1 \\O\in \mathrm{Per}(n)\end{subarray} }$ of real random variables satisfying condition (C) of Definition (\ref{C}), there exists $C>0$ such that, if $(f_{n,O})_{\begin{subarray}{l}n\geq 1 \\O\in \mathrm{Per}(n)\end{subarray}}$ is a family of step functions $S^1\to\ma C$, then
\begin{multline} \label{BE1} \left|\ma E\left[\prod\limits_{m|n}\prod\limits_{O\in\mc P_m}f_{n,O}^m(e^{i\xi(X_O^n+Y_O^n)})\right]-\prod\limits_{m|n}\prod\limits_{O\in\mc P_m}\int f_{n,O}^md\mathrm{Leb}\right|\\\leq C\xi^{c-1}n^{-\frac12}\prod\limits_{m|n}\prod\limits_{O\in\mc P_m}\int \abs{f_{n,O}^m}d\mathrm{Leb}.\end{multline} \end{corollary} \begin{proof} Let us write each $f_{n,O}$ as \[f_{n,O}= \sum\limits_{q=1}^{p_{n,O}}\lambda_{n,O,q}\mathds 1_{A_{n,O,q}},\] where the $\lambda_{n,O,q}$ are complex numbers and the $A_{n,O,q}, 1\leq q \leq p_{n,O}$ are disjoint intervals. We develop (\ref{BE1}), we use Lemma \ref{2.5} and factorize the result:
\begin{multline*}E:=\ma E\left[\prod\limits_{m|n}\prod\limits_{O\in\mc P_m}f_{n,O}^m(e^{i\xi(X_O^n+Y_O^n)})\right]\\=\sum_{(q_O)\in\prod\limits_{m|n}\prod\limits_{O\in\mc P_m}\{1,\cdots,p_{n,O}\}}\ma E\left[\prod\limits_{m|n}\prod\limits_{O\in\mc P_m}\lambda_{n,O,q_O}^m\mathds 1_{A_{n,O,q_O}}(e^{i\xi(X_O^n+Y_O^n)})\right].\end{multline*} Consequently, \begin{multline*}
\left|E-\prod\limits_{m|n}\prod\limits_{O\in\mc P_m}\int f_{n,O}^md\mathrm{Leb}\right|\\\leq\sum_{(q_O)\in\prod\limits_{m|n}\prod\limits_{O\in\mc P_m}\{1,\cdots,p_{n,O}\}}\left|\ma E\left[\prod\limits_{m|n}\prod\limits_{O\in\mc P_m}\lambda_{n,O,q_O}^m\mathds 1_{A_{n,O,q_O}}(e^{i\xi(X_O^n+Y_O^n)})\right]\right.-\\\left.\prod_{m|n}\prod_{O\in\mc P_m}\lambda_{n,O,q_O}^m\text{Leb}(A_{n,O,q_O})\right|\\\leq C\xi^{c-1}n^{-\frac12}\sum_{(q_O)\in\prod\limits_{m|n}\prod\limits_{O\in\mc P_m}\{1,\cdots,p_{n,O}\}}\prod_{m|n}\prod_{O\in\mc P_m}\abs{\lambda_{n,O,q_O}}^m\text{Leb}(A_{n,O,q_O}) \end{multline*} from the previous lemma. \\ Hence,
\[\left|E-\prod\limits_{m|n}\prod\limits_{O\in\mc P_m}\int f_{n,O}^md\mathrm{Leb}\right|\leq C\xi^{c-1}n^{-\frac12}\prod\limits_{m|n}\prod\limits_{O\in\mc P_m}\int \abs{f_{n,O}}^md\mathrm{Leb}.\] \end{proof} We can use this result in order to estimate the characteristic function of $\mathrm{Tr}^\flat(\mc L^n_{\xi,\tau})$, using remark (\ref{Char_function}). \begin{corollary} \label{pC} Assume that $n$ and $\xi$ satisfy (\ref{cond2}). Let $(X_O^n)_{\begin{subarray}{l}n\geq 1 \\O\in \mathrm{Per}(n)\end{subarray} }$ and $(Y_O^n)_{\begin{subarray}{l}n\geq 1 \\O\in \mathrm{Per}(n)\end{subarray} }$ be two families of real random variables satisfying condition (C) of Definition (\ref{C}). There exists $C>0$ such that for all $(\mu_O,\nu_O)_{O\in\mathrm{Per}(n)}\in\ma R^{2\#\mathrm{Per}(n)}$,
\begin{multline*}\left|\ma E\left[\prod\limits_{m|n}\prod\limits_{O\in\mc P_m}e^{im\mu_O\cos(\xi(X_O^n+Y_O^n))+im\nu_O\sin(\xi(X_O^n+Y_O^n))}\right]\right.\\\left.-\prod\limits_{m|n}\prod\limits_{O\in\mc P_m}\int_0^{2\pi}e^{i(m\mu_O\cos\theta+m\nu_O\sin\theta)}\frac{d\theta}{2\pi}\right|\leq C\xi^{c-1}n^{-\frac12}.\end{multline*}
\end{corollary} \begin{proof} Let $C$ be the constant from corollary \ref{cor}. For $O\in\mathrm{Per}(n)$, let $f_{O}$ be the function defined on $S^1$ by \[f_{O}(e^{i\theta})=e^{i(\mu_O\cos\theta+\nu_O\sin\theta)}.\]
Each $f_{O}$ is bounded by $1$; we can consequently find for each $O\in\mathrm{Per}(n)$ a family $(f_{j,O})_j$ of step functions uniformly bounded by $1$ converging pointwise towards $f_{O}$.\newline We have for $n$ fixed, by dominated convergence \[E_j:=\ma E\left[\prod\limits_{m|n}\prod\limits_{O\in\mc P_m}f_{j,O}^m(e^{i\xi(X^n_{O}+Y^n_{O})})\right]\xrightarrow[j\to\infty]{}E:=\ma E\left[\prod\limits_{m|n}\prod\limits_{O\in\mc P_m}f_{O}^m(e^{i\xi(X^n_{O}+Y^n_{O})})\right]\] as well as
\[I_j:=\prod\limits_{m|n}\prod\limits_{O\in\mc P_m}\int_0^{2\pi}f_{j,O}^m(e^{i\theta})\frac{d\theta}{2\pi}\xrightarrow[j\to\infty]{}I:=\prod\limits_{m|n}\prod\limits_{O\in\mc P_m}\int_0^{2\pi}f_{O}^m(e^{i\theta})\frac{d\theta}{2\pi}.\] It is thus possible to find an integer $j_0$ such that both
\[\left|E-E_{j_0}\right|\leq \xi^{c-1}n^{-\frac12}\] and
\[\left|I-I_{j_0}\right|\leq\xi^{c-1}n^{-\frac12}\] hold.\newline From corollary \ref{cor}, we know that for all $n\in\ma N$
\[\left|E_{j_0}-I_{j_0}\right|\leq C\xi^{c-1}n^{-\frac12},\] since the step functions $f_{j_0,O}$ are bounded by $1$. Thus,
\[\begin{split}\abs{E-I}&\leq\left|E-E_{j_0}\right|+\left|E_{j_0}-I_{j_0}\right|+\left|I-I_{j_0}\right|\\
&\leq (C+2)\xi^{c-1}n^{-\frac12}.\end{split}\] \end{proof} We can now prove the final proposition: \begin{proposition}\label{2.8}Let $(X_O^n)_{\begin{subarray}{l}n\geq 1 \\O\in \mathrm{Per}(n)\end{subarray} }$ and $(Y_O^n)_{\begin{subarray}{l}n\geq 1 \\O\in \mathrm{Per}(n)\end{subarray} }$ be two families of real random variables satisfying condition (C) of Definition (\ref{C}). If condition (\ref{cond2}) is satisfied then we have the following convergence in law \begin{equation} \label{Tnxi}
T_{n,\xi} := A_n\sum_{m|n}m\sum_{O\in\mc P_m}\frac{e^{i\xi(X^n_O+Y^n_O)}}{e^{J^n_O}-1}\underset{n,\xi\to\infty}{\longrightarrow}\mc N_{\ma C}(0,1 ), \end{equation} with the amplitude $A_n$ defined in (\ref{def_An}) by \begin{equation}\label{An}
\begin{split}A_n&=\left(\sum_{m|n}m^2\sum_{O\in\mc P_m}\frac 1{(e^{J_O^n}-1)^2}\right)^{-\frac 12}.\end{split} \end{equation} \end{proposition}
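Note that the expression (\ref{An}) coincides with the definition (\ref{def_An}) of $A_n$: grouping the fixed points of $E^n$ into orbits, each orbit $O\in\mc P_m$ consists of $m$ points $x$, all with prime period $l_x=m$ and $(E^n)'(x)=e^{J^n_O}$, so that
\[
\sum_{E^n(x)=x}\frac {l_x}{((E^n)'(x)-1)^2}=\sum_{m|n}\sum_{O\in\mc P_m}\frac{m\cdot m}{(e^{J^n_O}-1)^2}=\sum_{m|n}m^2\sum_{O\in\mc P_m}\frac 1{(e^{J_O^n}-1)^2}.
\]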
\begin{proof} Let us fix two real numbers $\xi_1$ and $\xi_2$ and let $\phi_n$ be the characteristic function of $T_{n,\xi}$: \begin{multline*}
\phi_n(\xi_1,\xi_2) :=\ma E\left[\exp\left(iA_n\left(\xi_1\sum\limits_{m|n}m\sum\limits_{O\in\mc P_m}\frac{\cos(\xi(X_O^n+Y_O^n))}{e^{J^n_O}-1}+\right.\right.\right.\\\left.\left.\left.\xi_2\sum\limits_{m|n}m\sum\limits_{O\in\mc P_m}\frac{\sin(\xi(X^n_O+Y^n_O))}{e^{J^n_O}-1}\right)\right)\right]. \end{multline*} We compute the limit of $\phi_n(\xi_1,\xi_2)$ as $n$ goes to infinity. Corollary (\ref{pC}) yields \begin{equation}
\label{phi}\left|\phi_n(\xi_1,\xi_2)-\prod\limits_{m|n}\prod\limits_{O\in\mc P_m}\int_0^{2\pi}e^{i\frac{mA_n}{e^{J^n_O}-1}(\xi_1\cos\theta+\xi_2\sin\theta)}\frac{d\theta}{2\pi}\right|\leq C\xi^{c-1}n^{-\frac12}\to 0 \end{equation} under the assumption (\ref{cond2}).
Let \[ \psi(\xi_1,\xi_2) := \int_0^{2\pi} e^{i(\xi_1\cos\theta+\xi_2\sin\theta)}\frac{d\theta}{2\pi}.\] We have the following Taylor expansion at $0$: \[\psi(\xi_1,\xi_2) = 1-\frac 1 4 (\xi_1^2+\xi_2^2)+o(\xi_1^2+\xi_2^2),\] obtained by expanding the exponential to second order and using $\int_0^{2\pi}\cos\theta\, d\theta=\int_0^{2\pi}\sin\theta\, d\theta=\int_0^{2\pi}\cos\theta\sin\theta\, d\theta=0$ and $\int_0^{2\pi}\cos^2\theta\,\frac{d\theta}{2\pi}=\int_0^{2\pi}\sin^2\theta\,\frac{d\theta}{2\pi}=\frac12$. In order to apply this to equation (\ref{phi}), we need to check that \begin{lemma}\label{ref} \[nA_n\sup_{O\in\mathrm{Per}(n)}\frac{1}{e^{J^n_O}-1}\underset{{n\to\infty}}{\longrightarrow}0.\] \end{lemma} \begin{proof} See appendix \ref{label}. \end{proof} We can now state that \[\begin{split}
&\prod\limits_{m|n}\prod\limits_{O\in\mc P_m}\int_0^{2\pi}e^{i\frac{mA_n}{e^{J^n_O}-1}(\xi_1\cos\theta+\xi_2\sin\theta)}\frac{d\theta}{2\pi}=\prod\limits_{m|n}\prod\limits_{O\in\mc P_m}\psi\left(\xi_1\frac{mA_n}{e^{J^n_O}-1},\xi_2\frac{mA_n}{e^{J^n_O}-1}\right)\\
&=\prod\limits_{m|n}\prod\limits_{O\in\mc P_m}\left( 1-\frac {\xi_1^2+\xi_2^2} 4\frac{(mA_n)^2}{(e^{J^n_O}-1)^2} +o\left(\frac{(mA_n)^2}{(e^{J^n_O}-1)^2}\right)\right)\\
&=\exp\left(\sum_{m|n}\sum_{O\in\mc P_m}\log\left(1-\frac {\xi_1^2+\xi_2^2} 4\frac{(mA_n)^2}{(e^{J^n_O}-1)^2} +o\left(\frac{(mA_n)^2}{(e^{J^n_O}-1)^2}\right)\right)\right)\\
&=\exp\left(\sum_{m|n}\sum_{O\in\mc P_m}-\frac {\xi_1^2+\xi_2^2} 4\frac{(mA_n)^2}{(e^{J^n_O}-1)^2} +o\left(\frac{(mA_n)^2}{(e^{J^n_O}-1)^2}\right)\right)\\ & \underset{(\ref{An})}= e^{-\frac{\xi_1^2+\xi_2^2}4+o(1)}.\end{split}\] We deduce that \[\phi_n(\xi_1,\xi_2)\underset{n\to\infty}{\longrightarrow}e^{-\frac{\xi_1^2+\xi_2^2}4}\] which is the characteristic function of a Gaussian variable of law $\mc N_{\ma C}(0,1).$ \end{proof}
\section{Discussion} In this paper we have considered a model where the roof function $\tau$ is random. However, the numerical experiments suggest a far stronger result: for a fixed function $\tau$ and a semiclassical parameter $\xi$ chosen according to a uniform random distribution in a small window at high frequencies, the result seems to remain true, as shown in the following figures for $\tau(x) =\sin(2\pi x)$. The moduli also seem to become uniform. \newline
It would be interesting to understand what information about the Ruelle resonances can be recovered from the convergence (\ref{theorem}). We know from the Weyl law of \cite{arnoldi2017asymptotic}, established in a similar context, that the number of resonances of $\mc L_{\xi,\tau}$ outside the essential spectral radius, for a given $\tau$, is of order $O(\xi)$. A complete characterization would thus require a knowledge of the traces of $\mc L_{\xi,\tau}^n$ up to times of order $O(\xi)$, while we only have information for $n=O(\log\xi).$
\begin{figure}
\caption{Histogram of $ S = \left| A_n\mathrm{Tr}^\flat\left(\mc L^n_{\xi,\tau_0}\right) \right| $ for a sample of $10^4$ random values of $\xi$ uniformly distributed in $[\xi_0, \xi_0 +10]$ with $\xi_0= 2\cdot10^6$ and $n=11$, corresponding to a fraction of the Ehrenfest time $C_e:= n\frac{\log2}{\log \xi_0}=0.5$. It is well fitted by the red curve $S\mapsto C S \exp (-S^2) $.}
\label{fig:dessin3}
\end{figure}
\begin{figure}
\caption{Histogram of $ S = \left| A_n\mathrm{Tr}^\flat\left(\mc L^n_{\xi,\tau_0}\right) \right| $ for a sample of $10^4$ random values of $\xi$ uniformly distributed in $[\xi_0, \xi_0 +10]$ with $\xi_0= 2000$ and $n=11$, giving $C_e=1.0$. The red curve corresponds to $S\mapsto C S \exp (-S^2) $.}
\label{fig:dessin4}
\end{figure}
\appendix \section{Proof of lemma \ref{periodic}}\label{annexe1}
\begin{lemma}\label{periodic} For every integer $n$, $E^n$ has $l^n-1$ fixed points. The distance between two distinct periodic points is bounded from below by $\frac 1{M^n-1}$.
\end{lemma}
\begin{proof} $E$ is topologically conjugated to the linear expanding map of same degree $x\mapsto lx\mod 1$, (see \cite{katok1997introduction}, p.73). Thus $E^n$ has $l^n-1$ fixed points. Let $\tilde E:\ma R\longrightarrow \ma R$ be a lift of $E$, $x\neq y$ be two fixed points of $E^n$ and $\tilde x,\tilde y\in\ma R$ be representatives of $x$ and $y$ respectively. Note that
\[d(x,y)=\inf|\tilde x-\tilde y|\] where the infimum is taken over all couples of representatives $(\tilde x,\tilde y)$. Since $E^n(x) = x$ and $E^n(y) = y$, $\tilde E^n(\tilde y)-\tilde E^n(\tilde x)-(\tilde y-\tilde x)$ is an integer, different from $0$ because $\tilde E^n $ is expanding. Thus,
\[\left|\tilde E^n(\tilde y)-\tilde E^n(\tilde x)-(\tilde y-\tilde x)\right|\geq 1,\] that is
\[\left|\int_{\tilde x}^{\tilde y}\left((\tilde E^n)'(t)-1\right)\mathrm{d}t\right|\geq 1\] Finally,
\[|\tilde y-\tilde x|(M^n-1)\geq 1.\] Taking the infimum gives the result. \end{proof} \section{Proof of lemma \ref{regularite} on the link between regularity of a Gaussian field and variance of the Fourier coefficients}\label{Borel-Cantelli} Let us recall the following classical estimate: \begin{lemma}\label{lemme Alejandro} If $(X_p)_{p\in\ma Z}$ is a family of independent centered Gaussian random variables of variance $1$, then, almost surely, \[\forall \delta>0,X_p=o(p^\delta).\]
\end{lemma} \begin{proof} Let $\delta>0$. Let us use the Borel-Cantelli lemma: \[\forall p\in\ma Z, \ma P(\abs{X_p}>p^{\delta})= \int_{\abs x>p^{\delta}}e^{-\frac{x^2}2}\frac {dx}{\sqrt{2\pi}}.\] Now, we have the upper bound \[p^{\delta}\int_{p^{\delta}}^{+\infty}e^{-\frac{x^2}2}\frac {dx}{\sqrt{2\pi}}\leq\int_{p^{\delta}}^{+\infty}xe^{-\frac{x^2}2}\frac {dx}{\sqrt{2\pi}} =\frac{e^{-\frac {p^{2\delta}}2}}{\sqrt{2\pi}}.\] Thus, \[\forall p\in\ma Z^*, \ma P(\abs{X_p}>p^{\delta})\leq\frac 2{\sqrt{2\pi}p^{\delta}} e^{-\frac {p^{2\delta}}2}.\] Consequently, \[\sum_p\ma P(\abs{X_p}>p^{\delta})<\infty\] and by Borel-Cantelli, almost surely, \[\#\{p\in\ma Z,\abs{X_p}>p^{\delta}\}<\infty.\] \end{proof} With this in mind, we can see that if a real random function $\tau$ has random Fourier coefficients $(c_p)_{p\in\ma Z}$, pairwise independent (for non-negative values of $p$), with variance \[\sigma_p^2:=\ma E\left[\abs{c_p}^2\right]= O(\frac 1{p^{2k+2+\eta}}),\] for some $\eta>0$, then by the previous lemma, almost surely, for all $\delta>0$, \[\frac{ c_p}{\sigma_p}= o(p^\delta),\] and thus for $\delta = \frac\eta 4,$ \[ c_p = o\left(\frac 1 {p^{k+1+\frac\eta 4}}\right)\ \text{a.s}.\] As a consequence,\[\sum_p c_p (2i\pi p)^k e^{2i\pi px}\]converges normally and thus $\tau$ is almost surely $\mc C^k$.
\section{Ruelle resonances and Flat trace}\label{appendice Ruelle spectrum}
\subsection{Ruelle spectrum}If $\tau\in\mc C^k(\ma T)$, the operator $\mc L_{\xi,\tau}$ can be extended to distributions $(\mc C^k(\ma T))'$ by duality. We will denote $H^{s}(\ma T)$ the Sobolev space of order $s \in \ma R$. \begin{theorem}[\cite{ruelle1986locating},\cite{baladi2018dynamical} Thm 2.15 and Lemma 2.16] Let $k\geq1$. If $\tau$ belongs to $\mc C^k$, then for every $0\leq s<k,$ $\mc L_{\xi,\tau}:H^{-s}(\ma T)\rightarrow H^{-s}(\ma T)$ is bounded and its essential spectral radius $r_{\mathrm{ess}}$ satisfies \[r_{\mathrm{ess}}\leq \frac{e^{\mathrm{Pr}(-\frac 12J)}}{m^s},\] where $m=\inf E'$, $J(x)=\log E'(x)$ and $\mathrm{Pr}(-\frac 12J)$ is defined in \ref{pressure}. \end{theorem} The discrete set of eigenvalues of finite multiplicities outside a given disk of radius $r\geq \frac{e^{\mathrm{Pr}(-\frac 12J)}}{m^s}$, and the associated eigenspaces remain the same in every space $H^{-s'}(\ma T)$ for $s'\geq s$. This can be deduced for example from the fact that these spectral elements give the asymptotic behaviour of the correlation functions: for any smooth functions $f,g$ on $\ma T$, for any $s$ large enough, if $\mc L_{\xi,\tau}:H^{-s}(\ma T)\rightarrow H^{-s}(\ma T)$ has no eigenvalue of modulus $r$, \begin{equation}
\int \mc L^n_{\xi,\tau}f\cdot g = \sum_{\substack{\lambda\in\sigma(\mc L_{\xi,\tau})\\|\lambda|>r}}\int \mc L^n_{\xi,\tau}(\Pi_\lambda f)\cdot g +O_{n\to\infty}(r^n),
\end{equation}
where $\Pi_\lambda$ is the spectral projector associated to $\lambda.$
We are interested in the statistical properties of these eigenvalues, called Ruelle-Pollicott spectrum or Ruelle resonances, when $\tau$ is a random function. One way to get information about the spectrum of such operators is to use a trace formula. Although $\mc L_{\xi,\tau}$ is not trace-class, we can give a certain meaning to the trace of $\mc L_{\xi,\tau}$. \subsection{Flat trace} This section is an adaptation of section 3.2.2 in \cite{baladi2018dynamical}. In order to motivate the definition of flat trace, let us first recall the following fact: \begin{lemma}Let $m> \frac12$ (so that the Dirac distributions belong to $H^{-m}(\ma T)$). If $T:H^{-m}(\ma T)\longrightarrow H^m(\ma T)$ is a bounded operator, then it has a continuous Schwartz kernel $K$ and \[K(x,y)=\langle\delta_x,T\delta_y\rangle.\] If moreover $T$ is trace-class, then \[\mathrm{Tr}\ T=\int_{\ma T}K(x,x)\mathrm{d}x.\] \end{lemma} Let $\rho$ be a smooth compactly supported function such that $\int_{\ma R}\rho=1$. For $\epsilon>0$ and $y\in\ma T$ we write \[\rho_{\epsilon,y}(t)=\frac1\epsilon\rho\left(\frac{t-y}{\epsilon}\right).\]Periodizing this function gives rise to a smooth function $\rho_{\epsilon,y}$ on $\ma T$ satisfying \[\rho_{\epsilon,y}\underset{\epsilon\to0}\longrightarrow \delta_y\] as distributions. \begin{defi} Let $m\geq 0$ and $T:H^{-m}(\ma T)\longrightarrow H^{-m}(\ma T)$ be a bounded operator extending to a continuous operator $(\mc C^0(\ma T))'\longrightarrow(\mc C^0(\ma T))'$. Then the formula \[K_\epsilon(x,y):=\langle\rho_{\epsilon,x},T\delta_y\rangle\] defines for every $\epsilon>0$ a continuous function on $\ma T^2.$ Let \[\mathrm{Tr}^\flat_\epsilon(T):=\int_{\ma T}K_\epsilon(x,x)\mathrm{d}x.\] We say that $T$ admits a flat trace $\mathrm{Tr}^\flat(T)$ if $\mathrm{Tr}^\flat_\epsilon(T)\rightarrow\mathrm{Tr}^\flat(T)$ as $\epsilon$ goes to zero, independently of the choice of the mollifying function $\rho$. \end{defi} Note that, for any $n\in\ma N^*,$ $\xi\in\ma R$, $\tau\in\mc C^0(\ma T)$, the transfer operator $\mc L^n_{\xi,\tau}:(\mc C^0(\ma T))'\longrightarrow(\mc C^0(\ma T))'$ is bounded. \begin{lemma}[Trace formula, \cite{atiyah1967lefschetz}, \cite{guillemin1977lectures}] Let $\tau\in\mc C^k(\ma T), k\geq0.$ For any integer $n\geq 1$, $\mc L_{\xi,\tau}^n$ has a flat trace \begin{equation}\mathrm{Tr}^\flat\mc L^n_{\xi,\tau}= \sum_{x,E^n(x)=x}\frac{e^{i\xi\tau_x^n}}{{(E^n)'(x)}-1} \end{equation} \end{lemma} \begin{proof} \[\mathrm{Tr}^\flat_\epsilon(\mc L^n_{\xi,\tau})=\int_{\ma T}\langle\rho_{\epsilon,x},\mc L^n_{\xi,\tau} \delta_x\rangle\mathrm dx.\] By definition of the action of $\mc L^n_{\xi,\tau}$ on distributions, \[\langle\rho_{\epsilon,x},\mc L^n_{\xi,\tau} \delta_x\rangle=(\mc L^n_{\xi,\tau})^*\rho_{\epsilon,x}(x),\] where $(\mc L^n_{\xi,\tau})^*$ is the $L^2$-adjoint of $\mc L^n_{\xi,\tau}.$ Let us recall that, if $\phi:\ma T\longrightarrow\ma T$ is a local diffeomorphism, for all continuous functions $u,v$ on $\ma T$, \begin{equation}
\label{changement de variables}
\int u(\phi(y))v(y)\mathrm{d}y=\int u(x)\sum_{\phi(y)=x}\frac{v(y)}{|\phi'(y)|}\mathrm{d}x. \end{equation} Thus, \[(\mc L^n_{\xi,\tau})^*v(x)=\sum_{E^n(y)=x}\frac{v(y)e^{i\xi\tau^n_y}}{(E^n)'(y)}.\] Therefore \[\begin{split}
\mathrm{Tr}^\flat_\epsilon(\mc L^n_{\xi,\tau})&=\int_{\ma T}(\mc L^n_{\xi,\tau})^*\rho_{\epsilon,x}(x)\mathrm{d}x\\
&=\int_{\ma T}\sum_{E^n(y)=x}\frac{\rho_{\epsilon,0}(y-E^n(y))e^{i\xi\tau^n_y}}{(E^n)'(y)}\mathrm{d}x\\
&=\int_{\ma T}\rho_{\epsilon,0}(y-E^n(y))e^{i\xi\tau^n_y}\mathrm{d}y \end{split}\] by the change of variables $x=E^n(y)$. Now, since $E$ is expansive, $y\mapsto y-E^n(y)$ is a local diffeomorphism, so applying (\ref{changement de variables}) once again gives \[\begin{split}
\mathrm{Tr}^\flat_\epsilon(\mc L^n_{\xi,\tau})&=\int_{\ma T}\rho_{\epsilon,0}(z)\sum_{y-E^n(y)=z}\frac{e^{i\xi\tau_y^n}}{{(E^n)'(y)}-1} \mathrm{d}z\\
&\underset{\epsilon\to0}\longrightarrow\sum_{E^n(y)=y}\frac{e^{i\xi\tau_y^n}}{{(E^n)'(y)}-1}. \end{split}\] \end{proof} If $E$ and $\tau$ are analytic, it is well known that $\mc L$ is trace-class and that $\mathrm{Tr}^\flat(\mc L_{\xi,\tau})=\mathrm{Tr}(\mc L_{\xi,\tau})$ (see for instance \cite{jezequel2017local}). In the smooth setting however the decay of the Ruelle-Pollicott spectrum can be arbitrarily slow (\cite{jezequel2017local}, Proposition 1.10). The flat trace is however related to the Ruelle-Pollicott spectrum defined above in the following way (This is a consequence of Thm 3.5 in \cite{baladi2018dynamical} and Thm 2.4 in \cite{jezequel2017local}):
\begin{proposition}\label{baladi} Assume that $\tau\in\mc C^k(\ma T)$ for some $k\geq 1$.
Let $\xi\in\ma R$, $0\leq s<k$, and $r>\frac{e^{\mathrm{Pr}(-\frac12J)}}{m^s}$ be such that $\mc L_{\xi,\tau}:H^{-s}(\ma T)\longrightarrow H^{-s}(\ma T)$ has no eigenvalue of modulus $r$, then \begin{equation} \label{baladi_eq}
\exists C>0,\forall n\in \ma N,\left|\mathrm{Tr}^\flat\mc L^n_{\xi,\tau} - \sum\limits_{\substack{\lambda\in\sigma(\mc L_{\xi,\tau})\\\abs{\lambda}>r}}\lambda^n \right|\leq Cr^n, \end{equation} where the eigenvalues are counted with multiplicity.
\end{proposition}
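\begin{remark}
As a simple illustration, consider the linear case $E(x)=2x\mod 1$ with $\tau\equiv 0$. Then $E^n$ has $2^n-1$ fixed points, each with $(E^n)'(x)=2^n$, so the trace formula gives
\[\mathrm{Tr}^\flat\mc L^n_{\xi,0}=\sum_{E^n(x)=x}\frac{1}{2^n-1}=1\qquad\text{for every }n\geq1.\]
This is consistent with (\ref{baladi_eq}): for the doubling map, $1$ is the only non-zero Ruelle resonance (every non-constant Fourier mode is eventually annihilated by $\mc L_{\xi,0}$), so the sum over resonances also equals $1$.
\end{remark}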
\section{Proof of lemma \ref{indep}}\label{annexe3} \begin{proof}
Let $X,X',Y,$ and $Y'$ be real random variables as in the statement of the lemma, such that $e^{iX}, e^{iX'}$ are uniform on $S^1$ and so that $X$ and $ X'$ are both independent of all three other random variables. Let us write $\ma P_Z$ for the law of a random variable $Z$. To show that $e^{i(X+Y)}$ and $e^{i(X'+Y')}$ are independent and uniform on $S^1$, it suffices to show that for any continuous functions $f,g:S^1\longrightarrow \ma R$, \[\ma E\left[f(e^{i(X+Y)})g(e^{i(X'+Y')})\right]=\int_0^{2\pi}\int_0^{2\pi}f(e^{i\theta})g(e^{i\theta'})\frac{d\theta}{2\pi}\frac{d\theta'}{2\pi}.\] Now, \[\ma E\left[f(e^{i(X+Y)})g(e^{i(X'+Y')})\right]=\int_{(S^1)^4}f(e^{i(x+y)})g(e^{i(x'+y')})d\ma P_{(X,Y,X',Y')}(x,y,x',y').\] By hypothesis, \[d\ma P_{(X,Y,X',Y')}(x,y,x',y')=\frac{dx}{2\pi}\frac{dx'}{2\pi}d\ma P_{(Y,Y')}(y,y').\] Thus, \begin{multline*} \ma E\left[f(e^{i(X+Y)})g(e^{i(X'+Y')})\right]\\= \int_{(S^1)^2}\left(\int_0^{2\pi}\int_0^{2\pi}f(e^{i(x+y)})g(e^{i(x'+y')})\frac{dx}{2\pi}\frac{dx'}{2\pi}\right)d\ma P_{(Y,Y')}(y,y')\\ \underset{\theta = x+y,\theta'=x'+y'}{=}\int_{(S^1)^2}\left(\int_0^{2\pi}\int_0^{2\pi}f(e^{i\theta})g(e^{i\theta'})\frac{d\theta}{2\pi}\frac{d\theta'}{2\pi}\right)d\ma P_{(Y,Y')}(y,y')\\ =\int_0^{2\pi}\int_0^{2\pi}f(e^{i\theta})g(e^{i\theta'})\frac{d\theta}{2\pi}\frac{d\theta'}{2\pi}. \end{multline*} \end{proof} \section{Topological pressure}\label{annexe}
\subsection{Definition}
\begin{defi}\label{pressure} Let $\phi: \ma T\longrightarrow \ma R$ be a Hölder-continuous function. The limit \begin{equation} \label{def_Pr} \mathrm{Pr}(\phi):=\lim_{n\to\infty}\frac 1 n\log\left(\sum_{E^n(x)=x}e^{\phi_x^n}\right) \end{equation} exists and is called the topological pressure of $\phi$ (see \cite{katok1997introduction} Proposition 20.3.3 p.630).
\end{defi} In other words \begin{equation}\label{pression}
\sum_{E^n(x)=x}e^{\phi_x^n}=e^{n\mathrm{Pr}(\phi)+o(n)}. \end{equation} The particular case $\phi=0$ gives the topological entropy $\mathrm{Pr}(0)=h_{top}.$ \begin{remark}\label{absorption} Note that the expression $e^{n\mathrm{Pr}(\phi)+o(n)}$ describes a large class of sequences, since for instance for any $k\in\ma N$, \[n^k e^{n\mathrm{Pr}(\phi)}=e^{n\mathrm{Pr}(\phi)+o(n)}.\] \end{remark} \subsection {Variational principle} Another definition of the pressure is given by the variational principle. Let us denote by $h(\mu)$ the entropy of a measure $\mu$ invariant under $E$ (see \cite{katok1997introduction} section 4.3 for a definition of entropy). For the next theorem, see \cite{katok1997introduction}, sections 20.2 and 20.3. The last sentence comes from Proposition 20.3.10. \begin{theorem}[Variational principle] Let $\phi: \ma T\longrightarrow \ma R$ be a Hölder function. \[\mathrm{Pr}(\phi) = \sup_{\mu\ E-\mathrm{invariant}}\left(\int\phi\ d\mu+h(\mu)\right).\] This supremum, taken over the invariant \textbf{probability} measures, is moreover attained for a unique $E$-invariant measure $\mu$, called equilibrium measure. In addition, if we note $ J = \log E'$ and $\mu_\beta$ the equilibrium measure of $-\beta J$, $\beta\mapsto\mu_\beta$ is one-to-one. \end{theorem}
\begin{corollary} \label{F}
The function \begin{equation} \label{def_F} \fonction{F}{\ma R_+^*}{\ma R}{\beta}{\frac 1\beta\mathrm{Pr}(-\beta J )} \end{equation} is strictly decreasing. \end{corollary}
\begin{proof} Let $\beta'>\beta>0$. By the previous theorem, with the same notations, \[\int-\beta J\ d\mu_\beta+h(\mu_\beta)>\int-\beta J\ d\mu_{\beta'}+h(\mu_{\beta'})\] and thus \[F(\beta) =\int-J\ d\mu_\beta+\frac{h(\mu_\beta)}\beta> \int-J\ d\mu_{\beta'}+\frac{h(\mu_{\beta'})}\beta\geq\int-J\ d\mu_{\beta'}+\frac{h(\mu_{\beta'})} {\beta'}=F(\beta').\] \end{proof}
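\begin{remark}
As a simple illustration, if $E$ is the linear map $x\mapsto lx\mod 1$, then $J\equiv\log l$ and $\sum_{E^n(x)=x}e^{-\beta J^n_x}=(l^n-1)\,l^{-\beta n}$, so that $\mathrm{Pr}(-\beta J)=(1-\beta)\log l$ and
\[F(\beta)=\left(\frac1\beta-1\right)\log l,\]
which is indeed strictly decreasing, and tends to $-\log l$ as $\beta\to+\infty$, in agreement with the limit computed in the next subsection.
\end{remark}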
\subsection{Proof of Lemma \ref{ref}}\label{label} Let $\phi:\ma T\rightarrow\ma R$ be a $\mc C^1$ function. Let as before $\phi^n_x$ be the Birkhoff sum (\ref{birkhoff}). By subadditivity of the sequence $\left(\inf_{x\in\ma T}\phi^n_x\right)_n$ and Fekete's Lemma we can define the following quantity: \begin{defi} Let us define \begin{equation}\label{phimin}
\phi_{\text{min}}:=\lim_{n\to\infty}\inf_{x\in\ma T}\frac 1n\phi^n_x. \end{equation}
\end{defi} \begin{lemma} The infimum in (\ref{phimin}) can be taken over periodic points: \begin{equation}
\phi_{\text{min}}=\lim_{n\to\infty}\inf_{x,E^n(x)=x}\frac 1n\phi^n_x. \end{equation} \end{lemma} \begin{proof} By lifting the expanding map to $\ma R$, we easily see that $E$ has at least a fixed point $x_0$. This point has $l^n$ preimages by $E^n$, defining $l^n-1$ intervals $I_k^n$ such that for all $1\leq k\leq l^n-1$ \[E^n:I^n_k\rightarrow\ma T\backslash\{x_0\}\] is a diffeomorphism. Thus, there exists $C>0$ such that for all $k$, if $x,y\in \overline {I^n_k}$, \[\forall0\leq j\leq n,d(E^j(x),E^j(y))\leq \frac C{m^{n-j}},\] with $m=\inf\abs{E'}>1$. Each $\overline {I^n_k}$ contains moreover a periodic point $y_{k,n}$ of period $n$ given by $E^n(y_{k,n})=y_{k,n}+k$. Hence let $n\in\ma N$, let $x_n\in\ma T$ be such that \[\phi^n_{x_n}= \inf_{x\in\ma T}\phi_x^n,\] and suppose that $x_n \in \overline{I_k^n}$. We have \[\begin{split}
\left|\phi^n_{y_{k,n}}-\phi^n_{x_n}\right|&=\left|\sum_{j=0}^{n-1}\phi(E^j(x_n))-\phi(E^j(y_{k,n}))\right|\\
&\leq C\max\abs{\phi'}\sum_{k=0}^\infty \frac{1}{m^k} \end{split}\] is bounded independently of $n$. Consequently \[\lim_{n\to\infty}\inf_{x,E^n(x)=x}\frac 1n\phi^n_x=\phi_{\text{min}}.\] \end{proof} \begin{lemma} \[F(\beta)\underset{\beta\to+\infty}{\longrightarrow}-\phi_{\text{min}}.\] \end{lemma} \begin{proof} Let $\beta>0$. Let us write \[F_n(\beta) = \frac {1} {n \beta}\log\left(\sum_{E^n(x)=x}e^{-\beta\phi_x^n}\right),\] so that \[F(\beta) \underset{(\ref{def_Pr},\ref{def_F})}{=} \lim_{n\to\infty}F_n(\beta).\] Let $\epsilon>0$. By definition of $\phi_{\min}$, for $n$ large enough, \[\forall x\in\mathrm{Per}(n), \phi^n_x\geq n(\phi_{\min}-\epsilon)\] and \[\exists x\in\mathrm{Per}(n), \phi^n_x\leq n(\phi_{\min}+\epsilon).\] Thus, \[e^{-\beta n(\phi_{\min}+\epsilon)}\leq\sum_{E^n(x)=x}e^{-\beta\phi_x^n}\leq l^n e^{-\beta n(\phi_{\min}-\epsilon)}\] and consequently \[-\phi_{\min}-\epsilon\leq F_n(\beta)\leq \frac{\log l}{\beta}-\phi_{\min}+\epsilon.\] Hence, letting $\epsilon\to 0$, we get \[-\phi_{\min} \leq F(\beta)\leq \frac{\log l}{\beta}-\phi_{\min}.\] When $\beta$ goes to infinity, the result follows. \end{proof}
\begin{proof}[Proof of Lemma \ref{ref}] Now we take $\phi = J = \log( E')$. By the definition of $J_{\min}$ \[\inf_{O\in\mathrm{Per}(n)}{J^n_O}=nJ_{\min}+o(n),\] thus \begin{equation}
\label{sE1} \sup_{O\in\mathrm{Per}(n)}\frac{1}{e^{J_O}-1}=e^{-nJ_{\min}+o(n)}=e^{n\lim_{\beta\to\infty} F(\beta)+o(n)}. \end{equation} We have \begin{multline}
\label{BG4}\frac 1{\sqrt n}\left(\sum_{m|n}m\sum_{O\in\mc P_m}\frac 1{(e^{J_O^n}-1)^2}\right)^{-\frac 12}\leq A_n \ \ \ \underset{(\ref{def_An})}{=} \left(\sum_{m|n}m^2\sum_{O\in\mc P_m}\frac 1{(e^{J_O^n}-1)^2}\right)^{-\frac 12}\\\leq\left(\sum_{m|n}m\sum_{O\in\mc P_m}\frac 1{(e^{J_O^n}-1)^2}\right)^{-\frac 12}.\end{multline} Since \[\begin{split}
\sum_{m|n}m\sum_{O\in\mc P_m}\frac 1{(e^{J_O^n}-1)^2}&= \sum_{E^n(x)=x}\frac{1}{(e^{J_x^n}-1)^2}\\
&=\sum_{E^n(x)=x}e^{-2J_x^n}\left(1+O\left(e^{-J^n_x}\right)\right)\\
&=\left(\sum_{E^n(x)=x}e^{-2J_x^n}\right)(1+o(1))\\
&=e^{n\mathrm{Pr}(-2J)+o(n)}, \end{split}\] Eq.(\ref{BG4}) gives \begin{equation*}
\frac 1{\sqrt n} e^{-\frac{n}{2} \mathrm{Pr}(-2J)+o(n)} \leq A_n \leq e^{-\frac{n}{2} \mathrm{Pr}(-2J)+o(n)} \end{equation*} hence from Remark \ref{absorption} \[nA_n= e^{-\frac n2\mathrm{Pr}(-2J)+o(n)}=e^{-nF(2)+o(n)}.\]
Finally, \[nA_n\sup_{O\in\mathrm{Per}(n)}\frac{1}{e^{J_O}-1} \underset{(\ref{sE1})}{=} e^{n(\lim\limits_\infty F - F(2))+o(n)}\to 0\] from Corollary \ref{F}. \end{proof}
\end{document}
Rule of product
In combinatorics, the rule of product or multiplication principle is a basic counting principle (a.k.a. the fundamental principle of counting). Stated simply, it is the intuitive idea that if there are a ways of doing something and b ways of doing another thing, then there are a · b ways of performing both actions.[1][2]
Examples
To choose one element of $\left\{A,B,C\right\}$ and one element of $\left\{X,Y\right\}$ is to choose one element of $\left\{AX,AY,BX,BY,CX,CY\right\}$.
In this example, the rule says: multiply 3 by 2, getting 6.
The sets {A, B, C} and {X, Y} in this example are disjoint sets, but that is not necessary. The number of ways to choose a member of {A, B, C}, and then to do so again, in effect choosing an ordered pair each of whose components are in {A, B, C}, is 3 × 3 = 9.
As another example, when you decide to order pizza, you must first choose the type of crust: thin or deep dish (2 choices). Next, you choose one topping: cheese, pepperoni, or sausage (3 choices).
Using the rule of product, you know that there are 2 × 3 = 6 possible combinations of ordering a pizza.
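For instance, the same count can be enumerated directly; a short illustrative sketch in Python (the variable names are only for illustration):

```python
from itertools import product

crusts = ["thin", "deep dish"]
toppings = ["cheese", "pepperoni", "sausage"]

combos = list(product(crusts, toppings))  # all (crust, topping) pairs
print(len(combos))                        # 6 = 2 * 3
```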
Applications
In set theory, this multiplication principle is often taken to be the definition of the product of cardinal numbers.[1] We have
$|S_{1}|\cdot |S_{2}|\cdots |S_{n}|=|S_{1}\times S_{2}\times \cdots \times S_{n}|$
where $\times $ is the Cartesian product operator. These sets need not be finite, nor is it necessary to have only finitely many factors in the product; see cardinal number.
An extension of the rule of product considers there are n different types of objects, say sweets, to be associated with k objects, say people. How many different ways can the people receive their sweets?
Each person may receive any of the n sweets available, and there are k people, so there are $\overbrace {n\cdots \cdot n} ^{k}=n^{k}$ ways to do this.
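A quick illustrative sketch of the same count (the values of n and k are arbitrary examples):

```python
from itertools import product

n, k = 3, 4                            # n types of sweets, k people
assignments = list(product(range(n), repeat=k))
print(len(assignments), n ** k)        # both are 81
```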
Related concepts
The rule of sum is another basic counting principle. Stated simply, it is the idea that if we have a ways of doing something and b ways of doing another thing and we can not do both at the same time, then there are a + b ways to choose one of the actions.[3]
See also
• Combinatorial principles
References
1. Johnston, William, and Alex McAllister. A transition to advanced mathematics. Oxford Univ. Press, 2009. Section 5.1
2. "College Algebra Tutorial 55: Fundamental Counting Principle". Retrieved December 20, 2014.
3. Rosen, Kenneth H., ed. Handbook of discrete and combinatorial mathematics. CRC pres, 1999.
Characterizing the roles of changing population size and selection on the evolution of flux control in metabolic pathways
Alena Orlenko1,2,
Peter B. Chi1,3 &
David A. Liberles ORCID: orcid.org/0000-0003-3487-88261,2
Understanding the genotype-phenotype map is fundamental to our understanding of genomes. Genes do not function independently, but rather as part of networks or pathways. In the case of metabolic pathways, flux through the pathway is an important next layer of biological organization up from the individual gene or protein. Flux control in metabolic pathways, reflecting the importance of mutation to individual enzyme genes, may be evolutionarily variable due to the role of mutation-selection-drift balance. The evolutionary stability of rate limiting steps and the patterns of inter-molecular co-evolution were evaluated in a simulated pathway with a system out of equilibrium due to fluctuating selection, population size, or positive directional selection, to contrast with those under stabilizing selection.
Depending upon the underlying population genetic regime, fluctuating population size was found to increase the evolutionary stability of rate limiting steps in some scenarios. This result was linked to patterns of local adaptation of the population. Further, during positive directional selection, as with more complex mutational scenarios, inter-molecular co-evolution was observed more frequently.
Differences in patterns of evolution when systems are in and out of equilibrium, including during positive directional selection may lead to predictable differences in observed patterns for divergent evolutionary scenarios. In particular, this result might be harnessed to detect differences between compensatory processes and directional processes at the pathway level based upon evolutionary observations in individual proteins. Detecting functional shifts in pathways reflects an important milestone in predicting when changes in genotypes result in changes in phenotypes.
Understanding the processes that drive lineage-specific evolution is a fundamental challenge in comparative genomics [1]. Many methods have been developed that detect selection, including positive directional selection, at the level of the protein encoding gene. However, proteins function together in pathways and networks, and metabolic pathways are a particularly well understood system. When enzymes under positive selection cluster in a pathway, this can be a sign of directional selection on the pathway, but it can also potentially be explained by negative selection on the pathway with compensatory co-evolution of individual enzymes. These epistatic effects are an important part of the genotype-phenotype map that while ignored by most existing methods, are critical to predicting when changes in genome sequences result in changes to molecular, cellular, and organismal phenotypes. The importance of this work is in characterizing the epistatic nature of the genotype-phenotype map towards this type of prediction of functional shift, using the particular example of metabolic pathways. The same processes that apply to metabolic pathways, apply to other types of pathways and inter-molecular networks, as well as to intra-molecular epistasis [2, 3]. At higher levels of organization, pathways can be redundant and can also have epistatic effects on each other, resulting in ridges in fitness landscapes and more complex patterns of evolution [4]. These complex patterns of evolution are shaped by the interplay of selection on phenotypes, mutational processes, drift, and population genetic processes, which must be understood together to characterize the genotype-phenotype map.
A previous study examined the co-evolution of enzymes in a pathway under negative (stabilizing) selection to preserve pathway flux, which presents a null model for what co-evolution looks like as pathways evolve under more complex scenarios [5]. This previous work, using forward evolutionary simulations [5] and computational analysis of pathways like glycolysis [6] and pyrimidine biosynthesis [7], established the dynamics of the systems, including an important role for mutation-selection-drift balance over longer evolutionary periods. A point to be emphasized is that even when negative selection prevails at the pathway level, individual enzymes can evolve more rapidly than the cumulative function of the pathway as a whole within this paradigm as selection does not act to preserve the activities of an individual enzyme in isolation from the rest of the pathway. If individual enzymes are shifting in their activities relative to the flux through the entire pathway, then their control over the flux of the pathway will shift. This corresponds to potential to shift the flux of the entire pathway through changes to individual enzymes, the potential for mutations of large effect in individual enzyme encoding genes. The stability of flux control on evolutionary timescales can be measured by the number of generations a particular enzymatic step is the slowest and has the most effect on flux.
Before examining selection, one key aspect that has not yet been examined is how the system responds to fluctuations in population size, for example as driven by ecology (e.g. seasonally). An example of this is the seasonal bottlenecking of mammals, reptiles, insects, fungi, and plants [8,9,10,11]. Dramatic changes in population size are observed during seasonal and ecological shifts. One hypothesis here is that flux-control stability may be affected by shifting population size; specifically, we hypothesize that the stability of flux-controlling steps may be prolonged by extreme changes in population size. Similar to shifts in population size, one might also expect interesting dynamics with fluctuating selection.
Moreover, adaptive shifts are expected to pull the population out of the mutation-selection balance and facilitate directional changes in fitness component parameters. Many studies have looked for evidence of pathway-level selection by examining pathways where multiple individual genes show co-temporal evidence for lineage-specific positive selection [12,13,14,15,16]. This will lead to candidate hypotheses, but does not explicitly differentiate between compensatory processes and directional processes. Here the role of fluctuating population size and selection as well as an adaptive directional shift in flux were examined to evaluate patterns of co-evolution and of flux control.
Simulated evolution of metabolic pathways
To evaluate the role of population genetic parameters in biochemical pathway evolution, a population of cells with a key metabolic pathway was evolved under different selective and population genetic schemes. Evolution involves proposing mutations in parameters of the system of ordinary differential equations, followed by selection based upon their effects. The key elements of that scheme were described previously [5] and are summarized here. We simulate the evolution of a metabolic pathway with five reversible reactions and one regulatory loop that controls the rate of production of the first step and one mass action reaction to remove the final product from the system. The simplified kinetic model contains features of glycolysis [17] and is shown in S1. This includes the feedback loop (as an approximation to the regulation of glycolysis) and the synthesis of final metabolite F as analogous to pyruvate in a linear pathway. The model is described by a system of ordinary differential equations where reactions are represented by reversible Michaelis-Menten kinetics [18]. Each enzyme has parameters for enzyme concentration [Enzyme] (mmol/l), the catalytic constant (kcat) (mmol/l/s), the Michaelis constant for the substrate (KM) (mmol/l), the reversible catalytic constant (kcatr) (mmol/l/s), and the Michaelis constant for the product (KMr) (mmol/l). The kinetic model has a single inhibitory reaction that is described in the system by the inhibition constant KI (mmol/l). Additional dynamics include a constant influx of metabolite A and a mass action reaction utilizing F. The steady state solution of that system is calculated using the COPASI environment [19]. Below, we describe two evolutionary simulation frameworks with explicit and non-explicit populations of cells containing the pathways.
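As a minimal sketch of the kind of rate law involved, a reversible Michaelis-Menten step and a toy Euler integration are shown below; the two-step chain, parameter values, and integration scheme are illustrative assumptions only and do not reproduce the published COPASI model.

```python
# Sketch of a reversible Michaelis-Menten rate law of the general form used
# in the pathway model; all parameter values below are illustrative only.

def reversible_mm(S, P, E, kcat, Km, kcatr, Kmr):
    """Net rate of a reversible enzymatic step (mmol/l/s).

    S, P : substrate and product concentrations
    E    : enzyme concentration
    kcat : forward catalytic constant,  kcatr : reverse catalytic constant
    Km   : Michaelis constant (substrate), Kmr : Michaelis constant (product)
    """
    return E * (kcat * S / Km - kcatr * P / Kmr) / (1.0 + S / Km + P / Kmr)


# Toy two-step chain A -> B -> C integrated with a simple Euler scheme,
# standing in for the steady-state solution that COPASI provides.
A, B, C = 10.0, 0.0, 0.0
dt = 0.001
for _ in range(200000):
    v1 = reversible_mm(A, B, E=1.0, kcat=5.0, Km=1.0, kcatr=0.5, Kmr=5.0)
    v2 = reversible_mm(B, C, E=1.0, kcat=3.0, Km=1.0, kcatr=0.3, Kmr=5.0)
    A += dt * (1.0 - v1)          # constant influx of the first metabolite
    B += dt * (v1 - v2)
    C += dt * (v2 - 0.1 * C)      # mass-action removal of the end product
print(round(B, 3), round(C, 3))
```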
First, we have introduced a forward evolutionary simulation framework where a population of cells (an explicit population) containing the synthetic pathways evolves with Wright-Fisher dynamics with mutation and weighted sampling with replacement between generations. We can assign different mutation rates and effects and use various selective schemes to evaluate the fixation of introduced mutations in order to test how the synthetic metabolic pathways evolve. Each forward simulation was repeated 5 times. Mutations were introduced with a probability of 1.5*10^-2 per parameter per individual per generation. The mutational effects on the catalytic rate constant and enzyme concentration (both indicated by p below) are drawn from a normal distribution with unit variance and variable mean \( {\mu}_{n_1} \),
$$ \mu_{n_1} = -0.01\, e^{\,c\cdot p_{n_1-1}}. $$
The mutational effects on the binding constants (K) are described by a normal distribution with unit variance and a variable mean \( {\mu}_{n_2} \),
$$ \mu_{n_2} = \frac{1}{-0.01\, e^{\,c\cdot K_{n_2-1}}} $$
The index value c is used to scale the mutational effects, with the following values for each constant:
$$ c=\left\{\begin{array}{ll}2.5\times 10^{-2}, & \text{enzyme concentration}\\ 2.5\times 10^{-2}, & \text{inhibition constant}\\ 1.0\times 10^{-2}, & \text{catalytic constant}\\ 3.\overline{3}\times 10^{-4}, & \text{reversible catalytic constant}\\ 1, & \text{product constant}\\ 3.\overline{3}\times 10^{-2}, & \text{reversible product constant}\end{array}\right. $$
This mutational scheme allows for scaling across orders of magnitude in kinetic parameters and generates a distribution of mutational effects with a bias towards slightly degrading change that is dependent upon the activity and expression level of the protein.
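A minimal sketch of how such state-dependent effects could be drawn is given below; the unit-variance normal draw follows the description above, while how the drawn effect is applied to the parameter (and any bounds keeping parameters in a realistic range) is left unspecified and would be an assumption of any reimplementation. The scaling values are approximate.

```python
import math
import random

# Scaling index c for each parameter class, approximately as listed in the text.
C_SCALE = {
    "enzyme_concentration": 2.5e-2,
    "inhibition_constant": 2.5e-2,
    "catalytic_constant": 1.0e-2,
    "reversible_catalytic_constant": 3.3e-4,
    "product_constant": 1.0,
    "reversible_product_constant": 3.3e-2,
}

def proposed_effect_rate_like(p_current, c):
    """Draw a mutational effect for kcat-like parameters and enzyme concentration."""
    mu = -0.01 * math.exp(c * p_current)
    return random.gauss(mu, 1.0)

def proposed_effect_binding_like(k_current, c):
    """Draw a mutational effect for Michaelis/binding constants."""
    mu = 1.0 / (-0.01 * math.exp(c * k_current))
    return random.gauss(mu, 1.0)

# Example: one proposed effect on a catalytic constant of 100 mmol/l/s.
print(proposed_effect_rate_like(100.0, C_SCALE["catalytic_constant"]))
```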
Fitness of an individual is described as
$$ {F}_1=\frac{1}{1+{\left({e}^{-\left( flux-650\right)}\right)}^{0.07}} $$
Values in this logistic function control the asymptotic fitness and the gradient of the flux to fitness relationship. As enzymes reach limits of adaptation because of the ability to utilize products, so do pathways, where the end products are also subjected to the rules of binding and catalysis. The asymptotic control of 650 and slope of 0.07 are arbitrary, but are chosen to reflect the ultimate utilizable flux.
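A minimal sketch of the flux-to-fitness map and one Wright-Fisher resampling step is given below; the representation of individuals and the helper names are illustrative assumptions.

```python
import math
import random

def fitness_from_flux(flux, asymptote=650.0, slope=0.07):
    """Logistic flux-to-fitness map, written in the algebraically equivalent
    form 1 / (1 + exp(-slope * (flux - asymptote))) for numerical stability."""
    return 1.0 / (1.0 + math.exp(-slope * (flux - asymptote)))

def next_generation(population, flux_of):
    """One Wright-Fisher generation: weighted sampling with replacement,
    with weights given by the fitness of each individual."""
    weights = [fitness_from_flux(flux_of(ind)) for ind in population]
    return random.choices(population, weights=weights, k=len(population))

# Toy usage: individuals represented only by their steady-state flux.
population = [random.uniform(600.0, 700.0) for _ in range(100)]
population = next_generation(population, flux_of=lambda flux: flux)
```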
Each of these simulations was run until the point of mutation-selection balance was reached. The point of mutation-selection balance was determined by the stability of the fitness of the median individual across generational time as assessed by observation of approximately equal rates of positive and negative changes.
Another evolutionary simulation framework used a scheme where the Kimura fixation probability was used to evaluate the fixation of proposed mutations, eliminating an explicit population and any probability of multiple segregating changes and representing the population by a single wild type individual. We have
$$ \psi =\frac{1-{e}^{-2 c{N}_e s p}}{1-{e}^{-2 c{N}_e s}} $$
to represent the fixation probability, where Ne is the population size, c is the ploidy (haploid, c = 1), s is the selective coefficient (s = f'/f0–1, where f' is the fitness after mutation and f0 before) and p is the initial frequency of the allele in a population. The initial frequency p was set to ½ rather than 1/N for computational efficiency, giving the property that a neutral mutation has a 50% chance of fixation, which scales the selective coefficient. Population size therefore affected the experimental results through the probability of rising from a frequency of 0.5 to fixation, while the introduction of new mutations was independent of population size.
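In code, the acceptance step might look like the following sketch; the function names and the guard against numerical overflow are implementation details assumed here, not part of the description above.

```python
import math
import random

def fixation_probability(s, Ne, c=1, p=0.5):
    """Kimura fixation probability for a proposed mutation.

    s  : selection coefficient, s = f_new / f_old - 1
    Ne : effective population size
    c  : ploidy (1 for haploid, as in the text)
    p  : initial allele frequency (0.5 in the text)
    """
    if abs(s) < 1e-12:
        return p                      # neutral limit of the formula
    x = 2.0 * c * Ne * s
    if x < -700.0:
        return 0.0                    # strongly deleterious: essentially never fixes
    return (1.0 - math.exp(-x * p)) / (1.0 - math.exp(-x))

def accept_mutation(f_old, f_new, Ne):
    """Fix the proposed mutation with the Kimura probability."""
    s = f_new / f_old - 1.0
    return random.random() < fixation_probability(s, Ne)
```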
This experimental setup with the Kimura fixation probability was run for 200,000 generations per experimental replicate and the rate-limiting step length was calculated as was previously described [5]. Each instance of population size was run for 30 replicates. Sensitivity analysis was used to calculate the rate-limiting step for a generation by changing one reaction step at a time by 10%, while others were fixed, and calculating the difference between the original and perturbed state fluxes, generating a sensitivity coefficient. When a reaction is rate-limiting, changing the reaction rate has a larger effect than it does for reactions that are not rate-limiting.
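A sketch of this sensitivity analysis is given below; the toy serial-step flux function in the usage example is an illustrative assumption, not the actual pathway model.

```python
def sensitivity_coefficients(flux_fn, rates, delta=0.10):
    """Absolute change in steady-state flux when each reaction rate is
    perturbed by 10% while the others are held fixed."""
    base = flux_fn(rates)
    coefficients = []
    for i in range(len(rates)):
        perturbed = list(rates)
        perturbed[i] *= (1.0 + delta)
        coefficients.append(abs(flux_fn(perturbed) - base))
    return coefficients

def rate_limiting_step(flux_fn, rates):
    """Index of the reaction whose perturbation changes the flux the most."""
    coefficients = sensitivity_coefficients(flux_fn, rates)
    return max(range(len(coefficients)), key=coefficients.__getitem__)

# Toy usage: flux of serial steps approximated by a harmonic-mean-like form.
toy_flux = lambda rates: 1.0 / sum(1.0 / r for r in rates)
print(rate_limiting_step(toy_flux, [2.0, 1.0, 3.0]))   # step 1 (the slowest)
```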
Experiments with fluctuating population size and fluctuating selective pressure
A subset of experiments used an explicit population. For these experiments with an explicit population, mutations were introduced with a higher rate of 1.5*10−2 per parameter per individual per generation (as compared with previous studies). The previously published scheme [5] with selection on flux only was used as the basis for all of the experiments. Simulations with an explicit population had various schemes for fluctuating the population size (summarized in Table 1). Six experiments will be analyzed here: N1 with a periodicity of 360 generations and amplitudes that range between 25 and 225 individuals, N2 with periodicity 45 generations and amplitudes that range between 50 and 150 individuals, N3 with periodicity 360 generations and amplitudes with the range of [50:150] individuals, N4 with periodicity 720 generations and amplitude with the range of [25:225] individuals, N5 with periodicity 45 generations and amplitude with the range of [25:225] individuals, N6 with periodicity 22.5 generations and amplitudes with the range of [25:225] individuals (see Additional file 1: Figure S2 for experimental schemes). Corresponding to each population scheme, three control experiments were examined: the lowest population size, the median population size, and the highest population size. For schemes N2, N3 controls are the experiments with population size 50, 100, 150. For schemes N1, N4, N5 and N6 controls are the experiments with population size 25, 150, and 225.
Table 1 A summary of the parameters used across experiments (amplitude and periodicity) is shown
A number of fluctuating population size schemes were implemented in the experiments with a calculated fixation probability for each introduced mutation (Additional file 1: Figures S3A, 3B). The amplitude was set to a range of 100 to 1,000,000 individuals for the following schemes with variable periodicity: K1 5760 generations, K2 11,520 generations, K3 23,040 generations. For the periodicity of 23,040 generations, two more amplitudes were tested: experiment K4 with the amplitude range of 100 to 1000 individuals, and experiment K5 with the amplitude range of 100 to 10,000 individuals. Control experiments correspondingly contain population sizes of 100, 10,000, and 1,000,000.
Two fluctuating selection schemes were tested here by adjusting the asymptote of the fitness parameter that was originally introduced in [5]:
$$ {F}_1=\frac{1}{1+{\left({e}^{-\left( flux-{a}^{\ast }650\right)}\right)}^{0.07}} $$
Schemes following the same amplitude range [325, a = 0.5; 975, a = 1.5] and different periodicity included S1 with periodicity 45 and S2 with periodicity 360. Three controls with a set to 0.5, 1.0, and 1.5 were evaluated correspondingly. These changes in the a value result in changes to the amplitude.
Simulations with positive directional selection
In the simulations with positive directional selection, the asymptotic parameter X was set to 650 for the first 2000 generations (equilibrium state 1), representing the previously described selection on flux only. The adaptive shift experiment used the following scheme, in which two different values of the asymptotic control parameter X were applied to the fitness function shown below.
$$ {F}_1=\frac{1}{1+{\left({e}^{-\left( flux- X\right)}\right)}^{0.07}} $$
After 2000 generations, X was set to 700, which triggered a fitness recalculation in the system and enabled the adaptation process to begin. After applying a positive selection stimulus (X = 700), a new mutation-selection equilibrium (equilibrium state 2) was established after 28,000 generations (30,000 generations after generation 0) with system adaptation to the new conditions. It was additionally run for 2000 generations after equilibrium state 2 (for a total of 30,000 generations) (Fig. 1).
During positive directional selection, fitness (green), flux (blue) and the selected fitness value (red) over the time-course in the experiment with an explicit population are shown
Statistical analysis of flux control stability
To assess the statistical significance of any step spending a longer period as rate limiting, a permutation test was utilized for the null hypothesis of no flux control stability, which implicitly means that each reaction should have the same average number of consecutive generations that it remains rate limiting. Thus, we chose the average absolute deviation as our statistic of interest and generated its null distribution in each case, in a manner similar to that in [5]. Additionally, bootstrap confidence intervals were constructed by first bootstrapping the replicates, and then bootstrapping the values of consecutive runs within each replicate. Error bars on the corresponding figures indicate 95% confidence bounds, obtained by taking the 2.5th and 97.5th percentiles from the bootstrap sampling distribution.
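A sketch of such a permutation test is given below. Pooling the observed run lengths and reshuffling them across reactions is one reasonable way to generate the null distribution and may differ in detail from the original analysis; the bootstrap confidence intervals described above are not reproduced here.

```python
import random

def avg_abs_deviation(values):
    """Average absolute deviation from the mean."""
    mean = sum(values) / len(values)
    return sum(abs(v - mean) for v in values) / len(values)

def permutation_pvalue(run_lengths_by_reaction, n_perm=10000, seed=0):
    """Permutation p-value for equal average consecutive rate-limiting run
    lengths across reactions.

    run_lengths_by_reaction : list of lists; entry i holds the consecutive
        rate-limiting run lengths observed for reaction i (assumed non-empty).
    """
    rng = random.Random(seed)
    observed = avg_abs_deviation([sum(r) / len(r) for r in run_lengths_by_reaction])
    pooled = [x for runs in run_lengths_by_reaction for x in runs]
    sizes = [len(r) for r in run_lengths_by_reaction]
    exceed = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        start, means = 0, []
        for n in sizes:
            means.append(sum(pooled[start:start + n]) / n)
            start += n
        if avg_abs_deviation(means) >= observed:
            exceed += 1
    return exceed / n_perm
```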
Examination of evolution and co-evolution in experiments under positive directional selection
Simulations where positive directional selection was applied used methodology similar to that which has been described previously with an explicit population of individuals [5]. Briefly, coevolution was estimated using the kinetic parameter values (kcat, kcatr, Km, Kmr, [enzyme]) in the median individual at each generation within each window of 2000 generations. Every 2000 generations, complete linkage clustering was performed using absolute correlations as a measure of relatedness between the rates of change of parameters of the system. The largest clusters that were significant at the 0.05 level were used to identify co-evolving parameters. A total of 16 periods of evolution were tested and subjected to statistical analysis.
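A sketch of the clustering step using SciPy is shown below; the fixed dendrogram cut stands in for the significance testing used in the study, and the threshold value is an assumption.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def coevolution_clusters(param_series, threshold=0.5):
    """Cluster kinetic parameters by the absolute correlation of their rates of change.

    param_series : 2D array, shape (n_generations, n_parameters), of parameter
                   values in the median individual over a window of generations
                   (assumes every parameter changes at least once in the window).
    threshold    : distance (1 - |r|) at which the dendrogram is cut; the
                   significance testing used in the study is not reproduced here.
    """
    rates = np.diff(param_series, axis=0)             # per-generation rates of change
    corr = np.corrcoef(rates, rowvar=False)           # parameter-by-parameter correlation
    dist = np.clip(1.0 - np.abs(corr), 0.0, None)     # absolute correlation -> distance
    np.fill_diagonal(dist, 0.0)
    condensed = squareform(dist, checks=False)
    tree = linkage(condensed, method="complete")      # complete-linkage clustering
    return fcluster(tree, t=threshold, criterion="distance")
```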
Statistical analysis of co-evolution during selection for increased flux
The hypothesis tested here was whether the rate of change of flux was associated with the amount of inter-molecular co-evolution. Each block of 2000 consecutive generations was examined for the change in flux from its beginning to end and for the number of inter-molecular clusters identified (reflecting parameter values from different enzyme steps that showed evidence for co-evolution). A mixed-effects model was employed to account for the multi-level data structure induced by the fact that observations within each replicate are correlated. Our statistic of interest is the slope main effect describing the extent to which an increase in the rate of change of flux is associated with increased inter-molecular co-evolution. We analyzed this non-parametrically by utilizing a permutation test to generate a null distribution for the slope.
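A sketch of a grouped permutation test for the slope is given below; the simple pooled regression used here is a simplification of the mixed-effects analysis, and permutation is done within replicates to respect the grouped structure.

```python
import random

def slope(x, y):
    """Ordinary least-squares slope (assumes the x values are not all identical)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sum((a - mx) ** 2 for a in x)
    return num / den

def slope_permutation_test(blocks, n_perm=10000, seed=0):
    """Permutation test for association between flux change and co-evolution.

    blocks : list of replicates; each replicate is a list of
             (flux_change, n_intermolecular_clusters) pairs, one per
             2000-generation window.
    """
    rng = random.Random(seed)
    x = [fc for rep in blocks for fc, _ in rep]
    y = [k for rep in blocks for _, k in rep]
    observed = slope(x, y)
    exceed = 0
    for _ in range(n_perm):
        perm_y = []
        for rep in blocks:
            ys = [k for _, k in rep]
            rng.shuffle(ys)          # shuffle within each replicate
            perm_y.extend(ys)
        if abs(slope(x, perm_y)) >= abs(observed):
            exceed += 1
    return observed, exceed / n_perm
```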
It has previously been shown that flux control in metabolic pathways is expected to be unstable under mutation-selection-drift balance with simple population genetic and selective schemes [5]. One question raised by the observed results is when flux control stability might emerge. The evolutionary ecology [8,9,10,11] and molecular evolution literatures [20] have emphasized an important role of fluctuating environments or population sizes in complex evolutionary dynamics. Fluctuations of both flux asymptotes (selection levels) and population sizes were evaluated in forward evolutionary simulation frameworks to examine this in parameter-controlled settings.
Experiments with an explicit population and a fluctuating population size
The analysis started with an explicit population framework that estimated the effect of a fluctuating population size with different amplitudes and frequencies (given a fixed mutation rate and effect distribution) on the evolutionary stability of rate-limiting steps. A number of fluctuating population size schemes and controls with fixed population sizes were tested for stability of rate-limiting steps and are shown here (Table 1, Additional file 1: Figure S2). The schemes (Additional file 1: Figure S2) represent different amplitudes and periodicities of population size fluctuations. Schemes N2 and N5 have the same periodicity, while schemes N1, N4, N5, N6 and N2, N3 have the same amplitude. Rate-limiting step stability was assessed when mutation-selection-drift balance equilibrium was reached and flux changes over the time-course did not show any directional fluctuations. Figure 2 shows that the fitnesses in all of the experiments do not show local temporal adaptation to fluctuation in the population size, where all observed changes come from compensatory processes under mutation-selection balance with no directional adaptive changes. It is well understood that population size alters the relative roles of drift and selection, with a stronger role for selection in larger population sizes. This is in fact observed in the population size effects in Fig. 2. It is also understood that the dynamics of fluctuating population sizes are driven by the bottlenecks (smaller Ne values).
The fluxes from experiments with an explicit population that fluctuated in size are shown. a. The fluxes for fluctuating experiments N1 (green), N2 (blue), N3 (yellow), N4 (red) are shown. Black lines correspond to the control experiments with population sizes 25 (CN25), 50 (CN50), 100 (CN100), 150 (CN150), 225 (CN225). b. The fluxes for fluctuating experiments N2 (blue), N5 (purple), N6 (brown) are shown. Black lines correspond to the control experiments with population sizes 25 (CN25), 50 (CN50), 100 (CN100), 150 (CN150), 225 (CN225)
Controls for the experimental schemes were assessed correspondingly at the highest, lowest, and middle point of each scheme: with population sizes of 50, 100 and 150 for schemes N2, N3 and population sizes of 25, 150 and 225 for schemes N1, N4, N5 and N6. Calculation of the average consecutive length of each reaction being rate-limiting (Fig. 3) showed that scheme N5 has a statistically significant difference in the number of consecutive generations spent as rate limiting, across the reactions. This scheme has elevated amplitude as compared to the scheme N2 and N3, but also has a smaller periodicity as compared to schemes N1 and N4 and larger periodicity as compared to scheme N6. Also it could be seen that the experiment with higher amplitude and lower periodicity (N6) has a signal for elevated rate-limiting step stability when compared to N2 and controls CN100, CN150, and CN225, but there is not statistical support for differences across reactions. A further increase in periodicity (N1 and N4) did not result in a signal for elevated rate-limiting step stability. The strongest signal for differences in the consecutive number of rate limiting steps across reactions was observed in N3. However, it should be noted that one of the control experiments, CN50, showed a p-value for unequal flux control across steps of less than 0.05, at p = 0.04779, and this was the control scenario specifically for schemes N2 and N3.
The distributions of the average length of rate-limiting steps between the reactions in the experiments with an explicit population and a fluctuating population size are shown. The schemes for fluctuating experiments N1 (green), N2 (blue), N3 (yellow), N4 (red), N5 (purple), and N6 (brown) are shown. Black bars correspond to the control experiments with population sizes 25 (C25), 50 (C50), 100 (C100), 150 (C150), 225 (C225). Nominal p-values were obtained via permutation tests with the null hypothesis that the average absolute deviation in the number of consecutive generations for each step was zero
Experiments with a calculated fixation probability and a fluctuating population size
In the explicit population framework, where multiple mutations can simultaneously segregate, an elevated amplitude with an appropriate frequency (tuned to the mutation rate to enable a response to environmental changes) in population size fluctuations increased the evolutionary stability of rate-limiting steps. However, because increasing the amplitude is computationally expensive in the explicit population framework, an alternative framework involving the Kimura fixation probability, which allows any population size to be implemented in a computationally feasible manner, was used. Five different schemes with an explicit fixation probability were studied here in order to estimate various ranges of amplitudes. First, an elevated amplitude [100, 1,000,000] was tested on different periodicity schemes (K1, K2, K3) corresponding to consecutive doubled increases in periodicity. The resulting stability increase slightly correlates with the periodicity increase (Fig. 4, Table 2). Two other amplitude schemes K4 [100, 1000] and K5 [100, 10,000] were examined, with both having the same periodicity as scheme K3. Scheme K4 showed the largest average rate-limiting step stability out of all tested schemes. As in the previous experimental design, a specific range of both periodicity and amplitude can result in a signal of elevated rate-limiting step stability. Analysis of fitnesses revealed that schemes K1-K3 represent adaptive behavior of the system (Fig. 5). Here, the range of population size fluctuations, independent of the periodicity, represents local directional selection on flux. This can be seen in the fluctuations of the flux that follow the fluctuations in population size. This behavior changes when the amplitude of population size fluctuations is reduced to be less extreme, causing less intense selective pressure, as can be seen for experiments K4 and K5. It should be noted that the asymmetry in the responses to increasing and decreasing flux reflects the relative frequencies of flux increasing (rarer) and flux decreasing mutations (more common), according to the mutational scheme that has been designed. This reflects the well-known natural bias towards "deleterious" mutations. Controls for the experimental schemes were assessed correspondingly at the highest, lowest, and middle points of schemes K1, K2 and K3: with population sizes 100, 10,000 and 1,000,000.
The distributions of the average consecutive length of rate-limiting steps in the experiments with a calculated fixation probability for each mutation and a fluctuating population size are shown. The schemes of fluctuating experiments K1 (green), K2 (red), K3 (blue), K4 (yellow), and K5 (purple) are shown. Black bars correspond to the control experiments with population sizes 100 (CK100), 10,000 (CK10000), and 1,000,000 (CK1M). Nominal p-values were obtained via permutation tests with the null hypothesis that the average absolute deviation in the number of consecutive generations for each step was zero
Table 2 The fraction of time each reaction spent rate-limiting for the experiments with fluctuating population size and two distinct population frameworks, with an explicit population and with a calculated fixation probability, is shown
The fluxes of the experiments with a calculated mutational fixation probability and a fluctuating population size are shown. a. The fluxes of fluctuating experiments K1 (green), K2 (red), K3 (blue) are shown. Black lines correspond to the control experiments with population sizes 100 (CK100), 10,000 (CK10000), 1,000,000 (CK1000000). b. The fluxes of fluctuating experiments K3 (blue), K4 (yellow), K5 (purple) are shown. Black lines correspond to the control experiments with population sizes 100 (CK100), 10,000 (CK10000), 1,000,000 (CK1000000)
There is also a noticeable decrease in the evolutionary stability of the rate-limiting step at reaction 5 in some of the experiments, including controls. Additional tests for differences in rates of mutational acceptance at each step did not show any deviations. Moreover, examination of the fraction of time each reaction was rate-limiting showed that all reactions spend roughly equal time being rate-limiting in all of the experiments (Table 2), although less equal than suggested by the rate-limiting step stability. These findings suggest that the reduced evolutionary stability of rate limitation for reaction 5 is connected to the location of the reaction as the last step in the pathway and the interplay of the pathway flux with the mass action procedure. Changing the mass action procedure so that this step cannot be rate-limiting causes this evolutionary instability to disappear.
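The Table 2 summary statistic can be read directly off the same per-generation vector used in the permutation-test sketch above; a minimal helper (illustrative, not taken from the authors' scripts) might look like this:

```python
import numpy as np

def fraction_rate_limiting(rate_limiting, n_reactions):
    """Fraction of generations each reaction spends as the rate-limiting step,
    i.e., the kind of quantity summarized in Table 2."""
    counts = np.bincount(rate_limiting, minlength=n_reactions)
    return counts / counts.sum()
```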
Overall, elevated evolutionary stability of rate-limiting steps could be found only in a limited range of parameter space (in both amplitude and periodicity). As can be seen from Figs. 6 and 7, different ranges of amplitude and periodicity cause different system responses. An extremely large amplitude (Fig. 7a) produces an adaptive response in system fitness and decreased evolutionary stability of rate-limiting steps (Fig. 4), while less extreme schemes of parameter change (Figs. 6b and 7b) generate stable fitness with no directional changes and elevated evolutionary stability of rate-limiting steps. Some schemes (Fig. 6a) show no directional changes but fail to show elevated evolutionary stability of rate-limiting steps, as this appears to require that the amplitude and frequency of the fluctuations are tuned to the mutation rate and effect sizes in the population so that a response can occur. Overall, this does not support a hypothesized role for non-equilibrium processes in generating different rates of equilibration at different steps and corresponding differences in flux control during the adaptation process, at least under the conditions tested here.
Fluxes overlaid with the experimental design are shown for fluctuating experiments with an explicit population and a fluctuating population size. a. The flux (blue line) and the experimental design (black line) for experiment N2 are shown. b. The flux (purple line) and the experimental design (black line) for experiment N5 are shown
Fluxes overlaid with the experimental design are shown for fluctuating experiments with a calculated fixation probability and a fluctuating population size. a. The flux (blue line) and the experimental design (black line) for experiment K4 are shown. b. The flux (yellow line) and the experimental design (black line) for experiment N5 are shown
Experiments with an explicit population and a fluctuating asymptotic flux
Along with a fluctuating population size, a fluctuating optimal flux was suggested to make a difference in the evolutionary stability of rate-limiting steps, as this also has the potential to lead to non-equilibrium dynamics. Fluctuating selection in metabolic networks was studied previously, and an increased robustness to changes was reported as a result of this fluctuating selection scheme [21]. Several schemes of fluctuating asymptotic flux were tested here, but only two, S1 and S2 (Additional file 1: Figure S4), were able to establish mutation-selection-drift balance, because of numerical instability in solving the set of differential equations. As can be seen in Fig. 8, there is no significant signal for elevated rate-limiting step stability in either experiment, although a slight increase in reaction stability in S2 was detected. Controls for the experimental schemes were assessed correspondingly at the highest, lowest, and middle point of the schemes, with a set to 0.5, 1.0, and 1.5. Fitness behavior over evolutionary time resembles adaptive changes for both schemes studied here (Additional file 1: Figure S4). A closer look at the combined plot of the amplitude and flux (Figs. 9 and 10) revealed that in both experiments, adaptive directional changes in flux correspond to fluctuations of the asymptotic flux coefficient (a), as with population size. These observations support the findings above about the absence of elevated evolutionary stability of rate-limiting steps when directional changes in fitness are present.
The distributions of the average lengths of rate-limiting steps between the reactions in the fluctuating selection experiments with an explicit population and fluctuating asymptotic flux, S1 (green), S2 (red), are shown. Black bars correspond to the control experiments with a = 0.5 (C_0.5), a = 1.00 (C_1.0), and a = 1.5 (C_1.5) correspondingly
The fluxes of the experiments with an explicit population and fluctuating asymptotic flux, S1 (green), S2 (red), are shown. Black lines correspond to the control experiments with a = 0.5 (C_0.5), a = 1.00 (C_1.0), and a = 1.5 (C_1.5) correspondingly
Fluxes and experimental designs of experiments with an explicit population and a fluctuating asymptotic flux are shown. a. The flux (red line) and the experimental design (black line) for experiment S1 are shown. b. The flux (green line) and the experimental design (black line) for experiment S2 are shown
Positive directional selection in an explicit population
To complement evolution with negative selection, a scheme involving positive selection was introduced. Selective pressures here directionally changed the flux. The co-evolution of enzyme activities across the pathway during adaptation is poorly understood, but presents a major challenge in generating a basic understanding of adaptive processes and differentiating them from compensatory processes reflecting stabilizing selection. Co-evolutionary analysis was performed for the positive selection experiment with an explicit population. Cluster analysis estimated co-evolutionary relationships between various enzymatic parameters at different stages: the pre-adaptation equilibrium state (stage 1), during adaptation (stages 2–8), and late adaptation towards equilibrium (stage 9 on) (Fig. 1). As was expected, there are differences between the described stages (Fig. 11). Both equilibrium stages (1 and 16) contain clustered enzymatic parameters that belong to the same enzyme (Enzymes A and C for stage 1 and Enzymes B and C for stage 16), similar to patterns observed for negative selection on flux only in previous studies [5]. The stages not in equilibrium showed different distributions of clusters. Stage 2 comprises the first 2000 generations after the adaptive shift and is similar to stage 1 in containing intra-enzyme clustered parameters. Stages 3–8 all contain various combinations of clusters with inter- and intra-enzyme parameters clustered together, but never just intra-enzyme parameters clustered alone.
Clusters for significant co-evolving parameters during each evolutionary time regime were generated. This cluster analysis includes early and late periods of equilibrium surrounding a longer period of adaptation. Parameters that show the same color belong to the same cluster and co-evolve together, while black parameters are not significantly part of any cluster
Since the mutations required to increase flux values come from the beneficial part of the mutational distribution, this process was time consuming, taking 28,000 generations to reach equilibrium. The rate of adaptation varies across the different stages. The earlier and middle stages (2–5) show more rapid accumulation of beneficial changes and are responsible for most of the gain towards the new equilibrium flux value, as might be expected. Mixed-effects regression analyses were performed in order to assess the association between flux change and the ratio/count of clusters across stages of adaptation, shown in Fig. 12.
The association between the change in flux and the time period (and associated evolutionary regime) is compared with the count of inter-molecular pairwise parameter clusters (a) and the ratio of inter-molecular to intra-molecular pairwise clusters (b) per period
Mixed-effects regression analyses were performed in order to assess the association between flux change and the ratio/count of clusters across stages of adaptation, summarized numerically in Table 3 and graphically in Fig. 12. A statistically significant association was found between the count of inter-molecular clusters and the change in flux at the 0.05 level (p = 0.0424). However, a corresponding association was not found between the ratio of inter-molecular clusters and the change in flux at the 0.05 level (p = 0.3264). Overall, this provides some suggestive evidence that large changes in flux may have an effect on the amount of inter-molecular co-evolution occurring at any given time and can be informative in analyzing the sequence co-evolution of enzymes in metabolic pathways from comparative genomic analysis. The exact patterns will be indicative of features of the enzymes and the fitness landscape.
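A hedged sketch of how a mixed-effects analysis of this kind can be set up with statsmodels is shown below. The column names, the use of replicate simulations as the random-intercept grouping, and the choice of cluster count/ratio as the response are assumptions made for illustration, not a description of the authors' scripts; the synthetic data only make the snippet runnable and carry no biological meaning.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: one row per adaptation stage per replicate simulation.
rng = np.random.default_rng(2)
n_replicates, n_stages = 10, 16
df = pd.DataFrame({
    "replicate": np.repeat(np.arange(n_replicates), n_stages),
    "stage": np.tile(np.arange(1, n_stages + 1), n_replicates),
})
df["flux_change"] = rng.gamma(shape=2.0, scale=1.0, size=len(df))
df["inter_count"] = rng.poisson(lam=1.0 + 0.5 * df["flux_change"])          # inter-molecular clusters
df["inter_intra_ratio"] = df["inter_count"] / (1.0 + rng.poisson(lam=2.0, size=len(df)))

# Random intercept per replicate; fixed effect of flux change on each cluster summary.
count_fit = smf.mixedlm("inter_count ~ flux_change", df, groups=df["replicate"]).fit()
ratio_fit = smf.mixedlm("inter_intra_ratio ~ flux_change", df, groups=df["replicate"]).fit()
print(count_fit.summary())
print(ratio_fit.summary())
```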
Table 3 A statistical analysis is presented to evaluate the role of adaptation in driving inter-molecular vs. intra-molecular patterns of co-evolution
Here, various demographic and selective scenarios were tested in order to determine whether specific population and selection parameters, when systems may be out of equilibrium, can give rise to more evolutionarily stable rate-limiting steps, following from previous work [5]. Further, we sought to examine whether there was a systematic difference in the patterns of co-evolution between positive diversifying selection and compensatory covariation that might be predictive. Our findings here suggest that there is a specific range of population size fluctuations that causes elevated evolutionary stability of rate-limiting steps. This can be seen in the two experimental frameworks: with an explicit population, scheme N5 generated the highest stabilities, while a decrease in periodicity representing more extreme shifts in population size did not lead to an increase in the evolutionary stability of rate-limiting steps, but instead slightly decreased it (N6). Increased amplitude was shown to have a major impact in increasing rate-limiting step stability. To investigate further the influence of the amplitude in fluctuating population size experiments, a switch to a non-explicit population experimental scheme was necessitated, as the explicit scheme became too computationally intensive.
A number of schemes were implemented in the experiments with a calculated mutational fixation probability (Additional file 1: Figure S3). Surprisingly, assigning the amplitude to a higher range [100; 1,000,000] while maintaining the periodicity that was used in the previous experiments (explicit N2) did not give increased rate-limiting step stability. Instead, the average consecutive length was slightly smaller than in controls. It was possible that the values of periodicity and amplitude were too extreme, so more relaxed amplitude and periodicity values were tested. Experimental scheme K4 showed the highest signal of stability from our sampling, suggesting that a reduced amplitude has a major impact on rate-limiting step evolutionary stability and that a significant stability increase potentially exists in a certain periodicity and amplitude range, which would need to be established further by parameter sampling.
Positive diversifying selection resulted in an increase in the co-evolution of parameters that belong to different enzymes when compared with compensatory covariation under negative selection. In previous work [5], it was observed that more complex stabilizing selective or mutational schemes could also give rise to increases in inter-molecular parameter co-evolution, so the null model of stabilizing selection needs to be properly tuned to use this result in the identification of positive directional selection. The particular results observed in this study and the nature of the ridge in the fitness landscape are dependent upon the structure and the constraints of the pathway, which is treated in isolation here. While in an actual cell pathways may be less isolated and this is likely to change the size and nature of the fitness ridge of optima, we expect that the conclusions of this and prior work upon which this builds [5] are generalizable.
It had been hypothesized that constant adaptation when out of equilibrium would lead to different rates of adaptation in different parts of the pathway and the emergence of flux control, but this was not what was observed. This observation relates to patterns of evolution in changing environments that correspond to generalists vs. specialists. Specialist species are affected by isolation of their populations; they follow scenarios where there is constant rapid local adaptation. Specialists are known for stronger genetic differentiation among populations due to small and sometimes fluctuating population sizes with high frequencies of genetic bottlenecks [22]. Scenarios where this constant rapid adaptation does not happen are found for more generalist species. Generalists often have high genetic diversities in their populations and low genetic differentiation among them [23, 24]. This is the consequence of the absence of genetic bottlenecks and strong gene flow among populations. In regards to the overall evolutionary rate it is thought that specialists adapt faster than generalists to any given set of environmental conditions [25]. Several studies have reported rates of adaptation that are consistent with the prediction that specialists evolve faster than generalists [26, 27]. Long-term environmental variation may not lead to the evolution of generalists in all population genetic scenarios. Instead repeated evolution of specialists adapted to each set of growth conditions happens, as has been observed in adaptation of bacteriophage to alternate hosts [28].
This study shows that demographic and ecological variation has a direct impact on the evolutionary dynamics of metabolic pathways. Population size fluctuations, when following a particular scheme with tuned periodicity, facilitate an increase in the evolutionary stability of flux-control points in a metabolic pathway. Adaptation itself is a factor that changes co-evolutionary dynamics and adds a particular signal on top of the corresponding dynamics associated with stabilizing selection and compensatory processes. At mutation-selection-drift balance in a simple selective scheme, compensatory intra-enzyme mechanisms dominate, while during adaptive directional processes, inter-molecular parameters within the system co-evolve with stronger signals. Overall, these studies give a picture of the nature of pathway evolution under more complex selective and population genetic schemes and give the potential to develop methods that might detect such scenarios from comparative sequence evolution patterns.
[enzyme]:
This is the concentration of the enzyme
kcat :
This is the catalytic constant for the enzyme
kcatr :
This is the catalytic constant of the reverse reaction for the enzyme
Km :
This is the Michaelis constant for the enzyme
Kmr :
This is the Michaelis constant of the reverse reaction for the enzyme
Anisimova M, Liberles DA. Detecting and understanding natural selection. In: Cannarozzi GM, Schneider A, editors. Codon evolution mechanisms and models. Oxford: Oxford University Press; 2012.
Starr TN, Thornton JW. Epistasis in protein evolution. Protein Sci. 2016;25:1204–18. doi:10.1002/pro.2897.
Imielinski M, Belta C. Exploiting the pathway structure of metabolism to reveal high-order epistasis. BMC Syst Biol. 2008;2:40. doi:10.1186/1752-0509-2-40.
Fonville JM. Expected effect of deleterious mutations on within-host adaptation of pathogens. J Virol. 2015;89:9242–51. doi:10.1128/JVI.00832-15.
Orlenko A, Teufel AI, Chi PB, Liberles DA. Selection on metabolic pathway function in the presence of mutation-selection-drift balance leads to rate-limiting steps that are not evolutionarily stable. Biol Direct. 2016;11:31. doi:10.1186/s13062-016-0133-6.
Orlenko A, Hermansen RA, Liberles DA. Flux control in glycolysis varies across the tree of life. J Mol Evol. 2016;82:146–61.
Hermansen RA, Mannakee BK, Knecht W, Liberles DA, Gutenkunst RN. Characterizing selective pressures on the pathway for de novo biosynthesis of pyrimidines in yeast. BMC Evol Biol. 2015;15:232. doi:10.1186/s12862-015-0515-x.
Morehouse NI, Mandon N, Christides JP, Body M, Bimbard G, Casas J. Seasonal selection and resource dynamics in a seasonally polyphenic butterfly. J Evol Biol. 2013;26:175–85.
Morrison CD, Boyce MS, Nielsen SE, Bacon MM. Habitat selection of a re-colonized cougar population in response to seasonal fluctuations of human activity. J Wild Mgmt. 2014;78:1394–403.
Shakya SK, Goss EM, Dufault NS, van Bruggen AH. Potential effects of diurnal temperature oscillations on potato late blight with special reference to climate change. Phytopathology. 2015;105:230–8.
Suffert F, Virginie R, Sache I. Seasonal changes drive short-term selection for fitness traits in the wheat pathogen Zymoseptoria tritici. Appl Environ Microbiol. 2015;81:6367–79.
Ardawatia H, Liberles DA. A systematic analysis of lineage-specific evolution in metabolic pathways. Gene. 2007;387:67–74.
Alvarez-Ponce D, Aguadé M, Rozas J. Comparative genomics of the vertebrate insulin/TOR signal transduction pathway genes: a network-level analysis of selective pressures along the pathway. Genome Biol Evol. 2011;3:87–101.
O'Connell M. Selection and the cell cycle: positive Darwinian selection in a well-known DNA damage response pathway. J Mol Evol. 2010;71:444–57. doi:10.1007/s00239-010-9399-y.
Olson-Manning C, Lee C, Rausher M, Mitchell-Olds T. Evolution of flux control in the glucosinolate pathway in Arabidopsis thaliana. Mol Biol Evol. 2012;30:14–23. doi:10.1093/molbev/mss204.
Wright K, Rausher M. The evolution of control and distribution of adaptive mutations in a metabolic pathway. Genetics. 2009;184:483–502. doi:10.1534/genetics.109.110411.
Weijden CC, Schepper M, Walsh MC, Bakker BM, van Dam K, Westerhoff HV, et al. Can yeast glycolysis be understood in terms of in vitro kinetics of the constituent enzymes? Testing biochemistry. Eur J Biochem. 2000;267:5313–29.
Michaelis L, Menten ML. Die Kinetik der Invertinwirkung. Biochem Z. 1913;49:333–69.
Hoops S, Sahle S, Gauges R, Lee C, Pahle J, Simus N, et al. COPASI--a COmplex PAthway SImulator. Bioinformatics. 2006;22:3067–74. doi:10.1093/bioinformatics/btl485.
Goldstein RA. Population Size Dependence of Fitness Effect Distribution and Substitution Rate Probed by Biophysical Model of Protein Thermostability. Genome Biol Evol. 2013;5(9):1584–93. doi:10.1093/gbe/evt110.
Soyer OS, Pfeiffer T. Evolution under Fluctuating Environments Explains Observed Robustness in Metabolic Networks. PLoS Comput Biol. 2010;6(8):e1000907. doi:10.1371/journal.pcbi.1000907.
Hughes J, Ponniah M, Hurwood D, Chenoweth S, Arthington A. Strong genetic structuring in a habitat specialist, the Oxleyan Pygmy Perch Nannoperca oxleyana. Heredity. 1999;83:5–14.
Habel JC, Meyer M, Schmitt T. The genetic consequence of differing ecological demands of a generalist and a specialist butterfly species. Biodivers Conserv. 2009;18:1895–908.
Schmitt T, Röber S, Seitz A. Is the last glaciation the only relevant event for the present genetic population structure of the Meadow Brown butterfly Maniola jurtina (Lepidoptera: Nymphalidae)? Biol J Linn Soc. 2005;85:419–31.
Whitlock MC. The Red Queen beats the jack-of-all-trades: limitations on the evolution of phenotypic plasticity and niche breadth. Am Nat. 1996;148:S65–77.
Bennett AF, Lenski RE, Mittler JE. Evolutionary adaptation to temperature. I. Fitness responses of Escherichia coli to changes in its thermal environment. Evolution. 1992;46:16–30.
Kassen R, Bell G. Experimental evolution in Chlamydomonas. IV. Selection in environments that vary through time at different scales. Heredity. 1998;80:732–41.
Crill WD, Wichman HA, Bull JJ. Evolutionary reversals during viral adaptation to alternating hosts. Genetics. 2000;154:27–37.
The authors wish to thank Dr. Ashley Teufel for support and comments in the generation of this manuscript.
This research was funded by NSF award DBI-0743374.
Raw data and scripts used for statistical analysis are available for download from https://liberles.cst.temple.edu/public/pathway_evolution/.
This study was conceived by DAL. Simulations were performed by AO. Statistical analysis was performed by AO and PBC. DAL, AO, and PBC all contributed to the writing of this manuscript. All authors read and approved the final manuscript.
Department of Biology and Center for Computational Genetics and Genomics, Temple University, Philadelphia, PA, 19122, USA
Alena Orlenko, Peter B. Chi & David A. Liberles
Department of Molecular Biology, University of Wyoming, Laramie, WY, 82071, USA
Alena Orlenko & David A. Liberles
Department of Mathematics and Computer Science, Ursinus College, Collegeville, PA, 19426, USA
Peter B. Chi
Correspondence to David A. Liberles.
Table S1. This table shows the ratio of positive fitness change counts, negative fitness change counts, total positive fitness change, total negative fitness change and total fitness change per evolutionary simulation step. Table S2. This table shows the average fitness for the first 1000 generations of each simulation step, the average fitness for the second 1000 generations of each simulation step, and p-values of the Mann-Whitney test comparing fitness values of the first and the second halves of the simulation step. Figure S1. The simplified pathway that was simulated is shown. This pathway contains features from glycolysis [26]. A constant concentration of compound A is converted to compound F and the steady state flux is measured. Figure S2. Schemes of the experiments with an explicit population and a fluctuating population size are shown. The schemes for experiments N1 (green), N2 (blue), N3 (yellow), N4 (red), N5 (purple), N6 (brown) are shown. Black lines correspond to the control experiments with population sizes 25, 50, 100, 150, and 225. Figure S3. Schemes of the experiments with a calculated fixation probability and with fluctuating population size. A. The schemes for experiments K1 (green), K2 (red), K3 (blue) are shown. B. The schemes for the experiments K3 (blue), K4 (yellow), K5 (purple) are shown. Black lines correspond to the control experiments with population sizes 100, 1000, 1,000,000. Figure S4. Schemes of the experiments with an explicit population and fluctuating asymptotic flux. S1 (green) and S2 (yellow). Black lines correspond to the control experiments with a set to 0.5, 1.0, 1.5, corresponding to flux amplitudes of 325, 650, and 975. (DOCX 9986 kb)
Orlenko, A., Chi, P.B. & Liberles, D.A. Characterizing the roles of changing population size and selection on the evolution of flux control in metabolic pathways. BMC Evol Biol 17, 117 (2017). https://doi.org/10.1186/s12862-017-0962-7
Computational systems biology
Metabolic pathway evolution
Positive directional selection
Fluctuating selection
Fluctuating population size | CommonCrawl |
On the existence of balanced metrics on six-manifolds of cohomogeneity one
Izar Alonso (ORCID: orcid.org/0000-0002-7634-733X) & Francesca Salvatore
Annals of Global Analysis and Geometry (2021)
We consider balanced metrics on complex manifolds with holomorphically trivial canonical bundle, most commonly known as balanced SU(n)-structures. Such structures are of interest for both Hermitian geometry and string theory, since they provide the ideal setting for the Hull–Strominger system. In this paper, we provide a non-existence result for balanced non-Kähler \(\text {SU}(3)\)-structures which are invariant under a cohomogeneity one action on simply connected six-manifolds.
A \(\text {U}(n)\)-structure on a 2n-dimensional smooth manifold M is the data of a Riemannian metric g and a g-orthogonal almost complex structure J. The pair \(\left( g,J\right) \) is also known as an almost Hermitian structure on M. When J is integrable, i.e., \(\left( M,J\right) \) is a complex manifold, the pair \(\left( g,J\right) \) defines a Hermitian structure on M. In this case, the metric g is called balanced when \(\text {d}\omega ^{n-1}=0\), \(\omega {:}{=}g\left( J\cdot ,\cdot \right) \) denoting the associated fundamental form, and we shall refer to \(\left( g,J\right) \) as a balanced \(\text {U}(n)\)-structure on M. Balanced metrics have been extensively studied in [4, 10,11,12,13, 23, 25] (see also the references therein).
Balanced metrics are also interesting in the context of \(\text {SU}(n)\)-structures, especially in the six-dimensional case, thanks to their applications in physics. An \(\text {SU}(n)\)-structure \(\left( g,J,\Psi \right) \) on a 2n-dimensional smooth manifold M, is a \(\text {U}(n)\)-structure \(\left( g,J\right) \) on M together with a \(\left( n,0\right) \)-form of nonzero constant norm \(\Psi =\psi _+ + i\psi _-\) satisfying the normalization condition \(\Psi \wedge {\overline{\Psi }}=(-1)^{\frac{n(n+1)}{2}}(2i)^n \frac{\omega ^n}{n!}\). An \(\text {SU}(n)\)-structure \(\left( g,J,\Psi \right) \) on M with underlying balanced \(\text {U}(n)\)-structure \(\left( g,J\right) \) for which \(\text{ d }\omega \ne 0\) and \(\text {d}\Psi =0\) will be referred to as a balanced non-Kähler \(\text {SU}(n)\)-structure.
In 1986, Hull and Strominger [22, 30], independently, introduced a system of pdes, now known as the Hull–Strominger system, to formalize certain properties of the inner space model used in string theory. Let M be a 2n-dimensional complex manifold equipped with a nowhere-vanishing holomorphic \(\left( n,0\right) \)-form \(\Psi \) and let E be a holomorphic vector bundle on M endowed with the Chern connection. The Hull–Strominger system consists of a set of pdes involving a pair of Hermitian metrics \(\left( g,h\right) \) on \(\left( M,E\right) \). One of these equations dictates the metric g on M to be conformally balanced, more precisely \(\text {d}\left( \Vert \Psi \Vert _{\omega }\omega ^{n-1}\right) =0\), where \(\Vert \Psi \Vert _{\omega }\) is the norm of \(\Psi \) given explicitly by \(\Psi \wedge {\overline{\Psi }}=(-1)^{\frac{n(n+1)}{2}}\frac{i^n}{n!} \Vert \Psi \Vert _{\omega }^2 \omega ^n\). When one assumes all structures to be invariant under the smooth action of a certain Lie group G, the aforementioned condition reduces to the balanced equation \(\text {d}\omega ^{n-1}=0\), since the norm of \(\Psi \) is constant. Notice that in these cases, \((g,J,\Psi )\) is a balanced \(\text {SU}(n)\)-structure on M, up to a suitable uniform scaling of \(\Psi \).
The issue of the existence and uniqueness of a general solution to the Hull–Strominger system is still an open problem. Nonetheless, solutions have been found under more restrictive hypotheses; for the non-Kählerian case, we refer the reader, for instance, to [6, 7, 14,15,16, 18, 24]. Other interesting solutions are given in [8], where a class of invariant solutions to the Hull–Strominger system on complex Lie groups was provided; these solutions extend to solutions on all compact complex parallelizable manifolds, by Wang's classification theorem [33]. Moreover, in [10], it was shown that a compact complex homogeneous space with invariant complex volume admitting a balanced metric is necessarily a complex parallelizable manifold. Then, the invariant solutions given in [8] exhaust the complex compact homogeneous case. If one allows the Lie group acting on the homogeneous space to be real, many other solutions to the Hull–Strominger system are known in the literature, see for instance [18, 27, 28, 31]. Then, one may wonder what happens in the cohomogeneity one case. A cohomogeneity one manifold M is a connected smooth manifold with an action of a compact Lie group G having an orbit of codimension one. Currently, there are no known examples of balanced non-Kähler \(\text {SU}(n)\)-structures invariant under a cohomogeneity one action. In this paper, we investigate their existence. In particular, we focus on the simply connected \(2n=6\)-dimensional case. Recall that when a cohomogeneity one manifold M has finite fundamental group, then M/G is homeomorphic to an interval I, see [5]. If we denote by \(\pi :M\rightarrow M/G\) the canonical projection onto the orbit space, we shall call \(\pi ^{-1}(t)\), for every \( t\in \overset{\circ }{I}\), principal orbits and the inverse images of the boundary points singular orbits. Denoting by \(M^{\text {princ}}\) the union of all principal orbits, which is a dense open subset of M, and by K the isotropy group of a principal point, which is unique up to conjugation along \(M^{\text {princ}}\), the pair \(\left( G,K\right) \) completely determines the principal part \(M^{\text {princ}}\) of the cohomogeneity one manifold, up to G-equivariant diffeomorphisms. Given a Lie group H, we denote its Lie algebra \(\text {Lie}(H)\) by the corresponding gothic letter \(\mathfrak {h}\).
We first give a local result for the existence of balanced non-Kähler \(\text {SU}(3)\)-structures by working on \(M^{\text {princ}}\).
Theorem A
Let M be a 6-dimensional simply connected cohomogeneity one manifold under the almost effective action of a connected Lie group G, and let K be the principal isotropy group. Then, the principal part \(M^{\text {princ}}\) admits a G-invariant balanced non-Kähler \(\text {SU}(3)\)-structure \(\left( g,J,\Psi \right) \) if and only if M is compact and \(\left( \mathfrak {g}, \mathfrak {k} \right) =\left( \mathfrak {su}(2)\oplus 2\mathbb {R},\{0\} \right) \).
We then prove that none of these local solutions can be extended to a global one. This leads us to state our main theorem:
Theorem B
Let M be a six-dimensional simply connected cohomogeneity one manifold under the almost effective action of a connected Lie group G. Then, M admits no G-invariant balanced non-Kähler \(\text {SU}(3)\)-structures.
In [13], balanced metrics were constructed on the connected sum of \(k \ge 2\) copies of \(S^3 \times S^3\). However, it is not known whether \(S^3 \times S^3\) admits balanced structures. In [23, Example 1.8], Michelsohn proved that \(S^3\times S^3\) endowed with the Calabi–Eckmann complex structure does not admit any compatible balanced metric. By [2, Remark 1], in a manifold with six real dimensions, there is no non-Kähler Hermitian metric which is simultaneously balanced and strong Kähler-with-torsion (a.k.a. SKT). In [11], Fino and Vezzoni conjectured that on non-Kähler compact complex manifolds it is never possible to find an SKT metric and also a balanced metric. In [17], an example of an SKT structure on \(S^3 \times S^3\) is provided. The key case that needs to be tackled in Theorem B is precisely \(S^3 \times S^3\).
The paper is organized as follows. In Sect. 2, we review some basic facts about cohomogeneity one manifolds and \(\text {SU}(3)\)-structures which will be useful for our discussion. In Sect. 3, we present our problem, write a classification of the pairs \((\mathfrak {g},\mathfrak {k})\) that can occur, and use the hypothesis of simply connectedness to reduce the list to only three possibilities. At the end of Sect. 3, we state Theorem A, which we prove in Sect. 4 via a case-by-case analysis. Finally, in Sect. 5, we prove Theorem B.
Preliminary notions
Cohomogeneity one manifolds
Here, we recall the basic structure of cohomogeneity one manifolds. For further details, see for instance [1, 5, 20, 21, 34].
Definition 2.1
A cohomogeneity one manifold is a connected differentiable manifold M with an action \(\alpha :G\times M\rightarrow M\) of a compact Lie group G having an orbit of codimension one. We denote by \({\tilde{\alpha }}:G \rightarrow \text {Diff}\left( M\right) \) the Lie group homomorphism induced by the action.
From now on, let us assume that M is a simply connected cohomogeneity one manifold, and G is connected. By the compactness of G, the action \(\alpha \) is proper and there exists a G-invariant Riemannian metric g on M; this is equivalent to saying that G acts on the Riemannian manifold \(\left( M,g\right) \) by isometries. Moreover, we assume that the action \(\alpha \) is almost effective, namely \(\text {ker}\, {\tilde{\alpha }}\) is discrete. As usual, we denote by \(\pi :M\rightarrow M/G\) the canonical projection and we equip M/G with the quotient topology relative to \(\pi \). By a result of Bérard Bergery [5], the quotient space M/G is homeomorphic to a circle or an interval. As we are assuming that M is simply connected, we have that M/G is homeomorphic to an interval I. The inverse images of the interior points of the orbit space M/G are known as principal orbits, while the inverse images of the boundary points are called singular orbits. We denote by \(M^{\text {princ}}\) the union of all principal orbits, which is an open dense subset of M, and by \(G_p\) the isotropy group at \(p\in M\).
First, we will suppose M is compact. It follows that M/G is homeomorphic to the closed interval \(I=[-1,1]\). Denote by \(\mathcal {O}_1\) and \(\mathcal {O}_2\) the two singular orbits \(\pi ^{-1}\left( -1\right) \) and \(\pi ^{-1}\left( 1\right) \), respectively, and fix \(q_1 \in \mathcal {O}_1\). By compactness of the G-orbits, there exists a minimizing geodesic \(\gamma _{q_1}:[-1,1]\rightarrow M\) from \(q_1\) to \(\mathcal {O}_2\) which is orthogonal to every principal orbit. We call a geodesic orthogonal to every principal orbit a normal geodesic. Let \(\gamma :[-1,1]\rightarrow M\) be a normal geodesic between \(\pi ^{-1}\left( -1\right) \) and \(\pi ^{-1}\left( 1\right) \); up to rescaling, we can always suppose that the orbit space M/G is such that \(\pi \circ \gamma =\text {Id}_{[-1,1]}\). Then, by Kleiner's Lemma, there exists a subgroup K of G such that \(G_{\gamma \left( t\right) }=K\) for all \(t\in (-1,1)\) and K is a subgroup of \(G_{\gamma \left( -1\right) }\) and \(G_{\gamma \left( 1\right) }\).
For M non-compact, M/G is homeomorphic either to an open interval or to an interval with a closed end. In the former case, M is a product manifold \(M \cong I \times G/K\). In the latter case, there exists exactly one singular orbit, and \(M/G \cong I\) where \(I=[0, L)\) and L is either a positive number or \(+\infty \). Analogously to the compact case, there exists a normal geodesic \(\gamma : [0, L) \rightarrow M\) such that \(\gamma (0) \in \pi ^{-1}(0)\) and we can suppose \(\pi \circ \gamma =\text {Id}_{[0,L)}\). In addition, there exists a subgroup K of G such that \(G_{\gamma (t)}=K\) for all \(t \in (0,L)\) and if \(H {:}{=}G_{\gamma (0)}\), K is a subgroup of H.
So we have that:
\(\pi ^{-1}\left( t\right) \cong G/K\) for all \(t\in \overset{\circ }{I}\),
\(M^{\text {princ}} = \bigcup _{t\in \overset{\circ }{I}}\pi ^{-1}\left( t\right) =\bigcup _{t\in \overset{\circ }{I}} G\cdot \gamma \left( t\right) \),
for every \(p_1, p_2\in M^{\text {princ}}\), \(G\cdot p_1\) and \(G\cdot p_2\) are diffeomorphic.
Therefore, up to conjugation along the orbits, when M is compact we have three possible isotropy groups \(H_1{:}{=}G_{\gamma \left( -1\right) }\), \(H_2 {:}{=}G_{\gamma \left( 1\right) }\) and \(K{:}{=}G_{\gamma \left( t\right) }\), \(t\in \left( -1,1\right) \). When M is non-compact and has one singular orbit, instead, we have two possible isotropy groups \(H {:}{=}G_{\gamma \left( 0\right) }\) and \(K{:}{=}G_{\gamma \left( t\right) }\), \(t\in \left( 0,L\right) \). From all of the above, we have that
$$\begin{aligned} M^{\text {princ}} \cong \overset{\circ }{I} \times G/K, \end{aligned}$$
and so, by fixing a suitable global coordinate system, we can decompose the G-invariant metric g as
$$\begin{aligned} g_{\gamma (t)}={\text {d}}t^2 + g_t, \end{aligned}$$
where \({\text {d}}t^2\) is the (0, 2)-tensor corresponding to the vector field \(\xi {:}{=}\gamma '\left( t\right) \) evaluated at the point \(\gamma \left( t\right) \), and \(g_t\) is a G-invariant metric on the homogeneous orbit \(G\cdot \gamma \left( t\right) \) through the point \(\gamma \left( t\right) \in M\).
Now, we will assume M is compact. By the density of \(M^{\text {princ}}\) in M and the Tube Theorem, M is homotopically equivalent to
$$\begin{aligned} \left( G \times _{H_1} S_{\gamma \left( -1\right) }\right) \cup _{G/K}\left( G \times _{H_2} S_{\gamma \left( 1\right) }\right) , \end{aligned}$$
where the geodesic balls \( S_{\gamma \left( \pm 1\right) }{:}{=}\text {exp}\left( B_{\varepsilon ^{\pm }}\left( 0\right) \right) \), \(B_{\varepsilon ^{\pm }}\left( 0\right) \subset T_{\gamma \left( \pm 1\right) }\left( G\cdot \gamma \left( \pm 1\right) \right) ^{\perp }\), are normal slices to the singular orbits in \(\gamma \left( \pm 1\right) \). Here, \( G \times _{H_i} S_{\gamma \left( \pm 1\right) }\) is the associated fiber bundle to the principal bundle \(G \rightarrow G/H_i\) with type fiber \(S_{\gamma \left( \pm 1\right) }\). By Bochner's linearization theorem, M is also homotopically equivalent to
$$\begin{aligned} \left( G \times _{H_1} B_{\varepsilon ^{-}}\left( 0\right) \right) \cup _{G/K}\left( G \times _{H_2} B_{\varepsilon ^{+}}\left( 0\right) \right) . \end{aligned}$$
The isotropy groups \(H_i\) act on \( B_{\varepsilon ^{\pm }}\left( 0\right) \) via the slice representation and, since the boundary of the tubular neighborhood \(\text {Tub}(\mathcal {O}_i) {:}{=}G \times _{H_i} B_{\varepsilon ^{\pm }}\left( 0\right) \), \(i=1,2\), is identified with the principal orbit G/K, and the G-action on \(\text {Tub}(\mathcal {O}_i) \) is identified with the \(H_i\)-action on \( B_{\varepsilon ^{\pm }} \left( 0 \right) \), then \(H_i\) acts transitively on the sphere \(S^{l_i} {:}{=}\partial B_{\varepsilon ^{\pm }}\), \(l_i>0\) still having isotropy K. The normal spheres \(S^{l_i}\) are thus the homogeneous spaces \(H_i/K\), \(i=1,2\). The \(H_i\)-action on \(S^{l_i}\), \(i=1,2\), may be ineffective, but it is sufficient to quotient \(H_i\) by the ineffective kernel to obtain an effective action: transitive effective actions of compact Lie groups on spheres were classified by Borel and are summarized in Table 1.
Table 1 Transitive effective actions of compact Lie groups on spheres
The collection of G with its isotropy groups \(G\supset H_1, H_2 \supset K \) is called the group diagram of the cohomogeneity one manifold M. Vice versa, let \(G\supset H_1, H_2 \supset K \) be compact groups with \(H_i/K=S^{l_i}\), \(i=1,2\). By the classification of transitive actions on spheres one has that the \(H_i\)-action on \(S^{l_i}\) is linear and hence it can be extended to an action on \( B_{\varepsilon ^{\pm }}\) bounded by \(S^{l_i}\), \(i=1,2\). Therefore, 2.3 defines a cohomogeneity one manifold M. Analogously, if M is a non-compact cohomogeneity one manifold with one singular orbit, we define the group diagram of M to be the collection of G and the isotropy groups \(G \supset H \supset K\), where the homogeneous space H/K will be a sphere. The converse is also true: the group diagram defines a non-compact cohomogeneity one manifold M. In these cases, M is homotopically equivalent to \(G\times _{H}B_{\epsilon }(0)\), where \(B_{\epsilon }(0)\subseteq T_{\gamma (0)} (G\cdot \gamma (0))^{\perp }\) as before.
Let \(M_i\) be cohomogeneity one manifolds with respect to the action of Lie groups \(G_i\), \(i=1,2\). We say that the action of \(G_1 \) on \(M_1\) is equivalent to the action of \(G_2\) on \(M_2\) if there exists a Lie group isomorphism \(\varphi :G_1 \rightarrow G_2\) and an equivariant diffeomorphism \(f:M_1 \rightarrow M_2\) with respect to the isomorphism \(\varphi \). We shall study cohomogeneity one manifolds up to this type of equivalence.
Moreover, if a cohomogeneity one manifold M has group diagram \(G\supset H_1, H_2 \supset K\) or \(G\supset H \supset K\), one can show that any of the following operations results in a G-equivariantly diffeomorphic manifold:
switching \(H_1\) and \(H_2\),
conjugating each group in the diagram by the same element of G,
replacing \(H_i\) (respectively H) with \(aH_ia^{-1}\) (respectively \(aHa^{-1}\)) for \(a\in N(K)_0\), where \(N(K)_0\) is the identity component of the normalizer of K.
SU(3)-structures
An \(\text {SU}(3)\)-structure on a six-dimensional differentiable manifold M is the data of a Riemannian metric g, a g-orthogonal almost complex structure J, and a (3, 0)-form of nonzero constant norm \(\Psi =\psi _++i\psi _-\) satisfying the normalization condition \(\psi _+ \wedge \psi _-=\frac{2}{3}\omega ^3\).
Following a result obtained in [29] and later reformulated in [19, Section 2], one can show that giving an \(\text {SU}(3)\)-structure is equivalent to giving a pair of differential forms \(\left( \omega , \psi _+ \right) \in \Lambda ^2\left( M\right) \times \Lambda ^3\left( M\right) \) satisfying suitable conditions. Here, \(\Lambda ^k\left( M\right) \) denotes the space of differential forms of degree k on M. To see this, let us briefly recall the concept of stability in the context of vector spaces.
Let V be a real six-dimensional vector space and let \(\alpha \) be a k-form on V. We say that \(\alpha \) is stable if its orbit under the action of \(\text {GL}(V)\) is open in \(\Lambda ^k\left( V^*\right) \). Fix a volume form \(\Omega \in \Lambda ^6(V^*)\) on V and consider the isomorphism \(A:\Lambda ^5(V^*)\rightarrow V\otimes \Lambda ^6(V^*)\) defined for any \(\alpha \in \Lambda ^5(V^*)\) by \(A(\alpha )=v\otimes \Omega \), where \(v\in V\) is the unique vector such that \(\iota _v \Omega =\alpha \); here, \(\iota _v \Omega \) is the contraction of \(\Omega \) by the vector v. Fix a 3-form \(\psi \in \Lambda ^3(V^*)\) and define
$$\begin{aligned} K_\psi :V&\rightarrow V\otimes \Lambda ^6(V^*), \\ v&\mapsto A(\iota _v \psi \wedge \psi ) \end{aligned}$$
$$\begin{aligned} P:\Lambda ^3(V^*)&\rightarrow \Lambda ^6(V^*)^{\otimes 2}, \\ \psi&\mapsto \dfrac{1}{6} \text {tr}(K_\psi ^2). \end{aligned}$$
Finally, we define the function \(\lambda :\Lambda ^3(V^*)\rightarrow \mathbb {R}\), \(\lambda (\psi )=\iota _{\Omega \otimes \Omega }P(\psi )\).
Proposition 2.2
[19, 29] Let V be an oriented, six-dimensional real vector space. Then,
a 2-form \(\omega \in \Lambda ^2(V^*)\) is stable if and only if it is non-degenerate, i.e., \(\omega ^3\ne 0\),
a 3-form \(\psi \in \Lambda ^3(V^*)\) is stable if and only if \(\lambda (\psi )\ne 0\).
We denote by \(\Lambda ^3_+(V^*)\) the open orbit of stable 3-forms satisfying \(\lambda (\psi )<0\). The \(\text {GL}_+(V)\)-stabilizer of a 3-form lying in this orbit is isomorphic to \(\text {SL}(3,\mathbb {C})\). As a consequence, every \(\psi \in \Lambda ^3_{+}(V^*)\) gives rise to a complex structure
$$\begin{aligned} J_{\psi }:V \rightarrow V,\quad J_{\psi } {:}{=}- \frac{1}{\sqrt{|P(\psi )|}}\,K_{\psi }, \end{aligned}$$
which depends only on \(\psi \) and on the volume form \(\Omega \). Moreover, the complex form \(\psi + i J_{\psi } \psi \) is of type (3, 0) with respect to \(J_{\psi }\), and the real 3-form \(J_{\psi } \psi \) is stable, too.
We say that a k-form \(\alpha \in \Lambda ^k(M)\) is stable if \(\alpha _p\) is a stable form on the vector space \(T_pM\), for all \(p \in M\). Let \(\left( \omega ,\psi _+\right) \in \Lambda ^2(M)\times \Lambda ^3_+(M)\) be a pair of stable forms on M satisfying the compatibility condition \(\omega \wedge \psi _+=0\) and \(\lambda (\psi _+)<0\). Consider the almost complex structure \(J=J_{\psi _+}\) determined by \(\psi _+\) and the volume form \(\frac{\omega ^3}{6}\). Then, the 3-form \(\psi _+\) is the real part of a nowhere-vanishing (3, 0)-form \(\Psi {:}{=}\psi _+ +i \psi _-\) with \(\psi _- {:}{=}J\psi _+ = \psi _+(J\cdot ,J\cdot ,J\cdot ) = -\psi _+(J\cdot ,\cdot ,\cdot )\), where the last identity holds since \(\psi _+\) is of type \((3,0)+(0,3)\) with respect to J. Moreover, \(\omega \) is of type (1, 1) and, as a consequence, the (0, 2)-tensor \(g {:}{=}\omega (\cdot ,J\cdot )\) is symmetric. Under these assumptions, the pair \((\omega ,\psi _+)\) defines an \(\text {SU}(3)\)-structure \((g,J,\Psi )\) on M provided that g is a Riemannian metric and the normalization condition \( \psi _+ \wedge \psi _-=\frac{2}{3} \omega ^3=4 \, dV_g \) is satisfied, \(dV_g\) being the Riemannian volume form. Conversely, given an \(\text {SU}(3)\)-structure \((g,J,\Psi )\) on M, the pair \((\omega ,\psi _+)\) given by
$$\begin{aligned}\omega {:}{=}g(J\cdot ,\cdot ), \quad \psi _+{:}{=}\text {Re}(\Psi )\end{aligned}$$
satisfies the compatibility condition \(\omega \wedge \psi _+=0\) and the stability condition \(\lambda (\psi _+)<0\).
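As a concrete illustration of this correspondence, consider the standard flat example (a piece of linear algebra on \(V=\mathbb {R}^6\) with basis \(e_1,\ldots ,e_6\), not taken from the paper):
$$\begin{aligned} \omega _0 = e^{12} + e^{34} + e^{56}, \qquad \psi _+ = e^{135} - e^{146} - e^{236} - e^{245}, \qquad \psi _- = e^{136} + e^{145} + e^{235} - e^{246}. \end{aligned}$$
One checks directly that \(\omega _0 \wedge \psi _{\pm }=0\), that \(\psi _+\) is stable with \(\lambda (\psi _+)<0\), that \(\psi _+ + i \psi _- = (e^1+ie^2)\wedge (e^3+ie^4)\wedge (e^5+ie^6)\), and that \(\psi _+ \wedge \psi _- = 4\, e^{123456} = \frac{2}{3}\,\omega _0^3\). With the conventions above (up to an overall sign convention for \(J_{\psi _+}\)), the induced complex structure sends \(e_1 \mapsto e_2\), \(e_3 \mapsto e_4\), \(e_5 \mapsto e_6\), and \(g_0=\omega _0(\cdot ,J_{\psi _+}\cdot )\) is the Euclidean metric, so the pair \((\omega _0,\psi _+)\) recovers the standard \(\text {SU}(3)\)-structure on \(\mathbb {R}^6\).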
Balanced \(\text {SU}(3)\)-structures
Following [9], we call an \(\text {SU}(3)\)-structure \(\left( g,J,\Psi \right) \) on a 6-manifold M balanced if:
J is integrable, i.e., \(\left( M,J\right) \) is a complex manifold. We recall that for \(\text {SU}(3)\)-structures, the integrability of J is equivalent to requiring \((\text {d}\Psi )^{2,2}=0\),
\(\Psi \) is a holomorphic (3, 0)-form,
\(\text {d}\omega ^{2}=0\), \(\omega \) being the fundamental form.
In particular, we are interested in the non-Kählerian case, i.e., \(d\omega \ne 0\).
Remark 2.3
We can equivalently say that an \(\text {SU}(3)\)-structure \(\left( g,J,\Psi \right) \) on M is balanced if and only if
$$\begin{aligned} \begin{aligned} {\left\{ \begin{array}{ll} \text {d}\Psi =0, \\ \text {d}\omega ^{2}=0, \end{array}\right. } \end{aligned}\end{aligned}$$
since \(\text {d}\Psi =0\) if and only if \(\Psi =\psi _+ + i \psi _- \) is holomorphic and the induced almost complex structure \(J= J_{\psi _+}\) is integrable.
From the formulas in [3], we have that if \(\left( g,J,\Psi \right) \) is a balanced non-Kähler \(\text {SU}(3)\)-structure on a six-dimensional differentiable manifold M, \(\text {Scal}(g)< 0\), \(\text {Scal}(g)\) being the scalar curvature associated with the metric g.
Balanced \(\text {SU}(3)\)-structures on six-dimensional cohomogeneity one manifolds
Let \(\left( g,J,\Psi \right) \) be an \(\text {SU}(3)\)-structure on a simply connected cohomogeneity one manifold M of complex dimension 3 for the almost effective action of a compact connected Lie group G. We are thus requiring G to preserve the \(\text {SU}(3)\)-structure on M. For the convenience of the reader, recall that
G preserves the metric g if and only if \({\tilde{\alpha }}\left( h\right) \) is an isometry for each \(h\in G\),
G preserves the almost complex structure J if and only if J commutes with the differential \( \text {d}{\tilde{\alpha }}\left( h\right) \) for each \(h \in G\),
G preserves the 3-form \(\Psi \) if and only if \({\tilde{\alpha }}\left( h\right) ^*\Psi =\Psi \), for each \(h\in G\).
This in particular implies that the principal isotropy K acts on \(T_p M\) preserving \(\left( g_p,J_p,\Psi _p\right) \) for any \(p\in M\), which means that K is a subgroup of \(\text {SU}(3)\). Now, since the J-invariant K-action fixes the subspace \(\langle \xi |_p\rangle \) of \(T_pM\), then it fixes \(\langle {J\xi }|_p\rangle \) as well. Let us write \(T_p M \) as
$$\begin{aligned} T_p M= \langle \xi |_p\rangle \oplus \langle {J \xi }|_p\rangle \oplus V, \end{aligned}$$
where V is the four-dimensional \(g_p\)-orthogonal complement of \(\langle \xi |_p,J\xi |_p\rangle \) in \(T_p M\). Notice that V is \(J_p\)- and K-invariant. To see the K-invariance, let \(h \in K\) and \(v \in V\). Then, if \(\text {d}{\tilde{\alpha }}(h)_p \left( v \right) = \lambda \left( J \xi |_p \right) +w\), for some \(\lambda \in \mathbb {R}\), \(w\in V\), we would have \(J \left( \text {d} {\tilde{\alpha }}(h)_p \left( v \right) \right) =\text {d}{\tilde{\alpha }}(h)_p \left( J_pv \right) =-\lambda \,\xi |_p+J_p w\), which is a contradiction since the K-action is closed along the G-orbits. Therefore, for each \(h\in K\), its action on \(T_p M\) is described by a \(6 \times 6\) block matrix
$$\begin{aligned} \left( \begin{array}{c|c} \begin{matrix} 1 &{} 0 \\ 0 &{} 1 \end{matrix} &{} \\ \hline &{} A \end{array} \right) \end{aligned}$$
with respect to the decomposition of \(T_p M= \langle \xi |_p\rangle \oplus \langle {J \xi }|_p\rangle \oplus V\). Since the matrix above is in \(\text {SU}(3)\), we have \(A \in \text {SU}(2)\) and hence K can be identified with a subgroup of \(\text {SU}(2)\). Therefore, \( \mathfrak {k}{:}{=}\text {Lie}\left( K\right) \) is \(\{0\}, \, \mathbb {R}\), or \(\mathfrak {su}\left( 2\right) \). As observed in [26], all the possible candidate pairs \(\left( \mathfrak {g},\mathfrak {k}\right) \), with \(\mathfrak {g}\) compact, which may admit an \(\text {SU}(3)\)-structure in cohomogeneity one are:
(a) if \(\mathfrak {k}=\{0\}\), then
(a.1) \(\mathfrak {g}=\mathfrak {su}\left( 2\right) \oplus \mathbb {R} \oplus \mathbb {R} \),
(a.2) \(\mathfrak {g}= \underbrace{\mathbb {R} \oplus \ldots \oplus \mathbb {R}}_{\text {5 times}}\),
(b) if \(\mathfrak {k}=\mathbb {R}\), then
(b.1) \(\mathfrak {g}=\mathfrak {su}\left( 2\right) \oplus \mathfrak {su}\left( 2\right) \),
(b.2) \( \mathfrak {g}= \mathfrak {su}\left( 2\right) \oplus \mathbb {R} \oplus \mathbb {R} \oplus \mathbb {R} \),
(b.3) \(\mathfrak {g}= \underbrace{\mathbb {R} \oplus \ldots \oplus \mathbb {R}}_{\text {6 times}}\),
(c) if \(\mathfrak {k}=\mathfrak {su}\left( 2\right) \), then
(c.1) \(\mathfrak {g}=\mathfrak {su}\left( 2\right) \oplus \mathfrak {su}\left( 2\right) \oplus \mathbb {R} \oplus \mathbb {R}\),
(c.2) \(\mathfrak {g}= \mathfrak {su}\left( 2\right) \oplus \underbrace{\mathbb {R} \oplus \ldots \oplus \mathbb {R}}_{\text {5 times}}\),
(c.3) \(\mathfrak {g}= \mathfrak {su}\left( 3\right) \).
Under the assumption that M is simply connected, we can discard some pairs from this list. If M is compact, then by Hoelcher's classification [20, Proposition 3.1] we can readily discard cases (a.2), (b.2), (b.3), (c.1), and (c.2). For the case when M is non-compact and has one singular orbit, we can suitably adapt [20, Proposition 1.8], which deals with the compact case, to obtain:
Let M be the non-compact cohomogeneity one manifold given by the group diagram \(G \supset H \supset K\) with \(H/K = S^l\). Then, \(\pi _1(M) \cong \pi _1(G/K)/N\) where
$$\begin{aligned} N = \ker \{ \pi _1(G/K) \rightarrow \pi _1(G/H) \} = {\text {Im}} \{\pi _1(H/K) \rightarrow \pi _1(G/K) \}. \end{aligned}$$
In particular, M is simply connected if and only if the image of \(\pi _1 (S^l)\) generates \(\pi _1(G/K)\) under the natural inclusions.
We know that \(\pi _1 (S^l)\) is either \(\{ 0 \}\) or \(\mathbb {Z}\). Now, we observe that for cases (a.1) and (c.1), \(\pi _1 (G/K)= \mathbb {Z}^2\), for cases (a.2), (b.3) and (c.2), \(\pi _1 (G/K)= \mathbb {Z}^5\), and for case (b.2), \(\pi _1 (G/K)\) is either \(\mathbb {Z}^2\) or \(\mathbb {Z}^3\). If M is non-compact and has no singular orbits, \(\pi _1 (M)=\pi _1 (G/K)\). Hence, when M is non-compact, we can discard the pairs (a.1), (a.2), (b.2), (b.3), (c.1) and (c.2) as \(\pi _1(M)\) would be infinite. Therefore, the possible pairs which may admit a balanced \(\text {SU}(3)\)-structure on a simply connected manifold of cohomogeneity one under the almost effective action of a compact connected Lie group G are (a.1) (only when M is compact), (b.1), and (c.3).
In case (b.1), we shall need to divide the discussion depending on the embeddings of \(\mathfrak {k}=\mathbb {R}\) in \(\mathfrak {g}=\mathfrak {su}(2)\oplus \mathfrak {su}(2)\) which, up to isomorphism, are all generated by an element of the form
$$\begin{aligned} \begin{pmatrix} ip &{} 0 &{} 0 &{} 0 \\ 0 &{} -ip &{} 0 &{} 0 \\ 0 &{} 0 &{} iq &{} 0 \\ 0 &{} 0 &{} 0 &{} -iq \end{pmatrix} \in \mathfrak {su}(2)\oplus \mathfrak {su}(2), \end{aligned}$$
with fixed \(p, q \in \mathbb {N}\). Up to uniform rescalings, which do not change the immersion of \(\mathfrak {k}\), we can assume either \((p,q)=(1,0)\) or p, q to be coprime if neither is zero. Notice that when \((p,q)=(1,1)\) or \((p,q)=(1,0)\), \(\mathfrak {k}\) induces a decomposition of \(\mathfrak {g}\) in \(\text {Ad}(K)\)-modules, some of which are equivalent. In the former case, we shall say that \(\mathfrak {k}\) is diagonally embedded in \(\mathfrak {g}\), while in the latter \(\mathfrak {k}\) is said to be trivially embedded in one of the two \(\mathfrak {su}(2)\)-factors of \(\mathfrak {g}\). When instead p, q are different and nonzero, the \(\text {Ad}(K)\)-modules are pairwise inequivalent.
From now on, for each \(p\in M^{\text {princ}}\), let \(\mathfrak {m}_p{=}{:}\mathfrak {m}\) be an \(\text {Ad}(K)\)-invariant complement of \(\mathfrak {k}\) in \(\mathfrak {g}\). For each \(p\in M^{\text {princ}}\), we have that \(T_p M=\langle \xi |_p\rangle \oplus \widehat{\mathfrak {m}}|_p\), where for every \(X \in \mathfrak {g}\), we denote by \({\widehat{X}}\) the action field
$$\begin{aligned} {\widehat{X}}_p = \frac{{\text {d}}}{{\text {d}}t}\Big |_{t=0}(\exp tX)\cdot p,\quad p \in M. \end{aligned}$$
It is known that since \(M^{\text {princ}}\cong \overset{\circ }{I} \times G/K \), every G-invariant structure on \(M^{\text {princ}}\) can be expressed via a K-invariant structure on \(\langle \xi \rangle \oplus \widehat{\mathfrak {m}} \), with \(C^{\infty }(\overset{\circ }{I})\)-coefficients. Let \(\mathfrak {m}=\mathfrak {m}_1 \oplus \ldots \oplus \mathfrak {m}_r\) be the decomposition of \(\mathfrak {m}\) into irreducible \(\text {Ad}(K)\)-modules. Recall that if the \(\mathfrak {m}_i\)'s are pairwise inequivalent, then they are orthogonal with respect to the metric \(g_t\), for every t (see 2.1). Otherwise, the expression of the metric strongly depends on the specific equivalence of the modules. In all cases, we recover the whole \(\text {SU}(3)\)-structure from a pair of G-invariant stable forms \(\left( \omega ,\psi _+\right) \) of degree two and three, respectively.
To fix the notations, in what follows, we shall denote by
\(\mathcal {B}\) the opposite of the Killing–Cartan form on \(\mathfrak {g}\),
\(\left\{ {\tilde{e}}_i\right\} _{i=1,2,3}\) the standard basis for \(\mathfrak {su}(2)\) given by
$$\begin{aligned} {\tilde{e}}_1=\begin{pmatrix} i &{} 0 \\ 0 &{} -i \end{pmatrix}, \qquad {\tilde{e}}_2=\begin{pmatrix} 0 &{} i \\ i &{} 0 \end{pmatrix}, \qquad {\tilde{e}}_3=\begin{pmatrix} 0 &{} 1 \\ -1 &{} 0 \end{pmatrix}, \end{aligned}$$
\(\left\{ f_i\right\} _{i=1,\ldots , m}\) the generic basis for \(\mathfrak {g}=\mathfrak {k}\oplus \mathfrak {m}\), \(\mathfrak {k}=\langle f_1,\ldots ,f_k\rangle \), \(\mathfrak {m}=\langle f_{k+1},\ldots ,f_m\rangle \), where \(k={\text {dim}}\mathfrak {k}\), \(m={\text {dim}}\mathfrak {g}\),
\(e_1 {:}{=}\xi \cong \frac{\partial }{\partial t}\),
\(e_i {:}{=}\widehat{f}_{j}\), \(j={\text {dim}}\mathfrak {k}-1+i\), the Killing vector fields on \(M^{\text {princ}}\) induced by the G-action, for \(i=2,\ldots ,6\),
\(e^i\) the dual 1-forms to \(e_i\).
Therefore, in what follows \(\left\{ e_i \right\} _{i=1,\ldots ,6}\) will be vectors on \(M^{\text {princ}}\) which provide a basis for \(T_pM\) at each point \(p=\gamma (t)\in M^{\text {princ}}\), where \(\gamma :\overset{\circ }{I} \rightarrow M\) is a normal geodesic through the point p.
Moreover, we recall some basic facts about G-actions which will be useful for our discussion:
Since \(g \cdot \gamma _p=\gamma _{g\cdot p } \) by the uniqueness of the normal geodesic \(\gamma \) starting from the point \(g\cdot p \), we have that \(\Phi _1^{\hat{X}}\circ \Phi _t^{\xi }(p)=\Phi _t^{\xi }\circ \Phi _1^{\hat{X}}(p)\), where \(\Phi _t^v\) denotes the flow of the vector field v evaluated at time t. This is equivalent to \([\xi , \hat{X}]=0\), for each \(X\in \mathfrak {g}\);
A k-form \(\alpha \) on \(M^\text {princ}\) of the form
$$\begin{aligned} \alpha =\sum _{i_1<\ldots <i_k=1}^{6} a_{i_1\ldots i_k} \,e^{i_1\ldots i_k}, \end{aligned}$$
with \(a_{i_1\ldots i_k} \in C^{\infty }(\overset{\circ }{I})\) for all \(i_1<\ldots <i_k\), is G-invariant if and only if \(\alpha _p\) is K-invariant for all \(p\in M^{\text {princ}}\). Here, \(e^{i_1\ldots i_k}\) is a shorthand for the wedge product \(e^{i_1}\wedge \ldots \wedge e^{i_k}\) of 1-forms. Analogously, we shall denote by \(\beta ^k\) the k-fold wedge product \(\beta \wedge \ldots \wedge \beta \) of \(\beta \) with itself;
If \(\alpha \) is a G-invariant k-form on M and \(v_1,\ldots , v_k\) are G-invariant vector fields on M, then \(\alpha \left( v_1,\ldots ,v_k\right) |_p\) is constant along the G-orbit through p, for each \(p\in M\).
We are now ready to state Theorem A.
Let M be a six-dimensional simply connected cohomogeneity one manifold under the almost effective action of a connected Lie group G and let K be the principal isotropy group. Then, the principal part \(M^{\text {princ}}\) admits a G-invariant balanced non-Kähler \(\text {SU}(3)\)-structure \(\left( g,J,\Psi \right) \) if and only if M is compact and \(\left( \mathfrak {g}, \mathfrak {k} \right) =\left( \mathfrak {su}(2)\oplus 2\mathbb {R},\{0\} \right) \).
Proof of Theorem A
From all the above discussion and the previous lemmas, the only possible pairs allowing \(M^{\text {princ}}\) to support a balanced \(\text {SU}(3)\)-structure are (a.1) with M compact, (c.3), and (b.1). We investigate these three cases separately.
For any of these cases, we shall consider the generic pair \((\omega ,\psi _+)\) of G-invariant forms on \(M^{\text {princ}}\) of degree two and three, respectively, with \(C^{\infty }(\overset{\circ }{I})\)-coefficients. In order for the pair \((\omega ,\psi _+)\) to define a G-invariant balanced non-Kähler \(\text {SU}(3)\)-structure on \(M^{\text {princ}}\), we have to impose the following conditions:
1. the stability conditions:
\(\omega ^3 \ne 0\),
\(\lambda {:}{=}\lambda \left( \psi _+\right) <0\),
2. the compatibility conditions \(\psi _{\pm } \wedge \omega =0\),
3. the normalization conditions:
\(\psi _+ \wedge \psi _-=\frac{2}{3} \omega ^3\),
\(\frac{1}{6} \omega ^3=\pm \sqrt{{\text {det}}(g)} \,e^{1\ldots 6}\) where the sign ± depends on the fixed orientation \(\pm e^{1\ldots 6}\),
4. \(\text {d} \psi _{\pm }=0\),
5. the balanced condition \(\text {d}\omega ^2=0\),
6. the non-Kähler condition \(\text {d}\omega \ne 0\),
7. the positive-definiteness of the induced symmetric bilinear form \(g {:}{=}\omega (\cdot , J \cdot )\) on \(M^{\text {princ}}\).
We start with case (b.1).
Case (b.1)
\(\left( \mathfrak {g},\mathfrak {k}\right) =\left( \mathfrak {su}(2)\oplus \mathfrak {su}(2), \mathbb {R}\right) \).
In the notation of Remark 3.2, let us suppose p, q nonzero and coprime with \((p,q) \ne (1,1)\), first. Consider the \(\mathcal {B}\)-orthonormal basis of \(\mathfrak {g}\) given by
$$\begin{aligned} \begin{aligned}&f_1=\dfrac{1}{2\sqrt{2(p^2+q^2)}}\begin{pmatrix} p {\tilde{e}}_1 &{} 0 \\ 0 &{} q{\tilde{e}}_1 \end{pmatrix} \quad&f_2=\dfrac{1}{2\sqrt{2(p^2+q^2)}}\begin{pmatrix} q {\tilde{e}}_1 &{} 0 \\ 0 &{} -p{\tilde{e}}_1 \end{pmatrix} \\&f_3=\dfrac{1}{2\sqrt{2}}\begin{pmatrix} {\tilde{e}}_3 &{} 0 \\ 0 &{} 0 \end{pmatrix} \quad&f_4=\dfrac{1}{2\sqrt{2}}\begin{pmatrix} 0 &{} 0 \\ 0 &{} {\tilde{e}}_3 \end{pmatrix} \\&f_5=\dfrac{1}{2\sqrt{2}}\begin{pmatrix} {\tilde{e}}_2 &{} 0 \\ 0 &{} 0 \end{pmatrix} \quad&f_6=\dfrac{1}{2\sqrt{2}}\begin{pmatrix} 0 &{} 0 \\ 0 &{} {\tilde{e}}_2 \end{pmatrix}. \end{aligned} \end{aligned}$$
and take \(\mathfrak {k}=\langle f_1\rangle \). Notice that since \(\text {rank}(\mathfrak {su}(2))=1\), this assumption is not restrictive. The decomposition of \(\mathfrak {g}\) into irreducible \(\text {Ad}\left( K\right) \)-modules is given by
$$\begin{aligned} \mathfrak {g}=\mathfrak {k}\oplus \mathfrak {a}\oplus \mathfrak {b}_1\oplus \mathfrak {b}_2, \end{aligned}$$
where \(\mathfrak {a}{:}{=}\langle f_2\rangle \) is \(\text {Ad}(K)\)-fixed, \(\mathfrak {b}_1{:}{=}\langle f_3, f_5\rangle \) and \(\mathfrak {b}_2{:}{=}\langle f_4, f_6\rangle \), hence \(\mathfrak {m}= \mathfrak {a}\oplus \mathfrak {b}_1\oplus \mathfrak {b}_2\). Fix the orientation given by \(\Omega =e^{1\ldots 6}\) and consider the generic G-invariant 3-form \(\psi _+\) on \(M^{\text {princ}}\),
$$\begin{aligned} \psi _+ {:}{=}p_1\, e^{135} + p_2\, e^{146}+ p_3\, e^{235} + p_4\, e^{246}, \end{aligned}$$
where \(p_j\in C^{\infty }(\overset{\circ }{I})\), \(j=1,\ldots , 4\). A simple calculation shows that the stability condition \(\lambda (\psi _+)<0\) never holds, since \(\lambda (\psi _+)=\left( p_1 p_4 - p_2 p_3 \right) ^2\ge 0\).
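Here and throughout, \(\lambda \) denotes Hitchin's quartic invariant of a 3-form [19]; for the reader's convenience, we briefly recall how it is computed (our normalization may differ from the one fixed earlier in the paper by a positive constant, which does not affect its sign). For \(\psi \in \Lambda ^3 T_p^*M\), one sets
$$\begin{aligned} K_{\psi }(v) {:}{=}A\left( \iota _v \psi \wedge \psi \right) , \qquad \lambda (\psi ) {:}{=}\frac{1}{6}\, \text {tr}\left( K_{\psi }^{2}\right) , \end{aligned}$$
where \(A:\Lambda ^5 T_p^*M \rightarrow T_pM\otimes \Lambda ^6 T_p^*M\) is the canonical isomorphism. Then \(\psi \) is stable if and only if \(\lambda (\psi )\ne 0\), and it induces an almost complex structure precisely when \(\lambda (\psi )<0\), via \(J_{\psi }=K_{\psi }/\sqrt{-\lambda (\psi )}\) once \(\Lambda ^6T_p^*M\) is trivialized by a volume form. Computations such as the one above then reduce to finite linear algebra in the coefficients \(p_j\).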
Now let \((p,q)=(1,0)\) and consider the \(\mathcal {B}\)-orthogonal basis of \(\mathfrak {g}\) given by 4.1 when \((p,q)=(1,0)\) and assume \(\mathfrak {k}=\langle f_1\rangle \) as before. Then, the decomposition of \(\mathfrak {g}\) into irreducible \(\text {Ad}(K)\)-modules is given by
$$\begin{aligned} \mathfrak {g}=\mathfrak {k}\oplus \mathfrak {b}_1\oplus \mathfrak {a}_1\oplus \mathfrak {a}_2 \oplus \mathfrak {a}_3, \end{aligned}$$
where \(\mathfrak {b}_1 {:}{=}\langle f_3, f_5\rangle \), \(\mathfrak {a}_1{:}{=}\langle f_2\rangle \), \(\mathfrak {a}_2{:}{=}\langle f_4\rangle \) and \(\mathfrak {a}_3{:}{=}\langle f_6\rangle \). Observe that the \(\mathfrak {a}_i\)'s are equivalent. Consider the generic G-invariant 3-form \(\psi _+\) on \(M^{\text {princ}}\), which is of the form
$$\begin{aligned} \psi _+ {:}{=}p_1\, e^{124} + p_2\, e^{126}+ p_3\, e^{135} + p_4\, e^{146}+ p_5\, e^{235} + p_6\, e^{246}+ p_7\, e^{345} + p_8\, e^{356}, \end{aligned}$$
where \(p_j\in C^{\infty }(\overset{\circ }{I})\), \(j=1,\ldots ,8\). It is straightforward to show that \(\lambda (\psi _+)=\left( p_1 p_8 + p_2 p_7 - p_3 p_6 +p_4 p_5\right) ^2\ge 0\).
By the previous discussion we have that when \(\left( \mathfrak {g},\mathfrak {k}\right) =\left( \mathfrak {su}(2)\oplus \mathfrak {su}(2), \mathbb {R}\right) \) with \(\mathfrak {k}\) not diagonally embedded in \(\mathfrak {g}\), M admits no G-invariant \(\text {SL}(3,\mathbb {C})\)-structures, i.e., G-invariant stable 3-forms inducing an almost complex structure on M.
Finally, let us consider the case when \(\mathfrak {k}\) is diagonally embedded in \(\mathfrak {g}\). Without loss of generality, we can assume \((p,q)=(1,1)\). We consider the \(\mathcal {B}\)-orthonormal basis of \(\mathfrak {g}\) given by 4.1 when \((p,q)=(1,1)\). The decomposition of \(\mathfrak {g}\) into irreducible \(\text {Ad}\left( K\right) \)-modules is given by
$$\begin{aligned} \mathfrak {g}=\mathfrak {k}\oplus \mathfrak {a}\oplus \mathfrak {b}_1\oplus \mathfrak {b}_2, \end{aligned}$$
where \(\mathfrak {k}=\langle f_1\rangle \), \(\mathfrak {a}{:}{=}\langle f_2\rangle \) is \(\text {Ad}(K)\)-fixed, \(\mathfrak {b}_1{:}{=}\langle f_3, f_5\rangle \) and \(\mathfrak {b}_2{:}{=}\langle f_4, f_6\rangle \). Then, \(\mathfrak {m}= \mathfrak {a}\oplus \mathfrak {b}_1\oplus \mathfrak {b}_2\). Unlike the case \(p\ne q\) both nonzero, here the equivalence of the \(\mathfrak {b}_i\)-modules implies that the metric g on \(M^{\text {princ}}\) is not necessarily diagonal but of the form:
$$\begin{aligned} g={\text {d}}t^2 + f(t)^2 \mathcal {B}|_{\mathfrak {a}\times \mathfrak {a}}+ h_1(t)^2 \mathcal {B}|_{\mathfrak {b}_1\times \mathfrak {b}_1} + h_2(t)^2 \mathcal {B}|_{\mathfrak {b}_2\times \mathfrak {b}_2}+ \mathcal {Q}|_{\mathfrak {b}_1\times \mathfrak {b}_2}, \end{aligned}$$
for some \(f, h_1, h_2 \in C^{\infty }(\overset{\circ }{I})\), where \(\mathcal {Q}\) denotes a symmetric quadratic form on the isotypic component \(\mathfrak {b}_1\oplus \mathfrak {b}_2\). In particular, the metric coefficients \(g_{ij}{:}{=}g(e_i,e_j)\) must satisfy
$$\begin{aligned} \begin{aligned} g_{1i}&=g_{i1}=0, \quad i=2, \ldots , 6, \\ g_{2i}&=g_{i2}=0, \quad i=3,\ldots , 6, \\ g_{33}&=g_{55}, \quad g_{35}=g_{53}=0, \\ g_{44}&=g_{66}, \quad g_{46}=g_{64}=0. \end{aligned} \end{aligned}$$
where \(e_i\), \(i=1,\ldots ,6\), are the vector fields defined in the usual way. Fix the orientation given by \(\Omega {:}{=}e^{1\ldots 6}\), and consider a pair of G-invariant forms \(\left( \omega ,\psi _+\right) \) of degree two and three, given respectively by
$$\begin{aligned} \omega {:}{=}&h_1\, e^{12}+h_2\, e^{35} + h_3\, e^{46}+h_4 (e^{34}+e^{56}) + h_5 (e^{36}+e^{45} ), \\ \psi _+ {:}{=}&p_1\, e^{135} + p_2\, e^{146}+ p_3 (e^{134}+e^{156}) + p_4 (e^{136}+e^{145})\\&+ p_5 \,e^{235} + p_6\, e^{246}+ p_7 ( e^{234} +e^{256})+ p_8 (e^{236}+e^{245}), \end{aligned}$$
where \(h_i, p_j\in C^{\infty }(\overset{\circ }{I})\), \(i=1,\ldots ,5\), \(j=1,\ldots ,8\). Moreover, the structure equations are given by
$$\begin{aligned} \begin{aligned} \text {d}e^1=0, ~ \text {d}e^2=\frac{1}{2} \left( e^{35} -e^{46}\right) , ~ \text {d}e^3=- \frac{1}{2} e^{25}, ~ \text {d}e^4=\frac{1}{2}e^{26}, ~ \text {d}e^5=\frac{1}{2}e^{23}, ~ \text {d}e^6=- \frac{1}{2}e^{24}. \end{aligned}\end{aligned}$$
In order to find a G-invariant balanced non-Kähler \(\text {SU}(3)\)-structure on \(M^{\text {princ}}\), we have to impose the conditions 1 to 7 listed at the beginning of this Section, together with 4.2. We shall show that this system of equations is incompatible. This implies there are no G-invariant balanced non-Kähler \(\text {SU}(3)\)-structures on the corresponding M. In order to see this, we write all conditions in terms of the coefficients \(h_i, p_j\) of \(\left( \omega ,\psi _+\right) \), for \(i=1,\ldots ,5\), \(j=1,\ldots ,8\). One has that \(\text {d}\omega ^2=0\) if and only if
$$\begin{aligned} \dfrac{h_1}{2}\left( h_3-h_2\right) -\left( h_2h_3-h_4^2-h_5^2\right) '=0 \end{aligned}$$
and, in particular, \(\text {d}\omega =0\) if and only if
$$\begin{aligned} {\left\{ \begin{array}{ll} -\dfrac{h_1}{2}+h_2'=0, \\ (h_2+h_3)'=0,\\ h_4=h_5=0. \end{array}\right. } \end{aligned}$$
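To illustrate how such systems are obtained (a sketch of the computation, not needed for the sequel): since the coefficients depend on t alone and \(e^1\) restricts to \({\text {d}}t\), one has \(\text {d}(f\, e^{i_1\ldots i_k})=f'\, e^{1i_1\ldots i_k}+f\, \text {d}(e^{i_1\ldots i_k})\), and the structure equations above give
$$\begin{aligned} \text {d}(h_1\, e^{12})=-\frac{h_1}{2}\left( e^{135}-e^{146}\right) , \qquad \text {d}(h_2\, e^{35})=h_2'\, e^{135}, \qquad \text {d}(h_3\, e^{46})=h_3'\, e^{146}, \end{aligned}$$
while the \(h_4\)- and \(h_5\)-summands of \(\omega \) contribute no \(e^{135}\)- or \(e^{146}\)-terms. Hence the vanishing of these two components of \(\text {d}\omega \) amounts to \(h_2'-\frac{h_1}{2}=0\) and \(h_3'+\frac{h_1}{2}=0\), which together are equivalent to the first two equations of the system above.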
Similarly, \(\text {d}\psi _+=0\) if and only if
$$\begin{aligned} {\left\{ \begin{array}{ll} p_8'-p_3=0, \\ p_7'+p_4=0, \\ p_5=p_6, \\ p_6'=0. \end{array}\right. } \end{aligned}$$
Let us suppose that \(\psi _+\) is stable with \(\lambda <0\), and consider the induced almost complex structure J on \(M^{\text {princ}}\). Recall that by G-invariance, \(\psi _-=J \psi _+\) needs to be of the same general form of \(\psi _+\), namely
$$\begin{aligned} \psi _- =&q_1 e^{135} + q_2 e^{146}+ q_3 (e^{134}+e^{156}) + q_4 (e^{136}+e^{145})\\&+q_5 e^{235} + q_6 e^{246}+ q_7 ( e^{234} +e^{256})+ q_8 (e^{236}+e^{245}), \end{aligned}$$
where the \(q_i\)'s are functions of \(\{p_j\}_{j=1,\ldots ,8}\) for \(i=1,\ldots ,8\). Therefore, \(\text {d}\psi _-=0\) if and only if
$$\begin{aligned} {\left\{ \begin{array}{ll} q_8'-q_3=0, \\ q_7'+q_4=0, \\ q_5=q_6, \\ q_6'=0. \end{array}\right. } \end{aligned}$$
Moreover, 4.2 is equivalent to
$$\begin{aligned} {\left\{ \begin{array}{ll} p_1p_6+p_2p_6-2p_3p_7-p_4p_8=0, \\ h_2(p_3 p_8-p_4 p_7 )+h_4(p_4 p_6 - p_1 p_8)+2h_5(p_1 p_7 -p_3 p_6)=0, \\ h_3(p_3 p_8-p_4 p_7)+h_4(p_4 p_6 - p_2 p_8)+2h_5(p_2 p_7 -p_3 p_6)=0, \\ h_5 (p_4 p_6 - p_1 p_8)=0, \\ h_5 (p_2 p_8 - p_4 p_6)=0, \\ h_2 (p_2 p_6-p_1 p_6)+2 h_4 (p_1 p_7-p_3 p_6)=0,\\ h_3 (p_2 p_6-p_1 p_6)+2 h_4 (p_3 p_6-p_2 p_7)=0, \end{array}\right. } \end{aligned}$$
where we have already assumed \(p_5=p_6\) from 4.3. Since \(p_6'=0\) and all the conditions for the G-invariant balanced non-Kähler \(\text {SU}(3)\)-structure involve only homogeneous polynomials, we can assume either \(p_6=0\) or \(p_6=1\), up to scalings. Some possibilities can be excluded using the following lemmas.
Lemma 4.2
Assume \(p_6=0\). If \(p_1=0\), or \(p_2=0\), or \(p_7=0\), then conditions (1)-(7) are incompatible.
Let us assume \(p_1=0\). Then, \(\lambda (\psi _+)=-2(p_3 p_8-p_4 p_7)^2\le 0\) and
$$\begin{aligned} q_i&= 0, \quad i=4,5,8,\\ q_3&= \pm \dfrac{1}{\sqrt{2}} p_4, \\ q_7&= \pm \dfrac{1}{\sqrt{2}} p_8, \end{aligned}$$
where the signs of \(q_3, q_7\) depend on that of \((p_3 p_8-p_4 p_7)\). Then, \(\text {d}\psi _{\pm }=0\) implies \(p_3=p_4=0\), from which \(\lambda =0\) follows.
Assume instead \(p_2=0\). Then, we have \(\lambda =-2(p_3 p_8-p_4 p_7)^2 \le 0\), as in the previous case. Moreover, one can easily compute
$$\begin{aligned} q_4&=q_8=0, \\ q_3&= \pm \dfrac{1}{\sqrt{2}} \,p_4, \\ q_7&= \pm \dfrac{1}{\sqrt{2}} \,p_8, \end{aligned}$$
by which we can draw the same conclusion.
Finally, let us assume \(p_7=0\). Then, 4.3 implies \(p_4=0\). In this case, \(\lambda (\psi _+)=2 p_8^2 (p_1 p_2-p_3^2)\) can be strictly negative, and one can compute that
$$\begin{aligned} q_i&= 0, \quad i=3,4,8,\\ q_5&= \dfrac{p_1 p_8^2}{\sqrt{-\lambda }},\\ q_6&= \dfrac{p_2 p_8^2}{\sqrt{-\lambda }},\\ q_7&= \dfrac{p_3 p_8^2}{\sqrt{-\lambda }}. \end{aligned}$$
Therefore, assuming \(p_8\ne 0\) to ensure \(\lambda (\psi _+)\ne 0\), the requirement \(\text {d}\psi _{\pm }=0\) imposes
$$\begin{aligned} {\left\{ \begin{array}{ll} p_1=p_2, \\ q_6'=0, \\ q_7'=0, \end{array}\right. } \end{aligned}$$
which implies \(\lambda \ge 0\). \(\square \)
Lemma 4.3
If \(h_5 \ne 0\), \(p_6=1\), \(p_8=0\), then conditions (1)-(7) and 4.5 are incompatible.
From 4.5 and the closure of \(\psi _+\), one has
$$\begin{aligned}&p_3=0, \\&p_4=0, \\&p_1=-p_2, \end{aligned}$$
from which it follows that \(\lambda =-4p_2^2(p_7^2-1)\), and
$$\begin{aligned} q_5=-\dfrac{2(p_7^2-1)p_2}{\sqrt{-\lambda }}=-q_6. \end{aligned}$$
Thus \(q_5=q_6=0\), from 4.4, which would force \(\lambda \) to vanish. \(\square \)
We can divide the discussion into the following cases:
\(h_5\ne 0, p_6=0\),
\(h_5 \ne 0, p_6=1\),
\(h_5=0, p_6=0\),
\(h_5=0, p_6=1\).
We study each case separately.
Case (1). By 4.5, Lemma 4.2 and \(\text {d}\psi _{+}=0\), it follows that \(p_3=p_8=h_4=0.\) Then, \(\text {d}\psi _{-}=0\) implies \(p_1=p_2\). Now, \(\lambda =2 p_7^2 (2 p_2^2-p_4^2)\), and the compatibility condition \(\psi _+\wedge \omega =0\) holds if and only if \(p_2 (h_2 + h_3)=2 h_5 p_4\). Then, if \(h_2\ne -h_3\), we can write \(p_2=\frac{2 p_4 h_5}{(h_2+h_3)}\). Therefore, 4.5 reduces to
$$\begin{aligned} {\left\{ \begin{array}{ll} -p_4 p_7 (h_2^2+h_2 h_3 -4 h_5^2)=0, \\ -p_4 p_7 (h_3^2+h_2 h_3 -4 h_5^2)=0, \end{array}\right. } \end{aligned}$$
all of whose solutions imply \(\lambda \ge 0 \). When \(h_2=-h_3 \), the condition \(\psi _+\wedge \omega =0\) implies \(p_4=0\) from which \(\lambda \ge 0\).
Case (2). By Lemma 4.3, we can assume \(p_8\ne 0\). Then, by 4.5, we have that
$$\begin{aligned}&p_1=p_2, \\&p_4=p_2 p_8. \end{aligned}$$
Moreover, since in this case \(\lambda =-2(p_8^2-2)(p_2 p_7-p_3)^2\), 4.5 implies \(h_4=0\) as well. Then, 4.5 implies
$$\begin{aligned}&(p_2 p_7-p_3)(h_2 p_8-2 h_5)=0, \\&(p_2 p_7-p_3)(h_3 p_8-2 h_5)=0, \end{aligned}$$
from which it follows that \(h_2=h_3=2 \frac{h_5}{p_8}\), since \(\lambda \) must not vanish. Then, \(\psi _+ \wedge \omega =0\) if and only if \(p_8^2-2=0\), which would imply \(\lambda =0\).
Case (3). By 4.5 and Lemma 4.2, we have \(h_4=0\), which implies \({\text {det}}(g)=h_1^2 h_2^2 h_3^2\). Then, from 4.5, we also have that
$$\begin{aligned} \begin{aligned} p_3 p_8&=p_4 p_7, \\ 2 p_3 p_7&=-p_4 p_8. \end{aligned} \end{aligned}$$
If \(p_3,p_8\ne 0\), then 4.6 implies that \(p_8^2+2p_7^2=0\), which contradicts our hypothesis. If \(p_8=0\), the closure of \(\psi _+\) implies \(p_3=0\). Hence, it only remains to discuss the case \(p_3=0\). In this case, 4.6, together with Lemma 4.2, implies \(p_4=0\). Under these hypotheses, one can easily compute that \(\lambda =2 p_1 p_2 (2p_7^2+p_8^2)\),
$$\begin{aligned} q_5&= \dfrac{(2 p_7^2+p_8^2)p_1}{\sqrt{-\lambda }}, \\ q_6&= \dfrac{(2 p_7^2+p_8^2)p_2}{\sqrt{-\lambda }}, \end{aligned}$$
so that \(\lambda <0\) forces \(q_5 \ne q_6\), a contradiction.
Case (4). Here, the compatibility condition \(\psi _+\wedge \omega =0\), which holds if and only if
$$\begin{aligned} {\left\{ \begin{array}{ll} h_2=2 h_4 p_7 -h_3, \\ -h_2 p_2 -h_3 p_1 + 2 h_4 p_3=0, \end{array}\right. } \end{aligned}$$
together with 4.5, implies that one of the following must hold:
(4.a)
\( h_4=0\),
(4.b)
\(p_4=0\),
(4.c)
\(2 p_7^2+p_8^2=2\).
Let us start with case (4.a). By 4.7, we have that \(h_2=-h_3\). In particular, since \({\text {det}}(g)=h_1^2 h_2^2 h_3^2\), we must have \(h_3\ne 0 \). Then, a simple calculation shows that d\(\omega ^2=0\) if and only if d\(\omega =0\). As for case (4.b), by 4.7 and 4.5, we have that
$$\begin{aligned} p_1&= 2 p_3 p_7 - p_2, \\ h_2&= 2 h_4 p_7 -h_3, \end{aligned}$$
from which it follows that \(\lambda =-2 (2 p_7^2 + p_8^2 -2)(-2 p_2 p_3 p_7 + p_2^2 + p_3^2)\). Moreover, one can show that \(q_5=q_6\) implies \(p_2=p_3 p_7\). Now, 4.5 implies \(h_4=0\), which was already ruled out in the previous case. In case (4.c), again by 4.7 and 4.5, we have that
$$\begin{aligned} p_1&= 2 p_3 p_7 + p_4 p_8 - p_2, \\ h_2&= 2 h_4 p_7 -h_3, \end{aligned}$$
which implies \(\lambda =0\). This concludes case (b.1).
Case (c.3)
\(\mathfrak {g}=\mathfrak {su}(3)\), \(\mathfrak {k}=\mathfrak {su}(2)\).
Consider the \(\mathcal {B}\)-orthogonal basis of \(\mathfrak {g}\) given by
$$\begin{aligned} \begin{aligned} f_1&=\begin{pmatrix} 0 &{}\quad i &{}\quad 0 \\ i &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 \end{pmatrix}&f_2&=\begin{pmatrix} 0 &{}\quad 1 &{}\quad 0 \\ -1 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 \end{pmatrix}&f_3&=\begin{pmatrix} i &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad -i &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 \end{pmatrix}&f_4&=\begin{pmatrix} 0 &{}\quad 0 &{}\quad i \\ 0 &{}\quad 0 &{}\quad 0 \\ i &{}\quad 0 &{}\quad 0 \end{pmatrix} \\ f_5&=\begin{pmatrix} 0 &{}\quad 0 &{}\quad 1 \\ 0 &{}\quad 0 &{}\quad 0 \\ -1 &{}\quad 0 &{}\quad 0 \end{pmatrix}&f_6&=\begin{pmatrix} 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad i \\ 0 &{}\quad i &{}\quad 0 \end{pmatrix}&f_7&=\begin{pmatrix} 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 1 \\ 0 &{}\quad -1 &{}\quad 0 \end{pmatrix}&f_8&=\frac{1}{\sqrt{3}} \begin{pmatrix} i &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad i &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad -2i \end{pmatrix}. \end{aligned} \end{aligned}$$
Then, \(\mathfrak {k}=\langle f_1, f_2, f_3\rangle \). Let \(\mathfrak {a}{:}{=}\langle f_8\rangle \) and \(\mathfrak {n}{:}{=}\langle f_4, f_5, f_6, f_7\rangle \); hence, \(\mathfrak {m}=\) \(\mathfrak {a}\oplus \mathfrak {n}\). Since the \(\text {Ad}(K)\)-invariant irreducible modules in the decomposition of \(\mathfrak {g}\) are pairwise inequivalent, the metric g on \(M^{\text {princ}}\) is diagonal and, in particular, it is of the form
$$\begin{aligned} g={\text {d}}t^2+h(t)^2\mathcal {B}|_{\mathfrak {a}\times \mathfrak {a}}+f(t)^2\mathcal {B}|_{\mathfrak {n}\times \mathfrak {n}}, \end{aligned}$$
for some positive \(h, f \in C^{\infty }(\overset{\circ }{I})\). Moreover, with respect to the frame \(\{e_i\}_{i=1,\ldots ,6}\) of \(M^{\text {princ}}\), the structure equations are given by
$$\begin{aligned}&\mathrm{d}e^1=0,&\mathrm{d}e^2=-\sqrt{3}e^{36},&\mathrm{d}e^3=\sqrt{3}e^{26}, \\&\mathrm{d}e^4=-\sqrt{3}e^{56},&\mathrm{d}e^5=\sqrt{3}e^{46},&\mathrm{d}e^6=-\sqrt{3}(e^{23}+e^{45}). \end{aligned}$$
Fix the volume form \(\Omega =e^{1\ldots 6}\). One can easily show that a pair of generic G-invariant forms \(\left( \omega ,\psi _+\right) \) on \(M^\text {princ}\) of degree two and three is given, respectively, by
$$\begin{aligned} \omega {:}{=}&h_1\, e^{16}+h_2\, (e^{23}+e^{45}) + h_3\,(e^{24}-e^{35})+h_4 (e^{25}+e^{34} ), \\ \psi _+ {:}{=}&p_1\, (e^{123}+e^{145}) + p_2\, (e^{124}-e^{135})+ p_3 (e^{246}-e^{356}) + p_4 (e^{236}+e^{456})\\&+ p_5 \,(e^{125}+e^{134}) + p_6\,(e^{256}+e^{346}), \end{aligned}$$
where \(h_i, p_j\in C^{\infty }(\overset{\circ }{I})\), \(i=1,\ldots ,4\), \(j=1,\ldots ,6\). As we did for case (b.1), we are going to show that the system of equations resulting from imposing conditions (1)-(7) is incompatible.
A simple computation shows that d\(\psi _+=0\) if and only if
$$\begin{aligned} {\left\{ \begin{array}{ll} p_6'-2\sqrt{3}\,p_2=0, \\ p_3'+2\sqrt{3}\,p_5=0, \\ p_4=p_4'=0. \end{array}\right. } \end{aligned}$$
From the G-invariance,
$$\begin{aligned} \psi _- {:}{=}&q_1\, (e^{123}+e^{145}) + q_2\, (e^{124}-e^{135})+ q_3 (e^{246}-e^{356}) + q_4 (e^{236}+e^{456})\\&+ q_5 \,(e^{125}+e^{134}) + q_6\,(e^{256}+e^{346}), \end{aligned}$$
where the \(q_i\)'s are functions of \(\{p_j\}_{j=1,\ldots ,6}\) for \(i=1,\ldots ,6\). Therefore, d\(\psi _-=0\) if and only if
$$\begin{aligned} {\left\{ \begin{array}{ll} q_6'-2\sqrt{3}q_2=0, \\ q_3'+2\sqrt{3}q_5=0, \\ q_4=q_4'=0. \end{array}\right. } \end{aligned}$$
In particular, from \(p_4=0\), it follows
$$\begin{aligned} q_4=\dfrac{2(p_3^2+p_6^2)p_1}{\sqrt{-\lambda }}, \end{aligned}$$
with \(\lambda =-4(p_1^2 \, (p_3^2+p_6^2)+(p_2\,p_6-p_3\,p_5)^2)\). We suppose that \(\psi _+\) is stable with \(\lambda <0\). Then, \(q_4=0\) if and only if \(p_1=0\). Since \(p_1\) has to be equal to zero, it can be shown that the compatibility condition \(\psi _+ \wedge \omega =0\) is equivalent to the following system of equations:
$$\begin{aligned} {\left\{ \begin{array}{ll} h_3p_3+h_4p_6=0, \\ h_3p_2+h_4p_5=0. \end{array}\right. } \end{aligned}$$
Moreover, the positive-definiteness of g implies \(h_1>0\). Then, the normalization condition \(\psi _+ \wedge \psi _-=\frac{2}{3}\omega ^3\) is equivalent to
$$\begin{aligned} |p_2p_6-p_3p_5|=h_1(h_2^2+h_3^2+h_4^2). \end{aligned}$$
The balanced condition d\(\omega ^2=0\) is satisfied if and only if
$$\begin{aligned} 2\sqrt{3}h_1h_2+(h_2^2+h_3^2+h_4^2)'=0. \end{aligned}$$
Finally, the Kähler condition d\(\omega =0\) holds if and only if
$$\begin{aligned} {\left\{ \begin{array}{ll} h_3=h_4=0\\ \sqrt{3}h_1+h_2'=0. \end{array}\right. } \end{aligned}$$
Multiplying 4.9 by \(h_4\) and using 4.8, we obtain \(h_4h_1(h_2^2+h_3^2+h_4^2)=0\) and, since \(h_1>0\) and \(h_2=h_3=h_4=0\) would imply \(\omega ^3=0\), we necessarily have \(h_4=0\). Then, 4.8 implies
$$\begin{aligned} {\left\{ \begin{array}{ll} h_3p_3=0\\ h_3p_2=0, \end{array}\right. } \end{aligned}$$
from which it follows \(h_3=0\) since \(p_2=p_3=0\) would imply \(\lambda =0\). Then, 4.10 reads \(h_2(\sqrt{3}h_1+h_2')=0\) and, since \(h_2\ne 0 \), in order to have \(\omega ^3\ne 0\), we have \(\sqrt{3}h_1+h_2'=0\), namely \(d\omega =0\). Therefore, any G-invariant balanced \(\text {SU}(3)\)-structure on the corresponding M is necessarily Kähler. This concludes case (c.3).
Case (a.1)
\(\mathfrak {g}=\mathfrak {su}(2)\oplus 2\mathbb {R}\), \(\mathfrak {k}=\{0\}\).
Since \(\mathfrak {k}=\{0\}\), we can write \(T_pM\cong \langle e_1|_p\rangle \oplus \hat{\mathfrak {g}}\big |_p\), for each \(p\in M^{\text {princ}}\). Moreover, every k-form \(\alpha \) on \(M^{\text {princ}}\) of the form
$$\begin{aligned} \alpha =\sum _{1\le i_1< \ldots < i_k\le 6} \alpha _{i_1\ldots i_k} e^{i_1\ldots i_k}, \end{aligned}$$
where \(\alpha _{i_1\ldots i_k} \in C^{\infty }(\overset{\circ }{I})\), is G-invariant. Let
$$\begin{aligned} \omega {:}{=}\sum _{1 \le i< j \le 6} h_{ij}e^{ij}, \qquad \psi _+ {:}{=}\sum _{1\le i< j < k \le 6}p_{ijk} e^{ijk} \end{aligned}$$
be a pair of generic G-invariant forms on \(M^{\text {princ}}\) of degree two and three, respectively, with coefficients \(h_{ij}, p_{ijk} \in C^{\infty }(\overset{\circ }{I})\). If we choose a \(\mathcal {B}\)-orthogonal basis of \(\mathfrak {su}(2)\) with vectors of constant norm, say
$$\begin{aligned} f_i=\left( \begin{array}{c|c|c} {\tilde{e}}_i &{} &{} \\ \hline &{} 0 &{} \\ \hline &{} &{} 0 \end{array} \right) \,, \, i=1,2,3, \end{aligned}$$
and extend it to a basis \(\{f_i\}_{i=1,\ldots ,5}\) of \(\mathfrak {g}\), the structure equations with respect to the frame \(\{e_i\}_{i=1,\ldots ,6}\) of \(M^{\text {princ}}\) are given by
$$\begin{aligned} de^1=0, \quad de^2=-2e^{34}, \quad de^3=2e^{24}, \quad de^4=-2e^{23}, \quad de^5=0, \quad de^6=0. \end{aligned}$$
Fix the volume form \(\Omega {:}{=}-e^{1\ldots 6}\). We consider the forms given in 4.12 and set
$$\begin{aligned}&p_{134}=p_{234}=1, \\&p_{136}=p_{235}=p_{246}=-p_{145}=e^{2t}, \\&h_{12}=\dfrac{3}{2}\dfrac{e^{4t}}{\sqrt{9+3e^{6t}}}, \\&h_{34}=-\dfrac{1}{3}\left( -3+\sqrt{9+3e^{6t}}\right) e^{-2t},\\&h_{35}=h_{36}=h_{46}=-h_{45}=1,\\&h_{56}=2e^{2t}, \end{aligned}$$
for each \(t\in (-1,1)\), and all other coefficients equal to zero. Then, by performing the change of variable
$$\begin{aligned} {\tilde{t}}(t){:}{=}\int _0^t a(s) {\text {d}}s, \quad a(s)=\sqrt{\frac{3}{2}}(9+3\,e^{6s})^{-\frac{1}{4}}e^{2s}, \end{aligned}$$
one can easily check that the resulting pair \(\left( \omega ,\psi _+\right) \) defines a G-invariant balanced non-Kähler \(\text {SU}(3)\)-structure on the corresponding \(M^{\text {princ}}\). With respect to the t parameter, the metric on \(M^{\text {princ}}\) is represented by the matrix
$$\begin{aligned} (g_{ij})= \begin{pmatrix} \dfrac{3}{2}\dfrac{e^{4t}}{\sqrt{9+3e^{6t}}} &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad \dfrac{3}{2}\dfrac{e^{4t}}{\sqrt{9+3e^{6t}}} &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad \dfrac{3+ \sqrt{9+3e^{6t}}}{3 e^{2t}} &{}\quad 0 &{}\quad 1 &{}\quad -1 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad \dfrac{3+ \sqrt{9+3e^{6t}}}{3 e^{2t}} &{}\quad 1 &{}\quad 1 \\ 0 &{}\quad 0 &{}\quad 1 &{}\quad 1 &{}\quad 2e^{2t} &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad -1 &{}\quad 1 &{}\quad 0 &{}\quad 2e^{2t} \end{pmatrix}. \end{aligned}$$
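As a quick sanity check (not part of the original argument), one can verify numerically that \((g_{ij})\) is positive definite for every \(t\in (-1,1)\) and evaluate the reparametrization \({\tilde{t}}\) by quadrature. A minimal sketch, assuming NumPy and SciPy are available:

```python
# Minimal numerical sanity check for the explicit example above (illustrative only).
# It evaluates the matrix (g_ij) displayed above on a grid of t in (-1, 1), verifies
# that all eigenvalues are positive, and computes the new parameter t~(t) by quadrature.
import numpy as np
from scipy.integrate import quad

def metric(t):
    """The matrix (g_ij) of the example, in the t parameter."""
    d12 = 1.5 * np.exp(4 * t) / np.sqrt(9 + 3 * np.exp(6 * t))
    d34 = (3 + np.sqrt(9 + 3 * np.exp(6 * t))) / (3 * np.exp(2 * t))
    d56 = 2 * np.exp(2 * t)
    g = np.diag([d12, d12, d34, d34, d56, d56])
    g[2, 4] = g[4, 2] = 1.0    # g_35
    g[2, 5] = g[5, 2] = -1.0   # g_36
    g[3, 4] = g[4, 3] = 1.0    # g_45
    g[3, 5] = g[5, 3] = 1.0    # g_46
    return g

def a(s):
    """Integrand of the change of variable."""
    return np.sqrt(1.5) * (9 + 3 * np.exp(6 * s)) ** (-0.25) * np.exp(2 * s)

for t in np.linspace(-0.99, 0.99, 199):
    assert np.linalg.eigvalsh(metric(t)).min() > 0, f"not positive definite at t={t}"

print("(g_ij) positive definite on the sampled grid; t~(0.5) =", quad(a, 0.0, 0.5)[0])
```

In fact, positive-definiteness can be seen directly: in the lower-right \(4\times 4\) block, the Schur complement of the \(2e^{2t}\,\mathrm {Id}_2\)-block equals \(\frac{\sqrt{9+3e^{6t}}}{3e^{2t}}\,\mathrm {Id}_2\), which is positive for every t.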
However, using the results from [32], we can check that this example cannot be extended to the singular orbits to give a smooth metric on the whole manifold. This concludes the proof of Theorem A.
Proof of Theorem B
We will finally prove our main theorem.
By Theorem A, we only need to discuss if there exist balanced non-Kähler \(\text {SU}(3)\)-structures of cohomogeneity one arising as the compactification of the principal part determined by the pair \(\left( \mathfrak {g}, \mathfrak {k} \right) =\left( \mathfrak {su}(2)\oplus 2\mathbb {R},\{0\} \right) \).
By [21], a six-dimensional compact simply connected cohomogeneity one manifold M whose corresponding principal part is given by the pair \(\left( \mathfrak {g},\mathfrak {k}\right) =\left( \mathfrak {su}(2)\oplus 2\mathbb {R},\{0\}\right) \) at the Lie algebra level is G-equivariantly diffeomorphic to the product of two three-dimensional spheres, i.e., \(M\cong S^3\times S^3\). If we denote by \(H_i\), for \(i=1,2\), the singular isotropy groups for the G-action on M and by \(\mathfrak {h}_i=\text {Lie}(H_i)\), for \(i=1,2\), their Lie algebras, we have that both \(\mathfrak {h}_1\) and \(\mathfrak {h}_2\) are isomorphic to \(\mathbb {R}\), so that both singular orbits of M are four-dimensional compact submanifolds of M. Let \(b_i\) be the i-th Betti number of M; then, up to G-equivariant diffeomorphisms, we may assume \(b_4=0\). By Michelsohn's obstruction [23, Corollary 1.7], if M admitted any 4-dimensional compact complex submanifold S, then M would not admit a balanced metric. Therefore, we can make a few considerations by focusing on a tubular neighborhood of one singular orbit, with group diagram \(G\supset H\supset K\), at a time. In particular, we divide the discussion depending on the immersion of \(\mathfrak {h}\subset \mathfrak {g}\). Let S be the singular orbit given by the group diagram \(G\supset H\supset K\). We notice that if S is J-invariant, a complex structure on M would give rise to a complex structure on S, so we can discard all these cases by Michelsohn's obstruction. In particular, we have that \(T_qM=T_qS \oplus V\), where \(V=T_qS^{\perp }\) is the slice at \(q\in S\); since S is four-dimensional, V is always a 2-plane. We recall that the H-action on \(T_qS\) is given by the adjoint representation, while the H-action on V is given via the slice representation; since V is two-dimensional, this action is just a rotation on V of a certain speed, say a.

Let us start by considering the case when \(\mathfrak {h}\) is contained in the center \(\mathfrak {z}(\mathfrak {g})\) of \(\mathfrak {g}\). In this case, the H-action on \(T_qS\) is trivial. Therefore, \(T_qS\) and V are inequivalent modules for the H-action on \(T_qM\) and, since J commutes with the H-action, J preserves \(T_qS\) for each \(q\in S\), i.e., S is an almost complex manifold and we may apply Michelsohn's obstruction to discard this case. Therefore, we may suppose that \(\mathfrak {h}\) has a non-trivial component in the \(\mathfrak {su}(2)\)-factor of \(\mathfrak {g}\). In particular, since \(\text {rank}(\mathfrak {su}(2))=1\) and the adjoint action ignores components in the center of \(\mathfrak {g}\), we may assume, without loss of generality and using the notation from Sect. 4.3, \(\mathfrak {h}=\langle f_1\rangle \). Moreover, if we denote by \(\mathfrak {m}\) the tangent space to S via the usual identification, the decomposition of \(\mathfrak {m}\) into irreducible H-modules is given by
$$\begin{aligned} \mathfrak {m}=l_0\oplus l_1, \end{aligned}$$
where H acts on \(l_0\) trivially and on \(l_1\) via rotation of speed d. Therefore, when the integer a is different from d, the modules \(l_0\), \(l_1\) and V are inequivalent for the H-action and again, since J commutes with the H-action, it cannot exchange two different modules. In particular, \(J(T_qS)\subseteq T_qS\) and we may apply Michelsohn's obstruction as before. For the remaining case \(a=d\), we have that the two modules \(l_1\) and V are equivalent; hence, \(J(l_1\oplus V)\subseteq l_1\oplus V\) but not necessarily \(J(l_1)\subseteq l_1\). In particular, when this case occurs, the orbit S is not J-invariant, and we do not have obstructions to the existence of balanced metrics. Therefore, from now on, we assume this is the case.
Let \(\partial / \partial x\) be a vector field such that \(\left( \xi |_q, \partial / \partial x|_q \right) \) is an orthonormal basis for the slice V and \(T_q^*M = \langle e^1|_q, {\text {d}}x|_q, e^3|_q, e^4|_q, e^5|_q, e^6|_q \rangle \). Let \(\varphi : \mathfrak {h} \rightarrow \text {End}(T_qM)\) be the \(\mathfrak {h}\)-action on \(T_qM\). Then, in order for \(l_1\) and V to be \(\mathfrak {h}\)-equivalent, \(\varphi (f_1)^*\) must act on 1-forms, written in the previous basis, as
$$\begin{aligned} \varphi (f_1)^*= \begin{pmatrix} 0 &{}\quad 1 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ -1 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 1 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad -1 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \end{pmatrix}. \end{aligned}$$
Fix the volume form \(\Omega = e^{1\ldots 6}\) and consider the three form
$$\begin{aligned} \begin{array}{ll} \psi _+ &{}:= p_1 e^{123} + p_2 e^{124} + p_3 e^{125} + p_4 e^{126} + p_5 e^{134} + p_6 e^{135} + p_7 e^{136} + p_8 e^{145} \\ &{}\quad + p_9 e^{146} + p_{10} e^{156} + p_{11} e^{234} + p_{12} e^{235} + p_{13} e^{236} + p_{14} e^{245} + p_{15} e^{246} +p_{16} e^{256}\\ &{}\quad + p_{17} e^{345} + p_{18} e^{346} + p_{19} e^{356} +p_{20} e^{456}, \end{array} \end{aligned}$$
where \(p_j \in C^\infty ((-1, 1))\) for any \(j = 1, \dots , 20\).
The condition \(d \psi _+=0\) is equivalent to the following ODE system:
$$\begin{aligned} {\left\{ \begin{array}{ll} p_{11}'=0, \\ p_{12}' +2 p_8=0, \\ p_{13}' +2 p_9=0, \\ p_{14}' -2 p_6=0, \\ p_{15}' -2 p_7=0, \\ p_{17}' +2 p_3=0, \\ p_{18}' +2 p_4=0, \\ p_{16}=p_{19}=p_{20}=0. \end{array}\right. } \end{aligned}$$
From now on, we will assume \(p_{16}=p_{19}=p_{20}=0\).
Let the slice be \(V \cong \mathbb {R}^2\) so that the singular point \(q \in \mathcal {O}_1\) is identified with \(0 \in \mathbb {R}^2\), and let \(r:V \rightarrow \mathbb {R}\) be the radial distance, such that for \(v=(v_1, v_2) \in V\), \(r(v) = | v| = \sqrt{v_1^2 + v_2^2}\). Then \(r \not \in C^\infty (V)\), and neither are the odd powers of r. Via the exponential map, we can identify \(t+1\) with the radial distance r.
Let \(\alpha \) be a G-invariant 1-form on M. Then,
$$\begin{aligned} \alpha (t)= \sum _{i=1}^6 \alpha _i(t) e^i, \end{aligned}$$
for \(t \in (-1,1)\) and some smooth functions \(\alpha _i\), \(i=1, ..., 6\). This expression has to extend smoothly to \(t=-1\). In particular, the Taylor expansion of \(\alpha _k(t)\) around \(t=-1\) for \(k \ge 2 \) only has even powers of \(t+1\):
$$\begin{aligned} \alpha _k(t) \sim \sum _{n>1} a_{k, 2n} (t+1)^{2n}. \end{aligned}$$
Now for \(2 \le i< j<k \le 6\) fixed, the \(e^{ijk}\)-coefficients extend smoothly to \(t=-1\). Hence,
$$\begin{aligned} p_{12}(t) \sim \sum _{n > 1} a_{2n} (t+1)^{2n}, \end{aligned}$$
as well as for the Taylor expansion of \(p_{13}(t), p_{14}(t)\) and \(p_{15}(t)\) around \(t=-1\). Therefore, \(\lim _{t \rightarrow -1} p_{12}'(t)= \lim _{t \rightarrow -1} p_{13}'(t)= \lim _{t \rightarrow -1} p'_{14}(t)= \lim _{t \rightarrow -1} p'_{15}(t)=0\). From 5.1, we obtain that \(\lim _{t \rightarrow -1} p_6(t)= \lim _{t \rightarrow -1} p_7(t)= \lim _{t \rightarrow -1} p_8(t)= \lim _{t \rightarrow -1} p_9(t)=0\).
The 3-form \(\psi _+\) at \(t=-1\) has to be H-invariant, and hence can be written as
$$\begin{aligned} \begin{array}{ll} \rho &{}= c_3 e^1 \wedge {\text {d}}x \wedge e^5 +c_4 e^1 \wedge {\text {d}}x \wedge e^6 + c_6 e^{135} +c_7 e^{136} \\ &{}\quad +c_8 e^{145} + c_9 e^{146} - c_8 {\text {d}}x \wedge e^{35} -c_9 {\text {d}}x \wedge e^{36} \\ &{}\quad +c_6 {\text {d}}x \wedge e^{45} + c_7 {\text {d}}x \wedge e^{46} +c_{17} e^{345} +c_{18} e^{346}, \end{array} \end{aligned}$$
for some \(c_3, c_4, c_6, c_7, c_8, c_9, c_{17}, c_{18} \in \mathbb {R}\). But \(c_i = \lim _{t \rightarrow -1} p_i(t)=0\) for \(i=6, 7, 8, 9\). Therefore, one can easily compute that
$$\begin{aligned} \lambda |_{t=-1}= (c_{18} c_3 - c_{17} c_4)^2 \ge 0. \end{aligned}$$
This concludes case (a.1).
We note that it is possible to reach a contradiction by just studying the behavior around one of the singular orbits. However, if we do not use the information coming from Michelsohn's obstruction, the computations get significantly more complicated. The main point is that from \(d \psi _-=0\) and using the stability condition \(\lambda <0\), we get \(p_{10}=0\). If we assume this too, the 3-form \(\psi _+\) at \(t=-1\) can be written as
$$\begin{aligned} \begin{array}{ll} \rho &{}= c_1 e^{1} \wedge {\text {d}}x \wedge e^3 + c_2 e^{1} \wedge {\text {d}}x \wedge e^4 + c_3 e^{1} \wedge {\text {d}}x \wedge e^5 + c_4 e^{1} \wedge {\text {d}}x \wedge e^6 \\ &{}\quad + c_5 e^{134} + c_6 e^{135} + c_7 e^{136} + c_8 e^{145} + c_9 e^{146} \\ &{}\quad + c_{11} {\text {d}}x \wedge e^{34} + c_{12} {\text {d}}x \wedge e^{35} + c_{13} {\text {d}}x \wedge e^{36} + c_{14} {\text {d}}x \wedge e^{45} + c_{15} {\text {d}}x \wedge e^{46} \\ &{}\quad + c_{17} e^{345} + c_{18} e^{346}, \end{array} \end{aligned}$$
for some \(c_i \in \mathbb {R}\), \(i=1,\ldots ,18, i\ne 10,16\). Then, once again we find that \(\lambda |_{t=-1}= (c_{18} c_3 - c_{17} c_4)^2 \ge 0\) which finishes the case.
We also note that in case (a.1) and when \(\mathfrak {h}= \mathbb {R}\), we can remove the hypothesis of simple connectedness in the non-compact case and still get a non-existence result. Let M be a six-dimensional non-compact cohomogeneity one manifold under the almost effective action of a connected Lie group G and let K, H be the principal and singular isotropy groups, respectively, with \(\left( \mathfrak {g}, \mathfrak {h}, \mathfrak {k} \right) =\left( \mathfrak {su}(2)\oplus 2\mathbb {R},\mathbb {R},\{0\} \right) \). Then, M admits no G-invariant balanced non-Kähler \(\text {SU}(3)\)-structures.
From Theorem B, we get the following Corollary.
Corollary 5.2
There is no non-Kähler balanced \( \text {SU}(3)\)-structure on \(S^3 \times S^3\) which is invariant under a cohomogeneity one action.
In the non-simply-connected case, by Theorem A, we can discard cases (b.1) and (c.3), as these do not admit local solutions to conditions (1)–(7). Moreover, as observed in [26, Section 3], one can also rule out cases (b.3) and (c.2), as the G-action would not be almost effective, as well as case (c.1) since it would give rise to a three-dimensional J-invariant subspace, a contradiction.
Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.
Alexandrino, M.M., Bettiol, R.G.: Lie Groups and Geometric Aspects of Isometric Actions. Springer, Berlin (2015)
Alexandrov, B., Ivanov, S.: Vanishing theorems on Hermitian manifolds. Differ. Geom. Appl. 14, 251–265 (2001)
Bedulli, L., Vezzoni, L.: The Ricci tensor of SU(3)-manifolds. J. Geom. Phys. 57, 1125–1146 (2007)
Bedulli, L., Vezzoni, L.: A parabolic flow of balanced metrics. J. Reine Angew. Math. 723, 79–99 (2017)
Bérard Bergery, L.: Sur de nouvelles variétés riemanniennes d'Einstein. Publ. Inst. É. Cartan 6, 1–60 (1982)
Fei, T.: A construction of non-Kähler Calabi–Yau manifolds and new solutions to the Strominger system. Adv. Math. 302, 529–550 (2016)
Fei, T., Huang, Z., Picard, S.: A construction of infinitely many solutions to the Strominger system. J. Differ. Geom. 117, 23–39 (2021)
Fei, T., Yau, S.-T.: Invariant solutions to the Strominger system on complex Lie groups and their quotients. Commun. Math. Phys. 338, 1183–1195 (2015)
Fernández, M., Tomassini, A., Ugarte, L., Villacampa, R.: Balanced Hermitian metrics from SU(2)-structures. J. Math. Phys. 50, 033507 (2009)
Fino, A., Grantcharov, G., Vezzoni, L.: Astheno-Kähler and balanced structures on fibrations. Int. Math. Res. Not. 22, 7093–7117 (2019)
Fino, A., Vezzoni, L.: Special Hermitian metrics on compact solvmanifolds. J. Geom. Phys. 91, 40–53 (2015)
Fino, A., Vezzoni, L.: On the existence of balanced and SKT metrics on nilmanifolds. Proc. Am. Math. Soc. 144, 2455–2459 (2016)
Fu, J., Li, J., Yau, S.-T.: Balanced metrics on non-Kähler Calabi–Yau threefolds. J. Differ. Geom. 90, 81–129 (2012)
Fu, J.-X., Tseng, L.-S., Yau, S.-T.: Local heterotic torsional models. Commun. Math. Phys. 289, 1151–1169 (2009)
Fu, J.-X., Yau, S.-T.: The theory of superstring with flux on non-Kähler manifolds and the complex Monge–Ampére equation. J. Differ. Geom. 78, 369–428 (2008)
Garcia-Fernandez, M.: T-dual solutions of the Hull–Strominger system on non-Kähler threefolds, arXiv:1810.04740 (2018)
Grantcharov, D., Grantcharov, G., Poon, Y.S.: Calabi–Yau connections with torsion on toric bundles. J. Differ. Geom. 78, 13–32 (2008)
Grantcharov, G.: Geometry of compact complex homogeneous spaces with vanishing first Chern class. Adv. Math. 226, 3136–3159 (2011)
Hitchin, N.: The geometry of three-forms in six dimensions. J. Differ. Geom. 55, 547–576 (2000)
Hoelscher, C.A.: Classification of cohomogeneity one manifolds in low dimensions. Pac. J. Math. 246, 129–185 (2010)
Hoelscher, C.A.: Diffeomorphism type of six-dimensional cohomogeneity one manifolds. Ann. Global Anal. Geom. 38, 1–9 (2010)
Hull, C.: Superstring compactifications with torsion and space-time supersymmetry. In: Turin 1985 Proceedings Superunification and Extra Dimensions, 347–375 (1986)
Michelsohn, M.L.: On the existence of special metrics in complex geometry. Acta Math. 149, 261–295 (1982)
Otal, A., Ugarte, L., Villacampa, R.: Invariant solutions to the Strominger system and the heterotic equations of motion. Nucl. Phys. B 920, 442–474 (2017)
Phong, D.H., Picard, S., Zhang, X.: A flow of conformally balanced metrics with Kähler fixed points. Math. Ann. 374, 2005–2040 (2019)
Podestà, F., Spiro, A.: 6-dimensional nearly Kähler manifolds of cohomogeneity one. J. Geom. Phys. 60, 156–164 (2010)
Pujia, M.: The Hull–Strominger system and the Anomaly flow on a class of solvmanifolds, arXiv:2103.09854
Pujia, M., Ugarte, L.: The Anomaly flow on nilmanifolds, arXiv:2004.06744
Reichel, W.: Über die trilinearen alternierenden Formen in 6 und 7 Veränderlichen. Ph.D. thesis, Greifswald (1907)
Strominger, A.: Superstrings with torsion. Nucl. Phys. B 274, 253–284 (1986)
Ugarte, L., Villacampa, R.: Balanced Hermitian geometry on 6-dimensional nilmanifolds. Forum Math. 27, 1025–1070 (2015)
Verdiani, L., Ziller, W.: Smoothness Conditions in Cohomogeneity one manifolds, Transform. Groups, arXiv:1804.04680 (to appear)
Wang, H.-C.: Complex parallisable manifolds. Proc. Am. Math. Soc. 5, 771–776 (1954)
Ziller, W.: On the geometry of cohomogeneity one manifolds with positive curvature. In: Riemannian Topology and Geometric Structures on Manifolds, Progress in Mathematics 271, Birkhäuser (2009)
The first named author wants to thank Andrew Dancer and Jason Lotay for introducing the problem and their support and help with it. The second named author would like to thank Lucio Bedulli for introducing the problem and for useful conversations and comments and Anna Fino for her constant support, encouragement and patient guidance. The second named author would like to thank also Alberto Raffero and Fabio Podestà for helpful discussions and remarks.
Mathematical Institute, University of Oxford, Woodstock Road, Oxford, OX2 6GG, UK
Izar Alonso
Dipartimento di Matematica "G. Peano", Università degli Studi di Torino, Via Carlo Alberto 10, 10123, Turin, Italy
Francesca Salvatore
The first author was supported by the EPSRC. The second author was supported by GNSAGA of INdAM.
Alonso, I., Salvatore, F. On the existence of balanced metrics on six-manifolds of cohomogeneity one. Ann Glob Anal Geom (2021). https://doi.org/10.1007/s10455-021-09807-z
Carlo Severini
Carlo Severini (10 March 1872 – 11 May 1951) was an Italian mathematician: he was born in Arcevia (Province of Ancona) and died in Pesaro. Severini, independently from Dmitri Fyodorovich Egorov, proved and published earlier a proof of the theorem now known as Egorov's theorem.
• Born: 10 March 1872, Arcevia (Ancona)
• Died: 11 May 1951 (aged 79), Pesaro
• Nationality: Italian
• Alma mater: Università di Bologna
• Known for: Severini–Egorov theorem
• Fields: Real analysis
• Institutions: Università di Bologna, University of Catania, University of Genova
• Doctoral advisor: Salvatore Pincherle
Biography
He graduated in Mathematics from the University of Bologna on November 30, 1897:[1][2] the title of his "Laurea" thesis was "Sulla rappresentazione analitica delle funzioni arbitrarie di variabili reali".[3] After obtaining his degree, he worked in Bologna as an assistant to the chair of Salvatore Pincherle until 1900.[4] From 1900 to 1906, he was a senior high school teacher, first teaching in the Institute of Technology of La Spezia and then in the lyceums of Foggia and of Turin;[5] then, in 1906 he became full professor of Infinitesimal Calculus at the University of Catania. He worked in Catania until 1918, then he went to the University of Genova, where he stayed until his retirement in 1942.[5]
Work
He authored more than 60 papers, mainly in the areas of real analysis, approximation theory and partial differential equations, according to Tricomi (1962). His main contributions belong to the following fields of mathematics:[6]
Approximation theory
In this field, Severini proved a generalized version of the Weierstrass approximation theorem. Precisely, he extended the original result of Karl Weierstrass to the class of bounded locally integrable functions, which is a class including particular discontinuous functions as members.[7]
Measure theory and integration
Severini proved Egorov's theorem one year earlier than Dmitri Egorov[8] in the paper (Severini 1910), whose main theme is however sequences of orthogonal functions and their properties.[9]
Partial differential equations
Severini proved an existence theorem for the Cauchy problem for the non linear hyperbolic partial differential equation of first order
$\left\{{\begin{array}{lc}{\frac {\partial u}{\partial x}}=f\left(x,y,u,{\frac {\partial u}{\partial y}}\right)&(x,y)\in \mathbb {R} ^{+}\times [a,b]\\u(0,y)=U(y)&y\in [a,b]\Subset \mathbb {R} \end{array}}\right.,$
assuming that the Cauchy data $U$ (defined on the bounded interval $[a,b]$) and the function $f$ have Lipschitz continuous first order partial derivatives,[10] together with the obvious requirement that the set $\scriptstyle \{(x,y,z,p)=(0,y,U(y),U^{\prime }(y));y\in [a,b]\}$ is contained in the domain of $f$.[11]
Real analysis and unfinished works
According to Straneo (1952, p. 99), he worked also on the foundations of the theory of real functions.[12] Severini also left an unpublished and unfinished treatise on the theory of real functions, whose title was planned to be "Fondamenti dell'analisi nel campo reale e i suoi sviluppi".[13]
Selected publications
• Severini, Carlo (1897) [1897-1898], "Sulla rappresentazione analitica delle funzioni reali discontinue di variabile reale", Atti della Reale Accademia delle Scienze di Torino. (in Italian), 33: 1002–1023, JFM 29.0354.02. In the paper "On the analytic representation of discontinuous real functions of a real variable" (English translation of title) Severini extends the Weierstrass approximation theorem to a class of functions which can have particular kind of discontinuities.
• Severini, C. (1910), "Sulle successioni di funzioni ortogonali", Atti dell'Accademia Gioenia, serie 5a (in Italian), 3 (5): Memoria XIII, 1–7, JFM 41.0475.04. "On sequences of orthogonal functions" (English translation of title) contains Severini's most known result, i.e. the Severini–Egorov theorem.
See also
• Hyperbolic partial differential equation
• Orthogonal functions
• Severini-Egorov theorem
• Weierstrass approximation theorem
Notes
1. According to the summary of his student file available from the Archivio Storico dell'Università di Bologna (2004) (an electronic version of the archives of the University of Bologna).
2. The content of this section is based on references (Tricomi 1962) and (Straneo 1952): this last one also refers that he was married and had several children, however without giving any other detail.
3. An English translation reads as "On the Analytic Representation of Arbitrary Functions of Real variables"; despite the similarities in the title and the same year of publication, the biographical sources do not say if the paper (Severini 1897) is somewhat related to his thesis.
4. The 1897–1898 yearbook of the university already lists him between the assistant professors.
5. According to Straneo (1952, p. 98).
6. Only his most known results are described in the following sections: Straneo (1952) reviews his research in greater detail.
7. According to Straneo (1952), the result is given in various papers, source (Severini 1897) perhaps being the most accessible of them.
8. Egorov's proof is given in the paper (Egoroff 1911).
9. Also, according to Straneo (1952, p. 101), Severini, while acknowledging his own priority in the publication of the result, was unwilling to disclose it publicly: it was Leonida Tonelli who, in the note (Tonelli 1924), credited him the priority for the first time.
10. This means that f belongs to the class $C^{(1,1)}$.
11. For more details about his researches in this field, see (Cinquini-Cibrario & Cinquini 1964) and the references cited therein
12. Straneo (1952, p. 99) lists Severini's researches on this field under as "Fondamenti dell'analisi infinitesimale (Foundations of infinitesimal analysis)": however, the topics covered range from the theory of integration to absolutely continuous functions and to operations on series of real functions.
13. "Foundations of Analysis on the Real Field and its Developments": again according to Straneo (1952, p. 101), the treatise would have included his later original results and covered all the fundamental topics required for the study of functional analysis on the real field.
References
Biographical and general references
• Archivio Storico dell'Università di Bologna (2004) [1897], "Carlo Severini", Fascicoli degli studenti, Fascicolo della Facoltà di Scienze Fisiche Matematiche Naturali n° (in Italian), 2843, archived from the original on March 10, 2012, retrieved March 1, 2011. A very short summary of the student file of Carlo Severini, giving however useful information about his laurea.
• Straneo, Paolo (1952), "Carlo Severini", Bollettino della Unione Matematica Italiana, Serie 3 (in Italian), 7 (3): 98–101, MR 0050531, available from the Biblioteca Digitale Italiana di Matematica. The obituary of Carlo Severini.
• Tonelli, Leonida (1924), "Su una proposizione fondamentale dell'analisi" [On a fundamental proposition of analysis], Bollettino della Unione Matematica Italiana, Serie 2 (in Italian), 3: 103–104, JFM 50.0192.01. In this short note Leonida Tonelli credits Severini for the first proof of Severini–Egorov theorem.
• Tricomi, F. G. (1962), "Carlo Severini", Matematici italiani del primo secolo dello stato unitario, Memorie dell'Accademia delle Scienze di Torino. Classe di Scienze fisiche matematiche e naturali. Serie IV (in Italian), vol. I, Torino, p. 120, Zbl 0132.24405, archived from the original on 2011-01-11, retrieved 2010-05-21. "Italian mathematicians of the first century of the unitary state" is an important historical memoir giving brief biographies of the Italian mathematicians who worked and lived between 1861 and 1961.
• Università di Bologna (1898), "Facoltà di Scienze Fisiche, Matematiche e Naturali. Assistenti", Annuario della Regia Università di Bologna (in Italian), Bologna: Premiato Stabilimento Tipografico Succ. Monti, p. 170.
Scientific references
• Cinquini-Cibrario, M.; Cinquini, S. (1964), Equazioni a derivate parziali di tipo iperbolico [Partial differential equations of hyperbolic type], Monografie matematiche del Consiglio Nazionale delle Ricerche (in Italian), vol. 12, Roma: Edizioni Cremonese, pp. VIII+552, MR 0203199, Zbl 0145.35404. A monograph surveying the theory of hyperbolic equations up to its state of the art in the early 1960s, published by the Consiglio Nazionale delle Ricerche.
• Egoroff, D. Th. (1911), "Sur les suites des fonctions mesurables", Comptes rendus hebdomadaires des séances de l'Académie des sciences (in French), 152: 244–246, JFM 42.0423.01, available at Gallica.
External links
• Guerraggio, Angelo; Nastasi, Pietro; Tricomi, Francesco (2008–2010), Carlo Severini (1872 – 1951) (in Italian), retrieved March 2, 2011. Available from the Edizione Nazionale Mathematica Italiana.
Glossary of group theory
A group is a set together with an associative operation which admits an identity element and such that every element has an inverse.
For general description of the topic, see group theory.
See also: list of group theory topics
Throughout the article, we use $e$ to denote the identity element of a group.
A
abelian group
A group $(G,*)$ is abelian if $*$ is commutative, i.e. $g*h=h*g$ for all $g$,$h$ ∈ $G$. Likewise, a group is nonabelian if there is some pair $g$,$h$ ∈ $G$ for which this relation fails to hold.
ascendant subgroup
A subgroup H of a group G is ascendant if there is an ascending subgroup series starting from H and ending at G, such that every term in the series is a normal subgroup of its successor. The series may be infinite. If the series is finite, then the subgroup is subnormal.
automorphism
An automorphism of a group is an isomorphism of the group to itself.
C
center of a group
The center of a group G, denoted Z(G), is the set of those group elements that commute with all elements of G, that is, the set of all h ∈ G such that hg = gh for all g ∈ G. Z(G) is always a normal subgroup of G. A group G is abelian if and only if Z(G) = G.
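As a concrete illustration (not part of the glossary proper), here is a minimal brute-force computation in Python of the center of the symmetric group S3, with permutations encoded as tuples; it confirms that S3 is centerless, in the sense of the next entry.

```python
# Illustrative sketch: the center of S3 by brute force.
# A permutation p is a tuple with p[i] the image of i.
from itertools import permutations

def compose(p, q):
    """(p o q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(len(q)))

G = list(permutations(range(3)))  # the six elements of S3
center = [z for z in G if all(compose(z, g) == compose(g, z) for g in G)]
print(center)  # [(0, 1, 2)]: only the identity, so S3 is centerless
```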
centerless group
A group G is centerless if its center Z(G) is trivial.
central subgroup
A subgroup of a group is a central subgroup of that group if it lies inside the center of the group.
characteristic subgroup
A subgroup of a group is a characteristic subgroup of that group if it is mapped to itself by every automorphism of the parent group.
characteristically simple group
A group is said to be characteristically simple if it has no proper nontrivial characteristic subgroups.
class function
A class function on a group G is a function that is constant on the conjugacy classes of G.
class number
The class number of a group is the number of its conjugacy classes.
commutator
The commutator of two elements g and h of a group G is the element [g, h] = g−1h−1gh. Some authors define the commutator as [g, h] = ghg−1h−1 instead. The commutator of two elements g and h is equal to the group's identity if and only if g and h commute, that is, if and only if gh = hg.
commutator subgroup
The commutator subgroup or derived subgroup of a group is the subgroup generated by all the commutators of the group.
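A small Python sketch (illustrative only, with the same tuple encoding of permutations as above): closing the set of commutators of S3 under the group operation gives its commutator subgroup, which is the alternating group A3.

```python
# Illustrative sketch: the commutator subgroup of S3 by brute force.
from itertools import permutations

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

def commutator(g, h):
    """[g, h] = g^-1 h^-1 g h."""
    return compose(compose(inverse(g), inverse(h)), compose(g, h))

G = list(permutations(range(3)))
derived = {commutator(g, h) for g in G for h in G}

# close the set of commutators under composition
changed = True
while changed:
    changed = False
    for a in list(derived):
        for b in list(derived):
            if compose(a, b) not in derived:
                derived.add(compose(a, b))
                changed = True

print(sorted(derived))  # the three even permutations: [S3, S3] = A3
```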
composition series
A composition series of a group G is a subnormal series of finite length
$1=H_{0}\triangleleft H_{1}\triangleleft \cdots \triangleleft H_{n}=G,$
with strict inclusions, such that each Hi is a maximal strict normal subgroup of Hi+1. Equivalently, a composition series is a subnormal series such that each factor group Hi+1 / Hi is simple. The factor groups are called composition factors.
conjugacy-closed subgroup
A subgroup of a group is said to be conjugacy-closed if any two elements of the subgroup that are conjugate in the group are also conjugate in the subgroup.
conjugacy class
The conjugacy classes of a group G are those subsets of G containing group elements that are conjugate with each other.
conjugate elements
Two elements x and y of a group G are conjugate if there exists an element g ∈ G such that g−1xg = y. The element g−1xg, denoted xg, is called the conjugate of x by g. Some authors define the conjugate of x by g as gxg−1. This is often denoted gx. Conjugacy is an equivalence relation. Its equivalence classes are called conjugacy classes.
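For instance (illustrative only), the following Python sketch partitions S3 into its conjugacy classes by brute force; there are three classes (the identity, the three transpositions and the two 3-cycles), so the class number of S3 is 3.

```python
# Illustrative sketch: the conjugacy classes of S3 by brute force.
from itertools import permutations

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

G = list(permutations(range(3)))
classes, seen = [], set()
for x in G:
    if x in seen:
        continue
    cls = {compose(compose(inverse(g), x), g) for g in G}  # all conjugates g^-1 x g
    classes.append(sorted(cls))
    seen |= cls

for cls in classes:
    print(len(cls), cls)  # classes of sizes 1, 3 and 2
```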
conjugate subgroups
Two subgroups H1 and H2 of a group G are conjugate subgroups if there is a g ∈ G such that gH1g−1 = H2.
contranormal subgroup
A subgroup of a group G is a contranormal subgroup of G if its normal closure is G itself.
cyclic group
A cyclic group is a group that is generated by a single element, that is, a group such that there is an element g in the group such that every other element of the group may be obtained by repeatedly applying the group operation to g or its inverse.
D
derived subgroup
Synonym for commutator subgroup.
direct product
The direct product of two groups G and H, denoted G × H, is the cartesian product of the underlying sets of G and H, equipped with a component-wise defined binary operation (g1, h1) · (g2, h2) = (g1 ⋅ g2, h1 ⋅ h2). With this operation, G × H itself forms a group.
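A short Python sketch (illustrative only) of the definition: the direct product Z2 × Z3 with the component-wise operation. Checking element orders shows that it contains an element of order 6, so Z2 × Z3 is cyclic and hence isomorphic to Z6.

```python
# Illustrative sketch: the direct product Z2 x Z3 with component-wise addition.
from itertools import product

def op(x, y):
    """Component-wise operation: addition mod 2 in the first factor, mod 3 in the second."""
    return ((x[0] + y[0]) % 2, (x[1] + y[1]) % 3)

G = list(product(range(2), range(3)))  # the six elements of Z2 x Z3

def order(g):
    identity, x, n = (0, 0), g, 1
    while x != identity:
        x, n = op(x, g), n + 1
    return n

print({g: order(g) for g in G})  # (1, 1) has order 6, so the product is cyclic
```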
F
factor group
Synonym for quotient group.
FC-group
A group is an FC-group if every conjugacy class of its elements has finite cardinality.
finite group
A finite group is a group of finite order, that is, a group with a finite number of elements.
finitely generated group
A group G is finitely generated if there is a finite generating set, that is, if there is a finite set S of elements of G such that every element of G can be written as the combination of finitely many elements of S and of inverses of elements of S.
G
generating set
A generating set of a group G is a subset S of G such that every element of G can be expressed as a combination (under the group operation) of finitely many elements of S and inverses of elements of S.
group automorphism
See automorphism.
group homomorphism
See homomorphism.
group isomorphism
See isomorphism.
H
homomorphism
Given two groups (G, ∗) and (H, ·), a homomorphism from G to H is a function h : G → H such that for all a and b in G, h(a∗b) = h(a) · h(b).
I
index of a subgroup
The index of a subgroup H of a group G, denoted |G : H| or [G : H] or (G : H), is the number of cosets of H in G. For a normal subgroup N of a group G, the index of N in G is equal to the order of the quotient group G / N. For a finite subgroup H of a finite group G, the index of H in G is equal to the quotient of the orders of G and H.
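Two quick examples: in the additive group of integers, the subgroup of even integers has exactly two cosets (the evens and the odds), so its index is 2 even though both groups are infinite; and for finite groups the quotient-of-orders formula gives, for instance,
$$|\mathbb{Z}:2\mathbb{Z}|=2,\qquad |G:H|=\frac{|G|}{|H|}=\frac{12}{4}=3\ \text{ when } |G|=12,\ |H|=4.$$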
isomorphism
Given two groups (G, ∗) and (H, ·), an isomorphism between G and H is a bijective homomorphism from G to H, that is, a one-to-one correspondence between the elements of the groups in a way that respects the given group operations. Two groups are isomorphic if there exists a group isomorphism mapping from one to the other. Isomorphic groups can be thought of as essentially the same, only with different labels on the individual elements.
L
lattice of subgroups
The lattice of subgroups of a group is the lattice defined by its subgroups, partially ordered by set inclusion.
locally cyclic group
A group is locally cyclic if every finitely generated subgroup is cyclic. Every cyclic group is locally cyclic, and every finitely-generated locally cyclic group is cyclic. Every locally cyclic group is abelian. Every subgroup, every quotient group and every homomorphic image of a locally cyclic group is locally cyclic.
N
normal closure
The normal closure of a subset S of a group G is the intersection of all normal subgroups of G that contain S.
normal core
The normal core of a subgroup H of a group G is the largest normal subgroup of G that is contained in H.
normalizer
For a subset S of a group G, the normalizer of S in G, denoted NG(S), is the subgroup of G defined by
$\mathrm {N} _{G}(S)=\{g\in G\mid gS=Sg\}.$
normal series
A normal series of a group G is a sequence of normal subgroups of G such that each element of the sequence is a normal subgroup of the next element:
$1=A_{0}\triangleleft A_{1}\triangleleft \cdots \triangleleft A_{n}=G$
with
$A_{i}\triangleleft G$.
normal subgroup
A subgroup N of a group G is normal in G (denoted $N\triangleleft G$) if the conjugation of an element n of N by an element g of G is always in N, that is, if for all g ∈ G and n ∈ N, gng−1 ∈ N. A normal subgroup N of a group G can be used to construct the quotient group G/N (G mod N).
no small subgroup
A topological group has no small subgroup if there exists a neighborhood of the identity element that does not contain any nontrivial subgroup.
O
orbit
Consider a group G acting on a set X. The orbit of an element x in X is the set of elements in X to which x can be moved by the elements of G. The orbit of x is denoted by G⋅x
order of a group
The order of a group $(G,*)$ is the cardinality (i.e. number of elements) of $G$. A group with finite order is called a finite group.
order of a group element
The order of an element g of a group G is the smallest positive integer n such that $g^{n}=e$. If no such integer exists, then the order of g is said to be infinite. The order of a finite group is divisible by the order of every element.
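As a quick illustration (the helper function below is my own), the orders of the elements of the multiplicative group of integers modulo 7 can be computed directly; the group has order 6 and every element order divides 6:

```python
def element_order(g, n):
    """Order of g in the multiplicative group of integers mod n (requires gcd(g, n) == 1)."""
    k, x = 1, g % n
    while x != 1:
        x = (x * g) % n
        k += 1
    return k

print({g: element_order(g, 7) for g in range(1, 7)})
# {1: 1, 2: 3, 3: 6, 4: 3, 5: 6, 6: 2}
```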
P
perfect core
The perfect core of a group is its largest perfect subgroup.
perfect group
A perfect group is a group that is equal to its own commutator subgroup.
periodic group
A group is periodic if every group element has finite order. Every finite group is periodic.
permutation group
A permutation group is a group whose elements are permutations of a given set M (the bijective functions from set M to itself) and whose group operation is the composition of those permutations. The group consisting of all permutations of a set M is the symmetric group of M.
p-group
If p is a prime number, then a p-group is one in which the order of every element is a power of p. A finite group is a p-group if and only if the order of the group is a power of p.
p-subgroup
A subgroup which is also a p-group. The study of p-subgroups is the central object of the Sylow theorems.
Q
quotient group
Given a group $G$ and a normal subgroup $N$ of $G$, the quotient group is the set $G$/$N$ of left cosets $\{aN:a\in G\}$ together with the operation $aN*bN=abN.$ The relationship between normal subgroups, homomorphisms, and factor groups is summed up in the fundamental theorem on homomorphisms.
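A small worked example, written additively: take $G=\mathbb{Z}$ and $N=3\mathbb{Z}$; then
$$\mathbb{Z}/3\mathbb{Z}=\{\,3\mathbb{Z},\;1+3\mathbb{Z},\;2+3\mathbb{Z}\,\},\qquad (1+3\mathbb{Z})+(2+3\mathbb{Z})=3+3\mathbb{Z}=3\mathbb{Z},$$
a cyclic group of order 3.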
R
real element
An element g of a group G is called a real element of G if it belongs to the same conjugacy class as its inverse, that is, if there is a h in G with $g^{h}=g^{-1}$, where $g^{h}$ is defined as h−1gh. An element of a group G is real if and only if for all representations of G the trace of the corresponding matrix is a real number.
S
serial subgroup
A subgroup H of a group G is a serial subgroup of G if there is a chain C of subgroups of G from H to G such that for each pair of consecutive subgroups X and Y in C, X is a normal subgroup of Y. If the chain is finite, then H is a subnormal subgroup of G.
simple group
A simple group is a nontrivial group whose only normal subgroups are the trivial group and the group itself.
subgroup
A subgroup of a group G is a subset H of the elements of G that itself forms a group when equipped with the restriction of the group operation of G to H×H. A subset H of a group G is a subgroup of G if and only if it is nonempty and closed under products and inverses, that is, if and only if for every a and b in H, ab and a−1 are also in H.
subgroup series
A subgroup series of a group G is a sequence of subgroups of G such that each element in the series is a subgroup of the next element:
$1=A_{0}\leq A_{1}\leq \cdots \leq A_{n}=G.$
subnormal subgroup
A subgroup H of a group G is a subnormal subgroup of G if there is a finite chain of subgroups of the group, each one normal in the next, beginning at H and ending at G.
symmetric group
Given a set M, the symmetric group of M is the set of all permutations of M (the set all bijective functions from M to M) with the composition of the permutations as group operation. The symmetric group of a finite set of size n is denoted Sn. (The symmetric groups of any two sets of the same size are isomorphic.)
T
torsion group
Synonym for periodic group.
transitively normal subgroup
A subgroup of a group is said to be transitively normal in the group if every normal subgroup of the subgroup is also normal in the whole group.
trivial group
A trivial group is a group consisting of a single element, namely the identity element of the group. All such groups are isomorphic, and one often speaks of the trivial group.
Basic definitions
Subgroup. A subset $H$ of a group $(G,*)$ which remains a group when the operation $*$ is restricted to $H$ is called a subgroup of $G$.
Given a subset $S$ of $G$, we denote by $<S>$ the smallest subgroup of $G$ containing $S$; $<S>$ is called the subgroup of $G$ generated by $S$.
Normal subgroup. $H$ is a normal subgroup of $G$ if for all $g$ in $G$ and $h$ in $H$, $g*h*g^{-1}$also belongs to $H$.
Both subgroups and normal subgroups of a given group form a complete lattice under inclusion of subsets; this property and some related results are described by the lattice theorem.
Group homomorphism. These are functions $f\colon (G,*)\to (H,\times )$ that have the special property that
$f(a*b)=f(a)\times f(b),$
for any elements $a$ and $b$ of $G$.
Kernel of a group homomorphism. It is the preimage of the identity in the codomain of a group homomorphism. Every normal subgroup is the kernel of a group homomorphism and vice versa.
Group isomorphism. Group homomorphisms that have inverse functions. The inverse of an isomorphism, it turns out, must also be a homomorphism.
Isomorphic groups. Two groups are isomorphic if there exists a group isomorphism mapping from one to the other. Isomorphic groups can be thought of as essentially the same, only with different labels on the individual elements. One of the fundamental problems of group theory is the classification of groups up to isomorphism.
Direct product, direct sum, and semidirect product of groups. These are ways of combining groups to construct new groups; please refer to the corresponding links for explanation.
Types of groups
Finitely generated group. If there exists a finite set $S$ such that $<S>=G,$ then $G$ is said to be finitely generated. If $S$ can be taken to have just one element, $G$ is a cyclic group of finite order, an infinite cyclic group, or possibly a group $\{e\}$ with just one element.
Simple group. Simple groups are those groups having only $\{e\}$ and themselves as normal subgroups. The name is misleading because a simple group can in fact be very complex. An example is the monster group, whose order is about $10^{54}$. Every finite group is built up from simple groups via group extensions, so the study of finite simple groups is central to the study of all finite groups. The finite simple groups are known and classified.
The structure of any finite abelian group is relatively simple; every finite abelian group is the direct sum of cyclic p-groups. This can be extended to a complete classification of all finitely generated abelian groups, that is all abelian groups that are generated by a finite set.
The situation is much more complicated for the non-abelian groups.
Free group. Given any set $A$, one can define a group as the smallest group containing the free semigroup of $A$. The group consists of the finite strings (words) that can be composed from elements of $A$, together with the formal inverses of those elements, which are needed to form a group. Multiplication of strings is defined by concatenation, for instance $(abb)*(bca)=abbbca.$
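A minimal sketch of this multiplication (the functions are my own, and the convention that an uppercase letter denotes the formal inverse of the corresponding lowercase generator is an assumption made only for this illustration):

```python
def reduce_word(word):
    """Freely reduce a word: a letter and the adjacent opposite-case letter
    of the same name (its formal inverse) cancel, e.g. 'aA' -> ''."""
    out = []
    for ch in word:
        if out and out[-1].swapcase() == ch:
            out.pop()          # cancel x·x^-1 or x^-1·x
        else:
            out.append(ch)
    return "".join(out)

def multiply(u, v):
    """Product in the free group: concatenate, then freely reduce."""
    return reduce_word(u + v)

print(multiply("abb", "bca"))   # 'abbbca' (no cancellation, as in the example above)
print(multiply("abB", "Acd"))   # 'cd'     (b cancels B, then a cancels A)
```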
Every group $(G,*)$ is basically a factor group of a free group generated by $G$. Please refer to presentation of a group for more explanation. One can then ask algorithmic questions about these presentations, such as:
• Do these two presentations specify isomorphic groups?; or
• Does this presentation specify the trivial group?
The general case of this is the word problem, and several of these questions are in fact unsolvable by any general algorithm.
General linear group, denoted by GL(n, F), is the group of $n$-by-$n$ invertible matrices, where the elements of the matrices are taken from a field $F$ such as the real numbers or the complex numbers.
Group representation (not to be confused with the presentation of a group). A group representation is a homomorphism from a group to a general linear group. One basically tries to "represent" a given abstract group as a concrete group of invertible matrices which is much easier to study.
See also
• Glossary of Lie groups and Lie algebras
• Glossary of ring theory
• Composition series
• Normal series
Chinese monoid
In mathematics, the Chinese monoid is a monoid generated by a totally ordered alphabet with the relations cba = cab = bca for every a ≤ b ≤ c. An algorithm similar to Schensted's algorithm yields a characterisation of the equivalence classes and a cross-section theorem. It was discovered by Duchamp & Krob (1994) during their classification of monoids with growth similar to that of the plactic monoid, and studied in detail by Julien Cassaigne, Marc Espie, Daniel Krob, Jean-Christophe Novelli, and Florent Hivert in 2001.[1]
The Chinese monoid has a regular language cross-section
$a^{*}\ (ba)^{*}b^{*}\ (ca)^{*}(cb)^{*}c^{*}\cdots $
and hence polynomial growth of dimension ${\frac {n(n+1)}{2}}$.[2]
The Chinese monoid equivalence class of a permutation is the preimage of an involution under the map $w\mapsto w\circ w^{-1}$ where $\circ $ denotes the product in the Iwahori-Hecke algebra with $q_{s}=0$.[3]
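The defining relations can be explored directly by brute force. The sketch below (a naive closure computation of my own, not the efficient Schensted-like algorithm mentioned above) lists the equivalence class of a word over the ordered alphabet a < b < c:

```python
def chinese_class(word):
    """Equivalence class of `word` in the Chinese monoid: close under the
    relations zyx = zxy = yzx (letters x <= y <= z) applied to length-3 factors."""
    def variants(factor):
        x, y, z = sorted(factor)
        forms = {z + y + x, z + x + y, y + z + x}
        return forms if factor in forms else {factor}

    seen, frontier = {word}, [word]
    while frontier:
        w = frontier.pop()
        for i in range(len(w) - 2):
            for f in variants(w[i:i + 3]):
                u = w[:i] + f + w[i + 3:]
                if u not in seen:
                    seen.add(u)
                    frontier.append(u)
    return sorted(seen)

print(chinese_class("cba"))   # ['bca', 'cab', 'cba']
print(chinese_class("cab"))   # ['bca', 'cab', 'cba'] -- the same class
```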
See also
• Plactic monoid
References
1. Cassaigne, Julien; Espie, Marc; Krob, Daniel; Novelli, Jean-Christophe; Hivert, Florent (2001), "The Chinese monoid", International Journal of Algebra and Computation, 11 (3): 301–334, doi:10.1142/S0218196701000425, ISSN 0218-1967, MR 1847182, Zbl 1024.20046
2. Jaszuńska, Joanna; Okniński, Jan (2011), "Structure of Chinese algebras.", J. Algebra, 346 (1): 31–81, arXiv:1009.5847, doi:10.1016/j.jalgebra.2011.08.020, ISSN 0021-8693, S2CID 119280148, Zbl 1246.16022
3. Hamaker, Zachary; Marberg, Eric; Pawlowski, Brendan (2017-05-01). "Involution words II: braid relations and atomic structures". Journal of Algebraic Combinatorics. 45 (3): 701–743. arXiv:1601.02269. doi:10.1007/s10801-016-0722-6. ISSN 1572-9192. S2CID 119330473.
• Duchamp, Gérard; Krob, Daniel (1994), "Plactic-growth-like monoids", Words, languages and combinatorics, II (Kyoto, 1992), World Sci. Publ., River Edge, NJ, pp. 124–142, MR 1351284, Zbl 0875.68720
Outline of probability
Probability is a measure of the likeliness that an event will occur. Probability is used to quantify an attitude of mind towards some proposition whose truth is not certain. The proposition of interest is usually of the form "A specific event will occur." The attitude of mind is of the form "How certain is it that the event will occur?" The certainty that is adopted can be described in terms of a numerical measure, and this number, between 0 and 1 (where 0 indicates impossibility and 1 indicates certainty) is called the probability. Probability theory is used extensively in statistics, mathematics, science and philosophy to draw conclusions about the likelihood of potential events and the underlying mechanics of complex systems.
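As a small numerical illustration of the idea that a probability is a number between 0 and 1, an event's probability can be estimated by its relative frequency over many independent trials (the helper function below is a sketch of my own):

```python
import random

def estimate_probability(event, trials=100_000):
    """Estimate P(event) as the relative frequency of `event` over independent trials."""
    hits = sum(event() for _ in range(trials))
    return hits / trials

# probability of rolling a six with a fair die; the exact value is 1/6 ≈ 0.167
print(estimate_probability(lambda: random.randint(1, 6) == 6))
```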
Introduction
• Probability and randomness.
Basic probability
(Related topics: set theory, simple theorems in the algebra of sets)
Events
• Events in probability theory
• Elementary events, sample spaces, Venn diagrams
• Mutual exclusivity
Elementary probability
• The axioms of probability
• Boole's inequality
Meaning of probability
• Probability interpretations
• Bayesian probability
• Frequency probability
Calculating with probabilities
• Conditional probability
• The law of total probability
• Bayes' theorem
Independence
• Independence (probability theory)
Probability theory
(Related topics: measure theory)
Measure-theoretic probability
• Sample spaces, σ-algebras and probability measures
• Probability space
• Sample space
• Standard probability space
• Random element
• Random compact set
• Dynkin system
• Probability axioms
• Event (probability theory)
• Complementary event
• Elementary event
• "Almost surely"
Independence
• Independence (probability theory)
• The Borel–Cantelli lemmas and Kolmogorov's zero–one law
Conditional probability
• Conditional probability
• Conditioning (probability)
• Conditional expectation
• Conditional probability distribution
• Regular conditional probability
• Disintegration theorem
• Bayes' theorem
• Rule of succession
• Conditional independence
• Conditional event algebra
• Goodman–Nguyen–van Fraassen algebra
Random variables
Discrete and continuous random variables
• Discrete random variables: Probability mass functions
• Continuous random variables: Probability density functions
• Normalizing constants
• Cumulative distribution functions
• Joint, marginal and conditional distributions
Expectation
• Expectation (or mean), variance and covariance
• Jensen's inequality
• General moments about the mean
• Correlated and uncorrelated random variables
• Conditional expectation:
• law of total expectation, law of total variance
• Fatou's lemma and the monotone and dominated convergence theorems
• Markov's inequality and Chebyshev's inequality
Independence
• Independent random variables
Some common distributions
• Discrete:
• constant (see also degenerate distribution),
• Bernoulli and binomial,
• negative binomial,
• (discrete) uniform,
• geometric,
• Poisson, and
• hypergeometric.
• Continuous:
• (continuous) uniform,
• exponential,
• gamma,
• beta,
• normal (or Gaussian) and multivariate normal,
• χ-squared (or chi-squared),
• F-distribution,
• Student's t-distribution, and
• Cauchy.
Some other distributions
• Cantor
• Fisher–Tippett (or Gumbel)
• Pareto
• Benford's law
Functions of random variables
• Sum of normally distributed random variables
• Borel's paradox
Generating functions
(Related topics: integral transforms)
Common generating functions
• Probability-generating functions
• Moment-generating functions
• Laplace transforms and Laplace–Stieltjes transforms
• Characteristic functions
Applications
• A proof of the central limit theorem
Convergence of random variables
(Related topics: convergence)
Modes of convergence
• Convergence in distribution and convergence in probability,
• Convergence in mean, mean square and rth mean
• Almost sure convergence
• Skorokhod's representation theorem
Applications
• Central limit theorem and Laws of large numbers
• Illustration of the central limit theorem and a 'concrete' illustration
• Berry–Esséen theorem
• Law of the iterated logarithm
Stochastic processes
Some common stochastic processes
• Random walk
• Poisson process
• Compound Poisson process
• Wiener process
• Geometric Brownian motion
• Fractional Brownian motion
• Brownian bridge
• Ornstein–Uhlenbeck process
• Gamma process
Markov processes
• Markov property
• Branching process
• Galton–Watson process
• Markov chain
• Examples of Markov chains
• Population processes
• Applications to queueing theory
• Erlang distribution
Stochastic differential equations
• Stochastic calculus
• Diffusions
• Brownian motion
• Wiener equation
• Wiener process
Time series
• Moving-average and autoregressive processes
• Correlation function and autocorrelation
Martingales
• Martingale central limit theorem
• Azuma's inequality
See also
• Catalog of articles in probability theory
• Glossary of probability and statistics
• Notation in probability and statistics
• List of mathematical probabilists
• List of probability distributions
• List of probability topics
• List of scientific journals in probability
• Timeline of probability and statistics
• Topic outline of statistics
How to Build a Mechanically Powered Battery Charger for LED Lighting
{{Statusboxtop}} {{Template:Status-Design}} {{Template:Status-Model}} {{Template:Status-Prototype}} You can help Appropedia by contributing to the next step in this [[OSAT]]'s [[:Category:Status]] {{boxbottom}} = '''Project Description''' = The purpose of this [[Mech425]] project is to design a mechanically powered ([[Human power]]ed) battery charger that will recharge low voltage batteries used for [[LED lighting]]. The reason it charges low voltage batteries is because a LED does not require much voltage to produce light. The lower the rated voltage of the battery that has to be recharged, the less work the human has to do in order to recharge it. The design should be efficient and cost effective to provide a means of charging batteries to those who may not be able to afford electricity{{w|Electricity}} and or those who are in areas where electricity is unavailable. The design should also be made of materials that are easily available. This design utilizes materials that are scavenged from other applications. The proposed design is constructed from an old bicycle rim that is connected to a DC motor from an old printer via a belt that comes from the same printer. When the rim is manually spun it will cause the motor to produce an electric current which is needed to charge a battery. The proposed design is human driven but the overall design could be altered to use a source of power such as the wind to spin the motor, alleviating the need for a human to spin the motor. <!-- ********** RIGHT BOX ********** --> {|style="border:1px solid #A3BFA3; background-color: #E6FFE6; margin-left:.1em; margin-top:10px; -moz-border-radius:15px;" align="right" width="200px" | <big> '''Mechanically Powered Battery Charger''' </big> |- | [[File:The Battery Recharger.JPG|thumb|center|The Prototype]] | [[File:Good Results.JPG|thumb|center|A reading of 3.85 volts]] |- |} __TOC__ = '''Introduction and Theory''' = This section is intended to give some insight into the technical background of the project to help understand the overall design. A battery{{w|Battery_(electricity)}} is a device that converts stored chemical energy into electrical energy, and is a source of direct current{{w|Direct_current}}.<ref> Answers.com,"Storage Battery",http://www.answers.com/topic/battery-electricity,Accessed April 10, 2009 </ref> The applications for a battery are endless, but for the purpose of this project they will be theoretically used as a power source for LED lighting. There are two broad categories of batteries: primary and secondary. Primary batteries are not rechargeable; once the chemical energy in the cell is depleted it cannot be restored by electrical means. Secondary batteries can be recharged by applying electrical energy into the cell.<ref> Wikipedia, "Battery (electricity)", http://en.wikipedia.org/wiki/Battery_(electricity), Accessed April 10, 2009</ref> A battery charger{{w|Battery_charger}} is a device used to put energy into a rechargeable (secondary) battery by forcing an electric current through it.<ref name="WikiBatteryCharger"> Wikipedia, "Battery Charger", http://en.wikipedia.org/wiki/Battery_charger, Accessed April 10, 2009</ref> A mechanically powered battery charger uses human power to generate the electricity needed to create the necessary electric current to charge the battery. == Rotating Electric Machines == There are three basic classifications of rotating electric machines: Direct-current Machines, Synchronous machines and Induction machines.<ref name="Storey"> Storey, N. (2006). 
Electronics A Systems Approach 3rd Edition. Hampshire: Pearson Education.</ref> If a machine converts mechanical energy into electrical energy then the machine is acting as a generator. If the machine converts electrical energy into mechanical energy then the machine is acting as a motor. There are similarities with respect to all rotating electric machines in that the two main components are a stator and a rotor. The rotor rotates inside the stator and is separated by an air gap. The rotor and stator each have a magnetic core and windings to produce a magnetic flux{{w| Magnetic_flux}} (or the stator is a permanent magnet as is the case in the DC motor used in this project). The rotor is fastened upon a bearing-supported surface and is either connected to a prime mover (if the rotating electric machine is a generator) or to a mechanical load (if the rotating electrical machine is a motor).<ref name="Storey"/> The rotor is attached by means of chains, pulleys, belts etc. The speed of the motor is determined by the applied voltage{{w|Voltage}} and the torque of the motor is related to the electric current{{w|Electric_current}}.<ref name="Storey"/> == Producing Electricity with a DC Motor == A DC motor is generally used in applications that require accurate speed control.<ref name="Storey"/> The motor is connected to a power supply which provides electrical current to the windings (coils{{w|Coil}}) in the rotor. When a current is passed through windings, a magnetic field is produced. This magnetic field causes the rotor to spin therefore creating useful mechanical work. These windings may also be used to induce a voltage if the reverse action is taken (rotor is manually spun and the DC motor acts as a generator). This can be explained by Faraday's Law of Induction{{w|Faraday%27s_law_of_induction}} which in short states that a voltage is induced by a changing magnetic field.<ref name="Storey"/> The presence of the magnetic fields makes it possible to produce an electric current and generate power. A magnet also produces a magnetic field where the strength is quantified by: Magnetic flux ϕ and magnetic flux density B, units of weber (Wb) and tesla (T) repectively, where: <math>1 T = \frac{1 Wb}{1 m^2}</math> and <math>1 Wb = \frac{1 V}{1 s}</math> <ref name="Storey"/> A magnetic field is generally represented by lines, and the strength of the magnet can be visualized by analyzing the density of these lines. By definition the magnetic field lines travel from north to south.<ref name="Storey"/> == Charging a Battery == The motor must be able to produce enough voltage to charge the battery. For example if the rated voltage of the battery is 5 V then the motor must be able to produce at least 5 V. It also must be able to generate enough current so that the time required to charge the battery is lowered. The larger the amperage the shorter time will be required to recharge the battery. Rechargeable battery capacity is rated in mAh (milliampere-hours). 
The total capacity of a battery is defined as "C", that is it can supply C mA for 1 hour, or 2C for 30 minutes etc.<ref>Intelligent NiCd/NiMH Battery Charger - Construction Project http://www.angelfire.com/electronic/hayles/charge1.html Accessed: April 09, 2010</ref> The rate of charge is determined by how much electrical current is allowed into the battery by the battery recharger.<ref name="Battery4"/> The charge current depends upon the technology and capacity of the battery being charged.<ref name="WikiBatteryCharger"/> As a general rule, to arrive at an appropraite charge rate, the capacity of the batter should be divided by 10 (this is called the C/10 rate). There are however charge rates as high as C/3, but this charge rate will only be maintained for a short period of time.<ref name="EnergyAlternativesLtd"/> Therefore the fastest amount of time that a battery can be recharged is 3 hours but the recommended charge rate should be used. To find the recommended charge rate of the battery one should contact the battery manufacturer. A battery rated at 150 mAh (the one used in this design) can theoretically sustain a 15 mA discharge current for 10 hours (150 mAh/ 15 mA). Some LEDs have a maximum rated amperage of 15 mA. However it is very important to note that the battery is not overcharged, nor should it be charged at a rate that the battery cannot handle. If the battery is overcharged it may explode. It is important to understand all of the parameters involved in charging a battery before constructing a battery charger. It is also important to note that no battery will last forever as they wear out and will eventually need to be replaced<ref name="EnergyAlternativesLtd"> Energy Alternatives Ltd., "Battery Chargers" http://energyalternatives.ca/SystemDesign/chargers1.html Accessed: April 07, 2010 </ref>, however rechargeable batteries are a great way to reduce cost and waste.<ref name="Battery4"> How Stuff Works, "How Batteries Work",http://electronics.howstuffworks.com/battery4.htm Accessed April 08, 2009</ref> One downside to batteries is that when they do need to be replaced they contain toxic materials and should be disposed of properly.<ref name="EnergyAlternativesLtd"/> To avoid shortening the life of a battery considerably, the battery should not be completely discharged before being recharged.<ref name="Battery4"/> = '''The Prototype''' = The performance of the prototype and a discussion is included in this section. The parts, tools and steps required to construct the prototype design are also included. == Performance and Discussion == The final prototype works although there are many improvements to be made on the design. The most notable design change would be to have a better system to spin the rim, ideally one that was not human powered. The design consists of a wheel that is rotated by hand that is connected to a DC motor. The motor is connected to the battery and when the wheel spins it provides electricity to the charge battery. The battery chosen was a 4.8V 150mAh battery that was turned into a 3.6V 150mAh battery. This means the fastest possible charge rate is 50mAh based on the C/3 rate or lower depending on what charge rate is chosen. To be safe the battery will be aimed to be charged at C/5 (which means it would take 5 hours to charge this battery). Some testing done on the prototype was done and the data shown in table 1 shows the results. The amount of current and voltage through the circuit is proportional to the rotating speed of the rim. 
The motor easily puts out the required amount of voltage needed to charge the battery and exceeds the acceptable charge rate current which is why a resistor must be incorporated into the electrical circuit design. <center> {| class="wikitable" style="border:#A3BFA3; background:#F5FFF5" |+ ''Table 1: Test Results'' |- style="text-align:center; font-weight:bold; background:#CEF2CE;" | Speed of the Rim | Reading Amps (mA) | Reading Volts (V) |- | Maximum achieved | 280 | 8.35 |- | Tolerable, about 2 rim revolutions per second | 110 | 3.60 |- | Slow, about 1 rim revolutions per second | 50 | 1.54 |} </center> The most interesting design change would be to have the wheel spin by some other means than human power. If the design could incorporate a waterwheel or small wind turbine blades to spin the rotor the overall effectiveness of the machine greatly increases. After doing some testing it was found that when the wheel is moving rather slowly it produces enough voltage to charge a 1.5V battery at 50mA. This means that one cell of the battery used in this prototype (1.2V) would take only 3 hours to charge permitting it could charge at a rate of 50mAh, or it could charge a larger capacity battery for example a 500mAh battery in 10 hours of spinning. This slow speed could possibly be obtained by turning the rim of the bicycle into a waterwheel design. This would greatly improve the overall effectiveness of the system in that someone would not have to waste time turning the rim and could focus on other tasks. A more efficient human powered design would be a pedal powered design. It would allow for a significant increase in power to be input into the system and therefore an increase in the amount of electricity that can be produced. Rather than using a voltmeter to regulate the voltage in the system, a simple voltage regulator chip could be implemented. Since the project is intended to be a practical technology this may not be a viable design change. And if there is a more constant source of spinning (such as a water wheel) one could regulate the voltage by the rate of speed the rim turns and the gear ratio from the rim to the motor. More voltage and current could be produced if there was a much larger section of rim the belt went around. The larger in diameter the rim/belt connection is the faster the motor spins for every revolution of the wheel. == Parts List and Cost == <center> {| class="wikitable" style="border:#A3BFA3; background:#F5FFF5" |+ ''Table 2: Parts List'' |- style="text-align:center; font-weight:bold; background:#CEF2CE;" | Part | Required Units | Total Cost (CAD) |- |DC Motor (capable of producing enough V to charge battery) | 1 | $0.00 |- | Rubber Belt (capable of being fixed to rotor) | 1 | $0.00 |- | Old Bicycle Rim | 1 | $0.00 |- | Nails | ~12 | # |- | Scrap Wood | (see construction section) | # |- | NiMH Battery (4.8 V, 150 mAh) | 1 | $13.95 <ref name="LeadingEdge"> Leading Edge Hobbies, http://www.leadingedgehobbies.com/oscommerce/catalog/default.php, Accessed: Aprile 10,2010 </ref> |- | Copper Wire | 6 inches to a Foot | # |- | Diode | 1 | # |- | Resistor (120 Ohm) | 1 | # |- | Lengths of wire | - | $0.00 |- | Alligator clips | 2 | $3.10 <ref name="LeadingEdge"/> |- | Electrical Tape | ~1 foot | $2.79 per roll <ref> Home Depot, http://www.homedepot.ca/, Accessed: April 10, 2010 </ref> |} </center> '''Note:''' The DC motor and the corresponding rubber belt were removed from an EPSON Stylus CX3810 printer that no longer functioned properly. 
There were 3 motors in the printer (two DC motors and one stepper motor). The motor that ran the cartridge feeder was used as it has the gear connection to a rubber belt already attached. In other words it was the motor that was meant to be connected to the rubber belt. The lengths of wire were also extracted from the printer. The bicycle rim was found in a scrap yard; the bearings in the rim were still in good condition and allowed the rim to spin freely. The rim does not have to be in the best condition, as long as it can spin freely on the axle. There is no specific lengths/type of wood that need to be used, as long as there is enough material to allow the rim to be properly mounted and rotate freely as discussed in the ''mounting the rim section'' (step 1). Because all of the above materials were salvaged the cost of the project was dramatically reduced. The most significant costs of the project will be due to the battery and the motor. Unfortunately the DC motor from the printer was unable to be identified. To choose the right motor one should hook up the leads of the motor a voltmeter and manually spin the rotor. The voltage should be able to exceed the rated voltage of the battery that is planned to be used. == Tools == <center> {| class="wikitable" style="border:#A3BFA3; background:#F5FFF5" |+ ''Table 3: Required Tools and Their Use'' |- style="text-align:center; font-weight:bold; background:#CEF2CE;" | Tool | Use | Required | Alternatives |- | Digital Multimeter | Measuring the V and A in the circuit | Yes (very useful tool) | Can also use analog voltmeters and ammeters |- | Hammer | Nailing the rim supports to the base and mounting the motor to the base. | Yes (unless screws are available) | Anything to pound a nail into the boards (ie. hard pipe). |- | Drill | Making the holes for the rim axle in the rim supports. | Yes | If screws are available the hammer and nails can be replaced entirely by the drill and screws. |- | Utility knife | Deconstructing the battery. Can also be used to strip wires. | Yes | The sharp point of a nail. |- | Wire Strippers | Strip the wire for proper connections. | No | Carefully use a utility knife or scissors. |- | Soldering Iron + Solder | Connecting the wires in circuit (this was not used but it is a good idea if you plan to make a permanent circuit) | No | Twist the wires together and tape them with electrical tape. |} </center> '''Note:''' One should be careful when using any of the tools listed above. Be sure to carefully follow the recommended procedures as outline by the tool manufacturers. == Building the Prototype == === Making a New Battery === The reason this battery is being taken apart is because the recharger has to provide at least as much voltage as the rated battery voltage to charge it. If the battery is 4.8V it would require one to spin the rim faster to produce enough voltage to charge the battery, so the battery will be turned into a 3.6V battery (possible to be generated at a moderate rotation speed of the rim). The battery will still have a 150mAh capacity. {{How to |title=Figure 1: Prototype Battery Construction |File:Mechanical battery recharger battery1.JPG |Step 1. Original battery |1|This is a 4.8V, 150mAh NiMH battery. NiMH batteries come in 1.2V cells which means that there must be 4 cells in this battery to produce 4.8V (4.8V/(1.2V/cell) = 4 cells). It cannot be seen in the picture but there is a small white connector on the end of the red and black wires (used to connect the battery to toy cars). 
This can be removed by cutting it off to expose the ends of the red and black wires. It is important to not let the ends of the red and black wires touch each other as they will short out the battery. |File:Mechanical battery recharger battery2.JPG|Step 2. Cut off outer layer |2| Very carefully apply pressure to the middle of the battery with the utility knife to remove the outer casing of the battery. It is possible to see/feel where there is an air gap between the cells; this is where the outer layer should be cut. |File:Mechanical battery recharger battery3.JPG|Step 3. After outside layer cutoff |3| The outer layer should be removed to reveal what is seen in the step 3 picture. There are two casing packages left, each containing 2 NiMH cells (therefore they are 2.4V cases). |File:Mechanical battery recharger battery4.JPG|Step 4. Separate two packs of cells |4| Separate the two battery casing packages as well as the protective cardboard layer end pieces. |File:Mechanical battery recharger battery5.JPG|Step 5. Remove final plastic layer |5| Remove the final casing layers from the two casing packages with the utility knife to expose the individual cells. The two packs of individual cells are soldered together. One of the packs must be separated in order to create a 3.6V battery. |File:Mechanical battery recharger battery6.JPG|Step 6. Fasten 3 cells together |6| Three cells are taped together. The battery should now be tested with a digital multimeter to ensure there is a proper connection of the cells. }} '''Note: The red wire was removed in the prototype design although in retrospect it would be much easier to keep this attached to the battery. Also the cell with the black wire attached should have been installed in the pack of three rather than the other cell. This would produce a battery with a wire connection on each side of the battery virtually eliminating the need to make a battery holder. The red wire was removed from the battery before realizing the benefit of having it attached.''' === Making a Battery Holder === A battery holder is made to allow for an easy connection of the battery to the circuit. With careful modifications of the battery this step may be skipped as discussed above in the making a new battery section. {{How to |title=Figure 2: Making a Battery Holder |File:Battery holder 1.JPG |Step 1. Cut Cardboard |1| Cut cardboard (use battery packaging if possible) into rectangular shape as shown in step 1. The length of the cardboard is the length of the battery plus about a centimeter on each side (2cm total + battery length) but the exact length is not important. The width of the cardboard is 3 widths of the battery. Mark the lines as shown in step 1, dividing the piece into 3 equal widths. |File:Battery holder 2.JPG |Step 2. Score Edges |2| Score the marked lines with a pair of scissors or the utility knife. This is just a score to make bending the cardboard easier, do not cut fully through the cardboard. |File:Battery holder 3.JPG |Step 3. Cut slits |3| The slits should be about 1 cm long. |File:Battery holder 4.JPG |Step 4. Cut small flaps |4| Cut about half of a centimeter off the ends of the cardboard as seen in step 4. This allows for proper dimensions when folded together later. |File:Battery holder 5.JPG |Step 5. Shows all Materials (other than tape) |5| All the materials used for the battery holder excluding electrical tape are shown in step 5. This gives the relative dimensions of the materials used to make the battery holder. 
Strip both ends of two pieces of wire to allow for a proper connection to the battery (wires are about 5 cm in length, strip about a centimeter off each end). Cut two rectangular pieces of aluminum foil about 5 cm in length by 1 cm in width. |File:Battery holder 6.JPG |Step 6. Fold Foil |6| Fold the foil in half around a stripped end of one of the wires making sure that exposed wire is touching the foil (pinch it together to get a tight fit to the wire to ensure proper electrical conduction). |File:Battery holder 7.JPG |Step 7. Fold Foil 2 |7| Roll the foil around the wire as shown in step 7. Try to make a tight 'rectangular' roll (again pinch it to get a tight fit to the wire and have proper electrical conduction). |File:Battery holder 8.JPG |Step 8. Fold Foil 3 |8| Fold the foil lengthwise. Repeat steps 5 through 8 for the other piece of wire, resulting in 2 wires, each with an exposed wire end and an end wrapped in foil. |File:Battery holder 9.JPG |Step 9. Fold Carboard |9| Fold the flaps of the cardboard. First fold the middle flap up then fold in the other outside flaps. Tape these in place. It is wise to tape it all together at once with the battery and two wires in place to ensure a tight fit (proper electrical conduction). |File:Battery holder 10.JPG |Step 10. Finished Product |10| his is what the battery holder should look like in the end. It is wise to test the final holder before continuing to avoid problems with the circuit in the future. |File:Battery holder test.JPG |Test It || '''Test It''' The test reveals the battery is working. The negative sign simply means the leads of the voltmeter were connected to the wrong sides of the battery. Although this is not a problem, To get a positive reading simply flip the leads of the voltmeter to touch the opposite wires. }} === Mounting the Rim === {{How to |title=Figure 3: Steps to mounting the rim |File:Mounting_the_rim1.JPG |Step 1. Ensure Clearance |1| It is important to ensure that there will be proper clearance between the wheel and the ground. This step shows the wheel being placed on a rim support to make sure that there is room from the wood to hold the rim up of the ground allowing it rotate freely. |File:Mounting_the_rim2.JPG |Step 2. Cut Two peices the same length |2| Once the length of the rim supports are checked, the two supports should be cut to the same length. It is not absolutely necessary to include this step, but it makes the final product look more appealing. |File:Mounting_the_rim3.JPG |Step 3. Drill Axle Hole |3| Drill a hole the same diameter of the axle of the rim through both of the supports simultaneously. Be sure to line up the ends of the support that will be touching the ground so that when the holes are drilled and the axle is inserted, it is horizontally aligned with the ground. (refer to the picture in step 5) |File:Mounting_the_rim4.JPG |Step 4. Mount and Lock |4| The rim came with nuts attached to the axle. These should be fastened to the axle to hold the wood supports in place. If the hole diameter is slightly smaller than the axle diameter then the supports will have to be threaded (screwed) onto the axle alleviating the need of attaching the nuts. |File:Mounting_the_rim5.JPG |Step 5. Mounted Rim |5| The mounted rim should be able to stand freely on the ground although it tips easily without the base secured. The step 5 picture is to show that there is clearance between the rim and the ground (there is about 10 centimeters of clearance from the rim to the ground). 
|File:Mounting_the_rim6.JPG |Step 6. DO NOT FORGET |6| '''Do not forget''' this step. Be sure to pull the rubber belt under the support on the side of the rim that it will be attached to. Once the base is secured to the supports there will be no way to fasten the belt to the axle. The rubber band will dangle around the axle until it is later fastened to the rotor. |||7| '''Creating the base.''' Use scrap wood to create a base upon which the motor can be mounted. Be sure to build the base long enough to hold the motor at a distance that will cause the belt to be taught when connected to the motor. The base provides support for the overall structure. Simply nail the supports to the base. '''Be sure before you nail the pieces together that the belt it still around the axle in between the support and the rim as once it is nailed together there is no way to attach the belt to the axle.''' Ensure the rim still spins freely once the base is attached. {{Gallery |height=135 |File:Mounting_the_rim7a.JPG |'''Step 7a.''' No base |File:Mounting_the_rim7b.JPG |'''Step 7b.''' Fitting in the pieces |File:Mounting_the_rim7c.JPG |'''Step 7c.''' Full Base }} }} === Mounting the Motor === {{How to |title=Figure 4: Steps to mounting the motor |File:Mounting_the_rim8.JPG|Step 1. Pull belt taught |1| Have the belt wrap around the axle in the position it will permanently remain in. Pull the belt taught (the tighter the better, but don't get carried away and have the belt snap) and position the motor on the base. The belt should be positioned to make a straight line from the motor rotor to the axle of the rim the belt will spin on. Any off alignment will alter the efficiency of the machine and make it more difficult to effectively generate power. |File:Mounting_the_rim9.JPG |Step 2. Mount the motor ||[[File:Mounting_the_rim10.JPG |right|Side view of mounted motor|180px]] '''Step 2:''' The motor is mounted with 4 nails and some copper wire. The four nails were first slightly tapped into the base to be sure the position of the motor was proper. Then the copper wire was wound in the criss-cross pattern shown and then the nails were tapped further into the base to secure the copper wire to the top of the motor and hold it firmly in place. It is important that the motor is held firmly in place and does not move at all. The side view of the mounted motor shows what the system will look like up until this point. }} The prototype can now be spun to produce electricity. Wiring the charger up to recharge a battery requires some calculations as shown in the section below. The testing should be done at this stage to determine what voltages and amperages are able to be produced by the system. These are important parameters needed for the wiring of the protoype. === Wiring the charger === The circuit diagram for this battery recharger is shown in figure 5. [[File:Circuit diagram mechanical recharger.png|thumb|centre|Figure 5: Circuit Diagram from Mechanical Recharger]] Firstly the proper current direction must be established. The red wire was chosen to be positive. Hook the positive lead of the voltmeter up to the red (positive) wire and the other lead to the black wire and spin the rim to determine which direction gives a positive voltage. Mark this on the support as shown in figure 6. [[File:Directional turning.JPG|thumb|centre|Figure 6: Mark the direction on the rim support.]] Some calculations need to be done to determine which resistor will be incorporated into the circuit. 
This charger will use a charge rate of C/5 to be safe. This means it would take 5 hours to charge a fully discharged battery. Thankfully the battery is not completely depleted so it will not take this long to fully charge the battery. Since we are using a charging rate of C/5 this means that for 5 h the charge current will be 30mA (150mA/5). <math> charge-rate = C/5 = 150mAh / 5 = 30 mAh </math> Ohm's law{{w|Ohm's Law}} is needed to determine the resistor{{w|Resistor}} value required to keep the current at an acceptable rate for battery charging. Ohm's Law is expressed by the realtion; <math> \ V=I*R </math> [1] Where V is the voltage in volts (V), I is the current in amperes (A) and R is the resistance in ohms Ω. If the voltage has to be ~ 3.6V to charge the battery then we can rearrange the equation and solve for R. (watch the units as the current is in mA therefore divide the mA value by 1000 to convert to A) Rearrange V=IR to R=V/I <math> R= V/I =3.6V/(30mA/1000)=3.6V/0.030A = 120 ohms </math> Go here to find the colour of the proper resistor [http://www.hfradio.org/resistor/ resistor colour guide] If there is not a resistor for this value you should normally round up to the next standard resistor to add a factor of safety into the design. However since we have already incorporated a factor of safety into the design by charging at a rate of C/5 a 100 ohm resistor will work fine as it gives a charging rate of (3.6V/100Ω = 0.036A = 36mA) which gives a charging rate of ~C/4.16. This is okay for the prototype although one should contact the battery manufacturer to get the optimum charge rate. Wire the circuit according to the wiring diagram in figure 5. It is useful to use alligator clips to connect the battery to the circuit. The diode{{w|diode}} is used to direct the current in only one direction so the battery does no try and power the motor. This is a crucial piece of the circuit. Be sure to connect the diode in the proper orientation. = Future Work = As discussed above it would be beneficial to have a water wheel design so a human does not constantly have to spin the rim. The rim should be converted into a water wheel and large buckets could be used as a water supply. Ideally a large bucket would be moutned with a spout the pours into another large bucket. When the top bucket pours over the water wheel the water is caught by the second barrel. The top barrel would have to be routinely filled but would require shorter time intervals of human work to charge the battery. A pedal powered design would be a very effective human powered battery recharger. It should also be able to fit to any bike to increase its usefulness. = Relevant Links = Here are some relevant links for a water wheel design and pedal powered designs. 
* [http://current.com/12n164c Hydro-Electric-Barrel Generator ] <ref> Current.com "Hydro Electric Barrel Generator",http://current.com/12n164c , Accessed: April 12, 2010 </ref> * [http://www.pedalpowergenerator.com/ pedalpower.com]] <ref> Pedal Power.com,http://www.pedalpowergenerator.com/, Accessed: April 09, 2010 </ref> * [http://www.alternative-energy-news.info/pedal-powered-electricity-generator-windstream/ Alternative Energy News]<ref> Alternative Energy News,"Pedal Powered Electricity Generator from Windstream",http://www.alternative-energy-news.info/pedal-powered-electricity-generator-windstream/, Accessed: April 09, 2010 </ref> * [http://www.youtube.com/watch?v=wcY1ADGcrfs You Tube Clip] <ref> You Tube, "Mobile Pedal-Powered Generator",http://www.youtube.com/watch?v=wcY1ADGcrfs, Accessed: April 09,2010 </ref> =References = <references/> {{Mech425}} [[Category: Appropriate technology]] [[Category:Energy]] [[Category:Electricity generation]] [[Category:Batteries]] [[Category:Human power]]
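The resistor calculation in the wiring section above reduces to Ohm's law and can be scripted for other batteries. The sketch below (the function name is mine) makes the same simplifying assumption as the text, namely that essentially the whole pack voltage is dropped across the resistor:

```python
def charging_resistor(battery_voltage_v, capacity_mah, c_rate=5):
    """Resistor (ohms) limiting the charge current to roughly C/c_rate,
    assuming the full battery voltage is dropped across the resistor."""
    charge_current_a = (capacity_mah / c_rate) / 1000.0   # mA -> A
    return battery_voltage_v / charge_current_a

# prototype values: 3.6 V pack, 150 mAh capacity, C/5 charge rate
print(round(charging_resistor(3.6, 150, 5)))   # 120 ohms, as in the calculation above
```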
Noether's Theorem: There are conserved quantities corresponding to symmetries of position, orientation, and time, but why not velocity?
Noether's Theorem seems to be one of the most fundamental and beautiful results in all of physics. As I understand it, the fact that the laws of physics are the same independent of position, orientation, and time, leads to the conservation of momentum, angular momentum, and energy, respectively.
But the laws of physics are also independent of velocity. Why does this not lead to another conserved quantity? Or is it just Newton's third law (forces in a closed system must add to zero)?
classical-mechanics
conservation-laws
noethers-theorem
asked Mar 15, 2021 at 15:10
Roger Wood
$\begingroup$ Related: What is the invariant associated with the symmetry of boosts? $\endgroup$
– Qmechanic ♦
$\begingroup$ You might want to look at this in a more relativistic way. The homogeneity of spacetime leads to the energy-momentum being conserved in field theory. The isotropy leads to the general relativistic version of angular momentum. That is it. The Poincaré group is the most general there is (susy being a bit under right now). That is not to say that particular systems cannot present other invariances, with a conserved quantity attached to each, but those are limited in generality and scope. $\endgroup$
– Nelson Vanegas A.
$\begingroup$ @ Nelson Vanegas A. I interpret "homogeneity of spacetime" to mean it's the same at every place and time. I wouldn't necessarily include velocity. Is that wrong thinking? $\endgroup$
– Roger Wood
$\begingroup$ @Qmechanic I followed your link which does seem to be exactly the same question. The answer seems to be just a statement about where the center of mass is at a particular instant in time. But in what sense is this a conserved quantity, if it doesn't apply to other instants in time? $\endgroup$
The issue here is that the statement about the laws of physics being independent of velocity has been misunderstood, or is not precise enough. Noether's theorem pertains explicitly to theories described through a Lagrangian. A Lagrangian, at its bare bones, should contain a kinetic term of some sort. Without going to a relativistic setting, you can consider a free particle in classical mechanics, for which $$L_{\rm free} = \frac{1}{2} m \left(\frac{dx}{dt}\right)^2 = \frac{1}{2}m v^2$$ You can easily try to displace the speed $v\rightarrow v + \Delta v$, and you will see that the Lagrangian actually changes, so a system described by such a Lagrangian is not invariant under infinitesimal changes in velocity in general. Since this term (or a similar one) appears in most Lagrangians of physical systems, changes in speed do not correspond to symmetries.
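For concreteness, the change of the free-particle Lagrangian under a constant shift $\Delta v$ can be written out explicitly (nothing is assumed here beyond the Lagrangian above):
$$\Delta L=\frac{1}{2}m\,(v+\Delta v)^{2}-\frac{1}{2}m\,v^{2}=m\,v\,\Delta v+\frac{1}{2}m\,(\Delta v)^{2}\neq 0.$$
To first order in $\Delta v$ this change equals $\frac{d}{dt}\!\left(m\,x\,\Delta v\right)$, a total time derivative, which is why boosts lead to the related conserved quantity discussed in the linked question and the other answer, rather than to a strict invariance of $L$ itself.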
This however does not mean that one cannot build a Lagrangian which is indeed symmetric under such changes. As has been mentioned in the comments and other answers, if you interpret displacements in speed as "boosts", then a relativistic Lagrangian does display such a symmetry, although it is often more useful to think about it as an imaginary rotation rather than a (linearly understood) shift in velocity; in other words, one is changing time and space in a very specific way, see Lorentz transformations.
I would suggest trying to follow the usual derivation of conserved currents to see that Lagrangians do not generally display a "speed" symmetry, at least not in the Noether sense of symmetries. I also recommend considering the action associated with the Lagrangian above with the addition of a square root, which makes the action "speed invariant" because it will only depend on the end-points. This illustrates clearly the point that kinetic energy is frame-dependent (in classical mechanics), but the actual arc-length of the trajectory is not.
ohneVal
$\begingroup$ That answer is helpful. Do you have any comment on "forces in a closed system must add to zero" as representing a conserved quantity? $\endgroup$
$\begingroup$ If there are no external forces, the immediate consequence is that internal forces will add up to zero, but moreover, no external sources, means momentum conservation. So it is again rephrasing invariance under translations. $\endgroup$
– ohneVal
There are conserved quantities related to Lorentz boosts. These are included in the angular momentum tensor, which is conserved if the Lagrangian is invariant under Lorentz transformations.
my2cts
$\begingroup$ @my2cents I interpret Lorentz boosts as having to do with velocity rather than angular momentum. Is there a more intuitive explanation perhaps? $\endgroup$
$\begingroup$ @RogerWood The general Lorentz transformation includes rotations and boosts. There is a connection between a boost and a rotation in Minkowski space. See e.g. physics.stackexchange.com/questions/544002/… . $\endgroup$
– my2cts
$\begingroup$ thanks, I understand now. $\endgroup$
For the mathematics journal, see Discrete Mathematics (journal).
Graphs like this are among the objects studied by discrete mathematics, for their interesting mathematical properties, their usefulness as models of real-world problems, and their importance in developing computer algorithms.
Discrete mathematics is the study of mathematical structures that are fundamentally discrete rather than continuous. In contrast to real numbers that have the property of varying "smoothly", the objects studied in discrete mathematics – such as integers, graphs, and statements in logic[1] – do not vary smoothly in this way, but have distinct, separated values.[2][3] Discrete mathematics therefore excludes topics in "continuous mathematics" such as calculus or Euclidean geometry. Discrete objects can often be enumerated by integers. More formally, discrete mathematics has been characterized as the branch of mathematics dealing with countable sets[4] (finite sets or sets with the same cardinality as the natural numbers). However, there is no exact definition of the term "discrete mathematics."[5] Indeed, discrete mathematics is described less by what is included than by what is excluded: continuously varying quantities and related notions.
The set of objects studied in discrete mathematics can be finite or infinite. The term finite mathematics is sometimes applied to parts of the field of discrete mathematics that deal with finite sets, particularly those areas relevant to business.
Research in discrete mathematics increased in the latter half of the twentieth century partly due to the development of digital computers which operate in discrete steps and store data in discrete bits. Concepts and notations from discrete mathematics are useful in studying and describing objects and problems in branches of computer science, such as computer algorithms, programming languages, cryptography, automated theorem proving, and software development. Conversely, computer implementations are significant in applying ideas from discrete mathematics to real-world problems, such as in operations research.
Although the main objects of study in discrete mathematics are discrete objects, analytic methods from continuous mathematics are often employed as well.
In university curricula, "Discrete Mathematics" appeared in the 1980s, initially as a computer science support course; its contents were somewhat haphazard at the time. The curriculum has thereafter developed in conjunction with efforts by ACM and MAA into a course that is basically intended to develop mathematical maturity in first-year students; therefore it is nowadays a prerequisite for mathematics majors in some universities as well.[6][7] Some high-school-level discrete mathematics textbooks have appeared as well.[8] At this level, discrete mathematics is sometimes seen as a preparatory course, not unlike precalculus in this respect.[9]
The Fulkerson Prize is awarded for outstanding papers in discrete mathematics.
Grand challenges, past and present
Much research in graph theory was motivated by attempts to prove that all maps, like this one, can be colored using only four colors so that no areas of the same color share an edge. Kenneth Appel and Wolfgang Haken proved this in 1976.[10]
The history of discrete mathematics has involved a number of challenging problems which have focused attention within areas of the field. In graph theory, much research was motivated by attempts to prove the four color theorem, first stated in 1852, but not proved until 1976 (by Kenneth Appel and Wolfgang Haken, using substantial computer assistance).[10]
In logic, the second problem on David Hilbert's list of open problems presented in 1900 was to prove that the axioms of arithmetic are consistent. Gödel's second incompleteness theorem, proved in 1931, showed that this was not possible – at least not within arithmetic itself. Hilbert's tenth problem was to determine whether a given polynomial Diophantine equation with integer coefficients has an integer solution. In 1970, Yuri Matiyasevich proved that this could not be done.
The need to break German codes in World War II led to advances in cryptography and theoretical computer science, with the first programmable digital electronic computer being developed at England's Bletchley Park with the guidance of Alan Turing and his seminal work, On Computable Numbers.[11] At the same time, military requirements motivated advances in operations research. The Cold War meant that cryptography remained important, with fundamental advances such as public-key cryptography being developed in the following decades. Operations research remained important as a tool in business and project management, with the critical path method being developed in the 1950s. The telecommunication industry has also motivated advances in discrete mathematics, particularly in graph theory and information theory. Formal verification of statements in logic has been necessary for software development of safety-critical systems, and advances in automated theorem proving have been driven by this need.
Computational geometry has been an important part of the computer graphics incorporated into modern video games and computer-aided design tools.
Several fields of discrete mathematics, particularly theoretical computer science, graph theory, and combinatorics, are important in addressing the challenging bioinformatics problems associated with understanding the tree of life.[12]
Currently, one of the most famous open problems in theoretical computer science is the P = NP problem, which involves the relationship between the complexity classes P and NP. The Clay Mathematics Institute has offered a $1 million USD prize for the first correct proof, along with prizes for six other mathematical problems.[13]
Topics in discrete mathematics
Theoretical computer science
Main article: Theoretical computer science
Complexity studies the time taken by algorithms, such as this sorting routine.
Theoretical computer science includes areas of discrete mathematics relevant to computing. It draws heavily on graph theory and mathematical logic. Included within theoretical computer science is the study of algorithms for computing mathematical results. Computability studies what can be computed in principle, and has close ties to logic, while complexity studies the time, space, and other resources taken by computations. Automata theory and formal language theory are closely related to computability. Petri nets and process algebras are used to model computer systems, and methods from discrete mathematics are used in analyzing VLSI electronic circuits. Computational geometry applies algorithms to geometrical problems, while computer image analysis applies them to representations of images. Theoretical computer science also includes the study of various continuous computational topics.
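For illustration, the following short Python sketch implements insertion sort, a simple routine of the kind referred to in the figure caption above; its worst-case running time grows quadratically with the length of the input, which is exactly the kind of resource bound studied in complexity (the example list is arbitrary).

```python
def insertion_sort(items):
    """Sort a list in place; the worst-case number of comparisons grows as O(n^2)."""
    for i in range(1, len(items)):
        key = items[i]
        j = i - 1
        # Shift elements larger than key one position to the right.
        while j >= 0 and items[j] > key:
            items[j + 1] = items[j]
            j -= 1
        items[j + 1] = key
    return items

print(insertion_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]
```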
Information theory
Main article: Information theory
The ASCII codes for the word "Wikipedia", given here in binary, provide a way of representing the word in information theory, as well as for information-processing algorithms.
Information theory involves the quantification of information. Closely related is coding theory, which is used to design efficient and reliable data transmission and storage methods. Information theory also includes continuous topics such as analog signals, analog coding and analog encryption.
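For instance, the binary ASCII representation mentioned in the caption above can be produced directly; a tiny illustrative sketch:

```python
word = "Wikipedia"
bits = " ".join(format(ord(ch), "08b") for ch in word)  # 8-bit ASCII code of each character
print(bits)
print(len(word) * 8, "bits, before any compression or error-correcting code is applied")
```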
Logic
Main article: Mathematical logic
Logic is the study of the principles of valid reasoning and inference, as well as of consistency, soundness, and completeness. For example, in most systems of logic (but not in intuitionistic logic) Peirce's law (((P→Q)→P)→P) is a theorem. For classical logic, it can be easily verified with a truth table. The study of mathematical proof is particularly important in logic, and has applications to automated theorem proving and formal verification of software.
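Because the truth values form a finite set, such a verification can be carried out by exhaustive enumeration; a short illustrative Python sketch checks Peirce's law over all four assignments of P and Q:

```python
from itertools import product

def implies(a, b):
    # Material implication: a -> b is false only when a is true and b is false.
    return (not a) or b

# Peirce's law: ((P -> Q) -> P) -> P
for P, Q in product([False, True], repeat=2):
    print(P, Q, implies(implies(implies(P, Q), P), P))
# The formula evaluates to True in every row, so it is a classical tautology.
```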
Logical formulas are discrete structures, as are proofs, which form finite trees[14] or, more generally, directed acyclic graph structures[15][16] (with each inference step combining one or more premise branches to give a single conclusion). The truth values of logical formulas usually form a finite set, generally restricted to two values: true and false, but logic can also be continuous-valued, e.g., fuzzy logic. Concepts such as infinite proof trees or infinite derivation trees have also been studied,[17] e.g. infinitary logic.
Set theory
Main article: Set theory
Set theory is the branch of mathematics that studies sets, which are collections of objects, such as {blue, white, red} or the (infinite) set of all prime numbers. Partially ordered sets and sets with other relations have applications in several areas.
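As a minimal illustration (the example sets are arbitrary), finite sets and the usual operations on them are available directly in many programming languages:

```python
colours = {"blue", "white", "red"}
warm = {"red", "orange"}

print(colours | warm)       # union
print(colours & warm)       # intersection: {'red'}
print({"red"} <= colours)   # subset test: True

primes_below_20 = {2, 3, 5, 7, 11, 13, 17, 19}  # a finite piece of the infinite set of primes
print(len(primes_below_20))  # 8
```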
In discrete mathematics, countable sets (including finite sets) are the main focus. The beginning of set theory as a branch of mathematics is usually marked by Georg Cantor's work distinguishing between different kinds of infinite set, motivated by the study of trigonometric series, and further development of the theory of infinite sets is outside the scope of discrete mathematics. Indeed, contemporary work in descriptive set theory makes extensive use of traditional continuous mathematics.
Combinatorics
Main article: Combinatorics
Combinatorics studies the way in which discrete structures can be combined or arranged. Enumerative combinatorics concentrates on counting the number of certain combinatorial objects - e.g. the twelvefold way provides a unified framework for counting permutations, combinations and partitions. Analytic combinatorics concerns the enumeration (i.e., determining the number) of combinatorial structures using tools from complex analysis and probability theory. In contrast with enumerative combinatorics which uses explicit combinatorial formulae and generating functions to describe the results, analytic combinatorics aims at obtaining asymptotic formulae. Design theory is a study of combinatorial designs, which are collections of subsets with certain intersection properties. Partition theory studies various enumeration and asymptotic problems related to integer partitions, and is closely related to q-series, special functions and orthogonal polynomials. Originally a part of number theory and analysis, partition theory is now considered a part of combinatorics or an independent field. Order theory is the study of partially ordered sets, both finite and infinite.
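As a small illustration of enumerative counting (the particular numbers are arbitrary), the Python sketch below evaluates permutations and combinations directly and counts integer partitions with a simple recursion:

```python
from math import comb, perm  # available in Python 3.8+

print(perm(5, 2))  # 20 ordered selections of 2 objects out of 5
print(comb(5, 2))  # 10 unordered selections of 2 objects out of 5

def partitions(n, largest=None):
    """Count the ways of writing n as a sum of positive integers, ignoring order."""
    if largest is None:
        largest = n
    if n == 0:
        return 1
    # Choose the largest part k, then partition the remainder into parts of size at most k.
    return sum(partitions(n - k, k) for k in range(1, min(n, largest) + 1))

print(partitions(5))  # 7: 5, 4+1, 3+2, 3+1+1, 2+2+1, 2+1+1+1, 1+1+1+1+1
```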
Graph theory
Main article: Graph theory
Graph theory has close links to group theory. This truncated tetrahedron graph is related to the alternating group A4.
Graph theory, the study of graphs and networks, is often considered part of combinatorics, but has grown large enough and distinct enough, with its own kind of problems, to be regarded as a subject in its own right.[18] Graphs are one of the prime objects of study in discrete mathematics. They are among the most ubiquitous models of both natural and human-made structures. They can model many types of relations and process dynamics in physical, biological and social systems. In computer science, they can represent networks of communication, data organization, computational devices, the flow of computation, etc. In mathematics, they are useful in geometry and certain parts of topology, e.g. knot theory. Algebraic graph theory has close links with group theory. There are also continuous graphs, however for the most part research in graph theory falls within the domain of discrete mathematics.
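As a minimal computational example (the graph itself is invented), an undirected graph can be stored as an adjacency list and explored by breadth-first search, which gives shortest path lengths measured in number of edges:

```python
from collections import deque

# A small undirected graph written as an adjacency list.
graph = {
    "a": ["b", "c"],
    "b": ["a", "d"],
    "c": ["a", "d"],
    "d": ["b", "c", "e"],
    "e": ["d"],
}

def bfs_distances(graph, start):
    """Return, for each reachable vertex, the length of a shortest path from start."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        for w in graph[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

print(bfs_distances(graph, "a"))  # {'a': 0, 'b': 1, 'c': 1, 'd': 2, 'e': 3}
```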
Probability
Main article: Discrete probability theory
Discrete probability theory deals with events that occur in countable sample spaces. For example, count observations such as the numbers of birds in flocks comprise only natural number values {0, 1, 2, ...}. On the other hand, continuous observations such as the weights of birds comprise real number values and would typically be modeled by a continuous probability distribution such as the normal. Discrete probability distributions can be used to approximate continuous ones and vice versa. For highly constrained situations such as throwing dice or experiments with decks of cards, calculating the probability of events is basically enumerative combinatorics.
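For the dice example, computing the probabilities really does reduce to counting; a short sketch that enumerates the 36 equally likely outcomes of two dice:

```python
from fractions import Fraction
from itertools import product

counts = {}
for a, b in product(range(1, 7), repeat=2):  # all 36 equally likely ordered outcomes
    counts[a + b] = counts.get(a + b, 0) + 1

for total, n in sorted(counts.items()):
    print(total, Fraction(n, 36))  # e.g. a total of 7 occurs with probability 6/36 = 1/6
```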
Number theory
The Ulam spiral of numbers, with black pixels showing prime numbers. This diagram hints at patterns in the distribution of prime numbers.
Main article: Number theory
Number theory is concerned with the properties of numbers in general, particularly integers. It has applications to cryptography and cryptanalysis, particularly with regard to modular arithmetic, diophantine equations, linear and quadratic congruences, prime numbers and primality testing. Other discrete aspects of number theory include geometry of numbers. In analytic number theory, techniques from continuous mathematics are also used. Topics that go beyond discrete objects include transcendental numbers, diophantine approximation, p-adic analysis and function fields.
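As a small illustration (the particular numbers are arbitrary), primality of small integers can be tested by trial division, and Python's built-in pow performs the modular exponentiation that underlies much of computational number theory and cryptography:

```python
def is_prime(n):
    """Primality test by trial division; adequate for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

print([p for p in range(2, 30) if is_prime(p)])  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
print(pow(7, 222, 11))  # 7**222 mod 11, computed without ever forming the huge power
```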
Algebra
Main article: Abstract algebra
Algebraic structures occur as both discrete examples and continuous examples. Discrete algebras include: boolean algebra used in logic gates and programming; relational algebra used in databases; discrete and finite versions of groups, rings and fields are important in algebraic coding theory; discrete semigroups and monoids appear in the theory of formal languages.
Calculus of finite differences, discrete calculus or discrete analysis
Main article: Finite difference
A function defined on an interval of the integers is usually called a sequence. A sequence could be a finite sequence from a data source or an infinite sequence from a discrete dynamical system. Such a discrete function could be defined explicitly by a list (if its domain is finite), or by a formula for its general term, or it could be given implicitly by a recurrence relation or difference equation. Difference equations are similar to differential equations, but replace differentiation by taking the difference between adjacent terms; they can be used to approximate differential equations or (more often) studied in their own right. Many questions and methods concerning differential equations have counterparts for difference equations. For instance, where there are integral transforms in harmonic analysis for studying continuous functions or analogue signals, there are discrete transforms for discrete functions or digital signals. As well as the discrete metric there are more general discrete or finite metric spaces and finite topological spaces.
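A small sketch makes the parallel concrete: the forward difference of a sequence plays the role of the derivative, and a recurrence relation defines a sequence by iteration (the particular sequences are arbitrary examples):

```python
def forward_difference(seq):
    """Discrete analogue of the derivative: (delta f)(n) = f(n+1) - f(n)."""
    return [b - a for a, b in zip(seq, seq[1:])]

squares = [n * n for n in range(8)]
print(forward_difference(squares))                      # [1, 3, 5, 7, 9, 11, 13]
print(forward_difference(forward_difference(squares)))  # the second difference is constant: [2, 2, 2, 2, 2, 2]

# A difference equation given as a recurrence: f(n) = f(n-1) + f(n-2).
f = [0, 1]
for n in range(2, 10):
    f.append(f[n - 1] + f[n - 2])
print(f)  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```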
Geometry
Computational geometry applies computer algorithms to representations of geometrical objects.
Main articles: Discrete geometry and Computational geometry
Discrete geometry and combinatorial geometry are about combinatorial properties of discrete collections of geometrical objects. A long-standing topic in discrete geometry is tiling of the plane. Computational geometry applies algorithms to geometrical problems.
Topology
Although topology is the field of mathematics that formalizes and generalizes the intuitive notion of "continuous deformation" of objects, it gives rise to many discrete topics; this can be attributed in part to the focus on topological invariants, which themselves usually take discrete values. See combinatorial topology, topological graph theory, topological combinatorics, computational topology, discrete topological space, finite topological space, topology (chemistry).
Operations research
Main article: Operations research
PERT charts like this provide a project management technique based on graph theory.
Operations research provides techniques for solving practical problems in engineering, business, and other fields — problems such as allocating resources to maximize profit, or scheduling project activities to minimize risk. Operations research techniques include linear programming and other areas of optimization, queuing theory, scheduling theory, network theory. Operations research also includes continuous topics such as continuous-time Markov process, continuous-time martingales, process optimization, and continuous and hybrid control theory.
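As a minimal sketch of the critical path idea behind such charts (the task network below is invented for illustration), the shortest possible project duration equals the length of a longest path through the directed acyclic graph of tasks:

```python
# Each task maps to (duration, list of prerequisite tasks); a made-up example project.
tasks = {
    "design":  (3, []),
    "build":   (5, ["design"]),
    "test":    (2, ["build"]),
    "docs":    (4, ["design"]),
    "release": (1, ["test", "docs"]),
}

finish = {}

def earliest_finish(name):
    """Earliest finish time = duration plus the latest earliest-finish among prerequisites."""
    if name not in finish:
        duration, prereqs = tasks[name]
        finish[name] = duration + max((earliest_finish(p) for p in prereqs), default=0)
    return finish[name]

print(max(earliest_finish(t) for t in tasks))  # 11, via the critical path design -> build -> test -> release
```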
Game theory, decision theory, utility theory, social choice theory
Cooperate Defect
Cooperate −1, −1 −10, 0
Defect 0, −10 −5, −5
Payoff matrix for the Prisoner's dilemma, a common example in game theory. One player chooses a row, the other a column; the resulting pair gives their payoffs
Decision theory is concerned with identifying the values, uncertainties and other issues relevant in a given decision, its rationality, and the resulting optimal decision.
Utility theory is about measures of the relative economic satisfaction from, or desirability of, consumption of various goods and services.
Social choice theory is about voting. A more puzzle-based approach to voting is ballot theory.
Game theory deals with situations where success depends on the choices of others, which makes choosing the best course of action more complex. There are even continuous games, see differential game. Topics include auction theory and fair division.
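Using the payoff matrix above (and taking the first number in each cell as the row player's payoff, as is conventional), mutual defection can be checked to be the only Nash equilibrium by plain enumeration; a short illustrative sketch:

```python
from itertools import product

C, D = "Cooperate", "Defect"
# payoffs[(row_choice, column_choice)] = (row player's payoff, column player's payoff)
payoffs = {(C, C): (-1, -1), (C, D): (-10, 0), (D, C): (0, -10), (D, D): (-5, -5)}

def is_nash(r, c):
    """Neither player can improve their own payoff by unilaterally switching strategy."""
    row_ok = all(payoffs[(r, c)][0] >= payoffs[(alt, c)][0] for alt in (C, D))
    col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, alt)][1] for alt in (C, D))
    return row_ok and col_ok

print([cell for cell in product((C, D), repeat=2) if is_nash(*cell)])  # [('Defect', 'Defect')]
```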
Discretization
Main article: Discretization
Discretization concerns the process of transferring continuous models and equations into discrete counterparts, often for the purposes of making calculations easier by using approximations. Numerical analysis provides an important example.
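A standard example is the explicit Euler method, which replaces a differential equation such as dy/dt = -y by a difference equation that advances in steps of size h; a minimal sketch:

```python
def euler(f, y0, t0, t1, steps):
    """Explicit Euler method: y_{k+1} = y_k + h * f(t_k, y_k)."""
    h = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        y = y + h * f(t, y)
        t = t + h
    return y

# dy/dt = -y with y(0) = 1; the exact value at t = 1 is exp(-1) = 0.3678...
print(euler(lambda t, y: -y, 1.0, 0.0, 1.0, 1000))  # about 0.3677, close to the exact value
```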
Discrete analogues of continuous mathematics
There are many concepts in continuous mathematics which have discrete versions, such as discrete calculus, discrete probability distributions, discrete Fourier transforms, discrete geometry, discrete logarithms, discrete differential geometry, discrete exterior calculus, discrete Morse theory, difference equations, discrete dynamical systems, and discrete vector measures.
In applied mathematics, discrete modelling is the discrete analogue of continuous modelling. In discrete modelling, discrete formulae are fit to data. A common method in this form of modelling is to use recurrence relations.
In algebraic geometry, the concept of a curve can be extended to discrete geometries by taking the spectra of polynomial rings over finite fields to be models of the affine spaces over that field, and letting subvarieties or spectra of other rings provide the curves that lie in that space. Although the space in which the curves appear has a finite number of points, the curves are not so much sets of points as analogues of curves in continuous settings. For example, every point of the form $V(x-c)\subset \operatorname{Spec} K[x]=\mathbb{A}^{1}$ for $K$ a field can be studied either as $\operatorname{Spec} K[x]/(x-c)\cong \operatorname{Spec} K$, a point, or as the spectrum $\operatorname{Spec} K[x]_{(x-c)}$ of the local ring at $(x-c)$, a point together with a neighborhood around it. Algebraic varieties also have a well-defined notion of tangent space called the Zariski tangent space, making many features of calculus applicable even in finite settings.
Hybrid discrete and continuous mathematics
The time scale calculus is a unification of the theory of difference equations with that of differential equations, which has applications to fields requiring simultaneous modelling of discrete and continuous data. Another way of modeling such a situation is the notion of hybrid dynamical system.
Discrete mathematics portal
Outline of discrete mathematics
Cyberchase, a show that teaches Discrete Mathematics to children
^ Richard Johnsonbaugh, Discrete Mathematics, Prentice Hall, 2008.
^ Weisstein, Eric W. "Discrete mathematics". MathWorld.
^ https://cse.buffalo.edu/~rapaport/191/S09/whatisdiscmath.html accessed 16 Nov 18
^ Biggs, Norman L. (2002), Discrete mathematics, Oxford Science Publications (2nd ed.), New York: The Clarendon Press Oxford University Press, p. 89, ISBN 9780198507178, MR 1078626, Discrete Mathematics is the branch of Mathematics in which we deal with questions involving finite or countably infinite sets.
^ Brian Hopkins, Resources for Teaching Discrete Mathematics, Mathematical Association of America, 2008.
^ Ken Levasseur; Al Doerr. Applied Discrete Structures. p. 8.
^ Albert Geoffrey Howson, ed. (1988). Mathematics as a Service Subject. Cambridge University Press. pp. 77–78. ISBN 978-0-521-35395-3.
^ Joseph G. Rosenstein. Discrete Mathematics in the Schools. American Mathematical Soc. p. 323. ISBN 978-0-8218-8578-9.
^ "UCSMP". uchicago.edu.
^ a b Wilson, Robin (2002). Four Colors Suffice. London: Penguin Books. ISBN 978-0-691-11533-7.
^ Hodges, Andrew (1992). Alan Turing: The Enigma. Random House.
^ Trevor R. Hodkinson; John A. N. Parnell (2007). Reconstruction the Tree of Life: Taxonomy And Systematics of Large And Species Rich Taxa. CRC PressINC. p. 97. ISBN 978-0-8493-9579-6.
^ "Millennium Prize Problems". 2000-05-24. Retrieved 2008-01-12.
^ A. S. Troelstra; H. Schwichtenberg (2000-07-27). Basic Proof Theory. Cambridge University Press. p. 186. ISBN 978-0-521-77911-1.
^ Samuel R. Buss (1998). Handbook of Proof Theory. Elsevier. p. 13. ISBN 978-0-444-89840-1.
^ Franz Baader; Gerhard Brewka; Thomas Eiter (2001-10-16). KI 2001: Advances in Artificial Intelligence: Joint German/Austrian Conference on AI, Vienna, Austria, September 19-21, 2001. Proceedings. Springer. p. 325. ISBN 978-3-540-42612-7.
^ Brotherston, J.; Bornat, R.; Calcagno, C. (January 2008). "Cyclic proofs of program termination in separation logic". ACM SIGPLAN Notices. 43 (1). CiteSeerX 10.1.1.111.1105. doi:10.1145/1328897.1328453.
^ Graphs on Surfaces, Bojan Mohar and Carsten Thomassen, Johns Hopkins University press, 2001
Norman L. Biggs (2002-12-19). Discrete Mathematics. Oxford University Press. ISBN 978-0-19-850717-8.
John Dwyer (2010). An Introduction to Discrete Mathematics for Business & Computing. ISBN 978-1-907934-00-1.
Susanna S. Epp (2010-08-04). Discrete Mathematics With Applications. Thomson Brooks/Cole. ISBN 978-0-495-39132-6.
Ronald Graham, Donald E. Knuth, Oren Patashnik, Concrete Mathematics.
Ralph P. Grimaldi (2004). Discrete and Combinatorial Mathematics: An Applied Introduction. Addison Wesley. ISBN 978-0-201-72634-3.
Donald E. Knuth (2011-03-03). The Art of Computer Programming, Volumes 1-4a Boxed Set. Addison-Wesley Professional. ISBN 978-0-321-75104-1.
Jiří Matoušek; Jaroslav Nešetřil (1998). Discrete Mathematics. Oxford University Press. ISBN 978-0-19-850208-1.
Obrenic, Bojana (2003-01-29). Practice Problems in Discrete Mathematics. Prentice Hall. ISBN 978-0-13-045803-2.
Kenneth H. Rosen; John G. Michaels (2000). Hand Book of Discrete and Combinatorial Mathematics. CRC PressI Llc. ISBN 978-0-8493-0149-0.
Kenneth H. Rosen (2007). Discrete Mathematics: And Its Applications. McGraw-Hill College. ISBN 978-0-07-288008-3.
Andrew Simpson (2002). Discrete Mathematics by Example. McGraw-Hill Incorporated. ISBN 978-0-07-709840-7.
Veerarajan, T.(2007), Discrete mathematics with graph theory and combinatorics, Tata Mcgraw Hill
Wikibooks has a book on the topic of: Discrete Mathematics
Media related to Discrete mathematics at Wikimedia Commons
Discrete mathematics at the utk.edu Mathematics Archives, providing links to syllabi, tutorials, programs, etc.
Iowa Central: Electrical Technologies Program Discrete mathematics for Electrical engineering.
Citation: Xiaoyan Li, Chao Zhang, Lianxun Wang, Harald Behrens, Francois Holtz. Experiments on the Saturation of Fluorite in Magmatic Systems: Implications for Maximum F Concentration and Fluorine-Cation Bonding in Silicate Melt. Journal of Earth Science, 2020, 31(3): 456-467. doi: 10.1007/s12583-020-1305-y
Experiments on the Saturation of Fluorite in Magmatic Systems: Implications for Maximum F Concentration and Fluorine-Cation Bonding in Silicate Melt
doi: 10.1007/s12583-020-1305-y
Xiaoyan Li1, 2, Chao Zhang1, 2, Lianxun Wang3, Harald Behrens2, Francois Holtz2
1. State Key Laboratory of Continental Dynamics, Department of Geology, Northwest University, Xi'an 710069, China
2. Institute of Mineralogy, Leibniz University Hannover, Hannover 30167, Germany
3. State Key Laboratory of Geological Processes and Mineral Resources, School of Earth Sciences, China University of Geosciences, Wuhan 430074, China
Corresponding author: Chao Zhang, ORCID: http://orcid.org/0000-0001-7019-5075, [email protected]
The effects of melt composition, temperature and pressure on the solubility of fluorite (CaF2), i.e., the fluorine concentration in silicate melts in equilibrium with fluorite, are summarized in this paper. The authors present a statistical study based on experimental data in the literature and propose a predictive model to estimate the F concentration in melt at the saturation of fluorite ($C_{\mathrm{F\ in\ melt}}^{\mathrm{Fl-sat}}$). The modeling indicates that the compositional effect of melt cations on the variation in $C_{\mathrm{F\ in\ melt}}^{\mathrm{Fl-sat}}$ can be expressed quantitatively as one parameter, FSI (fluorite saturation index): $\mathrm{FSI}=(3\mathrm{Al_{NM}}+\mathrm{Fe^{2+}}+6\mathrm{Mg}+\mathrm{Ca}+1.5\mathrm{Na}-\mathrm{K})/(\mathrm{Si}+\mathrm{Ti}+\mathrm{Al_{NF}}+\mathrm{Fe^{3+}})$, in which all cations are in mole, and AlNF and AlNM are Al as network-forming and network-modifying cations, respectively. The dependence of $C_{\mathrm{F\ in\ melt}}^{\mathrm{Fl-sat}}$ on FSI is regressed as: $C_{\mathrm{F\ in\ melt}}^{\mathrm{Fl-sat}}=1.130-2.014\cdot\exp(1\,000/T)+2.747\cdot\exp(P/T)+0.111\cdot C_{\mathrm{melt}}^{\mathrm{H_2O}}+17.641\cdot\mathrm{FSI}$, in which T is temperature in Kelvin, P is pressure in MPa, $C_{\mathrm{melt}}^{\mathrm{H_2O}}$ is melt H2O content in wt.%, and $C_{\mathrm{F\ in\ melt}}^{\mathrm{Fl-sat}}$ is in wt.% (normalized to an anhydrous basis). The reference dataset used to establish the expression covers 540-1 010 ℃, 50-500 MPa, 0 wt.%-7 wt.% melt H2O content, A/CNK of 0.4 to 1.7, and 0.3 wt.%-7.0 wt.% $C_{\mathrm{F\ in\ melt}}^{\mathrm{Fl-sat}}$. The discrepancy of $C_{\mathrm{F\ in\ melt}}^{\mathrm{Fl-sat}}$ between calculated and measured values is less than ±0.62 wt.% at a confidence interval of 95%. The expression of FSI and its effect on $C_{\mathrm{F\ in\ melt}}^{\mathrm{Fl-sat}}$ indicate that fluorine incorporation in silicate melts is largely controlled by bonding with network-modifying cations, preferentially with Mg, AlNM, Na, Ca and Fe2+ in decreasing order. The proposed model for predicting $C_{\mathrm{F\ in\ melt}}^{\mathrm{Fl-sat}}$ is also supported by our new experiments saturated with magmatic fluorite performed at 100-200 MPa and 800-900 ℃. The modeling of magma fractional crystallization emphasizes that the saturation of fluorite is dependent on both the compositions of primary magmas and their initial F contents.
KEY WORDS: fluorine, fluorite solubility, silicate melt, experimental petrology.
Ahmed, H. A., Ma, C. Q., Wang, L. X., et al., 2018. Petrogenesis and Tectonic Implications of Peralkaline A-Type Granites and Syenites from the Suizhou-Zaoyang Region, Central China. Journal of Earth Science, 29(5):1181-1202. https://doi.org/10.1007/s12583-018-0877-2 doi: 10.1007/s12583-018-0877-2
Aiuppa, A., Baker, D. R., Webster, J. D., 2009. Halogens in Volcanic Systems. Chemical Geology, 263(1/2/3/4):1-18. https://doi.org/10.1016/j.chemgeo.2008.10.005 doi: 10.1016/j.chemgeo.2008.10.005
Aseri, A. A., Linnen, R. L., Che, X. D., et al., 2015. Effects of Fluorine on the Solubilities of Nb, Ta, Zr and Hf Minerals in Highly Fluxed Water-Saturated Haplogranitic Melts. Ore Geology Reviews, 64:736-746. https://doi.org/10.1016/j.oregeorev.2014.02.014 doi: 10.1016/j.oregeorev.2014.02.014
Baasner, A., Schmidt, B. C., Dupree, R., et al., 2014. Fluorine Speciation as a Function of Composition in Peralkaline and Peraluminous Na2O-CaO-Al2O3-SiO2 Glasses:A Multinuclear NMR Study. Geochimica et Cosmochimica Acta, 132:151-169. https://doi.org/10.1016/j.gca.2014.01.041 doi: 10.1016/j.gca.2014.01.041
Baasner, A., Schmidt, B. C., Webb, S. L., 2013. The Effect of Chlorine, Fluorine and Water on the Viscosity of Aluminosilicate Melts. Chemical Geology, 357:134-149. https://doi.org/10.1016/j.chemgeo.2013.08.020 doi: 10.1016/j.chemgeo.2013.08.020
Badanina, E. V., Trumbull, R. B., Dulski, P., et al., 2006. The Behavior of Rare-Earth and Lithophile Trace Elements in Rare-Metal Granites:A Study of Fluorite, Melt Inclusions and Host Rocks from the Khangilay Complex, Transbaikalia, Russia. The Canadian Mineralogist, 44(3):667-692. https://doi.org/10.2113/gscanmin.44.3.667 doi: 10.2113/gscanmin.44.3.667
Bailey, J. C., 1977. Fluorine in Granitic Rocks and Melts:A Review. Chemical Geology, 19(1/2/3/4):1-42. https://doi.org/10.1016/0009-2541(77)90002-x doi: 10.1016/0009-2541(77)90002-x
Baker, D. R., Vaillancourt, J., 1995. The Low Viscosities of F+H2O-Bearing Granitic Melts and Implications for Melt Extraction and Transport. Earth and Planetary Science Letters, 132(1/2/3/4):199-211. https://doi.org/10.1016/0012-821x(95)00054-g doi: 10.1016/0012-821x(95)00054-g
Bao, B., Webster, J. D., Zhang, D. H., et al., 2016. Compositions of Biotite, Amphibole, Apatite and Silicate Melt Inclusions from the Tongchang Mine, Dexing Porphyry Deposit, SE China:Implications for the Behavior of Halogens in Mineralized Porphyry Systems. Ore Geology Reviews, 79:443-462. https://doi.org/10.1016/j.oregeorev.2016.05.024 doi: 10.1016/j.oregeorev.2016.05.024
Bartels, A., Behrens, H., Holtz, F., et al., 2013. The Effect of Fluorine, Boron and Phosphorus on the Viscosity of Pegmatite Forming Melts. Chemical Geology, 346:184-198. https://doi.org/10.1016/j.chemgeo.2012.09.024 doi: 10.1016/j.chemgeo.2012.09.024
Berndt, J., Liebske, C., Holtz, F., et al., 2002. A Combined Rapid-Quench and H2-Membrane Setup for Internally Heated Pressure Vessels:Description and Application for Water Solubility in Basaltic Melts. American Mineralogist, 87(11/12):1717-1726. https://doi.org/10.2138/am-2002-11-1222 doi: 10.2138/am-2002-11-1222
Botcharnikov, R. E., Holtz, F., Almeev, R. R., et al., 2008. Storage Conditions and Evolution of Andesitic Magma Prior to the 1991-95 Eruption of Unzen Volcano:Constraints from Natural Samples and Phase Equilibria Experiments. Journal of Volcanology and Geothermal Research, 175(1/2):168-180. https://doi.org/10.1016/j.jvolgeores.2008.03.026 doi: 10.1016/j.jvolgeores.2008.03.026
Chelle-Michou, C., Chiaradia, M., 2017. Amphibole and Apatite Insights into the Evolution and Mass Balance of Cl and S in Magmas Associated with Porphyry Copper Deposits. Contributions to Mineralogy and Petrology, 172(11/12):105. https://doi.org/10.1007/s00410-017-1417-2 doi: 10.1007/s00410-017-1417-2
Chevychelov, V. Y., Botcharnikov, R. E., Holtz, F., 2008. Experimental Study of Fluorine and Chlorine Contents in Mica (Biotite) and Their Partitioning between Mica, Phonolite Melt, and Fluid. Geochemistry International, 46(11):1081-1089. https://doi.org/10.1134/s0016702908110025 doi: 10.1134/s0016702908110025
Dalou, C. L., Le Losq, C., Mysen, B. O., et al., 2015. Solubility and Solution Mechanisms of Chlorine and Fluorine in Aluminosilicate Melts at High Pressure and High Temperature. American Mineralogist, 100(10):2272-2283. https://doi.org/10.2138/am-2015-5201 doi: 10.2138/am-2015-5201
Dingwell, D. B., Mysen, B. O., 1985. Effects of Water and Fluorine on the Viscosity of Albite Melt at High Pressure:A Preliminary Investigation. Earth and Planetary Science Letters, 74(2/3):266-274. https://doi.org/10.1016/0012-821x(85)90026-3 doi: 10.1016/0012-821x(85)90026-3
Doherty, A. L., Webster, J. D., Goldoff, B. A., et al., 2014. Partitioning Behavior of Chlorine and Fluorine in Felsic Melt-Fluid(s)-Apatite Systems at 50 MPa and 850-950 ℃. Chemical Geology, 384:94-111. https://doi.org/10.1016/j.chemgeo.2014.06.023 doi: 10.1016/j.chemgeo.2014.06.023
Dolejš, D., Baker, D. R., 2006. Fluorite Solubility in Hydrous Haplogranitic Melts at 100 MPa. Chemical Geology, 225(1/2):40-60. https://doi.org/10.1016/j.chemgeo.2005.08.007 doi: 10.1016/j.chemgeo.2005.08.007
Gabitov, R. I., Price, J. D., Watson, E. B., 2005. Solubility of Fluorite in Haplogranitic Melt of Variable Alkalis and Alumina Content at 800-1 000 ℃ and 100 MPa. Geochemistry, Geophysics, Geosystems, 6(3):Q03007. https://doi.org/10.1029/2004gc000870 doi: 10.1029/2004gc000870
Giesting, P. A., Filiberto, J., 2014. Quantitative Models Linking Igneous Amphibole Composition with Magma Cl and OH Content. American Mineralogist, 99(4):852-865. https://doi.org/10.2138/am.2014.4623 doi: 10.2138/am.2014.4623
Gualda, G. A. R., Ghiorso, M. S., Lemons, R. V., et al., 2012. Rhyolite-MELTS:A Modified Calibration of MELTS Optimized for Silica-Rich, Fluid-Bearing Magmatic Systems. Journal of Petrology, 53(5):875-890. https://doi.org/10.1093/petrology/egr080 doi: 10.1093/petrology/egr080
Holtz, F., Sato, H., Lewis, J., et al., 2005. Experimental Petrology of the 1991-1995 Unzen Dacite, Japan. Part I:Phase Relations, Phase Composition and Pre-Eruptive Conditions. Journal of Petrology, 46(2):319-337. https://doi.org/10.1093/petrology/egh077 doi: 10.1093/petrology/egh077
Hou, T., Charlier, B., Namur, O., et al., 2017. Experimental Study of Liquid Immiscibility in the Kiruna-Type Vergenoeg Iron-Fluorine Deposit, South Africa. Geochimica et Cosmochimica Acta, 203:303-322. https://doi.org/10.1016/j.gca.2017.01.025 doi: 10.1016/j.gca.2017.01.025
Huang, H., Wang, T., Zhang, Z. C., et al., 2018. Highly Differentiated Fluorine-Rich, Alkaline Granitic Magma Linked to Rare Metal Mineralization:A Case Study from the Boziguo'er Rare Metal Granitic Pluton in South Tianshan Terrane, Xinjiang, NW China. Ore Geology Reviews, 96:146-163. https://doi.org/10.1016/j.oregeorev.2018.04.021 doi: 10.1016/j.oregeorev.2018.04.021
Icenhower, J. P., London, D., 1997. Partitioning of Fluorine and Chlorine between Biotite and Granitic Melt:Experimental Calibration at 200 MPa H2O. Contributions to Mineralogy and Petrology, 127(1/2):17-29. https://doi.org/10.1007/s004100050262 doi: 10.1007/s004100050262
Iveson, A. A., Webster, J. D., Rowe, M. C., et al., 2017. Major Element and Halogen (F, Cl) Mineral-Melt-Fluid Partitioning in Hydrous Rhyodacitic Melts at Shallow Crustal Conditions. Journal of Petrology, 58(12):2465-2492. https://doi.org/10.1093/petrology/egy011 doi: 10.1093/petrology/egy011
Jiang, W. C., Li, H., Wu, J. H., et al., 2018. A Newly Found Biotite Syenogranite in the Huangshaping Polymetallic Deposit, South China:Insights into Cu Mineralization. Journal of Earth Science, 29(3):537-555. https://doi.org/10.1007/s12583-017-0974-7 doi: 10.1007/s12583-017-0974-7
Keppler, H., 1993. Influence of Fluorine on the Enrichment of High Field Strength Trace Elements in Granitic Rocks. Contributions to Mineralogy and Petrology, 114(4):479-488. https://doi.org/10.1007/bf00321752 doi: 10.1007/bf00321752
Keppler, H., Wyllie, P. J., 1991. Partitioning of Cu, Sn, Mo, W, U, and Th between Melt and Aqueous Fluid in the Systems Haplogranite-H2O-HCl and Haplogranite-H2O-HF. Contributions to Mineralogy and Petrology, 109(2):139-150. https://doi.org/10.1016/j.gca.2017.03.015 doi: 10.1016/j.gca.2017.03.015
Kohn, S., Dupree, R., Mortuza, M., et al., 1991. NMR Evidence for Five- and Six-Coordinated Aluminum Fluoride Complexes in F-Bearing Aluminosilicate Glasses. American Mineralogist, 76(1/2):309-312
Kress, V. C., Carmichael, I. S. E., 1991. The Compressibility of Silicate Liquids Containing Fe2O3 and the Effect of Composition, Temperature, Oxygen Fugacity and Pressure on Their Redox States. Contributions to Mineralogy and Petrology, 108(1/2):82-92. https://doi.org/10.1007/bf00307328 doi: 10.1007/bf00307328
Li, H. J., Hermann, J., 2015. Apatite as an Indicator of Fluid Salinity:An Experimental Study of Chlorine and Fluorine Partitioning in Subducted Sediments. Geochimica et Cosmochimica Acta, 166:267-297. https://doi.org/10.1016/j.gca.2015.06.029 doi: 10.1016/j.gca.2015.06.029
Li, X. Y., Zhang, C., Behrens, H., et al., 2018. Fluorine Partitioning between Titanite and Silicate Melt and Its Dependence on Melt Composition:Experiments at 50-200 MPa and 875-925 ℃. European Journal of Mineralogy, 30(1):33-44. https://doi.org/10.1127/ejm/2017/0029-2689 doi: 10.1127/ejm/2017/0029-2689
Lukkari, S., Holtz, F., 2007. Phase Relations of a F-Enriched Peraluminous Granite:An Experimental Study of the Kymi Topaz Granite Stock, Southern Finland. Contributions to Mineralogy and Petrology, 153(3):273-288. https://doi.org/10.1007/s00410-006-0146-8 doi: 10.1007/s00410-006-0146-8
Manning, D. A. C., 1981. The Effect of Fluorine on Liquidus Phase Relationships in the System Qz-Ab-Or with Excess Water at 1 kb. Contributions to Mineralogy and Petrology, 76(2):206-215. https://doi.org/10.1007/bf00371960 doi: 10.1007/bf00371960
Mathez, E. A., Webster, J. D., 2005. Partitioning Behavior of Chlorine and Fluorine in the System Apatite-Silicate Melt-Fluid. Geochimica et Cosmochimica Acta, 69(5):1275-1286. https://doi.org/10.1016/j.gca.2004.08.035 doi: 10.1016/j.gca.2004.08.035
McCubbin, F. M., Vander Kaaden, K. E., Tartèse, R., et al., 2015. Experimental Investigation of F, Cl, and OH Partitioning between Apatite and Fe-Rich Basaltic Melt at 1.0-1.2 GPa and 950-1 000 ℃. American Mineralogist, 100(8/9):1790-1802. https://doi.org/10.2138/am-2015-5233 doi: 10.2138/am-2015-5233
Mysen, B. O., Cody, G. D., Smith, A., 2004. Solubility Mechanisms of Fluorine in Peralkaline and Meta-Aluminous Silicate Glasses and in Melts to Magmatic Temperatures. Geochimica et Cosmochimica Acta, 68(12):2745-2769. https://doi.org/10.1016/j.gca.2003.12.015 doi: 10.1016/j.gca.2003.12.015
Pichavant, M., Manning, D., 1984. Petrogenesis of Tourmaline Granites and Topaz Granites; The Contribution of Experimental Data. Physics of the Earth and Planetary Interiors, 35(1/2/3):31-50. https://doi.org/10.1016/0031-9201(84)90032-3 doi: 10.1016/0031-9201(84)90032-3
Price, J. D., Hogan, J. P., Gilbert, M. C., et al., 1999. Experimental Study of Titanite-Fluorite Equilibria in the A-Type Mount Scott Granite:Implications for Assessing F Contents of Felsic Magma. Geology, 27(10):951-954. https://doi.org/10.1130/0091-7613(1999)027<0951:esotfe>2.3.co; 2 doi: 10.1130/0091-7613(1999)027<0951:esotfe>2.3.co;2
Scaillet, B., Macdonald, R., 2003. Experimental Constraints on the Relationships between Peralkaline Rhyolites of the Kenya Rift Valley. Journal of Petrology, 44(10):1867-1894. https://doi.org/10.1093/petrology/egg062 doi: 10.1093/petrology/egg062
Scaillet, B., Macdonald, R., 2004. Fluorite Stability in Silicic Magmas. Contributions to Mineralogy and Petrology, 147(3):319-329. https://doi.org/10.1007/s00410-004-0559-1 doi: 10.1007/s00410-004-0559-1
Schwab, R., Küstner, D., 1981. The Equilibrium Fugacities of Important Oxygen Buffers in Technology and Petrology. Neues Jahrbuch für Mineralogie, 140:112-142
Stebbins, J. F., Zeng, Q., 2000. Cation Ordering at Fluoride Sites in Silicate Glasses:A High-Resolution 19F NMR Study. Journal of Non-Crystalline Solids, 262(1/2/3):1-5. https://doi.org/10.1016/s0022-3093(99)00695-x doi: 10.1016/s0022-3093(99)00695-x
Tossell, J., 1993. Theoretical Studies of the Speciation of Al in F-Bearing Aluminosilicate Glasses. American Mineralogist, 78(1/2):16-22
Van den Bleeken, G., Koga, K. T., 2015. Experimentally Determined Distribution of Fluorine and Chlorine Upon Hydrous Slab Melting, and Implications for F-Cl Cycling through Subduction Zones. Geochimica et Cosmochimica Acta, 171:353-373. https://doi.org/10.1016/j.gca.2015.09.030 doi: 10.1016/j.gca.2015.09.030
Veksler, I. V., Dorfman, A. M., Kamenetsky, M., et al., 2005. Partitioning of Lanthanides and Y between Immiscible Silicate and Fluoride Melts, Fluorite and Cryolite and the Origin of the Lanthanide Tetrad Effect in Igneous Rocks. Geochimica et Cosmochimica Acta, 69(11):2847-2860. https://doi.org/10.1016/j.gca.2004.08.007 doi: 10.1016/j.gca.2004.08.007
Veksler, I. V., Thomas, R., Schmidt, C., 2002. Experimental Evidence of Three Coexisting Immiscible Fluids in Synthetic Granitic Pegmatite. American Mineralogist, 87(5/6):775-779. https://doi.org/10.2138/am-2002-5-621 doi: 10.2138/am-2002-5-621
Wang, L. X., Ma, C. Q., Zhang, C., et al., 2018. Halogen Geochemistry of I- and A-Type Granites from Jiuhuashan Region (South China):Insights into the Elevated Fluorine in A-Type Granite. Chemical Geology, 478:164-182. https://doi.org/10.1016/j.chemgeo.2017.09.033 doi: 10.1016/j.chemgeo.2017.09.033
Wang, L. X., Marks, M. A. W., Wenzel, T., et al., 2016. Halogen-Bearing Minerals from the Tamazeght Complex (Morocco):Constraints on Halogen Distribution and Evolution in Alkaline to Peralkaline Magmatic Systems. The Canadian Mineralogist, 54(6):1347-1368. https://doi.org/10.3749/canmin.1600007 doi: 10.3749/canmin.1600007
Webster, J. D., Goldoff, B. A., Flesch, R. N., et al., 2017. Hydroxyl, Cl, and F Partitioning between High-Silica Rhyolitic Melts-Apatite-Fluid(s) at 50-200 MPa and 700-1 000 ℃. American Mineralogist, 102(1):61-74. https://doi.org/10.2138/am-2017-5746 doi: 10.2138/am-2017-5746
Webster, J. D., Tappen, C. M., Mandeville, C. W., 2009. Partitioning Behavior of Chlorine and Fluorine in the System Apatite-Melt-Fluid. II: Felsic Silicate Systems at 200 MPa. Geochimica et Cosmochimica Acta, 73(3): 559-581. https://doi.org/10.1016/j.gca.2008.10.034
Wengorsch, T., 2013. Experimental Constraints on the Storage Conditions of a Tephriphonolite from the Cumbre Vieja Volcano (La Palma, Canary Islands) at 200 and 400 MPa: [Dissertation]. Leibniz Universität Hannover, Hannover. 97
Xiong, X. L., Rao, B., Chen, F. R., et al., 2002. Crystallization and Melting Experiments of a Fluorine-Rich Leucogranite from the Xianghualing Pluton, South China, at 150 MPa and H2O-Saturated Conditions. Journal of Asian Earth Sciences, 21(2):175-188. https://doi.org/10.1016/s1367-9120(02)00030-5 doi: 10.1016/s1367-9120(02)00030-5
Zeng, Q., Stebbins, J. F., 2000. Fluoride Sites in Aluminosilicate Glasses:High-Resolution19F NMR Results. American Mineralogist, 85(5/6):863-867. https://doi.org/10.2138/am-2000-5-630 doi: 10.2138/am-2000-5-630
Zhang, C., Holtz, F., Ma, C. Q., et al., 2012. Tracing the Evolution and Distribution of F and Cl in Plutonic Systems from Volatile-Bearing Minerals:A Case Study from the Liujiawa Pluton (Dabie Orogen, China). Contributions to Mineralogy and Petrology, 164(5):859-879. https://doi.org/10.1007/s00410-012-0778-9 doi: 10.1007/s00410-012-0778-9
Zhang, C., Koepke, J., Albrecht, M., et al., 2017. Apatite in the Dike-Gabbro Transition Zone of Mid-Ocean Ridge:Evidence for Brine Assimilation by Axial Melt Lens. American Mineralogist, 102(3):558-570. https://doi.org/10.2138/am-2017-5906 doi: 10.2138/am-2017-5906
Zhang, C., Koepke, J., Wang, L. X., et al., 2016. A Practical Method for Accurate Measurement of Trace Level Fluorine in Mg- and Fe-Bearing Minerals and Glasses Using Electron Probe Microanalysis. Geostandards and Geoanalytical Research, 40(3):351-363. https://doi.org/10.1111/j.1751-908x.2015.00390.x doi: 10.1111/j.1751-908x.2015.00390.x
Fluorine is ubiquitous in magmatic and hydrothermal systems, with concentrations ranging widely from a few ppm to several percent (Aiuppa et al., 2009; Bailey, 1977). High concentrations of fluorine in granitic melts may strongly affect phase stability (e.g., Veksler et al., 2002; Price et al., 1999; Pichavant and Manning, 1984; Manning, 1981) as well as melt density and viscosity (e.g., Baasner et al., 2013; Bartels et al., 2013; Baker and Vaillancourt, 1995; Dingwell and Mysen, 1985). Potential effects of fluorine on complexing specific rare metals have been revealed by experimental studies (Aseri et al., 2015; Veksler et al., 2005; Keppler, 1993; Keppler and Wyllie, 1991) and by investigations of natural granitic systems (Ahmed et al., 2018; Huang et al., 2018; Jiang et al., 2018; Wang et al., 2018; Badanina et al., 2006). Therefore, tracing the evolution of fluorine in magmatic systems has important implications for the petrogenesis and origin of some ore deposits.
Besides the direct analysis of fluorine concentration in melt inclusions, which is usually only possible for volcanic rocks, there are several indirect approaches based on fluorine concentration in magmatic fluorine-bearing phases, such as apatite, biotite, amphibole and titanite (Li et al., 2018; Chelle-Michou and Chiaradia, 2017; Zhang et al., 2017, 2012; Bao et al., 2016; Wang et al., 2016; Li and Hermann, 2015; Van den Bleeken and Koga, 2015; Giesting and Filiberto, 2014), and on experimentally determined mineral/melt partition coefficients (Li et al., 2018; Webster et al., 2017; Li and Hermann, 2015; McCubbin et al., 2015; Doherty et al., 2014; Webster et al., 2009; Mathez and Webster, 2005; Icenhower and London, 1997).
The geochemical significance of fluorine and its behavior in natural magmatic systems are basically dependent on its incorporation mechanisms in silicate melts, which have been studied for simplified haplogranitic systems through nuclear magnetic resonance (NMR) spectroscopy, Raman spectroscopy (e.g., Mysen et al., 2004; Kohn et al., 1991) and ab initio quantum mechanical calculations (e.g., Tossell, 1993). The results indicate that, in replacing oxygen in the silicate melt structure, fluorine can form a range of bonding species whose nature depends on the specific melt composition, although bonding with network-modifying cations is commonly favored. Based on a thermodynamic treatment of fluorine-rich experimental melt composition data, Dolejš and Baker (2006) and Li et al. (2018) showed that fluorine-oxygen binary mixing is non-ideal. Nevertheless, a detailed quantitative evaluation of the maximum fluorine contents in complex natural silicate melts is still challenging.
In magmatic and hydrothermal systems, besides apatite and mica as the most common F-bearing minerals, fluorite (CaF2) also occurs frequently as a fluoride mineral. Although it is natural to expect high contents of Ca and/or F in such systems, a clear relation between the occurrence of fluorite and melt composition is lacking, as revealed by the compositions of natural glasses and melt inclusions at the saturation of fluorite (see below). Previous experiments indicate that magmatic fluorite can be in equilibrium with silicate melts with a wide range of fluorine concentration, which seems positively correlated with the excess of Ca, Na and K in peralkaline melt, as well as with the excess Al in peraluminous melt (Lukkari and Holtz, 2007; Dolejš and Baker, 2006; Scaillet and Macdonald, 2004). Dolejš and Baker (2006) observed that peralkaline melts tend to have relatively lower fluorine concentrations than metaluminous or peraluminous melts at the saturation of fluorite. However, the general dependence of fluorite saturation on melt F concentration and other compositional parameters, and/or temperature and pressure, is unclear. In this paper, we present a synthesis of previous experimental data and propose a predictive model that can be used to estimate the fluorine concentration in melt at the saturation of fluorite, which further allows us to place constraints on the incorporation mechanism of fluorine in silicate melt.
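For practical application, the regression quoted in the abstract can be evaluated directly; the short Python sketch below simply encodes those published expressions (the function names are arbitrary, and the cation mole fractions entering FSI must be computed from the melt analysis beforehand, following the treatment in Section 2):

```python
import math

def fsi(al_nm, fe2, mg, ca, na, k, si, ti, al_nf, fe3):
    """Fluorite saturation index, with all cations as mole fractions (expression from the abstract)."""
    return (3*al_nm + fe2 + 6*mg + ca + 1.5*na - k) / (si + ti + al_nf + fe3)

def f_at_fluorite_saturation(T_K, P_MPa, h2o_wt, fsi_value):
    """F in melt (wt.%, anhydrous basis) at fluorite saturation, per the regression in the abstract."""
    return (1.130 - 2.014*math.exp(1000.0/T_K) + 2.747*math.exp(P_MPa/T_K)
            + 0.111*h2o_wt + 17.641*fsi_value)

# Example within the calibrated range: 800 degrees C (1 073 K), 200 MPa, 4 wt.% H2O, FSI = 0.15
print(f_at_fluorite_saturation(1073.0, 200.0, 4.0, 0.15))  # about 2.4 wt.% F
```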
There are several experiments involving saturation of fluorite in fluorine-rich magmatic systems (Li et al., 2018; Hou et al., 2017; Lukkari and Holtz, 2007; Dolejš and Baker, 2006; Gabitov et al., 2005; Scaillet and Macdonald, 2003; Xiong et al., 2002; Price et al., 1999; Icenhower and London, 1997; see summary Table 1 and details in Table S1), which cover relatively wide ranges in pressure (50–500 MPa), temperature (540–1 010 ℃), and melt composition (e.g., SiO2 59 wt.%–79 wt.%, CaO 0.1 wt.%–5.8 wt.%). Particularly, the silicate melts in equilibrium with fluorite involve peralkaline, metaluminous and peraluminous compositions, spanning a large range in aluminum saturation index [ASI=molar Al2O3/(CaO+Na2O+K2O), also termed as A/CNK] of 0.4–1.7 (Fig. 1a).
References n P (MPa) T (℃) fO2 (ΔNNO) aH2O Melt composition (wt.%) A/NK A/CNK
Melt composition columns, in order: SiO2 Al2O3 TiO2 FeOT MnO MgO CaO Na2O K2O F
Icenhower and London (1997) 14 200 640–680 0 1.0 67–70 13–14 0–0.04 0.3–0.6 ~0.04 ~0.05 ~0.4 3.0–4.0 4.1–4.8 0.8–2.0 1.2–1.4 1.1–1.3
Price et al. (1999) 7 200 850 0 1.0 70–73 12–13 ~0.2 1.0–1.7 ~0.06 0.1–0.2 0.6–1.1 3.5–4.0 4.0–5.0 1.2–1.6 1.1–1.2 0.9–1.0
Xiong et al. (2002) 1 150 540 0 1.0 65.73 17.05 0.00 0.55 0.00 0.00 0.56 4.82 2.78 3.03 1.6 1.4
Scaillet and Macdonald (2003) 25 52–156 661–794 -3–+4 n.d. 64–78 8–12 0.0–0.7 1.3–8.8 0.0–0.2 ~0.02 0.1–0.3 3.9–8.9 3.6–5.1 0.4–4.3 0.4–1.0 0.4–0.9
Gabitov et al. (2005) 18 100 850–1 000 -3–0 n.d. 68–76 8–12 0.00 0.00 0.00 0.00 0.2–3.0 3.5–8.0 3.4–6.3 0.3–1.7 0.5–1.3 0.4–1.1
Dolejš and Baker (2006) 43 100 800–950 n.d. 1.0 66–74 10–13 0.00 0.00 0.00 0.00 0.1–2.5 2.9–5.1 3.3–4.5 0.3–6.0 0.8–1.5 0.6–1.2
Lukkari and Holtz (2007) 36 100–500 575–750 0 0.6–1.0 62–69 14–18 0.00 0.2–0.9 0.00 ~0.01 0.1–0.8 3.2–5.2 3.4–4.7 1.8–5.9 1.2–1.7 1.1–1.7
Hou et al. (2017) 14 100–200 1 010 -2.5–+2.5 0.1–1.0 59–73 8.1–10.2 0.1–1.2 6.6–13.1 0.2–0.4 0.1–0.7 1.9–5.5 1.3–2.2 3.5–4.8 2.4–4.2 1.0–1.6 0.5–0.7
Li et al. (2018) 1 50 900 0 1.0 60.89 17.90 0.70 2.00 0.10 0.48 2.21 6.78 3.43 3.59 1.2 0.9
Notes: n denotes the number of experiments. The value of fO2 is expressed as deviation in log unit from nickel-nickel oxide (NNO) oxygen buffer. n.d. not determined. * Details of experimental data are listed in Table S1.
Table 1. Summary of experiments with saturation of fluorite*
Figure 1. Plots of compositional key parameters of experimental silicate melts at the saturation of fluorite. (a) A/NK vs. A/CNK. (b) F concentration in melt (wt.%, normalized on anhydrous basis) vs. A/CNK. (c) F concentration vs. CaO content. The field of natural glasses and melt inclusions are after the compilation of Dolejš and Baker (2006). (d) Mole fractions of CaO vs. F2O-1. The solid line denotes stoichiometry ratio of fluorite. Data from different references are grouped with different legends.
The fluorine concentration in silicate melt at the saturation of fluorite ($ {C}_{\mathrm{F}\mathrm{ }\mathrm{i}\mathrm{n}\mathrm{ }\mathrm{m}\mathrm{e}\mathrm{l}\mathrm{t}}^{\mathrm{F}\mathrm{l}-\mathrm{s}\mathrm{a}\mathrm{t}} $) varies widely from 0.3 wt.% to 6.7 wt.%, which does not show any apparent correlation with A/CNK (Fig. 1b). Scaillet and Macdonald (2004) and Lukkari and Holtz (2007) proposed that $ {C}_{\mathrm{F}\mathrm{ }\mathrm{i}\mathrm{n}\mathrm{ }\mathrm{m}\mathrm{e}\mathrm{l}\mathrm{t}}^{\mathrm{F}\mathrm{l}-\mathrm{s}\mathrm{a}\mathrm{t}} $ is largely dependent on ASI: for melts with ASI < 1, $ {C}_{\mathrm{F}\mathrm{ }\mathrm{i}\mathrm{n}\mathrm{ }\mathrm{m}\mathrm{e}\mathrm{l}\mathrm{t}}^{\mathrm{F}\mathrm{l}-\mathrm{s}\mathrm{a}\mathrm{t}} $ increases with decreasing ASI, while the opposite behavior is observed for melts with ASI > 1. However, this simple correlation may be correct for compositions in which only the Al proportions are changing but it is not systematically observed for a much larger dataset summarized in this study (Fig. 1b), suggesting that the potential effects of other cations (e.g., Fe and Mg) must be considered.
There seems to be a dichotomy between $ {C}_{\mathrm{F}\mathrm{ }\mathrm{i}\mathrm{n}\mathrm{ }\mathrm{m}\mathrm{e}\mathrm{l}\mathrm{t}}^{\mathrm{F}\mathrm{l}-\mathrm{s}\mathrm{a}\mathrm{t}} $ and CaO content (Fig. 1c): melts with high F contents usually contain less than 1 wt.% CaO; on the other hand, most high Ca melts plot around the stoichiometry line of CaF2 (the fluorite saturation is attained in melts with a molar ratio of Ca : F of 1 : 2). Compared to natural glasses and melt inclusions, which show a much stronger dichotomy between Ca and F (Dolejš and Baker, 2006), the experimental melts at the saturation of fluorite (except for having a lower limit at ~0.3 wt.%) show a roughly overlapping range in fluorine concentration, but a large range in CaO content. These data imply that, saturation of fluorite is preferentially achieved in either high F or high Ca magmatic systems; in both cases, the concentration of the less abundant component is more crucial than the other for the saturation of fluorite (Dolejš and Baker, 2006). However, for magmatic systems rich in both F and Ca, their concentrations in silicate melt seem to be buffered by the stoichiometry of fluorite, which do not follow the general dichotomy distribution.
According to Dolejš and Baker (2006), the formation of fluorite in magmatic systems is controlled by the simplified reaction as follows
$\mathrm{CaO^{melt}}+\mathrm{F_2O_{-1}^{melt}}=\mathrm{CaF_2^{fluorite}}$
and the apparent equilibrium constant K can be written as
$K=a_{\mathrm{CaF_2}}/\left(X_{\mathrm{CaO}}\cdot X_{\mathrm{F_2O_{-1}}}^{\,c}\right)$
in which $X_{\mathrm{CaO}}$ and $X_{\mathrm{F_2O_{-1}}}$ are mole anion fractions on an anhydrous basis, and c indicates the stoichiometric coefficient. Particularly, the virtual component F2O-1 is defined by the substitution of $2\mathrm{F}^{-}$ for $1\mathrm{O}^{2-}$ in silicate melts. At the saturation of fluorite, $a_{\mathrm{CaF_2}}$ equals unity, and thus log $X_{\mathrm{CaO}}$ and log $X_{\mathrm{F_2O_{-1}}}$ should be correlated in a linear relation. Dolejš and Baker (2006) suggested that, based on their experimental data, the value of c increases systematically from peraluminous, through subaluminous, to peralkaline systems; however, this simple trend is not valid for all the data compiled in Fig. 1d, implying there should be important contributions from other cations, such as Fe and Mg, to the value of c.
2. QUANTITATIVE MODEL
Quantitative relations have been proposed in literature to correlate $ {C}_{\mathrm{F}\mathrm{ }\mathrm{i}\mathrm{n}\mathrm{ }\mathrm{m}\mathrm{e}\mathrm{l}\mathrm{t}}^{\mathrm{F}\mathrm{l}-\mathrm{s}\mathrm{a}\mathrm{t}} $ as a function of melt composition. Scaillet and Macdonald (2004) introduced a cation ratio (MFe), which is expressed as
MFe exhibits a negative linear correlation with $ {C}_{\mathrm{F}\mathrm{ }\mathrm{i}\mathrm{n}\mathrm{ }\mathrm{m}\mathrm{e}\mathrm{l}\mathrm{t}}^{\mathrm{F}\mathrm{l}-\mathrm{s}\mathrm{a}\mathrm{t}} $ for the peralkaline silicate melts of Scaillet and Macdonald (2004), which indicates that the solubility of fluorite is controlled by the abundance of network-forming cations (Si, Al, Fe3+) relative to network-modifying cations (Na, K, Ca2+, Fe2+). For peraluminous silicic melts, Dolejš and Baker (2006) introduced the excess Al2O3 over alkali oxides for correlating melt composition with $ {C}_{\mathrm{F}\mathrm{ }\mathrm{i}\mathrm{n}\mathrm{ }\mathrm{m}\mathrm{e}\mathrm{l}\mathrm{t}}^{\mathrm{F}\mathrm{l}-\mathrm{s}\mathrm{a}\mathrm{t}} $, which is calculated as
in which the cation oxides denote mole fractions.
In order to correlate melt composition with $ {C}_{\mathrm{F\ in\ melt}}^{\mathrm{Fl-sat}} $ for more complex natural multicomponent silicate melts, we propose a new melt composition parameter, termed the Fluorite Saturation Index (FSI), which treats peralkaline, metaluminous and peraluminous silicate melts consistently. Inspired by the MFe parameter of Scaillet and Macdonald (2004), a preliminary expression of FSI is
AlNF indicates network-forming Al, whereas AlNM indicates network-modifying Al. Fe2+ and Fe3+ fractions were calculated with the model of Kress and Carmichael (1991). With network-forming cations in the denominator and network-modifying cations in the numerator, FSI is expected to be positively correlated with $ {C}_{\mathrm{F\ in\ melt}}^{\mathrm{Fl-sat}} $.
According to Dolejš and Baker (2006), for peralkaline and metaluminous melts (A/CNK≤1),
in which Altot is total Al mole fraction. For peraluminous melts (A/CNK > 1),
and AlNM is calculated as
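A sketch of these assignments, assuming the conventional charge-balance scheme adopted by Dolejš and Baker (2006) (the exact expressions of Eqs. 6–8 may differ), is

$$ \mathrm{A/CNK}\le 1:\quad \mathrm{Al_{NF}}=\mathrm{Al_{tot}},\qquad \mathrm{Al_{NM}}=0 $$

$$ \mathrm{A/CNK}>1:\quad \mathrm{Al_{NF}}=\mathrm{Na}+\mathrm{K}+2\mathrm{Ca},\qquad \mathrm{Al_{NM}}=\mathrm{Al_{tot}}-\left(\mathrm{Na}+\mathrm{K}+2\mathrm{Ca}\right) $$

with all quantities expressed as cation mole fractions.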
As shown in Fig. 2a, an approximately positive linear correlation is observed between measured $ {C}_{\mathrm{F\ in\ melt}}^{\mathrm{Fl-sat}} $ and FSI for the majority of the data collected in this study. A few significant deviations occur for melt compositions with high F contents (> 4.5 wt.%) from Dolejš and Baker (2006) that are difficult to interpret; one possibility is an analytical issue involving the assimilation of fine-grained fluorite during glass measurements.
Figure 2. Plots of F concentration in melt at the saturation of fluorite ($ {C}_{\mathrm{F}\mathrm{ }\mathrm{i}\mathrm{n}\mathrm{ }\mathrm{m}\mathrm{e}\mathrm{l}\mathrm{t}}^{\mathrm{F}\mathrm{l}-\mathrm{s}\mathrm{a}\mathrm{t}} $) vs. fluorite saturation index (FSI). (a) $ {C}_{\mathrm{F}\mathrm{ }\mathrm{i}\mathrm{n}\mathrm{ }\mathrm{m}\mathrm{e}\mathrm{l}\mathrm{t}}^{\mathrm{F}\mathrm{l}-\mathrm{s}\mathrm{a}\mathrm{t}} $ vs. preliminary FSI. (b) $ {C}_{\mathrm{F}\mathrm{ }\mathrm{i}\mathrm{n}\mathrm{ }\mathrm{m}\mathrm{e}\mathrm{l}\mathrm{t}}^{\mathrm{F}\mathrm{l}-\mathrm{s}\mathrm{a}\mathrm{t}} $ vs. updated FSI. The data in the marked circles are excluded from multiple linear regression. Data from different references are grouped with different legends.
For evaluating the potential effect of FSI on $ {C}_{\mathrm{F}\mathrm{ }\mathrm{i}\mathrm{n}\mathrm{ }\mathrm{m}\mathrm{e}\mathrm{l}\mathrm{t}}^{\mathrm{F}\mathrm{l}-\mathrm{s}\mathrm{a}\mathrm{t}} $, as well as the potential effects of pressure, temperature, and melt H2O content, we regress the following multiple linear equation based on experimental data
in which T is temperature in Kelvin, P is pressure in MPa, $ {C}_{\mathrm{m}\mathrm{e}\mathrm{l}\mathrm{t}}^{\mathrm{H}2\mathrm{O}} $ is melt H2O content in wt.%, and a, b, c, d and e are constants. The F concentration, $ {C}_{\mathrm{F}\mathrm{ }\mathrm{i}\mathrm{n}\mathrm{ }\mathrm{m}\mathrm{e}\mathrm{l}\mathrm{t}}^{\mathrm{F}\mathrm{l}-\mathrm{s}\mathrm{a}\mathrm{t}} $, is given in wt.% on an anhydrous basis. The regression result of Eq. 10 involving the preliminary FSI (expressed as Eq. 5) is listed as No. 1 in Table 2, which yields a coefficient of determination (R2) of 0.749 and supports a dominant effect of FSI as indicated by the low relative standard deviation of the coefficient of FSI, in comparison to that of other coefficients.
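Based on the regressors listed in Table 2 (an intercept, exp(1 000/T), exp(P/T), melt H2O and FSI), Eq. 10 presumably has an additive form of the type

$$ {C}_{\mathrm{F\ in\ melt}}^{\mathrm{Fl-sat}}=a+b\,\exp\!\left(\frac{1\,000}{T}\right)+c\,\exp\!\left(\frac{P}{T}\right)+d\,{C}_{\mathrm{melt}}^{\mathrm{H2O}}+e\,\mathrm{FSI} $$

which should be read as a sketch of the assumed functional form rather than as the exact published expression.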
Table 2. Coefficients of the multiple linear regressions for $ {C}_{\mathrm{F\ in\ melt}}^{\mathrm{Fl-sat}} $

| Model No. | R2 | Intercept | Exp (1 000/T) | Exp (P/T) | H2O | FSI (preliminary) | FSI (updated) | AlNM/NFC | Fe2+/NFC | Mg/NFC | Ca/NFC | Na/NFC | K/NFC |
| 1 | 0.749 | -2.870 | -0.836 | 2.291 | 0.069 | 17.686 | - | - | - | - | - | - | - |
| 1 (sd) | | 0.890 | 0.367 | 0.805 | 0.048 | 0.994 | - | - | - | - | - | - | - |
| 2 | 0.781 | 1.491 | -2.010 | 2.739 | 0.079 | - | - | 54.985 | 16.590 | 102.931 | 16.793 | 24.970 | -19.087 |
| 2 (sd) | | 1.504 | 0.508 | 0.764 | 0.064 | - | - | 4.449 | 5.197 | 54.320 | 10.594 | 2.718 | 12.594 |
| 3 | 0.789 | 1.130 | -2.014 | 2.747 | 0.111 | - | 17.641 | - | - | - | - | - | - |
| 3 (sd) | | 0.874 | 0.355 | 0.733 | 0.044 | - | 0.881 | - | - | - | - | - | - |

Notes: For the multiple linear regressions, temperature is in Kelvin, pressure in MPa, and H2O content in wt.%; the columns from H2O to K/NFC are the melt compositional parameters. NFC (network-forming cations) = Si + Ti + AlNF + Fe3+. sd is the standard deviation of the coefficient in the row above. Expressions of the preliminary FSI and updated FSI are given as Eq. 5 and Eq. 14 in the text, respectively.
In order to better constrain quantitatively the individual contributions of different network-modifying cations, we performed another multiple linear regression for the following equation
in which NFC is the sum of network-forming cations
The constant hn denotes the effect of each network-modifying cation. The regression result of Eq. 11 is listed as No. 2 in Table 2, which yields a better R2 of 0.781 and indicates that
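Based on the variables defined above and on the columns of Table 2, Eq. 11 presumably replaces the single FSI term of Eq. 10 by one term per network-modifying cation (a sketch; the exact published forms of Eqs. 11–13 may differ):

$$ {C}_{\mathrm{F\ in\ melt}}^{\mathrm{Fl-sat}}=a+b\,\exp\!\left(\frac{1\,000}{T}\right)+c\,\exp\!\left(\frac{P}{T}\right)+d\,{C}_{\mathrm{melt}}^{\mathrm{H2O}}+\sum_{n}{h}_{n}\,\frac{{M}_{n}}{\mathrm{NFC}},\qquad \mathrm{NFC}=\mathrm{Si}+\mathrm{Ti}+\mathrm{Al_{NF}}+\mathrm{Fe^{3+}} $$

where Mn runs over the network-modifying cations AlNM, Fe2+, Mg, Ca, Na and K, and the fitted hn values are those listed for regression No. 2 in Table 2 (e.g., about 103 for Mg and −19 for K).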
The above quantitative relation demonstrates that, among the network modifiers, the largest contribution is predicted for Mg. Correspondingly, an updated expression of FSI can be written as
Introducing the updated FSI into Eq. 10 yields a slightly better regression, with an R2 of 0.788 and a lower standard deviation of the FSI coefficient (see regression result No. 3 in Table 2). Therefore, we propose that the updated FSI is an appropriate parameter to describe collectively the effect of melt composition on $ {C}_{\mathrm{F\ in\ melt}}^{\mathrm{Fl-sat}} $ (see Fig. 2b), with additional minor influences from pressure, temperature and melt H2O content. The updated FSI values calculated according to Eq. 14 for the reference data range from 0.04 to 0.42.
A comparison between experimentally determined $ {C}_{\mathrm{F}\mathrm{ }\mathrm{i}\mathrm{n}\mathrm{ }\mathrm{m}\mathrm{e}\mathrm{l}\mathrm{t}}^{\mathrm{F}\mathrm{l}-\mathrm{s}\mathrm{a}\mathrm{t}} $ and that predicted from the empirical model shows that they are in general agreement and the discrepancy is mostly within ±1 wt.% (Fig. 3a). The variation is less than 0.62 wt.% with a confidence interval of 95%. However, the maximum discrepancy of $ {C}_{\mathrm{F}\mathrm{ }\mathrm{i}\mathrm{n}\mathrm{ }\mathrm{m}\mathrm{e}\mathrm{l}\mathrm{t}}^{\mathrm{F}\mathrm{l}-\mathrm{s}\mathrm{a}\mathrm{t}} $ between calculated and measured values can be large (ca. ±2.0 wt.% F) in some cases. We propose that these large discrepancies between modeled and measured $ {C}_{\mathrm{F}\mathrm{ }\mathrm{i}\mathrm{n}\mathrm{ }\mathrm{m}\mathrm{e}\mathrm{l}\mathrm{t}}^{\mathrm{F}\mathrm{l}-\mathrm{s}\mathrm{a}\mathrm{t}} $ may be partly due to large analytical errors [e.g., see Zhang et al. (2016) for the potential analytical issues in EPMA of F concentration] in some studies.
Figure 3. (a) Plots of measured and estimated $ {C}_{\mathrm{F}\mathrm{ }\mathrm{i}\mathrm{n}\mathrm{ }\mathrm{m}\mathrm{e}\mathrm{l}\mathrm{t}}^{\mathrm{F}\mathrm{l}-\mathrm{s}\mathrm{a}\mathrm{t}} $. The solid line is 1 : 1 line. The two dotted lines denote uncertainty of ±0.62 wt.% with a confidence interval of 95%. (b) Histogram of discrepancy in F concentration between measured and estimated $ {C}_{\mathrm{F}\mathrm{ }\mathrm{i}\mathrm{n}\mathrm{ }\mathrm{m}\mathrm{e}\mathrm{l}\mathrm{t}}^{\mathrm{F}\mathrm{l}-\mathrm{s}\mathrm{a}\mathrm{t}} $. Data from different references and from this study are grouped with different legends.
The uncertainty of $ {C}_{\mathrm{F\ in\ melt}}^{\mathrm{Fl-sat}} $ calculated with the solubility model expressed as Eq. 11 depends on the uncertainties of the input parameters, namely temperature, pressure, $ {C}_{\mathrm{melt}}^{\mathrm{H2O}} $ and FSI. We modeled this uncertainty propagation by Monte Carlo simulation, and the results indicate that, if common uncertainty ranges of these parameters are considered, the uncertainty of the calculated $ {C}_{\mathrm{F\ in\ melt}}^{\mathrm{Fl-sat}} $ is derived primarily from temperature and FSI. As shown in Fig. 4a, for a given condition with a temperature of 700 ℃, a pressure of 100±10 MPa, $ {C}_{\mathrm{melt}}^{\mathrm{H2O}} $ of (4.0±1.0) wt.% and FSI of 0.2±0.02, increasing the standard deviation (sd) of temperature from zero to 100 ℃ increases the sd of $ {C}_{\mathrm{F\ in\ melt}}^{\mathrm{Fl-sat}} $ from 0.35 to 0.75 wt.%. Alternatively, as shown in Fig. 4b, for a given condition with a temperature of 700±50 ℃, a pressure of 100±10 MPa, $ {C}_{\mathrm{melt}}^{\mathrm{H2O}} $ of (4.0±1.0) wt.% and FSI of 0.2, increasing the sd of FSI from zero to 0.03 increases the sd of $ {C}_{\mathrm{F\ in\ melt}}^{\mathrm{Fl-sat}} $ from 0.3 to 0.6 wt.%. The sd values of the calculated $ {C}_{\mathrm{F\ in\ melt}}^{\mathrm{Fl-sat}} $ in these two simulated cases roughly overlap with the intrinsic model uncertainty of ±0.62 wt.% at the 95% confidence interval (Fig. 3a).
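A minimal sketch of this type of Monte Carlo uncertainty propagation is given below, assuming that Eq. 10 has the additive form outlined above and using the coefficients of regression No. 3 in Table 2; the variable names and the exact implementation used for Fig. 4 are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # number of Monte Carlo draws

# Coefficients of regression No. 3 in Table 2 (updated FSI), with the assumed form
# C_F = a + b*exp(1000/T) + c*exp(P/T) + d*H2O + e*FSI (T in K, P in MPa, H2O in wt.%).
a, b, c, d, e = 1.130, -2.014, 2.747, 0.111, 17.641

def c_f_flsat(t_kelvin, p_mpa, h2o_wt, fsi):
    """F concentration (wt.%) in melt at fluorite saturation (assumed Eq. 10 form)."""
    return a + b * np.exp(1000.0 / t_kelvin) + c * np.exp(p_mpa / t_kelvin) + d * h2o_wt + e * fsi

# Example inputs (cf. Fig. 4a at sd(T) = 50): all uncertainties are 1 sd, assumed Gaussian.
T = rng.normal(700.0 + 273.15, 50.0, n)   # temperature, K (700 +/- 50 deg C)
P = rng.normal(100.0, 10.0, n)            # pressure, MPa
H2O = rng.normal(4.0, 1.0, n)             # melt H2O content, wt.%
FSI = rng.normal(0.20, 0.02, n)           # fluorite saturation index

cf = c_f_flsat(T, P, H2O, FSI)
print(f"C_F at fluorite saturation: mean = {cf.mean():.2f} wt.%, 1 sd = {cf.std():.2f} wt.%")
```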
Figure 4. Monte Carlo simulation of propagated uncertainty (standard deviation, 1sd) for the calculated $ {C}_{\mathrm{F}\mathrm{ }\mathrm{i}\mathrm{n}\mathrm{ }\mathrm{m}\mathrm{e}\mathrm{l}\mathrm{t}}^{\mathrm{F}\mathrm{l}-\mathrm{s}\mathrm{a}\mathrm{t}} $. (a) The dependence of 1sd of $ {C}_{\mathrm{F}\mathrm{ }\mathrm{i}\mathrm{n}\mathrm{ }\mathrm{m}\mathrm{e}\mathrm{l}\mathrm{t}}^{\mathrm{F}\mathrm{l}-\mathrm{s}\mathrm{a}\mathrm{t}} $ on variable 1sd of temperature. (b) The dependence of 1sd of $ {C}_{\mathrm{F}\mathrm{ }\mathrm{i}\mathrm{n}\mathrm{ }\mathrm{m}\mathrm{e}\mathrm{l}\mathrm{t}}^{\mathrm{F}\mathrm{l}-\mathrm{s}\mathrm{a}\mathrm{t}} $ on variable 1sd of FSI.
3. NEW EXPERIMENTS
In order to test our empirical model for predicting melt F concentration at fluorite saturation in magmatic systems, we performed dissolution experiments for a variety of magmatic systems with fluorite as a stable phase. The starting materials, experimental approaches, analytical methods and experimental results are described below.
3.1. Starting Materials
Four different starting glasses were used in our experiments, with compositions corresponding to tephriphonolite (SG-01, ref. Wengorsch, 2013), andesite (SG-02, ref. Botcharnikov et al., 2008), dacite (SG-03, ref. Holtz et al., 2005) and rhyolite (SG-04, ref. Bartels et al., 2013) (Table S2). Starting glasses SG-01 and SG-03 were synthesized by melting natural rocks to glass. Starting glasses SG-02 and SG-04 were synthesized by melting mixed powders of oxides (SiO2, Al2O3, TiO2, Fe2O3, Mn3O4, MgO) and carbonates (CaCO3, Na2CO3, K2CO3) in the desired proportions. In both cases, melting was performed in a platinum crucible at 1 600 ℃ in a muffle furnace for about 3 hours, and the obtained glasses were subsequently crushed in a steel mortar and ground in an agate mortar. The melting and crushing were repeated two or three times to yield homogeneous starting glasses. A colorless, gem-quality natural fluorite was used as the fluorite source and was added to the experimental systems to ensure fluorite saturation. The fluorite was crushed and powdered, and grains with a size of 100−200 μm were sieved for experimental use.
3.2. Experimental Approaches
Powders of dry silicate glass and fluorite were weighed in an assigned ratio of 9 : 1 and mixed by hand in a mortar. For each experimental run, about 50 mg of the powder mixture was loaded into a gold capsule together with ~8 wt.% water. The capsules were then closed by arc welding; during this procedure the capsules were cooled with surrounding wet paper that had been frozen with liquid N2. The experiments were performed in an Ar-pressurized internally heated pressure vessel (IHPV) at the Institute of Mineralogy, Leibniz University Hannover. Details of the apparatus are described in Berndt et al. (2002). The experimental duration was 7 days for all runs, at pressures of 1 or 2 kbar and temperatures of 800, 850 or 900 ℃. All experiments were performed at the same oxygen fugacity (fO2), close to the nickel-nickel oxide (NNO) buffer, by adjusting the H2 partial pressure in the vessel and applying the equation of Schwab and Küstner (1981) assuming water-saturated conditions. Sample quenching in the IHPV was achieved by dropping the capsule directly down to a cold zone at ~50 ℃, yielding a rapid quench rate of ~150 ℃/s (Berndt et al., 2002). After confirming that there was no leakage by reweighing the capsules after the experiments, we opened the capsules and mounted the experimental products in epoxy for compositional analysis.
3.3. Analytical Methods
Compositions of the experimental products were measured by electron probe microanalysis (EPMA) using a CAMECA SX100 electron microprobe at the Institute of Mineralogy, Leibniz University Hannover. The calibration materials included synthetic oxides (Al2O3, TiO2, Fe2O3, MgO, MnO), wollastonite (for Si and Ca), albite (for Na), orthoclase (for K), apatite (for P) and strontium fluoride (for F). For analyzing glass compositions, a beam size of 20 μm and a beam current of 10 nA were used. F concentration was analyzed using a PC1 diffraction crystal, applying the method of Zhang et al. (2016). In order to minimize potential losses of Na and K during analysis, these two elements were analyzed first. Peak-intensity counting time was 20 s for F and 10 s for the other elements.
3.4. Experimental Results
Fluorite and silicate melt (quenched as glass) were observed as stable phases in all experimental products. For the experiments using the rhyolitic starting glass (SG-04), no other phases were observed. For the experiments using the tephriphonolitic, andesitic and dacitic starting glasses (SG-01, SG-02 and SG-03), additional phases, including amphibole, biotite, titanite, ilmenite and apatite, were variably present (Table 3). As we used fluorite powder as a starting material, it is important to examine whether global saturation of fluorite was achieved inside the capsules. For this purpose, we performed element mapping for selected samples (No. 1a and No. 4b), and the results show no gradient in any cation or in F in the experimental glasses as a function of distance from fluorite grains (Fig. 5), indicating homogeneous distributions of these elements in the silicate melts. In addition, EPMA of the experimental glasses performed at 10 to 15 random locations yields identical compositions within analytical error (Table 3). Therefore, we conclude that near-equilibrium conditions were achieved and that the analytical data on the glasses are representative of the fluorine concentrations in melts saturated with fluorite.
Table 3. Conditions and results of equilibrium experiments with saturation of fluorite performed in this study

| No. | Starting glass | P (MPa) | T (℃) | fO2 (ΔNNO) | Duration (hour) | H2O (wt.%) a | Phases b | SiO2 | Al2O3 | TiO2 | FeOT | MnO | MgO | CaO | Na2O | K2O | P2O5 | F | -O=F | Total |
| 1a | SG-1 | 200 | 900 | 0 | 168 | 8.01 | Gl, Fl, Amp, Bt, Ttn, Ap | 46.47 | 17.14 | 0.83 | 4.53 | 0.15 | 1.31 | 8.43 | 6.50 | 3.14 | 0.10 | 6.76 | 2.85 | 94.12 |
| 1a (1sd) | | | | | | | | 0.38 | 0.11 | 0.03 | 0.10 | 0.04 | 0.05 | 0.17 | 0.17 | 0.03 | 0.09 | 0.16 | | |
| 1c | SG-1 | 200 | 850 | 0 | 168 | 8.13 | Gl, Fl, Amp, Ttn | 51.17 | 19.32 | 0.33 | 2.84 | 0.14 | 0.39 | 4.84 | 7.32 | 4.24 | 0.04 | 4.09 | 1.72 | 94.03 |
| 2a | SG-2 | 200 | 900 | 0 | 168 | 8.10 | Gl, Fl, Amp | 49.15 | 15.28 | 0.85 | 5.81 | 0.01 | 1.93 | 11.44 | 2.89 | 1.38 | 0.07 | 6.74 | 2.84 | 94.19 |
| 2c | SG-2 | 200 | 850 | 0 | 168 | 7.92 | Gl, Fl, Amp, Plg | 58.00 | 15.48 | 0.36 | 3.78 | 0.02 | 0.76 | 7.67 | 3.20 | 2.00 | 0.04 | 3.56 | 1.50 | 94.12 |
| 3a | SG-3 | 200 | 900 | 0 | 168 | 8.00 | Gl, Fl, Ap | 52.75 | 15.22 | 0.57 | 4.42 | 0.11 | 1.65 | 9.61 | 3.10 | 2.10 | 0.13 | 5.81 | 2.45 | 94.24 |
| 3b | SG-3 | 200 | 800 | 0 | 168 | 8.20 | Gl, Fl, Ap, Amp, Plg | 64.18 | 15.33 | 0.17 | 2.17 | 0.08 | 0.23 | 4.56 | 3.30 | 3.32 | 0.02 | 1.90 | 0.80 | 94.71 |
| 3c | SG-3 | 200 | 850 | 0 | 168 | 8.09 | Gl, Fl, Amp, Plg, Ttn | 58.58 | 16.29 | 0.32 | 3.42 | 0.13 | 0.59 | 7.29 | 3.39 | 2.65 | 0.05 | 2.24 | 0.94 | 94.16 |
| 3d | SG-3 | 100 | 850 | 0 | 168 | 8.96 | Gl, Fl, Amp, Plg, Ilm | 66.36 | 15.13 | 0.29 | 2.47 | 0.08 | 0.34 | 4.57 | 3.49 | 3.59 | 0.06 | 1.17 | 0.49 | 96.93 |
| 4a | SG-4 | 200 | 900 | 0 | 168 | 8.36 | Gl, Fl | 61.90 | 18.73 | 0.01 | 0.02 | 0.02 | 0.00 | 1.88 | 7.63 | 3.80 | 0.03 | 3.42 | 1.44 | 96.45 |
| 4b | SG-4 | 200 | 800 | 0 | 168 | 8.15 | Gl, Fl | 62.94 | 18.88 | 0.00 | 0.03 | 0.02 | 0.01 | 1.32 | 7.63 | 3.85 | 0.02 | 2.91 | 1.23 | 96.78 |
| 4c | SG-4 | 200 | 850 | 0 | 168 | 8.22 | Gl, Fl | 62.52 | 18.65 | 0.01 | 0.02 | 0.03 | 0.01 | 1.55 | 7.08 | 4.08 | 0.03 | 3.12 | 1.31 | 96.23 |
| 4d | SG-4 | 100 | 850 | 0 | 168 | 8.06 | Gl, Fl | 62.96 | 18.68 | 0.01 | 0.03 | 0.02 | 0.01 | 1.43 | 7.66 | 4.19 | 0.02 | 2.95 | 1.24 | 97.12 |

a. Initial water added to the starting glass; b. phase abbreviations: Ap. apatite; Amp. amphibole; Bt. biotite; Fl. fluorite; Gl. glass; Ilm. ilmenite; Plg. plagioclase; Ttn. titanite; c. the columns from SiO2 to Total give the glass composition in wt.%; each glass composition is the average of 10–15 analytical points, and the numbers in the (1sd) row are one standard deviation.
Figure 5. Back scattered electron (BSE) images and element mapping (AlKα, CaKα and FKα) for selected experimental products. (a) Run No. 1a. (b) Run No. 4b. Phase abbreviations: Ap. apatite; Bt. biotite; Fl. fluorite; Gl. glass; Ttn. titanite. See experimental conditions and results in Table 3.
The new experimental glasses with fluorite saturation, obtained at variable experimental temperatures (800−900 ℃) and pressures (100 and 200 MPa), span relatively wide ranges of major cation contents and F concentration (1.2 wt.%−6.8 wt.%), and are used as an independent test of our proposed empirical model for estimating $ {C}_{\mathrm{F\ in\ melt}}^{\mathrm{Fl-sat}} $ (Eq. 10 and Eq. 14). The experimental glasses show a metaluminous affinity, with A/CNK ranging from 0.6 to 1.0. The updated FSI values calculated for the new experimental glasses according to Eq. 14 range from 0.1 to 0.4, within the range of the reference data. As plotted in Fig. 3a, the $ {C}_{\mathrm{F\ in\ melt}}^{\mathrm{Fl-sat}} $ values estimated for the glass compositions with our model are consistent with the measured F concentrations, and the discrepancies between estimated and measured values are less than 0.7 wt.% (Fig. 3b). Therefore, our new experiments confirm that, at least for metaluminous melts, the predictive model proposed above based on the reference data is reliable within uncertainty for estimating the melt F concentration at fluorite saturation.
4.1. Implications for Fluorine Incorporation Mechanism in Silicate Melt
The formulation of the updated FSI in Eq. 14 is consistent with studies of the incorporation mechanisms of fluorine in silicate melts, which have been investigated for less complex synthetic aluminosilicate glasses using NMR and Raman spectroscopy. Al-F bonding is predominant in aluminosilicate glasses, while Si-F bonding is subordinate (Mysen et al., 2004; Zeng and Stebbins, 2000). The fluoride ion is preferentially bonded to network-modifying cations with higher field strength, e.g., F-Ca bonding (Baasner et al., 2014; Stebbins and Zeng, 2000). In K-bearing aluminosilicate glasses, fluorine is bonded in Si-F, Al-F, and Na-F complexes whereas no K-F bonding is detected (Dalou et al., 2015), consistent with the negative term for K in Eq. 11. To our knowledge, spectroscopic studies of potential bonding of F with Ti, Fe and Mg in aluminosilicate glasses are lacking. In this study, the results of the multiple linear regression relating $ {C}_{\mathrm{F\ in\ melt}}^{\mathrm{Fl-sat}} $ to the updated FSI (Eq. 12 and Eq. 14) demonstrate that the incorporation of F in silicate melt is predominantly controlled by the abundance of network-modifying cations (AlNM, Fe2+, Mg, Ca, Na) relative to network-forming cations (Si, Ti, AlNF, Fe3+). Equation 11 indicates that fluorine tends to form F-cation bonds in the order Mg > AlNM > Na > Ca > Fe2+, while F-K bonding is negligible. These findings are in general agreement with the F bonding mechanisms associated with Al, Si, Ca, Na and K inferred from NMR and Raman spectroscopy.
4.2. Implications for Fluorine Enrichment during Magma Crystallization
Because F in silicate melts may play a unique and important role in petrological, geochemical and ore-forming processes, quantitative constraints on the maximum F concentrations that can be reached in evolved magmatic systems are of great importance. F usually behaves as an incompatible element during magmatic differentiation, except when apatite, mica, amphibole or fluorite occurs as a major solid phase. Our empirical model can be used to evaluate the initial F concentration in the parental magma and the enrichment of F in the melt as a consequence of magma crystallization prior to fluorite saturation, which can hardly be inferred from bulk-rock F concentrations because of fluid-exsolution-induced loss at late magmatic stages (e.g., Zhang et al., 2012).
Here we show an example application of our predictive model to granitic rocks from the Jiuhuashan region, South China, evaluating the potential saturation of fluorite at a late stage of magma evolution (Wang et al., 2018). Two distinctive rocks were chosen from the study of Wang et al. (2018): a fluorite-bearing syenogranite (sample 14JHS18-1, bulk F ~150 ppm) and a fluorite-free granodiorite (sample 14JHS23-2, bulk F ~660 ppm). We modeled the variation of melt composition due to magma differentiation using rhyolite-MELTS (Gualda et al., 2012) with the bulk-rock major element compositions as starting melts. As a result of fractional crystallization, the composition of the residual melt evolves and results in variations in $ {C}_{\mathrm{F\ in\ melt}}^{\mathrm{Fl-sat}} $, which is highly dependent on melt composition as shown above. As shown in Fig. 6, if crystallization proceeds from the initial stage to a near-solidus condition with a residual melt proportion of 5 wt.% or 15 wt.%, the modeled $ {C}_{\mathrm{F\ in\ melt}}^{\mathrm{Fl-sat}} $ for the syenogranitic magma increases markedly from ~4 wt.% to ~13 wt.% or ~7 wt.%, respectively. Meanwhile, the $ {C}_{\mathrm{F\ in\ melt}}^{\mathrm{Fl-sat}} $ for the granodioritic magma decreases strongly from ~8 wt.% to ~2 wt.%. Therefore, if the interstitial fluorite observed in the syenogranite sample crystallized at a late magmatic stage corresponding to a residual melt proportion of 5 wt.%–15 wt.%, the parental magma should have had an initial F concentration of 0.7 wt.%–1.0 wt.% (Fig. 6a). This estimate is dramatically higher than the measured bulk-rock F concentration (~150 ppm), indicating that abundant F must have been lost by fluid extraction, which is consistent with the wide occurrence of miarolitic cavities along quartz- and fluorite-rich zones (Wang et al., 2018). In contrast, because no fluorite has been observed in the granodiorite, its parental magma should have had an initial F concentration lower than 0.12 wt.%–0.35 wt.%; otherwise the F concentration in the residual melt would reach the modeled $ {C}_{\mathrm{F\ in\ melt}}^{\mathrm{Fl-sat}} $ at the assumed late magmatic stage (Fig. 6b).
Figure 6. Modeling of the evolution of F concentration in melt along the crystallization path as a function of the residual melt proportion for two F-bearing rocks from the Jiuhuashan region, South China. (a) Syenogranite (sample 14JHS18-1, fluorite-present) as bulk magma composition. (b) Granodiorite (sample 14JHS23-2, fluorite-free) as bulk magma composition. The modeling of crystallization is performed using rhyolite-MELTS (Gualda et al., 2012) at a constant pressure of 100 MPa, an oxygen fugacity buffered at QFM+1, and an initial H2O content of 1 wt.%. $ {C}_{\mathrm{F\ in\ melt}}^{\mathrm{Fl-sat}} $ is calculated using Eq. 10 and the updated FSI (i.e., prediction model No. 3 in Table 2). For both cases, the initial F content in the primary melt is adjusted to achieve final fluorite saturation (i.e., so that F in the residual melt equals $ {C}_{\mathrm{F\ in\ melt}}^{\mathrm{Fl-sat}} $) at a residual melt proportion of 5 wt.% or 15 wt.%.
In addition, F usually behaves as a compatible element for apatite, titanite, biotite and amphibole relative to silicate melt (e.g., Li et al., 2018; Iveson et al., 2017; Webster et al., 2009; Chevychelov et al., 2008), and thus crystallization of these minerals in a magmatic system may suppress the saturation of fluorite owing to earlier partitioning of F from the melt into these minerals. Nevertheless, the presence of other F-bearing minerals does not affect the solubility of CaF2 in silicate melt as expressed in Eq. 11. In such cases, the example discussed above illustrates an approach that can be used to constrain the minimum F content in the primary melt required for crystallization of magmatic fluorite.
A fluorite saturation index (FSI) is proposed to describe the effect of melt composition on the fluorine concentration at fluorite saturation, and it implies that the tendency for bonding between fluorine and network-modifying cations follows the order Mg > AlNM > Na > Ca > Fe2+ > K. Our model for predicting $ {C}_{\mathrm{F\ in\ melt}}^{\mathrm{Fl-sat}} $ has the advantage of being applicable to a very wide compositional range, covering peraluminous, calc-alkaline and peralkaline compositions. However, the fit between predicted $ {C}_{\mathrm{F\ in\ melt}}^{\mathrm{Fl-sat}} $ and experimental results is not always satisfactory (discrepancies up to 2 wt.% F in some rare cases), which may be due in part to analytical problems, especially at high F contents of the glasses. The reliability of the empirical model is further supported by our new equilibrium experiments in fluorite-saturated magmatic systems.
This study was supported by the National Natural Science Foundation of China (No. 41902052) and the German Research Foundation (DFG) (No. BE 1720/40). We thank two anonymous reviewers for their helpful comments that have substantially improved this paper. The final publication is available at Springer via https://doi.org/10.1007/s12583-020-1305-y.
Electronic Supplementary Materials: Supplementary materials (Tables S1–S2) are available in the online version of this article at https://doi.org/10.1007/s12583-020-1305-y.
[1] Ahmed H. A., Ma C. Q., Wang L. X., 2018: Petrogenesis and Tectonic Implications of Peralkaline A-Type Granites and Syenites from the Suizhou-Zaoyang Region, Central China[J]. Journal of Earth Science, 29, 1181-1202. doi: 10.1007/s12583-018-0877-2
[2] Aiuppa A., Baker D. R., Webster J. D., 2009: Halogens in Volcanic Systems[J]. Chemical Geology, 263, 1-18. doi: 10.1016/j.chemgeo.2008.10.005
[3] Aseri A. A., Linnen R. L., Che X. D., 2015: Effects of Fluorine on the Solubilities of Nb, Ta, Zr and Hf Minerals in Highly Fluxed Water-Saturated Haplogranitic Melts[J]. Ore Geology Reviews, 64, 736-746. doi: 10.1016/j.oregeorev.2014.02.014
[4] Baasner A., Schmidt B. C., Dupree R., 2014: Fluorine Speciation as a Function of Composition in Peralkaline and Peraluminous Na2O-CaO-Al2O3-SiO2 Glasses:A Multinuclear NMR Study[J]. Geochimica et Cosmochimica Acta, 132, 151-169. doi: 10.1016/j.gca.2014.01.041
[5] Baasner A., Schmidt B. C., Webb S. L., 2013: The Effect of Chlorine, Fluorine and Water on the Viscosity of Aluminosilicate Melts[J]. Chemical Geology, 357, 134-149. doi: 10.1016/j.chemgeo.2013.08.020
[6] Badanina E. V., Trumbull R. B., Dulski P., 2006: The Behavior of Rare-Earth and Lithophile Trace Elements in Rare-Metal Granites:A Study of Fluorite, Melt Inclusions and Host Rocks from the Khangilay Complex, Transbaikalia, Russia[J]. The Canadian Mineralogist, 44, 667-692. doi: 10.2113/gscanmin.44.3.667
[7] Bailey J. C., 1977: Fluorine in Granitic Rocks and Melts:A Review[J]. Chemical Geology, 19, 1-42. doi: 10.1016/0009-2541(77)90002-x
[8] Baker D. R., Vaillancourt J., 1995: The Low Viscosities of F+H2O-Bearing Granitic Melts and Implications for Melt Extraction and Transport[J]. Earth and Planetary Science Letters, 132, 199-211. doi: 10.1016/0012-821x(95)00054-g
[9] Bao B., Webster J. D., Zhang D. H., 2016: Compositions of Biotite, Amphibole, Apatite and Silicate Melt Inclusions from the Tongchang Mine, Dexing Porphyry Deposit, SE China:Implications for the Behavior of Halogens in Mineralized Porphyry Systems[J]. Ore Geology Reviews, 79, 443-462. doi: 10.1016/j.oregeorev.2016.05.024
[10] Bartels A., Behrens H., Holtz F., 2013: The Effect of Fluorine, Boron and Phosphorus on the Viscosity of Pegmatite Forming Melts[J]. Chemical Geology, 346, 184-198. doi: 10.1016/j.chemgeo.2012.09.024
[11] Berndt J., Liebske C., Holtz F., 2002: A Combined Rapid-Quench and H2-Membrane Setup for Internally Heated Pressure Vessels:Description and Application for Water Solubility in Basaltic Melts[J]. American Mineralogist, 87, 1717-1726. doi: 10.2138/am-2002-11-1222
[12] Botcharnikov R. E., Holtz F., Almeev R. R., 2008: Storage Conditions and Evolution of Andesitic Magma Prior to the 1991-95 Eruption of Unzen Volcano:Constraints from Natural Samples and Phase Equilibria Experiments[J]. Journal of Volcanology and Geothermal Research, 175, 168-180. doi: 10.1016/j.jvolgeores.2008.03.026
[13] Chelle-Michou C., Chiaradia M., 2017: Amphibole and Apatite Insights into the Evolution and Mass Balance of Cl and S in Magmas Associated with Porphyry Copper Deposits[J]. Contributions to Mineralogy and Petrology, 172, 105-. doi: 10.1007/s00410-017-1417-2
[14] Chevychelov V. Y., Botcharnikov R. E., Holtz F., 2008: Experimental Study of Fluorine and Chlorine Contents in Mica (Biotite) and Their Partitioning between Mica, Phonolite Melt, and Fluid[J]. Geochemistry International, 46, 1081-1089. doi: 10.1134/s0016702908110025
[15] Dalou C. L., Le Losq C., Mysen B. O., 2015: Solubility and Solution Mechanisms of Chlorine and Fluorine in Aluminosilicate Melts at High Pressure and High Temperature[J]. American Mineralogist, 100, 2272-2283. doi: 10.2138/am-2015-5201
[16] Dingwell D. B., Mysen B. O., 1985: Effects of Water and Fluorine on the Viscosity of Albite Melt at High Pressure:A Preliminary Investigation[J]. Earth and Planetary Science Letters, 74, 266-274. doi: 10.1016/0012-821x(85)90026-3
[17] Doherty A. L., Webster J. D., Goldoff B. A., 2014: Partitioning Behavior of Chlorine and Fluorine in Felsic Melt-Fluid(s)-Apatite Systems at 50 MPa and 850-950 ℃[J]. Chemical Geology, 384, 94-111. doi: 10.1016/j.chemgeo.2014.06.023
[18] Dolejš D., Baker D. R., 2006: Fluorite Solubility in Hydrous Haplogranitic Melts at 100 MPa[J]. Chemical Geology, 225, 40-60. doi: 10.1016/j.chemgeo.2005.08.007
[19] Gabitov R. I., Price J. D., Watson E. B., 2005: Solubility of Fluorite in Haplogranitic Melt of Variable Alkalis and Alumina Content at 800-1 000 ℃ and 100 MPa[J]. Geochemistry, Geophysics, Geosystems, 6, Q03007-. doi: 10.1029/2004gc000870
[20] Giesting P. A., Filiberto J., 2014: Quantitative Models Linking Igneous Amphibole Composition with Magma Cl and OH Content[J]. American Mineralogist, 99, 852-865. doi: 10.2138/am.2014.4623
[21] Gualda G. A. R., Ghiorso M. S., Lemons R. V., 2012: Rhyolite-MELTS:A Modified Calibration of MELTS Optimized for Silica-Rich, Fluid-Bearing Magmatic Systems[J]. Journal of Petrology, 53, 875-890. doi: 10.1093/petrology/egr080
[22] Holtz F., Sato H., Lewis J., 2005: Experimental Petrology of the 1991-1995 Unzen Dacite, Japan. Part I:Phase Relations, Phase Composition and Pre-Eruptive Conditions[J]. Journal of Petrology, 46, 319-337. doi: 10.1093/petrology/egh077
[23] Hou T., Charlier B., Namur O., 2017: Experimental Study of Liquid Immiscibility in the Kiruna-Type Vergenoeg Iron-Fluorine Deposit, South Africa[J]. Geochimica et Cosmochimica Acta, 203, 303-322. doi: 10.1016/j.gca.2017.01.025
[24] Huang H., Wang T., Zhang Z. C., 2018: Highly Differentiated Fluorine-Rich, Alkaline Granitic Magma Linked to Rare Metal Mineralization:A Case Study from the Boziguo'er Rare Metal Granitic Pluton in South Tianshan Terrane, Xinjiang, NW China[J]. Ore Geology Reviews, 96, 146-163. doi: 10.1016/j.oregeorev.2018.04.021
[25] Icenhower J. P., London D., 1997: Partitioning of Fluorine and Chlorine between Biotite and Granitic Melt:Experimental Calibration at 200 MPa H2O[J]. Contributions to Mineralogy and Petrology, 127, 17-29. doi: 10.1007/s004100050262
[26] Iveson A. A., Webster J. D., Rowe M. C., 2017: Major Element and Halogen (F, Cl) Mineral-Melt-Fluid Partitioning in Hydrous Rhyodacitic Melts at Shallow Crustal Conditions[J]. Journal of Petrology, 58, 2465-2492. doi: 10.1093/petrology/egy011
[27] Jiang W. C., Li H., Wu J. H., 2018: A Newly Found Biotite Syenogranite in the Huangshaping Polymetallic Deposit, South China:Insights into Cu Mineralization[J]. Journal of Earth Science, 29, 537-555. doi: 10.1007/s12583-017-0974-7
[28] Keppler H., 1993: Influence of Fluorine on the Enrichment of High Field Strength Trace Elements in Granitic Rocks[J]. Contributions to Mineralogy and Petrology, 114, 479-488. doi: 10.1007/bf00321752
[29] Keppler H., Wyllie P. J., 1991: Partitioning of Cu, Sn, Mo, W, U, and Th between Melt and Aqueous Fluid in the Systems Haplogranite-H2O-HCl and Haplogranite-H2O-HF[J]. Contributions to Mineralogy and Petrology, 109, 139-150. doi: 10.1016/j.gca.2017.03.015
[30] Kohn S., Dupree R., Mortuza M., 1991: NMR Evidence for Five- and Six-Coordinated Aluminum Fluoride Complexes in F-Bearing Aluminosilicate Glasses[J]. American Mineralogist, 76, 309-312.
[31] Kress V. C., Carmichael I. S. E., 1991: The Compressibility of Silicate Liquids Containing Fe2O3 and the Effect of Composition, Temperature, Oxygen Fugacity and Pressure on Their Redox States[J]. Contributions to Mineralogy and Petrology, 108, 82-92. doi: 10.1007/bf00307328
[32] Li H. J., Hermann J., 2015: Apatite as an Indicator of Fluid Salinity:An Experimental Study of Chlorine and Fluorine Partitioning in Subducted Sediments[J]. Geochimica et Cosmochimica Acta, 166, 267-297. doi: 10.1016/j.gca.2015.06.029
[33] Li X. Y., Zhang C., Behrens H., 2018: Fluorine Partitioning between Titanite and Silicate Melt and Its Dependence on Melt Composition:Experiments at 50-200 MPa and 875-925 ℃[J]. European Journal of Mineralogy, 30, 33-44. doi: 10.1127/ejm/2017/0029-2689
[34] Lukkari S., Holtz F., 2007: Phase Relations of a F-Enriched Peraluminous Granite:An Experimental Study of the Kymi Topaz Granite Stock, Southern Finland[J]. Contributions to Mineralogy and Petrology, 153, 273-288. doi: 10.1007/s00410-006-0146-8
[35] Manning D. A. C., 1981: The Effect of Fluorine on Liquidus Phase Relationships in the System Qz-Ab-Or with Excess Water at 1 kb[J]. Contributions to Mineralogy and Petrology, 76, 206-215. doi: 10.1007/bf00371960
[36] Mathez E. A., Webster J. D., 2005: Partitioning Behavior of Chlorine and Fluorine in the System Apatite-Silicate Melt-Fluid[J]. Geochimica et Cosmochimica Acta, 69, 1275-1286. doi: 10.1016/j.gca.2004.08.035
[37] McCubbin F. M., Vander Kaaden K. E., Tartèse R., 2015: Experimental Investigation of F, Cl, and OH Partitioning between Apatite and Fe-Rich Basaltic Melt at 1.0-1.2 GPa and 950-1 000 ℃[J]. American Mineralogist, 100, 1790-1802. doi: 10.2138/am-2015-5233
[38] Mysen B. O., Cody G. D., Smith A., 2004: Solubility Mechanisms of Fluorine in Peralkaline and Meta-Aluminous Silicate Glasses and in Melts to Magmatic Temperatures[J]. Geochimica et Cosmochimica Acta, 68, 2745-2769. doi: 10.1016/j.gca.2003.12.015
[39] Pichavant M., Manning D., 1984: Petrogenesis of Tourmaline Granites and Topaz Granites; The Contribution of Experimental Data[J]. Physics of the Earth and Planetary Interiors, 35, 31-50. doi: 10.1016/0031-9201(84)90032-3
[40] Price J. D., Hogan J. P., Gilbert M. C., 1999: Experimental Study of Titanite-Fluorite Equilibria in the A-Type Mount Scott Granite:Implications for Assessing F Contents of Felsic Magma[J]. Geology, 27, 951-954. doi: 10.1130/0091-7613(1999)027<0951:esotfe>2.3.co;2
[41] Scaillet B., Macdonald R., 2003: Experimental Constraints on the Relationships between Peralkaline Rhyolites of the Kenya Rift Valley[J]. Journal of Petrology, 44, 1867-1894. doi: 10.1093/petrology/egg062
[42] Scaillet B., Macdonald R., 2004: Fluorite Stability in Silicic Magmas[J]. Contributions to Mineralogy and Petrology, 147, 319-329. doi: 10.1007/s00410-004-0559-1
[43] Schwab R., Küstner D., 1981: The Equilibrium Fugacities of Important Oxygen Buffers in Technology and Petrology[J]. Neues Jahrbuch für Mineralogie, 140, 112-142.
[44] Stebbins J. F., Zeng Q., 2000: Cation Ordering at Fluoride Sites in Silicate Glasses:A High-Resolution 19F NMR Study[J]. Journal of Non-Crystalline Solids, 262, 1-5. doi: 10.1016/s0022-3093(99)00695-x
[45] Tossell J., 1993: Theoretical Studies of the Speciation of Al in F-Bearing Aluminosilicate Glasses[J]. American Mineralogist, 78, 16-22.
[46] Van den Bleeken G., Koga K. T., 2015: Experimentally Determined Distribution of Fluorine and Chlorine Upon Hydrous Slab Melting, and Implications for F-Cl Cycling through Subduction Zones[J]. Geochimica et Cosmochimica Acta, 171, 353-373. doi: 10.1016/j.gca.2015.09.030
[47] Veksler I. V., Dorfman A. M., Kamenetsky M., 2005: Partitioning of Lanthanides and Y between Immiscible Silicate and Fluoride Melts, Fluorite and Cryolite and the Origin of the Lanthanide Tetrad Effect in Igneous Rocks[J]. Geochimica et Cosmochimica Acta, 69, 2847-2860. doi: 10.1016/j.gca.2004.08.007
[48] Veksler I. V., Thomas R., Schmidt C., 2002: Experimental Evidence of Three Coexisting Immiscible Fluids in Synthetic Granitic Pegmatite[J]. American Mineralogist, 87, 775-779. doi: 10.2138/am-2002-5-621
[49] Wang L. X., Ma C. Q., Zhang C., 2018: Halogen Geochemistry of I- and A-Type Granites from Jiuhuashan Region (South China):Insights into the Elevated Fluorine in A-Type Granite[J]. Chemical Geology, 478, 164-182. doi: 10.1016/j.chemgeo.2017.09.033
[50] Wang L. X., Marks M. A. W., Wenzel T., 2016: Halogen-Bearing Minerals from the Tamazeght Complex (Morocco):Constraints on Halogen Distribution and Evolution in Alkaline to Peralkaline Magmatic Systems[J]. The Canadian Mineralogist, 54, 1347-1368. doi: 10.3749/canmin.1600007
[51] Webster J. D., Goldoff B. A., Flesch R. N., 2017: Hydroxyl, Cl, and F Partitioning between High-Silica Rhyolitic Melts-Apatite-Fluid(s) at 50-200 MPa and 700-1 000 ℃[J]. American Mineralogist, 102, 61-74. doi: 10.2138/am-2017-5746
[52] Webster J. D., Tappen C. M., Mandeville C. W., 2009: Partitioning Behavior of Chlorine and Fluorine in the System Apatite-Melt-Fluid. II:Felsic Silicate Systems at 200 MPa[J]. Geochimica et Cosmochimica Acta, 73, 559-581. doi: 10.1016/j.gca.2008.10.034
[53] Wengorsch, T., 2013. Experimental Constraints on the Storage Conditions of a Tephriphonolite from the Cumbre Vieja Volcano (La Palma, Canary Islands) at 200 and 400 MPa: [Dissertation]. Leibniz Universität Hannover, Hannover. 97
[54] Xiong X. L., Rao B., Chen F. R., 2002: Crystallization and Melting Experiments of a Fluorine-Rich Leucogranite from the Xianghualing Pluton, South China, at 150 MPa and H2O-Saturated Conditions[J]. Journal of Asian Earth Sciences, 21, 175-188. doi: 10.1016/s1367-9120(02)00030-5
[55] Zeng Q., Stebbins J. F., 2000: Fluoride Sites in Aluminosilicate Glasses:High-Resolution19F NMR Results[J]. American Mineralogist, 85, 863-867. doi: 10.2138/am-2000-5-630
[56] Zhang C., Holtz F., Ma C. Q., 2012: Tracing the Evolution and Distribution of F and Cl in Plutonic Systems from Volatile-Bearing Minerals:A Case Study from the Liujiawa Pluton (Dabie Orogen, China)[J]. Contributions to Mineralogy and Petrology, 164, 859-879. doi: 10.1007/s00410-012-0778-9
[57] Zhang C., Koepke J., Albrecht M., 2017: Apatite in the Dike-Gabbro Transition Zone of Mid-Ocean Ridge:Evidence for Brine Assimilation by Axial Melt Lens[J]. American Mineralogist, 102, 558-570. doi: 10.2138/am-2017-5906
[58] Zhang C., Koepke J., Wang L. X., 2016: A Practical Method for Accurate Measurement of Trace Level Fluorine in Mg- and Fe-Bearing Minerals and Glasses Using Electron Probe Microanalysis[J]. Geostandards and Geoanalytical Research, 40, 351-363. doi: 10.1111/j.1751-908x.2015.00390.x | CommonCrawl |
Trends and determinants of an acceptable antenatal care coverage in Ethiopia, evidence from 2005-2016 Ethiopian demographic and health survey; Multivariate decomposition analysis
Tilahun Yemanu Birhan (ORCID: orcid.org/0000-0002-1614-8814) & Wullo Sisay Seretew
Archives of Public Health volume 78, Article number: 129 (2020)
An acceptable antenatal care visit (ANC4+) is defined as attending at least four antenatal care visits, receiving at least one dose of tetanus toxoid (TT) injection, and consuming 100 iron-folic acid (IFA) tablets/syrup during the last pregnancy. Maternal health care service utilization continues to be an essential indicator for monitoring improvements in maternal and child health outcomes. This study aimed to analyze the trends in, and the determinants that contributed to the change in, acceptable antenatal care visits over the last 10 years in Ethiopia.
Nationally representative repeated cross-sectional surveys were analyzed using the 2005, 2011, and 2016 Ethiopian Demographic and Health Survey datasets. The data were weighted and analyzed with STATA 14.1 software. Multivariate decomposition regression analysis was used to identify factors that contributed to the change in acceptable antenatal care visits. A p-value < 0.05 was used to declare statistically significant predictors of an acceptable antenatal care visit.
Among women of reproductive age in Ethiopia, the rate of acceptable antenatal care visits increased from 16% in 2005 to 35% in 2016. In the multivariate decomposition analysis, about 29% of the increase in acceptable antenatal care visits was due to differences in the composition of women (endowments) across the surveys. Residence, religion, husband's educational attainment, and wealth status were the main sources of compositional change underlying the improvement in acceptable antenatal care visits. Almost two-thirds of the overall change in acceptable antenatal care visits was due to differences in coefficients, i.e., changes in the behavior of the population. Religion, educational attainment (of both women and husbands), and residence contributed significantly to the change in acceptable antenatal care visits in Ethiopia over the last decade.
Despite the importance of acceptable antenatal care for pregnant women and their babies, the coverage of acceptable antenatal care visits increased only slightly over time in Ethiopia. Changes in women's characteristics and behavior were significantly associated with the change in acceptable antenatal care visits. Public interventions are needed to improve acceptable antenatal care coverage; women's education and further expansion of health care facilities in rural communities should be promoted to sustain further improvements in acceptable antenatal care visits.
An acceptable antenatal care visit (ANC4+) is defined as attending at least four antenatal care visits, receiving at least one dose of tetanus toxoid (TT) injection, and consuming 100 iron-folic acid (IFA) tablets/syrup during the last pregnancy [1]. Globally, antenatal care (ANC) remains an essential intervention for improving maternal and child health. The World Health Organization (WHO) previously recommended at least four visits (i.e., the 'reduced' ANC model); more recently, a standard ANC model with at least eight ANC contacts has been implemented [2,3,4]. Globally, 72.9% (95% CI 69.1–76.8) of women used ANC including blood pressure monitoring and urine and blood testing [5, 6]; coverage was only 53.3% (44.3–63.3) in low-income countries and 74.8% (68.6–80.9) in lower-middle-income countries, compared with 93.3% (91.4–95.2) in developed countries [5, 6]. Even where ANC coverage is high across low-income countries, nearly a third of women who accessed antenatal care did not receive a basic package of three services during their pregnancy [2, 5, 6]. Maternal health care service utilization continues to be an essential indicator for monitoring improvements in maternal and child health outcomes. However, inadequate access to ANC, intrapartum, and postnatal care services is a pertinent reason for high maternal and child morbidity and mortality in Sub-Saharan Africa (SSA) [7, 8]. ANC plays a vital role in ensuring a healthy baby and mother throughout pregnancy and after delivery, and all pregnant women should receive quality ANC services regardless of their economic, cultural, and social background [9,10,11]. ANC is very relevant for optimizing health outcomes, such as normal birth weight, reduction in maternal and child death, and low postpartum anemia [11, 12]. Despite progress in reducing maternal mortality and improving the uptake of ANC4+ and tetanus toxoid (TT) injection in Ethiopia, a comprehensive understanding of full ANC utilization coverage is still lacking [7, 13, 14]. Ethiopia is one of the Sub-Saharan countries with the highest maternal mortality (420/100,000 live births) in the world, linked with low utilization of full ANC visits and skilled delivery [15, 16]. ANC is one of the main indicators of the safe motherhood initiative, which helps to reduce pregnancy-related complications and death in developing countries including Ethiopia [4, 11, 17, 18]. Efforts have been made to ensure quality maternal health services across all segments of the population at national, subnational, and global levels. However, most health care systems are not accessible to every community in Ethiopia, benefiting urban more than rural and underprivileged populations [13, 19,20,21]. The 2016 WHO guideline recommended eight ANC contacts, including five contacts in the third trimester, to reduce pregnancy-related complications, morbidity, and mortality, but even four visits still lag behind in Ethiopia [1, 4, 22]. Substantial progress has been made to improve ANC coverage in Ethiopia. However, several factors hinder the availability of ANC services, such as lack of improved transportation, inaccessibility of communication technology, low rates of education, and low socioeconomic status [19, 23].
This paper aimed to quantify the factors contributing to acceptable ANC coverage, which may be useful for informing policy and indicating specific programmatic interventions to improve the utilization of acceptable ANC visits and to further improve maternal health services in Ethiopia.
Study design and sampling
This study was based on a secondary analysis of cross-sectional population data from the 2005, 2011, and 2016 Ethiopia Demographic and Health Surveys (EDHS) to investigate trends in, and factors associated with, ANC4+ in Ethiopia.
So far, four consecutive surveys have been conducted in Ethiopia, in 2000, 2005, 2011, and 2016. Similar to other demographic and health surveys, the principal objective of the Ethiopian Demographic and Health Survey (EDHS) is to provide current and consistent data on fertility and family planning behavior, child mortality, adult and maternal mortality, children's nutritional status, and the use of maternal and child health services; data were also collected on knowledge and attitudes of women and men about sexually transmitted diseases and HIV/AIDS, and potential exposure to the risk of HIV infection was evaluated by exploring high-risk behaviors and condom use.
The sampling frame used for the 2016 EDHS was the Ethiopia Population and Housing Census (EPHC), conducted in 2007 by the Ethiopia Central Statistical Agency. The census frame is a complete list of 84,915 enumeration areas (EAs) created for the 2007 PHC. An EA is a geographic area covering on average 181 households. The sampling frame contains information about the EA location, type of residence (urban or rural), and the estimated number of residential households. Except for EAs in six zones of the Somali region, each EA has accompanying cartographic materials, which delineate geographic locations, boundaries, main access routes, and landmarks in or outside the EA that help identify it. In Somali, a cartographic frame was used in three zones where sketch maps delineating the EA geographic boundaries were available for each EA; in the remaining six zones, satellite image maps were used to provide a map for each EA.
Variables and measurement
The outcome variable was 'ANC4+'. A woman was counted as having acceptable ANC if she had at least four ANC visits, received at least one dose of tetanus toxoid (TT) injection, and consumed 100 iron-folic acid (IFA) tablets/syrup during her last pregnancy.
The predictor variables were grouped as follows.
Socio-demographic characteristics: age, marital status, level of education, media exposure, and occupation.
Socio-cultural factors: unplanned pregnancy, fear of testing for HIV status, knowledge about ANC benefits, peer influence, influence of traditional birth attendants (TBAs), and decision-making authority.
Obstetric and economic factors: gravidity, parity, complications during pregnancy, history of abortion, history of stillbirth, trimester of pregnancy, and wealth status.
Data management and analysis
The data were cleaned, weighted, and analyzed using STATA 14 software.
Trends were assessed using descriptive analyses by selected explanatory variables of the study population; trends were also assessed separately for 2005–2011, 2011–2016, and 2005–2016.
Multivariate decomposition analysis of the change in ANC4+ was employed to identify the major factors contributing to the difference in the percentage of ANC4+ over the study period. This method is used for many purposes in economics, demography, and other specialties. The present analysis focused on how the ANC4+ rate responds to differences in women's characteristics and how these factors shape the differences across surveys conducted at different times. The analysis was a regression analysis of the difference in the percentage of ANC4+ between EDHS 2005 and 2016. The multivariate decomposition analysis identifies the sources of the difference in the percentage of ANC4+ over the last 10 years. Both the difference in the composition (endowments) of the population and the difference in the effects of characteristics (coefficients) between the surveys are essential for identifying the factors contributing to the increase in the ANC4+ rate over time.
The multivariate decomposition analysis for a nonlinear response model utilizes the output of a logistic regression model, since the outcome is binary, to parcel out the observed difference in ANC4+ into components. The difference in the rate of ANC4+ between the surveys can be attributed to the compositional difference in the populations (differences in characteristics or endowments) and to the difference in the effects of the explanatory variables (differences in coefficients) between the surveys.
A logit-based decomposition analysis technique was used to identify the factors contributing to the change in the ANC4+ rate over the last 10 years. The change in ANC4+ over time can be attributed to the compositional difference between the surveys and to differences in the effects of the selected covariates. Hence, the observed difference in ANC4+ between the surveys is additively decomposed into a characteristics (or endowments) component and a coefficients (or effects of characteristics) component. For the decomposition analysis, the 2005 EDHS data were appended to the 2016 EDHS data using the "append" command, after all variables had been coded consistently across surveys.
The mean difference in Y between groups A and B can be decomposed as:
$$ {Y}_A-{Y}_B=F\left({X}_A{\beta}_A\right)-F\left({X}_B{\beta}_B\right) $$
For our logistic regression, the logit or log-odds of ANC4+ is taken as:
$$ \mathrm{Logit}(A)-\mathrm{Logit}(B)=F\left({X}_A{\beta}_A\right)-F\left({X}_B{\beta}_B\right)=\underset{E}{\underbrace{\left[F\left({X}_A{\beta}_A\right)-F\left({X}_B{\beta}_A\right)\right]}}+\underset{C}{\underbrace{\left[F\left({X}_B{\beta}_A\right)-F\left({X}_B{\beta}_B\right)\right]}} $$
The E component refers to the part of the differential owing to differences in endowments or characteristics. The C component refers to that part of the differential attributable to differences in coefficients or effects [24].
The equation can be presented as:
$$ \mathrm{Logit}(A)-\mathrm{Logit}(B)=\left[{\beta}_{0A}-{\beta}_{0B}\right]+\sum {X}_{ijB}\left[{\beta}_{ijA}-{\beta}_{ijB}\right]+\sum {\beta}_{ijB}\left[{X}_{ijA}-{X}_{ijB}\right] $$
XijB is the proportion of the jth category of the ith determinant in the DHS 2005,
XijA is the proportion of the jth category of the ith determinant in DHS 2016,
βijB is the coefficient of the jth category of the ith determinant in DHS 2005,
βijA is the coefficient of the jth category of the ith determinant in DHS 2016,
β0B is the intercept in the regression equation fitted to DHS 2005, and
β0A is the intercept in the regression equation fitted to DHS 2016
The recently developed multivariate decomposition technique for nonlinear models was used for the decomposition analysis of ANC4+, implemented with the mvdcmp Stata command [24].
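For illustration only, the two components defined above can be computed directly from fitted logit coefficients; the minimal Python sketch below uses hypothetical data and coefficients, whereas the actual analysis was carried out with the mvdcmp command in Stata (which additionally provides per-covariate contributions and standard errors).

```python
import numpy as np
from scipy.special import expit  # logistic function F

rng = np.random.default_rng(0)

# Hypothetical design matrices (intercept + two binary covariates, e.g. urban residence
# and secondary education) standing in for the 2005 (B) and 2016 (A) samples.
X_B = np.column_stack([np.ones(5000), rng.binomial(1, 0.15, 5000), rng.binomial(1, 0.20, 5000)])
X_A = np.column_stack([np.ones(5000), rng.binomial(1, 0.20, 5000), rng.binomial(1, 0.35, 5000)])
beta_B = np.array([-1.8, 0.6, 0.9])  # hypothetical logit coefficients, 2005
beta_A = np.array([-1.0, 0.7, 1.1])  # hypothetical logit coefficients, 2016

mean_p = lambda X, b: expit(X @ b).mean()  # mean predicted probability of ANC4+

E = mean_p(X_A, beta_A) - mean_p(X_B, beta_A)      # endowments (characteristics) component
C = mean_p(X_B, beta_A) - mean_p(X_B, beta_B)      # coefficients (behavioral) component
total = mean_p(X_A, beta_A) - mean_p(X_B, beta_B)  # overall change, equal to E + C

print(f"total change = {total:.3f}; endowments E = {E:.3f}; coefficients C = {C:.3f}")
```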
Characteristics of the study population
This section presents the characteristics of respondents across the three EDHS surveys. In all three surveys, more women visited ANC in the second trimester, and a smaller percentage visited in the third trimester. Regarding husbands' educational status, the proportion who had attended primary school increased from 21% to 31% across the surveys. In terms of women's educational status, 81% had no education in 2005 and 76% in 2011; similarly, more women had attended primary school across the three survey periods. Regarding women's fertility preference, no difference was observed between 2011 and 2016 in the 'wants soon' category. The percentage of women exposed to media information about ANC visits increased from 38% in 2005 to 68% in 2016 (Table 1).
Table 1 Percentage Distribution of Socio-demographic Characteristics among Respondents from 2005 to 2016 Ethiopian Demographic and Health Survey
Trends of an acceptable antenatal care coverage in Ethiopia
In this section, we present trends in ANC4+ coverage across the three consecutive EDHS survey periods. Looking at the overall trend, Ethiopia shows slow progress in ANC4+ coverage over the study period: overall coverage of acceptable antenatal care increased from 16% in 2005 to 21% in 2011 and 35% in 2016 (Fig. 1).
Trends of an acceptable antenatal care (ANC4+) visit over last 10 years in Ethiopia, Ethiopian Demographic and Health Survey 2005–2016
ANC4+ coverage increased over time in the Amhara, Somali, Tigray, Afar, SNNPR, Benishangul-Gumuz, and Oromia regions (Fig. 2). In terms of residence, the percentage of women with an acceptable number of ANC visits increased by 211% among rural residents from 2005 to 2016. Regarding maternal education, ANC4+ visits increased in all categories, with the highest increase, 75%, among women with secondary or higher education between 2011 and 2016. Similarly, ANC4+ visits increased in every birth-order category, with the highest increase, 145.4%, among birth orders of 6+ over the entire study period (Table 2).
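The relative increases quoted above can be reproduced directly from the survey percentages in Table 2; the short check below uses the rural figures reported elsewhere in the results (9% in 2005 and 28% in 2016) and is only meant to make the calculation explicit.

```python
# Relative (percent) increase between two survey rounds.
def relative_increase(old_pct, new_pct):
    return 100.0 * (new_pct - old_pct) / old_pct

print(f"Rural ANC4+ 2005 -> 2016: {relative_increase(9, 28):.0f}%")  # ~211%
```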
Trends of an acceptable antenatal care (ANC4+) visit across regions over the last 10 years in Ethiopia, Ethiopian demographic and Health Survey 2005–2016
Table 2 Trends of an acceptable antenatal care visit (ANC4+) in Ethiopia by selected characteristics of respondents from 2005 to 2016 Ethiopian Demographic and Health Survey
Decomposition analysis of an acceptable antenatal care coverage
Difference due to characteristics (endowments)
The decomposition analysis revealed that about 29% of the overall percentage change in an acceptable ANC visit was due to a difference in characteristics (compositional factors). Among compositional factors, a significant contribution to the change in acceptable ANC visit was associated with the husband's education, religion, wealth status, residence, and place of delivery (Table 3).
Table 3 Decomposition analysis of an acceptable antenatal care visit (ANC4+) among women who gave birth in Ethiopia, Ethiopian Demographic and health Survey 2005–2016
Husbands' attainment of primary education was an important contributor to the increase in acceptable ANC visits. The proportion of husbands who had attended primary school increased from 21 to 32% over the last decade, making an important compositional contribution of 2% to the improvement in acceptable ANC visits (Fig. 3).
Contributions of change in the distribution 'compositional effect' of the determinants of ANC4+ in Ethiopia
Religion was a significant contributor to the improvement in acceptable ANC visits. The proportion of Muslim women with an acceptable number of ANC visits doubled over the last decade, from 13% in 2005 to 27% in 2016 (Table 2), accounting for a significant compositional contribution of 30% to the improvement in acceptable ANC visits (Table 3).
Similarly, the proportion of women who delivered in health institutions increased from 5 to 27% over the last decade, representing a significant compositional contribution of 26% to the improvement in acceptable ANC visits (Table 3).
Residence was also a significant compositional contributor to the improvement in acceptable ANC visits. The percentage of rural women with an acceptable number of ANC visits tripled over the last decade, from 9% in 2005 to 28% in 2016 (Table 2), representing a significant compositional contribution of 2% to the increase in acceptable ANC visits (Table 3).
Difference due to effect of coefficients (C)
After controlling for the role of compositional changes, 71% of the improvement was due to behavioral change towards acceptable ANC visits (Table 3). Educational status, birth order, religion, and residence were associated with significant coefficient (behavioral) contributions to the change in acceptable ANC visits. Controlling for compositional changes, women's and husbands' education (completed secondary and higher) contributed significantly to the increase in acceptable ANC visits, by 2 and 4% respectively, over the last decade.
Controlling for the role of compositional changes, followers of other religions, especially the Orthodox Christian religion, showed a significant contribution to the increase in acceptable ANC visits over the decade compared with followers of the Catholic religion, and the effect of religion became more important over time (Table 3). Further, behavioral change among the rural population contributed 46% of the improvement in acceptable ANC visits after controlling for the effects of change in compositional factors (Table 3).
Discussion
This study aimed to examine the trends in acceptable ANC visits and the major factors contributing, positively or negatively, to their change over the last decade.
ANC4+ visits increased substantially over the last 10 years, especially in the second survey period (2011–2016), i.e., by 14 percentage points. This might be attributed to the concerted efforts of the government to raise community awareness of the importance of ANC services in order to meet the millennium development goal (MDG) via the health sector development plans [25].
Almost two-thirds of the overall change in acceptable ANC visits was due to differences in coefficients (C), implying that a significant part of the change arises from changes in population behavior with respect to significant explanatory variables. In this study, religion made a significant contribution to the improvement in acceptable ANC visits. A cultural belief that everything happens according to the will of God has meant that the help of health professionals for birth and pregnancy-related consultations is refused in many rural areas of Ethiopia. However, the government of Ethiopia has carried out pertinent awareness creation for rural communities, enhancing community involvement and empowerment through participation in identifying needs, suggesting solutions, implementation, and follow-up activities, notably through the launch of the health extension workers program [26, 27].
Women's attainment of secondary or higher education made a significant contribution to the improvement in acceptable ANC visits, consistent with a systematic review and meta-analysis conducted in Sub-Saharan Africa [28]. Ethiopia has worked towards the millennium development goals by advocating women's educational attainment and launching the Growth and Transformation Plan I (GTP I) [20, 29]. Therefore, the compositional increase in both husbands' and women's education over the last decade made a positive contribution to the improvement in ANC4+ service use.
Similarly, the compositional change in husbands' attainment of primary education made a significant contribution to the improvement in acceptable ANC visits, consistent with previous studies [20, 30]. Educated husbands tend to have a positive attitude towards the importance of frequent ANC visits, which leads to stronger decision-making and support for using ANC services to ensure safe birth outcomes.
In terms of wealth status, the compositional increase in women in the middle and rich wealth categories over time made a significant contribution to the improvement in acceptable ANC visits over the last 10 years. This might be because economic improvement advances health care utilization and enables women to afford the medical and non-medical costs associated with ANC services during pregnancy [2, 5, 19, 31, 32]. Conversely, lack of financial access is a barrier to the use of acceptable ANC services by pregnant women; it limits the number of ANC visits or delays ANC initiation during pregnancy [19].
The proportion of rural women who used ANC4+ increased from 9% in 2005 to 28% in 2016, with the change in coefficients (change in women's behavior) accounting for 46% of the overall decomposition over the last 10 years. Although the proportion of urban women with acceptable ANC use was 66% in 2016, far exceeding the rural proportion in the same year, the greatest progress was observed in rural areas, where most of the population lives, as documented in previous studies [32, 33]. This might be due to the deployment of health extension workers [34, 35] and the expansion of primary health care units over the last decade through the rehabilitation and upgrading of existing health facilities as well as the construction of new ones; for example, the numbers of health posts and health centers were 6,191 and 668 in 2005 and increased to 16,045 and 3,245, respectively, by 2013 [27, 34,35,36,37]. In addition, the provision of health insurance, free medical services, improvements in human resources, and road construction in each district might have contributed to the improvement in ANC4+ use among pregnant women in Ethiopia.
A striking finding of this analysis is the effect of religion through both the composition of characteristics (endowments) and the coefficients (behavioral change among women). The change in coefficients (behavior) among Orthodox Christians contributed to the improvement in acceptable ANC visits during pregnancy, in line with a study conducted in Sub-Saharan Africa [28]. However, there is no supporting evidence on the reasons for the differences among religions.
This study has several strengths. First, it was based on large, nationally representative datasets, so the findings rest on adequate statistical power. Second, all calculations were performed after the data were weighted for sampling probabilities and non-response, and the complex sampling design was taken into account when testing statistical significance. Third, analytic techniques such as decomposition analysis were used to understand the sources of change in acceptable ANC visits.
The study provides important findings to support acceptable ANC visits in Ethiopia, but it is not without limitations that may affect the conclusions. Because the data come from cross-sectional surveys, they are prone to recall bias and social desirability bias. In the decomposition analysis, important variables such as women's decision-making capacity, the type of investigations performed during ANC visits, maternal health service quality, and maternal medical and obstetric conditions (e.g., diabetes mellitus, hypertension, HIV/AIDS, heart failure, and renal disease), as well as attitudes towards ANC, were not addressed because they were not available in the data. Further research is needed, including alternative methodologies to decomposition analysis.
Conclusion and recommendation
Acceptable ANC visits among women have increased slightly over the last 10 years in Ethiopia. Nearly one-third of the overall change in acceptable ANC visits over this period was due to differences in population characteristics between 2005 and 2016. Compositional changes in religion, husbands' educational attainment, residence, and wealth status were the main factors behind the improvement in acceptable ANC visits. Almost two-thirds of the improvement relied on changes in coefficients, that is, in the behavior of pregnant women towards acceptable ANC visits. The factors contributing to the change in coefficients were residence, religion, and educational attainment.
Public interventions should continue to enhance the ANC program, and health care facilities in rural communities should be further upgraded to sustain improvements in acceptable ANC visits. Advancing the education of the young population is essential to empower girls and foster a positive attitude towards ANC visits during pregnancy.
The data are available from the corresponding author upon request.
AIDS:
Acquired immune deficiency syndrome
ANC:
Antenatal care
EAs:
Enumeration areas
EDHS:
Ethiopia Demographic and Health Survey
EPHC:
Ethiopian Population and Housing Census
CDC:
Center of Disease Control
EHNRI:
Ethiopian Health and Nutrition Research Institute
IRB:
Institutional Review Board
HIV:
Human immunodeficiency virus
NGO:
Non-governmental organization
SNNP:
Southern nations, nationalities, and people's region
MDG:
Millennium development goal
SSA:
Sub-Saharan Africa
SDG:
Sustainable development goal
TBA:
Traditional birth attendance
TT:
Tetanus toxoid
World Health Organization. WHO recommendations on antenatal care for a positive pregnancy experience: executive summary. Geneva: World Health Organization; 2016.
Ataguba JE-O. A reassessment of global antenatal care coverage for improving maternal health using sub-Saharan Africa as a case study. PLoS One. 2018;13(10):e0204822.
Secretariat U. The millennium development goals report 2015. New York: United Nations; 2015.
World Health Organization. WHO recommendations on antenatal care for a positive pregnancy experience. Geneva: World Health Organization; 2016.
Catherine Arsenault KJ, Lee D, Dinsa G, Manzi F, Marchant T, Kruk ME. Equity in antenatal care quality: an analysis of 91 national household surveys. Lancet Glob Health. 2018;6(11):86–95.
Alkema L, et al. Global, regional, and national levels and trends in maternal mortality between 1990 and 2015, with scenario-based projections to 2030: a systematic analysis by the UN maternal mortality estimation inter-agency group. Lancet. 2016;387(10017):462–74.
Birmeta K, Dibaba Y, Woldeyohannes D. Determinants of maternal health care utilization in Holeta town, central Ethiopia. BMC Health Serv Res. 2013;13(25):6.
Nazmul Alam MH, Dumont A, Fournier P. Inequalities in Maternal Health Care Utilization in Sub-Saharan African Countries: A Multiyear and Multi-Country Analysis. PLoS One. 2015;10(4):e0120922.
Amoakoh-Coleman M, et al. Predictors of skilled attendance at delivery among antenatal clinic attendants in Ghana: a cross-sectional study of population data. BMJ Open. 2015;5(5):e007810.
Hijazi HH, et al. Determinants of antenatal care attendance among women residing in highly disadvantaged communities in northern Jordan: a cross-sectional study. Reprod Health. 2018;15(1):106.
Nyongesa C, et al. Factors influencing choice of skilled birth attendance at ANC: evidence from the Kenya demographic health survey. BMC Pregnancy Childbirth. 2018;18(1):88.
Yakoob MY, et al. The effect of providing skilled birth attendance and emergency obstetric care in preventing stillbirths. BMC Public Health. 2011;11(3):S7.
Markos Mezmur KN. Gobopamang Letamo and Hadgu Bariagaber, Socioeconomic inequalities in the uptake of maternal healthcare services in Ethiopia. BMC Health Serv Res. 2017;17(3):67.
Ambel AA, et al. Examining changes in maternal and child health inequalities in Ethiopia. Int J Equity Health. 2017;16(1):152.
Guevvera Y. World Health Organisation: Neonatal and perinatal mortality: country, regional and global estimates. Cebu: sun: WHO; 2006.
Sialubanje C, et al. Improving access to skilled facility-based delivery services: Women's beliefs on facilitators and barriers to the utilisation of maternity waiting homes in rural Zambia. Reprod Health. 2015;12(1):61.
Zere E, Kirigia JM, Duale S, Akazili J. Inequities in maternal and child health outcomes and interventions in Ghana. BMC Public Health. 2012;12(5):257.
Ahmed S, et al. Economic status, education and empowerment: implications for maternal health service utilization in developing countries. PLoS One. 2010;5(6):e11190.
Gebreyohannes Y, et al. Improving antenatal care services utilization in Ethiopia: an evidence–based policy brief. Int J Health Econ Policy. 2017;2:111–7.
Tekelab T, Chojenta C, Smith R, Loxton D. Factors affecting utilization of antenatal care in Ethiopia: a systematic review and meta-analysis. PLoS One. 2019;14(4):e0214848.
Bradley E, et al. Hospital quality improvement in Ethiopia: a partnership–mentoring model. Int J Qual Health Care. 2008;20(6):392–9.
Iyaniwura C, Yussuf Q. Utilization of antenatal care and delivery services in Sagamu, south western Nigeria. Afr J Reprod Health. 2009;13(3).
Zegeye AM, Bitew BD, Koye DN. Prevalence and Determinants of Early Antenatal Care Visit Pregnant Women Attending Antenatal Care in Debre Berhan Institutions, Central Ethiopia. Afr J Reprod Health. 2013;17(4):130.
Powers DA, Yoshioka H, Yun M-S. mvdcmp: Multivariate decomposition for nonlinear response models. Stata J. 2011;11(4):556–76.
USAID, A. Three successful sub-Saharan Africa family planning programs: lessons for meeting the MDGs. Washington DC: USAID; 2012.
Federal Democratic Republic of Ethiopia, M.o.H. Health Sector Development Program IV. Addis Ababa: Ethiopia Federal Ministry of Health; 2010.
Alebachew A, Waddington C. Improving health system efficiency. Ethiopia: Human resource for health reforms; 2015.
Okedo-Alex IN, et al. Determinants of antenatal care utilisation in sub-Saharan Africa: a systematic review. BMJ Open. 2019;9(10):e031890.
Addis Abeba E. Assessment of Ethiopia's Progress towards the MDGs; 2015.
Dickson KS, Amu H. Determinants of skilled birth attendance in the northern parts of Ghana. Adv Public Health. 2017;2017:e031890.
Makate M, Makate C. The evolution of socioeconomic status-related inequalities in maternal health care utilization: evidence from Zimbabwe, 1994–2011. Global Health Res Policy. 2017;2(1):1.
Kumar G, et al. Utilisation, equity and determinants of full antenatal care in India: analysis from the National Family Health Survey 4. BMC Pregnancy Childbirth. 2019;19(1):327.
Medhanyie A, et al. The role of health extension workers in improving utilization of maternal health services in rural areas in Ethiopia: a cross sectional study. BMC Health Serv Res. 2012;12(1):352.
Afework MF, et al. Effect of an innovative community based health program on maternal health service utilization in north and south Central Ethiopia: a community based cross sectional study. Reprod Health. 2014;11(1):28.
Karim AM, et al. Effect of Ethiopia's health extension program on maternal and newborn health care practices in 101 rural districts: a dose-response study. PLoS One. 2013;8(6):e65160.
Ali EE. Health care financing in Ethiopia: implications on access to essential medicines. Value Health Reg Issues. 2014;4:37–40.
Bradley E, et al. Access and quality of rural healthcare: Ethiopian millennium rural initiative. Int J Qual Health Care. 2011;23(3):222–30.
We, the authors, acknowledge the Demographic and Health Surveys (DHS) Program, funded by the U.S. Agency for International Development (USAID), for the acquisition of the dataset.
We did not receive any funds for this study.
Department of Epidemiology and Biostatistics, Institute of Public Health, College of Medicine and Health Science, University of Gondar, Gondar, Ethiopia
Tilahun Yemanu Birhan & Wullo Sisay Seretew
Tilahun Yemanu Birhan
Wullo Sisay Seretew
TY was involved in this study from the inception to design, acquisition of data, data cleaning, data analysis and interpretation, and drafting and revising of the manuscript. WS was involved in project administration, principal supervision, and revising the final manuscript. All authors read and approved the final manuscript.
Correspondence to Tilahun Yemanu Birhan.
Ethiopian DHS obtained Ethical clearance from Ethiopian Health Nutrition and Research Institute (EHNRI) Review Board, the National Research Ethics Review Committee (NRERC) at the Ministry of Science and Technology of Ethiopia, the Institutional Review Board (IRB) of ICF International, and Center of Disease Control (CDC).
During the data collection, the interviewer read aloud a statement to get consent from the respondents. The respondents provided verbal consent, as DHS is conducted in areas where not all respondents can write. The interviewers then signed their name to document that the statement was read and that consent was granted or declined. Children were not respondents to interview; however, parents/ guardians gave consent for measurements. The authors have submitted the title of the research to DHS Program/ICF International and permission was granted to download and use the data for this study. The DHS Program authorized data access; and data were used solely for the current study.
We, the authors, declare that we had no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
Birhan, T.Y., Seretew, W.S. Trends and determinants of an acceptable antenatal care coverage in Ethiopia, evidence from 2005-2016 Ethiopian demographic and health survey; Multivariate decomposition analysis. Arch Public Health 78, 129 (2020). https://doi.org/10.1186/s13690-020-00510-2
Multivariate decomposition analysis
Let $\mathcal U$ be a cover for $S$.
A subcover of $\mathcal U$ for $S$ is a set $\mathcal V \subseteq \mathcal U$ such that $\mathcal V$ is also a cover for $S$.
A finite subcover of $\mathcal U$ for $S$ is a subcover $\mathcal V \subseteq \mathcal U$ which is finite.
A countable subcover of $\mathcal U$ for $S$ is a subcover $\mathcal V \subseteq \mathcal U$ which is countable.
\begin{definition}[Definition:Normal Real Number]
A real number $r$ is '''normal''' with respect to a number base $b$ {{iff}} its basis expansion in number base $b$ is such that:
:no finite sequence of digits of $r$ of length $n$ occurs more frequently than any other such finite sequence of length $n$.
In particular, for number base $b$, all digits of $r$ have the same natural density in the basis expansion of $r$.
\end{definition}
\begin{document}
\title{A Generalized Matching Reconfiguration Problem} \thispagestyle{empty}
\begin{abstract} The goal in {\em reconfiguration problems} is to compute a {\em gradual transformation} between two feasible solutions of a problem such that all intermediate solutions are also feasible. In the {\em Matching Reconfiguration Problem} (MRP), proposed in a pioneering work by Ito et al.\ from 2008, we are given a graph $G$ and two matchings $M$ and $M'$, and we are asked whether there is a sequence of matchings in $G$ starting with $M$ and ending at $M'$, each resulting from the previous one by either adding or deleting a single edge in $G$,
without ever going through a matching of size $< \min\{|M|,|M'|\}-1$. Ito et al.\ gave a polynomial time algorithm for the problem, which uses the Edmonds-Gallai decomposition.
In this paper we introduce a natural generalization of the MRP that depends on an integer parameter $\Delta \ge 1$: here we are allowed to make $\Delta$ changes to the current solution rather than 1 at each step of the {transformation procedure}. There is always a valid sequence of matchings transforming $M$ to $M'$ if $\Delta$ is sufficiently large, and naturally we would like to minimize $\Delta$. We first devise an optimal transformation procedure for unweighted matching with $\Delta = 3$, and then extend it to weighted matchings to achieve asymptotically optimal guarantees. The running time of these procedures is linear.
We further demonstrate the applicability of this generalized problem to dynamic graph matchings. In this area, the number of changes to the maintained matching per update step (the \emph{recourse bound}) is an important quality measure. Nevertheless, the \emph{worst-case} recourse bounds of almost all known dynamic matching algorithms are prohibitively large, much larger than the corresponding update times. We fill in this gap via a surprisingly simple black-box reduction: Any dynamic algorithm for maintaining a $\beta$-approximate maximum cardinality matching with update time $T$, for any $\beta \ge 1$, $T$ and $\epsilon > 0$, can be \emph{transformed} into an algorithm for maintaining a $(\beta(1 +\epsilon))$-approximate maximum cardinality matching with update time $T + O(1/\epsilon)$ and worst-case recourse bound $O(1/\epsilon)$. This result generalizes for approximate maximum weight matching, where the update time and worst-case recourse bound grow from $T + O(1/\epsilon)$ and $O(1/\epsilon)$ to $T + O(\psi/\epsilon)$ and $O(\psi/\epsilon)$, respectively; $\psi$ is the graph {\em aspect-ratio}. We complement this positive result by showing that, for $\beta = 1+\epsilon$, the worst-case recourse bound of any algorithm produced by our reduction is optimal. As a corollary, several key dynamic approximate matching algorithms--- with poor worst-case recourse bounds--- are strengthened to achieve near-optimal worst-case recourse bounds with no loss in update time. \end{abstract}
\setcounter{page}{1} \section{Introduction}
The study of graph algorithms is mostly concerned with the measure of \emph{(static) runtime}. Given a graph optimization problem, the standard objective is to design a fast (possibly approximation) algorithm, and ideally complement it with a matching lower bound on the runtime of any (approximation) algorithm for solving the problem. As an example, computing (from scratch) a 2-approximate minimum vertex cover (VC) can be done trivially in linear time,
whereas a better-than-2 approximation for the minimum VC cannot be computed in polynomial time under the unique games conjecture \cite{KR08}.
The current paper is motivated by a natural need arising in networks that are prone to temporary or permanent changes. Such changes are sometimes part of the normal behavior of the network, as in {\em dynamic networks}, but changes could also be the result of unpredictable failures of nodes and edges, particularly in \emph{faulty networks}. Consider a large-scale network $G = (V,E,w)$ for which we need to solve, perhaps approximately, some graph optimization problem, and the underlying solution (e.g., a maximum matching) is being used for some practical purpose (e.g., scheduling in packet switches) throughout a long time span. If the network changes over time, the quality of the used solution may degrade until it is too poor to be used in practice and it may even become infeasible.
Instead of the standard objectives of optimization, the questions that arise here concern \emph{reoptimization}: Can we ``efficiently'' transform one given solution (the \emph{source}) to another one (the \emph{target}) under ``real-life constraints''?
The efficiency of the {\em transformation procedure} could be measured in terms of running time, but in some applications making even small changes to the currently used solution may incur huge costs, possibly much higher than the runtime cost of computing (from scratch) a better solution; we shall use ``procedure'' and ``process'' interchangeably.
In particular, this is often the case whenever the edges of the currently used solution are ``hard-wired'' in some physical sense, as in road networks.
Various real-life constraints or objectives may be studied; the one we focus on in this work
is that at any step (or every few steps) throughout the transformation process the current solution should be both feasible and of quality no worse (by much) than that of either the source or target solutions. This constraint is natural as it might be prohibitively expensive or even impossible to carry out the transformation process \emph{instantaneously}.
Instead, the transformation can be broken into \emph{phases} each performing $\le \Delta$ changes to the transformed solution, where $\Delta \ge 1$ is some parameter,
so that the solution obtained at the end of each phase--- to be used instead of the source solution--- is both feasible and of quality no (much) worse than either the source or target. The transformed solution is to eventually coincide with the target solution.
The arising \emph{reoptimization} meta-problem generalizes the well-studied framework of \emph{reconfiguration problems}, which we discuss in Section 1.1. It is interesting from both practical and theoretical perspectives, since even the most basic and well-understood optimization problems become open in this setting. E.g., for the VC problem,
{\em given} a better-than-2 approximate target VC, can we transform to it from any source VC subject to the above constraints? This is an example for a problem that is computationally hard in the standard sense but might be easy from a reoptimization perspective. In contrast, perhaps computationally easy problems, such as approximate maximum matching, are hard from a reoptimization perspective?
This meta-problem captures tension between (1) the {\em global} objective of transforming one global solution to another, and (2) the {\em local} objective of transforming \emph{gradually} while having a feasible and high quality solution throughout the process.
A similar tension is captured by various models of computation that involve locality, including dynamic graph algorithms, distributed computing, property testing and local computation algorithms (LCA). The study of the meta-problem presented here could borrow from these related research fields, but, more importantly, we anticipate that it will also contribute to them; indeed, we present here an application of this meta-problem to dynamic graph algorithms.
\\ \noindent {\bf 1.1~ Graph Reconfiguration.~} The framework of {\em reconfiguration problems} has been subject to growing interest in recent years. The term {\em reconfiguration} was coined in the work of Ito et al.\ \cite{DBLP:journals/tcs/ItoDHPSUU11}, which unified earlier related problems and terminology (see, e.g., \cite{DBLP:conf/icalp/HearnD02,gopalan2009connectivity,bonsma2009finding}) into a single framework. The general goal is to compute a {\em transformation} between two feasible solutions of a problem such that all intermediate solutions are also feasible, where each pair of consecutive solutions need to be {\em adjacent} under a fixed polynomially testable symmetric adjacency relation on the set of feasible solutions. Such a transformation arises naturally in many contexts, such as solving puzzles, motion planning, questions of evolvability (can genotype evolve into another one via individual ``adjacent'' mutations?), and similarity of DNA sequences in computational genomics and particularly gene editing, which is among the hottest scientific topics these days;
see the surveys of \cite{DBLP:books/cu/p/Heuvel13,DBLP:journals/algorithms/Nishimura18} for further details. In most previous work, two solutions are called adjacent if their symmetric difference has size 1. The most well-studied problem under this framework is graph matching. For brevity, we shall only discuss here papers on graph matching; see the surveys \cite{DBLP:books/cu/p/Heuvel13,DBLP:journals/algorithms/Nishimura18} for discussions on other problems.
In the {\em Matching Reconfiguration Problem} (MRP), proposed in \cite{DBLP:journals/tcs/ItoDHPSUU11}, we are given a graph $G$ and two matchings $M$ and $M'$, and we are asked whether there is a sequence of matchings in $G$ starting with $M$ and ending at $M'$,
each resulting from the previous one by either adding or deleting a {\em single edge} in $G$,
without ever going through a matching of size $< \min\{|M|,|M'|\}-1$.
Ito et al.\ gave a polynomial time algorithm for the problem, which uses the Edmonds-Gallai decomposition. In particular, in some cases such a transformation does not exist,
and much of the difficulty lies in the decision problem (deciding whether such a transformation exists).
The problem of generalizing this algorithm for weighted matchings was proposed as an open problem in \cite{DBLP:journals/tcs/ItoDHPSUU11}, and remained open to date, partially since the algorithm of \cite{DBLP:journals/tcs/ItoDHPSUU11} already for unweighted matchings relies on a rather intricate decomposition. The work of \cite{DBLP:journals/tcs/ItoDHPSUU11} triggered interesting followups on MRP \cite{DBLP:journals/tcs/KaminskiMM12,DBLP:journals/jco/ItoKKKO19,gupta2019complexity,DBLP:conf/esa/ItoKK0O19,DBLP:conf/mfcs/BonamyBHIKMMW19,DBLP:conf/wg/BousquetHIM19}. In all these followups,
the symmetric difference between two adjacent matchings is rather strict: it is fixed to be either 1 or 2 in \cite{DBLP:journals/tcs/KaminskiMM12,DBLP:journals/jco/ItoKKKO19,gupta2019complexity,DBLP:conf/wg/BousquetHIM19}, whereas in the context of perfect matchings the symmetric difference is an alternating cycle of length 4 \cite{DBLP:conf/mfcs/BonamyBHIKMMW19,DBLP:conf/esa/ItoKK0O19}. Perhaps since the symmetric difference in all the previous work is so strict, the goal was polynomial-time algorithms and hardness for the problem. The natural generalization of parameterizing the symmetric difference by an arbitrary $\Delta, \Delta \ge 1$--- as in our reoptimization meta-problem--- was not studied in prior work.
\\ \noindent {\bf 1.2~ Our contribution.~} We study two fundamental graph matching problems under the aforementioned meta-problem:
(approximate) maximum cardinality matching (MCM) and maximum weight matching (MWM).
Our meta-problem is, in fact, inherently different than the original MRP. We are not interested in the decision version of the problem--- we take $\Delta$ to be large enough so that a transformation is {\em guaranteed} to exist. Thus we shift the focus from per-instance optimization to {\em existential optimization}, and our goal is to optimize $\Delta$ so that any source matching can be transformed to any target matching
by performing at most $\Delta$ changes per step, while never reaching a {\em much worse} matching than either the source or the target along the way. By ``worse'' we mean either in terms of size or weight, and we must indeed do a bit worse in some cases even for large $\Delta$; the original MRP formulation for unweighted matchings allows to go down by 1 unit of size, and this slack is required also for large $\Delta$. For weighted graphs, naturally, a bigger slack is required. For both unweighted and weighted matchings, we provide transformation procedures with near-optimal guarantees and linear running time. Our results are summarized next; the transformation for approximate MWM (Theorem~\ref{th:main}) is the most technically challenging part of this work.
\begin {Theorem} [MCM] \label{th:MCM}
For any source and target matchings ${\cal M}$ and ${\cal M}'$, one can transform ${\cal M}$ into (a possibly superset of) ${\cal M}'$ via a sequence of phases consisting of $\le$ 3 operations each (i.e., $\Delta = 3$), such that the matching at the end of each phase throughout this transformation is a valid matching for $G$ of size $\ge \min\{|{\cal M}|,|{\cal M}'|-1\}$. The runtime of this transformation procedure is $O(|{\cal M}| + |{\cal M}'|)$. \end {Theorem}
\begin {Theorem} [MWM] \label{th:main} For any source and target matchings ${\cal M}$ and ${\cal M}'$ with $w({\cal M}') > w({\cal M})$, and any $\epsilon >0$, one can transform ${\cal M}$ into (a possibly superset of) ${\cal M}'$ via a sequence of phases consisting of $O(\frac 1 \epsilon)$ operations each (i.e., $\Delta = O(\frac 1 \epsilon)$), such that the matching obtained at the end of each phase throughout this transformation is a valid matching for $G$ of weight $\ge \max\{w({\cal M})-W,(1-\epsilon)w({\cal M})\}$, where $W = \max_{e \in {\cal M}} w(e)$.
The runtime of this transformation procedure is $O(|{\cal M}| + |{\cal M}'|)$. \end {Theorem} {\bf Remark.} Theorem \ref{th:main} assumes that $w({\cal M}') > w({\cal M})$. This assumption is made without loss of generality, since, if $w({\cal M}') \le w({\cal M})$, we can apply a reversed transformation, so that the matching will always be of weight $\ge \max\{w({\cal M}')-W',(1-\epsilon)w({\cal M}')\}$, where $W' = \max_{e \in {\cal M}'} w(e)$. \ignore{
\begin {Theorem} [MSF] \label{forest} For any source and target spanning forests ${\cal F}$ and ${\cal F}'$, one can gradually transform ${\cal F}$ into ${\cal F}'$ via a sequence of constant-time operations, grouped into phases of two operations each, such that the spanning forest obtained at the end of each phase throughout this transformation process is a valid spanning forest for $G$ of weight at most $\max\{w({\cal F}),w({\cal F}')\}$.
Moreover, the runtime is $O\bigl((|{\mathcal{F}}|+|{\mathcal{F}}'|)\log(|{\mathcal{F}}|+|{\mathcal{F}}'|)\bigr)$. \end {Theorem} }
In App.\ \ref{tightness}, we show that the guarantees provided by Theorems \ref{th:MCM} and \ref{th:main} are tight and asymptotically tight, respectively. Although our results may lead to the impression that there exists an efficient gradual transformation process to any graph optimization problem, we briefly discuss in App.\ \ref{discuss} two trivial hardness results for the minimum VC and maximum independent set problems.
\\ \noindent {\bf 1.2.1~ Application: A worst-case recourse bound for dynamic matching algorithms.} In the standard \emph{fully dynamic} setting we start from an empty graph $G_0$ on $n$ fixed vertices, and at each time step $i$ a single edge $(u,v)$ is either inserted to the graph $G_{i-1}$ or deleted from it, resulting in graph $G_i$.
In the {\em vertex update} setting we have vertex updates instead of edge updates;
this setting was mostly studied for bipartite graphs \cite{BLSZ14,BLSZ15,BHR17}.
The problem of maintaining a large matching in fully dynamic graphs was subject to intensive interest recently \cite{OR10,BGS11,NS13,GP13, PS16,Sol16,BHN17,BHR17,CS18,ACCSW18,GLSSS19,BFH19}.
The basic goal is to devise an algorithm for maintaining a large matching while keeping a tab on the \emph{update time}, i.e., the time required to update the matching at each step. One may try to optimize the \emph{amortized} (average) update time of the algorithm or its \emph{worst-case} (maximum) update time, but both measures are defined with respect to a \emph{worst-case} sequence of graphs.
``Maintaining'' a matching with update time $u_T$ translates into maintaining a data structure with update time $u_T$, which answers queries regarding the matching with a low, ideally constant, \emph{query time} $q_T$. For a queried vertex $v$ the answer is the only matched edge incident on $v$, or $\textsc{null}$ if $v$ is free, while for a queried edge $e$ the answer is whether edge $e$ is matched or not. All queries made following the same update step $i$ should be answered \emph{consistently} with respect to the same matching, hereafter the \emph{output matching (at step $i$)}, but queries made in the next update step $i+1$ may be answered with respect to a completely different matching.
Thus
even if the worst-case update time is low,
the output matching
may change significantly from one update step to the next; some natural scenarios where the output matching changes significantly per update step are discussed in App.\ \ref{12}.
The number of changes (or replacements) to the output matching per update step is an important measure of quality, sometimes referred to as the \emph{recourse bound},
and the problem of optimizing it has received growing attention recently \cite{GKKV95,CDKL09,GKS14,BLSZ15,BLZS17,BHR17,BKPPS17,ADJ18,MSV18}.
In applications such as job scheduling, web hosting, streaming content delivery, data storage and hashing, a replacement of a matched edge by another one may be costly, possibly much more than the runtime of computing these replacements.
Moreover, when the recourse bound is low, one can efficiently \emph{output} all the changes to the matching following every update step, which could be important in practical scenarios. In particular, a low recourse bound is important when the matching algorithm is used as a black-box subroutine inside a larger data structure or algorithm \cite{BS16,ADKKP16}; see App.\ \ref{123} for more details. We remark that the recourse bound (generally defined as the number of changes to some underlying structure per update step) has been well studied in the areas of dynamic and online algorithms for a plethora of optimization problems besides graph matching, such as MIS, set cover, Steiner tree, flow and scheduling; see \cite{GGK13,GK14,GKS14,BGKPSS15,MSVW16,AOSS18,CHK16,GKKP17,SSTT18},
and the references therein.
There is a strong separation between the state-of-the-art amortized versus worst-case bounds for dynamic matching algorithms, in terms of both the time and the recourse bounds. A similar separation exists for numerous other problems, such as dynamic minimum spanning forest. In various practical scenarios, particularly in systems designed to provide real-time responses, a strict tab on the \emph{worst-case update time} or on the \emph{worst-case recourse bound} is crucial, thus an algorithm with a low amortized guarantee but a high worst-case guarantee is useless.
Despite the importance of the recourse bound measure, all known algorithms but one in the area of dynamic matchings (described in App.\ \ref{111}) provide no nontrivial worst-case recourse bounds whatsoever! The sole exception is an algorithm for maintaining a maximal matching with a worst-case update time $O(\sqrt{m})$ and a constant recourse bound \cite{NS13}. In this paper we fill in this gap via a surprisingly simple
yet powerful black-box reduction (throughout \emph{$\beta$-MCM} is a shortcut for $\beta$-approximate MCM):
\begin{Theorem} \label{main} Any dynamic algorithm maintaining a $\beta$-MCM with update time $T$,\footnote{Besides
answering queries, we naturally assume that at any update step
the entire matching can be output within time (nearly) linear in its size. All known algorithms satisfy this assumption.}
for any $\beta \ge 1$, $T$ and $\epsilon > 0$, can be \emph{transformed} into an algorithm maintaining a $(\beta(1 +\epsilon))$-MCM with update time $T + O(1/\epsilon)$ and worst-case recourse bound $O(1/\epsilon)$. If the original time bound $T$ is amortized/worst-case, so is the resulting time bound of $T + O(1/\epsilon)$, while the recourse bound $O(1/\epsilon)$ always holds in the worst-case. This applies to the fully dynamic setting under edge and/or vertex updates.
\end{Theorem}
The proof of Theorem \ref{main} is carried out in two steps. First we prove Theorem \ref{th:MCM} by showing a simple transformation process for any two matchings ${\cal M}$ and ${\cal M}'$ of the same \emph{static} graph.
The second step of the proof, which is the key insight behind it, is that the gradual transformation process can be used \emph{essentially as is} in fully dynamic graphs, while incurring a negligible loss to the size and approximation guarantee of the transformed matching.
In Section \ref{recourse} we complement the positive result provided by Theorem \ref{main} by proving that the recourse bound $O(1/\epsilon)$ is optimal (up to a constant factor) in the regime $\beta = 1+\epsilon$. In fact, the lower bound $\Omega(1/\epsilon)$ on the recourse bound holds even in the amortized sense and even in the incremental (insertion only) and decremental (deletion only) settings. For larger values of $\beta$, taking $\epsilon$ to be a sufficiently small constant gives rise to an approximation guarantee arbitrarily close to $\beta$ with a constant recourse bound.
\\
{\bf A corollary of Theorem \ref{main}.~}
As a corollary of Theorem \ref{main}, all previous algorithms \cite{GP13,BS15,PS16,CS18,ACCSW18,bernstein2019deamortization,DBLP:journals/corr/abs-1911-05545} with low worst-case update time are strengthened to achieve a worst-case recourse bound of $O(1/\epsilon)$ with only an additive overhead of $O(1/\epsilon)$ to the update time. (Some of these results were already strengthened in this way by using a previous version of the current work, which was posted to arXiv in 2018.) Since the update time of all these algorithms is larger than $O(1/\epsilon)$, we get a recourse bound of $O(1/\epsilon)$ with no loss whatsoever in the update time! Moreover, all known algorithms with low amortized update time can be strengthened in the same way; e.g.,
in SODA'19 \cite{GLSSS19} (cf.\ \cite{BLSZ14}) it was shown that one can maintain a $(1+\epsilon)$-MCM in the incremental edge update setting with a constant (depending exponentially on $\epsilon$) amortized update time.
While this algorithm yields a constant amortized recourse bound, no nontrivial (i.e., $o(n)$) worst-case recourse bound was known for this problem. Theorem \ref{main} strengthens the result of \cite{GLSSS19} to maintain a $(1+\epsilon)$-MCM with a constant amortized update time and the optimal worst-case recourse bound of $O(1 / \epsilon)$.
Since the recourse bound is an important measure of quality, this provides a significant contribution to the area of dynamic matching algorithms. \noindent
\\
{\bf Weighted matchings.~}
The result of Theorem \ref{main} can be generalized for approximate MWM in graphs with bounded aspect ratio $\psi$, by using the much more intricate transformation provided by Theorem \ref{th:main} (compared to Theorem \ref{th:MCM}), as summarized in the next theorem. (The \emph{aspect ratio} $\psi = \psi(G)$ of a weighted graph $G=(V,E,w)$ is defined as $\psi = \frac{\max_{e \in E} w(e)} {\min_{e \in E} w(e)}$.)
\begin{Theorem} \label{main2} Any dynamic algorithm for maintaining a $\beta$-approximate MWM (shortly, \emph{$\beta$-MWM}) with update time $T$ in a dynamic graph with aspect ratio always bounded by $\psi$, for any $\beta \ge 1$, $T, \epsilon > 0$ and $\psi$, can be \emph{transformed} into an algorithm for maintaining a $(\beta(1 +\epsilon))$-MWM with update time $T + O(\psi/\epsilon)$ and worst-case recourse bound $O(\psi/\epsilon)$. If the original time bound $T$ is amortized/worst-case, so is the resulting time bound of $T + O(\psi/\epsilon)$, while the recourse bound $O(\psi/\epsilon)$ always holds in the worst-case. This applies to the fully dynamic setting under edge and/or vertex updates.
\end{Theorem}
\noindent {\bf Scenarios with high recourse bounds.~} There are various scenarios where high recourse bounds may naturally arise. In such scenarios our reductions (Theorems \ref{main} and \ref{main2}) can come into play to achieve low worst-case recourse bounds. Furthermore, although a direct application of our reductions may only hurt the update time, we demonstrate the usefulness of these reductions in achieving low update time bounds in some natural settings (where we might not care at all about recourse bounds); this, we believe, provides another strong motivation for our reductions. The details are deferred to App.\ \ref{12}.
\\ \noindent {\bf 1.3~ Related work.~} We discussed in Section 1.1 prior work on graph reconfiguration problems.
Other than this line of work, there are also inherently different lines of work on ``reoptimization'', which indeed can be interpreted broadly--- there is an extensive and diverse body of research devoted to various notions of reoptimization; see \cite{thiongane2006,AusielloEMP09,BoriaP10,BiloBKKMSZ11,ausiello2011complexity,bender2015reallocation,bender2017cost,SSTT18,Bilo18}, and the references therein. The common goal in all previous work on reoptimization (besides the one discussed in Section 1.1 on reconfiguration) is to (efficiently) compute an exact or approximate solution to a new problem instance by using the solution for the old instance, where typically the solution for the new instance should be close to the original one under a certain distance measure.
Our work is inherently different than all such previous work, since our starting point is that \emph{some solution to the new problem instance is given}, and the goal is to compute a \emph{gradual transformation process} (subject to some constraints) between the two given solutions. Also, our work is inherently different than previous work on reconfiguration, as explained in Section 1.2.
\\ \noindent {\bf 1.4~ Organization.~}
This extended abstract (Sections \ref{changes}--\ref{recourse}) focuses on providing a tight worst-case recourse bound for dynamic approximate maximum matching. We start (Section \ref{changes}) by describing a basic scheme for dynamic approximate matchings that was introduced in \cite{GP13}.
In Section 3.1 we present a simple transformation process for MCM in static graphs, thus proving Theorem \ref{th:MCM}. This result is generalized for MWM via a much more intricate transformation process that proves Theorem \ref{th:main}, which is deferred to App.\ \ref{sec:wema} due to space constraints. These transformations, which apply to static graphs, are adapted to the fully dynamic setting in Sections 3.2 and 3.3, thus proving Theorems \ref{main} and \ref{main2}, respectively.
Our lower bound of $\Omega(1/\epsilon)$ on the recourse bound of $(1+\epsilon)$-MCMs is provided in Section \ref{recourse}.
A discussion is deferred to App.\ \ref{discuss}.
\section{The scheme of \cite{GP13}} \label{changes}
This section provides a short overview of a basic scheme for dynamic approximate matchings from \cite{GP13}. Although such an overview is not required for proving Theorems \ref{main} and \ref{main2}, it is instructive to provide it, as it shows that the scheme of \cite{GP13} is insufficient for providing any nontrivial worst-case recourse bound. Also, the scheme of \cite{GP13} exploits a basic \emph{stability} property of matchings, which we use for proving Theorems \ref{main} and \ref{main2},
thus an overview of this scheme may facilitate the understanding of our proof.
\noindent
\\
{\bf 2.1~ The amortization scheme of \cite{GP13}.~} The \emph{stability} property of matchings used in \cite{GP13} is that the maximum matching size changes by at most 1 following each update step. Thus if we have a $\beta$-MCM, for any $\beta \ge 1$,
the approximation guarantee of the matching will remain close to $\beta$ throughout a long update sequence.
Formally, the following lemma is a simple adaptation of Lemma 3.1 from \cite{GP13}; its proof is given in Appendix \ref{app:lazy} for completeness.
(Lemma 3.1 of \cite{GP13} is stated for approximation guarantee $1+\epsilon$ and for edge updates, whereas Lemma \ref{lazylemma2} here holds for any approximation guarantee and also for vertex updates.)
\ignore{
\begin{lemma} \label{lazylemma}
Let $\epsilon,\epsilon' \le 1/2$. Suppose that ${\cal M}_i$ is a $(1+\epsilon)$-MCM for $G_i$. For $j = i,i+1,\ldots, i+\lfloor \epsilon'\cdot |{\cal M}_i| \rfloor$, let ${\cal M}^{(j)}_i$ denote the matching ${\cal M}_i$ after removing from it all edges that got deleted during the updates $i+1,\ldots,j$. Then ${\cal M}^{(j)}_i$ is a $(1+2\epsilon+2\epsilon')$-MCM for the graph $G_j$. \end{lemma} }
\ignore{ \subsection{An Adjustment to Lemma \ref{lazylemma}} Lemma \ref{lazylemma} applies to approximation guarantees close to 1 and to the standard edge update setting. However, as we show next, it is straightforward to extend it to any approximation guarantee $\beta$, in both the edge update and the vertex update settings. }
\begin{lemma} \label{lazylemma2}
Let $\epsilon' \le 1/2$. Suppose ${\cal M}_t$ is a $\beta$-MCM for $G_t$, for any $\beta \ge 1$. For $i = t,t+1,\ldots, t+\lfloor \epsilon'\cdot |{\cal M}_t| \rfloor$, let ${\cal M}^{(i)}_t$ denote the matching ${\cal M}_t$ after removing from it all edges that got deleted during updates $t+1,\ldots,i$. Then ${\cal M}^{(i)}_t$ is a $(\beta(1 +2\epsilon'))$-MCM for $G_i$. \end{lemma}
For concreteness, we shall focus on the regime of approximation guarantee $1+\epsilon$, and sketch the argument of \cite{GP13} for maintaining a $(1+\epsilon)$-MCM in fully dynamic graphs. (As Lemma \ref{lazylemma2} applies to any approximation guarantee $\beta \ge 1+\epsilon$, it is readily verified that the same argument carries over to any approximation guarantee.)
One can compute a $(1+\epsilon/4)$-MCM ${\cal M}_t$ at a certain update step $t$, and then re-use the same matching ${\cal M}^{(i)}_t$ throughout all update steps $i = t,t+1,\ldots, t' = t+ \lfloor \epsilon/4\cdot |{\cal M}_t| \rfloor$ (after removing from it all edges that got deleted from the graph between steps $t$ and $i$). By Lemma \ref{lazylemma2}, assuming $\epsilon \le 1/2$, ${\cal M}^{(i)}_t$ provides a $(1+\epsilon)$-MCM for all graphs $G_i$.
Next compute a fresh $(1+\epsilon/4)$-MCM ${\cal M}_{t'}$ following update step $t'$ and re-use it throughout all update steps $t',t'+1,\ldots,t'+ \lfloor \epsilon/4 \cdot |{\cal M}_{t'}|\rfloor$, and repeat. In this way the static time complexity of computing a $(1+\epsilon)$-MCM ${\cal M}$
is \emph{amortized} over $1 + \lfloor \epsilon/4\cdot |{\cal M}| \rfloor = \Omega(\epsilon \cdot |{\cal M}|)$ update steps. As explained in Appendix \ref{amort}, the static computation time of an approximate matching is $O(|{\cal M}| \cdot \alpha/\epsilon^2)$, where $\alpha$ is the arboricity bound.
(This bound on the static computation time was established in \cite{PS16}; it reduces to $O(|{\cal M}| \cdot \sqrt{m}/\epsilon^2)$
and $O(|{\cal M}| \cdot {\Delta}/\epsilon^2)$ for general graphs and graphs of degree bounded by $\Delta$, respectively, which are the bounds provided by \cite{GP13}.)
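For concreteness, the following minimal sketch illustrates the lazy re-use idea behind this scheme; it is our own rendering (in Python), and \texttt{compute\_approx\_mcm} is a placeholder name for the static $(1+\epsilon/4)$-MCM routine rather than part of \cite{GP13}.
\begin{verbatim}
# Sketch of the lazy scheme: re-use a matching for ~eps*|M|/4 update
# steps, only removing edges that the adversary deletes in between.
def lazy_scheme(updates, compute_approx_mcm, eps):
    matching, budget = set(), 0            # current output matching
    for op, edge in updates:               # op is 'insert' or 'delete'
        if op == 'delete':
            matching.discard(edge)         # keep the matching valid
        if budget == 0:                    # window exhausted: recompute
            matching = set(compute_approx_mcm())
            budget = 1 + int(eps / 4 * len(matching))
        budget -= 1
        yield set(matching)                # output matching after this step
\end{verbatim}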
\noindent
\\
{\bf 2.2~ A Worst-Case Update time.~}
In the amortization scheme of \cite{GP13} described above, a $(1+\epsilon/4)$-MCM ${\cal M}$ is computed \emph{from scratch}, and then being re-used throughout $\lfloor \epsilon/4 \cdot |{\cal M}| \rfloor$ additional update steps. The worst-case update time is thus the static computation time of an approximate matching, namely, $O(|{\cal M}| \cdot \alpha/\epsilon^2)$. To improve the worst-case guarantee, the tweak used in \cite{GP13} is to simulate the static approximate matching computation within a ``time window'' of $1+\lfloor \epsilon/4 \cdot |{\cal M}| \rfloor$ consecutive update steps, so that following each update step the algorithm simulates only
$O(|{\cal M}| \cdot \alpha/\epsilon^2) / (1+\lfloor \epsilon/4 \cdot |{\cal M}| \rfloor) = O(\alpha \cdot \epsilon^{-3})$
steps of the static computation. During this time window the gradually-computed matching, denoted by ${\cal M}'$, is useless, so the previously-computed matching ${\cal M}$ is re-used as the output matching. This means that each matching is re-used throughout a time window of twice as many update steps, hence the approximation guarantee increases from $1+\epsilon$ to $1+2\epsilon$, but we can reduce it back to $1+\epsilon$ by a straightforward scaling argument.
Note that the gradually-computed matching does not include edges that got deleted from the graph during the time window. \noindent
\\ {\bf 2.3~ Recourse bounds.~}
Consider an arbitrary time window used in the amortization scheme of \cite{GP13}, and note that the same matching is being re-used throughout the entire window.
Hence there are no changes to the matching in the ``interior'' of the window except for those triggered by adversarial deletions, which may trigger at most one change to the matching per update step. On the other hand, at the start of any time window (except for the first), the output matching is switched from the old matching ${\cal M}$ to the new one ${\cal M}'$,
which may require $|{\cal M}| + |{\cal M}'|$ replacements to the output matching at that time.
Note that the amortized number of replacements per update step is quite low, being upper bounded by $(|{\cal M}| + |{\cal M}'|) / (1+\lfloor \epsilon/4 \cdot |{\cal M}| \rfloor)$. In the regime of approximation guarantee $\beta = O(1)$, we have $|{\cal M}| = O(|{\cal M}'|)$, hence the amortized recourse bound is bounded by $O(1/\epsilon)$. For a general approximation guarantee $\beta$, the naive amortized recourse bound is $O(\beta/\epsilon)$.
On the negative side, the worst-case recourse bound may still be as high as $|{\cal M}| + |{\cal M}'|$, even after performing the above tweak. Indeed, that tweak only causes the time windows to be twice longer, and it does not change the fact that
once the computation of ${\cal M}'$ finishes, the output matching is switched from the old matching ${\cal M}$ to the new one ${\cal M}'$ \emph{instantaneously}, which may require $|{\cal M}| + |{\cal M}'|$ replacements to the output matching at that time.
\section{Proofs of Theorems \ref{main} and \ref{main2}} \label{proofmain} This section is mostly devoted (see Sections 3.1 and 3.2) to the proof of Theorem~\ref{main}. At the end of this section (Section 3.3) we sketch the adjustments needed for deriving the result of Theorem \ref{main2},
whose proof follows along similar lines to those of Theorem \ref{main}.
\\ \noindent{\bf 3.1~ A simple transformation in static graphs.~} This section is devoted to the proof of Theorem \ref{th:MCM}, which provides the first step in the proof of Theorem \ref{main}. We remark that this theorem can be viewed as a ``warm up'' to Theorem \ref{th:main} for MWM, which is deferred to Section~\ref{sec:wema}, and is considerably more technically involved.
Let ${\cal M}$ and ${\cal M}'$ be two matchings for the same graph $G$. Our goal is to gradually transform ${\cal M}$ into (a possibly superset of) ${\cal M}'$ via a sequence of constant-time operations to be described next, each making at most 3 changes to the matching,
such that the matching obtained at any point throughout this transformation process is a valid matching for $G$ of size at least $\min\{|{\cal M}|,|{\cal M}'|-1\}$. It is technically convenient to denote by ${\cal M}^*$ the \emph{transformed} matching, which is initialized as ${\cal M}$ at the outset, and being gradually transformed into ${\cal M}'$; we refer to ${\cal M}$ and ${\cal M}'$ as the \emph{source} and \emph{target} matchings, respectively. Each operation starts by adding a single edge of ${\cal M}' \setminus {\cal M}^*$ to ${\cal M}^*$ and then removing from ${\cal M}^*$ the at most two edges incident on the newly added edge; thus at most 3 changes to the matching are made per operation. It is instructive to assume that $|{\cal M}'| > |{\cal M}|$, as the motivation for applying this transformation, which will become clear in Section 3.2, is to increase the matching size; in this case the size $|{\cal M}^*|$ of the transformed matching ${\cal M}^*$ never goes below the size $|{\cal M}|$ of the source matching ${\cal M}$.
We say that an edge of ${\cal M}' \setminus {\cal M}^*$ that is incident on at most one edge of ${\cal M}^*$ is \emph{good}, otherwise it is \emph{bad}, being incident on two edges of ${\cal M}^*$. Since ${\cal M}^*$ has to be a valid matching throughout the transformation process, adding a bad edge to ${\cal M}^*$ must trigger
the removal of two edges from ${\cal M}^*$. Thus if we keep adding bad edges to ${\cal M}^*$, the size of ${\cal M}^*$ may halve throughout the transformation process.
The following lemma shows that if all edges of ${\cal M}' \setminus {\cal M}^*$ are bad, the transformed matching ${\cal M}^*$ is just as large as the target matching ${\cal M}'$.
\begin{lemma} \label{badedge}
If all edges of ${\cal M}' \setminus {\cal M}^*$ are bad, then $|{\cal M}^*| \ge |{\cal M}'|$. \end{lemma} \begin{proof}
Consider a bipartite graph $L \cup R$, where each vertex in $L$ corresponds to an edge of ${\cal M}' \setminus {\cal M}^*$ and each vertex in $R$ corresponds to an edge of ${\cal M}^* \setminus {\cal M}'$, and there is an edge between a vertex in $L$ and a vertex in $R$ iff the corresponding matched edges share a common vertex in the original graph.
If all edges of ${\cal M}' \setminus {\cal M}^*$ are bad, then any edge of ${\cal M}' \setminus {\cal M}^*$ is incident on two edges of ${\cal M}^*$, and since ${\cal M}'$ is a valid matching, those two edges cannot be in ${\cal M}'$. In other words, the degree of each vertex in $L$ is exactly 2. Also, the degree of each vertex in $R$ is at most 2, as ${\cal M}'$ is a valid matching.
It follows that $|R| \ge |L|$, or in other words $|{\cal M}^* \setminus {\cal M}'| \ge |{\cal M}' \setminus {\cal M}^*|$, yielding $|{\cal M}^*| \ge |{\cal M}'|$. \quad\blackslug\lower 8.5pt\null\par \end{proof}
The transformation process is carried out as follows. At the outset we initialize ${\cal M}^* = {\cal M}$ and compute the sets ${\cal G}$ and ${\cal B}$ of good and bad edges in ${\cal M}' \setminus {\cal M}^* = {\cal M}' \setminus {\cal M}$ within time $O(|{\cal M}| + |{\cal M}'|)$ in the obvious way, and store them in doubly-linked lists. We keep mutual pointers between each edge of ${\cal M}^*$ and its at most two incident edges in the corresponding linked lists ${\cal G}$ and ${\cal B}$. Then we perform a sequence of operations, where each operation starts by adding an edge of ${\cal M}' \setminus {\cal M}^*$ to ${\cal M}^*$, giving precedence to good edges (i.e., adding a bad edge to ${\cal M}^*$ only when there are no good edges to add), and then removing from ${\cal M}^*$ the at most two edges incident on the newly added edge. Following each such operation, we update the lists ${\cal G}$ and ${\cal B}$ of good and bad edges in ${\cal M}' \setminus {\cal M}^*$ within constant time in the obvious way. This process is repeated until ${\cal M}' \setminus {\cal M}^* = \emptyset$, at which stage we have ${\cal M}^* \supseteq {\cal M}'$.
Note that the number of operations performed before emptying ${\cal M}' \setminus {\cal M}^*$ is bounded by $|{\cal M}'|$, since each operation removes at least one edge from ${\cal M}' \setminus {\cal M}^*$. It follows that the total runtime of the transformation process is bounded by $O(|{\cal M}| + |{\cal M}'|)$.
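For concreteness, the following Python sketch (our own illustration; the helper names are hypothetical, and it does not reproduce the pointer-based data structures that make each operation run in $O(1)$ time) mirrors the transformation process described above, including the precedence given to good edges:
\begin{verbatim}
# A simplified sketch of the static transformation of Section 3.1.
# Each iteration is one "operation": add one edge of M' \ M*, preferring
# good edges, then remove the at most two incident edges of M*.

def incident(e, f):
    # two matched edges are incident iff they share a vertex
    return bool(set(e) & set(f))

def transform(M, M_prime):
    M_star = set(M)                    # transformed matching, initialized to M
    snapshots = [set(M_star)]          # matching after every operation
    remaining = set(M_prime) - M_star
    while remaining:
        good = [e for e in remaining
                if sum(incident(e, f) for f in M_star) <= 1]
        e = good[0] if good else next(iter(remaining))  # prefer good edges
        M_star = {f for f in M_star if not incident(e, f)} | {e}
        remaining = set(M_prime) - M_star
        snapshots.append(set(M_star))
    return M_star, snapshots

# toy usage: M and M' along an alternating path
M       = [(1, 2), (3, 4), (5, 6)]
M_prime = [(0, 1), (2, 3), (4, 5), (6, 7)]
final, snaps = transform(M, M_prime)
assert set(M_prime) <= final           # M* ends as a superset of M'
assert all(len(s) >= min(len(M), len(M_prime) - 1) for s in snaps)
\end{verbatim}
The snapshots illustrate Lemma \ref{sizebound}: the size of every intermediate matching never drops below $\min\{|{\cal M}|,|{\cal M}'|-1\}$.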
It is immediate that ${\cal M}^*$ remains a valid matching throughout the transformation process, as we pro-actively remove from it edges that share a common vertex with new edges added to it. To complete the proof of Theorem \ref{th:MCM} it remains to prove the following lemma.
\begin{lemma} \label{sizebound}
At any moment in time we have $|{\cal M}^*| \ge \min\{|{\cal M}|,|{\cal M}'|-1\}$. \end{lemma} \begin{proof}
Suppose for contradiction that the lemma does not hold, and consider the first time step $t^*$ throughout the transformation process in which $|{\cal M}^*| < \min\{|{\cal M}|,|{\cal M}'|-1\}$. Since initially $|{\cal M}^*| = |{\cal M}|$ and
as every addition of a good edge to ${\cal M}^*$ triggers at most one edge removal from it,
time step $t^*$ must occur after an addition of a bad edge.
Recall that a bad edge is added to ${\cal M}^*$ only when there are no good edges to add.
Just before this addition we have $|{\cal M}^*| \ge |{\cal M}'|$ by Lemma \ref{badedge}, thus we have $|{\cal M}^*| \ge |{\cal M}'| - 1$ after adding that edge to ${\cal M}^*$ and removing the two edges incident on it from there, yielding a contradiction.
\quad\blackslug\lower 8.5pt\null\par \end{proof}
\begin {remark} \label{re:opt}
When $|{\cal M}| < |{\cal M}'|$, it is possible to gradually transform ${\cal M}$ to ${\cal M}'$ without ever being in deficit compared to the initial value of ${\cal M}$, i.e., $|{\cal M}^*| \ge |{\cal M}|$ throughout the transformation process. However, if $|{\cal M}'| \le |{\cal M}|$, this no longer holds true; refer to App.\ \ref{tightnessun} for more details.
\end {remark}
\ignore{
as follows. The sliding window of length $W = \Theta(\epsilon \cdot |{\cal M}|)$ is halved into two smaller windows. The sets of good and bad edges in ${\cal M}'_t \setminus {\cal M}^*_t$ can be computed within time $O(|{\cal M}| + |{\cal M}'|)$ in the obvious way, and we can thus gradually simulate this static computation over the $W/2$ update steps of the first window, performing $O(\alpha \cdot \epsilon^{-3})$ computational steps following each update. Moreover, following each edge deletion from the graph, we can update the sets of good and bad edges in constant time. We may henceforth assume that the sets of good and bad edges are fully up-to-date at the start of the second window, and we continue to maintain them throughout this window. (Note also that at the start of the second window, the source matching coincides with the old matching, i.e., ${\cal M}^*_{t+W/2} = {\cal M}_{t+W/2}$, since we did not make any changes to it throughout the first window.) At every update step of the second window $i = t + W/2, t+W/2 + 1, \ldots,t'$, we add $O(\alpha \cdot \epsilon^{-3})$ edges of ${\cal M}'_i \setminus {\cal M}^*_i$ (if any) to ${\cal M}^*_i$, by giving precedence to good edges, i.e., adding bad edges to ${\cal M}^*_i$ only when there are no good edges to add. Following every such addition, we delete the edges in ${\cal M}^*_i$ incident on the newly added edge, and update the sets of good and bad edges in constant time. (Note that ${\cal M}^*_i$ is being changed during this process. Although the matching that we output as ${\cal M}^*_i$ is the one resulting at the end of this process, we may use ${\cal M}^*_i$ to refer to any of the matchings obtained during this process.)
This transformation process naturally guarantees that the number of replacements made to the output matching is bounded by $O(\alpha \cdot \epsilon^{-3})$ in the worst-case.
At the end of the process, the output matching coincides with the target matching, and then we repeat.
To conclude the argument, we argue that the output matching ${\cal M}^*_i$ is a valid $(1+O(\epsilon))$-MCM, for any $i, t \le i \le t'$.
In the first window of length $W/2$ steps the output matching coincides with the old matching ${\cal M}_i$, which is a $(1+O(\epsilon))$-MCM by Lemma \ref{lazylemma}. It is left to prove the following lemma.
\begin{lemma} ${\cal M}^*_i$ is a valid $(1+O(\epsilon))$-MCM, for any $i = t + W/2, t+W/2 + 1, \ldots,t'$. \end{lemma} \begin{proof} It is immediate that the output matching ${\cal M}^*_i$ is a valid matching, for any $i = t + W/2, t+W/2 + 1, \ldots,t'$, as we proactively delete from it edges that share a common vertex with new edges added to it.
Fix any index $i = t + W/2, t+W/2 + 1, \ldots,t'$. We next analyze the approximation guarantee of ${\cal M}^*_i$.
Suppose first that we add only good edges to the output matching between update steps $t+W/2$ and $i$.
Recalling that ${\cal M}^*_{t+W/2} = {\cal M}_{t+W/2}$, the size of the output matching cannot be smaller than that of the old matching by more than $W/2$, which upper bounds the number of edges deleted from the graph in the entire second window, and in particular until step $i$, thus we have $|{\cal M}^*_i| \ge |{\cal M}_i| - W/2$.
We henceforth assume that at least one bad edge is added to the output matching between steps $t+W/2$ and $i$, and let $j$ be the last step when such an addition occurs. Just before this addition, we have $|{\cal M}^*_j| \ge |{\cal M}'_j|$ by Lemma \ref{badedge}, thus we have
$|{\cal M}^*_j| \ge |{\cal M}'_j| - 1$ after adding that edges to ${\cal M}^*_j$ and deleting the two edges incident on it from there. At any subsequent moment in time until step $i$, only good edges are added to the output matching, hence its size cannot be smaller than that of the target matching by more than $W/2 + 1$, thus we have $|{\cal M}^*_i| \ge |{\cal M}'_i| - W/2 - 1$.
We have shown that $|{\cal M}^*_i| \ge \min\{|{\cal M}_i|,|{\cal M}'_i|\} - W/2 - 1$. By Lemma \ref{lazylemma}, both ${\cal M}_i$ and ${\cal M}'_i$ are $(1+O(\epsilon))$-MCM. It follows that $ |{\cal M}| \le (1 + O(\epsilon)) \cdot \min\{|{\cal M}_i|, |{\cal M}'_i|\}$, hence $W = \Theta(\epsilon \cdot |{\cal M}|) = O(\epsilon \cdot \min\{|{\cal M}_i|,|{\cal M}'_i|\})$, which completes the proof of the lemma. \quad\blackslug\lower 8.5pt\null\par \end{proof} } \noindent
\\ {\bf 3.2~ The fully dynamic setting.~} In this section we provide the second step in the proof of Theorem \ref{main}, showing that the simple transformation process described in Section 3.1 for static graphs can be generalized to the fully dynamic setting, thus completing the proof of Theorem \ref{main}.
Consider an arbitrary dynamic algorithm, Algorithm ${\cal A}$, for maintaining a $\beta$-MCM with an update time of $T$, for any $\beta \ge 1$ and $T$. The matching maintained by Algorithm ${\cal A}$, denoted by ${\cal M}^A_i$, for $i = 1,2,\ldots$, may change significantly following a single update step. All that is guaranteed by Algorithm ${\cal A}$ is that it can update the matching following every update step within a time bound of $T$, either in the worst-case sense or in the amortized sense, following which queries regarding the matching can be answered in (nearly) constant time.
Recall also that we assume that,
for any update step $i$,
the matching ${\cal M}^A_i$ provided by Algorithm ${\cal A}$ at step $i$ can be output within time (nearly) linear in the matching size.
Our goal is to output a matching $\tilde {\cal M} = \tilde {\cal M}_i$, for $i = 1,2,\ldots$, possibly very different from ${\cal M}^A = {\cal M}^A_i$, which changes very slightly from one update step to the next. To this end, the basic idea is to use the matching ${\cal M}^A$ provided by Algorithm ${\cal A}$ at a certain update step, and then re-use it (gradually removing from it edges that get deleted from the graph) throughout a sufficiently long window of $\Theta(\epsilon \cdot |{\cal M}^A|)$ consecutive update steps, while gradually transforming it into a larger matching, provided again by Algorithm ${\cal A}$ at some later step.
The \emph{gradual transformation process} is obtained by adapting the process described in Section 3.1 for static graphs to the fully dynamic setting. Next, we describe this adaptation.
We assume that $\beta = O(1)$;
the case of a general $\beta$ is addressed in Section 3.2.1.
Consider the beginning of a new time window, at some update step $t$. Denote the matching provided by Algorithm ${\cal A}$ at that stage by ${\cal M}' = {\cal M}^A_t$ and the matching output by our algorithm by ${\cal M} = \tilde{{\cal M}}_t$. Recall that the entire matching ${\cal M}' = {\cal M}^A_t$ can be output in time (nearly) linear in its size, and we henceforth assume that ${\cal M}'$ is given as a list of edges.
(For concreteness, we assume that the time needed for storing the edges of ${\cal M}'$ in an appropriate list is $O(|{\cal M}'|)$.) While ${\cal M}'$ is guaranteed to provide a $\beta$-MCM at any update step, including $t$, the approximation guarantee of ${\cal M}$ may be worse. Nevertheless, we will show (Lemma \ref{complete}) that ${\cal M}$ provides a
$(\beta(1+2\epsilon'))$-MCM for $G_t$. Under the assumption that $\beta = O(1)$, we thus have $|{\cal M}| = O(|{\cal M}'|)$.
The length of the time window is $W = \Theta(\epsilon \cdot |{\cal M}|)$, i.e., it starts at update step $t$ and ends at update step $t' = t + W-1$.
During this time window, we gradually transform ${\cal M}$ into (a possibly superset of) ${\cal M}'$, using the transformation described in Section 3.1 for static graphs; recall that the matching output throughout this transformation process is denoted by ${\cal M}^*$. We may assume that $|{\cal M}|, |{\cal M}'| = \Omega(1/\epsilon)$, where the constant hiding in the $\Omega$-notation is sufficiently large; indeed, otherwise $|{\cal M}| + |{\cal M}'| = O(1/\epsilon)$ and there is no need to apply the transformation process, as the trivial worst-case recourse bound is $O(1/\epsilon)$.
We will show (Lemma \ref{complete}) that the output matching $\tilde {\cal M}_i$ provides a $(\beta(1 + O(\epsilon)))$-MCM at any update step $i$. Two simple adjustments are needed for adapting the transformed matching ${\cal M}^*$ of the static setting to the fully dynamic setting: \begin{itemize} \item To achieve a low worst-case recourse bound and guarantee that the overhead in the update time (with respect to the original update time) is low in the worst-case, we cannot carry out the entire computation at once (i.e., following a single update step), but should rather \emph{simulate it gradually} over the entire time window of the transformation process. Specifically, recall that the transformation process for static graphs consists of two phases, a preprocessing phase in which the matching ${\cal M}' = {\cal M}^A_t$ and the sets ${\cal G}$ and ${\cal B}$ of good and bad edges
in ${\cal M}' \setminus {\cal M}$ are computed, and the actual transformation phase that transforms ${\cal M}^*$, which is initialized as ${\cal M}$, into (a possibly superset of) ${\cal M}'$.
Each of these phases requires time $O(|{\cal M}| + |{\cal M}'|) = O(|{\cal M}|)$. The first phase does not make any replacements to ${\cal M}^*$,
whereas the second phase consists of a sequence of at most $|{\cal M}'|$ constant-time operations, each of which may trigger a constant number of replacements to ${\cal M}^*$. The computation of the first phase is simulated
in the first $W/2$ update steps of the window, performing $O(|{\cal M}| + |{\cal M}'|) / (W/2) = O(1/\epsilon)$ computation steps and zero replacements to ${\cal M}^*$ following every update step. The computation of the second phase is simulated in the second $W/2$ update steps of the window,
performing $O(|{\cal M}| + |{\cal M}'|) / (W/2) = O(1/\epsilon)$ computation steps and replacements to ${\cal M}^*$ following every update step. \item Denote by ${\cal M}^*_i$ the matching output at the $i$th update step by the resulting gradual transformation process, which simulates $O(1/\epsilon)$ computation steps and replacements to the output matching following every update step.
While ${\cal M}^*_i$ is a valid matching for the (static) graph $G_t$ at the beginning of the time window,
some of its edges may get deleted from the graph in subsequent update steps $i= t+1, t+2, \ldots, t'$. Consequently, the matching that we shall output for graph $G_i$, denoted by $\tilde {{\cal M}}_i$, is
the one obtained from ${\cal M}^*_i$ by removing from it all edges that got deleted from the graph between steps $t$ and $i$. \end{itemize}
Once the current time window terminates, a new time window starts, and the same transformation process is repeated, with $\tilde {\cal M}_{t'}$ serving as ${\cal M}$ and ${\cal M}^A_{t'}$ serving as ${\cal M}'$. Since all time windows are handled in the same way, it suffices to analyze the output matching of the current time window,
and this analysis would carry over to the entire update sequence.
It is immediate that the output matching $\tilde {\cal M}_i$ is a valid matching for any $i = t, t+1, \ldots, t'$. Moreover, since we make sure to simulate $O(1/\epsilon)$ computation steps and replacements following every update step, the worst-case recourse bound of the resulting algorithm is bounded by $O(1/\epsilon)$ and the update time is bounded by $T + O(1/\epsilon)$, where this time bound is worst-case/amortized if the time bound $T$ of Algorithm ${\cal A}$ is worst-case/amortized.
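The following sketch (again our own simplification, with illustrative names and an explicit per-step budget in place of the $O(1/\epsilon)$ bound; it does not reproduce the $O(1)$-time data structures) shows how a single time window is simulated: the first half of the window performs only bookkeeping while the old matching is re-used, and the second half spends a fixed budget of transformation operations per update step, always filtering deleted edges out of the output:
\begin{verbatim}
# Simulating one time window of length W = len(deletions_per_step).
# M, M_prime: matchings at the start of the window (lists of edges);
# deletions_per_step[i]: set of edges deleted at the i-th step of the window;
# budget: number of transformation operations simulated per update step.
def run_window(M, M_prime, deletions_per_step, budget):
    W = len(deletions_per_step)
    deleted, outputs = set(), []
    M_star = set(M)
    for step in range(W // 2):            # first half: preprocessing only
        deleted |= deletions_per_step[step]
        outputs.append(M_star - deleted)  # output = old matching minus deletions
    todo = [e for e in M_prime if e not in M_star]
    for step in range(W // 2, W):         # second half: budgeted operations
        deleted |= deletions_per_step[step]
        for _ in range(budget):
            if not todo:
                break
            good = [e for e in todo
                    if sum(bool(set(e) & set(f)) for f in M_star) <= 1]
            e = (good or todo)[0]         # bad edges only if no good edge exists
            M_star = {f for f in M_star if not set(e) & set(f)} | {e}
            todo = [x for x in M_prime if x not in M_star]
        outputs.append(M_star - deleted)  # tilde-M_i: drop deleted edges
    return outputs
\end{verbatim}
Per update step at most a constant number of replacements is made per simulated operation, so the number of replacements per step is proportional to the budget, in line with the worst-case recourse bound discussed above.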
It is left to bound the approximation guarantee of the output matching $\tilde {\cal M}_i$.
Recall that $W = \Theta(\epsilon \cdot |{\cal M}|)$, and write $W = \epsilon' \cdot |{\cal M}|$, with $\epsilon' = \Theta(\epsilon)$. (We assume that $\epsilon$ is sufficiently small so that $\epsilon' \le 1/2$. We need this restriction on $\epsilon'$ to apply Lemma \ref{lazylemma2}.)
\begin{lemma} \label{complete} $\tilde {\cal M}_t$ and $\tilde {\cal M}_{t'}$ provide a $(\beta(1+2\epsilon'))$-MCM for $G_t$ and $G_{t'}$, respectively. Moreover, $\tilde {\cal M}_i$ provides a $(\beta((1+2\epsilon')^2))$-MCM for $G_i$, for any $i = t, t+1, \ldots,t'$. \end{lemma} \begin{proof} First, we bound the approximation guarantee of the matching $\tilde {\cal M}_{t'}$, which is obtained from ${\cal M}^*_{t'}$ by removing from it all edges that got deleted from the graph throughout the time window. By the description of the transformation process, ${\cal M}^*_{t'}$ is a superset of ${\cal M}'$, hence $\tilde {\cal M}_{t'}$ is a superset of the matching obtained from ${\cal M}'$ by removing from it all edges that got deleted throughout the time window. Since ${\cal M}'$ is a $\beta$-MCM for $G_t$, Lemma \ref{lazylemma2} implies that $\tilde {\cal M}_{t'}$ is a $(\beta(1+2\epsilon'))$-MCM for $G_{t'}$. More generally, this argument shows that the matching obtained at the end of any time window is a $(\beta(1+2\epsilon'))$-MCM for the graph at that step.
Next, we argue that the matching obtained at the start of any time window (as described above) is a $(\beta(1+2\epsilon'))$-MCM for the graph at that step. This assertion is trivially true for the first time window, where both the matching and the graph are empty. For any subsequent time window, this assertion follows from the fact that the matching at the start of a new time window is the one obtained at the end of the old time window, for which we have already shown that the required approximation guarantee holds. It follows that $\tilde {\cal M}_t = {\cal M}$ is a $(\beta(1+2\epsilon'))$-MCM for $G_t$.
Finally, we bound the approximation guarantee of the output matching $\tilde {\cal M}_i$ in the entire time window. (It suffices to consider the interior of the window.)
Lemma \ref{sizebound} implies that $|{\cal M}^*_i| \ge \min\{|{\cal M}|,|{\cal M}'|-1\}$, for any $i = t, t+1, \ldots,t'$. We argue that ${\cal M}^*_i$ is a $(\beta(1+2\epsilon'))$-MCM for $G_t$. If $|{\cal M}^*_i| \ge |{\cal M}|$, then this assertion follows from the fact that ${\cal M}$ provides such an approximation guarantee. We henceforth assume that $|{\cal M}^*_i| \ge |{\cal M}'|-1$. Recall that $|{\cal M}'| = \Omega(1/\epsilon) = \Omega(1/\epsilon')$, where the constants hiding in the $\Omega$-notation are sufficiently large, hence removing a single edge from ${\cal M}'$ cannot hurt the approximation guarantee by more than an additive term of, say, $\epsilon'$, which is less than $\beta \cdot 2\epsilon'$.
Since ${\cal M}'$ provides a $\beta$-MCM for $G_t$, it follows that ${\cal M}^*_i$ is indeed a $(\beta(1+2\epsilon'))$-MCM for $G_t$, which completes the proof of the above assertion.
Consequently, Lemma \ref{lazylemma2} implies that $\tilde {\cal M}_{i}$, which is obtained from ${\cal M}^*_i$ by removing from it all edges that got deleted from the graph between steps $t$ and $i$, is a $(\beta((1+2\epsilon')^2))$-MCM for $G_{i}$. \quad\blackslug\lower 8.5pt\null\par \end{proof}
\noindent
{\bf 3.2.1~ A general approximation guarantee.~}
In this section we consider the case of a general approximation parameter $\beta \ge 1$.
The bound on the approximation guarantee of the output matching provided by Lemma \ref{complete}, namely $(\beta((1+2\epsilon')^2))$, remains unchanged. Recalling that $\epsilon' \le 1/2$, it follows that the size of ${\cal M}'$ cannot be larger than that of ${\cal M}$ by more than a factor of $(\beta((1+2\epsilon')^2)) \le 2\beta$. Consequently, the number of computation steps and replacements performed per update step, namely, $O(|{\cal M}| + |{\cal M}'|) / (W/2)$, is no longer bounded by $O(1/\epsilon)$, but rather by $O(\beta/\epsilon)$. To achieve a bound of $O(1/\epsilon)$ for a general $\beta$, we shall use a matching ${\cal M}''$ different from ${\cal M}'$, which includes a possibly small fraction of the edges of ${\cal M}'$. Recall that we can output $\ell$ arbitrary edges of the matching ${\cal M}' = {\cal M}^A_t$ in time (nearly) linear in $\ell$, for any integer $\ell = 1,2,\ldots,|{\cal M}'|$. Let ${\cal M}''$ be a matching that consists of (up to) $2|{\cal M}|$ arbitrary edges of ${\cal M}'$; that is, if $|{\cal M}'| > 2 |{\cal M}|$, ${\cal M}''$ consists of $2|{\cal M}|$ arbitrary edges of ${\cal M}'$, otherwise ${\cal M}'' = {\cal M}'$. We argue that ${\cal M}''$ is a $\beta$-MCM for $G_t$. Indeed, if $|{\cal M}'| > 2 |{\cal M}|$ the approximation guarantee follows from the approximation guarantee of ${\cal M}$ and the fact that ${\cal M}''$ is twice as large as ${\cal M}$, whereas in the complementary case the approximation guarantee follows from that of ${\cal M}'$.
In any case it is immediate that $|{\cal M}''| = O(|{\cal M}|)$.
(For concreteness, we assume that the time needed for storing the edges of ${\cal M}''$ in an appropriate list is $O(|{\cal M}''|) = O(|{\cal M}|)$.)
We may henceforth carry out the entire transformation process with ${\cal M}''$ taking the role of ${\cal M}'$, and in this way guarantee that the number of computation steps and replacements to the output matching performed per update step is reduced from $O(\beta/\epsilon)$ to $O(1/\epsilon)$.
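A minimal sketch of this capping step (our own helper name, assuming ${\cal M}'$ is given as a list of edges):
\begin{verbatim}
# Cap the target matching at 2|M| edges, so that |M''| = O(|M|) and the
# per-step work in the window drops from O(beta/eps) to O(1/eps).
def capped_target(M, M_prime):
    cap = 2 * len(M)
    return M_prime if len(M_prime) <= cap else M_prime[:cap]
\end{verbatim}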
\noindent
\\ {\bf 3.3~ Proof of Theorem~\ref{main2}.~}
The proof of Theorem~\ref{main2} is very similar to that of Theorem~\ref{main}. Specifically, we derive Theorem~\ref{main2} by making a couple of simple adjustments to the proof of Theorem~\ref{main} given above, which we sketch next. First, instead of using the transformation of Theorem~\ref{th:MCM}, we use the one of Theorem~\ref{th:main}, whose proof appears in Section \ref{sec:wema}. Second, the \emph{stability} property of unweighted matchings used in the proof of Theorem~\ref{main} is that the maximum matching size changes by at most 1 following each update step. This stability property enables us in the proof of Theorem \ref{main} to consider a time window of $W = \Theta(\epsilon \cdot |{\cal M}|)$ update steps, so that any $\beta$-MCM computed at the beginning of the window will provide (after removing from it all the edges that get deleted from the graph) a $(\beta(1+\epsilon))$-MCM throughout the entire window, for any $\beta \ge 1$. It is easy to see that this stability property generalizes to weighted matchings, where the maximum matching weight may change by an additive term of at most $\psi$ following each update step. (Recall that the aspect ratio of the dynamic graph is always bounded by $\psi$.)
In order to obtain a $(\beta(1+\epsilon))$-MWM throughout the entire time window, it suffices to consider a time window of $W' = W'_\psi = W / \psi = \Theta(\epsilon \cdot |{\cal M}| / \psi)$, i.e.,
a time window shorter than that used for unweighted matchings by a factor of $\psi$, and as a result the update time of the resulting algorithm will grow from $T + O(1/\epsilon)$ to $T + O(\psi/\epsilon)$ and the worst-case recourse bound will grow from $O(1/\epsilon)$ to $O(\psi/\epsilon)$.
\ignore{
Let $t$ and $t' = t + W$ denote the update steps in which this transformation process starts and ends, respectively, where $W = \Theta(\epsilon \cdot |{\cal M}^*_t|)$. The matching ${\cal M}^*_{i}$ that we output during this time window, for $i = t, t+1, \ldots, t'$, hereafter the \emph{output matching}, is gradually transformed into ${\cal M}^A_{i}$, hereafter the \emph{target matching}, until coinciding with it at (or before) step $t'$, i.e., ${\cal M}^*_{t'} = {\cal M}^A_{t'}$. The target matching ${\cal M}^A_i$, for a general $i = t, t+1, \ldots, t'$, ${\cal M}^A_i$ is obtained from ${\cal M}^A_t$ by removing from it all edges that got deleted from the graph between steps $t$ and $i$. The output matching ${\cal M}^*_t$ at the beginning of the new window is also the output matching at the end of the old window; the approximation guarantee of the matching output at the end of a time window coincides with the target matching of the beginning of that window, hence its approximation guarantee is no greater than $\beta + 4\epsilon$ by Lemma \ref{lazylemma}.
}
\section{Optimality of the Recourse Bound} \label{recourse} In this section we show that an approximation guarantee of $(1+\epsilon)$ requires a recourse bound of $\Omega(1/\epsilon)$, even in the amortized sense and even in the incremental (insertion only) and decremental (deletion only) settings. We only consider edge updates, but the argument extends seamlessly to vertex updates.
This lower bound of $\Omega(1/\epsilon)$ on the recourse bound does not depend on the update time of the algorithm in any way. Let us fix $\epsilon$ to be any parameter satisfying $\epsilon = \Omega(1/n), \epsilon \ll 1$, where $n$ is the (fixed) number of vertices.
Consider a simple path $P_\ell = (v_1,v_2,\ldots,v_{2\ell})$ of length $2\ell-1$, for an integer $\ell = c(1/\epsilon)$ such that $\ell \ge 1$ and $c$ is a sufficiently small constant. (Thus $P_\ell$ spans at least two but no more than $n$ vertices.) There is a single maximum matching ${\cal M}^{OPT}_\ell$ for $P_\ell$, of size $\ell$, which is also the only $(1+\epsilon)$-MCM for $P_\ell$. After adding the two edges $(v_0,v_1)$ and $(v_{2\ell},v_{2\ell+1})$ to $P_\ell$, the maximum matching ${\cal M}^{OPT}_\ell$ for the old path $P_\ell$ does not provide a $(1+\epsilon)$-MCM for the new path, $(v_0,v_1,\ldots,v_{2\ell+1})$, which we may rewrite as $P_{\ell + 1} = (v_1,v_2,\ldots, v_{2(\ell+1)})$. The only way to restore a $(1+\epsilon)$-approximation guarantee is by removing all $\ell$ edges of ${\cal M}^{OPT}_\ell$ and adding the remaining $\ell + 1$ edges instead, which yields ${\cal M}^{OPT}_{\ell+1}$. One may carry out this argument repeatedly until the length of the path reaches, say, $4\ell -1$.
The amortized number of replacements to the matching per update step throughout this process is $\Omega(1/\epsilon)$. Moreover, the same amortized bound, up to a small constant factor, holds if we start from an empty path instead of a path of length $2\ell-1$. We then delete all $4\ell - 1$ edges of the final path and start again from scratch, which may reduce the amortized bound by another small constant. In this way we get an amortized recourse bound of $\Omega(1/\epsilon)$ for the fully dynamic setting.
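To make the constant concrete, a back-of-the-envelope calculation along the lines of the argument above: growing the path from $P_\ell$ to $P_{2\ell}$ (i.e., from length $2\ell-1$ to $4\ell-1$) takes $2\ell$ edge insertions, and the extension from $P_j$ to $P_{j+1}$ forces $j + (j+1) = 2j+1$ replacements, hence the amortized number of replacements per insertion is
$$\frac{1}{2\ell}\sum_{j=\ell}^{2\ell-1}(2j+1) \;=\; \frac{3\ell^2}{2\ell} \;=\; \frac{3\ell}{2} \;=\; \Omega(1/\epsilon).$$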
To adapt this lower bound to the incremental setting, we construct $n' = \Theta(\epsilon \cdot n)$ vertex-disjoint copies $P^1, P^2,\ldots,P^{n'}$ of the aforementioned incremental path, one after another, in the following way. Consider the $i$th copy $P^i$, from the moment its length becomes $2\ell - 1$ and until it reaches $4\ell - 1$. If at any moment during this gradual construction of $P^i$, the matching restricted to $P^i$ is not the (only) maximum matching for $P^i$, we \emph{halt} the construction of $P^i$ and move on to constructing the $(i+1)$th copy $P^{i+1}$, and then subsequent copies, in the same way. A copy whose construction started but was halted is called \emph{incomplete}; otherwise it is \emph{complete}. (There are also \emph{empty} copies, whose construction has not started yet.) For any incomplete copy $P^j$, the matching restricted to it
is not the maximum matching for $P^j$, hence its approximation guarantee is worse than $1+\epsilon$; more precisely, the approximation guarantee provided by any matching other than the maximum matching for $P^j$ is at least $1 + c' \cdot \epsilon$, for a constant $c'$ that can be made as large as we want by decreasing the aforementioned constant $c$, or equivalently, $\ell$. (Recall that $\ell = c(1/\epsilon)$.)
If the matching restricted to $P^j$ is changed to the maximum matching for $P^j$ at some later moment in time, we return to that incomplete copy and resume its construction from where we left off, thereby \emph{temporarily suspending} the construction of some other copy $P^{j'}$. The construction of $P^j$ may get halted again, in which case we return to handling the temporarily suspended copy $P^{j'}$, otherwise we return to handling $P^{j'}$ only after the construction of $P^j$ is complete, and so forth. In this way we maintain the invariant that the approximation guarantee of the matching restricted to any incomplete copy (whose construction is not temporarily suspended) is at least $1 + c' \cdot \epsilon$, for a sufficiently large constant $c'$. While incomplete copies may get completed later on, a complete copy remains complete throughout the entire update sequence. At the end of the update sequence no copy is empty or temporarily suspended, i.e., any copy at the end of the update sequence is either incomplete or complete.
The above argument implies that any complete copy has an amortized recourse bound of $\Omega(1/\epsilon)$, over the update steps restricted to that copy.
Observe also that at least a constant fraction of the $n'$ copies must be complete at the end of the update sequence, otherwise the entire matching cannot provide a $(1+\epsilon)$-MCM for the entire graph, i.e., the graph obtained from the union of these $n'$ copies. It follows that the amortized recourse bound over the entire update sequence is $\Omega(1/\epsilon)$.
The lower bound for the incremental setting can be extended to the decremental setting using a symmetric argument to the one given above.
\appendix \centerline{\LARGE\bf Appendix}
\section{Scenarios with high recourse bounds} \label{12} We briefly discuss some scenarios where high recourse bounds may naturally arise. In all such scenarios our reductions (Theorems \ref{main} and \ref{main2}) can come into play to achieve low worst-case recourse bounds; for clarity we focus in this discussion, sometimes implicitly, on large (unweighted) matching, but the entire discussion carries over with very minor changes to the generalized setting of weighted matchings.
Appendix A.3 demonstrates that, although we \emph{may not care at all about recourse bounds}, maintaining a large (weight) matching with a low update time requires in some cases the use of a dynamic matching algorithm with a low recourse bound; this is another situation where our reductions can come into play, but more than that, we believe that it provides an additional strong motivation for our reductions.
\noindent
\\
{\bf A.1~ Randomized algorithms} \label{121} \noindent
\\ {\bf Multiple matchings.~} Given a randomized algorithm for maintaining a large matching in a dynamic graph, it may be advantageous to run multiple instances of the algorithm (say $\mbox{polylog}(n)$), since this may increase the chances that at least one of those instances provides a large matching with high probability (w.h.p.) at any point in time. Notice, however, that it is not the same matching that is guaranteed to be large throughout the entire update sequence, hence the ultimate algorithm (or data structure), which outputs the largest among the $\mbox{polylog}(n)$ matchings, may need to switch between a pool of possibly very different matchings when going from one update step to the next. Thus even if the recourse bound of the given randomized algorithm is low, and so each of the maintained matchings changes gradually over time, we do not get any nontrivial recourse bound for the ultimate algorithm.
\noindent
\\ {\bf Large matchings.~} Sometimes the approximation guarantee of the given randomized algorithm holds w.h.p.\ only when the matching is sufficiently large. This is the case with the algorithm of \cite{CS18} that achieves $\mbox{polylog}(n)$ worst-case update time, where the approximation guarantee of $2+\epsilon$ holds w.h.p.\ only when the size of the matching is $\Omega(\log^5 n / \epsilon^4)$. To perform efficiently, \cite{CS18} also maintains a matching that is guaranteed to be maximal (and thus provide a 2-MCM) when the maximum matching size is smaller than $\delta = O(\log^5 n / \epsilon^4)$, via a deterministic procedure with a worst-case update time of $O(\delta)$.
The ultimate algorithm of \cite{CS18} switches between the matching given by the randomized algorithm and that by the deterministic procedure, taking the larger of the two.
Thus even if the recourse bounds of both the randomized algorithm and the deterministic procedure are low, the worst-case recourse bound of the ultimate algorithm, which might be of the order of the ``large matching'' threshold, could be very high. (The large matching threshold is the threshold on the matching size above which a high probability bound on the approximation guarantee holds.)
In \cite{CS18} the large matching threshold is $\delta = O(\log^5 n / \epsilon^4)$, so the recourse bound is reasonably low.
(This is not the bottleneck for the recourse bound of \cite{CS18}, as discussed next.) In general, however, the large matching threshold may be significantly higher than $\mbox{polylog}(n)$.
\noindent
\\ {\bf Long update sequences.~} For the probabilistic guarantees of a randomized dynamic algorithm to hold w.h.p., the update sequence must be of bounded length. In particular, polylogarithmic guarantees on the update time usually require the length of the update sequence to be polynomially bounded. This is the case with numerous dynamic graph algorithms also outside the scope of graph matchings (cf.\ \cite{KKM13,ADKKP16}), and the basic idea is to partition the update sequence into sub-sequences of polynomial length each and to run a fresh instance of the dynamic algorithm
in each sub-sequence. In the context of matchings, the algorithm of \cite{CS18} uses this approach.
Notice, however, that an arbitrary sub-sequence (other than the first) does not start from an empty graph. Hence, for the ultimate algorithm of \cite{CS18} to provide a low worst-case update time, it has to gradually construct the graph at the beginning of each sub-sequence from scratch and maintain for it a new gradually growing matching, while re-using the ``old'' matching used for the previous sub-sequence throughout this gradual process. Once the gradually constructed graph coincides with the true graph, the ultimate algorithm switches from the old matching to the new one. (See \cite{CS18} for further details.) While this approach guarantees that the worst-case update time of the algorithm is in check, it does not provide any nontrivial worst-case recourse bound. \noindent
\\ {\bf A.2~~ From amortized to worst-case.~} There are techniques for transforming algorithms with low amortized bounds into algorithms with similar worst-case bounds. For approximate matchings, such a technique was first presented in \cite{GP13}. Alas, the transformed algorithms do not achieve any nontrivial worst-case recourse bound;
see Section \ref{changes} for details.
\noindent
\\ {\bf A.3~ When low update time requires low recourse bound.~} \label{123} When a dynamic matching algorithm is used as a black-box subroutine inside a larger data structure or algorithm, a low recourse bound of the algorithm used as a subroutine is needed for achieving a low update time for the larger algorithm. We next consider a natural question motivating this need; one may refer to \cite{BS16,ADKKP16} for additional motivation.
\begin{Question} \label{q} Given $k$ dynamic matchings of a dynamic graph $G$, whose union is guaranteed to contain a large matching for $G$ at any time, for an arbitrary parameter $k$, can we combine those $k$ matchings into a large dynamic matching for $G$ \emph{efficiently}? \end{Question}
This question may arise when there are physical limitations, such as memory constraints, e.g., as captured by MapReduce-style computation, where the edges of the graph are partitioned into $k$ parties.
More specifically, consider a fully dynamic graph $G$ of huge scale, for which we want to maintain a large matching with low update time.
The edges of the graph are dynamically partitioned into $k$ parties due to memory constraints, each capable of maintaining a large matching for the graph induced by its own edges with low update time, and the only guarantee on those $k$ dynamically changing matchings is the following global one: The \emph{union} of the $k$ matchings at any point in time \emph{contains} a large matching for the entire dynamic graph $G$.
(E.g., if we maintain at each update step the invariant that the edges of $G$ are partitioned across the $k$ parties uniformly at random, such a global guarantee can be provided via the framework of \emph{composable randomized coresets} \cite{MZ15,AK17,ABBMS17}.)
This question may also arise when the input data set is noisy.
Coping with noisy input usually requires \emph{randomization}, which may lead to high recourse bounds as discussed in Appendix A.1. Let us revisit the scenario where we run multiple instances of a randomized dynamic algorithm with low update time; denote the number of such instances by $k$. If the input is noisy, we may not be able to guarantee that at least one of the $k$ maintained matchings is large w.h.p.\ at any point in time, as suggested in Appendix A.1. A weaker, more reasonable assumption is that the {union} of those $k$ matchings contains a large matching.
The key observation is that it is insufficient to maintain each of the $k$ matchings with low update time, even in the worst-case, as each such matching may change significantly following a single update step, thereby changing significantly the union of those matchings. ``Feeding'' this union to \emph{any} dynamic matching algorithm would result in poor update time bounds, even in the amortized sense. Consequently, to resolve Question \ref{q}, each of the $k$ maintained matchings must change \emph{gradually} over time, or in other words, the underlying algorithm(s) needed for maintaining those matchings should guarantee a low recourse bound. A low amortized/worst-case recourse bound of the underlying algorithm(s) translates into a low amortized/worst-case update time of the ultimate algorithm, provided of course that the underlying algorithm(s) for maintaining those $k$ matchings, as well as the dynamic matching algorithm to which their union is fed, all achieve a low amortized/worst-case update time.
\ignore{ There are two main open questions in this area. The first is whether one can maintain a \emph{better-than-2} approximate matching in \emph{amortized} polylogarithmic update time. The second is the following: \begin{question} Can one maintain a ``good'' (close to 2) approximate matching and/or vertex cover with \emph{worst-case} polylogarithmic update time? \end{question}
In a recent breakthrough, Bhattacharya, Henzinger and Nanongkai devised a deterministic algorithm that maintains a \emph{constant} approximation to the minimum vertex cover, and thus also a constant-factor estimate of the maximum matching size, with polylogarithmic worst-case update time. While this result makes significant progress towards Question 1, this fundamental question remained open.\footnote{Later (in SODA'17 Proc.\ \cite{BHN17}) Bhattacharya et al.\ significantly improved the approximation factor all the way to $2+\epsilon$; moreover, it seems that their result can be used to maintain a $(3+\epsilon)$-MCM, as sketched in App.\ \ref{3+eps}. However, our result was done independently to \cite{BHN17}. Moreover, even if one considers the improved result of \cite{BHN17}, it solves Question 1 in the affirmative only for vertex cover, leaving the question on matching open; in particular, no algorithm for maintaining a matching with sub-polynomial worst-case update time was known for any approximation guarantee better than $3+\epsilon$.} In particular, no algorithm for maintaining a matching with sub-polynomial worst-case update time was known, even if a polylogarithmic approximation guarantee on the matching size is allowed!
In this paper we devise a randomized algorithm that maintains an \emph{almost-maximal matching (AMM)} with a polylogarithmic update time. We say that a matching for $G$ is \emph{almost-maximal} w.r.t.\ some slack parameter $\epsilon$, or \emph{$(1-\epsilon)$-maximal} in short, if it is maximal w.r.t.\ any graph obtained from $G$ after removing $\epsilon \cdot |{\cal M}^*|$ arbitrary vertices, where ${\cal M}^*$ is a maximum matching for $G$. Just as a maximal matching provides a 2-approximation for the maximum matching and minimum vertex cover, an AMM provides a $(2+\epsilon)$-approximation. We show that for any $\epsilon > 0$,
one can maintain an AMM with worst-case update time $O(\poly(\log n,\epsilon^{-1}))$, where the $(1-\epsilon)$-maximality guarantee holds w.h.p. Specifically, our update time is $O(\max\{\log^7 n /\epsilon,\log^5 n / \epsilon^4\})$; although reducing this upper bound towards constant is an important goal, this goal lies outside the scope of the current paper (see Section \ref{discuss} for some details).
Our result resolves Question 1 in the affirmative, up to the $\epsilon$ dependency. In particular, under the unique games conjecture, it is essentially the best result possible for the dynamic vertex cover problem.\footnote{Since our result (which started to circulate in Nov.\ 2016) was done independently of the $(2+\epsilon)$-approximate vertex cover result of \cite{BHN17}, it provides the first $(2+\epsilon)$-approximation
for both vertex cover (together with \cite{BHN17}) and (integral) matching.}
On the way to this result, we devise a \emph{deterministic} algorithm that maintains an almost-maximal matching with a polylogarithmic update time in a natural \emph{offline model} that is described next. This deterministic algorithm is of independent interest, as it is likely to be applicable in various real-life scenarios. Our randomized algorithm for the oblivious adversarial model is derived from this deterministic algorithm in the offline model, and this approach is likely to be useful in other dynamic graph problems. }
\section{Optimality of our Transformations} \label{tightness}
\subsection{Unweighted matchings} \label{tightnessun}
In the unweighted case, when $|{\cal M}| < |{\cal M}'|$, Theorem \ref{th:MCM} states that ${\cal M}$ can gradually transform into ${\cal M}'$ without ever being in deficit compared to the initial value of ${\cal M}$, i.e., $|{\cal M}^*| \ge |{\cal M}|$ throughout the entire transformation process. If $|{\cal M}'| \le |{\cal M}|$, however, this no longer holds; in this case the theorem states that we'll reach a deficit of at most 1 unit. To see that this bound is tight,
consider the case when $|{\cal M}| = |{\cal M}'|$ and $H = {\cal M} \oplus {\cal M}'$ is a simple alternating cycle that consists of all edges in ${\cal M}$ and ${\cal M}'$, and thus of length $2|{\cal M}|$. Throughout any transformation process and until handling the last edge of the cycle,
it must be that $|{\cal M}^*| < |{\cal M}|$ if $\Delta < 2|{\cal M}|$.
\\ \noindent{\bf Remark.} In fact, the same situation will occur if $\Delta = 2$. In the particular case of $\Delta = 1$, we'll be in deficit of up to 2 throughout the process---adding the first edge of ${\cal M}'$ requires us to delete its two incident edges in ${\cal M}$, which already leads to a deficit of 2 units.
\subsection{Weighted matchings} \label{tightnesswe} In the weighted case, quantifying this deficit throughout the process is more subtle, but the worst-case scenario remains essentially the same:
$|{\cal M}|= |{\cal M}'|$, all edges have weight $W$ and $H = {\cal M} \oplus {\cal M}'$ is a simple alternating cycle that consists of all edges in ${\cal M}$ and ${\cal M}'$. Throughout any transformation process and until handling the last edge of the cycle,
it must be that $w({\cal M}^*) \le w({\cal M}) - W$ if $\Delta < 2|{\cal M}|$. In general, the deficit to the weight of the matching is inverse linear in $\Delta$, hence taking $\Delta$ to be $\Theta(1/\epsilon)$ ensures that the weight of the matching throughout the process never goes below $(1-\epsilon)w({\cal M})$.
Interestingly, here a similar situation occurs also when $w({\cal M}') > w({\cal M})$. Specifically, consider the same example as above but add $\eta$ to each edge weight of ${\cal M}'$, i.e.,
assume that the edge weights of ${\cal M}$ and ${\cal M}'$ are now $W$ and $W' = W + \eta$, respectively.
Then
the deficit is no longer $W$ as before (for $1 < \Delta < 2|{\cal M}|$),
but rather $W - \lfloor \Delta/2 \rfloor \cdot \eta$ (for $1 < \Delta < 2|{\cal M}|$). Indeed, adding the first edge of ${\cal M}'$ requires the deletion of its two incident edges in ${\cal M}$, at which stage the deficit is $W-\eta$; from that moment onwards, a single edge of ${\cal M}$ is deleted so that another edge of ${\cal M}'$ can be added, which reduces $\eta$ from the deficit each time.
Therefore, if $\Delta \cdot \eta = W$, the deficit is always at least $W/2$, while $w({\cal M}') > w({\cal M}) + W/2$.
This scenario shows that the bound of Theorem \ref{th:main} is asymptotically tight.
\\ \noindent {\bf Remark.} If $\Delta = 1$, we'll be in deficit of $2W$ (rather than $W$) throughout the process, similarly to the unweighted case. In the degenerate case that ${\cal M}$ and ${\cal M}'$ consist of a single edge each and $\Delta = 1$, the weight after the first edge deletion reduces to 0. \section{Prior work on Dynamic Matching Algorithms} \label{111} In this appendix we provide a concise literature survey on dynamic approximate matching algorithms. (For a more detailed account, see \cite{OR10,BGS11,PS16, Sol16,BHN17,GLSSS19}, and the references therein.)
There is a large body of work on algorithms for maintaining large matchings with low \emph{amortized} update time,
but none of the papers in this context provides a low \emph{worst-case} recourse bound. For example, in FOCS'14 Bosek et al.\ \cite{BLSZ14} showed that one can maintain a $(1+\epsilon)$-MCM in the incremental vertex update setting with a total of $O(m / \epsilon)$ time and $O(n / \epsilon)$ matching replacements, where $m$ and $n$ denote the number of edges and vertices of the final graph, respectively; moreover, in SODA'19 \cite{GLSSS19} this result was extended to the incremental edge update setting, achieving a constant (depending exponentially on $\epsilon$) amortized update time. While the algorithms of \cite{BLSZ14,GLSSS19} yield constant (depending on $\epsilon$) amortized recourse bounds, no nontrivial (i.e., $o(n)$) worst-case recourse bound is known for the incremental vertex or edge update settings.
As another example, in STOC'16 Bhattacharya et al.\ \cite{BHN16} presented an algorithm for maintaining a $(2+\epsilon)$-MCM in the fully dynamic edge update setting with an amortized update time $\poly(\log n, 1/\epsilon)$. While the amortized recourse bound of the algorithm of \cite{BHN16} is dominated by its amortized update time, $\poly(\log n, 1/\epsilon)$, no algorithm for maintaining $(2+\epsilon)$-MCM with similar amortized update time and nontrivial worst-case recourse bound is known.
We next focus on algorithms with low \emph{worst-case} update time.
In STOC'13 \cite{NS13} Neiman and Solomon presented an algorithm for maintaining a maximal matching with a worst-case update time $O(\sqrt{m})$, where $m$ is the dynamic number of edges in the graph. A maximal matching provides a 2-MCM. \cite{NS13} also provides a 3/2-MCM algorithm with the same update time. The algorithms of \cite{NS13} provide a constant worst-case recourse bound. Remarkably, all other dynamic matching algorithms for general and bipartite graphs (described next) do not provide any nontrivial worst-case recourse bound.
In FOCS'13, Gupta and Peng \cite{GP13} presented a scheme for maintaining approximate maximum matchings in fully dynamic graphs, yielding algorithms for maintaining $(1+\epsilon)$-MCM with worst-case update times of $O(\sqrt{m} / \epsilon^2)$ and $O(\Delta / \epsilon^2)$ in general graphs and in graphs with degree bounded by $\Delta$, respectively. The scheme of \cite{GP13} was refined in SODA'16 by Peleg and Solomon \cite{PS16} to provide a worst-case update time of $O(\alpha / \epsilon^2)$ for graphs with {arboricity} bounded by $\alpha$. (A graph $G=(V,E)$ has \emph{arboricity} $\alpha$ if $\alpha=\max_{U\subseteq V}\left\lceil\frac{|E(U)|}{|U|-1}\right\rceil$, where $E(U)=\left\{(u,v)\in E\mid u,v\in U\right\}$.) Since the arboricity of any $m$-edge graph is $O(\sqrt{m})$ and at most twice the maximum degree $\Delta$, the result of \cite{PS16} generalizes \cite{GP13}. In ICALP'15 Bernstein and Stein \cite{BS15} gave an algorithm for maintaining a $(3/2+\epsilon)$-MCM for bipartite graphs with a worst-case update time of $O(m^{1/4}/\epsilon^{2.5})$; to achieve this result, they employ the algorithm of \cite{GP13} for graphs with degree bounded by $\Delta$, for $\Delta = O(m^{1/4} \sqrt{\epsilon})$.
The algorithms of \cite{NS13,GP13,BS15,PS16} are deterministic. Charikar and Solomon \cite{CS18} and Arar et al.\ \cite{ACCSW18} independently presented randomized algorithms for maintaining a $(2+\epsilon)$-MCM with a worst-case update time of $\poly(\log n, 1/\epsilon)$, assuming an oblivious adversary. Recently Wajc \cite{DBLP:journals/corr/abs-1911-05545} strengthened the results of \cite{CS18,ACCSW18} by achieving similar bounds against an adaptive adversary.
The algorithms of \cite{ACCSW18,DBLP:journals/corr/abs-1911-05545} employ the algorithm of \cite{GP13} for graphs with degree bounded by $\Delta$, for $\Delta = O(\log n / \epsilon^2)$.
The drawback of the scheme of \cite{GP13} is that the worst-case recourse bounds of algorithms that follow it may be linear in the matching size.
The algorithms of \cite{GP13,BS15,PS16,ACCSW18,DBLP:journals/corr/abs-1911-05545} either follow the scheme of \cite{GP13} or employ one of the aforementioned algorithms provided in \cite{GP13} as a black-box, hence they all suffer from the same drawback. Although the algorithm of \cite{CS18} does not use \cite{GP13} in any way, it also suffers from this drawback, as discussed in Appendix \ref{121}.
\section{Proof of Theorem~\ref{th:main}} \label{sec:wema} The setup is as follows. Let ${\cal M}$ and ${\cal M}'$ be two matchings for the same weighted graph $G = (V,E,w)$. We denote $$w({\cal M}) = \sum_{e\in {\cal M}} w(e),$$ and assume in what follows that ${\cal M}'$ is an improvement over ${\cal M}$, i.e., that $w({\cal M}') > w({\cal M})$.\footnote{The assumption that $w({\cal M}') > w({\cal M})$ is merely for simplicity of presentation, since in case $w({\cal M}') \le w({\cal M})$, one can gradually transform ${\cal M}'$ into ${\cal M}$, and finally reverse the transformation; the weight throughout the process would be at least $\max\{w({\cal M}')-W',(1-\epsilon)w({\cal M}')\}$ with $W' = \max_{e \in {\cal M}'} w(e)$.} Our goal is to gradually transform ${\cal M}$ into (a possibly superset of) ${\cal M}'$ via a sequence of constant-time operations to be described next, such that the matching obtained at any point throughout this transformation process is a valid matching for $G$ of weight at least $w({\cal M})-W$, where $W = \max_{e \in {\cal M}} w(e)$, and also at least $(1-{\varepsilon})w({\cal M})$. It is technically convenient to denote by ${\cal M}^*$ the transformed matching, which is initialized as ${\cal M}$ at the outset, and is gradually transformed into ${\cal M}'$; we refer to ${\cal M}$ and ${\cal M}'$ as the source and target matchings, respectively.
We achieve this goal in two steps. In the first step (Theorem \ref{th:app}) we show that the weight of the transformed matching never goes below $w({\cal M})-W$, and in the second step (Theorem \ref{th:main}) we show that the weight never goes below $\max\{w({\cal M})-W,(1-{\varepsilon})w({\cal M})\}$.
Though the proof of Theorem~\ref{th:app} is technically involved, the idea behind it is quite intuitive, and we believe it is instructive to give a high-level overview before getting into the technical details of the proof. The first observation is that ${\cal M} \cup {\cal M}'$ is a relatively easy-to-analyze graph. Specifically, it is a union of vertex-disjoint {\em ${\cal M}-{\cal M}'$ alternating} simple paths and cycles, except for isolated vertices that we may ignore;
a path in $G$ is called {\em ${\cal M}-{\cal M}'$ alternating} if it consists of an edge in ${\cal M}$ followed by an edge in ${\cal M}'$ and so on, or vice versa. As $w({\cal M}')$ is assumed to be greater than $w({\cal M})$, we can then efficiently find an ``improving path'' $\Pi$ in ${\cal M} \cup {\cal M}'$, in the sense that the total weight of the edges in ${\cal M}' \cap \Pi$ is greater than the total weight of the edges in ${\cal M} \cap \Pi$. We then show that one can efficiently find a ``minimum vertex'' on $\Pi$ with the following property. We can walk in one direction from that minimum vertex in a cyclic order (even if $\Pi$ is not a cycle) along $\Pi$, deleting the edges of ${\cal M}$ along $\Pi$ and adding the edges of ${\cal M}'$ along $\Pi$ essentially ``one-by-one'' (the formal details appear below).
This will only increase the weight of ${\cal M}^*$, except for a possible \emph{small} loss that we refer to as the \emph{deficit}, which is bounded above by $W = \max_{e \in {\cal M}} w(e)$. We are now ready to state and prove Theorem~\ref{th:app}.
\begin {theorem} \label{th:app}
One can gradually transform ${\cal M}$ into (a possibly superset of) ${\cal M}'$ via a sequence of phases, each running in constant time (and thus making at most a constant number of changes to the matching), such that the matching obtained at the end of each phase throughout this transformation process is a valid matching for $G$ of weight at least $w({\cal M})-W$, where $W = \max_{e \in {\cal M}} w(e)$. The runtime of this transformation procedure is $O(|{\cal M}| + |{\cal M}'|)$. \end {theorem}
\begin {proof} Recall that the transformed matching, denoted by ${\cal M}^*$, is initialized to the source matching ${\cal M}$. An edge in ${\cal M}' \setminus {\cal M}^*$ is \emph{good} if the sum of the weights of the edges in ${\cal M}^*$ that are adjacent to it is no greater than its own weight; otherwise it is \emph{bad}. (These notions generalize the respective notions from Section \ref{proofmain} for unweighted matchings.)
Handling good edges is easy: For any good edge $e$, move it from ${\cal M}' \setminus {\cal M}^*$ to ${\cal M}^*$, and then delete from ${\cal M}^*$ the at most two edges adjacent to $e$; thus the weight of ${\cal M}^*$ cannot decrease as a result of handling a good edge. Since every edge in ${\cal M}$ is deleted at most once from ${\cal M}^*$, and every edge in ${\cal M}'$ is added at most once to ${\cal M}^*$, the total time of handling the good edges throughout the algorithm is $O(|{\cal M}|+ |{\cal M}'|) = O(n)$.
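As an illustration of the weighted notion (our own helper names; a sketch rather than the pointer-based implementation), the good-edge test and the handling of a good edge can be written as follows, with edge weights given by a dictionary \texttt{w}:
\begin{verbatim}
# Weighted good-edge test and the operation that handles a good edge.
def adjacent_edges(e, M_star):
    return [f for f in M_star if set(e) & set(f)]

def is_good(e, M_star, w):
    # e in M' \ M* is good iff its weight is at least the total weight
    # of the (at most two) edges of M* adjacent to it
    return w[e] >= sum(w[f] for f in adjacent_edges(e, M_star))

def handle_good_edge(e, M_star, w):
    assert is_good(e, M_star, w)
    for f in adjacent_edges(e, M_star):
        M_star.discard(f)      # total weight removed is at most w[e] ...
    M_star.add(e)              # ... and w[e] is added back, so w(M*) cannot drop
\end{verbatim}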
Next we describe an algorithm that proves Theorem \ref{th:app}. During the execution of this algorithm, some edges are moved from ${\cal M}' \setminus {\cal M}^*$ to ${\cal M}^*$, which triggers the removal of edges from ${\cal M}^*$, and as a result some bad edges become good. Similarly to the treatment of Section \ref{proofmain} for the unweighted case, we can use a data structure for maintaining the good edges in ${\cal M}' \setminus {\cal M}^*$, so that testing whether there is a good edge in ${\cal M}' \setminus {\cal M}^*$ and returning an arbitrary one if one exists can be done in $O(1)$ time.
Just as in the unweighted case, here too we always try to handle good edges as described above, as long as one exists. The difference between the unweighted case and the weighted one is in how bad edges are handled: In the unweighted case bad edges are handled greedily (in an obvious way), whereas the weighted case calls for a much more intricate treatment, as described next.
We believe it is instructive to refer to the edges of ${\cal M}'$ as \emph{red edges} and to those of ${\cal M}$ as \emph{blue edges}, and our transformation will delete blue edges from ${\cal M}^*$ (recall that ${\cal M}^*$ is initialized to ${\cal M}$) and copy red edges from ${\cal M}'$ to ${\cal M}^*$ so that the invariant in Theorem~\ref{th:app} is always maintained.
We denote the symmetric difference of two sets $A$ and $B$ by $A \oplus B$, i.e., $A \oplus B = (A \cup B) \setminus (A \cap B)$. We use the following well-known observation of Hopcroft and Karp~\cite{HK73}.
\begin {lemma} \label{le:h}
$H := {\cal M} \oplus {\cal M}'$ is a union of vertex-disjoint alternating blue-red (simple) paths and alternating blue-red (simple) cycles. \end {lemma}
The \emph{colored weight} of a subgraph $G' \subset G$, denoted by $c(G')$, is defined as the difference between the sum of weights of the red edges in $G'$ and the sum of weights of the blue edges in $G'$.
Since $w({\cal M}') > w({\cal M})$, we have $c(H) > 0$, hence the sum of the colored weights of the alternating blue-red paths and cycles in $H$ is positive.
The following lemma shows a reduction from a general $H$ to the case where
$H$ is a simple blue-red path or cycle of positive colored weight.
\begin {lemma} \label {le:red} If $w({\cal M}') > w({\cal M})$, we may assume that $H$ is an alternating blue-red path or cycle of positive colored weight. \end {lemma}
\begin {proof} Let $\Pi_1,\ldots, \Pi_r$ denote the simple alternating blue-red paths or cycles in $H$ ordered so that the paths and cycles of positive colored weight appear first, and only then the paths or cycles of non-positive colored weight. In other words, the paths and cycles of positive colored weight have smaller indices than those of non-positive colored weight.
As $\sum_{j=1}^r c(\Pi_j) = w({\cal M}') - w({\cal M}) > 0$ and the values $c(\Pi_j)$ are ordered so that the positive ones come first, it easily follows that \begin {equation} \label{eq: aug} \sum_{j=1}^i c(\Pi_j) > 0 \quad \text{ for every } 1 \le i \le r . \end {equation} Therefore, after treating $\Pi_1, \ldots, \Pi_i$, the weight of ${\cal M}^*$ has increased by $\sum_{j=1}^i c(\Pi_j)$, which is a positive value. When treating the next path or cycle $\Pi_{i+1}$ in $H$, we add the positive value $\sum_{j=1}^i c(\Pi_j)$ to the weight of the first red edge in $\Pi_{i+1}$, which allows us to view $\Pi_{i+1}$ as having a positive colored weight.
To complete the proof of this lemma,
we note that the total transformation is obtained by concatenating all the transformations of $\Pi_1,\ldots, \Pi_r$ in the order in which they appear; and, in particular, if each of these transformations is carried out in time linear in the length of the path/cycle,
the total runtime of the transformation will be $O(|{\cal M}| + |{\cal M}'|)$. \quad\blackslug\lower 8.5pt\null\par \end {proof}
By Lemma~\ref{le:red}, we may assume that $H$ is a path $\Pi$ consisting of $k$ pairs of blue-red edges $b_1 \circ r_1 \circ \cdots \circ b_k \circ r_k$, where $b_i \in {\cal M}^*, r_i \in {\cal M}'$, for $i=1,\ldots, k$, and we allow for $b_1$ or $r_k$ (or both) to be empty; we will make this assumption henceforth.
The algorithm
iteratively changes ${\cal M}^*$ by deleting from ${\cal M}^*$, in iteration $i$, the blue edges $b_{i+1}$ and $b_i$ (if not previously deleted from ${\cal M}^*$), for $i=1,\ldots, k$, thereby allowing the addition of the red edge $r_i \in {\cal M}'$ to ${\cal M}^*$; hence at most 3 changes to ${\cal M}^*$ are made in each iteration.
As this basic procedure is used repeatedly below, we include its pseudo-code, and note that implementing it can be easily done in time $O(|\Pi|)$. We also note that Procedure $ReplaceBlueRed({\cal M}^*, {\cal M}', \Pi)$ changes the auxiliary matching ${\cal M}^*$, and by keeping all the intermediate values of ${\cal M}^*$ throughout the process, over all the paths $\Pi$ in $H$, we obtain the entire transformation of ${\cal M}$ into ${\cal M}'$. \\
\begin{mdframed}[backgroundcolor=lightgray!40,topline=false,rightline=false,leftline=false,bottomline=false,innertopmargin=2pt] Procedure $ReplaceBlueRed({\cal M}^*, {\cal M}', \Pi)$:
\\
{\bf For} $i = 1$ to $k$: \begin{enumerate} \item Delete the blue edge $b_{i+1}$ (if it exists) and the blue edge $b_{i}$ (if it exists), from ${\cal M}^*$. \item Add the red edge $r_{i}$ to ${\cal M}^*$.
\end{enumerate}
\end{mdframed}
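For readers who prefer executable pseudo-code, here is a short Python transcription of the procedure (ours, not the paper's); representing $\Pi$ by two maps from indices to edges, with entries missing when $b_1$ or $r_k$ is absent, is an illustrative choice.
\begin{verbatim}
# Python transcription of Procedure ReplaceBlueRed for Pi = b_1 r_1 ... b_k r_k.
# blue and red map the index i to the edge b_i / r_i; an index may be absent.
def replace_blue_red(m_star, blue, red, k):
    for i in range(1, k + 1):
        for b in (blue.get(i + 1), blue.get(i)):   # step 1: delete b_{i+1} and b_i
            if b is not None and b in m_star:
                m_star.remove(b)
        r = red.get(i)                             # step 2: add r_i
        if r is not None:
            m_star.add(r)
\end{verbatim}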
For a path $\Pi = b_1 \circ r_1 \circ \cdots \circ b_k \circ r_k$, we denote by $\Pi_i = b_1 \circ r_1 \circ \cdots \circ b_i \circ r_i$ the \emph{alternating blue-red $i$-prefix} of $\Pi$, for any $1\le i \le k$, which is the subpath of $\Pi$ that consists of the first $i$ pairs of blue-red edges. The colored weight of the alternating blue-red $i$-prefix of $\Pi$ is given by $$c(\Pi_i) = c\left(b_1 \circ r_1 \circ \cdots \circ b_i \circ r_i \right) = \sum_{j=1}^i \left(w(r_j)-w(b_j)\right),$$ for $0 \le i \le k$, with the convention that $\Pi_0$ is the empty path and $c(\Pi_0)=0$. Let \begin {equation} \label{eq:imin} i_{min} = \underset {i \in \{0,1,\ldots, k\}}{\arg\min}\, c(\Pi_i). \end {equation}
Clearly, $i_{min}$ and $c(\Pi_{i_{min}})$ can be computed in $O(|\Pi|)$ time.
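One way to do so is a single scan over the prefix sums, as in the following sketch (ours); the maps \texttt{w\_blue} and \texttt{w\_red} are illustrative, with the value $0$ standing in for a missing edge.
\begin{verbatim}
# Compute the prefix colored weights c(Pi_i) and i_min in O(|Pi|).
# w_blue[i], w_red[i] are the weights of b_i and r_i (0.0 when the edge is missing).
def min_prefix(w_blue, w_red, k):
    c, prefix = 0.0, [0.0]            # c(Pi_0) = 0 by convention
    i_min, c_min = 0, 0.0
    for i in range(1, k + 1):
        c += w_red[i] - w_blue[i]
        prefix.append(c)
        if c < c_min:
            i_min, c_min = i, c
    return i_min, c_min, prefix
\end{verbatim}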
\paragraph*{First case: $c(\Pi_{i_{min}}) \ge 0$.\\} In this case we can add the red edges of $\Pi$ and delete the blue edges of $\Pi$ one after another by making a call to Procedure $ReplaceBlueRed({\cal M}^*, {\cal M}', \Pi)$. Concretely, after $i$ iterations of the {\bf for} loop in Procedure $ReplaceBlueRed({\cal M}^*, {\cal M}', \Pi)$, for $1 \le i \le k$, the value of ${\cal M}^*$ is changed by $$c(\Pi_i) - w(b_{i+1}) \ge c(\Pi_{i_{min}}) - w(b_{i+1}) \ge -W,$$
hence the value of ${\cal M}^*$ never decreases by more than $W$. Moreover, by our assumptions that $H = \Pi$ and $w({\cal M}') > w({\cal M})$, at the end of the execution of the procedure the value of ${\cal M}^*$ is changed by $c(\Pi) = w({\cal M}') - w({\cal M}) > 0$. This shows that the invariant of Theorem~\ref{th:app} is always maintained.
\paragraph*{Second case: $c(\Pi_{i_{min}}) < 0$.\\} Since $c(\Pi_k) = c(\Pi) > 0$, it follows that $0 < i_{min} < k$. Let $\Pi_{pref} := \Pi_{i_{min}}$ denote the alternating blue-red $i_{min}$-prefix of $\Pi$, namely $b_1 \circ r_1 \circ \cdots \circ b_{i_{min}} \circ r_{i_{min}}$. Similarly, we define the \emph{alternating blue-red $i$-suffix} of $\Pi$, for any $1\le i \le k-1$, as the subpath of $\Pi$ that consists of the last $k - i$ pairs of blue-red edges, and let $\Pi_{suf}$ denote the alternating blue-red $i_{min}$-suffix of $\Pi$, namely, $b_{i_{min} + 1} \circ r_{i_{min} + 1} \circ \cdots \circ b_k \circ r_k$. Since
$0 < c(\Pi) = c(\Pi_{pref}) + c(\Pi_{suf})$ and $c(\Pi_{pref}) = c(\Pi_{i_{min}}) < 0$,
we have $c(\Pi_{pref}) < 0 < c(\Pi_{suf}).$
Clearly, $\Pi_{pref}$ and $\Pi_{suf}$ can be computed in $O(|\Pi|)$ time.
We add to ${\cal M}^*$ the red edges in $\Pi_{suf}$ and delete from ${\cal M}^*$ the blue edges in $\Pi_{suf}$ by making a call to Procedure $ReplaceBlueRed({\cal M}^*, {\cal M}', \Pi_{suf})$. After this execution,
the weight of ${\cal M}^*$ is increased by $c(\Pi_{suf})>0$.
Moreover, by the definition of $i_{min}$, after $i$ iterations of the {\bf for} loop of the procedure,
for $1 \le i \le k-i_{min}$, the value of ${\cal M}^*$ may decrease by at most $w(b_{i+1})\le W$, as compared to its value prior to the call to Procedure $ReplaceBlueRed({\cal M}^*, {\cal M}', \Pi_{suf})$.
After handling the edges in $\Pi_{suf}$, the weight of ${\cal M}^*$ has thus increased by $c(\Pi_{suf})>0$. Next, we add to ${\cal M}^*$ the remaining red edges in $\Pi$ and delete from ${\cal M}^*$ the remaining blue edges of $\Pi$, by making a call to Procedure $ReplaceBlueRed({\cal M}^*, {\cal M}', \Pi_{pref})$.
After performing $i$ iterations of the {\bf for} loop of Procedure $ReplaceBlueRed({\cal M}^*, {\cal M}', \Pi_{pref})$, for $1 \le i \le i_{min}$, the value of ${\cal M}^*$ is changed by at least
\begin {equation} \begin {array}{ll} c(\Pi_{i}) - w(b_{i+1}) ~\ge~ c(\Pi_{i}) ~-~ W \ge c(\Pi_{i_{min}}) - W ~=~ c(\Pi_{pref}) - W ~>~ - c(\Pi_{suf}) - W,\end {array} \end {equation} as compared to its value \emph{after} the call to Procedure $ReplaceBlueRed({\cal M}^*, {\cal M}', \Pi_{suf})$, where the last inequality follows as $c(\Pi_{pref}) + c(\Pi_{suf}) > 0$. Since we made the call to Procedure $ReplaceBlueRed({\cal M}^*, {\cal M}', \Pi_{pref})$ after the value of ${\cal M}^*$ has already increased by $c(\Pi_{suf})>0$, it follows that during the execution of Procedure $ReplaceBlueRed({\cal M}^*, {\cal M}', \Pi_{pref})$, the value of $w({\cal M}^*)$ may not decrease by more than $W$, as compared to the value of $w({\cal M}^*)$ \emph{prior} to handling the edges along $\Pi$.
To conclude, in both cases, the value of $w({\cal M}^*)$ never decreases by more than $W$ during the process of handling the edges along $\Pi$, and at the end of this process the value grows by $c(\Pi)>0$.
Moreover, the runtime of this procedure is $O(|\Pi|)$.
This completes the proof of Theorem \ref{th:app}. \quad\blackslug\lower 8.5pt\null\par \end {proof}
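To summarize the case analysis in code, the following Python sketch (ours) strings together the illustrative helpers \texttt{min\_prefix} and \texttt{replace\_blue\_red} sketched above for a single path $\Pi$; the dictionary representation of $\Pi$ and the helper names are assumptions made for illustration only.
\begin{verbatim}
# Handle one path Pi = b_1 r_1 ... b_k r_k; w maps an edge to its weight.
def handle_path(m_star, blue, red, w, k):
    wb = {i: (w[blue[i]] if blue.get(i) else 0.0) for i in range(1, k + 1)}
    wr = {i: (w[red[i]] if red.get(i) else 0.0) for i in range(1, k + 1)}
    i_min, c_min, _ = min_prefix(wb, wr, k)
    if c_min >= 0:
        # First case: sweep over the whole path in one go.
        replace_blue_red(m_star, blue, red, k)
    else:
        # Second case: handle the suffix b_{i_min+1} r_{i_min+1} ... b_k r_k first,
        # and only then the prefix b_1 r_1 ... b_{i_min} r_{i_min}.
        suf_blue = {j: blue.get(i_min + j) for j in range(1, k - i_min + 1)}
        suf_red = {j: red.get(i_min + j) for j in range(1, k - i_min + 1)}
        replace_blue_red(m_star, suf_blue, suf_red, k - i_min)
        pre_blue = {j: blue.get(j) for j in range(1, i_min + 1)}
        pre_red = {j: red.get(j) for j in range(1, i_min + 1)}
        replace_blue_red(m_star, pre_blue, pre_red, i_min)
\end{verbatim}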
Theorem~\ref{th:app} defines a gradual transformation of ${\cal M}$ into ${\cal M}'$, composed of a sequence of operations of insertions of edges from ${\cal M}'$ into ${\cal M}^*$ and deletions of edges of ${\cal M}$ from ${\cal M}^*$, where each operation makes at most 3 changes to ${\cal M}^*$. By cleverly partitioning this transformation into phases consisting of $O(\frac 1 {\varepsilon})$ operations each,
we can prove the strengthening of Theorem~\ref{th:app} given in Theorem \ref{th:main}, asserting that the weight of the transformed matching at the end of each phase is not only at least $w({\cal M})-W$, but also at least $(1-{\varepsilon})\,w({\cal M})$. \begin {proof} (Proof of Theorem \ref{th:main}.) If $\max\{w({\cal M})-W,(1-{\varepsilon})w({\cal M})\} = w({\cal M})-W$, then the theorem follows immediately from Theorem~\ref{th:app}. It is henceforth assumed that $(1-{\varepsilon})w({\cal M}) \ge w({\cal M})-W$.
We consider the transformation used in the proof of Theorem~\ref{th:app}, and show how to partition it into phases consisting of $O(\frac 1 {\varepsilon})$ operations each, such that the condition in the theorem holds. We keep the same notation as in the proof of Theorem~\ref{th:app}, and use the same reduction of Lemma~\ref{le:red} to the case where $H = {\cal M}^* \oplus {\cal M}'$ is an alternating blue-red path or cycle, denoted by $\Pi$.
By definition of $i_{min}$ (via~(\ref{eq:imin}), within the proof of Theorem~\ref{th:app}), we have \begin {equation} \label{eq:mat} c(\Pi_{i+1}) = c(\Pi_i) + w(r_{i+1}) - w(b_{i+1}) \ge c(\Pi_i), \text{ for } i \ge i_{min}. \end {equation} Therefore, $w(r_i) \ge w(b_i) \text{ for } i > i_{min}.$
It is possible that many consecutive blue edges $b_{i}$ along $\Pi$ are ``heavy'', i.e., of weight greater than ${\varepsilon}\, w({\cal M})$, which is why we partition the transformation into phases, where the specific partition depends on whether the specific sub-path that we deal with (either $\Pi$, $\Pi_{pref}$, or $\Pi_{suf}$) contains many consecutive heavy blue edges. The following analysis applies to Procedure $ReplaceBlueRed({\cal M}^*, {\cal M}', \Pi)$,
but
the exact same analysis applies verbatim to Procedure $ReplaceBlueRed({\cal M}^*, {\cal M}', \Pi_{suf})$ or Procedure $ReplaceBlueRed({\cal M}^*, {\cal M}', \Pi_{pref})$, with the only difference being that the value of $i_{min}$ is set to 0 for the calls of Procedure $ReplaceBlueRed({\cal M}^*, {\cal M}', \Pi)$ and Procedure $ReplaceBlueRed({\cal M}^*, {\cal M}', \Pi_{pref})$, and is defined via~(\ref{eq:imin}) for the call to Procedure $ReplaceBlueRed({\cal M}^*, {\cal M}', \Pi_{suf})$.
We distinguish between two cases. \paragraph*{First case: $\forall i_{min} < i \le i_{min} + \frac 1 {\varepsilon}: \ w(b_i) \ge {\varepsilon} w({\cal M})$.\\}
In this case the number of edges in $\Pi$ is $\le \frac 2 {\varepsilon} + 1$, as there can be at most $\frac 1 {\varepsilon}$ edges of ${\cal M}$ of weight greater than or equal to ${\varepsilon}\, w({\cal M})$. We can thus perform Procedure $ReplaceBlueRed({\cal M}^*, {\cal M}', \Pi)$ in a single phase of $O(\frac 1 {\varepsilon})$ operations, following which the value of ${\cal M}^*$ grows by $c(\Pi)>0$. Clearly, the runtime is $O(|\Pi|)$.
\paragraph*{Second case: $\exists i_{min} < i_0 \le i_{min} + \frac 1 {\varepsilon}: \ w(b_{i_0}) < {\varepsilon} w({\cal M})$.\\} In this case we perform the first $i_0 - i_{min}$ iterations of the {\bf for} loop in Procedure $ReplaceBlueRed({\cal M}^*, {\cal M}', \Pi)$ in one phase of $O(\frac 1 {\varepsilon})$ operations. This deletes the blue edges from ${\cal M}^*$ and adds the red edges of ${\cal M}'$ in the sub-path $b_{i_{min}} \circ r_{i_{min}} \circ \cdots \circ b_{i_0-1} \circ r_{i_0 - 1}$ of $\Pi$, within one phase. At this point, we have deleted from ${\cal M}^*$ the edges $b_{i_{min}}, \ldots, b_{i_0}$, and added to ${\cal M}^*$ the edges $r_{i_{min}}, \ldots, r_{i_0-1}$. (Indeed, in step (i) of iteration $i_0-1$ of Procedure $ReplaceBlueRed({\cal M}^*, {\cal M}', \Pi)$, the edge $b_{i_0}$ is deleted.)
The value of ${\cal M}^*$ has thus changed by $c(\Pi_{i_0})-c(\Pi_{i_{min}}) - w(b_{i_0}) \ge - w(b_{i_0}) $, where the inequality follows by~(\ref{eq:mat}). Therefore, the value of ${\cal M}^*$ decreased by at most $w(b_{i_0})$, which is smaller than ${\varepsilon}\, w({\cal M})$ by definition of $i_0$, and thus ${\cal M}^*$ has weight at least $(1-{\varepsilon})w({\cal M})$ at the end of this phase.
We repeat this process recursively, treating the remaining edges in $\Pi$ with the same bifurcation into two cases, thus maintaining the invariant that at the end of each phase throughout this transformation process, ${\cal M}^*$ is a valid matching with $w({\cal M}^*) \ge (1-{\varepsilon})w({\cal M}) \ge \max\{w({\cal M})-W, (1-{\varepsilon})w({\cal M})\}$. Here too the runtime is $O(|\Pi|)$.
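The cut points between phases can be computed on the fly. The following Python sketch (ours) illustrates one possible cut rule mirroring the two cases above: a phase ends right after a light blue edge, or runs to the end of the path when every blue edge in the window is heavy (in which case the remaining path has $O(\frac 1 {\varepsilon})$ edges). The names and the dictionary representation of the weights are illustrative assumptions.
\begin{verbatim}
import math

# Cut the iterations of ReplaceBlueRed into phases of O(1/eps) operations each.
# w_blue[j] is the weight of b_j (0.0 when missing); w_M is w(M).
def phase_boundaries(w_blue, k, i_min, eps, w_M):
    budget = math.ceil(1.0 / eps)
    i = i_min
    while i < k:
        window = range(i + 1, min(i + budget, k) + 1)
        cut = next((j for j in window if w_blue.get(j, 0.0) < eps * w_M), None)
        if cut is None:
            cut = k          # all blue edges in the window are heavy (first case)
        yield cut            # last iteration of the current phase
        i = cut
\end{verbatim}
Each yielded index marks the last iteration of a phase, in line with the two cases analyzed above.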
\paragraph* {Runtime analysis.}
We partition the edges of $H = {\cal M} \oplus {\cal M}'$ into alternating blue-red simple paths and cycles, and order them so that the paths and cycles of positive colored weight appear first, which can be easily done in time $O(|{\cal M}| +|{\cal M}'|)$. The treatment of each alternating path or cycle $\Pi$ in $H$ is done by making a call to Procedure $ReplaceBlueRed({\cal M}^*, {\cal M}', \Pi)$, where we partition the resulting transformation into batches of size $O(\frac 1 {\varepsilon})$ each; summing over all paths and cycles in $H$, this can also be easily done in time $O(|{\cal M}| +|{\cal M}'|)$.
The total time required for handling the good edges of ${\cal M}'$ throughout the algorithm is $O(|{\cal M}| +|{\cal M}'|)$, hence the running time of the transformation is $O(|{\cal M}| +|{\cal M}'|)$.
The proof of Theorem~\ref{th:main} follows. \quad\blackslug\lower 8.5pt\null\par \end {proof}
\begin {remark}
Recall that in the unweighted case, when $|{\cal M}| < |{\cal M}'|$, it was possible to gradually transform ${\cal M}$ to ${\cal M}'$ without ever being in deficit compared to the initial value of ${\cal M}$, i.e., $|{\cal M}^*| \ge |{\cal M}|$ throughout the transformation process. However, if $|{\cal M}'| \le |{\cal M}|$, we showed in App.\ \ref{tightnessun}
that this is no longer the case and there are cases in which we are always at a deficit of at least one unit until the very end of the transformation process.
In the weighted case, there are examples where $w({\cal M}') > w({\cal M})$, yet in every gradual transformation of ${\cal M}$ to ${\cal M}'$, we have $w({\cal M}^*) < w({\cal M})$ throughout the transformation. The deficit
throughout the process is more subtle to quantify, and this was discussed in detail in App.\ \ref{tightnesswe}.
\end {remark}
\section{Proof of Lemma \ref{lazylemma2}} \label{app:lazy}
Write $k = \lfloor \epsilon'\cdot |{\cal M}_t| \rfloor$, and let $k_{ins}$ and $k_{del}$ denote the number of (edge or vertex) insertions and deletions that occur during the $k$ updates $t+1,\ldots,t+k$, respectively, where $k = k_{ins} + k_{del}$. We have $|{\cal M}^{OPT}_t| \le \beta \cdot |{\cal M}_t|$, where ${\cal M}^{OPT}_i$ is a maximum matching for $G_i$, for all $i$. Since each insertion may increase the size of the MCM by at most 1, we have
$$|{\cal M}^{OPT}_i| ~\le~ |{\cal M}^{OPT}_t| + k_{ins} ~\le~ \beta \cdot |{\cal M}_t| + k ~\le~ \beta \cdot |{\cal M}_t| (1 + \epsilon').$$ Also, each deletion may remove at most one edge from ${\cal M}_t$, hence
$$|{\cal M}^{(i)}_t| ~\ge~ |{\cal M}_t| - k_{del} ~\ge~ |{\cal M}_t| - k ~\ge~ |{\cal M}_t| - \epsilon' \cdot |{\cal M}_t| ~=~ |{\cal M}_t| (1- \epsilon').$$
It follows that $$\frac{|{\cal M}^{OPT}_i|}{|{\cal M}^{(i)}_t|} ~\le~
\frac{\beta \cdot |{\cal M}_t| (1 + \epsilon')}{|{\cal M}_t| (1- \epsilon')} ~\le~ \beta(1 + 2\epsilon'), $$
where the last inequality holds for all $\epsilon' \le 1/2$. The lemma follows.
\section{Further details on the scheme of \cite{GP13}} \label{amort} The key insight behind the scheme of \cite{GP13} and of its generalization \cite{PS16} is not to compute the approximate matching on the entire graph, but rather on a \emph{matching sparsifier}, which is a sparse subgraph $\tilde G$ of the entire graph $G$ that preserves the maximum matching size to within a factor of $1+\epsilon$. The matching sparsifier of \cite{GP13, PS16} is derived from a constant approximate minimum vertex cover that is maintained dynamically \emph{by other means}. We will not describe here the manner in which a constant approximate minimum vertex cover is maintained and the sparsifier is computed on top of it; the interested reader can refer to \cite{GP13,PS16} for details. The bottom line of \cite{GP13,PS16} is that for graphs with arboricity bounded by $\alpha$, for any $1 \le \alpha = O(\sqrt{m})$,
the matching sparsifier $\tilde G$ of \cite{PS16} has $O(|{\cal M}| \cdot \alpha/\epsilon)$ edges, and it can be computed in time linear in its size. (The scheme of \cite{PS16} generalizes that of \cite{GP13}, hence we might as well restrict attention to \cite{PS16}.)
A $(1+O(\epsilon))$-MCM can be computed for the sparsifier $\tilde G$ in time $O(|\tilde G| / \epsilon) = O(|{\cal M}| \cdot \alpha/\epsilon^2)$ \cite{HK73,MV80,Vaz12}, and assuming the constant hidden in the $O$-notation is sufficiently small, it provides a $(1+\epsilon/4)$-MCM for the entire graph. Since the cost $O(|{\cal M}| \cdot \alpha/\epsilon^2)$ of this static computation is amortized over $\Omega(\epsilon \cdot |{\cal M}|)$ update steps, the resulting amortized update time is $O(\alpha \cdot \epsilon^{-3})$. As shown in \cite{PS16}, one can shave a $1/\epsilon$-factor from this update time bound, reducing it to $O(\alpha \cdot \epsilon^{-2})$, but the details of this improvement lie outside the scope of this short overview of the scheme of \cite{GP13,PS16}.
\section{Discussion and Open Problems} \label{discuss} This paper introduces a natural generalization of the MRP, and provides near-optimal transformations to the problems of maximum cardinality matching and maximum weight matching.
One application of this meta-problem is to dynamic graph algorithms. In particular, by building on our transformation for maximum cardinality matching we have shown that any algorithm for maintaining a $\beta$-MCM can be transformed into an algorithm for maintaining a $\beta(1+\epsilon)$-MCM with essentially the same update time as that of the original algorithm and with a worst-case recourse bound of $O(1/\epsilon)$, for any $\beta \ge 1$ and $\epsilon>0$. This recourse bound is optimal for the regime $\beta = 1+\epsilon$. We also extended this result to weighted matchings, but there is a linear dependence on the aspect-ratio of the graph in the update time and recourse bounds. It would be interesting to improve this dependency to be polylogarithmic in the aspect-ratio.
It would be interesting to study additional basic graph problems under this generalized framework. Although our positive results may lead to the impression that there exists an efficient gradual transformation process to any optimization graph problem, we conclude with a sketch of two trivial hardness results.
For the \emph{maximum independent set problem}, no gradual transformation process can provide any nontrivial approximation guarantee, regardless of the approximation guarantees of the source and target independent sets. To see this, denote the source approximate maximum independent set (the one we start from) by ${\cal S}$ and the target approximate maximum independent set (the one we gradually transform into) by ${\cal S}'$, and suppose there is a complete bipartite graph between ${\cal S}$ and ${\cal S}'$. Since we cannot add even a single vertex of ${\cal S}'$ to the output independent set ${\cal S}^*$ (which is initialized as ${\cal S}$) before removing from it all vertices of ${\cal S}$, and since each step of the transformation process makes only $\Delta$ changes to ${\cal S}^*$, the approximation guarantee of the output independent set must reach $\Omega(|{\cal S}'|/ \Delta)$ at some moment throughout the transformation process. In other words, the approximation guarantee may be arbitrarily large.
As another example, an analogous argument shows that for the \emph{minimum vertex cover problem}, no gradual transformation process can provide an approximation guarantee better than
$\frac{|{\cal C}| + |{\cal C}'|}{|{\cal C}'|} > 2$, where ${\cal C}$ and ${\cal C}'$ are the source and target vertex covers, respectively.
On the other hand, one can easily see that the approximation guarantee throughout the entire transformation process does not exceed $\frac{|{\cal C}| + |{\cal C}'|}{|{\cal C}^{OPT}|}$,
where ${\cal C}^{OPT}$ is a minimum vertex cover for the graph,
by gradually adding all vertices of the target vertex cover ${\cal C}'$ to the output vertex cover ${\cal C}^*$ (which is initialized as ${\cal C}$), and later gradually removing the vertices of ${\cal C}$ from the output vertex cover ${\cal C}^*$.
These examples demonstrate a basic limitation of our generalized framework, and suggest that further research of this framework is required. One interesting direction for further research is studying the maximum independent set and minimum vertex cover problems for bounded degree graphs; note that the trivial hardness results mentioned above do not apply directly to bounded degree graphs.
More generally, studying additional combinatorial optimization problems under this framework may contribute to a deeper understanding of its inherent limitations and strengths, and in particular,
to finding additional applications of this framework, possibly outside the area of dynamic matching algorithms.
\end{document}
\begin{document}
\title{The geometry of the disk complex}
\author{Howard Masur} \address{\hskip-\parindent
Department of Mathematics\\
University of Chicago\\
Chicago, Illinois 60637} \email{[email protected]} \author{Saul Schleimer} \address{\hskip-\parindent
Department of Mathematics\\
University of Warwick\\
Coventry, CV4 7AL, UK} \email{[email protected]}
\thanks{This work is in the public domain.}
\date{\today}
\begin{abstract} We give a distance estimate for the metric on the disk complex and show that it is Gromov hyperbolic. As another application of our techniques, we find an algorithm which computes the Hempel distance of a Heegaard splitting, up to an error depending only on the genus. \end{abstract} \maketitle
\setcounter{tocdepth}{1} \tableofcontents
\section{Introduction} \label{Sec:Introduction}
In this paper we initiate the study of the geometry of the disk complex of a handlebody $V$. The disk complex $\mathcal{D}(V)$ has a natural simplicial inclusion into the curve complex $\mathcal{C}(S)$ of the boundary of the handlebody. Surprisingly, this inclusion is not a quasi-isometric embedding; there are disks which are close in the curve complex yet very far apart in the disk complex. As we will show, any obstruction to joining such disks via a short path is a topologically meaningful subsurface of $S = \partial V$. We call such subsurfaces {\em holes}. A path in the disk complex must travel into and then out of these holes; paths in the curve complex may skip over a hole by using the vertex representing the boundary of the subsurface. We classify the holes:
\begin{theorem} \label{Thm:ClassificationHolesDiskComplex} Suppose $V$ is a handlebody. If $X \subset \partial V$ is a hole for the disk complex $\mathcal{D}(V)$ of diameter at least $61$ then: \begin{itemize} \item $X$ is not an annulus. \item If $X$ is compressible then there are disks $D, E$ with boundary contained in $X$ so that the boundaries fill $X$. \item If $X$ is incompressible then there is an $I$-bundle $\rho_F \colon T \to F$ so that $T$ is a component of $V {\smallsetminus} \partial_v T$ and $X$ is a component of $\partial_h T$. \end{itemize} \end{theorem}
See Theorems~\ref{Thm:Annuli}, \ref{Thm:CompressibleHoles} and \ref{Thm:IncompressibleHoles} for more precise statements. The $I$--bundles appearing in the classification lead us to study the arc complex $\mathcal{A}(F)$ of the base surface $F$. Since the $I$--bundle $T$ may be twisted, the surface $F$ may be non-orientable.
Thus, as a necessary warm-up to the difficult case of the disk complex, we also analyze the holes for the curve complex of a non-orientable surface, as well as the holes for the arc complex.
\subsection*{Topological application} It is a long-standing open problem to decide, given a Heegaard diagram, whether the underlying splitting surface is reducible.
This question has deep connections to the geometry, topology, and algebra of the ambient three-manifold. For example, a resolution of this problem would give new solutions to both the three-sphere recognition problem and the triviality problem for three-manifold groups.
The difficulty of deciding reducibility is underlined by its connection to the Poincar\'e Conjecture: several approaches to the Poincar\'e Conjecture fell at essentially this point. See~\cite{CavicchioliSpaggiari06} for an entrance into the literature.
One generalization of deciding reducibility is to find an algorithm that, given a Heegaard diagram, computes the {\em distance} of the Heegaard splitting as defined by Hempel~\cite{Hempel01}. (For example, see~\cite[Section~2]{Birman06}.) The classification of holes for the disk complex leads to a coarse answer to this question.
\begin{restate}{Theorem}{Thm:CoarselyComputeDistance} In every genus $g$ there is a constant $K = K(g)$ and an algorithm that, given a Heegaard diagram, computes the distance of the Heegaard splitting with error at most $K$. \end{restate}
In addition to the classification of holes, the algorithm relies on the Gromov hyperbolicity of the curve complex~\cite{MasurMinsky99} and the quasi-convexity of the disk set inside of the curve complex~\cite{MasurMinsky04}. However the algorithm does not depend on our geometric applications of \refthm{ClassificationHolesDiskComplex}.
\subsection*{Geometric application}
The hyperbolicity of the curve complex and the classification of holes allows us to prove:
\begin{restate}{Theorem}{Thm:DiskComplexHyperbolic} The disk complex is Gromov hyperbolic. \end{restate}
Again, as a warm-up to the proof of \refthm{DiskComplexHyperbolic} we prove that $\mathcal{C}(F)$ and $\mathcal{A}(S)$ are hyperbolic in \refcor{NonorientableCurveComplexHyperbolic} and \refthm{ArcComplexHyperbolic}. Note that Bestvina and Fujiwara~\cite{BestvinaFujiwara07} have previously dealt with the curve complex of a non-orientable surface, following Bowditch~\cite{Bowditch06}.
These results cannot be deduced from the fact that $\mathcal{D}(V)$, $\mathcal{C}(F)$, and $\mathcal{A}(S)$ can be realized as quasi-convex subsets of $\mathcal{C}(S)$. This is because the curve complex is locally infinite. As a simple example, consider the Cayley graph of $\mathbb{Z}^2$ with the standard generating set. Then the cone $C(\mathbb{Z}^2)$ of height one-half is a Gromov hyperbolic space and $\mathbb{Z}^2$ is a quasi-convex subset. Another instructive example, very much in line with our work, is the usual embedding of the three-valent tree $T_3$ into the Farey tessellation.
The proof of \refthm{DiskComplexHyperbolic} requires the {\em distance estimate} \refthm{DiskComplexDistanceEstimate}: the distance in $\mathcal{C}(F)$, $\mathcal{A}(S)$, and $\mathcal{D}(V)$ is coarsely equal to the sum of subsurface projection distances in holes. However, we do not use the hierarchy machine introduced in~\cite{MasurMinsky00}. This is because hierarchies are too flexible to respect a symmetry, such as the involution giving a non-orientable surface, and at the same time too rigid for the disk complex. For $\mathcal{C}(F)$ we use the highly rigid {Teichm\"uller~} geodesic machine, due to Rafi~\cite{Rafi10}. For $\mathcal{D}(V)$ we use the extremely flexible train track machine, developed by ourselves and Mosher~\cite{MasurEtAl10}.
Theorems~\ref{Thm:DiskComplexDistanceEstimate} and \ref{Thm:DiskComplexHyperbolic} are part of a more general framework. Namely, given a combinatorial complex $\mathcal{G}$ we understand its geometry by classifying the holes: the geometric obstructions lying between $\mathcal{G}$ and the curve complex. In Sections~\ref{Sec:Axioms} and \ref{Sec:Partition} we show that any complex $\mathcal{G}$ satisfying certain axioms necessarily satisfies a distance estimate. That hyperbolicity follows from the axioms is proven in \refsec{Hyperbolicity}.
Our axioms are stated in terms of a path of markings, a path in the the combinatorial complex, and their relationship. For the disk complex the combinatorial paths are surgery sequences of essential disks while the marking paths are provided by train track splitting sequences; both constructions are due to the first author and Minsky~\cite{MasurMinsky04} (\refsec{BackgroundTrainTracks}). The verification of the axioms (\refsec{PathsDisk}) relies on our work with Mosher, analyzing train track splitting sequences in terms of subsurface projections~\cite{MasurEtAl10}.
We do not study non-orientable surfaces directly; instead we focus on symmetric multicurves in the double cover. This time marking paths are provided by {Teichm\"uller~} geodesics, using the fact that the symmetric Riemann surfaces form a totally geodesic subset of {Teichm\"uller~} space. The combinatorial path is given by the systole map. We use results of Rafi~\cite{Rafi10} to verify the axioms for the complex of symmetric curves. (See \refsec{PathsNonorientable}.) \refsec{PathsArc} verifies the axioms for the arc complex again using {Teichm\"uller~} geodesics and the systole map. It is interesting to note that the axioms for the arc complex can also be verified using hierarchies or, indeed, train track splitting sequences.
The distance estimates for the marking graph and the pants graph, as given by the first author and Minsky~\cite{MasurMinsky00}, inspired the work here, but do not fit our framework. Indeed, neither the marking graph nor the pants graph are Gromov hyperbolic. It is crucial here that all holes {\em interfere}; this leads to hyperbolicity. When there are non-interfering holes, it is unclear how to partition the marking path to obtain the distance estimate.
\subsection*{Acknowledgments} We thank Jason Behrstock, Brian Bowditch, Yair Minsky, Lee Mosher, Hossein Namazi, and Kasra Rafi for many enlightening conversations.
We thank Tao Li for pointing out that our original bound inside of \refthm{IncompressibleHoles} of $O(\log g(V))$ could be reduced to a constant.
\section{Background on complexes} \label{Sec:BackgroundComplexes}
We use $S_{g,b,c}$ to denote the compact connected surface of genus $g$ with $b$ boundary components and $c$ cross-caps.
If the surface is orientable we omit the subscript $c$ and write $S_{g,b}$. The {\em complexity} of $S = S_{g, b}$ is $\xi(S) = 3g - 3 + b$. If the surface is closed and orientable we simply write $S_g$.
\subsection{Arcs and curves}
A simple closed curve $\alpha \subset S$ is {\em essential} if $\alpha$ does not bound a disk in $S$. The curve $\alpha$ is {\em non-peripheral} if $\alpha$ is not isotopic to a component of $\partial S$. A simple arc $\beta \subset S$ is proper if $\beta \cap \partial S = \partial \beta$. An isotopy of $S$ is proper if it preserves the boundary setwise. A proper arc $\beta \subset S$ is {\em essential} if $\beta$ is not properly isotopic into a regular neighborhood of $\partial S$.
Define $\mathcal{C}(S)$ to be the set of isotopy classes of essential, non-peripheral curves in $S$. Define $\mathcal{A}(S)$ to be the set of proper isotopy classes of essential arcs. When $S = S_{0,2}$ is an annulus define $\mathcal{A}(S)$ to be the set of essential arcs, up to isotopies fixing the boundary pointwise. For any surface define $\mathcal{AC}(S) = \mathcal{A}(S) \cup \mathcal{C}(S)$.
For $\alpha, \beta \in \mathcal{AC}(S)$ the geometric intersection number $\iota(\alpha, \beta)$ is the minimum intersection possible between $\alpha$ and any $\beta'$ equivalent to $\beta$. When $S = S_{0,2}$ we do not count intersection points occurring on the boundary. If $\alpha$ and $\beta$ realize their geometric intersection number then $\alpha$ is {\em tight} with respect to $\beta$. If they do not realize their geometric intersection then we may {\em tighten} $\beta$ until they do.
Define $\Delta \subset \mathcal{AC}(S)$ to be a {\em multicurve} if for all $\alpha, \beta \in \Delta$ we have $\iota(\alpha, \beta) = 0$. Following Harvey~\cite{Harvey81} we may impose the structure of a simplicial complex on $\mathcal{AC}(S)$: the simplices are exactly the multicurves. Also, $\mathcal{C}(S)$ and $\mathcal{A}(S)$ naturally span sub-complexes.
Note that the curve complexes $\mathcal{C}(S_{1,1})$ and $\mathcal{C}(S_{0,4})$ have no edges. It is useful to alter the definition in these cases. Place edges between all vertices with geometric intersection exactly one if $S = S_{1,1}$ or two if $S = S_{0,4}$. In both cases the result is the Farey graph. Also, with the current definition $\mathcal{C}(S)$ is empty if $S = S_{0,2}$. Thus for the annulus only we set $\mathcal{AC}(S) = \mathcal{C}(S) = \mathcal{A}(S)$.
\begin{definition} \label{Def:Distance} For vertices $\alpha, \beta \in \mathcal{C}(S)$ define the {\em distance} $d_S(\alpha, \beta)$ to be the minimum possible number of edges of a path in the one-skeleton $\mathcal{C}^1(S)$ which starts at $\alpha$ and ends at $\beta$. \end{definition}
Note that if $d_S(\alpha, \beta) \geq 3$ then $\alpha$ and $\beta$ {\em fill} the surface $S$. We denote distance in the one-skeleton of $\mathcal{A}(S)$ and of $\mathcal{AC}(S)$ by $d_\mathcal{A}$ and $d_\mathcal{AC}$ respectively. Recall that the geometric intersection of a pair of curves gives an upper bound for their distance.
\begin{lemma} \label{Lem:Hempel} Suppose that $S$ is a compact connected surface which is not an annulus. For any $\alpha, \beta \in \mathcal{C}^0(S)$ with $\iota(\alpha, \beta) > 0$ we have $d_S(\alpha, \beta) \leq 2 \log_2(\iota(\alpha, \beta)) + 2$. \qed \end{lemma}
\noindent This form of the inequality, stated for closed orientable surfaces, may be found in~\cite{Hempel01}. A proof in the bounded orientable case is given in~\cite{Schleimer06b}. The non-orientable case is then an exercise. When $S = S_{0,2}$ an induction proves \begin{equation} \label{Eqn:DistanceInAnnulus} d_S(\alpha, \beta) = 1 + \iota(\alpha, \beta) \end{equation} for distinct vertices $\alpha, \beta \in \mathcal{C}(S)$. See~\cite[Equation 2.3]{MasurMinsky00}.
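For a concrete comparison: if $\iota(\alpha, \beta) = 16$ then \reflem{Hempel} gives $d_S(\alpha, \beta) \leq 2\log_2(16) + 2 = 10$, whereas if $S$ is an annulus then Equation~(\ref{Eqn:DistanceInAnnulus}) gives $d_S(\alpha, \beta) = 17$. Thus distance grows at most logarithmically in the intersection number, except in the annulus, where it grows linearly.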
\subsection{Subsurfaces}
Suppose that $X \subset S$ is a connected compact subsurface. We say $X$ is {\em essential} exactly when all boundary components of $X$ are essential in $S$. We say that $\alpha \in \mathcal{AC}(S)$ {\em cuts} $X$ if all representatives of $\alpha$ intersect $X$. If some representative is disjoint then we say $\alpha$ {\em misses} $X$.
\begin{definition} \label{Def:CleanlyEmbedded} An essential subsurface $X \subset S$ is {\em cleanly embedded} if for all components $\delta \subset \partial X$ we have: $\delta$ is isotopic into $\partial S$ if and only if $\delta$ is equal to a component of $\partial S$. \end{definition}
\begin{definition} \label{Def:Overlap} Suppose $X, Y \subset S$ are essential subsurfaces. If $X$ is cleanly embedded in $Y$ then we say that $X$ is {\em nested} in $Y$. If $\partial X$ cuts $Y$ and also $\partial Y$ cuts $X$ then we say that $X$ and $Y$ {\em overlap}. \end{definition}
A compact connected surface $S$ is {\em simple} if $\mathcal{AC}(S)$ has finite diameter.
\begin{lemma} \label{Lem:SimpleSurfaces} Suppose $S$ is a connected compact surface. The following are equivalent: \begin{itemize} \item $S$ is not simple. \item The diameter of $\mathcal{AC}(S)$ is at least five. \item $S$ admits an ending lamination or $S = S_1$ or $S_{0,2}$. \item $S$ admits a pseudo-Anosov map or $S = S_1$ or $S_{0,2}$. \item $\chi(S) < -1$ or $S = S_{1,1}, S_1, S_{0,2}$. \end{itemize} \end{lemma}
\noindent Lemma~4.6 of~\cite{MasurMinsky99} shows that pseudo-Anosov maps have quasi-geodesic orbits, when acting on the associated curve complex. A Dehn twist acting on $\mathcal{C}(S_{0,2})$ has geodesic orbits.
Note that \reflem{SimpleSurfaces} is only used in this paper when $\partial S$ is non-empty. The closed case is included for completeness.
\begin{proof}[Proof sketch of \reflem{SimpleSurfaces}] If $S$ admits a pseudo-Anosov map then the stable lamination is an ending lamination. If $S$ admits a filling lamination then, by an argument of Kobayashi~\cite{Kobayashi88b}, $\mathcal{AC}(S)$ has infinite diameter. (This argument is also sketched in~\cite{MasurMinsky99}, page 124, after the statement of Proposition~4.6.)
If the diameter of $\mathcal{AC}(S)$ is infinite then the diameter is at least five. To finish, one may check directly that all surfaces with $\chi(S) > -2$, other than $S_{1,1}$, $S_1$, and the annulus, have $\mathcal{AC}(S)$ with diameter at most four. (The difficult cases, $S_{0,1,2}$ and $S_{0,0,3}$, are discussed by Scharlemann~\cite{Scharlemann82}.) Alternatively, all surfaces with $\chi(S) < -1$, and also $S_{1,1}$, admit pseudo-Anosov maps. The orientable cases follow from Thurston's construction~\cite{Thurston88}. Penner's generalization~\cite{Penner88} covers the non-orientable cases.
\end{proof}
\subsection{Handlebodies and disks}
Let $V_g$ denote the {\em handlebody} of genus $g$: the three-manifold obtained by taking a closed regular neighborhood of a polygonal, finite, connected graph in $\mathbb{R}^3$. The genus of the boundary is the {\em genus} of the handlebody. A properly embedded disk $D \subset V$ is {\em essential} if $\partial D \subset \partial V$ is essential.
Let $\mathcal{D}(V)$ be the set of essential disks $D \subset V$, up to proper isotopy. A subset $\Delta \subset \mathcal{D}(V)$ is a multidisk if for every $D, E \in \Delta$ we have $\iota(\partial D, \partial E) = 0$. Following McCullough~\cite{McCullough91} we place a simplicial structure on $\mathcal{D}(V)$ by taking multidisks to be simplices. As with the curve complex, define $d_\mathcal{D}$ to be the distance in the one-skeleton of $\mathcal{D}(V)$.
\subsection{Markings} \label{Sec:Markings}
A finite subset $\mu \subset \mathcal{AC}(S)$ {\em fills} $S$ if for all $\beta \in \mathcal{C}(S)$ there is some $\alpha \in \mu$ so that $\iota(\alpha, \beta) > 0$. For any pair of finite subsets $\mu, \nu \subset \mathcal{AC}(S)$ we extend the intersection number: \[ \iota(\mu,\nu) =
\sum_{\alpha \in \mu, \beta \in \nu} \iota(\alpha, \beta). \] We say that $\mu, \nu$ are {\em $L$--close} if $\iota(\mu, \nu) \leq L$. We say that $\mu$ is a {\em $K$--marking} if $\iota(\mu, \mu) \leq K$. For any $K,L$ we may define $\mathcal{M}_{K,L}(S)$ to be the graph where vertices are filling $K$--markings and edges are given by $L$--closeness.
As defined in~\cite{MasurMinsky00} we have:
\begin{definition} A {\em complete clean marking} $\mu = \{ \alpha_i \} \cup \{ \beta_i \}$ consists of \begin{itemize} \item A collection of {\em base} curves $\operatorname{base}(\mu) = \{ \alpha_i \}$: a maximal simplex in $\mathcal{C}(S)$. \item A collection of {\em transversal} curves $\{\beta_i\}$: for each $i$ define $X_i = S {\smallsetminus} \bigcup_{j \neq i} \alpha_j$ and take $\beta_i \in \mathcal{C}(X_i)$ to be a Farey neighbor of $\alpha_i$. \end{itemize} \end{definition}
\noindent If $\mu$ is a complete clean marking then $\iota(\mu, \mu) \leq 2 \xi(S) + 6 \chi(S)$. As discussed in~\cite{MasurMinsky00} there are two kinds of {\em elementary moves} which connect markings. There is a {\em twist} about a pants curve $\alpha$, replacing its transversal $\beta$ by a new transversal $\beta'$ which is a Farey neighbor of both $\alpha$ and $\beta$. We can {\em flip} by swapping the roles of $\alpha_i$ and $\beta_i$. (In the case of the flip move, some of the other transversals must be {\em cleaned}.)
It follows that for any surface $S$ there are choices of $K, L$ so that $\mathcal{M}(S)$ is non-empty and connected. We use $d_\mathcal{M}(\mu, \nu)$ to denote distance in the marking graph.
\section{Background on coarse geometry} \label{Sec:BackgroundGeometry}
Here we review a few ideas from coarse geometry. See~\cite{Bridson99}, \cite{CDP90}, or \cite{Gromov87} for a fuller discussion.
\subsection{Quasi-isometry}
Suppose $r, s, A$ are non-negative real numbers, with $A \geq 1$. If $s \leq A \cdot r + A$ then we write $s \mathbin{\leq_{A}} r$. If $s \mathbin{\leq_{A}} r$ and $r \mathbin{\leq_{A}} s$ then we write $s \mathbin{=_A} r$ and call $r$ and $s$ {\em quasi-equal} with constant $A$.
We also define the {\em cut-off function} $[r]_c$ where $[r]_c = 0$ if $r < c$ and $[r]_c = r$ if $r \geq c$.
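For example, $s \mathbin{=_2} r$ asserts that $s \leq 2r + 2$ and $r \leq 2s + 2$, while with cut-off constant $c = 3$ we have $[2]_3 = 0$ and $[5]_3 = 5$.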
Suppose that $(\mathcal{X}, d_\mathcal{X})$ and $(\mathcal{Y}, d_\mathcal{Y})$ are metric spaces. A relation $f \colon \mathcal{X} \to \mathcal{Y}$ is an $A$--{\em quasi-isometric embedding} for $A \geq 1$ if, for every $x, y \in \mathcal{X}$, \[ d_\mathcal{X}(x, y) \mathbin{=_A} d_\mathcal{Y}(f(x), f(y)). \] The relation $f$ is a {\em quasi-isometry}, and $\mathcal{X}$ is {\em quasi-isometric} to $\mathcal{Y}$, if $f$ is an $A$--quasi-isometric embedding and the image of $f$ is $A$--{\em dense}: the $A$--neighborhood of the image equals all of $\mathcal{Y}$.
\subsection{Geodesics}
Fix an interval $[u,v] \subset \mathbb{R}$. A {\em geodesic}, connecting $x$ to $y$ in $\mathcal{X}$, is an isometric embedding $f \colon [u, v] \to \mathcal{X}$ with $f(u) = x$ and $f(v) = y$. Often the exact choice of $f$ is unimportant and all that matters are the endpoints $x$ and $y$. We then denote the image of $f$ by $[x,y] \subset \mathcal{X}$.
Fix now intervals $[m,n], [p,q] \subset \mathbb{Z}$. An $A$--quasi-isometric embedding $g \colon [m,n] \to \mathcal{X}$ is called an $A$--{\em quasi-geodesic} in $\mathcal{X}$. A function $g \colon [m,n] \to \mathcal{X}$ is an $A$--{\em unparameterized quasi-geodesic} in $\mathcal{X}$ if \begin{itemize} \item there is an increasing function $\rho \colon [p,q] \to [m,n]$ so that $g \circ \rho \colon [p,q] \to \mathcal{X}$ is an $A$--{\em quasi-geodesic} in $\mathcal{X}$ and \item for all $i \in [p,q-1]$, $\operatorname{diam}_\mathcal{X}\left(g\left[\rho(i), \rho(i+1)\right]\right) \leq A$. \end{itemize} (Compare to the definition of $(K, \delta, s)$--quasi-geodesics found in~\cite{MasurMinsky99}.)
A subset $\mathcal{Y} \subset \mathcal{X}$ is $Q$--{\em quasi-convex} if every $\mathcal{X}$--geodesic connecting a pair of points of $\mathcal{Y}$ lies within a $Q$--neighborhood of $\mathcal{Y}$.
\subsection{Hyperbolicity}
We now assume that $\mathcal{X}$ is a connected graph with metric induced by giving all edges length one.
\begin{definition} \label{Def:GromovHyperbolic} The space $\mathcal{X}$ is $\delta$--{\em hyperbolic} if, for any three points $x, y, z$ in $\mathcal{X}$ and for any geodesics $k = [x, y]$, $g = [y, z]$, $h = [z, x]$, the triangle $ghk$ is $\delta$--{\em slim}: the $\delta$--neighborhood of any two sides contains the third.
\end{definition}
An important tool for this paper is the following theorem of the first author and Minsky~\cite{MasurMinsky99}:
\begin{theorem} \label{Thm:C(S)IsHyperbolic} The curve complex of an orientable surface is Gromov hyperbolic. \qed \end{theorem}
For the remainder of this section we assume that $\mathcal{X}$ is $\delta$--hyperbolic graph, $x, y, z \in \mathcal{X}$ are points, and $k = [x, y], g = [y, z], h = [z, x]$ are geodesics.
\begin{definition} \label{Def:ProjectionToGeodesic} We take $\rho_k \colon \mathcal{X} \to k$ to be the {\em closest points relation}: \[ \rho_k(z) = \big\{ w \in k \mathbin{\mid} \mbox{ for all $v \in k$,
$d_\mathcal{X}(z, w) \leq d_\mathcal{X}(z, v)$ } \big\}. \] \end{definition}
We now list several lemmas useful in the sequel.
\begin{lemma} \label{Lem:RightTriangle} There is a point on $g$ within distance $2\delta$ of $\rho_k(z)$. The same holds for $h$. \qed \end{lemma}
\begin{lemma} \label{Lem:ProjectionHasBoundedDiameter} The closest points $\rho_k(z)$ have diameter at most $4\delta$. \qed \end{lemma}
\begin{lemma} \label{Lem:CenterExists} The diameter of $\rho_g(x) \cup \rho_h(y) \cup \rho_k(z)$ is at most $6\delta$. \qed \end{lemma}
\begin{lemma} \label{Lem:MovePoint} Suppose that $z'$ is another point in $\mathcal{X}$ so that $d_\mathcal{X}(z, z') \leq R$. Then $d_\mathcal{X}(\rho_k(z), \rho_k(z')) \leq R + 6\delta.$ \qed \end{lemma}
\begin{lemma} \label{Lem:MoveGeodesic} Suppose that $k'$ is another geodesic in $X$ so that the endpoints of $k'$ are within distance $R$ of the points $x$ and $y$. Then $d_X(\rho_k(z), \rho_{k'}(z)) \leq R + 11\delta$. \qed \end{lemma}
We now turn to a useful consequence of the Morse stability of quasi-geodesics in hyperbolic spaces.
\begin{lemma} \label{Lem:FirstReverse} For every $\delta$ and $A$ there is a constant $C$ with the following property: If $\mathcal{X}$ is $\delta$--hyperbolic and $g \colon [0, N] \to \mathcal{X}$ is an $A$--unparameterized quasi-geodesic then for any $m < n < p$ in $[0, N]$ we have: \[ d_\mathcal{X}(x, y) + d_\mathcal{X}(y, z) < d_\mathcal{X}(x, z) + C \] where $x, y, z = g(m), g(n), g(p)$. \qed \end{lemma}
\subsection{A hyperbolicity criterion} \label{Sec:HyperbolicityCriterion}
Here we give a hyperbolicity criterion tailored to our setting. We thank Brian Bowditch both for finding an error in our first proof of \refthm{HyperbolicityCriterion} and for informing us of Gilman's work~\cite{Gilman94, Gilman02}.
Suppose that $\mathcal{X}$ is a graph with all edge-lengths equal to one. Suppose that $\gamma \colon [0, N] \to \mathcal{X}$ is a loop in $\mathcal{X}$ with unit speed. Any pair of points $a, b \in [0, N]$ gives a {\em chord} of $\gamma$. If $a < b$, $N/4 \leq b - a$ and $N/4 \leq a + (N - b)$ then the chord is {\em $1/4$--separated}. The length of the chord is $d_\mathcal{X}(\gamma(a), \gamma(b))$.
Following Gilman~\cite[Theorem~B]{Gilman94} we have:
\begin{theorem} \label{Thm:Gilman} Suppose that $\mathcal{X}$ is a graph with all edge-lengths equal to one. Then $\mathcal{X}$ is Gromov hyperbolic if and only if there is a constant $K$ so that every loop $\gamma \colon [0, N] \to \mathcal{X}$ has a $1/4$--separated chord of length at most $N/7 + K$. \qed \end{theorem}
Gilman's proof goes via the subquadratic isoperimetric inequality. We now give our criterion, noting that it is closely related to another paper of Gilman~\cite{Gilman02}.
\begin{theorem} \label{Thm:HyperbolicityCriterion} Suppose that $\mathcal{X}$ is a graph with all edge-lengths equal to one. Then $\mathcal{X}$ is Gromov hyperbolic if and only if there is a constant $M \geq 0$ and, for all unordered pairs $x, y \in \mathcal{X}^0$, there is a connected subgraph $g_{x, y}$ containing $x$ and $y$ with the following properties: \begin{itemize} \item (Local) If $d_\mathcal{X}(x, y) \leq 1$ then $g_{x,y}$ has diameter at
most $M$. \item (Slim triangles) For all $x, y, z \in \mathcal{X}^0$ the subgraph
$g_{x,y}$ is contained in an $M$--neighborhood of $g_{y,z} \cup
g_{z,x}$. \end{itemize} \end{theorem}
\begin{proof} Suppose that $\gamma \colon [0, N] \to \mathcal{X}$ is a loop. If $\epsilon$ is the empty string let $I_\epsilon = [0,N]$. For any binary string
$\omega$ let $I_{\omega0}$ and $I_{\omega1}$ be the first and second half of $I_\omega$. Note that if $|\omega| \geq \lceil \log_2 N
\rceil$ then $|I_\omega| \leq 1$.
Fix a string $\omega$ and let $[a, b] = I_\omega$. Let $g_\omega$ be the subgraph connecting $\gamma(a)$ to $\gamma(b)$. Note that $g_0 = g_1$ because $\gamma(0) = \gamma(N)$. Also, for any binary string
$\omega$ the subgraphs $g_{\omega}, g_{\omega 0}, g_{\omega 1}$ form an $M$--slim triangle. If $|\omega| \leq \lceil \log_2 N \rceil$ then every $x \in g_\omega$ has some point $b \in I_\omega$ so that \[
d_\mathcal{X}(x, \gamma(b)) \leq M (\lceil \log_2 N \rceil - |\omega|) + 2M. \]
Since $g_0$ is connected there is a point $x \in g_0$ that lies within the $M$--neighborhoods both of $g_{00}$ and of $g_{01}$. Pick some $b \in I_1$ so that $d_\mathcal{X}(x, \gamma(b))$ is bounded as in the previous paragraph. It follows that there is a point $a \in I_0$ so that $a, b$ are $1/4$--separated and so that \[ d_\mathcal{X}(\gamma(a), \gamma(b)) \leq 2M \lceil \log_2 N \rceil + 2M. \] Since this bound grows only logarithmically in $N$, there is an additive error $K$ large enough so that $\mathcal{X}$ satisfies the criterion of \refthm{Gilman}, and we are done. \end{proof}
\section{Natural maps} \label{Sec:Natural}
There are several natural maps between the complexes and graphs defined in \refsec{BackgroundComplexes}. Here we review what is known about their geometric properties, and give examples relevant to the rest of the paper.
\subsection{Lifting, surgery, and subsurface projection}
Suppose that $S$ is not simple. Choose a hyperbolic metric on the interior of $S$ so that all ends have infinite area. Fix a compact essential subsurface $X \subset S$ which is not a peripheral annulus. Let $S^X$ be the cover of $S$ so that $X$ lifts homeomorphically and so that $S^X \mathrel{\cong} {\operatorname{interior}}(X)$. For any $\alpha \in \mathcal{AC}(S)$ let $\alpha^X$ be the full preimage.
Since there is a homeomorphism between $X$ and the Gromov compactification of $S^X$, in a small abuse of notation, we identify $\mathcal{AC}(X)$ with the arc and curve complex of $S^X$.
\begin{definition} \label{Def:CuttingRel} We define the {\em cutting relation} $\kappa_X \colon \mathcal{AC}(S) \to \mathcal{AC}(X)$ as follows: $\alpha' \in \kappa_X(\alpha)$ if and only if $\alpha'$ is an essential non-peripheral component of $\alpha^X$. \end{definition}
Note that $\alpha$ cuts $X$ if and only if $\kappa_X(\alpha)$ is non-empty. Now suppose that $S$ is not an annulus.
\begin{definition} \label{Def:SurgeryRel} We define the {\em surgery relation} $\sigma_S \colon \mathcal{AC}(S) \to \mathcal{C}(S)$ as follows: $\alpha' \in \sigma_S(\alpha)$ if and only if $\alpha' \in \mathcal{C}(S)$ is a boundary component of a regular neighborhood of $\alpha \cup \partial S$. \end{definition}
With $S$ and $X$ as above:
\begin{definition} \label{Def:SubsurfaceProjection} The {\em subsurface projection relation} $\pi_X \colon \mathcal{AC}(S) \to \mathcal{C}(X)$ is defined as follows: If $X$ is not an annulus then define $\pi_X = \sigma_X \circ \kappa_X$. When $X$ is an annulus $\pi_X = \kappa_X$. \end{definition}
If $\alpha, \beta \in \mathcal{AC}(S)$ both cut $X$ we write $d_X(\alpha, \beta) = \operatorname{diam}_X(\pi_X(\alpha) \cup \pi_X(\beta))$. This is the {\em subsurface projection distance} between $\alpha$ and $\beta$ in $X$.
\begin{lemma} \label{Lem:SubsurfaceProjectionLipschitz} Suppose $\alpha, \beta \in \mathcal{AC}(S)$ are disjoint and cut $X$. Then $\operatorname{diam}_X(\pi_X(\alpha)), d_X(\alpha, \beta) \leq 3$. \qed \end{lemma}
See Lemma~2.3 of~\cite{MasurMinsky00} and the remarks in the section Projection Bounds in~\cite{Minsky10}.
\begin{corollary} \label{Cor:ProjectionOfPaths} Fix $X \subset S$. Suppose that $\{\beta_i\}_{i=0}^N$ is a path in $\mathcal{AC}(S)$. Suppose that $\beta_i$ cuts $X$ for all $i$. Then $d_X(\beta_0, \beta_N) \leq 3N + 3$. \qed \end{corollary}
It is crucial to note that if some vertex of $\{ \beta_i \}$ {\em misses} $X$ then the projection distance $d_X(\beta_0, \beta_n)$ may be arbitrarily large compared to $n$. \refcor{ProjectionOfPaths} can be greatly strengthened when the path is a geodesic~\cite{MasurMinsky00}:
\begin{theorem}[Bounded Geodesic Image] \label{Thm:BoundedGeodesicImage} There is a constant $M_0$ with the following property. Fix $X \subset S$. Suppose that $\{\beta_i\}_{i=0}^n$ is a geodesic in $\mathcal{C}(S)$. Suppose that $\beta_i$ cuts $X$ for all $i$. Then $d_X(\beta_0, \beta_n) \leq M_0$. \qed \end{theorem}
Here is a converse for \reflem{SubsurfaceProjectionLipschitz}.
\begin{lemma} \label{Lem:BoundedProjectionImpliesBoundedIntersection} For every $a \in \mathbb{N}$ there is a number $b \in \mathbb{N}$ with the following property: for any $\alpha, \beta \in \mathcal{AC}(S)$ if $d_X(\alpha, \beta) \leq a$ for all $X \subset S$ then $\iota(\alpha, \beta) \leq b$. \end{lemma}
Corollary~D of~\cite{ChoiRafi07} gives a more precise relation between projection distance and intersection number.
\begin{proof}[Proof of \reflem{BoundedProjectionImpliesBoundedIntersection}] We only sketch the contrapositive: Suppose we are given a sequence of curves $\alpha_n, \beta_n$ so that $\iota(\alpha_n, \beta_n)$ tends to infinity. Passing to subsequences and applying elements of the mapping class group we may assume that $\alpha_n = \alpha_0$ for all $n$. Setting $c_n = \iota(\alpha_0, \beta_n)$ and passing to subsequences again we may assume that $\beta_n / c_n$ converges to
$\lambda \in \mathcal{PML}(S)$, the projectivization of Thurston's space of measured laminations. Let $Y$ be any connected component of the subsurface filled by $\lambda$, chosen so that $\alpha_0$ cuts $Y$. Note that $\pi_Y(\beta_n)$ converges to $\lambda|_Y$. Again applying Kobayashi's argument~\cite{Kobayashi88b}, the distance $d_Y(\alpha_0, \beta_n)$ tends to infinity. \end{proof}
\subsection{Inclusions}
We now record a well known fact:
\begin{lemma} \label{Lem:C(S)QuasiIsometricToAC(S)} The inclusion $\nu \colon \mathcal{C}(S) \to \mathcal{AC}(S)$ is a quasi-isometry. The surgery map $\sigma_S \colon \mathcal{AC}(S) \to \mathcal{C}(S)$ is a quasi-inverse for $\nu$. \end{lemma}
\begin{proof} Fix $\alpha, \beta \in \mathcal{C}(S)$. Since $\nu$ is an inclusion we have $d_\mathcal{AC}(\alpha, \beta) \leq d_S(\alpha, \beta)$. In the other direction, let $\{ \alpha_i \}_{i = 0}^N$ be a geodesic in $\mathcal{AC}(S)$ connecting $\alpha$ to $\beta$. Since every $\alpha_i$ cuts $S$ we apply \refcor{ProjectionOfPaths} and deduce $d_S(\alpha, \beta) \leq 3N + 3$.
Note that the composition $\sigma_S \circ \nu = \operatorname{Id}|\mathcal{C}(S)$. Also, for any arc $\alpha \in \mathcal{A}(S)$ we have $d_\mathcal{AC}(\alpha, \nu(\sigma_S(\alpha))) = 1$. Finally, $\mathcal{C}(S)$ is $1$--dense in $\mathcal{AC}(S)$, as any arc $\gamma \subset S$ is disjoint from the one or two curves of $\sigma_S(\gamma)$. \end{proof}
Brian Bowditch raised the question, at the Newton Institute in August 2003, of the geometric properties of the inclusion $\mathcal{A}(S) \to \mathcal{AC}(S)$. The natural assumption, that this inclusion is again a quasi-isometric embedding, is false. In this paper we will exactly characterize how the inclusion distorts distance.
We now move up a dimension. Suppose that $V$ is a handlebody and $S = \partial V$. We may take any disk $D \in \mathcal{D}(V)$ to its boundary $\partial D \in \mathcal{C}(S)$, giving an inclusion $\nu \colon \mathcal{D}(V) \to \mathcal{C}(S)$. It is important to distinguish the disk complex from its image $\nu(\mathcal{D}(V))$; thus we will call the image the {\em disk set}.
The first author and Minsky~\cite{MasurMinsky04} have shown:
\begin{theorem} \label{Thm:DiskComplexConvex} The disk set is a quasi-convex subset of the curve complex. \qed \end{theorem}
It is natural to ask if this map is a quasi-isometric embedding; if so, the hyperbolicity of $\mathcal{D}(V)$ would follow immediately. In fact, the inclusion again badly distorts distance and we investigate exactly how below.
\subsection{Markings and the mapping class group}
Once the connectedness of $\mathcal{M}(S)$ is in hand, it is possible to use local finiteness to show that $\mathcal{M}(S)$ is quasi-isometric to the Cayley graph of the mapping class group~\cite{MasurMinsky00}.
Using subsurface projections the first author and Minsky~\cite{MasurMinsky00} obtained a {\em distance estimate} for the marking complex and thus for the mapping class group.
\begin{theorem} \label{Thm:MarkingGraphDistanceEstimate} There is a constant ${C_0} = {C_0}(S)$ so that, for any $c \geq {C_0}$ there is a constant $A$ with \[ d_\mathcal{M}(\mu, \mu') \,\, \mathbin{=_A} \,\, \sum [d_X(\mu, \mu')]_c \] independent of the choice of $\mu$ and $\mu'$. Here the sum ranges over all essential, non-peripheral subsurfaces $X \subset S$. \end{theorem}
This, and their similar estimate for the pants graph, is a model for the distance estimates given below. Notice that a filling marking $\mu \in \mathcal{M}(S)$ cuts all essential, non-peripheral subsurfaces of $S$. It is not an accident that the sum ranges over the same set.
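As a reminder of the notation used here and in \refthm{LowerBound} below (we assume the usual conventions, which are fixed earlier in the paper): $[x]_c$ denotes the cut-off function, so
\[
[x]_c = x \text{ when } x \geq c, \qquad [x]_c = 0 \text{ otherwise,}
\]
while $a \mathbin{=_A} b$ means that $a \leq A\, b + A$ and $b \leq A\, a + A$.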
\section{Holes in general and the lower bound on distance} \label{Sec:Holes}
Suppose that $S$ is a compact connected surface. In this paper a {\em combinatorial complex} $\mathcal{G}(S)$ will have vertices being isotopy classes of certain multicurves in $S$. We will assume throughout that vertices of $\mathcal{G}(S)$ are connected by edges only if there are representatives which are disjoint. This assumption is made only to simplify the proofs --- all arguments work in the case where adjacent vertices are allowed to have uniformly bounded intersection. In all cases $\mathcal{G}$ will be connected. There is a natural map $\nu \colon \mathcal{G} \to \mathcal{AC}(S)$ taking a vertex of $\mathcal{G}$ to the isotopy classes of the components. Examples in the literature include the marking complex~\cite{MasurMinsky00}, the pants complex~\cite{Brock03a} \cite{BehrstockEtAl05}, the Hatcher-Thurston complex~\cite{HatcherThurston80},
the complex of separating curves~\cite{BrendleMargalit04}, the arc complex and the curve complexes themselves.
For any combinatorial complex $\mathcal{G}$ defined in this paper {\em other than the curve complex} we will denote distance in the one-skeleton of $\mathcal{G}$ by $d_\mathcal{G}(\cdot,\cdot)$. Distance in $\mathcal{C}(S)$ will always be denoted by $d_S(\cdot, \cdot)$.
\subsection{Holes, defined}
Suppose that $S$ is non-simple. Suppose that $\mathcal{G}(S)$ is a combinatorial complex. Suppose that $X \subset S$ is a cleanly embedded subsurface. A vertex $\alpha \in \mathcal{G}$ {\em cuts} $X$ if some component of $\alpha$ cuts $X$.
\begin{definition} \label{Def:Hole} We say $X \subset S$ is a {\em hole} for $\mathcal{G}$ if every vertex of $\mathcal{G}$ cuts $X$. \end{definition}
Almost equivalently, if $X$ is a hole then the subsurface projection $\pi_X \colon \mathcal{G}(S) \to \mathcal{C}(X)$ never takes the empty set as a value.
Note that the entire surface $S$ is always a hole, regardless of our choice of $\mathcal{G}$.
A boundary parallel annulus cannot be cleanly embedded (unless $S$ is also an annulus), so generally cannot be a hole. A hole $X \subset S$ is {\em strict} if $X$ is not homeomorphic to $S$.
We now classify the holes for $\mathcal{A}(S)$.
\begin{example} \label{Exa:HolesArcComplex} Suppose that $S = S_{g,b}$ with $b > 0$ and consider the arc complex $\mathcal{A}(S)$. The holes, up to isotopy, are exactly the cleanly embedded surfaces which contain $\partial S$. So, for example, if $S$ is planar then only $S$ is a hole for $\mathcal{A}(S)$. The same holds for $S = S_{1,1}$. In these cases it is an exercise to show that $\mathcal{C}(S)$ and $\mathcal{A}(S)$ are quasi-isometric.
In all other cases the arc complex admits infinitely many holes. \end{example}
\begin{definition} \label{Def:DiameterOfHole} If $X$ is a hole and if $\pi_X(\mathcal{G}) \subset \mathcal{C}(X)$ has diameter at least $R$ we say that the hole $X$ has {\em diameter} at least $R$. \end{definition}
\begin{example} Continuing the example above: Since the mapping class group acts on the arc complex, all non-simple holes for $\mathcal{A}(S)$ have infinite diameter. \end{example}
Suppose now that $X, X' \subset S$ are disjoint holes for $\mathcal{G}$. In the presence of symmetry there can be a relationship between
$\pi_X|\mathcal{G}$ and $\pi_{X'}|\mathcal{G}$ as follows:
\begin{definition} \label{Def:PairedHoles} Suppose that $X, X'$ are holes for $\mathcal{G}$, both of infinite diameter. Then $X$ and $X'$ are {\em paired} if there is a homeomorphism $\tau \colon X \to X'$ and a constant $L_4$ so that $$ d_{X'}(\pi_{X'}(\gamma), \tau(\pi_X(\gamma))) \leq L_4 $$ for every $\gamma \in \mathcal{G}$. Furthermore, if $Y \subset X$ is a hole then $\tau$ pairs $Y$ with $Y' = \tau(Y)$. Lastly, pairing is required to be symmetric; if $\tau$ pairs $X$ with $X'$ then $\tau^{-1}$ pairs $X'$ with $X$. \end{definition}
\begin{definition} \label{Def:Interfere} Two holes $X$ and $Y$ {\em interfere} if either $X \cap Y \neq \emptyset$ or $X$ is paired with $X'$ and $X' \cap Y \neq \emptyset$. \end{definition}
Examples arise in the symmetric arc complex and in the discussion of twisted $I$--bundles inside of a handlebody.
\subsection{Projection to holes is coarsely Lipschitz}
The following lem\-ma is used repeatedly throughout the paper:
\begin{lemma} \label{Lem:LipschitzToHoles} Suppose that $\mathcal{G}(S)$ is a combinatorial complex. Suppose that $X$ is a hole for $\mathcal{G}$. Then for any $\alpha, \beta \in \mathcal{G}$ we have $$d_X(\alpha, \beta) \leq 3 + 3 \cdot d_\mathcal{G}(\alpha, \beta).$$ The additive error is required only when $\alpha = \beta$. \end{lemma}
\begin{proof} This follows directly from Corollary~\ref{Cor:ProjectionOfPaths} and our assumption that vertices of $\mathcal{G}$ connected by an edge represent disjoint multicurves. \end{proof}
\subsection{Infinite diameter holes} \label{Sec:InfiniteDiameterHoles}
We may now state a first answer to Bowditch's question.
\begin{lemma} \label{Lem:InfiniteDiameterHoleImpliesNotQIEmbedded} Suppose that $\mathcal{G}(S)$ is a combinatorial complex. Suppose that there is a strict hole $X \subset S$ having infinite diameter. Then $\nu \colon \mathcal{G} \to \mathcal{AC}(S)$ is not a quasi-isometric embedding. \qed \end{lemma}
This lemma and \refexa{HolesArcComplex} completely determine when the inclusion of $\mathcal{A}(S)$ into $\mathcal{AC}(S)$ is a quasi-isometric embedding. It quickly becomes clear that the set of holes tightly constrains the intrinsic geometry of a combinatorial complex.
\begin{lemma} \label{Lem:DisjointHolesImpliesNotHyperbolic} Suppose that $\mathcal{G}(S)$ is a combinatorial complex invariant under the natural action of $\mathcal{MCG}(S)$. Then every non-simple hole for $\mathcal{G}$ has infinite diameter. Furthermore, if $X, Y \subset S$ are disjoint non-simple holes for $\mathcal{G}$ then there is a quasi-isometric embedding of $\mathbb{Z}^2$ into $\mathcal{G}$. \qed \end{lemma}
We will not use Lemmas~\ref{Lem:InfiniteDiameterHoleImpliesNotQIEmbedded} or~\ref{Lem:DisjointHolesImpliesNotHyperbolic} and so omit the proofs. Instead our interest lies in proving the far more powerful distance estimate (Theorems~\ref{Thm:LowerBound} and~\ref{Thm:UpperBound}) for $\mathcal{G}(S)$.
\subsection{A lower bound on distance}
Here we see that the sum of projection distances in holes gives a lower bound for distance.
\begin{theorem} \label{Thm:LowerBound} Fix $S$, a compact connected non-simple surface. Suppose that $\mathcal{G}(S)$ is a combinatorial complex. Then there is a constant ${C_0}$ so that for all $c \geq {C_0}$ there is a constant $A$ satisfying \[ \sum [d_X(\alpha, \beta)]_c \mathbin{\leq_{A}} d_\mathcal{G}(\alpha, \beta). \] Here $\alpha, \beta \in \mathcal{G}$ and the sum is taken over all holes $X$ for the complex $\mathcal{G}$. \qed \end{theorem}
The proof follows the proof of Theorems~6.10 and~6.12 of~\cite{MasurMinsky00}, practically word for word. The only changes necessary are to \begin{itemize} \item replace the sum over {\em all} subsurfaces by the sum over all holes, \item replace Lemma~2.5 of~\cite{MasurMinsky00}, which records how markings differing by an elementary move project to an essential subsurface, by \reflem{LipschitzToHoles} of this paper, which records how $\mathcal{G}$ projects to a hole. \end{itemize}
One major goal of this paper is to give criteria sufficient to obtain the reverse inequality, \refthm{UpperBound}.
\section{Holes for the non-orientable surface} \label{Sec:HolesNonorientable}
Fix $F$ a compact, connected, and non-orientable surface. Let $S$ be the orientation double cover with covering map $\rho_F \colon S \to F$. Let $\tau \colon S \to S$ be the associated involution; so for all $x \in S$, $\rho_F(x) = \rho_F(\tau(x))$.
\begin{definition} A multicurve $\gamma \subset \mathcal{AC}(S)$ is {\em symmetric} if $\tau(\gamma) \cap \gamma = \emptyset$ or $\tau(\gamma) = \gamma$.
A multicurve $\gamma$ is {\em invariant} if there is a curve or arc $\gamma' \subset F$ so that $\gamma = \rho_F^{-1}(\gamma')$. The same definitions hold for subsurfaces $X \subset S$. \end{definition}
\begin{definition} The {\em invariant complex} $\mathcal{C}^\tau(S)$ is the simplicial complex with vertex set being isotopy classes of invariant multicurves. There is a $k$--simplex for every collection of $k+1$ distinct isotopy classes having pairwise disjoint representatives. \end{definition}
Notice that $\mathcal{C}^\tau(S)$ is simplicially isomorphic to $\mathcal{C}(F)$. There is also a natural map $\nu \colon \mathcal{C}^\tau(S) \to \mathcal{C}(S)$. We will prove:
\begin{lemma} \label{Lem:InvariantQI} $\nu \colon \mathcal{C}^\tau(S) \to \mathcal{C}(S)$ is a quasi-isometric embedding. \end{lemma}
It thus follows from the hyperbolicity of $\mathcal{C}(S)$ that:
\begin{corollary}[\cite{BestvinaFujiwara07}] \label{Cor:NonorientableCurveComplexHyperbolic} $\mathcal{C}(F)$ is Gromov hyperbolic. \qed \end{corollary}
We begin the proof of \reflem{InvariantQI}: since adjacent vertices of $\mathcal{C}^\tau(S)$ are represented by disjoint multicurves we have \begin{equation} \label{Eqn:NonOrientableLowerBound} d_S(\alpha, \beta) \leq d_{\mathcal{C}^\tau}(\alpha, \beta), \end{equation} as long as $\alpha$ and $\beta$ are distinct in $\mathcal{C}^\tau(S)$. In fact, since the surface $S$ itself is a hole for $\mathcal{C}^\tau(S)$ we may deduce a slightly weaker lower bound from \reflem{LipschitzToHoles} or indeed from \refthm{LowerBound}.
The other half of the proof of \reflem{InvariantQI} consists of showing that $S$ is the {\em only} hole for $\mathcal{C}^\tau(S)$ with large diameter. After a discussion of {Teichm\"uller~} geodesics we will prove:
\begin{restate}{Lemma}{Lem:SymmetricSurfaces} There is a constant $K$ with the following property: Suppose that $\alpha, \beta$ are invariant multicurves in $S$. Suppose that $X \subset S$ is an essential subsurface where $d_X(\alpha, \beta) > K$. Then $X$ is symmetric. \end{restate}
From this it follows that:
\begin{corollary} \label{Cor:NonorientableHoles} With $K$ as in \reflem{SymmetricSurfaces}: If $X \subset S$ is a hole for $\mathcal{C}^\tau(S)$ with diameter greater than $K$ then $X = S$. \end{corollary}
\begin{proof} Suppose that $X \subset S$ is a strict subsurface, cleanly embedded. Suppose that $\operatorname{diam}_X(\mathcal{C}^\tau(S)) > K$. Thus $X$ is symmetric. It follows that $\partial X {\smallsetminus} \partial S$ is also symmetric; so any component $\gamma \subset \partial X {\smallsetminus} \partial S$ gives an invariant multicurve $\gamma \cup \tau(\gamma)$ which does not cut $X$. We deduce that $X$ is not a hole for $\mathcal{C}^\tau(S)$. \end{proof}
This corollary, together with the upper bound (\refthm{UpperBound}), proves \reflem{InvariantQI}.
\section{Holes for the arc complex} \label{Sec:HolesArcComplex}
Here we generalize the definition of the arc complex and classify its holes.
\begin{definition} \label{Def:RelativeArcComplex} Suppose that $S$ is a non-simple surface with boundary. Let $\Delta$ be a non-empty collection of components of $\partial S$. The {\em arc complex} $\mathcal{A}(S, \Delta)$ is the subcomplex of $\mathcal{A}(S)$ spanned by essential arcs $\alpha \subset S$ with $\partial \alpha \subset \Delta$. \end{definition}
Note that $\mathcal{A}(S, \partial S)$ and $\mathcal{A}(S)$ are identical.
\begin{lemma} \label{Lem:ArcComplexHoles} Suppose $X \subset S$ is cleanly embedded. Then $X$ is a hole for $\mathcal{A}(S, \Delta)$ if and only if $\Delta \subset \partial X$. \qed \end{lemma}
This follows directly from the definition of a hole. We now have a straightforward observation:
\begin{lemma} \label{Lem:ArcComplexHolesIntersect} If $X, Y \subset S$ are holes for $\mathcal{A}(S, \Delta)$ then $X \cap Y \neq \emptyset$. \qed \end{lemma}
The proof follows immediately from \reflem{ArcComplexHoles}. \reflem{DisjointHolesImpliesNotHyperbolic} indicates that \reflem{ArcComplexHolesIntersect} is essential to proving that $\mathcal{A}(S, \Delta)$ is Gromov hyperbolic.
In order to prove the upper bound theorem for $\mathcal{A}$ we will use pants decompositions of the surface $S$. In an attempt to avoid complications in the non-orientable case we must carefully lift to the orientation cover.
Suppose that $F$ is non-simple, non-orientable, and has non-empty boundary. Let $\rho_F \colon S \to F$ be the orientation double cover and let $\tau \colon S \to S$ be the induced involution. Fix $\Delta' \subset \partial F$ and let $\Delta = \rho_F^{-1}(\Delta')$.
\begin{definition} We define $\mathcal{A}^\tau(S, \Delta)$ to be the {\em invariant arc complex}: vertices are invariant multi-arcs and simplices arise from disjointness. \end{definition}
Again, $\mathcal{A}^\tau(S, \Delta)$ is simplicially isomorphic to $\mathcal{A}(F, \Delta')$. If $X \cap \tau(X) = \emptyset$ and $\Delta \subset X \cup \tau(X)$ then the subsurfaces $X$ and $\tau(X)$ are paired holes, as in \refdef{PairedHoles}. Notice as well that all non-simple symmetric holes $X \subset S$ for $\mathcal{A}^\tau(S, \Delta)$ have infinite diameter.
Unlike $\mathcal{A}(F, \Delta')$ the complex $\mathcal{A}^\tau(S, \Delta)$ may have disjoint holes. None\-the\-less, we have:
\begin{lemma} \label{Lem:ArcComplexHolesInterfere} Any two non-simple holes for $\mathcal{A}^\tau(S, \Delta)$ interfere. \end{lemma}
\begin{proof} Suppose that $X, Y$ are holes for the $\tau$--invariant arc complex, $\mathcal{A}^\tau(S, \Delta)$. It follows from \reflem{SymmetricSurfaces} that $X$ is symmetric with $\Delta \subset X \cup \tau(X)$. The same holds for $Y$. Thus $Y$ must cut either $X$ or $\tau(X)$. \end{proof}
\section{Background on three-manifolds} \label{Sec:BackgroundThreeManifolds}
Before discussing the holes in the disk complex, we record a few facts about handlebodies and $I$--bundles.
Fix $M$ a compact connected irreducible three-manifold. Recall that $M$ is {\em irreducible} if every embedded two-sphere in $M$ bounds a three-ball. Recall that if $N$ is a closed submanifold of $M$ then $\operatorname{fr}(N)$, the frontier of $N$ in $M$, is the closure of $\partial N {\smallsetminus} \partial M$.
\subsection{Compressions}
Suppose that $F$ is a surface embedded in $M$. Then $F$ is {\em compressible} if there is a disk $B$ embedded in $M$ with $B \cap \partial M = \emptyset$, $B \cap F = \partial B$, and $\partial B$ essential in $F$. Any such disk $B$ is called a {\em compression} of $F$.
In this situation form a new surface $F'$ as follows: Let $N$ be a closed regular neighborhood of $B$. First remove from $F$ the annulus $N \cap F$. Now form $F'$ by gluing on both disk components of $\partial N {\smallsetminus} F$. We say that $F'$ is obtained by {\em compressing} $F$ along $B$. If no such disk exists we say $F$ is {\em incompressible}.
\begin{definition} \label{Def:BdyCompression} A properly embedded surface $F$ is {\em boundary compressible} if there is a disk $B$ embedded in $M$ with \begin{itemize} \item ${\operatorname{interior}}(B) \cap \partial M = \emptyset$, \item $\partial B$ is a union of connected arcs $\alpha$ and $\beta$, \item $\alpha \cap \beta = \partial \alpha = \partial \beta$, \item $B \cap F = \alpha$ and $\alpha$ is properly embedded in $F$, \item $B \cap \partial M = \beta$, and \item $\beta$ is essential in $\partial M {\smallsetminus} \partial F$. \end{itemize} \end{definition}
A disk, like $B$, with boundary partitioned into two arcs is called a {\em bigon}. Note that this definition of boundary compression is slightly weaker than some found in the literature; the arc $\alpha$ is often required to be essential in $F$. We do not require this additional property because, for us, $F$ will usually be a properly embedded disk in a handlebody.
Just as for compressing disks we may {\em boundary compress} $F$ along $B$ to obtain a new surface $F'$: Let $N$ be a closed regular neighborhood of $B$. First remove from $F$ the rectangle $N \cap F$. Now form $F'$ by gluing on both bigon components of $\operatorname{fr}(N) {\smallsetminus} F$. Again, $F'$ is obtained by {\em boundary compressing} $F$ along $B$. Note that the relevant boundary components of $F$ and $F'$ cobound a pair of pants embedded in $\partial M$. If no boundary compression exists then $F$ is {\em boundary incompressible}.
\begin{remark} \label{Rem:SurfacesInHandlebodies} Recall that any surface $F$ properly embedded in a handlebody $V_g$, $g \geq 2$, is either compressible or boundary compressible. \end{remark}
Suppose now that $F$ is properly embedded in $M$ and $\Gamma$ is a multicurve in $\partial M$.
\begin{remark} \label{Rem:IntersectionNumber} Suppose that $F'$ is obtained by a boundary compression of $F$ performed in the complement of $\Gamma$. Suppose that $F' = F_1 \cup F_2$ is disconnected and each $F_i$ cuts $\Gamma$. Then $\iota(\partial F_i, \Gamma) < \iota(\partial F, \Gamma)$ for $i = 1, 2$.
\end{remark}
It is often useful to restrict our attention to boundary compressions meeting a single subsurface of $\partial M$. So suppose that $X \subset \partial M$ is an essential subsurface. Suppose that $\partial F$ is tight with respect to $\partial X$. Suppose $B$ is a boundary compression of $F$. If $B \cap \partial M \subset X$ we say that $F$ is {\em boundary compressible into $X$}.
\begin{lemma} \label{Lem:XCompressibleImpliesBdyXCompressible} Suppose that $M$ is irreducible. Fix $X$ a connected essential subsurface of $\partial M$. Let $F \subset M$ be a properly embedded, incompressible surface. Suppose that $\partial X$ and $\partial F$ are tight and that $X$ compresses in $M$. Then either: \begin{itemize} \item $F \cap X = \emptyset$, \item $F$ is boundary compressible into $X$, or \item $F$ is a disk with $\partial F \subset X$. \end{itemize} \end{lemma}
\begin{proof} Suppose that $X$ is compressible via a disk $E$. Isotope $E$ to make $\partial E$ tight with respect to $\partial F$. This can be done while maintaining $\partial E \subset X$ because $\partial F$ and $\partial X$ are tight. Since $M$ is irreducible and $F$ is incompressible we may isotope $E$, rel $\partial$, to remove all simple closed curves of $F \cap E$. If $F \cap E$ is non-empty then an outermost bigon of $E$ gives the desired boundary compression lying in $X$.
Suppose instead that $F \cap E = \emptyset$ but $F$ does cut $X$. Let $\delta \subset X$ be a simple arc meeting each of $F$ and $E$ in exactly one endpoint. Let $N$ be a closed regular neighborhood of $\delta \cup E$. Note that $\operatorname{fr}(N) {\smallsetminus} F$ has three components. One is a properly embedded disk parallel to $E$ and the other two $B, B'$ are bigons attached to $F$. At least one of these, say $B'$ is trivial in the sense that $B' \cap \partial M$ is a trivial arc embedded in $\partial M {\smallsetminus} \partial F$. If $B$ is non-trivial then $B$ provides the desired boundary compression.
Suppose that $B$ is also trivial. It follows that $\partial E$ and one component $\gamma \subset \partial F$ cobound an annulus $A \subset X$. So $D = A \cup E$ is a disk with $(D, \partial D) \subset (M, F)$. As $\partial D = \gamma$, $F$ is incompressible, and $M$ is irreducible, we deduce that $F$ is isotopic to $E$.
\end{proof}
\subsection{Band sums}
A {\em band sum} is the inverse operation to boundary compression: Fix a pair of disjoint properly embedded surfaces $F_1, F_2 \subset M$. Let $F' = F_1 \cup F_2$. Fix a simple arc $\delta \subset \partial M$ so that $\delta$ meets each of $F_1$ and $F_2$ in exactly one point of $\partial \delta$. Let $N \subset M$ be a closed regular neighborhood of $\delta$. Form a new surface by adding to $F' {\smallsetminus} N$ the rectangle component of $\operatorname{fr}(N) {\smallsetminus} F'$.
The surface $F$ obtained is the result of {\em band summing} $F_1$ to $F_2$ along $\delta$. Note that $F$ has a boundary compression {\em dual} to $\delta$ yielding $F'$: that is, there is a boundary compression $B$ for $F$ so that $\delta \cap B$ is a single point and compressing $F$ along $B$ gives $F'$.
\subsection{Handlebodies and I-bundles}
Recall that handlebodies are irreducible.
Suppose that $F$ is a compact connected surface with at least one boundary component. Let $T$ be the orientation $I$--bundle over $F$. If $F$ is orientable then $T \mathrel{\cong} F \times I$. If $F$ is not orientable then $T$ is the unique $I$--bundle over $F$ with orientable total space. We call $T$ the {\em $I$--bundle} and $F$ the {\em base space}. Let $\rho_F \colon T \to F$ be the associated bundle map. Note that $T$ is homeomorphic to a handlebody.
If $A \subset T$ is a union of fibers of the map $\rho_F$ then $A$ is {\em vertical} with respect to $T$. In particular take $\partial_v T = \rho_F^{-1}(\partial F)$ to be the {\em vertical boundary} of $T$. Take $\partial_h T$ to be the union of the boundaries of all of the fibers: this is the {\em horizontal boundary} of $T$. Note that $\partial_h T$ is always incompressible in $T$ while $\partial_v T$ is incompressible in $T$ as long as $F$ is not homeomorphic to a disk.
Note that, as $|\partial_v T| \geq 1$, any vertical surface in $T$ can be boundary compressed. However no vertical surface in $T$ may be boundary compressed into $\partial_h T$.
We end this section with:
\begin{lemma} \label{Lem:BdyIncompImpliesVertical} Suppose that $F$ is a compact, connected surface with $\partial F \neq \emptyset$. Let $\rho_F \colon T \to F$ be the orientation $I$--bundle over $F$. Let $X$ be a component of $\partial_h T$. Let $D \subset T$ be a properly embedded disk. If \begin{itemize} \item $\partial D$ is essential in $\partial T$,
\item $\partial D$ and $\partial X$ are tight, and \item $D$ cannot be boundary compressed into $X$ \end{itemize} then $D$ may be properly isotoped to be vertical with respect to $T$. \qed \end{lemma}
\section{Holes for the disk complex} \label{Sec:HolesDisk}
Here we begin to classify the holes for the disk complex, a more difficult analysis than that of the arc complex. To fix notation let $V$ be a handlebody. Let $S = S_g = \partial V$. Recall that there is a natural inclusion $\nu \colon \mathcal{D}(V) \to \mathcal{C}(S)$.
\begin{remark} \label{Rem:ComplementOfHoleIsIncompressible} The notion of a hole $X \subset \partial V$ for $\mathcal{D}(V)$ may be phrased in several different ways: \begin{itemize} \item every essential disk $D \subset V$ cuts the surface $X$, \item $\overline{S {\smallsetminus} X}$ is incompressible in $V$, or \item $X$ is {\em disk-busting} in $V$. \end{itemize} \end{remark}
The classification of holes $X \subset S$ for $\mathcal{D}(V)$ breaks roughly into three cases: either $X$ is an annulus, is compressible in $V$, or is incompressible in $V$. In each case we obtain a result:
\begin{restate}{Theorem}{Thm:Annuli} Suppose $X$ is a hole for $\mathcal{D}(V)$ and $X$ is an annulus. Then the diameter of $X$ is at most $5$. \end{restate}
\begin{restate}{Theorem}{Thm:CompressibleHoles} Suppose $X$ is a compressible hole for $\mathcal{D}(V)$ with diameter at least $15$. Then there are a pair of essential disks $D, E \subset V$ so that \begin{itemize} \item $\partial D, \partial E \subset X$ and \item $\partial D$ and $\partial E$ fill $X$. \end{itemize} \end{restate}
\begin{restate}{Theorem}{Thm:IncompressibleHoles} Suppose $X$ is an incompressible hole for $\mathcal{D}(V)$ with diameter at least $61$. Then there is an $I$--bundle $\rho_F \colon T \to F$ embedded in $V$ so that \begin{itemize} \item $\partial_h T \subset S$, \item $X$ is isotopic in $S$ to a component of $\partial_h T$, \item some component of $\partial_v T$ is boundary parallel into $S$,
\item $F$ supports a pseudo-Anosov map. \end{itemize} \end{restate}
As a corollary of these theorems we have:
\begin{corollary} \label{Cor:LargeImpliesInfiniteDiam} If $X$ is a hole for $\mathcal{D}(V)$ with diameter at least $61$ then $X$ has infinite diameter. \end{corollary}
\begin{proof}
If $X$ is a hole with diameter at least $61$ then either \refthm{CompressibleHoles} or~\refthm{IncompressibleHoles} applies.
If $X$ is compressible then the composition of Dehn twists, in opposite directions, about the given disks $D$ and $E$ yields an automorphism $f \colon V \to V$
so that $f|X$ is pseudo-Anosov. This follows from Thurston's construction~\cite{Thurston88}. By \reflem{SimpleSurfaces} the hole $X$ has infinite diameter.
If $X$ is incompressible then $X \subset \partial_h T$ where $\rho_F \colon T \to F$ is the given $I$--bundle. Let $f \colon F \to F$ be the given pseudo-Anosov map. So $g$, the suspension of $f$, gives an automorphism of $V$. Again it follows that the hole $X$ has infinite diameter. \end{proof}
Applying \reflem{InfiniteDiameterHoleImpliesNotQIEmbedded} we find another corollary:
\begin{theorem} \label{Thm:D(V)NotQuasiIsomEmbeddedInC(S)} If $S = \partial V$ contains a strict hole with diameter at least $61$ then the inclusion $\nu \colon \mathcal{D}(V) \to \mathcal{C}(S)$ is not a quasi-isometric embedding. \qed \end{theorem}
\section{Holes for the disk complex -- annuli} \label{Sec:Annuli}
The proof of \refthm{Annuli} occupies the rest of this section. This proof shares many features with the proofs of Theorems~\ref{Thm:CompressibleHoles} and~\ref{Thm:IncompressibleHoles}. However, the exceptional definition of $\mathcal{C}(S_{0,2})$ prevents a unified approach. Fix $V$, a handlebody.
\begin{theorem} \label{Thm:Annuli} Suppose $X$ is a hole for $\mathcal{D}(V)$ and $X$ is an annulus. Then the diameter of $X$ is at most $5$. \end{theorem}
We begin with:
\begin{proofclaim}
For all $D \in \mathcal{D}(V)$, $|D \cap X| \geq 2$. \end{proofclaim}
\begin{proof}
Since $X$ is a hole, every disk cuts $X$. Let $\alpha$ be a core curve of the annulus $X$. If $|D \cap X| = 1$, then we may band sum parallel copies of $D$ along a subarc of $\alpha$. The resulting disk misses $\alpha$, a contradiction. \end{proof}
Assume, to obtain a contradiction, that $X$ has diameter at least $6$.
Suppose that $D \in \mathcal{D}(V)$ is a disk chosen to minimize $D \cap X$. Among all disks $E \in \mathcal{D}(V)$ with $d_X(D, E) \geq 3$
choose one which minimizes $|D \cap E|$. Isotope $D$ and $E$ to make the boundaries tight and also tight with respect to $\partial X$. Tightening triples of curves is not canonical; nonetheless there is a tightening so that $S {\smallsetminus} (\partial D \cup \partial E \cup X)$ contains no triangles. See Figure~\ref{Fig:NoTriangles}.
\begin{figure}
\caption{Triangles outside of $X$ (see the left side) can be moved in
(see the right side). This decreases the number of points of $D
\cap E \cap (S {\smallsetminus} X)$.}
\label{Fig:NoTriangles}
\end{figure}
After this tightening we have: \begin{proofclaim} Every arc of $\partial D \cap X$ meets every arc of $\partial E \cap X$ at least once. \end{proofclaim}
\begin{proof} Fix component arcs $\alpha \subset D \cap X$ and $\beta \subset E \cap X$. Let $\alpha', \beta'$ be the corresponding arcs in $S^X$, the annular cover of $S$ corresponding to $X$. After the tightening we find that \[ |\alpha \cap \beta| \geq |\alpha' \cap \beta'| - 1. \] Since $d_X(D, E) \geq 3$, \refeqn{DistanceInAnnulus} implies that $|\alpha' \cap \beta'| \geq 2$. Thus $|\alpha \cap \beta| \geq 1$, as desired. \end{proof}
\begin{proofclaim} There is an outermost bigon $B \subset E {\smallsetminus} D$ with the following properties: \begin{itemize} \item $\partial B = \alpha \cup \beta$ where $\alpha = B \cap D$, $\beta = \partial
B {\smallsetminus} \alpha \subset \partial E$, \item $\partial \alpha = \partial \beta \subset X$, and \item
$|\beta \cap X| = 2$. \end{itemize}
Furthermore, $|D \cap X| = 2$. \end{proofclaim}
See the lower right of Figure~\ref{Fig:PossibleAlphas} for a picture.
\begin{proof} Consider the intersection of $D$ and $E$, thought of as a collection of arcs and curves in $E$. Any simple closed curve component of $D \cap E$ can be removed by an isotopy of $E$, fixed on the boundary.
(This follows from the irreducibility of $V$ and an innermost disk argument.) Since we have assumed that $|D \cap E|$ is minimal it follows that there are no simple closed curves in $D \cap E$.
So consider any outermost bigon $B \subset E {\smallsetminus} D$. Let $\alpha = B \cap D$. Let $\beta = \partial B {\smallsetminus} \alpha = B \cap \partial V$. Note that $\beta$ cannot completely contain a component of $E \cap X$ as this would contradict either the fact that $B$ is outermost or the claim that every arc of $E \cap X$ meets some arc of $D \cap X$. Using this observation, Figure~\ref{Fig:PossibleAlphas} lists the possible ways for $B$ to lie inside of $E$.
\begin{figure}
\caption{The arc $\alpha$ cuts a bigon $B$ off of $E$. The darker
part of $\partial E$ consists of the arcs of $E \cap X$. Either $\beta$ is
disjoint from $X$, $\beta$ is contained in $X$, $\beta$ meets $X$ in
a single subarc, or $\beta$ meets $X$ in two subarcs.}
\label{Fig:PossibleAlphas}
\end{figure}
Let $D'$ and $D''$ be the two essential disks obtained by boundary compressing $D$ along the bigon $B$. Suppose $\alpha$ is as shown in one of the first three pictures of Figure~\ref{Fig:PossibleAlphas}. It follows that either $D'$ or $D''$ has, after tightening, smaller intersection with $X$ than $D$ does, a contradiction. We deduce that $\alpha$ is as pictured in lower right of Figure~\ref{Fig:PossibleAlphas}.
Boundary compressing $D$ along $B$ still gives disks $D', D'' \in
\mathcal{D}(V)$. As these cannot have smaller intersection with $X$ we deduce that $|D \cap X| \leq 2$ and the claim holds. \end{proof}
Using the same notation as in the proof above, let $B$ be an outermost bigon of $E {\smallsetminus} D$. We now study how $\alpha \subset \partial B$ lies inside of $D$.
\begin{proofclaim} The arc $\alpha \subset D$ connects distinct components of $D \cap X$. \end{proofclaim}
\begin{proof} Suppose not. Then there is a bigon $C \subset D {\smallsetminus} \alpha$ with $\partial C = \alpha \cup \gamma$ and $\gamma \subset \partial D \cap X$. The disk $C \cup B$ is essential and intersects $X$ at most once after tightening, contradicting our first claim. \end{proof}
We finish the proof of Theorem~\ref{Thm:Annuli} by noting that $D \cup B$ is homeomorphic to $\Upsilon \times I$ where $\Upsilon$ is the simplicial tree with three edges and three leaves. We may choose the homeomorphism so that $(D \cup B) \cap X = \Upsilon \times \partial I$. It follows that we may properly isotope $D \cup B$ until $(D \cup B) \cap X$ is a pair of arcs.
Recall that $D'$ and $D''$ are the disks obtained by boundary compressing $D$ along $B$. It follows that one of $D'$ or $D''$ (or both) meets $X$ in at most a single arc, contradicting our first claim. \qed
\section{Holes for the disk complex -- compressible} \label{Sec:CompressibleHoles}
The proof of Theorem~\ref{Thm:CompressibleHoles} occupies the second half of this section.
\subsection{Compression sequences of essential disks} \label{Sec:CompressionSequences}
Fix a multicurve $\Gamma \subset S = \partial V$. Fix also an essential disk $D \subset V$. Properly isotope $D$ to make $\partial D$ tight with respect to $\Gamma$.
If $D \cap \Gamma \neq \emptyset$ we may define:
\begin{definition} \label{Def:Sequence} A {\em compression sequence} $\{ \Delta_k \}_{k = 1}^n$ starting at $D$ has $\Delta_1 = \{D\}$ and $\Delta_{k+1}$ is obtained from $\Delta_k$ via a boundary compression, disjoint from $\Gamma$, and tightening.
Note that $\Delta_k$ is a collection of exactly $k$ pairwise disjoint disks properly embedded in $V$. We further require, for $k \leq n$, that every disk of $\Delta_k$ meets some component of $\Gamma$. We call a compression sequence {\em maximal} if either \begin{itemize} \item no disk of $\Delta_n$ can be boundary compressed into $S {\smallsetminus} \Gamma$ or \item there is a component $Z \subset S {\smallsetminus} \Gamma$ and a boundary compression of $\Delta_n$ into $S {\smallsetminus} \Gamma$ yielding an essential disk $E$ with $\partial E \subset Z$. \end{itemize} We say that such maximal sequences {\em end essentially} or {\em end in $Z$}, respectively. \end{definition}
All compression sequences must end, by Remark~\ref{Rem:IntersectionNumber}. Given a maximal sequence we may relate the various disks in the sequence as follows:
\begin{definition} \label{Def:DisjointnessPair} Fix $X$, a component of $S {\smallsetminus} \Gamma$. Fix $D_k \in \Delta_k$. A {\em disjointness pair} for $D_k$ is an ordered pair $(\alpha, \beta)$ of essential arcs in $X$ where \begin{itemize} \item $\alpha \subset D_k \cap X$, \item $\beta \subset \Delta_n \cap X$, and \item $d_\mathcal{A}(\alpha, \beta) \leq 1$. \end{itemize} \end{definition}
If $\alpha \neq \alpha'$ then the two disjointness pairs $(\alpha, \beta)$ and $(\alpha', \beta)$ are distinct, even if $\alpha$ is properly isotopic to $\alpha'$. A similar remark holds for the second coordinate.
The following lemma controls how subsurface projection distance changes in maximal sequences.
\begin{lemma} \label{Lem:DisjointnessPairs} Fix a multicurve $\Gamma \subset S$. Suppose that $D$ cuts $\Gamma$ and choose a maximal sequence starting at $D$. Fix any component $X \subset S {\smallsetminus} \Gamma$. Fix any disk $D_k \in \Delta_k$. Then either $D_k \in \Delta_n$ or there are four distinct disjointness pairs $\{ (\alpha_i, \beta_i) \}_{i = 1}^4$ for $D_k$ in $X$ where each of the arcs $\{ \alpha_i \}$ appears as the first coordinate of at most two pairs. \end{lemma}
\begin{proof} We induct on $n - k$. If $D_k$ is contained in $\Delta_n$ there is nothing to prove. If $D_k$ is contained in $\Delta_{k+1}$ we are done by induction. Thus we may assume that $D_k$ is the disk of $\Delta_k$ which is boundary compressed at stage $k$. Let $D_{k+1}, D_{k+1}' \in \Delta_{k+1}$ be the two disks obtained after boundary compressing $D_k$ along the bigon $B$. See \reffig{Pants} for a picture of the pair of pants cobounded by $\partial D_k$ and $\partial D_{k+1} \cup \partial D_{k+1}'$.
\begin{figure}
\caption{All arcs connecting $D_k$ to itself or to $D_{k+1} \cup
D'_{k+1}$ are arcs of $\Gamma \cap P$. The boundary compressing arc
$B \cap S$ meets $D_k$ twice and is parallel to the vertical arcs of
$\Gamma \cap P$.}
\label{Fig:Pants}
\end{figure}
Let $\delta$ be a band sum arc dual to $B$ (the dotted arc in
\reffig{Pants}). We may assume that $|\Gamma \cap \delta|$ is minimal over all arcs dual to $B$. It follows that the band sum of $D_{k+1}$ with $D_{k+1}'$ along $\delta$ is tight, without any isotopy. (This is where we use the fact that $B$ is a boundary compression {\em in the complement of $\Gamma$}, as opposed to being a general boundary compression of $D_k$ in $V$.)
There are now three possibilities: neither, one, or both points of $\partial \delta$ are contained in $X$.
First suppose that $X \cap \partial \delta = \emptyset$. Then every arc of $D_{k+1} \cap X$ is parallel to an arc of $D_k \cap X$, and similarly for $D_{k+1}'$. If $D_{k+1}$ and $D_{k+1}'$ are both components of $\Delta_n$ then choose any arcs $\beta, \beta'$ of $D_{k+1} \cap X$ and of $D_{k+1}' \cap X$. Let $\alpha, \alpha'$ be the parallel components of $D_k \cap X$. The four disjointness pairs are then $(\alpha, \beta)$, $(\alpha, \beta')$, $(\alpha', \beta)$, $(\alpha', \beta')$. Suppose instead that $D_{k+1}$ is not a component of $\Delta_n$. Then $D_k$ inherits four disjointness pairs from $D_{k+1}$.
Second suppose that exactly one endpoint $x \in \partial \delta$ meets $X$. Let $\gamma \subset D_{k+1}$ be the component of $D_{k+1} \cap X$ containing $x$. Let $X'$ be the component of $X \cap P$ that contains $x$ and let $\alpha, \alpha'$ be the two components of $D_k \cap X'$. Let $\beta$ be any arc of $D_{k+1}' \cap X$.
If $D_{k+1} \mathbin{\notin} \Delta_n$ and $\gamma$ is not the first coordinate of one of $D_{k+1}$'s four pairs then $D_k$ inherits disjointness pairs from $D_{k+1}$. If $D_{k+1}' \mathbin{\notin} \Delta_n$ then $D_k$ inherits disjointness pairs from $D_{k+1}'$.
Thus we may assume that both $D_{k+1}$ and $D_{k+1}'$ are in $\Delta_n$, {\em or} that only $D_{k+1}' \in \Delta_n$ while $\gamma$ appears as the first coordinate of a disjointness pair for $D_{k+1}$. In the former case the required disjointness pairs are $(\alpha, \beta)$, $(\alpha', \beta)$, $(\alpha, \gamma)$, and $(\alpha', \gamma)$. In the latter case we do not know if $\gamma$ is allowed to appear as the second coordinate of a pair. However, we are given four disjointness pairs for $D_{k+1}$ and are told that $\gamma$ appears as the first coordinate of at most two of these pairs. Hence the other two pairs are inherited by $D_k$. These, together with the pairs $(\alpha, \beta)$ and $(\alpha', \beta)$, give the desired conclusion.
Third suppose that the endpoints of $\delta$ meet $\gamma \subset D_{k+1}$ and $\gamma' \subset D_{k+1}'$. Let $X'$ be a component of $X \cap P$ containing $\gamma$. Let $\alpha$ and $\alpha'$ be the two arcs of $D_k \cap X'$.
Suppose both $D_{k+1}$ and $D_{k+1}'$ lie in $\Delta_n$. Then the desired pairs are $(\alpha, \gamma)$, $(\alpha', \gamma)$, $(\alpha, \gamma')$, and $(\alpha', \gamma')$. If $D_{k+1}' \in \Delta_n$ while $D_{k+1}$ is not then $D_k$ inherits two pairs from $D_{k+1}$. We add to these the pairs $(\alpha, \gamma')$, and $(\alpha', \gamma')$. If neither disk lies in $\Delta_n$ then $D_k$ inherits two pairs from each disk and the proof is complete. \end{proof}
Given a disk $D \in \mathcal{D}(V)$ and a hole $X \subset S$ our \reflem{DisjointnessPairs} allows us to adapt $D$ to $X$.
\begin{lemma} \label{Lem:DistanceIsUnchangedInSequences} Fix a hole $X \subset S$ for $\mathcal{D}(V)$. For any disk $D \in \mathcal{D}(V)$ there is a disk $D'$ with the following properties: \begin{itemize} \item $\partial X$ and $\partial D'$ are tight. \item If $X$ is incompressible then $D'$ is not boundary compressible
into $X$ and $d_\mathcal{A}(D, D') \leq 3$. \item If $X$ is compressible then $\partial D' \subset X$ and
$d_\mathcal{AC}(D, D') \leq 3$. \end{itemize} Here $\mathcal{A} = \mathcal{A}(X)$ and $\mathcal{AC} = \mathcal{AC}(X)$. \end{lemma}
\begin{proof} If $\partial D \subset X$ then the lemma is trivial. So assume, by Remark~\ref{Rem:ComplementOfHoleIsIncompressible}, that $D$ cuts $\partial X$. Choose a maximal sequence with respect to $\partial X$ starting at $D$.
Suppose that the sequence is non-trivial ($n > 1$). By \reflem{DisjointnessPairs} there is a disk $E \in \Delta_n$ so that $D \cap X$ and $E \cap X$ contain disjoint arcs.
If the sequence ends essentially then choose $D' = E$ and the lemma is proved. If the sequence ends in $X$ then there is a boundary compression of $\Delta_n$, disjoint from $\partial X$, yielding the desired disk $D'$ with $\partial D' \subset X$. Since $E \cap D' = \emptyset$ we again obtain the desired bound.
Assume now that the sequence is trivial ($n = 1$). Then take $E = D \in \Delta_n$ and the proof is identical to that of the previous paragraph. \end{proof}
\begin{remark} \label{Rem:WhyDistanceIsUnchangedInSequences} \reflem{DistanceIsUnchangedInSequences} is unexpected: after all, any pair of curves in $\mathcal{C}(X)$ can be connected by a sequence of band sums. Thus arbitrary band sums can change the subsurface projection to $X$. However, the sequences of band sums arising in \reflem{DistanceIsUnchangedInSequences} are very special. Firstly they do not cross $\partial X$ and secondly they are ``tree-like'' due to the fact that every arc in $D$ is separating.
When $D$ is replaced by a surface with genus then \reflem{DistanceIsUnchangedInSequences} does not hold in general; this is a fundamental observation due to Kobayashi~\cite{Kobayashi88b} (see also~\cite{Hartshorn02}).
Namazi points out that even if $D$ is only replaced by a planar surface \reflem{DistanceIsUnchangedInSequences} does not hold in general. \end{remark}
\subsection{Proving the theorem}
We now prove:
\begin{theorem} \label{Thm:CompressibleHoles} Suppose $X$ is a compressible hole for $\mathcal{D}(V)$ with diameter at least $15$. Then there are a pair of essential disks $D, E \subset V$ so that \begin{itemize} \item $\partial D, \partial E \subset X$ and \item $\partial D$ and $\partial E$ fill $X$. \end{itemize} \end{theorem}
\begin{proof} Choose disks $D'$ and $E'$ in $\mathcal{D}(V)$ so that $d_X(D', E') \geq 15$. By \reflem{DistanceIsUnchangedInSequences} there are disks $D$ and $E$ so that $\partial D, \partial E \subset X$, $d_X(D', D) \leq 6$, and $d_X(E', E) \leq 6$. It follows from the triangle inequality that $d_X(D, E) \geq 15 - 6 - 6 = 3$. Since any two vertices of $\mathcal{C}(X)$ at distance at least three fill $X$, the boundaries $\partial D$ and $\partial E$ fill $X$. \end{proof}
\section{Holes for the disk complex -- incompressible} \label{Sec:IncompressibleHoles}
This section classifies incompressible holes for the disk complex.
\begin{theorem} \label{Thm:IncompressibleHoles} Suppose $X$ is an incompressible hole for $\mathcal{D}(V)$ with diameter at least $61$. Then there is an $I$--bundle $\rho_F \colon T \to F$ embedded in $V$ so that \begin{itemize} \item $\partial_h T \subset \partial V$, \item $X$ is a component of $\partial_h T$, \item some component of $\partial_v T$ is boundary parallel into $\partial V$,
\item $F$ supports a pseudo-Anosov map. \end{itemize} \end{theorem}
Here is a short plan of the proof: We are given $X$, an incompressible hole for $\mathcal{D}(V)$. Following \reflem{DistanceIsUnchangedInSequences} we may assume that $D, E$ are essential disks, without boundary compressions into $X$ or $S {\smallsetminus} X$, with $d_X(D,E) \geq 43$.
Examine the intersection pattern of $D$ and $E$ to find two families of rectangles $\mathcal{R}$ and $\mathcal{Q}$. The intersection pattern of these rectangles in $V$ will determine the desired $I$--bundle $T$. The third conclusion of the theorem follows from standard facts about primitive annuli. The fourth requires another application of \reflem{DistanceIsUnchangedInSequences} as well as \reflem{SimpleSurfaces}.
\subsection{Diagonals of polygons}
To understand the intersection pattern of $D$ and $E$ we discuss diagonals of polygons. Let $D$ be a $2n$--sided regular polygon. Label the sides of $D$ with the letters $X$ and $Y$ in alternating fashion. Any side labeled $X$ (or $Y$) will be called an {\em $X$ side} (or {\em $Y$ side}).
\begin{definition} \label{Def:Diagonal} An arc $\gamma$ properly embedded in $D$ is a {\em diagonal} if the points of $\partial \gamma$ lie in the interiors of distinct sides of $D$. If $\gamma$ and $\gamma'$ are diagonals for $D$ which together meet three different sides then $\gamma$ and $\gamma'$ are {\em non-parallel}. \end{definition}
\begin{lemma} \label{Lem:EightDiagonals} Suppose that $\Gamma \subset D$ is a collection of pairwise disjoint non-parallel diagonals. Then there is an $X$ side of $D$ meeting at most eight diagonals of $\Gamma$. \end{lemma}
\begin{proof}
A counting argument shows that $|\Gamma| \leq 4n - 3$. If every $X$
side meets at least nine non-parallel diagonals then $|\Gamma| \geq \frac{9}{2}n > 4n -3$, a contradiction. \end{proof}
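To spell out the counting argument (this is only a sketch): collapse each side of $D$ to a vertex, so that the diagonals of $\Gamma$ become pairwise non-crossing, pairwise distinct chords of a polygon with $2n$ vertices. A maximal such family consists of the $2n$ polygon edges together with the $2n - 3$ diagonals of a triangulation, giving
\[
|\Gamma| \;\leq\; 2n + (2n - 3) \;=\; 4n - 3.
\]
On the other hand, if each of the $n$ sides labelled $X$ met at least nine diagonals then counting endpoints, with each diagonal having at most two endpoints on $X$ sides, would give $2\,|\Gamma| \geq 9n$, whence $|\Gamma| \geq \tfrac{9}{2} n > 4n - 3$.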
\subsection{Improving disks} \label{Sec:ImprovingDisks}
Suppose now that $X$ is an incompressible hole for $\mathcal{D}(V)$ with diameter at least $61$. Note that, by Theorem~\ref{Thm:Annuli}, $X$ is not an annulus. Let $Y = \overline{S {\smallsetminus} X}$.
Choose disks $D'$ and $E'$ in $V$ so that $d_X(D', E') \geq 61$. By \reflem{DistanceIsUnchangedInSequences} there are a pair of disks $D$ and $E$ so that both are essential in $V$, cannot be boundary compressed into $X$ or into $Y$, and so that $d_{\mathcal{A}(X)}(D', D) \leq 3$ and $d_{\mathcal{A}(X)}(E', E) \leq 3$. Thus $d_X(D', D) \leq 9$ and $d_X(E', E) \leq 9$ (\reflem{LipschitzToHoles}). By the triangle inequality $d_X(D, E) \geq 61 - 18 = 43$.
Recall, as well, that $\partial D$ and $\partial E$ are tight with respect to
$\partial X$. We may further assume that $\partial D$ and $\partial E$ are tight with respect to each other. Also, minimize the quantities $|X \cap
(\partial D \cap \partial E)|$ and $|D \cap E|$ while keeping everything tight. In particular, there are no triangle components of $\partial V {\smallsetminus} (D \cup E \cup \partial X)$.
Now consider $D$ and $E$ to be even-sided polygons, with vertices being the points $\partial D \cap \partial X$ and $\partial E \cap \partial X$ respectively. Let $\Gamma = D \cap E$. See Figure~\ref{Fig:RectangleWithBadArcs} for one {\it a\thinspace priori}~possible collection $\Gamma \subset D$.
\begin{figure}
\caption{In fact, $\Gamma \subset D$ cannot contain simple closed
curves or non-diagonals.}
\label{Fig:RectangleWithBadArcs}
\end{figure}
From our assumptions and the irreducibility of $V$ it follows that $\Gamma$ contains no simple closed curves. Suppose now that there is a $\gamma \subset \Gamma$ so that, in $D$, both endpoints of $\gamma$ lie in the same side of $D$. Then there is an outermost such arc, say $\gamma' \subset \Gamma$, cutting a bigon $B$ out of $D$. It follows that $B$ is a boundary compression of $E$ which is disjoint from $\partial X$. But this contradicts the construction of $E$.
We deduce that all arcs of $\Gamma$ are diagonals for $D$ and, via a similar argument, for $E$.
Let $\alpha \subset D \cap X$ be an $X$ side of $D$ meeting at most eight distinct types of diagonal of $\Gamma$. Choose $\beta \subset E \cap X$ similarly. As $d_X(D, E) \geq 43$ we have that $d_X(\alpha, \beta) \geq 43 - 6 = 37$.
Now break each of $\alpha$ and $\beta$ into at most eight subarcs $\{ \alpha_i \}$ and $\{ \beta_j \}$ so that each subarc meets all of the diagonals of fixed type and only of that type. Let $R_i \subset D$ be the rectangle with upper boundary $\alpha_i$ and containing all of the diagonals meeting $\alpha_i$. Let $\alpha_i'$ be the lower boundary of $R_i$. Define $Q_j$ and $\beta_j'$ similarly. See \reffig{RectanglesInDisks} for a picture of $R_i$.
\begin{figure}
\caption{The rectangle $R_i \subset D$ is surrounded by the dotted line.
The arc $\alpha_i$ in $\partial D \cap X$ is indicated. In general the
arc $\alpha'_i$ may lie in $X$ or in $Y$.}
\label{Fig:RectanglesInDisks}
\end{figure}
Call an arc $\alpha_i$ {\em large} if there is an arc $\beta_j$ so that $|\alpha_i \cap \beta_j| \geq 3$. We use the same terminology for $\beta_j$. Let $\Theta$ be the union of all of the large $\alpha_i$ and $\beta_j$. Thus $\Theta$ is a four-valent graph in $X$. Let $\Theta'$ be the union of the corresponding large $\alpha_i'$ and $\beta_j'$.
\begin{claim} \label{Clm:ThetaNonEmpty} The graph $\Theta$ is non-empty. \end{claim}
\begin{proof}
If $\Theta = \emptyset$, then all $\alpha_i$ are small. It follows that $|\alpha \cap \beta| \leq 128$ and thus $d_X(\alpha, \beta) \leq 16$, by \reflem{Hempel}. As $d_X(\alpha, \beta) \geq 37$ this is a contradiction. \end{proof}
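To make the arithmetic explicit (a sketch only, and assuming that \reflem{Hempel} is the familiar bound $d_X(\alpha, \beta) \leq 2 \log_2 \iota(\alpha, \beta) + 2$): each point of $\alpha \cap \beta$ is an endpoint of a diagonal of $\Gamma$ and so lies in some $\alpha_i \cap \beta_j$, there are at most $8 \cdot 8$ such pairs, and a small pair meets at most twice, so
\[
|\alpha \cap \beta| \;\leq\; 8 \cdot 8 \cdot 2 \;=\; 128,
\qquad
d_X(\alpha, \beta) \;\leq\; 2 \log_2(128) + 2 \;=\; 16.
\]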
Let $Z \subset \partial V$ be a small regular neighborhood of $\Theta$ and define $Z'$ similarly.
\begin{claim} \label{Clm:ThetaEssential} No component of $\Theta$ or of $\Theta'$ is contained in a disk $D \subset \partial V$. No component of $\Theta$ or of $\Theta'$ is contained in an annulus $A \subset \partial V$ that is peripheral in $X$. \end{claim}
\begin{proof} For a contradiction suppose that $W$ is a component of $Z$ contained in a disk. Then there is some pair $\alpha_i, \beta_j$ having a bigon in $\partial V$. This contradicts the tightness of $\partial D$ and $\partial E$. The same holds for $Z'$.
Suppose now that some component $W$ is contained in an annulus $A$, peripheral in $X$. Thus $W$ fills $A$. Suppose that $\alpha_i$ and $\beta_j$ are large and contained in $W$. By the classification of arcs in $A$ we deduce that either $\alpha_i$ and $\beta_j$ form a bigon in $A$ or $\partial X$, $\alpha_i$ and $\beta_j$ form a triangle. Either conclusion gives a contradiction. \end{proof}
\begin{claim} \label{Clm:ThetaFills} The graph $\Theta$ fills $X$. \end{claim}
\begin{proof} Suppose not. Fix attention on any component $W \subset Z$. Since $\Theta$ does not fill, the previous claim implies that there is a component $\gamma \subset \partial W$ that is essential and non-peripheral in $X$. Note that any large $\alpha_i$ meets $\partial W$ in at most two points, while any small $\alpha_i$ meets $\partial W$ in at most $32$
points. Thus $|\alpha \cap \partial W| \leq 256$ and the same holds for $\beta$. Thus $d_X(\alpha, \beta) \leq 36$ by the triangle inequality. As $d_X(\alpha, \beta) \geq 37$ this is a contradiction. \end{proof}
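Again making the arithmetic explicit (under the same reading of \reflem{Hempel} as above): since $\gamma$ is a component of $\partial W$ we have $\iota(\alpha, \gamma) \leq |\alpha \cap \partial W| \leq 256$, and likewise for $\beta$, so
\[
d_X(\alpha, \gamma) \;\leq\; 2\log_2(256) + 2 \;=\; 18,
\qquad
d_X(\alpha, \beta) \;\leq\; d_X(\alpha, \gamma) + d_X(\gamma, \beta) \;\leq\; 36.
\]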
The previous two claims imply:
\begin{claim} \label{Clm:ThetaConnected} The graph $\Theta$ is connected. \qed \end{claim}
There are now two possibilities: either $\Theta \cap \Theta'$ is empty or not. In the first case set $\Sigma = \Theta$ and in the second set $\Sigma = \Theta \cup \Theta'$. By the claims above, $\Sigma$ is connected and fills $X$. Let $\mathcal{R} = \{ R_i \}$ and $\mathcal{Q} = \{ Q_j \}$ be the collections of large rectangles.
\subsection{Building the I-bundle}
We are given $\Sigma$, $\mathcal{R}$ and $\mathcal{Q}$ as above. Note that $\mathcal{R} \cup \mathcal{Q}$ is an $I$--bundle and $\Sigma$ is the component of its horizontal boundary meeting $X$. See Figure~\ref{Fig:RandQ} for a simple case.
\begin{figure}
\caption{$\mathcal{R} \cup \mathcal{Q}$ is an $I$--bundle: all arcs of intersection are parallel.}
\label{Fig:RandQ}
\end{figure}
Let $T_0$ be a regular neighborhood of $\mathcal{R} \cup \mathcal{Q}$, taken in $V$. Again $T_0$ has the structure of an $I$--bundle. Note that $\partial_h T_0 \subset \partial V$, $\partial_h T_0 \cap X$ is a component of $\partial_h T_0$, and this component fills $X$ due to Claim~\ref{Clm:ThetaFills}. We will enlarge $T_0$ to obtain the correct $I$--bundle in $V$.
Begin by enumerating all annuli $\{ A_i \} \subset \partial_v T_0$ with the property that some component of $\partial A_i$ is inessential in $\partial V$. Suppose that we have built the $I$--bundle $T_i$ and are now considering the annulus $A = A_i$. Let $\gamma \cup \gamma' = \partial A \subset \partial V$ with $\gamma$ inessential in $\partial V$. Let $B \subset \partial V$ be the disk which $\gamma$ bounds. By induction we assume that no component of $\partial_h T_i$ is contained in a disk embedded in $\partial V$ (the base case holds by Claim~\ref{Clm:ThetaEssential}). It follows that $B \cap T_i = \partial B = \gamma$. Thus $B \cup A$ may be isotoped, rel $\gamma'$, to a properly embedded disk $B' \subset V$. As $\gamma'$ lies in $X$ or $Y$, both incompressible, $\gamma'$ must bound a disk $C \subset \partial V$. Note that $C \cap T_i = \partial C = \gamma'$, again using the induction hypothesis.
It follows that $B \cup A \cup C$ is an embedded two-sphere in $V$.
As $V$ is a handlebody $V$ is irreducible. Thus $B \cup A \cup C$ bounds a three-ball $U_i$ in $V$. Choose a homeomorphism $U_i \mathrel{\cong} B \times I$ so that $B$ is identified with $B \times \{0\}$, $C$ is identified with $B \times \{1\}$, and $A$ is identified with $\partial B \times I$. We form $T_{i+1} = T_i \cup U_i$ and note that $T_{i+1}$ still has the structure of an $I$--bundle. Recalling that $A = A_i$ we have $\partial_v T_{i+1} = \partial_v T_i {\smallsetminus} A_i$. Also $\partial_h T_{i+1} = \partial_h T_i \cup (B \cup C) \subset \partial V$. It follows that no component of $\partial_h T_{i+1}$ is contained in a disk embedded in $\partial V$. Similarly, $\partial_h T_{i+1} \cap X$ is a component of $\partial_h T_{i+1}$ and this component fills $X$.
After dealing with all of the annuli $\{ A_i \}$ in this fashion we are left with an $I$--bundle $T$. Now all components of $\partial \partial_v T$ [{\it sic}] are essential in $\partial V$. All of these lying in $X$ are peripheral in $X$. This is because they are disjoint from $\Sigma \subset \partial_h T$, which fills $X$, by induction. It follows that the component of $\partial_h T$ containing $\Sigma$ is isotopic to $X$.
This finishes the construction of the promised $I$--bundle $T$ and demonstrates the first two conclusions of \refthm{IncompressibleHoles}. For future use we record:
\begin{remark} \label{Rem:CornersAreEssential} Every curve of $\partial \partial_v T = \partial \partial_h T$ is essential in $S = \partial V$. \end{remark}
\subsection{A vertical annulus parallel into the boundary}
Here we obtain the third conclusion of \refthm{IncompressibleHoles}: at least one component of $\partial_v T$ is boundary parallel in $\partial V$.
Fix $T$ an $I$--bundle with the incompressible hole $X$ a component of $\partial_h T$.
\begin{claim} \label{Clm:VerticalAnnuliIncompressible} All components of $\partial_v T$ are incompressible in $V$. \end{claim}
\begin{proof} Suppose, for a contradiction, that some component $A \subset \partial_v T$ is compressible. By Remark~\ref{Rem:CornersAreEssential} we may compress $A$ to obtain a pair of essential disks $B$ and $C$. Note that $\partial B$ is isotopic into the complement of $\partial_h T$. So $\overline{S {\smallsetminus} X}$ is compressible, contradicting Remark~\ref{Rem:ComplementOfHoleIsIncompressible}. \end{proof}
\begin{claim} \label{Clm:VerticalAnnulusBdyParallel} Some component of $\partial_v T$ is boundary parallel. \end{claim}
\begin{proof} Since $\partial_v T$ is incompressible (Claim~\ref{Clm:VerticalAnnuliIncompressible}), Remark~\ref{Rem:SurfacesInHandlebodies} implies that $\partial_v T$ is boundary compressible in $V$. Let $B$ be a boundary compression for $\partial_v T$. Let $A$ be the component of $\partial_v T$ meeting $B$. Let $\alpha$ denote the arc $A \cap B$.
The arc $\alpha$ is either essential or inessential in $A$. Suppose $\alpha$ is inessential in $A$. Then $\alpha$ cuts a bigon, $C$, out of $A$. Since $B$ was a boundary compression the disk $D = B \cup C$ is essential in $V$. Since $B$ meets $\partial_v T$ in a single arc, either $D \subset T$ or $D \subset \overline{V {\smallsetminus} T}$. The former implies that $\partial_h T$ is compressible and the latter that $X$ is not a hole. Either gives a contradiction.
It follows that $\alpha$ is essential in $A$. Now carefully boundary compress $A$: Let $N$ be the closure of a regular neighborhood of $B$, taken in $V {\smallsetminus} A$. Let $A'$ be the closure of $A {\smallsetminus} N$ (so $A'$ is a rectangle). Let $B' \cup B''$ be the closure of $\operatorname{fr}(N) {\smallsetminus} A$. Both $B'$ and $B''$ are bigons, parallel to $B$. Form $D = A' \cup B' \cup B''$: a properly embedded disk in $V$. If $D$ is essential then, as above, either $D \subset T$ or $D \subset \overline{V {\smallsetminus} T}$. Again, either gives a contradiction.
It follows that $D$ is inessential in $V$. Thus $D$ cuts a closed three-ball $U$ out of $V$. There are two final cases: either $N \subset U$ or $N \cap U = B' \cup B''$. If $U$ contains $N$ then $U$ contains $A$. Thus $\partial A$ is contained in the disk $U \cap \partial V$. This contradicts Remark~\ref{Rem:CornersAreEssential}. Deduce instead that $W = U \cup N$ is a solid torus with meridional disk $B$. Thus $W$ gives a parallelism between $A$ and the annulus $\partial V \cap \partial W$, as desired. \end{proof}
\begin{remark} \label{Rem:DiskBusting} Similar considerations prove that the multicurve $$\{ \partial A \mathbin{\mid}
\mbox{$A$ is a boundary parallel component of $\partial_v T$} \}$$ is disk-busting for $V$. \end{remark}
\subsection{Finding a pseudo-Anosov map}
Here we prove that the base surface $F$ of the $I$--bundle $T$ admits a pseudo-Anosov map.
As in Section~\ref{Sec:ImprovingDisks}, pick essential disks $D'$ and $E'$ in $V$ so that $d_X(D', E') \geq 61$. \reflem{DistanceIsUnchangedInSequences} provides disks $D$ and $E$ which cannot be boundary compressed into $X$ or into $\overline{S {\smallsetminus} X}$ -- thus $D$ and $E$ cannot be boundary compressed into $\partial_h T$. Also, as above, $d_X(D, E) \geq 61 - 18 = 43$.
After isotoping $D$ to minimize intersection with $\partial_v T$ it must be the case that all components of $D \cap \partial_v T$ are essential arcs in $\partial_v T$. By \reflem{BdyIncompImpliesVertical} we conclude that $D$ may be isotoped in $V$ so that $D \cap T$ is vertical in $T$. The same holds for $E$. Choose $A$ and $B$, components of $D \cap T$ and $E \cap T$. Each is a vertical rectangle. Since $\operatorname{diam}_X(\pi_X(D)) \leq 3$ (\reflem{SubsurfaceProjectionLipschitz}) we now have $d_X(A, B) \geq 43 - 6 = 37$.
We now begin to work in the base surface $F$. Recall that $\rho_F \colon T \to F$ is an $I$--bundle. Take $\alpha = \rho_F(A)$ and $\beta = \rho_F(B)$. Note that the natural map $\mathcal{C}(F) \to \mathcal{C}(X)$, defined by taking a curve to its lift, is distance non-increasing (see Equation~\ref{Eqn:NonOrientableLowerBound}). Thus $d_F(\alpha, \beta) \geq 37$. By \refthm{Annuli} the surface $F$ cannot be an annulus. Thus, by \reflem{SimpleSurfaces} the subsurface $F$ supports a pseudo-Anosov map and we are done.
\subsection{Corollaries}
We now deal with the possibility of disjoint holes for the disk complex.
\begin{lemma} \label{Lem:DiskComplexPairedHoles} Suppose that $X$ is a large incompressible hole for $\mathcal{D}(V)$ supported by the $I$--bundle $\rho_F \colon T \to F$. Let $Y = \partial_h T {\smallsetminus} X$. Let $\tau \colon \partial_h T \to \partial_h T$ be the involution switching the ends of the $I$--fibres. Suppose that $D \in \mathcal{D}(V)$ is an essential disk. \begin{itemize} \item If $F$ is orientable then $d_{\mathcal{A}(F)}(D \cap X, D \cap Y) \leq 6$. \item If $F$ is non-orientable then $d_X(D, \mathcal{C}^\tau(X)) \leq 3$. \end{itemize} \end{lemma}
\begin{proof} By \reflem{DistanceIsUnchangedInSequences} there is a disk $D' \subset V$ which is tight with respect to $\partial_h T$ and which cannot be boundary compressed into $\partial_h T$ (or into the complement). Also, for any component $Z \subset \partial_h T$ we have $d_{\mathcal{A}(Z)}(D, D') \leq 3$.
Properly isotope $D'$ to minimize $D' \cap \partial_v T$. Then $D' \cap \partial_v T$ is properly isotopic, in $\partial_v T$, to a collection of vertical arcs. Let $E \subset D' \cap T$ be a component. \reflem{BdyIncompImpliesVertical} implies that $E$ is vertical in $T$, after an isotopy of $D'$ preserving $\partial_h T$ setwise. Since $E$ is vertical, the arcs $E \cap \partial_h T \subset D'$ are $\tau$--invariant. The conclusion follows. \end{proof}
Recall \reflem{ArcComplexHolesIntersect}: all holes for the arc complex intersect. This cannot hold for the disk complex. For example if $\rho_F \colon T \to F$ is an $I$--bundle over an orientable surface then take $V = T$ and notice that both components of $\partial_h T$ are holes for $\mathcal{D}(V)$. However, by the first conclusion of \reflem{DiskComplexPairedHoles}, $X$ and $Y$ are paired holes, in the sense of \refdef{PairedHoles}. So, as with the invariant arc complex (\reflem{ArcComplexHolesInterfere}), all holes for the disk complex interfere:
\begin{lemma} \label{Lem:DiskComplexHolesInterfere} Suppose that $X, Z \subset \partial V$ are large holes for $\mathcal{D}(V)$. If $X \cap Z = \emptyset$ then there is an $I$--bundle $T \mathrel{\cong} F \times I$ in $V$ so that $\partial_h T = X \cup Y$ and $Y \cap Z \neq \emptyset$. \end{lemma}
\begin{proof} Suppose that $X \cap Z = \emptyset$. It follows from Remark~\ref{Rem:ComplementOfHoleIsIncompressible} that both $X$ and $Z$ are incompressible. Let $\rho_F \colon T \to F$ be the $I$--bundle in $V$ with $X \subset \partial_h T$, as provided by \refthm{IncompressibleHoles}. We also have a component $A \subset \partial_v T$ so that $A$ is boundary parallel. Let $U$ be the solid torus component of $V {\smallsetminus} A$. Note that $Z$ cannot be contained in $\partial U {\smallsetminus} A$ because $Z$ is not an annulus (\refthm{Annuli}).
Let $\alpha = \rho_F(A)$. Choose any essential arc $\delta \subset F$ with both endpoints in $\alpha \subset \partial F$. It follows that $\rho_F^{-1}(\delta)$, together with two meridional disks of $U$, forms an essential disk $D$ in $V$. Let $W = \partial_h T \cup (U {\smallsetminus} A)$ and note that $\partial D \subset W$.
If $F$ is non-orientable then $Z \cap W = \emptyset$ and we have a contradiction. Deduce that $F$ is orientable. Now, if $Z$ misses $Y$ then $Z$ misses $W$ and we again have a contradiction. It follows that $Z$ cuts $Y$ and we are done. \end{proof}
\section{Axioms for combinatorial complexes} \label{Sec:Axioms}
The goal of this section and the next is to prove, inductively, an upper bound on distance in a combinatorial complex $\mathcal{G}(S) = \mathcal{G}$. This section presents our axioms on $\mathcal{G}$: sufficient hypotheses for \refthm{UpperBound}. The axioms, apart from \refax{Holes}, are quite general. \refax{Holes} is necessary to prove hyperbolicity and greatly simplifies the recursive construction in \refsec{Partition}.
\begin{theorem} \label{Thm:UpperBound} Fix $S$ a compact connected non-simple surface. Suppose that $\mathcal{G} = \mathcal{G}(S)$ is a combinatorial complex satisfying the axioms of \refsec{Axioms}. Let $X$ be a hole for $\mathcal{G}$ and suppose that $\alpha_X, \beta_X \in \mathcal{G}$ are contained in $X$. For any constant $c > 0$ there is a constant $A$ satisfying: \[ d_\mathcal{G}(\alpha_X, \beta_X) \mathbin{\leq_{A}} \sum [d_Y(\alpha_X, \beta_X)]_c \] where the sum is taken over all holes $Y \subseteq X$ for $\mathcal{G}$. \end{theorem}
The proof of the upper bound is more difficult than that of the lower bound, \refthm{LowerBound}. This is because naturally occurring paths in $\mathcal{G}$ between $\alpha_X$ and $\beta_X$ may waste time in non-holes. The first example of this is the path in $\mathcal{C}(S)$ obtained by taking the short curves along a {Teichm\"uller~} geodesic. The {Teichm\"uller~} geodesic may spend time rearranging the geometry of a subsurface; in that case the systole path in the curve complex is much longer than the curve complex distance between its endpoints.
In Sections~\ref{Sec:PathsNonorientable}, \ref{Sec:PathsArc}, \ref{Sec:PathsDisk} we will verify these axioms for the curve complex of a non-orientable surface, the arc complex, and the disk complex.
\subsection{The axioms}
Suppose that $\mathcal{G} = \mathcal{G}(S)$ is a combinatorial complex. We begin with the axiom required for hyperbolicity.
\begin{axiom}[Holes interfere] \label{Ax:Holes} All large holes for $\mathcal{G}$ interfere, as given in \refdef{Interfere}. \end{axiom}
Fix vertices $\alpha_X, \beta_X \in \mathcal{G}$, both contained in a hole $X$. We are given $\Lambda = \{ \mu_n \}_{n = 0}^N$, a path of markings in $X$.
\begin{axiom}[Marking path] \label{Ax:Marking} We require: \begin{enumerate} \item The support of $\mu_{n+1}$ is contained inside the support of $\mu_n$. \item For any subsurface $Y \subseteq X$, if $\pi_Y(\mu_k) \neq \emptyset$ then for all $n \leq k$ the map $n \mapsto \pi_Y(\mu_n)$ is an unparameterized quasi-geodesic with constants depending only on $\mathcal{G}$. \end{enumerate} \end{axiom}
\noindent The second condition is crucial and often technically difficult to obtain.
We are given, for every essential subsurface $Y \subset X$, a possibly empty interval $J_Y \subset [0, N]$ with the following properties.
\begin{axiom}[Accessibility] \label{Ax:Access} The interval for $X$ is $J_X = [0, N]$. There is a constant ${B_3}$ so that \begin{enumerate} \item If $m \in J_Y$ then $Y$ is contained in the support of $\mu_m$. \item If $m \in J_Y$ then $\iota(\partial Y, \mu_m) < {B_3}$. \item If $[m, n] \cap J_Y = \emptyset$ then $d_Y(\mu_m, \mu_n) < {B_3}$. \end{enumerate} \end{axiom}
\noindent There is a combinatorial path $\Gamma = \{ \gamma_i \}_{i = 0}^K \subset \mathcal{G}$ starting with $\alpha_X$ ending with $\beta_X$ and each $\gamma_i$ is contained in $X$. There is a strictly increasing reindexing function $r \colon [0, K] \to [0, N]$ with $r(0) = 0$ and $r(K) = N$.
\begin{axiom}[Combinatorial] \label{Ax:Combin}
There is a constant ${C_2}$ so that: \begin{itemize} \item $d_Y(\gamma_i, \mu_{r(i)}) < {C_2}$, for every $i \in [0, K]$ and
every hole $Y \subset X$, \item $d_\mathcal{G}(\gamma_i, \gamma_{i+1}) < {C_2}$, for every $i \in [0, K - 1]$. \end{itemize} \end{axiom}
\begin{axiom}[Replacement] \label{Ax:Replace} There is a constant ${C_4}$ so that: \begin{enumerate} \item If $Y \subset X$ is a hole and $r(i) \in J_Y$ then there is a vertex $\gamma' \in \mathcal{G}$ so that $\gamma'$ is contained in $Y$ and $d_\mathcal{G}(\gamma_i, \gamma') < {C_4}$. \item If $Z \subset X$ is a non-hole and $r(i) \in J_Z$ then there is a vertex $\gamma' \in \mathcal{G}$ so that $d_\mathcal{G}(\gamma_i, \gamma') < {C_4}$ and so that $\gamma'$ is contained in $Z$ or in $X {\smallsetminus} Z$. \end{enumerate} \end{axiom}
There is one axiom left: the axiom for straight intervals. This is given in the next subsection.
\subsection{Inductive, electric, shortcut and straight intervals}
We describe subintervals that arise in the partitioning of $[0, K]$. As discussed carefully in \refsec{Deductions}, we will choose a lower threshold ${L_1}(Y)$ for every essential $Y \subset X$ and a general upper threshold, ${L_2}$.
\begin{definition} \label{Def:Inductive} Suppose that $[i, j] \subset [0, K]$ is a subinterval of the combinatorial path. Then $[i, j]$ is an {\em inductive interval} associated to a hole $Y \subsetneq X$ if \begin{itemize} \item $r([i, j]) \subset J_Y$ (for paired $Y$ we require $r([i, j]) \subset
J_Y \cap J_{Y'}$) and \item $d_Y(\gamma_i, \gamma_j) \geq {L_1}(Y)$. \end{itemize} \end{definition}
When $X$ is the only relevant hole we have a simpler definition:
\begin{definition} \label{Def:Electric} Suppose that $[i, j] \subset [0, K]$ is a subinterval of the combinatorial path. Then $[i, j]$ is an {\em electric interval} if $d_Y(\gamma_i, \gamma_j) < {L_2}$ for all holes $Y \subsetneq X$. \end{definition}
Electric intervals will be further partitioned into shortcut and straight intervals.
\begin{definition} \label{Def:Shortcut} Suppose that $[p, q] \subset [0, K]$ is a subinterval of the combinatorial path. Then $[p, q]$ is a {\em shortcut} if \begin{itemize} \item $d_Y(\gamma_p, \gamma_q) < {L_2}$ for all holes $Y$, including
$X$ itself, and \item there is a non-hole $Z \subset X$ so that $r([p, q]) \subset J_Z$. \end{itemize} \end{definition}
\begin{definition} \label{Def:Straight} Suppose that $[p, q] \subset [0, K]$ is a subinterval of the combinatorial path and is contained in an electric interval $[i, j]$. Then $[p, q]$ is a {\em straight interval} if $d_Y(\mu_{r(p)}, \mu_{r(q)}) < {L_2}$ for all non-holes $Y$. \end{definition}
Our final axiom is:
\begin{axiom}[Straight] \label{Ax:Straight} There is a constant $A$ depending only on $X$ and $\mathcal{G}$ so that for every straight interval $[p, q]$: \[ d_\mathcal{G}(\gamma_p, \gamma_q) \mathbin{\leq_{A}} d_X(\gamma_p, \gamma_q) \] \end{axiom}
\subsection{Deductions from the axioms} \label{Sec:Deductions}
\refax{Marking} and \reflem{FirstReverse} imply that the reverse triangle inequality holds for projections of marking paths.
\begin{lemma} \label{Lem:Reverse} There is a constant ${C_1}$ so that \[ d_Y(\mu_m, \mu_n) + d_Y(\mu_n, \mu_p) < d_Y(\mu_m, \mu_p) + {C_1} \] for every essential $Y \subset X$ and for every $m < n < p$ in $[0, N]$. \qed \end{lemma}
We record three simple consequences of \refax{Access}.
\begin{lemma} \label{Lem:Access} There is a constant ${C_3}$, depending only on ${B_3}$, with the following properties: \begin{itemize} \item[(i)] If $Y$ is strictly nested in $Z$ and $m \in J_Y$ then
$d_Z(\partial Y,\mu_m) \leq {C_3}$. \item[(ii)] If $Y$ is strictly nested in $Z$ then for any $m, n \in
J_Y$, $d_Z(\mu_m, \mu_n) < {C_3}$. \item[(iii)] If $Y$ and $Z$ overlap then for any $m, n \in J_Y \cap
J_Z$ we have $d_Y(\mu_m, \mu_n), d_Z(\mu_m, \mu_n) < {C_3}$.
\end{itemize} \end{lemma}
\begin{proof} We first prove conclusion (i): Since $Y$ is strictly nested in $Z$ and since $Y$ is contained in the support of $\mu_m$ (part (1) of \refax{Access}), both $\partial Y$ and $\mu_m$ cut $Z$. By \refax{Access}, part (2), we have that $\iota(\partial Y, \mu_m) \leq {B_3}$. It follows that $\iota(\partial Y, \pi_Z(\mu_m)) \leq 2{B_3}$. By \reflem{Hempel} we deduce that $d_Z(\partial Y, \mu_m) \leq 2 \log_2 {B_3} + 3$. We take ${C_3}$ larger than this right-hand side.
Conclusion (ii) follows from a pair of applications of conclusion (i) and the triangle inequality.
For conclusion (iii): As in (ii), to bound $d_Z(\mu_m, \mu_n)$ it suffices to note that $\partial Y$ cuts $Z$ and that $\partial Y$ has bounded intersection with both of $\mu_m, \mu_n$. \end{proof}
We now have all of the constants ${C_1}, {C_2}, {C_3}, {C_4}$ in hand. Recall that $L_4$ is the pairing constant of \refdef{PairedHoles} and that $M_0$ is the constant of \refthm{BoundedGeodesicImage}. We must choose a lower threshold ${L_1}(Y)$ for every essential $Y \subset X$. We must also choose the general upper threshold ${L_2}$ and the general lower threshold ${L_0}$. We require, for all essential $Z, Y$ in $X$, with $\xi(Z) < \xi(Y) \leq \xi(X)$: \begin{gather} \label{Eqn:InductBigger}
{L_0} > {C_3} + 2{C_2} + 2L_4 \\ \label{Eqn:UpperBigger}
{L_2} > {L_1}(X) + 2L_4 + 6{C_1} + 2{C_2} + 14{C_3} +
10 \\ \label{Eqn:LowerBigger}
{L_1}(Y) > M_0 + 2{C_3} + 4{C_2} + 2L_4 +{L_0} \\ \label{Eqn:Order}
{L_1}(X) > {L_1}(Z) + 2{C_3} + 4{C_2} + 4L_4 \end{gather}
\section{Partition and the upper bound on distance} \label{Sec:Partition}
In this section we prove \refthm{UpperBound} by induction on $\xi(X)$. The first stage of the proof is to describe the {\em inductive partition}: we partition the given interval $[0, K]$ into inductive and electric intervals. The inductive partition is closely linked with the hierarchy machine~\cite{MasurMinsky00} and with the notion of antichains introduced in~\cite{RafiSchleimer09}.
We next give the {\em electric partition}: each electric interval is divided into straight and shortcut intervals. Note that the electric partition also gives the base case of the induction. We finally bound $d_\mathcal{G}(\alpha_X, \beta_X)$ from above by combining the contributions from the various intervals.
\subsection{Inductive partition}
We begin by identifying the relevant surfaces for the construction of the partition. We are given a hole $X$ for $\mathcal{G}$ and vertices $\alpha_X, \beta_X \in \mathcal{G}$ contained in $X$. Define \[ B_X = \{ Y \subsetneq X \mathbin{\mid} \mbox{$Y$ is a hole and~}
d_Y(\alpha_X, \beta_X) \geq {L_1}(X) \}. \] For any subinterval $[i,j] \subset [0, K]$ define \[ B_X(i, j) = \{ Y \in B_X \mathbin{\mid} d_Y(\gamma_i, \gamma_j) \geq {L_1}(X)\}. \]
We now partition $[0, K]$ into inductive and electric intervals. Begin with the partition consisting of a single part, $\mathcal{P}_X = \{ [0, K] \}$. Recursively, $\mathcal{P}_X$ is a partition of $[0, K]$ consisting of intervals which are either inductive, electric, or undetermined. Suppose that $[i, j] \in \mathcal{P}_X$ is undetermined.
\begin{proofclaim} If $B_X(i,j)$ is empty then $[i,j]$ is electric. \end{proofclaim}
\begin{proof} Since $B_X(i, j)$ is empty, every hole $Y \subsetneq X$ has either $d_Y(\gamma_i, \gamma_j) < {L_1}(X)$ or $Y \mathbin{\notin} B_X$. In the former case, as ${L_1}(X) < {L_2}$, we are done.
So suppose the latter holds. Now, by the reverse triangle inequality (\reflem{Reverse}), \[ d_Y(\mu_{r(i)}, \mu_{r(j)}) < d_Y(\mu_0, \mu_N) + 2{C_1}. \] Since $r(0) = 0$ and $r(K) = N$ we find: \[ d_Y(\gamma_i, \gamma_j) <
d_Y(\alpha_X, \beta_X) + 2{C_1} + 4{C_2}. \] Deduce that \[ d_Y(\gamma_i, \gamma_j) < {L_1}(X) + 2{C_1} + 4{C_2} < {L_2}. \] This completes the proof. \end{proof}
Thus if $B_X(i, j)$ is empty then $[i, j] \in \mathcal{P}_X$ is determined to be electric. Proceed on to the next undetermined element. Suppose instead that $B_X(i,j)$ is non-empty. Pick a hole $Y \in B_X(i,j)$ so that $Y$ has maximal $\xi(Y)$ amongst the elements of $B_X(i,j)$.
Let $p, q \in [i,j]$ be the first and last indices, respectively, so that $r(p), r(q) \in J_Y$. (If $Y$ is paired with $Y'$ then we take the first and last indices that, after reindexing, lie inside of $J_Y \cap J_{Y'}$.)
\begin{proofclaim} The indices $p, q$ are well-defined. \end{proofclaim}
\begin{proof} By assumption $d_Y(\gamma_i, \gamma_j) \geq {L_1}(X)$. By \refeqn{InductBigger}, \[ {L_1}(X) > {C_3}+2{C_2}. \] We deduce from \refax{Access} and \refax{Combin} that $J_Y \cap r([i, j])$ is non-empty. Thus, if $Y$ is not paired, the indices $p, q$ are well-defined.
Suppose instead that $Y$ is paired with $Y'$. Recall that measurements made in $Y$ and $Y'$ differ by at most the pairing constant $L_4$ given in \refdef{PairedHoles}. By (\ref{Eqn:LowerBigger}), $${L_1}(X) >{C_3} + 2{C_2} + 2L_4.$$ We deduce again from \refax{Access} that $J_{Y'} \cap r([i, j])$ is non-empty.
Suppose now, for a contradiction, that $J_Y \cap J_{Y'} \cap r([i, j])$ is empty. Define $$ h = \max \{ \ell \in [i, j] \mathbin{\mid} r(\ell) \in J_Y \}, \quad k = \min \{ \ell \in [i, j] \mathbin{\mid} r(\ell) \in J_{Y'} \} $$
Without loss of generality we may assume that $h < k$. It follows that $d_{Y'}(\gamma_i, \gamma_h) < {C_3} + 2{C_2}$. Thus $d_Y(\gamma_i, \gamma_h) < {C_3} + 2{C_2} + 2L_4$. Similarly, $d_Y(\gamma_h, \gamma_j) < {C_3} + 2{C_2}$. Deduce $$ d_Y(\gamma_i, \gamma_j) < 2{C_3} + 4{C_2} + 2L_4 < {L_1}(X), $$ the last inequality by (\ref{Eqn:LowerBigger}). This contradicts the assumption that $J_Y \cap J_{Y'} \cap r([i, j])$ is empty. \end{proof}
\begin{proofclaim} The interval $[p, q]$ is inductive for $Y$. \end{proofclaim}
\begin{proof} We must check that $d_Y(\gamma_p, \gamma_q) \geq {L_1}(Y)$. Suppose first that $Y$ is not paired. Then by the definition of $p, q$,
(2) of \refax{Access}, and the triangle inequality we have $$ d_Y(\mu_{r(i)}, \mu_{r(j)}) \leq d_Y(\mu_{r(p)}, \mu_{r(q)}) + 2{C_3}. $$ Thus by \refax{Combin}, $$ d_Y(\gamma_i, \gamma_j) \leq d_Y(\gamma_p, \gamma_q) + 2{C_3} + 4{C_2}. $$ Since by (\ref{Eqn:Order}), $$ {L_1}(Y) + 2{C_3} + 4{C_2} < {L_1}(X) \leq d_Y(\gamma_i, \gamma_j) $$ we are done.
When $Y$ is paired the proof is similar but we must use the slightly stronger inequality ${L_1}(Y) + 2{C_3} + 4{C_2} + 4L_4 < {L_1}(X)$. \end{proof}
Thus, when $B_X(i, j)$ is non-empty we may find a hole $Y$ and indices $p, q$ as above. In this situation, we subdivide the element $[i, j] \in \mathcal{P}_X$ into the elements $[i, p-1]$, $[p, q]$, and $[q+1, j]$. (The first or third intervals, or both, may be empty.) The interval $[p, q] \in \mathcal{P}_X$ is determined to be inductive and associated to $Y$. Proceed on to the next undetermined element. This completes the construction of $\mathcal{P}_X$.
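For readers who find pseudocode helpful, the recursion above can be summarized as follows. This is only an illustrative sketch, not part of the argument: the arguments \texttt{B\_X}, \texttt{d}, \texttt{xi}, \texttt{J} and \texttt{r} are hypothetical stand-ins for the set $B_X$, the distances $d_Y(\gamma_i, \gamma_j)$, the complexity $\xi(Y)$, the intervals $J_Y$ and the reindexing function, and the paired case is suppressed.

\begin{verbatim}
# Illustrative sketch only: B_X, d, xi, J and r are hypothetical stand-ins
# for the objects defined in the text; the paired case is suppressed.
def inductive_partition(K, B_X, L1_X, d, xi, J, r):
    undetermined = [(0, K)]
    partition = []
    while undetermined:
        i, j = undetermined.pop()
        B = [Y for Y in B_X if d(Y, i, j) >= L1_X]       # B_X(i, j)
        if not B:
            partition.append(("electric", (i, j)))
            continue
        Y = max(B, key=xi)                               # maximal complexity
        hits = [k for k in range(i, j + 1) if r(k) in J(Y)]
        p, q = hits[0], hits[-1]    # non-empty by the claim proved above
        partition.append(("inductive", (p, q), Y))
        if p > i:
            undetermined.append((i, p - 1))   # left-over pieces stay undetermined
        if q < j:
            undetermined.append((q + 1, j))
    return partition
\end{verbatim}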
As a bit of notation, if $[i, j] \in \mathcal{P}_X$ is associated to $Y \subset X$ we will sometimes write $I_Y = [i, j]$.
\subsection{Properties of the inductive partition}
\begin{lemma} \label{Lem:HolesContained} Suppose that $Y, Z$ are holes and $I_Z$ is an inductive element of $\mathcal{P}_X$ associated to $Z$. Suppose that $r(I_Z) \subset J_Y$ (or $r(I_Z) \subset J_Y \cap J_{Y'}$, if $Y$ is paired). Then \begin{itemize} \item $Z$ is nested in $Y$ or \item $Z$ and $Z'$ are paired and $Z'$ is nested in $Y$. \end{itemize} \end{lemma}
\begin{proof} Let $I_Z = [i, j]$. Suppose first that $Y$ is strictly nested in $Z$. Then by (ii) of \reflem{Access}, $d_Z(\mu_{r(i)}, \mu_{r(j)}) < {C_3}$. Then by Axiom \ref{Ax:Combin} \[ d_Z(\gamma_i, \gamma_j) < {C_3} + 2{C_2} < {L_1}(Z), \] a contradiction. We reach the same contradiction if $Y$ and $Z$ overlap using (iii) of \reflem{Access}.
Now, if $Z$ and $Y$ are disjoint then there are two cases: Suppose first that $Y$ is paired with $Y'$. Since all holes interfere, $Y'$ and $Z$ must meet. In this case we are done, just as in the previous paragraph.
Suppose now that $Z$ is paired with $Z'$. Since all holes interfere, $Z'$ and $Y$ must meet. If $Z'$ is nested in $Y$ then we are done. If $Y$ is strictly nested in $Z'$ then, as $r([i, j]) \subset J_Y$, we find, as above by \refax{Combin} and (ii) of \reflem{Access}, that $$d_{Z'}(\gamma_i, \gamma_j) < {C_3} + 2{C_2}$$ and so $d_Z(\gamma_i, \gamma_j) < {C_3} + 2{C_2} + 2L_4 < {L_1}(Z)$, a contradiction. We reach the same contradiction if $Y$ and $Z'$ overlap. \end{proof}
\begin{proposition} \label{Prop:AtMostThreeInductives} Suppose $Y \subsetneq X$ is a hole for $\mathcal{G}$. \begin{enumerate} \item If $Y$ is associated to an inductive interval $I_Y \in \mathcal{P}_X$ and $Y$ is paired with $Y'$ then $Y'$ is not associated to any inductive interval in $\mathcal{P}_X$. \item There is at most one inductive interval $I_Y \in \mathcal{P}_X$ associated to $Y$. \item There are at most two holes $Z$ and $W$, distinct from $Y$ (and from $Y'$, if $Y$ is paired) such that \begin{itemize} \item there are inductive intervals $I_Z = [h, i]$ and $I_W = [j, k]$ and \item $d_Y(\gamma_h, \gamma_i), d_Y(\gamma_j, \gamma_k) \geq {L_0}$. \end{itemize} \end{enumerate} \end{proposition}
\begin{remark} \label{Rem:AtMostThreeInductive} It follows that for any hole $Y$ there are at most three inductive intervals in the partition $\mathcal{P}_X$ where $Y$ has projection distance greater than ${L_0}$. \end{remark}
\begin{proof}[Proof of \refprop{AtMostThreeInductives}] To prove the first claim: Suppose that $I_Y = [p, q]$ and $I_{Y'} = [p', q']$ with $q < p'$. It follows that $[r(p), r(q')] \subset J_Y \cap J_{Y'}$. If $q + 1 = p'$ then the partition would have chosen a larger inductive interval for one of $Y$ or $Y'$. It must be the case that there is an inductive interval $I_Z \subset [q + 1, p' - 1]$ for some hole $Z$, distinct from $Y$ and $Y'$, with $\xi(Z) \geq \xi(Y)$. However, by \reflem{HolesContained} we find that $Z$ is nested in $Y$ or in $Y'$. It follows that $Z = Y$ or $Z = Y'$, a contradiction.
The second statement is essentially similar.
Finally suppose that $Z$ and $W$ are the first and last holes, if any, satisfying the hypotheses of the third claim. Since $d_Y(\gamma_h, \gamma_i) \geq {L_0}$ we find by \refax{Combin} that $$d_Y(\mu_{r(h)}, \mu_{r(i)}) \geq {L_0} - 2{C_2}.$$ By
(\ref{Eqn:InductBigger}), ${L_0} - 2{C_2} > {C_3}$ so that $$J_Y \cap r(I_Z) \neq \emptyset.$$ If $Y$ is paired then, since by (\ref{Eqn:InductBigger}) we have ${L_0} > {C_3} + 2{C_2} + 2L_4$, we also find that $J_{Y'} \cap r(I_Z) \neq \emptyset$. Symmetrically, $J_Y \cap r(I_W)$ (and $J_{Y'} \cap r(I_W)$) are also non-empty.
It follows that the interval between $I_Z$ and $I_W$, after reindexing, is contained in $J_Y$ (and $J_{Y'}$, if $Y$ is paired). Thus for any inductive interval $I_V = [p, q]$ between $I_Z$ and $I_W$ the associated hole $V$ is nested in $Y$ (or $V'$ is nested in $Y$), by \reflem{HolesContained}. If $V = Y$ or $V = Y'$ there is nothing to prove. Suppose instead that $V$ (or $V'$) is strictly nested in $Y$. It follows that $$d_Y(\gamma_p, \gamma_q) < {C_3} + 2{C_2} < {L_0}.$$ Thus there are no inductive intervals between $I_Z$ and $I_W$ satisfying the hypotheses of the third claim. \end{proof}
The following lemma and proposition bound the number of inductive intervals. The discussion here is very similar to (and in fact inspired) the {\em antichains} defined in~\cite[Section 5]{RafiSchleimer09}. Our situation is complicated by the presence of non-holes and interfering holes.
\begin{lemma} \label{Lem:PigeonForHoles} Suppose that $X, \alpha_X, \beta_X$ are given, as above. For any $\ell \geq (3 \cdot {L_2})^{\xi(X)}$, if $\{Y_i\}_{i = 1}^\ell$ is a collection of distinct strict sub-holes of $X$ each having $d_{Y_i}(\alpha_X, \beta_X)\geq {L_1}(X)$ then there is a hole $Z \subseteq X$ such that $d_Z(\alpha_X, \beta_X) \geq {L_2} - 1$ and $Z$ contains at least ${L_2}$ of the $Y_i$. Furthermore, for at least ${L_2} - 4({C_1} + {C_3} + 2{C_3} + 2)$ of these $Y_i$ we find that $J_{Y_i} \subsetneq J_Z$. (If $Z$ is paired then $J_{Y_i} \subsetneq J_Z \cap J_{Z'}$.) Each of these $Y_i$ is disjoint from a distinct vertex $\eta_i \in [ \pi_Z(\alpha_X), \pi_Z(\beta_X) ]$. \end{lemma}
\begin{proof} Let $g_X$ be a geodesic in $\mathcal{C}(X)$ joining $\alpha_X, \beta_X$. By the Bounded Geodesic Image Theorem (\refthm{BoundedGeodesicImage}), since ${L_1}(X) > M_0$, for every $Y_i$ there is a vertex $\omega_i\in g_X$ such that $Y_i\subset X {\smallsetminus} \omega_i$. Thus $d_X(\omega_i,\partial Y_i) \leq 1$. If there are at least ${L_2}$ distinct $\omega_i$, associated to distinct $Y_i$, then $d_X(\alpha_X, \beta_X) \geq {L_2} - 1$. In this situation we take $Z = X$. Since $J_X = [0, N]$ we are done.
Thus assume there do not exist at least ${L_2}$ distinct $\omega_i$. Then there is some fixed $\omega$ among these $\omega_i$ such that at least $\frac{\ell}{{L_2}}\geq 3 (3 \cdot {L_2})^{\xi(X)-1}$ of the $Y_i$ satisfy \[ Y_i \subset (X {\smallsetminus} \omega). \] Thus one component, call it $W$, of $X{\smallsetminus} \omega$ contains at least $(3 \cdot {L_2})^{\xi(X)-1}$ of the $Y_i$.
Let $g_W$ be a geodesic in $\mathcal{C}(W)$ joining $\alpha_W = \pi_W(\alpha_X)$ and $\beta_W=\pi_W(\beta_X)$. Notice that $$d_{Y_i}(\alpha_W, \beta_W) \geq d_{Y_i}(\alpha_X, \beta_X) - 8$$ because we are projecting to nested subsurfaces. This follows for example from \reflem{SubsurfaceProjectionLipschitz}. Hence $d_{Y_i}(\alpha_W, \beta_W) \geq {L_1}(W)$.
Again apply \refthm{BoundedGeodesicImage}. Since ${L_1}(W) > M_0$, for every remaining $Y_i$ there is a vertex $\eta_i \in g_W$ such that \[ Y_i \subset (W {\smallsetminus} \eta_i) \] If there are at least ${L_2}$ distinct $\eta_i$ then we take $Z = W$. Otherwise we repeat the argument. Since the complexity of each successive subsurface is decreasing by at least $1$, we must eventually find the desired $Z$ containing at least ${L_2}$ of the $Y_i$, each disjoint from distinct vertices of $g_Z$.
So suppose that there are at least ${L_2}$ distinct $\eta_i$ associated to distinct $Y_i$ and we have taken $Z = W$. Now we must find at least ${L_2} - 4({C_1} + {C_3} + 2{C_3} + 2)$ of these $Y_i$ where $J_{Y_i} \subsetneq J_Z$.
To this end we focus attention on a small subset $\{Y^j\}_{j = 1}^5 \subset \{Y_i\}$. Let $\eta_j$ be the vertex of $g_Z = g_W$ associated to $Y^j$. We choose these $Y^j$ so that \begin{itemize} \item the $\eta_j$ are arranged along $g_Z$ in order of index and \item $d_Z(\eta_j, \eta_{j + 1}) > {C_1} + {C_3} + 2{C_3} + 2$, for $j = 1, 2, 3, 4$. \end{itemize} This is possible by (\ref{Eqn:UpperBigger}) because \[ {L_2} > 4({C_1} + {C_3} + 2{C_3}). \] Set $J_j = J_{Y^j}$ and pick any indices $m_j \in J_j$. (If $Z$ is paired then $Y^j$ is as well and we pick $m_j \in J_{Y^j} \cap J_{(Y^j)'}$.) We use $\mu(m_j)$ to denote $\mu_{m_j}$. Since $\partial Y^j$ is disjoint from $\eta_j$, \refax{Access} and \reflem{Hempel} imply \begin{equation} \label{Eqn:Eta} d_Z(\mu(m_j), \eta_j) \leq {C_3} + 1. \end{equation}
Since the sequence $\pi_Z(\mu_n)$ satisfies the reverse triangle inequality (\reflem{Reverse}), it follows that the $m_j$ appear in $[0, N]$ in order agreeing with their index. The triangle inequality implies that \[ d_Z(\mu(m_1), \mu(m_2)) > {C_3}. \] Thus \refax{Access} implies that $J_Z \cap [m_1, m_2]$ is non-empty. Similarly, $J_Z \cap [m_4, m_5]$ is non-empty. It follows that $[m_2, m_4] \subset J_Z$. (If $Z$ is paired then, after applying the symmetry $\tau$ to $g_Z$, the same argument proves $[m_2, m_4] \subset J_{Z'}$.)
Notice that $J_2 \cap J_3 = \emptyset$. For if $m \in J_2 \cap J_3$ then by (\ref{Eqn:Eta}) both $d_Z(\mu_m, \eta_2)$ and $d_Z(\mu_m, \eta_3)$ are bounded by ${C_3} + 1$. It follows that \[ d_Z(\eta_2, \eta_3) < 2{C_3} + 2, \] a contradiction. Similarly $J_3 \cap J_4 = \emptyset$. We deduce that $J_3 \subsetneq [m_2, m_4] \subset J_Z$. (If $Z$ is paired $J_3 \subset J_Z \cap J_{Z'}$.) Finally, there are at least $$ {L_2} - 4({C_1} + {C_3} + 2{C_3} + 2)$$ possible $Y_i$'s which satisfy the hypothesis on $Y^3$. This completes the proof. \end{proof}
Define $$ \mathcal{P}_{\text{ind}} = \{ I \in \mathcal{P}_X \mathbin{\mid} \mbox{ $I$ is inductive}\}. $$
\begin{proposition} \label{Prop:NumberOfInductives} The number of inductive intervals is a lower bound for the projection distance in $X$: $$
d_X(\alpha_X, \beta_X) \geq \frac{|\mathcal{P}_{\text{ind}}|}{2(3 \cdot {L_2})^{\xi(X) - 1} + 1} - 1. $$ \end{proposition}
\begin{proof} Suppose, for a contradiction, that the conclusion fails. Let $g_X$ be a geodesic in $\mathcal{C}(X)$ connecting $\alpha_X$ to $\beta_X$. Then, as in the proof of \reflem{PigeonForHoles}, there is a vertex $\omega$ of $g_X$ and a component $W \subset X {\smallsetminus} \omega$ where at least $(3 \cdot {L_2})^{\xi(X) - 1}$ of the inductive intervals in $\mathcal{P}_{\text{ind}}$ have associated surfaces, $Y_i$, contained in $W$.
Since $\xi(X) - 1 \geq \xi(W)$ we may apply \reflem{PigeonForHoles} inside of $W$. So we find a surface $Z \subseteq W \subsetneq X$ so that \begin{itemize} \item $Z$ contains at least ${L_2}$ of the $Y_i$, \item $d_Z(\alpha_X, \beta_X) \geq {L_2}$, and \item there are at least ${L_2} - 4({C_1} + {C_3} + 2 {C_3} + 2)$
of the $Y_i$ where $J_{Y_i} \subsetneq J_Z$. \end{itemize} Since $Y_i \subsetneq Z$ and $Y_i$ is a hole, $Z$ is also a hole. Since ${L_2} > {L_1}(X)$ it follows that $Z \in B_X$. Let $\mathcal{Y} = \{ Y_i \}$ be the set of $Y_i$ satisfying the third bullet. Let $Y^1 \in \mathcal{Y}$ and $\eta_1 \in g_Z$ satisfy $\partial Y^1 \cap \eta_1 = \emptyset$ and $\eta_1$ is the first such. Choose $Y^2$ and $\eta_2$ similarly, so that $\eta_2$ is the last such. By \reflem{PigeonForHoles} \begin{equation} \label{Eqn:EtaBigger} d_Z(\eta_1,\eta_2) \geq L_2 - 4({C_1} + {C_3} + 2{C_3} + 2). \end{equation} Let $p = \min I_{Y^1}$ and $q = \max I_{Y^2}$. Note that $[p, q] \subset J_Z$. (If $Z$ is paired with $Z'$ then $[p, q] \subset J_Z \cap J_{Z'}$.) Again by (1) of \refax{Access}, and \reflem{Hempel}, \[ d_Z(\mu_{r(p)}, \partial Y^1) < {C_3}. \] It follows that \[ d_Z(\mu_{r(p)}, \eta_1) \leq {C_3} + 1 \] and the same bound applies to $d_Z(\mu_{r(q)}, \eta_2)$. Combined with (\ref{Eqn:EtaBigger}) we find that \[ d_Z(\mu_{r(p)}, \mu_{r(q)}) \geq {L_2} - 4{C_1} - 4{C_3} - 10{C_3} - 10. \] By the reverse triangle inequality (\reflem{Reverse}), for any $p' \leq p, q \leq q'$, \[ d_Z(\mu_{r(p')}, \mu_{r(q')}) \geq
{L_2} - 6{C_1} - 4{C_3} - 10 {C_3} - 10. \] Finally by \refax{Combin} and the above inequality we have \[ d_Z(\gamma_{p'}, \gamma_{q'}) \geq {L_2} - 6{C_1} - 4{C_3} - 10{C_3} - 10 - 2{C_2}. \] By (\ref{Eqn:UpperBigger}) the right-hand side is greater than ${L_1}(X) + 2L_4$ so we deduce that $Z \in B_X(p', q')$, for any such $p', q'$. (When $Z$ is paired deduce also that $Z' \in B_X(p', q')$.)
Let $I_V$ be the first inductive interval chosen by the procedure with the property that $I_V \cap [p, q] \neq \emptyset$. Note that, since $I_{Y^1}$ and $I_{Y^2}$ will also be chosen, $I_V \subset [p, q]$. Let $p', q'$ be the indices so that $V$ is chosen from $B_X(p', q')$. Thus $p' \leq p$ and $q \leq q'$. However, since $I_V \subset [p, q] \subset J_Z$, \reflem{HolesContained} implies that $V$ is strictly nested in $Z$. (When pairing occurs we may find instead that $V \subset Z'$ or $V' \subset Z$.) Thus $\xi(Z) > \xi(V)$ and we find that $Z$ would be chosen from $B_X(p', q')$, instead of $V$. This is a contradiction. \end{proof}
\subsection{Electric partition}
The goal of this subsection is to prove:
\begin{proposition} \label{Prop:Electric} There is a constant $A$ depending only on $\xi(X)$, so that: if $[i,j] \subset [0, K]$ is an electric interval then \[ d_\mathcal{G}(\gamma_i, \gamma_j) \mathbin{\leq_{A}} d_X(\gamma_i, \gamma_j). \] \end{proposition}
We begin by building a partition of $[i,j]$ into straight and shortcut intervals. Define $$ C_X = \{ Y \subsetneq X \mathbin{\mid} \mbox{$Y$ is a non-hole and } d_Y(\mu_{r(i)}, \mu_{r(j)}) \geq {L_1}(X) \}. $$ We also define, for all $[p, q] \subset [i, j]$ $$ C_X(p,q) = \{ Y \in C_X \mathbin{\mid} J_Y \cap [r(p), r(q)] \neq \emptyset \}.
$$
Our recursion starts with the partition consisting of a single part, $\mathcal{P}(i, j) = \{[i, j]\}$. Recursively, $\mathcal{P}(i, j)$ is a partition of $[i, j]$ into shortcut, straight, or undetermined intervals. Suppose that $[p, q] \in \mathcal{P}(i, j)$ is undetermined.
\begin{proofclaim} If $C_X(p, q)$ is empty then $[p, q]$ is straight. \end{proofclaim}
\begin{proof} We show the contrapositive. Suppose that $Y$ is a non-hole with $d_Y(\mu_{r(p)}, \mu_{r(q)}) \geq {L_2}$. Since ${L_2} > {C_3}$, \refax{Access} implies that $J_Y \cap [r(p), r(q)]$ is non-empty. Also, the reverse triangle inequality (\reflem{Reverse}) gives: $$ d_Y(\mu_{r(p)}, \mu_{r(q)}) < d_Y(\mu_{r(i)}, \mu_{r(j)}) + 2{C_1}. $$ Since ${L_2} > {L_1}(X) + 2{C_1}$, we find that $Y \in C_X$. It follows that $Y \in C_X(p,q)$. \end{proof}
So when $C_X(p, q)$ is empty the interval $[p, q]$ is determined to be straight. Proceed on to the next undetermined element of $\mathcal{P}(i, j)$. Now suppose that $C_X(p, q)$ is non-empty. Then we choose any $Y \in C_X(p,q)$ so that $Y$ has maximal $\xi(Y)$ amongst the elements of $C_X(p,q)$. Notice that, by the accessibility requirement, $J_Y \cap [r(p), r(q)]$ is non-empty.
There are two cases. If $J_Y \cap r([p, q])$ is empty then let $p' \in [p, q]$ be the largest integer so that $r(p') < \min J_Y$. Note that $p'$ is well-defined. Now divide the interval $[p, q]$ into the two undetermined intervals $[p, p']$, $[p' + 1, q]$. In this situation we say $Y$ is associated to a {\em shortcut of length one} and we add the element $[p' + \frac{1}{2}]$ to $\mathcal{P}(i, j)$.
Next suppose that $J_Y \cap r([p, q])$ is non-empty. Let $p', q' \in [p,q]$ be the first and last indices, respectively, so that $r(p'), r(q') \in J_Y$. (Note that it is possible to have $p' = q'$.) Partition $[p, q] = [p, p'-1] \cup [p', q'] \cup [q'+1, q]$. The first and third parts are undetermined; either may be empty. This completes the recursive construction of the partition.
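The analogous recursion for an electric interval, including the degenerate shortcut of length one, can be sketched in the same style. Again this is only an illustration under the same hypothetical stand-ins as before; the half-integer marker plays the role of the element $[p' + \frac{1}{2}]$ above, and the middle interval is recorded as the shortcut associated to $Y$.

\begin{verbatim}
# Illustrative sketch only; C_X, J, r and xi are hypothetical stand-ins as
# before.  Intervals are pairs of indices; a length-one shortcut is recorded
# by the half-integer marker p' + 1/2, as in the text.
def electric_partition(i, j, C_X, J, r, xi):
    undetermined = [(i, j)]
    partition = []
    while undetermined:
        p, q = undetermined.pop()
        C = [Y for Y in C_X
             if any(m in J(Y) for m in range(r(p), r(q) + 1))]   # C_X(p, q)
        if not C:
            partition.append(("straight", (p, q)))
            continue
        Y = max(C, key=xi)
        hits = [k for k in range(p, q + 1) if r(k) in J(Y)]
        if not hits:                                  # shortcut of length one
            s = max(k for k in range(p, q + 1) if r(k) < min(J(Y)))
            partition.append(("shortcut", s + 0.5, Y))
            undetermined.extend([(p, s), (s + 1, q)])
            continue
        pp, qq = hits[0], hits[-1]
        partition.append(("shortcut", (pp, qq), Y))   # the middle piece
        if pp > p:
            undetermined.append((p, pp - 1))
        if qq < q:
            undetermined.append((qq + 1, q))
    return partition
\end{verbatim}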
Define $$ \mathcal{P}_{\text{short}} = \{ I \in \mathcal{P}(i, j) \mathbin{\mid} \mbox{$I$ is a shortcut}\} $$ and $$ \mathcal{P}_{\text{str}} = \{ I \in \mathcal{P}(i, j) \mathbin{\mid} \mbox{$I$ is straight}\}. $$
\begin{proposition} \label{Prop:NumberOfShortcuts} With $\mathcal{P}(i, j)$ as defined above, $$
d_X(\gamma_i, \gamma_j) \geq \frac{|\mathcal{P}_{\text{short}}|}{2(3 \cdot {L_2})^{\xi(X) - 1} + 1} - 1. $$ \end{proposition}
\begin{proof} The proof is identical to that of \refprop{NumberOfInductives} with the caveat that in \reflem{PigeonForHoles} we must use the markings $\mu_{r(i)}$ and $\mu_{r(j)}$ instead of the endpoints $\gamma_i$ and $\gamma_j$. \end{proof}
Now we ``electrify'' every shortcut interval using \refthm{UpperBound} recursively.
\begin{lemma} \label{Lem:Shortcut} There is a constant ${L_3} = {L_3}(X, \mathcal{G})$, so that for every shortcut interval $[p, q]$ we have $d_\mathcal{G}(\gamma_p, \gamma_q) < {L_3}$. \end{lemma}
\begin{proof} As $[p,q]$ is a shortcut we are given a non-hole $Z \subset X$ so that $r([p,q]) \subset J_Z$. Let $Y = X {\smallsetminus} Z$. Thus \refax{Replace} gives vertices $\gamma_p', \gamma_q'$ of $\mathcal{G}$ lying in $Y$ or in $Z$, so that $d_\mathcal{G}(\gamma_p, \gamma_p'), d_\mathcal{G}(\gamma_q, \gamma_q') \leq {C_4}$.
If one of $\gamma_p', \gamma_q'$ lies in $Y$ while the other lies in $Z$ then \[ d_\mathcal{G}(\gamma_p, \gamma_q) < 2{C_4} + 1. \] If both lie in $Z$ then, as $Z$ is a non-hole, there is a vertex $\delta \in \mathcal{G}(S)$ disjoint from both of $\gamma_p'$ and $\gamma_q'$ and we have \[ d_\mathcal{G}(\gamma_p, \gamma_q) < 2{C_4} + 2. \] If both lie in $Y$ then there are two cases. If $Y$ is not a hole for $\mathcal{G}(S)$ then we are done as in the previous case. If $Y$ is a hole then by the definition of shortcut interval, \reflem{LipschitzToHoles}, and the triangle inequality we have \[ d_W(\gamma_p', \gamma_q') < 6 + 6{C_4} + {L_2} \] for all holes $W \subset Y$. Notice that $Y$ is strictly contained in $X$. Thus we may inductively apply \refthm{UpperBound} with $c = 6 + 6{C_4} + {L_2}$. We deduce that all terms on the right-hand side of the distance estimate vanish and thus $d_\mathcal{G}(\gamma_p', \gamma_q')$ is bounded by a constant depending only on $X$ and $\mathcal{G}$. The same then holds for $d_\mathcal{G}(\gamma_p, \gamma_q)$ and we are done. \end{proof}
We are now equipped to give:
\begin{proof}[Proof of \refprop{Electric}] Suppose that $\mathcal{P}(i, j)$ is the given partition of the electric interval $[i, j]$ into straight and shortcut subintervals. As a bit of notation, if $[p, q] = I \in \mathcal{P}(i, j)$, we take $d_\mathcal{G}(I) = d_\mathcal{G}(\gamma_p, \gamma_q)$ and $d_X(I) = d_X(\gamma_p, \gamma_q)$. Applying \refax{Combin} we have \begin{align} \label{Eqn:StraightUpperBound} d_\mathcal{G}(\gamma_i, \gamma_j) & \leq \sum_{I \in \mathcal{P}_{\text{str}}} d_\mathcal{G}(I)
+ \sum_{I \in \mathcal{P}_{\text{short}}} d_\mathcal{G}(I) + {C_2}|\mathcal{P}(i, j)| \end{align} The last term arises from connecting left endpoints of intervals with right endpoints. We must bound the three terms on the right.
We begin with the third; recall that $|\mathcal{P}(i, j)| = |\mathcal{P}_{\text{short}}| +
|\mathcal{P}_{\text{str}}|$, that $|\mathcal{P}_{\text{str}}| \leq |\mathcal{P}_{\text{short}}| + 1$,
and that $|\mathcal{P}_{\text{short}}| \mathbin{\leq_{A}} d_X(\gamma_i, \gamma_j)$. The second inequality follows from the construction of the partition while the last is implied by \refprop{NumberOfShortcuts}. Thus the third term of \refeqn{StraightUpperBound} is quasi-bounded above by $d_X(\gamma_i, \gamma_j)$.
By \reflem{Shortcut}, the second term of \refeqn{StraightUpperBound} is at most ${L_3}|\mathcal{P}_{\text{short}}|$. Finally, by \refax{Straight}, for all $I \in \mathcal{P}_{\text{str}}$ we have $$ d_\mathcal{G}(I) \mathbin{\leq_{A}} d_X(I). $$ Also, it follows from the reverse triangle inequality (\reflem{Reverse}) that \[ \sum_{I \in \mathcal{P}_{\text{str}}} d_X(I) \leq d_X(\gamma_i, \gamma_j) +
(2{C_1} + 2{C_2})|\mathcal{P}_{\text{str}}| + 2{C_2}. \] We deduce that $\sum_{I \in \mathcal{P}_{\text{str}}} d_\mathcal{G}(I)$ is also quasi-bounded above by $d_X(\gamma_i, \gamma_j)$. Thus for a somewhat larger value of $A$ we find \[ d_\mathcal{G}(\gamma_i, \gamma_j) \mathbin{\leq_{A}} d_X(\gamma_i, \gamma_j). \] This completes the proof. \end{proof}
\subsection{The upper bound}
We will need:
\begin{proposition} \label{Prop:Convert} For any $c > 0$ there is a constant $A$ with the following property. Suppose that $[i, j] = I_Y$ is an inductive interval in $\mathcal{P}_X$. Then we have: \[ d_\mathcal{G}(\gamma_i, \gamma_j) \mathbin{\leq_{A}} \sum_Z [d_Z(\gamma_i, \gamma_j)]_{c} \] where $Z$ ranges over all holes for $\mathcal{G}$ contained in $X$. \end{proposition}
\begin{proof} \refax{Replace} gives vertices $\gamma'_i$, $\gamma'_j \in \mathcal{G}$, contained in $Y$, so that $d_\mathcal{G}(\gamma_i, \gamma'_i) \leq {C_4}$ and the same holds for $j$. Since projection to holes is coarsely Lipschitz (\reflem{LipschitzToHoles}) for any hole $Z$ we have $d_Z(\gamma_i, \gamma'_i) \leq 3 + 3{C_3}$.
Fix any $c > 0$. Now, since \begin{align*} d_\mathcal{G}(\gamma_i, \gamma_j)
& \leq d_\mathcal{G}(\gamma'_i, \gamma'_j) + 2{C_3} \end{align*} to find the required constant $A$ it suffices to bound $d_\mathcal{G}(\gamma'_i, \gamma'_j)$. Let $c' = c + 6{C_3} + 6$. Since $Y \subsetneq X$, induction gives us a constant $A$ so that \begin{align*} d_\mathcal{G}(\gamma'_i, \gamma'_j)
& \mathbin{\leq_{A}} \sum_Z [d_Z(\gamma'_i, \gamma'_j)]_{c'} \\
& \leq \sum_Z [d_Z(\gamma_i, \gamma_j) + 6{C_3} + 6]_{c'} \\
& < (6{C_3} + 6)N + \sum_Z [d_Z(\gamma_i, \gamma_j)]_c \end{align*} where $N$ is the number of non-zero terms in the final sum. Also, the sum ranges over sub-holes of $Y$. We may take $A$ somewhat larger to deal with the term $(6{C_3} + 6)N$ and include all holes $Z \subset X$ to find \begin{align*} d_\mathcal{G}(\gamma_i, \gamma_j)
& \mathbin{\leq_{A}} \sum_Z [d_Z(\gamma_i, \gamma_j)]_c \end{align*} where the sum is over all holes $Z \subset X$.
\end{proof}
\subsection{Finishing the proof}
Now we may finish the proof of \refthm{UpperBound}. Fix any constant $c \geq 0$. Suppose that $X$, $\alpha_X$, $\beta_X$ are given as above. Suppose that $\Gamma = \{ \gamma_i \}_{i = 0}^K$ is the given combinatorial path and $\mathcal{P}_X$ is the partition of $[0, K]$ into inductive and electric intervals. So we have: \begin{align} \label{Eqn:UpperBound} d_\mathcal{G}(\alpha_X, \beta_X) & \leq \sum_{I \in \mathcal{P}_{\text{ind}}} d_\mathcal{G}(I) + \sum_{I \in
\mathcal{P}_{\text{ele}}} d_\mathcal{G}(I) + {C_2}|\mathcal{P}_X| \end{align} Again, the last term arises from adjacent right and left endpoints of different intervals.
We must bound the terms on the right-hand side; begin by noticing that
$|\mathcal{P}_X| = |\mathcal{P}_{\text{ind}}| + |\mathcal{P}_{\text{ele}}|$, $|\mathcal{P}_{\text{ele}}| \leq
|\mathcal{P}_{\text{ind}}| + 1$ and $|\mathcal{P}_{\text{ind}}| \mathbin{\leq_{A}} d_X(\alpha_X, \beta_X)$. The second inequality follows from the way the partition is constructed and the last follows from \refprop{NumberOfInductives}. Thus the third term of \refeqn{UpperBound} is quasi-bounded above by $d_X(\alpha_X, \beta_X)$.
Next consider the second term of \refeqn{UpperBound}: \begin{align*} \sum_{I \in \mathcal{P}_{\text{ele}}} d_\mathcal{G}(I)
& \mathbin{\leq_{A}} \sum_{I \in \mathcal{P}_{\text{ele}}} d_X(I) \\
& \leq d_X(\alpha_X, \beta_X) + (2{C_1} + 2{C_2})|\mathcal{P}_{\text{ele}}| + 2{C_2} \end{align*} with the first inequality following from \refprop{Electric} and the second from the reverse triangle inequality (\reflem{Reverse}).
Finally we bound the first term of \refeqn{UpperBound}. Let $c' = c + {L_0}$. Thus, \begin{align*} \sum_{I \in \mathcal{P}_{\text{ind}}} d_\mathcal{G}(I)
& \leq \sum_{I_Y \in \mathcal{P}_{\text{ind}}} \left( A'_Y \left( \sum_{Z
\subsetneq Y} [d_Z(I_Y)]_{c'} \right) + A'_Y \right) \\
& \leq A'' \left( \sum_{I \in \mathcal{P}_{\text{ind}}} \sum_{Z \subsetneq X} [d_Z(I)]_{c'} \right)
+ A'' \cdot |\mathcal{P}_{\text{ind}}| \\
& \leq A'' \left( \sum_{Z \subsetneq X} \sum_{I \in \mathcal{P}_{\text{ind}}} [d_Z(I)]_{c'} \right)
+ A'' \cdot |\mathcal{P}_{\text{ind}}| \end{align*} Here $A'_Y$ and the first inequality are given by \refprop{Convert}. Also $A'' = \max \{ A'_Y \mathbin{\mid} Y \subsetneq X \}$. In the last line, each sum of the form $\sum_{I \in \mathcal{P}_{\text{ind}}} [d_Z(I)]_{c'}$ has at most three terms, by \refrem{AtMostThreeInductive} and the fact that $c' > {L_0}$. For the moment, fix a hole $Z$ and any three elements $I, I', I'' \in \mathcal{P}_{\text{ind}}$.
By the reverse triangle inequality (\reflem{Reverse}) we find that \[ d_Z(I) + d_Z(I') + d_Z(I'') < d_Z(\alpha_X, \beta_X) + 6{C_1} + 8{C_2} \] which in turn is less than $d_Z(\alpha_X, \beta_X) + {L_0}$.
It follows that \[ [d_Z(I)]_{c'} + [d_Z(I')]_{c'} + [d_Z(I'')]_{c'} < [d_Z(\alpha_X,
\beta_X)]_c + {L_0}. \] Thus, \begin{align*} \sum_{Z \subsetneq X} \sum_{I \in \mathcal{P}_{\text{ind}}} [d_Z(I)]_{c'} & \leq {L_0} \cdot N + \sum_{Z \subsetneq X} [d_Z(\alpha_X, \beta_X)]_c \end{align*} where $N$ is the number of non-zero terms in the final sum. Also, the sum ranges over all holes $Z \subsetneq X$.
Combining the above inequalities, and increasing $A$ once again, implies that \[ d_\mathcal{G}(\alpha_X, \beta_X) \mathbin{\leq_{A}} \sum_Z [d_Z(\alpha_X, \beta_X)]_c \] where the sum ranges over all holes $Z \subseteq X$. This completes the proof of \refthm{UpperBound}. \qed
\section{Background on {Teichm\"uller~} space} \label{Sec:BackgroundTeich}
Our goal in Sections~\ref{Sec:PathsNonorientable}, \ref{Sec:PathsArc} and~\ref{Sec:PathsDisk} will be to verify the axioms stated in \refsec{Axioms} for the complex of curves of a non-orientable surface, for the arc complex, and for the disk complex. Here we give the necessary background on {Teichm\"uller~} space.
Fix now a surface $S = S_{g,n}$ of genus $g$ with $n$ punctures. Two conformal structures on $S$ are equivalent, written $\Sigma \sim \Sigma'$, if there is a conformal map $f \colon \Sigma \to \Sigma'$ which is isotopic to the identity. Let $\mathcal{T} = \mathcal{T}(S)$ be the {\em {Teichm\"uller~} space} of $S$: the set of equivalence classes of conformal structures $\Sigma$ on $S$.
Define the {Teichm\"uller~} metric by, \[ d_\mathcal{T}(\Sigma,\Sigma') =
\inf_f \left\{ \frac{1}{2} \log K(f) \right\} \] where the infimum ranges over all quasiconformal maps $f \colon \Sigma \to \Sigma'$ isotopic to the identity and where $K(f)$ is the maximal dilatation of $f$. Recall that the infimum is realized by a {Teichm\"uller~} map that, in turn, may be defined in terms of a quadratic differential.
\subsection{Quadratic differentials}
\begin{definition} A {\em quadratic differential} $q(z)\,dz^2$ on $\Sigma$ is an assignment of a holomorphic function to each coordinate chart that is a disk and of a meromorphic function to each chart that is a punctured disk. If $z$ and $\zeta$ are overlapping charts then we require \[ q_z(z) = q_\zeta(\zeta) \left(\frac{d\zeta}{dz}\right)^2 \] in the intersection of the charts. The meromorphic function $q_z(z)$ has at most a simple pole at the puncture $z = 0$. \end{definition}
At any point away from the zeroes and poles of $q$ there is a natural coordinate $z = x + iy$ with the property that $q_z \equiv 1$. In this natural coordinate the foliation by lines $y = c$ is called the {\em horizontal foliation}. The foliation by lines $x = c$ is called the {\em vertical foliation}.
Now fix a quadratic differential $q$ on $\Sigma = \Sigma_0$. Let $x, y$ be natural coordinates for $q$. For every $t \in \mathbb{R}$ we obtain a new quadratic differential $q_t$ with coordinates \[ x_t = e^{t} x, \qquad y_t = e^{-t} y. \] Also, $q_t$ determines a conformal structure $\Sigma_t$ on $S$. The map $t \mapsto \Sigma_t$ is the {Teichm\"uller~} geodesic determined by $\Sigma$ and $q$.
\subsection{Marking coming from a {Teichm\"uller~} geodesic} \label{Sec:MarkingFromTeich}
Suppose that $\Sigma$ is a Riemann surface structure on $S$ and $\sigma$ is the uniformizing hyperbolic metric in the conformal class of $\Sigma$. In a slight abuse of terminology, we call the collection of shortest simple non-peripheral closed geodesics the {\em systoles} of $\sigma$. Fix a constant $\epsilon$ smaller than the Margulis constant. The $\epsilon$--thick part of {Teichm\"uller~} space consists of those Riemann surfaces such that the hyperbolic systole has length at least $\epsilon$.
We define $P = P(\sigma)$, a {\em Bers pants decomposition} of $S$, as follows: pick $\alpha_1$, any systole for $\sigma$. Define $\alpha_i$ to be any systole of $\sigma$ restricted to $S {\smallsetminus} (\alpha_1 \cup \ldots \cup \alpha_{i - 1})$. Continue in this fashion until $P$ is a pants decomposition. Note that any curve with length less than the Margulis constant will necessarily be an element of $P$.
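Schematically, this greedy construction looks as follows. The sketch is purely illustrative: the helpers \texttt{systole} and \texttt{cut} are hypothetical stand-ins for choosing a shortest essential non-peripheral geodesic and for cutting along it, and \texttt{xi\_S} denotes $\xi(S)$, the number of curves in a pants decomposition.

\begin{verbatim}
# Schematic sketch only: `systole` and `cut` are hypothetical stand-ins for
# the geometric operations described in the text; xi_S is xi(S), the number
# of curves in a pants decomposition of S.
def bers_pants_decomposition(sigma, S, xi_S, systole, cut):
    P = []
    complement = S
    while len(P) < xi_S:
        alpha = systole(sigma, complement)   # shortest essential curve left
        P.append(alpha)
        complement = cut(complement, alpha)
    return P
\end{verbatim}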
Suppose that $\Sigma, \Sigma' \in \mathcal{T}(S)$. Suppose that $P, P'$ are Bers pants decompositions with respect to $\Sigma$ and $\Sigma'$. Suppose also that $d_\mathcal{T}(\Sigma, \Sigma') \leq 1$. Then the curves in $P$ have uniformly bounded lengths in $\Sigma'$ and conversely. By the Collar Lemma, the intersection $\iota(P, P')$ is bounded, solely in terms of $\xi(S)$.
Suppose now that $\{ \Sigma_t \mathbin{\mid} t \in [-M, M] \}$ is the {Teichm\"uller~} geodesic defined by the quadratic differentials $q_t$. Let $\sigma_t$ be the hyperbolic metric uniformizing $\Sigma_t$. Let $P_t = P(\sigma_t)$ be a Bers pants decomposition.
We now find transversals in order to complete $P_t$ to a {\em Bers marking} $\nu_t$. Suppose that $P_t = \{ \alpha_i \}$. For each $i$, let $A^i$ be the annular cover of $S$ corresponding to $\alpha_i$. Note that $q_t$ lifts to a singular Euclidean metric $q^i_t$ on $A^i$. Let $\alpha^i$ be a geodesic representative of the core curve of $A^i$ with respect to the metric $q_t^i$. Choose $\gamma_i \in \mathcal{C}(A^i)$ to be any geodesic arc, also with respect to $q^i_t$, that is perpendicular to $\alpha^i$. Let $\beta_i$ be any curve in $S {\smallsetminus} (\{ \alpha_j \}_{j \neq i})$ which meets $\alpha_i$ minimally and so that $d_{A^i}(\beta_i, \gamma_i) \leq 3$. (See the discussion after the proof of Lemma~2.4 in~\cite{MasurMinsky00}.) Doing this for each $i$ gives a complete clean marking $\nu_t = \{ \alpha_i \} \cup \{ \beta_i \}$.
We now have:
\begin{lemma} \cite[Remark 6.2 and Equation (3)]{Rafi10} There is a constant $B_0 = B_0(S)$ with the following property. For any {Teichm\"uller~} geodesic and for any time $t$, there is a constant
$\delta > 0$ so that if $|t - s| \leq \delta$ then \[ \iota(\nu_t, \nu_s) < B_0. \] \end{lemma}
Suppose that $\Sigma_t$ and $\Sigma_s$ are surfaces in the $\epsilon$--thick part of $\mathcal{T}(S)$. We take $B_0$ sufficiently large so that if $\iota(\nu_t, \nu_s) \geq B_0$ then $d_\mathcal{T}(\Sigma_t, \Sigma_s) \geq 1$.
\subsection{The marking axiom}
We construct a sequence of markings $\mu_n$, for $n \in [0, N] \subset \mathbb{N}$, as follows. Take $\mu_0 = \nu_{-M}$. Now suppose that $\mu_n = \nu_t$ is defined. Let $s > t$ be the first time that there is a marking with $\iota(\nu_t, \nu_s) \geq B_0$, if such a time exists. If so, let $\mu_{n+1} = \nu_s$. If no such time exists take $N = n$ and we are done.
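Before checking the axioms, we record a schematic version of this subsampling. The sketch is only illustrative: \texttt{times} is a fine discretization of $[-M, M]$ standing in for continuous time, and \texttt{intersection(t, s)} is a hypothetical stand-in for $\iota(\nu_t, \nu_s)$.

\begin{verbatim}
# Illustrative sketch only: `times` is a fine discretization of [-M, M] and
# `intersection(t, s)` is a hypothetical stand-in for i(nu_t, nu_s).
def marking_times(times, intersection, B0):
    chosen = [times[0]]          # mu_0 = nu_{-M}
    current = times[0]
    for s in times[1:]:
        if intersection(current, s) >= B0:
            chosen.append(s)     # first time the intersection reaches B0
            current = s
    return chosen                # the times t with mu_n = nu_t
\end{verbatim}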
We now show that $\mu_n = \nu_t$ and $\mu_{n+1} = \nu_s$ have bounded intersection. By the above lemma there is a marking $\nu_r$ with $t \leq r < s$ and \[ \iota(\nu_r,\nu_s) \leq B_0. \] By construction \[ \iota(\nu_t, \nu_r) < B_0. \] Since intersection number bounds distance in the marking complex we find that by the triangle inequality, $\nu_t$ and $\nu_s$ are bounded distance in the marking complex. Conversely, since distance bounds intersection in the marking complex we find that $\iota(\mu_n,\mu_{n+1})$ is bounded. It follows that $d_Y(\mu_n, \mu_{n+1})$ is uniformly bounded, independent of $Y \subset S$ and of $n \in [0, N]$.
It now follows from Theorem 6.1 of \cite{Rafi10} that, for any subsurface $Y \subset S$, the sequence $\{ \pi_Y(\mu_n) \} \subset \mathcal{C}(Y)$ is an unparameterized quasi-geodesic. Thus the marking path $\{ \mu_n \}$ satisfies the second requirement of \refax{Marking}. The first requirement is trivial as every $\mu_n$ fills $S$.
\subsection{The accessibility axiom}
We now turn to \refax{Access}. Since $\mu_n$ fills $S$ for every $n$, the first requirement is a triviality.
In Section 5 of \cite{Rafi10} Rafi defines, for every subsurface $Y \subset S$, an {\em interval of isolation} $I_Y$ inside of the parameterizing interval of the {Teichm\"uller~} geodesic. Note that $I_Y$ is defined purely in terms of the geometry of the given quadratic differentials. Further, for all $t \in I_Y$ and for all components $\alpha \subset \partial Y$ the hyperbolic length of $\alpha$ in $\Sigma_t$ is less than the Margulis constant. Furthermore, by Theorem~5.3~\cite{Rafi10}, there is a constant ${B_3}$ so that if $[s,t] \cap I_Y = \emptyset$ then \[ d_Y(\nu_s, \nu_t) \leq {B_3}. \] So define $J_Y \subset [0, N]$ to be the subinterval of the marking path where the time corresponding to $\mu_n$ lies in $I_Y$. The third requirement follows. Finally, if $m \in J_Y$ then $\partial Y$ is contained in $\operatorname{base}(\mu_m)$ and thus $\iota(\partial Y, \mu_m) \leq 2
\cdot |\partial Y|$.
\subsection{The distance estimate in {Teichm\"uller~} space}
We end this section by quoting another result of Rafi:
\begin{theorem}\cite[Theorem~2.4]{Rafi10} \label{Thm:TeichDistanceEstimate} Fix a surface $S$ and a constant $\epsilon > 0$. There is a constant ${C_0} = {C_0}(S, \epsilon)$ so that for any $c > {C_0}$ there is a constant $A$ with the following property. Suppose that $\Sigma$ and $\Sigma'$ lie in the $\epsilon$--thick part of $\mathcal{T}(S)$. Then \[ d_\mathcal{T}(\Sigma, \Sigma') \mathbin{=_A}
\sum_X [d_X(\mu, \mu')]_c + \sum_\alpha [\log d_\alpha(\mu, \mu')]_c \] where $\mu$ and $\mu'$ are Bers markings on $\Sigma$ and $\Sigma'$, $X \subset S$ ranges over non-annular surfaces and $\alpha$ ranges over vertices of $\mathcal{C}(S)$. \qed \end{theorem}
\section{Paths for the non-orientable surface} \label{Sec:PathsNonorientable}
Fix $F$ a compact, connected, and non-orientable surface. Let $S$ be the orientation double cover with covering map $\rho_F \colon S \to F$. Let $\tau \colon S \to S$ be the associated involution. Note that $\mathcal{C}(F) = \mathcal{C}^\tau(S)$. Let $\mathcal{C}^\tau(S) \to \mathcal{C}(S)$ be the relation sending a symmetric multicurve to its components.
Our goal for this section is to prove \reflem{SymmetricSurfaces}, the classification of holes for $\mathcal{C}(F)$. As remarked above, \reflem{InvariantQI} and \refcor{NonorientableCurveComplexHyperbolic} follow, proving the hyperbolicity of $\mathcal{C}(F)$.
\subsection{The marking path}
We will use the extreme rigidity of {Teichm\"uller~} geodesics to find $\tau$--invariant marking paths. We first show that $\tau$--invariant Bers pants decompositions exist.
\begin{lemma} \label{Lem:TauInvar} Fix a $\tau$--invariant hyperbolic metric $\sigma$. Then there is a Bers pants decomposition $P = P(\sigma)$ which is $\tau$--invariant. \end{lemma}
\begin{proof} Let $P_0 = \emptyset$. Suppose that $0 \leq k < \xi(S)$ curves have been chosen to form $P_k$. By induction we may assume that $P_k$ is $\tau$--invariant. Let $Y$ be a component of $S {\smallsetminus} P_k$ with $\xi(Y) \geq 1$. Note that since $\tau$ is orientation reversing, $\tau$ does not fix any boundary component of $Y$.
Pick any systole $\alpha$ for $Y$.
\begin{proofclaim} Either $\tau(\alpha) = \alpha$ or $\alpha \cap \tau(\alpha) = \emptyset$. \end{proofclaim}
\begin{proof} Suppose not and take $p \in \alpha \cap \tau(\alpha)$. Then $\tau(p) \in \alpha \cap \tau(\alpha)$ as well, and, since $\tau$ has no fixed points, $p \neq \tau(p)$. The points $p$ and $\tau(p)$ divide $\alpha$ into segments $\beta$ and $\gamma$. Since $\tau$ is an isometry, we have \[ \ell_\sigma(\tau(\alpha)) = \ell_\sigma(\alpha) \quad \mbox{and} \quad \ell_\sigma(\tau(\beta)) = \ell_\sigma(\beta). \] Now concatenate to obtain (possibly immersed) loops \[ \beta' = \beta*\tau(\beta) \quad \mbox{and} \quad \gamma' = \gamma*\tau(\gamma). \]
If $\beta'$ is null-homotopic then $\alpha \cup \tau(\alpha)$ cuts a monogon or a bigon out of $S$, contradicting our assumption that $\alpha$ was a geodesic. Suppose, by way of contradiction, that $\beta'$ is homotopic to some boundary curve $b \subset \partial Y$. Since $\tau(\beta') = \beta'$, it follows that $\tau(b)$ and $\beta'$ are also homotopic. Thus $b$ and $\tau(b)$ cobound an annulus, implying that $Y$ is an annulus, a contradiction. The same holds for $\gamma'$.
Let $\beta''$ and $\gamma''$ be the geodesic representatives of $\beta'$ and $\gamma'$. Since $\beta$ and $\tau(\beta)$ meet transversely, $\beta''$ has length in $\sigma$ strictly smaller than $2\ell_\sigma(\beta)$. Similarly the length of $\gamma''$ is strictly smaller than $2\ell_\sigma(\gamma)$. Suppose that $\beta''$ is shorter than $\gamma''$. It follows that $\beta''$ is strictly shorter than $\alpha$. If $\beta''$ is embedded then this contradicts the assumption that $\alpha$ was shortest. If $\beta''$ is not embedded then there is an embedded curve $\beta'''$ inside of a regular neighborhood of $\beta''$ which is again essential, non-peripheral, and has geodesic representative shorter than $\beta''$. This is our final contradiction and the claim is proved. \end{proof}
Thus, if $\tau(\alpha) = \alpha$ we let $P_{k+1} = P_k \cup \{ \alpha \}$ and we are done. If $\tau(\alpha) \neq \alpha$ then by the above claim $\tau(\alpha) \cap \alpha = \emptyset$. In this case let $P_{k+2} = P_k \cup \{ \alpha, \tau(\alpha) \}$ and \reflem{TauInvar} is proved. \end{proof}
Transversals are chosen with respect to a quadratic differential metric. Suppose that $\alpha, \beta \in \mathcal{C}^\tau(S)$. If $\alpha$ and $\beta$ do not fill $S$ then we may replace $S$ by the support of their union. Following Thurston~\cite{Thurston88} there exists a square-tiled quadratic differential $q$ with squares associated to the points of $\alpha \cap \beta$. (See~\cite{Bowditch06} for an analysis of how the square-tiled surface relates to paths in the curve complex.) Let $q_t$ be the image of $q$ under the {Teichm\"uller~} geodesic flow. We have:
\begin{lemma} \label{Lem:TauIsom} $\tau^*q_t = q_t$. \end{lemma}
\begin{proof} Note that $\tau$ preserves $\alpha$ and also $\beta$. Since $\tau$ permutes the points of $\alpha \cap \beta$ it permutes the rectangles of the singular Euclidean metric $q_t$ while preserving their vertical and horizontal foliations. Thus $\tau$ is an isometry of the metric and the conclusion follows. \end{proof}
We now choose the {Teichm\"uller~} geodesic $\{ \Sigma_t \mathbin{\mid} t \in [-M, M] \}$ so that the hyperbolic length of $\alpha$ is less than the Margulis constant in $\sigma_{-M}$ and the same holds for $\beta$ in $\sigma_M$. Also, $\alpha$ is the shortest curve in $\sigma_{-M}$, and similarly $\beta$ is the shortest curve in $\sigma_M$.
\begin{lemma} Fix $t$. There are transversals for $P_t$ which are close to being $q_t$--perpendicular and which are $\tau$--invariant. \end{lemma}
\begin{proof} Let $P = P_t$ and fix $\alpha \in P$. Let $X = S {\smallsetminus} (P {\smallsetminus} \alpha)$. There are two cases: either $\tau(X) \cap X = \emptyset$ or $\tau(X) = X$. Suppose the former. So we choose any transversal $\beta \subset X$ close to being $q_t$--perpendicular and take $\tau(\beta)$ to be the transversal to $\tau(\alpha)$.
Suppose now that $\tau(X) = X$. It follows that $X$ is a four-holed sphere. The quotient $X/\tau$ is homeomorphic to a twice-holed $\mathbb{RP}^2$. Therefore there are only four essential non-peripheral curves in $X/\tau$. Two of these are cores of M\"obius bands and the other two are their doubles. The cores meet in a single point. Perforce $\alpha$ is the double cover of one core and we take $\beta$ the double cover of the other.
It remains only to show that $\beta$ is close to being $q_t$--perpendicular. Let $S^\alpha$ be the annular cover of $S$ and lift $q_t$ to $S^\alpha$. Let $\perp$ be the set of $q_t^\alpha$--perpendiculars. This is a $\tau$-invariant diameter one subset of $\mathcal{C}(S^\alpha)$. If $d_\alpha(\perp, \beta)$ is large then it follows that $d_\alpha(\perp, \tau(\beta))$ is also large. Also, $\tau(\beta)$ twists in the opposite direction from $\beta$. Thus \[ d_\alpha(\beta, \tau(\beta)) - 2d_\alpha(\perp, \beta) = O(1) \] and so $d_\alpha(\beta, \tau(\beta))$ is large, contradicting the fact that $\beta$ is $\tau$--invariant. \end{proof}
Thus $\tau$--invariant markings exist; these have bounded intersection with the markings constructed in \refsec{BackgroundTeich}. It follows that the resulting marking path satisfies the marking path and accessibility requirements, Axioms~\ref{Ax:Marking} and~\ref{Ax:Access}.
\subsection{The combinatorial path}
As in \refsec{BackgroundTeich} break the interval $[-M, M]$ into short subintervals and produce a sequence of $\tau$-invariant markings $\{ \mu_n \}_{n = 0}^N$. To choose the combinatorial path, pick $\gamma_n \in \operatorname{base}(\mu_n)$ so that $\gamma_n$ is a $\tau$--invariant curve or pair of curves and so that $\gamma_n$ is shortest in $\operatorname{base}(\mu_n)$.
We now check the combinatorial path requirements given in \refax{Combin}. Note that $\gamma_0 = \alpha$, $\gamma_N = \beta$; also the reindexing map is the identity.
Since \[ \iota(\gamma_n, \mu_{r(n)}) = \iota(\gamma_n, \mu_n) = 2 \] the first requirement is satisfied. Since $\mu_n$ and $\mu_{n+1}$ have bounded intersection, the same holds for $\gamma_n$ and $\gamma_{n+1}$. Projection to $F$, surgery, and \reflem{Hempel} imply that $d_{\mathcal{C}^\tau}(\gamma_n, \gamma_{n+1})$ is uniformly bounded. This verifies \refax{Combin}.
\subsection{The classification of holes}
We now finish the classification of large holes for $\mathcal{C}^\tau(S)$. Fix $L_0 > 3{C_3} + 2{C_2} + 2{C_1}$. Note that these constants are available because we have verified the axioms that give them.
\begin{lemma} \label{Lem:SymmetricSurfaces} Suppose that $\alpha, \beta \in \mathcal{C}^\tau(S)$. Suppose that $X \subset S$ has $d_X(\alpha, \beta) > L_0$. Then $X$ is symmetric. \end{lemma}
\begin{proof} Let $(\Sigma_t, q_t)$ be the {Teichm\"uller~} geodesic defined above and let $\sigma_t$ be the uniformizing hyperbolic metric. Since $L_0 > {C_3} + 2{C_2}$ it follows from the accessibility requirement that $J_X = [m, n]$ is non-empty. Now for all $t$ in the interval of isolation $I_X$ \[ \ell_{\sigma_t}(\delta) < \epsilon, \] where $\delta$ is any component of $\partial X$ and $\epsilon$ is the Margulis constant. Let $Y = \tau(X)$. Since $\tau$ is an isometry (\reflem{TauIsom}) and since the interval of isolation is metrically defined we have $I_Y = I_X$ and thus $J_Y = J_X$. Deduce that $\partial Y$ is also short in $\sigma_t$. This implies that $\partial X \cap \partial Y = \emptyset$. If $X$ and $Y$ overlap then by (iii) of \reflem{Access} we have \[ d_X(\mu_m, \mu_n) < {C_3} \] and so, by the triangle inequality and two applications of (2) of \refax{Access}, we have \[ d_X(\mu_0, \mu_N) < 3{C_3}. \] By the combinatorial axiom it follows that \[ d_X(\alpha, \beta) < 3{C_3} + 2{C_2}, \] a contradiction. Deduce that either $X = Y$ or $X \cap Y = \emptyset$, as desired. \end{proof}
As noted in \refsec{HolesNonorientable} this shows that the only hole for $\mathcal{C}^\tau(S)$ is $S$ itself. Thus all holes trivially interfere, verifying \refax{Holes}.
\subsection{The replacement axiom}
We now verify \refax{Replace} for subsurfaces $Y \subset S$ with $d_Y(\alpha, \beta) \geq L_0$. (We may ignore all subsurfaces with smaller projection by taking ${L_1}(Y) > L_0$.)
By \reflem{SymmetricSurfaces} the subsurface $Y$ is symmetric. If $Y$ is a hole then $Y = S$ and the first requirement is vacuous. Suppose that $Y$ is not a hole. Suppose that $\gamma_n$ is such that $n \in J_Y$. Thus $\gamma_n \in \operatorname{base}(\mu_n)$. All components of $\partial Y$ are also pants curves in $\mu_n$. It follows that we may take any symmetric curve in $\partial Y$ to be $\gamma'$ and we are done.
\subsection{On straight intervals}
Lastly we verify \refax{Straight}. Suppose that $[p, q]$ is a straight interval. We must show that $d_{\mathcal{C}^\tau}(\gamma_p, \gamma_q) \leq d_S(\gamma_p, \gamma_q)$. Suppose that $\mu_p = \nu_s$ and $\mu_q = \nu_t$; that is, $s$ and $t$ are the times when $\mu_p, \mu_q$ are short markings. Thus $d_X(\mu_p, \mu_q) \leq {L_2}$ for every $X \subsetneq S$. This implies that the {Teichm\"uller~} geodesic, along the straight interval, lies in the thick part of {Teichm\"uller~} space.
Notice that $d_{\mathcal{C}^\tau}(\gamma_p, \gamma_q) \leq {C_2}|p - q|$, since for all $i \in [p, q - 1]$, $d_{\mathcal{C}^\tau}(\gamma_i,
\gamma_{i+1}) \leq {C_2}$. So it suffices to bound $|p - q|$. By our choice of $B_0$ and because the {Teichm\"uller~} geodesic lies in the thick part we find that $|p - q| \leq d_\mathcal{T}(\Sigma_s, \Sigma_t)$. Rafi's distance estimate (\refthm{TeichDistanceEstimate}) gives: \[ d_\mathcal{T}(\Sigma_s, \Sigma_t) \mathbin{=_A} d_S(\nu_s, \nu_t). \] Since $\nu_s = \mu_p$, $\nu_t = \mu_q$, and since $\gamma_p \in \operatorname{base}(\mu_p)$, $\gamma_q \in \operatorname{base}(\mu_q)$ deduce that \[ d_S(\mu_p, \mu_q) \leq d_S(\gamma_p, \gamma_q) + 4. \] This verifies \refax{Straight}. Thus the distance estimate holds for $\mathcal{C}^\tau(S) = \mathcal{C}(F)$. Since there is only one hole for $\mathcal{C}(F)$ we deduce that the map $\mathcal{C}(F) \to \mathcal{C}(S)$ is a quasi-isometric embedding. As a corollary we have:
\begin{theorem} \label{Thm:NonOrientableCCHyperbolic} The curve complex $\mathcal{C}(F)$ is Gromov hyperbolic. \qed \end{theorem}
\section{Paths for the arc complex} \label{Sec:PathsArc}
Here we verify that our axioms hold for the arc complex $\mathcal{A}(S, \Delta)$. It is worth pointing out that the axioms may be verified using {Teichm\"uller~} geodesics, train track splitting sequences, or resolutions of hierarchies. Here we use the former because it also generalizes to the non-orientable case; this is discussed at the end of this section.
First note that \refax{Holes} follows from \reflem{ArcComplexHolesIntersect}.
\subsection{The marking path}
We are given a pair of arcs $\alpha, \beta \in \mathcal{A}(X, \Delta)$. Recall that $\sigma_S \colon \mathcal{A}(X) \to \mathcal{C}(X)$ is the surgery map, defined in \refdef{SurgeryRel}. Let $\alpha' = \sigma_S(\alpha)$ and define $\beta'$ similarly. Note that $\alpha'$ cuts a pants off of $S$. As usual, we may assume that $\alpha'$ and $\beta'$ fill $X$. If not we pass to the subsurface they do fill.
As in the previous sections let $q$ be the quadratic differential determined by $\alpha'$ and $\beta'$. Exactly as above, fix a marking path $\{ \mu_n \}_{n = 0}^N$. This path satisfies the marking and accessibility axioms (\ref{Ax:Marking}, \ref{Ax:Access}).
\subsection{The combinatorial path}
Let $Y_n \subset X$ be any component of $X {\smallsetminus} \operatorname{base}(\mu_n)$ meeting $\Delta$. So $Y_n$ is a pair of pants. Let $\gamma_n$ be any essential arc in $Y_n$ with both endpoints in $\Delta$. Since $\alpha' \subset \operatorname{base}(\mu_0)$ and $\beta' \subset \operatorname{base}(\mu_N)$ we may choose $\gamma_0 = \alpha$ and $\gamma_N = \beta$.
As in the previous section the reindexing map is the identity. It follows immediately that $\iota(\gamma_n, \mu_n) \leq 4$. This bound, the bound on $\iota(\mu_n, \mu_{n+1})$, and \reflem{BoundedProjectionImpliesBoundedIntersection} imply that $\iota(\gamma_n, \gamma_{n+1})$ is likewise bounded. The usual surgery argument shows that if two arcs have bounded intersection then they have bounded distance. This verifies \refax{Combin}.
\subsection{The replacement and the straight axioms}
Suppose that $Y \subset X$ is a subsurface and $\gamma_n$ has $n \in J_Y$. Let $\mu_n = \nu_t$; that is, $t$ is the time when $\mu_n$ is a short marking. Thus $\partial Y \subset \operatorname{base}(\mu_n)$ and so $\gamma_n \cap \partial Y = \emptyset$. So, regardless of whether or not $Y$ is a hole, we may take $\gamma' = \gamma_n$ and the axiom is verified.
\refax{Straight} is verified exactly as in \refsec{PathsNonorientable}.
\subsection{Non-orientable surfaces}
Suppose that $F$ is non-orientable and $\Delta_F$ is a collection of boundary components. Let $S$ be the orientation double cover and $\tau \colon S \to S$ the involution so that $S/\tau = F$. Let $\Delta$ be the preimage of $\Delta_F$. Then $\mathcal{A}^\tau(S, \Delta)$ is the invariant arc complex.
Suppose that $\alpha_F, \beta_F$ are vertices in $\mathcal{A}(F, \Delta')$. Let $\alpha, \beta$ be their preimages. As above, without loss of generality, we may assume that $\sigma_F(\alpha_F)$ and $\sigma_F(\beta_F)$ fill $F$. Note that $\sigma_F(\alpha_F)$ cuts a surface $X$ off of $F$. The surface $X$ is either a pants or a twice-holed $\mathbb{RP}^2$. When $X$ is a pants we define $\alpha' \subset S$ to be the preimage of $\sigma_F(\alpha_F)$. When $X$ is a twice-holed $\mathbb{RP}^2$ we take $\gamma_F$ to be a core of one of the two M\"obius bands contained in $X$ and we define $\alpha'$ to be the preimage of $\gamma_F \cup \sigma_F(\alpha_F)$. We define $\beta'$ similarly. Notice that $\alpha$ and $\alpha'$ meet in at most four points.
We now use $\alpha'$ and $\beta'$ to build a $\tau$--invariant {Teichm\"uller~} geodesic. The construction of the marking and combinatorial paths for $\mathcal{A}^\tau(S, \Delta)$ is unchanged. Notice that we may choose combinatorial vertices because $\operatorname{base}(\mu_n)$ is $\tau$--invariant. There is a small annoyance: when $X$ is a twice-holed $\mathbb{RP}^2$ the first vertex, $\gamma_0$, is disjoint from but not equal to $\alpha$. Strictly speaking, the first and last vertices are $\gamma_0$ and $\gamma_N$; our constants are stated in terms of their subsurface projection distances. However, since $\alpha \cap \gamma_0 = \emptyset$, and the same holds for $\beta$, $\gamma_N$, their subsurface projection distances are all bounded.
\section{Background on train tracks} \label{Sec:BackgroundTrainTracks}
Here we give the necessary definitions and theorems regarding train tracks. The standard reference is~\cite{PennerHarer92}. See also~\cite{Mosher03}. We follow closely the discussion found in~\cite{MasurEtAl10}.
\subsection{On tracks}
A {\em generic train track} $\tau \subset S$ is a smooth, embedded trivalent graph. As usual we call the vertices {\em switches} and the edges {\em branches}. At every switch the tangents of the three branches agree. Also, there are exactly two {\em incoming} branches and one {\em outgoing} branch at each switch. See Figure~\ref{Fig:TrainTrackModel} for the local model of a switch.
\begin{figure}
\caption{The local model of a train track.}
\label{Fig:TrainTrackModel}
\end{figure}
Let $\mathcal{B}(\tau)$ be the set of branches. A transverse measure on $\tau$ is a function $w \colon \mathcal{B}(\tau) \to \mathbb{R}_{\geq 0}$ satisfying the switch conditions: at every switch the sum of the incoming measures equals the outgoing measure. Let $P(\tau)$ be the projectivization of the cone of transverse measures. Let $V(\tau)$ be the vertices of $P(\tau)$. As discussed in the references, each vertex measure gives a simple closed curve carried by $\tau$.
For every track $\tau$ we refer to $V(\tau)$ as the marking corresponding to $\tau$ (see \refsec{Markings}). Note that there are only finitely many tracks up to the action of the mapping class group. It follows that $\iota(V(\tau))$ is uniformly bounded, depending only on the topological type of $S$.
If $\tau$ and $\sigma$ are train tracks, and $Y \subset S$ is an essential surface, then define \[ d_Y(\tau, \sigma) = d_Y(V(\tau), V(\sigma)). \] We also adopt the notation $\pi_Y(\tau) = \pi_Y(V(\tau))$.
A train track $\sigma$ is obtained from $\tau$ by {\em sliding} if $\sigma$ and $\tau$ are related as in Figure~\ref{Fig:Slide}. We say that a train track $\sigma$ is obtained from $\tau$ by {\em splitting} if $\sigma$ and $\tau$ are related as in Figure~\ref{Fig:Split}.
\begin{figure}
\caption{All slides take place in a small regular neighborhood of the
affected branch.}
\label{Fig:Slide}
\end{figure}
\begin{figure}
\caption{There are three kinds of splitting: right, left, and central.}
\label{Fig:Split}
\end{figure}
Again, since the number of tracks is bounded (up to the action of the mapping class group) if $\sigma$ is obtained from $\tau$ by either a slide or a split we find that $\iota(V(\tau), V(\sigma))$ is uniformly bounded.
\subsection{The marking path}
We will use sequences of train tracks to define our marking path.
\begin{definition} \label{Def:SplittingSequence} A {\em sliding and splitting sequence} is a collection $\{ \tau_n \}_{n = 0}^N$ of train tracks so that $\tau_{n+1}$ is obtained from $\tau_n$ by a slide or a split. \end{definition}
The sequence $\{ \tau_n \}$ gives a sequence of markings via the map $\tau_n \mapsto V_n = V(\tau_n)$. Note that the support of $V_{n+1}$ is contained within the support of $V_n$ because every vertex of $\tau_{n+1}$ is carried by $\tau_n$. Theorem 5.5 of~\cite{MasurEtAl10} verifies the remaining half of \refax{Marking}.
\begin{theorem} \label{Thm:TrainTrackUnparamGeodesic} Fix a surface $S$. There is a constant $A$ with the following property. Suppose that $\{ \tau_n \}_{n=0}^N$ is a sliding and splitting sequence in $S$ of birecurrent tracks. Suppose that $Y \subset S$ is an essential surface. Then the map $n \mapsto \pi_Y(\tau_n)$, as parameterized by splittings, is an $A$--unparameterized quasi-geodesic. \qed \end{theorem}
Note that, when $Y = S$, \refthm{TrainTrackUnparamGeodesic} is essentially due to the first author and Minsky; see Theorem~1.3 of~\cite{MasurMinsky04}.
In Section 5.2 of~\cite{MasurEtAl10}, for every sliding and splitting sequence $\{ \tau_n \}_{n = 0}^N$ and for any essential subsurface $X \subsetneq S$ an accessible interval $I_X \subset [0,N]$ is defined. \refax{Access} is now verified by Theorem 5.3 of~\cite{MasurEtAl10}.
\subsection{Quasi-geodesics in the marking graph}
We will also need Theorem~6.1 from~\cite{MasurEtAl10}. (See~\cite{Hamenstadt09} for closely related work.)
\begin{theorem} \label{Thm:TrainTrackQuasi} Fix a surface $S$. There is a constant $A$ with the following property. Suppose that $\{ \tau_n \}_{n=0}^N$ is a sliding and splitting sequence of birecurrent tracks, injective on slide subsequences, where $V_N$ fills $S$. Then $\{ V(\tau_n) \}$ is an $A$--quasi-geodesic in the marking graph. \qed \end{theorem}
\section{Paths for the disk complex} \label{Sec:PathsDisk}
Suppose that $V = V_g$ is a genus $g$ handlebody. The goal of this section is to verify the axioms of \refsec{Axioms} for the disk complex $\mathcal{D}(V)$ and so complete the proof of the distance estimate.
\begin{theorem} \label{Thm:DiskComplexDistanceEstimate} There is a constant ${C_0} = {C_0}(V)$ so that, for any $c \geq {C_0}$ there is a constant $A$ with \[ d_\mathcal{D}(D, E) \mathbin{=_A} \sum [d_X(D, E)]_c \] independent of the choice of $D$ and $E$. Here the sum ranges over the set of holes $X \subset \partial V$ for the disk complex. \end{theorem}
\subsection{Holes}
The fact that all large holes interfere is recorded above as \reflem{DiskComplexHolesInterfere}. This verifies \refax{Holes}.
\subsection{The combinatorial path}
Suppose that $D, E \in \mathcal{D}(V)$ are disks contained in a compressible hole $X \subset S = \partial V$. As usual we may assume that $D$ and $E$ fill $X$. Recall that $V(\tau)$ is the set of vertices for the track $\tau \subset X$. We now appeal to a result of the first author and Minsky, found in~\cite{MasurMinsky04}.
\begin{theorem} \label{Thm:DiskSurgerySequence} There exists a surgery sequence of disks $\{ D_i \}_{i=0}^K$, a sliding and splitting sequence of birecurrent tracks $\{ \tau_n \}_{n=0}^N$, and a reindexing function $r \colon [0, K] \to [0, N]$ so that \begin{itemize} \item $D_0 = D$, \item $E \in V_N$, \item $D_i \cap D_{i+1} = \emptyset$ for all $i$, and \item $\iota(\partial D_i, V_{r(i)})$ is uniformly bounded for all $i$. \end{itemize} \qed \end{theorem}
\begin{remark} For the details of the proof we refer to~\cite{MasurMinsky04}. Note that the double-wave curve replacements of that paper are not needed here; as $X$ is a hole, no curve of $\partial X$ compresses in $V$. It follows that consecutive disks in the surgery sequence are disjoint (as opposed to meeting at most four times). Also, in the terminology of~\cite{MasurEtAl10}, the disk $D_i$ is a {\em wide dual} for the track $\tau_{r(i)}$. Finally, recurrence of $\tau_n$ follows because $E$ is fully carried by $\tau_N$. Transverse recurrence follows because $D$ is fully dual to $\tau_0$. \end{remark}
Thus $V_n$ will be our marking path and $D_i$ will be our combinatorial path. The requirements of \refax{Combin} are now verified by \refthm{DiskSurgerySequence}.
\subsection{The replacement axiom}
We turn to \refax{Replace}. Suppose that $Y \subset X$ is an essential subsurface and $D_i$ has $r(i) \in J_Y$. Let $n = r(i)$. From \refthm{DiskSurgerySequence} we have that $\iota(\partial D_i, V_n)$ is uniformly bounded. By \refax{Access} we have $Y \subset \operatorname{supp}(V_n)$ and $\iota(\partial Y, \mu_n)$ is bounded. It follows that there is a constant $K$ depending only on $\xi(S)$ so that \[ \iota(\partial D_i, \partial Y) < K. \]
Isotope $D_i$ to have minimal intersection with $\partial Y$. As in \refsec{CompressionSequences} boundary compress $D_i$ as much as possible into the components of $X {\smallsetminus} \partial Y$ to obtain a disk $D'$ so that either \begin{itemize} \item $D'$ cannot be boundary compressed any more into $X {\smallsetminus} \partial Y$ or \item $D'$ is disjoint from $\partial Y$. \end{itemize} We may arrange matters so that every boundary compression reduces the intersection with $\partial Y$ by at least a factor of two. Thus: \[ d_\mathcal{D}(D_i, D') \leq \log_2(K). \]
Suppose now that $Y$ is a compressible hole. By \reflem{XCompressibleImpliesBdyXCompressible} we find that $\partial D' \subset Y$ and we are done.
Suppose now that $Y$ is an incompressible hole. Since $Y$ is large there is an $I$-bundle $T \to F$, contained in the handlebody $V$, so that $Y$ is a component of $\partial_h T$. Isotope $D'$ to minimize intersection with $\partial_v T$. Let $\Delta$ be the union of components of $\partial_v T$ which are contained in $\partial V$. Let $\Gamma = \partial_v T {\smallsetminus} \Delta$. Notice that all intersections $D' \cap \Gamma$ are essential arcs in $\Gamma$: simple closed curves are ruled out by minimal intersection and inessential arcs are ruled out by the fact that $D'$ cannot be boundary compressed in the complement of $\partial Y$. Let $D''$ be an outermost component of $D' {\smallsetminus} \Gamma$. Then \reflem{BdyIncompImpliesVertical} implies that $D''$ is isotopic in $T$ to a vertical disk.
If $D'' = D'$ then we may replace $D_i$ by the arc $\rho_F(D')$. The inductive argument now occurs inside of the arc complex $\mathcal{A}(F, \rho_F(\Delta))$.
Suppose that $D'' \neq D'$. Let $A \in \Gamma$ be the vertical annulus meeting $D''$. Let $N$ be a regular neighborhood of $D'' \cup A$, taken in $T$. Then the frontier of $N$ in $T$ is again a vertical disk, call it $D'''$. Note that $\iota(D''', D') < K - 1$. Finally, replace $D_i$ by the arc $\rho_F(D''')$.
Suppose now that $Y$ is not a hole. Then some component $S {\smallsetminus} Y$ is compressible. Applying \reflem{XCompressibleImpliesBdyXCompressible} again, we find that either $D'$ lies in $Z = X {\smallsetminus} Y$ or in $Y$. This completes the verification of \refax{Replace}.
\subsection{Straight intervals}
We end by checking \refax{Straight}. Suppose that $[p, q] \subset [0, K]$ is a straight interval. Recall that $d_Y(\mu_{r(p)}, \mu_{r(q)})
< {L_2}$ for all strict subsurfaces $Y \subset X$. We must check that $d_\mathcal{D}(D_p, D_q) \mathbin{\leq_{A}} d_X(D_p, D_q)$. Since $d_\mathcal{D}(D_p, D_q) \leq {C_2} |p - q|$ it is enough to bound $|p - q|$. Note that
$|p - q| \leq |r(p) - r(q)|$ because the reindexing map is increasing. Now, $|r(p) - r(q)| \mathbin{\leq_{A}} d_{\mathcal{M}(X)}(\mu_{r(p)}, \mu_{r(q)})$ because the sequence $\{ \mu_n \}$ is a quasi-geodesic in $\mathcal{M}(X)$ (\refthm{TrainTrackQuasi}). Increasing $A$ as needed and applying \refthm{MarkingGraphDistanceEstimate} we have \[ d_\mathcal{M}(\mu_{r(p)}, \mu_{r(q)}) \mathbin{\leq_{A}}
\sum_Y [d_Y(\mu_{r(p)}, \mu_{r(q)})]_{L_2} \] and the right hand side is thus less than $d_X(\mu_{r(p)}, \mu_{r(q)})$ which in turn is less than $d_X(D_p, D_q) + 2{C_2}$. This completes our discussion of \refax{Straight} and finishes the proof of \refthm{DiskComplexDistanceEstimate}.
\section{Hyperbolicity} \label{Sec:Hyperbolicity}
The ideas in this section are related to the notion of ``time-ordered domains'' and to the hierarchy machine of~\cite{MasurMinsky00} (see also Chapters~4 and~5 of Behrstock's thesis~\cite{Behrstock04}). As remarked above, we cannot use those tools directly as the hierarchy machine is too rigid to deal with the disk complex.
\subsection{Hyperbolicity}
We prove:
\begin{theorem} \label{Thm:GIsHyperbolic} Fix $\mathcal{G} = \mathcal{G}(S)$, a combinatorial complex. Suppose that $\mathcal{G}$ satisfies the axioms of \refsec{Axioms}. Then $\mathcal{G}$ is Gromov hyperbolic. \end{theorem}
As corollaries we have
\begin{theorem} \label{Thm:ArcComplexHyperbolic} The arc complex is Gromov hyperbolic. \qed \end{theorem}
\begin{theorem} \label{Thm:DiskComplexHyperbolic} The disk complex is Gromov hyperbolic. \qed \end{theorem}
In fact, \refthm{GIsHyperbolic} follows quickly from:
\begin{theorem} \label{Thm:GoodPathsGiveSlimTriangles} Fix $\mathcal{G}$, a combinatorial complex. Suppose that $\mathcal{G}$ satisfies the axioms of \refsec{Axioms}. Then for all $A \geq 1$ there exists $\delta \geq 0$ with the following property: Suppose that $T \subset \mathcal{G}$ is a triangle of paths where the projection of any side of $T$ into any hole is an $A$--unparameterized quasi-geodesic. Then $T$ is $\delta$--slim. \end{theorem}
\begin{proof}[Proof of \refthm{GIsHyperbolic}] As laid out in \refsec{Partition} there is a uniform constant $A$ so that for any pair $\alpha, \beta \in \mathcal{G}$ there is a recursively constructed path $\mathcal{P} = \{\gamma_i\} \subset \mathcal{G}$ so that \begin{itemize} \item for any hole $X$ for $\mathcal{G}$, the projection $\pi_X(\mathcal{P})$ is an $A$--un\-pa\-ram\-e\-ter\-ized quasi-geodesic and \item
$|\mathcal{P}| \mathbin{=_A} d_\mathcal{G}(\alpha, \beta)$. \end{itemize}
So if $\alpha \cap \beta = \emptyset$ then $|\mathcal{P}|$ is uniformly short. Also, by \refthm{GoodPathsGiveSlimTriangles}, triangles made of such paths are uniformly slim. Thus, by \refthm{HyperbolicityCriterion}, $\mathcal{G}$ is Gromov hyperbolic. \end{proof}
The rest of this section is devoted to proving \refthm{GoodPathsGiveSlimTriangles}.
\subsection{Index in a hole}
For the following definitions, we assume that $\alpha$ and $\beta$ are fixed vertices of $\mathcal{G}$.
For any hole $X$ and for any geodesic $k \subset \mathcal{C}(X)$ connecting a point of $\pi_X(\alpha)$ to a point of $\pi_X(\beta)$ we also define
$\rho_k \colon \mathcal{G} \to k$ to be the relation $\pi_X|\mathcal{G} \colon \mathcal{G} \to \mathcal{C}(X)$ followed by taking closest points in $k$. Since the diameter of $\rho_k(\gamma)$ is uniformly bounded, we may simplify our formulas by treating $\rho_k$ as a function. Define $\operatorname{index}_X \colon \mathcal{G} \to \mathbb{N}$ to be the {\em index} in $X$: \[ \operatorname{index}_X(\sigma) = d_X(\alpha, \rho_k(\sigma)). \]
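For orientation, and with all statements holding only up to a uniformly bounded additive error, note that $\operatorname{index}_X(\alpha) = 0$ while $\operatorname{index}_X(\beta) = d_X(\alpha, \beta)$: one endpoint of $k$ lies in $\pi_X(\alpha)$, the other lies in $\pi_X(\beta)$, and both of these sets have uniformly bounded diameter. Thus $\operatorname{index}_X(\sigma)$ records how far the shadow of $\sigma$ has travelled along $k$, away from $\alpha$ and towards $\beta$.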
\begin{remark} \label{Rem:ChoiceOfGeodesic} Suppose that $k'$ is a different geodesic connecting $\pi_X(\alpha)$ to $\pi_X(\beta)$ and $\operatorname{index}'_X$ is defined with respect to $k'$. Then \[
|\operatorname{index}_X(\sigma) - \operatorname{index}'_X(\sigma)| \leq 17\delta + 4 \] by \reflem{MovePoint} and \reflem{MoveGeodesic}. Thus, up to a small additive error, the index depends only on $\alpha$, $\beta$, and $\sigma$, and not on the choice of the geodesic $k$. \end{remark}
\subsection{Back and sidetracking}
Fix $\sigma, \tau \in \mathcal{G}$. We say $\sigma$ {\em precedes} $\tau$ {\em by at least} $K$ in $X$ if \[ \operatorname{index}_X(\sigma) + K \leq \operatorname{index}_X(\tau). \]
We say $\sigma$ precedes $\tau$ {\em by at most} $K$ if the inequality is reversed. If $\sigma$ precedes $\tau$ then we say $\tau$ {\em succeeds} $\sigma$.
Now take $\mathcal{P} = \{\sigma_i\}$ to be a path in $\mathcal{G}$ connecting $\alpha$ to $\beta$. Recall that we have made the simplifying assumption that $\sigma_i$ and $\sigma_{i+1}$ are disjoint.
We formalize a pair of properties enjoyed by unparameterized quasi-geodesics. The path $\mathcal{P}$ {\em backtracks} at most $K$ if for every hole $X$ and all indices $i < j$ we find that $\sigma_j$ precedes $\sigma_i$ by at most $K$. The path $\mathcal{P}$ {\em sidetracks} at most $K$ if for every hole $X$ and every index $i$ we find that \[ d_X(\sigma_i, \rho_k(\sigma_i)) \leq K, \] for some geodesic $k$ connecting a point of $\pi_X(\alpha)$ to a point of $\pi_X(\beta)$.
\begin{remark} \label{Rem:ChoiceOfGeodesicTwo} As in Remark~\ref{Rem:ChoiceOfGeodesic}, allowing a small additive error makes irrelevant the choice of geodesic in the definition of sidetracking.
We note that, if $\mathcal{P}$ has bounded sidetracking, one may freely use in calculation whichever of $\sigma_i$ or $\rho_k(\sigma_i)$ is more convenient. \end{remark}
\subsection{Projection control}
We say domains $X, Y \subset S$ {\em overlap} if $\partial X$ cuts $Y$ and $\partial Y$ cuts $X$. The following lemma, due to Behrstock~\cite[4.2.1]{Behrstock04}, is closely related to the notion of {\em time ordered} domains~\cite{MasurMinsky00}. An elementary proof is given in~\cite[Lemma~2.5]{Mangahas10}.
\begin{lemma} \label{Lem:Zugzwang} There is a constant ${M_1} = {M_1}(S)$ with the following property. Suppose that $X, Y$ are overlapping non-simple domains. If $\gamma \in \mathcal{AC}(S)$ cuts both $X$ and $Y$ then either $d_X(\gamma, \partial Y) < {M_1}$ or $d_Y(\partial X, \gamma) < {M_1}$. \qed \end{lemma}
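Informally, \reflem{Zugzwang} says that a curve cannot simultaneously project far from $\partial Y$ inside $X$ and far from $\partial X$ inside $Y$; this is the basic mechanism behind the time ordering of domains mentioned above.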
We also require a more specialized version for the case where $X$ and $Y$ are nested.
\begin{lemma} \label{Lem:ZugzwangTwo} There is a constant ${M_2} = {M_2}(S)$ with the following property. Suppose that $X \subset Y$ are nested non-simple domains. Fix $\alpha, \beta, \gamma \in \mathcal{AC}(S)$ that cut $X$. Fix $k = [\alpha', \beta'] \subset \mathcal{C}(Y)$, a geodesic connecting a point of $\pi_Y(\alpha)$ to a point of $\pi_Y(\beta)$. Assume that $d_X(\alpha, \beta) \geq M_0$, the constant given by \refthm{BoundedGeodesicImage}. If $d_X(\alpha, \gamma) \geq {M_2}$ then $$\operatorname{index}_Y(\partial X) - 4 \leq \operatorname{index}_Y(\gamma).$$ Symmetrically, we have $$\operatorname{index}_Y(\gamma) \leq \operatorname{index}_Y(\partial X) + 4$$ if $d_X(\gamma, \beta) \geq {M_2}$. \qed \end{lemma}
\subsection{Finding the midpoint of a side}
Fix $A \geq 1$. Let $\mathcal{P}, \mathcal{Q}, \mathcal{R}$ be the sides of a triangle in $\mathcal{G}$ with vertices at $\alpha, \beta, \gamma$. We assume that each of $\mathcal{P}$, $\mathcal{Q}$, and $\mathcal{R}$ are $A$--unparameterized quasi-geodesics when projected to any hole.
Recall that $M_0 = M_0(S)$, ${M_1} = {M_1}(S)$, and ${M_2} = {M_2}(S)$ are functions depending only on the topology of $S$. We may assume that if $T \subset S$ is an essential subsurface, then $M_0(S) > M_0(T)$.
Now choose ${K_1} \geq \max \{ M_0, 4{M_1}, {M_2}, 8 \} + 8\delta$ sufficiently large so that any $A$--unparameterized quasi-geodesic in any hole backtracks and sidetracks at most ${K_1}$.
\begin{claim} \label{Clm:PrecedesSucceedsExclusive} If $\sigma_i$ precedes $\gamma$ in $X$ and $\sigma_j$ succeeds $\gamma$ in $Y$, both by at least $2{K_1}$, then $i < j$. \end{claim}
\begin{proof} To begin, as $X$ and $Y$ are holes and all holes interfere, we need not consider the possibility that $X \cap Y = \emptyset$. If $X = Y$ we immediately deduce that $$\operatorname{index}_X(\sigma_i) + 2{K_1} \leq \operatorname{index}_X(\gamma) \leq \operatorname{index}_X(\sigma_j) - 2{K_1}.$$ Thus $\operatorname{index}_X(\sigma_i) + 4{K_1} \leq \operatorname{index}_X(\sigma_j)$. Since $\mathcal{P}$ backtracks at most ${K_1}$ we have $i < j$, as desired.
Suppose instead that $X \subset Y$. Since $\sigma_i$ precedes $\gamma$ in $X$ we immediately find $d_X(\alpha, \beta) \geq 2{K_1} \geq M_0$ and $d_X(\alpha, \gamma) \geq 2{K_1} - 2\delta \geq {M_2}$.
Apply \reflem{ZugzwangTwo} to deduce $\operatorname{index}_Y(\partial X) - 4 \leq \operatorname{index}_Y(\gamma)$. Since $\sigma_j$ succeeds $\gamma$ in $Y$ it follows that $\operatorname{index}_Y(\partial X) - 4 + 2{K_1} \leq \operatorname{index}_Y(\sigma_j)$. Again using the fact that $\sigma_i$ precedes $\gamma$ in $X$ we have that $d_X(\sigma_i, \beta) \geq {M_2}$. We deduce from \reflem{ZugzwangTwo} that $\operatorname{index}_Y(\sigma_i) \leq \operatorname{index}_Y(\partial X) + 4$. Thus $$\operatorname{index}_Y(\sigma_i) - 8 + 2{K_1} \leq \operatorname{index}_Y(\sigma_j).$$ Since $\mathcal{P}$ backtracks at most ${K_1}$ in $Y$ we again deduce that $i < j$. The case where $Y \subset X$ is similar.
Suppose now that $X$ and $Y$ overlap. Applying \reflem{Zugzwang} and breaking symmetry, we may assume that $d_X(\gamma, \partial Y) < {M_1}$. Since $\sigma_i$ precedes $\gamma$ we have $\operatorname{index}_X(\gamma) \geq 2{K_1}$. \reflem{MovePoint} now implies that $\operatorname{index}_X(\partial Y) \geq 2{K_1} - {M_1} - 6\delta$. Thus, \[ d_X(\alpha, \partial Y) \geq 2{K_1} - {M_1} - 8\delta \geq {M_1} \] where the first inequality follows from \reflem{RightTriangle}.
Applying \reflem{Zugzwang} again, we find that $d_Y(\alpha, \partial X) < {M_1}$. Now, since $\sigma_j$ succeeds $\gamma$ in $Y$, we deduce that $\operatorname{index}_Y(\sigma_j) \geq 2{K_1}$. So \reflem{RightTriangle} implies that $d_Y(\alpha, \sigma_j) \geq 2{K_1} - 2\delta$. The triangle inequality now gives \[ d_Y(\partial X, \sigma_j) \geq 2{K_1} - {M_1} - 2\delta \geq {M_1}. \] Applying \reflem{Zugzwang} one last time, we find that $d_X(\partial Y, \sigma_j) < {M_1}$. Thus $d_X(\gamma, \sigma_j) \leq 2{M_1}$. Finally, \reflem{MovePoint} implies that the difference in index (in $X$) between $\sigma_i$ and $\sigma_j$ is at least $2{K_1} - 2{M_1} - 6\delta$. Since this is greater than the backtracking constant, ${K_1}$, it follows that $i < j$. \end{proof}
Let $\sigma_\alpha \in \mathcal{P}$ be the {\em last} vertex of $\mathcal{P}$ preceding $\gamma$ by at least $2{K_1}$ in some hole. If no such vertex of $\mathcal{P}$ exists then take $\sigma_\alpha = \alpha$.
\begin{claim} \label{Clm:LastNearCenter}
For every hole $X$ and geodesic $h$ connecting $\pi_X(\alpha)$ to $\pi_X(\beta)$: \[ d_X(\sigma_\alpha, \rho_h(\gamma)) \leq 3{K_1}+6\delta+3. \] \end{claim}
\begin{proof} Since $\sigma_i$ and $\sigma_{i+1}$ are disjoint we have $d_X(\sigma_i, \sigma_{i+1}) \leq 3$ and so \reflem{MovePoint} implies that \[
|\operatorname{index}_X(\sigma_{i+1}) - \operatorname{index}_X(\sigma_i)| \leq 6\delta + 3. \] Since $\mathcal{P}$ is a path connecting $\alpha$ to $\beta$ the image $\rho_h(\mathcal{P})$ is $6\delta + 3$--dense in $h$. Thus, if $\operatorname{index}_X(\sigma_\alpha) + 2{K_1} + 6\delta + 3 < \operatorname{index}_X(\gamma)$ then we have a contradiction to the definition of $\sigma_\alpha$.
On the other hand, if $\operatorname{index}_X(\sigma_\alpha) \geq \operatorname{index}_X(\gamma) + 2{K_1}$ then $\sigma_\alpha$ precedes and succeeds $\gamma$ in $X$. This directly contradicts \refclm{PrecedesSucceedsExclusive}.
We deduce that the difference in index between $\sigma_\alpha$ and $\gamma$ in $X$ is at most $2{K_1} + 6\delta + 3$. Finally, as $\mathcal{P}$ sidetracks by at most ${K_1}$ we have \[ d_X(\sigma_\alpha, \rho_h(\gamma)) \leq 3{K_1} + 6\delta + 3 \] as desired. \end{proof}
We define $\sigma_\beta$ to be the first $\sigma_i$ to succeed $\gamma$ by at least $2{K_1}$ --- if no such vertex of $\mathcal{P}$ exists take $\sigma_\beta = \beta$. If $\alpha = \beta$ then $\sigma_\alpha = \sigma_\beta$. Otherwise, from Claim~\ref{Clm:PrecedesSucceedsExclusive}, we immediately deduce that $\sigma_\alpha$ comes before $\sigma_\beta$ in $\mathcal{P}$. A symmetric version of Claim~\ref{Clm:LastNearCenter} applies to $\sigma_\beta$: for every hole $X$ \[ d_X(\rho_h(\gamma), \sigma_\beta) \leq 3{K_1} + 6\delta + 3. \]
\subsection{Another side of the triangle}
Recall now that we are also given a path $\mathcal{R} = \{ \tau_i \}$ connecting $\alpha$ to $\gamma$ in $\mathcal{G}$. As before, $\mathcal{R}$ has bounded back and sidetracking. Thus we again find vertices $\tau_\alpha$ and $\tau_\gamma$ the last/first to precede/succeed $\beta$ by at least $2{K_1}$. Again, this is defined in terms of the closest points projection of $\beta$ to a geodesic of the form $h = [\pi_X(\alpha), \pi_X(\gamma)]$. By Claim~\ref{Clm:LastNearCenter}, for every hole $X$, $\tau_\alpha$ and $\tau_\gamma$ are close to $\rho_h(\beta)$.
By \reflem{CenterExists}, if $k = [\pi_X(\alpha), \pi_X(\beta)]$, then $d_X(\rho_k(\gamma), \rho_h(\beta)) \leq 6\delta$. We deduce:
\begin{claim} \label{Clm:BodySmall} $d_X(\sigma_\alpha, \tau_\alpha) \leq 6{K_1} + 18\delta + 6$. \qed \end{claim}
This claim and \refclm{LastNearCenter} imply that the body of the triangle $\mathcal{P}\mathcal{Q}\mathcal{R}$ is bounded in size. We now show that the legs are narrow.
\begin{claim} \label{Clm:NearbyVertex} There is a constant ${N_2} = {N_2}(S)$ with the following property. For every $\sigma_i \leq \sigma_\alpha$ in $\mathcal{P}$ there is a $\tau_j \leq \tau_\alpha$ in $\mathcal{R}$ so that $$d_X(\sigma_i, \tau_j) \leq {N_2}$$ for every hole $X$. \end{claim}
\begin{proof} We only sketch the proof, as the details are similar to our previous discussion. Fix $\sigma_i \leq \sigma_\alpha$.
Suppose first that no vertex of $\mathcal{R}$ precedes $\sigma_i$ by more than $2{K_1}$ in any hole. So fix a hole $X$ and geodesics $k = [\pi_X(\alpha), \pi_X(\beta)]$ and $h = [\pi_X(\alpha), \pi_X(\gamma)]$. Then $\rho_h(\sigma_i)$ is within distance $2{K_1}$ of $\pi_X(\alpha)$. Appealing to Claim~\ref{Clm:BodySmall}, bounded sidetracking, and hyperbolicity of $\mathcal{C}(X)$ we find that the initial segments \[ [\pi_X(\alpha), \rho_k(\sigma_\alpha)], \quad [\pi_X(\alpha), \rho_h(\tau_\alpha)] \] of $k$ and $h$ respectively must fellow travel. Because of bounded backtracking along $\mathcal{P}$, $\rho_k(\sigma_i)$ lies on, or at least near, this initial segment of $k$. Thus by \reflem{MoveGeodesic} $\rho_h(\sigma_i)$ is close to $\rho_k(\sigma_i)$ which in turn is close to $\pi_X(\sigma_i)$, because $\mathcal{P}$ has bounded sidetracking. In short, $d_X(\alpha, \sigma_i)$ is bounded for all holes $X$. Thus we may take $\tau_j = \tau_0 = \alpha$ and we are done.
Now suppose that some vertex of $\mathcal{R}$ precedes $\sigma_i$ by at least $2{K_1}$ in some hole $X$. Take $\tau_j$ to be the last such vertex in $\mathcal{R}$. Following the proof of Claim~\ref{Clm:PrecedesSucceedsExclusive} shows that $\tau_j$ comes before $\tau_\alpha$ in $\mathcal{R}$. The argument now required to bound $d_X(\sigma_i, \tau_j)$ is essentially identical to the proof of Claim~\ref{Clm:LastNearCenter}. \end{proof}
By the distance estimate, we find that there is a uniform neighborhood of $[\sigma_0, \sigma_\alpha] \subset \mathcal{P}$, taken in $\mathcal{G}$, which contains $[\tau_0, \tau_\alpha] \subset \mathcal{R}$. The slimness of $\mathcal{P}\mathcal{Q}\mathcal{R}$ follows directly. This completes the proof of Theorem~\ref{Thm:GoodPathsGiveSlimTriangles}. \qed
\section{Coarsely computing Hempel distance} \label{Sec:HempelDistance}
We now turn to our topological application. Recall that a {\em Heegaard splitting} is a triple $(S, V, W)$ consisting of a surface and two handlebodies where $V \cap W = \partial V = \partial W = S$. Hempel~\cite{Hempel01} defines the quantity \[ d_S(V, W) = \min \big\{ d_S(D,E) \mathbin{\mid} D \in \mathcal{D}(V), E \in \mathcal{D}(W) \big\} \] and calls it the {\em distance} of the splitting. Note that a splitting can be completely determined by giving a pair of cut systems: simplices $\mathbb{D} \subset \mathcal{D}(V)$, $\mathbb{E} \subset \mathcal{D}(W)$ where the corresponding disks cut the containing handlebody into a single three-ball. The triple $(S, \mathbb{D}, \mathbb{E})$ is a {\em Heegaard diagram}. The goal of this section is to prove:
\begin{theorem} \label{Thm:CoarselyComputeDistance} There is a constant $R_1 = R_1(S)$ and an algorithm that, given a Heegaard diagram $(S, \mathbb{D}, \mathbb{E})$, computes a number $N$
so that $$|d_S(V, W) - N| \leq R_1.$$ \end{theorem}
\noindent Let $\rho_V \colon \mathcal{C}(S) \to \mathcal{D}(V)$ be the closest points relation: \[ \rho_V(\alpha) = \big\{ D \in \mathcal{D}(V) \mathbin{\mid} \mbox{ for all $E \in \mathcal{D}(V)$,
$d_S(\alpha, D) \leq d_S(\alpha, E)$ } \big\}. \]
\noindent \refthm{CoarselyComputeDistance} follows from:
\begin{theorem} \label{Thm:CoarselyComputeProjection} There is a constant $R_0 = R_0(V)$ and an algorithm that, given an essential curve $\alpha \subset S$ and a cut system $\mathbb{D} \subset \mathcal{D}(V)$, finds a disk $C \in \mathcal{D}(V)$ so that \[ d_S(C, \rho_V(\alpha)) \leq R_0. \] \end{theorem}
\begin{proof}[Proof of \refthm{CoarselyComputeDistance}] Suppose that $(S, \mathbb{D}, \mathbb{E})$ is a Heegaard diagram. Using Theorem~\ref{Thm:CoarselyComputeProjection} we find a disk $D$ within distance $R_0$ of $\rho_V(\mathbb{E})$. Again using Theorem~\ref{Thm:CoarselyComputeProjection} we find a disk $E$ within distance $R_0$ of $\rho_W(D)$. Notice that $E$ is defined using $D$ and not the cut system $\mathbb{D}$.
Since computing distance between fixed vertices in the curve complex is algorithmic~\cite{Leasure02, Shackleton04} we may compute $d_S(D, E)$. By the hyperbolicity of $\mathcal{C}(S)$ (\refthm{C(S)IsHyperbolic}) and by the quasi-convexity of the disk set (\refthm{DiskComplexConvex}) this is the desired estimate. \end{proof}
Very briefly, the algorithm asked for in \refthm{CoarselyComputeProjection} searches an $R_2$--neighborhood in $\mathcal{M}(S)$ about a splitting sequence from $\mathbb{D}$ to $\alpha$. Here are the details.
\begin{algorithm} \label{Alg:Projection} We are given $\alpha \in \mathcal{C}(S)$ and a cut system $\mathbb{D} \subset \mathcal{D}(V)$. Build a train track $\tau$ in $S = \partial V$ as follows: make $\mathbb{D}$ and $\alpha$ tight. Place one switch on every disk $D \in \mathbb{D}$. Homotope all intersections of $\alpha$ with $D$ to run through the switch. Collapse bigons of $\alpha$ inside of $S {\smallsetminus} \mathbb{D}$ to create the branches. Now make $\tau$ a generic track by combing away from $\mathbb{D}$~\cite[Proposition~1.4.1]{PennerHarer92}. Note that $\alpha$ is carried by $\tau$ and so gives a transverse measure $w$.
Build a splitting sequence of measured tracks $\{ \tau_n \}_{n = 0}^N$ where $\tau_0 = \tau$, $\tau_N = \alpha$, and $\tau_{n+1}$ is obtained by splitting the largest switch of $\tau_n$ (as determined by the measure imposed by $\alpha$).
Let $\mu_n = V(\tau_n)$ be the vertices of $\tau_n$. For each filling marking $\mu_n$ list all markings in the ball $B(\mu_n, R_2) \subset \mathcal{M}(S)$, where $R_2$ is given by \reflem{DetectingNearbyDisks} below. (If $\mu_0$ does not fill $S$ then output $\mathbb{D}$ and halt.)
For every marking $\nu$ so produced we use Whitehead's algorithm (see \reflem{Whitehead}) to try and find a disk meeting some curve $\gamma \in \nu$ at most twice. For every disk $C$ found compute $d_S(\alpha, C)$~\cite{Leasure02, Shackleton04}. Finally, output any disk which minimizes this distance, among all disks considered, and halt. \end{algorithm}
We use the following form of Whitehead's algorithm~\cite{Berge08}:
\begin{lemma} \label{Lem:Whitehead} There is an algorithm that, given a cut system $\mathbb{D} \subset V$ and a curve $\gamma \subset S$, outputs a disk $C \subset V$ so that $\iota(\gamma, \partial C) = \min \{ \iota(\gamma, \partial E) \mathbin{\mid} E \in \mathcal{D}(V) \}$. \qed \end{lemma}
We now discuss the constant $R_2$. We begin by noticing that the track $\tau_n$ is transversely recurrent because $\alpha$ is fully carried and $\mathbb{D}$ is fully dual. Thus by \refthm{TrainTrackUnparamGeodesic} and by Morse stability, for any essential $Y \subset S$ there is a stability constant $M_3$ for the path $\pi_Y(\mu_n)$. Let $\delta$ be the hyperbolicity constant for $\mathcal{C}(S)$ (\refthm{C(S)IsHyperbolic}) and let $Q$ be the quasi-convexity constant for $\mathcal{D}(V) \subset \mathcal{C}(S)$ (\refthm{DiskComplexConvex}).
Since $\iota(\mathbb{D}, \mu_0)$ is bounded we will, at the cost of an additive error, identify their images in $\mathcal{C}(S)$. Now, for every $n$ pick some $E_n \in \rho_V(\mu_n)$.
\begin{lemma} \label{Lem:DetectingNearbyDisks} There is a constant $R_2$ with the following property. Suppose that $n < m$, $d_S(\mu_n, E_n), d_S(\mu_m, E_m) \leq M_3 + \delta + Q$, and $d_S(\mu_n, \mu_m) \geq 2(M_3 + \delta + Q) + 5$. Then there is a marking $\nu \in B(\mu_n, R_2)$ and a curve $\gamma \in \nu$ so that either: \begin{itemize} \item $\gamma$ bounds a disk in $V$, \item $\gamma \subset \partial Z$, where $Z$ is a non-hole or \item $\gamma \subset \partial Z$, where $Z$ is a large hole. \end{itemize} \end{lemma}
\begin{proof}[Proof of \reflem{DetectingNearbyDisks}] Choose points $\sigma, \sigma'$ in the thick part of $\mathcal{T}(S)$ so that all curves of $\mu_n$ have bounded length in $\sigma$ and so that $E_n$ has length less than the Margulis constant in $\sigma'$. As in \refsec{BackgroundTeich} there is a {Teichm\"uller~} geodesic and associated markings $\{ \nu_k \}_{k = 0}^K$ so that $d_\mathcal{M}(\nu_0, \mu_n)$ is bounded and $E_n \in \operatorname{base}(\nu_K)$.
We say a hole $X \subset S$ is {\em small} if $\operatorname{diam}_X(\mathcal{D}(V)) < 61$.
\begin{proofclaim} There is a constant $R_3$ so that for any small hole $X$ we have $d_X(\mu_n, \nu_K) < R_3$. \end{proofclaim}
\begin{proof} If $d_X(\mu_n, \nu_K) \leq M_0$ then we are done. If the distance is greater than $M_0$ then \refthm{BoundedGeodesicImage} gives a vertex of the $\mathcal{C}(S)$--geodesic connecting $\mu_n$ to $E_n$ with distance at most one from $\partial X$. It follows from the triangle inequality that every vertex of the $\mathcal{C}(S)$--geodesic connecting $\mu_m$ to $E_m$ cuts $X$. Another application of \refthm{BoundedGeodesicImage} gives \[ d_X(\mu_m, E_m) < M_0. \] Since $X$ is small $d_X(E_m, \mathbb{D}), d_X(E_n, \mathbb{D}) \leq 60$. Since $\iota(\nu_K, E_n) = 2$ the distance $d_X(\nu_K, E_n)$ is bounded.
Finally, because $p \mapsto \pi_X(\mu_p)$ is an $A$--unparameterized quasi-geodesic in $\mathcal{C}(X)$ it follows that $d_X(\mathbb{D}, \mu_n)$ is also bounded and the claim is proved. \end{proof}
Now consider all strict subsurfaces $Y$ so that \[ d_Y(\mu_n, \nu_K) \geq R_3. \] None of these are small holes, by the claim above. If there are no such surfaces then \refthm{MarkingGraphDistanceEstimate} bounds $d_\mathcal{M}(\mu_n, \nu_K)$: taking the cutoff constant larger than \[ \max \{ R_3, {C_0}, M_3 + \delta + Q \} \] ensures that all terms on the right-hand side vanish. In this case the additive error in \refthm{MarkingGraphDistanceEstimate} is the desired constant $R_2$ and the lemma is proved.
If there are such surfaces then choose one, say $Z$, that minimizes $\ell = \min J_Z$. Thus $d_Y(\mu_n, \nu_\ell) < {C_3}$ for all strict non-holes and all strict large holes. Since $d_S(\mu_n, E_n) \leq M_3 + \delta + Q$ and $\{ \nu_k \}$ is an unparameterized quasi-geodesic \cite[Theorem~6.1]{Rafi10} we find that $d_S(\mu_n, \nu_\ell)$ is uniformly bounded. The claim above bounds distances in small holes. As before we find a sufficiently large cutoff so that all terms on the right-hand side of \refthm{MarkingGraphDistanceEstimate} vanish. Again the additive error of \refthm{MarkingGraphDistanceEstimate} provides the constant $R_2$. Since $\partial Z \subset \operatorname{base}(\nu_\ell)$ the lemma is proved. \end{proof}
To prove the correctness of \refalg{Projection} it suffices to show that the disk produced is close to $\rho_V(\alpha)$. Let $m$ be the largest index so that for all $n \leq m$ we have \[ d_S(\mu_n, E_n) \leq M_3 + \delta + Q. \] It follows that $\mu_{m+1}$ lies within distance $M_3 + \delta$ of the geodesic $[\alpha, \rho_V(\alpha)]$. Recall that $d_S(\mu_n, \mu_{n+1}) \leq {C_1}$ for any value of $n$. A shortcut argument shows that \[ d_S(\mu_m, \rho_V(\alpha)) \leq 2{C_1} + 3M_3 + 3\delta + Q. \] Let $n \leq m$ be the largest index so that \[ 2(M_3 + \delta + Q) + 5 \leq d_S(\mu_n, \mu_m). \] If no such $n$ exists then take $n = 0$. Now, \reflem{DetectingNearbyDisks} implies that there is a disk $C$ with $d_S(C, \mu_n) \leq 4R_2$ and this disk is found during the running of \refalg{Projection}. It follows from the above inequalities that \[ d_S(C, \alpha) \leq 4R_2 + 5M_3 + 5\delta + 3Q + 5 + 2{C_1} + d_S(\alpha, \rho_V(\alpha)). \] So the disk $C'$, output by the algorithm, is at least this close to $\alpha$ in $\mathcal{C}(S)$. Examining the triangle with vertices $\alpha, \rho_V(\alpha), C'$ and using a final short-cut argument gives \[ d_S(C', \rho_V(\alpha)) \leq
4R_2 + 5M_3 + 9\delta + 5Q + 5 + 2{C_1}. \] This completes the proof of \refthm{CoarselyComputeProjection}. \qed
\end{document}
\begin{document}
\setlength{\parindent}{0pt} \setlength{\parskip}{7pt}
\title[Cluster categories and stable module categories] {Realising higher cluster categories of Dynkin type as stable module categories}
\author{Thorsten Holm} \address{Institut f\"{u}r Algebra, Zahlentheorie und Diskrete
Mathematik, Fakult\"at f\"ur Mathematik und Physik, Leibniz
Universit\"{a}t Hannover, Welfengarten 1, 30167 Hannover, Germany}
\email{[email protected]} \urladdr{http://www.iazd.uni-hannover.de/\~{ }tholm}
\author{Peter J\o rgensen} \address{School of Mathematics and Statistics, Newcastle University, Newcastle upon Tyne NE1 7RU, United Kingdom} \email{[email protected]} \urladdr{http://www.staff.ncl.ac.uk/peter.jorgensen}
\subjclass[2010]{Primary: 16D50, 18E30; Secondary: 05E99, 13F60, 16G10,
16G60, 16G70}
\keywords{Finite representation type, selfinjective algebras, Dynkin
diagrams, Morita theorem}
\thanks{{\em Acknowledgement. }This work was carried out in the
framework of the research priority programme SPP 1388 {\em
Darstellungstheorie} of the Deutsche Forschungsgemeinschaft (DFG).
We gratefully acknowledge financial support through the grant HO
1880/4-1. }
\begin{abstract}
We show that the stable module categories of certain selfinjective
algebras of finite representation type having tree class $A_n$, $D_n$,
$E_6$, $E_7$ or $E_8$ are triangulated equivalent to $u$-cluster
categories of the corresponding Dynkin type.
The proof relies on the ``Morita'' theorem for $u$-cluster categories by
Keller and Reiten, along with the recent computation of Calabi-Yau
dimensions of stable module categories by Dugas.
\end{abstract}
\maketitle
\section{Introduction} \label{sec:introduction}
This paper deals with two types of categories: {\em Stable module categories of selfinjective algebras} and {\em $u$-cluster categories}. They both originate in representation theory, and we will establish a connection between the two by showing that some stable module categories are, in fact, $u$-cluster categories.
{\em Stable module categories} are classical objects of representation theory. They arise from categories of finitely generated modules through the operation of dividing by the ideal of homomorphisms which factor through a projective module. The stable module category of a finite dimensional selfinjective algebra has the appealing property that it is triangulated; this has been very useful not least in group representation theory.
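To fix notation, recall that for finitely generated $A$-modules $M$ and $N$ the morphism space in the stable module category is
\[
\underline{\operatorname{Hom}}_A(M,N) = \operatorname{Hom}_A(M,N)/\mathcal{P}(M,N),
\]
where $\mathcal{P}(M,N)$ is the subspace of homomorphisms factoring through a projective module. For a selfinjective algebra $A$, a classical result of Happel shows that the triangulated structure on the stable module category has suspension the inverse syzygy (cosyzygy) functor $\Omega_A^{-1}$.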
{\em Cluster categories} and the more general {\em $u$-cluster
categories} which are pa\-ra\-me\-tri\-sed by the natural number $u$ were introduced over the last few years in a number of beautiful papers: \cite{BMRRT}, \cite{CCS}, \cite{Keller}, \cite{Thomas}, and \cite{BinZhu}. The idea is to provide ca\-te\-go\-ri\-fi\-ca\-ti\-ons of the theory of cluster algebras and higher cluster complexes as introduced in \cite{FR} and \cite{FZ}. If $Q$ is a finite quiver without loops and oriented cycles, then the $u$-cluster category of type $Q$ over a field $k$ is defined by considering the bounded derived category of the path algebra $kQ$ and taking the orbit category of a certain autoequivalence; see Section \ref{sec:cluster} for details. A $u$-cluster category is triangulated; this non-trivial fact was established in \cite{Keller}.
The introduction of cluster categories and $u$-cluster categories has created a rush of activity which has turned these categories into a major item of contemporary representation theory. This is due not least to the advent of cluster tilting theory in \cite{BMR}, which provides a long awaited generalization of classical tilting theory making it possible to tilt at any vertex of the quiver of a hereditary algebra, not just at sinks and sources.
In this paper, we will show that a number of stable module categories of selfinjective algebras are, in fact, $u$-cluster categories.
To be precise, we will look at stable module categories of selfinjective algebras of finite representation type. By the Riedtmann structure theorem \cite{Riedtmann2} the Auslander-Reiten (AR) quiver of such a category has tree class of Dynkin type $A_n$, $D_n$, $E_6$, $E_7$, or $E_8$. We illustrate in type $A$ what this means. Consider the Dynkin quiver in Figure \ref{fig:Dynkin_A} which, by abuse of notation, we will often denote by $A_n$, and its repetitive quiver ${\mathbb Z} A_n$ shown in Figure \ref{fig:ZA}. \begin{figure}
\caption{The Dynkin quiver $A_n$}
\label{fig:Dynkin_A}
\end{figure} \begin{figure}
\caption{The repetitive quiver ${\mathbb Z} A_n$}
\label{fig:ZA}
\end{figure} For a selfinjective algebra to have finite representation type and tree class $A_n$ means that the AR quiver of its stable module category is a non-trivial quotient of ${\mathbb Z} A_n$ by an admissible group of automorphisms. In type $A$, in such a quotient, two vertical lines on the quiver are identified, and this gives either a cylinder or a M\"{o}bius band. According to this dichotomy, the algebra belongs to one of two well understood classes: the Nakayama algebras and the M\"{o}bius algebras.
For tree classes $D_n$ and $E_6$, $E_7$, $E_8$, the shapes of the stable AR quivers are obtained in a very similar fashion; more details on the precise shapes are given in Section \ref{sec:typeD} for type $D$ and Section \ref{sec:typeE} for type $E$.
Now, $u$-cluster categories of Dynkin types $A_n$, $D_n$, $E_6$, $E_7$, and $E_8$ also have AR quivers which are either cylinders or M\"{o}bius bands; see Section \ref{sec:cluster} for details. One of the aims of this paper is to show that this resemblance is no coincidence.
For stating the main results of the paper we have to deal with the various Dynkin types separately.
Let us start with Dynkin type $A$. For integers $N, n \geq 1$, let $B_{N,n+1}$ denote the Nakayama algebra defined as the path algebra of the circular quiver with $N$ vertices and all arrows pointing in the same direction modulo the ideal generated by all paths of length $n+1$. Moreover, for integers $p, s \geq 1$, let $M_{p,s}$ denote the corresponding M\"obius algebra (for the definition of these algebras by quivers and relations, see Section \ref{sec:Mobius}).
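For instance, and only as an illustration of the notation, $B_{4,3}$ is the path algebra of the circular quiver with four vertices and four arrows, all pointing in the same direction, modulo the ideal generated by all paths of length three; its indecomposable projective modules are uniserial of composition length three. This algebra reappears in the example following Theorem A below.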
The following is our first main result which gives a complete list of those $u$-cluster categories of type $A$ which are triangulated equivalent to stable module categories of selfinjective algebras.
\noindent {\bf Theorem A} (Realising $u$-cluster categories of type $A$).
{\it \begin{enumerate}
\item Let $u \geq 2$ be an even integer and let $n \geq 1$ be an integer. Set $N = \frac{u}{2}(n+1) + 1.$ Then the $u$-cluster category of type $A_n$ is equivalent as a triangulated category to the stable module category $\mbox{\sf stab}\, B_{N,n+1}$.
\item
Let $u \geq 1$ be an odd integer and let $p,s \geq 1$ be integers for which $s(2p+1) = u(p+1) + 1.$ Then the $u$-cluster category of type $A_{2p+1}$ is equivalent as a triangulated category to the stable module category $\mbox{\sf stab}\, M_{p,s}$.
\end{enumerate} }
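To illustrate the numerology in Theorem A we record two small instances; they serve only as sanity checks of the formulas and play no role in the proofs. For part (i), taking $u = 2$ and $n = 2$ gives $N = \frac{2}{2}(2+1)+1 = 4$, so the $2$-cluster category of type $A_2$ is equivalent as a triangulated category to $\mbox{\sf stab}\, B_{4,3}$. For part (ii), taking $u = 1$ and $p = 1$ forces $3s = 1 \cdot 2 + 1 = 3$, that is $s = 1$, so the classical cluster category of type $A_3$ is equivalent as a triangulated category to $\mbox{\sf stab}\, M_{1,1}$. Note that the equation $s(2p+1) = u(p+1)+1$ is a genuine restriction: for $u = 3$ and $p = 1$ it reads $3s = 7$ and has no solution.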
We next consider Dynkin types $D$ and $E$. The theory becomes more intricate than in type $A$. While two types of selfinjective algebras occurred in type $A$, we will show that three types of algebras occur in type $D$, and two in type $E$. More precisely, in Asashiba's notation from \cite[appendix]{Asashiba}, they are the algebras $(D_n,s,1)$, $(D_n,s,2)$, and $(D_{3m},\frac{s}{3},1)$ in type $D$, and $(E_n,s,1)$, $n=6,7,8$, and $(E_6,s,2)$ in type $E$.
Specifically, we show the following main results.
\noindent {\bf Theorem D} (Realising $u$-cluster categories of type $D$).
{\it Let $m, n, u$ be integers with $u \geq 1$. \begin{enumerate}
\item Suppose that $n\ge 4$ and $u\equiv -2 \,\operatorname{mod}\, (2n-3)$.
\noindent Then the $u$-cluster category of type $D_{n}$ is equivalent as a triangulated category to the stable module category \[
\left\{
\begin{array}{ll}
\mbox{\sf stab}\, (D_{n},\frac{u(n-1)+1}{2n-3},1) & \mbox{if $n$ or $u$ is
even, } \\[2mm]
\mbox{\sf stab}\, (D_{n},\frac{u(n-1)+1}{2n-3},2) & \mbox{if $n$ and $u$
are odd. }
\end{array}
\right. \]
\item Suppose that $m\ge 2$ and $u \equiv -2 \,\operatorname{mod}\, (2m-1)$ but $u \not\equiv -2 \,\operatorname{mod}\, (6m-3)$. Moreover suppose that not both $m$ and $u$ are odd. Then the $u$-cluster category of type $D_{3m}$ is equivalent as a triangulated category to the stable module category $ \mbox{\sf stab}\, (D_{3m},\frac{s}{3},1) $ where $s=\frac{u(3m-1)+1}{2m-1}$.
\end{enumerate} }
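Again we record small instances as sanity checks of the arithmetic; they are not used later. For part (i), take $n = 4$ and $u = 3$: then $2n-3 = 5$, $u \equiv -2 \,\operatorname{mod}\, 5$, and $\frac{u(n-1)+1}{2n-3} = \frac{10}{5} = 2$, so, $n$ being even, the $3$-cluster category of type $D_4$ is equivalent as a triangulated category to $\mbox{\sf stab}\, (D_4,2,1)$. For part (ii), take $m = 2$ and $u = 1$: then $u \equiv -2 \,\operatorname{mod}\, 3$ but $u \not\equiv -2 \,\operatorname{mod}\, 9$, and $m$ is even, so the hypotheses hold; here $s = \frac{1 \cdot 5 + 1}{3} = 2$ and the classical cluster category of type $D_6$ is equivalent as a triangulated category to $\mbox{\sf stab}\, (D_6,\frac{2}{3},1)$.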
\noindent {\bf Theorem E} (Realising $u$-cluster categories of type $E$).
{\it Let $u \geq 1$ be an integer. \begin{enumerate}
\item If $u\equiv -2 \,\operatorname{mod}\, 11$ then the $u$-cluster category of
type $E_6$ is equivalent as a triangulated category to the stable
module category \[
\left\{
\begin{array}{ll}
\mbox{\sf stab}(E_6,\frac{6u+1}{11},1) & \mbox{if $u$ is even,} \\[2mm]
\mbox{\sf stab}(E_6,\frac{6u+1}{11},2) & \mbox{if $u$ is odd.}
\end{array}
\right. \]
\item If $u\equiv -2 \,\operatorname{mod}\, 17$ then the $u$-cluster category of
type $E_7$ is equivalent as a triangulated category to the stable
module category $\mbox{\sf stab}(E_7,\frac{9u+1}{17},1)$.
\item If $u\equiv -2 \,\operatorname{mod}\, 29$ then the $u$-cluster category of
type $E_8$ is equivalent as a triangulated category to the stable
module category $\mbox{\sf stab}(E_8,\frac{15u+1}{29},1)$.
\end{enumerate} }
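As before we record sanity checks of the arithmetic which are not used in the proofs. For $u = 9$ we have $u \equiv -2 \,\operatorname{mod}\, 11$ and $\frac{6u+1}{11} = 5$; since $u$ is odd, the $9$-cluster category of type $E_6$ is equivalent as a triangulated category to $\mbox{\sf stab}(E_6,5,2)$. For $u = 15$ we have $u \equiv -2 \,\operatorname{mod}\, 17$ and $\frac{9u+1}{17} = 8$, so the $15$-cluster category of type $E_7$ is equivalent as a triangulated category to $\mbox{\sf stab}(E_7,8,1)$. For $u = 27$ we have $u \equiv -2 \,\operatorname{mod}\, 29$ and $\frac{15u+1}{29} = 14$, so the $27$-cluster category of type $E_8$ is equivalent as a triangulated category to $\mbox{\sf stab}(E_8,14,1)$.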
The proofs of the above theorems rely on the seminal ``Morita theorem'' for $u$-cluster categories established by Keller and Reiten in \cite{KellerReiten2}. The idea is to show that the stable module categories of the relevant selfinjective algebras have very strong formal properties in terms of their Calabi-Yau dimensions and $u$-cluster til\-ting objects. More precisely, the Keller-Reiten structure theorem states the following. Consider a Hom finite triangulated category of algebraic origin (e.g.\ the stable module category of a selfinjective algebra). Assume that it has Calabi-Yau dimension $u+1$ and possesses a $u$-cluster tilting object $T$ which has hereditary endomorphism algebra $H$ and also satisfies $\operatorname{Hom}(T,\Sigma^{-i}T) = 0$ for $i = 1, \ldots, u-1$ where $\Sigma$ is the suspension functor. Then this category is triangulated equivalent to the $u$-cluster category of $H$.
Theorems A,\,D and E were already stated in our earlier preprints \cite{HolmJorgensenA}, \cite{HolmJorgensenDE} which were later withdrawn. Unfortunately there was a mistake in \cite{HolmJorgensenA}, pointed out to us by Alex Dugas, in connection with the Calabi-Yau dimensions, and this meant there was a gap in the proofs of the main results of \cite{HolmJorgensenA} and \cite{HolmJorgensenDE}.
In this paper we circumvent the problem and thereby provide correct proofs of the above theorems. This is achieved by using a recent paper of Dugas \cite{Dugas} in which he computes the Calabi-Yau dimensions for stable module categories of selfinjective algebras of finite representation type.
The paper is organized as follows: Section \ref{sec:cluster} collects the properties of $u$-cluster categories of Dynkin types $ADE$ which we will need. Section \ref{sec:orthogonal} is a remark on $u$-cluster tilting objects in stable module categories. Section \ref{sec:typeA} considers Dynkin type $A$ and proves Theorem A. This is split into subsections \ref{sec:Nakayama} and \ref{sec:Mobius} dealing with Nakayama algebras and M\"obius algebras; these two situations correspond to parts (i) and (ii) of Theorem A. Sections \ref{sec:typeD} and \ref{sec:typeE} similarly consider Dynkin types $D$ and $E$ and prove Theorems D and E.
Throughout, $k$ is an algebraically closed field, $A$ is a selfinjective $k$-algebra, $\mbox{\sf mod}\,A$ denotes the category of finitely generated right-$A$-modules, and $\mbox{\sf stab}\,A$ denotes the stable category of finitely ge\-ne\-ra\-ted right-$A$-modules.
\section{$u$-cluster categories} \label{sec:cluster}
This section collects the properties of $u$-cluster categories which we will need.
Let $Q$ be a finite quiver without loops and oriented cycles. Consider the path algebra $kQ$ and let $\D^{\operatorname{f}}(kQ)$ be the derived category of bounded complexes of finitely generated right-$kQ$-modules. See \cite{Happel} for background on $\D^{\operatorname{f}}(kQ)$ and \cite{RVdB} for additional information on AR theory and Serre functors.
If $u \geq 1$ is an integer, then the $u$-cluster category of type $Q$ is defined as $\D^{\operatorname{f}}(kQ)$ modulo the functor $\tau^{-1}\Sigma^u$, where $\tau$ is the AR translation of $\D^{\operatorname{f}}(kQ)$ and $\Sigma$ the suspension. In other words, the $u$-cluster category is the orbit category for the action of $\tau^{-1}\Sigma^u$ on the category $\D^{\operatorname{f}}(kQ)$. Denote the $u$-cluster category of type $Q$ by $\mbox{\sf C}$.
It follows from \cite[sec.\ 4, thm.]{Keller} that $\mbox{\sf C}$ admits a structure of triangulated category in a way such that the canonical functor $\D^{\operatorname{f}}(kQ) \rightarrow \mbox{\sf C}$ is triangulated.
The category $\mbox{\sf C}$ has Calabi-Yau dimension $u + 1$ by \cite[sec.\ 4.1]{KellerReiten2}. That is, $n = u + 1$ is the smallest non-negative integer such that $\Sigma^n$, the $n$th power of the suspension functor, is the Serre functor of $\mbox{\sf C}$.
The category $\mbox{\sf C}$ has the same objects as the derived category $\D^{\operatorname{f}}(kQ)$, so in particular, $kQ$ is an object of $\mbox{\sf C}$. In fact, by \cite[sec.\ 4.1]{KellerReiten2} again, $kQ$ is a $u$-cluster tilting object of $\mbox{\sf C}$, cf.\ \cite[sec.\ 3]{IyamaYoshino}. That is, \begin{enumerate}
\item $\operatorname{Hom}_{\mbox{\sss C}}(kQ,\Sigma t) = \cdots = \operatorname{Hom}_{\mbox{\sss C}}(kQ,\Sigma^u t) = 0
\; \Leftrightarrow \; t \in \operatorname{add}\, kQ$,
\item $\operatorname{Hom}_{\mbox{\sss C}}(t,\Sigma kQ) = \cdots = \operatorname{Hom}_{\mbox{\sss C}}(t,\Sigma^u kQ) = 0
\; \Leftrightarrow \; t \in \operatorname{add}\, kQ$.
\end{enumerate} Recall that $\operatorname{add}\, kQ$ denotes the full subcategory of $\mbox{\sf C}$ consisting of direct summands of (finite) direct sums of copies of $kQ$.
The endomorphism ring $\operatorname{End}_{\mbox{\sss C}}(kQ)$ is $kQ$ itself.
\subsection{$u$-cluster categories of Dynkin type $A$} \label{subsec:clustera} Let $Q$ be a Dynkin quiver of type $A_n$ for an integer $n \geq 1$. This means that the graph obtained from $Q$ by forgetting the orientations of the arrows is a Dynkin diagram of type $A_n$. Recall that the orientation of $Q$ is not important since for any two orientations the derived categories $\D^{\operatorname{f}}(kQ)$ are triangulated equivalent. In the sequel we shall always use the linear orientation as in Figure \ref{fig:Dynkin_A} in the introduction.
By \cite[cor.\ 4.5(i)]{Happel}, the AR quiver of $\D^{\operatorname{f}}(kQ)$ is the repetitive quiver ${\mathbb Z} A_n$; see Figure \ref{fig:ZA} in the introduction. Accordingly, the AR quiver of the $u$-cluster category $\mbox{\sf C}$ is ${\mathbb Z} A_n$ modulo the action of $\tau^{-1}\Sigma^u$ by \cite[prop.\ 1.3]{BMRRT}.
The AR translation $\tau$ of $\D^{\operatorname{f}}(kQ)$ acts on the quiver by shifting one unit to the left. Both here and below, a unit equals the distance between two vertices which are horizontal neighbours. Hence $\tau^{-1}$ acts by shifting one unit to the right.
The suspension $\Sigma$ of $\D^{\operatorname{f}}(kQ)$ acts by reflecting in the horizontal centre line and shifting $\frac{n+1}{2}$ units to the right; see \cite[table p.\ 359]{MiyachiYekutieli}. Note that this shift makes sense for all values of $n$: If $n$ is even, then the reflection in the horizontal centre line sends a vertex of the quiver to a point midway between two vertices, and the half-integer shift by $\frac{n+1}{2}$ sends this point to a vertex.
It follows that if $u$ is even, then $\tau^{-1}\Sigma^u$ acts by shifting $\frac{u}{2}(n+1) + 1$ units to the right, and if $u$ is odd, then $\tau^{-1}\Sigma^u$ acts by shifting $\frac{u}{2}(n+1) + 1$ units to the right and reflecting in the horizontal centre line.
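For instance, for $n = 3$ and $u = 2$, the suspension $\Sigma$ reflects and shifts $2$ units, so $\Sigma^2$ shifts $4$ units and $\tau^{-1}\Sigma^2$ shifts $5 = \frac{2}{2}\cdot 4 + 1$ units to the right; for $u = 3$ the extra factor of $\Sigma$ adds a reflection and a further $2$ units, giving a shift of $7 = \frac{3}{2}\cdot 4 + 1$ units combined with the reflection.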
So if $u$ is even, then the AR quiver of $\mbox{\sf C}$ has the shape of a cylinder, and if $u$ is odd, then the AR quiver of $\mbox{\sf C}$ has the shape of a M\"{o}bius band.
\subsection{$u$-cluster categories of Dynkin type $D$} \label{subsec:clusterD} Let $Q$ be a Dynkin quiver of type $D_n$ for an integer $n \geq 4$. Since the orientation of the quiver does not affect the derived category, we can assume that $Q$ has the form in Figure \ref{fig:Dn}. \begin{figure}
\caption{The Dynkin quiver $D_n$}
\label{fig:Dn}
\end{figure} By \cite[cor.\ 4.5(i)]{Happel}, the AR quiver of $\D^{\operatorname{f}}(kQ)$ is the repetitive quiver ${\mathbb Z} D_n$ shown in Figure \ref{fig:ZDn}. \begin{figure}
\caption{The repetitive quiver ${\mathbb Z} D_n$}
\label{fig:ZDn}
\end{figure} The AR quiver of the $u$-cluster category $\mbox{\sf C}$ is ${\mathbb Z} D_n$ modulo the action of $\tau^{-1}\Sigma^u$ by \cite[prop.\ 1.3]{BMRRT}.
Again $\tau^{-1}$ acts by shifting one unit to the right.
If $n$ is even, then the suspension $\Sigma$ acts by shifting $n-1$ units to the right, and if $n$ is odd, then $\Sigma$ acts by shifting $n-1$ units to the right and switching each pair of `exceptional' vertices such as $(n-1)^+$ and $(n-1)^-$; cf.\ \cite[table p.\ 359]{MiyachiYekutieli}.
It follows that if $n$ or $u$ is even, then $\tau^{-1}\Sigma^u$ acts by shifting $u(n-1) + 1$ units to the right, and if $n$ and $u$ are both odd, then $\tau^{-1}\Sigma^u$ acts by shifting $u(n-1) + 1$ units to the right and switching each pair of exceptional vertices.
Accordingly, the AR quiver of the $u$-cluster category $\mbox{\sf C}$ has the shape of a cylinder of circumference
$u(n-1) + 1.$
\subsection{$u$-cluster categories of Dynkin type $E$} \label{subsec:clusterE} Let $Q$ be a Dynkin quiver of type $E_n$ for $n = 6, 7, 8$. We can suppose that $Q$ has the orientation in Figure \ref{fig:En}, with the convention that for $n = 6$ the two non-filled vertices and for $n = 7$ the leftmost non-filled vertex (and all arrows incident to them) do not exist. \begin{figure}
\caption{The Dynkin quivers $E_6$, $E_7$, $E_8$}
\label{fig:En}
\end{figure} By \cite[cor.\ 4.5(i)]{Happel}, the AR quiver of $\D^{\operatorname{f}}(kQ)$ is the repetitive quiver ${\mathbb Z} E_n$ shown in Figure \ref{fig:ZEn}. \begin{figure}
\caption{The repetitive quivers ${\mathbb Z} E_6$, ${\mathbb Z} E_7$, ${\mathbb Z} E_8$}
\label{fig:ZEn}
\end{figure} Again, for $n = 6$ and $n = 7$ the bottom two rows and the bottom row, respectively, of non-filled vertices do not occur. Note that for $n = 6$ the AR quiver is symmetric about the central line, a symmetry which does not exist for $n = 7, 8$.
The AR quiver of the $u$-cluster category $\mbox{\sf C}$ is ${\mathbb Z} E_n$ modulo the action of $\tau^{-1}\Sigma^u$ by \cite[prop.\ 1.3]{BMRRT}.
Again $\tau^{-1}$ acts by shifting one unit to the right.
If $n = 6$ then the suspension $\Sigma$ acts by shifting 6 units to the right and reflecting in the central line of the AR quiver. If $n = 7, 8$ then $\Sigma$ acts by shifting $9$, respectively $15$ units to the right. See \cite[table 1, p.\ 359]{MiyachiYekutieli}.
It follows that the action of $\tau^{-1}\Sigma^u$ is given as follows: for $n = 6$ and $u$ even, by shifting $6u+1$ units to the right; for $n = 6$ and $u$ odd, by shifting $6u+1$ units to the right and reflecting in the central line; for $n = 7$, by shifting $9u+1$ units to the right; for $n = 8$, by shifting $15u+1$ units to the right.
In particular, the AR quiver of the $u$-cluster category of type $E_n$, $n = 6,7,8$, has the shape of a cylinder, except when $n = 6$ and $u$ is odd where it has the shape of a M\"obius band.
\section{Cluster tilting objects} \label{sec:orthogonal}
The notion of a $u$-cluster tilting object in a triangulated category was recalled in Section \ref{sec:cluster}. There is also a definition in abelian categories, cf.\ \cite[sec.\ 2]{Iyama}. An object $X$ of an abelian category is called $u$-cluster tilting if \begin{enumerate}
\item $\operatorname{Ext}^1(X,t) = \cdots = \operatorname{Ext}^u(X,t) = 0 \; \Leftrightarrow \;
t \in \operatorname{add}\, X$,
\item $\operatorname{Ext}^1(t,X) = \cdots = \operatorname{Ext}^u(t,X) = 0 \; \Leftrightarrow \;
t \in \operatorname{add}\, X$.
\end{enumerate}
Over selfinjective algebras, there is the following simple connection between $u$-cluster tilting objects in the module category (which is abelian) and the stable module category (which is triangulated).
\begin{Proposition} \label{prop:max-orth} Let $A$ be a selfinjective $k$-algebra and let $X$ be a $u$-cluster tilting object of the module category $\mbox{\sf mod}\,A$. Then $X$ is also a $u$-cluster tilting object of the stable module category $\mbox{\sf stab}\,A$. \end{Proposition}
\begin{proof} Since $A$ is selfinjective, the suspension functor $\Sigma$ provides us with isomorphisms \[
\underline{\operatorname{Hom}}(M,\Sigma^i N) \cong \operatorname{Ext}^i(M,N) \] for $M$ and $N$ in $\mbox{\sf mod}\,A$ and $i \geq 1$. Here $\underline{\operatorname{Hom}}$ denotes morphisms in $\mbox{\sf stab}\,A$.
On one hand, this implies \[
\underline{\operatorname{Hom}}(X,\Sigma^1 X) = \cdots
= \underline{\operatorname{Hom}}(X,\Sigma^u X) = 0. \]
On the other hand, suppose that $t$ in $\mbox{\sf stab}\,A$ satisfies \[
\underline{\operatorname{Hom}}(X,\Sigma^1 t) = \cdots
= \underline{\operatorname{Hom}}(X,\Sigma^u t) = 0. \] Then \[
\operatorname{Ext}^1(X,t) = \cdots = \operatorname{Ext}^u(X,t) = 0, \] so $t$ is in $\operatorname{add}\, X$ viewed in $\mbox{\sf mod}\,A$. But then $t$ is clearly also in $\operatorname{add}\, X$ viewed in $\mbox{\sf stab}\,A$.
A similar argument shows that \[
\underline{\operatorname{Hom}}(t,\Sigma^1 X) = \cdots
= \underline{\operatorname{Hom}}(t,\Sigma^u X) = 0 \] implies that $t$ is in $\operatorname{add}\, X$ viewed in $\mbox{\sf stab}\,A$. \end{proof}
\section{Dynkin type $A$} \label{sec:typeA}
\subsection{Nakayama algebras} \label{sec:Nakayama}
This subsection proves part (i) of Theorem A from the introduction.
For integers $N, n \geq 1$, consider the Nakayama algebra $B_{N,n+1}$ defined as the path algebra of the circular quiver with $N$ vertices and all arrows pointing in the same direction, modulo the ideal generated by paths of length $n+1$.
This is a selfinjective algebra of tree class $A_n$. The stable AR quiver of $B_{N,n+1}$ has the shape of a cylinder and can be obtained as ${\mathbb Z} A_n$ modulo a shift by $N$ units to the right.
On the other hand, as we saw in Section \ref{sec:cluster}, if $u$ is even then the $u$-cluster category of type $A_n$ has an AR quiver which can be obtained as ${\mathbb Z} A_n$ modulo a shift by $\frac{u}{2}(n+1) + 1$ units to the right. Indeed, this is no coincidence.
\begin{Theorem} \label{thm:Nakayama} Let $u \geq 2$ be an even integer and let $n \geq 1$ be an integer. Set \[
N = \frac{u}{2}(n+1) + 1. \] Then the $u$-cluster category of type $A_n$ is equivalent as a triangulated category to the stable module category $\mbox{\sf stab}\, B_{N,n+1}$. \end{Theorem}
\begin{proof} For $n=1$ the theorem states that the $u$-cluster category of type $A_1$ is triangulated equivalent to $\mbox{\sf stab}\,B_{u+1,2}$. This is true by the observation that both categories have as AR quiver a disjoint union of $u+1$ vertices, with suspension equal to a cyclic shift by one vertex.
We now assume $n\ge 2$, in which case the relevant categories are connected. By Keller and Reiten's Morita theorem for $u$-cluster categories \cite[thm.\ 4.2]{KellerReiten2}, we need to show three things for the stable module category $\mbox{\sf stab}\, B_{N,n+1}$. \begin{itemize}
\item It has {\em Calabi-Yau dimension } $u + 1$.
\item It has {\em a $u$-cluster tilting object } $X$ with
endomorphism ring $kA_n$.
\item The object $X$ has {\em vanishing of negative self-extensions
} in the sense that $\underline{\operatorname{Hom}}(X,\Sigma^{-i}X) = 0$ for $i=1, \ldots, u-1$. \end{itemize} Accordingly, the proof is divided into three sections. Note the shift in the indices compared to \cite{KellerReiten2}: their $d$-cluster categories are $u$-cluster categories for $u=d-1$ in our notation.
{\em Calabi-Yau dimension. } We must show that $\mbox{\sf stab}\, B_{N,n+1}$ has Calabi-Yau dimension $u + 1$, and we can do so using the results by Dugas in \cite{Dugas}. To apply his result from \cite[thm.\ 6.1(2)]{Dugas} in our case of type $A_n$ where $n\ge 2$, we need the Coxeter number $h_{A_n}=n+1$, and we have to observe that in Asashiba's notation from \cite[appendix]{Asashiba} the Nakayama algebra $B_{N,n+1}$ has the form $(A_n,\frac{N}{n},1)$ where $f=\frac{N}{n}$ is the {\em
frequency}. Then \cite[thm.\ 6.1(2)]{Dugas} states that the stable module category $\mbox{\sf stab}\, B_{N,n+1}$ has Calabi-Yau dimension $2r+1$ where $r \equiv -(h_{A_n})^{-1} \,\operatorname{mod}\, fn$ and $0\le r<fn$. Since $f=\frac{N}{n}$ the value of $r$ is determined by $0\le r<N$ and $r \equiv -(h_{A_n})^{-1} \,\operatorname{mod}\, N = -(n+1)^{-1} \,\operatorname{mod}\, N$. By our assumptions in Theorem \ref{thm:Nakayama} we have that $u = 2\ell$ is even and that $N = \frac{u}{2}(n+1)+1 = \ell(n+1)+1$. Then the condition for the value of $r$ reads $r \equiv -(n+1)^{-1} \,\operatorname{mod}\, (\ell(n+1)+1)$ which together with $0\le r< N = \ell(n+1)+1$ clearly forces $r = \ell$. Therefore we can deduce that $\mbox{\sf stab}\, B_{N,n+1}$ has Calabi-Yau dimension $2r+1 = 2\ell+1 = u+1,$ as desired.
{\em $u$-cluster tilting object. } To find a $u$-cluster tilting object $X$ in $\mbox{\sf stab}\,B_{N,n+1}$, by Proposition \ref{prop:max-orth} it suffices to find a $u$-cluster tilting module $X$ in the module category $\mbox{\sf mod}\,B_{N,n+1}$. We define $X$ to be the direct sum of the projective indecomposable $B_{N,n+1}$-modules and the indecomposable modules $x_1, \ldots, x_n$ whose position in the stable AR quiver of $B_{N,n+1}$ is given by Figure \ref{fig:Anx}. \begin{figure}
\caption{The indecomposable modules $x_1, \ldots, x_n$ for the Nakayama algebra}
\label{fig:Anx}
\end{figure} For the (uniserial) Nakayama algebras $B_{N,n+1}$ it is well-known that the $i$th layer from the bottom of the stable AR quiver contains precisely the non-projective indecomposable modules of dimension $i$ (see e.g. \cite[cor. V.4.2]{ASS}). Moreover, the arrow from $x_i$ to $x_{i+1}$ in the above picture is a monomorphism for each $i$. From this it follows easily that the stable endomorphism ring of the module $X$ is isomorphic to $kA_n$.
We now show that the module $X$ defined above is $u$-cluster tilting. The $u$-cluster tilting modules (also called maximal $u$-orthogonal modules) for selfinjective algebras of finite type with tree class $A_n$ were described combinatorially in \cite[sec.\ 4]{Iyama}. We briefly sketch the main ingredients and refer to \cite{Iyama} for details. On the stable AR quiver of $B_{N,n+1}$ one introduces a coordinate system as in Figure \ref{fig:An_coordinates}. \begin{figure}
\caption{The coordinate system for the Nakayama algebra}
\label{fig:An_coordinates}
\end{figure} The first coordinate has to be taken modulo $N$. To each vertex $x$ in the stable AR quiver one associates a `forbidden region' $H^{+}(x)$ which is just the rectangle spanned from $x$ to the right; more precisely, if $x=(i,j)$, then $H^{+}(x)$ is the rectangle with corners $x=(i,j)$, $(i,i+n+1)$, $(j-2,i+n+1)$ and $(j-2,j)$ shown in Figure \ref{fig:An_region}. \begin{figure}
\caption{The set $H^+(x)$ in Dynkin type $A$}
\label{fig:An_region}
\end{figure} Define an automorphism $\omega$ on the stable AR quiver by setting $\omega(i,j) = (j-n-2,i+1)$ and let $\tau$ be the usual AR translation, $\tau(i,j)=(i-1,j-1)$. Then a subset $S$ of the vertex set $M$ in the stable AR quiver is called $u$-cluster tilting if \[
M\setminus S = \bigcup_{x\in S,\: 0<i\le u} H^+(\tau^{-1}\omega^{-i+1}x). \] For our particular choice of the $B_{N,n+1}$-module $X$ the set $S$ is given by the above `slice' $x_1, \ldots, x_n$. Then the straightforward, but crucial, observation is that for $i=1,\ldots,u$ the sets $H(i) = \bigcup_{x\in S} H^+(\tau^{-1}\omega^{-i+1}x)$ are as shown in Figure \ref{fig:An_regions}. \begin{figure}
\caption{The sets $H(i)$ for the Nakayama algebra}
\label{fig:An_regions}
\end{figure} That is, each $H(i)$ contains all the vertices in a triangular region of the stable AR quiver with each edge of the triangle containing $n$ vertices.
Recall that $u$ is even by assumption. In total, the union of the forbidden regions $\bigcup_{0<i\le u}H(i)$ covers precisely the region of the stable AR quiver between the slice $x_1, \ldots, x_n$ and the shift of it by $\frac{u}{2}(n+1)+1$ units to the right. But the stable AR quiver has a circumference of $N = \frac{u}{2}(n+1)+1$ units, so it is clear from the above discussion that the set $S$ is $u$-cluster tilting and that, accordingly, the $B_{N,n+1}$-module $X$ is indeed $u$-cluster tilting.
{\em Vanishing of negative self-extensions. } We must show $\underline{\operatorname{Hom}}(X,\Sigma^{-i} X) = 0$ for $i=1, \ldots, u-1$. For this, we can view $X$ in $\mbox{\sf stab}\, B_{N,n+1}$ where it has the $n$ indecomposable summands $x_1, \ldots, x_n$. Given non-projective indecomposable $B_{N,n+1}$-modules $v$ and $w$, observe that by \cite[sec.\ 4.2 and prop.\ 4.4.3]{Iyama}, we have $\underline{\operatorname{Hom}}(v,w) = 0$ precisely if the vertex of $w$ is outside the forbidden region $H^+(v)$. So we need to check that all vertices corresponding to indecomposable summands of $\Sigma^{-i} X$ for $i = 1, \ldots, u-1$ are outside the forbidden region $H(X) = \bigcup_j H^+(x_j)$, where the union is over the indecomposable summands $x_j$ in $X$.
Now, the action of $\Sigma^{-1}$ on the stable AR quiver is just $\omega$. For instance, Figure \ref{fig:AnH} shows the forbidden region along with the direct summands of $X$ and of $\omega X$. \begin{figure}
\caption{The set $H(X)$ and direct summands of $X$ and $\omega X$ for the Nakayama algebra}
\label{fig:AnH}
\end{figure} It is clear that the $\omega(x_j)$ fall outside $H(X)$. More generally, $\omega$ moves vertices to the left, so the only way we could fail to get $\underline{\operatorname{Hom}}(X,\Sigma^{-i} X) = 0$ would be if we took $i$ so large that the $\omega^i(x_j)$ made it all the way around the stable AR quiver and reached the forbidden region from the right. Let us check that this does not happen: $\omega^2$ is just a shift by $n+1$ units to the left, and hence $\omega^{u-2} = (\omega^2)^{\frac{u}{2} - 1}$ is a shift by $(\frac{u}{2} - 1)(n+1) = N - (n + 2)$ units to the left. Since the stable AR quiver has circumference $N$ it is clear that by applying $\omega^{u-1}$
we do not reach the forbidden region from the right. \end{proof}
\subsection{M\"{o}bius algebras} \label{sec:Mobius}
This subsection proves part (ii) of Theorem A from the introduction.
For integers $p,s \geq 1$, consider the M\"{o}bius algebra $M_{p,s}$. Following the notation in \cite[app. A2.1.2]{Asashiba}, this is the path algebra of the quiver shown in Figure \ref{fig:Moebius} modulo the following relations. \begin{itemize}
\item[{(i)}] $\alpha_p^{i}\cdots\alpha_0^{i} = \beta_p^{i}\cdots\beta_0^{i}$ for each $i\in\{0,\ldots,s-1\}$.
\item[{(ii)}] $\beta_0^{i+1}\alpha_p^{i}=0$, $\alpha_0^{i+1}\beta_p^{i}=0$ for each $i\in\{0,\ldots,s-2\}$,\\[2mm] $\alpha_0^{0}\alpha_p^{s-1}=0$, $\beta_0^{0}\beta_p^{s-1}=0$.
\item[{(iii)}] Paths of length $p+2$ are equal to zero.
\end{itemize}
\begin{figure}
\caption{Quiver for the M\"obius algebra}
\label{fig:Moebius}
\end{figure}
This is a selfinjective algebra of tree class $A_{2p+1}$. In the notation of \cite[app. A2.1.2]{Asashiba} the M\"obius algebra $M_{p,s}$ is of the form $(A_{2p+1},s,2)$. The stable AR quiver of $M_{p,s}$ has the shape of a M\"{o}bius band and can be obtained as ${\mathbb Z} A_{2p+1}$ modulo a reflection in the horizontal centre line composed with a shift by $s(2p+1)$ units to the right, see \cite{Riedtmann}.
On the other hand, as we saw in Section \ref{sec:cluster}, if $u$ is odd then the $u$-cluster category of type $A_{2p+1}$ has an AR quiver which can be obtained as ${\mathbb Z} A_{2p+1}$ modulo a reflection in the horizontal centre line composed with a shift by $\frac{u}{2}(2p+1+1) + 1 = u(p+1) + 1$ units to the right. This quiver also has the shape of a M\"{o}bius band, and again, this is no coincidence.
\begin{Theorem} \label{thm:Mobius} Let $u \geq 1$ be an odd integer and let $p,s \geq 1$ be integers for which \[
s(2p+1) = u(p+1) + 1. \] Then the $u$-cluster category of type $A_{2p+1}$ is equivalent as a triangulated category to the stable module category $\mbox{\sf stab}\, M_{p,s}$. \end{Theorem}
\begin{proof} Like the proof of Theorem \ref{thm:Nakayama}, this proof is divided into three sections verifying the conditions in Keller and Reiten's Morita theorem \cite[thm.\ 4.2]{KellerReiten2}.
{\em Calabi-Yau dimension. } We must show that $\mbox{\sf stab}\, M_{p,s}$ has Calabi-Yau dimension $u + 1$. Again this can be done using the work of Dugas, namely \cite[prop.\ 9.6]{Dugas}. There he shows that the Calabi-Yau dimension of the stable module category $\mbox{\sf stab}\, M_{p,s}$ is of the form $K_{p,s}(2p+1)-1$ where \[
K_{p,s} = \operatorname{inf} \big\{\,r\,\big|\,r\ge 1,\,r(p+1)\equiv 1 \,\operatorname{mod}\, s,\,
\mbox{and $\displaystyle \frac{r(s+p+1)-1}{s}$ is even} \big\}. \] Let us determine the number $K_{p,s}$ for the values of $u,p$ and $s$ given by the assumptions of the theorem. We have \[
u+2 = \frac{s(2p+1)-1}{p+1} +2 = \frac{(s+1)(2p+1)}{p+1}. \] Since $\gcd(p+1,2p+1)=1$, we deduce that $p+1$ divides $s+1$. Moreover, the integer $\frac{s+1}{p+1}$ is odd since $u$ is odd by assumption. Now, for the condition $r(p+1)\equiv \, 1 \,\operatorname{mod}\, s$, the integer $\frac{s+1}{p+1}$ is clearly the minimal (positive) solution. Moreover, for this value $r=\frac{s+1}{p+1}$ we have that \[
\frac{r(s+p+1)-1}{s}
= \frac{(s+1)(s+p+1)-(p+1)}{(p+1)s}
= \frac{s+p+2}{p+1} = \frac{s+1}{p+1}+1 \] is even. Hence $K_{p,s}=\frac{s+1}{p+1}$, and we conclude that $\mbox{\sf stab}\,M_{p,s}$ has Calabi-Yau dimension \begin{eqnarray*}
K_{p,s}(2p+1)-1 & = & \frac{(s+1)(2p+1)}{p+1} - 1
= \frac{(s+1)(2p+1)-2(p+1)}{p+1} + 1 \\
& = & \frac{s(2p+1)-1}{p+1} + 1 = u + 1, \end{eqnarray*} where the last equality holds by assumption on $u$.
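By way of illustration, take $u = 3$, $p = 2$ and $s = 2$, so that $s(2p+1) = 10 = u(p+1)+1$. Then $\frac{s+1}{p+1} = 1$, and indeed $r = 1$ satisfies $r(p+1) = 3 \equiv 1 \,\operatorname{mod}\, 2$ while $\frac{r(s+p+1)-1}{s} = \frac{4}{2} = 2$ is even; hence $K_{2,2} = 1$ and the Calabi-Yau dimension of $\mbox{\sf stab}\,M_{2,2}$ is $1\cdot(2p+1)-1 = 4 = u+1$.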
{\em $u$-cluster tilting object. } To find a $u$-cluster tilting object $X$ in $\mbox{\sf stab}\,M_{p,s}$, recall that the projective indecomposable $M_{p,s}$-modules are either u\-ni\-se\-ri\-al or biserial, and that correspondingly, the vertices in the quiver of $M_{p,s}$ are called uniserial or biserial. The position of the corresponding simple modules in the stable AR quiver is well-known; in particular, the simple modules corresponding to biserial vertices occur in the centre line of the stable AR quiver. As in the case of Nakayama algebras, we define the module $X$ as the direct sum of the projective indecomposable modules and the indecomposable modules $x_1, \ldots, x_{2p+1}$ lying on a slice as in Figure \ref{fig:A2pp1} such that the module $x_{p+1}$ is a simple module $S_v$ corresponding to a biserial vertex $v$ of the quiver of $M_{p,s}$. \begin{figure}
\caption{The indecomposable modules $x_1, \ldots, x_{2p+1}$ for the M\"obius algebra}
\label{fig:A2pp1}
\end{figure} The other modules in this slice can also be described. For $j=1,\ldots,p$, the module $x_j$ is the uniserial module of length $p+2-j$ with top $S_v$, and the module $x_{p+j+1}$ is the uniserial module of length $j+1$ with socle $S_v$.
In particular, the bottom $p$ maps are epimorphisms and the upper $p$ maps are monomorphisms. The composition of all $2p$ maps in such a slice is non-zero, mapping the top onto the socle. Most importantly for us, it does not factor through a projective module, i.e., it is a non-zero morphism in the stable module category of $M_{p,s}$. From this it follows easily that the stable endomorphism ring of the module $X$ is isomorphic to $kA_{2p+1}$.
We now show that $X$ is $u$-cluster tilting. This argument is also analogous to the Nakayama algebra case. The crucial difference is that now $u$ is odd. Hence, the forbidden regions defined in \cite{Iyama} and discussed in the proof of Theorem \ref{thm:Nakayama} are as in Figure \ref{fig:A2pp1_regions}. \begin{figure}
\caption{The sets $H(i)$ for the M\"obius algebra}
\label{fig:A2pp1_regions}
\end{figure} The method from the proof of Theorem \ref{thm:Nakayama} shows that, to see that $X$ is $u$-cluster tilting, it is sufficient to see that the vertex $x_1$ is identified with the vertex $*$. However, each $H(i)$ contains the vertices of the stable AR quiver in an equilateral triangular region with edges having $2p+1$ vertices. So in order for $x_1$ to be identified with $*$, we must identify after $\frac{u-1}{2}(2p+2)+ (p+2) = u(p+1)+1$ units. But in fact, one gets the stable AR quiver of $M_{p,s}$ from $\mathbb{Z}A_{2p+1}$ by identifying after $s(2p+1)$ units, and by the assumption of the theorem we do indeed have $s(2p+1)=u(p+1)+1$.
{\em Vanishing of negative self-extensions. } We must show $\underline{\operatorname{Hom}}(X,\Sigma^{-i} X) = 0$ for $i=1, \ldots, u-1$. The proof is analogous to the Nakayama case: The action of $\Sigma^{-1}$ on the stable AR quiver is again just $\omega$, and the forbidden region of $X$ along with the direct summands of $X$ and of $\omega X$ are as in Figure \ref{fig:AnH2}. \begin{figure}
\caption{The set $H(X)$ and direct summands of $X$ and $\omega X$ for the M\"obius algebra}
\label{fig:AnH2}
\end{figure} The only way we could fail to get $\underline{\operatorname{Hom}}(X,\Sigma^{-i} X) = 0$ would be if we took $i$ so large that the $\omega^i(x_j)$ made it all the way around the stable AR quiver and reached the forbidden region from the right. In fact, let us look at the largest relevant integer, $u - 1$. As $\omega^2$ is just a shift by $2p+2$ units to the left, we have that $\omega^{u-1} = (\omega^2)^{\frac{u-1}{2}}$ is a shift by $\frac{u-1}{2}(2p+2) = (u-1)(p+1) = s(2p+1) - (p+2)$ units to the left. The stable AR quiver has a circumference of $s(2p+1)$ units, so the $\omega^{u-1}(x_j)$ lie strictly to the right of the forbidden region. (Note that the stable AR quiver is a M\"{o}bius band, and the change of orientation means that, although $u-1$ is even, the $\omega^{u-1}(x_j)$ form a diagonal line perpendicular, not parallel, to the line of the $x_j$.) So $\underline{\operatorname{Hom}}(X,\Sigma^{-i} X)$ is zero for $i = u - 1$, and hence certainly also for all values $i = 1, \ldots, u - 1$. This completes the proof. \end{proof}
\begin{Remark} Note that as a special case of Theorem \ref{thm:Mobius}, the $1$-cluster category of type $A_3$ is triangulated equivalent to $\mbox{\sf stab}\, M_{1,1}$. The M\"obius algebra $M_{1,1}$ is isomorphic to the preprojective algebra of Dynkin type $A_3$.
This is the only case where a $1$-cluster category is triangulated equivalent to the stable module category of a selfinjective algebra of finite representation type and tree class $A_n$. This follows from the complete classification of representation-finite selfinjective algebras of stable Calabi-Yau dimension 2 given in \cite[cor. 3.10]{ES}. \end{Remark}
\section{Dynkin type $D$} \label{sec:typeD}
This section proves Theorem D from the introduction.
Asashiba's paper \cite{Asashiba1} gives a derived and stable equivalence classification of selfinjective algebras of finite representation type. If the tree class of the stable AR quiver is Dynkin type $D$, then there are three families of representatives of algebras denoted \begin{itemize}
\item $(D_n,s,1)$ with $n \geq 4$, $s \geq 1$,
\item $(D_n,s,2)$ with $n \geq 4$, $s \geq 1$,
\item $(D_{3m},\frac{s}{3},1)$ with $m \ge 2$, $s \geq 1$, $3 \nmid s$.
\end{itemize} It follows from \cite[cor.\ 1.7]{BS} that the stable AR quivers of these algebras are cylinders with the following circumferences. \begin{itemize}
\item For $(D_n,s,1)$ and $(D_n,s,2)$ the circumference is $s(2n-3)$.
\item For $(D_{3m},\frac{s}{3},1)$ the circumference is $s(2m-1)$.
\end{itemize}
By Subsection \ref{subsec:clusterD}, the AR quiver of the $u$-cluster category of type $D_n$ is a cylinder of circumference $u(n-1)+1$. So in order for the stable categories $\mbox{\sf stab}\,(D_n,s,1)$ or $\mbox{\sf stab}\,(D_n,s,2)$ to be $u$-cluster categories we need \[
u(n-1)+1 = s(2n-3). \] In particular, this implies \[
u\equiv\,-(n-1)^{-1}\,\equiv\,-2 \,\operatorname{mod}\, (2n-3). \] Likewise, for the stable category $\mbox{\sf stab}\,(D_{3m},\frac{s}{3},1)$ to be a $u$-cluster category, we need \begin{equation} \label{equ:d}
u(3m-1)+1 = s(2m-1). \end{equation} In particular, this implies \[
u\equiv\,-m^{-1}\,\equiv\,-2 \,\operatorname{mod}\, (2m-1). \] Moreover, recall that in the definition of the algebras $(D_{3m},\frac{s}{3},1)$ the case $3 \mid s$ is excluded. In the situation of equation \eqref{equ:d} we have \[
3 \nmid s
\; \Longleftrightarrow \;
u(3m-1)+1 \not\equiv \, 0 \,\operatorname{mod}\, 3(2m-1)
\; \Longleftrightarrow \;
u \not\equiv \, -(3m-1)^{-1} \, \equiv \, -2 \,\operatorname{mod}\, (6m-3). \]
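To illustrate these conditions, take $m = 2$, so that $2m-1 = 3$ and $6m-3 = 9$. Equation \eqref{equ:d} becomes $5u+1 = 3s$; for $u = 4$ this gives $s = 7$, and indeed $u = 4 \equiv -2 \,\operatorname{mod}\, 3$ while $u = 4 \not\equiv -2 \equiv 7 \,\operatorname{mod}\, 9$, in accordance with $3 \nmid 7$.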
Indeed, these conditions turn out also to be sufficient. Note that, setting $n=3m$, the forbidden case $u \equiv -2 \,\operatorname{mod}\, (6m-3)$ for the algebras $(D_{3m},\frac{s}{3},1)$ is precisely the case $u\,\equiv\,-2 \,\operatorname{mod}\, (2n-3)$ in which the algebras $(D_n,s,1)$ and $(D_n,s,2)$ can be applied.
The main result of this section is the following which restates Theorem D from the introduction.
\begin{Theorem} \label{thm:n_even} Let $m, n, u$ be integers with $u \geq 1$. \begin{enumerate}
\item Suppose that $n\ge 4$ is even and $u \equiv -2 \,\operatorname{mod}\,
(2n-3)$. Then the $u$-cluster category of type $D_{n}$ is
equivalent as a triangulated category to the stable module category \[
\mbox{\sf stab}\, (D_{n},\frac{u(n-1)+1}{2n-3},1). \]
\item Suppose that $n\ge 5$ is odd and $u \equiv -2 \,\operatorname{mod}\, (2n-3)$.
\noindent If $u$ is even, then the $u$-cluster category of type $D_{n}$ is triangulated e\-qui\-va\-lent to the stable module category $\mbox{\sf stab}\, (D_{n},\frac{u(n-1)+1}{2n-3},1).$
\noindent If $u$ is odd, then the $u$-cluster category of type $D_{n}$ is triangulated e\-qui\-va\-lent to the stable module category $ \mbox{\sf stab}\, (D_{n},\frac{u(n-1)+1}{2n-3},2).$
\item Suppose that $m\ge 2$ and $u\equiv\, -2 \,\operatorname{mod}\, (2m-1)$ but $u\not\equiv\, -2 \,\operatorname{mod}\, (6m-3)$. Suppose moreover that not both $m$ and $u$ are odd. Then the $u$-cluster category of type $D_{3m}$ is equivalent as a triangulated category to the stable module category $\mbox{\sf stab}\, (D_{3m},\frac{s}{3},1)$ where $s=\frac{u(3m-1)+1}{2m-1}$.
\end{enumerate} \end{Theorem}
\begin{proof} As in type A, the proof is divided into three sections verifying the conditions in Keller and Reiten's Morita theorem \cite[thm.\ 4.2]{KellerReiten2}.
{\em Calabi-Yau dimension. } We must show that each of the stable module categories occurring in the theorem has Calabi-Yau dimension $u+1$.
For part (i) we suppose that $n\ge 4$ is even and we consider the algebra $(D_{n},\frac{u(n-1)+1}{2n-3},1)$. The Calabi-Yau dimension of its stable module category can be determined using \cite[thm.\ 6.1]{Dugas}, in which both parts can apply. The relevant invariants occurring there are the frequency $f=\frac{u(n-1)+1}{2n-3}$, the Coxeter number $h_{D_n}=2n-2$ and the related number $h_{D_n}^*= h_{D_n}/2=n-1$, and $m_{D_n}=h_{D_n}-1=2n-3$.
If \cite[thm.\ 6.1(1)]{Dugas} applies then the Calabi-Yau dimension $d$ of $\mbox{\sf stab}\, (D_{n},\frac{u(n-1)+1}{2n-3},1)$ satisfies $$d \equiv 1 - (h_{D_n}^*)^{-1} \,\operatorname{mod}\, fm_{D_n} \equiv 1- (n-1)^{-1} \,\operatorname{mod}\, (u(n-1)+1) $$ and $0 < d \le u(n-1)+1$. Upon multiplication with $n-1$ this becomes $d(n-1) \equiv n-2 \,\operatorname{mod}\, (u(n-1)+1)$ which is easily checked to be satisfied by $d=u+1$.
If \cite[thm.\ 6.1(2)]{Dugas} applies then the Calabi-Yau dimension $d$ of $\mbox{\sf stab}\, (D_{n},\frac{u(n-1)+1}{2n-3},1)$ has the form $d=2r+1$ where $r$ is determined by \begin{equation} \label{equ:Dn:neven} r \equiv - (h_{D_n})^{-1} \,\operatorname{mod}\, fm_{D_n} \equiv -(2n-2)^{-1} \,\operatorname{mod}\, (u(n-1)+1) \end{equation} and $0 \le r < u(n-1)+1$. Since \cite[thm.\ 6.1(2)]{Dugas} applies, we know from the assumptions stated in \cite[thm.\ 6.1(1)]{Dugas} that $2\nmid f=\frac{u(n-1)+1}{2n-3}$, from which it follows that $u$ is even (since $n$ is even). It is readily checked that $r=\frac{u}{2}$ satisfies (\ref{equ:Dn:neven}). Therefore the Calabi-Yau dimension is $2r+1=u+1$, as required.
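As a concrete check, take $n = 4$. For $u = 3$ the congruence $d(n-1) \equiv n-2 \,\operatorname{mod}\, (u(n-1)+1)$ reads $3d \equiv 2 \,\operatorname{mod}\, 10$ and is satisfied by $d = 4 = u+1$. For $u = 8$ the congruence (\ref{equ:Dn:neven}) reads $r \equiv -6^{-1} \,\operatorname{mod}\, 25$, which is satisfied by $r = 4 = \frac{u}{2}$ since $6\cdot 4 = 24 \equiv -1 \,\operatorname{mod}\, 25$, giving Calabi-Yau dimension $2\cdot 4 + 1 = 9 = u+1$.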
For part (ii), we suppose that $n\ge 5$ is odd and we consider the algebras $(D_{n},\frac{u(n-1)+1}{2n-3},1)$ and $(D_{n},\frac{u(n-1)+1}{2n-3},2)$, depending on whether $u$ is even or odd.
If $u$ is even then the Calabi-Yau dimension of $\mbox{\sf stab}\, (D_{n},\frac{u(n-1)+1}{2n-3},1)$ can be determined using \cite[thm.\ 6.1(2)]{Dugas} (note that \cite[thm.\ 6.1(1)]{Dugas} only applies for $n$ even). The only difference to the case of $n$ even is the invariant $h_{D_n}^*$ which is now equal to $2n-2$ instead of $n-1$. But this invariant does not occur in \cite[thm.\ 6.1(2)]{Dugas} so the proof for $n$ even carries over verbatim and gives that $\mbox{\sf stab}\, (D_{n},\frac{u(n-1)+1}{2n-3},1)$ has Calabi-Yau dimension $u + 1$.
If $u$ is odd (and $n \geq 5$ is still odd) then the Calabi-Yau dimension of $\mbox{\sf stab}\,(D_{n},\frac{u(n-1)+1}{2n-3},2)$ can be determined using \cite[prop.\ 7.3]{Dugas}. Note that since $n$ is odd, the frequency $f=\frac{u(n-1)+1}{2n-3}$ is odd as well, and hence \cite[prop.\ 7.3(1)]{Dugas} applies. From this we get that the Calabi-Yau dimension of $ \mbox{\sf stab}\, (D_{n},\frac{u(n-1)+1}{2n-3},2)$ is of the form $d=2r$ where $r\equiv (n-2)(2n-2)^{-1} \,\operatorname{mod}\, (u(n-1)+1)$ and $0 < r < u(n-1)+1$. Upon multiplication with $2n-2$ the latter equation becomes $2r(n-1)\equiv n-2 \,\operatorname{mod}\, (u(n-1)+1)$ which is easily seen to be satisfied by $r=\frac{u+1}{2}$. Therefore, the Calabi-Yau dimension is $d=2r=u+1$, as required.
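For instance, take $n = 5$ and $u = 5 \equiv -2 \,\operatorname{mod}\, 7$, so that $u(n-1)+1 = 21$. The congruence $2r(n-1) \equiv n-2 \,\operatorname{mod}\, 21$ reads $8r \equiv 3 \,\operatorname{mod}\, 21$ and is satisfied by $r = 3 = \frac{u+1}{2}$ since $8\cdot 3 = 24 \equiv 3 \,\operatorname{mod}\, 21$; hence the Calabi-Yau dimension is $d = 2r = 6 = u+1$.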
For part (iii), we consider the algebras $(D_{3m},\frac{s}{3},1)$ where $s=\frac{u(3m-1)+1}{2m-1}$. The Calabi-Yau dimension of the stable module category can again be determined using \cite[thm.\ 6.1]{Dugas}.
If $m$ is even then the invariants we need are the frequency $f=\frac{s}{3}=\frac{u(3m-1)+1}{3(2m-1)}$, the Coxeter number $h_{D_{3m}}=6m-2$ and the related numbers $m_{D_{3m}}=h_{D_{3m}}-1= 6m-3$, and $h_{D_{3m}}^* = h_{D_{3m}}/2 = 3m-1$.
If \cite[thm.\ 6.1(1)]{Dugas} applies then the Calabi-Yau dimension $d$ of $\mbox{\sf stab}\, (D_{3m},\frac{s}{3},1)$ is determined by $$d \equiv 1 - (h_{D_{3m}}^*)^{-1} \,\operatorname{mod}\, fm_{D_{3m}} \equiv 1 - (3m-1)^{-1} \,\operatorname{mod}\, (u(3m-1)+1) $$ and $0 < d \le u(3m-1)+1$. Clearly, $d=u+1$ satisfies these properties and hence the Calabi-Yau dimension is $u+1$, as claimed.
If \cite[thm.\ 6.1(2)]{Dugas} applies then the Calabi-Yau dimension $d$ of $\mbox{\sf stab}\, (D_{3m},\frac{s}{3},1)$ is of the form $d=2r+1$ where $r$ is determined by \[
r \equiv - (h_{D_{3m}})^{-1} \,\operatorname{mod}\, fm_{D_{3m}}
\equiv -(6m-2)^{-1} \,\operatorname{mod}\, (u(3m-1)+1) \] and $0 \le r < u(3m-1)+1$. Note that our assumptions in this case imply that $u$ is even; otherwise the frequency $f$ would be even and we would be in the situation of \cite[thm.\ 6.1(1)]{Dugas}. The value $r = \frac{u}{2}$ is easily seen to satisfy the above properties, i.e.\ the Calabi-Yau dimension is $2r+1=u+1$, as desired.
Finally, if $m$ is odd then only \cite[thm.\ 6.1(2)]{Dugas} can apply. The computation of the Calabi-Yau dimension carries over verbatim from the previous one; in fact, by assumption in part (iii) of our theorem $u$ has to be even (since $m$ is odd). Hence also for $m$ odd and $u$ even we get the Calabi-Yau dimension of $\mbox{\sf stab}\, (D_{3m},\frac{s}{3},1)$ to be $u+1$, as required.
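As a concrete check, take $m = 2$ and $u = 4$, so that $s = \frac{4\cdot 5+1}{3} = 7$ and $fm_{D_{6}} = \frac{7}{3}\cdot 9 = 21$. The congruence in the first case reads $d \equiv 1 - 5^{-1} \equiv 1 - 17 \equiv 5 \,\operatorname{mod}\, 21$, and the one in the second case reads $r \equiv -10^{-1} \equiv -19 \equiv 2 \,\operatorname{mod}\, 21$, so $d = 2r+1 = 5$; either way the Calabi-Yau dimension is $5 = u+1$.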
{\em $u$-cluster tilting object. } To find a $u$-cluster tilting object $X$ in the stable module category, the method is the same for parts (i)--(iii). In part (iii) we set $n = 3m$ so that in each case $n$ denotes the number of vertices in the underlying Dynkin quiver of type $D_n$.
Let $X$ be the sum of the projective indecomposable modules and the indecomposable modules $x_1, \ldots, x_{n-2}, x_{n-1}^-, x_{n-1}^+$ whose positions in the stable AR quiver of the relevant algebra are given by Figure \ref{fig:ZDn2}. \begin{figure}
\caption{The indecomposable modules $x_1, \ldots, x_{n-2}, x_{n-1}^-, x_{n-1}^+$ in Dynkin type $D$}
\label{fig:ZDn2}
\end{figure} We show that $X$ is $u$-cluster tilting
in the stable module category. By Proposition \ref{prop:max-orth}, it is enough to prove that it is
$u$-cluster tilting in the abelian category of modules. Following \cite[def. 4.2]{Iyama}, introduce a coordinate system on the stable AR quiver as in Figure \ref{fig:Dn_coordinates}. \begin{figure}
\caption{The coordinate system in Dynkin type $D$}
\label{fig:Dn_coordinates}
\end{figure} To each vertex $x$ in the stable AR quiver, associate a `forbidden region' $H^+(x)$ defined as in Figure \ref{fig:DnH} (see \cite[sec. 4.2]{Iyama}), with the proviso that if $x$ is not one of the `exceptional' vertices indicated by superscripts $+$ and $-$, then $H^+(x)$ contains all the exceptional vertices along the relevant part of the top line in the diagram, but if $x$ is exceptional, say $x = (i,i+n)^+$, then $H^+(x)$ only contains half the exceptional vertices along the relevant part of the top line, namely $(i,i+n)^+, (i+1,i+n+1)^-, (i+2,i+n+2)^+, \ldots$, starting with $x$ itself. \begin{figure}
\caption{The set $H^+(x)$ in Dynkin type $D$}
\label{fig:DnH}
\end{figure} Define the following automorphisms of the stable AR quiver: $\theta$ is the identity on the non-exceptional vertices and switches $(i,i+n)^+$ and $(i,i+n)^-$. The AR translation $\tau$ is given by moving each vertex one unit to the left. And finally, $\omega = \theta(\tau \theta)^{n-1}$. A subset $S$ of the vertex set $M$ in the stable AR quiver is called
$u$-cluster tilting if \[
M \setminus S = \bigcup_{x \in S, 0 < i \leq u}
H^+(\tau^{-1}\omega^{-i+1}x), \] see \cite[sec. 4.2]{Iyama}. For our choice of $X$, the set $S$ is given by the modules $x_1$, $\ldots$, $x_{n-2}$, $x_{n-1}^-$, $x_{n-1}^+$. But then the sets \[
H(i) = \bigcup_{x \in S} H^+(\tau^{-1}\omega^{-i+1}x) \] can easily be verified to sit in the stable AR quiver as in Figure \ref{fig:DnHs} where each parallelogram has $n-1$ vertices on each edge. \begin{figure}
\caption{The sets $H(i)$ in Dynkin type $D$}
\label{fig:DnHs}
\end{figure} In total, the union \[
\bigcup_{0 < i \leq u} H(i) =
\bigcup_{x \in S, 0 < i \leq u}
H^+(\tau^{-1}\omega^{-i+1}x) \] is a parallelogram with $u(n-1)$ vertices on each horizontal edge. This means that the parallelogram covers precisely the region between the $x$'s and their shift by $u(n-1) + 1$ units to the right.
By Subsection \ref{subsec:clusterD}, this is exactly the number of units after which ${\mathbb Z} D_n$ is identified with itself to get the stable AR quiver. It follows that $S$ is a
$u$-cluster tilting set of vertices of the stable AR quiver, and hence $X$ is
$u$-cluster tilting in the module category by \cite[thm.\ 4.2.2]{Iyama}.
To show that the stable endomorphism algebra $\underline{\operatorname{End}}(X)$ is $kD_n$, we need to see that for each pair of in\-de\-com\-po\-sa\-ble summands $x_i$ and $x_j$ of $X$, the stable $\operatorname{Hom}$-space $\underline{\operatorname{Hom}}(x_i,x_j)$ is one-dimensional if $x_i$ is below $x_j$ in the stable AR quiver, and zero otherwise.
The selfinjective algebras in the theorem in question are standard, so each morphism between in\-de\-com\-po\-sa\-ble modules in the stable category is a sum of compositions of sequences of irreducible morphisms between indecomposable modules. Consider such a sequence which composes to a morphism $x_i \rightarrow x_j$.
If, along the sequence, there is an indecomposable $y$ which is not a summand of $X$, then $y \rightarrow x_j$ factors through a direct sum of indecomposable summands of $\tau X$ by \cite[lem.\ VIII.5.4]{ASS}. But then $x_i \rightarrow y \rightarrow x_j$ factors in the same way, and this means that it is zero because $\underline{\operatorname{Hom}}(X,\tau X) = 0$ by the methods used in the proof that $X$ is
$u$-cluster tilting. Hence $x_i \rightarrow x_j$ can be taken to be a sum of compositions of sequences of irreducible morphisms which only pass through indecomposable summands of $X$. In the stable AR quiver, the arrows between these summands all point upwards, so it follows that $\underline{\operatorname{Hom}}(x_i,x_j)$ is zero unless $x_i$ is below $x_j$ in the stable AR quiver.
On the other hand, if $x_i$ is below $x_j$, then $\underline{\operatorname{Hom}}(x_i,x_j)$ is non-zero by \cite[sec.\ 4.2 and prop.\ 4.4.3]{Iyama}. Finally, it follows from \cite[satz 3.5]{Riedtmann2} that the dimension of $\underline{\operatorname{Hom}}(x_i,x_j)$ is at most one.
{\em Vanishing of negative self-extensions. } We must show $\underline{\operatorname{Hom}}(X,\Sigma^{-i} X) = 0$ for $i=1, \ldots, u-1$. If $v$ and $w$ are indecomposable non-projective modules, we have $\underline{\operatorname{Hom}}(v,w) = 0$ precisely if the vertex of $w$ is outside the region $H^+(v)$, see \cite[sec.\ 4.2 and prop.\ 4.4.3]{Iyama}. So we need to check that all vertices corresponding to indecomposable summands of $\Sigma^{-i}X$ for $i = 1, \ldots, u-1$ are outside the forbidden region $H(X) = \bigcup_{x} H^+(x)$, where the union is over the indecomposable summands of $X$.
But the action of $\Sigma^{-1}$ on the stable AR quiver is just $\omega$. So Figure \ref{fig:DnH2} shows the forbidden region along with the $\Sigma^{-i}X$. \begin{figure}
\caption{The set $H(X)$ and direct summands of $\Sigma^{-(u-1)}X, \ldots, \Sigma^{-1}X, X$ in Dynkin type $D$}
\label{fig:DnH2}
\end{figure} \label{page:D2} The only way we could fail to get $\underline{\operatorname{Hom}}(X,\Sigma^{-i} X) = 0$ would be if we took $i$ so large that $\Sigma^{-i} X$ made it all the way around the stable AR quiver and reached the forbidden region from the right.
However, this does not happen: $\omega$, and hence $\Sigma^{-1}$, is a move by $n-1$ units to the left, so $\Sigma^{-(u-1)}X$ is moved $(u-1)(n-1)$ units to the left. On the other hand, to reach $H(X)$, one has to move by the circumference of the stable AR quiver minus the horizontal length of $H(X)$ plus one, and this is $u(n-1) + 1 - (n - 1) + 1 = (u-1)(n-1) + 2$. \end{proof}
\begin{Remark} \label{rem:m_and_u_odd} We would like to stress that in part (iii) of Theorem \ref{thm:n_even}, the assumption that at least one of $m$ and $u$ is even is necessary. This assumption was unfortunately missing in our earlier preprint \cite{HolmJorgensenDE}. We are grateful to Alex Dugas for pointing this out to us.
If both $m$ and $u$ are odd then the Calabi-Yau dimension of the stable category $\mbox{\sf stab}\, (D_{3m},\frac{s}{3},1)$ cannot be of the form $u+1$, as would be needed for being a $u$-cluster category. In fact, the Calabi-Yau dimension can again be computed using \cite[thm.\ 6.1(2)]{Dugas} (\cite[thm.\ 6.1(1)]{Dugas} does not apply since $m$ is odd). In particular, the Calabi-Yau dimension is of the form $d=2r+1$ and hence an odd number which makes it impossible to be equal to $u+1$ (since $u$ is odd).
This happens despite the fact that, for $m$ and $u$ odd, the stable module category $\mbox{\sf stab}\, (D_{3m},\frac{s}{3},1)$ and the $u$-cluster category of type $D_{3m}$ both have as AR quiver a cylinder of circumference $s(2m-1)$. The reason is that under the AR translation $\tau$, the exceptional vertices form a single orbit in the $u$-cluster category but two orbits in the stable category.
As an explicit example, consider the case when $m=3$ and $u=3$. Then the Calabi-Yau dimension of the stable module category $\mbox{\sf stab}\, (D_{9},\frac{5}{3},1)$ is, according to \cite[thm.\ 6.1(2)]{Dugas}, of the form $2r+1$ where $r$ is determined by $r \equiv - 16^{-1} \,\operatorname{mod}\, 25 \equiv 14 \,\operatorname{mod}\, 25$ and $0\le r < 25$. Thus, $\mbox{\sf stab}\, (D_{9},\frac{5}{3},1)$ has Calabi-Yau dimension 29, which is far from the Calabi-Yau dimension 4 of the $3$-cluster category of type $D_9$. \end{Remark}
\begin{Example}
We illustrate our realizability results in type $D_n$ from Theorem \ref{thm:n_even} by considering the situation for some small values of $n$.
Let us first consider type $D_4$. Then parts (ii) and (iii) of Theorem \ref{thm:n_even} do not apply. From part (i) we get for every $u\equiv\,3 \,\operatorname{mod}\, 5$ that the $u$-cluster category of type $D_4$ is triangulated equivalent to $\mbox{\sf stab}\,(D_4,\frac{3u+1}{5},1)$.
Let us now consider type $D_6$. From part (i) of Theorem \ref{thm:n_even} we get for every $u\equiv\,7 \,\operatorname{mod}\, 9$ that the $u$-cluster category of type $D_6$ is triangulated equivalent to the category $\mbox{\sf stab}\,(D_6,\frac{5u+1}{9},1)$.
Moreover, from part (iii) of Theorem \ref{thm:n_even} we also get for every $u\equiv\,1 \,\operatorname{mod}\, 9$ and every $u\equiv\,4 \,\operatorname{mod}\, 9$ that the $u$-cluster category of type $D_6$ is triangulated equivalent to $\mbox{\sf stab}\,(D_{6},\frac{5u+1}{9},1)$. Hence, for all $u\equiv\,1 \,\operatorname{mod}\, 3$, we get the $u$-cluster category of type $D_6$ as the stable module category of a selfinjective algebra.
We remark that in the smallest case $u=1$, this says that the $1$-cluster category of type $D_6$ is triangulated equivalent to the stable module category of the preprojective algebra of type $A_4$. In fact, the algebra $(D_{6},\frac{2}{3},1)$ is just this preprojective algebra. This can be considered as the cluster category version of the statement that the preprojective algebra of type $A_4$ is of cluster type $D_6$ \cite[sec. 19.2]{GLS_semcan1}. For more details on the close connection between preprojective algebras and cluster theory we refer to \cite{GLS_rigid}. \end{Example}
\section{Dynkin type $E$} \label{sec:typeE}
This section proves Theorem E from the introduction.
Asashiba's paper \cite{Asashiba1} shows that if the tree class of the stable AR quiver is Dynkin type $E$, then there are four families of representatives of selfinjective algebras denoted \begin{itemize}
\item $(E_6,s,1)$,
\item $(E_6,s,2)$,
\item $(E_7,s,1)$,
\item $(E_8,s,1)$,
\end{itemize} all with $s\ge 1$. Recall that in type $E$, nonstandard algebras do not occur. It follows from \cite[cor.\ 1.7]{BS} that the stable AR quivers of these algebras are cylinders with the following circumferences. \begin{itemize}
\item For $(E_6,s,1)$ and $(E_6,s,2)$ the circumference is $11s$.
\item For $(E_7,s,1)$ the circumference is $17s$.
\item For $(E_8,s,1)$ the circumference is $29s$.
\end{itemize}
By Subsection \ref{subsec:clusterE}, the AR quiver of the $u$-cluster category of type $E_6$ is a cylinder or a M\"{o}bius band of circumference $6u+1$ (this number is the same whether $u$ is even or odd). So in order for the stable categories $\mbox{\sf stab}\,(E_6,s,1)$ or $\mbox{\sf stab}\,(E_6,s,2)$ to be $u$-cluster categories we need $6u+1 = 11s$. In particular, this implies \[
u\equiv\,-6^{-1}\,\equiv\,-2 \,\operatorname{mod}\, 11. \] Likewise, the AR quiver of the $u$-cluster category of type $E_7$ is a cylinder of circumference $9u+1$. So in order for the stable ca\-te\-go\-ry $\mbox{\sf stab}\,(E_7,s,1)$ to be a $u$-cluster category we need $9u+1 = 17s$. In particular, this implies \[
u\equiv\,-9^{-1}\,\equiv\,-2 \,\operatorname{mod}\, 17. \] Finally, the AR quiver of the $u$-cluster category of type $E_8$ is a cylinder of circumference $15u+1$. So in order for the stable module category $\mbox{\sf stab}\,(E_8,s,1)$ to be a $u$-cluster category we need $15u+1 = 29s$. In particular, this implies \[
u\equiv\,-15^{-1}\,\equiv\,-2 \,\operatorname{mod}\, 29. \]
Indeed, these conditions turn out also to be sufficient. The main result of this section is the following which restates Theorem E from the introduction.
\begin{Theorem} \label{thm:E6} Let $u \geq 1$ be an integer. \begin{enumerate}
\item If $u\equiv\, -2 \,\operatorname{mod}\, 11$ then the $u$-cluster category of
Dynkin type $E_6$ is equivalent as a triangulated category to the
stable module category $\mbox{\sf stab}\, (E_6,\frac{6u+1}{11},1)$ if $u$ is
even, and to the stable module category $\mbox{\sf stab}\,
(E_6,\frac{6u+1}{11},2)$ if $u$ is odd.
\item If $u\equiv\, -2 \,\operatorname{mod}\, 17$ then the $u$-cluster category of
Dynkin type $E_7$ is equivalent as a triangulated category to the
stable module category $\mbox{\sf stab}\, (E_7,\frac{9u+1}{17},1)$.
\item If $u\equiv\, -2 \,\operatorname{mod}\, 29$ then the $u$-cluster category of
Dynkin type $E_8$ is equivalent as a triangulated category to the
stable module category $\mbox{\sf stab}\, (E_8,\frac{15u+1}{29},1)$.
\end{enumerate} \end{Theorem}
\begin{proof} As in types A and D, the proof is divided into three sections verifying the conditions in Keller and Reiten's Morita theorem \cite[thm.\ 4.2]{KellerReiten2}.
{\em Calabi-Yau dimension. } We must show that the relevant stable module categories have Calabi-Yau dimension $u + 1$, and again we do so using the results by Dugas from \cite{Dugas}.
First, consider the algebras $(E_6,\frac{6u+1}{11},1)$; in particular $u$ is assumed to be even. Then \cite[thm.\ 6.1(2)]{Dugas} applies. Note that the invariants occurring there for type $E_6$ are given by: The frequency $f=\frac{6u+1}{11}$, the Coxeter number $h_{E_6}=12$, and $m_{E_6} = h_{E_6}-1=11$. The Calabi-Yau dimension of $\mbox{\sf stab}\, (E_6,\frac{6u+1}{11},1)$ is then of the form $2r+1$ where $$r\equiv -(h_{E_6})^{-1} \,\operatorname{mod}\, fm_{E_6} = -12^{-1} \,\operatorname{mod}\, (6u+1) $$ and $0\le r < 6u+1$. Since $u$ is even by assumption we can consider the integer $r=\frac{u}{2}$; this clearly satisfies $12r = 6u \equiv -1 \,\operatorname{mod}\, (6u+1)$, and $0\le r < 6u+1$. Therefore the Calabi-Yau dimension of $\mbox{\sf stab}\, (E_6,\frac{6u+1}{11},1)$ is $2r+1 = u+1$, as desired.
Secondly, consider the algebras $(E_6,\frac{6u+1}{11},2)$; in particular $u$ is assumed to be odd. Then we can apply \cite[prop.\ 7.4(1)]{Dugas}. The Calabi-Yau dimension of $\mbox{\sf stab}\, (E_6,\frac{6u+1}{11},2)$ is then equal to $2r$ where $r \equiv 5\cdot 12^{-1} \,\operatorname{mod}\, (6u+1)$ and $0< r < 6u+1$. Setting $r=\frac{u+1}{2}$ (recall that $u$ is odd by assumption) we immediately get that $12 r = 6(u+1) \equiv 5 \,\operatorname{mod}\, (6u+1)$ and hence the Calabi-Yau dimension is $2r = u+1$, as desired.
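As a concrete check, for $u = 20$ we have $6u+1 = 121$ and $r = \frac{u}{2} = 10$ satisfies $12\cdot 10 = 120 \equiv -1 \,\operatorname{mod}\, 121$, giving Calabi-Yau dimension $21 = u+1$ for $\mbox{\sf stab}\,(E_6,11,1)$; for $u = 9$ we have $6u+1 = 55$ and $r = \frac{u+1}{2} = 5$ satisfies $12\cdot 5 = 60 \equiv 5 \,\operatorname{mod}\, 55$, giving Calabi-Yau dimension $10 = u+1$ for $\mbox{\sf stab}\,(E_6,5,2)$.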
Thirdly, consider the algebras $(E_7,\frac{9u+1}{17},1)$. The Calabi-Yau dimension can again be determined by \cite[thm.\ 6.1]{Dugas}. The relevant invariants for type $E_7$ are given by: The frequency $f=\frac{9u+1}{17}$, the Coxeter number $h_{E_7}=18$ with its variant $h_{E_7}^* = h_{E_7}/2 = 9$, and $m_{E_7} = h_{E_7}-1=17$. Note that for type $E_7$ both parts of \cite[thm.\ 6.1]{Dugas} can possibly apply; we shall show that in either case we get $u+1$ as Calabi-Yau dimension of the stable module category.
In \cite[thm.\ 6.1(1)]{Dugas} the Calabi-Yau dimension $d$ satisfies $$d\equiv 1 - (h_{E_7}^*)^{-1} \,\operatorname{mod}\, fm_{E_7} = 1 - 9^{-1} \,\operatorname{mod}\, (9u+1) $$ as well as $0 < d \le 9u+1$. Clearly, $d=u+1$ has these properties, and hence the Calabi-Yau dimension is $u+1$, as desired.
In \cite[thm.\ 6.1(2)]{Dugas} the Calabi-Yau dimension $d$ has the form $d=2r+1$ where $r \equiv - 18^{-1} \,\operatorname{mod}\, (9u+1)$ and $0\le r < 9u+1$. Note that when \cite[thm.\ 6.1(2)]{Dugas} applies then $2\nmid f =\frac{9u+1}{17}$ which implies that $u$ is even. Setting $r=\frac{u}{2}$ we immediately see that $18r \equiv 9u \equiv -1 \,\operatorname{mod}\, (9u+1)$, i.e. the Calabi-Yau dimension of the stable category in this case is also $2r+1 = u+1$, as desired.
Finally, consider the algebras $(E_8,\frac{15u+1}{29},1)$. The arguments for determining the Calabi-Yau dimension by \cite[thm.\ 6.1]{Dugas} are very similar to the previous case of $E_7$. The relevant invariants for type $E_8$ are: The frequency $f=\frac{15u+1}{29}$, the Coxeter number $h_{E_8}=30$ with its variant $h_{E_8}^* = h_{E_8}/2 = 15$, and $m_{E_8} = h_{E_8}-1=29$. Again, both parts of \cite[thm.\ 6.1]{Dugas} can apply. In \cite[thm.\ 6.1(1)]{Dugas} the Calabi-Yau dimension $d$ satisfies $d\equiv 1 - 15^{-1} \,\operatorname{mod}\, (15u+1)$ and $0 < d \le 15u+1$. Clearly, $d = u+1$ has these properties, and hence the Calabi-Yau dimension is $u+1$, as desired.
In \cite[thm.\ 6.1(2)]{Dugas} the Calabi-Yau dimension $d$ has the form $d=2r+1$ where $r \equiv - 30^{-1} \,\operatorname{mod}\, (15u+1)$ and $0\le r < 15u+1$. As before, when \cite[thm.\ 6.1(2)]{Dugas} applies then $2\nmid f =\frac{15u+1}{29}$ which implies that $u$ is even. Setting $r=\frac{u}{2}$ we get that $30r \equiv 15u \equiv -1 \,\operatorname{mod}\, (15u+1)$, i.e. the Calabi-Yau dimension of the stable category in this case is also $2r+1 = u+1$, as desired.
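For instance, for $u = 15$ in type $E_7$ we have $9u+1 = 136$; since $9\cdot 121 = 1089 = 8\cdot 136 + 1$ we get $9^{-1} \equiv 121 \,\operatorname{mod}\, 136$, so the congruence in the first case reads $d \equiv 1 - 121 \equiv 16 \,\operatorname{mod}\, 136$ and is satisfied by $d = 16 = u+1$. Similarly, for $u = 27$ in type $E_8$ we have $15u+1 = 406$, and the analogous computation gives $d = 28 = u+1$.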
{\em $u$-cluster tilting object. } To find a $u$-cluster tilting object $X$, as in type $D$, let $X$ be the direct sum of the indecomposable projective modules and the modules $x_1, \ldots, x_{6}, x_{7}, x_{8}$ whose positions in the stable AR quiver of the selfinjective algebra are given by Figure \ref{fig:ZEn2}, with the convention that the summands $x_7$ and $x_8$ only occur in types $E_7$ and $E_8$ as relevant. \begin{figure}
\caption{The indecomposable modules $x_i$ in Dynkin type $E$}
\label{fig:ZEn2}
\end{figure}
As all algebras in this theorem are standard, their stable module categories are e\-qui\-va\-lent to the mesh categories of their AR quivers. In particular, whether for two objects $v,w$ we have $\underline{\operatorname{Hom}}(v,w)\neq 0$ is completely determined by the mesh relations.
For type $E$ and our special choice of object $X$, we get the following description of the indecomposable objects $t$ such that $\underline{\operatorname{Hom}}(X,t)\neq 0$ directly from the mesh relations (we leave the details of the straightforward, though tedious, verification of these facts to the reader).
In type $E_6$, the objects $t$ are precisely the ones lying in a trapezium with $X$ as left side and $\tau\Sigma X$ as right side; i.e. a trapezium with $X$ as left side, with top side containing 4 vertices and bottom side containing 8 vertices (recall from Section \ref{sec:cluster} that $\Sigma$ is acting by shifting 6 units to the right and reflecting in the central line).
In types $E_7$ and $E_8$ the situation is different. The indecomposable objects $t$ such that $\underline{\operatorname{Hom}}(X,t)\neq 0$ are precisely the ones lying in a parallelogram with $X$ as left side, and top and bottom sides containing $9$ (for $E_7)$ and $15$ (for $E_8$) vertices, respectively.
Now we are in a position to show that our chosen object $X$ is indeed a $u$-cluster tilting object. We need to describe the objects $t$ with $\underline{\operatorname{Hom}}(X,\Sigma^it)\neq 0$ for some $i\in\{1,\ldots,u\}$. For this purpose, let us consider the regions $H(j)$ of the stable AR quiver corresponding to indecomposable objects $t$ for which \begin{equation} \label{equ:u-cluster}
\underline{\operatorname{Hom}}(X,\Sigma^{(u+1)-j}t)\neq 0
\end{equation} where $j$ ranges through $\{ 1, \ldots, u \}$.
For type $E_6$ we suppose that $u\equiv -2 \,\operatorname{mod}\, 11$. We have to distinguish the cases where $u$ is even and odd, respectively. According to the above description, the regions $H(j)$ look as follows.
If $u$ is even, then they tile a parallelogram with left side $\Sigma^{-u}X$ and right side $\tau X$ as in Figure \ref{fig:EnHs}. \begin{figure}
\caption{The sets $H(i)$ in Dynkin type $E_6$ for $u$ even}
\label{fig:EnHs}
\end{figure} In particular, the top and bottom sides of this parallelogram contain $\frac{u}{2}\cdot 12 = 6u$ vertices. But by \cite[cor.\ 1.7]{BS} (cf. also the remarks at the beginning of this section), the stable category $\mbox{\sf stab}\,(E_6,\frac{6u+1}{11},1)$ has precisely $66\cdot \frac{6u+1}{11} = 6(6u+1)$ indecomposable objects, i.e. the stable AR quiver (of tree class $E_6$) is identified after $6u+1$ steps. Hence it follows that \begin{center} $\underline{\operatorname{Hom}}(X,\Sigma^it)=0$ for all $i=1,\ldots,u$ if and only if $t \in \operatorname{add} X$. \end{center} A very similar argument shows that also \begin{center} $\underline{\operatorname{Hom}}(t,\Sigma^iX)=0$ for all $i=1,\ldots,u$ if and only if $t \in \operatorname{add} X$. \end{center} Thus we have shown that our chosen object $X$ is indeed a $u$-cluster tilting object in the stable module category $\mbox{\sf stab} (E_6,\frac{6u+1}{11},1)$.
If $u$ is odd, then the regions $H(j)$ for $j$ ranging through $\{ 1, \ldots, u \}$ tile a trapezium with left side $\Sigma^{-u}X$ and right side $\tau X$ as in Figure \ref{fig:EnHs2}. \begin{figure}
\caption{The sets $H(i)$ in Dynkin type $E_6$ for $u$ odd}
\label{fig:EnHs2}
\end{figure} In particular, the top side of this trapezium contains $\frac{u-1}{2}\cdot 12 + 8 = 6u + 2$ vertices, and the bottom side contains $6u - 2$ vertices. In total, this trapezium then contains $36u$ vertices (e.g.\ note that each of the $u$ smaller trapeziums with top and bottom sides of length 4 and 8 contains 36 vertices). But by \cite[cor.\ 1.7]{BS} (cf. also the remarks at the beginning of this section), the stable category $\mbox{\sf stab}\,(E_6,\frac{6u+1}{11},2)$ has precisely $6(6u+1)=36u+6$ indecomposable objects. Thus, the above trapezium fills precisely the region between the parts which become identified in the stable AR quiver. Now we can argue as above to deduce that $X$ is indeed a $u$-cluster tilting object in $\mbox{\sf stab} (E_6,\frac{6u+1}{11},2)$.
For types $E_7$ and $E_8$ we suppose that $u\equiv -2 \,\operatorname{mod}\, 17$ and $u\equiv -2 \,\operatorname{mod}\, 29$, respectively. Similarly to the above considerations in type $E_6$ when $u$ is even, the regions $H(j)$ of indecomposable objects $X$ satisfying equation \eqref{equ:u-cluster} tile a parallelogram with top and bottom rows containing $9u$ (for $E_7)$ and $15u$ (for $E_8$) vertices. A sketch would resemble Figure \ref{fig:DnHs}. On the other hand, again by \cite[cor.\ 1.7]{BS}, the number of indecomposable objects for the stable categories $\mbox{\sf stab}(E_7,\frac{9u+1}{17},1)$ and $\mbox{\sf stab}(E_8,\frac{15u+1}{29},1)$ occurring in the theorem are $119\cdot \frac{9u+1}{17} = 7(9u+1)$ and $232\cdot \frac{15u+1}{29} = 8(15u+1)$, respectively. Hence, the AR quivers of these stable categories are identified after $9u+1$ (for $E_7$) and $15u+1$ units (for $E_8$). From the sizes of the parallelograms given above it then follows (just as in the previous cases) that $X$ is a $u$-cluster tilting object of the relevant stable categories in types $E_7$ and $E_8$.
The desired fact that the stable endomorphism algebra $\underline{\operatorname{End}}(X)$ is $kE_n$ is proved verbatim as in type $D$ above.
{\em Vanishing of negative self-extensions. } We must show $\underline{\operatorname{Hom}}(X,\Sigma^{-i} X) = 0$ for $i=1, \ldots, u-1$. We have described above the regions in the stable AR quiver where the modules $t$ are located for which $\underline{\operatorname{Hom}}(X,t)\neq 0$; let us again denote them by $H(X)$. These regions $H(X)$ are certain trapeziums (for $E_6$) or parallelograms (for $E_7$ and $E_8$).
For type $E_6$ we get a situation for which a sketch would resemble the one above. The situation for $E_7$ and $E_8$ is completely analogous, but using parallelograms instead of trapeziums.
As in type $D$, the only way we could fail to get $\underline{\operatorname{Hom}}(X,\Sigma^{-i} X) = 0$ would be if we took $i$ so large that $\Sigma^i X$ made it all the way around the stable AR quiver and reached the forbidden region from the right.
However, this does not happen: In fact, left of $\Sigma^{-(u-1)}X$ we have the parallelogram (resp. trapezium) $\Sigma^{-u}H(X)$ before objects get identified in the stable AR quiver. Hence $\underline{\operatorname{Hom}}(X,\Sigma^{-i} X) = 0$ for $i = 1, \ldots, u - 1$ as desired. Note that it is crucial that the maximum value for $i$ here is $u-1$; of course, we have that $\underline{\operatorname{Hom}}(X,\Sigma^{-u} X) \neq 0$. \end{proof}
\end{document}
DNA DYNAMICS AND CHROMOSOME STRUCTURE
Global Nature of Dynamic Protein-Chromatin Interactions In Vivo: Three-Dimensional Genome Scanning and Dynamic Interaction Networks of Chromatin Proteins
Robert D. Phair, Paola Scaffidi, Cem Elbi, Jaromíra Vecerová, Anup Dey, Keiko Ozato, David T. Brown, Gordon Hager, Michael Bustin, Tom Misteli
Affiliations: Robert D. Phair, BioInformatics Services, Rockville, Maryland 20854; Jaromíra Vecerová, Institute of Experimental Medicine, Academy of Sciences of the Czech Republic and 1st Faculty of Medicine, Prague, Czech Republic; Anup Dey, National Institute of Child Health and Human Development, Bethesda, Maryland 20892; David T. Brown, University of Mississippi Medical Center, Jackson, Mississippi 39216
For correspondence: [email protected]
DOI: 10.1128/MCB.24.14.6393-6402.2004
Genome structure and gene expression depend on a multitude of chromatin-binding proteins. The binding properties of these proteins to native chromatin in intact cells are largely unknown. Here, we describe an approach based on combined in vivo photobleaching microscopy and kinetic modeling to analyze globally the dynamics of binding of chromatin-associated proteins in living cells. We have quantitatively determined basic biophysical properties, such as off rate constants, residence time, and bound fraction, of a wide range of chromatin proteins of diverse functions in vivo. We demonstrate that most chromatin proteins have a high turnover on chromatin with a residence time on the order of seconds, that the major fraction of each protein is bound to chromatin at steady state, and that transient binding is a common property of chromatin-associated proteins. Our results indicate that chromatin-binding proteins find their binding sites by three-dimensional scanning of the genome space and our data are consistent with a model in which chromatin-associated proteins form dynamic interaction networks in vivo. We suggest that these properties are crucial for generating high plasticity in genome expression.
Organization of DNA into higher-order chromatin structure serves to accommodate the genome within the spatial confines of the cell nucleus and acts as an important regulatory mechanism (22, 36, 46, 60). Establishment, maintenance, and alterations of global and local chromatin states are modulated by the combined action of a multitude of chromatin-binding proteins. The nucleosome, containing histone proteins, acts as a structural scaffold and as an entry point for regulatory mechanisms (60, 63). Nonhistone proteins, including the HMG proteins, further contribute to the structural maintenance and regulation of chromatin regions (6, 61). In heterochromatin, specific factors such as HP1 convey a transcriptionally repressed state, possibly by influencing higher-order chromatin structure (19, 27). Histone-modifying enzymes such as histone acetyl- and methyltransferases are instrumental in generating epigenetic marks on chromatin domains (60). Chromatin remodeling factors act on specific sites to facilitate access to regulatory DNA elements. Once accessible, transcriptional activators bind specific sequences on DNA and recruit the basal transcription machinery (37, 44, 46). All of these steps involve binding of proteins to chromatin.
Due to their functional significance, chromatin-associated proteins have been extensively characterized—mostly by biochemical extraction and in vitro binding assays. Little is known about the dynamics of how chromatin proteins bind to their target sites in native chromatin in living cells. In vivo microscopy techniques are providing novel tools to study chromatin proteins in living cells (32, 39, 41, 50). Qualitative analysis of photobleaching experiments has revealed a wide range of dynamic behavior for chromatin-associated proteins. The bulk of core histones is immobile on DNA, whereas the linker histone H1 and the TATA-binding protein are relatively stably, yet dynamically, associated (10, 16, 34, 38, 43). Replication and repair factors, on the other hand, are highly mobile in their unengaged state but become temporarily immobilized upon binding to their sites of action (31, 57). Several transcriptional activators and repressors have high mobility, implying transient binding to chromatin (3, 17, 30, 31, 40, 47, 49, 59). However, the majority of these studies are qualitative, and in only a few cases have quantitative parameters such as binding rates or residence times been extracted from in vivo microscopy data. Furthermore, it is not clear from the limited number of analyzed proteins how generally applicable the concept of transient binding of chromatin-associated proteins is.
In this report, we describe a computational microscopy approach to quantitatively determine in native chromatin of intact cells the binding properties of chromatin-associated proteins. By analysis of a wide range of chromatin proteins, including structural proteins, remodeling factors, and transcriptional coactivators, as well as basal and specific transcription factors, we demonstrate that transient binding is a general property of chromatin-associated proteins. Our results suggest that chromatin-binding proteins find their binding sites largely by three-dimensional scanning of the genome space, and we speculate that dynamic interaction networks play a critical role in the control of gene expression.
Cell culture and growth rate. BHK, HeLa, NIH 3T3, and CHO cells were grown in Dulbecco's modified Eagle's medium supplemented with 10% fetal bovine serum, 100-U/ml penicillin, 100-μg/ml streptomycin, and 2 mM l-glutamine at 37°C in an atmosphere enriched with 5% CO2. Hepa-1 cells were grown in alpha minimum essential medium (Gibco BRL) supplemented with 7% fetal bovine serum (HyClone Labs) to prevent nuclear translocation of AhR. Cells were routinely maintained in a 37°C incubator with 5% CO2. Cells were treated with a ligand, 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD), at 10 nM for 1 h in all experiments. Cells were transfected by electroporation with a BTX square wave pulse at 650 V, 99 μs. Transiently transfected cells were observed 14 h after transfection. Stable cell lines expressing H10-green fluorescent protein (GFP) were generated and grown as previously described (28).
Fusion proteins. Details of the fusion proteins used in this study are found in the references cited in Table 1. Myc, Mad, Max, BRG, PCAF, and NF1 were in the pEGFP-C1 vector.
TABLE 1. Characterization of fusion proteins
Imaging. For microscopy, transfected cells were plated and observed in LabTek II chambers (Nalgene). Live-cell microscopy was performed on a Zeiss 510 confocal microscope using the 488-nm laser line of an Ar laser (nominal output, 40 mW; beam width at specimen, 0.2 μm). All experiments were done at 37°C, and imaging was done with a ×100 objective, NA 1.3. Scanning was bidirectional at the highest possible rate using a ×5 zoom, with a pinhole of 1 Airy unit. Laser power for bleaching was maximal. For imaging, the laser power was attenuated to 0.1% of the bleach intensity. For fluorescence recovery after photobleaching (FRAP) experiments, five single-prebleach images were acquired, followed by two iterative bleach pulses of 223 ms each. Single-section images were then collected at 534-ms intervals for 60 s. Recovery of signal in the bleached region and loss of signal in the unbleached region were measured as average intensity signals in a region comprising at least 50% of the bleached or unbleached area. The size of this measurement region was identical in all experiments. Signal loss during the recovery period was less than 5% of the initial fluorescence signal. Completeness of bleaching in three dimensions (3D) was confirmed by inspection of image stacks of fixed cells. All recovery and loss curves were generated from background-subtracted images. The fluorescence signal measured in a region of interest was singly normalized to the prebleach signal in the region of interest:
$$R = (I_t - I_{\mathrm{bg}})/(I_o - I_{\mathrm{bg}}) \qquad (1)$$
where $I_o$ is the average intensity in the region of interest during prebleach, $I_t$ is the average intensity in the region of interest at time point t, and $I_{\mathrm{bg}}$ is the background signal determined in a region outside of the cell nucleus.
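A minimal Python sketch of the normalization in equation (1) is given below; it is not part of the original analysis pipeline, and the array names and numbers are hypothetical placeholders for intensity traces extracted from background-subtracted images.

```python
import numpy as np

def normalize_frap(intensity, prebleach_frames, background):
    """Singly normalize a FRAP intensity trace as in equation (1).

    intensity        -- 1D array of average ROI intensities over time
    prebleach_frames -- number of frames acquired before the bleach pulse
    background       -- average intensity measured in a region outside the nucleus
    """
    I_o = intensity[:prebleach_frames].mean()          # mean prebleach signal
    return (intensity - background) / (I_o - background)

# Hypothetical trace: five prebleach frames followed by post-bleach recovery frames
trace = np.array([100, 101, 99, 100, 100, 40, 55, 65, 72, 78], dtype=float)
print(normalize_frap(trace, prebleach_frames=5, background=10.0))
```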
Kinetic modeling. Quantitative analysis of half-FRAP data was carried out as described in detail previously by a classical compartmental approach (48). Each photobleaching data set was fit to a sum of exponentials, showing that systematic deviations could not be eliminated with fewer binding sites and that addition of an additional site resulted in an unidentifiable system whose parameter values had unacceptable coefficients of variation. Most proteins analyzed showed biphasic behavior, and the kinetic modeling procedure is described for a model with two distinct types of binding events.
Using standard chemical kinetic principles, a system of ordinary differential equations with constant coefficients (rate constants) characterizing the processes of binding, unbinding, and (when active) photobleaching was generated (52). Since the absolute abundances of each protein and their binding sites are often unknown, second order association processes were converted to pseudo-first-order processes by combining the unknown steady-state binding site abundance with the corresponding second order rate constant to yield a first order association rate constant. This has the additional advantage that the nonlinear system of differential equations is converted to a linear system. The resulting equations are
$$\frac{d\,\mathrm{CP_{DNA1unbleached}}}{dt} = f_{\mathrm{unbleached}}\, k_{\mathrm{on1}}\, \mathrm{CP_{nucleoplasm}} - k_{\mathrm{off1}}\, \mathrm{CP_{DNA1unbleached}} \qquad (2)$$
$$\frac{d\,\mathrm{CP_{DNA1bleached}}}{dt} = f_{\mathrm{bleached}}\, k_{\mathrm{on1}}\, \mathrm{CP_{nucleoplasm}} - k_{\mathrm{off1}}\, \mathrm{CP_{DNA1bleached}} - k_{\mathrm{bleach}}\, \mathrm{CP_{DNA1bleached}} \qquad (3)$$
$$\frac{d\,\mathrm{CP_{DNA2unbleached}}}{dt} = f_{\mathrm{unbleached}}\, k_{\mathrm{on2}}\, \mathrm{CP_{nucleoplasm}} - k_{\mathrm{off2}}\, \mathrm{CP_{DNA2unbleached}} \qquad (4)$$
$$\frac{d\,\mathrm{CP_{DNA2bleached}}}{dt} = f_{\mathrm{bleached}}\, k_{\mathrm{on2}}\, \mathrm{CP_{nucleoplasm}} - k_{\mathrm{off2}}\, \mathrm{CP_{DNA2bleached}} - k_{\mathrm{bleach}}\, \mathrm{CP_{DNA2bleached}} \qquad (5)$$
$$\frac{d\,\mathrm{CP_{nucleoplasm}}}{dt} = k_{\mathrm{off1}}\,(\mathrm{CP_{DNA1bleached}} + \mathrm{CP_{DNA1unbleached}}) + k_{\mathrm{off2}}\,(\mathrm{CP_{DNA2bleached}} + \mathrm{CP_{DNA2unbleached}}) - k_{\mathrm{on1}}\, \mathrm{CP_{nucleoplasm}} - k_{\mathrm{on2}}\, \mathrm{CP_{nucleoplasm}} - f_{\mathrm{bleached}}\, k_{\mathrm{bleach}}\, \mathrm{CP_{nucleoplasm}} \qquad (6)$$
$$\frac{d\,\mathrm{CP_{bleached}}}{dt} = k_{\mathrm{bleach}}\,(\mathrm{CP_{DNA1bleached}} + \mathrm{CP_{DNA2bleached}} + f_{\mathrm{bleached}}\, \mathrm{CP_{nucleoplasm}}) \qquad (7)$$
In these equations, CP represents chromatin-associated protein; subscripts DNA1 and DNA2 indicate the two classes of binding sites; subscripts bleached and unbleached indicate the regions exposed to and not exposed to the bleaching laser pulse; fbleached and funbleached are the corresponding fractions of nuclear area; subscript nucleoplasm indicates the free pool of chromatin-associated protein in the nucleoplasm; kon1 and kon2 are the effective first order rate constants for chromatin-associated protein binding to sites 1 and 2, respectively; koff1 and koff2 are the rate constants for unbinding of chromatin-associated protein from sites 1 and 2, respectively; and kbleach is the rate constant characterizing the photobleaching process.
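The sketch below shows one way these equations could be integrated numerically in Python (using scipy); it is only an illustration of the model structure, not the SAAM II implementation used in the paper. All rate constants, the pulse length, and the area fractions are hypothetical, and the pre-bleach steady state is computed analytically here rather than via the synthesis/degradation initialization described below.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameter values (per second); the fitted values differ for each protein.
kon1, koff1 = 2.0, 0.25      # fast class of binding sites
kon2, koff2 = 0.5, 0.05      # slow class of binding sites
kbleach     = 20.0           # effective photobleaching rate, nonzero only during the pulse
f_b, f_u    = 0.5, 0.5       # fractions of the imaged nuclear area bleached / unbleached
t_pulse     = 0.446          # bleach-pulse duration in seconds

def rhs(t, y):
    """Right-hand side of equations (2) to (7); y holds fluorescent (unbleached) amounts."""
    D1u, D1b, D2u, D2b, N, B = y
    kb = kbleach if t < t_pulse else 0.0            # bleaching acts only during the pulse
    dD1u = f_u * kon1 * N - koff1 * D1u
    dD1b = f_b * kon1 * N - koff1 * D1b - kb * D1b
    dD2u = f_u * kon2 * N - koff2 * D2u
    dD2b = f_b * kon2 * N - koff2 * D2b - kb * D2b
    dN   = (koff1 * (D1u + D1b) + koff2 * (D2u + D2b)
            - (kon1 + kon2) * N - f_b * kb * N)
    dB   = kb * (D1b + D2b + f_b * N)               # cumulative bleached material
    return [dD1u, dD1b, dD2u, dD2b, dN, dB]

# Pre-bleach steady state (arbitrary scale): free pool set to 1, bound pools apportioned
# by kon/koff and by the bleached/unbleached area fractions.
N0 = 1.0
y0 = [f_u * kon1 / koff1 * N0, f_b * kon1 / koff1 * N0,
      f_u * kon2 / koff2 * N0, f_b * kon2 / koff2 * N0, N0, 0.0]

t_eval = np.arange(0.0, 60.0, 0.534)                # match the 534-ms imaging interval
sol = solve_ivp(rhs, (0.0, 60.0), y0, t_eval=t_eval, max_step=0.05)
D1u, D1b, D2u, D2b, N, B = sol.y

recovery = (D1b + D2b + f_b * N) / (y0[1] + y0[3] + f_b * N0)   # equation (8)
loss     = (D1u + D2u + f_u * N) / (y0[0] + y0[2] + f_u * N0)
print(recovery[-1], loss[-1])                       # both approach the same plateau
```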
Rate constants for corresponding processes in the bleached and unbleached regions were constrained to be equal, and the proportion of the binding sites in the bleached versus unbleached parts of the nucleus was assumed to be the same as the proportion of the observed nuclear plane of focus occupied by the bleached and unbleached regions of interest. We further assumed that analysis of this single confocal slice was unaffected by the loss or gain of tagged molecules to or from nuclear regions above and below the slice.
Because it is difficult to resolve kon from half-FRAP data alone, we enforced an additional constraint. Based on our knowledge that diffusion is fast on this time scale, we assumed that in the unbleached region all of the free pool of protein and none of the bound pool was bleached by the time the first post-bleach image was collected. This allowed estimation of the fraction of the protein bound and, in turn, enforces a constraint on the values of kon1 and kon2 relative to koff1 and koff2.
Initial conditions were obtained by solving the model for an arbitrary synthetic rate of 10 molecules per s and a very slow degradation rate constant of 0.0005 s−1. As only normalized fluorescence data are analyzed, the absolute values of these steady-state abundances have no impact on the model solutions or on the dissociation rate constants and mean residence times reported here, but they do allow initial fluorescence to be correctly apportioned between fast and slow binding sites.
Data collected from the unbleached portion of the nucleus were fitted to the sum of the fast and slow compartments in the unbleached portion of the model. Data for the bleached portion of the nucleus were fitted to the sum of the fast and slow compartments in the bleached section of the model. For example, normalized recovery kinetics were fitted to
$$F(t) = \frac{\mathrm{CP_{DNA1bleached}} + \mathrm{CP_{DNA2bleached}} + f_{\mathrm{bleached}}\,\mathrm{CP_{nucleoplasm}}}{\mathrm{CP_{DNA1bleached}^{ss}} + \mathrm{CP_{DNA2bleached}^{ss}} + f_{\mathrm{bleached}}\,\mathrm{CP_{nucleoplasm}^{ss}}} \qquad (8)$$
where the superscript ss indicates the steady-state value. All CP values refer to relative amounts.
The key parameters of interest are the mean residence times, which were calculated from the corresponding dissociation rate constants:
$$T_{\mathrm{res1}} = 1/k_{\mathrm{off1}} \qquad (9)$$
$$T_{\mathrm{res2}} = 1/k_{\mathrm{off2}} \qquad (10)$$
Fractions of total binding associated with fast (1) or slow (2) binding sites are
$$f_{\mathrm{bound1}} = \frac{\mathrm{CP_{DNA1unbleached}^{ss}} + \mathrm{CP_{DNA1bleached}^{ss}}}{\mathrm{CP_{DNA1unbleached}^{ss}} + \mathrm{CP_{DNA1bleached}^{ss}} + \mathrm{CP_{DNA2unbleached}^{ss}} + \mathrm{CP_{DNA2bleached}^{ss}}} \qquad (11)$$
$$f_{\mathrm{bound2}} = \frac{\mathrm{CP_{DNA2unbleached}^{ss}} + \mathrm{CP_{DNA2bleached}^{ss}}}{\mathrm{CP_{DNA1unbleached}^{ss}} + \mathrm{CP_{DNA1bleached}^{ss}} + \mathrm{CP_{DNA2unbleached}^{ss}} + \mathrm{CP_{DNA2bleached}^{ss}}} \qquad (12)$$
The kinetic model was implemented by using SAAM II software (SAAM Institute, Inc., Seattle, Wash.). Generalized nonlinear least-squares fitting and parameter optimization were performed in SAAM II, and the coefficients of variation were obtained directly from the SAAM II Statistics window (4, 14). When fbleached, koff1, kbleach, kon2, and koff2 were allowed to adjust simultaneously, convergence was achieved in less than 20 iterations. Coefficients of variation for the reported parameter estimates were, respectively, on the order of 0.3, 7, 0.6, 30, and 9%.
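For illustration, equations (9) to (12) amount to the following small helper (a sketch with hypothetical fitted values, not SAAM II output):

```python
def residence_and_fractions(koff1, koff2, ss):
    """Mean residence times (equations 9, 10) and bound-fraction split (equations 11, 12).

    ss -- steady-state bound abundances keyed by
          'DNA1unbleached', 'DNA1bleached', 'DNA2unbleached', 'DNA2bleached'
    """
    T_res1, T_res2 = 1.0 / koff1, 1.0 / koff2
    site1 = ss['DNA1unbleached'] + ss['DNA1bleached']
    site2 = ss['DNA2unbleached'] + ss['DNA2bleached']
    total = site1 + site2
    return T_res1, T_res2, site1 / total, site2 / total

# Hypothetical fitted values
print(residence_and_fractions(
    koff1=0.25, koff2=0.05,
    ss={'DNA1unbleached': 4.0, 'DNA1bleached': 4.0,
        'DNA2unbleached': 5.0, 'DNA2bleached': 5.0}))
```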
Measurement of protein-chromatin interaction dynamics in living cells. FRAP can be used to quantitatively measure binding kinetics of proteins to unperturbed chromatin in living cells. The rationale for this approach is based on the property that FRAP recovery kinetics reflect the overall mobility of a protein (39, 50). For proteins that do not interact with any cellular structures, FRAP kinetics are a direct reflection of their translational motion properties (41, 47, 62). In contrast, for proteins that bind to relatively immobile structures such as chromatin, binding events slow down the protein's overall mobility (41, 50, 62). Since diffusion over the micrometer range as measured in FRAP experiments occurs within 10 to 100 ms, even transient interactions of a protein with chromatin are rate limiting and severely slow down the overall recovery rate of a protein in a FRAP assay. Experimental FRAP recovery kinetics of chromatin-associated proteins are thus determined primarily by the protein's binding properties and allow extraction of information about protein-chromatin interactions in vivo (50).
The FRAP approach was experimentally validated by direct comparison of well-characterized corresponding pairs of wild-type and binding-impaired mutants of chromatin-associated proteins (Fig. 1A and B). To this end, half of the nucleus of a cell expressing a functional GFP-tagged protein of interest was bleached with a 400-ms bleach pulse. The recovery of the fluorescence signal in the bleached region and the loss of the signal in the unbleached regions were monitored simultaneously by time-lapse microscopy to provide complementary data sets (Fig. 1A and B). As previously reported, a stably expressed functional fusion protein between GFP and the linker histone H10 showed relatively slow fluorescence recovery and loss (Fig. 1A and B). The t50, defined as the time required to reach half-maximal recovery, for H10-GFP was 127 s, and complete recovery and loss were reached within 420 s (38, 43). In contrast, H10-Δ7-GFP, which contains seven point mutations within the globular domain and only binds weakly to cruciform DNA in vitro (26), showed dramatically faster recovery and faster loss than the corresponding wild-type protein (Fig. 1A and B). t50 was 7.3 s, and full recovery was reached within 57 s (Fig. 1B). Note that the experimental data are only normalized to the prebleach signal, and since about 50% of the initial signal is bleached, complete recovery is indicated by recovery of the signal to ∼50% of the prebleach value (Fig. 1B).
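Since t50 values are quoted repeatedly below, here is a minimal sketch of how such a value could be read off a normalized, averaged recovery curve; the curve shown is made up, and the original values were presumably derived from the experimental traces themselves.

```python
import numpy as np

def t50(time, recovery):
    """Time to half-maximal recovery of a normalized FRAP curve.

    time, recovery -- 1D arrays starting at the first post-bleach frame;
    assumes the recovery is monotonic and has reached its plateau by the last frame.
    """
    half = recovery[0] + 0.5 * (recovery[-1] - recovery[0])
    return float(np.interp(half, recovery, time))

# Hypothetical averaged recovery curve
t = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0])
r = np.array([0.05, 0.12, 0.20, 0.30, 0.38, 0.44, 0.48, 0.50])
print(t50(t, r))   # about 3.5 s for this made-up curve
```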
FRAP for the analysis of chromatin-binding proteins. (A) FRAP of pairs of wild-type and DNA binding-deficient chromatin proteins H10 and HMGN1. The H10 mutant contains seven point mutations within the globular domain previously demonstrated to reduce chromatin binding. The HMGN1 mutant contains mutations S20E and S24E, which abolish nucleosome binding. Fusion proteins were imaged before and during recovery after bleaching of about half the nuclear area. Images were taken at the indicated times after end of the bleach pulse. As shown in panel B, the average fluorescence intensities in the bleached (open symbols) and unbleached (solid symbols) regions were measured over time. Interference with DNA binding results in significantly faster recovery of the mutants. Bar, 5 μm. (B) Quantitative analysis of FRAP recovery of wild type and mutant H10, HMGN1 or GFP-NLS as a nonbinding control. Convergence of recovery and loss curves indicates the absence of an immobile fraction. Values are averages ± standard deviation from at least 10 cells from three experiments.
The sensitivity of FRAP to chromatin binding events was further confirmed by analysis of HMGN1-GFP and its double point mutant, HMGN1-EE-GFP, containing mutations S20E and S24E in the nucleosome binding domain in transiently transfected HeLa cells (51). The functionality of HMGN1-GFP and the loss of binding of HMGN1-EE-GFP have previously been demonstrated in vitro and in vivo (51). The recovery kinetics for the wild type, HMGN1-GFP-wt, were significantly slower than that of the mutant (Fig. 1A and B). t50 for the wild type was 17.2 s, but t50 for the nonbinding mutant was only 4.3 s, and total recovery was reached after 52 s for the wild type but within 14.8 s for the mutant (Fig. 1B). Similar results were obtained for wild-type and mutant pairs of HP1, UBF, Brd4, and FBP (data not shown) (13). As expected, recovery of GFP-NLS, which does not bind chromatin specifically, was very rapid and was completed within ∼ 2 s after bleaching (Fig. 1B). No recovery or loss was observed in chemically fixed cells. Differences in recovery kinetics between wild-type and mutant pairs cannot be explained by differences in molecular weight, since all mutants contain substitutions rather than deletions. The observed differences between wild-type and mutant proteins also exclude the possibility that the recovery kinetics of the wild type reflect nonspecific binding due to saturation of binding sites caused by overexpression. Taken together, these observations demonstrate that photobleaching is a sensitive tool to probe the binding interactions of proteins with native chromatin in intact cells.
Transient binding is a general feature of chromatin-associated proteins. To gain an impression of the interaction dynamics of a wide range of proteins with chromatin, we examined factors involved in various aspects of chromatin function, including structure, remodeling, histone modification, and transcriptional activation (Fig. 2 and 3). The functionality of most GFP fusions has been previously tested by in vivo and/or in vitro assays (Table 1). Most tested fusion proteins localized homogeneously throughout the nucleus, were excluded from the nucleolus, and did not accumulate to any significant extent in nuclear foci (Fig. 2A). Cells expressing low levels of the fusion proteins were used in all experiments, although no significant differences in recovery rates were observed in cells expressing higher levels.
Localization and FRAP recovery and loss curves for chromatin-associated proteins. (A) Most fusion proteins were homogeneously distributed within the nucleus and excluded from the nucleolus. (B) The indicated proteins were imaged before and during recovery after bleaching of about half the nuclear area. Average fluorescence intensities in the bleached (open squares) and the unbleached region (solid squares) were measured over time. The fluorescence signal equilibrated in most cases within about 30 to 60 s, suggesting rapid exchange of proteins on chromatin. Convergence of the two curves indicates the absence of any substantial immobile, stably bound fraction. Values are averages ± standard deviation from at least 10 cells from three experiments.
Kinetic characteristics of FRAP curves for chromatin proteins. The time required to reach half-maximal recovery (t50) (A) and the time required to reach a stable plateau for recovery and loss (B) were determined for all chromatin proteins. For most chromatin proteins, t50 was between 3 and 8 s and recovery was complete within 30 to 45 s. Exceptions are the core and linker histones. Values are averages ± standard deviation from at least 10 cells from three experiments.
Qualitative analysis of FRAP experiments revealed that the vast majority of fusion proteins exhibited rapid recovery kinetics (Fig. 2B). With the exception of the positive control H10 and the FUSE-binding protein FBP, half-times of recovery and loss were typically within 3 to 8 s after the bleach pulse and recovery was completed within about 30 to 45 s for most proteins (Fig. 3). Inspection of corresponding pairs of recovery and loss curves showed that the two measurements for all proteins converged and reached a common plateau (Fig. 2B). Convergence represents complete equilibration of the fluorescence signal between the unbleached and the bleached region. Such convergence can only be achieved when the entire protein population is dynamically exchanged and no immobile, statically bound population is present. Comparable results were obtained when the same protein was analyzed in different cell types (Table 1).
We conclude from these observations that a wide variety of chromatin-associated proteins, irrespective of function or structural features, bind only transiently to chromatin in the nucleus of living cells and that virtually the entire population is dynamically exchanged. Transient binding of proteins to chromatin thus appears to be a common feature of many chromatin-associated proteins.
Kinetic modeling of protein-chromatin interactions in vivo. To extract specific quantitative information about binding rates and residence times, we applied computational kinetic modeling methods (48, 50). We generated a mathematical model based on standard principles of chemical kinetics to describe the kinetic behavior of chromatin-binding proteins in the cell nucleus (Fig. 4A). We assumed that a chromatin-associated protein moves randomly through the nucleoplasm and binds at random intervals to chromatin. On average, the protein resides on chromatin for a certain period of time, which we refer to as mean residence time, before it dissociates and moves to another binding site. The overall movement of a chromatin-associated protein through nuclear space is thus determined by two factors: translational mobility between binding sites and the binding reactions themselves. Because the translational, diffusional mobility is much faster than the rate-limiting binding events, the movement of a protein through nuclear space can be described in a first approximation as a simple association/dissociation reaction (see Materials and Methods). Since the absolute abundances of each protein and their binding sites are unknown, second order association processes were converted to pseudo-first-order processes by combining the unknown steady-state binding site abundance with the corresponding second order rate constant to yield a first order association rate constant. This has the additional advantage that the nonlinear system of differential equations is converted to a linear system (see Materials and Methods for details).
Kinetic modeling of chromatin binding. (A) Compartmental model for analysis of half-FRAP photobleaching data. The model consists of an unbound nucleoplasmic pool of protein that exchanges with two classes of nuclear binding sites, a rapidly exchanging pool and a slowly exchanging pool. These pools exist in both the unbleached and bleached regions of the nucleus and are replicated in order to accurately simulate the experimental bleaching protocols. Arrows represent processes of binding and unbinding; they are labeled with the corresponding rate constants. Protein synthesis and degradation were assumed to be negligible on the time scale of these photobleaching experiments. Diffusion events are neglected since they are much faster than the binding events and are not rate limiting. (B) Least-square best fit of experimental recovery and loss data for Mad and FBP. For Mad, a two-site binding model gives a significantly better fit than a one-site binding model. For FBP, a one-site binding model is sufficient. Symbols represent experimental data, and lines represent best fits.
The kinetic model contains as parameters the on and off rate constants and the abundances of bound and unbound molecules (Fig. 4A) (Materials and Methods). The size of the total bound fraction was experimentally determined by taking advantage of the fact that the diffusion time of a protein is much shorter than the bleach time used (Table 2). For each protein, the fluorescence intensity in the unbleached region before bleaching was compared to the intensity immediately after the bleach pulse. This difference is an indication of the bound fraction, since a bleach pulse of 446 ms completely abolishes the fluorescence signal from freely diffusing, unbound molecules in the unbleached region (data not shown). For all chromatin-associated proteins, the bound fraction was around 90% or higher (Table 2). This observation suggests that the vast majority of molecules are at any given time bound to chromatin. The size of the bound fraction was used as a constraint for the fitting of the model to the experimental data.
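The bound-fraction estimate described here reduces to a simple intensity ratio; a sketch follows, with hypothetical numbers, under the stated assumption that the bleach pulse eliminates the fluorescence of the free pool while leaving the bound pool in the unbleached region intact.

```python
import numpy as np

def bound_fraction(unbleached_trace, prebleach_frames):
    """Estimate the chromatin-bound fraction from the unbleached-region signal.

    Assumes the bleach pulse abolishes fluorescence of the freely diffusing pool
    while leaving bound molecules in the unbleached region intact, so the
    post/pre intensity ratio approximates the bound fraction.
    """
    pre = np.mean(unbleached_trace[:prebleach_frames])
    post = unbleached_trace[prebleach_frames]          # first post-bleach frame
    return post / pre

trace = np.array([100, 99, 101, 100, 100, 92, 90, 88], dtype=float)   # hypothetical
print(bound_fraction(trace, prebleach_frames=5))   # ~0.92
```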
TABLE 2. Kinetic properties of chromatin proteins
In preliminary modeling analyses, we found that the recovery kinetics for most proteins could be most accurately fit by two exponentials (data not shown) (Tables 3 and 4), indicating that most proteins were present in the nucleus in at least two distinguishable populations with distinct binding kinetics. The complete kinetic model for a protein with two kinetic populations is shown in Fig. 4A, and the set of differential equations describing the model is shown in Materials and Methods. For simplicity, we refer to the two populations as "fast" and "slow."
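A preliminary one- versus two-exponential comparison of this kind could be run as follows (a sketch on synthetic data using scipy, not the fitting procedure or software used in the paper):

```python
import numpy as np
from scipy.optimize import curve_fit

def one_exp(t, A, k, c):
    """Single-exponential recovery: plateau c approached with one rate constant."""
    return c - A * np.exp(-k * t)

def two_exp(t, A1, k1, A2, k2, c):
    """Biexponential recovery: two kinetically distinct bound populations."""
    return c - A1 * np.exp(-k1 * t) - A2 * np.exp(-k2 * t)

# Hypothetical normalized recovery data with two underlying rate constants
t = np.linspace(0.0, 60.0, 113)
rng = np.random.default_rng(0)
data = 0.5 - 0.25 * np.exp(-0.3 * t) - 0.2 * np.exp(-0.04 * t) + rng.normal(0, 0.005, t.size)

p1, _ = curve_fit(one_exp, t, data, p0=[0.4, 0.1, 0.5])
p2, _ = curve_fit(two_exp, t, data, p0=[0.2, 0.3, 0.2, 0.03, 0.5])

rss1 = float(np.sum((data - one_exp(t, *p1)) ** 2))
rss2 = float(np.sum((data - two_exp(t, *p2)) ** 2))
print(rss1, rss2)    # the single exponential leaves systematic residuals (larger RSS)
```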
TABLE 3. Statistical significance of parameter fits
TABLE 4. Evaluation of a one-site versus a two-site model
Quantitative binding properties of chromatin-associated proteins in vivo. Using this kinetic model, we simulated in silico the FRAP experiments to obtain best fits between the experimental data and the simulation (Fig. 4B). Best-fit parameters for each protein's off rates, mean residence times, and the sizes of the fast and slow fractions were obtained by using a generalized least-squares method (Fig. 4B and Table 2). Error margins for all best-fit parameters were typically around 10% of the measured values, and coefficients of variation are given in Table 3. The obtained values are based on the assumption that the total of available specific and nonspecific binding sites is in excess of the number of molecules of the observed protein. To ensure that a kinetic model containing two distinct types of binding sites was more accurate than a simple single-site model, we determined best parameter fits for all proteins for a one-site or two-site model and assessed the goodness of fit according to the Akaike information criterion (Table 4). With the exception of H2B, HMGB1, and FBP, all proteins fit more accurately to a two-site model (Fig. 4B and Table 4).
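The model comparison by the Akaike information criterion can be illustrated with the least-squares form of the AIC; the residual sums of squares below are hypothetical and simply stand in for the one-site and two-site fits.

```python
import numpy as np

def aic_least_squares(rss, n_points, n_params):
    """AIC for a least-squares fit with Gaussian errors: n*ln(RSS/n) + 2k."""
    return n_points * np.log(rss / n_points) + 2 * n_params

# Hypothetical residual sums of squares from the one- and two-site (exponential) fits
n = 113
aic_one = aic_least_squares(rss=0.030, n_points=n, n_params=3)   # A, k, plateau
aic_two = aic_least_squares(rss=0.012, n_points=n, n_params=5)   # A1, k1, A2, k2, plateau
print("two-site preferred" if aic_two < aic_one else "one-site preferred")
```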
The two koff rates obtained from the two-site binding models typically differed by roughly an order of magnitude (Table 2): a fast population with a koff between 0.15 and 0.3 s−1 and a slow population with a koff between 0.03 and 0.07 s−1 (Table 2). These values correspond to residence times (defined as 1/koff) of 3 to 6 s for the fast population and 15 to 30 s for the slow population (Table 2). The shortest mean residence times of ∼2 s were found for Jun and XBP, whereas FBP and HP1β had the longest mean residence times of ∼70 s. As a control, GFP had a nominal mean residence time of less than 1 s, which is within our experimental error (Table 2). We confirmed that core histones had mean residence times on the order of hours (Table 2). Furthermore, in agreement with earlier qualitative reports, we determined the mean residence time of H10 to be ∼3 min, and an H10-ΔC mutant, which lacks the entire C terminus and has severely reduced DNA binding activity, had a mean residence time on the order of 14 s (Table 2) (43). When fits were forced to a single binding site model, mean residence times on the order of 5 to 30 s were obtained for most proteins (data not shown). It is likely that populations with faster binding dynamics exist that are below our temporal resolution limit. However, computer simulations show that the experimentally observed FRAP recovery kinetics are inconsistent with large populations with mean residence times in the subsecond range (data not shown).
We next determined the relative sizes of the fast and slow fractions for each protein by using best-fit analysis. As might be expected based on the diverse functions of the analyzed proteins, most proteins showed distinct combinations of residence time and the relative proportions of slow and fast binding fractions (Table 2). For some proteins, the major fraction showed rapid binding, whereas for other proteins the rapid fraction only represented a minor population (Table 2). For example, only about 20% of BRG1 or Jun molecules were in the fast fraction, whereas more than 80% of PCAF or ARNT molecules made up the fast fraction (Table 2). As a control, we find that impairment of nucleosome binding activity of HMGN1 results as predicted in complete loss of the slow fraction and increases the fast fraction to more than 96%, compared to less than 20% for the wild-type protein (Table 2).
Two proteins, HMGB1 and FBP, fit more accurately to a single-site model, although their mean residence times were dramatically different (Tables 2 and 4). While the DNA binding protein HMGB1 is a rapid binder, with its entire population having a mean residence time of ∼4 s, the entire population of the TFIIH interacting protein FBP had a mean residence time of more than 60 s (Table 2); this long residence time is inconsistent with a suggested role of FBP in RNA binding. The short mean residence time of HMGB1 is consistent with its proposed role as a stimulator of chromatin remodeling via transient interaction with chromatin (5).
Structural proteins had an overall tendency for slower turnover. The slow fraction of HP1β, with a mean residence time of 72 s, had the longest mean residence time of all proteins, and although more than 80% of the protein turns over within 11 s, even this mean residence time is by far the longest among the fast fractions (Table 2). Similarly, the relatively long mean residence time of 24.8 s in combination with its large slow fraction of almost 80% makes HMGN1 one of the more stably associated proteins (Table 2). Thus, although even structural proteins interact in a very transient fashion with chromatin, their somewhat lower turnover might be critical for the establishment and maintenance of stable chromatin domains (12).
The two enzymatic chromatin-associated proteins BRG1 and the histone acetyltransferase PCAF showed very distinct kinetic profiles (Table 2). BRG1 was predominantly present in a relatively slow fraction with a mean residence time of about 20 s; in contrast, PCAF was highly enriched in a rapid fraction with a mean residence time of less than 5 s. The differences might reflect the proteins' different functions, as PCAF modifies histone tails in a presumably very rapid enzymatic reaction, whereas BRG1 might be required for longer periods of time to bring about the structural changes involved in chromatin remodeling.
The transcriptional activator ARNT was predominantly (90%) found in a rapidly exchanging fraction, and only about 8% of the total was bound for longer periods (30 s). The size of this slow fraction is similar to estimates based on localization data of the fraction of ARNT present in transcriptionally engaged AhR-ARNT complexes (20). Furthermore, the estimated fast fraction of AhR of about 42% is in the same range as the fraction estimated to be associated with cellular transcription sites. These data are consistent with a model in which ARNT scans potential target genes and then recruits AhR temporarily to these genes, resulting in their activation (20).
The sum of these results demonstrates that transient binding is a common feature of many chromatin-associated proteins in vivo, that chromatin-associated proteins exist in several, kinetically distinct populations, and that at steady state the major population of each protein is bound to chromatin rather than present in a soluble form in the nucleoplasm.
We describe and apply here a combined microscopy-computation approach to analyze the binding dynamics of a wide range of proteins with chromatin in the nucleus of living cells. We find by quantitative analysis that transient binding is a common property of many chromatin-associated proteins. Mean residence times are typically on the order of 2 to 20 s, and for most proteins, no significant population of immobile, statically associated molecules was detected. Although we cannot strictly rule out the possibility that detection of more stable interactions was obscured due to overexpression of the fluorescently tagged protein, several lines of evidence argue against this scenario. First, transient binding with similar residence times was observed for a range of proteins representing a spectrum of endogenous proteins, whose abundances and numbers of endogenous binding sites differ. If factors such as endogenous abundance and available binding sites critically and artifactually determined our results, a larger range of residence times would be expected. Second, the dynamic behavior of a given protein was similar among several cell types, but differences between proteins were observed within a given cell type. Third, relatively small protein fractions can be detected by our method. We estimate the abundance of the overexpressed protein to be typically between 10,000 and 100,000 molecules per nucleus (18). Since the sensitivity in detecting kinetic fractions is about 5% of the total protein (Table 3), kinetically distinct populations of 500 to 5,000 molecules can be detected. Thus, even relatively low-abundance binding events would be detected against a pool of overexpressed protein. Fourth, comparison of pairs of wild-type and mutant proteins in all cases showed a dramatic reduction in the residence time of the mutant protein, suggesting that the observed slower binding of the wild-type protein was not due to nonphysiological binding. Fifth, our results are consistent with qualitative observations in numerous well-characterized experimental systems (3, 8, 13, 16, 17, 30, 31, 35, 38, 40, 51, 57, 59). For example, in the case of HP1, similar results in FRAP experiments have been obtained in transient and stable expression systems as well as in a gene replacement system (13, 23). Importantly, the dynamic properties of HP1 as well as its yeast homologue, swi6, were independent of the expression level of the GFP-fusion protein (12). Since HP1 behaves similarly to several of the proteins observed here, it is likely that these binding events are also representative of the physiological behavior of these proteins. Furthermore, although we cannot strictly exclude the possibility that some of the retardation in FRAP kinetics is due to nonchromatin-mediated binding, for example, to a nuclear matrix or to storage compartments, the sensitivity of the FRAP curves of six chromatin-associated proteins to mutations that affect chromatin binding suggests that our measurements indeed primarily reflect binding to chromatin.
While we can accurately fit the experimental data for most proteins to a kinetic model containing two distinct binding types, the biological significance of the two kinetic fractions is unclear and might be different for each protein. It is tempting to suggest that the fast and slow fractions represent nonspecific and specific binding events for each protein. However, the inability to accurately determine critical parameters such as the number of endogenous molecules and binding sites and the ratio of exogenous and endogenous protein for most of the analyzed proteins makes it difficult to generalize this interpretation. The biological meaning of each kinetic fraction should be determined on a protein-to-protein basis. Regardless, the most likely cause for generating distinct kinetic fractions is the temporary immobilization of transcription factors upon their functional binding to distinct types of target sequences. Well-documented examples of modulation of the dynamic properties of proteins due to their functional interaction with chromatin include the steroid receptors, which become temporarily immobilized on target genes upon stimulation with ligand (59), RNA polymerases I and II upon transcriptional engagement (3, 17, 35), and DNA repair and replication factors upon association with repair and replication sites (30, 31, 57).
Although our measurements provide an estimate of residence times, they do not address the underlying cause for the exchange of a protein on chromatin. Mean residence times on the order of tens of seconds or minutes are unlikely to simply reflect spontaneous exchange events but might point to an involvement of regulatory activities. Recruitment of members of the chaperone family to specific genes during hormone activation and an effect of molecular chaperone activity on the mobility of the glucocorticoid receptor (GR) and the progesterone nuclear receptors have been reported (21, 25). Similarly, proteasome activity influences the exchange dynamics of glucocorticoid and estrogen receptors (55, 58, 59). Furthermore, a role for chromatin remodeling in the active exchange of the GR has been proposed, since it appears that GR is actively displaced from the template during the remodeling process (24, 29, 45). It will be important to determine whether these active processes are limited to controlling the binding of steroid receptors or are generally involved in the rapid exchange of proteins on chromatin.
A 3D genome-scanning model for chromatin-associated proteins. Our observations of transient binding and the large bound fraction of all analyzed chromatin-associated proteins support a model in which a single molecule of a chromatin-associated protein resides on chromatin for a few seconds and then dissociates and diffuses for a relatively short period of time before it associates with a new site, most likely on a different chromatin fiber (53). While we have not systematically determined the time a molecule spends diffusing through the nucleoplasm between binding events, preliminary observations on the linker histone H1 indicate that this time is on the order of 200 to 400 ms (T. Misteli, unpublished observation). Regardless, the dynamic exchange of chromatin-associated proteins combined with a large population of bound molecules generates a steady state in which a predominantly bound population of molecules continuously samples the genome for appropriate binding sites by diffusional hopping between chromatin fibers. This mode of action is an effective way of ensuring availability of chromatin-associated proteins throughout the nucleus and is a simple means to efficiently target proteins to their binding sites (42).
Dynamic protein interaction networks on chromatin. Dynamic exchange of chromatin-associated proteins has implications for chromatin function and gene regulation. The characteristic transient binding of proteins to a site in the genome results in a high flux of molecules at a particular binding site. As a consequence, the maintenance of a protein's occupancy, or of a protein complex on chromatin, requires the continuous supply of the involved factors. A requirement for continuous supply of repressor has been demonstrated in yeast, where silencing of mating-type genes requires the continuous presence of its silencers (11). Similarly, in mammalian cells, the continuous maintenance of active ribosomal genes and the formation of preinitiation and transcription complexes require the uninterrupted supply of polymerase components (17).
As a consequence of the lability of protein-chromatin interactions, simple competition among potential binding partners for any given site in the genome emerges as a major determinant of chromatin states (42, 53). Upon dissociation of any protein, the available binding site is open for competition by multiple binding factors. Since the outcome of this competition is largely dependent on the local concentration and the affinity of the competing factors, dynamic competition is a simple, but effective, means to modulate chromatin states. In heterochromatin of yeast and mammals, for example, the mandatory structural protein HP1, which only binds transiently to chromatin, can be displaced from pericentromeric heterochromatin within minutes upon alterations of the chromatin compaction status (13). Similarly, expression of the GAL4 activator in a reporter system in Drosophila can overcome heterochromatic silencing, most likely by affecting the binding site occupancy of silencing factors (1). While these examples involve only two known competitors, more complex dynamic interaction networks acting on chromatin are likely commonplace. One such network consists of the dynamic competitive interactions between the linker histone H1 and multiple HMG proteins as recently described in living cells (7, 8). The interplay of multiple proteins in this manner in the form of dynamic interaction networks can account for some of the most general properties of gene expression systems, such as rapid response, plasticity, diversity, robustness, and integration of signals. Since the transient binding of proteins to chromatin is a critical component of such interaction networks, we suggest that the dynamic nature of chromatin-associated proteins is a key feature of gene regulation processes (42, 53).
We thank T. Kerppola, D. Levens, and E. Prochownik for reagents.
J.V. was partially funded by grant IAA5039103 from the Academy of Sciences of the Czech Republic and MSM111100003 from the Ministry of Education, Youth and Sports. D.T.B. was supported by NSF grant MCB-0235800. J.V. acknowledges receipt of a Journal of Cell Science Traveling Fellowship. T.M. is a Fellow of the Keith R. Porter Endowment for Cell Biology.
Received 17 March 2004.
Returned for modification 18 April 2004.
Accepted 2 May 2004.
Ahmad, K., and S. Henikoff. 2001. Modulation of a transcription factor counteracts heterochromatic gene silencing in Drosophila. Cell 104:839-847.
Akaike, H. 1974. A new look at statistical model identification. IEEE Trans. Auto. Control 19:716-723.
Becker, M., C. Baumann, S. John, D. A. Walker, M. Vigneron, J. G. McNally, and G. L. Hager. 2002. Dynamic behavior of transcription factors on a natural promoter in living cells. EMBO Rep. 3:1188-1194.
Bell, B. M. 2001. Approximating the marginal likelihood estimate for models with random parameters. Appl. Math. Comput. 119:57-75.
Bonaldi, T., G. Langst, R. Strohner, P. B. Becker, and M. E. Bianchi. 2002. The DNA chaperone HMGB1 facilitates ACF/CHRAC-dependent nucleosome sliding. EMBO J. 21:6865-6873.
Bustin, M. 1999. Regulation of DNA-dependent activities by the functional motifs of the high-mobility-group chromosomal proteins. Mol. Cell. Biol. 19:5237-5246.
Catez, F., D. T. Brown, T. Misteli, and M. Bustin. 2002. Competition between histone H1 and HMGN proteins for chromatin binding sites. EMBO Rep. 3:760-766.
Catez, F., Y. Huan, K. J. Tracey, R. Reeves, T. Misteli, and M. Bustin. 2004. A network of dynamic interactions between histone H1 and HMG proteins in chromatin. Mol. Cell. Biol. 24:4321-4328.
Chalfie, M., Y. Tu, G. Euskirchen, W. Ward, and D. Prasher. 1994. Green fluorescent protein as a marker for gene expression. Science 263:802-805.
Chen, D., C. S. Hinkley, R. W. Henry, and S. Huang. 2002. TBP dynamics in living human cells: constitutive association of TBP with mitotic chromosomes. Mol. Biol. Cell 13:276-284.
Cheng, T. H., and M. R. Gartenberg. 2000. Yeast heterochromatin is a dynamic structure that requires silencers continuously. Genes Dev. 14:452-463.
Cheutin, T., S. A. Gorski, K. M. May, P. B. Singh, and T. Misteli. 2004. In vivo dynamics of Swi6 in yeast: evidence for a stochastic model of heterochromatin. Mol. Cell. Biol. 24:3157-3167.
Cheutin, T., A. J. McNairn, T. Jenuwein, D. M. Gilbert, P. B. Singh, and T. Misteli. 2003. Maintenance of stable heterochromatin domains by dynamic HP1 binding. Science 299:721-725.
Cobelli, C., and D. M. Foster. 1998. Compartmental models: theory and practice using the SAAM II software system. Adv. Exp. Med. Biol. 445:79-101.
Dey, A., F. Chitsaz, A. Abbasi, T. Misteli, and K. Ozato. 2003. The double bromodomain protein Brd4 binds to acetylated chromatin during interphase and mitosis. Proc. Natl. Acad. Sci. USA 100:8758-8763.
Dou, Y., J. Bowen, Y. Liu, and M. A. Gorovsky. 2002. Phosphorylation and an ATP-dependent process increase the dynamic exchange of H1 in chromatin. J. Cell Biol. 158:1161-1170.
Dundr, M., U. Hoffmann-Rohrer, Q. Hu, I. Grummt, L. I. Rothblum, R. D. Phair, and T. Misteli. 2002. A kinetic framework for a mammalian RNA polymerase in vivo. Science 298:1623-1626.
Dundr, M., J. G. McNally, J. Cohen, and T. Misteli. 2002. Quantitation of GFP-fusion proteins in single living cells. J. Struct. Biol. 140:92-99.
Eissenberg, J. C., and S. C. R. Elgin. 2000. The HP1 protein family: getting a grip on chromatin. Curr. Opin. Gen. Dev. 10:204-210.
Elbi, C., T. Misteli, and G. L. Hager. 2002. Recruitment of dioxin receptor to active transcription sites. Mol. Biol. Cell 13:2001-2015.
Elbi, C. C., D. A. Walker, G. Romero, W. P. Sullivan, D. O. Toft, G. L. Hager, and D. B. DeFranco. 2004. Molecular chaperones function as steroid receptor nuclear mobility factors. Proc. Natl. Acad. Sci. USA 101:2876-2881.
Felsenfeld, G., and M. Groudine. 2003. Controlling the double helix. Nature 421:448-453.
Festenstein, R., S. N. Pagakis, K. Hiragami, D. Lyon, A. Verreault, B. Sekkali, and D. Kioussis. 2003. Modulation of heterochromatin protein 1 dynamics in primary mammalian cells. Science 299:719-721.
Fletcher, T. M., N. Xiao, G. Mautino, C. T. Baumann, R. Wolford, B. S. Warren, and G. L. Hager. 2002. ATP-dependent mobilization of the glucocorticoid receptor during chromatin remodeling. Mol. Cell. Biol. 22:3255-3263.
Freeman, B. C., and K. R. Yamamoto. 2002. Disassembly of transcriptional regulatory complexes by molecular chaperones. Science 296:2232-2235.
Goytisolo, F. A., S. E. Gerchman, X. Yu, C. Rees, V. Graziano, V. Ramakrishnan, and J. O. Thomas. 1996. Identification of two DNA-binding sites on the globular domain of histone H5. EMBO J. 15:3421-3429.
Grewal, S. I., and S. C. Elgin. 2002. Heterochromatin: new possibilities for the inheritance of structure. Curr. Opin. Genet. Dev. 12:178-187.
Gunjan, A., B. T. Alexander, D. B. Sittman, and D. T. Brown. 1999. Effects of H1 histone variant overexpression on chromatin structure. J. Biol. Chem. 274:37950-37956.
Hager, G. L., C. Elbi, and M. Becker. 2002. Protein dynamics in the nuclear compartment. Curr. Opin. Genet. Dev. 12:137-141.
He, L., A. Weber, and D. Levens. 2000. Nuclear targeting determinants of the far upstream element binding protein, a c-myc transcription factor. Nucleic Acids Res. 28:4558-4565.
Hoogstraten, D., A. L. Nigg, H. Heath, L. H. Mullenders, R. van Driel, J. H. Hoeijmakers, W. Vermeulen, and A. B. Houtsmuller. 2002. Rapid switching of TFIIH between RNA polymerase I and II transcription and DNA repair in vivo. Mol. Cell 10:1163-1174.
Houtsmuller, A. B., S. Rademakers, A. L. Nigg, D. Hoogstraten, J. H. Hoeijmakers, and W. Vermeulen. 1999. Action of DNA repair endonuclease ERCC1/XPF in living cells. Science 284:958-961.
Houtsmuller, A. B., and W. Vermeulen. 2001. Macromolecular dynamics in living cell nuclei revealed by fluorescence redistribution after photobleaching. Histochem. Cell Biol. 115:13-21.
Kanda, T., K. F. Sullivan, and G. M. Wahl. 1998. Histone-GFP fusion protein enables sensitive analysis of chromosome dynamics in living mammalian cells. Curr. Biol. 8:377-385.
Kimura, H., and P. R. Cook. 2001. Kinetics of core histones in living human cells: little exchange of H3 and H4 and some rapid exchange of H2B. J. Cell Biol. 153:1341-1353.
Kimura, H., K. Sugaya, and P. R. Cook. 2002. The transcription cycle of RNA polymerase II in living cells. J. Cell Biol. 159:777-782.
Labrador, M., and V. G. Corces. 2002. Setting the boundaries of chromatin domains and nuclear organization. Cell 111:151-154.
Lemon, B., and R. Tjian. 2000. Orchestrated response: a symphony of transcription factors for gene control. Genes Dev. 14:2551-2569.
Lever, M. A., J. P. H. Th'ng, X. Sun, and M. J. Hendzel. 2000. Rapid exchange of histone H1.1 on chromatin in living cells. Nature 408:873-876.
Lippincott-Schwartz, J., E. Snapp, and A. Kenworthy. 2001. Studying protein dynamics in living cells. Nat. Rev. Mol. Cell Biol. 2:444-456.
McNally, J. G., W. G. Muller, D. Walker, R. Wolford, and G. L. Hager. 2000. The glucocorticoid receptor: rapid exchange with regulatory sites in living cells. Science 287:1262-1265.
Meyvis, T. K., S. C. De Smedt, P. Van Oostveldt, and J. Demeester. 1999. Fluorescence recovery after photobleaching: a versatile tool for mobility and interaction measurements in pharmaceutical research. Pharm. Res. 16:1153-1162.
Misteli, T. 2001. Protein dynamics: implications for nuclear architecture and gene expression. Science 291:843-847.
Misteli, T., A. Gunjan, R. Hock, M. Bustin, and D. T. Brown. 2000. Dynamic binding of histone H1 to chromatin in living cells. Nature 408:877-881.
Naar, A. M., B. D. Lemon, and R. Tjian. 2001. Transcriptional coactivator complexes. Annu. Rev. Biochem. 70:475-501.
Nagaich, A. K., D. A. Walker, R. Wolford, and G. L. Hager. 2004. Rapid periodic binding and displacement of the glucocorticoid receptor during chromatin remodeling. Mol. Cell 14:163-174.
Orphanides, G., and D. Reinberg. 2002. A unified theory of gene expression. Cell 108:439-451.
Pederson, T. 2001. Protein mobility within the nucleus—what are the right moves? Cell 104:635-638.
Phair, R. D., S. Gorski, and T. Misteli. 2004. Measurement of dynamic protein binding to chromatin in vivo using photobleaching microscopy. Methods Enzymol. 375:393-414.
Phair, R. D., and T. Misteli. 2000. High mobility of proteins in the mammalian cell nucleus. Nature 404:604-609.
Phair, R. D., and T. Misteli. 2001. Kinetic modelling approaches to in vivo imaging. Nat. Rev. Mol. Cell Biol. 2:898-907.
Prymakowsa-Bosak, M., T. Misteli, J. E. Herrera, H. Shirakawa, Y. Birger, S. Garfield, and M. Bustin. 2001. Mitotic phosphorylation prevents the binding of HMGN proteins to chromatin. Mol. Cell. Biol. 21:5169-5178.
Purich, D. L., and R. D. Allison. 2000. Handbook of biochemical kinetics. Academic Press, San Diego, Calif.
Roix, J., and T. Misteli. 2002. Genomes, proteomes, and dynamic networks in the cell nucleus. Histochem. Cell Biol. 118:105-116.
Scaffidi, P., T. Misteli, and M. E. Bianchi. 2002. Release of chromatin protein HMGB1 by necrotic cells triggers inflammation. Nature 418:191-195.
Schaaf, M. J., and J. A. Cidlowski. 2003. Molecular determinants of glucocorticoid receptor mobility in living cells: the importance of ligand affinity. Mol. Cell. Biol. 23:1922-1934.
Schaufele, F., J. F. Enwright III, X. Wang, C. Teoh, R. Srihari, R. Erickson, O. A. MacDougald, and R. N. Day. 2001. CCAAT/enhancer binding protein alpha assembles essential cooperating factors in common subnuclear domains. Mol. Endocrinol. 15:1665-1676.
Sporbert, A., A. Gahl, R. Ankerhold, H. Leonhardt, and M. C. Cardoso. 2002. DNA polymerase clamp shows little turnover at established replication sites but sequential de novo assembly at adjacent origin clusters. Mol. Cell 10:1355-1365.
Stavreva, D. A., W. G. Muller, G. L. Hager, C. L. Smith, and J. G. McNally. 2004. Rapid glucocorticoid receptor exchange at a promoter is coupled to transcription and regulated by chaperones and proteasomes. Mol. Cell. Biol. 24:2682-2697.
Stenoien, D. L., K. Patel, M. G. Mancini, M. Dutertre, C. L. Smith, B. W. O'Malley, and M. A. Mancini. 2001. FRAP reveals that mobility of oestrogen receptor-α is ligand and proteasome-dependent. Nat. Cell Biol. 3:15-23.
Strahl, B. D., and C. D. Allis. 2000. The language of covalent histone modifications. Nature 403:41-45.
Thomas, J. O., and A. A. Travers. 2001. HMG1 and 2, and related 'architectural' DNA-binding proteins. Trends Biochem. Sci. 26:167-174.
Verkman, A. S. 2002. Solute and macromolecule diffusion in cellular aqueous compartments. Trends Biochem. Sci. 27:27-33.
Woodcock, C. L., and S. Dimitrov. 2001. Higher-order structure of chromatin and chromosomes. Curr. Opin. Genet. Dev. 11:130-135.
Yin, X., M. F. Landay, W. Han, E. S. Levitan, S. C. Watkins, R. M. Levenson, D. L. Farkas, and E. V. Prochownik. 2001. Dynamic in vivo interactions among Myc network members. Oncogene 20:4650-4664.
Molecular and Cellular Biology Jun 2004, 24 (14) 6393-6402; DOI: 10.1128/MCB.24.14.6393-6402.2004
You are going to email the following Global Nature of Dynamic Protein-Chromatin Interactions In Vivo: Three-Dimensional Genome Scanning and Dynamic Interaction Networks of Chromatin Proteins | CommonCrawl |
July 2014, 13(4): 1629-1639. doi: 10.3934/cpaa.2014.13.1629
Note on evolutionary free piston problem for Stokes equations with slip boundary conditions
Boris Muha 1 and Zvonimir Tutek 2
Department of Mathematics, Faculty of Science, University of Zagreb, Bijenička cesta 30, 10000 Zagreb, Croatia
Department of Mathematics, University of Zagreb, Bijenička cesta 30, 10000 Zagreb, Croatia
Received April 2012; Revised October 2012; Published February 2014
In this paper we study a free boundary fluid-rigid body interaction problem, the free piston problem. We consider an evolutionary incompressible viscous fluid flow through a junction of two pipes. Inside the "vertical" pipe there is a heavy piston which can freely slide along the pipe. On the lateral boundary of the "vertical" pipe we prescribe Navier's slip boundary conditions. We prove the existence of a local in time weak solution. Furthermore, we show that the obtained solution is a strong solution.
Keywords: Navier-Stokes equations, moving boundary, weak solution, fluid-rigid body interaction.
Mathematics Subject Classification: Primary: 74F10, 35Q30, 76D03; Secondary: 76D0.
Citation: Boris Muha, Zvonimir Tutek. Note on evolutionary free piston problem for Stokes equations with slip boundary conditions. Communications on Pure & Applied Analysis, 2014, 13 (4) : 1629-1639. doi: 10.3934/cpaa.2014.13.1629
\begin{definition}[Definition:Submatrix]
Let $\mathbf A$ be a matrix with $m$ rows and $n$ columns.
A '''submatrix''' of $\mathbf A$ is a matrix formed by selecting from $\mathbf A$:
:a subset of the rows
and:
:a subset of the columns
and forming a new matrix by using those entries, in the same relative positions, that appear in both the rows and columns of those selected.
\end{definition}
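For example, if $\mathbf A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}$, then selecting rows $1, 3$ and columns $2, 3$ yields the submatrix $\begin{pmatrix} a_{12} & a_{13} \\ a_{32} & a_{33} \end{pmatrix}$.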
ecga7
Terms in this set (9)
Electronic Commerce
(E-Commerce)
Business activities conducted using electronic data transmission over the Internet and the World Wide Web.
Electronic Business
(E-Business)
Another term for electronic commerce; sometimes used as a broader term for electronic commerce that includes all business processes, as distinguished from a narrow definition of electronic commerce that includes sales and purchase transactions only.
Dot-Com
(Pure Dot-Com)
A company that operates only online.
Electronic Commerce Categories
B2C - Business-to-Consumer
B2B - Business-to-Business
C2C - Customer-to-Customer (e.g., eBay)
B2G - Business-to-Government (e.g., procurement)
Schur's theorem
In discrete mathematics, Schur's theorem is any of several theorems of the mathematician Issai Schur. In differential geometry, Schur's theorem is a theorem of Axel Schur. In functional analysis, Schur's theorem is often called Schur's property, also due to Issai Schur.
Ramsey theory
In Ramsey theory, Schur's theorem states that for any partition of the positive integers into a finite number of parts, one of the parts contains three integers x, y, z with
$x+y=z.$
For every positive integer c, S(c) denotes the smallest number S such that for every partition of the integers $\{1,\ldots ,S\}$ into c parts, one of the parts contains integers x, y, and z with $x+y=z$. Schur's theorem ensures that S(c) is well-defined for every positive integer c. The numbers S(c) are called Schur numbers.
Folkman's theorem generalizes Schur's theorem by stating that there exist arbitrarily large sets of integers, all of whose nonempty sums belong to the same part.
Using this definition, the only known Schur numbers are S(c) = 2, 5, 14, 45, and 161 for c = 1, ..., 5 (OEIS: A030126). The proof that S(5) = 161 was announced in 2017 and took up 2 petabytes of space.[1][2]
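For example, S(2) = 5: the two-part partition $\{1,4\}\cup \{2,3\}$ of $\{1,2,3,4\}$ contains no x, y, z (with x = y allowed) in a single part satisfying $x+y=z$, while every partition of $\{1,\ldots ,5\}$ into two parts does contain such a triple.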
Combinatorics
In combinatorics, Schur's theorem gives the asymptotic number of ways of expressing a given number as a (non-negative, integer) linear combination of a fixed set of relatively prime numbers. In particular, if $\{a_{1},\ldots ,a_{n}\}$ is a set of integers such that $\gcd(a_{1},\ldots ,a_{n})=1$, the number of different tuples of non-negative integers $(c_{1},\ldots ,c_{n})$ such that $x=c_{1}a_{1}+\cdots +c_{n}a_{n}$ is, as $x$ goes to infinity:
${\frac {x^{n-1}}{(n-1)!a_{1}\cdots a_{n}}}(1+o(1)).$
As a result, for every set of relatively prime numbers $\{a_{1},\ldots ,a_{n}\}$ there exists a value of $x$ such that every larger number is representable as a linear combination of $\{a_{1},\ldots ,a_{n}\}$ in at least one way. This consequence of the theorem can be recast in a familiar context considering the problem of changing an amount using a set of coins. If the denominations of the coins are relatively prime numbers (such as 2 and 5) then any sufficiently large amount can be changed using only these coins. (See Coin problem.)
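For instance, with coins of denominations 2 and 5 (which are relatively prime), every amount $x\geq 4$ can be changed: $4=2+2$, $5=5$, and every larger amount is reached from one of these two by adding further coins of denomination 2, while the amounts 1 and 3 cannot be represented at all.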
Differential geometry
In differential geometry, Schur's theorem compares the distance between the endpoints of a space curve $C^{*}$ to the distance between the endpoints of a corresponding plane curve $C$ of less curvature.
Suppose $C(s)$ is a plane curve with curvature $\kappa (s)$ which makes a convex curve when closed by the chord connecting its endpoints, and $C^{*}(s)$ is a curve of the same length with curvature $\kappa ^{*}(s)$. Let $d$ denote the distance between the endpoints of $C$ and $d^{*}$ denote the distance between the endpoints of $C^{*}$. If $\kappa ^{*}(s)\leq \kappa (s)$ then $d^{*}\geq d$.
Schur's theorem is usually stated for $C^{2}$ curves, but John M. Sullivan has observed that Schur's theorem applies to curves of finite total curvature (the statement is slightly different).
Linear algebra
Main article: Schur decomposition
In linear algebra, Schur’s theorem refers either to the triangularization of a square matrix with complex entries, or to that of a square matrix with real entries and real eigenvalues.
Functional analysis
In functional analysis and the study of Banach spaces, Schur's theorem, due to I. Schur, often refers to Schur's property, that for certain spaces, weak convergence implies convergence in the norm.
Number theory
In number theory, Issai Schur showed in 1912 that for every nonconstant polynomial p(x) with integer coefficients, if S is the set of all nonzero values $\{p(n): n\in \mathbb{N},\ p(n)\neq 0\}$, then the set of primes that divide some member of S is infinite.
See also
• Schur's lemma (from Riemannian geometry)
References
1. Heule, Marijn J. H. (2017). "Schur Number Five". arXiv:1711.08076.
2. "Schur Number Five". www.cs.utexas.edu. Retrieved 2021-10-06.
• Herbert S. Wilf (1994). generatingfunctionology. Academic Press.
• Shiing-Shen Chern (1967). Curves and Surfaces in Euclidean Space. In Studies in Global Geometry and Analysis. Prentice-Hall.
• Issai Schur (1912). Über die Existenz unendlich vieler Primzahlen in einigen speziellen arithmetischen Progressionen, Sitzungsberichte der Berliner Math.
Further reading
• Dany Breslauer and Devdatt P. Dubhashi (1995). Combinatorics for Computer Scientists
• John M. Sullivan (2006). Curves of Finite Total Curvature. arXiv.
\begin{document}
\begin{center} \textbf{\LARGE Intervention analysis for integer-valued autoregressive models }\\[2ex] \Large \textbf{Xanthi Pedeli}\footnote{Athens University of Economics and Business, Athens, Greece} \textbf{and Roland Fried}\footnote{TU Dortmund University, Dortmund, Germany
\noindent \textbf{Corresponding author:} \newline Xanthi Pedeli, Athens University of Business and Economics, 76 Patision Street, 10434 Athens, Greece. E-mail: [email protected].}
\end{center} \noindent \subsection*{Abstract} We study the problem of intervention effects generating various types of outliers in an integer-valued autoregressive model with Poisson innovations. We concentrate on outliers which enter the dynamics and can be seen as effects of extraordinary events. We consider three different scenarios, namely the detection of an intervention effect of a known type at a known time, the detection of an intervention effect of unknown type at a known time and the detection of an intervention effect when both the type and the time are unknown. We develop $F$-tests and score tests for the first scenario. For the second and third scenarios we rely on the maximum of the different $F$-type or score statistics. The usefulness of the proposed approach is illustrated using monthly data on human brucellosis infections in Greece.\\ \noindent \textbf{Keywords:} Count data; time series; innovation outlier; level shift; transient shift.
\section{Introduction}\label{sect:intro}
Detection and modelling of unusual events are crucial in time series analysis because their presence can strongly influence statistical inference, diagnostics and forecasting. Following the seminal work of \cite{fox:1972}, several outlier detection and estimation methods have been proposed in the literature for both linear and non-linear time series, often under the assumption of Gaussian random variables; see for instance \cite{galeano:2013} and the references therein. However, to the best of our knowledge, the topic has not been investigated thoroughly in the framework of integer-valued autoregressive (INAR) models.
Integer valued autoregressive models have been introduced by \cite{mckenzie:1985} and \cite{alosh:1987} as a convenient way to capture the autoregressive structure of count time series while accounting for the discreteness of the data. Several extensions and generalizations of the first-order integer-valued autoregressive process have since been developed and are widely used nowadays \citep{davis:2016,weiss:2018}.
The integer-valued autoregressive process of order $p$, denoted briefly as INAR($p$), is defined as \begin{equation}\label{eq:inarp} Y_t=\sum_{i=1}^p\alpha_i\circ Y_{t-i}+e_t,\; t\in\mathbb{N}, \end{equation} where $\{e_t\}$ is an innovation process consisting of a sequence of independent identically distributed nonnegative integer-valued random variables with finite mean and variance. Conditional on $Y_{t-i}$, $i\in\{1,\ldots,p\}$, the binomial thinning operator ``$\circ$" is defined as \begin{equation*} \alpha_i\circ Y_{t-i}=\left\{\begin{array}{ll} \sum_{j=1}^{Y_{t-i}}X_{j,i}& Y_{t-i}>0,\\ 0,& \mbox{otherwise}, \end{array}\right. \end{equation*} where each counting series $\{X_{j,i}, \; j=1,\ldots,Y_{t-i}\}$ consists of independent identically distributed Bernoulli random variables, independent of $Y_{t-i}$, with success probability $\alpha_i$ \citep{steutel:1979}. The counting series are assumed to be mutually independent for $i=1,\ldots,p$ \citep{du:1991} and the innovations $e_t$ are assumed to be independent of the thinning operations $\alpha_i\circ Y_{t-i}$ for all $t\in\mathbb{N}$. A unique stationary and ergodic solution of (\ref{eq:inarp}) exists if $\sum_{i=1}^p\alpha_i<1$, where $\alpha_i\in[0,1)$.
In the following, we focus on the parametric case that arises when the innovations are Poisson random variables with parameter $\lambda$. When $p=1$, the marginal stationary distribution of $Y_t$ is also Poisson with mean $E(Y_t)=\lambda/(1-\alpha)$. When $p>1$, the unconditional mean and variance of $Y_t$ are generally not equal so that the marginal stationary distribution of $Y_t$ is no longer Poisson although the innovations are Poisson distributed.
\cite{barczy:2010, barczy:2012} analyzed the effects of different types of outliers occurring at known time points on the conditional least squares estimators in case of Poisson INAR(1) models. Detection of additive outliers in Poisson INAR(1) time series has been treated by \cite{silva:2015} in a Bayesian framework. Additive outliers are often interpreted as effects of measurement errors as they change a single observation but do not enter the dynamics of the time series. We concentrate on other types of outliers which enter the dynamics and can be seen as effects of extraordinary events. \cite{barczy:2010} consider an outlier model similar to ours, but their work is restricted to an analysis of conditional least squares estimation in the presence of innovation outliers, which are treated as deterministic effects at known time points. \cite{morina:2020} proposed an INAR($p$) model that allows for the quantification of an intervention while taking into account possible trends or seasonal behaviour. The suggested model assumes the special case that the intervention occurs at a known time point and affects all subsequent observations in the same way.
We aim at the detection of different types of effects including innovation outliers, transient shifts and level shifts at possibly unknown time points and use a somewhat different model formulation. More precisely, we extend model~(\ref{eq:inarp}) as follows: \begin{equation}\label{eq:cont-inarp} Y_t=\sum_{i=1}^p\alpha_i\circ Y_{t-i}+e_t+\sum_{j=1}^JU_{t,j},\; t\in\mathbb{N},\end{equation} where $J$ is the number of intervention effects and $(U_{t,j}:t\in\mathbb{N})$, $j=1,\ldots,J$ are independent random variables denoting the effects of the different interventions on all time points. We assume that $(U_{t,j}:t\in\mathbb{N})$, $j=1,\ldots,J$ are independent of the thinning operations $\alpha_i\circ Y_{t-i}$ for all $t\in\mathbb{N}$. In addition, it is assumed that
$U_{t,j}\equiv 0$ for $t=0,\ldots,\tau_j-1$, and $U_{t,j}\sim Pois(\kappa_j\delta_j^{t-\tau_j})$ for $t=\tau_j,\tau_j+1,\ldots$, with $\tau_j$ and $\kappa_j$ denoting respectively the time point and the size of the $j$-th intervention and $\delta_j\in [0,1]$ controlling the effect of the intervention on the future of the time series after time $\tau_j$. For $\delta_j=1$ we get a permanent level shift starting at time $\tau_j$, for $\delta_j=0$ we get an innovation outlier, i.e., a single effect at time $\tau_j$ which spreads into the future according to the dynamics of the data generating process, and for $\delta_j\in (0,1)$ we get a transient shift in between the former two extremes which decays with rate $\delta_j$. The effect of the above type of interventions on a realization of a stationary Poisson INAR(1) process is illustrated in Figure \ref{fig:effects}.
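For concreteness, a realization of model (\ref{eq:cont-inarp}) with $p=1$ and a single intervention can be generated with a few lines of code. The following minimal simulation sketch (written in Python with NumPy; the function name \texttt{simulate\_inar1} and its arguments are our own illustrative choices) draws the binomial thinning, the Poisson innovation and the Poisson intervention term exactly as specified above and reproduces the scenarios of Figure~\ref{fig:effects}.
\begin{verbatim}
import numpy as np

def simulate_inar1(n, alpha, lam, kappa=0.0, delta=0.0, tau=None, seed=None):
    """Poisson INAR(1) path with one optional intervention (p = 1, J = 1)."""
    rng = np.random.default_rng(seed)
    y = np.empty(n, dtype=int)
    y[0] = rng.poisson(lam / (1.0 - alpha))      # stationary Poisson(lambda/(1-alpha)) start
    for t in range(1, n):                        # y[t] corresponds to time t + 1 (1-based)
        thinned = rng.binomial(y[t - 1], alpha)  # binomial thinning alpha o y[t-1]
        innov = rng.poisson(lam)                 # Poisson(lambda) innovation e_t
        contam = 0
        if tau is not None and t + 1 >= tau and kappa > 0:
            contam = rng.poisson(kappa * float(delta) ** (t + 1 - tau))
        y[t] = thinned + innov + contam
    return y

# Innovation outlier, transient shift and level shift of size 20 at time 100:
y_io = simulate_inar1(200, alpha=0.3, lam=5, kappa=20, delta=0.0, tau=100, seed=1)
y_ts = simulate_inar1(200, alpha=0.3, lam=5, kappa=20, delta=0.8, tau=100, seed=1)
y_ls = simulate_inar1(200, alpha=0.3, lam=5, kappa=20, delta=1.0, tau=100, seed=1)
\end{verbatim}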
The rest of the paper is organized as follows. Section~\ref{sect:cls-cml} discusses joint estimation of model parameters and intervention effects in the frameworks of conditional least squares and conditional maximum likelihood. Within the former framework, an $F$-test is developed for the detection of known types of interventions at known time points. For the same purpose, we suggest a score test in the framework of conditional maximum likelihood. The rejection rates and power of both tests are investigated through an extensive simulation study. In Sections~\ref{sect:unknowntype} and \ref{sect:unknowntime} we consider detection of intervention effects when either their type or both the type and time of intervention are unknown. In the spirit of \cite{fokianos:2010}, we suggest in Section~\ref{sect:iterative} an iterative procedure for the detection, classification and elimination of multiple intervention effects. The procedure is illustrated through its application to simulated and real data series. Section~\ref{sect:discussion} concludes the article and outlines future research.
\begin{figure}
\caption{Effects of different types of outliers of size $\kappa=20$ at time point $\tau=100$ on a realization of a Poisson INAR(1) process generated with $\alpha=0.3$, $\lambda=5$ and $n=200$. The solid black and dashed red lines correspond to the clean and contaminated processes (processes without and with contamination by outliers), respectively, where contamination is due to (a) an innovation outlier, (b) a transient shift with $\delta=0.8$ and (c) a level shift. }
\label{fig:effects}
\end{figure}
\section{Known types of intervention effects at known time points} \label{sect:cls-cml}
If the number of interventions $J$, the time points $\tau_j$ of their occurrence and the types $\delta_j$ of the interventions $j=1,\ldots,J$ are known, then the conditional mean $E(Y_t|Y_{t-1},\ldots,Y_{t-p})$ in our intervention model is linear in the remaining parameters $\alpha_i$, $\lambda$ and $\kappa_j$, $i=1,\ldots,p$, $j=1,\ldots,J$, leading to simple formulae for the conditional least squares (CLS) or conditional maximum likelihood (CML) estimation. In this section we formulate the objective functions for CLS and CML estimation of model (\ref{eq:cont-inarp}) and suggest $F$-type and score statistics for the detection and identification of changes of known type at known time points.
\subsection{Conditional least squares estimation and the $F$-statistic}\label{sect:cls}
The CLS estimates minimize the residual sum of squares, \begin{equation*} RSS(J)=\sum_{t=p+1}^n\left\{y_t-\lambda-\sum_{i=1}^p\alpha_i y_{t-i}-\sum_{j=1}^J\kappa_j\delta_j^{t-\tau_j}I(t\ge \tau_j)\right\}^2,\end{equation*} and can be calculated using explicit formulae and software for ordinary least squares estimation in linear models. The residual sum of squares can also be used when we want to decide whether a certain type of intervention effect is present at a given time point. A common measure for the goodness of fit of a linear model is the coefficient of determination, which is $R^2=\{RSS(0)-RSS(1)\}/RSS(0)$ in case of a single intervention effect, i.e., $J=1$. $R^2$ always takes values in the interval $[0,1]$, which simplifies its interpretation. In Gaussian linear models one often prefers the $F$-type statistic \begin{equation} \label{eq:ftest} F=\frac{RSS(0)-RSS(1)}{RSS(1)/(n-p-2)}, \end{equation} since it is $F$-distributed with 1 and $n-p-2$ degrees of freedom if the model without the additional intervention effect holds \citep{hamilton:1994}. In our case, $n-p-2$ will usually be large so that such an $F$-distribution is close to the $\chi_1^2$-distribution and thereafter we shall consider this more convenient distribution.
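Since $RSS(0)$ and $RSS(1)$ are obtained from ordinary linear least squares fits once $\tau$ and $\delta$ are fixed, the statistic in (\ref{eq:ftest}) can be computed with standard regression routines. A minimal sketch for $p=1$ (in Python with NumPy and SciPy; the function name and the 1-based time convention for $\tau$ are our own choices) is given below.
\begin{verbatim}
import numpy as np
from scipy.stats import chi2

def f_statistic_inar1(y, tau, delta):
    """F-type statistic for one intervention of type delta at 1-based time tau (p = 1)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    times = np.arange(2, n + 1)                   # times of the responses y_2, ..., y_n
    expo = times - tau
    x_int = np.where(expo >= 0, float(delta) ** np.maximum(expo, 0), 0.0)
    X0 = np.column_stack([np.ones(n - 1), y[:-1]])          # regressors for lambda, alpha
    X1 = np.column_stack([X0, x_int])                       # adds delta^(t-tau) I(t >= tau)
    rss0 = np.sum((y[1:] - X0 @ np.linalg.lstsq(X0, y[1:], rcond=None)[0]) ** 2)
    rss1 = np.sum((y[1:] - X1 @ np.linalg.lstsq(X1, y[1:], rcond=None)[0]) ** 2)
    return (rss0 - rss1) / (rss1 / (n - 3))                  # n - p - 2 with p = 1

crit = chi2.ppf(0.95, df=1)   # approximate 5% critical value from the chi-squared(1) law
# Reject H0: kappa = 0 for a transient shift (delta = 0.8) at tau = 100 if
# f_statistic_inar1(y, 100, 0.8) > crit, where y denotes the observed count series.
\end{verbatim}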
\subsection{Conditional maximum likelihood estimation and the score test statistic}\label{sect:cml}
The conditional log-likelihood function corresponding to model (\ref{eq:cont-inarp}) is given by
\begin{equation*}\ell(\boldsymbol{\theta})=\sum_{t=p+1}^n\log \mathnormal{p}(y_t|y_{t-1},\ldots, y_{t-p}), \end{equation*} where \begin{eqnarray*}
&&\mathnormal{p}(y_t|y_{t-1},\ldots, y_{t-p})=\sum_{i_1=0}^{min(y_{t-1},y_t)}\left(\begin{array}{c}y_{t-1}\\i_1\end{array}\right)\alpha_1^{i_1}(1-\alpha_1)^{y_{t-1}-i_1}\\ &&\times\sum_{i_2=0}^{min(y_{t-2},y_t-i_1)}\left(\begin{array}{c}y_{t-2}\\i_2\end{array}\right)\alpha_2^{i_2}(1-\alpha_2)^{y_{t-2}-i_2}\cdots \sum_{i_p=0}^{min\{y_{t-p},y_t-(i_1+\cdots+i_{p-1})\}}\left(\begin{array}{c}y_{t-p}\\i_p\end{array}\right)\alpha_p^{i_p}(1-\alpha_p)^{y_{t-p}-i_p}\\ &&\quad \times \frac{\exp{[-\lambda-\sum_{j=1}^J\kappa_j\delta_j^{t-\tau_j}I(t\ge \tau_j)]}[\lambda+\sum_{j=1}^J\kappa_j\delta_j^{t-\tau_j}I(t\ge \tau_j)]^{y_t-(i_1+\cdots+i_p)}}{\{y_t-(i_1+\cdots+i_p)\}!},
\frac{\partial\ell(\boldsymbol{\theta})}{\partial\alpha_i} &=& \sum_{t=p+1}^{n}\frac{y_{t-i}[\mathnormal{p}(y_{t}-1|y_{t-1},\ldots,y_{t-i}-1,\ldots,y_{t-p})-\mathnormal{p}(y_t|y_{t-1},\ldots,y_{t-p})]}{(1-\alpha_i)\mathnormal{p}(y_t|y_{t-1},\ldots,y_{t-p})},\\
\frac{\partial\ell(\boldsymbol{\theta})}{\partial\lambda} &=& \sum_{t=p+1}^{n}\frac{\mathnormal{p}(y_{t}-1|y_{t-1},\ldots,y_{t-p})-\mathnormal{p}(y_t|y_{t-1},\ldots,y_{t-p})}{\mathnormal{p}(y_t|y_{t-1},\ldots,y_{t-p})},\\
\frac{\partial\ell(\boldsymbol{\theta})}{\partial\kappa_j} &=& \sum_{t=p+1}^{n}\frac{\delta_j^{t-\tau_j}I(t\geq\tau_j)[\mathnormal{p}(y_{t}-1|y_{t-1},\ldots,y_{t-p})-\mathnormal{p}(y_t|y_{t-1},\ldots,y_{t-p})]}{\mathnormal{p}(y_t|y_{t-1},\ldots,y_{t-p})}, \end{eqnarray*} for $i=1,\ldots,p$ and $j=1,\ldots, J$. Provided that the solution of $V(\boldsymbol{\theta})=0$ exists, it yields the conditional maximum likelihood estimate $\hat{\boldsymbol{\theta}}$ of $\boldsymbol{\theta}$. The conditional information for $\boldsymbol{\theta}$ is given by
\begin{equation*}\mathcal I(\boldsymbol{\theta})=Cov\left(\left.\frac{\partial\ell(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}}\right|y_{t-1},\ldots, y_{t-p}\right) \end{equation*} and under mild regularity conditions it can be written as \begin{equation*}\mathcal{I}(\boldsymbol{\theta})=-E\left(\frac{\partial^2\ell(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}\partial\boldsymbol{\theta}^T}\right), \end{equation*} where the Hessian matrix $\partial^2\ell(\boldsymbol{\theta})/\partial\boldsymbol{\theta}\partial\boldsymbol{\theta}^T$ has elements given in Section~1 of the supplementary materials.
The availability of the score function $V(\boldsymbol{\theta})$ and conditional information matrix $\mathcal I(\boldsymbol{\theta})$ allows us to define the score test statistic \begin{equation} \label{eq:scoretest} S=V^T(\tilde{\alpha}_1,\ldots,\tilde{\alpha}_p, \tilde{\lambda},0)\mathcal I^{-1}(\tilde{\alpha}_1,\ldots,\tilde{\alpha}_p,\tilde{\lambda},0)V(\tilde{\alpha}_1,\ldots,\tilde{\alpha}_p,\tilde{\lambda},0) \end{equation} for testing the presence of a single intervention effect ($J=1$) of known type and time of occurrence, i.e. testing the null hypothesis $H_0: \kappa=0$ against the alternative $H_{1}:\kappa\neq0$. In formula (\ref{eq:scoretest}), $V(\tilde{\alpha}_1,\ldots,\tilde{\alpha}_p,\tilde{\lambda},0)$ and $\mathcal I(\tilde{\alpha}_1,\ldots,\tilde{\alpha}_p,\tilde{\lambda},0)$ are the score function and conditional information matrix evaluated at the maximum likelihood estimators $(\tilde{\alpha}_1,\ldots,\tilde{\alpha}_p,\tilde{\lambda},0)$ computed under the null hypothesis of a clean INAR($p$) process, i.e., an INAR($p$) process that does not include any intervention effects. The fact that the score test statistic does not require fitting the model under the alternative hypothesis, gives a theoretical advantage over the $F$-type statistic that instead requires fitting the model under both the null and alternative hypotheses. Nevertheless, as discussed in Section~\ref{sect:unknowntype}, the fits in the $F$-type statistic are computationally much cheaper in practice.
Under the null hypothesis $H_0:\kappa=0$, (\ref{eq:cont-inarp}) reduces to a stationary INAR($p$) process with Poisson innovations. For such a process and under certain regularity conditions that are satisfied by the Poisson law \citep[see for instance][]{franke:1993, bu:2008}, the conditional maximum likelihood estimator is consistent and asymptotically normal, \begin{equation*} \sqrt{n}(\hat{\boldsymbol{\theta}}-\boldsymbol{\theta})\xrightarrow[]{d} N(0, \mathcal{I}^{-1}(\boldsymbol{\theta})). \end{equation*} Therefore, under $H_0$ and as $n\rightarrow\infty$, the score statistic (\ref{eq:scoretest}) converges to a $\chi^2_1$-distribution and derivation of critical values for an asymptotic test of the null hypothesis of no intervention against the alternative of an intervention of a certain type $\delta$ at known time $\tau$ is straightforward: we reject the null hypothesis at a given significance level $a$ if the value of $S$ is larger than the $(1-a)$-quantile of the $\chi^2_1$-distribution.
Although the suggested approach is general, its tractability is inevitably affected by the well-known computational difficulties with conditional maximum likelihood estimation in higher-order integer-valued autoregressive models. More specifically, even in the absence of intervention effects, maximization of the conditional likelihood of model (\ref{eq:inarp}) is cumbersome due to the nested summations appearing in the transition probabilities $\mathnormal{p}(y_t|y_{t-1},\ldots,y_{t-p})$ and the numerical difficulties that can arise when summing many small probabilities \citep[see e.g.][]{pedeli:2015, lu:2018}. In the following, we mainly focus on the Poisson INAR(1) model to avoid diverting the focus to computational aspects of CML estimation that are not related to the incorporation of intervention effects in the model specification. For the first-order model, equations (\ref{eq:inarp}) and (\ref{eq:cont-inarp}) simplify to \begin{equation*} Y_t=\alpha\circ Y_{t-1}+e_t \quad \mbox{and} \quad Y_t=\alpha\circ Y_{t-1}+e_t+\sum_{j=1}^JU_{t,j},\; t\in\mathbb{N},\end{equation*} respectively. For higher-order models, we rely on the use of the $F$-type statistic whose performance is thoroughly investigated for both INAR(1) and INAR(2) processes. Results for the latter are summarized in Section~3 of the supplementary materials.
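For $p=1$ and a single intervention of known type and time, the transition probability appearing in the conditional likelihood is a finite convolution of a binomial and a Poisson term and can be evaluated directly, so the CML estimates can be obtained with a general-purpose optimizer. The sketch below (Python with NumPy and SciPy; an illustrative implementation of the $p=1$, $J=1$ case with our own function names, not the code used for the simulations reported in this paper) makes this explicit.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom, poisson

def cloglik_inar1(theta, y, tau, delta):
    """Conditional log-likelihood of the contaminated Poisson INAR(1) (p = 1, J = 1)."""
    alpha, lam, kappa = theta
    y = np.asarray(y, dtype=int)
    ll = 0.0
    for t in range(1, len(y)):                    # condition on y_1, sum over t = 2, ..., n
        time = t + 1                              # 1-based time index of y[t]
        mu = lam + (kappa * float(delta) ** (time - tau) if time >= tau else 0.0)
        i = np.arange(min(y[t - 1], y[t]) + 1)    # possible numbers of thinning survivors
        p_t = np.sum(binom.pmf(i, y[t - 1], alpha) * poisson.pmf(y[t] - i, mu))
        ll += np.log(max(p_t, 1e-300))            # guard against numerical underflow
    return ll

def cml_fit_inar1(y, tau, delta, start=(0.5, 1.0, 1.0)):
    """CML estimates of (alpha, lambda, kappa) for fixed tau and delta."""
    res = minimize(lambda th: -cloglik_inar1(th, y, tau, delta), np.asarray(start),
                   method="L-BFGS-B",
                   bounds=[(1e-6, 1 - 1e-6), (1e-6, None), (0.0, None)])
    return res.x
\end{verbatim}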
\subsection{Empirical results} \label{subsect:empres}
Tables \ref{tab:sizes100} and \ref{tab:sizes200} of the supplementary materials report empirical rejection rates when testing for an intervention effect of known type $\delta\in \{0,0.8,1\}$ at a known time point $\tau\in \{0.25n,0.5n,0.75n\}$, using the 90\%, 95\% or 99\% quantile of the $\chi_1^2$-distribution as critical value for the $F$-type and score statistics based on CLS and CML estimation, respectively.
The empirical rejection rates are obtained by analyzing 5000 time series of the same length $n\in \{100,200\}$ for each of the different INAR(1) models with $\alpha\in\{0.3,0.6,0.9\}$ and $\lambda\in \{2,5\}$.
The $F$-type statistics for innovation outliers ($\delta=0$) achieve empirical rejection rates close to the target significance levels 1\%, 5\% and 10\% we aim at already in case of series of length $n=100$ and irrespective of the time $\tau$. The results are somewhat worse for larger values of $\delta$, particularly if $\alpha$ is large. In the case of $n=100$, the $F$-test for a transient shift with $\delta=0.8$ achieves the target significance level if $\alpha=0.3$, but the rejection rate is about 2\%, 8\% and 13.5\% instead of 1\%, 5\% and 10\% if $\alpha=0.9$. For a permanent level shift ($\delta=1$), the results are worse, particularly if we test in the center of the series, $\tau=0.5n$.
The value of $\lambda$ apparently has little effect on the rejection rates, and neither do we observe an obvious pattern whether the tests have more problems when testing at early, central or late time points $\tau$, except for $\delta=1$.
The results improve for larger values of the series length $n$. In the case of $n=200$, all $F$-type statistics achieve the target significance level well if $\alpha=0.3$, and the rejection rates are not much larger than the target significance level if $\alpha=0.6$. The $F$-type statistics for a transient shift are only slightly oversized even if $\alpha=0.9$, where only the test for a permanent shift shows serious size problems, particularly when testing in the center of the series $\tau=0.5n$. We conclude that the $F$-type statistics allow simple yet promising tests for intervention effects of known type and time, and the quantiles of the $\chi_1^2$-distribution can be used as approximate critical values. Testing for permanent level shifts needs a rather long series length, particularly if the degree of autocorrelation $\alpha$ in the series is large.
The score tests achieve empirical rejection rates very close to the target significance levels 1\%, 5\% and 10\% even for $n=100$. The score test seems to be anti-conservative at the 1\% significance level but more conservative than the $F$-test at the 5\% and 10\% significance levels. Simulation results indicate that the score statistics perform better than the $F$-type statistics for transient shifts ($\delta=0.8$) and permanent level shifts ($\delta=1$), especially when the INAR(1) process is characterized by strong autocorrelation ($\alpha=0.9$). However, the $F$-type statistics achieve rejection rates closer to the targeted ones when the objective is to detect an innovation outlier ($\delta=0$). Similarly to the $F$-type statistic, the size of the score tests is little affected by the time $\tau$ of the occurrence of the intervention.
Next we examine the empirical power of these approximate significance tests for a single intervention effect at a known time point. For this purpose we analyzed 2000 time series of length $n=200$ per simulation scenario. The true size of the intervention effect is scaled to be $\kappa= 3\sqrt{\lambda}$, $\kappa= 2\sqrt{\lambda}$ or $\kappa= \sqrt{\lambda}$ for $\delta=0$, $\delta=0.8$ or $\delta=1$, since the total effect on the series increases with $\delta$. Table \ref{tab:power200-0} of the supplementary materials reports the empirical powers of the tests for the different types of intervention at a given time point $\tau\in\{0.25n,0.5n,0.75n\}$ when an innovation outlier occurs at the time point tested. We observe that both the $F$-type and score statistics for an innovation outlier possess larger power than the corresponding tests for other values of $\delta$. Nevertheless, the tests using a misspecified value of $\delta$ also have some power, particularly those using a value of $\delta$ not far from the true one.
Moreover, the score test achieves larger power than the $F$-test for an innovation outlier especially when testing at early or central time points.
In situations where a transient shift occurs, the $F$-type and score tests achieve similar empirical powers irrespective of the time $\tau$ of the intervention effect (Table \ref{tab:power200-0.8} of the supplementary materials). The tests using the correctly specified value of $\delta$ achieve the highest rejection rates, but other tests which use a similar value of $\delta$ are close. For instance, additional results not shown here suggest that the $F$-test using $\delta=0.8$ achieves almost the same rejection rate as that using $\delta=0.6$ if the latter is correct, and the same applies to the test for $\delta=0.9$ when there is a transient shift with $\delta=0.8$. The situation is slightly different if there is a transient shift with $\delta=0.9$, where besides the test with the correct $\delta=0.9$ usually only the test using $\delta=0.8$ reacts with a similarly large probability. The test for a permanent shift reacts with a high probability only if the transient shift occurs towards the end of the time series. Similarly, a permanent shift is only detected with high probability by the tests for a transient shift if it occurs towards the end of the time series. Moreover, a permanent shift of a certain height is detected best by the test with the correctly specified $\delta=1$ if it occurs in the center of the series (Table \ref{tab:power200-1} of the supplementary materials). For misspecified values of $\delta\in[0, 1)$, the score test achieves higher rejection rates than the $F$-test and can then be recommended if we want to consider only a single value of $\delta$ different from $0$ and $1$.
\section{Unknown types of interventions at known time points} \label{sect:unknowntype}
In the previous section we observed that the tests using the correct specification of $\delta$ usually give the largest power. This suggests that we can try to identify the type of an intervention at a known time point by comparing the $F$-type or score statistics for a selection of values of $\delta$, classifying a detected intervention according to the $F$-type or score statistic with the largest value. We investigate the empirical detection rates of this classification rule by analyzing 2000 time series of length $n=200$ per simulation scenario obtained by setting $\alpha\in\{0.3,0.6,0.9\}$, $\lambda\in\{2,5\}$, and $\tau\in\{50,100,150\}$. We also consider
$\delta\in\{0,0.6,0.8,0.9,1\}$ and we scale the true size of the intervention effect to be $\kappa=3\sqrt{\lambda}$, $\kappa=2.5\sqrt{\lambda}$, $\kappa=2\sqrt{\lambda}$, $\kappa=1.5\sqrt{\lambda}$ or $\kappa=\sqrt{\lambda}$ for $\delta=0$, $\delta=0.6$, $\delta=0.8$, $\delta=0.9$ and $\delta=1$, respectively.
When the classification rule is applied to data without interventions, some intervention is detected at a given time point in about $3\beta$\%-$4\beta$\% of the time series if all tests (either $F$-type statistics or score statistics) are applied with a nominal significance level of $\beta$\%; see Table \ref{tab:classif200} of the supplementary materials. The overall significance levels achieved by the score statistics when testing for each of the five values of $\delta\in\{0,0.6,0.8,0.9,1\}$ at a given nominal significance level are generally lower than the corresponding significance levels achieved by the $F$-type statistics. Such differences become most obvious when $\alpha=0.9$. In this case, the score test achieves an overall significance level of about 15\% at a nominal 5\% significance level while the corresponding significance level achieved by the $F$-test is about 20\%. The fact that the sizes of the five tests performed for different values of $\delta$ add up to almost 20\% indicates that we are close to the situation in which a Bonferroni correction would be appropriate.
Table \ref{tab:classif200-0} of the supplementary materials reports the classification results for the situation of an innovation outlier. Obviously, innovation outliers are identified correctly in most cases, although they are occasionally classified as a transient shift with $\delta=0.6$, with the misclassification rates being somewhat higher for the $F$-type statistics than for the score statistics.
The results look somewhat different for transient shifts; see Tables \ref{tab:classif200-6.1}-\ref{tab:classif200-9.1} of the supplementary materials. Moderately large transient shifts are classified by both the $F$-type statistic and score statistic into one of the two categories with adjacent values of $\delta$ with about the same probability as for the true value of $\delta$. That is, a moderately large transient shift with $\delta=0.6$ is often confused with an innovation outlier or a transient shift with $\delta=0.8$. Similarly, a moderately large transient shift with $\delta=0.8$ is often considered to be a transient shift with $\delta=0.6$ or $\delta=0.9$. It is worth noting that although confusion with values of $\delta$ which are either smaller or larger than the true one seems equally probable with the $F$-type statistic, misclassification is rather in favor of smaller values with the score statistic. The situation is somewhat different for transient shifts with $\delta=0.9$. Simulation results not shown here suggest that transient shifts are confused with permanent shifts only if they occur late in the series, while Table \ref{tab:classif200-9.1} indicates that the confusion with a transient shift with $\delta=0.8$ is along the same lines as before. The identification of permanent shifts seems not to pose misclassification issues according to the results displayed in Table \ref{tab:classif200-1} of the supplementary materials.
The problem of possible wrong classification is less pronounced when we consider transient shifts with a larger size, so that we can try to estimate a suitable value of $\delta$ by comparing the $F$-type or score statistics for a selection of values of $\delta$. Such a rule should work nicely at least for large effect sizes. Note that in the literature on the detection of intervention effects within ARMA or INGARCH models usually only the cases $\delta=0$ and $\delta=1$ corresponding to innovation outliers and permanent shifts are considered, along with a single value of $\delta$ like $\delta=0.8$ for transient shifts. This might be due to the misclassification problem outlined above and the additional complexity arising when considering multiple values of $\delta$. In our case, the $F$-type statistics based on CLS estimation are simple and require hardly any computational effort, so that we can consider several values of $\delta$ easily. The computational cost is larger with the score statistic, especially when the stationary mean of the process is large (see Table~\ref{tab:cost}). However, it can be reduced substantially by a reasonable choice of the truncation parameter $m$ involved in the computation of the conditional information matrix as described in Section~1 of the supplementary materials.
\begin{table} \caption{\label{tab:cost} Average CPU time required for the computation of the score and $F$-type statistics for the detection of a transient shift ($\delta=0.8$) at time $\tau=100$ in a clean INAR(1) process of length $n=200$. Results are based on 100 simulation experiments implemented on a Windows 10 Pro with a 3.6 GHz Intel Core i7-7700 processor and 16.0 GB RAM memory.} \begin{center} {\footnotesize
\begin{tabular}{cc|rr} \hline & & \multicolumn{2}{c}{Average computational cost (secs$\times 1000$)}\\ $\alpha$ & $\lambda$ & F-type statistic & Score statistic\\ \hline 0.3 & 2 & 0.11 & 45.25\\ 0.3 & 5 & 0.25 & 91.19\\ 0.6 & 2 & 0.14 & 67.83\\ 0.6 & 5 & 0.15 & 175.53\\ 0.9 & 2 & 0.15 & 324.69\\ 0.9 & 5 & 0.14 & 1698.61\\ \hline \end{tabular}} \end{center} \end{table}
\section{Unknown types of interventions at unknown time points} \label{sect:unknowntime}
Now we take our considerations another step further and look at situations where we know neither the type nor the time of a possible intervention. For this scenario, we consider the maximum test statistics arising from calculating the $F$-type or score tests for a set of candidate time points $\tau$ and then selecting the maximum of the resulting statistics.
First we consider the results obtained from analyzing 10000 clean INAR(1) series for different parameter settings $\alpha\in\{0.3,0.6,0.9\}$, $\lambda\in\{2,5\}$, and series lengths \linebreak $n\in\{100,200\}$. Figures~\ref{fig:maxstatistics0-100} and~\ref{fig:maxstatistics0-200} display boxplots of these maximum statistics for each of several values of $\delta\in\{0,0.8,1\}$ individually as well as with an additional maximization with respect to $\delta$.
Apparently the distributions of the maximum statistics under the null hypothesis are not very different for the different values of $\delta$. The main differences are that the maximum statistics take somewhat smaller values for larger values of $\delta$. This can be explained by the different degrees of dependence among the test statistics for the different time points $\tau$, with stronger dependencies for larger values of $\delta$. Nevertheless we expect the maximum test statistics to provide information on the type of an intervention, since these differences are not very large, especially for the $F$-type statistics.
Approximate critical values for an overall test on any type of intervention effect can be derived from the empirical quantiles of the maximum $F$-type or score statistics with additional maximization with respect to $\delta$. In the case of $n=100$, the 90\%, 95\% and 99\% quantiles of the overall maximum $F$-type statistics range from about 15.3 to 17.4, from 17.3 to 20.3, and from 22.4 to 26.6 for the different parameter combinations considered here, with the largest quantiles arising for $(\alpha,\lambda)=(0.3,2)$. We thus can use 17, 20 and 27 as critical values for approximate significance $F$-tests for an unknown intervention at an unknown time point at a 10\%, 5\% or 1\% significance level. The range of the 90\%, 95\% and 99\% quantiles of the overall maximum score statistics is wider with values between 14.3 and 21.9, between 16.7 and 25.6, and between 21.7 and 34.8, respectively. In this case, the largest quantiles arise from the somewhat extreme scenario $(\alpha,\lambda)=(0.9,2)$. To perform approximate score tests for an unknown intervention at an unknown time point at a 10\%, 5\% or 1\% significance level, we can thus use 22, 26 and 35 as critical values. In case of $n=200$, the empirical percentiles of the maximum $F$-type statistics range from 15.9 to 19.2, from 17.8 to 21.9, and from 22.2 to 27.8, so that we can use 19, 22 and 28 as approximate critical values. The corresponding empirical percentiles of the maximum score statistics range from 16.8 to 26, from 19.3 to 30.4 and from 25.3 to 40.2, so that 26, 30 and 40 can serve as approximate critical values. Additional simulation results in Section 3 of the supplementary materials indicate that the same critical values can be used for the $F$-type statistics in case of INAR(2) models.
Deriving critical values from the empirical percentiles of the maximum test statistics obviously has some drawbacks. Firstly, the resulting tests will usually be somewhat conservative within the range of situations investigated previously. Secondly, their performance is not guaranteed for parameter configurations, sample sizes or higher-order models outside this range. To overcome such limitations we can employ a parametric bootstrap, as in \cite{fokianos:2010}. This procedure requires analyzing many artificial time series generated from the model fitted under the null hypothesis and comparing the values of the test statistics for the observed real data to those obtained for the artificial data. If the real data do not contain any interventions, the corresponding value of the maximum test statistic should be comparable to those of the bootstrapped series. This comes at the price of a much higher computational cost. The parametric bootstrap procedure is discussed further in Section~\ref{sect:iterative}, where it is used in a couple of illustrative examples.
\begin{figure}
\caption{Boxplots of the maximum $F$-type ($F$) and score ($S$) test statistics, maximized with respect to the candidate time point $\tau$ of a change when $n=100$.}
\label{fig:maxstatistics0-100}
\end{figure}
\begin{figure}
\caption{Boxplots of the maximum $F$-type ($F$) and score ($S$) test statistics, maximized with respect to the candidate time point $\tau$ of a change when $n=200$.}
\label{fig:maxstatistics0-200}
\end{figure}
Next we inspect the performance of the classification rules when applied to time series containing an intervention effect.
Classification is based on the maximum test statistics, whose significance is assessed using the critical values derived previously. Figure \ref{fig:classif100-0} depicts the classification results when the procedure is applied to time series of length $n=100$ containing an innovation outlier at time $\tau=50$. For this we generate 2000 time series for each of the parameter combinations $(\alpha,\lambda)$ considered before and each intervention size $\kappa=k\sqrt{\lambda}$, $k=0,\ldots,12$. Apparently, time series containing an innovation outlier are classified quite reliably by both the $F$-type and score test statistics, except if the outlier is very small.
For INAR(1) processes characterized by rather weak autocorrelation ($\alpha= 0.3$), the $F$-type and score test statistics attain very similar classification rates. As the autocorrelation of the series becomes larger, the score test statistic seems to be superior, but this is partly explained by the different empirical sizes achieved by the two tests. For example, for $\alpha=0.9$ and $\lambda=2$, the score test provides substantially higher classification rates than the $F$-test, but this is in part due to the latter showing quite conservative behavior, its empirical size then being only about $0.6\%$.
Figure \ref{fig:classif100-08} illustrates results for the classification of a transient shift with $\delta=0.8$. The $F$-type statistic provides results that look particularly good for all parameter configurations.
The performance of the score test statistic depends on the degree of autocorrelation in the series, with better performance for strongly autocorrelated time series data.
In particular, the classification rates are very reliable for $\alpha=0.9$ but the number of misclassified cases increases as the autocorrelation weakens. For $\alpha=0.6$ and even more notably for $\alpha=0.3$, many cases are classified as innovation outliers instead of transient shifts. In such situations and especially for small values of $\alpha$, the classification rates tend to increase with the intervention size up to $\kappa=7\sqrt{\lambda}$. Thereafter, there is a decreasing tendency with more than half of the cases being misclassified as innovation outliers when $\kappa=11\sqrt{\lambda}$ or $\kappa=12\sqrt{\lambda}$.
These problems do not exist for permanent shifts in the center of the series that are very rarely classified as one of the other types of intervention effects by both the $F$-type and score test statistics. Figure \ref{fig:classif100-1} highlights the role of the autocorrelation parameter $\alpha$ and the intervention size $\kappa$ in the performance of the score test statistic and indicates the cases where it is preferable to the $F$-type statistic. Specifically, when $\alpha=0.3$, the $F$-type statistic achieves slightly higher classification rates than the score test statistic, independently of the intervention size. For $\alpha=0.6$, the score test statistic is preferable to the $F$-type statistic for medium intervention sizes, that is for $\kappa=2\sqrt{\lambda}$ to $\kappa=6\sqrt{\lambda}$. For lower or higher intervention sizes, the $F$-type statistic performs better. A similar pattern is observed for $\alpha=0.9$ but with a greater outperformance by the score test statistic for medium intervention sizes. The classification rates of the score test statistic are not available for $\kappa>6\sqrt{\lambda}$ and $\alpha=0.9$ (lower panel of Figure \ref{fig:classif100-1}) since the extremely large counts that appear in the time series render the Poisson distribution a poor parametric choice and cause the Fisher information matrix $\mathcal{I}(\boldsymbol{\theta})$ to be nearly singular. In other words, in such cases the Poisson model is seriously misspecified and the conditional maximum likelihood estimates are inconsistent with unreliable standard errors. Such problems do not occur with the $F$-type statistic which achieves classification rates close to $100\%$ when both the degree of autocorrelation and the intervention size are high.
Our empirical findings on the role of the degree of autocorrelation on the performance of the score and $F$-type statistics are consistent with previous results about the performance of the conditional least squares and maximum likelihood estimators of the Poisson INAR(1) model. In particular, \cite{alosh:1987} and \cite{brannas:1994} observed that the biases of the conditional least squares estimators increase as $\alpha$ takes higher values with a much higher increase in the bias of $\hat{\lambda}^{\mbox{\tiny{CLS}}}$ than that of $\hat{\alpha}^{\mbox{\tiny{CLS}}}$. In contrast, the conditional maximum likelihood estimates do not show such a behaviour. In particular, the bias of $\hat{\alpha}^{\mbox{\tiny{CML}}}$ increases as $\alpha$ takes values up to around 0.3 and then it starts to decrease until reaching a negligible bias when $\alpha=0.9$. The bias of $\hat{\lambda}^{\mbox{\tiny{CML}}}$ remains low and almost stable for different values of $\alpha$. The observed tendency of our classification rules to over- or underestimate $\delta$ can thus be better explained by accounting for the aforementioned results and the strong negative correlation between the estimators of $\alpha$ and $\lambda$, especially when $\alpha$ is large \citep[see][Figure 3]{alosh:1987}.
\begin{figure}
\caption{Classification results when applying the maximum $F$-type (grey lines) and score test statistics (black lines) to time series of length $n=100$ containing an innovation outlier of increasing size $\kappa=0,\sqrt{\lambda},\ldots,12\sqrt{\lambda}$ at time point $\tau=50$. Classification as $\delta=0$ (dotted), $\delta=0.8$ (dashed), $\delta=1$ (solid).}
\label{fig:classif100-0}
\end{figure}
\begin{figure}
\caption{Classification results when applying the maximum $F$-type (grey lines) and score test statistics (black lines) to time series of length $n=100$ containing a transient shift with $\delta=0.8$ of increasing size $\kappa=0,\sqrt{\lambda},\ldots,12\sqrt{\lambda}$ at time point $\tau=50$. Classification as $\delta=0$ (dotted), $\delta=0.8$ (dashed), $\delta=1$ (solid).}
\label{fig:classif100-08}
\end{figure}
\begin{figure}
\caption{Classification results when applying the maximum $F$-type (grey lines) and score test statistics (black lines) to time series of length $n=100$ containing a permanent shift with $\delta=1$ of increasing size $\kappa=0,\sqrt{\lambda},\ldots,12\sqrt{\lambda}$ at time point $\tau=50$. Classification as $\delta=0$ (dotted), $\delta=0.8$ (dashed), $\delta=1$ (solid).}
\label{fig:classif100-1}
\end{figure}
\section{Iterative detection of intervention effects} \label{sect:iterative} In real data problems, time series can contain more than one intervention throughout the observation period. For the detection, classification and elimination of multiple intervention effects we follow the stepwise procedure of \citet{fokianos:2010}, adapted to the INAR(1) framework. The steps of this iterative detection approach are described below, setting $j=1$ and $Y_t^{(j)}=Y_t$, $t=1,\ldots,n$, for initialization of the algorithm: \begin{enumerate} \item Fit an INAR(1) model to the data $\{Y_t^{(j)}, t=1,\ldots,n\}$. \item Test for a single intervention of any type at any time point by employing (\ref{eq:cont-inarp}) and using the maximum of the $F$-type or score test statistics. At this step, we suggest using the parametric bootstrap procedure discussed briefly in Section~\ref{sect:unknowntime}. The individual steps for its implementation are described in detail in Table~\ref{tab:boot}. \item If there is no significant result, the iterative detection procedure is terminated and the series $Y_1^{(j)},\ldots,Y_n^{(j)}$ is considered as clean. Otherwise: \begin{enumerate} \item Fit a contaminated INAR(1) model (\ref{eq:cont-inarp}) by choosing $\delta$ according to the type of intervention identified in the previous step. Let $\hat{\kappa}$ be the estimated size of the intervention effect and $\hat{\tau}$ its time point of occurrence. \item For $t\geq\hat{\tau}$, sequentially estimate the effect of the intervention on the observation $Y_t^{(j)}$ by the rounded value \[\hat{U}_t=\left\lfloor\frac{\hat{\kappa}\delta^{t-\hat{\tau}}}{\hat{\alpha}Y_{t-1}^{(j+1)}+\hat{\lambda}+\hat{\kappa}\delta^{t-\hat{\tau}}}Y_t^{(j)}\right\rfloor\] and correct the corresponding observation for the estimated intervention effect by setting \[Y_t^{(j+1)}=Y_t^{(j)}-\hat{U}_t.\]
Note that for $t<\hat{\tau}$, $Y_t^{(j+1)}=Y_t^{(j)}$ so that $\hat{U}_{\hat{\tau}}=\left\lfloor\hat{\kappa}Y_{\hat{\tau}}^{(j)}/(\hat{\alpha}Y_{\hat{\tau}-1}^{(j)}+\hat{\lambda}+\hat{\kappa})\right\rfloor$. \end{enumerate} \item Set $j=j+1$ and return to step 1. \end{enumerate} The iterative procedure is continued until no further interventions are detected. The correction in step 3.b) is adequate if the type of intervention and time point of its occurrence have been correctly identified. The estimated intervention effect $\hat{U}_t$ is actually the rounded estimate of the conditional expectation of the contaminating process $U_t$ in (\ref{eq:cont-inarp}) given $Y_t$ and the $\sigma$-field $\mathcal{F}_{t-1}=\{Y_{t-1}, U_{t-1}\}$: \begin{eqnarray*}
E(U_t|Y_t=y,\mathcal{F}_{t-1})&=&\sum_{u=0}^{y}uP(U_t=u|Y_t=y,\mathcal{F}_{t-1})\\
&=&\sum_{u=0}^yu\frac{P(U_t=u, Y_t^{\mbox{\tiny{clean}}}=y-u|\mathcal{F}_{t-1})}{P(Y_t=y|\mathcal{F}_{t-1})}\\ &=&\sum_{u=0}^yu\frac{(\kappa\delta^{t-\tau})^u\exp(-\kappa\delta^{t-\tau})/u!(\alpha Y_{t-1}+\lambda)^{y-u}\exp(-\alpha Y_{t-1}-\lambda)/(y-u)!}{(\alpha Y_{t-1}+\lambda+\kappa\delta^{t-\tau})^y\exp(-\alpha Y_{t-1}-\lambda-\kappa\delta^{t-\tau})/y!}\\ &=&\sum_{u=0}^y u\left(\begin{array}{c}y\\u\end{array}\right)\left(\frac{\kappa\delta^{t-\tau}}{\alpha Y_{t-1}+\lambda+\kappa\delta^{t-\tau}}\right)^u\left(\frac{\alpha Y_{t-1}+\lambda}{\alpha Y_{t-1}+\lambda+\kappa\delta^{t-\tau}}\right)^{y-u}\\ &=&\left(\frac{\kappa\delta^{t-\tau}}{\alpha Y_{t-1}+\lambda+\kappa\delta^{t-\tau}}\right)y, \end{eqnarray*} where $Y_t^{\mbox{\tiny{clean}}}$ denotes the $t$-th observation from a clean INAR(1) process.
Note that $U_t|Y_t=y,\mathcal{F}_{t-1}$ is binomially distributed with parameters $y$ and $\kappa\delta^{t-\tau}/(\alpha Y_{t-1}+\lambda+\kappa\delta^{t-\tau})$.
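The correction in step 3.b) can be coded in a few lines; the sketch below (in base \texttt{R}, with variable names chosen by us) assumes that estimates of $\alpha$, $\lambda$, $\kappa$, $\tau$ and $\delta$ are already available and applies the rounded conditional-mean correction sequentially to one series.
\begin{verbatim}
## Correct a series for one detected intervention (step 3.b of the algorithm).
## y: observed counts; alpha, lambda, kappa, tau, delta: fitted values (tau >= 2).
correct_intervention <- function(y, alpha, lambda, kappa, tau, delta) {
  y_new <- y
  for (t in seq(tau, length(y))) {
    effect <- kappa * delta^(t - tau)
    ## rounded conditional mean of U_t given Y_t and the corrected past value
    u_hat <- floor(effect / (alpha * y_new[t - 1] + lambda + effect) * y_new[t])
    y_new[t] <- y_new[t] - u_hat
  }
  y_new
}
\end{verbatim}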
\begin{table} \caption{\label{tab:boot}The parametric bootstrap procedure for the identification of unknown types of interventions at unknown time points.} {\small \begin{center}
\begin{tabular}{|rl|} \hline 1. & Fit an INAR model to the observed time series assuming that there are no interventions.\\ \hline 2. & Generate a large number of, say, $B=500$ bootstrap replicates from the fitted INAR \\ & model with the same parameters as those estimated for the observed real data.\\ \hline 3. & Calculate the maximum test statistics for the original and for the $B$ bootstrap series.\\ \hline 4. & Compute the number $N$ of bootstrap replicates for which the maximum test statistic\\ & is not smaller than its value computed by the original data.\\ \hline 5. &Compute the $p$-value $(N+1)/(B+1)$.\\ \hline 6. & Classify the type of the intervention according to the minimal $p$-value, with preference\\ & given to interventions with larger value of $\delta$ in case of equality.\\ \hline \end{tabular} \end{center} } \end{table}
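Steps 2--5 of Table~\ref{tab:boot} can be summarized by the following sketch in base \texttt{R}; here \texttt{simulate\_inar1()} and \texttt{max\_intervention\_stat()} are illustrative placeholders for the simulation of a clean INAR(1) series and for the maximization of the test statistic described in Section~\ref{sect:unknowntime}, so the code indicates the structure of the procedure rather than a definitive implementation.
\begin{verbatim}
## Parametric bootstrap p-value for the maximum test statistic (steps 2-5).
## fit: list with the null-model estimates alpha and lambda;
## simulate_inar1() and max_intervention_stat() are illustrative placeholders.
boot_pvalue <- function(y, fit, taus, deltas, B = 500, type = "F") {
  stat_obs <- max_intervention_stat(y, taus, deltas, type)$max_stat
  stat_boot <- replicate(B, {
    y_star <- simulate_inar1(length(y), fit$alpha, fit$lambda)
    max_intervention_stat(y_star, taus, deltas, type)$max_stat
  })
  N <- sum(stat_boot >= stat_obs)   # bootstrap statistics at least as extreme
  (N + 1) / (B + 1)
}
\end{verbatim}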
\subsection{Simulation example} \label{sect:simulex}
We consider a simulated time series of length $n=200$ generated from a contaminated Poisson INAR(1) model of the form $Y_t=\alpha\circ Y_{t-1}+e_t+U_{t,1}+U_{t,2}$, where $e_t\sim Pois(\lambda)$, $U_{t,j}\equiv0$ for $t=0,\ldots,\tau_j-1$ and $U_{t,j}\sim Pois(\kappa_j\delta_j^{t-\tau_j})$ for $t=\tau_j,\ldots,n$, $j=1,2$. We set $(\alpha,\lambda)=(0.5,3)$, with the interventions consisting of two transient shifts of the same size $\kappa_1=\kappa_2=\kappa=10$ at times $\tau_1=50$ and $\tau_2=150$ with $\delta_1=0.6$ and $\delta_2=0.9$, respectively (see Figure \ref{fig:sim}); a simulation sketch in base \texttt{R} is given below. We apply the previously described stepwise detection algorithm to test for the existence of any type of outlier using $\delta\in\{0,0.6,0.8,0.9,1\}$ at any time point. Step (2) of the iterative detection algorithm is implemented using the parametric bootstrap procedure of Table~\ref{tab:boot}. The conditional least squares and maximum likelihood estimates obtained at each step of the stepwise procedure are summarized in Table \ref{tab:sim-ex}.
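The data-generating mechanism of this example can be reproduced with a few lines of base \texttt{R}; the function below (with names chosen by us) simulates a Poisson INAR(1) series contaminated by an arbitrary number of interventions.
\begin{verbatim}
## Simulate a contaminated Poisson INAR(1) series:
## Y_t = alpha o Y_{t-1} + e_t + sum_j U_{t,j},  e_t ~ Pois(lambda),
## U_{t,j} ~ Pois(kappa_j * delta_j^(t - tau_j)) for t >= tau_j.
simulate_contaminated_inar1 <- function(n, alpha, lambda, kappa, delta, tau) {
  y <- numeric(n)
  y[1] <- rpois(1, lambda / (1 - alpha))   # start near the stationary mean
  for (t in 2:n) {
    u <- 0
    for (j in seq_along(kappa))            # intervention effects active at time t
      if (t >= tau[j]) u <- u + rpois(1, kappa[j] * delta[j]^(t - tau[j]))
    y[t] <- rbinom(1, y[t - 1], alpha) + rpois(1, lambda) + u
  }
  y
}

set.seed(1)
y_sim <- simulate_contaminated_inar1(n = 200, alpha = 0.5, lambda = 3,
                                     kappa = c(10, 10), delta = c(0.6, 0.9),
                                     tau = c(50, 150))
\end{verbatim}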
When we fit a Poisson INAR(1) model to the data assuming no interventions, we obtain that the conditional least squares and maximum likelihood estimates are $(\hat{\alpha}^{\mbox{\tiny{CLS}}},\hat{\lambda}^{\mbox{\tiny{CLS}}})=(0.60, 2.95)$ and $(\hat{\alpha}^{\mbox{\tiny{CML}}},\hat{\lambda}^{\mbox{\tiny{CML}}})=(0.46, 3.90)$, respectively. Then, we test for unknown types of interventions at unknown time points using both the $F$-type and score test statistics. At the first iteration, both test statistics correctly identify a transient shift with $\delta=0.9$ at time $\tau=150$. After correcting the data according to step 3.b), the second intervention corresponding to $\tau=50$ is also detected by the two test statistics and is correctly classified as a transient shift, although with $\delta=0.8$ instead of $\delta=0.6$. Correcting anew the data, the $F$-type statistic detects an additional innovation outlier at time $\tau=77$. The final conditional least squares and maximum likelihood estimates are $(\hat{\alpha}^{\mbox{\tiny{CLS}}},\hat{\lambda}^{\mbox{\tiny{CLS}}})=(0.45, 3.58)$ and $(\hat{\alpha}^{\mbox{\tiny{CML}}},\hat{\lambda}^{\mbox{\tiny{CML}}})=(0.42, 3.79)$, respectively.
Since the data are generated by a contaminated INAR(1) model, evaluation of the iterative detection procedure should be based on the ability of the test statistics to identify the correct types of intervention at the correct time points and on the efficiency of the corresponding parameter estimators.
For such an assessment, we repeated our experiment several times with data generated from the same model and with the same types and sizes of intervention effects. Our results indicate that the suggested stepwise detection procedure correctly identifies the intervention effects, apart from some additional outliers being occasionally identified. This is not surprising since even an uncontaminated INAR(1) process can occasionally exhibit some relatively large values. Moreover, the effects $U_t$ of the same intervention at different time points are independent and thus it is hard to estimate all of them well. In some instances, additional intervention effects were found right after the occurrence of the true ones. This can also happen if the size of the true intervention is underestimated or if the true intervention is not detected at all.
We also observed that conditional maximum likelihood and conditional least squares behave similarly in terms of efficiency of the stationary mean estimator.
Finally, we should note that the larger the persistence of a transient shift, the greater the ability of both tests to correctly identify it. In our experiments, the test statistics usually identified the transient shift occurring at $\tau=50$, but they overestimated $\delta$. This overestimation can partly be explained by the convention in the sixth step of the bootstrap procedure (see Table~\ref{tab:boot}).
\begin{figure}
\caption{Simulated time series with two transient shifts at times $\tau_1=50$ and $\tau_2=150$ (solid line) and the series after correction for the intervention effects as estimated by the $F$-type statistic (dotted line) and the score test statistic (dashed line), which are quite similar here.}
\label{fig:sim}
\end{figure}
\begin{table} \caption{\label{tab:sim-ex} Conditional least squares and maximum likelihood estimates obtained at each step of the stepwise procedure for the detection and elimination of intervention effects in the simulated time series. The final estimates of the Poisson INAR(1) model parameters are shown in bold. The $F$-type and score test statistics are used with conditional least squares and maximum likelihood estimation, respectively. The true parameter values are $\alpha=0.5$ and $\lambda=3$ and there are outliers with $\kappa=10$ and $\delta=0.6$ at $\tau=50$ as well as $\kappa=10$ and $\delta=0.9$ at $\tau=150$.} {\small \begin{center}
\begin{tabular}{cc|cr|rr|rrr}
\hline Iteration & Step & \multicolumn{2}{c|}{Test statistic} &\multicolumn{2}{c|}{Parameter estimates} & \multicolumn{3}{c}{Outlier}\\
& & Type & \multicolumn{1}{c|}{Bootstrap p-value} & $\hat{\alpha}$ & $\hat{\lambda}$ & $\hat{\kappa}$ & $\hat{\tau}$ & $\hat{\delta}$\\ \hline \rowcolor{lightgray} 1 & 1 & $F$-type & & 0.60 & 2.95 & & & \\ & & score & & 0.46 & 3.90 & & & \\ \rowcolor{lightgray}
& 2-3 & $F$-type & $<0.001$ & 0.41 & 3.82 & 8.93 & 150 & 0.9\\
& & score & $<0.001$ & 0.39 & 3.97 & 9.37 & 150 & 0.9\\
\hline
\rowcolor{lightgray}
2 & 1 & $F$-type & & 0.46 & 3.62 & & & \\
& & score & & 0.42 & 3.89 & & & \\
\rowcolor{lightgray}
& 2-3& $F$-type & $<0.001$ & 0.38 & 3.97 & 7.19 & 50 & 0.8\\
& & score & $<0.001$ & 0.40 & 3.86 & 6.66 & 50 & 0.8\\
\hline
\rowcolor{lightgray}
3 & 1 & $F$-type & & 0.43 & 3.72 & & & \\
& & score & & \textbf{0.42} & \textbf{3.79} & & & \\
\rowcolor{lightgray}
& 2-3 & $F$-type & 0.02 & 0.44 & 3.62 & 8.18 & 77 & 0\\
& & score & $0.148$ & - & - & - & - & -\\ \hline
\rowcolor{lightgray}
4 & 1 & $F$-type & & \textbf{0.45} & \textbf{3.58} & & & \\
& & score & & - & - & & & \\
\rowcolor{lightgray}
& 2-3 & $F$-type & 0.164 & - & - & - & - & -\\
& & score & - & - & - & - & - & -\\ \hline
\hline \end{tabular} \end{center} } \end{table}
\subsection{Brucellosis in Greece}
Brucellosis is a common disease worldwide, representing a serious public health problem in many countries, especially those around the Mediterranean Sea. The infection can be directly transmitted from infected animals and contaminated tissues to humans via inhalation or through skin lesions which is an occupational risk for veterinarians, abattoir workers and farmers, particularly in endemic regions. However, the ingestion of contaminated raw milk and dairy products poses a major public health risk. In milk and products thereof, brucella is controlled most effectively by pasteurization or sterilization before marketing or by further processing into dairy products. Despite intense efforts to eliminate brucellosis in Europe, the disease still occurs in Portugal, Spain, France, Italy, the Balkans, Bulgaria, and Greece \citep{karagiannis:2012, rossetti:2017}.
Figure \ref{fig:grdat} illustrates the monthly number of human brucellosis cases in Greece for the years 2007-2020 ($n=168$), as recorded in the Surveillance Atlas of the European Centre for Disease Prevention and Control (ECDC). The data display a seasonal pattern and an outbreak of disease cases from May to July 2008. Indeed, in spring 2008, the Hellenic Center for Disease Control and Prevention was notified about human brucellosis cases in Thassos, a Greek island that had been up to that point under a brucellosis eradication programme. During the subsequent days, more cases were notified from the island and an outbreak was verified \citep{karagiannis:2012}.
To investigate whether the suggested stepwise detection algorithm is able to effectively detect the intervention effects in the time series of brucellosis cases, we start by fitting a Poisson INAR(1) regression model of the form $Y_t=\alpha\circ Y_{t-1}+e_t$. The arrival process $(e_t)$ is Poisson distributed with parameter $\lambda_t$ accounting for annual seasonality and trend, that is \[\log(\lambda_t)=\beta_0+\beta_1\sin\left(\frac{2\pi t}{12}\right)+\beta_2\cos\left(\frac{2\pi t}{12}\right)+\beta_3\frac{t}{168}, \quad t=1,\ldots,168.\]
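For illustration, the covariates entering $\log(\lambda_t)$ can be set up as follows in base \texttt{R} (variable names are ours); given estimated coefficients, the arrival means $\lambda_t$ are then obtained by exponentiating the linear predictor.
\begin{verbatim}
## Covariates for log(lambda_t) = beta0 + beta1*sin + beta2*cos + beta3*t/168
n <- 168
t_idx <- seq_len(n)
X <- cbind(sin_term = sin(2 * pi * t_idx / 12),
           cos_term = cos(2 * pi * t_idx / 12),
           trend    = t_idx / n)
## With a coefficient vector beta = c(beta0, beta1, beta2, beta3):
## lambda_t <- exp(drop(cbind(1, X) %*% beta))
\end{verbatim}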
After fitting the Poisson INAR(1) regression model to the data, we test for different types of interventions using the iterative procedure described earlier in Section~\ref{sect:iterative}. For this purpose, we employ the maximum score test statistic since, contrary to conditional least squares estimation, the non-linear form of $\lambda_t$ does not complicate conditional maximum likelihood estimation of the model parameters.
Table \ref{tab:grdat} summarizes the results of the iterative detection procedure. To decide about the approximate significance of the score test statistic, we rely on the parametric bootstrap procedure described in Section~\ref{sect:iterative}. In the first iteration, our classification rule decides in favor of an innovation outlier at time $t=17$, which corresponds to May 2008. The detected intervention effect is significant at the $10\%$ significance level ($p$-value=0.078) and its estimated size is 52.828. After elimination of its effect from the time series, no further interventions are identified, since the score test statistic is no longer significant in the next step.
Fitting the full model with the detected innovation outlier to the original data, we conclude with the enlarged Poisson INAR(1) regression model for the number of brucellosis human cases:
\begin{eqnarray*} Y_t&=&0.274\circ Y_{t-1}+e_t+U_{t}, \quad t=1,\ldots, 168,\\ e_t&\sim& Pois(\lambda_t), \quad \log(\lambda_t)=2.184+0.175\sin\left(2\pi t/12\right)-0.553\cos\left(2\pi t/12\right)-0.758\, t/168,\\ U_{t}&\sim& Pois(52.92\, I(t=17)). \end{eqnarray*}
The parameter estimates and the corresponding standard errors obtained with the enlarged (contamination) model are summarized in Table \ref{tab:grmod}. For comparison purposes we also report the results obtained by fitting
a contaminated log-linear Poisson autoregressive model of order 1 \citep{fokianos:2012}. For the latter, we assume that $Y_t|\mathcal{F}_{t-1}\sim\mbox{Poisson}(\lambda_t)$, where \[\log(\lambda_t)=\beta_0+\beta_1\sin\left(\frac{2\pi t}{12}\right)+\beta_2\cos\left(\frac{2\pi t}{12}\right)+\beta_3\frac{t}{168}+\gamma\log(Y_{t-1}+1)+\sum_{j=1}^J\kappa_j\delta_{j}^{t-\tau_j}I(t\geq\tau_j),\] and we use the \texttt{R} package \texttt{tscount} for model fitting and detection of intervention effects \citep{liboschik:2017}. Starting from a first-order log-linear model, we detect a transient shift with $\delta=0.6$ at time 17 (May 2008) and a level shift at time 67 (July 2012), both being significant at $1\%$ significance level. The log-intensity process of the fitted model with the detected interventions at their respective times is given by \begin{eqnarray*} \log(\lambda_t)&=&1.904+0.143\sin\left(\frac{2\pi t}{12}\right)-0.424\cos\left(\frac{2\pi t}{12}\right)-1.450\frac{t}{168}+0.234\log(Y_{t-1}+1)\\ &&+1.599\cdot 0.6^{t-17}I(t\geq17)+0.649I(t\geq67) \end{eqnarray*}
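For completeness, a sketch of the \texttt{tscount} analysis is given below; it should be read as an outline only, since the exact argument names used here (\texttt{taus}, \texttt{deltas} and \texttt{B}) are assumptions to be checked against the package documentation of \citet{liboschik:2017}, and \texttt{y} and \texttt{X} denote the brucellosis counts and the covariate matrix of the seasonal regression.
\begin{verbatim}
## Outline of the log-linear analysis with tscount (argument names to be
## verified against the package documentation); y: counts, X: covariates.
library(tscount)
fit <- tsglm(y, model = list(past_obs = 1), xreg = X,
             link = "log", distr = "poisson")
## iterative detection of multiple interventions of unknown type and time
iv <- interv_multiple(fit, taus = 2:length(y),
                      deltas = c(0, 0.6, 0.8, 0.9, 1), B = 500)
iv
\end{verbatim}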
The predictions from the two contamination models are also plotted in Figure~\ref{fig:grdat}, illustrating that they both fit the data well and successfully accommodate the disease outbreak.
The correlograms and partial correlograms of the residuals obtained after fitting the two contaminated time series regression models to the data are shown in Figure~\ref{fig:grres}. The empirical autocorrelation and partial autocorrelation functions of the residuals obtained from the Poisson INAR(1) model (left panel) do not exhibit any serial correlation that has not been accounted for by the model. In contrast, the log-linear Poisson autoregression fails to adequately account for serial correlation at lags 2 and 3 (right panel), indicating an improved fit of the contaminated INAR(1) model for this particular dataset.
\begin{figure}
\caption{Monthly number of brucellosis human cases in Greece for the years 2007–2020: time series (solid line), fitted Poisson INAR(1) regression model with interventions (bold solid in black) and fitted log-linear Poisson autoregressive model with interventions (bold solid in grey).}
\label{fig:grdat}
\end{figure}
\begin{table} \caption{\label{tab:grdat} Iterative parameter estimates and intervention effects for the brucellosis data.} {\small \begin{center}
\begin{tabular}{cc|rrrr|rrr}
\hline Iteration & Step &\multicolumn{4}{c|}{Parameter estimates} & \multicolumn{3}{c}{Outlier}\\ & & $\hat{\alpha}$ & $\hat{\beta}_0$ & $\hat{\beta}_1$ & $\hat{\beta}_2$ & $\hat{\kappa}$ & $\hat{\tau}$ & $\hat{\delta}$\\ \hline
1 & 1 & 0.290 & 1.824 & 0.251 & -0.625&\\ & 2-3 & 0.317 & 1.760 & 0.218 & -0.543 & 56.506 & 17 & 0 \\ \hline \end{tabular} \end{center} } \end{table}
\begin{table} \caption{\label{tab:grmod}
Parameter estimates (standard errors) obtained by fitting contaminated time series regression models to the monthly number of brucellosis human cases in Greece for the years 2007–2020.} {\small \begin{center}
\begin{tabular}{c|c|c} \hline & Poisson INAR(1) & Log-linear Poisson autoregression\\ \hline $\hat{\alpha}$ & 0.274 (0.032) & -\\ $\hat{\gamma}$ & - & 0.234 (0.049) \\ $\hat{\beta}_0$ & 2.184 (0.078) & 1.904 (0.141)\\ $\hat{\beta}_1$ & 0.175 (0.050) & 0.143 (0.038)\\ $\hat{\beta}_2$ & -0.553 (0.051) & -0.424 (0.046)\\ $\hat{\beta}_3$ & -0.758 (0.117) & -1.450 (0.194)\\ $\hat{\kappa}_1$ & 52.920 (8.419) & 1.599 (0.118)\\ $\hat{\kappa}_2$ & - & 0.649 (0.102)\\ \hline \end{tabular} \end{center} } \end{table}
\begin{figure}
\caption{ Correlograms and partial correlograms of the residuals obtained after fitting a contaminated Poisson INAR(1) (left panel) and a contaminated log-linear Poisson autoregressive model (right panel) to the brucellosis series.}
\label{fig:grres}
\end{figure}
\section{Discussion} \label{sect:discussion}
We have developed a feasible procedure for the detection of intervention effects in integer-valued autoregressive models for count time series. The suggested procedure relies on the use of $F$-type or score test statistics. Extensive simulation experiments indicated that the elements that largely determine the performance of the two test statistics are the autocorrelation parameter $\alpha$ and the parameter $\delta$ identifying the intervention type. The $F$-type statistic is preferable when the INAR(1) process is characterized by rather weak autocorrelation and the objective is to detect an innovation outlier ($\delta=0$). For transient or permanent level shifts ($\delta\in(0,1]$), and especially when the INAR(1) process is characterized by strong autocorrelation, the score test statistic performs better than the $F$-type statistic. The score test statistic is also preferable for misspecified values of $\delta$, although both tests exhibit some misclassification issues when transient shifts of a moderate size ($\delta=0.6$ or 0.8) are considered. The advantage of the $F$-type statistic is that it also works reliably in the case of higher-order models; see Section 3 of the supplementary materials.
Our modelling of intervention effects is additive for the INAR model, and the same applies to the impact of intervention effects on the dynamics. Therefore, our model formulation allows for the detection and classification of different types of outliers, contrary to other approaches to intervention analysis in the context of INAR models which consider only one specific type of outlier. For instance, \cite{fried:2015} and \cite{silva:2015} have focused on additive outliers not entering the dynamics and have treated them through a Bayesian analysis. An interesting possible extension concerns distinguishing and classifying different intervention patterns when the uncontaminated process is unobserved. The flexibility of the Bayesian approach is promising in such a framework, but more work is necessary for this.
Another promising line for future research concerns the detection (and classification) of intervention effects by means of model selection criteria. The modified Bayesian information criterion developed by \cite{galeano:2012} or the modified Akaike information criterion and the average square standardized residuals used by \cite{fokianos:2012} are examples of such criteria.
\section*{Acknowledgements} This project has received funding from the Athens University of Economics and Business, Action I Funding and the European Union's Horizon 2020 research and innovation programme under the Marie Sk{\l}odowska-Curie grant agreement no. 699980.
\begin{thebibliography}{} \bibitem[\protect\citeauthoryear{Al-Osh and Alzaid}{1987}]{alosh:1987} Al-Osh, M.A. and Alzaid, A.A. (1987). First-order integer-valued autoregressive (INAR(1)) process. \textit{Journal of Time Series Analysis}, \textbf{8:} 261--275.
\bibitem[\protect\citeauthoryear{Barczy et al.}{2010}]{barczy:2010}
Barczy, M., Ispany, M., Pap, G., Scotto, M, and Silva M.E. (2010). Innovational outliers in INAR(1) Models. \textit{Communications in Statistics - Theory and Methods}, \textbf{39:} 3343--3362.
\bibitem[\protect\citeauthoryear{Barczy et al.}{2012}]{barczy:2012}
Barczy, M., Ispany, M., Pap, G., Scotto, M, and Silva M.E. (2012). Additive outliers in INAR(1) Models. \textit{Statistical Papers}, \textbf{53:} 935--949.
\bibitem[\protect\citeauthoryear{Brännäs}{1994}]{brannas:1994} Brännäs, K. (1994). Estimation and testing in integer valued AR(1) models. Umeå Economic Studies, vol. 355, University of Umeå.
\bibitem[\protect\citeauthoryear{Bu et al.}{2008}]{bu:2008}
Bu, R., McCabe, B. and Hadri, K. (2008). Maximum likelihood estimation of higher-order integer-valued autoregressive processes. \textit{Journal of Time Series Analysis}, \textbf{29:} 973--994.
\bibitem[\protect\citeauthoryear{Davis et al.}{2016}]{davis:2016} Davis, R.A., Holan, S.H., Lund, R., and Ravishanker, N. (2016). \textit{Handbook of Discrete-valued Time Series}. CRC Press.
\bibitem[\protect\citeauthoryear{Du and Li}{1991}]{du:1991}
Du, J. and Li, Y. (1991). The integer-valued autoregressive (INAR($p$)) model. \textit{Journal of Time Series Analysis}, \textbf{12:} 129--142.
\bibitem[\protect\citeauthoryear{Fokianos and Fried}{2010}]{fokianos:2010} Fokianos, K. and Fried, R. (2010). Interventions in INGARCH processes. \textit{Journal of Time Series Analysis}, \textbf{31:} 210--225.
\bibitem[\protect\citeauthoryear{Fokianos and Fried}{2012}]{fokianos:2012} Fokianos, K. and Fried, R. (2012). Interventions in log-linear Poisson autoregression. \textit{Statistical Modelling}, \textbf{12:} 299--322.
\bibitem[\protect\citeauthoryear{Fox}{1972}]{fox:1972} Fox, A. J. (1972). Outliers in Time Series. \textit{Journal of the Royal Statistical Society, Series B}, \textbf{34:} 350--363.
\bibitem[\protect\citeauthoryear{Franke and Seligmann}{1993}]{franke:1993}
Franke, J. and Seligmann, T. (1993). \textit{Conditional maximum likelihood estimates for INAR(1) processes and their application to modelling epileptic seizure counts}. In: Developments in Time Series Analysis (ed. T. Subba Rao). Chapman and Hall, London. pp. 310--330.
\bibitem[\protect\citeauthoryear{Freeland and McCabe}{2004}]{freeland:2004} Freeland, R.K. and McCabe, B.P.M. (2004). Analysis of low count time series data by Poisson autoregression. \textit{Journal of Time Series Analysis}, \textbf{25:} 701--722.
\bibitem[\protect\citeauthoryear{Fried}{2015}]{fried:2015} Fried, R., Aguesop, I., Bornkamp, B., Fokianos, K., Fruth, J., Ickstadt, K. (2015). Retrospective Bayesian outlier detection in INGARCH series. \textit{ Statistics and Computing}, \textbf{25:} 365--374.
\bibitem[\protect\citeauthoryear{Galeano and Peña}{2012}]{galeano:2012} Galeano, P., and Peña, D. (2012). \textit{Additive outlier detection in seasonal ARIMA models by a modified Bayesian information criterion}. In W. R. Bell, S. H. Holan, and T. S. McElroy (Eds.), Economic time series: modeling and seasonality (pp. 317--336). Boca Raton: Chapman \& Hall.
\bibitem[\protect\citeauthoryear{Galeano and Peña}{2013}]{galeano:2013} Galeano, P., and Peña, D. (2013). \textit{Finding outliers in linear and nonlinear time series}. In: Becker, C., Fried, R., Kuhnt, S. (eds.) Robustness and Complex Data Structures, pp. 243--260. Springer, Heidelberg.
\bibitem[\protect\citeauthoryear{Hamilton}{1994}]{hamilton:1994} Hamilton, J. D. (1994). \textit{Time series analysis}, pp. 206--207. Princeton, N.J: Princeton University Press.
\bibitem[\protect\citeauthoryear{Karagiannis et al.}{2012}]{karagiannis:2012} Karagiannis, I., Mellou, K., Gkolfinopoulou, K., Dougas, G., Theocharopoulos, G., Vourvidis, D., Ellinas, D., Sotolidou, M., Papadimitriou, T., Vorou, R. (2012). Outbreak investigation of Brucellosis in Thassos, Greece, 2008. \textit{Euro Surveillance}, \textbf{17(11):} 20116.
\bibitem[\protect\citeauthoryear{Liboschik et al.}{2017}]{liboschik:2017} Liboschik, T., Fokianos, K., Fried, R. (2017). tscount: An R Package for Analysis of Count Time Series Following Generalized Linear Models. \textit{Journal of Statistical Software}, \textbf{82(5):} 1--51.
\bibitem[\protect\citeauthoryear{Lu}{2021}]{lu:2018} Lu, Y. (2021). The predictive distributions of thinning-based count processes. \textit{Scandinavian Journal of Statistics}, \textbf{48:} 42--67.
\bibitem[\protect\citeauthoryear{McKenzie}{1985}]{mckenzie:1985} McKenzie, E. (1985). Some simple models for discrete variate time series. \textit{Journal of the American Water Resources Association}, \textbf{21:} 645--650.
\bibitem[\protect\citeauthoryear{Moriña et al.}{2020}]{morina:2020} Moriña, D., Leyva-Moral, J.M. and Feijoo-Cid, M. (2020). Intervention analysis for low-count time series with applications in public health. \textit{Statistical Modelling}, \textbf{20:} 58--70.
\bibitem[\protect\citeauthoryear{Pedeli et al.}{2015}]{pedeli:2015} Pedeli, X., Davison, A. C., and Fokianos, K. (2015). Likelihood Estimation for the INAR($p$) Model by Saddlepoint Approximation. \textit{Journal of the American Statistical Association}, \textbf{110:} 1229--1238.
\bibitem[\protect\citeauthoryear{Rossetti et al.}{2017}]{rossetti:2017} Rossetti, C.A., Arenas-Gamboa, A.M., Maurizio, E. (2017). Caprine Brucellosis: A historically neglected disease with significant impact on public health. \textit{PLoS Neglected Tropical Diseases} \textbf{11(8):} e0005692.
\bibitem[\protect\citeauthoryear{Silva and Pereira}{2015}]{silva:2015} Silva M.E., and Pereira, I. (2015). \textit{Detection of Additive Outliers in Poisson INAR(1) Time Series}. In: Bourguignon, J.P. et al. (eds.) Mathematics of Energy and Climate Change. CIM Series in Mathematical Sciences, pp. 377--388. Springer, Berlin.
\bibitem[\protect\citeauthoryear{Steutel and van Harn}{1979}]{steutel:1979} Steutel, F. W., and van Harn, K. (1979). Discrete Analogues of Self–Decomposability and Stability. \textit{The Annals of Probability}, \textbf{7:} 893--899.
\bibitem[\protect\citeauthoryear{Weiss}{2018}]{weiss:2018} Weiss, C.H. (2018). \textit{An Introduction to Discrete-Valued Time Series}. Chichester: Wiley.
\end{thebibliography}
\section*{Supplementary Materials}
\subsection*{1. The conditional information matrix}
The Hessian matrix $\partial^2\ell(\boldsymbol{\theta})/\partial\boldsymbol{\theta}\partial\boldsymbol{\theta}^T$ has elements \allowdisplaybreaks \begin{eqnarray*}
&&\frac{\partial^2\ell(\boldsymbol{\theta})}{\partial\alpha_i^2}=\frac{1}{(1-\alpha_i)^2}\sum_{t=p+1}^{n}y_{t-i}\left\{\frac{2\mathnormal{p}(y_t-1|y_{t-1},\ldots,y_{t-i}-1,\ldots,y_{t-p})}{\mathnormal{p}(y_t|y_{t-1},\ldots,y_{t-p})}-1\right.\\
&&\quad+(y_{t-i}-1)\frac{\mathnormal{p}(y_t-2|y_{t-1},\ldots,y_{t-i}-2,\ldots,y_{t-p})}{\mathnormal{p}(y_t|y_{t-1},\ldots,y_{t-p})}\\
&&\quad\left.-y_{t-i}\left(\frac{\mathnormal{p}(y_{t}-1|y_{t-1},\ldots,y_{t-i}-1,\ldots,y_{t-p})}{\mathnormal{p}(y_t|y_{t-1},\ldots,y_{t-p})}\right)^2\right\}\\
&&\frac{\partial^2\ell(\boldsymbol{\theta})}{\partial\alpha_i\partial\alpha_j}=\frac{1}{(1-\alpha_i)(1-\alpha_j)}\sum_{t=p+1}^ny_{t-i}y_{t-j}\left\{\frac{\mathnormal{p}(y_t-2|y_{t-1},\ldots,y_{t-i}-1,\ldots,y_{t-j}-1,\ldots,y_{t-p})}{\mathnormal{p}(y_t|y_{t-1},\ldots,y_{t-p})}\right.\\
&&\quad\left.-\frac{\mathnormal{p}(y_t-1|y_{t-1},\ldots,y_{t-i}-1,\ldots,y_{t-p})\mathnormal{p}(y_t-1|y_{t-1},\ldots,y_{t-j}-1,\ldots,y_{t-p})}{\mathnormal{p}(y_t|y_{t-1},\ldots,y_{t-p})^2}\right\}\\
&&\frac{\partial^2\ell(\boldsymbol{\theta})}{\partial\lambda^2}=\sum_{t=p+1}^n\left\{\frac{\mathnormal{p}(y_t-2|y_{t-1},\ldots,y_{t-p})}{\mathnormal{p}(y_t|y_{t-1},\ldots,y_{t-p})}-\left(\frac{\mathnormal{p}(y_t-1|y_{t-1},\ldots,y_{t-p})}{\mathnormal{p}(y_t|y_{t-1},\ldots,y_{t-p})}\right)^2\right\}\\
&&\frac{\partial^2\ell(\boldsymbol{\theta})}{\partial\kappa_j\partial\kappa_s}=\sum_{t=p+1}^n\delta^{2t-\tau_j-\tau_s}I(t\geq\tau_j)I(t\geq\tau_s)\left\{\frac{\mathnormal{p}(y_t-2|y_{t-1},\ldots,y_{t-p})}{\mathnormal{p}(y_t|y_{t-1},\ldots,y_{t-p})}\right.\\
&&\quad\left.-\left(\frac{\mathnormal{p}(y_t-1|y_{t-1},\ldots,y_{t-p})}{\mathnormal{p}(y_t|y_{t-1},\ldots,y_{t-p})}\right)^2\right\}\\
&&\frac{\partial^2\ell(\boldsymbol{\theta})}{\partial\alpha_i\partial\lambda}=\frac{1}{1-\alpha_i}\sum_{t=p+1}^{n}y_{t-i}\left\{\frac{\mathnormal{p}(y_t-2|y_{t-1},\ldots,y_{t-i}-1,\ldots,y_{t-p})}{\mathnormal{p}(y_t|y_{t-1},\ldots,y_{t-p})}\right.\\
&&\quad\left.-\frac{\mathnormal{p}(y_t-1|y_{t-1},\ldots,y_{t-p})\mathnormal{p}(y_t-1|y_{t-1},\ldots,y_{t-i}-1,\ldots,y_{t-p})}{\mathnormal{p}(y_t|y_{t-1},\ldots,y_{t-p})^2}\right\}\\
&&\frac{\partial^2\ell(\boldsymbol{\theta})}{\partial\alpha_i\partial\kappa_j}=\frac{1}{1-\alpha_i}\sum_{t=p+1}^{n}y_{t-i}\delta^{t-\tau_j}I(t\geq\tau_j)\left\{\frac{\mathnormal{p}(y_t-2|y_{t-1},\ldots,y_{t-i}-1,\ldots,y_{t-p})}{\mathnormal{p}(y_t|y_{t-1},\ldots,y_{t-p})}\right.\\
&&\quad\left.-\frac{\mathnormal{p}(y_t-1|y_{t-1},\ldots,y_{t-p})\mathnormal{p}(y_t-1|y_{t-1},\ldots,y_{t-i}-1,\ldots,y_{t-p})}{\mathnormal{p}(y_t|y_{t-1},\ldots,y_{t-p})^2}\right\}\\
&&\frac{\partial^2\ell(\boldsymbol{\theta})}{\partial\lambda\partial\kappa_j}=\sum_{t=p+1}^{n}\delta^{t-\tau_j}I(t\geq\tau_j)\left\{\frac{\mathnormal{p}(y_t-2|y_{t-1},\ldots,y_{t-p})}{\mathnormal{p}(y_t|y_{t-1},\ldots,y_{t-p})}-\left(\frac{\mathnormal{p}(y_t-1|y_{t-1},\ldots,y_{t-p})}{\mathnormal{p}(y_t|y_{t-1},\ldots,y_{t-p})}\right)^2\right\} \end{eqnarray*}
By convention $\mathnormal{p}(y_t-c_1|y_{t-1},\ldots,y_{t-i}-c_2,\ldots,y_{t-p})=0$ for $y_t<c_1$ or $y_{t-i}<c_2$.
The second-order derivatives are functions of $(y_t, y_{t-1},\ldots, y_{t-p})$ and so the elements of $\mathcal I(\boldsymbol{\theta})$ can be obtained as follows: \allowdisplaybreaks \begin{eqnarray*} &&E\left\{\frac{\partial^2\ell(\boldsymbol{\theta})}{\partial\alpha_i^2}\right\}=\sum_{t=p+1}^nE\left\{h(y_t, y_{t-1},\ldots,y_{t-p})\right\}\\ &&\quad=\sum_{t=p+1}^n\sum_{y_t=0}^{\infty}\sum_{y_{t-1}=0}^{\infty}\cdots\sum_{y_{t-p}=0}^{\infty}\mathnormal{p}(y_t,y_{t-1},\ldots,y_{t-p})h(y_t, y_{t-1},\ldots,y_{t-p})\\ &&\quad=\frac{n-p}{(1-\alpha_i)^2}\sum_{y_t=0}^{\infty}\sum_{y_{t-1}=0}^{\infty}\cdots\sum_{y_{t-p}=0}^{\infty}\left(\prod_{k=1}^p\mathnormal{p}(y_{t-k})\right)\\
&&\quad \times y_{t-i}\bigg\{2\mathnormal{p}(y_t-1|y_{t-1},\ldots,y_{t-i}-1,\ldots,y_{t-p})\bigg.\\
&&\quad-\left.\mathnormal{p}(y_t|y_{t-1},\ldots,y_{t-p})+(y_{t-i}-1)\mathnormal{p}(y_t-2|y_{t-1},\ldots,y_{t-i}-2,\ldots,y_{t-p})\right.\\
&&\quad\left.-y_{t-i}\frac{\mathnormal{p}(y_t-1|y_{t-1},\ldots,y_{t-i}-1,\ldots,y_{t-p})^2}{\mathnormal{p}(y_t|y_{t-1},\ldots,y_{t-p})}\right\}\\ &&E\left\{\frac{\partial^2\ell(\boldsymbol{\theta})}{\partial\alpha_i\partial\alpha_j}\right\}= \frac{n-p}{(1-\alpha_i)(1-\alpha_j)}\sum_{y_t=0}^{\infty}\sum_{y_{t-1}=0}^{\infty}\cdots\sum_{y_{t-p}=0}^{\infty}\left(\prod_{k=1}^p\mathnormal{p}(y_{t-k})\right)\\
&&\quad\times y_{t-i}y_{t-j}\bigg\{\mathnormal{p}(y_t-2|y_{t-1},\ldots,y_{t-i}-1,\ldots,y_{t-j}-1,\ldots,y_{t-p})\bigg.\\
&&\quad\left.-\frac{\mathnormal{p}(y_t-1|y_{t-1},\ldots,y_{t-i}-1,\ldots,y_{t-p})\mathnormal{p}(y_t-1|y_{t-1},\ldots,y_{t-j}-1,\ldots,y_{t-p})}{\mathnormal{p}(y_t|y_{t-1},\ldots,y_{t-p})}\right\}\\ &&E\left\{\frac{\partial^2\ell(\boldsymbol{\theta})}{\partial\lambda^2}\right\}=(n-p)\sum_{y_t=0}^{\infty}\sum_{y_{t-1}=0}^{\infty}\cdots\sum_{y_{t-p}=0}^\infty\left(\prod_{k=1}^p\mathnormal{p}(y_{t-k})\right)\\
&&\quad\times\left\{\mathnormal{p}(y_t-2|y_{t-1},\ldots,y_{t-p})-\frac{\mathnormal{p}(y_t-1|y_{t-1},\ldots,y_{t-p})^2}{\mathnormal{p}(y_t|y_{t-1},\ldots,y_{t-p})}\right\}\\ &&E\left\{\frac{\partial^2\ell(\boldsymbol{\theta})}{\partial\kappa_j\partial\kappa_s}\right\}=\sum_{t=p+1}^n\delta^{2t-\tau_j-\tau_s}I(t\geq\tau_j)I(t\geq\tau_s)\sum_{y_t=0}^{\infty}\sum_{y_{t-1}=0}^{\infty}\cdots\sum_{y_{t-p}=0}^\infty\left(\prod_{k=1}^p\mathnormal{p}(y_{t-k})\right)\\
&&\quad\times\left\{\mathnormal{p}(y_t-2|y_{t-1},\ldots,y_{t-p})-\frac{\mathnormal{p}(y_t-1|y_{t-1},\ldots,y_{t-p})^2}{\mathnormal{p}(y_t|y_{t-1},\ldots,y_{t-p})}\right\}\\ &&E\left\{\frac{\partial^2\ell(\boldsymbol{\theta})}{\partial\alpha_i\partial\lambda}\right\}=\frac{n-p}{1-\alpha_i}\sum_{y_t=0}^{\infty}\sum_{y_{t-1}=0}^{\infty}\cdots\sum_{y_{t-p}=0}^\infty\left(\prod_{k=1}^p\mathnormal{p}(y_{t-k})\right)\\
&&\quad\times y_{t-i}\bigg\{\mathnormal{p}(y_t-2|y_{t-1},\ldots,y_{t-i}-1,\ldots,y_{t-p})\bigg.\\
&&\quad\left.-\frac{\mathnormal{p}(y_t-1|y_{t-1},\ldots,y_{t-p})\mathnormal{p}(y_t-1|y_{t-1},\ldots,y_{t-i}-1,\ldots,y_{t-p})}{\mathnormal{p}(y_t|y_{t-1},\ldots,y_{t-p})}\right\}\\ &&E\left\{\frac{\partial^2\ell(\boldsymbol{\theta})}{\partial\alpha_i\partial\kappa_j}\right\}=\frac{1}{1-\alpha_i}\sum_{t=p+1}^n\delta^{t-\tau_j}I(t\geq\tau_j)\sum_{y_t=0}^{\infty}\sum_{y_{t-1}=0}^{\infty}\cdots\sum_{y_{t-p}=0}^\infty\left(\prod_{k=1}^p\mathnormal{p}(y_{t-k})\right)\\
&&\quad\times y_{t-i}\bigg\{\mathnormal{p}(y_t-2|y_{t-1},\ldots,y_{t-i}-1,\ldots,y_{t-p})\bigg.\\
&&\quad\left.-\frac{\mathnormal{p}(y_t-1|y_{t-1},\ldots,y_{t-p})\mathnormal{p}(y_t-1|y_{t-1},\ldots,y_{t-i}-1,\ldots,y_{t-p})}{\mathnormal{p}(y_t|y_{t-1},\ldots,y_{t-p})}\right\}\\ &&E\left\{\frac{\partial^2\ell(\boldsymbol{\theta})}{\partial\lambda\partial\kappa_j}\right\}=\sum_{t=p+1}^n\delta^{t-\tau_j}I(t\geq\tau_j)\sum_{y_t=0}^{\infty}\sum_{y_{t-1}=0}^{\infty}\cdots\sum_{y_{t-p}=0}^\infty\left(\prod_{k=1}^p\mathnormal{p}(y_{t-k})\right)\\
&&\quad\times\left\{\mathnormal{p}(y_t-2|y_{t-1},\ldots,y_{t-p})-\frac{\mathnormal{p}(y_t-1|y_{t-1},\ldots,y_{t-p})^2}{\mathnormal{p}(y_t|y_{t-1},\ldots,y_{t-p})}\right\} \end{eqnarray*} In practice, the elements of $\mathcal I(\boldsymbol{\theta})$ are calculated by truncating the infinite sums to some value $m$ selected such that $\mathnormal{p}(y_t>m)$ is approximately equal to zero. We select $m$ so that $\mathnormal{p}(y_t>m)\leq10^{-15}$.
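In base \texttt{R}, a convenient way to choose such a truncation point is to increase $m$ until the tail probability falls below the tolerance; the snippet below is a minimal sketch that assumes the counts can be bounded stochastically by a Poisson distribution with mean \texttt{mu\_max} (the value used here is purely illustrative).
\begin{verbatim}
## Truncation point m with P(Y_t > m) <= 1e-15 for Poisson-type conditional
## distributions bounded by mu_max (illustrative value, not from the paper).
mu_max <- 20
m <- ceiling(mu_max)
while (ppois(m, mu_max, lower.tail = FALSE) > 1e-15) m <- m + 1
m
\end{verbatim}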
\subsection*{2. Tables}
\setcounter{table}{0} \renewcommand{\thetable}{SM2.\arabic{table}}
\begingroup
\centering \begin{sideways}
\setlength{\tabcolsep}{3pt}
\renewcommand{\arraystretch}{0.8}
\begin{threeparttable}
\caption{\label{tab:sizes100} Empirical sizes (in percent) of the tests based on the $F$-type statistics and score statistics for a known type $\delta$ and time $\tau$ of intervention in case of INAR(1) series of length $n=100$ with different parameters $\alpha$ and $\lambda$. The nominal significance levels are 1\%, 5\% or 10\%.} {\footnotesize
\begin{tabular}{rrr|rrr|rrr|rrr|rrr|rrr|rrr} \hline
&& & \multicolumn{9}{c|}{$F$-type statistics} & \multicolumn{9}{c}{Score statistics}\\ \hline
&& & \multicolumn{3}{c|}{$\tau=0.25n$}& \multicolumn{3}{c|}{$\tau=0.5n$}& \multicolumn{3}{c|}{$\tau=0.75n$} & \multicolumn{3}{c|}{$\tau=0.25n$}& \multicolumn{3}{c|}{$\tau=0.5n$}& \multicolumn{3}{c}{$\tau=0.75n$} \\ $\alpha$ & $\lambda$ & $\delta$ & 1\% & 5\% & 10\%& 1\% & 5\% & 10\%& 1\% & 5\% & 10\% & 1\% & 5\% & 10\%& 1\% & 5\% & 10\%& 1\% & 5\% & 10\%\\ \hline 0.3&2&0 & 1.2 &4.1 &9.2 &1.8 &4.9 &9.9 &1.3 &4.9 &9.8 & 1.5 & 4.1 & 7.7 & 1.7 & 4.4 & 7.8 & 1.7 & 4.6 & 8.3 \\ &&0.8 & 1.4 & 5.5 &10.7 &1.6 &5.9 &11.2 &1.5 &5.9 &11.1& 1.0 & 5.0 & 9.6 & 1.1 & 4.9 & 10.3 & 1.0 & 5.4 & 10.5 \\ &&1 & 1.5 &6.4 &11.4 &1.6 &6.6 &11.7 &1.2 &5.6 &11.0& 0.9 & 5.0 & 9.6 & 0.6 & 4.8 & 9.4 & 1.0 & 5.4 & 10.3 \\ \hline &5&0 &1.2 &5.0 &10.1 &1.2 &4.5 &9.4 &1.2 &5.0 &10.0 & 1.3 & 4.1 & 8.8 & 1.1 & 4.7 & 9.4 & 1.6 & 4.7 & 9.4 \\ &&0.8 & 1.5 &6.0 &10.9 &1.3 &5.7 &10.7 &1.7 &5.7 &11.0& 0.9 & 4.3 & 9.2 & 0.9 & 5.2 & 10.3 & 1.2 & 5.0 & 9.9\\ &&1 & 1.5 &6.2 &11.5 &1.1 &5.5 &11.1 &1.4 &6.1 &12.2& 0.8 & 4.7 & 10.2 & 0.7 & 4.1 & 9.1 & 0.9 & 4.8 & 9.7 \\ \hline 0.6&2&0 & 1.5 &5.3 &9.4 &1.5 &5.4 &10.0&1.0 &4.6& 9.5& 2.0 & 4.6 & 7.5 & 1.9 & 4.7 & 7.7 & 2.1 & 4.7 & 7.6 \\ &&0.8 &1.8 &6.2 &11.3 &1.5 &6.6 &12.1 &1.4 &5.4 &11.0 & 1.4 & 5.0 & 10.4 & 1.1 & 5.0 & 10.0 & 1.2 & 5.1 & 9.9 \\ &&1 & 1.6 &6.9 &12.4 &1.3 &6.0 &12.3 &1.8 &6.3&12.4& 0.9 & 4.8 & 9.6 & 1.0 & 5.1 & 9.8 & 0.7 & 4.5 & 9.7 \\ \hline &5&0 &1.1 &5.1 &9.9 &1.2 &5.6 &10.4 &1.5 &5.5 &9.6& 1.8 & 5.0 & 9.0 & 1.6 & 4.6 & 8.9 & 1.7 & 4.9 & 8.8 \\ &&0.8 & 1.7 &5.8 &11.4 &1.5 &6.8 &12.0 &1.3 &5.6& 10.6 & 1.1 & 5.4 & 10.1 & 1.1 & 5.7 & 10.4 & 0.9 & 4.9 & 10.3 \\ &&1 & 1.5 &5.9 &11.2 &1.8 &6.7 &12.3 &2.0 &6.6 &12.7 & 1.1 & 5.3 & 10.2 & 1.1 & 5.0 & 9.9 & 1.1 & 4.7 & 9.7 \\ \hline 0.9&2&0 & 1.4 &5.0 &10.3 &1.4 &5.3 &10.4 &1.1 &5.2 &10.2 & 2.3 & 4.9 & 8.1 & 2.4 & 4.9 & 7.8 & 2.2 & 4.3 & 7.5 \\ &&0.8 & 2.4 &7.5 &13.1 &2.2 &7.6 &13.7 &1.6 &7.3 &13.4 & 1.4 & 5.1 & 10.0 & 1.3 & 4.9 & 9.7 & 1.2 & 4.4 & 9.1 \\ &&1 & 3.4 &11.1 &18.5 &3.8 &12.7 &20.8 &3.6 &12.1& 19.1& 1.6 & 6.3 & 11.4 & 1.1 & 5.3 & 10.2 & 1.1 & 5.1 & 10.0 \\ \hline &5&0 & 1.3 &5.1 &10.6 &1.1 &5.6 &11.2 &1.2&5.4&10.2& 2.1 & 5.2 & 9.1 & 2.0 & 4.3 & 7.9 & 1.8 & 5.0 & 8.3 \\ &&0.8 & 2.0 &8.3 &14.2 &2.0 &7.8 &13.9 &2.1 &8.3 &14.0& 1.4 & 5.6 & 10.7 & 1.2 & 5.4 & 10.1 & 1.1 & 4.6 & 10.3 \\ &&1 & 3.7 &11.4 &18.3 &4.4 &13.2 &20.8 &3.7 &12.8 &20.5& 1.0 & 5.0 & 10.2 & 1.1 & 4.8 & 10.2 & 0.9 & 4.3 & 9.7 \\ \hline \end{tabular}}
\end{threeparttable} \end{sideways}
\endgroup
\begin{landscape} \begin{table} \caption{\label{tab:sizes200} Empirical sizes (in percent) of the tests based on the $F$-type statistics and score statistics for a known type $\delta$ and time $\tau$ of intervention in case of INAR(1) series of length $n=200$ with different parameters $\alpha$ and $\lambda$. The nominal significance levels are 1\%, 5\% or 10\%.} \begin{center} {\footnotesize
\begin{tabular}{rrr|rrr|rrr|rrr|rrr|rrr|rrr} \hline
&& & \multicolumn{9}{c|}{$F$-type statistics} & \multicolumn{9}{c}{Score statistics}\\ \hline
&& & \multicolumn{3}{c|}{$\tau=0.25n$}& \multicolumn{3}{c|}{$\tau=0.5n$}& \multicolumn{3}{c|}{$\tau=0.75n$} & \multicolumn{3}{c|}{$\tau=0.25n$}& \multicolumn{3}{c|}{$\tau=0.5n$}& \multicolumn{3}{c}{$\tau=0.75n$} \\ $\alpha$ & $\lambda$ & $\delta$ & 1\% & 5\% & 10\%& 1\% & 5\% & 10\%& 1\% & 5\% & 10\% & 1\% & 5\% & 10\%& 1\% & 5\% & 10\%& 1\% & 5\% & 10\%\\ \hline 0.3&2&0 & 1.2& 4.3 &9.0 &1.4 &4.8 &9.4 &1.6 &4.3& 8.9 & 1.4 & 3.5 & 7.3 & 1.6 & 4.3 & 7.9 & 1.7 & 4.2 & 7.7 \\ &&0.8 &1.3 &5.2 &10.2 &1.5 &5.1 &10.3 &1.3 &5.2 &9.9& 1.1 & 4.9 & 10.0 & 0.9 & 4.5 & 9.9 & 1.2 & 5.0 & 10.4 \\ &&1 &0.9 &4.9 &10.2 &1.2 &5.0 &10.2 &1.0 &4.6 &9.8& 0.9 & 4.5 & 9.4 & 0.9 & 4.7 & 9.7 & 0.9 & 5.2 & 10.2 \\ \hline &5&0 &1.0 &5.4 &10.2 &0.9 &4.6 &10.0 &1.1 &4.5 &9.3& 1.3 & 4.6 & 8.7 & 1.0 & 4.1 & 8.5 & 1.4 & 4.7 & 9.6 \\ &&0.8 &1.2 &5.6 &10.4 &1.2 &5.1 &10.5 &1.4 &6.0 &10.8& 1.2 & 4.7 & 10.1 & 1.0 & 4.7 & 9.4 & 1.2 & 5.1 & 10.3 \\ &&1 &1.0& 5.4 &10.1 &1.5 &5.9 &11.2 &1.1 &5.6 &11.1& 1.0 & 4.7 & 9.4 & 0.9 & 5.2 & 10.0 & 1.0 & 4.5 & 9.5 \\ \hline 0.6&2&0 &1.0 &4.6 &9.8 &1.3 &5.5 &10.0 &1.4 &5.1&10.1& 2.2 & 4.6 & 7.7 & 2.0 & 4.7 & 7.5 & 1.7 & 4.4 & 7.6 \\ &&0.8 &1.2 &5.2 &10.3 &1.4 &5.8 &11.4 &1.3 &5.5 &10.8& 1.1 & 5.2 & 10.3 & 1.2 & 4.8 & 9.7 & 1.1 & 5.1 & 10.1 \\ &&1 &1.1 &6.1 &11.6 &1.4 &6.1 &11.7 &1.2 &5.7 &11.1& 0.9 & 4.8 & 10.2 & 0.9 & 4.6 & 9.4 & 0.8 & 4.6 & 9.0 \\ \hline &5&0 &1.1 &5.5 &10.7 &1.3 &5.0 &9.5 &1.5 &5.2 &10.0& 1.5 & 4.1 & 8.2 & 1.5 & 5.0 & 8.9 & 1.3 & 4.3 & 8.7 \\ &&0.8 &1.3 &5.4 &10.8 &1.2 &6.0 &11.0 &1.3 &5.6 &10.9& 1.1 & 4.7 & 9.5 & 0.9 & 5.5 & 10.3 & 1.0 & 5.0 & 9.7 \\ &&1 &1.5 &5.7 &11.0 &1.4 &5.8 &11.6 &1.5 &6.3 &12.0& 0.9 & 5.2 & 10.2 & 1.2 & 5.3 & 10.1 & 0.9 & 4.7 & 9.7 \\ \hline 0.9&2&0 &1.3 &6.0 &10.6 &1.3 &5.3 &10.1 &1.3 &4.9 &9.8& 2.2 & 4.5 & 7.1 & 2.4 & 4.9 & 8.3 & 2.2 & 4.7 & 8.2 \\ &&0.8 &1.6 &6.3 &11.6 &1.4 &5.8 &11.6 &1.4 &6.1 &11.3 & 1.2 & 4.4 & 9.1 & 1.1 & 4.5 & 9.0 & 1.1 & 4.6 & 8.9 \\ &&0.9 &2.4 &8.8 &15.9 &2.5 &9.1 &15.8 &2.1 &8.3 &14.7& 1.1 & 5.3 & 10.6 & 0.9 & 5.0 & 10.0 & 0.8 & 4.4 & 10.1 \\ \hline &5&0 &1.1 &4.9 &9.9 &1.2 &5.4 &10.1 &1.2 &5.5 &10.6& 1.6 & 3.9 & 7.3 & 1.9 & 4.6 & 8.3 & 1.4 & 4.3 & 7.8 \\ &&0.8 & 1.8 &6.4 &11.2 &1.7 &6.4 &12.0 &1.3 &6.2 &11.7& 1.0 & 4.7 & 9.7 & 1.1 & 5.3 & 10.4 & 1.1 & 4.5 & 9.8 \\ &&1 & 2.5 &8.7 &15.2 &2.7 &9.2 &15.7 &2.3 &9.1 &15.8 & 1.2 & 5.8 & 10.4 & 1.2 & 5.5 & 10.1 & 0.8 & 4.9 & 10.2 \\ \hline \end{tabular}} \end{center} \end{table}
\begin{table} \caption{\label{tab:power200-0} Empirical power (in percent) of the tests based on the $F$-type statistics and score statistics for a known time $\tau$ and known, but possibly misspecified type $\delta$ of intervention in case of an innovation outlier $\delta=0$ of size $\kappa=3\sqrt{\lambda}$ at time $\tau$ in an INAR(1) series of length $n=200$ with different parameters $\alpha$ and $\lambda$. The nominal significance levels are 1\%, 5\% or 10\%.} \begin{center} {\footnotesize
\begin{tabular}{rrr|rrr|rrr|rrr|rrr|rrr|rrr} \hline
&& & \multicolumn{9}{c|}{$F$-type statistics} & \multicolumn{9}{c}{Score statistics}\\ \hline
&& & \multicolumn{3}{c|}{$\tau=0.25n$}& \multicolumn{3}{c|}{$\tau=0.5n$}& \multicolumn{3}{c|}{$\tau=0.75n$} & \multicolumn{3}{c|}{$\tau=0.25n$}& \multicolumn{3}{c|}{$\tau=0.5n$}& \multicolumn{3}{c}{$\tau=0.75n$} \\ $\alpha$ & $\lambda$ & $\delta$ & 1\% & 5\% & 10\%& 1\% & 5\% & 10\%& 1\% & 5\% & 10\% & 1\% & 5\% & 10\%& 1\% & 5\% & 10\%& 1\% & 5\% & 10\%\\ \hline
0.3&2 & 0&48.3 &64.3 &71.7 &49.3 &65.3 &73.3 &50.4 &65.8 &72.4& 51.1 & 66.2 & 73.2 & 52.1 & 66.2 & 72.7 & 51.4 & 65.3 & 72.5 \\ &&0.8 &21.8 &37.1 &47.1 &23.4 &37.6 &47.1 &22.9 &37.1 &47.6& 24.9 & 42.2 & 51.2 & 25.8 & 41.2 & 49.3 & 23.7 & 39.7 & 49.0 \\ &&1 &1.0 &4.3 &10.2 &1.2 &5.8 &11.6 &1.6 &6.7 &11.9& 0.9 & 4.7 & 9.9 & 1.0 & 5.5 & 11.2 & 1.9 & 7.6 & 13.1 \\ \hline &5& 0&49.3 &66.4 &74.5 &50.1 &65.8 &75.6 &49.9 &66.7 &75.3& 51.5 & 67.0 & 74.2 & 53.5 & 68.6 & 76.7 & 50.6 & 66.0 & 73.2 \\ &&0.8 &18.9 &36.1 &45.7 &20.9 &37.0 &45.3 &19.9 &36.3 &46.5& 21.6 & 38.6 & 48.6 & 22.9 & 40.0 & 49.3 & 21.1 & 39.7 & 48.9 \\ &&1 &1.2 &5.3 &9.9 &1.4 &5.7 &11.3 &1.9 &6.7 &11.8& 1.4 & 5.6 & 10.9 & 1.2 & 5.8 & 10.8 & 1.3 & 5.6 & 10.8 \\ \hline 0.6&2&0 &40.5 &57.7 &66.2 &41.4 &58.5 &66.9 &43.2 &58.6 &66.3 & 50.4 & 62.7 & 69.5 & 52.1 & 64.2 & 70.9 & 49.9 & 62.9 & 69.1 \\ &&0.8 &18.4 &32.8 &42.5 &18.7 &34.4 &42.4 &20.1 &34.3 &42.6& 26.9 & 41.6 & 49.8 & 26.0 & 40.5 & 49.5 & 26.4 & 42.0 & 50.7 \\ &&1 &1.2 &5.4 &11.1 &1.5 &6.9 &12.2 &1.6 &6.7 &12.6& 0.9 & 5.0 & 9.9 & 1.1 & 5.1 & 9.8 & 1.2 & 6.0 & 12.1 \\ \hline &5&0 &42.9 &60.8 &68.8 &41.5 &60.1 &69.2 &40.9 &60.4 &69.5& 50.2 & 63.2 & 69.8 & 49.4 & 64.8 & 72.0 & 49.4 & 63.9 & 71.3 \\ &&0.8 &18.4 & 34.1 &43.0 &17.2 &32.2 &42.8 &17.1 &32.0 &41.2& 23.1 & 38.9 & 48.3 & 22.6 & 37.7 & 48.2 & 21.5 & 37.5 & 46.2 \\ &&1 &1.2 &5.4 &10.2 &1.5 &5.3 &10.2 &1.6&7.5&13.2& 1.2 & 5.2 & 10.3 & 1.0 & 5.5 & 12.1 & 1.2 & 6.0 & 12.5 \\ \hline 0.9&2&0 &36.9 &54.3 &63.0 &38.1 &56.5 &63.4 &38.3 &54.0 &63.2& 49.0 & 59.0 & 67.5 & 49.6 & 60.0 & 68.1 & 50.7 & 60.6 & 68.3 \\ &&0.8 &17.8 &31.8 &40.5 &16.1 &30.6 &40.8 &18.0 &31.4& 40.6 & 26.4 & 41.5 & 50.0 & 27.1 & 40.9 & 48.9 & 26.4 & 39.8 & 48.8 \\ &&1 & 3.0 &9.3 &14.7 &2.5 &8.9 &16.1 &3.0 &9.8 &15.9 & 1.6 & 5.6 & 10.8 & 1.2 & 6.0 & 11.6 & 2.0 & 6.7 & 12.5 \\ \hline &5& 0&37.5 &57.2 &66.0 &36.5 &55.8 &65.0 &38.0 &55.6 &65.2& 50.1 & 60.9 & 67.8 & 48.5 & 62.1 & 69.3 & 49.6 & 61.9 & 68.2 \\ &&0.8 &16.2 &32.5 &41.9 &14.8 &28.6 &39.0 &15.8 &29.9&39.5 & 23.6 & 39.1 & 47.3 & 23.0 & 38.1 & 46.9 & 22.7 & 37.3 & 46.0 \\ &&1 & 2.6 &9.3 &15.0 &2.8 &9.4 &16.1 &2.4 &9.8 &17.1& 1.2 & 6.3 & 11.2 & 1.1 & 6.6 & 12.6 & 1.3 & 6.6 & 12.8 \\ \hline \end{tabular}} \end{center} \end{table}
\begin{table} \caption{\label{tab:power200-0.8} Empirical power (in percent) of the tests based on the $F$-type statistics and score statistics for a known type $\delta$ and time $\tau$ of intervention in case of a transient shift $\delta=0.8$ of size $\kappa=2\sqrt{\lambda}$ at time $\tau$ in an INAR(1) series of length $n=200$ with several parameters $\alpha$ and $\lambda$. The nominal significance levels are 1\%, 5\% or 10\%.} \begin{center} {\footnotesize
\begin{tabular}{rrr|rrr|rrr|rrr|rrr|rrr|rrr} \hline
&& & \multicolumn{9}{c|}{$F$-type statistics} & \multicolumn{9}{c}{Score statistics}\\ \hline
&& & \multicolumn{3}{c|}{$\tau=0.25n$}& \multicolumn{3}{c|}{$\tau=0.5n$}& \multicolumn{3}{c|}{$\tau=0.75n$} & \multicolumn{3}{c|}{$\tau=0.25n$}& \multicolumn{3}{c|}{$\tau=0.5n$}& \multicolumn{3}{c}{$\tau=0.75n$} \\ $\alpha$ & $\lambda$ & $\delta$ & 1\% & 5\% & 10\%& 1\% & 5\% & 10\%& 1\% & 5\% & 10\% & 1\% & 5\% & 10\%& 1\% & 5\% & 10\%& 1\% & 5\% & 10\%\\ \hline
0.3&2&0& 24.4 &38.6 &47.3 &24.8 &39.6 &48.4 &25.1 &39.0 &47.6 & 28.4 & 41.4 & 49.2 & 29.3 & 41.9 & 48.9 & 31.9 & 44.8 & 52.0 \\
&&0.8&56.1 &72.5 &79.3 &56.6 &74.0 &80.4 &53.4 &71.8 &79.8& 50.1 & 70.9 & 79.2 & 51.0 & 70.6 & 79.3 & 54.5 & 71.6 & 80.5 \\ &&1 &0.6 &4.7 &9.6 &2.2 &7.9 &14.1 &5.3 &16.0 &24.9& 0.9 & 4.7 & 9.8 & 2.4 & 8.1 & 14.1 & 5.5 & 16.6 & 26.3 \\ \hline &5&0 &23.0 &38.6 &50.0 &23.4 &39.9 &48.3 &23.1 &40.0 &50.9& 26.9 & 42.6 & 51.9 & 26.1 & 40.6 & 48.6 & 28.6 & 45.0 & 52.9 \\ &&0.8 &57.1 &74.5 &82.2 &56.0 &74.9 &83.3 &57.5 &75.2 &82.8& 53.2 & 73.7 & 80.9 & 53.0 & 72.8 & 81.5 & 53.2 & 74.6 & 83.1 \\ &&1 &0.8 &4.4 &9.2 &1.6 &8.9 &15.4 &5.0 &15.9 &25.3& 1.1 & 6.3 & 11.8 & 2.0 & 7.9 & 14.1 & 5.0 & 16.4 & 25.8 \\ \hline 0.6&2&0 &21.7 &37.0 &45.3 &20.4 &34.4 &44.5 &20.4 &34.1 &42.8 & 29.1 & 40.8 & 47.8 & 30.3 & 42.5 & 51.1 & 28.9 & 39.8 & 47.2 \\ &&0.8 &49.1 &67.5 &75.3 &46.7 &66.2 &74.2 &47.5 &67.5 &75.9& 47.3 & 66.2 & 73.5 & 48.3 & 67.0 & 74.8 & 47.3 & 65.0 & 74.2 \\ &&1 & 0.9 &5.2 &10.0 &1.6 &5.8 &11.4 &4.6 &14.8 &24.6& 1.2 & 5.6 & 12.3 & 2.4 & 8.0 & 15.2 & 5.4 & 14.9 & 23.9 \\ \hline &5&0 & 19.4 &34.5 &45.0 &20.1 &33.6 &45.1 &18.6 &34.5 &43.8 & 26.8 & 39.5 & 47.5 & 27.4 & 41.8 & 50.5 & 28.5 & 41.2 & 48.5 \\ &&0.8 &47.5 &67.3 &75.9 &48.5 &68.8 &77.2 &47.0 &68.0 &76.7 & 47.6 & 68.0 & 76.5 & 47.6 & 67.0 & 76.5 & 49.4 & 68.5 & 77.3 \\ &&1 &0.8 &4.7 &10.1 &2.2 &7.9 &13.1 &4.8 &15.1 &23.2 & 1.1 & 6.2 & 11.8 & 1.9 & 8.2 & 15.6 & 4.8 & 15.2 & 25.1 \\ \hline 0.9&2&0 & 17.9 &31.1 &39.8 &18.4 &32.0 &42.0 &17.9 &32.0 &40.5& 30.0 & 40.1 & 47.3 & 29.0 & 39.9 & 48.1 & 29.4 & 39.1 & 46.8 \\ &&0.8 & 43.7 &61.6 &70.4 &43.3 &61.8 &70.7 &44.6 &63.1 &71.0& 50.2 & 64.7 & 72.4 & 48.8 & 64.7 & 72.4 & 48.4 & 64.9 & 72.9 \\ &&1 & 2.1 &7.8 &14.1 &3.2 &10.6 &17.2 &8.2 &18.6&26.7& 1.9 & 7.9 & 13.9 & 2.0 & 8.6 & 14.9 & 5.4 & 15.8 & 25.1 \\ \hline &5&0 & 16.6 &31.8 &41.1 &16.6 &31.4 &41.6 &15.1 &30.6&39.6& 26.6 & 39.9 & 47.0 & 27.2 & 39.5 & 47.6 & 27.6 & 39.4 & 46.3 \\ &&0.8 & 43.8 &65.0 &74.0 &42.5 &63.2 &72.9 &44.2 &63 &71.6& 48.9 & 67.0 & 75.0 & 47.3 & 64.7 & 73.9 & 48.6 & 66.6 & 75.0 \\ &&1 & 2.3 &8.9 &15.0 &2.8 &10.1 &16.5 &6.8 &17.6 &25.6& 1.5 & 7.5 & 14.2 & 2.4 & 9.7 & 16.4 & 5.4 & 16.8 & 24.8 \\ \hline \end{tabular}} \end{center} \end{table}
\begin{table} \caption{\label{tab:power200-1} Empirical power (in percent) of the tests based on the $F$-type statistics and score statistics for a known type $\delta$ and time $\tau$ of intervention in case of a level shift $\delta=1$ of size $\kappa=\sqrt{\lambda}$ at time $\tau$ in an INAR(1) series of length $n=200$ with several parameters $\alpha$ and $\lambda$. The nominal significance levels are 1\%, 5\% or 10\%.} \begin{center} {\footnotesize
\begin{tabular}{rrr|rrr|rrr|rrr|rrr|rrr|rrr} \hline
&& & \multicolumn{9}{c|}{$F$-type statistics} & \multicolumn{9}{c}{Score statistics}\\ \hline
&& & \multicolumn{3}{c|}{$\tau=0.25n$}& \multicolumn{3}{c|}{$\tau=0.5n$}& \multicolumn{3}{c|}{$\tau=0.75n$} & \multicolumn{3}{c|}{$\tau=0.25n$}& \multicolumn{3}{c|}{$\tau=0.5n$}& \multicolumn{3}{c}{$\tau=0.75n$} \\ $\alpha$ & $\lambda$ & $\delta$ & 1\% & 5\% & 10\%& 1\% & 5\% & 10\%& 1\% & 5\% & 10\% & 1\% & 5\% & 10\%& 1\% & 5\% & 10\%& 1\% & 5\% & 10\%\\ \hline
0.3&2&0&1.7 &5.2 &9.8 &3.2 &8.9 &14.4 &5.5 &12.2 &18.0& 4.0 & 9.2 & 14.1 & 5.3 & 11.5 & 17.1 & 9.2 & 17.2 & 24.1 \\ &&0.8 &1.6 &6.2 &10.2 &4.7 &15.4 &24.8 &16.8 &34.2 &45.8 &2.8 & 9.2 & 15.1 & 8.8 & 20.2 & 30.4 & 22.1 & 42.8 & 55.0 \\ &&1 &96.1 &99.3 &99.7 &99.5 &100.0 &100.0&98.2 &99.5 &99.8& 97.3 & 99.3 & 99.8 & 99.8 & 100.0 & 100.0 & 98.2 & 99.5 & 100.0 \\ \hline &5&0 &1.7 &5.0 &10.3 &2.6 &7.7 &12.3 &3.6 &11.1 &17.6& 2.7 & 9.2 & 14.8 & 5.8 & 12.4 & 18.7 & 7.0 & 16.1 & 23.1 \\ &&0.8 &1.5 &6.2 &11.2 &4.7 &13.7 &22.8 &15.7 &36.2 &47.6 & 2.7 & 9.3 & 15.3 & 9.1 & 23.8 & 31.9 & 20.6 & 43.1 & 54.4 \\ &&1 & 98.4 &99.8 &99.9 &99.9 &100.0 &100.0 &98.9 &99.9&100.0 & 98.5 & 99.7 & 99.8 & 99.8 & 100.0 & 100.0 & 98.5 & 99.8 & 100.0 \\ \hline &&0.6 &1.8 &4.5 &9.7 &1.6 &6.6 &12.2 &4.2 &10.2 &16.1&4.8 & 8.2 & 13.2 & 6.8 & 13.1 & 18.0 & 8.3 & 16.6 & 21.3 \\ &&0.8 &1.6 &6.0 &11.2 &3.4 &11.7 &20.4 &11.6 &26.4 &38.0& 3.1 & 10.3 & 16.6 & 10.4 & 22.0 & 31.8 & 21.1 & 37.8 & 48.2 \\ &&1 &87.9 &97.8 &99.0 &98.0 &99.8 &100.0 &93.2 &98.5 &99.4 & 93.3 & 98.4 & 99.2 & 98.8 & 99.9 & 100.0 & 94.9 & 99.0 & 99.5 \\ \hline &5&0 &1.1 &5.2 &9.9 &1.9 &6.3 &11.7 &3.5 &10.2 &16.9& 3.5 & 9.9 & 14.9 & 6.1 & 12.6 & 19.1 & 7.7 & 15.6 & 21.9 \\ &&0.8 & 1.6 &5.5 &10.6 &3.2 &11.3 &19.5 &11.0 &26.2 &40.0& 3.4 & 10.3 & 17.9 & 10.6 & 23.4 & 32.1 & 22.0 & 40.8 & 52.8 \\ &&1 & 92.7 &99.0 &99.7 &98.0 &99.8 &100.0 &94.6 &98.8 &99.5& 95.3 & 98.8 & 99.6 & 99.3 & 100.0 & 100.0 & 96.7 & 99.1 & 99.7 \\ \hline 0.9&2&0 &1.2 &4.8 &9.7 &2.4 &8.4 &12.8 &3.8 &9.8 &16.0 & 5.6 & 9.5 & 14.9 & 8.1 & 13.5 & 17.1 & 9.8 & 16.9 & 21.4 \\ &&0.8&2.3 &9.0 &17.2 &5.6 &17.9 &28.1 &12.5 &29.4 &41.8& 6.2 & 14.7 & 21.5 & 12.9 & 24.2 & 34.9 & 25.1 & 40.2 & 50.2 \\ &&1 &66.5 &90.4 &96.1 &84.3 &97.2 &98.9 &85.3 &96.3 &98.5 & 91.8 & 97.8 & 99.1 & 97.3 & 99.7 & 99.9 & 92.6 & 97.6 & 99.0 \\ \hline &5&0 &1.6 &6.3 &10.6 &2.2 &8.3 &14.5 &3.4 &10.1 &16.6 & 4.9 & 10.2 & 15.8 & 6.2 & 12.9 & 18.4 & 7.6 & 15.1 & 20.8 \\ &&0.8 &2.9 &10.3 &18.8 &6.5 &19.4 &31.5 &12.3 &29.9 &41.4& 5.8 & 14.3 & 21.8 & 13.3 & 25.9 & 34.1 & 25.1 & 41.4 & 52.2 \\ &&1 & 71.1 &93.5 &97.5 &87.0 &97.3 &99.0 &86.3 &97.2 &98.7& 91.1 & 97.3 & 99.0 & 97.7 & 99.6 & 99.9 & 94.8 & 98.5 & 99.2 \\
\hline \end{tabular}} \end{center} \end{table}
\begin{table} \caption{\label{tab:classif200} Classification rates (in percent) achieved by applying the $F$-type statistics and score statistics for a known type $\delta$ and time $\tau$ of intervention to clean INAR(1) series of length $n=200$ without outliers. The nominal significance levels are 1\%, 5\% or 10\%.} \begin{center} {\footnotesize
\begin{tabular}{rrr|rrr|rrr|rrr|rrr|rrr|rrr} \hline
&& & \multicolumn{9}{c|}{$F$-type statistics} & \multicolumn{9}{c}{Score statistics}\\ \hline
&& & \multicolumn{3}{c|}{$\tau=0.25n$}& \multicolumn{3}{c|}{$\tau=0.5n$}& \multicolumn{3}{c|}{$\tau=0.75n$} & \multicolumn{3}{c|}{$\tau=0.25n$}& \multicolumn{3}{c|}{$\tau=0.5n$}& \multicolumn{3}{c}{$\tau=0.75n$} \\ $\alpha$ & $\lambda$ & $\delta$ & 1\% & 5\% & 10\%& 1\% & 5\% & 10\%& 1\% & 5\% & 10\% & 1\% & 5\% & 10\%& 1\% & 5\% & 10\%& 1\% & 5\% & 10\%\\ \hline
0.3&2&0 & 1.3 & 3.6 & 6.5 & 0.9 & 3.3 & 6.0 & 1.0 & 3.7 & 6.9 & 1.3 & 2.5 & 5.2 & 1.5 & 3.4 & 5.8 & 1.6 & 3.4 & 5.6 \\ &&0.6 & 0.6 & 2.5 & 4.5 & 0.6 & 1.9 & 3.7 & 0.6 & 2.4 & 4.6 & 0.6 & 1.8 & 3.8 & 0.8 & 2.2 & 4.0 & 0.5 & 1.5 & 3.4 \\ &&0.8 & 0.4 & 2.1 & 3.4 & 0.5 & 1.7 & 3.0 & 0.5 & 1.8 & 3.3 & 0.2 & 1.2 & 2.8 & 0.3 & 1.7 & 3.1 & 0.7 & 2.3 & 3.5 \\ &&0.9 & 0.9 & 2.9 & 5.8 & 0.9 & 3.5 & 6.7 & 0.6 & 2.6 & 4.6 & 0.6 & 3.3 & 6.5 & 0.5 & 2.5 & 5.1 & 0.7 & 2.5 & 5.5 \\ &&1 & 0.9 & 4.4 & 9.2 & 1.2 & 5.2 & 9.7 & 0.9 & 4.9 & 8.7 & 0.9 & 4.5 & 8.6 & 0.8 & 4.8 & 8.3 & 0.8 & 4.7 & 9.0 \\ \hline &5&0 & 0.9 & 3.6 & 6.8 & 0.9 & 3.8 & 7.2 & 0.8 & 3.0 & 6.4 & 1.1 & 4.2 & 7.0 & 0.8 & 2.9 & 6.1 & 1.4 & 4.2 & 7.2 \\ &&0.6 & 0.6 & 1.9 & 3.8 & 0.5 & 2.3 & 4.2 & 0.7 & 2.2 & 4.2 & 0.5 & 2.1 & 3.5 & 0.6 & 2.2 & 4.2 & 0.4 & 2.4 & 3.8 \\ &&0.8 & 0.6 & 1.8 & 3.0 & 0.6 & 1.7 & 3.3 & 0.5 & 1.6 & 3.0 & 0.5 & 1.8 & 2.8 & 0.4 & 1.7 & 2.8 & 0.6 & 2.0 & 3.5 \\ &&0.9 & 0.8 & 3.6 & 6.6 & 0.7 & 2.9 & 5.9 & 0.7 & 3.1 & 5.6 & 0.4 & 2.4 & 5.2 & 0.5 & 2.0 & 4.8 & 0.6 & 2.4 & 4.4 \\ &&1 & 1.2 & 5.1 & 9.3 & 1.0 & 5.3 & 9.1 & 1.1 & 4.9 & 9.1 & 1.2 & 4.6 & 8.7 & 1.0 & 5.2 & 9.6 & 0.8 & 3.8 & 7.6 \\ \hline 0.6&2&0 & 1.0 & 3.7 & 6.9 & 1.1 & 3.4 & 6.4 & 1.1 & 3.7 & 7.0 & 1.8 & 3.7 & 6.2 & 1.6 & 3.4 & 5.4 & 1.4 & 3.2 & 5.3 \\ &&0.6 & 0.6 & 2.5 & 4.2 & 0.6 & 2.4 & 4.1 & 0.7 & 2.6 & 4.6 & 0.4 & 1.8 & 3.4 & 0.4 & 1.6 & 4.0 & 0.5 & 2.0 & 3.7 \\ &&0.8 & 0.4 & 1.7 & 2.9 & 0.6 & 1.7 & 3.0 & 0.6 & 1.8 & 3.1 & 0.6 & 1.7 & 3.0 & 0.3 & 1.1 & 2.8 & 0.3 & 1.8 & 3.4 \\ &&0.9 & 1.1 & 4.3 & 7.1 & 0.7 & 3.2 & 5.9 & 0.9 & 3.6 & 5.6 & 0.8 & 3.1 & 5.7 & 0.5 & 2.5 & 5.9 & 0.6 & 2.2 & 4.0 \\ &&1 & 1.4 & 5.7 & 10 & 1.0 & 5.5 & 10 & 1.3 & 5.0 & 9.0 & 0.8 & 4.0 & 8.7 & 0.7 & 3.8 & 7.6 & 0.9 & 3.8 & 7.4 \\
\hline &5&0 & 0.8 & 3.5 & 6.1 & 0.9 & 4.0 & 6.9 & 1.0 & 3.7 & 6.9 & 1.4 & 3.2 & 5.2 & 0.9 & 3.8 & 6.4 & 1.1 & 3.8 & 6.2 \\ &&0.6 & 0.7 & 2.3 & 4.3 & 0.7 & 2.6 & 4.6 & 0.5 & 2.5 & 4.5 & 0.6 & 1.9 & 3.0 & 0.6 & 2.4 & 4.2 & 0.3 & 2.0 & 3.4 \\ &&0.8 & 0.5 & 1.9 & 3.6 & 0.7 & 1.9 & 3.1 & 0.5 & 1.7 & 3.2 & 0.3 & 1.4 & 2.9 & 0.2 & 1.4 & 2.8 & 0.6 & 2.3 & 4.1 \\ &&0.9 & 0.9 & 3.1 & 6.2 & 0.8 & 3.3 & 6.2 & 0.9 & 3.5 & 6.1 & 0.8 & 2.9 & 5.6 & 1.0 & 3.4 & 6.2 & 0.8 & 2.1 & 4.7 \\ &&1 & 1.3 & 5.1 & 9.7 & 1.5 & 5.5 & 10.5 & 1.2 & 5.0 & 8.6 & 1.0 & 5.0 & 9.8 & 1.4 & 4.4 & 7.7 & 0.7 & 4.0 & 8.2 \\ \hline 0.9&2&0 & 1.0 & 3.3 & 5.9 & 1.1 & 4 & 6.5 & 0.9 & 3.8 & 6.1 & 1.6 & 3.3 & 5.4 & 2.3 & 4.1 & 6.2 & 1.6 & 3.6 & 6.3 \\ &&0.6 & 0.6 & 1.9 & 3.3 & 0.6 & 2.2 & 3.7 & 0.6 & 2.0 & 3.7 & 0.4 & 1.4 & 2.3 & 0.6 & 1.2 & 2.0 & 0.5 & 1.3 & 2.2 \\ &&0.8 & 0.5 & 1.6 & 2.9 & 0.5 & 1.7 & 3.2 & 0.5 & 2.1 & 3.7 & 0.3 & 1.4 & 2.8 & 0.2 & 1.2 & 2.8 & 0.4 & 1.4 & 3.0 \\ &&0.9 & 1.1 & 4.7 & 8.0 & 1.6 & 5 & 8.2 & 1.2 & 4.0 & 6.3 & 0.5 & 2.8 & 5.6 & 0.9 & 3.0 & 5.7 & 0.4 & 2.7 & 5.0 \\ &&1 & 2.4 & 8.6 & 14.6 & 2.7 & 8.2 & 13.9 & 2.5 & 9.0 & 14.2 & 1.3 & 5.8 & 10.7 & 1.0 & 5.0 & 9.8 & 0.8 & 3.8 & 8.4 \\ \hline &5&0 & 0.9 & 3.7 & 6.1 & 0.9 & 3.3 & 6.0 & 1.0 & 3.4 & 6.6 & 1.2 & 2.6 & 4.8 & 1.4 & 3.2 & 5.9 & 1.6 & 3.9 & 5.8 \\ &&0.6 & 0.6 & 2.4 & 3.8 & 0.6 & 2.2 & 3.7 & 0.6 & 2.1 & 3.9 & 0.6 & 1.5 & 2.6 & 0.7 & 2.1 & 4.1 & 0.5 & 1.8 & 3.5 \\ &&0.8 & 0.5 & 2.2 & 3.3 & 0.6 & 2.2 & 3.2 & 0.7 & 1.9 & 3.2 & 0.4 & 1.5 & 3.1 & 0.2 & 1.4 & 2.7 & 0.7 & 2.0 & 3.5 \\ &&0.9 & 1.3 & 5.1 & 8.7 & 1.2 & 4.4 & 7.8 & 1.6 & 4.9 & 7.4 & 0.6 & 3.0 & 6.0 & 0.9 & 3.6 & 6.1 & 0.9 & 2.5 & 4.6 \\ &&1 & 2.1 & 8.2 & 13.8 & 2.7 & 9.1 & 14.1 & 2.4 & 7.5 & 12.4 & 1.4 & 6.2 & 9.8 & 1.1 & 4.8 & 8.1 & 0.6 & 4.5 & 8.6 \\ \hline \end{tabular}} \end{center} \end{table}
\begin{table} \caption{\label{tab:classif200-0} Classification rates (in percent) achieved by applying the $F$-type statistics and score statistics for a known type $\delta$ and time $\tau$ of intervention to INAR(1) series of length $n=200$ with an innovation outlier ($\delta=0$) of size $\kappa=3\sqrt{\lambda}$ at time $\tau$. The nominal significance levels are 1\%, 5\% or 10\%.} \begin{center} {\footnotesize
\begin{tabular}{rrr|rrr|rrr|rrr|rrr|rrr|rrr} \hline
&& & \multicolumn{9}{c|}{$F$-type statistics} & \multicolumn{9}{c}{Score statistics}\\ \hline
&& & \multicolumn{3}{c|}{$\tau=0.25n$}& \multicolumn{3}{c|}{$\tau=0.5n$}& \multicolumn{3}{c|}{$\tau=0.75n$} & \multicolumn{3}{c|}{$\tau=0.25n$}& \multicolumn{3}{c|}{$\tau=0.5n$}& \multicolumn{3}{c}{$\tau=0.75n$} \\ $\alpha$ & $\lambda$ & $\delta$ & 1\% & 5\% & 10\%& 1\% & 5\% & 10\%& 1\% & 5\% & 10\% & 1\% & 5\% & 10\%& 1\% & 5\% & 10\%& 1\% & 5\% & 10\%\\ \hline
0.3&2&0 & 40.6 & 51.6 & 56.4 & 40.7 & 52 & 56.1 & 40.7 & 51.7 & 56.4 & 47.4 & 58.9 & 63.6 & 47.9 & 58.0 & 62.5 & 47.1 & 58.9 & 63.7 \\ &&0.6 & 8.7 & 11.6 & 12.9 & 8.2 & 10.8 & 11.9 & 9.6 & 12.3 & 13.5 & 4.3 & 6.9 & 7.8 & 5.0 & 7.2 & 8.2 & 4.6 & 7.1 & 7.9 \\ &&0.8 & 1.5 & 2.5 & 3.0 & 1.8 & 2.6 & 3.0 & 1.7 & 2.6 & 2.9 & 0.8 & 1.9 & 2.4 & 1.0 & 1.6 & 1.9 & 0.4 & 1.2 & 1.8 \\ &&0.9 & 1.0 & 1.9 & 2.5 & 1.2 & 2.3 & 3.0 & 1.0 & 1.8 & 2.2 & 0.7 & 1.6 & 2.8 & 0.8 & 1.8 & 2.4 & 0.4 & 1.4 & 2.1 \\ &&1 & 0.7 & 2.4 & 3.8 & 0.7 & 2.4 & 3.8 & 0.6 & 2.0 & 3.2 & 0.6 & 1.9 & 3.1 & 0.3 & 1.8 & 3.1 & 0.4 & 1.5 & 2.6 \\ \hline &5&0 & 40.8 & 53.1 & 57.8 & 41.9 & 54.8 & 59.5 & 42.0 & 54.7 & 59.9 & 46.5 & 58.5 & 63.5 & 47.5 & 59.0 & 64.7 & 45.5 & 57.8 & 61.9 \\ &&0.6 & 8.8 & 11.7 & 12.9 & 8.7 & 11.7 & 12.9 & 8.6 & 12.1 & 13.3 & 6.2 & 8.2 & 9.6 & 6.0 & 8.8 & 10.1 & 5.8 & 8.6 & 10.3 \\ &&0.8 & 1.9 & 3.0 & 3.2 & 1.3 & 2.2 & 2.6 & 1.9 & 2.8 & 3.4 & 0.8 & 1.6 & 1.9 & 0.9 & 1.6 & 1.9 & 1.1 & 1.8 & 2.2 \\ &&0.9 & 1.1 & 2.3 & 3.0 & 1.2 & 2.3 & 2.9 & 1.1 & 2.2 & 2.7 & 0.4 & 1.7 & 2.6 & 0.6 & 1.7 & 2.0 & 0.5 & 1.3 & 1.8 \\ &&1 & 0.6 & 2.0 & 3.4 & 0.4 & 2.0 & 3.5 & 0.7 & 1.9 & 3.0 & 0.7 & 2.3 & 3.8 & 0.6 & 1.9 & 2.8 & 0.6 & 1.4 & 2.4 \\ \hline 0.6&2&0 & 34.5 & 46.3 & 50.8 & 35.2 & 46.6 & 51.6 & 35.0 & 45.8 & 51.1 & 46.1 & 55.3 & 60.1 & 48.2 & 58.2 & 63.0 & 46.0 & 56.3 & 61.0 \\ &&0.6 & 7.8 & 10.6 & 11.9 & 7.9 & 10.7 & 12.2 & 8.4 & 11.6 & 12.8 & 4.5 & 6.5 & 7.4 & 4.5 & 6.2 & 7.2 & 4.3 & 6.1 & 7.0 \\ &&0.8 & 2.0 & 3.0 & 3.5 & 2.1 & 3.1 & 3.6 & 1.9 & 2.9 & 3.6 & 0.9 & 1.6 & 1.9 & 0.4 & 0.9 & 1.2 & 1.0 & 1.9 & 2.6 \\ &&0.9 & 1.4 & 3.0 & 4.3 & 1.4 & 2.9 & 4.1 & 1.3 & 2.4 & 3.5 & 0.5 & 1.4 & 2.0 & 0.7 & 1.8 & 2.8 & 0.2 & 1.6 & 2.4 \\ &&1 & 0.8 & 2.8 & 4.6 & 0.7 & 2.7 & 4.2 & 1.1 & 2.8 & 4.4 & 0.4 & 2.0 & 3.4 & 0.2 & 1.6 & 2.8 & 0.3 & 1.4 & 3.3 \\ \hline &5&0 & 33.8 & 46.4 & 51.7 & 35.0 & 47.0 & 52.4 & 35.3 & 48.0 & 53.7 & 45.4 & 55.6 & 59.2 & 45.0 & 56.8 & 62.0 & 44.8 & 56.3 & 61.2 \\ &&0.6 & 7.9 & 11.4 & 12.8 & 8.2 & 11.8 & 13.7 & 8.3 & 11.4 & 12.8 & 5.4 & 7.3 & 8.8 & 4.9 & 7.6 & 8.6 & 5.3 & 7.7 & 8.8 \\ &&0.8 & 2.3 & 3.5 & 4.0 & 1.7 & 2.7 & 3.2 & 1.6 & 2.8 & 3.7 & 0.7 & 1.4 & 2.0 & 1.2 & 2.0 & 2.4 & 0.6 & 1.6 & 2.0 \\ &&0.9 & 1.3 & 2.8 & 3.7 & 1.5 & 3.2 & 4.3 & 1.1 & 2.5 & 3.3 & 0.6 & 1.8 & 2.7 & 0.6 & 1.6 & 2.1 & 0.8 & 1.8 & 2.6 \\ &&1 & 0.8 & 2.7 & 4.1 & 0.8 & 2.6 & 4.3 & 0.9 & 2.3 & 3.7 & 0.7 & 2.2 & 3.6 & 0.4 & 1.8 & 3.3 & 0.2 & 1.6 & 3.0 \\ \hline 0.9&2&0 & 31.1 & 42.2 & 47.0 & 31.0 & 43.2 & 47.9 & 31.8 & 43.2 & 48.1 & 45.0 & 52.5 & 58.1 & 45.0 & 52.5 & 58.0 & 47.1 & 55.1 & 60.2 \\ &&0.6 & 6.7 & 9.9 & 11.4 & 6.1 & 9.4 & 10.7 & 6.3 & 9.0 & 10.6 & 4.3 & 6.2 & 7.2 & 5.7 & 7.6 & 8.8 & 4.0 & 6.0 & 6.8 \\ &&0.8 & 1.8 & 2.9 & 3.5 & 1.6 & 2.6 & 3.1 & 2.0 & 3.1 & 3.8 & 0.9 & 1.8 & 2.5 & 0.7 & 1.5 & 2.0 & 0.7 & 1.0 & 1.6 \\ &&0.9 & 2.2 & 4.0 & 5.4 & 2.5 & 4.5 & 5.8 & 2.4 & 4.1 & 5.2 & 0.8 & 1.9 & 3.2 & 0.8 & 1.9 & 3.0 & 0.4 & 1.3 & 1.9 \\ &&1 & 1.6 & 4.9 & 7.4 & 1.9 & 5.1 & 7.3 & 1.9 & 4.8 & 7.1 & 0.8 & 2.2 & 3.8 & 0.4 & 1.9 & 3.8 & 0.4 & 1.9 & 3.6 \\ \hline &5&0 & 29.6 & 42 & 47.2 & 30.5 & 43.4 & 48.8 & 29.8 & 42.1 & 48.2 & 44.4 & 52.5 & 57.4 & 44.0 & 54.7 & 59.4 & 45.0 & 54.8 & 59.1 \\ &&0.6 & 7.3 & 10.3 & 11.9 & 6.8 & 10.0 & 11.7 & 6.9 & 10.3 & 12.0 & 5.8 & 8.0 & 8.9 & 5.0 & 7.3 & 8.5 & 5.5 & 7.4 & 8.6 \\ &&0.8 & 1.8 & 2.9 & 3.5 & 1.7 & 2.9 & 3.6 & 1.7 & 2.7 & 3.2 & 0.9 & 1.7 & 2.2 & 1.4 & 2.0 & 2.2 & 0.8 & 1.6 & 2.0 \\ &&0.9 & 2.3 & 4.5 & 6.0 & 2.5 & 4.7 & 5.9 & 2.0 & 3.8 & 4.9 &0.8 & 1.8 & 2.6 & 0.6 & 1.6 & 2.4 & 0.5 & 1.6 & 
2.1 \\ &&1 & 1.5 & 4.9 & 7.3 & 1.8 & 4.9 & 7.0 & 2.1 & 5.1 & 7.5 & 0.6 & 2.5 & 4.3 & 0.6 & 2.1 & 3.9 & 0.2 & 2.0 & 3.4 \\ \hline \end{tabular}} \end{center} \end{table}
\begin{table} \caption{\label{tab:classif200-6.1} Classification rates (in percent) achieved by applying the $F$-type statistics and score statistics for a known type $\delta$ and time $\tau$ of intervention to INAR(1) series of length $n=200$ with a transient shift $\delta=0.6$ of size $\kappa=2.5\sqrt{\lambda}$ at time $\tau$. The nominal significance levels are 1\%, 5\% or 10\%.} \begin{center} {\footnotesize
\begin{tabular}{rrr|rrr|rrr|rrr|rrr|rrr|rrr} \hline
&& & \multicolumn{9}{c|}{$F$-type statistics} & \multicolumn{9}{c}{Score statistics}\\ \hline
&& & \multicolumn{3}{c|}{$\tau=0.25n$}& \multicolumn{3}{c|}{$\tau=0.5n$}& \multicolumn{3}{c|}{$\tau=0.75n$} & \multicolumn{3}{c|}{$\tau=0.25n$}& \multicolumn{3}{c|}{$\tau=0.5n$}& \multicolumn{3}{c}{$\tau=0.75n$} \\ $\alpha$ & $\lambda$ & $\delta$ & 1\% & 5\% & 10\%& 1\% & 5\% & 10\%& 1\% & 5\% & 10\% & 1\% & 5\% & 10\%& 1\% & 5\% & 10\%& 1\% & 5\% & 10\%\\ \hline
0.3&2&0 & 14.5 & 18.3 & 20.1 & 14.8 & 18.6 & 20.4 & 14.9 & 18.8 & 20.7 & 22.9 & 27.8 & 29.0 & 22.8 & 26.4 & 27.9 & 21.4 & 25.2 & 26.7 \\ &&0.6 & 28.5 & 35.3 & 37.7 & 27.6 & 33.6 & 36.1 & 27.0 & 33.5 & 35.8 & 22.8 & 28.9 & 30.6 & 23.9 & 30.9 & 33.2 & 23.1 & 29.7 & 31.9 \\ &&0.8 & 11.3 & 14.7 & 16.1 & 12.1 & 15.5 & 16.9 & 11.2 & 14.9 & 16.7 & 7.6 & 10.8 & 12.6 & 7.8 & 11.2 & 13.2 & 8.8 & 12.6 & 14.6 \\ &&0.9 & 4.4 & 7.2 & 8.3 & 4.3 & 6.5 & 7.8 & 4.6 & 6.4 & 7.6 & 2.8 & 5.8 & 7.1 & 2.6 & 4.9 & 5.6 & 2.6 & 4.5 & 5.8 \\ &&1 & 0.6 & 2.2 & 3.5 & 0.5 & 1.7 & 2.4 & 0.8 & 2.2 & 3.1 & 0.2 & 1.1 & 2.4 & 0.8 & 1.8 & 2.8 & 0.6 & 1.6 & 2.5 \\ \hline &5&0 & 12.5 & 16.3 & 18.0 & 13.0 & 17.0 & 18.6 & 13.8 & 17.7 & 19.2 & 19.3 & 23.0 & 25.3 & 18.4 & 22.7 & 24.4 & 22.1 & 26.4 & 27.9 \\ &&0.6 & 29.1 & 36.3 & 38.4 & 29.6 & 36.4 & 39.0 & 30.2 & 36.5 & 39.2 & 27.1 & 34.5 & 37.1 & 25.9 & 33.1 & 35.5 & 26.5 & 32.7 & 35.1 \\ &&0.8 & 13.6 & 17.6 & 19.1 & 12.6 & 16.2 & 17.6 & 12.9 & 16.5 & 18.1 & 8.2 & 12.2 & 13.7 & 9.7 & 13.9 & 15.2 & 7.8 & 11.4 & 13.4 \\ &&0.9 & 4.4 & 6.7 & 8.0 & 4.4 & 6.7 & 7.6 & 4.1 & 6.3 & 7.2 & 2.8 & 4.8 & 6.0 & 2.8 & 5.3 & 6.4 & 3.2 & 5.8 & 7.2 \\ &&1 & 0.8 & 1.9 & 2.8 & 0.6 & 1.8 & 2.8 & 0.7 & 1.8 & 2.4 & 0.3 & 1.9 & 3.2 & 0.5 & 1.8 & 2.4 & 0.6 & 1.5 & 2.5 \\
\hline 0.6&2&0 & 12.5 & 17.1 & 19.4 & 13.1 & 17.8 & 19.9 & 13.2 & 17.5 & 19.4 & 23.1 & 27.1 & 29.1 & 23.8 & 28.1 & 29.9 & 22.9 & 27.2 & 29.1 \\ &&0.6 & 22.0 & 28.3 & 30.6 & 23.0 & 29.2 & 31.7 & 22.6 & 28.9 & 31.7 & 20.8 & 26.6 & 28.6 & 23.1 & 28.5 & 30.1 & 21.9 & 28.0 & 30.6 \\ &&0.8 & 11.7 & 15.4 & 17.4 & 11.2 & 15.0 & 16.5 & 11.4 & 15.7 & 17.3 & 7.0 & 10.8 & 12.6 & 7.6 & 10.4 & 11.9 & 7.2 & 10.4 & 12.2 \\ &&0.9 & 5.6 & 8.0 & 9.6 & 5.2 & 8.4 & 9.8 & 4.6 & 7.1 & 8.3 & 2.8 & 5.5 & 6.7 & 2.8 & 4.8 & 6.2 & 2.4 & 4.2 & 4.8 \\ &&1 & 0.7 & 2.5 & 4.1 & 0.5 & 2.2 & 3.4 & 1.0 & 2.7 & 3.6 & 0.5 & 1.8 & 3.0 & 0.3 & 1.7 & 2.6 & 0.9 & 2.1 & 2.9 \\
\hline &5&0 & 11.9 & 16.7 & 18.7 & 11.7 & 16 & 17.8 & 11.5 & 15.6 & 17.6 & 20.6 & 24.6 & 26.4 & 20.6 & 25.5 & 27.8 & 19.6 & 24.1 & 25.4 \\ &&0.6 & 23.3 & 30.6 & 33.6 & 22.6 & 30.4 & 33.3 & 24.9 & 32.3 & 35.5 & 22.9 & 29.6 & 32.2 & 23.8 & 29.9 & 32.2 & 24.4 & 30.9 & 34.0 \\ &&0.8 & 12.0 & 15.9 & 17.5 & 11.9 & 16.7 & 18.3 & 12.4 & 16.2 & 17.6 & 7.5 & 11.4 & 13.1 & 7.1 & 10.0 & 11.6 & 7.6 & 12.1 & 13.9 \\ &&0.9 & 5.1 & 7.9 & 9.4 & 5.6 & 8.7 & 9.8 & 4.4 & 6.9 & 8.0 & 2.9 & 4.6 & 5.8 & 3.2 & 6.0 & 7.2 & 2.4 & 4.8 & 5.8 \\ &&1 & 0.6 & 2.2 & 3.1 & 0.6 & 2.2 & 3.5 & 0.7 & 2.6 & 3.6 & 0.6 & 2.1 & 3.5 & 0.4 & 1.6 & 2.8 & 0.4 & 1.6 & 2.3 \\
\hline 0.9&2&0 & 11.4 & 16.1 & 17.6 & 11.3 & 15.5 & 17.3 & 11.5 & 15.7 & 18.0 & 21.5 & 24.4 & 26.9 & 22.9 & 26.5 & 28.9 & 21.5 & 24.0 & 27.2 \\ &&0.6 & 17.6 & 23.6 & 26.2 & 19.0 & 25.5 & 28.4 & 18.0 & 24.4 & 27.0 &23.9 & 27.8 & 29.8 & 22.4 & 27.8 & 29.8 & 22.1 & 27.1 & 28.6 \\ &&0.8 & 10.8 & 15.1 & 16.6 & 9.9 & 13.3 & 14.8 & 10.5 & 14.6 & 16.2 & 8.0 & 10.9 & 12.4 & 7.4 & 10.5 & 12.2 & 8.2 & 11.2 & 12.6 \\ &&0.9 & 8.4 & 12.0 & 13.5 & 7.8 & 11.2 & 13.2 & 7.7 & 10.7 & 12.4 & 2.8 & 4.5 & 5.6 & 2.4 & 4.1 & 5.2 & 2.2 & 4.3 & 6.0 \\ &&1 & 1.8 & 4.2 & 6.3 & 1.8 & 4.8 & 6.6 & 2.2 & 4.5 & 6.4 & 0.8 & 2.6 & 4.1 & 0.8 & 1.8 & 3.2 & 0.6 & 2.0 & 3.3 \\ \hline &5&0 & 10.0 & 14.8 & 16.7 & 10.4 & 14.8 & 16.8 & 9.4 & 13.1 & 15.2 & 19.9 & 23.6 & 25.9 & 19.2 & 24.4 & 26.9 & 18.6 & 22.1 & 23.8 \\ &&0.6 & 18.9 & 25.5 & 28.0 & 17.9 & 24.6 & 27.5 & 19.7 & 27.3 & 30.0 & 23.4 & 29.2 & 31.3 & 23.7 & 29.2 & 30.9 & 24.4 & 30.7 & 33.2 \\ &&0.8 & 10.3 & 14.2 & 15.9 & 10.7 & 14.9 & 16.4 & 10.4 & 14.3 & 15.8 & 8.8 & 12.6 & 14.6 & 7.4 & 10.8 & 12.3 & 8.2 & 12.9 & 14.4 \\ &&0.9 & 8.6 & 12.4 & 14.1 & 7.5 & 11.3 & 13.2 & 7.6 & 10.9 & 12.3 & 2.9 & 4.5 & 5.2 & 3.2 & 5.5 & 7.0 & 2.3 & 4.3 & 5.4 \\ &&1 & 1.8 & 4.6 & 6.2 & 2.0 & 5.0 & 6.8 & 2.7 & 4.8 & 6.5 & 0.8 & 2.4 & 3.8 & 0.6 & 2.0 & 3.5 & 0.5 & 1.8 & 2.8 \\ \hline \end{tabular}} \end{center} \end{table}
\begin{table} \caption{\label{tab:classif200-8.1} Classification rates (in percent) achieved by applying the $F$-type statistics and score statistics for a known type $\delta$ and time $\tau$ of intervention to INAR(1) series of length $n=200$ with a transient shift $\delta=0.8$ of size $\kappa=2\sqrt{\lambda}$ at time $\tau$. The nominal significance levels are 1\%, 5\% or 10\%.} \begin{center} {\footnotesize
\begin{tabular}{rrr|rrr|rrr|rrr|rrr|rrr|rrr} \hline
&& & \multicolumn{9}{c|}{$F$-type statistics} & \multicolumn{9}{c}{Score statistics}\\ \hline
&& & \multicolumn{3}{c|}{$\tau=0.25n$}& \multicolumn{3}{c|}{$\tau=0.5n$}& \multicolumn{3}{c|}{$\tau=0.75n$} & \multicolumn{3}{c|}{$\tau=0.25n$}& \multicolumn{3}{c|}{$\tau=0.5n$}& \multicolumn{3}{c}{$\tau=0.75n$} \\ $\alpha$ & $\lambda$ & $\delta$ & 1\% & 5\% & 10\%& 1\% & 5\% & 10\%& 1\% & 5\% & 10\% & 1\% & 5\% & 10\%& 1\% & 5\% & 10\%& 1\% & 5\% & 10\%\\ \hline
0.3&2&0& 6.5 & 8.5 & 9.5 & 6.8 & 8.4 & 9.4 & 6.3 & 8.0 & 9.2 & 10.8 & 12.8 & 13.7 & 10.6 & 12.5 & 13.1 & 12.1 & 14.3 & 15.0 \\ &&0.6 & 15.9 & 19.0 & 20.4 & 14.4 & 17.3 & 18.4 & 14.5 & 18.1 & 19.4 & 16.1 & 19.2 & 20.3 & 16.8 & 20.2 & 21.6 & 18.1 & 20.9 & 22.2 \\ &&0.8 & 24.2 & 28.9 & 31.0 & 24.9 & 30.2 & 31.8 & 24.0 & 29.5 & 31.2 & 18.6 & 24.7 & 27.1 & 18.4 & 24.3 & 26.8 & 19.0 & 23.8 & 25.9 \\ &&0.9 & 15.5 & 22.1 & 24.9 & 16.9 & 23.0 & 25.1 & 16.4 & 22.0 & 23.8 & 12.6 & 19.8 & 22.4 & 13.0 & 19.9 & 21.9 & 13.1 & 18.7 & 21.6 \\ &&1 & 0.5 & 1.4 & 2.3 & 0.7 & 1.7 & 2.5 & 1.3 & 2.5 & 3.3 & 0.4 & 1.1 & 2.3 & 0.6 & 1.6 & 2.4 & 0.6 & 1.6 & 2.7 \\
\hline &5&0 & 4.8 & 6.5 & 7.4 & 5.5 & 7.2 & 7.9 & 5.8 & 7.6 & 8.2 & 8.9 & 10.6 & 11.3 & 8.1 & 10.0 & 10.7 & 9.9 & 12.6 & 13.3 \\ &&0.6 & 15.1 & 18.7 & 19.9 & 15.4 & 19.1 & 20.5 & 14.9 & 18.4 & 19.8 & 15.6 & 19.6 & 20.9 & 16.6 & 21.0 & 22.1 & 17.6 & 20.9 & 22.1 \\ &&0.8 & 26.6 & 32.1 & 33.8 & 25.5 & 31.6 & 33.5 & 26.1 & 31.8 & 33.7 & 22.3 & 28.6 & 30.6 & 20.6 & 26.7 & 28.8 & 21.6 & 28.1 & 30.3 \\ &&0.9 & 17.6 & 23.9 & 25.8 & 17.4 & 23.0 & 24.6 & 16.8 & 22.3 & 24.1 & 13.2 & 19.4 & 21.9 & 14.6 & 20.9 & 23.8 & 12.2 & 18.3 & 20.4 \\ &&1 & 0.5 & 1.6 & 2.4 & 0.7 & 2.0 & 2.8 & 1.4 & 2.6 & 3.3 & 0.3 & 1.4 & 2.5 & 0.8 & 1.8 & 2.5 & 0.8 & 2.4 & 2.8 \\ \hline 0.6&2&0 & 5.9 & 8.4 & 9.8 & 5.8 & 7.9 & 9.3 & 5.4 & 7.8 & 8.9 & 12.4 & 15.0 & 16.4 & 12.0 & 14.1 & 15.4 & 12.6 & 14.2 & 15.2 \\ &&0.6 & 13.4 & 17.3 & 18.4 & 12.1 & 15.4 & 16.5 & 13.1 & 16.8 & 18.2 & 16.5 & 19.4 & 20.2 & 18.2 & 22.0 & 23.2 & 15.8 & 19.3 & 20.8 \\ &&0.8 & 20.0 & 25.3 & 27.4 & 20.5 & 26.5 & 28.6 & 20.8 & 26.4 & 28.9 & 15.3 & 20.9 & 22.2 & 15.3 & 20.7 & 23.1 & 15.6 & 20.6 & 22.8 \\ &&0.9 & 16.0 & 22.2 & 24.6 & 16.8 & 23.4 & 26.3 & 15.8 & 21.4 & 23.6 & 10.8 & 17.6 & 20.8 & 10.3 & 15.8 & 18.6 & 10.8 & 16.0 & 18.6 \\ &&1 & 0.5 & 2.0 & 3.2 & 1.0 & 2.3 & 3.4 & 1.8 & 3.5 & 4.6 & 0.5 & 1.9 & 3.2 & 0.6 & 1.8 & 2.6 & 1.4 & 2.8 & 4.0 \\ \hline &5&0 & 4.9 & 7.0 & 8.1 & 5.0 & 7.4 & 8.2 & 5.0 & 7.2 & 8.4 & 10.1 & 12.2 & 13.2 & 9.2 & 11.6 & 12.4 & 9.4 & 10.8 & 11.8 \\ &&0.6 & 12.8 & 17.1 & 18.7 & 14.0 & 18.1 & 19.5 & 13.3 & 17.1 & 18.7 & 16.2 & 20.1 & 21.9 & 16.6 & 20.9 & 22.4 & 17.6 & 21.1 & 22.6 \\ &&0.8 & 22.4 & 28.9 & 31.1 & 21.6 & 28.3 & 30.2 & 21.5 & 27.6 & 30.0 & 18.1 & 24.6 & 26.9 & 17.5 & 23.2 & 25.7 & 19.2 & 26.2 & 28.6 \\ &&0.9 & 16.1 & 22.5 & 25.0 & 16.6 & 22.9 & 25.3 & 16.1 & 22.0 & 24.2 & 10.6 & 17.3 & 19.9 & 11.8 & 18.5 & 20.4 & 10.1 & 15.8 & 17.9 \\ &&1 & 0.5 & 2.0 & 3.4 & 0.6 & 2.1 & 2.9 & 1.7 & 3.2 & 4.1 & 0.6 & 1.8 & 2.8 & 0.5 & 1.6 & 2.6 & 1.0 & 2.4 & 3.4 \\ \hline 0.9&2&0 & 5.0 & 7.5 & 8.5 & 5.7 & 8.2 & 9.7 & 5.0 & 7.5 & 9.1 &11.4 & 13.2 & 14.6 & 12.1 & 14.3 & 15.9 & 11.5 & 13.5 & 14.2 \\ &&0.6 & 10.8 & 14.4 & 15.7 & 10.5 & 14.1 & 15.6 & 10.6 & 14.1 & 15.6 & 16.9 & 18.9 & 20.1 & 18.0 & 20.9 & 22.0 & 16.8 & 19.5 & 20.6 \\ &&0.8 & 16.5 & 21.6 & 23.5 & 14.5 & 19.7 & 21.6 & 14.5 & 19.1 & 21.1 & 18.1 & 22.5 & 24.1 & 16.4 & 21.4 & 23.5 & 17.3 & 21.7 & 23.9 \\ &&0.9 & 20.7 & 28.1 & 31.2 & 19.9 & 27.1 & 30.3 & 19.6 & 26.6 & 29.3 & 10.6 & 16.0 & 18.4 & 9.7 & 14.3 & 17.4 & 10.5 & 15.8 & 18.1 \\ &&1 & 1.5 & 3.5 & 5.0 & 1.8 & 3.9 & 5.5 & 3.4 & 5.9 & 7.5 & 1.0 & 2.8 & 4.0 & 0.8 & 1.8 & 2.8 & 1.0 & 2.6 & 4.0 \\
\hline &5&0 & 4.4 & 7.0 & 8.2 & 4.2 & 6.4 & 7.5 & 4.5 & 6.4 & 7.7 & 9.9 & 12.2 & 13.2 & 9.6 & 11.9 & 13.2 & 8.9 & 10.8 & 11.9 \\ &&0.6 & 9.5 & 13.5 & 15.2 & 10.3 & 14.3 & 15.6 & 10.2 & 14.1 & 15.6 & 14.8 & 18.4 & 20.2 & 14.7 & 18.6 & 19.9 & 16.1 & 19.6 & 21.2 \\ &&0.8 & 16.1 & 21.5 & 23.2 & 15.9 & 21.1 & 23.3 & 16.3 & 21.6 & 23.5 & 20.0 & 25.1 & 27.2 & 17.9 & 23.0 & 25.2 & 20.1 & 26.1 & 28.8 \\ &&0.9 & 21.3 & 29.9 & 32.7 & 20.6 & 28.0 & 31.3 & 20.6 & 27.6 & 30.2 & 11.2 & 17.1 & 19.4 & 12.8 & 18.2 & 20.2 & 10.6 & 15.8 & 17.8 \\ &&1 & 1.3 & 3.6 & 5.1 & 2.2 & 4.3 & 6.0 & 3.3 & 5.9 & 7.1 & 0.7 & 2.4 & 3.3 & 0.6 & 2.2 & 3.2 & 0.9 & 2.5 & 3.4 \\ \hline \end{tabular}} \end{center} \end{table}
\begin{table}
\caption{\label{tab:classif200-9.1} Classification rates (in percent) achieved by applying the $F$-type statistics and score statistics for a known type $\delta$ and time $\tau$ of intervention to INAR(1) series of length $n=200$ with a transient shift $\delta=0.9$ of size $\kappa=1.5\sqrt{\lambda}$ at time $\tau$. The nominal significance levels are 1\%, 5\% or 10\%.} \begin{center} {\footnotesize
\begin{tabular}{rrr|rrr|rrr|rrr|rrr|rrr|rrr} \hline
&& & \multicolumn{9}{c|}{$F$-type statistics} & \multicolumn{9}{c}{Score statistics}\\ \hline
&& & \multicolumn{3}{c|}{$\tau=0.25n$}& \multicolumn{3}{c|}{$\tau=0.5n$}& \multicolumn{3}{c|}{$\tau=0.75n$} & \multicolumn{3}{c|}{$\tau=0.25n$}& \multicolumn{3}{c|}{$\tau=0.5n$}& \multicolumn{3}{c}{$\tau=0.75n$} \\ $\alpha$ & $\lambda$ & $\delta$ & 1\% & 5\% & 10\%& 1\% & 5\% & 10\%& 1\% & 5\% & 10\% & 1\% & 5\% & 10\%& 1\% & 5\% & 10\%& 1\% & 5\% & 10\%\\ \hline
0.3&2&0 & 3.6 & 5.1 & 6.1 & 3.5 & 4.9 & 5.8 & 3.7 & 5.3 & 6.1 & 6.0 & 7.6 & 8.4 & 6.3 & 7.6 & 8.4 & 6.8 & 8.1 & 9.0 \\ &&0.6 & 5.3 & 7.0 & 7.5 & 5.1 & 6.6 & 7.1 & 5.7 & 7.0 & 7.7 & 7.6 & 9.4 & 9.8 & 7.3 & 9.2 & 9.8 & 6.2 & 7.3 & 7.8 \\ &&0.8 & 13.7 & 16.5 & 17.6 & 14.0 & 16.9 & 17.7 & 13.1 & 16.0 & 17.2 & 11.8 & 14.8 & 16.7 & 13.9 & 16.9 & 17.8 & 11.1 & 13.8 & 15.0 \\ &&0.9 & 37.4 & 48.5 & 53.0 & 37.1 & 48.1 & 52.6 & 36.0 & 45.7 & 49.0 & 27.9 & 41.9 & 47.1 & 30.5 & 43.0 & 47.5 & 32.1 & 43.3 & 47.5 \\ &&1 & 0.4 & 1.7 & 2.5 & 1.1 & 2.7 & 3.8 & 3.8 & 6.6 & 7.8 & 0.4 & 1.2 & 2.2 & 1.0 & 2.3 & 3.2 & 3.1 & 5.5 & 6.8 \\
\hline &5&0 & 2.8 & 4.1 & 4.9 & 3.0 & 4.2 & 4.8 & 2.7 & 3.9 & 4.5 & 5.6 & 6.6 & 7.0 & 4.5 & 5.8 & 6.1 & 5.6 & 7.4 & 8.3 \\ &&0.6 & 5.3 & 7.0 & 7.4 & 5.3 & 6.9 & 7.5 & 4.9 & 6.1 & 6.7 & 5.2 & 7.0 & 7.5 & 5.4 & 6.7 & 7.2 & 6.2 & 8.0 & 8.6 \\ &&0.8 & 14.3 & 17.4 & 18.7 & 14.5 & 17.9 & 19.1 & 14.4 & 17.6 & 18.9 & 12.4 & 16.0 & 17.4 & 13.5 & 17.6 & 18.9 & 12.3 & 16.1 & 17.6 \\ &&0.9 & 39.1 & 51.5 & 56.0 & 38.6 & 50.1 & 54.1 & 37.9 & 48.8 & 52.5 & 34.6 & 48.1 & 52.5 & 33.9 & 47.4 & 52.7 & 30.7 & 43.0 & 47.8 \\ &&1 & 0.5 & 1.5 & 2.3 & 1.2 & 2.8 & 3.7 & 3.7 & 6.3 & 7.2 & 0.4 & 1.8 & 3.0 & 1.1 & 2.4 & 3.4 & 3.8 & 6.2 & 7.1 \\
\hline 0.6&2&0 & 3.6 & 5.7 & 6.7 & 3.5 & 5.4 & 6.3 & 3.6 & 5.4 & 6.5 & 7.8 & 9.8 & 10.6 & 8.3 & 9.8 & 10.8 & 6.4 & 8.2 & 8.7 \\ &&0.6 & 4.9 & 6.6 & 7.3 & 5.1 & 7.0 & 7.9 & 4.4 & 6.2 & 6.9 & 7.6 & 9.9 & 10.4 & 8.0 & 10.1 & 10.6 & 7.8 & 9.4 & 10.1 \\ &&0.8 & 11.9 & 15.3 & 16.8 & 11.9 & 15.5 & 16.7 & 11.8 & 15.2 & 16.5 & 11.6 & 14.9 & 16.4 & 11.3 & 14.7 & 15.8 & 11.1 & 14.4 & 15.8 \\ &&0.9 & 32.4 & 44.5 & 49.3 & 30.6 & 42.4 & 47.6 & 29.9 & 41.3 & 45.5 & 22.8 & 35.5 & 40.7 & 23.9 & 34.8 & 39.9 & 22.7 & 33.6 & 38.0 \\ &&1 & 0.7 & 2.5 & 4.0 & 1.5 & 3.6 & 4.9 & 4.0 & 7.3 & 9.0 &0.6 & 2.0 & 3.4 & 1.1 & 2.8 & 4.2 & 3.8 & 7.4 & 9.6 \\
\hline &5&0 & 3.2 & 4.9 & 5.6 & 2.8 & 4.4 & 5.2 & 2.6 & 4.4 & 5.2 &5.6 & 7.2 & 8.3 & 5.5 & 7.2 & 7.8 & 4.3 & 6.3 & 6.7 \\ &&0.6 & 4.7 & 6.6 & 7.4 & 4.4 & 6.0 & 6.8 & 4.7 & 6.5 & 7.4 & 7.5 & 8.9 & 9.5 & 6.8 & 8.6 & 9.4 & 6.8 & 8.7 & 9.5 \\ &&0.8 & 11.8 & 15.6 & 16.8 & 11.9 & 15.7 & 17.0 & 11.8 & 15.0 & 16.6 & 12.0 & 15.7 & 17.1 & 12.4 & 16.1 & 17.8 & 12.8 & 16.2 & 17.8 \\ &&0.9 & 34.0 & 47.5 & 51.9 & 33 & 46.4 & 51.4 & 31.2 & 43.5 & 47.5 & 26.7 & 39.6 & 44.7 & 25.9 & 38.8 & 44.0 & 26.8 & 37.6 & 42.2 \\ &&1 & 0.6 & 2.0 & 2.8 & 1.5 & 3.8 & 5.3 & 4.1 & 7.1 & 8.9 & 0.6 & 2.4 & 3.2 & 1.1 & 3.0 & 4.7 & 3.1 & 6.7 & 8.4 \\
\hline 0.9&2&0 & 2.9 & 4.8 & 5.8 & 2.9 & 4.8 & 5.9 & 3.0 & 4.9 & 6.0 & 7.0 & 9.2 & 10.2 & 7.5 & 9.2 & 10.9 & 7.6 & 9.5 & 10.4 \\ &&0.6 & 3.8 & 5.7 & 6.4 & 4.1 & 5.9 & 6.6 & 4.1 & 6.0 & 6.7 & 8.2 & 9.3 & 9.7 & 8.8 & 10.3 & 11.1 & 7.2 & 8.6 & 9.3 \\ &&0.8 & 8.4 & 11.4 & 12.8 & 8.2 & 11.0 & 12.2 & 7.4 & 10.5 & 11.8 & 12.9 & 15.6 & 16.8 & 12.3 & 16.4 & 17.4 & 12.8 & 16.0 & 17.1 \\ &&0.9 & 32.6 & 45.7 & 50.9 & 31.8 & 45.0 & 50.0 & 29.6 & 40.7 & 45.2 & 23.9 & 35.0 & 39.3 & 22.2 & 32.0 & 37.4 & 24.2 & 33.2 & 37.2 \\ &&1 & 1.8 & 4.4 & 5.8 & 2.4 & 5.7 & 7.5 & 5.9 & 10.3 & 12.4 & 1.1 & 3.1 & 4.8 & 1.0 & 2.6 & 3.8 & 2.3 & 5.8 & 7.7 \\
\hline &5&0 & 2.2 & 4.1 & 5.1 & 2.5 & 4.4 & 5.6 & 2.2 & 4.1 & 5.1 & 5.8 & 7.8 & 8.8 & 6.2 & 8.1 & 8.9 & 4.8 & 6.4 & 7.3 \\ &&0.6 & 3.7 & 5.6 & 6.5 & 4.0 & 5.8 & 6.3 & 3.9 & 5.4 & 6.2 & 7.5 & 8.3 & 8.6 & 6.3 & 8.0 & 9.0 & 7.0 & 8.7 & 9.3 \\ &&0.8 & 7.5 & 10.4 & 11.8 & 7.4 & 10.9 & 12.0 & 8.0 & 11.4 & 12.6 &12.9 & 16.8 & 18.4 & 12.5 & 15.9 & 17.1 & 12.9 & 16.5 & 17.9 \\ &&0.9 & 34.2 & 47.3 & 53.4 & 32.6 & 46.6 & 51.1 & 30.1 & 41.4 & 46.3 & 26.1 & 37.4 & 42.0 & 26.0 & 36.2 & 41.5 & 25.3 & 36.2 & 40.1 \\ &&1 & 1.6 & 3.9 & 5.5 & 3.1 & 6.3 & 8.2 & 6.9 & 11.2 & 12.9 & 0.9 & 2.8 & 3.6 & 0.9 & 2.9 & 4.6 & 3.0 & 5.8 & 7.6 \\
\hline \end{tabular}} \end{center} \end{table}
\begin{table} \caption{\label{tab:classif200-1} Classification rates (in percent) achieved by applying the $F$-type statistics and score statistics for a known type $\delta$ and time $\tau$ of intervention to INAR(1) series of length $n=200$ with a permanent shift $\delta=1$ of size $\kappa=\sqrt{\lambda}$ at time $\tau$. The nominal significance levels are 1\%, 5\% or 10\%.} \begin{center} {\footnotesize
\begin{tabular}{rrr|rrr|rrr|rrr|rrr|rrr|rrr} \hline
&& & \multicolumn{9}{c|}{$F$-type statistics} & \multicolumn{9}{c}{Score statistics}\\ \hline
&& & \multicolumn{3}{c|}{$\tau=0.25n$}& \multicolumn{3}{c|}{$\tau=0.5n$}& \multicolumn{3}{c|}{$\tau=0.75n$} & \multicolumn{3}{c|}{$\tau=0.25n$}& \multicolumn{3}{c|}{$\tau=0.5n$}& \multicolumn{3}{c}{$\tau=0.75n$} \\ $\alpha$ & $\lambda$ & $\delta$ & 1\% & 5\% & 10\%& 1\% & 5\% & 10\%& 1\% & 5\% & 10\% & 1\% & 5\% & 10\%& 1\% & 5\% & 10\%& 1\% & 5\% & 10\%\\ \hline
0.3&2&0& 0.1 & 0.1 & 0.1 & 0.0 & 0.0 & 0.0 & 0.3 & 0.3 & 0.3 & 0.4 & 0.5 & 0.5 & 0.2 & 0.2 & 0.2 & 0.8 & 0.8 & 0.8 \\ &&0.6 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & 0.2 & 0.3 & 0.0 & 0.0 & 0.0 & 0.4 & 0.5 & 0.5 \\ &&0.8 & 0.1 & 0.1 & 0.1 & 0.0 & 0.0 & 0.0 & 0.1 & 0.2 & 0.2 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.1 & 0.1 & 0.1 \\ &&0.9 & 0.2 & 0.3 & 0.3 & 0.2 & 0.2 & 0.2 & 1.9 & 2.0 & 2.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 1.8 & 1.8 & 1.8 \\ &&1 & 95.4 & 98.8 & 99.2 & 99.2 & 99.6 & 99.6 & 95.7 & 97.0 & 97.3 & 96.9 & 98.6 & 99.0 & 99.5 & 99.7 & 99.7 & 95.1 & 96.3 & 96.8 \\
\hline &5&0 & 0.1 & 0.1 & 0.1 & 0.0 & 0.0 & 0.0 & 0.2 & 0.2 & 0.2 & 0.1 & 0.1 & 0.1 & 0.0 & 0.0 & 0.0 & 0.3 & 0.3 & 0.3 \\ &&0.6 & 0.0 & 0.0 & 0.0 & 0 .0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 \\ &&0.8 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.1 & 0.1 & 0.1 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.1 & 0.1 & 0.1 \\ &&0.9 & 0.1 & 0.1 & 0.1 & 0.2 & 0.2 & 0.2 & 1.4 & 1.4 & 1.4 & 0.0 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & 1.0 & 1.1 & 1.1 \\ &&1 & 97.4 & 99.4 & 99.6 & 99.6 & 99.8 & 99.8 & 97.1 & 98.1 & 98.2 & 98.2 & 99.4 & 99.5 & 99.5 & 99.8 & 99.8 & 96.9 & 98.2 & 98.3 \\
\hline 0.6&2&0 & 0.2 & 0.3 & 0.4 & 0.2 & 0.2 & 0.2 & 0.4 & 0.4 & 0.5 & 0.6 & 0.6 & 0.6 & 0.6 & 0.6 & 0.6 & 1.0 & 1.1 & 1.1 \\ &&0.6 & 0.1 & 0.2 & 0.2 & 0.0 & 0.0 & 0.0 & 0.2 & 0.3 & 0.3 & 0.1 & 0.2 & 0.2 & 0.4 & 0.4 & 0.4 & 0.8 & 0.8 & 0.8 \\ &&0.8 & 0.2 & 0.2 & 0.2 & 0.1 & 0.1 & 0.1 & 0.2 & 0.2 & 0.2 & 0.0 & 0.0 & 0.1 & 0.1 & 0.1 & 0.1 & 0.3 & 0.3 & 0.3 \\ &&0.9 & 0.2 & 0.3 & 0.4 & 0.5 & 0.6 & 0.6 & 3.0 & 3.4 & 3.4 & 0.2 & 0.4 & 0.4 & 0.4 & 0.5 & 0.5 & 2.8 & 3.0 & 3.0 \\ &&1 & 88.3 & 96.7 & 97.9 & 96.9 & 98.8 & 99.0 & 91.3 & 94.7 & 95.2 & 92.5 & 97.2 & 97.9 & 97.4 & 98.2 & 98.3 & 90.5 & 93.9 & 94.3 \\
\hline &5&0 & 0.2 & 0.3 & 0.3 & 0.1 & 0.1 & 0.1 & 0.2 & 0.3 & 0.3 &0.3 & 0.3 & 0.4 & 0.2 & 0.2 & 0.2 & 0.6 & 0.6 & 0.6 \\ &&0.6 & 0.1 & 0.2 & 0.2 & 0.0 & 0.1 & 0.1 & 0.1 & 0.2 & 0.2 & 0.2 & 0.3 & 0.3 & 0.3 & 0.3 & 0.3 & 0.2 & 0.2 & 0.2 \\ &&0.8 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & 0.2 & 0.2 & 0.0 & 0.1 & 0.1 & 0.0 & 0.0 & 0.0 & 0.1 & 0.1 & 0.1 \\ &&0.9 & 0.2 & 0.3 & 0.4 & 0.3 & 0.3 & 0.3 & 1.9 & 2.1 & 2.1 & 0.3 & 0.4 & 0.5 & 0.1 & 0.1 & 0.1 & 2.3 & 2.4 & 2.5 \\ &&1 & 92.3 & 97.7 & 98.4 & 98.2 & 99.3 & 99.4 & 93.5 & 96.4 & 96.8 & 94.5 & 97.7 & 98.3 & 98.7 & 99.2 & 99.3 & 93.9 & 95.9 & 96.3 \\
\hline 0.9&2&0 & 0.6 & 1.1 & 1.4 & 0.7 & 0.9 & 1.0 & 0.8 & 1.0 & 1.1 & 1.5 & 1.5 & 1.5 & 1.4 & 1.4 & 1.4 & 1.6 & 1.6 & 1.6 \\ &&0.6 & 0.3 & 0.6 & 0.6 & 0.3 & 0.4 & 0.5 & 0.6 & 0.8 & 0.8 & 0.5 & 0.6 & 0.6 & 0.4 & 0.4 & 0.4 & 0.7 & 0.7 & 0.7 \\ &&0.8 & 0.3 & 0.6 & 0.6 & 0.2 & 0.2 & 0.3 & 0.3 & 0.4 & 0.4 & 0.2 & 0.3 & 0.3 & 0.3 & 0.3 & 0.3 & 0.5 & 0.7 & 0.7 \\ &&0.9 & 0.5 & 0.8 & 1.0 & 1.0 & 1.4 & 1.5 & 2.8 & 3.4 & 3.5 &0.8 & 0.9 & 0.9 & 0.7 & 0.7 & 0.7 & 6.0 & 6.4 & 6.4 \\ &&1 & 66.5 & 88.3 & 93.3 & 82.2 & 94.5 & 96.0 & 80.7 & 91.3 & 93.1 & 89.2 & 94.7 & 95.8 & 94.8 & 96.7 & 97.0 & 84.4 & 88.7 & 89.7 \\
\hline &5&0 & 0.4 & 0.8 & 1.0 & 0.4 & 0.6 & 0.6 & 0.5 & 0.6 & 0.7 & 0.7 & 0.9 & 1.0 & 0.6 & 0.6 & 0.6 & 0.8 & 0.8 & 0.9 \\ &&0.6 & 0.3 & 0.6 & 0.6 & 0.3 & 0.4 & 0.4 & 0.4 & 0.5 & 0.5 & 0.3 & 0.4 & 0.4 & 0.2 & 0.2 & 0.2 & 0.4 & 0.4 & 0.4 \\ &&0.8 & 0.2 & 0.3 & 0.3 & 0.2 & 0.3 & 0.3 & 0.3 & 0.3 & 0.3 & 0.1 & 0.2 & 0.2 & 0.2 & 0.2 & 0.2 & 0.3 & 0.3 & 0.3 \\ &&0.9 & 0.4 & 0.7 & 0.8 & 0.9 & 1.2 & 1.2 & 2.6 & 3.2 & 3.3 & 0.4 & 0.6 & 0.6 & 0.5 & 0.6 & 0.6 & 4.0 & 4.4 & 4.5 \\ &&1 & 71.1 & 91.0 & 94.8 & 85.6 & 95.8 & 97 & 84.4 & 93.4 & 94.6 & 89.8 & 95.7 & 96.8 & 96.3 & 98.1 & 98.3 & 89.8 & 92.8 & 93.2 \\
\hline \end{tabular}} \end{center} \end{table} \end{landscape}
\subsection*{3. Simulation results for the INAR(2) process with Poisson innovations}
\setcounter{table}{0} \renewcommand{\thetable}{SM3.\arabic{table}}
\setcounter{figure}{0} \renewcommand{\thefigure}{SM3.\arabic{figure}}
In this section we study the performance of the $F$-type statistic (\ref{eq:ftest}) for the detection of outliers in an INAR(2) model with Poisson innovations. For $p=2$, the contaminated model (\ref{eq:cont-inarp}) takes the form $$ Y_t=\alpha_1\circ Y_{t-1}+\alpha_2\circ Y_{t-2}+e_t +\sum_{j=1}^J U_{t,j}, \quad t\in\mathbb{N},$$ where $e_t\sim Pois(\lambda)$ and $(U_{t,j} : t\in\mathbb{N})$, $j=1,\ldots,J$ are independent random variables such that $U_{t,j}\equiv 0$ for $t=0,\ldots,\tau_j-1$ and $U_{t,j}\sim Pois(\kappa_j\delta_j^{t-\tau_j})$ for $t=\tau_j,\tau_j+1,\ldots$.
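
For illustration, the following minimal Python sketch (not part of the original simulation code) generates a contaminated Poisson INAR(2) series of this form; the binomial thinning $\alpha\circ Y$ is realized by a binomial draw, and the function name \texttt{simulate\_contaminated\_inar2} as well as the zero pre-sample values are choices made here for exposition only.
\begin{verbatim}
import numpy as np

def simulate_contaminated_inar2(n, alpha1, alpha2, lam,
                                taus, kappas, deltas, seed=None):
    """Simulate Y_t = alpha1 o Y_{t-1} + alpha2 o Y_{t-2} + e_t + sum_j U_{t,j}."""
    rng = np.random.default_rng(seed)
    y = np.zeros(n, dtype=int)
    for t in range(n):
        y1 = y[t - 1] if t >= 1 else 0          # pre-sample values taken as 0
        y2 = y[t - 2] if t >= 2 else 0
        thin = rng.binomial(y1, alpha1) + rng.binomial(y2, alpha2)  # binomial thinning
        u = sum(rng.poisson(k * d ** (t - tau))                     # intervention U_{t,j}
                for tau, k, d in zip(taus, kappas, deltas) if t >= tau)
        y[t] = thin + rng.poisson(lam) + u
    return y

# e.g. two transient shifts of size kappa = 10 at times 50 and 150,
# as in the illustrative example discussed later in this section
y = simulate_contaminated_inar2(200, 0.3, 0.2, 3.0, taus=[50, 150],
                                kappas=[10, 10], deltas=[0.6, 0.9], seed=1)
\end{verbatim}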
Tables \ref{tab3:sizes100} and \ref{tab3:sizes200} report empirical rejection rates when testing for an intervention effect of known type $\delta\in \{0,0.8,1\}$ at a known time point $\tau\in \{0.25n,0.5n,0.75n\}$, using the 90\%, 95\% or 99\% quantile of the $\chi_1^2$-distribution as critical value for the $F$-type statistic.
The empirical rejection rates are obtained by analyzing 5000 time series of length $n\in \{100,200\}$ for each of the different INAR(2) models with $(\alpha_1,\alpha_2)\in\{(0.5,0.3), (0.3,0.4), (0.1,0.1)\}$ and $\lambda\in \{2,5\}$.
The $F$-type statistics for innovation outliers ($\delta=0$) achieve empirical rejection rates close to the nominal significance levels of 1\%, 5\% and 10\% already for series of length $n=100$, irrespective of the time $\tau$. The results are somewhat worse for larger values of $\delta$, particularly if the mean $\lambda/(1-\alpha_1-\alpha_2)$ of the series is large.
The results improve with increasing series length $n$. For $n=200$, all $F$-type statistics hold the nominal significance level well, although they are slightly oversized when testing for a transient shift or a permanent level shift.
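
In principle, these empirical sizes can be reproduced by a straightforward Monte Carlo loop. The sketch below assumes a user-supplied implementation \texttt{f\_stat(y, tau, delta)} of the $F$-type statistic (\ref{eq:ftest}), which is not reproduced here, and reuses the hypothetical \texttt{simulate\_contaminated\_inar2} from the sketch above; clean series are obtained by passing empty intervention lists.
\begin{verbatim}
import numpy as np
from scipy.stats import chi2

def empirical_size(f_stat, n, alpha1, alpha2, lam, tau, delta,
                   level=0.05, reps=5000, seed=2):
    """Monte Carlo estimate of the rejection rate under the null of no intervention."""
    crit = chi2.ppf(1 - level, df=1)   # chi^2_1 critical value, e.g. 3.84 at the 5% level
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        y = simulate_contaminated_inar2(n, alpha1, alpha2, lam,
                                        taus=[], kappas=[], deltas=[],
                                        seed=int(rng.integers(1 << 31)))
        hits += f_stat(y, tau=tau, delta=delta) > crit
    return 100.0 * hits / reps         # empirical size in percent
\end{verbatim}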
Next we examine the empirical power of these approximate significance tests for a single intervention effect at a known time point. For this purpose we analyze 2000 time series of length $n=200$ per simulation scenario. The true size of the intervention effect is scaled to be $\kappa= 3\sqrt{\lambda}$, $\kappa= 2\sqrt{\lambda}$ or $\kappa= \sqrt{\lambda}$ for $\delta=0$, $\delta=0.8$ or $\delta=1$, respectively, since the total effect on the series increases with $\delta$. Table \ref{tab3:power200-0} reports the empirical power of the tests for the different types of intervention at a given time point $\tau\in\{0.25n,0.5n,0.75n\}$ when an innovation outlier occurs at the time point tested. We observe that the $F$-type statistics for an innovation outlier possess larger power than the corresponding tests for other values of $\delta$. Nevertheless, the tests using a misspecified value of $\delta$ also have some power, particularly those using a value of $\delta$ not very far from the true one. Similar conclusions are drawn in the case of a transient shift (see Table \ref{tab3:power200-08}).
Moreover, a permanent shift of a certain height is detected best by the test with the correctly specified $\delta=1$ if it occurs in the center of the series (Table \ref{tab3:power200-1}).
Tables \ref{tab3:classif200}--\ref{tab3:classif200-1} report the classification results for situations in which we try to identify the type of an intervention at a known time point. For this purpose, we compare the $F$-type statistics for a selection of values of $\delta$ and classify a detected intervention according to the $F$-type statistic with the largest value. We investigate the empirical detection rates of this classification rule by analyzing 2000 time series of length $n=200$ per simulation scenario. We use the same parameter configurations for $\alpha_1$, $\alpha_2$ and $\lambda$ as before and further set $\tau\in\{50,100,150\}$. We also consider
$\delta\in\{0,0.6,0.8,0.9,1\}$ and we scale the true size of the intervention effect to be $\kappa=3\sqrt{\lambda}$, $\kappa=2.5\sqrt{\lambda}$, $\kappa=2\sqrt{\lambda}$, $\kappa=1.5\sqrt{\lambda}$ or $\kappa=\sqrt{\lambda}$ for $\delta=0$, $\delta=0.6$, $\delta=0.8$, $\delta=0.9$ and $\delta=1$, respectively.
Overall, the type of an intervention is correctly identified in most cases, with occasional misclassification, especially for transient shifts of moderate size.
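
For clarity, the classification rule itself can be sketched as follows, again assuming a hypothetical implementation \texttt{f\_stat} of (\ref{eq:ftest}): among the candidate values of $\delta$, a detected intervention is assigned to the value whose $F$-type statistic is largest.
\begin{verbatim}
def classify_intervention(f_stat, y, tau,
                          deltas=(0.0, 0.6, 0.8, 0.9, 1.0), crit=3.84):
    """Classify a detected intervention by the delta with the largest F-type statistic."""
    stats = {d: f_stat(y, tau=tau, delta=d) for d in deltas}
    best = max(stats, key=stats.get)
    # report no intervention if none of the statistics exceeds the critical value
    return best if stats[best] > crit else None
\end{verbatim}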
Next we look at situations where we know neither the type nor the time of a possible intervention. In this scenario, we compute the test statistics for a set of candidate time points $\tau$ and then take the maximum of these statistics.
First we consider the results obtained from analyzing 10000 clean INAR(2) series for the different parameter settings and series lengths $n\in\{100,200\}$. Figures~\ref{fig:maxstatistics0-100-INAR2} and~\ref{fig:maxstatistics0-200-INAR2} display boxplots of these maximum statistics for each value of $\delta\in\{0,0.8,1\}$ individually, as well as with an additional maximization with respect to $\delta$.
Approximate critical values for an overall test on any type of intervention effect can be derived from the empirical quantiles of the maximum $F$-type statistic with additional maximization with respect to $\delta$. In the case of $n=100$, the 90\%, 95\% and 99\% quantiles of the overall maximum $F$-type statistics range from about 14.8 to 17.3, from 16.8 to 20.0, and from 21.5 to 27.0 for the different parameter combinations considered here. We can thus use 17, 20 and 27 as critical values for approximate significance $F$-tests for an unknown intervention at an unknown time point at the 10\%, 5\% or 1\% significance level. In the case of $n=200$, the corresponding empirical quantiles of the maximum $F$-type statistics range from 15.3 to 19.0, from 17.2 to 21.7, and from 21.6 to 27.9, so that we can use 19, 22 and 28 as approximate critical values. It is interesting to note that for both sample sizes $n=100$ and $n=200$, the approximate critical values derived for the INAR(2) model coincide with the corresponding critical values for the INAR(1) model.
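
A sketch of how such approximate critical values can be obtained from clean series is given below. It again relies on the hypothetical \texttt{f\_stat} and \texttt{simulate\_contaminated\_inar2} introduced above, and the candidate set of time points is an assumption made for illustration only.
\begin{verbatim}
import numpy as np

def max_f_statistic(f_stat, y, deltas=(0.0, 0.8, 1.0), candidate_taus=None):
    """Maximum F-type statistic over candidate time points and, additionally, over delta."""
    if candidate_taus is None:
        candidate_taus = range(3, len(y))   # assumed candidate set
    return max(f_stat(y, tau=t, delta=d)
               for d in deltas for t in candidate_taus)

def approximate_critical_values(f_stat, n=200, alpha1=0.3, alpha2=0.4,
                                lam=2.0, reps=10000, seed=3):
    """Empirical 90%, 95% and 99% quantiles of the overall maximum statistic."""
    rng = np.random.default_rng(seed)
    stats = [max_f_statistic(f_stat,
                             simulate_contaminated_inar2(n, alpha1, alpha2, lam,
                                                         taus=[], kappas=[], deltas=[],
                                                         seed=int(rng.integers(1 << 31))))
             for _ in range(reps)]
    return np.quantile(stats, [0.90, 0.95, 0.99])   # compare with 19, 22, 28 for n=200
\end{verbatim}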
Next we inspect the performance of the classification rule when applied to time series containing an intervention effect. Figure \ref{fig:classif100-0-inar2} depicts the classification results when the rule is applied to time series of length $n=100$ containing an innovation outlier at time $\tau=50$. For this we generate 2000 time series for each of the parameter combinations $(\alpha_1,\alpha_2,\lambda)$ considered before and each intervention size $\kappa=k\sqrt{\lambda}$, $k=0,\ldots,12$. As can be seen, time series containing an innovation outlier are classified quite reliably, and the same holds for the classification of transient shifts with $\delta=0.8$ and permanent shifts (see Figures~\ref{fig:classif100-08-inar2} and \ref{fig:classif100-1-inar2}, respectively).
In the following we adapt the stepwise detection algorithm of Section~\ref{sect:iterative} to test for the existence of any type of outlier with $\delta\in\{0,0.6,0.8,0.9,1\}$ at any time point. We consider a simulated time series of length $n=200$ generated from a contaminated Poisson INAR(2) model of the form \[Y_t=\alpha_1\circ Y_{t-1}+\alpha_2\circ Y_{t-2}+e_t+U_{t,1}+U_{t,2},\] where $e_t\sim Pois(\lambda)$, $U_{t,j}\equiv 0$ for $t=0,\ldots,\tau_j-1$ and $U_{t,j}\sim Pois(\kappa_j\delta_j^{t-\tau_j})$ for $t=\tau_j,\ldots,n$, $j=1,2$. We set $(\alpha_1,\alpha_2,\lambda)=(0.3,0.2,3)$, and the interventions consist of two transient shifts of the same size $\kappa_1=\kappa_2=\kappa=10$ at times $\tau_1=50$ and $\tau_2=150$ with $\delta_1=0.6$ and $\delta_2=0.9$, respectively (see Figure~\ref{fig:sim-inar2}).
The iterative detection algorithm starts by fitting an INAR(2) model to the data assuming no interventions, which yields the initial conditional least squares estimates $(\hat{\alpha}_1,\hat{\alpha}_2,\hat{\lambda})=(0.38,0.18,2.81)$. Then we test for unknown types of interventions at unknown time points using the $F$-type statistic. At the first iteration, the test statistic correctly identifies a transient shift at time $\tau=150$, although with $\delta=0.8$ instead of $\delta=0.9$. Next, we correct the data according to step (3.b) of the algorithm. Note that in this specific step, the effect of the intervention is estimated as \[\hat{U}_t=\left\lfloor\frac{\hat{\kappa}\delta^{t-\hat{\tau}}}{\hat{\alpha}_1Y_{t-1}^{(j+1)}+\hat{\alpha}_2Y_{t-2}^{(j+1)}+\hat{\lambda}+\hat{\kappa}\delta^{t-\hat{\tau}}}Y_t^{(j)}\right\rfloor.\] After data correction, the second intervention corresponding to $\tau=50$ is also detected and is correctly classified as a transient shift, but with $\delta=0.8$ instead of $\delta=0.6$. After correcting the data once more, an additional transient shift is detected at time $\tau=46$. The final conditional least squares estimates are $(\hat{\alpha}_1,\hat{\alpha}_2,\hat{\lambda})=(0.28, 0.17, 3.32)$ (see Table~\ref{tab:sim-ex-inar2}).
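
To illustrate the correction step, the following sketch removes the estimated effect $\hat{U}_t$ from the observations, using the already corrected values at lags one and two in the denominator; the assumption that $\hat{U}_t$ is subtracted from $Y_t^{(j)}$ reflects our reading of step (3.b) and is not taken verbatim from the algorithm.
\begin{verbatim}
import math

def correct_series(y, alpha1, alpha2, lam, kappa, delta, tau):
    """Remove the estimated intervention effect hat{U}_t from a series (cf. step (3.b))."""
    y_new = list(y)
    for t in range(tau, len(y)):
        effect = kappa * delta ** (t - tau)
        # corrected lags y_new[t-1], y_new[t-2]; current observation y[t] still uncorrected
        mean_t = alpha1 * y_new[t - 1] + alpha2 * y_new[t - 2] + lam + effect
        u_hat = math.floor(effect / mean_t * y[t])
        y_new[t] = y[t] - u_hat
    return y_new
\end{verbatim}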
The above simulation experiment was repeated several times in order to evaluate the iterative detection procedure. Our findings are in line with those regarding the INAR(1) model, so that the discussion of Section~\ref{sect:simulex} also applies to the INAR(2) model.
\begin{table} \begin{center} \caption{\label{tab3:sizes100} Empirical sizes (in percent) of the tests based on the $F$-type statistic for a known type $\delta$ and time $\tau$ of intervention in case of INAR(2) series of length $n=100$ with different parameters $\alpha_1$, $\alpha_2$ and $\lambda$. The nominal significance levels are 1\%, 5\% or 10\%.} {\footnotesize
\begin{tabular}{rrrr|rrr|rrr|rrr} \hline
&& & & \multicolumn{3}{c|}{$\tau=0.25n$}& \multicolumn{3}{c|}{$\tau=0.5n$}& \multicolumn{3}{c}{$\tau=0.75n$} \\ $\alpha_1$ & $\alpha_2$ & $\lambda$ & $\delta$ & 1\% & 5\% & 10\%& 1\% & 5\% & 10\%& 1\% & 5\% & 10\% \\ \hline 0.5 & 0.3 & 2 & 0 & 1.9 & 6.1 &10.9 & 1.4 & 5.8 &10.8 & 1.5 & 5.9 &10.7\\
& & & 0.8 & 2.6 & 8.3 &14.4 & 2.1 & 7.4 &13.3 & 2.1 & 7.7 &13.8\\
& & & 1 & 2.4 & 8.0 &15.3 & 3.3 &11.0& 18.2& 3.5& 11.2& 18.6\\ \hline & & 5 & 0 & 1.2 & 5.8 &10.9& 1.7 & 5.9 &11.1 & 1.5 & 6.1& 11.9\\ & & & 0.8 & 1.9 & 7.3 & 12.7 & 2.1 & 8.0 &13.5& 1.5 & 6.5 &13.1\\ & & & 1 & 2.3 & 8.6 &14.5 & 2.7 & 9.7 &16.8 & 2.7 & 9.0 &15.4\\\hline 0.3 & 0.4 & 2 & 0 & 1.2 & 5.4& 11.1& 1.1& 5.5& 10.4 & 1.5 & 5.9 & 11.2\\
& & & 0.8 & 2.0 & 7.3 & 13.2 & 1.6 & 7.4 & 13.4 & 2.4 & 7.9 & 13.5\\
& & & 1 & 2.0 & 8.6 & 15.4 & 2.2 & 9.1 & 16.1 & 2.4 & 9.1 & 16.0\\\hline
& & 5 & 0 & 1.6 & 5.9 & 10.9 & 1.4 & 6.3 & 11.7 & 1.2 & 5.6& 10.8\\ & & & 0.8 & 1.9 & 6.4 & 12.1 & 1.7 & 6.5 &12.2 & 2.2 & 6.8 & 11.9\\ & & & 1 & 2.3 & 8.7 & 15.2& 2.0& 7.9& 14.0& 2.8& 8.8& 15.6\\ \hline 0.1 & 0.1 & 2 & 0 & 1.5 & 4.5 &10.1 & 1.4 & 4.7 &10.3 & 1.8 & 4.7 & 9.5\\ & & & 0.8 & 1.1 & 5.8 & 10.7 & 1.4 & 6.3 & 11.2 & 1.4 & 6.1 & 11.4\\ & & & 1 & 2.3 & 7.1 & 12.6 & 1.4 & 6.6 & 12.2 & 1.5 & 6.1 & 12.0\\\hline
& & 5 & 0 & 1.4 & 5.1 & 10.1 & 1.4 & 5.4 & 10.4 & 1.0 & 5.2 & 10.5\\ & & & 0.8 & 1.6 & 5.8 &11.5 & 1.5 & 5.8 & 11.5 & 1.5 & 6.4 & 11.3\\ & & & 1 & 1.3 & 6.5 & 11.7 & 1.4 & 6.2 & 11.7 & 1.7 & 6.4 & 12.6\\ \hline \end{tabular}} \end{center} \end{table}
\begin{table} \begin{center} \caption{\label{tab3:sizes200} Empirical sizes (in percent) of the tests based on the $F$-type statistic for a known type $\delta$ and time $\tau$ of intervention in case of INAR(2) series of length $n=200$ with different parameters $\alpha_1$, $\alpha_2$ and $\lambda$. The nominal significance levels are 1\%, 5\% or 10\%.} {\footnotesize
\begin{tabular}{rrrr|rrr|rrr|rrr} \hline
&& & & \multicolumn{3}{c|}{$\tau=0.25n$}& \multicolumn{3}{c|}{$\tau=0.5n$}& \multicolumn{3}{c}{$\tau=0.75n$} \\ $\alpha_1$ & $\alpha_2$ & $\lambda$ & $\delta$ & 1\% & 5\% & 10\%& 1\% & 5\% & 10\%& 1\% & 5\% & 10\% \\ \hline 0.5 & 0.3 & 2 & 0 & 1.0 & 5.3 & 10.3 & 1.4 & 5.9 & 10.9 & 1.6 & 5.6 & 10.6 \\
& & & 0.8 & 1.6 & 5.9 & 11.1 & 1.3 & 6.2 & 11.7 & 1.4 & 5.9 & 11.1 \\
& & & 1 & 1.8 & 7.3 & 12.7 & 1.9 & 7.5 & 13.5 & 1.8 & 7.1 & 13.0 \\ \hline
& & 5 & 0 & 1.5 & 5.9 & 10.6 & 1.5 & 5.5 & 10.5 & 1.3 & 5.7 & 10.6 \\
& & & 0.8 & 1.4 & 6.5 & 12.3 & 1.6 & 6.2 & 11.8 & 1.5 & 6.6 & 11.8 \\
& & & 1 & 1.6 & 7.1 & 13.0 & 1.6 & 6.8 & 13.3 & 1.7 & 7.2 & 13.6 \\ \hline
0.3 & 0.4 & 2 & 0 & 1.0 & 5.2 & 9.9 & 1.2 & 5.3 & 10.6 & 1.2 & 5.1 & 10.2 \\
& & & 0.8 & 1.4 & 5.6 & 11.0 & 1.6 & 6.1 & 11.6 & 1.7 & 5.9 & 11.2 \\
& & & 1 & 1.7 & 6.4 & 12.4 & 1.6 & 7.4 & 13.5 & 1.4 & 7.1 & 12.6 \\ \hline
& & 5 & 0 & 1.5 & 5.9 & 10.5 & 1.1 & 5.3 & 10.4 & 1.2 & 5.3 & 10.4 \\
& & & 0.8 & 1.7 & 6.5 & 11.6 & 1.5 & 6.1 & 11.1 & 1.5 & 6.1 & 11.5 \\
& & & 1 & 1.5 & 6.8 & 12.3 & 1.6 & 7.5 & 13.8 & 1.7 & 7.1 & 13.2 \\ \hline
0.1 &0.1 & 2 & 0 & 1.3 & 4.2 & 9.4 & 1.2 & 3.8 & 8.8 & 1.4 & 4.3 & 9.4 \\
& & & 0.8 & 1.4 & 5.5 & 10.8 & 1.2 & 5.3 & 11.1 & 1.5 & 5.7 & 10.7 \\
& & & 1 & 1.2 & 5.5 & 10.9 & 1.3 & 5.7 & 11.2 & 1.4 & 5.8 & 10.8 \\ \hline
& & 5& 0 & 1.2 & 4.9 & 10.0 & 1.3 & 5.0 & 9.5 & 1.1 & 4.5 & 9.9 \\
& & & 0.8 & 1.2 & 5.7 & 11.3 & 1.2 & 5.3 & 10.2 & 1.3 & 5.6 & 11.1 \\
& & & 1 & 1.2 & 5.4 & 11.2 & 1.0 & 5.7 & 11.6 & 1.7 & 6.5 & 11.8 \\
\hline \end{tabular}} \end{center} \end{table}
\begin{table} \caption{\label{tab3:power200-0} Empirical power (in percent) of the tests based on the $F$-type statistic for a known time $\tau$ and a given, but possibly misspecified type $\delta$ of intervention in case of an innovation outlier $\delta=0$ of size $\kappa=3\sqrt{\lambda}$ at time $\tau$ in an INAR(2) series of length $n=200$ with different parameters $\alpha_1$, $\alpha_2$ and $\lambda$. The nominal significance levels are 1\%, 5\% or 10\%.} \begin{center} {\footnotesize
\begin{tabular}{rrrr|rrr|rrr|rrr} \hline
&& & & \multicolumn{3}{c|}{$\tau=0.25n$}& \multicolumn{3}{c|}{$\tau=0.5n$}& \multicolumn{3}{c}{$\tau=0.75n$} \\ $\alpha_1$ & $\alpha_2$ & $\lambda$ & $\delta$ & 1\% & 5\% & 10\%& 1\% & 5\% & 10\%& 1\% & 5\% & 10\% \\ \hline 0.5 & 0.3 & 2 & 0 & 24.6 & 41.1 & 51.3 & 23.6 & 40.6 & 51.0 & 23.2 & 38.6 & 49.4 \\
& & & 0.8 & 12.0 & 24.3 & 33.0 & 11.1 & 24.2 & 31.9 & 10.7 & 22.0 & 31.0 \\
& & & 1 & 1.7 & 7.0 & 12.6 & 1.8 & 8.1 & 14.9 & 3.2 & 9.6 & 15.6 \\ \hline
& & 5 & 0 & 21.3 & 37.8 & 48.5 & 21.7 & 38.6 & 48.6 & 21.6 & 38.2 & 48.6 \\
& & & 0.8 & 9.0 & 20.0 & 27.3 & 9.8 & 20.5 & 28.8 & 8.9 & 20.4 & 28.8 \\
& & & 1 & 1.8 & 6.9 & 12.9 & 2.0 & 8.1 & 14.1 & 3.0 & 8.2 & 15.2 \\ \hline 0.3 & 0.4 & 2 & 0 & 28.1 & 44.9 & 55.5 & 30.6 & 46.3 & 55.0 & 29.8 & 47.0 & 56.9 \\
& & & 0.8 & 14.0 & 25.2 & 33.5 & 13.9 & 25.5 & 34.4 & 13.1 & 24.6 & 34.9 \\
& & & 1 & 1.7 & 6.0 & 11.3 & 1.7 & 7.4 & 14.1 & 3.0 & 9.0 & 15.6 \\ \hline
& & 5 & 0 & 28.4 & 46.0 & 55.8 & 27.5 & 46.7 & 56.6 & 29.0 & 45.4 & 56.6 \\
& & & 0.8 & 11.6 & 24.1 & 33.2 & 11.1 & 23.2 & 32.1 & 11.8 & 24.6 & 34.0 \\
& & & 1 & 1.6 & 7.4 & 13.0 & 2.1 & 7.8 & 13.8 & 2.3 & 7.8 & 14.2 \\ \hline
0.1 & 0.1 & 2& 0 & 48.8 & 65.0 & 72.5 & 49.9 & 65.6 & 72.5 & 51.0 & 66.0 & 72.7 \\
& & & 0.8 & 22.3 & 38.3 & 47.3 & 23.0 & 38.9 & 48.6 & 24.4 & 38.9 & 47.1 \\
& & & 1 & 1.2 & 6.3 & 12.0 & 1.8 & 6.3 & 12.2 & 1.6 & 6.6 & 12.6 \\ \hline
& &5 & 0 & 52.0 & 67.8 & 76.5 & 50.0 & 67.2 & 75.7 & 53.6 & 70.1 & 77.6 \\
& & & 0.8 & 22.9 & 39.4 & 49.0 & 22.0 & 38.8 & 48.8 & 23.2 & 39.6 & 48.9 \\
& & & 1 & 1.4 & 5.4 & 10.6 & 1.2 & 6.0 & 10.8 & 1.7 & 7.6 & 13.9 \\ \hline \end{tabular}} \end{center} \end{table}
\begin{table} \caption{\label{tab3:power200-08} Empirical power (in percent) of the tests based on the $F$-type statistic for a known time $\tau$ and a given, but possibly misspecified type $\delta$ of intervention in case of a transient shift $\delta=0.8$ of size $\kappa=2\sqrt{\lambda}$ at time $\tau$ in an INAR(2) series of length $n=200$ with different parameters $\alpha_1$, $\alpha_2$ and $\lambda$. The nominal significance levels are 1\%, 5\% or 10\%.} \begin{center} {\footnotesize
\begin{tabular}{rrrr|rrr|rrr|rrr} \hline
&& & & \multicolumn{3}{c|}{$\tau=0.25n$}& \multicolumn{3}{c|}{$\tau=0.5n$}& \multicolumn{3}{c}{$\tau=0.75n$} \\ $\alpha_1$ & $\alpha_2$ & $\lambda$ & $\delta$ & 1\% & 5\% & 10\%& 1\% & 5\% & 10\%& 1\% & 5\% & 10\% \\ \hline 0.5 & 0.3 & 2 & 0 & 9.9 & 21.9 & 30.8 & 11.6 & 22.9 & 32.0 & 10.5 & 21.7 & 30.0 \\
& & & 0.8 & 25.2 & 45.1 & 55.6 & 28.3 & 45.9 & 55.3 & 25.9 & 43.8 & 54.3 \\
& & & 1 & 1.8 & 6.9 & 13.4 & 2.6 & 9.4 & 16.0 & 4.8 & 13.5 & 20.1 \\ \hline
& & 5 & 0 & 7.0 & 19.6 & 29.5 & 9.0 & 21.4 & 30.9 & 8.3 & 21.8 & 29.9 \\
& & & 0.8 & 24.6 & 44.7 & 54.7 & 25.8 & 43.5 & 55.8 & 25.0 & 42.4 & 53.1 \\
& & & 1 & 2.0 & 7.0 & 12.8 & 2.5 & 8.9 & 14.9 & 4.8 & 12.0 & 19.3 \\ \hline
0.3 & 0.4 & 2 & 0 & 15.1 & 29.1 & 36.9 & 14.4 & 27.3 & 34.2 & 13.9 & 24.6 & 34.4 \\
& & & 0.8 & 33.5 & 52.8 & 61.8 & 34.6 & 53.1 & 63.0 & 34.5 & 52.4 & 63.2 \\
& & & 1 & 1.2 & 5.8 & 10.9 & 2.8 & 7.6 & 14.1 & 4.2 & 13.5 & 21.0 \\ \hline
& & 5 & 0 & 13.2 & 25.9 & 35.6 & 11.5 & 25.4 & 35.5 & 12.8 & 25.6 & 35.6 \\
& & & 0.8 & 31.3 & 53.1 & 64.6 & 33.0 & 55.4 & 64.2 & 32.8 & 52.5 & 64.5 \\
& & & 1 & 1.2 & 5.3 & 11.2 & 2.1 & 7.9 & 14.6 & 4.7 & 13.0 & 20.8 \\ \hline
0.1 & 0.1 & 2 & 0 & 25.6 & 40.4 & 48.4 & 26.2 & 40.6 & 48.3 & 26.9 & 43.0 & 50.5 \\
& & & 0.8 & 56.3 & 73.0 & 81.0 & 55.9 & 74.5 & 82.4 & 57.1 & 73.2 & 80.1 \\
& & & 1 & 1.1 & 5.1 & 10.0 & 2.5 & 8.1 & 14.4 & 5.1 & 15.2 & 25.0 \\ \hline
& & 5 & 0 & 26.1 & 42.6 & 51.9 & 23.5 & 39.6 & 49.5 & 26.5 & 42.6 & 53.4 \\
& & & 0.8 & 61.7 & 79.7 & 85.8 & 59.7 & 77.2 & 83.8 & 61.1 & 77.8 & 85.7 \\
& & & 1 & 0.9 & 5.0 & 9.8 & 2.1 & 7.8 & 14.2 & 6.0 & 16.3 & 25.7 \\
\hline \end{tabular}} \end{center} \end{table}
\begin{table} \caption{\label{tab3:power200-1} Empirical power (in percent) of the tests based on the $F$-type statistic for a known time $\tau$ and a given, but possibly misspecified type $\delta$ of intervention in case of a level shift $\delta=1$ of size $\kappa=\sqrt{\lambda}$ at time $\tau$ in an INAR(2) series of length $n=200$ with different parameters $\alpha_1$, $\alpha_2$ and $\lambda$. The nominal significance levels are 1\%, 5\% or 10\%.} \begin{center} {\footnotesize
\begin{tabular}{rrrr|rrr|rrr|rrr} \hline
&& & & \multicolumn{3}{c|}{$\tau=0.25n$}& \multicolumn{3}{c|}{$\tau=0.5n$}& \multicolumn{3}{c}{$\tau=0.75n$} \\ $\alpha_1$ & $\alpha_2$ & $\lambda$ & $\delta$ & 1\% & 5\% & 10\%& 1\% & 5\% & 10\%& 1\% & 5\% & 10\% \\ \hline 0.5 & 0.3 & 2 & 0 & 1.0 & 3.5 & 8.1 & 1.5 & 5.1 & 9.8 & 2.1 & 7.6 & 12.8 \\ & & & 0.8 & 0.9 & 4.3 & 9.5 & 2.3 & 8.9 & 16.0 & 6.8 & 18.9 & 28.2 \\ & & & 1 & 41.5 & 73.5 & 84.9 & 66.4 & 88.7 & 94.2 & 63.7 & 84.2 & 89.8 \\ \hline & & 5 & 0 & 0.5 & 4.2 & 8.6 & 1.5 & 6.1 & 11.8 & 2.1 & 7.8 & 14.4 \\ & & & 0.8 & 1.4 & 5.3 & 10.3 & 3.1 & 10.9 & 17.6 & 6.9 & 19.0 & 29.6 \\ & & & 1 & 46.5 & 75.7 & 86.4 & 69.0 & 90.6 & 95.6 & 64.6 & 86.8 & 93.2 \\ \hline 0.3 & 0.4 & 2 & 0 & 0.9 & 4.4 & 9.3 & 2.3 & 7.4 & 12.3 & 3.1 & 9.0 & 15.2 \\ & & & 0.8 & 0.8 & 5.3 & 10.4 & 3.1 & 11.3 & 19.0 & 9.0 & 22.4 & 34.0 \\ & & & 1 & 57.6 & 85.4 & 93.2 & 80.7 & 95.7 & 98.3 & 76.4 & 91.8 & 96.0 \\ \hline & & 5 & 0 & 0.9 & 4.9 & 9.4 & 1.3 & 5.8 & 10.8 & 2.4 & 8.3 & 15.3 \\ & & & 0.8 & 1.4 & 5.3 & 11.2 & 2.6 & 10.2 & 18.6 & 7.8 & 21.3 & 31.7 \\
& & & 1 & 62.5 & 87.1 & 93.0 & 83.0 & 95.7 & 98.2 & 77.4 & 93.8 & 97.0 \\ \hline 0.1 & 0.1 & 2 & 0 & 1.8 & 4.9 & 9.4 & 3.3 & 7.6 & 13.2 & 5.6 & 11.8 & 18.1 \\ & & & 0.8 & 1.3 & 5.5 & 10.2 & 4.6 & 14.1 & 22.4 & 14.4 & 32.8 & 45.8 \\ & & & 1 & 96.6 & 99.4 & 99.7 & 99.4 & 100.0 & 100.0 & 98.7 & 99.7 & 100.0 \\ \hline & & 5 & 0 & 1.5 & 6.7 & 11.8 & 3.4 & 9.9 & 15.2 & 5.6 & 13.0 & 19.4 \\ & & & 0.8 & 1.2 & 5.3 & 10.8 & 4.2 & 14.6 & 24.6 & 16.0 & 34.5 & 48.0 \\ & & & 1 & 98.1 & 99.7 & 99.9 & 99.8 & 100.0 & 100.0 & 99.2 & 100.0 & 100.0 \\
\hline \end{tabular}} \end{center} \end{table}
\begin{table} \caption{\label{tab3:classif200} Classification rates (in percent) achieved by applying the $F$-type statistic for a known type $\delta$ and time $\tau$ of intervention to clean INAR(2) series of length $n=200$ without outliers. The nominal significance levels are 1\%, 5\% or 10\%.} \begin{center} {\footnotesize
\begin{tabular}{rrrr|rrr|rrr|rrr} \hline
&& & & \multicolumn{3}{c|}{$\tau=0.25n$}& \multicolumn{3}{c|}{$\tau=0.5n$}& \multicolumn{3}{c}{$\tau=0.75n$} \\ $\alpha_1$ & $\alpha_2$ & $\lambda$ & $\delta$ & 1\% & 5\% & 10\%& 1\% & 5\% & 10\%& 1\% & 5\% & 10\% \\ \hline 0.5 & 0.3 & 2 & 0 & 1.1 & 4.3 & 7.0 & 0.8 & 3.3 & 6.6 & 1.1 & 4.2 & 7.1 \\
& & & 0.6 & 0.8 & 2.4 & 3.7 & 0.4 & 1.6 & 3.4 & 0.8 & 2.5 & 3.9 \\
& & & 0.8 & 0.5 & 2.2 & 3.5 & 0.7 & 2.0 & 3.2 & 0.8 & 1.9 & 3.6 \\
& & & 0.9 & 1.7 & 4.9 & 7.9 & 1.5 & 5.0 & 7.6 & 1.5 & 4.2 & 6.9 \\
& & & 1 & 1.8 & 7.5 & 11.7 & 1.8 & 7.3 & 13.7 & 1.6 & 6.3 & 11.2 \\ \hline
& & 5& 0 & 1.0 & 3.2 & 6.0 & 1.0 & 4.2 & 7.3 & 1.3 & 4.3 & 7.7 \\
& & & 0.6 & 0.8 & 2.4 & 4.0 & 0.6 & 3.0 & 4.9 & 0.5 & 2.1 & 3.8 \\
& & & 0.8 & 0.7 & 2.4 & 4.2 & 0.8 & 1.8 & 3.0 & 0.8 & 2.5 & 3.8 \\
& & & 0.9 & 1.2 & 5.1 & 8.4 & 1.0 & 3.2 & 6.0 & 1.0 & 4.1 & 6.9 \\
& & & 1 & 1.6 & 6.8 & 11.2 & 1.7 & 6.6 & 11.6 & 2.1 & 7.2 & 11.1 \\ \hline
0.3 & 0.4 &2 & 0 & 0.9 & 3.5 & 6.8 & 0.9 & 3.9 & 6.6 & 1.0 & 3.8 & 7.2 \\
& & & 0.6 & 0.7 & 2.3 & 4.0 & 0.9 & 3.2 & 4.7 & 0.5 & 2.5 & 4.5 \\
& & & 0.8 & 0.7 & 2.2 & 3.6 & 0.7 & 1.8 & 2.9 & 0.9 & 2.1 & 3.6 \\
& & & 0.9 & 1.5 & 4.5 & 8.0 & 1.3 & 4.3 & 7.2 & 0.9 & 3.2 & 5.2 \\
& & & 1 & 1.5 & 6.2 & 10.7 & 2.0 & 6.2 & 10.7 & 1.8 & 6.5 & 10.4 \\ \hline
& & 5& 0 & 0.9 & 3.8 & 7.3 & 0.9 & 3.1 & 6.7 & 0.9 & 3.8 & 7.7 \\
& & & 0.6 & 0.8 & 2.0 & 4.1 & 0.7 & 2.1 & 3.6 & 0.7 & 2.8 & 5.1 \\
& & & 0.8 & 0.5 & 1.8 & 3.5 & 0.4 & 1.5 & 2.7 & 0.7 & 2.2 & 3.4 \\
& & & 0.9 & 1.2 & 3.7 & 7.0 & 1.4 & 3.5 & 7.0 & 1.1 & 3.5 & 5.9 \\
& & & 1 & 1.8 & 6.3 & 11.0 & 1.2 & 6.0 & 11.0 & 1.4 & 5.9 & 10.6 \\ \hline
0.1& 0.1 &2 & 0 & 1.4 & 3.5 & 6.9 & 1.5 & 3.5 & 6.5 & 1.1 & 3.2 & 6.9 \\
& & & 0.6 & 0.8 & 2.5 & 5.0 & 0.6 & 1.7 & 3.5 & 0.8 & 2.2 & 4.3 \\
& & & 0.8 & 0.3 & 1.8 & 3.0 & 0.5 & 2.2 & 3.5 & 0.6 & 2.0 & 3.6 \\
& & & 0.9 & 1.0 & 3.1 & 5.8 & 0.9 & 3.3 & 6.2 & 0.9 & 2.8 & 5.7 \\
& & & 1 & 1.5 & 6.2 & 10.1 & 1.3 & 5.6 & 10.1 & 1.1 & 5.3 & 9.6 \\ \hline
& & 5 & 0 & 1.2 & 4.5 & 7.6 & 1.1 & 3.9 & 6.9 & 1.2 & 3.8 & 7.5 \\
& & & 0.6 & 0.5 & 2.9 & 4.6 & 0.4 & 1.8 & 3.9 & 0.2 & 2.1 & 4.2 \\
& & & 0.8 & 0.5 & 1.8 & 3.0 & 0.2 & 1.8 & 3.0 & 0.4 & 2.1 & 3.6 \\
& & & 0.9 & 0.8 & 3.9 & 6.5 & 1.0 & 3.6 & 6.4 & 0.9 & 2.7 & 5.9 \\
& & & 1 & 1.4 & 5.1 & 10.0 & 1.3 & 4.8 & 9.0 & 1.0 & 5.1 & 8.6 \\
\hline \end{tabular}} \end{center} \end{table}
\begin{table} \caption{\label{tab3:classif200-0} Classification rates (in percent) achieved by applying the $F$-type statistic for a known type $\delta$ and time $\tau$ of intervention to INAR(2) series of length $n=200$ with an innovation outlier ($\delta=0$) of size $\kappa=3\sqrt{\lambda}$ at time $\tau$. The nominal significance levels are 1\%, 5\% or 10\%.} \begin{center} {\footnotesize
\begin{tabular}{rrrr|rrr|rrr|rrr} \hline
&& & & \multicolumn{3}{c|}{$\tau=0.25n$}& \multicolumn{3}{c|}{$\tau=0.5n$}& \multicolumn{3}{c}{$\tau=0.75n$} \\ $\alpha_1$ & $\alpha_2$ & $\lambda$ & $\delta$ & 1\% & 5\% & 10\%& 1\% & 5\% & 10\%& 1\% & 5\% & 10\% \\ \hline 0.5 & 0.3 & 2 & 0 & 18.4 & 29.1 & 35.0 & 18.0 & 29.4 & 35.2 & 18.6 & 28.6 & 35.5 \\
& & & 0.6 & 5.2 & 8.6 & 10.2 & 5.1 & 8.6 & 10.2 & 6.1 & 8.8 & 10.4 \\
& & & 0.8 & 2.4 & 3.5 & 4.5 & 1.5 & 2.6 & 3.7 & 1.8 & 3.0 & 3.6 \\
& & & 0.9 & 2.7 & 5.7 & 7.6 & 1.9 & 4.0 & 5.7 & 2.9 & 5.3 & 6.5 \\
& & & 1 & 1.4 & 4.5 & 8.0 & 1.5 & 5.1 & 8.2 & 1.7 & 4.4 & 7.0 \\ \hline
& & 5 & 0 & 17.1 & 28.3 & 34.8 & 19.3 & 31.1 & 37.0 & 18.6 & 30.2 & 36.9 \\
& & & 0.6 & 5.3 & 8.5 & 10.1 & 5.4 & 8.6 & 10.2 & 5.8 & 8.6 & 10.7 \\
& & & 0.8 & 2.1 & 3.4 & 4.6 & 1.3 & 2.8 & 3.5 & 2.0 & 3.4 & 4.0 \\
& & & 0.9 & 2.4 & 5.3 & 7.4 & 2.1 & 4.5 & 6.0 & 2.2 & 4.0 & 5.3 \\
& & & 1 & 1.3 & 4.7 & 8.2 & 1.5 & 5.4 & 8.0 & 1.9 & 4.4 & 6.8 \\ \hline
0.3 & 0.4 & 2 & 0 & 23.2 & 33.8 & 40.1 & 23.6 & 35.0 & 40.0 & 24.0 & 35.4 & 41.0 \\
& & & 0.6 & 7.0 & 10.4 & 11.8 & 6.2 & 8.8 & 10.8 & 6.0 & 9.4 & 11.3 \\
& & & 0.8 & 2.5 & 3.8 & 4.6 & 2.2 & 4.0 & 4.8 & 2.7 & 4.0 & 4.8 \\
& & & 0.9 & 2.3 & 4.3 & 5.5 & 3.0 & 4.3 & 5.8 & 2.5 & 4.5 & 5.9 \\
& & & 1 & 1.3 & 3.9 & 6.2 & 1.4 & 4.5 & 7.3 & 1.8 & 4.3 & 6.5 \\ \hline
& & 5 & 0 & 23.6 & 35.2 & 40.7 & 22.9 & 36.0 & 42.1 & 21.8 & 35.5 & 41.5 \\
& & & 0.6 & 5.8 & 9.1 & 10.7 & 6.3 & 9.9 & 11.6 & 5.6 & 8.8 & 10.2 \\
& & & 0.8 & 2.1 & 3.4 & 4.4 & 2.1 & 3.2 & 4.0 & 2.5 & 3.7 & 4.6 \\
& & & 0.9 & 2.2 & 4.6 & 6.1 & 2.4 & 4.2 & 6.0 & 2.1 & 3.8 & 5.3 \\
& & & 1 & 1.4 & 4.0 & 6.9 & 1.1 & 3.7 & 6.3 & 1.7 & 4.5 & 6.4 \\ \hline
0.1 & 0.1 & 2 & 0 & 42.0 & 52.4 & 56.2 & 41.5 & 52.8 & 56.8 & 42.5 & 53.3 & 57.6 \\
& & & 0.6 & 8.6 & 11.3 & 13.0 & 8.6 & 10.6 & 11.8 & 8.0 & 10.3 & 11.3 \\
& & & 0.8 & 2.3 & 3.2 & 3.7 & 1.8 & 2.6 & 3.3 & 2.0 & 2.5 & 2.6 \\
& & & 0.9 & 1.1 & 2.1 & 3.3 & 1.4 & 2.6 & 3.0 & 1.1 & 2.1 & 2.7 \\
& & & 1 & 0.7 & 1.8 & 3.5 & 0.9 & 2.9 & 3.9 & 0.9 & 2.4 & 4.2 \\ \hline
& & 5 & 0 & 44.4 & 56.8 & 61.4 & 42.6 & 53.6 & 58.9 & 44.3 & 55.4 & 60.9 \\
& & & 0.6 & 8.8 & 11.6 & 13.1 & 9.1 & 11.6 & 12.8 & 8.4 & 11.5 & 12.3 \\
& & & 0.8 & 1.2 & 1.8 & 2.6 & 1.4 & 2.4 & 2.6 & 2.0 & 2.6 & 3.1 \\
& & & 0.9 & 1.1 & 2.0 & 2.5 & 1.5 & 2.6 & 3.5 & 0.9 & 2.1 & 2.5 \\
& & & 1 & 0.8 & 2.1 & 3.6 & 0.5 & 2.1 & 3.1 & 0.8 & 2.4 & 3.3 \\
\hline \end{tabular}} \end{center} \end{table}
\begin{table} \caption{\label{tab3:classif200-06} Classification rates (in percent) achieved by applying the $F$-type statistic for a known type $\delta$ and time $\tau$ of intervention to INAR(2) series of length $n=200$ with a transient shift $\delta=0.6$ of size $\kappa=2.5\sqrt{\lambda}$ at time $\tau$. The nominal significance levels are 1\%, 5\% or 10\%.} \begin{center} {\footnotesize
\begin{tabular}{rrrr|rrr|rrr|rrr} \hline
&& & & \multicolumn{3}{c|}{$\tau=0.25n$}& \multicolumn{3}{c|}{$\tau=0.5n$}& \multicolumn{3}{c}{$\tau=0.75n$} \\ $\alpha_1$ & $\alpha_2$ & $\lambda$ & $\delta$ & 1\% & 5\% & 10\%& 1\% & 5\% & 10\%& 1\% & 5\% & 10\% \\ \hline 0.5 & 0.3 & 2 & 0 & 8.4 & 13.7 & 16.6 & 9.2 & 14.9 & 17.7 & 8.5 & 13.7 & 16.7 \\
& & & 0.6 & 11.2 & 17.0 & 20.2 & 10.2 & 16.2 & 19.6 & 12.2 & 18.4 & 21.3 \\
& & & 0.8 & 6.7 & 10.3 & 12.3 & 8.2 & 11.8 & 13.5 & 6.9 & 10.5 & 12.2 \\
& & & 0.9 & 6.0 & 10.4 & 12.3 & 6.9 & 10.5 & 12.7 & 5.7 & 9.0 & 11.1 \\
& & & 1 & 1.6 & 5.0 & 7.3 & 1.7 & 5.0 & 7.3 & 2.2 & 4.9 & 6.7 \\ \hline
& & 5 & 0 & 7.0 & 12.4 & 14.9 & 7.6 & 13.2 & 16.1 & 6.8 & 11.9 & 14.9 \\
& & & 0.6 & 12.0 & 18.9 & 22.6 & 10.9 & 17.3 & 20.6 & 11.8 & 17.8 & 20.9 \\
& & & 0.8 & 7.0 & 10.4 & 12.2 & 7.0 & 10.7 & 12.6 & 6.3 & 10.8 & 12.8 \\
& & & 0.9 & 5.7 & 9.7 & 11.5 & 6.7 & 10.7 & 12.9 & 5.4 & 8.6 & 10.6 \\
& & & 1 & 1.1 & 4.4 & 7.0 & 1.6 & 4.6 & 7.7 & 2.6 & 6.0 & 8.6 \\ \hline
0.3 & 0.4 & 2 & 0 & 9.3 & 15.2 & 18.0 & 10.5 & 15.8 & 18.1 & 8.8 & 13.8 & 16.0 \\
& & & 0.6 & 14.9 & 20.6 & 23.7 & 14.8 & 20.4 & 23.0 & 14.8 & 20.2 & 23.2 \\
& & & 0.8 & 9.0 & 12.4 & 14.2 & 8.5 & 11.7 & 13.0 & 9.6 & 13.8 & 15.9 \\
& & & 0.9 & 7.0 & 10.5 & 12.6 & 7.4 & 10.8 & 12.8 & 5.3 & 8.7 & 10.3 \\
& & & 1 & 0.9 & 3.2 & 5.8 & 1.2 & 3.5 & 5.8 & 1.9 & 4.6 & 6.9 \\ \hline
& & 5 & 0 & 9.4 & 14.4 & 17.0 & 9.8 & 15.2 & 17.5 & 8.9 & 14.8 & 17.5 \\
& & & 0.6 & 15.1 & 22.4 & 25.0 & 13.9 & 21.9 & 24.7 & 16.4 & 23.0 & 26.4 \\
& & & 0.8 & 7.6 & 12.6 & 14.3 & 9.3 & 13.8 & 15.6 & 8.0 & 12.6 & 14.3 \\
& & & 0.9 & 6.8 & 11.3 & 13.4 & 5.8 & 9.6 & 11.3 & 5.7 & 8.9 & 10.7 \\
& & & 1 & 1.2 & 4.0 & 5.4 & 1.4 & 3.5 & 5.6 & 1.6 & 3.6 & 5.7 \\ \hline
0.1 & 0.1 & 2& 0 & 15.2 & 18.9 & 20.0 & 13.5 & 16.6 & 18.4 & 13.9 & 16.7 & 18.0 \\
& & & 0.6 & 28.2 & 34.6 & 36.8 & 29.3 & 34.4 & 36.4 & 27.8 & 33.5 & 35.9 \\
& & & 0.8 & 12.7 & 16.4 & 18.0 & 14.8 & 18.3 & 19.8 & 13.8 & 17.5 & 18.7 \\
& & & 0.9 & 4.8 & 7.2 & 8.6 & 4.9 & 7.4 & 8.5 & 5.1 & 7.4 & 8.2 \\
& & & 1 & 0.4 & 1.6 & 2.4 & 0.4 & 1.8 & 2.6 & 0.8 & 1.7 & 2.8 \\ \hline
& & 5 & 0 & 13.4 & 16.9 & 18.1 & 13.0 & 16.0 & 17.5 & 13.2 & 16.5 & 18.0 \\
& & & 0.6 & 30.3 & 37.1 & 39.6 & 30.9 & 37.9 & 40.7 & 30.6 & 37.4 & 40.1 \\
& & & 0.8 & 15.3 & 18.1 & 19.8 & 13.4 & 17.1 & 18.6 & 14.5 & 18.1 & 19.5 \\
& & & 0.9 & 4.6 & 6.6 & 7.2 & 5.4 & 7.1 & 8.5 & 4.6 & 6.2 & 7.1 \\
& & & 1 & 0.4 & 1.7 & 2.9 & 1.2 & 2.2 & 3.4 & 1.0 & 1.8 & 2.5 \\
\hline \end{tabular}} \end{center} \end{table}
\begin{table} \caption{\label{tab3:classif200-08} Classification rates (in percent) achieved by applying the F-type statistic for a known type $\delta$ and time $\tau$ of intervention to INAR(2) series of length $n=200$ with a transient shift with $\delta=0.8$ of size $\kappa=2.5\sqrt{\lambda}$ at time $\tau$. The nominal significance levels are 1\%, 5\% or 10\%.} \begin{center} {\footnotesize
\begin{tabular}{rrrr|rrr|rrr|rrr} \hline
&& & & \multicolumn{3}{c|}{$\tau=0.25n$}& \multicolumn{3}{c|}{$\tau=0.5n$}& \multicolumn{3}{c}{$\tau=0.75n$} \\ $\alpha_1$ & $\alpha_2$ & $\lambda$ & $\delta$ & 1\% & 5\% & 10\%& 1\% & 5\% & 10\%& 1\% & 5\% & 10\% \\ \hline 0.5 & 0.3 & 2 & 0 & 4.5 & 8.0 & 10.3 & 4.9 & 7.1 & 8.8 & 4.3 & 7.8 & 9.8 \\
& & & 0.6 & 7.7 & 11.8 & 13.8 & 7.0 & 10.9 & 12.7 & 5.9 & 9.9 & 11.9 \\
& & & 0.8 & 9.2 & 14.6 & 17.3 & 9.2 & 14.2 & 16.6 & 9.8 & 15.4 & 17.9 \\
& & & 0.9 & 13.4 & 20.5 & 24.2 & 14.2 & 21.6 & 25.6 & 13.6 & 19.9 & 22.8 \\
& & & 1 & 1.1 & 3.7 & 5.6 & 1.8 & 5.0 & 7.6 & 2.7 & 6.4 & 8.6 \\ \hline
& & 5 & 0 & 3.4 & 6.3 & 8.2 & 4.1 & 8.2 & 10.1 & 4.0 & 6.9 & 8.4 \\
& & & 0.6 & 7.0 & 11.2 & 13.7 & 7.1 & 11.9 & 13.6 & 6.3 & 10.4 & 12.8 \\ & & & 0.8 & 8.7 & 14.2 & 16.6 & 9.6 & 14.7 & 17.2 & 10.6 & 15.8 & 17.8 \\ & & & 0.9 & 12.4 & 21.9 & 26.2 & 12.2 & 20.5 & 23.9 & 12.5 & 20.2 & 23.5 \\
& & & 1 & 1.2 & 4.9 & 7.4 & 1.9 & 4.6 & 7.0 & 2.9 & 6.7 & 9.0 \\ \hline
0.3& 0.4 & 2 & 0 & 4.6 & 7.5 & 8.6 & 5.4 & 8.5 & 10.1 & 5.1 & 8.2 & 9.7 \\
& & & 0.6 & 9.4 & 13.1 & 15.0 & 8.8 & 13.2 & 14.9 & 9.2 & 13.6 & 15.6 \\
& & & 0.8 & 13.1 & 18.2 & 20.2 & 12.7 & 17.4 & 18.9 & 12.1 & 17.5 & 20.4 \\
& & & 0.9 & 15.2 & 22.4 & 26.0 & 15.6 & 23.1 & 26.0 & 13.9 & 20.0 & 23.2 \\ & & & 1 & 1.0 & 3.4 & 5.6 & 1.4 & 3.5 & 5.1 & 2.4 & 4.8 & 6.7 \\ \hline & & 5 & 0 & 4.9 & 7.8 & 9.1 & 4.4 & 7.0 & 8.9 & 4.9 & 8.2 & 9.4 \\
& & & 0.6 & 9.2 & 13.2 & 15.0 & 7.8 & 12.2 & 14.3 & 8.9 & 12.5 & 13.9 \\ & & & 0.8 & 13.2 & 19.4 & 21.9 & 13.7 & 19.5 & 22.0 & 13.5 & 19.4 & 21.5 \\
& & & 0.9 & 14.2 & 23.5 & 26.9 & 13.2 & 21.2 & 24.5 & 14.8 & 22.5 & 25.3 \\
& & & 1 & 1.4 & 3.2 & 4.9 & 1.1 & 3.9 & 5.9 & 2.4 & 4.8 & 6.3 \\ \hline 0.1 & 0.1 & 2 & 0 & 5.5 & 7.6 & 8.2 & 6.3 & 8.0 & 8.6 & 6.6 & 8.3 & 8.8 \\
& & & 0.6 & 15.1 & 18.0 & 18.9 & 14.4 & 17.2 & 18.5 & 15.1 & 18.1 & 19.3 \\ & & & 0.8 & 25.6 & 31.1 & 32.6 & 26.0 & 31.4 & 33.0 & 25.8 & 30.8 & 33.0 \\
& & & 0.9 & 17.6 & 23.8 & 26.2 & 19.0 & 23.8 & 25.7 & 17.2 & 22.4 & 24.3 \\ & & & 1 & 0.7 & 1.2 & 2.1 & 0.8 & 1.8 & 2.9 & 1.5 & 2.5 & 3.5 \\ \hline
& & 5 & 0 & 6.2 & 7.9 & 8.7 & 6.3 & 7.2 & 7.9 & 5.8 & 7.3 & 8.2 \\
& & & 0.6 & 16.0 & 19.3 & 20.5 & 15.7 & 19.3 & 20.9 & 17.0 & 20.2 & 21.6 \\ & & & 0.8 & 24.2 & 29.8 & 32.0 & 27.4 & 32.1 & 34.4 & 26.7 & 32.1 & 34.2 \\
& & & 0.9 & 18.4 & 25.1 & 27.1 & 17.8 & 22.8 & 25.0 & 17.3 & 22.8 & 24.5 \\ & & & 1 & 0.5 & 1.2 & 1.9 & 0.8 & 1.9 & 2.4 & 1.6 & 2.8 & 3.4 \\
\hline \end{tabular}} \end{center} \end{table}
\begin{table} \caption{\label{tab3:classif200-09} Classification rates (in percent) achieved by applying the F-type statistic for a known type $\delta$ and time $\tau$ of intervention to INAR(2) series of length $n=200$ with a transient shift with $\delta=0.9$ of size $\kappa=2\sqrt{\lambda}$ at time $\tau$. The nominal significance levels are 1\%, 5\% or 10\%.} \begin{center} {\footnotesize
\begin{tabular}{rrrr|rrr|rrr|rrr} \hline
&& & & \multicolumn{3}{c|}{$\tau=0.25n$}& \multicolumn{3}{c|}{$\tau=0.5n$}& \multicolumn{3}{c}{$\tau=0.75n$} \\ $\alpha_1$ & $\alpha_2$ & $\lambda$ & $\delta$ & 1\% & 5\% & 10\%& 1\% & 5\% & 10\%& 1\% & 5\% & 10\% \\ \hline 0.5 & 0.3 & 2 & 0 & 2.1 & 4.6 & 6.2 & 2.4 & 5.2 & 6.8 & 2.8 & 5.7 & 7.2 \\ & & & 0.6 & 3.4 & 6.3 & 7.2 & 3.5 & 5.7 & 7.1 & 3.4 & 5.7 & 7.3 \\ & & & 0.8 & 6.5 & 9.3 & 10.8 & 4.9 & 8.5 & 10.0 & 5.5 & 8.9 & 10.7 \\
& & & 0.9 & 19.6 & 31.6 & 38.2 & 18.9 & 31.0 & 37.2 & 17.0 & 26.6 & 30.5 \\
& & & 1 & 1.1 & 4.4 & 7.2 & 2.0 & 5.8 & 8.8 & 5.8 & 10.6 & 13.4 \\ \hline
& & 5 & 0 & 1.8 & 4.6 & 6.2 & 2.2 & 5.6 & 7.4 & 2.2 & 4.9 & 6.3 \\
& & & 0.6 & 3.1 & 5.4 & 6.6 & 3.5 & 5.6 & 7.2 & 3.3 & 5.7 & 6.8 \\
& & & 0.8 & 5.8 & 9.7 & 11.7 & 4.8 & 8.0 & 9.2 & 5.7 & 10.0 & 12.2 \\
& & & 0.9 & 18.2 & 31.3 & 37.1 & 19.0 & 31.4 & 36.7 & 17.0 & 27.8 & 32.8 \\
& & & 1 & 1.8 & 4.8 & 7.9 & 2.9 & 6.2 & 8.8 & 4.5 & 9.2 & 12.2 \\ \hline
0.3& 0.4& 2 & 0 & 3.0 & 5.7 & 7.4 & 2.7 & 5.2 & 6.6 & 2.4 & 4.6 & 6.4 \\
& & & 0.6 & 4.3 & 7.0 & 8.2 & 4.1 & 6.5 & 8.1 & 3.9 & 6.1 & 7.5 \\ & & & 0.8 & 7.4 & 11.8 & 13.5 & 6.6 & 10.0 & 11.3 & 7.5 & 11.6 & 13.2 \\
& & & 0.9 & 23.0 & 34.4 & 39.7 & 23.6 & 34.9 & 40.1 & 23.4 & 33.2 & 37.2 \\
& & & 1 & 1.2 & 3.9 & 5.9 & 1.7 & 4.9 & 7.0 & 5.5 & 9.6 & 11.8 \\ \hline
& & 5 & 0 & 2.8 & 5.2 & 6.4 & 2.8 & 5.1 & 6.1 & 2.6 & 4.3 & 5.4 \\
& & & 0.6 & 3.9 & 6.2 & 7.2 & 3.6 & 6.2 & 6.8 & 3.4 & 5.3 & 6.3 \\
& & & 0.8 & 7.5 & 11.2 & 12.6 & 7.9 & 11.9 & 13.2 & 6.7 & 10.8 & 13.0 \\
& & & 0.9 & 22.8 & 35.9 & 41.8 & 22.6 & 36.5 & 42.7 & 23.8 & 35.4 & 40.9 \\
& & & 1 & 1.0 & 4.1 & 6.2 & 2.1 & 5.9 & 7.7 & 4.0 & 9.0 & 11.5 \\ \hline
0.1& 0.1& 2 & 0 & 4.0 & 5.1 & 5.6 & 3.2 & 4.7 & 5.0 & 3.7 & 4.6 & 4.9 \\
& & & 0.6 & 5.8 & 6.7 & 7.2 & 5.3 & 6.8 & 7.5 & 5.6 & 7.0 & 7.2 \\
& & & 0.8 & 14.1 & 16.8 & 17.8 & 13.5 & 16.2 & 17.0 & 14.6 & 17.8 & 18.8 \\
& & & 0.9 & 40.1 & 51.0 & 55.6 & 38.6 & 50.3 & 53.9 & 37.9 & 47.0 & 50.1 \\
& & & 1 & 0.8 & 2.1 & 2.8 & 1.1 & 3.0 & 4.3 & 3.5 & 6.2 & 7.7 \\ \hline
& & 5 & 0 & 3.5 & 4.5 & 5.1 & 2.8 & 4.0 & 4.6 & 3.1 & 3.9 & 4.6 \\
& & & 0.6 & 4.5 & 5.4 & 6.2 & 4.7 & 6.0 & 6.4 & 6.0 & 7.1 & 7.7 \\
& & & 0.8 & 13.9 & 16.7 & 17.8 & 14.2 & 17.7 & 19.0 & 15.2 & 17.9 & 18.9 \\ & & & 0.9 & 41.7 & 54.2 & 57.7 & 40.2 & 52.6 & 57.2 & 38.2 & 48.4 & 51.5 \\
& & & 1 & 0.6 & 1.8 & 2.6 & 1.0 & 2.5 & 3.1 & 3.8 & 6.5 & 7.9 \\
\hline \end{tabular}} \end{center} \end{table}
\begin{table} \caption{\label{tab3:classif200-1} Classification rates (in percent) achieved by applying the F-type statistic for a known type $\delta$ and time $\tau$ of intervention to INAR(2) series of length $n=200$ with a permanent shift ($\delta=1$) of size $\kappa=\sqrt{\lambda}$ at time $\tau$. The nominal significance levels are 1\%, 5\% or 10\%.} \begin{center} {\footnotesize
\begin{tabular}{rrrr|rrr|rrr|rrr} \hline
&& & & \multicolumn{3}{c|}{$\tau=0.25n$}& \multicolumn{3}{c|}{$\tau=0.5n$}& \multicolumn{3}{c}{$\tau=0.75n$} \\ $\alpha_1$ & $\alpha_2$ & $\lambda$ & $\delta$ & 1\% & 5\% & 10\%& 1\% & 5\% & 10\%& 1\% & 5\% & 10\% \\ \hline 0.5 & 0.3 & 2 & 0 & 0.3 & 1.4 & 2.0 & 0.7 & 1.3 & 1.5 & 0.5 & 1.4 & 1.7 \\
& & & 0.6 & 0.4 & 0.9 & 1.1 & 0.4 & 0.8 & 0.8 & 0.5 & 1.0 & 1.1 \\
& & & 0.8 & 0.2 & 0.6 & 0.9 & 0.3 & 0.4 & 0.5 & 0.4 & 0.6 & 0.9 \\
& & & 0.9 & 0.7 & 1.7 & 2.2 & 0.5 & 1.6 & 1.8 & 4.9 & 6.4 & 6.8 \\
& & & 1 & 40.1 & 69.4 & 80.6 & 65.5 & 86.2 & 90.8 & 59.5 & 77.5 & 82.4 \\ \hline
& & 5 & 0 & 0.4 & 1.2 & 1.8 & 0.7 & 0.9 & 1.1 & 0.8 & 1.6 & 1.8 \\
& & & 0.6 & 0.5 & 1.1 & 1.2 & 0.4 & 0.7 & 0.8 & 0.5 & 0.9 & 1.2 \\
& & & 0.8 & 0.1 & 0.4 & 0.7 & 0.2 & 0.4 & 0.5 & 0.1 & 0.2 & 0.4 \\
& & & 0.9 & 0.6 & 1.6 & 2.2 & 1.0 & 1.8 & 2.0 & 3.8 & 5.2 & 5.7 \\
& & & 1 & 44.0 & 73.7 & 83.0 & 67.8 & 87.3 & 91.8 & 62.4 & 80.9 & 85.2 \\\hline
0.3 & 0.4& 2 & 0 & 0.5 & 0.9 & 1.3 & 0.6 & 0.8 & 0.9 & 1.0 & 1.5 & 1.8 \\
& & & 0.6 & 0.2 & 0.6 & 0.8 & 0.2 & 0.4 & 0.4 & 0.5 & 0.8 & 0.8 \\
& & & 0.8 & 0.2 & 0.8 & 0.8 & 0.1 & 0.2 & 0.3 & 0.2 & 0.4 & 0.5 \\
& & & 0.9 & 0.8 & 1.4 & 2.0 & 1.0 & 1.7 & 1.8 & 4.2 & 5.2 & 5.5 \\
& & & 1 & 57.5 & 83.4 & 89.5 & 80.2 & 93.6 & 95.2 & 72.6 & 85.2 & 88.0 \\ \hline
& & 5 & 0 & 0.2 & 0.4 & 0.8 & 0.2 & 0.4 & 0.4 & 0.5 & 0.9 & 1.0 \\
& & & 0.6 & 0.4 & 0.8 & 0.9 & 0.3 & 0.4 & 0.5 & 0.4 & 0.4 & 0.4 \\
& & & 0.8 & 0.2 & 0.4 & 0.4 & 0.1 & 0.1 & 0.2 & 0.4 & 0.5 & 0.5 \\
& & & 0.9 & 0.5 & 1.2 & 1.4 & 0.9 & 1.5 & 1.6 & 3.4 & 4.6 & 4.8 \\
& & & 1 & 62.5 & 86.1 & 92.2 & 82.9 & 94.6 & 96.4 & 75.0 & 88.6 & 91.3 \\\hline
0.1 & 0.1 & 2 & 0 & 0.2 & 0.4 & 0.4 & 0.1 & 0.1 & 0.1 & 0.4 & 0.4 & 0.4 \\
& & & 0.6 & 0.0 & 0.1 & 0.1 & 0.0 & 0.0 & 0.0 & 0.3 & 0.3 & 0.3\\
& & & 0.8 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\
& & & 0.9 & 0.1 & 0.2 & 0.2 & 0.2 & 0.2 & 0.2 & 1.7 & 1.8 & 1.8 \\
& & & 1 & 96.2 & 98.8 & 99.1 & 99.4 & 99.6 & 99.6 & 96.2 & 97.2 & 97.4 \\\hline
& & 5 & 0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.2 & 0.2 & 0.2 \\
& & & 0.6 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\
& & & 0.8 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\
& & & 0.9 & 0.2 & 0.2 & 0.2 & 0.1 & 0.1 & 0.1 & 1.5 & 1.6 & 1.6 \\
& & & 1 & 98.3 & 99.6 & 99.7 & 99.7 & 99.9 & 99.9 & 97.3 & 98.0 & 98.0 \\
\hline \end{tabular}} \end{center} \end{table}
\begin{figure}
\caption{Boxplots of the maximum $F$-type statistic in the case of INAR(2) models, maximized with respect to the candidate time point $\tau$ of a change when $n=100$.}
\label{fig:maxstatistics0-100-INAR2}
\end{figure}
\begin{figure}
\caption{Boxplots of the maximum $F$-type statistic in the case of INAR(2) models, maximized with respect to the candidate time point $\tau$ of a change when $n=200$.}
\label{fig:maxstatistics0-200-INAR2}
\end{figure}
\begin{figure}
\caption{Classification results when applying the maximum $F$-type statistics to INAR(2) time series of length $n=100$ containing an innovation outlier of increasing size $\kappa=0,\sqrt{\lambda},\ldots,12\sqrt{\lambda}$ at time point $\tau=50$. Classification as $\delta=0$ (dotted), $\delta=0.8$ (dashed), $\delta=1$ (solid).}
\label{fig:classif100-0-inar2}
\end{figure}
\begin{figure}
\caption{Classification results when applying the maximum $F$-type statistics to INAR(2) time series of length $n=100$ containing a transient shift with $\delta=0.8$ of increasing size $\kappa=0,\sqrt{\lambda},\ldots,12\sqrt{\lambda}$ at time point $\tau=50$. Classification as $\delta=0$ (dotted), $\delta=0.8$ (dashed), $\delta=1$ (solid).}
\label{fig:classif100-08-inar2}
\end{figure}
\begin{figure}
\caption{Classification results when applying the maximum $F$-type statistics to INAR(2) time series of length $n=100$ containing a permanent shift with $\delta=1$ of increasing size $\kappa=0,\sqrt{\lambda},\ldots,12\sqrt{\lambda}$ at time point $\tau=50$. Classification as $\delta=0$ (dotted), $\delta=0.8$ (dashed), $\delta=1$ (solid).}
\label{fig:classif100-1-inar2}
\end{figure}
\begin{figure}
\caption{Simulated INAR(2) time series with two transient shifts at times $\tau_1=50$ and $\tau_2=150$ (solid line) and the series after correction for the intervention effects as estimated by the $F$-type statistic (dotted line).}
\label{fig:sim-inar2}
\end{figure}
\begin{table} \caption{\label{tab:sim-ex-inar2} Conditional least squares estimates obtained at each step of the stepwise procedure for the detection and elimination of intervention effects in the simulated INAR(2) time series. The final estimates of the Poisson INAR(2) model parameters are shown in bold. The true parameter values are $\alpha_1=0.3$, $\alpha_2=0.2$ and $\lambda=3$ and there are outliers with $\kappa=10$ and $\delta=0.6$ at $\tau=50$ as well as $\kappa=10$ and $\delta=0.9$ at $\tau=150$.} {\small \begin{center}
\begin{tabular}{cc|c|rrr|rrr}
\hline Iteration & Step & Bootstrap &\multicolumn{3}{c|}{Parameter estimates} & \multicolumn{3}{c}{Outlier}\\
 & & p-value& $\hat{\alpha}_1$ & $\hat{\alpha}_2$ & $\hat{\lambda}$ & $\hat{\kappa}$ & $\hat{\tau}$ & $\hat{\delta}$\\ \hline
1 & 1 & & 0.38 & 0.18 & 2.81 & & & \\
 & 2-3 & 0.002 & 0.28 & 0.15 & 3.47 & 7.39 & 150 & 0.8\\ \hline
2 & 1 & & 0.31 & 0.18 & 3.20 & & & \\
 & 2-3 & 0.012 & 0.28 & 0.16 & 3.39 & 5.52 & 50 & 0.8\\ \hline
3 & 1 & & 0.27 & 0.16 & 3.49 & & &\\
 & 2-3 & 0.036 & 0.29 & 0.16 & 3.27 & 7.49 & 46 & 0\\ \hline
4 & 1 & & \textbf{0.28} & \textbf{0.17} & \textbf{3.32} & & &\\
 & 2-3 & 0.082 & - & - & - & - & - & -\\ \hline
\end{tabular} \end{center} } \end{table}
\end{document} | arXiv |
\begin{document}
\title[A Synthesis of the Procedural and Declarative Styles of Interactive \dots] {A Synthesis of the Procedural and Declarative Styles of Interactive Theorem Proving} \author[F.~Wiedijk]{Freek Wiedijk} \address{ Institute for Computing and Information Sciences, Radboud University Nijmegen, Heyendaalse\-weg~135, 6525 AJ Nijmegen, The Netherlands} \email{\texttt{[email protected]}} \keywords{interactive theorem proving, proof assistants, natural deduction, formal mathematics, procedural proof style, declarative proof style, tactics, HOL, Mizar} \subjclass{F.4.1, I.2.3, I.2.4}
\begin{abstract} We propose a synthesis of the two proof styles of interactive theorem proving: the procedural style (where proofs are scripts of commands, like in Coq) and the declarative style (where proofs are texts in a controlled natural language, like in Isabelle/Isar). Our approach combines the advantages of the declarative style -- the possibility to write formal proofs like normal mathematical text -- and the procedural style -- strong automation and help with shaping the proofs, including determining the statements of intermediate steps.
Our approach is new, and differs significantly from the ways in which the procedural and declarative proof styles have been combined before in the Isabelle, Ssreflect and Matita systems. Our approach is generic and can be implemented on top of any procedural interactive theorem prover, regardless of its architecture and logical foundations.
To show the viability of our proposed approach, we fully implemented it as a proof interface called \texttt{miz3}, on top of the HOL Light interactive theorem prover. The declarative language that this interface uses is a slight variant of the language of the Mizar system, and can be used for any interactive theorem prover regardless of its logical foundations. The \texttt{miz3} interface allows easy access to the full set of tactics and formal libraries of HOL Light, and as such has `industrial strength'.
Our approach gives a way to automatically convert any procedural proof to a declarative counterpart, where the converted proof is similar in size to the original. As all declarative systems have essentially the same proof language, this gives a straightforward way to port proofs between interactive theorem provers. \end{abstract}
\maketitle
\section{Introduction}
\subsection{Proof styles of interactive theorem proving}\label{problem}
\noindent Interactive theorem provers, also known as proof assistants, are computer programs for the development and verification of mathematical texts in a formal language. These systems make it \emph{certain}\footnote{There is only one serious possibility for mathematics verified with the best interactive theorem provers to still have problems \cite{pol:98,hal:11}. The definitions and statements might not mean what the person who wrote them thinks they mean. Although it then still is \emph{certain} that the mathematics contains no errors at all, it is the `wrong' mathematics.} that the verified mathematics contains no errors at all. The activity of coding mathematics in the formal language of an interactive theorem prover is called \emph{formalizing}, and the set of resulting input files for such a system is called a \emph{formalization}. Highly non-trivial proofs have been formalized, both in mathematics \cite{gon:06,har:08} and in computer science \cite{fox:03,kle:09,ler:06}.
In interactive theorem proving one can consider the proofs on two different levels. There is the \emph{user level proof}, the proof in the formalization files on the level of which the user interacts with the system. And there is the \emph{proof object}, the proof in the formal system underlying the system. Generally the second is an order of magnitude larger than the first. In systems like Coq and HOL \cite{gor:mel:93}, the user level proof consists of a list of tactics to be executed. The proof objects in Coq are lambda terms, while in HOL they consist of traces of function calls into the LCF style kernel of the system. In a system like Mizar \cite{gra:kor:nau:10} the user level proof consists of the proof steps in the input language of the system. The Mizar implementation does not keep track of proof objects, but the proofs on the proof object level would be the formal deductions in first order predicate logic.
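To make the distinction concrete for HOL Light (the following fragment is ours and only uses standard HOL Light functions, nothing specific to this paper): proof objects are built from calls to the primitive inference rules of the LCF style kernel, which can also be invoked directly, whereas the user level proof is normally a tactic script. \xmedskip \begin{alltt}\small
# REFL `x:num`;;                          (* a primitive inference rule: |- x = x *)
# prove (`!x:num. x = x`, GEN_TAC THEN REFL_TAC);;      (* a (tiny) tactic script *)
\end{alltt} \xmedskip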
Interactive theorem provers can use three different proof styles (the terminology originates in \cite{har:96:2}): \begin{enumerate}[(1)] \item \emph{The procedural style.} \label{item:procedural} In these systems the user inputs a proof as a sequence of \emph{tactic} invocations, which are commands that transform proof obligations called \emph{goals}. A tactic reduces a goal to zero or more new subgoals. When all goals have been solved this way, the proof is finished. Note that although most procedural systems support both forward and backward proof, the user interaction in those systems primarily consists of reasoning backwards from a goal. Systems that use the procedural style are the various HOL systems like HOL4 \cite{gor:mel:93}, HOL Light \cite{har:xx,har:00} and ProofPower \cite{lem:00}, the original version of Isabelle \cite{nip:pau:wen:02}, Coq \cite{coq:10}, Matita \cite{asp:coe:tas:zac:07}, PVS \cite{owr:rus:sha:92}, the B method \cite{abr:96} and Metamath \cite{meg:97}.
Some procedural systems offer the option to print their proof objects in the form of natural language text. An example of such a system is Matita (see the discussion in Section~\ref{related} below).
\item \emph{The declarative style.} \label{item:declarative} In these systems the user inputs proofs in a stylized natural deduction language (de Bruijn called such a language a \emph{mathematical vernacular} \cite{bru:87}). The output of the system then consists of messages that point out where the proof text still has errors. There are two subclasses of this proof style:
\begin{enumerate}[(a)] \item \label{item:declarative:natlang} \emph{The natural language declarative style.} Here the proof language is a formal version of mathematical natural language, also called a \emph{controlled} natural language. Trivial reasoning between the steps in a proof is provided by the system through (light weight) automation. Systems that use this proof style are Mizar \cite{gra:kor:nau:10} and Isabelle with its `structured' proof language Isar \cite{wen:02,wen:02:1}.
Actually, the Mizar and Isar languages, as well as the language described in this paper, are not very much like natural language. The ForTheL language for the SAD system \cite{pas:07} is much better in this respect.
\item \emph{The proof object declarative style.} \label{item:declarative:object} Here the proof input language is a syntactic rendering of the proof object, with the structure of a natural deduction proof. Systems that use this style are Twelf \cite{pfe:sch:02}, Agda \cite{agd:xx} and Epigram \cite{mcb:mck:04}.
In some of these systems, the user does not need to type the whole input text by themselves, but can also give \emph{commands} in the interface that generate part of the proof text. These commands will \emph{not} be part of the proof files that are the final formalization. Examples of such systems are Agda and Epigram.
\end{enumerate}
\item \emph{The guided automated style.} \label{item:guidedautomated} In these systems the input is a sequence of lemma statements, which the system then tries to prove by itself. These systems generally produce long natural language texts for each lemma describing how it was proved. Often for some of the lemmas parameters need to be given that direct the system how to perform the proof. Also the lemma statements need to be chosen well for the system to be able to do the proof, as the gaps between the statements should not be too large. For this reason these systems still are \emph{interactive} theorem provers. Systems that use this style are ACL2 \cite{kau:man:moo:00} and Theorema \cite{buc:jeb:kri:mar:vas:97}.
One can consider the guided automated style to be either an extreme version of the procedural or an extreme version of the declarative style. In a guided automated theorem prover, one runs one supertactic per lemma. Or, in a guided automated theorem prover one writes a theory as a series of lemma statements, where the system checks that each statement follows from the previous ones.
\end{enumerate}
\noindent It is interesting to compare where the `proof commands to the system' and where the `natural language proofs' are in these various proof styles:
\xmedskip \begin{center} \begin{tabular}{lcc}
& \emph{commands} & \emph{natural language} \\
\noalign{\smallskip} \hline \noalign{\smallskip}
procedural & input & absent or output \\
declarative, natural language & absent & input \\
declarative, proof object & absent or interface & absent \\
guided automated & input & output \\
\noalign{\smallskip} \hline \noalign{\smallskip}
the system from this paper & interface \emph{and} input & input \emph{and} output \\
\noalign{\smallskip} \hline \end{tabular} \end{center} \xmedskip
\noindent There are some systems that do not fit neatly into this table, like Isabelle with its combination of natural language declarative and procedural proof styles, like Matita with its declarative proof checker, and like the `skeletonizer' of Mizar. These systems will be discussed in Section~\ref{related} below.
The proof style that we propose in this paper is an integration of the procedural and natural language declarative styles. The advantage of the declarative style is that it is closer to normal mathematical practice (one just writes proofs) with more readable proof scripts, and that it gives full control over the exact statements in the proof. Also declarative proofs tend to be easier to maintain and less dependent on the specific system than procedural ones. The advantage of the procedural style is that one does not need to write all intermediate statements: these are generated automatically. Also, procedural systems tend to have much stronger automation, with often many different \emph{decision procedures} that without human help can perform proofs in specific domains.
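For instance (an illustration using only the standard HOL Light decision procedure for linear arithmetic, not an example taken from our formalizations), a single call proves such a goal outright: \xmedskip \begin{alltt}\small
# ARITH_RULE `!m n. m < n ==> m + 1 <= n`;;   (* returns |- !m n. m < n ==> m + 1 <= n *)
\end{alltt} \xmedskip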
The goal of this paper is to propose a proof style that combines the best of these two worlds.
\subsection{Relating the procedural and declarative proof styles}
\noindent Looked at superficially, the procedural and declarative proof styles seem very different. Certainly the proof scripts for those two styles \emph{look} completely different. However, when working with these systems, both styles turn out to have a very similar work flow.
When working on a declarative proof, most of the time when one is not finished the only errors left are that the system did not succeed in proving some of the steps in the proof from the earlier steps. In Mizar these steps are called \emph{unjustified} steps, and have error numbers \texttt{*1} and \texttt{*4}. Now these unjustified steps correspond exactly to the subgoals that one looks at when a procedural proof is not finished! In other words, an unfinished proof of a Mizar lemma in which there still are -- say -- seven unjustified steps left, is very similar to a Coq or HOL proof in which there are still seven subgoals left. This is the first observation that is the basis of our approach.
A proof in a declarative system consists of `steps', most of which contain a statement. If one does the analogous proof in a procedural system, one goes through many subgoals that \emph{also} consist of many statements. Now it turns out that those two sets of statements in practice are very much the same! In other words, if for a procedural proof we collect all the statements in the goals (both the assumptions and the statements to be proved), then those statements can in a natural way be organized as a declarative proof. This is the second observation at the basis of our approach.
Note that in these two observations there is no reference to proof objects. This means that our integration of the procedural and declarative proof styles does not have anything to do with proofs on the proof object level. It also means that our proposed approach is independent of the foundations or architecture of the system. What we propose will apply to \emph{any} system in which the user performs proofs by executing tactics on subgoals containing statements.
Our proposal then is to have a proof interface in which a user is working on a declarative proof. In this proof the unjustified statements \emph{are} considered the subgoals of the prover. At any of these steps/subgoals one can execute any tactic of the system, and if this is successful the statements of the new subgoals that the tactic produces will be merged into the proof text, making the declarative proof `grow' \cite{kal:wie:09}. However, one also can freely manually edit and then recheck the declarative text. The text does not need to `remember' how it has been grown.
For an example of how all this works out in a concrete session, see Section~\ref{session} below.
\subsection{Related Work}\label{related}
\noindent From the introduction in Section~\ref{problem} it will be clear that the proof style that we propose is a combination of aspects of many different proof systems. However, there are some systems that are quite close to what we propose. For each of them we will discuss now how they differ from our work: \begin{desCription} \item\noindent{\hskip-12 pt\bf Isabelle/Isar}:\ In Isabelle one can
encapsulate procedural proof fragments consisting of tactic
applications in a declarative Isar proof text
\cite{wen:02,wen:02:1}. However, the user needs to manually type
the declarative text (it is not generated like in our approach), and
the procedural proofs do not make sense without running them on the
system (unlike in our approach, where the tactics do not
\emph{solve} the goal but connect statements together).
\item\noindent{\hskip-12 pt\bf Ssreflect}:\ The usage of Georges
Gonthier's Ssreflect language for Coq \cite{gon:mah:tas:08} is
similar to a common way of using Isar. It is used declaratively for
the high level structure of the proof while at the `leaves' of the
proof the user switches to the procedural proof style. However, the
declarative part of Ssreflect is much less developed than Isar.
Also, although Ssreflect is clearly intended to be also used
declaratively, it barely fits category (2a) in the classification
above.
\item\noindent{\hskip-12 pt\bf HELM/MoWGLI/Matita}:\ The HELM, MoWGLI
and Matita systems \cite{asp:coe:tas:zac:07,asp:03,asp:weg:02} have
as one of their goals to render type theoretical proof objects as
natural language. In Matita, these rendered proof objects also can
be read back in, and checked for correctness like in a declarative
proof system \cite{coe:10}.
An important difference with our approach is that one cannot go back from declarative editing to procedural proving. Once a declarative proof text has been modified, and the procedural proof from which it was generated gets modified as well, the two modifications cannot be integrated. In other words, once one has worked declaratively, working procedurally is no longer possible.
Another difference is that the declarative proof text is generated from the proof object, which is generally more fine grained than the user level proof on the level of the tactic invocations, and is therefore more verbose and less understandable.
\item\noindent{\hskip-12 pt\bf The proof rendering from the Lemme project}:\ When proofs are being rendered as natural language, the source of the rendering is generally the proof object. An important exception is a system by Fr\'ed\'erique Guilhot, Hanane Naciri and Lo\"\i c Pottier. Unfortunately, this work seems not to have been published; all that exists is a set of slides for a talk about it \cite{gui:03}.
A difference with our approach is that the generated text cannot be modified by the user anymore. The rendering in this system is just output, and is not parsed back again.
\item\noindent{\hskip-12 pt\bf NuPRL}:\ The NuPRL system \cite{con:all:bro:cle:cre:har:how:kno:men:pan:sas:smi:86} has a way to display formal proofs in which groups of tactics are interleaved with fragments of goals. Between the groups of tactics, the parts of the goal that have changed are shown (see for an example the NuPRL chapter in \cite{wie:06}). This is quite similar to what happens in our approach.
But again, this rendering is just output, and is not parsed back again.
\item\noindent{\hskip-12 pt\bf Mizar's skeletonizer}:\ In natural language declarative systems like Isar and Mizar, the proof text has to be written by the user. A slight exception to this is the `skeletonizer' of the emacs interface to Mizar by Josef Urban \cite{urb:06:1}. In this interface, a proof skeleton is automatically generated from the statement to be proved.
This is similar to what happens when we `grow' a proof by executing a tactic. However, in our case the growing is \emph{generic}: for each tactic there is a corresponding way to insert part of the proof. In Mizar there is only one such way.
\end{desCription}
\noindent There already are various declarative proof languages which have been grafted on top of a procedural system. Currently Isabelle/Isar is the only one that knows widespread use. Others are: \begin{desCription} \item\noindent{\hskip-12 pt\bf `Mizar modes' for HOL Light}:\ There are two by John Harrison \cite{har:96,har:07:1} and two earlier ones by the author (\cite{wie:01} and an unpublished one included in the HOL Light distribution \cite{har:xx}). The \texttt{3} in the system name \texttt{miz3} refers to the fact that this is the third Mizar mode for HOL Light that we developed.
\item\noindent{\hskip-12 pt\bf C-zar}:\ A declarative proof language for Coq by Pierre Corbineau \cite{cor:07}.
\item\noindent{\hskip-12 pt\bf PhoX}:\ There exists an experimental declarative version of the PhoX theorem prover by Christophe Raffalli \cite{pho:xx}.
\end{desCription}
\noindent These systems are all quite similar. The main improvement of the work described in this paper over these other systems is that it adds a Mizar-style interaction model, and that it integrates execution of tactics with generation of proof text.
\subsection{Contribution}
\noindent This paper is a continuation of the work in \cite{kal:wie:09,wie:01}. It contains three contributions: \begin{iteMize}{$\bullet$} \item We describe a declarative proof interface for the HOL Light theorem prover that is much more developed and far more ergonomic than earlier attempts at this. This software can be downloaded at: \xmedskip \begin{center} \url{http://www.cs.ru.nl/~freek/miz3/miz3.tar.gz} \end{center} \xmedskip
\item We describe a new proof style for interactive theorem provers that is a synthesis between the procedural and declarative proof styles.
\item We describe a method for automatically converting \emph{any} existing procedural proof to a declarative equivalent. This gives an approach for conserving libraries of formal proofs and semi-automatically porting them between systems. For details see Section~\ref{automatic} below.
\end{iteMize}
\subsection{Outline}
\noindent The structure of the paper is as follows. In Section~\ref{session} we describe through an example how the proof interface that we developed works. In Section~\ref{language} we describe the declarative proof language of this interface. In Section~\ref{implementation} we give some details of the implementation of this interface. In Section~\ref{automatic} we describe how our approach makes it possible to automatically convert existing proofs to our language. In Section~\ref{lagrange} we describe our experiences with using our interface on a non-trivial example. Finally in Section~\ref{discussion} we conclude with some observations and planned future work.
\section{The \texttt{miz3} proof interface to HOL Light}\label{session}
\noindent We developed a prototype of the interface style proposed in this paper as a layer called \texttt{miz3} on top of the HOL Light system \cite{har:xx,har:00}. It consists of about 2,000 lines of OCaml code (in comparison, the basic HOL Light system is approximately 30,000 lines), and its development took three man months.
We will explain the \texttt{miz3} interface with a simple example. For this we will use the traditional inductive proof of $$\sum_{i = 1}^n i = \frac{n (n + 1)}{2}$$ In HOL Light this is written as: \xmedskip \begin{center}\small \texttt{!n.\ nsum (1..n) ({\char`\\}i.\ i) = (n * (n + 1)) DIV 2} \end{center} \xmedskip In this formula, the exclamation mark \texttt{!} is ASCII for the universal quantifier $\forall$, the backslash \texttt{\char`\\} is ASCII for the $\lambda$ of function abstraction, and the function \texttt{nsum} is a higher order version of the summation operator $\sum$.
Of course the example is very trivial. We do not want to give the impression that our approach only works well for simple examples like this. We just chose this example to be able to fit a reasonable representation of a proof session for it in the paper. We worked on much larger proofs with \texttt{miz3}, and it performs well on those too. For a description of such a larger example see Section~\ref{lagrange} below.
To prove the equality of the example, we will need one lemma, which is the recursive characterization of \texttt{nsum}.
The statement of this lemma, which we will use for rewriting expressions involving \texttt{nsum}, is: \xmedskip \begin{alltt}\small
# \fbox{NSUM_CLAUSES_NUMSEG;;\hspace{-.5pt}{\vrule height8.5pt depth3pt width0pt}}
val it : thm =
|- (!m. nsum (m..0) f = (if m = 0 then f 0 else 0)) /{\char`\\}
(!m n.
nsum (m..SUC n) f =
(if m <= SUC n then nsum (m..n) f + f (SUC n) else nsum (m..n) f))$\xtoolong$ \end{alltt} \xmedskip This is the first example of a command in a HOL Light session. We indicate user input by putting boxes around it, to differentiate it from the output from the system that is outside those boxes.
We will now show how the proof of this statement is developed, both in the traditional procedural style of the HOL Light system and in the synthesis between the procedural and declarative proof styles that we propose in this paper.
\subsection*{The example using the procedural proof style of HOL Light}
\noindent Traditionally, one develops the proof of a lemma in HOL Light in an interactive session. However, the exact commands from that session are not what is put in the formalization file. We now first show the session, and then the proof as it is written in the file.
The session for this lemma consists of six commands, with after each command output from the system:
\xmedskip \begin{alltt}\small
# \fbox{g `!n. nsum(1..n) ({\char`\\}i. i) = (n*(n + 1)) DIV 2`;;\hspace{-.5pt}{\vrule height8.5pt depth3pt width0pt}}
val it : goalstack = 1 subgoal (1 total)
`!n. nsum (1..n) ({\char`\\}i. i) = (n * (n + 1)) DIV 2`
# \fbox{e INDUCT_TAC;;\hspace{-.5pt}{\vrule height8.5pt depth3pt width0pt}}\label{tactic}
val it : goalstack = 2 subgoals (2 total)
0 [`nsum (1..n) ({\char`\\}i. i) = (n * (n + 1)) DIV 2`]
`nsum (1..SUC n) ({\char`\\}i. i) = (SUC n * (SUC n + 1)) DIV 2`
`nsum (1..0) ({\char`\\}i. i) = (0 * (0 + 1)) DIV 2`
# \fbox{e (ASM_REWRITE_TAC[NSUM_CLAUSES_NUMSEG]);;\hspace{-.5pt}{\vrule height8.5pt depth3pt width0pt}}
val it : goalstack = 1 subgoal (2 total)
`(if 1 = 0 then 0 else 0) = (0 * (0 + 1)) DIV 2`
# \fbox{e ARITH_TAC;;\hspace{-.5pt}{\vrule height8.5pt depth3pt width0pt}}
val it : goalstack = 1 subgoal (1 total)
0 [`nsum (1..n) ({\char`\\}i. i) = (n * (n + 1)) DIV 2`]
`nsum (1..SUC n) ({\char`\\}i. i) = (SUC n * (SUC n + 1)) DIV 2`
# \fbox{e (ASM_REWRITE_TAC[NSUM_CLAUSES_NUMSEG]);;\hspace{-.5pt}{\vrule height8.5pt depth3pt width0pt}}
val it : goalstack = 1 subgoal (1 total)
0 [`nsum (1..n) ({\char`\\}i. i) = (n * (n + 1)) DIV 2`]
`(if 1 <= SUC n then (n * (n + 1)) DIV 2 + SUC n else (n * (n + 1)) DIV 2) =$\xtoolong$
(SUC n * (SUC n + 1)) DIV 2`
# \fbox{e ARITH_TAC;;\hspace{-.5pt}{\vrule height8.5pt depth3pt width0pt}}
val it : goalstack = No subgoals \end{alltt} \xmedskip
\noindent The session starts with a \texttt{g} command that sets the goal to be proved, and then executes five tactics using the \texttt{e} command. Each time after a tactic, the system presents the subgoals that the tactic produced, where the assumptions from which the subgoal has to be proved are numbered (from 0), and the statement to be proved is unnumbered. If there are multiple subgoals produced (as is the case with \texttt{INDUCT\char`\_TAC}), the first goal to be worked on is printed last.
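Besides \texttt{g} and \texttt{e}, a few other standard HOL Light goalstack commands are useful in such a session (we list them only for orientation; they are part of HOL Light itself and not specific to this example): \xmedskip \begin{alltt}\small
# b();;          (* undo the last tactic application              *)
# r 1;;          (* rotate the list of remaining subgoals         *)
# top_thm();;    (* extract the theorem once no subgoals are left *)
\end{alltt} \xmedskip \noindent Like \texttt{g} and \texttt{e}, these are interactive commands only: they do not appear in the formalization files.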
The proof as it appears in the formalization file uses a more compact representation of these commands. It consists of just three lines, containing the name of the lemma, the statement, and the sequence of tactics separated by \texttt{THEN}s: \xmedskip \begin{flushleft} \fbox{\parbox{428.8pt}{
\small \texttt{{\vrule height8.5pt depth3pt width0pt} let ARITHMETIC\char`\_SUM = prove} \\ \texttt{{\vrule height8.5pt depth3pt width0pt}\ (`!n.\ nsum(1..n) ({\char`\\}i.\ i) = (n*(n + 1)) DIV 2`,} \\ \texttt{{\vrule height8.5pt depth3pt width0pt}\ \ INDUCT\char`\_TAC THEN ASM\char`\_REWRITE\char`\_TAC[NSUM\char`\_CLAUSES\char`\_NUMSEG] THEN ARITH\char`\_TAC);;}
}} \end{flushleft} \xmedskip \noindent This is the customary shape of lemmas in a HOL Light formalization.
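When the generated subgoals need different tactics, the compact representation typically uses the \texttt{THENL} combinator, which applies a list of tactics to the respective subgoals. The following variant (ours; it merely repackages the five tactic applications of the session above) would be accepted as well: \xmedskip \begin{alltt}\small
let ARITHMETIC_SUM' = prove
 (`!n. nsum(1..n) ({\char`\\}i. i) = (n*(n + 1)) DIV 2`,
  INDUCT_TAC THENL
   [REWRITE_TAC[NSUM_CLAUSES_NUMSEG] THEN ARITH_TAC;
    ASM_REWRITE_TAC[NSUM_CLAUSES_NUMSEG] THEN ARITH_TAC]);;
\end{alltt} \xmedskip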
\subsection*{The example using the \texttt{miz3} proof style}
\noindent We will now show how the same proof is developed using the \texttt{miz3} interface. This is a synthesis of the procedural and declarative proof styles, but more specifically it is a close synthesis of the HOL Light and Mizar proof styles. For example, like in Mizar the system will modify the file being worked on by putting error messages inside that file. Also, the syntax of the \texttt{miz3} proofs (explained in detail in Section~\ref{language} below) is a direct hybrid of the syntax of Mizar and HOL Light: the proof steps are written using Mizar syntax, but the formulas and types in those proof use HOL Light syntax.
There are three ways to process a \texttt{miz3} proof. First, one can just give the proof text as a command to the OCaml interpreter running HOL Light. The parser has been modified to recognize the following convention: \xmedskip \begin{center} \begin{tabular}{cc}
\emph{quotation style} & \emph{is parsed as} \\
\noalign{\smallskip} \hline \noalign{\smallskip}
\texttt{ `}\hspace{2pt}\dots\texttt{`} & a term \\
\texttt{`:}\hspace{2pt}\dots\texttt{`} & a type \\
\texttt{`;}\hspace{2pt}\dots\texttt{`} & a proof \end{tabular} \end{center} \xmedskip Second, one can put the proof (without the backquotes and semicolon) in a file with suffix \texttt{.mz3} and check that using the checking program \texttt{miz3}. Third, one can use the interface from the \texttt{vi} editor. The third style is the one we will explain in the rest of this section. For all three interaction styles to work, a HOL Light session with the \texttt{miz3} code loaded has to be running, as a `server'. One runs a HOL Light session in the \texttt{miz3} source directory, and in it executes the command:
\begingroup \def\manylines{$\langle\mbox{\it many lines of output}\rangle$} \xmedskip \begin{alltt}\small
# \fbox{#use "miz3.ml";;\hspace{-.5pt}{\vrule height8.5pt depth3pt width0pt}}
\manylines \end{alltt} \xmedskip \endgroup
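In such a session the quotation convention above can be tried out directly; for example (an illustrative fragment, with the responses of the system omitted): \xmedskip \begin{alltt}\small
# `1 <= SUC n`;;        (* backquotes alone: parsed as a HOL term     *)
# `:num->bool`;;        (* with a leading colon: parsed as a HOL type *)
                        (* with a leading semicolon it is parsed as a *)
                        (* miz3 proof, as in the examples below       *)
\end{alltt} \xmedskip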
\noindent Once a HOL Light session is running with \texttt{miz3} loaded, one can have it check \texttt{miz3} proofs from a \texttt{vi} editor session that is running in a different window (there does not even have to be a file). When typing two keystrokes, \emph{control-}\texttt{C} and then \emph{return}, the part of the file where the cursor is, in between two empty lines, is checked. We will denote these keystrokes by: \xmedskip \begin{center} \fbox{$\mbox{{\it control-}\tt C}\,{\it return}${\vrule height8.5pt depth3pt width0pt}} \end{center} \xmedskip \noindent If there are errors in the checked part of the file, appropriate error messages are inserted. For example, an unfinished proof with errors messages inserted might look like: \xmedskip \begin{alltt}\small let ARITHMETIC_SUM = thm `;
!n. nsum(1..n) ({\char`\\}i. i) = (n*(n + 1)) DIV 2
proof
nsum(1..0) ({\char`\\}i. i) = 0 [1]; :: #2 :: 2: inference time-out
now let n be num;
assume nsum(1..n) ({\char`\\}i. i) = (n*(n + 1)) DIV 2;
thus nsum(1..SUC n) ({\char`\\}i. i) = (SUC n*(SUC n + 1)) DIV 2; :: #2
end;
qed by INDUCT_TAC,1; :: #1 :: 1: inference error `;; \end{alltt} \xmedskip
\noindent The first error is caused by the lemma \texttt{NSUM\char`\_CLAUSES\char`\_NUMSEG} not being referenced, the second error has the same reason but is also caused by the proof automation of the system not being strong enough, and the third error is caused by the base case being wrong: it should not be\texttt{ }\dots\ \texttt{=} \texttt{0} but\texttt{ }\dots\ \texttt{=} \texttt{(0*(0} \texttt{+} \texttt{1))} \texttt{DIV} \texttt{2}.
If one manually writes a proof for the lemma, checking for errors all the time and fixing them until no errors remain (this is the style of working with the Mizar system), one ends up with a proof like:
\xmedskip \begin{alltt}\small\label{example} let ARITHMETIC_SUM = thm `;
!n. nsum(1..n) ({\char`\\}i. i) = (n*(n + 1)) DIV 2
proof
nsum(1..0) ({\char`\\}i. i) = 0 by NSUM_CLAUSES_NUMSEG;
.= (0*(0 + 1)) DIV 2 [1];
now let n be num;
assume nsum(1..n) ({\char`\\}i. i) = (n*(n + 1)) DIV 2 [2];
1 <= SUC n;
nsum(1..SUC n) ({\char`\\}i. i) = (n*(n + 1)) DIV 2 + SUC n
by NSUM_CLAUSES_NUMSEG,2;
thus .= ((SUC n)*(SUC n + 1)) DIV 2;
end;
qed by INDUCT_TAC,1`;; \end{alltt} \xmedskip
\noindent This proof is thirteen lines instead of the three that we got with the traditional proof style. I.e., this proof is quite a bit longer, but not unreasonably so. We experimented quite a bit by comparing procedural proofs to their declarative counterparts, and our impression is that declarative proofs are generally about twice as long as corresponding procedural ones. See in this respect also the statistics in Section~\ref{proofcounts} on page~\pageref{proofcounts} below.
This proof was written using \texttt{miz3} in a purely declarative style. We now show how one can use \texttt{miz3} in a purely procedural style, exactly mimicking the traditional HOL Light session. (We have the problem of how to present an interactive editing session on paper. We will do this by presenting various stages of the edit buffer, interspersed with comments. This will mean a lot of duplicated text, but hopefully it will make the process clear.)
The standard starting point for a `procedural' \texttt{miz3} session is:
\xmedskip \begin{alltt}\small let = thm `;
proof
qed by #; `;; \end{alltt} \xmedskip
\noindent In the place of the two empty spaces one puts the name and statement of the lemma to be proved:
\label{luxury} \xmedskip \begin{alltt}\small let \fbox{ARITHMETIC_SUM{\vrule height8.5pt depth3pt width0pt}} = thm `;
\fbox{!n. nsum(1..n) ({\char`\\}i. i) = (n*(n + 1)) DIV 2{\vrule height8.5pt depth3pt width0pt}}
proof
qed by #; `;; \end{alltt} \xmedskip
\noindent When checking this, there will be \emph{no} error messages added, as the \texttt{\char`\#} mark means that this line is a subgoal to be proved. The \texttt{qed} step means that the statement from the lemma has been proved, and therefore the subgoal in this case consists of exactly that statement.
Next, one types a tactic after the \texttt{\char`\#} sign and has the system process the file:
\xmedskip \begin{alltt}\small let ARITHMETIC_SUM = thm `;
!n. nsum(1..n) ({\char`\\}i. i) = (n*(n + 1)) DIV 2
proof
qed by #{\hspace{1pt}}\fbox{INDUCT_TAC\,$\mbox{{\it control-}\tt C}\,{\it return}${\vrule height8.5pt depth3pt width0pt}};$\xtoolong$
`;; \end{alltt} \xmedskip
\noindent The tactic will be executed, and the system will `merge' the two subgoals that are generated (see the traditional session on page~\pageref{tactic}) into the proof, using the method from \cite{kal:wie:09}. Also, the insertion point of the editor will be put directly after the first \texttt{\char`\#} as indicated by the vertical bar, i.e., the editor will `jump' to the first subgoal that is now left:
\xmedskip \begin{alltt}\small let ARITHMETIC_SUM = thm `;
!n. nsum(1..n) ({\char`\\}i. i) = (n*(n + 1)) DIV 2
proof
nsum (1..0) ({\char`\\}i. i) = (0 * (0 + 1)) DIV 2 [1] by #\hspace{.5pt}\vrule height9.5pt depth4pt width.3pt\hspace{.5pt};
!n. nsum (1..n) ({\char`\\}i. i) = (n * (n + 1)) DIV 2
==> nsum (1..SUC n) ({\char`\\}i. i) = (SUC n * (SUC n + 1)) DIV 2 [2]
proof
let n be num;
assume nsum (1..n) ({\char`\\}i. i) = (n * (n + 1)) DIV 2;
qed by #;
qed by INDUCT_TAC from 1,2; `;; \end{alltt} \xmedskip
\noindent There is quite some text already, but most of it is generated and not typed by the user. Next one enters the second tactic, and has the system process it:
\xmedskip \begingroup \makeatletter \def\flbox#1{\leavevmode\setbox\@tempboxa\hbox{#1}\@tempdima\fboxrule
\advance\@tempdima \fboxsep \advance\@tempdima \dp\@tempboxa
\hbox{\lower \@tempdima\hbox
{\vbox{\hrule \@height \fboxrule
\hbox{\vrule \@width \fboxrule \hskip\fboxsep
\vbox{\vskip\fboxsep \box\@tempboxa\vskip\fboxsep}
}
\hrule \@height \fboxrule}}}} \def\frbox#1{\leavevmode\setbox\@tempboxa\hbox{#1}\@tempdima\fboxrule
\advance\@tempdima \fboxsep \advance\@tempdima \dp\@tempboxa
\hbox{\lower \@tempdima\hbox
{\vbox{\hrule \@height \fboxrule
\hbox{
\vbox{\vskip\fboxsep \box\@tempboxa\vskip\fboxsep}\hskip
\fboxsep\vrule \@width \fboxrule}
\hrule \@height \fboxrule}}}} \makeatother \begin{alltt}\small let ARITHMETIC_SUM = thm `;
!n. nsum(1..n) ({\char`\\}i. i) = (n*(n + 1)) DIV 2
proof
nsum (1..0) ({\char`\\}i. i) = (0 * (0 + 1)) DIV 2 [1] by #{\hspace{1pt}}\flbox{REWRITE_TAC,NSUM_CLAUSES_NU{\vrule height8.5pt depth3pt width0pt}} \frbox{MSEG\,$\mbox{{\it control-}\tt C}\,{\it return}${\vrule height8.5pt depth3pt width0pt}};$\xtoolong$
!n. nsum (1..n) ({\char`\\}i. i) = (n * (n + 1)) DIV 2
==> nsum (1..SUC n) ({\char`\\}i. i) = (SUC n * (SUC n + 1)) DIV 2 [2]
proof
let n be num;
assume nsum (1..n) ({\char`\\}i. i) = (n * (n + 1)) DIV 2;
qed by #;
qed by INDUCT_TAC from 1,2; `;; \end{alltt} \endgroup \xmedskip
\noindent This leads to a new subgoal, again exactly matching the one from the traditional HOL Light session:
\xmedskip \begin{alltt}\small let ARITHMETIC_SUM = thm `;
!n. nsum(1..n) ({\char`\\}i. i) = (n*(n + 1)) DIV 2
proof
(if 1 = 0 then 0 else 0) = (0 * (0 + 1)) DIV 2 [1] by #\hspace{.5pt}\vrule height9.5pt depth4pt width.3pt\hspace{.5pt};
nsum (1..0) ({\char`\\}i. i) = (0 * (0 + 1)) DIV 2 [2]
by REWRITE_TAC,NSUM_CLAUSES_NUMSEG from 1;
!n. nsum (1..n) ({\char`\\}i. i) = (n * (n + 1)) DIV 2
==> nsum (1..SUC n) ({\char`\\}i. i) = (SUC n * (SUC n + 1)) DIV 2 [3]
proof
let n be num;
assume nsum (1..n) ({\char`\\}i. i) = (n * (n + 1)) DIV 2;
qed by #;
qed by INDUCT_TAC from 2,3; `;; \end{alltt} \xmedskip
\noindent Note that the system automatically wrapped the line with the long tactic.
If one continues like this by putting in three more tactics, one gets the declarative counterpart of the procedural proof:
\xmedskip \begin{alltt}\small let ARITHMETIC_SUM = thm `;
!n. nsum(1..n) ({\char`\\}i. i) = (n*(n + 1)) DIV 2
proof
(if 1 = 0 then 0 else 0) = (0 * (0 + 1)) DIV 2 [1] by ARITH_TAC;
nsum (1..0) ({\char`\\}i. i) = (0 * (0 + 1)) DIV 2 [2]
by REWRITE_TAC,NSUM_CLAUSES_NUMSEG from 1;
!n. nsum (1..n) ({\char`\\}i. i) = (n * (n + 1)) DIV 2
==> nsum (1..SUC n) ({\char`\\}i. i) = (SUC n * (SUC n + 1)) DIV 2 [3]
proof
let n be num;
assume nsum (1..n) ({\char`\\}i. i) = (n * (n + 1)) DIV 2;
(if 1 <= SUC n
then nsum (1..n) ({\char`\\}i. i) + SUC n
else nsum (1..n) ({\char`\\}i. i)) =
(SUC n * (SUC n + 1)) DIV 2 [4] by ARITH_TAC;
qed by REWRITE_TAC,NSUM_CLAUSES_NUMSEG from 4;
qed by INDUCT_TAC from 2,3; `;; \end{alltt} \xmedskip
\noindent This is not a proof a human would have written, but it \emph{exactly} matches the traditional HOL Light session. Furthermore, the characters that one needs to type to get it are almost exactly the same as in that session.
Now the point of this paper is that one does \emph{not} need to choose between these two ways of working. One can work on a proof in a combination, both freely editing it and rechecking things while going, but also by executing tactics at various places and using the new proof steps that these tactics produce.
Note that \texttt{miz3} just replaces the \emph{proof} part of the HOL Light mathematical language. The rest of HOL Light's features like definitions and the implementation of proof automation are unchanged. Therefore, for convenience the \fbox{$\mbox{{\it control-}\tt C}\,{\it return}${\vrule height8.5pt depth3pt width0pt}} command will interpret text being processed as straight OCaml code if it does not have the shape \begingroup \xmedskip \begin{alltt}
\dots thm `;
\dots
`;; \end{alltt} \xmedskip \endgroup \noindent where the \texttt{`;} has to be at the end of the first line of the block. This means that there is no need to enter text directly in the HOL Light session, one can fully work from the \texttt{vi} interface. However, working in the HOL Light session itself is also possible.
\section{The \texttt{miz3} proof language}\label{language}
\noindent The three most common proof systems for first order predicate logic are natural deduction, sequent calculus and Hilbert-style logic. Of these three, natural deduction corresponds most closely with everyday mathematical reasoning. The two main proof systems for natural deduction are Ja\'skowski/Fitch- and Gentzen-style deduction \cite{pel:00}. Of these, the first is the easiest to use for actual proofs. An example of a proof in a Ja\'skowski/Fitch-style natural deduction proof system (where we use the syntactic conventions of \cite{hut:rya:04}) is:
\begingroup \setlength{\tabcolsep}{.5em}
\def\multicolumn{1}{|l}{}{\multicolumn{1}{|l}{}} \xmedskip \begin{center} \small \begin{tabular}{rlllrllllll} {\normalsize\strut} 1 &&&&& $(\exists x\,\neg P(x)) \lor \neg(\exists x\,\neg P(x))$ & {\scriptsize LEM} \\ \cline{2-10} {\normalsize\strut} 2 & \multicolumn{1}{|l}{} &&&& $\exists x\,\neg P(x)$ & assumption &&&& \multicolumn{1}{|l}{} \\ \cline{4-8} {\normalsize\strut} 3 & \multicolumn{1}{|l}{} && \multicolumn{1}{|l}{} & $x$ & $\neg P(x)$ & assumption && \multicolumn{1}{|l}{} && \multicolumn{1}{|l}{} \\ \cline{5-7} {\normalsize\strut} 4 & \multicolumn{1}{|l}{} && \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} & $P(x)$ & assumption & \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} && \multicolumn{1}{|l}{} \\ {\normalsize\strut} 5 & \multicolumn{1}{|l}{} && \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} & $\bot$ & $\neg E$ 3,4 & \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} && \multicolumn{1}{|l}{} \\ {\normalsize\strut} 6 & \multicolumn{1}{|l}{} && \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} & $\forall y\, P(y)$ & $\bot E$ 5 & \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} && \multicolumn{1}{|l}{} \\ \cline{5-7} {\normalsize\strut} 7 & \multicolumn{1}{|l}{} && \multicolumn{1}{|l}{} && $P(x) \Rightarrow \forall y\, P(y)$ & ${\Rightarrow}I$ 4--6 && \multicolumn{1}{|l}{} && \multicolumn{1}{|l}{} \\ {\normalsize\strut} 8 & \multicolumn{1}{|l}{} && \multicolumn{1}{|l}{} && $\exists x (P(x) \Rightarrow \forall y\, P(y))$ & $\exists I$ 7 && \multicolumn{1}{|l}{} && \multicolumn{1}{|l}{} \\ \cline{4-8} {\normalsize\strut} 9 & \multicolumn{1}{|l}{} &&&& $\exists x (P(x) \Rightarrow \forall y\, P(y))$ & $\exists E$ 2,3--8 &&&& \multicolumn{1}{|l}{} \\ \cline{2-10} &&&&&& \\ \noalign{\vspace{-
amount}} \cline{2-10} {\normalsize\strut} 10 & \multicolumn{1}{|l}{} &&&& $\neg(\exists x\,\neg P(x))$ & assumption &&&& \multicolumn{1}{|l}{} \\ \cline{3-9} {\normalsize\strut} 11 & \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} &&& $P(a)$ & assumption &&& \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} \\ \cline{4-8} {\normalsize\strut} 12 & \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} & $y$ & $P(y) \lor \neg P(y)$ & {\scriptsize LEM} && \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} \\ \cline{5-7} {\normalsize\strut} 13 & \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} & $P(y)$ & assumption & \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} \\ \cline{5-7} & \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} &&&&& \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} \\ \noalign{\vspace{-
amount}} \cline{5-7} {\normalsize\strut} 14 & \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} & $\neg P(y)$ & assumption & \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} \\ {\normalsize\strut} 15 & \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} & $\exists x\, \neg P(x)$ & $\exists I$ 14 & \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} \\ {\normalsize\strut} 16 & \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} & $\bot$ & $\neg E$ 10,15 & \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} \\ {\normalsize\strut} 17 & \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} & $P(y)$ & $\bot E$ 16 & \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} \\ \cline{5-7} {\normalsize\strut} 18 & \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} && $P(y)$ & $\lor E$ 12,13,14--17 && \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} \\ \cline{4-8} {\normalsize\strut} 19 & \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} &&& $\forall y\, P(y)$ & $\forall I$ 12--18 &&& \multicolumn{1}{|l}{} & \multicolumn{1}{|l}{} \\ \cline{3-9} {\normalsize\strut} 20 & \multicolumn{1}{|l}{} &&&& $P(a) \Rightarrow \forall y\, P(y)$ & ${\Rightarrow}I$ 11--19 &&&& \multicolumn{1}{|l}{} \\ {\normalsize\strut} 21 & \multicolumn{1}{|l}{} &&&& $\exists x (P(x) \Rightarrow \forall y\, P(y))$ & $\exists I$ 20 &&&& \multicolumn{1}{|l}{} \\ \cline{2-10} {\normalsize\strut} 22 &&&&& $\exists x (P(x) \Rightarrow \forall y\, P(y))$ & $\lor E$ 1,2--9,10--21 \end{tabular} \end{center} \xmedskip \xmedskip \endgroup
\noindent This is a proof of Smullyan's `drinker's principle' \cite{smu:90}.
Most declarative systems have essentially the same proof language (although the \emph{explanation} of the meaning of the elements of such a language can differ significantly between systems \cite{wen:wie:02}). In other words, declarative interactive theorem provers have proof languages that are much more similar to each other than the languages of procedural systems are. It turns out that the languages of declarative systems are all close to Ja\'skowski/Fitch-style natural deduction.
In particular, the \texttt{miz3} proof language is almost identical to the language of the Mizar system \cite{gra:kor:nau:10}. In the syntax of the \texttt{miz3} language, the natural deduction example that we just gave becomes:
\begingroup \def\\{\char`\\} \xmedskip \begin{alltt}\small
let DRINKER = thm `;
let P be A->bool;
thus ?x. P x ==> !y. P y [22]
proof
(?x. ~P x) \\/ ~(?x. ~P x) [1];
cases by 1;
suppose ?x. ~P x [2];
consider x such that ~P x [3] by 2;
take x;
assume P x [4];
F [5] by 3,4;
thus !y. P y [6] by 5;
end; \end{alltt} \begin{alltt}\small
suppose ~(?x. ~P x) [10];
consider a being A such that T;
take a;
assume P a [11];
let y be A;
P y \\/ ~P y [12];
cases by 12;
suppose P y [13];
thus P y by 13;
end;
suppose ~P y [14];
?x. ~P x [15] by 14;
F [16] by 10,15;
thus P y [17] by 16;
end;
end;
end`;; \end{alltt} \xmedskip \xmedskip \endgroup
\noindent (The extra `\texttt{consider} \texttt{a} \texttt{being} \texttt{A} \texttt{such} \texttt{that} \texttt{T;}' step is needed because in \texttt{miz3} variables have to be introduced before they can be used. The Ja\'skowski/Fitch-style natural deduction proof can use the free variable $a$ without introducing it, but in \texttt{miz3} this is not allowed. The step should be read as `consider an object \texttt{a} of type \texttt{A} such that true holds'. It is accepted by the system because in HOL all types are non-empty. This non-emptiness is indeed essential for the drinker's principle to be provable.)
This example shows the correspondence between \texttt{miz3} proof steps and natural deduction rules:
\xmedskip \begin{center} \begin{tabular}{cc}
\emph{deduction rule} & \emph{proof step} \\ \noalign{\smallskip} \hline \noalign{\smallskip}
assumption & \texttt{thus} \\ \noalign{\smallskip}
${\land}I$ & \texttt{thus} \\
${\Rightarrow}I$ & \texttt{assume} \\
${\neg}I$ & \texttt{assume} \\
${\forall}I$ & \texttt{let} \\
${\exists}I$ & \texttt{take} \\ \noalign{\smallskip}
${\lor}E$ & \texttt{cases}/\texttt{suppose} \\
${\exists}E$ & \texttt{consider}
\end{tabular} \end{center} \xmedskip \xmedskip \xmedskip
\noindent The other natural deduction rules, like ${\land}E$, ${\Rightarrow}E$, ${\neg}E$, ${\forall}E$ and ${\lor}I$, are all subsumed by the inference checker `\texttt{by}'. Actually, the ${\land}I$ and ${\exists}I$ rules are subsumed by \texttt{by} as well: the \texttt{thus} and \texttt{take} steps are backward steps rather than direct counterparts of the ${\land}I$ and ${\exists}I$ rules.
We now present the meaning of the \texttt{miz3} language. It is almost exactly the same as that of the Mizar language \cite{gra:kor:nau:10}.
At every step in a proof there is a designated statement called the \texttt{thesis}. This is the statement that is being proved (the \emph{goal} of a procedural prover). Subproofs have their own local \texttt{thesis}.
Now steps can add extra variables and statements to the proof context, but can also change the \texttt{thesis}. If that happens the step is called a \emph{skeleton} step.
Here is a table that summarizes the main \texttt{miz3} proof steps:
\xmedskip \begin{center} $\hspace{-5em}$\begin{tabular}{lcccccc} \emph{proof step} & \emph{statement} & \emph{new proof} & \emph{referable} & \emph{old} & \emph{new} & \emph{skeleton} \\
& \emph{to be justified} & \emph{variables} & \emph{statement} & \texttt{thesis} & \texttt{thesis} & \emph{step?} \\ \noalign{
} \hline \noalign{
} $\phi$\texttt{ by }\dots\texttt{;} & $\phi$ & & $\phi$ & $\psi$ & $\psi$ & $-$ \\ \noalign{
} \texttt{let }$x_1$ \dots\ $x_n${\tt\ be }$\tau$\texttt{;}$\hspace{-0em}$ & & $x_1$ \dots\ $x_n$ & & $\forall x_1 \dots\forall x_n\,\psi$ & $\psi$ & $+$ \\ \noalign{
} \texttt{assume }$\phi$\texttt{;} & & & $\phi$ & $\phi \Rightarrow \psi$ & $\psi$ & $+$ \\ \noalign{
} \texttt{assume }$\phi$\texttt{;} & & & $\phi$ & $\neg\phi$ & $\bot$ & $+$ \\ \noalign{
} \texttt{assume }$\neg\phi$\texttt{;} & & & $\neg\phi$ & $\phi$ & $\bot$ & $+$ \\ \noalign{
} \texttt{thus }$\phi$\texttt{ by }\dots\texttt{;} & $\phi$ & & $\phi$ & $\phi \land \psi$ & $\psi$ & $+$ \\ \noalign{
} \texttt{thus }$\phi$\texttt{ by }\dots\texttt{;} & $\phi$ & & $\phi$ & $\phi$ & $\top$ & $+$ \\ \noalign{
} \texttt{qed by }\dots\texttt{;} & $\phi$ & & & $\phi$ & & $+$ \\ \noalign{
} \texttt{take }$t$\texttt{;} & & & & $\exists x\,\psi$ & $\psi[x:=t]$ & $+$ \\ \noalign{
} \texttt{.= }$t_{i + 1}$\texttt{ by }\dots\texttt{;} & $t_i = t_{i + 1}$ & & $t_1 = t_{i + 1}$ & $\psi$ & $\psi$ & $-$ \\ \noalign{
} \texttt{set }$x$\texttt{ = }$t$\texttt{;} & & $x$ & $x = t$ & $\psi$ & $\psi$ & $-$ \\ \noalign{
} \hline \noalign{
} \texttt{consider }$x_1$ \dots\ $x_n\hspace{-0em}$ \\ $\hspace{1em}$ \texttt{such that} \\ $\hspace{1em}$ $\phi$\texttt{ by }\dots\texttt{;} & $\exists x_1\dots\exists x_n\, \phi$ & $x_1$ \dots\ $x_n$ & $\phi$ & $\psi$ & $\psi$ & $-$ \\ \noalign{
} \hline \noalign{
} \texttt{cases by }\dots\texttt{;} & $\phi_1 \lor \dots \lor \phi_n$ & & & $\psi$ \\ \dots \\ \texttt{suppose }$\phi_i$\texttt{;} & & & $\phi_i$ & & $\psi$ & $+$ \\ $\hspace{1em}$ \dots \\ \texttt{end;} \\ \dots \\ \noalign{
} \hline \end{tabular}$\hspace{-5em}$ \end{center} \xmedskip \xmedskip \noindent This then is the basic \texttt{miz3} proof language. The \texttt{by} justifications should contain sufficient references for the automation to prove the statement in the second column. The third and fourth columns give the variables and statements that are added to the proof context. The fifth and sixth columns give the \texttt{thesis} before and after the step, and the seventh column indicates whether the step is a skeleton step or not.
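\noindent To give an impression of how the \texttt{thesis} evolves under these skeleton steps, here is a small schematic fragment (it assumes a variable \texttt{P} of type \texttt{A->bool} in scope, and it is meant to illustrate the table rather than as a machine-checked proof script). The \texttt{thesis} starts out as the full statement, the \texttt{let} removes the universal quantifier, the \texttt{assume} removes the antecedent, and the \texttt{qed} closes the proof:
\xmedskip \begin{alltt}\small
  !x. P x ==> P x /{\char`\\} P x [1]
  proof
    let x be A;
    assume P x [2];
    qed by 2;
\end{alltt} \xmedskip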
In the case of a \texttt{cases}, every \texttt{suppose} branch inherits the \texttt{thesis} that held at the \texttt{cases}, as indicated in the table. However, as one cannot remove a \texttt{suppose} without destroying the skeleton of the proof, it still counts as a skeleton step.
\begin{figure}
\caption{The full \texttt{miz3} grammar}
\label{grammar}
\end{figure}
The \texttt{.=} construction is used for \emph{iterated equalities}. Not indicated in the table (for lack of space) is that the step directly before the \texttt{.=}\, should prove a statement of the shape $t_1 = t_i$. This then determines the terms $t_1$ and $t_i$ in the table.
Using \texttt{.=} reasoning, chained equalities of the shape $t_1 = t_2 = t_3 = \dots = t_n$ can be coded as \xmedskip \begin{flushleft}\tt \ \ $t_1$ = $t_2$ by {\rm\dots}; \\ \ \ \phantom{$t_1$} \llap{\tt .}= $t_3$ by {\rm\dots}; \\ \ \ \phantom{$t_1$} \llap{\rlap{\rm\dots}\phantom{\tt .}} \\ \ \ \phantom{$t_1$} \llap{\tt .}= $t_n$ by {\rm\dots}; \end{flushleft} \xmedskip \noindent \looseness=-1 The \texttt{now} syntax can be used when the statement that is being proved can be inferred from the skeleton steps in the proof (which is generally the case). In that situation \xmedskip \begingroup \begin{alltt}
\(\phi\) proof \dots{} end; \end{alltt} \xmedskip \noindent can be abbreviated as \xmedskip \begin{alltt}
now \dots{} end; \end{alltt} \endgroup \xmedskip \noindent Finally, the \texttt{-} label refers to the statement that was introduced most recently to the proof context, and the \texttt{exec} step can be used to transform the \texttt{thesis} using an arbitrary tactic in the style of \cite{wie:01}.
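\noindent As a small schematic example of this shorthand (again assuming \texttt{P} and the type \texttt{A} in scope; the fragment is meant as an illustration, not as a machine-checked proof), the statement \texttt{!x. P x ==> P x /{\char`\\} P x} can be inferred from the skeleton of the following \texttt{now} block, in which the \texttt{-} label refers to the assumption that was added last:
\xmedskip \begin{alltt}\small
  now
    let x be A;
    assume P x;
    thus P x /{\char`\\} P x by -;
  end;
\end{alltt} \xmedskip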
The \texttt{miz3} language differs in some respects from the real Mizar language: \begin{iteMize}{$\bullet$} \item The statements, terms and types use HOL Light syntax instead of Mizar syntax. In particular the rich type system of Mizar \cite{wie:07:2} is not available.
\item The labels are behind the statements in brackets, instead of in front with a colon.
\item There is no \texttt{then}. The label \texttt{-} refers to the previous statement. Also, the last `\texttt{horizon}' statements are visible without reference, where \texttt{horizon} is a variable of the \texttt{miz3} server that is usually set to \texttt{1}. If one sets \texttt{horizon} \texttt{:=} \texttt{-1;;} all statements local to the proof that are in scope are used in inference checking without any references. (This makes the proofs look nice and checking very slow.)
\item Some keywords are slightly different: see Figure~\ref{grammar} for the exact grammar. This grammar can be parsed with \texttt{yacc} without any shift/reduce conflicts. In that sense it is more straightforward than the grammar used by the real Mizar \cite{cai:gow:04}. The most notable changes are that there is a keyword \texttt{qed} as an abbreviation of `\texttt{thus thesis; end}', and that in iterated equalities {each} step has a terminating semicolon instead of just the last one.
\item The \texttt{from} keyword has a different meaning than in Mizar (see the explanation below of how tactics in justifications function).
\end{iteMize}
\noindent In a justification of a proof step an arbitrary HOL Light tactic (either of type \texttt{tactic} or of type \texttt{thm list} \texttt{->} \texttt{tactic}) can be given. If this tactic is absent, the default tactic \texttt{default\char`\_{\penalty 100}prover}, which is another variable of the \texttt{miz3} server, is used. This first tries \texttt{HOL\char`\_BY}, John Harrison's equivalent of the Mizar automated theorem prover, and if that fails it runs some decision procedures, as well as \texttt{MESON}, the HOL Light first-order automated theorem prover \cite{har:96:1}.
If a tactic is explicitly present in a step, a goalstate is created with the statement that needs to be justified as the goal, and the statements referred to in the \texttt{by} part of the justification as the assumptions. In this goalstate the tactic is executed with the \texttt{by} part statements as arguments. Finally it is checked whether the subgoals obtained that way are all present in the union of the \texttt{by} and \texttt{from} parts of the justification. A tactic to connect everything in the style of \cite{wie:01} is executed, which will fail if this is not the case.
\section{The implementation of the \texttt{miz3} proof interface}\label{implementation}
\noindent We now present a low-level description of the implementation of the \texttt{miz3} interface. This might seem to be about irrelevant details, but if one leaves out enhancements like the caches and time-outs, the system becomes unusable for serious work. Readers only interested in an abstract description of the \texttt{miz3} proof style, or in using the system instead of understanding how it is organized on the inside, can safely skip this section.
The \texttt{miz3} interface is both a prototype (in the sense that it has no real users yet) and a quite usable system (in the sense that it is a quite usable proof language for the quite usable HOL Light system). The \texttt{miz3} interface is rather lightweight, although in its current implementation it needs the infrastructure of a Unix system.
The full source of the \texttt{miz3} system consists of five files: \xmedskip \begin{center} \begin{tabular}{lrll} \texttt{miz3.ml} & 1,903 lines && the OCaml program to be loaded in HOL Light \\ \texttt{miz3\char`\_of\char`\_hol.ml} & 237 lines && (see Section~\ref{automatic} below) \\ \texttt{bin/miz3} & 28 lines && the command line batch checker \\ \texttt{bin/miz3f} & 40 lines && a filter version of the command line checker \\ \texttt{exrc} & 9 lines && the `\texttt{vi} mode' for \texttt{miz3} \end{tabular} \end{center} \xmedskip
\noindent The communication between the \texttt{vi} session and HOL Light is initiated by \texttt{vi} sending a Unix signal to HOL Light, after which the signal handler of the HOL Light session does all the work. This means that the HOL Light session when not doing a \texttt{miz3} check is {not} actively `waiting' for commands, and that if it is running some other code, then that code will be interrupted to do the \texttt{miz3} check. The communication between \texttt{vi} and HOL Light is through files, of which some have hard coded special names in order for both sides to be able to find them.
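\noindent As a minimal sketch of the OCaml side of this handshake (this is \emph{not} the actual code from \texttt{miz3.ml}; the name \texttt{install\char`\_miz3\char`\_handler} and the \texttt{check\char`\_file} parameter are ours, only the \texttt{USR2} signal and the special file name correspond to the description in this section), the handler could be installed along the following lines:
\xmedskip \begin{alltt}\small
  let install_miz3_handler (check_file : string -> unit) : unit =
    Sys.set_signal Sys.sigusr2
      (Sys.Signal_handle (fun _ ->
         (* read the name of the file that has to be checked *)
         let ic = open_in "/tmp/miz3_filename" in
         let name = input_line ic in
         close_in ic;
         check_file name));;
\end{alltt} \xmedskip
\noindent The real handler of course also parses the file, checks the proof and writes the result back, as described in the list of steps below.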
Specifically, when checking a proof from the \texttt{vi} interface by typing \xmedskip \begin{center} \fbox{$\mbox{{\it control-}\tt C}\,{\it return}${\vrule height8.5pt depth3pt width0pt}} \end{center} \xmedskip the following steps happen: \begin{iteMize}{(1)} \item The relevant part of the buffer is selected using the \texttt{vi} commands \texttt{\char`\{} and \texttt{\char`\}}.
\item This part of the buffer is filtered through the perl script \texttt{miz3f}.
\item This script writes this to a temporary file in \texttt{/tmp}, and runs the shell script \texttt{miz3} on that file.
\item This then writes the name of its input file in the file with the special name \texttt{/tmp/{\penalty 100}miz3\char`\_{\penalty 100}filename}, looks up the process id of the HOL Light session running the \texttt{miz3} server in \texttt{/tmp/miz3\char`\_pid}, and sends that process the \texttt{USR2} signal.
\item The signal handler of the HOL Light session now finds the file it needs to check, and parses it into a data structure of OCaml type \texttt{step} \texttt{list}. In this data structure the full input is stored in small pieces.
\item Next, the server calls the function \texttt{check\char`\_proof} on this, which returns another \texttt{step} \texttt{list}, this time with errors marked, and possibly grown by running tactics after the \texttt{\char`\#} symbol.
\item This result now is printed back into the file in \texttt{/tmp}, after which the filter puts it back into the edit buffer.
\end{iteMize}
\noindent In other words the \emph{full} proof is processed \emph{every} time a check is done. To make this acceptably fast, there are two caches. The first cache holds inferences that have already been checked, to prevent the checker from having to run all tactics every time. The second cache holds the OCaml objects associated with the elements in the \texttt{by} justifications. These are calculated using the OCaml functions \texttt{Toploop.{\penalty 100}parse\char`\_toplevel\char`\_phrase} and \texttt{Toploop.execute\char`\_phrase} (which together are the OCaml equivalent of Lisp's \texttt{eval} function).
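\noindent The heart of this second cache is this OCaml equivalent of \texttt{eval}. As a minimal sketch (the name \texttt{eval\char`\_ocaml\char`\_string} is ours, and all error handling, as well as the retrieval of the resulting OCaml object from the toplevel environment, is omitted), evaluating a justification element that is given as a string comes down to:
\xmedskip \begin{alltt}\small
  let eval_ocaml_string (src : string) : bool =
    (* src should be a toplevel phrase ending in ";;", e.g. "ADD_SUB2;;" *)
    let lexbuf = Lexing.from_string src in
    let phrase = !Toploop.parse_toplevel_phrase lexbuf in
    Toploop.execute_phrase true Format.std_formatter phrase;;
\end{alltt} \xmedskip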
We do not want to restrict users to `special' tactics that are designed to always terminate in a reasonable time. This means that while working on a proof sometimes tactics will `hang'. Therefore, while developing a proof, after a specified time tactics will be killed using the \texttt{Unix.alarm} function. This time is given by the variable \texttt{timeout}. Of course, this makes it dependent on the specific computer used whether a proof will be accepted or flagged with a time-out error. (However, a similar thing holds for any interactive theorem prover, because the memory might run out during a check as well.) If one wants to check a finished proof on a slow system without having to worry about time-outs for tactics, one can set \texttt{timeout} \texttt{:=} \texttt{-1;;} to disable time-outs.
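\noindent A stripped-down version of this time-out mechanism (not the actual \texttt{miz3.ml} code; the names \texttt{Timeout} and \texttt{with\char`\_timeout} are ours, and we assume the \texttt{Unix} library is loaded, which \texttt{miz3} needs for \texttt{Unix.alarm} anyway) could look as follows:
\xmedskip \begin{alltt}\small
  exception Timeout;;

  let with_timeout (seconds : int) (f : unit -> 'a) : 'a =
    if seconds < 0 then f () else begin    (* timeout := -1 disables it *)
      let old = Sys.signal Sys.sigalrm
                  (Sys.Signal_handle (fun _ -> raise Timeout)) in
      let finish () =
        ignore (Unix.alarm 0); Sys.set_signal Sys.sigalrm old in
      ignore (Unix.alarm seconds);
      try let result = f () in finish (); result
      with e -> finish (); raise e
    end;;
\end{alltt} \xmedskip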
The system will never pretty-print user input by itself, even though it parses and reprints proofs all the time. Even white space and comments will stay exactly the way they are. However, if the proof is modified by `growing' it through execution of tactics, the system tries hard to indent and line wrap the new parts nicely. There are a dozen parameters to direct this process, which are listed at the start of the source file \texttt{miz3.ml}.
Although the interface currently only runs in \texttt{vi}, we took great care to not have it be dependent on \texttt{vi} specific features. In fact, the whole `\texttt{miz3} mode' for \texttt{vi} essentially consists of the single line \xmedskip \begingroup \def\{{\char`\{} \def\}{\char`\}} \begin{alltt} :map ^C<CR> \{/.^M!\}miz3f^M/#^Ml \end{alltt} \endgroup \xmedskip \noindent in the \texttt{.exrc} file (for \texttt{vim} users the \texttt{.vimrc} file), which is the file that configures mappings between \texttt{vi} keystrokes and commands. The simplicity of this interface means that porting it to other editors will be trivial.
\section{Automatically converting procedural proofs to declarative proofs}\label{automatic}
\noindent Traditional HOL Light proofs can be mimicked in \texttt{miz3} by using the system in the procedural style as shown in Section~\ref{session}. We wrote a small program that automates this process. Using this, \emph{any} proof from the HOL Light library can be fully automatically converted to the \texttt{miz3} language.
For example, consider the following HOL Light lemma, which says that subtraction on natural numbers and subtraction on real numbers correspond to each other:
\xmedskip \begin{alltt}\small let REAL_OF_NUM_SUB = prove
(`!m n. m <= n ==> (&n - &m = &(n - m))`,
REPEAT GEN_TAC THEN REWRITE_TAC[LE_EXISTS] THEN
STRIP_TAC THEN ASM_REWRITE_TAC[ADD_SUB2] THEN
REWRITE_TAC[GSYM REAL_OF_NUM_ADD] THEN
ONCE_REWRITE_TAC[REAL_ADD_SYM] THEN
REWRITE_TAC[real_sub; GSYM REAL_ADD_ASSOC] THEN
MESON_TAC[REAL_ADD_LINV; REAL_ADD_SYM; REAL_ADD_LID]);; \end{alltt} \xmedskip
\noindent To be able to use the converter, it is necessary to load it:
\begingroup \def$\langle\mbox{\it many lines of output}\rangle${$\langle\mbox{\it many lines of output}\rangle$} \xmedskip \begin{alltt}\small
# \fbox{#use "miz3_of_hol.ml";;\hspace{-.5pt}{\vrule height8.5pt depth3pt width0pt}}
$\langle\mbox{\it many lines of output}\rangle$ \end{alltt} \xmedskip \endgroup \noindent Then one converts the proof as follows:
\xmedskip \begin{alltt}\small
# \fbox{miz3_of_hol "/opt/src/hol_light/real.ml" "REAL_OF_NUM_SUB";;\hspace{-.5pt}{\vrule height8.5pt depth3pt width0pt}}
0..0..1..2..7..14..37..72..174..325..solved at 392 val it : step =
!m n. m <= n ==> &n - &m = &(n - m) [1]
proof
!n m. m <= n ==> &n - &m = &(n - m) [2]
proof
let n m be num;
!d. n = m + d ==> &n - &m = &(n - m) [3]
proof
let d be num;
assume n = m + d [4];
&d + &m + -- &m = &d [5]
by MESON_TAC[REAL_ADD_LINV; REAL_ADD_SYM; REAL_ADD_LID],4;
(&d + &m) - &m = &d [6]
by REWRITE_TAC[real_sub; GSYM REAL_ADD_ASSOC],4 from 5;
(&m + &d) - &m = &d [7] by ONCE_REWRITE_TAC[REAL_ADD_SYM],4 from 6;$\xtoolong$
&(m + d) - &m = &d [8]
by REWRITE_TAC[GSYM REAL_OF_NUM_ADD],4 from 7;
qed by ASM_REWRITE_TAC[ADD_SUB2],4 from 8;
(?d. n = m + d) ==> &n - &m = &(n - m) [9] by STRIP_TAC from 3;
qed by REWRITE_TAC[LE_EXISTS] from 9;
qed by REPEAT GEN_TAC; \end{alltt} \xmedskip
\noindent The possibility of converting proofs like this does not depend on HOL Light specifics. Any procedural system that has goals and tactics can have a \texttt{miz3} layer on top of it, and proofs can then be converted to that layer in exactly the same way. We believe that this gives an approach to `integrate' mathematical libraries between systems:
\xmedskip \begin{center} \begin{picture}(340,120)(0,22)
\put(40,32){\makebox(0,0){\strut \emph{system 1}}} \put(170,32){\makebox(0,0){\strut \emph{system 2}}} \put(300,32){\makebox(0,0){\strut \emph{system 3}}} \put(40,60){\makebox(0,0){\strut declarative proofs}} \put(170,60){\makebox(0,0){\strut declarative proofs}} \put(300,60){\makebox(0,0){\strut declarative proofs}} \put(40,130){\makebox(0,0){\strut procedural proofs}} \put(170,130){\makebox(0,0){\strut procedural proofs}} \put(300,130){\makebox(0,0){\strut procedural proofs}} \put(40,122){\vector(0,-1){53}} \put(170,122){\vector(0,-1){53}} \put(300,122){\vector(0,-1){53}} \put(87,60){\vector(1,0){36}} \put(123,60){\vector(-1,0){36}} \put(217,60){\vector(1,0){36}} \put(253,60){\vector(-1,0){36}} \end{picture} \end{center} \xmedskip
\noindent However, conversions between Mizar-style proofs for different systems cannot be completely automatic. Even though the \texttt{miz3} language can be put on top of any system, the semantics of the \emph{statements} (when working on top of the native library of the systems) will not exactly match, and the \emph{justification} automation will not match either. When converting a \texttt{miz3} proof to Mizar, Isar or C-zar, one can only do an approximate job with the statements, and there is no proper way to translate all tactics.
Still, one gets a very good `starting point' when converting a declarative proof between systems. There will just be some justification errors. This means that a declarative proof converted to a different system will be like a `formal proof sketch' \cite{wie:04}.
For these reasons our system also cannot directly make use of the Mizar mathematical library MML. But again, it is not difficult to write a translator that translates a class of Mizar proofs (the ones that use statements that map reasonably well to HOL Light versions) into \texttt{miz3} formal proof sketches.
\section{A larger example: Lagrange's theorem }\label{lagrange}
\noindent We have tried \texttt{miz3} on a somewhat larger proof: Lagrange's theorem from group theory. This states that for any group the order of a subgroup always divides the order of that group. John Harrison already had written a HOL Light proof of this theorem, which meant that we could see how the traditional proof style compared to what was possible in \texttt{miz3}. We used two different approaches: we wrote a proof following the proof that is in van der Waerden's book about algebra \cite{wae:71}, and we wrote a \texttt{miz3} proof trying to closely follow John Harrison's proof.
Van der Waerden's proof we formalized both in Mizar and in \texttt{miz3}, and in both cases we first wrote a formal proof sketch \cite{wie:04} of the proof. The \texttt{miz3} formal proof sketch was:
\xmedskip \begin{alltt}\small now let a be A; assume a IN G; let b be A; assume b IN G;
assume i(a)**b IN H;
b***H = a**i(a)**b***H; .= a***(i(a)**b***H); thus .= a***H; end; !a b. a IN G /{\char`\\} b IN G /{\char`\\} ~(a***H = b***H) ==> a***H INTER b***H = \{\} proof let a be A; assume a IN G; let b be A; assume b IN G;
now assume ~(a***H INTER b***H = \{\});
consider g1 g2 such that g1 IN H /{\char`\\} g2 IN H /{\char`\\} a**g1 = b**g2;
g1**i(g2) = i(a)**b;
i(a)**b IN H;
thus a***H = b***H;
end; qed; !a. a IN G ==> a IN a***H proof let a be A; assume a IN G; a**e = a; qed;$\xtoolong$
\{a***H | a IN G\} PARTITIONS G; !a b. a IN G /{\char`\\} b IN G ==> CARD (a***H) = CARD (b***H) proof let a be A; assume a IN G; let b be A; assume b IN G;
consider f such that !g. g IN H ==> f(a**g) = b**g;
bijection f (a***H) (b***H); qed; set INDEX = CARD \{a***H | a IN G\}; set N = CARD G; set n = CARD H; set j = INDEX; N = j*n; thus CARD H divides CARD G; \end{alltt} \xmedskip
\noindent (The notation with the multiple stars is a bit ugly, but we wanted to use the statement from John Harrison's proof, and there group multiplication is written as \texttt{**}. We then used \texttt{***} for multiplication of an element and a coset.) Both the Mizar and \texttt{miz3} formalizations were completely straightforward. The part of the final formalization that corresponds to the first four lines of the formal proof sketch became:
\xmedskip \begin{alltt}\small now [22]
let a be A; assume a IN G [23];
let b be A; assume b IN G [24];
i(a)**b IN G [25] by 2,23,24;
assume i(a)**b IN H [26];
b***H = e**b***H by 2,24;
.= a**i(a)**b***H by -,2,23;
.= a**(i(a)**b)***H by -,2,23,24;
.= a***(i(a)**b***H) by -,9,23,25;
thus .= a***H by -,17,26; end; \end{alltt} \xmedskip
\noindent We obtained \texttt{miz3} versions of John Harrison's proof in two different ways. We generated one version automatically using the technology from Section~\ref{automatic}, and we wrote a second one manually, by just running the proof step by step, `understanding' what was going on, and then rendering that in \texttt{miz3} syntax. The first, although a correct declarative proof, was disappointingly large, because there is a lot of equality reasoning in the proof and the converter does not optimize that proof pattern yet. Writing the second was again completely straightforward.
The line counts of the various formalizations that we got were: \xmedskip \begin{center} \begin{tabular}{lr}
\hline \noalign{\smallskip}
traditional HOL Light by John Harrison & \textbf{214} lines \\ \noalign{\smallskip}
\hline \noalign{\smallskip}
Mizar formal proof sketch & 25 lines \\
Mizar & 153 lines \\
\texttt{miz3} formal proof sketch & 23 lines \\
\texttt{miz3} & \textbf{183} lines \\ \noalign{\smallskip}
\hline \noalign{\smallskip}
\texttt{miz3} (converted from John Harrison's proof) & 1,317 lines \\
\texttt{miz3} (manually written following John Harrison's proof) & \textbf{198} lines \\ \noalign{\smallskip}
\hline \end{tabular}\label{proofcounts} \end{center} \xmedskip
\noindent It is clear that the two hand-written \texttt{miz3} formalizations are of a similar size to the HOL Light and Mizar formalizations, which shows that using \texttt{miz3} did not lead to much larger proof texts.
\section{Discussion}\label{discussion}
\subsection{A more generic interface?}
\noindent The synthesis between proof styles that we propose in this paper, together with its accompanying proof language, is very general. Therefore a natural question is why we did not make the \texttt{miz3} framework \emph{generic}, like for example the Proof General interface \cite{asp:00}. That way our work would be useful for users of systems like Isabelle, Coq and PVS.
The reason we did not do this is that to get a usable framework we had to do various things that tie deep into the innards of the system. Our current implementation really works on the level of the data structures inside the prover. And there we do things like timing out tactics, caching justifications, and interpreting strings as OCaml expressions.
To make a version of our system that is generic, we would need to work on a much more syntactic level. However, in that case it probably would not be so easy to do caching of justifications in a sound way. We still think this might be possible, but it would be very different from the architecture that we use now.
\subsection{The reliability of the system}
\noindent Since we have not changed the HOL Light kernel, \texttt{miz3} is as reliable as the standard version of HOL Light. If \texttt{miz3} gives an error message, then \emph{that} might be wrong, but if a proof is \emph{accepted}, then we know that the kernel of the system has fully checked the proof object, and therefore that possible \texttt{miz3} bugs did not matter (this is called the \emph{de Bruijn criterion} \cite{bar:wie:06}). This holds even though \texttt{miz3} consists of complicated code, and even though the system goes outside the basic OCaml programming model by using a system call to time out tactics and by invoking the OCaml interpreter on pieces of the proof text.
When a lemma has a completed \texttt{miz3} proof with no errors left, the proved \texttt{thm} will `appear' in the session for further use. For example, once we check a correct proof for the example lemma in Section~\ref{example}, we see the following magically appear in the HOL Light session:
\xmedskip \begin{alltt}\small
# val ( ARITHMETIC_SUM ) : thm =
|- !n. nsum (1..n) ({\char`\\}i. i) = (n * (n + 1)) DIV 2 \end{alltt} \xmedskip
\noindent From then on we can refer to the variable \texttt{ARITHMETIC\char`\_SUM}. The interface generally suppresses all printing when it is doing a check (we do not want a check from \texttt{vi} to disturb the output of our HOL Light session, where we might be doing other things); only when a proof is fully correct is the theorem printed.
To make it possible to get a \texttt{thm} out of a \texttt{miz3} proof, the justification cache has to hold not only the information that a justification was correct, but also a \texttt{thm} that can be used to redo that justification in a very fast way.
\subsection{Future work} \label{improvements}
\noindent The \texttt{miz3} interface is working very well, and is competitive with the traditional way of using HOL Light. Still, there are various ways in which it can be improved:
\begin{iteMize}{$\bullet$} \item The default prover for \texttt{by} justifications is not completely satisfactory. This is orthogonal to the design of the system, but a better justifier will make the system easier to use. One would like the justifier to have the following properties:
\begin{iteMize}{$-$} \item Any statement that can be proved in HOL Light should be provable in \texttt{miz3} without explicit tactics. This does not mean that the \texttt{by} justifier should be able to prove it all by itself (not even given infinite time and memory), but that it should be possible to break the proof into sufficiently small steps that \texttt{by} can then prove each \emph{step} without further procedural help.
\item Basic HOL Light tactics like \texttt{REWRITE\char`\_TAC}, \texttt{MATCH\char`\_MP\char`\_TAC}, \texttt{ARITH\char`\_{\penalty 100}TAC} and \texttt{MESON\char`\_TAC} should not have to be given explicitly. If the tactic in a justification is one of these, and it runs in the justification in a reasonably short time, then the default tactic should be able to do the proof too.
\end{iteMize}
\item Experimenting with goal states that correspond to steps in a \texttt{miz3} proof could be more ergonomic. These goal states either can correspond to a \texttt{thesis} at a specific place in the proof, or to a \texttt{by} justification. One can experiment already using the \texttt{GOAL\char`\_TAC} tactic (a custom \texttt{miz3} tactic that sets the current goal of the HOL Light session to the goalstate it is given as input) but this is a bit cumbersome. At the moment Isabelle/Isar is in this respect much more ergonomic than our system.
Currently, when the system flags an error for a certain place in the proof this gives the user a bit of a helpless feeling (the same holds for the Mizar system). If the user understands the problem, then all is fine, but if the user does \emph{not} understand the error then there should be something that can be done to investigate more easily than is possible now.
\item We might try going beyond the standard Mizar proof idiom. For instance, it would be trivial (although not terribly useful) to follow the suggestion from various people to have an \emph{it now is sufficient to prove this} proof step. Such a step sets the \texttt{thesis} and its justification derives the old \texttt{thesis} from that statement. It is called `\texttt{suffices} \texttt{to} \texttt{show}' in \cite{har:96} and `\texttt{Suffices}' in \cite{bar:03}.
More interestingly there could be a variant of the \texttt{cases} construct where the cases are the subgoals produced by a tactic. That way induction proofs could be more like traditional mathematical proofs, because then the \texttt{INDUCT\char`\_TAC} tactic would be \emph{before} the inductive cases.
\item It would be interesting to have a more intelligent folding editor on top of our interface. It could color the relevant parts of the proof to clearly show the goal that corresponds to an unjustified step, and it could intelligently fold and unfold subproofs.
\item It would be useful to make the communication architecture more flexible. That way we might have verification not be restricted to one contiguous block, the system could report errors more precisely, and several instances of the system could be run on one computer at the same time.
\end{iteMize}
\subsection{A proof interface for the working mathematician}
\noindent A good proof interface for formal mathematics should satisfy the requirement that \emph{easy things should be easy, and hard things should be possible} \cite{bea:98}. The synthesis proposed in this paper combines the ease of the declarative proof style with the power of the procedural proof style. We hope that our approach will turn out to be a piece in the puzzle of making interactive theorem provers useful for and attractive to the working mathematician.
\end{document} | arXiv |
Evidence of photochromism in a hexagonal boron nitride single-photon emitter
Matthew A. Feldman,1,2,4 Claire E. Marvinney,2 Alexander A. Puretzky,3 and Benjamin J. Lawrie2,*
1Department of Physics and Astronomy, Vanderbilt University, Nashville, Tennessee 37235, USA
2Materials Science and Technology Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831, USA
3Center for Nanophase Materials Sciences, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831, USA
4e-mail: [email protected]
*Corresponding author: [email protected]
Matthew A. Feldman, Claire E. Marvinney, Alexander A. Puretzky, and Benjamin J. Lawrie, "Evidence of photochromism in a hexagonal boron nitride single-photon emitter," Optica 8, 1-5 (2021), https://doi.org/10.1364/OPTICA.406184
Original Manuscript: August 24, 2020
Solid-state single-photon emitters (SPEs) such as the bright, stable, room-temperature defects within hexagonal boron nitride (hBN) are of increasing interest for quantum information science. To date, the atomic and electronic origins of SPEs within hBN have not been well understood, and no studies have reported photochromism or explored cross correlations between hBN SPEs. Here, we combine irradiation time-dependent microphotoluminescence spectroscopy with two-color Hanbury Brown–Twiss interferometry in an investigation of the electronic structure of hBN defects. We identify evidence of photochromism in an hBN SPE that exhibits single-photon cross correlations and correlated changes in the intensity of its two zero-phonon lines.
© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. INTRODUCTION
Solid-state single-photon emitters (SPEs) have become of increasing interest as a source of nonclassical light for quantum computation, quantum communication, and quantum sensing applications [1–3]. Defects in hexagonal boron nitride (hBN) have emerged as notable SPEs due to their bright, stable, room-temperature emission across the visible spectrum [4]. The recent characterization of spin states in an hBN defect ensemble with optically detected magnetic resonance could enable new quantum memories [5,6]. Charge state initialization of hBN defects could enable new approaches to coherent optical control [7,8]. Furthermore, strain localization [9] and strain tuning [10] of hBN SPEs could enable the design of deterministic indistinguishable single-photon sources.
Despite these advances in state preparation, readout, and process control, and despite substantial theoretical [11–15] and microscopic [16] analysis, the atomic origins and electronic structure of hBN SPEs are still poorly understood. To date, defects have been categorized phenomenologically. Initial reports identified Group I and Group II hBN SPEs based on the difference in their electron–phonon coupling [17]. More recent research demonstrating the existence of four species of hBN emitters spanning the visible spectrum with correlated microphotoluminescence (µPL), cathodoluminescence, and nanoscale strain mapping suggests that the observed defect species may be complexes of defects [16]. Further research has identified photochemical effects such as bleaching of hBN emitters under 405 nm excitation [18], or activation of emitters with electron-beam irradiation [19]. Polarimetric studies of hBN emitters under (i) 473 and 532 nm excitation and (ii) tunable strain have exhibited a misalignment in absorption and emission dipole moments, supporting the claim of a third excited bright state in hBN [20–22]. However, no studies to date have directly reported photochromism in hBN SPEs or explored cross correlations between electronic transitions in hBN µPL spectra.
Here, we use µPL spectroscopy to study the photostability of defects in few-layer hBN flakes in air when optically pumped with greater photon energy than the activation energy for the photochemical decomposition of hBN [23]. Further, we characterize the cross correlations between zero-phonon lines (ZPLs) that exhibit correlated changes in intensity with spectrally resolved two-color Hanbury Brown–Twiss (HBT) interferometry. While we have previously used two-color HBT interferometry with photostable emitters in hBN under vacuum to verify that the broad emission bands redshifted ${166}\;{\pm}\;{0.5}$ and ${326}\;{\pm}\;{0.5}\;{\rm meV}$ from the ZPL are optical one- and two-phonon sidebands (PSBs), respectively [24], the cross-correlated ZPLs studied in this work have separation energies 20 meV below the known optical phonon modes of hBN [25,26], and localized phonon resonances fail to explain the observed spectrum [14]. Combining irradiation time-dependent µPL spectroscopy with two-color HBT interferometry enables this new investigation of the electronic structure of hBN defects.
2. µPL SPECTROSCOPY
We investigate defects in three-to-five layer hBN using the same sample from a previous study [24]. µPL spectroscopic data were collected for each defect using a custom-built room-temperature confocal microscope. Eleven defects with ZPLs ranging from 2.15 to 2.9 eV were observed. The majority of defects measured were identified as Group I emitters with ${\sim}10\,{\rm meV}$ linewidth ZPLs and one-phonon doublets and two-PSBs 166 and 326 meV redshifted from the ZPL [17,24]. Out of these, seven defects photobleached consistent with previous µPL studies of defects in oxygen-rich environments [18]. Among the defects that photobleached, two ZPLs within a diffraction-limited confocal volume demonstrated a correlated enhancement and quenching in their µPL intensity under 405 nm excitation. Figure 1(a) shows the µPL spectra measured for that site. ${{\rm ZPL}_1}$ (2.28 eV) has one-phonon (${{\rm PSB}_{11}}$) and two-phonon (${{\rm PSB}_{12}}$) sidebands ${166}\;{\pm}\;{2}\;{\rm meV}$ and ${326}\;{\pm}\;{2}\;{\rm meV}$ redshifted with respect to ${{\rm ZPL}_1}$, and ${{\rm ZPL}_2}$ (2.14 eV) has one-phonon (${{\rm PSB}_{21}}$) and two-phonon (${{\rm PSB}_{22}}$) sidebands ${166}\;{\pm}\;{2}$ and ${326}\;{\pm}\;{2}\;{\rm meV}$ redshifted from ${{\rm ZPL}_2}$.
Fig. 1. Laser irradiation-dependent spectroscopy of a single defect pumped with a 405 nm laser. (a) The µPL spectra of ZPL transitions ${{\rm ZPL}_1}$ and ${{\rm ZPL}_2}$. ${{\rm ZPL}_1}$ has one-phonon (${{\rm PSB}_{11}}$) and two-phonon (${{\rm PSB}_{12}}$) sidebands, 166 meV and 326 meV, respectively, redshifted from ${{\rm ZPL}_1}$, and ${{\rm ZPL}_2}$ has one-phonon (${{\rm PSB}_{21}}$) and two-phonon (${{\rm PSB}_{22}}$) sidebands, 166 and 326 meV, respectively, redshifted from ${{\rm ZPL}_2}$. (b) The relative µPL intensity of ${{\rm ZPL}_1}$ and ${{\rm ZPL}_2}$ (normalized to the peak ${{\rm ZPL}_1}$ intensity) show enhancement and partial quenching within the first half-hour of irradiation, respectively. For the following hour, they remain stable, after which ${{\rm ZPL}_2}$ undergoes a second partial quenching. ${{\rm ZPL}_1}$ and ${{\rm ZPL}_2}$ remain stable for another hour prior to simultaneously quenching. (c) The energy difference between ${{\rm ZPL}_1}$ and ${{\rm ZPL}_2}$ remains constant until the second partial quenching in ${{\rm ZPL}_2}$ occurs, leading to a 10 meV spectral jump in the energy of ${{\rm ZPL}_2}$. Triangles indicate measurements made using filtered singles counts.
We then evaluate the intensity of ${{\rm ZPL}_1}$ and ${{\rm ZPL}_2}$ as a function of irradiation time [see Fig. 1(b)] using the ZPL peaks (dots) in our spectra and correlated filtered singles counts (triangles). In the first half-hour of irradiation, the intensity of ${{\rm ZPL}_1}$ and ${{\rm PSB}_{11}}$ increases while ${{\rm ZPL}_2}$, ${{\rm PSB}_{21}}$, and ${{\rm PSB}_{22}}$ decrease, as seen in Figs. 1(a) and 1(b). The intensities of ${{\rm ZPL}_1}$ and ${{\rm ZPL}_2}$ and their corresponding PSBs then equilibriate for an hour until ${{\rm ZPL}_2}$ is again partially quenched. Prior to this second quench in ${{\rm ZPL}_2}$, the energy difference between ${{\rm ZPL}_1}$ and ${{\rm ZPL}_2}$ was ${139}\;{\pm}\;{2}\;{\rm meV}$; afterwards, the energy difference decreases to ${129}\;{\pm}\;{2}\;{\rm meV}$ due to a 10 meV blueshift in ${{\rm ZPL}_2}$, as seen in Fig. 1(c). The blueshifted ${{\rm ZPL}_2}$ and the ${{\rm ZPL}_1}$ showed no substantial intensity fluctuations before both simultaneously quenched, as seen in Fig. 1(b) and Fig. S1 in Supplement 1. The ZPLs remained dark after a month, suggesting that they are either bleached or pumped into a very long-lived dark state.
Given the appearance of similar trends in the evolution of the ${{\rm ZPL}_1}$ and ${{\rm ZPL}_2}$ µPL spectra, it may be possible that the two transitions are correlated and are potentially excited-state transitions of the same defect or complex. However, photoluminescence spectroscopy by itself is insufficient to prove such a claim. Clear evidence of photochromism is essential to the understanding of the electronic structure and atomistic origin of hBN SPEs.
3. TWO-COLOR HBT INTERFEROMETRY
To test the hypothesis that ${{\rm ZPL}_1}$ and ${{\rm ZPL}_2}$ are excited-state transitions of the same defect, we employed two-color HBT interferometry after ${{\rm ZPL}_2}$ spectrally jumped 10 meV. Filters F1 and F2 [illustrated in Fig. 2(a)] selected the luminescence of ${{\rm ZPL}_1}$ and ${{\rm ZPL}_2}$ in each arm of the interferometer. Because of the presence of additional defects in the spectrum [shown in Fig. 2(a)], the background counts in each channel could not be attributed solely to a Poissonian background. Instead, the background is modeled as defect emission that is uncorrelated with ${{\rm ZPL}_1}$ and ${{\rm ZPL}_2}$ [27]. To determine the contribution of each emitter to the counts in each channel, fits of the line shapes corresponding to ${{\rm ZPL}_1}$ and ${{\rm ZPL}_2}$ and their respective PSBs were made, as shown in Fig. 2(a). Here the line shapes for ${{\rm ZPL}_1}$ and ${{\rm ZPL}_2}$ were fit using a phenomenological model composed of the sum of a Lorentzian, exponentially modified Gaussian, and Gaussian distributions centered at their observed ZPL, low and high PSB energies, respectively. While this model is agnostic with respect to the origins of the vibrational modes contributing to the PSB emission, it takes into account the observed low- and high-energy phonon modes, as described in Supplement 1. The line shape of the ${{\rm ZPL}_2}$ emission was assumed to be fixed irrespective of spectral jumps, and so only the amplitude and peak energy parameters for the ${{\rm ZPL}_2}$ fit were left free for the 1.8 h irradiation time, presented in Fig. 2(a). Additionally, the fit for ${{\rm ZPL}_2}$ at 0 h presumes negligible background overlapped with the line shape of ${{\rm ZPL}_2}$, so only the Group 1 emitter phenomenological model was used with no additional background terms. All peaks attributed to uncorrelated background emission were fit using the same line shape function or with Gaussian distributions.
Fig. 2. Two-color Hanbury Brown–Twiss interferometry after 1.8 h of laser irradiation. (a) Spectral line shapes for ${{\rm ZPL}_j}.$ Fits to the ${{\rm ZPL}_1}$ (red), ${{\rm ZPL}_2}$ (blue) line shapes and uncorrelated emitters (${{\rm ZPL}_j}$, $j = 3,4,5$) are used to estimate the probability (${{\rm z}_{\textit{ij}}}$) that a transition contributes to the µPL (black) collected in each filtered (F1, F2) interferometer arm ($i = 1,2$). The inset shows the best fit for ${{\rm ZPL}_2}$ at 0 h. The autocorrelations for (b) ${{\rm ZPL}_1}$, (c) ${{\rm ZPL}_2}$, and (d) the cross correlations between ${{\rm ZPL}_1}$ and ${{\rm ZPL}_2}$. The distance of $g_i^{(2)}(0)$ from the limit for single-photon-emission (indicated by the green horizontal line) exceeds 5 standard deviations, $\sigma$. Here the black dashed lines are the $5\sigma$ bounds for $g_i^{(2)}(\tau)$. (e) A proposed energy diagram for the suspected defect with excited states (red) and shelving state(s). The observed shelving in the autocorrelations may be explained by one (solid black) or two (solid and dashed black) energy levels.
The probability ${z_{\textit{ij}}}$ that the ${j^{{\rm th}}}$ line shape will contribute to the intensity in the ${i^{{\rm th}}}$ filtered arm of the HBT interferometer is the overlap integral of the filter transfer function with the total emission of the ${i^{{\rm th}}}$ line shape divided by the total emission in the filter band [27]. The autocorrelations for ${{\rm ZPL}_1}$ and ${{\rm ZPL}_2}$, as well as the cross correlation between ${{\rm ZPL}_1}$ and ${{\rm ZPL}_2}$, were calculated with this assumption in mind, and using the spectrum taken at 1.8 h of laser irradiation to calculate the probabilities, ${z_{\textit{ij}}}$. The bunching observed in the coincidence counts for $|\tau | \gt 0$ for both transitions indicates a shelving state is present in each ZPL, and we interpret these data assuming a three-level model. The autocorrelations for ${{\rm ZPL}_1}$ and ${{\rm ZPL}_2}$ and the cross correlations between each are given by
(1)$$\begin{split}g_1^{(2)}(\tau) & = (z_{11}^2 + z_{13}^2 + z_{14}^2 + z_{15}^2)g_{\rho 1}^{(2)}(\tau) +2({z_{11}}{z_{13}} + {z_{11}}{z_{15}} \\& \quad+ {z_{11}}{z_{14}} + {z_{13}}{z_{15}} + {z_{13}}{z_{14}} + {z_{15}}{z_{14}}),\end{split}$$
(2)$$g_2^{(2)}(\tau) = (z_{21}^2 + z_{22}^2)g_{\rho 2}^{(2)}(\tau) + {z_{21}}{z_{22}}(g_{21}^{(2)}(\tau) + g_{12}^{(2)}(\tau)),$$
(3)$$\!\!\!g_{21}^{(2)}(\tau) = {z_{11}}{z_{22}}g_{\rho 21}^{(2)}(\tau) + {z_{11}}{z_{21}}g_1^{(2)}(\tau) + {z_{13}} + {z_{14}} + {z_{15}},\!$$
(5)$$g_{\rho i}^{(2)}(\tau) = 1 - \rho _i^2[(1 + {a_i}){e^{- |x - {x_{\textit{oi}}}|\tau /{\tau _{1i}}}} - {a_i}{e^{- |x - {x_{\textit{oi}}}|\tau /{\tau _{2i}}}}],$$
where $g_1^{(2)}(\tau)$ and $g_2^{(2)}(\tau)$ are the autocorrelation functions for ${{\rm ZPL}_1}$ and ${{\rm ZPL}_2}$, $g_{21}^{(2)}(\tau)$ and $g_{12}^{(2)}(\tau)$ are the cross correlations between the ZPLs, and $g_{\rho i}^{(2)}(\tau)$ is the three-level model for each correlation function with Poisson background contribution ${\rho _i}$, shelving parameter ${a_i}$, excited state lifetime ${\tau _{1i}}$, and shelving state lifetime ${\tau _{2i}}$. The fitted parameters for the auto- and cross correlations are provided in Table 1, along with the probabilities ${z_{\textit{ij}}}$ in Table 2. A full derivation of the auto- and cross correlations can be found in Supplement 1. The threshold for single-photon emission for auto- and cross correlations is given by $g_{{\rm limit},i}^{(2)} = \frac{1}{2}(1 + \rho _i^2{a_i})$ [28].
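For reference, if ${F_i}(E)$ denotes the measured transmission of the ${i^{{\rm th}}}$ filter and ${S_j}(E)$ the fitted line shape of the ${j^{{\rm th}}}$ emitter (notation introduced here only to restate the verbal definition given above), the probabilities ${z_{\textit{ij}}}$ entering these expressions can be written schematically as
$${z_{\textit{ij}}} = \frac{\int {F_i}(E)\,{S_j}(E)\,{\rm d}E}{\int {F_i}(E)\sum\nolimits_k {S_k}(E)\,{\rm d}E}.$$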
Table 1. Parameter Values for the Auto- and Cross-Correlation Functions
Table 2. Probabilities ${z_{\textit{ij}}}$ That the ${j^{{\rm th}}}$ Line Shape Will Contribute to the Counts in the ${{\rm i}^{{\rm th}}}$ Filtered Interferometer Arm
Due to emission from background emitters, the cross terms for $g_1^{(2)}(\tau)$, $g_2^{(2)}(\tau)$, $g_{21}^{(2)}(\tau)$, and $g_{12}^{(2)}(\tau)$ degrade the single-photon purity of the emitter while the like terms dampen the expected shelving amplitude. The most significant background terms come from the overlap of ${{\rm ZPL}_1}$'s one-phonon PSB and ${{\rm ZPL}_2}$, where ${z_{21}} = 30\%$ of the ${{\rm ZPL}_1}$ PSB contributes to the background. But as we will see, the corresponding cross terms are insufficient to degrade the purity of ${{\rm ZPL}_2}$ beyond the limit for single-photon emission. Figures 2(b)–2(d) show the auto- and cross correlations as well as best-fit and corresponding 5 standard deviation ($\sigma$) confidence intervals for $g_1^{(2)}(\tau)$, $g_2^{(2)}(\tau)$ and $g_{21}^{(2)}(\tau)$, respectively. It is clear that the auto- and cross correlations confirm single-photon emission and provide evidence of photochromism between ${{\rm ZPL}_1}$ and ${{\rm ZPL}_2}$, as the fit for $g_i^{(2)}(0)$ is at least 5 standard deviations from the limit for single-photon emission. The overlap of ${{\rm ZPL}_1}$ and ${{\rm ZPL}_2}$ leads to additional time-dependent cross terms in (i) the autocorrelation for ${{\rm ZPL}_2}$ ($g_{12}^{(2)}(\tau)$, $g_{21}^{(2)}(\tau)$), and (ii) the cross correlation between ${{\rm ZPL}_1}$ and ${{\rm ZPL}_2}$ ($g_1^{(2)}(\tau)$), but their contribution to the antibunching and anticorrelations are found to be ${\sim}10\%$. Furthermore, all fits for $g_1^{(2)}(\tau)$, $g_2^{(2)}(\tau)$, and $g_{21}^{(2)}(\tau)$ include cross terms, and thus the background-free cross correlation between ${{\rm ZPL}_1}$ and ${{\rm ZPL}_2}$ and the respective antibunching for each ZPL would increase the distance $[g_{{\rm limit},i}^{(2)} - g_i^{(2)}(0)]$.
While the defect photobleached prior to collecting $g_{12}^{(2)}(\tau)$, the large cross correlation of $g_{21}^{(2)}(\tau)$ and single-photon purity of $g_2^{(2)}(\tau)$ remained consistent with $[g_{{\rm limit},i}^{(2)} - g_i^{(2)}(0)] \gt 5\sigma$ under the assumptions that $g_{\rho 12}^{(2)}(\tau) \approx g_{\rho 2}^{(2)}(\tau)$, while leaving all parameters for $g_{\rho 12}^{(2)}(\tau)$ free in the fits for $g_2^{(2)}(\tau)$. Generally, the auto- and cross correlations maintain the single-photon purity, whether we assume that the spectral shape of ${{\rm ZPL}_2}$ is the same after its spectral jump or whether we allow it to be modified. In this case, we left all spectral parameters free to estimate the line shape for ${{\rm ZPL}_2}$. Under these different cases, the coefficient of determination varied by 1% or less, supporting the claim that these assumptions have negligible effect on the single-photon purity of each ZPL or the magnitude of anticorrelations between ${{\rm ZPL}_1}$ and ${{\rm ZPL}_2}$.
Furthermore, we repeated the two-color HBT interferometry on different sets of additional pairs of ZPLs where each filter only passes light from a single ZPL (i.e., no background emission from other spectrally distinct defects overlaps with the filter linewidth). While these additional emitters do not meet the threshold for single-photon emission, there is still a clear cross correlation between the ZPLs, as shown in Supplement 1. However, we only observed a minority of emitters exhibiting cross-correlated ZPLs, which is a plausible consequence of a broad family of different defects being generated during the hBN annealing process. This is supported by the observation that the emitters described in Supplement 1 do not exhibit any PSBs, in contrast with the emitter discussed in this article. We would expect all emitters to have similar line shapes if they are from the same defect.
4. ANALYSIS AND CONCLUSIONS
The auto- and cross-correlation functions and spectra for each emitter presented in Figs. 1 and 2 provide experimental evidence for multiple excited states within a single hBN defect or complex of defects as proposed in Fig. 2(e). This model is consistent with previous polarimetric studies that described two excited states [21,22]. The cross correlations observed here would be expected, for example, from a two-excited state model when optically addressable charge states are present. The shelving observed may be explained most generally by two shelving states, and previous reports have suggested the existence of two shelving states determined through photophysics studies [17,29]. However, the power dependence of shelving-state dynamics may indicate that there is only one shelving state [30]. The results shown in Fig. 2 do not conclusively address whether there is more than one shelving state present, but they provide clear evidence of anticorrelations between the two ZPLs.
Based on the cross correlations between distinct ZPLs reported in Fig. 2(d) and Supplement 1, it appears that ${{\rm ZPL}_1}$ and ${{\rm ZPL}_2}$ are associated with a single defect or a complex of defects so that an excited state can emit light into either ${{\rm ZPL}_1}$ or ${{\rm ZPL}_2}$. Alternatively, cross correlations like those reported here could plausibly result from dipole–dipole interactions between two closely spaced, near-resonant emitters. We suggest that this alternative hypothesis is less plausible but provide a simple classical model of such a coupled defect system in Supplement 1. While we cannot reject the case of strongly interacting emitters, the preponderance of the evidence leads us to conclude that the observed cross -correlations are generated by the electronic structure of either a single-point defect or a defect complex.
In summary, we have observed correlated laser irradiation-dependent changes in the µPL intensity for two ZPLs that reach equilibrium before simultaneously quenching in ambient laboratory conditions. These ZPLs were confirmed to be antibunched and anticorrelated with one another. The cross correlation between these two emitters indicates photochromism between the ${{\rm ZPL}_1}$ and ${{\rm ZPL}_2}$ transitions, consistent with a similar study of the charged and neutral nitrogen vacancy centers in diamond [30]. These results are therefore evidence that ${{\rm ZPL}_1}$ and ${{\rm ZPL}_2}$ are excited-state transitions for the same defect or complex. This evidence of photochromism in hBN defects is an essential step toward improved understanding of the atomic origins of these defects.
The sample studied in this work was a multilayer hBN flake (3–5 layers in thickness) from Graphene Supermarket. The flake was subsequently annealed in a First Nano rapid thermal processor at a temperature of 850°C in 1 Torr ${{N}_2}$ with a temperature increase and decrease of 5°C/min. For all experiments, a room-temperature confocal microscope with a continuous-wave 405 nm diode-laser excitation with $2\,\,\unicode{x00B5}\rm W$ incident on the sample and a 0.9 NA objective was used to collect µPL from defects in our sample. Laser-edge filters and dichroic mirrors were used to pass only µPL to our spectrometer or interferometer. For laser irradiation-dependent spectroscopy, the µPL was passed to a diffraction grating spectrometer with 2 meV resolution. For singles counts and two-color HBT interferometry, the µPL was passed to a beam splitter with filters F1 (Newport 10LWF-500-B and 10SWF-550-B) and F2 (Semrock FF01-575/5-25) prior to the detectors corresponding to arm one and two of the HBT interferometer. The detectors were fiber-coupled using multimode fiber, and spatial mode filtering was adjusted using varying fiber core diameters (50–100 µm). Coincidence counts were collected using a HydraHarp 400, and the single-photon detectors used were Perkin Elmer SPCM-AQRs.
U.S. Department of Energy, Office of Basic Energy Sciences (DE-AC05-00OR22725); National Science Foundation (DMR-1747426).
This research was sponsored by the U. S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division. Initial experimental planning and design was supported by the Laboratory-Directed Research and Development Program of Oak Ridge National Laboratory, managed by UT-Battelle, LLC for the U.S. Department of Energy. M.A.F. gratefully acknowledges student support by the Department of Defense (DoD) through the National Defense Science & Engineering Graduate Fellowship (NDSEG) and NSF award DMR-1747426. C.E.M gratefully acknowledges postdoctoral research support from the Intelligence Community Postdoctoral Research Fellowship Program at the Oak Ridge National Laboratory, administered by Oak Ridge Institute for Science and Education through an interagency agreement between the U.S. Department of Energy and the Office of the Director of National Intelligence. Rapid thermal processing and spectroscopy experiments were carried out at the Center for Nanophase Materials Sciences (CNMS), which is sponsored at ORNL by the Scientific User Facilities Division, Office of Basic Energy Sciences, U.S. Department of Energy. The authors thank Harrison Prosper for discussions regarding uncertainty estimation for the auto- and cross-correlation fits.
See Supplement 1 for supporting content.
1. I. Aharonovich, D. Englund, and M. Toth, "Solid-state single-photon emitters," Nat. Photonics 10, 631–641 (2016). [CrossRef]
2. D. D. Awschalom, L. C. Bassett, A. S. Dzurak, E. L. Hu, and J. R. Petta, "Quantum spintronics: engineering and manipulating atom-like spins in semiconductors," Science 339, 1174–1179 (2013). [CrossRef]
3. D. D. Awschalom, R. Hanson, J. Wrachtrup, and B. B. Zhou, "Quantum technologies with optically interfaced solid-state spins," Nat. Photonics 12, 516–527 (2018). [CrossRef]
4. T. T. Tran, K. Bray, M. J. Ford, M. Toth, and I. Aharonovich, "Quantum emission from hexagonal boron nitride monolayers," Nat. Nanotechnol. 11, 37–41 (2016). [CrossRef]
5. A. Gottscholl, M. Kianinia, V. Soltamov, S. Orlinskii, G. Mamin, C. Bradac, C. Kasper, K. Krambrock, A. Sperlich, M. Toth, I. Aharonovich, and V. Dyakonov, "Initialization and read-out of intrinsic spin defects in a van der Waals crystal at room temperature," Nat. Mater. 19, 540–545 (2020). [CrossRef]
6. M. Atatüre, D. Englund, N. Vamivakas, S.-Y. Lee, and J. Wrachtrup, "Material platforms for spin-based photonic quantum technologies," Nat. Rev. Mater. 3, 38–51 (2018). [CrossRef]
7. P. Khatri, A. J. Ramsay, R. N. E. Malein, H. M. Chong, and I. J. Luxmoore, "Optical gating of photoluminescence from color centers in hexagonal boron nitride," Nano Lett. 20, 4256–4263 (2020). [CrossRef]
8. K. Konthasinghe, C. Chakraborty, N. Mathur, L. Qiu, A. Mukherjee, G. D. Fuchs, and A. N. Vamivakas, "Rabi oscillations and resonance fluorescence from a single hexagonal boron nitride quantum emitter," Optica 6, 542–548 (2019). [CrossRef]
9. N. V. Proscia, Z. Shotan, H. Jayakumar, P. Reddy, C. Cohen, M. Dollar, A. Alkauskas, M. Doherty, C. A. Meriles, and V. M. Menon, "Near-deterministic activation of room-temperature quantum emitters in hexagonal boron nitride," Optica 5, 1128–1134 (2018). [CrossRef]
10. G. Grosso, H. Moon, B. Lienhard, S. Ali, D. K. Efetov, M. M. Furchi, P. Jarillo-Herrero, M. J. Ford, I. Aharonovich, and D. Englund, "Tunable and high-purity room temperature single-photon emission from atomic defects in hexagonal boron nitride," Nat. Commun. 8, 1–8 (2017). [CrossRef]
11. S. A. Tawfik, S. Ali, M. Fronzi, M. Kianinia, T. T. Tran, C. Stampfl, I. Aharonovich, M. Toth, and M. J. Ford, "First-principles investigation of quantum emission from HBN defects," Nanoscale 9, 13575–13582 (2017). [CrossRef]
12. M. Abdi, J.-P. Chou, A. Gali, and M. B. Plenio, "Color centers in hexagonal boron nitride monolayers: a group theory and ab initio analysis," ACS Photon. 5, 1967–1976 (2018). [CrossRef]
13. A. Sajid, J. R. Reimers, and M. J. Ford, "Defect states in hexagonal boron nitride: assignments of observed properties and prediction of properties relevant to quantum computation," Phys. Rev. B 97, 064101 (2018). [CrossRef]
14. G. Grosso, H. Moon, C. J. Ciccarino, J. Flick, N. Mendelson, L. Mennel, M. Toth, I. Aharonovich, P. Narang, and D. R. Englund, "Low-temperature electron-phonon interaction of quantum emitters in hexagonal boron nitride," ACS Photon. 7, 1410–1417 (2020). [CrossRef]
15. V. Ivády, G. Barcza, G. Thiering, S. Li, H. Hamdi, J.-P. Chou, Ö. Legeza, and A. Gali, "Ab initio theory of the negatively charged boron vacancy qubit in hexagonal boron nitride," npj Comput. Mater. 6, 41 (2020). [CrossRef]
16. F. Hayee, L. Yu, J. L. Zhang, C. J. Ciccarino, M. Nguyen, A. F. Marshall, I. Aharonovich, J. Vučković, P. Narang, T. F. Heinz, and J. A. Dionne, "Revealing multiple classes of stable quantum emitters in hexagonal boron nitride with correlated optical and electron microscopy," Nat. Mater. 19, 534–539 (2020). [CrossRef]
17. T. T. Tran, C. Elbadawi, D. Totonjian, C. J. Lobo, G. Grosso, H. Moon, D. R. Englund, M. J. Ford, I. Aharonovich, and M. Toth, "Robust multicolor single photon emission from point defects in hexagonal boron nitride," ACS Nano 10, 7331–7338 (2016). [CrossRef]
18. Z. Shotan, H. Jayakumar, C. R. Considine, M. Mackoit, H. Fedder, J. Wrachtrup, A. Alkauskas, M. W. Doherty, V. M. Menon, and C. A. Meriles, "Photoinduced modification of single-photon emitters in hexagonal boron nitride," ACS Photon. 3, 2490–2496 (2016). [CrossRef]
19. H. Ngoc My Duong, M. A. P. Nguyen, M. Kianinia, T. Ohshima, H. Abe, K. Watanabe, T. Taniguchi, J. H. Edgar, I. Aharonovich, and M. Toth, "Effects of high-energy electron irradiation on quantum emitters in hexagonal boron nitride," ACS Appl. Mater. Interfaces 10, 24886–24891 (2018). [CrossRef]
20. N. R. Jungwirth, B. Calderon, Y. Ji, M. G. Spencer, M. E. Flatte, and G. D. Fuchs, "Temperature dependence of wavelength selectable zero-phonon emission from single defects in hexagonal boron nitride," Nano Lett. 16, 6052–6057 (2016). [CrossRef]
21. N. R. Jungwirth and G. D. Fuchs, "Optical absorption and emission mechanisms of single defects in hexagonal boron nitride," Phys. Rev. Lett. 119, 057401 (2017). [CrossRef]
22. N. Mendelson, M. Doherty, M. Toth, I. Aharonovich, and T. T. Tran, "Strain-induced modification of the optical characteristics of quantum emitters in hexagonal boron nitride," Adv. Mater. 32, 1908316 (2020). [CrossRef]
23. A. V. Kanaev, J.-P. Petitet, L. Museur, V. Marine, V. L. Solozhenko, and V. Zafiropulos, "Femtosecond and ultraviolet laser irradiation of graphitelike hexagonal boron nitride," J. Appl. Phys. 96, 4483 (2004). [CrossRef]
24. M. A. Feldman, A. Puretzky, L. Lindsay, E. Tucker, D. P. Briggs, P. G. Evans, R. F. Haglund, and B. J. Lawrie, "Phonon-induced multicolor correlations in hbn single-photon emitters," Phys. Rev. B 99, 020101 (2019). [CrossRef]
25. T. Vuong, G. Cassabois, P. Valvin, A. Ouerghi, Y. Chassagneux, C. Voisin, and B. Gil, "Phonon-photon mapping in a color center in hexagonal boron nitride," Phys. Rev. Lett. 117, 097402 (2016). [CrossRef]
26. P. Khatri, I. Luxmoore, and A. Ramsay, "Phonon sidebands of color centers in hexagonal boron nitride," Phys. Rev. B 100, 125305 (2019). [CrossRef]
27. A. Bommer and C. Becher, "New insights into nonclassical light emission from defects in multi-layer hexagonal boron nitride," Nanophotonics 8, 2041–2048 (2019). [CrossRef]
28. A. L. Exarhos, D. A. Hopper, R. R. Grote, A. Alkauskas, and L. C. Bassett, "Optical signatures of quantum emitters in suspended hexagonal boron nitride," ACS Nano 11, 3328–3336 (2017). [CrossRef]
29. M. K. Boll, I. P. Radko, A. Huck, and U. L. Andersen, "Photophysics of quantum emitters in hexagonal boron-nitride nano-flakes," Opt. Express 28, 7475–7487 (2020). [CrossRef]
30. M. Berthel, O. Mollet, G. Dantelle, T. Gacoin, S. Huant, and A. Drezet, "Photophysics of single nitrogen-vacancy centers in diamond nanocrystals," Phys. Rev. B 91, 035308 (2015). [CrossRef]
Andrea D'Amico,1,* Elliot London,1 Bertrand Le Guyader,2 Florian Frank,2 Esther Le Rouzic,2 Erwan Pincemin,2 Nicolas Brochier,2 and Vittorio Curri1
1Politecnico di Torino, Corso Duca degli Abruzzi, 24, Torino, Italy
2Orange Labs, 22300 Lannion, France
*Corresponding author: [email protected]
Andrea D'Amico https://orcid.org/0000-0003-0828-6157
Elliot London https://orcid.org/0000-0002-7581-4100
Vittorio Curri https://orcid.org/0000-0003-0691-0067
Andrea D'Amico, Elliot London, Bertrand Le Guyader, Florian Frank, Esther Le Rouzic, Erwan Pincemin, Nicolas Brochier, and Vittorio Curri, "Experimental validation of GNPy in a multi-vendor flex-grid flex-rate WDM optical transport scenario," J. Opt. Commun. Netw. 14, 79-88 (2022)
Original Manuscript: September 1, 2021
We experimentally test the accuracy of a quality of transmission estimator (QoT-E) within a laboratory flex-grid flex-rate framework, considering eight multi-vendor transceivers (TRXs) with symbol rates ranging from 33 to 69 Gbaud, and variable constellations [quadrature phase shift keying, 8-quadrature amplitude modulation (QAM), and 16-QAM probabilistic constellation shaping], for data rates of 100 Gbits/s up to 300 Gbits/s, and a flex-grid wavelength division multiplexed (WDM) spectrum, with channel spacings of 50 and 75 GHz. As a QoT-E, we utilize an enhanced implementation of the open-source GNPy project. We demonstrate that this QoT-E provides a high level of accuracy in generalized signal-to-noise ratio (GSNR) computation, with an average error value not exceeding 0.5 dB, for the scenario under investigation. These values are computed with respect to the measured bit-error ratio converted to the GSNR using the TRX model obtained via back-to-back characterization. These results demonstrate that the optimal management of flex-grid flex-rate WDM optical transport arises by managing power spectral densities instead of power per channel, as in traditional fixed-grid systems.
© 2022 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
1. INTRODUCTION
The continually increasing demand for data transport [1], specifically from transparent wavelength division multiplexed (WDM) optical networks [2], has spurred research into a wide variety of forward-thinking solutions to address this problem using existing infrastructures. To surpass existing capacity limits, new standards and approaches have emerged, ranging from the component and physical layer to top-level optical data planning control and management.
WDM optical transport based on dual-polarization coherent optical technologies is fast evolving; transparent fiber transmission with rigid spectral implementations based on fixed WDM grids and fixed-rate transceivers (TRXs) is moving towards flex-grid [3] WDM implementations and flex-rate TRXs. Due to innovations in silicon photonic technologies [4] and photonic integrated circuits, the TRX symbol rate, ${R_s}$, now exceeds 60 Gbaud in commercial products, and transmission beyond 100 Gbaud has been demonstrated in laboratory prototypes [5]. Moreover, multi-subcarrier solutions are proposed and commercially available as an alternative method of further increasing transmission capacity [6]. Through the use of high cardinality signal constellations such as 64-quadrature amplitude modulation (QAM), optical interfaces up to 800 Gbits/s are now possible, with transmission beyond 1 Tbit/s expected to become a commercial reality in the near future. Furthermore, hybrid modulation formats [7] and/or probabilistic constellation shaping (PCS) [8] allow data rates to be tuned seamlessly, enabling optimal quality of transmission (QoT) exploitation upon the available transparent lightpaths (LPs). Besides the traditional closed approach, disaggregated and possibly open WDM solutions are emerging with the introduction of white boxes and pluggable TRXs [9]. The result of these advances is that operators are now envisioning the evolution towards progressive deployment of disaggregated solutions [10,11].
This scenario provides the foundation for a software-defined networking (SDN) implementation that encompasses the WDM transport layer [12] by implementing open-control network element (NE) interfaces for an optical control plane that operates in a multi-vendor scenario. Several consortia are actively working to further develop and standardize these advances to define open protocols, data structures, and application programming interfaces (APIs). In particular, the Telecom Infra Project (TIP) develops open hardware and software solutions for open network infrastructures, and the OpenROADM consortium [13] works to standardize re-configurable optical add/drop multiplexer (ROADM) models for networking scenarios where ROADM-to-ROADM amplified links are independent and possibly open WDM optical line systems (OLSs). Recently, the implementation agreement (IA) for fixed-rate 400G-ZR optical interfaces has been released by the Optical Internetworking Forum (OIF) [14]. A corresponding IA for flex-rate 400G-ZR+ implementations is currently under development, along with 800 Gbit/s solutions. Disaggregated transponders providing these TRX configurations are commercially available, along with open solutions that have been proposed by the TIP [15]—these technologies will steer the progression towards multi-vendor disaggregated and open optical infrastructures, which will potentially be shared by multiple operators and have a WDM optical transport that will be a fully virtualized network function.
To pursue this objective, the fundamental request by network operators is that the QoT of a TRX over a transparent LP may be computed, for a provided transport network model and status. When working with dual-polarization coherent optical technologies, it has been extensively demonstrated that propagation over a transparent LP is well modeled by an additive white and Gaussian noise (AWGN) nonlinear channel, with a corresponding QoT fully quantified by the generalized optical signal-to-noise ratio (OSNR) [16,17]. The generalized SNR (GSNR) includes the two Gaussian disturbances that impair the received constellation: the accumulated amplified spontaneous emission (ASE) noise from the amplifiers and the nonlinear interference (NLI) introduced by nonlinear cross talk. From TRX back-to-back characterization, the GSNR can be related to the pre-forward error correction (FEC) bit-error ratio (BER) [18,19]. By doing so, the BER threshold requested by the FEC technology and modulation format corresponds to a unique GSNR threshold. Combined with the spectral occupancy given by the channel symbol rates and roll-off values, the fundamental QoT and bandwidth requests from the TRX for a given LP can be quantified [20]. Throughout this work, we express all SNRs by considering the noises integrated over each distinct channel bandwidth, instead of a fixed 0.1 nm bandwidth. This perspective is considered to better observe the spectral efficiency in terms of a GSNR degradation, as this quantity does not depend upon the channel symbol rates.
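To make this bandwidth convention concrete, the minimal Python sketch below converts an OSNR referenced to the conventional 0.1 nm resolution bandwidth (approximately 12.5 GHz at 1550 nm) into an OSNR referenced to the channel symbol-rate bandwidth. The function name and the numerical values are purely illustrative assumptions and are not taken from the GNPy code base.

```python
import math

def osnr_channel_bandwidth(osnr_01nm_db: float, symbol_rate_ghz: float,
                           ref_bw_ghz: float = 12.5) -> float:
    """Rescale an OSNR referenced to 0.1 nm (~12.5 GHz at 1550 nm) to an OSNR
    referenced to the channel symbol-rate bandwidth (illustrative sketch)."""
    return osnr_01nm_db + 10 * math.log10(ref_bw_ghz / symbol_rate_ghz)

# Hypothetical example: a 69 Gbaud channel with 20 dB OSNR over 0.1 nm
print(round(osnr_channel_bandwidth(20.0, 69.0), 2))  # ~12.58 dB over the 69 GHz bandwidth
```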
Virtualization of the WDM transport layer is enabled by a QoT estimator (QoT-E) that evaluates the GSNR on a selected LP, given the network topology, NE models, and network loading status [21]. The TIP consortium developed an open-source library named GNPy [22] that follows this approach for design, planning, control, and management of disaggregated optical networks. GNPy has been extensively tested in multi-vendor scenarios in both green-field and brown-field commercial infrastructures. Thus far, experimental results have demonstrated that GNPy is capable of predicting GSNR to a high degree of accuracy in 50 GHz fixed-grid scenarios, with fixed symbol rates and purely dual-polarization quadrature phase shift keying (QPSK), 8-QAM, and 16-QAM constellations, for 100, 150, and 200 Gbit/s data rates [20,23,24]. Concerning flex-grid flex-rate scenarios, experimental verification for GNPy and other QoT-Es has in general been limited—we highlight that a preliminary test of GNPy for flex-grid flex-rate transmission has been presented in [25].
In this work, we extend the analysis performed in [25] by considering eight multi-vendor TRXs, with symbol rates ranging from 33 to 69 Gbaud, TRX constellations of QPSK, 8-QAM, and probabilistic constellation shaping (PCS)-16-QAM, for data rates ranging from 100 Gbits/s up to 300 Gbits/s, along with flex-grid WDM configurations with channel spacings of 50 and 75 GHz. This experimental transmission has been performed upon a bandwidth of 3 THz in the C-band, with the remaining spectrum aside from these TRXs fully loaded with standard 100 Gbps channels. This experiment has been carried out through a point-to-point 20-span OLS located at Orange Labs, Lannion, France, with the laboratory setup shown in Fig. 1.
Fig. 1. Image of the experimental setup within Orange Labs in Lannion, France.
We present the experimental results along with those corresponding to the GNPy model, including estimations for the interval of confidence, showing that the QoT prediction given by GNPy has an average error value that does not exceed 0.5 dB for every considered TRX. Besides confirming the reliability of GNPy as a vendor-neutral software model for WDM optical transport, we also show that the optimal management of flex-grid flex-rate OLSs is enabled by managing power spectral densities (PSDs), ${P_{{\textit{ch}},i}}/{R_{\textit{s,i}}}$, instead of power per channel, ${P_{{\textit{ch}},i}}$, as is used in fixed-grid management. This approach has been previously suggested and investigated for flex-grid transmission scenarios in [26,27].
The rest of this work is divided as follows: In Section 2, we explain the features and topology of disaggregated networks, such as the one investigated within this work. In Section 3, we give a detailed description of the experimental setup used for flex-grid flex-rate transmission and the measurements that have been performed. In Section 4, we describe the model implementations used within this work, with both the ${{\rm SNR}_{{\rm NL}}}$ and OSNR contributions to the GSNR estimated using an enhanced implementation of the open-source GNPy library. We then compare our predictions to the experimental results, initiating a discussion on key differences between the investigated use case and fixed-grid networks that must be taken into account. Finally, in Section 5, we provide a summary of the paper.
2. NETWORK ARCHITECTURE
Within this work, we consider a partially disaggregated optical network framework, as depicted in Fig. 2, with top-level management performed by an optical network controller (ONC), where the amplified lines connecting ROADMs may be independent WDM OLSs [10,28,29]. In this scenario, the ROADMs are disaggregated [13,30], meaning that each degree unit, including wavelength selective switches (WSSs), is the ingress/egress of an independent OLS. Each OLS is managed by an independent optical line controller (OLC) that is responsible for setting optical power levels by controlling the in-line, pre-amp, and booster amplifiers. The ONC is in charge of deploying LPs, meaning that once the available wavelength and route has been defined by the routing wavelength and spectrum assignment algorithm (RWSA), the LP QoT must be computed to properly control the TRX. By comparing the available GSNR to the TRX model (the GSNR threshold from back-to-back characterization), the ONC defines feasible symbol rates and modulation formats (and PCS settings in the case of flex-rate TRXs), finding the overall LP feasibility [13,31–33]. As the LPs are modeled as AWGN nonlinear channels, the QoTs of the LPs are the accumulated GSNRs from the source, $s$, to the destination, $d$, for each transparent LP crossing multiple OLSs. According to a disaggregated approach to QoT computation [21], the LP GSNR for any given channel under test (CUT) is given by
(1)$${{\rm GSNR}_{\textit{sd}}} = \frac{1}{{\sum\limits_i {\rm GSNR}_i^{- 1}}},$$
where ${{\rm GSNR}_i}$ is the GSNR for the $i$th crossed OLS:
(2)$${{\rm GSNR}_i} = \frac{{{P_{{\rm CUT},i}}}}{{{P_{{\rm ASE},i}} + {P_{{\rm NLI},i}}}} = \frac{1}{{{\rm OSNR}_i^{- 1} + {\rm SNR}_{{\rm NL},i}^{- 1}}},$$
where ${P_{{\rm CUT},i}}$ is the CUT power at the $i$th OLS egress, and ${P_{{\rm ASE},i}}$ and ${P_{{\rm NLI},i}}$ are the accumulated ASE and NLI noises on the $i$th OLS impairing the receiver decision variable, respectively. The effects of the two disturbances are quantified by the two GSNR components: the OSNR, ${{\rm OSNR}_i} = {P_{{\rm CUT},i}}/{P_{{\rm ASE},i}}$, and the nonlinear SNR, ${{\rm SNR}_{{\rm NL},i}} = {P_{{\rm CUT},i}}/{P_{{\rm NLI},i}}$. As each OLS is controlled independently, to compute the overall ${{\rm GSNR}_{\textit{sd}}}$, it is crucial to be able to accurately evaluate each ${{\rm GSNR}_i}$.
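As an illustration of how Eqs. (1) and (2) compose, the short Python sketch below combines hypothetical per-OLS OSNR and ${{\rm SNR}_{{\rm NL}}}$ values into per-OLS GSNRs and then into an end-to-end lightpath GSNR; the helper functions and numbers are illustrative only and are not part of the GNPy API.

```python
import math

def db_to_lin(x_db):
    return 10 ** (x_db / 10)

def lin_to_db(x_lin):
    return 10 * math.log10(x_lin)

def gsnr_ols(osnr_db, snr_nl_db):
    """Eq. (2): combine the ASE and NLI contributions of a single OLS."""
    return lin_to_db(1.0 / (1.0 / db_to_lin(osnr_db) + 1.0 / db_to_lin(snr_nl_db)))

def gsnr_lightpath(gsnr_ols_db):
    """Eq. (1): accumulate the per-OLS GSNRs along a transparent lightpath."""
    return lin_to_db(1.0 / sum(1.0 / db_to_lin(g) for g in gsnr_ols_db))

# Hypothetical lightpath crossing two OLSs
g1 = gsnr_ols(osnr_db=24.0, snr_nl_db=22.0)   # ~19.9 dB
g2 = gsnr_ols(osnr_db=23.0, snr_nl_db=21.0)   # ~18.9 dB
print(round(gsnr_lightpath([g1, g2]), 2))     # ~16.34 dB end to end
```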
Fig. 2. Example of the architecture of a partially disaggregated optical network. In this scenario, LPs are routed through open ROADMs located at OLS terminations between a given source and destination.
A QoT-E must reliably evaluate both GSNR components, meaning that an accurate model for the losses of each WDM wavelength, as well as the amplifier gain and ASE noise contributions are required in order to properly evaluate the ${{\rm SNR}_{{\rm NL}}}$ and OSNR components, respectively. Ideally, for ${{\rm SNR}_{{\rm NL}}}$ computation, the QoT-E evaluates the NLI noise using a disaggregated approach [34], and considers the interactions between the NLI generation and the spatial and frequency dependency of the total fiber loss profile given by the intrinsic fiber loss, which can vary significantly along frequency [35,36], and the effects of stimulated Raman scattering (SRS) [37,38].
A vendor-neutral QoT-E operated by an ONC must be able to communicate with the OLC that provides the line description, i.e., the information regarding the fiber spans (lengths, chromatic dispersion values, nonlinearity coefficients, losses) and the amplifiers (gains, noise figures, tilts). Besides this line description, each OLC supplies information regarding the spectral loading of the $i$th OLS, and the corresponding ${{\rm GSNR}_i}$. Given a ${{\rm GSNR}_{\textit{sd}}}$ and the TRX GSNR threshold, an optimized control of the modulation format (or shaping in the case of PCS) can be implemented—within this framework, minimizing the system margin fundamentally depends upon the accuracy of the QoT-E.
3. EXPERIMENTAL SETUP
Figure 3 includes a detailed description of the experimental setup assembled at Orange Labs that has been used to measure various QoT transmission metrics in a flex-grid flex-rate scenario. The OLS under consideration is composed of $20 \times 80\;{\rm km} $ spans of ITU.T G.652 fiber with an average loss of 16.6 dB, dispersion values of ${16.7}\;{\rm ps/}({\rm nm} \cdot {\rm km})$, and effective areas of $80\,\,\unicode{x00B5}{\rm m}^2$, both evaluated at a reference wavelength of 1550 nm. After each fiber span, a JDSU WRA 200 erbium doped fiber amplifier (EDFA) is placed and operated in a constant gain mode to fully recover the fiber loss. To compensate for SRS effects, each EDFA gain tilt has been set so that 1 dB of tilt over the spectrum bandwidth is recovered for each span. After the 6th and 13th spans, two dynamic gain equalizers (DGEs) are used to equalize the spectrum, compensating for ripples due to the amplification process and for residual tilt caused by the SRS.
Fig. 3. Flow diagram providing a representation of the optical line used for transmission within this experiment.
Table 1. Adjacent and Far Apart Spectral Configurations: 75 GHz and 300 GHz Spacing between the 62 and 69 Gbaud Carriers, Respectivelya
Regarding transmission, two different spectra have been propagated and analyzed, for a total bandwidth that occupies a portion of the C-band located between 192.55 THz (1556.96 nm) and 195.45 THz (1533.86 nm). In both cases, we consider a total of 55 channels organized in a flexible WDM grid with a minimum division of 12.5 GHz. The overall bandwidth, along with the distinct propagated channels, can be schematically divided into two sub-regions:
• a 50 GHz fixed-grid loading comb composed of 47 QPSK-modulated 100 Gbit/s channels with symbol rates of 28 and 33 Gbaud,
• a sub-region of interest located between 192.95 THz (1553.73 nm) and 193.45 THz (1549.48 nm) that includes a total of eight CUTs:
– two QPSK 100 Gbit/s channels with symbol rates of 33 Gbaud,
– three 16-QAM 200 Gbit/s channels with symbol rates of 39 Gbaud,
– one 8-QAM 200 Gbit/s channel with a symbol rate of 44 Gbaud,
– one 16-QAM 300 Gbit/s channel with a symbol rate of 62 Gbaud,
– one QPSK 200 Gbit/s channel with a symbol rate of 69 Gbaud.
For this experimental investigation, the loading comb and two distinct CUT spectral combinations have been multiplexed using a WSS to create two distinct spectral configurations.
In this paper, we refer to the two spectra as the adjacent and far apart spectral configurations, described in detail in Table 1. The main difference between the two analyzed spectra is that the two CUTs with the larger symbol rates, 62 and 69 Gbaud, are placed next to each other or with other CUTs between them; Fig. 4 includes a visual representation of these two configurations. We choose these two configurations to observe any variation in the GSNRs within the spectral region of interest when the channels with the largest symbol rates change spectral occupations. If GSNR variations are present, they must be taken into account by the ONC when optimizing the configuration of the channels with respect to their symbol rates.
Fig. 4. Visualization of the (a) adjacent and (b) far apart spectral configurations, as fully described in Table 1.
Both spectra have been transmitted at various launch powers and, at the OLS termination, the CUTs have been demultiplexed with a Finisar WaveShaper 4000S and then received, allowing QoT analysis. In particular, the launch powers for each channel have been set such that an approximately uniform PSD is attained over the entire bandwidth; this is performed by maintaining the ratio between the launch powers of any couple of distinct channels equal to the ratio between their symbol rates, which is also visible in Fig. 4. To observe GSNR variation as the optimal power level is approached, we retain this uniform PSD and perform a power sweep, varying the equivalent power per channel, ${\bar P _{{\rm ch}}}$, between ${-}{2}$ and 2 dBm in 0.5 dB increments, where ${\bar P _{{\rm ch}}}$ is defined as the total signal power divided by the total number of channels.
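A minimal sketch of this launch-power rule follows: per-channel powers are made proportional to the symbol rates so that the PSD is uniform, and are normalized so that the total power equals the number of channels times the chosen ${\bar P _{{\rm ch}}}$. For brevity the example is restricted to the eight CUTs of Table 1, whereas in the experiment the normalization runs over all 55 channels; the code is illustrative and is not the control software used in the testbed.

```python
import math

def launch_powers_uniform_psd(symbol_rates_gbaud, p_bar_ch_dbm):
    """Per-channel launch powers (dBm) for a uniform PSD: P_i proportional to R_s,i,
    normalized so that the total power is N times the equivalent power per channel."""
    p_bar_mw = 10 ** (p_bar_ch_dbm / 10)            # equivalent power per channel, in mW
    total_mw = p_bar_mw * len(symbol_rates_gbaud)   # total signal power, in mW
    total_rs = sum(symbol_rates_gbaud)
    return [10 * math.log10(total_mw * rs / total_rs) for rs in symbol_rates_gbaud]

# Illustrative example restricted to the eight CUTs, at the measured optimum of -0.5 dBm
cut_rates = [33, 33, 39, 39, 39, 44, 62, 69]
for rs, p_dbm in zip(cut_rates, launch_powers_uniform_psd(cut_rates, -0.5)):
    print(f"{rs:2d} Gbaud -> {p_dbm:+.2f} dBm")
```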
In this framework, for each transmitted signal, we measured the BER at the receiver and the OSNR at the OLS termination for each CUT. In particular, the OSNR values are obtained from two distinct signal power measurements and the ASE noise after the last EDFA. The ASE noise power has been measured by switching off each CUT in turn and then evaluating the noise floor within the relative bandwidth. Both of these power measurements have been performed using a MS9740A Anritsu optical spectrum analyzer (OSA); in Fig. 5, an example of the transmitted signal power is shown.
Fig. 5. Optical spectrum at the OLS input; the spectral region of interest where the CUTs are located is highlighted in red.
To compare the performance of the CUTs to the predictions given by GNPy, the measured BERs have to be converted to GSNR values. This conversion also has the benefit of decorrelating the measurements to the specific characteristics of each distinct TRX. Furthermore, it enables a direct analysis of the relation between the QoT and the physical layer features of the investigated system and is crucial in enabling the ONC to perform an optimal symbol rate and modulation format setting.
The first required step to convert from BER to GSNR is the back-to-back characterization of every CUT TRX. These back-to-back characterizations have been performed by measuring the BER and the OSNR (which, in this case, is equal to the GSNR) with an increasing level of ASE noise loading. In Fig. 6, the theoretical expectations for the different modulation formats are compared with the measured back-to-back characterizations. The detachment between the theoretical and measured curves is due to additional implementation-specific degradations that are not related to the LP QoT. When the GSNR increases significantly, the additional noise components due to electrical components and analog-to-digital converter (ADC) quantization in the receiver are no longer negligible. We note that the back-to-back characterizations of the 16-QAM-PCS-modulated channels behave the same as the 8-QAM-modulated channel, as their constellation is reshaped into an equivalent 8-QAM modulation format. We stress that the GSNR is expressed by considering the entire channel bandwidths as noise reference bandwidths, rather than a 0.1 nm bandwidth. If this method is used, back-to-back performance curves, as expected, are grouped according to the modulation format, regardless of the symbol rate.
Fig. 6. Back-to-back characterization for each distinct channel within this experimental campaign; continuous and dashed lines represent the theoretical and measured back-to-back curves, respectively. The channel symbol rates, ${R_s}$, are given in Gbaud.
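For reference, the qualitative shape of the theoretical curves in Fig. 6 for the square constellations can be reproduced with the standard Gray-coded M-QAM BER approximation, interpreting the GSNR as the symbol SNR over the channel bandwidth. The sketch below is an approximation of this kind; it is not claimed to be the exact expression used for Fig. 6, and it does not cover the 8-QAM and PCS-16-QAM cases, whose curves depend on the specific constellation and shaping.

```python
import math

def ber_mqam(gsnr_db, m):
    """Approximate pre-FEC BER of Gray-coded square M-QAM (M = 4, 16, 64, ...) versus
    the symbol SNR, i.e., the GSNR referenced to the channel bandwidth."""
    snr = 10 ** (gsnr_db / 10)
    k = math.log2(m)
    return (2 / k) * (1 - 1 / math.sqrt(m)) * math.erfc(math.sqrt(1.5 * snr / (m - 1)))

# Illustrative theoretical points for the QPSK and 16-QAM curves
for gsnr_db in (8, 10, 12, 14, 16, 18):
    print(f"{gsnr_db} dB  QPSK: {ber_mqam(gsnr_db, 4):.2e}  16-QAM: {ber_mqam(gsnr_db, 16):.2e}")
```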
At this stage, as the pre-FEC BER is a parameter provided by the TRXs, which are commercial devices with limited access to the internal digital signal processing (DSP) unit, it has not been possible to properly estimate the error on BER measurements and the consequent confidence interval in GSNR conversions. Properly estimating this quantity would allow a more precise analysis of the QoT investigation with a more accurate description of the system margin and will be the focus of further studies. To reasonably quantify the inaccuracies related to the indirect measurement of the GSNR, we assume an error corresponding to a rigid shift in OSNR measurements up to a maximum $\pm {0.2}\;{\rm dB}$, providing a confidence interval, $\varepsilon$, of the derived GSNR values.
Following this back-to-back characterization for each CUT, the BER measurements can be directly converted to GSNR values and compared to the predictions provided by GNPy. Furthermore, the OSNR measurements obtained by the OSA provide an indirect evaluation of ${{\rm SNR}_{{\rm NL}}}$ degradation, as a subtraction estimate, which allows deeper insight into the QoT estimation analysis. A schematic of this procedure applied to a specific CUT is shown in Fig. 7.
Fig. 7. Schematic of the GSNR and ${{\rm SNR}_{{\rm NL}}}$ evaluation procedure using the values for back-to-back characterization, measured BER, and OSNR for the 69 Gbaud CUT.
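A sketch of the conversion procedure of Fig. 7 is given below: the measured back-to-back curve is inverted to map a measured pre-FEC BER to a GSNR, and the ${{\rm SNR}_{{\rm NL}}}$ is then obtained by subtracting the inverse OSNR from the inverse GSNR, as in Eq. (2). The back-to-back sample points and measurement values are invented for illustration and do not correspond to the actual TRX characterizations.

```python
import numpy as np

def gsnr_from_ber(ber_meas, b2b_gsnr_db, b2b_ber):
    """Invert a back-to-back curve (GSNR versus pre-FEC BER) to map a measured BER to
    a GSNR, interpolating linearly in GSNR versus log10(BER)."""
    x = np.log10(np.asarray(b2b_ber, dtype=float))
    y = np.asarray(b2b_gsnr_db, dtype=float)
    order = np.argsort(x)                      # np.interp requires increasing abscissae
    return float(np.interp(np.log10(ber_meas), x[order], y[order]))

def snr_nl_by_subtraction(gsnr_db, osnr_db):
    """SNR_NL obtained by subtracting the inverse OSNR from the inverse GSNR."""
    inv = 10 ** (-gsnr_db / 10) - 10 ** (-osnr_db / 10)
    return float(-10 * np.log10(inv))

# Illustrative back-to-back samples for one CUT and one measurement point
b2b_gsnr = [10, 12, 14, 16, 18]
b2b_ber = [2.0e-2, 7.0e-3, 2.0e-3, 5.0e-4, 1.0e-4]
gsnr = gsnr_from_ber(3.0e-3, b2b_gsnr, b2b_ber)            # ~13.4 dB
print(round(gsnr, 2), round(snr_nl_by_subtraction(gsnr, osnr_db=18.0), 2))
```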
4. RESULTS AND ANALYSIS
An implementation of the open-source GNPy library has been modified to allow variable WDM grid spacings and variable channel settings, such as channel-dependent input powers and symbol rates. Moreover, taking advantage of GNPy's disaggregated structure, we have improved the NLI estimator by including a variable accumulation coefficient of the self-phase modulation component of the NLI, taking into account the span-by-span coherency dependence upon fiber variety and spectral characteristics [39]. These improvements allow the GNPy engine to adequately simulate the investigated experimental testbed that includes the propagation of flex-grid flex-rate spectral configurations. In general, the generalized GN model implemented in GNPy provides accurate results when a precise evaluation of the physical layer parameters is available, as it has been extensively shown in previous fixed-grid experiments [16,20,24]. For the experimental setup under investigation, some physical parameters involve a certain level of uncertainty, or are completely unknown. Among these variables, the EDFA noise figure and the fiber input connector losses, ${l_c}$, for each span are fundamental for an accurate system simulation to be achieved. In particular, the former quantity is necessary for an adequate prediction of the OSNR, whereas the input connector loss crucially affects the actual amount of power propagated through the fiber; both the SRS power tilt and the generated NLI noise strongly depend on the input power [24]. The EDFA noise figure for each span has been estimated from a single measurement of the OSNR at the OLS termination, with the total launch power set such that ${\bar P _{{\rm ch}}}= {2}\;{\rm dBm}$. These measurements can be converted to an equivalent noise figure that has been equally redistributed for each span and also partially includes effects due to EDFA ripple. Regarding input connector loss, it is not possible to directly measure or estimate this value from an overall system metric in the same way as we have performed for the noise figure. Therefore, to accommodate uncertainty due to these unknown input connector loss values, we include a confidence interval of ${0.25}\;{\rm dB}\;{\pm}\;{0.25}\;{\rm dB}$ for each span for the ${{\rm SNR}_{\rm{NL}}}$ and GSNR predictions. We then repeat the GNPy simulations with 0 and 0.5 dB connector loss values, referring to these as the lower and upper extreme simulation cases, respectively.
The results obtained using these estimations are compared to the measured pre-FEC BER converted to the GSNR, along with the measured OSNR and the ${{\rm SNR}_{\rm{NL}}}$ obtained by subtraction of the inverse GSNR and OSNR. These results are shown in Fig. 8 for a subset of the CUTs: We include the best- and worst-case scenario CUTs in terms of average GSNR prediction accuracy, and the CUTs with symbol rates of 44 and 69 Gbaud. In general, the GNPy engine provides very accurate predictions of the OSNR for all CUTs and launch powers explored in the power sweep. As expected, the generalized GN model implemented in GNPy provides conservative ${{\rm SNR}_{\rm{NL}}}$ predictions for almost all CUTs and ${\bar P _{{\rm ch}}}$ values. We highlight that, by increasing ${\bar P _{{\rm ch}}}$, the predicted and measured ${{\rm SNR}_{\rm{NL}}}$ values for all cases monotonically decrease and reach the same asymptotic slope, which demonstrates that the model under investigation provides a good representation of the underlying physical phenomena. On the other hand, the ${{\rm SNR}_{\rm{NL}}}$ measurements follow this trend less consistently at low ${\bar P _{{\rm ch}}}$ values. Nevertheless, these deviations from the trend can be justified bearing in mind that the ${{\rm SNR}_{\rm{NL}}}$ measurements have been obtained by subtracting the inverse OSNR from the inverse GSNR. As a consequence, the measured ${{\rm SNR}_{\rm{NL}}}$ also includes all the additional SNR degradations that are not included in the system abstraction. Therefore, this deviation can be explained by a constant additional degradation that is more evident at large ${{\rm SNR}_{\rm{NL}}}$ values and becomes negligible as the NLI increases. Furthermore, we observe that the ${{\rm SNR}_{\rm{NL}}}$ predictions for the CUT with the highest symbol rate, 69 Gbaud, are slightly optimistic for all ${\bar P _{{\rm ch}}}$ values. This is due to this particular CUT having a received power below the optimal TRX power range.
Fig. 8. Dots and continuous lines represent, respectively, the measured and predicted GSNRs, OSNRs, and ${{\rm SNR}_{{\rm NL}}}$s for four selected CUTs in the adjacent spectral configuration case, for every explored ${\bar P _{{\rm ch}}}$ value. The shaded areas include the confidence interval obtained with the upper and lower simulations involving the extreme values of the input connector loss.
To summarize, GNPy provides a conservative prediction with a satisfactory level of accuracy of the total GSNR for every CUT, for all ${\bar P _{{\rm ch}}}$ values explored in the power sweep, with the exception of the 69 Gbaud channel, which suffers from additional impairments due to an insufficient received power. A quantitative estimation of the prediction accuracy can be obtained by inspecting the mean, ${\mu}$, and root mean square (RMS), $\sigma$, of the errors in the GSNR simulations, $\Delta {\rm GSNR}$, defined explicitly in the following expressions:
$$\begin{split}\Delta {\rm GSNR}& = {{\rm GSNR}^{{\rm meas}}} - {\left. {{{{\rm GSNR}}^{{\rm pred}}}} \right|_{{l_c} = 0.25}}, \\ \Delta {{\rm GSNR}_{\rm{lower}}} &= {{\rm GSNR}^{{\rm meas}}} - {\left. {{{{\rm GSNR}}^{{\rm pred}}}} \right|_{{l_c} = 0}}, \\ \Delta {{\rm GSNR}_{\rm{upper}}} &= {{\rm GSNR}^{{\rm meas}}} - {\left. {{{{\rm GSNR}}^{{\rm pred}}}} \right|_{{l_c} = 0.5}}.\end{split}$$
Both ${\mu}$ and $\sigma$ are evaluated separately on the adjacent and far apart spectral configurations as
(3)$$\begin{split}\mu &= {\rm E}\left[{\Delta {\rm GSNR}} \right], \\ {\mu _{\rm{lower}}}& = {\rm E}\left[{\Delta {{{\rm GSNR}}_{\rm{lower}}}} \right], \\ {\mu _{\rm{upper}}} &= {\rm E}\left[{\Delta {{{\rm GSNR}}_{\rm{upper}}}} \right],\end{split}$$
(4)$$\begin{split}\sigma &= \sqrt {{\rm E}\left[{{{\left({\Delta {\rm GSNR}} \right)}^2}} \right]}, \\ {\sigma _{\rm{lower}}}& = \sqrt {{\rm E}\left[{{{\left({\Delta {{{\rm GSNR}}_{\rm{lower}}}} \right)}^2}} \right]}, \\ {\sigma _{\rm{upper}}} &= \sqrt {{\rm E}\left[{{{\left({\Delta {{{\rm GSNR}}_{\rm{upper}}}} \right)}^2}} \right]},\end{split}$$
where the operator ${\rm E}[\cdot]$ is the average over the entire set of measurements/predictions. These results are reported in Table 2, along with the minimum $\Delta {\rm GSNR}$, which represents the worst-case scenario; here, GNPy provides a nonconservative prediction, which also provides a rough estimation of the required QoT-E margin. Moreover, the uncertainties, $\varepsilon$, reported in Table 2 have been calculated as follows:
$$\begin{split}{\varepsilon _{{\rm min}}}& = {\varepsilon _{{\rm min}({{{\rm GSNR}}^{\rm{meas}}})}}, \\ {\varepsilon _\mu} &= \sqrt {{\rm E}\left[{{{\left({{\varepsilon _{{{{\rm GSNR}}^{\rm{meas}}}}}} \right)}^2}} \right]}, \\ {\varepsilon _\sigma} &= \frac{1}{\sigma}\sqrt {\frac{1}{N}{\rm E}\left[{{{\left({\Delta {{{\rm GSNR}}^{\rm{meas}}} \cdot {\varepsilon _{{{{\rm GSNR}}^{\rm{meas}}}}}} \right)}^2}} \right]},\end{split}$$
where $N$ is the total number of measurements/predictions.
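For clarity, the sketch below evaluates $\mu$, $\sigma$, and the worst-case (minimum) $\Delta {\rm GSNR}$ over a set of measured/predicted GSNR pairs, following the definitions above; the numerical values are hypothetical.

```python
import numpy as np

def gsnr_error_stats(gsnr_meas_db, gsnr_pred_db):
    """Mean (bias), RMS, and worst-case GSNR prediction error over all
    measurement/prediction pairs, as in Eqs. (3) and (4)."""
    delta = np.asarray(gsnr_meas_db, dtype=float) - np.asarray(gsnr_pred_db, dtype=float)
    mu = float(np.mean(delta))
    sigma = float(np.sqrt(np.mean(delta ** 2)))
    worst = float(np.min(delta))               # most non-conservative prediction
    return mu, sigma, worst

# Hypothetical measured and predicted GSNRs for a handful of points
meas = [16.4, 16.1, 15.8, 15.9, 16.3]
pred = [16.2, 16.0, 15.9, 15.7, 16.0]
print(gsnr_error_stats(meas, pred))   # (mean, RMS, minimum) in dB
```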
Table 2. Overall GNPy Accuracy Defined by Means of $\mu$ and $\sigma$ and the Minimum Value of GSNR Error, $\Delta {\rm GSNR}$a
In general, GNPy provides very accurate (low value of $\sigma$) and unbiased (low value of ${\mu}$) predictions, with the upper simulation providing the most precise estimations. On the other hand, the lower simulation provides a more reliable estimation, as the GSNR error on the worst-case scenario is more than halved with respect to the other simulations. These two simulations represent scenarios where either a more accurate or more reliable model may be chosen, depending upon the requirements of the network operator.
Fig. 9. Dots and continuous lines represent, respectively, the measured and predicted GSNR, OSNR, and ${{\rm SNR}_{{\rm NL}}}$ for all CUTs. Top and bottom report the results for the (a) adjacent and (b) far apart spectral configurations, at the optimal measured working point ${\bar P _{{\rm ch}}} = - 0.5\;{\rm dBm} $. The shaded horizontal areas include the confidence interval obtained with the upper and lower simulations involving the extreme values of the input connector loss. The largest symbol rate CUTs, 62 and 69 Gbaud, are highlighted with the vertical yellow shades on the left- and right-hand sides, respectively.
These results can be further analyzed from an application standpoint by investigating the optimal launch power and GSNR feasibility when higher-cardinality modulation formats are used. In general, the optimal launch power is an implementation-dependent quantity and can be described with different definitions. However, a per-channel power optimization is not straightforward due to the nonlinear effects (both SRS and NLI) that are generated during fiber propagation. Managing the NLI impairment can be simplified by considering the following heuristic idea: the NLI noise generated by the signal power contained in an infinitesimal bandwidth of an interfering channel does not depend upon the channel itself. We elucidate this idea with an example; the NLI generated by two interfering channels with symbol rates of 33 Gbaud is not significantly different from the NLI generated by one interfering channel with a 66 Gbaud symbol rate, if each of these channels occupies the same frequency slot width.
In a realistic use case, the optimal launch power can therefore be defined globally, where an optimization algorithm varies the offset and tilt of a uniform PSD configuration over the entire transmitted spectrum. This constant PSD configuration can then be tilted to recover the residual tilt due to uncompensated SRS. Defining a global optimal launch power using this uniform PSD leads to a uniform GSNR distribution for all channels; this reduces management complexity and allows system margins to be kept under control. We remark that the optimization procedure used within this work does not require any additional equipment, as the optimal launch power configuration can be obtained by varying the EDFA gains (or output power) and tilts, which are parameters that are readily accessible in currently deployed infrastructures.
This assumption is, in any case, an oversimplification of the NLI effect, and further analysis is required to reach a formal and accurate description. Additionally, we highlight that a more elaborate PSD distribution that takes into account all symbol rate variances within the spectrum can provide a better optimization; however, in this analysis, we give priority to maintaining a lower level of complexity for ease of optimization management in a realistic use case.
In this work, an analysis on the optimal launch power can only be partially performed, as the CUTs occupy a limited portion of the entire spectrum bandwidth. To define the optimal launch power, first we recap that ${\bar P _{{\rm ch}}}$ is defined as the total signal power divided by the total number of channels. Second, distinct ${\bar P _{{\rm ch}}}$ values represent different power sweep measurements, and a uniform PSD over the entire bandwidth has been retained for all of these measurements. Bearing these details in mind, we therefore define the optimal value of ${\bar P _{{\rm ch}}}$ within the investigated range as the optimal launch power. The definition of the optimal ${\bar P _{{\rm ch}}}$ value is not straightforward, as distinct CUTs reach their maximum measured GSNR at different ${\bar P _{{\rm ch}}}$ values (the same is true considering predicted GSNRs). Considering these individual optimal ${\bar P _{{\rm ch}}}$ values for each CUT, we select the minimum to be the overall optimal ${\bar P _{{\rm ch}}}$, ensuring that all channels do not exceed their own optimal values. Given this definition, the optimal launch powers for both spectral configurations are ${\bar P _{{\rm ch}}} = - 0.5\,\,{\rm dBm}$ and ${\bar P _{{\rm ch}}} = - 1\,\,{\rm dBm}$, considering the set of measured and predicted GSNRs, respectively. From an application standpoint, GNPy provides a sub-optimal launch power that results in a limited reduction in the achieved GSNR—this can be quantified as the RMS deviation between the measured GSNR at ${\bar P _{{\rm ch}}} = - 1\,\,{\rm dBm}$ and the maximum measured GSNR for each channel; for both investigated spectral configurations, this metric is equal to $0.3 \pm 0.1\,\,{\rm dB}$.
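The selection rule described above can be summarized by the following sketch, which finds, for each CUT, the ${\bar P _{{\rm ch}}}$ value that maximizes its GSNR over the power sweep and then takes the minimum across CUTs, so that no channel exceeds its own optimum. The GSNR-versus-power curves are invented for illustration.

```python
import numpy as np

def optimal_p_bar_ch(p_bar_grid_dbm, gsnr_vs_power_db):
    """Overall optimal equivalent power per channel: per-CUT argmax of the GSNR over
    the power sweep, then the minimum of these per-CUT optima."""
    p_grid = np.asarray(p_bar_grid_dbm, dtype=float)
    per_cut_optima = [p_grid[int(np.argmax(curve))] for curve in gsnr_vs_power_db]
    return float(min(per_cut_optima))

# Hypothetical GSNR-versus-power curves for three CUTs over the -2 ... +2 dBm sweep
p_sweep = np.arange(-2.0, 2.5, 0.5)
curves = [
    [15.0, 15.3, 15.5, 15.6, 15.5, 15.3, 15.0, 14.6, 14.1],    # per-CUT optimum: -0.5 dBm
    [15.1, 15.4, 15.6, 15.65, 15.7, 15.5, 15.2, 14.8, 14.3],   # per-CUT optimum:  0.0 dBm
    [14.8, 15.1, 15.3, 15.4, 15.3, 15.1, 14.8, 14.4, 13.9],    # per-CUT optimum: -0.5 dBm
]
print(optimal_p_bar_ch(p_sweep, curves))   # -> -0.5
```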
Last, the comparison of the measured and predicted values of the GSNR, OSNR, and ${{\rm SNR}_{\rm{NL}}}$ for all CUTs is shown in Fig. 9 for ${\bar P _{{\rm ch}}} = - 0.5$ dBm. It is visible that the flat GSNR assumption is verified in this case, as the GSNR standard deviation for all channels is ${0.2}\;{\pm}\;{0.1}\;{\rm dB}$ for both spectral configurations. As previously mentioned, the CUT with the highest symbol rate, highlighted in yellow on the right-hand side in Fig. 9, appears to experience an additional penalty that is not included in the simulation abstraction; for all other CUTs, GNPy provides a conservative estimation of the ${{\rm SNR}_{\rm{NL}}}$, which leads to an accurate GSNR prediction.
Moreover, we highlight that the GSNR values do not vary significantly, with a RMS deviation of $0.1 \pm 0.1\,\,{\rm dB}$, when the CUT with a symbol rate of 62 Gbaud is moved from the center to the edge of the sub-region of interest. This result is very important from an application standpoint, as it suggests that the relative positions of large and narrow CUTs in a flex-rate framework are not significant when a constant PSD is implemented; consequently, system management and the optimization performed by the OLC is further simplified. We remark that this observation can be combined with the results of [36] to consider an optimal system working point that provides a flat and uniform GSNR distribution over the entire C-band.
5. CONCLUSION
In this work, we experimentally validate the reliability and accuracy of GNPy as a software model for the WDM optical transport within a flex-grid flex-rate multi-vendor scenario, including various transmission data rates ranging from 100 to 300 Gbits/s and different modulation formats, including shaped constellations. To the best of our knowledge, this paper represents one of the first application examples of a QoT-E implementation within a diverse transmission scenario. Looking into the back-to-back characterizations of the distinct TRXs, we further experimentally observe the unequivocal relation between the pre-FEC BER and the LP GSNR. In particular, when the GSNR is quantified over the channel bandwidths, it is clear that the BER variations depend uniquely upon the specific modulation format, except for some insignificant fluctuations. Furthermore, we show that the GSNR distribution over the distinct channels is approximately flat when a constant PSD is implemented for the launch power and does not depend on the relative positions of large and narrow symbol rate channels, at least in the investigated scenario. These features lead to further simplification from optimization and management perspectives: from an application standpoint, an operator may consider a flat GSNR distribution and divide the available bandwidth in a flex-grid flex-rate framework, basing the design of the spectral configuration only on techno-economic assessments (e.g., total number of TRXs, power budget), as the total system capacity is not significantly affected by any specific choice.
In this framework, the QoT-E provided by the GNPy engine is a valuable tool that allows the retrieval of a conservative and efficient working point—the estimated optimal launch power provided by the GNPy simulations, which is slightly sub-optimal with respect to the reachable maximum GSNR of the individual channels; however, this reduction is limited to fractions of a dB. As a practical consequence, in partially disaggregated network scenarios, the OLCs controlling power levels on each ROADM-to-ROADM OLS may operate independently by setting the optimal PSD using a GNPy implementation.
Horizon 2020 Framework Programme (Marie Skłodowska-Curie grant agreement 814276).
The authors thank the OOPT-PSE working group of the Telecom Infra Project.
1. "Cisco visual networking index: forecast and methodology," 2018, https://www.cisco.com/c/en/us/solutions/service-provider/visual-networking-index-vni/index.html.
2. Mordor Intelligence, "Optical transport network market—growth, trends, COVID-19 impact, and forecasts (2021–2026)," 2021, https://www.mordorintelligence.com/industry-reports/optical-transport-network-market.
3. "Spectral grids for WDM applications: DWDM frequency grid," ITU-T Recommendation G.694.1, 2020, https://www.itu.int/rec/T-REC-G.694.1-202010-I/en.
4. C. Doerr and L. Chen, "Silicon photonics in optical coherent systems," Proc. IEEE 106, 2291–2301 (2018). [CrossRef]
5. F. Buchali, V. Aref, R. Dischler, M. Chagnon, K. Schuh, H. Hettrich, A. Bielik, L. Altenhain, M. Guntermann, R. Schmid, and M. Möller, "128 GSa/s SiGe DAC implementation enabling 1.52 Tb/s single carrier transmission," J. Lightwave Technol. 39, 763–770 (2020). [CrossRef]
6. H. Sun, M. Torbatian, M. Karimi, et al., "800G DSP ASIC design using probabilistic shaping and digital sub-carrier multiplexing," J. Lightwave Technol. 38, 4744–4756 (2020). [CrossRef]
7. F. P. Guiomar, R. Li, C. R. Fludger, A. Carena, and V. Curri, "Hybrid modulation formats enabling elastic fixed-grid optical networks," J. Opt. Commun. Netw. 8, A92–A100 (2016). [CrossRef]
8. J. Cho and P. J. Winzer, "Probabilistic constellation shaping for optical fiber communications," J. Lightwave Technol. 37, 1590–1607 (2019). [CrossRef]
9. Y. Akulova, "InP photonic integrated circuits for high efficiency pluggable optical interfaces," in Optical Fiber Communication Conference (OFC) (Optical Society of America, 2015), paper W3H.1.
10. E. Riccardi, P. Gunning, Ó. G. de Dios, M. Quagliotti, V. López, and A. Lord, "An operator view on the introduction of white boxes into optical networks," J. Lightwave Technol. 36, 3062–3072 (2018). [CrossRef]
11. H. Nishizawa, W. Ishida, Y. Sone, T. Tanaka, S. Kuwabara, T. Inui, T. Sasai, and M. Tomizawa, "Open whitebox architecture for smart integration of optical networking and data center technology," J. Opt. Commun. Netw. 13, A78–A87 (2021). [CrossRef]
12. S. Gringeri, N. Bitar, and T. J. Xia, "Extending software defined network principles to include optical transport," IEEE Commun. Mag. 51(3), 32–40 (2013). [CrossRef]
13. M. Birk, O. Renais, G. Lambert, C. Betoule, G. Thouenon, A. Triki, D. Bhardwaj, S. Vachhani, N. Padi, and S. Tse, "The OpenROADM initiative," J. Opt. Commun. Netw. 12, C58–C67 (2020). [CrossRef]
14. OIF, "400ZR," 2020, https://www.oiforum.com/technical-work/hot-topics/400zr-2/.
15. Telecom Infra Project, "RFI template for Phoenix," 2020, https://exchange.telecominfraproject.com/rfi/phoenix.
16. M. Filer, M. Cantono, A. Ferrari, G. Grammel, G. Galimberti, and V. Curri, "Multi-vendor experimental validation of an open source QoT estimator for optical networks," J. Lightwave Technol.36, 3073–3082 (2018). [CrossRef]
17. V. Kamalov, M. Cantono, V. Vusirikala, L. Jovanovski, M. Salsi, A. Pilipetskii, D. K. M. Bolshtyansky, G. Mohs, E. R. Hartling, and S. Grubb, "The subsea fiber as a Shannon channel," in Proceedings of the SubOptic (2019).
18. A. Ferrari, M. Filer, K. Balasubramanian, Y. Yin, E. Le Rouzic, J. Kundrát, G. Grammel, G. Galimberti, and V. Curri, "Experimental validation of an open source quality of transmission estimator for open optical networks," in Optical Fiber Communication Conference (OFC) (Optical Society of America, 2020), paper W3C.2.
19. J. Slovak, W. Schairer, M. Herrmann, and E. Torrengo, "Determination of channel OSNR and channel OSNR margin at real network conditions," U.S. patent US2021203415A1 (1 October 2021).
20. A. Ferrari, M. Filer, K. Balasubramanian, Y. Yin, E. Le Rouzic, J. Kundrát, G. Grammel, G. Galimberti, and V. Curri, "GNPy: an open source application for physical layer aware open optical networks," J. Opt. Commun. Netw.12, C31–C40 (2020). [CrossRef]
21. V. Curri, "Software-defined WDM optical transport in disaggregated open optical networks," in 22nd International Conference on Transparent Optical Networks (ICTON) (IEEE, 2020).
22. J. Kundrát, E. Le Rouzic, James, J. L. Auge, J. Mårtensson, A. Ferrari, G. Goldfarb, G. Grammel, A. D'Amico, M. Cantono, Robert, M. Garrich, B. Taylor, D. Boertjes, M. Naser, S. Zhu, X. Liu, and diegolaunch, "GNPy," Zenodo, 2021, https://doi.org/10.5281/zenodo.3458319.
23. G. Grammel, V. Curri, and J. L. Auge, "Physical simulation environment of the telecommunications infrastructure project (TIP)," in Optical Fiber Communication Conference (OFC) (2018), paper M1D.3.
24. A. Ferrari, K. Balasubramanian, M. Filer, Y. Yin, E. L. Rouzic, J. Kundrát, G. Grammel, G. Galimberti, and V. Curri, "Assessment on the in-field lightpath QoT computation including connector loss uncertainties," J. Opt. Commun. Netw.13, A156–A164 (2021). [CrossRef]
25. A. D'Amico, E. London, B. Le Guyader, F. Frank, E. Le Rouzic, E. Pincemin, N. Brochier, and V. Curri, "GNPy experimental validation on flex-grid, flex-rate WDM optical transport scenarios," in Optical Fiber Communication Conference (Optical Society of America, 2021), paper W1G.2.
26. L. Yan, E. Agrell, M. N. Dharmaweera, and H. Wymeersch, "Joint assignment of power, routing, and spectrum in static flexible-grid networks," J. Lightwave Technol.35, 1766–1774 (2017). [CrossRef]
27. V. Curri, A. Carena, A. Arduino, G. Bosco, P. Poggiolini, A. Nespola, and F. Forghieri, "Design strategies and merit of system parameters for uniform uncompensated links supporting Nyquist-WDM transmission," J. Lightwave Technol.33, 3921–3932 (2015). [CrossRef]
28. S. Gringeri, B. Basch, V. Shukla, R. Egorov, and T. J. Xia, "Flexible architectures for optical transport nodes and networks," IEEE Commun. Mag.48(7), 40–50 (2010). [CrossRef]
29. C. Manso, R. Muñoz, N. Yoshikane, R. Casellas, R. Vilalta, R. Martínez, T. Tsuritani, and I. Morita, "TAPI-enabled SDN control for partially disaggregated multi-domain (OLS) and multi-layer (WDM over SDM) optical networks," J. Opt. Commun. Netw.13, A21–A33 (2021). [CrossRef]
30. J. Kundrát, O. Havliš, J. Jedlinský, and J. Vojtěch, "Opening up ROADMs: let us build a disaggregated open optical line system," J. Lightwave Technol.37, 4041–4051 (2019). [CrossRef]
31. A. Triki, E. Le Rouzic, O. Renais, G. Lambert, G. Thouenon, C. Betoule, E. Delfour, S. Vachhani, and B. Bathula, "Open-source QoT estimation for impairment-aware path computation in OpenROADM compliant network," in European Conference on Optical Communications (ECOC) (2020).
32. A. D'Amico, S. Straullu, G. Borraccini, E. London, S. Bottacchi, S. Piciaccia, A. Tanzi, A. Nespola, G. Galimberti, S. Swail, and V. Curri, "Enhancing lightpath QoT computation with machine learning in partially disaggregated optical networks," IEEE Open J. Commun. Soc.2, 564–574 (2021). [CrossRef]
33. E. Le Rouzic, A. Lindgren, S. Melin, D. Provencher, R. Subramanian, R. Joyce, F. Moore, D. Reeves, A. Rambaldi, P. Kaczmarek, K. Weeks, S. Neidlinger, G. Agrawal, S. Krishnamoha, B. Raszczyk, T. Uhlar, R. Casellas, O. Gonzalez de Dios, and V. Lopez, "Operationalizing partially disaggregated optical networks: an open standards-driven multi-vendor demonstration," in Optical Fiber Communication Conference (OFC) (2021), paper M1B.2.
34. E. London, E. Virgillito, A. D'Amico, A. Napoli, and V. Curri, "Simulative assessment of non-linear interference generation within disaggregated optical line systems," OSA Continuum3, 3378–3389 (2020). [CrossRef]
35. S. Walker, "Rapid modeling and estimation of total spectral loss in optical fibers," J. Lightwave Technol.4, 1125–1131 (1986). [CrossRef]
36. G. Borraccini, A. D'Amico, S. Straullu, A. Nespola, S. Piciaccia, A. Tanzi, G. Galimberti, S. Bottacchi, S. Swail, and V. Curri, "Cognitive and autonomous QoT-driven optical line controller," J. Opt. Commun. Netw.13, E23–E31 (2021). [CrossRef]
37. M. Cantono, D. Pilori, A. Ferrari, C. Catanese, J. Thouras, J.-L. Augé, and V. Curri, "On the interplay of nonlinear interference generation with stimulated Raman scattering for QoT estimation," J. Lightwave Technol.36, 3131–3141 (2018). [CrossRef]
38. D. Semrau, R. I. Killey, and P. Bayvel, "The Gaussian noise model in the presence of inter-channel stimulated Raman scattering," J. Lightwave Technol.36, 3046–3055 (2018). [CrossRef]
39. A. D'Amico, E. London, E. Virgillito, A. Napoli, and V. Curri, "Quality of transmission estimation for planning of disaggregated optical networks," in International Conference on Optical Network Design and Modeling (ONDM) (IEEE, 2020).
"Cisco visual networking index: forecast and methodology," 2018, https://www.cisco.com/c/en/us/solutions/service-provider/visual-networking-index-vni/index.html .
Mordor Intelligence, "Optical transport network market—growth, trends, COVID-19 impact, and forecasts (2021–2026)," 2021, https://www.mordorintelligence.com/industry-reports/optical-transport-network-market .
"Spectral grids for WDM applications: DWDM frequency grid," ITU-T Recommendation G.694.1, 2020, https://www.itu.int/rec/T-REC-G.694.1-202010-I/en .
C. Doerr and L. Chen, "Silicon photonics in optical coherent systems," Proc. IEEE 106, 2291–2301 (2018).
F. Buchali, V. Aref, R. Dischler, M. Chagnon, K. Schuh, H. Hettrich, A. Bielik, L. Altenhain, M. Guntermann, R. Schmid, and M. Möller, "128 GSa/s SiGe DAC implementation enabling 1.52 Tb/s single carrier transmission," J. Lightwave Technol. 39, 763–770 (2020).
H. Sun, M. Torbatian, and M. Karimi et al., "800G DSP ASIC design using probabilistic shaping and digital sub-carrier multiplexing," J. Lightwave Technol. 38, 4744–4756 (2020).
F. P. Guiomar, R. Li, C. R. Fludger, A. Carena, and V. Curri, "Hybrid modulation formats enabling elastic fixed-grid optical networks," J. Opt. Commun. Netw. 8, A92–A100 (2016).
J. Cho and P. J. Winzer, "Probabilistic constellation shaping for optical fiber communications," J. Lightwave Technol. 37, 1590–1607 (2019).
Y. Akulova, "InP photonic integrated circuits for high efficiency pluggable optical interfaces," in Optical Fiber Communication Conference (OFC) (Optical Society of America, 2015), paper W3H.1.
E. Riccardi, P. Gunning, Ó. G. de Dios, M. Quagliotti, V. López, and A. Lord, "An operator view on the introduction of white boxes into optical networks," J. Lightwave Technol. 36, 3062–3072 (2018).
H. Nishizawa, W. Ishida, Y. Sone, T. Tanaka, S. Kuwabara, T. Inui, T. Sasai, and M. Tomizawa, "Open whitebox architecture for smart integration of optical networking and data center technology," J. Opt. Commun. Netw. 13, A78–A87 (2021).
S. Gringeri, N. Bitar, and T. J. Xia, "Extending software defined network principles to include optical transport," IEEE Commun. Mag. 51(3), 32–40 (2013).
M. Birk, O. Renais, G. Lambert, C. Betoule, G. Thouenon, A. Triki, D. Bhardwaj, S. Vachhani, N. Padi, and S. Tse, "The OpenROADM initiative," J. Opt. Commun. Netw. 12, C58–C67 (2020).
OIF, "400ZR," 2020, https://www.oiforum.com/technical-work/hot-topics/400zr-2/ .
Telecom Infra Project, "RFI template for Phoenix," 2020, https://exchange.telecominfraproject.com/rfi/phoenix .
M. Filer, M. Cantono, A. Ferrari, G. Grammel, G. Galimberti, and V. Curri, "Multi-vendor experimental validation of an open source QoT estimator for optical networks," J. Lightwave Technol. 36, 3073–3082 (2018).
V. Kamalov, M. Cantono, V. Vusirikala, L. Jovanovski, M. Salsi, A. Pilipetskii, D. K. M. Bolshtyansky, G. Mohs, E. R. Hartling, and S. Grubb, "The subsea fiber as a Shannon channel," in Proceedings of the SubOptic (2019).
A. Ferrari, M. Filer, K. Balasubramanian, Y. Yin, E. Le Rouzic, J. Kundrát, G. Grammel, G. Galimberti, and V. Curri, "Experimental validation of an open source quality of transmission estimator for open optical networks," in Optical Fiber Communication Conference (OFC) (Optical Society of America, 2020), paper W3C.2.
J. Slovak, W. Schairer, M. Herrmann, and E. Torrengo, "Determination of channel OSNR and channel OSNR margin at real network conditions," U.S. patentUS2021203415A1 (1October2021).
A. Ferrari, M. Filer, K. Balasubramanian, Y. Yin, E. Le Rouzic, J. Kundrát, G. Grammel, G. Galimberti, and V. Curri, "GNPy: an open source application for physical layer aware open optical networks," J. Opt. Commun. Netw. 12, C31–C40 (2020).
V. Curri, "Software-defined WDM optical transport in disaggregated open optical networks," in 22nd International Conference on Transparent Optical Networks (ICTON) (IEEE, 2020).
J. Kundrát, E. Le Rouzic, James, J. L. Auge, J. Mårtensson, A. Ferrari, G. Goldfarb, G. Grammel, A. D'Amico, M. Cantono, Robert, M. Garrich, B. Taylor, D. Boertjes, M. Naser, S. Zhu, X. Liu, and diegolaunch, "GNPy," Zenodo, 2021, https://doi.org/10.5281/zenodo.3458319.
G. Grammel, V. Curri, and J. L. Auge, "Physical simulation environment of the telecommunications infrastructure project (TIP)," in Optical Fiber Communication Conference (OFC) (2018), paper M1D.3.
A. Ferrari, K. Balasubramanian, M. Filer, Y. Yin, E. L. Rouzic, J. Kundrát, G. Grammel, G. Galimberti, and V. Curri, "Assessment on the in-field lightpath QoT computation including connector loss uncertainties," J. Opt. Commun. Netw. 13, A156–A164 (2021).
A. D'Amico, E. London, B. Le Guyader, F. Frank, E. Le Rouzic, E. Pincemin, N. Brochier, and V. Curri, "GNPy experimental validation on flex-grid, flex-rate WDM optical transport scenarios," in Optical Fiber Communication Conference (Optical Society of America, 2021), paper W1G.2.
L. Yan, E. Agrell, M. N. Dharmaweera, and H. Wymeersch, "Joint assignment of power, routing, and spectrum in static flexible-grid networks," J. Lightwave Technol. 35, 1766–1774 (2017).
V. Curri, A. Carena, A. Arduino, G. Bosco, P. Poggiolini, A. Nespola, and F. Forghieri, "Design strategies and merit of system parameters for uniform uncompensated links supporting Nyquist-WDM transmission," J. Lightwave Technol. 33, 3921–3932 (2015).
S. Gringeri, B. Basch, V. Shukla, R. Egorov, and T. J. Xia, "Flexible architectures for optical transport nodes and networks," IEEE Commun. Mag. 48(7), 40–50 (2010).
C. Manso, R. Muñoz, N. Yoshikane, R. Casellas, R. Vilalta, R. Martínez, T. Tsuritani, and I. Morita, "TAPI-enabled SDN control for partially disaggregated multi-domain (OLS) and multi-layer (WDM over SDM) optical networks," J. Opt. Commun. Netw. 13, A21–A33 (2021).
J. Kundrát, O. Havliš, J. Jedlinský, and J. Vojtěch, "Opening up ROADMs: let us build a disaggregated open optical line system," J. Lightwave Technol. 37, 4041–4051 (2019).
A. Triki, E. Le Rouzic, O. Renais, G. Lambert, G. Thouenon, C. Betoule, E. Delfour, S. Vachhani, and B. Bathula, "Open-source QoT estimation for impairment-aware path computation in OpenROADM compliant network," in European Conference on Optical Communications (ECOC) (2020).
A. D'Amico, S. Straullu, G. Borraccini, E. London, S. Bottacchi, S. Piciaccia, A. Tanzi, A. Nespola, G. Galimberti, S. Swail, and V. Curri, "Enhancing lightpath QoT computation with machine learning in partially disaggregated optical networks," IEEE Open J. Commun. Soc. 2, 564–574 (2021).
E. Le Rouzic, A. Lindgren, S. Melin, D. Provencher, R. Subramanian, R. Joyce, F. Moore, D. Reeves, A. Rambaldi, P. Kaczmarek, K. Weeks, S. Neidlinger, G. Agrawal, S. Krishnamoha, B. Raszczyk, T. Uhlar, R. Casellas, O. Gonzalez de Dios, and V. Lopez, "Operationalizing partially disaggregated optical networks: an open standards-driven multi-vendor demonstration," in Optical Fiber Communication Conference (OFC) (2021), paper M1B.2.
E. London, E. Virgillito, A. D'Amico, A. Napoli, and V. Curri, "Simulative assessment of non-linear interference generation within disaggregated optical line systems," OSA Continuum 3, 3378–3389 (2020).
S. Walker, "Rapid modeling and estimation of total spectral loss in optical fibers," J. Lightwave Technol. 4, 1125–1131 (1986).
G. Borraccini, A. D'Amico, S. Straullu, A. Nespola, S. Piciaccia, A. Tanzi, G. Galimberti, S. Bottacchi, S. Swail, and V. Curri, "Cognitive and autonomous QoT-driven optical line controller," J. Opt. Commun. Netw. 13, E23–E31 (2021).
M. Cantono, D. Pilori, A. Ferrari, C. Catanese, J. Thouras, J.-L. Augé, and V. Curri, "On the interplay of nonlinear interference generation with stimulated Raman scattering for QoT estimation," J. Lightwave Technol. 36, 3131–3141 (2018).
D. Semrau, R. I. Killey, and P. Bayvel, "The Gaussian noise model in the presence of inter-channel stimulated Raman scattering," J. Lightwave Technol. 36, 3046–3055 (2018).
A. D'Amico, E. London, E. Virgillito, A. Napoli, and V. Curri, "Quality of transmission estimation for planning of disaggregated optical networks," in International Conference on Optical Network Design and Modeling (ONDM) (IEEE, 2020).
\begin{document}
\authortitle{Anders Bj\"orn and Jana Bj\"orn} {Local and semilocal Poincar\'e inequalities on metric spaces}
\author{ Anders Bj\"orn \\ \it\small Department of Mathematics, Link\"oping University, \\ \it\small SE-581 83 Link\"oping, Sweden\/{\rm ;} \it \small [email protected] \\ \\ Jana Bj\"orn \\ \it\small Department of Mathematics, Link\"oping University, \\ \it\small SE-581 83 Link\"oping, Sweden\/{\rm ;} \it \small [email protected] }
\date{To appear in \emph{J. Math. Pures Appl.}}
\title{Local and semilocal Poincar\'e inequalities on metric spaces}
\noindent{\small {\bf Abstract}. We consider several local versions of the doubling condition and Poincar\'e inequalities on metric measure spaces. Our first result is that in proper connected spaces, the weakest local assumptions self-improve to semilocal ones, i.e.\ holding within every ball.
We then study various geometrical and analytical consequences of such local assumptions, such as local quasiconvexity, self-improvement of Poincar\'e inequalities, existence of Lebesgue points, density of Lipschitz functions and quasicontinuity of Sobolev functions. It turns out that local versions of these properties hold under local assumptions, even though they are not always straightforward.
We also conclude that many qualitative, as well as quantitative, properties of {$p\mspace{1mu}$}-harmonic functions on metric spaces can be proved in various forms under such local assumptions, with the main exception being the Liouville theorem, which fails without global assumptions.
\noindent{\bf R\'esum\'e.} Nous consid\'erons plusieurs versions locales des conditions de doublement et des in\'egalit\'es de Poincar\'e dans des espaces m\'etriques mesur\'es. Notre premier r\'esultat stipule que dans un espace propre connexe, les hypoth\`eses locales les plus faibles s'am\'eliorent en semi-locales, c.\`a.d. elles sont valables dans chaque boule.
Nous \'etudions ensuite certaines cons\'equences g\'eom\'etriques et analytiques de telles hypoth\`eses locales tel que la quasi-convexit\'e locale, l'auto am\'elioration des in\'egalit\'es de Poincar\'e, l'existence des points Lebesgue, la densit\'e des fonctions Lipschitz et la quasi-continuit\'e des fonctions Sobolev. Il s'av\`ere que les versions locales de ces propri\'et\'es restent valables sous les hypoth\`eses locales m\^eme si elles ne sont pas toujours imm\'ediates.
Nous concluons \'egalement que sous telles hypoth\`eses locales, plusieurs propri\'et\'es qualitatives, ainsi que quantitatives, des fonctions p-harmoniques sur des espaces m\'etriques peuvent \^etre prouv\'ees sous diverses formes, l'exception principale \'etant le th\'eor\`eme de Liouville qui \'echoue sans hypoth\`eses globales.
\noindent {\small \emph{Key words and phrases}: capacity, density of Lipschitz functions, Lebesgue point, local doubling, metric measure space, Newtonian space, nonlinear potential theory, {$p\mspace{1mu}$}-harmonic function, Poincar\'e inequality, quasicontinuity, quasiminimizer, semilocal doubling, Sobolev space. }
\noindent {\small Mathematics Subject Classification (2010): Primary: 31E05; Secondary: 30L99, 31C45, 35J60, 46E35 } } \section{Introduction}
In the last two decades, an extensive part of first-order analysis, such as the Sobolev space theory and nonlinear potential theory for {$p\mspace{1mu}$}-harmonic functions, has been developed on metric spaces. Standard assumptions are very often that $\mu$ is a doubling measure supporting a {$p\mspace{1mu}$}-Poincar\'e inequality and that the space $X$ is complete. The doubling condition controls changes in scales, while the Poincar\'e inequality guarantees that functions are controlled by their so-called upper gradients. Both of these conditions play a vital role in many proofs. These assumptions are usually imposed globally on the whole space.
In this paper we study how these conditions can be relaxed and replaced by similar local or semilocal assumptions, while retaining most of the important consequences. We assume throughout the paper that $1 \le p<\infty$ and that $X=(X,d,\mu)$ is a metric space equipped with a metric $d$ and a positive complete Borel measure $\mu$ such that $0<\mu(B)<\infty$ for all balls $B \subset X$.
\begin{deff} The measure $\mu$ is \emph{locally doubling} if for every $x \in X$ there are $r,C>0$ (depending on $x$) such that $\mu(2B)\le C \mu(B)$ for all balls $B \subset B(x,r)$. If such a $C$ exists for all $x \in X$ and all $r>0$, then $\mu$ is \emph{semilocally doubling}. (Here and below, $\lambda B$ stands for the ball concentric with $B$ and with $\lambda$-times the radius.)
The measure $\mu$ supports a \emph{local {$p\mspace{1mu}$}-Poincar\'e inequality}, $p \ge1$, if for every $x \in X$ there are $r,C>0$ and $\lambda \ge 1$ such that for all balls $B\subset B(x,r)$, all integrable functions $u$ on $\lambda B$ and all upper gradients $g$ of $u$, \begin{equation} \label{eq-def-local-PI-intro}
\vint_{B} |u-u_B| \,d\mu
\le C r_B \biggl( \vint_{\lambda B} g^{p} \,d\mu \biggr)^{1/p}, \end{equation} where $ u_B :=\vint_B u \,d\mu := \int_B u\, d\mu/\mu(B)$. If such $C$ and $\lambda$ exist for all $x$ and all $r$, then $X$ supports a \emph{semilocal {$p\mspace{1mu}$}-Poincar\'e inequality}. \end{deff}
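A simple example to keep in mind is the weighted measure $d\mu=e^{|x|}\,dx$ on $\mathbf{R}^n$ (with the Euclidean metric). On every ball $B(x_0,r_0)$ the weight $e^{|x|}$ is comparable to a constant, with comparison constant depending only on $r_0$, and so $\mu$ is semilocally doubling and supports a semilocal $1$-Poincar\'e inequality. On the other hand, $\mu(B(0,2r))/\mu(B(0,r))\to\infty$ as $r\to\infty$, so $\mu$ is not globally doubling.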
It may be worth comparing this with the definition of local function spaces. A function $f \in L^1_{\rm loc}(X)$ if for every $x \in X$ there is $r>0$ such that $f \in L^1(B(x,r))$. If $X$ is proper (i.e.\ all closed bounded sets are compact), then it is a well-known (and useful) fact that this is equivalent to requiring that $f \in L^1_{\rm loc}(B)$ for every ball $B$ in $X$. It turns out that a similar fact is true for the doubling property. Note that if $\mu$ is globally doubling, then $X$ is proper if and only if it is complete.
\begin{prop} \label{prop-semilocal-doubling-intro} If $X$ is proper and $\mu$ is locally doubling, then $\mu$ is semilocally doubling. \end{prop}
This is perhaps not very surprising, and essentially just needs a standard compactness argument to be shown. That a similar result is true also for Poincar\'e inequalities is far less obvious, and requires several pages to prove. Poincar\'e inequalities are intimately related to connectivity properties, and thus the precise statement is as follows. The assumptions of properness and connectedness cannot be dropped.
\begin{thm} \label{thm-PI-intro} If $X$ is proper and connected and $\mu$ is locally doubling and supports a local {$p\mspace{1mu}$}-Poincar\'e inequality, then it supports a semilocal {$p\mspace{1mu}$}-Poincar\'e inequality. \end{thm}
As already mentioned, in much of the metric space literature on first-order analysis it is assumed that $\mu$ is globally doubling and supports a global {$p\mspace{1mu}$}-Poincar\'e inequality, while it is folklore that much of the theory holds under local assumptions. For instance, the assumptions are global in the monographs Haj\l asz--Koskela~\cite{HaKo}, Bj\"orn--Bj\"orn~\cite{BBbook} and Heinonen--Koskela--Shanmugalingam--Tyson~\cite{HKSTbook}. In \cite{HaKo} and \cite{BBbook} it is mentioned that Riemannian manifolds with Ricci curvature bounded from below support semilocal assumptions with the implicit implication that most of the developed theory holds under these weaker assumptions.
There are some papers requiring local assumptions, but they often take different forms from paper to paper. Sometimes it is assumed that the constants involved are uniform, something we do not assume (but for Section~\ref{sect-local-unif}). Such assumptions (of different kinds) are e.g.\ assumed in Cheeger~\cite{Cheeg}, Danielli--Garofalo--Marola~\cite{DaGaMa}, Garofalo--Marola~\cite{GaMa} and Holopainen--Shanmugalingam~\cite{HoSh}. The requirements are in all cases more restrictive than our local assumptions, and those in \cite{Cheeg} are more restrictive than our semilocal assumptions.
Once we have established Proposition~\ref{prop-semilocal-doubling-intro} and Theorem~\ref{thm-PI-intro} (which we do in Sections~\ref{sect-doubling} and~\ref{sect-PI}), we take a look at which useful consequences of global doubling and Poincar\'e inequalities can be obtained already under (semi)local assumptions, with and without properness. We study the self-improvement of Poincar\'e inequalities, Lebesgue points, density of Lipschitz and locally Lipschitz functions, quasicontinuity and {$p\mspace{1mu}$}-harmonic functions under (semi)local assumptions.
In Section~\ref{sect-better-PI} we concentrate on the self-improvement of Poincar\'e inequalities and prove two results: one improving the norm on the left-hand side of \eqref{eq-def-local-PI-intro} and the other one the norm on the right-hand side. In Section~\ref{sect-local-unif} we see how uniformly local assumptions give slightly stronger self-improvement conclusions. Under global assumptions these important results are due to Haj\l asz--Koskela~\cite{HaKo-CR}, \cite{HaKo} and Keith--Zhong~\cite{KZ}, respectively. In particular, the latter result can be localized in the following way.
\begin{thm} \label{thm-PI-uniform-q-intro} If $X$ is locally compact and $\mu$ is locally doubling and supports a local {$p\mspace{1mu}$}-Poincar\'e inequality, both with uniform constants $C$ and $\lambda$ and with $p>1$, then $X$ supports a local $q$-Poincar\'e inequality for some $q<p$ with new uniform constants $C$ and $\lambda$. \end{thm}
Neither in the assumptions nor in the conclusion do we assume uniformity in the radius $r$ of the balls $B(x,r)$ involved. It is worth noting that the corresponding result with global assumptions and a global conclusion fails in locally compact spaces, due to a counterexample by Koskela~\cite{Koskela}. In complete spaces it holds by Keith--Zhong~\cite{KZ}.
In Section~\ref{sect-Leb}, we turn to Lebesgue points and show that Sobolev (Newtonian) functions have Lebesgue points outside a set of zero {$p\mspace{1mu}$}-capacity. Traditionally, as well as in metric spaces, such results are shown using the density of continuous functions. Here we avoid using this property and instead exploit the Newtonian theory in a different and novel way, which may be of interest also under global assumptions.
In the next section we consider the density of Lipschitz and locally Lipschitz functions in the Sobolev (Newtonian) space $N^{1,p}(X)$. There are two existing results under global assumptions in the literature, one assuming doubling and a Poincar\'e inequality, due to Shanmugalingam~\cite{Sh-rev}, and the other more recent one assuming $p>1$, completeness and doubling but no Poincar\'e inequality, due to Ambrosio--Colombo--Di Marino~\cite{AmbCD} and Ambrosio--Gigli--Savar\'e~\cite{AmbGS}. We extend both results to local assumptions, and combine them. Among other results, we obtain the following ``local-to-global'' density result.
\begin{thm} \label{thm-Lipc-dense-intro} If $X$ is proper and connected and $\mu$ is locally doubling and supports a local {$p\mspace{1mu}$}-Poincar\'e inequality then Lipschitz functions with compact support are dense in $N^{1,p}(X)$. \end{thm}
In Section~\ref{sect-qcont} we look at consequences of the obtained density results, primarily quasicontinuity and various properties of the Sobolev capacity ${C_p}$. We end the paper with a discussion on how much of the nonlinear potential theory for {$p\mspace{1mu}$}-harmonic functions, developed e.g.\ in the book \cite{BBbook}, holds under local assumptions, and explain that indeed most of the results therein extend to this setting, with the main exception being the Liouville theorem, which actually fails without global assumptions. As most of the results in \cite{BBbook} are either local or semilocal (e.g.\ on bounded domains) this is not so surprising, and indeed this has already been hinted upon in the literature, as mentioned above.
The importance of distinguishing between local and global assumptions is certainly more apparent when discussing global properties, such as the Dirichlet problem on unbounded domains (as in Hansevi~\cite{Hansevi1}, \cite{Hansevi2}) or existence of global singular functions (as in Holopainen--Shanmugalingam~\cite{HoSh}). We hope that the theory developed in this paper will provide a suitable foundation for such studies.
\section{Upper gradients and Newtonian spaces} \label{sect-prelim}
We assume throughout the paper that $1 \le p<\infty$ and that $X=(X,d,\mu)$ is a metric space equipped with a metric $d$ and a positive complete Borel measure $\mu$ such that $0 < \mu(B)<\infty$ for all balls $B \subset X$. It follows that $X$ is separable and Lindel\"of. Proofs of the results in this section can be found in the monographs Bj\"orn--Bj\"orn~\cite{BBbook} and Heinonen--Koskela--Shanmugalingam--Tyson~\cite{HKSTbook}.
A \emph{curve} is a continuous mapping from an interval, and a \emph{rectifiable} curve is a curve with finite length. We will only consider curves which are compact and rectifiable, and thus each curve can be parameterized by its arc length $ds$. A property is said to hold for \emph{{$p\mspace{1mu}$}-almost every curve} if it fails only for a curve family $\Gamma$ with zero {$p\mspace{1mu}$}-modulus, i.e.\ there exists $0\le\rho\in L^p(X)$ such that $\int_\gamma \rho\,ds=\infty$ for every rectifiable curve $\gamma\in\Gamma$.
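For instance, if $E\subset X$ is a Borel set with $\mu(E)=0$, then {$p\mspace{1mu}$}-almost every curve $\gamma$ satisfies $\int_\gamma \chi_E\,ds=0$; this is seen by taking $\rho=\infty\chi_E$ in the definition above, which belongs to $L^p(X)$ since $\mu(E)=0$.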
Following Heinonen--Kos\-ke\-la~\cite{HeKo98}, we introduce upper gradients as follows (they called them very weak gradients).
\begin{deff} \label{deff-ug} A Borel function $g: X \to [0,\infty]$ is an \emph{upper gradient} of a function $f:X \to {\overline{\R}}:=[-\infty,\infty]$ if for all nonconstant rectifiable curves $\gamma \colon [0,l_{\gamma}] \to X$, \begin{equation} \label{ug-cond}
|f(\gamma(0)) - f(\gamma(l_{\gamma}))| \le \int_{\gamma} g\,ds, \end{equation} where the left-hand side is considered to be $\infty$ whenever at least one of the terms therein is infinite. If $g:X \to [0,\infty]$ is measurable and \eqref{ug-cond} holds for {$p\mspace{1mu}$}-almost every nonconstant rectifiable curve, then $g$ is a \emph{{$p\mspace{1mu}$}-weak upper gradient} of~$f$. \end{deff}
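For example, if $f\in C^1(\mathbf{R}^n)$, then $g=|\nabla f|$ is an upper gradient of $f$: if $\gamma$ is parameterized by arc length, then $|\gamma'|\le1$ a.e.\ and the chain rule gives
\[
   |f(\gamma(0)) - f(\gamma(l_{\gamma}))|
   = \biggl|\int_0^{l_\gamma} \nabla f(\gamma(t))\cdot\gamma'(t)\,dt\biggr|
   \le \int_\gamma |\nabla f|\,ds.
\]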
The {$p\mspace{1mu}$}-weak upper gradients were introduced in Koskela--MacManus~\cite{KoMc}. It was also shown therein that if $g \in L^p\loc(X)$ is a {$p\mspace{1mu}$}-weak upper gradient of $f$, then one can find a sequence $\{g_j\}_{j=1}^\infty$
of upper gradients of $f$ such that $\|g_j-g\|_{L^p(X)} \to 0$. If $f$ has an upper gradient in $L^p\loc(X)$, then it has an a.e.\ unique \emph{minimal {$p\mspace{1mu}$}-weak upper gradient} $g_f \in L^p\loc(X)$ in the sense that for every {$p\mspace{1mu}$}-weak upper gradient $g \in L^p\loc(X)$ of $f$ we have $g_f \le g$ a.e., see Shan\-mu\-ga\-lin\-gam~\cite{Sh-harm}. Following Shanmugalingam~\cite{Sh-rev}, we define a version of Sobolev spaces on the metric space $X$.
\begin{deff} \label{deff-Np} For a measurable function $f:X\to {\overline{\R}}$, let \[
\|f\|_{N^{1,p}(X)} = \biggl( \int_X |f|^p \, d\mu
+ \inf_g \int_X g^p \, d\mu \biggr)^{1/p}, \] where the infimum is taken over all upper gradients $g$ of $f$. The \emph{Newtonian space} on $X$ is \[
N^{1,p} (X) = \{f: \|f\|_{N^{1,p}(X)} <\infty \}. \] \end{deff}
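When $X=\mathbf{R}^n$ is equipped with the Lebesgue measure and $p>1$, the space $N^{1,p}(\mathbf{R}^n)$ coincides with the classical Sobolev space $W^{1,p}(\mathbf{R}^n)$ (up to the choice of representatives), and $g_f=|\nabla f|$ a.e., see Shanmugalingam~\cite{Sh-rev}.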
The quotient space $N^{1,p}(X)/{\sim}$, where $f \sim h$ if and only if $\|f-h\|_{N^{1,p}(X)}=0$, is a Banach space and a lattice, see Shan\-mu\-ga\-lin\-gam~\cite{Sh-rev}. In this paper we assume that functions in $N^{1,p}(X)$
are defined everywhere (with values in ${\overline{\R}}$), not just up to an equivalence class in the corresponding function space. This is important for upper gradients to make sense.
For a measurable set $E\subset X$, the Newtonian space $N^{1,p}(E)$ is defined by considering $(E,d|_E,\mu|_E)$ as a metric space in its own right. We say that $f \in N^{1,p}\loc(E)$ if for every $x \in E$ there exists a ball $B_x\ni x$ such that $f \in N^{1,p}(B_x \cap E)$. If $f,h \in N^{1,p}\loc(X)$, then $g_f=g_h$ a.e.\ in $\{x \in X : f(x)=h(x)\}$, in particular for $c \in \mathbf{R}$ we have $g_{\min\{f,c\}}=g_f \chi_{\{f < c\}}$ a.e.
\begin{deff} The (Sobolev) \emph{capacity} of a set $E$ is the number \begin{equation*}
{C_p}(E) =\inf_u \|u\|_{N^{1,p}(X)}^p, \end{equation*} where the infimum is taken over all $u\in N^{1,p} (X)$ such that $u=1$ on $E$. \end{deff}
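Note that every $u$ admissible in this definition satisfies $\int_X |u|^p\,d\mu \ge \mu(E)$, so that ${C_p}(E)\ge\mu(E)$. In particular, sets of capacity zero are $\mu$-null.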
We say that a property holds \emph{quasieverywhere} (q.e.)\ if the set of points for which the property does not hold has capacity zero. The capacity is the correct gauge for distinguishing between two Newtonian functions. If $u \in N^{1,p}(X)$, then $u \sim v$ if and only if $u=v$ q.e. Moreover, if $u,v \in N^{1,p}\loc(X)$ and $u= v$ a.e., then $u=v$ q.e.
We let $B=B(x,r)=\{y \in X : d(x,y) < r\}$ denote the ball with centre $x$ and radius $r$, and let $\lambda B=B(x,\lambda r)$. We assume throughout the paper that balls are open. In metric spaces it can happen that balls with different centres and/or radii denote the same set. We will however make the convention that a ball $B$ comes with a predetermined centre and radius $r_B$. Note that it can happen that $B(x_0,r_0) \subset B(x_1,r_1)$ even when $r_0 > r_1$. In disconnected spaces this can happen also when $r_0 > 2 r_1$. If $X$ is connected, then $r_0 >2r_1$ is possible only when $B(x_0,r_0)= B(x_1,r_1)=X$.
\section{Local doubling} \label{sect-doubling}
One can think of several different possibilities for local assumptions. We will make them precise below. In this section we concentrate on the doubling property and then consider Poincar\'e inequalities in the next section.
\begin{deff} \label{def-local-doubl-mu} The measure \emph{$\mu$ is doubling within $B(x_0,r_0)$} if there is $C>0$ (depending on $x_0$ and $r_0$) such that $\mu(2B)\le C \mu(B)$ for all balls $B \subset B(x_0,r_0)$.
We say that $\mu$ is \emph{locally doubling} (on $X$) if for every $x_0 \in X$ there is $r_0>0$ (depending on $x_0$) such that $\mu$ is doubling within $B(x_0,r_0)$.
If $\mu$ is doubling within every ball $B(x_0,r_0)$ then it is \emph{semilocally doubling} (on $X$), and if moreover $C$ is independent of $x_0$ and $r_0$, then $\mu$ is \emph{globally doubling} (on $X$). \end{deff}
Note that when saying that $\mu$ is doubling \emph{within} $B(x_0,r_0)$ this is (implicitly) done with respect to $X$ as the balls are all with respect to $X$, and moreover $2B$ does not have to be a subset of $B(x_0,r_0)$. This is not the same as saying that $\mu$ is globally doubling \emph{on} $B(x_0,r_0)$, which refers to balls with respect to $B(x_0,r_0)$.
If $\mu$ is locally doubling on $X$ and $\Omega \subset X$ is open, then $\mu$ is also locally doubling on $\Omega$. This hereditary property fails for semilocal and global doubling, see Remark~\ref{rmk-global-restriction} below.
An even weaker property is that $\mu$ is \emph{pointwise doubling} at $x_0 \in X$ if there are $C,r_0>0$ such that $\mu(B(x_0,2r)) \le C \mu(B(x_0,r))$ for $0<r<r_0$. Requiring such a pointwise assumption and a similar pointwise Poincar\'e inequality at every $x_0 \in X$ is too weak for most results. See however, Bj\"orn--Bj\"orn--Lehrb\"ack~\cite{BBLeh1}, \cite{BBLehFirstthin} for capacity estimates using such pointwise assumptions.
\begin{deff} \label{def-local-doubl-X} The space $X$ is \emph{globally doubling} if there is a constant $N$ such that every ball $B(x,r)$ can be covered by at most $N$ balls with radii $\frac{1}{2}r$.
The space $X$ is \emph{locally doubling} if for every $x_0 \in X$ there is $r_0>0$ such that $B(x_0,r_0)$ is globally doubling. Moreover, $X$ is \emph{semilocally doubling} if every ball $B \subset X$ is globally doubling. \end{deff}
\begin{remark} \label{rmk-global-restriction}
Let $B$ be a ball. If $\mu$ is globally doubling then it does not follow that $\mu|_B$ is globally doubling on $B$, see Example~\ref{ex-local-ass-not-inherit}. On the other hand if the space $X$ is globally doubling then so is $B$ (as a metric space). This is why Definition~\ref{def-local-doubl-X} differs from Definition~\ref{def-local-doubl-mu} in that it considers balls with respect to $B$ which are not necessarily balls with respect to $X$. It is possible to give an equivalent definition of (semi)local doubling of $X$ more in the spirit of Definition~\ref{def-local-doubl-mu}, which only uses balls with respect to $X$, but such a definition is more technical to state and hence we prefer our Definition~\ref{def-local-doubl-X}. \end{remark}
It is rather immediate that every subset of a globally doubling metric space is itself globally doubling, and hence the same hereditary property also holds for (semi)local doubling. It is also easy to see that every bounded set in a semilocally doubling metric space is totally bounded. See Heinonen~\cite[Section~10.13]{heinonen} for more on doubling metric spaces.
If $\mu$ is (semi)locally resp.\ globally doubling, then so is $X$ by the following result.
\begin{prop} \label{prop-doubling-mu-2/3} Assume that $\mu$ is doubling within $B_0=B(x_0,r_0)$ in the sense of Definition~\ref{def-local-doubl-mu}. Then $\delta B_0$ is globally doubling for every $\delta< \tfrac{2}{3}$, with $N$ depending only on $\delta$ and the doubling constant within $B_0$. \end{prop}
Example~\ref{ex-doubling-2/3} below shows that the constant $\tfrac{2}{3}$ is optimal and that it can even happen that $\tfrac23 B_0$ is not totally bounded.
\begin{proof} Let $B'=B(x,r) \cap \delta B_0$ be an arbitrary ball with respect to $\delta B_0$ for some $\delta < \tfrac{2}{3}$. Then $x \in \delta B_0$ and we may assume that $r \le 2 \delta r_0$. Let $B=B(x,r)$ and $r'=\min\bigl\{\tfrac{1}{4}r,\tfrac{1}{24}r_0\bigr\}$. Assume that $x_i \in B'$, $i=1,\ldots,N$, are such that $d(x_i,x_j) \ge 2r'$ if $i \ne j$. Then $B(x_i,r')$ are pairwise disjoint and $B(x_i,8r') \subset B_0$. We shall show that there is a bound for $N$. Let $C_\mu$ be the doubling constant for $\mu$ within $B_0$.
If $r'=\tfrac{1}{4}r$, then \begin{equation} \label{eq-C-mu-4}
\mu(2B)
\le \mu(B(x_i,16r'))
\le C_\mu \mu(B(x_i,8r'))
\le C_\mu^4 \mu(B(x_i,r')), \end{equation} and hence \[
N \min_i \mu(B(x_i,r'))
\le \sum_{i=1}^N \mu(B(x_i,r'))
\le \mu(2B)
\le C_\mu^4 \min_i \mu(B(x_i,r')), \] which yields that $N \le C_\mu^4$. On the other hand, if $r'=\tfrac{1}{24}r_0$ then as in~\eqref{eq-C-mu-4}, \[
\mu\bigl(\bigl(\tfrac{2}{3}-\delta\bigr) B_0\bigr)
\le \mu\bigl(B\bigl (x_i,\tfrac{2}{3}r_0\bigr)\bigr)
= \mu\bigl(B(x_i,16r'))
\le C_\mu^{4} \mu(B(x_i,r')). \] Therefore \begin{align*}
N \min_i \mu(B(x_i,r'))
& \le \sum_{i=1}^N \mu(B(x_i,r'))
\le \mu(B_0) \\
&\le \frac{C_\mu^{4}\mu(B_0)}{ \mu\bigl(\bigl(\tfrac{2}{3}-\delta\bigr) B_0\bigr)}
\min_i \mu(B(x_i,r'))
\le M \min_i \mu(B(x_i,r')), \end{align*} where $M$ only depends on $C_\mu$ and $\delta$. Hence $N \le M$.
We can thus find a maximal pairwise disjoint collection of $\{B(x_i,r')\}_{i=1}^N$ with $N \le \max\{M,C_\mu^4\}$ elements. As the collection is maximal we see that \[
B' \subset \bigcup_{i=1}^N B(x_i,2r') \subset \bigcup_{i=1}^N B\bigl(x_i,\tfrac{1}{2}r\bigr). \] Hence $\delta B_0$ is globally doubling. \end{proof}
\begin{example} \label{ex-doubling-2/3} Let $I_0=\{0\}\times[-1,0]\subset\mathbf{R}^2$ and $I_j=\{j\}\times\bigl[\tfrac23-\tfrac13\cdot 2^{-j},1\bigr]\subset\mathbf{R}^2$, $j=1,2,\ldots$, be vertical linear segments in the plane. Equip each $I_j$ with a multiple of the 1-dimensional Hausdorff measure so that $\mu(I_j)=2^{-j}$, $j=0,1,\ldots$. Let $X=\bigcup_{j=0}^\infty I_j$, equipped with $\mu$ and the metric $d$ so that for $x=(j,x_2)\in I_j$ and $y=(k,y_2)\in I_k$, \[ d(x,y)= \begin{cases}
|x_2-y_2|, & \text{if } j=k \text{ or } j=0 \text{ or } k=0, \\
1, & \text{if } j \ne k, \ j,k\ge 1.
\end{cases} \] Let $x_0=(0,0)$. Then it is easily verified that $\mu$ is doubling within $B_0=B(x_0,1)$. However, $\tfrac23 B_0$ is not totally bounded, and thus not globally doubling.
This example also shows that the next two results are sharp. More precisely, $X$ is bounded and complete but noncompact and thus not proper. Both $X$ and $\mu$ are locally doubling but neither is semilocally doubling. \end{example}
The most common global assumptions are that $X$ is complete and supports a global {$p\mspace{1mu}$}-Poincar\'e inequality and that $\mu$ is globally doubling. It then follows that $X$ is proper and connected (and even quasiconvex). Under local assumptions these properties need to be imposed separately. Connectedness is strongly related to Poincar\'e inequalities, which we discuss in the next section. Properness always implies completeness and the proof of \cite[Proposition~3.1]{BBbook} also shows the following equivalence.
\begin{lem} \label{lem-proper-equiv-complete} Assume that $X$ is semilocally doubling. Then $X$ is proper if and only if $X$ is complete. \end{lem}
If $X$ is only locally doubling and complete, then it is locally compact but not necessarily proper as the following example shows.
\begin{example} Let $X=\mathbf{R}^2$ equipped with the Gaussian measure
$Ce^{-x_1^2-x_2^2}\,dx$ but with the distance $d(x,y)=\arctan |x-y|$. Then $X$ is bounded and complete, but not proper (as $X$ is a closed bounded noncompact set). At the same time, $\mu$ clearly is locally doubling and
supports a local $1$-Poincar\'e inequality. \end{example}
In proper spaces, local and semilocal doubling are equivalent, as we shall now see.
\begin{prop} \label{prop-semilocal-doubling} Assume that $X$ is proper. If $X$ resp.\ $\mu$ is locally doubling, then it is semilocally doubling. \end{prop}
\begin{proof} Let $B_0$ be a ball. If $X$ is locally doubling, we can for each $x \in \itoverline{B}_0$ find a globally doubling ball $B_x \ni x$. Since $X$ is proper, $\itoverline{B}_0$ is compact and thus
we can find a finite set $\{x_i\}_{i=1}^N$ such that $B_0 \subset \bigcup_{i=1}^N B_{x_i}$. It is easily seen that a finite union of globally doubling sets is globally doubling, and hence $B_0$ is globally doubling.
Now assume instead that $\mu$ is locally doubling and $B_0=B(x_0,r_0)$. By enlarging $r_0$, if necessary, we may assume that either $r_0=\dist(x_0, X \setminus B_0)$ or $B_0=X$. Since $\itoverline{B}_0$ is compact (as $X$ is proper), it can be covered by finitely many balls $B_j=B(x_j,r_j)$ such that $\mu$ is doubling within each ball $2B_j$. Let $r'=\min_{j} r_j$. Again by compactness, we can cover $\itoverline{B}_0$ by finitely many balls $B'_j$ with radii $r'/2$. Let $B(x,r) \subset B_0$ be arbitrary and find $j$ and $j'$ such that $x \in B_j$ and $x \in B'_{j'}$.
If $r\le r'$ then $B(x,r)\subset 2B_j$ and hence the conclusion of the proposition holds for $B(x,r)$ with constant $\max_{j} C_j$. On the other hand, if $r> r'$ then $x\in B'_{j'}$ and hence $B'_{j'}\subset B(x,r)$, which yields the lower bound \[ \mu(B(x,r))\ge \min_{j} \mu(B'_j)>0. \] Since $B(x,2r)\subset B(x_0,5r_0)$, we also have a uniform upper bound for $\mu(B(x,2r))$, which proves that $\mu$ is semilocally doubling. \end{proof}
We will need the following local maximal function estimate.
\begin{prop} \label{prop-maximal-fn} Assume that $\mu$ is doubling within the ball $B_0$ in the sense of Definition~\ref{def-local-doubl-mu}, and let $\Omega \subset B_0$ be open. For $f\in L^1(\Omega)$, define the noncentred local maximal function \begin{equation} \label{eq-def-local-max-fn}
M^*_{\Omega,B_0} f(x) := \sup_{B}
\vint_{B} f\,d\mu,
\quad x \in \Omega, \end{equation} where the supremum is taken over all balls $B$ such that $x \in B \subset \Omega$ and $\frac52 B\subset B_0$. Then \begin{equation} \label{eq-max-weak-L1}
\mu(E_\tau) \le \frac{C}{\tau}\int_{E_\tau}|f|\,d\mu, \quad \text{where } E_\tau=\{x\in \Omega: M^*_{\Omega,B_0} f(x)>\tau\}. \end{equation} Moreover, if $t>1$, then \[
\int_{\Omega} (M^*_{\Omega,B_0}f)^t \, d\mu \le C_t \int_{\Omega} |f|^t \, d\mu. \] \end{prop}
The constant $5$ in the factor $\frac52$ above and in the proof below comes from the $5$-covering lemma. It is well known that also the $(3+\varepsilon)$-covering lemma holds, for every $\varepsilon>0$, see \cite[Remark~1.8, Example~1.9 and p.~36]{BBbook}. Thus the factor $\frac{5}{2}$ can be replaced by any factor $>\frac{3}{2}$, which would also make it possible to decrease some other constants in this paper. For simplicity we have chosen to just rely on the $5$-covering lemma, as is common practice in analysis on metric spaces.
\begin{proof} Since $\mu$ is doubling within the ball $B_0$, it is true that $\mu(5B) \le C \mu(B)$ for every ball $B$ used
in \eqref{eq-def-local-max-fn}. Thus, the proof of Lemma~3.12 in \cite{BBbook} directly applies also here showing the first estimate \eqref{eq-max-weak-L1}. The second estimate then follows just as in the proof of Theorem~3.13 in \cite{BBbook}, with $X$ therein replaced by~$\Omega$. \end{proof}
We end the section by noting the following consequence of local doubling, which will be useful later.
\begin{thm} \label{thm-Leb-ae} \textup{(The Lebesgue differentiation theorem)} Assume that $\mu$ is locally doubling. If $f \in L^1_{\rm loc}(X)$, then a.e.\ point is a Lebesgue point for $f$. \end{thm}
\begin{proof} As $X$ is Lindel\"of, we can cover $X$ by balls $\{B_j\}_{j=1}^\infty$ such that $f \in L^1(10B_j)$ and $\mu$ is doubling within each $10 B_j$. By the proof of Theorem~1.6 in Heinonen~\cite{heinonen}, the Vitali covering theorem holds in each $B_j$. It then follows from Remark~1.13 in \cite{heinonen} that the Lebesgue differentiation theorem holds within each $B_j$, and hence in $X$, as the union is countable. \end{proof}
\section{Local Poincar\'e inequalities} \label{sect-PI}
In this section we study local aspects of the Poincar\'e inequality similarly to the doubling property in Section~\ref{sect-doubling}. It will turn out that connectivity plays an important role here.
\begin{deff} \label{def-PI} Let $1 \le q < \infty$. We say that the \emph{$(q,p)$-Poincar\'e inequality holds within $B(x_0,r_0)$} if there are constants $C>0$ and $\lambda \ge 1$ (depending on $x_0$ and $r_0$) such that for all balls $B\subset B(x_0,r_0)$, all integrable functions $u$ on $\lambda B$, and all upper gradients $g$ of $u$, \begin{equation} \label{eq-def-local-PI}
\biggl(\vint_{B} |u-u_B|^q \,d\mu\biggr)^{1/q}
\le C r_B \biggl( \vint_{\lambda B} g^{p} \,d\mu \biggr)^{1/p}. \end{equation} We also say that $X$ (or $\mu$) supports a \emph{local $(q,p)$-Poincar\'e inequality} (on $X$) if for every $x_0 \in X$ there is $r_0$ (depending on $x_0$) such that the $(q,p)$-Poincar\'e inequality holds within $B(x_0,r_0)$.
If the $(q,p)$-Poincar\'e inequality holds within every ball $B(x_0,r_0)$ then $X$ supports a \emph{semilocal $(q,p)$-Poincar\'e inequality}, and if moreover $C$ and $\lambda$ are independent of $x_0$ and $r_0$, then $X$ supports a \emph{global $(q,p)$-Poincar\'e inequality}.
If $q=1$ we usually just write \emph{{$p\mspace{1mu}$}-Poincar\'e inequality}. \end{deff}
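For example, unweighted $\mathbf{R}^n$ supports a global $(1,p)$-Poincar\'e inequality for every $p\ge1$, with $\lambda=1$ and $C$ depending only on $n$; this is the classical Poincar\'e inequality on balls.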
The inequality \eqref{eq-def-local-PI} can equivalently be required for all integrable functions $u$ on $\lambda B$, and all {$p\mspace{1mu}$}-weak upper gradients $g$ of $u$; see \cite[Proposition~4.13]{BBbook} for other equivalent formulations.
As in the case of the doubling condition, local Poincar\'e inequalities
are inherited by open subsets, i.e.\ if $\Omega \subset X$ is open and $X$ supports a local $(q,p)$-Poincar\'e inequality, then so does $\Omega$. This hereditary property fails for semilocal and global Poincar\'e inequalities.
\begin{remark} When defining (semi)local doubling in Definition~\ref{def-local-doubl-mu} it is primarily a matter of taste (except for the constant $\tfrac23$ in Proposition~\ref{prop-doubling-mu-2/3}) whether the condition is required for $B \subset B(x_0,r_0)$ or for $B \subset 2B \subset B(x_0,r_0)$, and the same is true for $B$ and $\lambda B$ in the local Poincar\'e inequalities in Definition~\ref{def-PI}. However, for semilocal Poincar\'e inequalities it is
vital that the condition is for all $B \subset B_0$, rather than for all $B \subset \lambda B \subset B_0$, which is a weaker requirement since $\lambda$ is allowed to depend on $B_0$. (Consider e.g.\ $X=(-\infty,0]\cup[1,\infty)$ with the Euclidean metric and Lebesgue measure. Then for $x_0=0$, $r_0\ge2$ and $\lambda:=r_0$, the requirement $\lambda B\subset B(x_0,r_0)$ implies that $r_B\le1$ and hence \eqref{eq-def-local-PI} holds for all such balls, while it clearly fails for $B(0,2)\subset B(x_0,r_0)$.) \end{remark}
\begin{example} \label{ex-local-ass-not-inherit} If a Poincar\'e inequality holds within $B_0=B(x_0,r_0)$, then it does not mean that $B_0$ (or $\itoverline{B}_0$) itself (as a metric space) supports a global Poincar\'e inequality. Similarly, if $\mu$ is doubling within $B_0$,
then it does not follow that $\mu|_{B_0}$ (or $\mu|_{\itoverline{B}_0}$) is doubling. To see this, let \[
X=\mathbf{R}^2 \setminus \bigl\{(x,y)\in\mathbf{R}^2: 0<y<\sqrt{1-x^2}-e^{-1/|x|}\bigr\}, \] i.e.\ $X$ is $\mathbf{R}^2$ with the open unit upper half-disc removed and the curved cusp of exponential type at $(0,1)$, \[
C_0=\bigl\{(x,y)\in\mathbf{R}^2: 0<\sqrt{1-x^2}-e^{-1/|x|}\le y<\sqrt{1-x^2}\bigr\}, \] inserted back into the hole. Then $\interior X$ is a uniform domain and hence the Lebesgue measure, restricted to it, is globally doubling and supports a global $1$-Poincar\'e inequality, by Theorem~4.4 in Bj\"orn--Shanmugalingam~\cite{BjShJMAA} (or \cite[Theorem~A.21]{BBbook}). By Aikawa--Shanmugalingam~\cite[Proposition~7.1]{AikSh05}, the same is true for $X$ itself.
Now, if $x_0$ is the origin and we let $B_0=B(x_0,1)$, then $B_0\cap X$ and $\itoverline{B_0\cap X}$ have the cusps $C_0$ and $\itoverline{C}_0$ in the vicinity of the point $(0,1)$. Hence the Lebesgue measure restricted to these sets is not doubling near or at this point.
Moreover, $C_0$ is disconnected at $(0,1)$ and $\itoverline{C}_0$ is essentially disconnected at the point $(0,1)$ (which has zero capacity with respect to $\itoverline{B_0\cap X}$ for all $p\ge 1$), so neither $B_0\cap X$ nor $\itoverline{B_0\cap X}$ supports any global or semilocal Poincar\'e inequalities. For $\itoverline{B_0\cap X}$ even the local doubling and all local Poincar\'e inequalities fail at $(0,1)$. \end{example}
Propositions~\ref{prop-semilocal-doubling-intro} and~\ref{prop-semilocal-doubling} about (semi)local doubling are rather straightforward. A bit more surprising, perhaps, is that Theorem~\ref{thm-PI-intro} is true for Poincar\'e inequalities. We will obtain the following more general version of Theorem~\ref{thm-PI-intro}.
\begin{thm} \label{thm-PI} If $X$ is proper and connected and $\mu$ is locally doubling and supports a local $(q,p)$-Poincar\'e inequality, then it supports a semilocal $(q,p)$-Poincar\'e inequality. \end{thm}
The proof of Theorem~\ref{thm-PI} will be split into a number
of lemmas, some of which may be of independent interest.
It will be concluded at the end of this section. But first, we discuss the assumptions in Theorem~\ref{thm-PI} as well as some consequences of local Poincar\'e inequalities. To start with, it is easily verified that if $X$ supports a semilocal {$p\mspace{1mu}$}-Poincar\'e inequality then it is connected, cf.\ the proof of \cite[Proposition~4.2]{BBbook}. The following example shows that this conclusion fails if we replace the semilocal assumption with a local one, even if $X$ is proper.
\begin{example} Let $X$ be the union of two disjoint closed balls in $\mathbf{R}^n$, which is proper and such that the Lebesgue measure is globally doubling and supports a local $1$-Poincar\'e inequality on $X$. However $X$ is not connected. \end{example}
The above example also shows that the connectedness assumption in Theorem~\ref{thm-PI} cannot be dropped, while the following example shows that neither can the properness assumption.
\begin{example} Let \[
X=(\itoverline{B(0,2)} \setminus B(0,1))
\cup \{x=(x_1,x_2): 0 < |x| \le 2 \text{ and } x_1x_2 \ge 0\}
\subset \mathbf{R}^2. \] Then $X$ is connected and the Lebesgue measure is globally doubling on $X$ and supports a local $1$-Poincar\'e inequality. However, $X$ is neither proper nor supports any semilocal Poincar\'e inequality. \end{example}
The following is a partial result on the way to proving Theorem~\ref{thm-PI}.
\begin{lem} \label{lem-PI-small-r-OK} If $X$ is proper and supports a local $(q,p)$-Poincar\'e inequality, then for every ball $B_0$ there exist $r',C>0$ and $\lambda \ge 1$ such that the $(q,p)$-Poincar\'e inequality~\eqref{eq-def-local-PI} holds for all balls $B=B(x,r)\subset B_0$ with $r\le r'$. \end{lem}
\begin{proof} As in the proof of Proposition~\ref{prop-semilocal-doubling}, the compact set $\itoverline{B}_0$ can be covered by finitely many balls $B_j=B(x_j,r_j)$ so that the $(q,p)$-Poincar\'e inequality holds within each $2B_j$, with constants $C_j$ and $\lambda_j$. Letting \[ r'=\min_{j} r_j, \quad C=\max_{j} C_j \quad \text{and} \quad \lambda=\max_{j} \lambda_j, \] together with the fact that $B(x,r)\subset2B_j$ for some $j$, concludes the proof. \end{proof}
We shall now see which connectivity properties follow from the local Poincar\'e inequality.
\begin{prop} \label{prop-ex-curve-in-B0} Assume that $X$ is locally compact and that $\mu$ is locally doubling and supports a local {$p\mspace{1mu}$}-Poincar\'e inequality. Then $X$ is locally quasiconvex {\rm(}and thus locally rectifiably pathconnected\/{\rm)}, i.e.\ for every $x_0\in X$ there are $r_0, L>0$ {\rm(}depending on $x_0${\rm)} such that every pair of points $x,y\in B(x_0,r_0)$ can be connected by a curve of length at most $L d(x,y)$.
If $X$ is moreover proper and connected, then it is semilocally quasiconvex, i.e.\ the above connectivity property holds in every ball $B(x_0,r_0)$ {\rm(}with $L$ depending on it\/{\rm)}. \end{prop}
Local quasiconvexity and its behaviour under various transformations of $X$ were considered by Buckley--Herron--Xie~\cite{BuckleyHerronXie}, cf.\ Section~2 and Proposition~4.2 therein. Note that if $X$ is connected and locally (rectifiably) pathconnected, then it is (rectifiably) pathconnected, as the (rectifiably) pathconnected components must be open. In particular, local quasiconvexity and connectedness imply that $X$ is rectifiably pathconnected, but not necessarily quasiconvex.
Later on it will be important to have the following more precise version of the above connectivity result, which will also be used to deduce Proposition~\ref{prop-ex-curve-in-B0}.
\begin{lem} \label{lem-ex-curve-in-B0} Let $x,y\in X$ and assume that the {$p\mspace{1mu}$}-Poincar\'e inequality and the doubling property for $\mu$ hold {\rm(}with constants $C_{\rm PI}$ and $C_\mu${\rm)} within $B_0=B(x,2d(x,y))$ in the sense of Definitions~\ref{def-local-doubl-mu} and~\ref{def-PI}. Let $\Lambda=3C_\mu^3C_{\rm PI}$. If the ball $\itoverline{\Lambda B}_0$ is compact then $x$ and $y$ can be connected by a curve in $\itoverline{\Lambda B}_0$, of length at most $L d(x,y)$, where $L=9\Lambda$. \end{lem}
Note that, as in \cite[Theorem~4.32]{BBbook}, the constants $\Lambda$ and $L$ are independent of the dilation constant $\lambda$ in the {$p\mspace{1mu}$}-Poincar\'e inequality.
\begin{proof} Let $\lambda$ be the dilation constant in the {$p\mspace{1mu}$}-Poincar\'e inequality within $B_0$. Following Semmes's chaining argument, define for $\varepsilon>0$ and $z\in \lambda B_0$, \[ \rho_\varepsilon(z)=\inf \sum_{i=1}^m d(x_{i-1},x_i), \] where the infimum is taken over all collections $\{x_i\}_{i=0}^m\subset X$ such that $x_0=x$, $x_m=z$ and $d(x_{i-1},x_i)<\varepsilon$ for all $i=1,2,\ldots,m$. Should there be no such chain, we let $\rho_\varepsilon(z)=10\Lambda d(x,y)$. Then it is easily verified that $\rho_\varepsilon$ is locally 1-Lipschitz and has 1 as an upper gradient.
Since the {$p\mspace{1mu}$}-Poincar\'e inequality and the doubling property for $\mu$ hold for all the balls $B_0=B(x,2d(x,y))$ and $B_j=B(y,2^{-j}d(x,y))\subset B_0$, with $j=1,2,\ldots$, a standard telescoping argument shows that \begin{align} \label{eq-std-telescope}
|\rho_\varepsilon(y)-(\rho_\varepsilon)_{B_0}| &
\le \sum_{j=0}^\infty |(\rho_\varepsilon)_{B_{j+1}}-(\rho_\varepsilon)_{B_j}| \nonumber \\ & \le \sum_{j=0}^\infty \vint_{B_{j+1}}
|\rho_\varepsilon-(\rho_\varepsilon)_{B_j}|\,d\mu \\ &\le \sum_{j=0}^\infty C_\mu^3 \vint_{B_j}
|\rho_\varepsilon-(\rho_\varepsilon)_{B_j}|\,d\mu \nonumber \\ & \le C_\mu^3 C_{\rm PI} \sum_{j=0}^\infty r_{B_j}
\biggl( \vint_{\lambda B_j} 1^p \,d\mu \biggr)^{1/p} \nonumber \\ &= \Lambda d(x,y). \nonumber \end{align} A similar estimate with $\rho_\varepsilon(x)=0$ then yields that \[
\rho_\varepsilon(y) = |\rho_\varepsilon(y)-\rho_\varepsilon(x)| \le 2\Lambda d(x,y) < 10\Lambda d(x,y). \] In particular, for each $\varepsilon_n = 3\cdot 2^{-n}\Lambda d(x,y)$, $n=1,2,\ldots$, there is a chain $x=x_0^n, x_1^n, \ldots, x_{M_n}^n=y$ such that $d(x_{i-1},x_i)\le\varepsilon_n$ for all $i$ and \[ \sum_{i=1}^{M_n} d(x_{i-1}^n,x_i^n)\le 3\Lambda d(x,y). \] Moreover, as $d(x,x_i^n)\le 2\Lambda d(x,y)=\Lambda r_{B_0}$ or $d(y,x_i^n)\le \Lambda d(x,y)$, we conclude that all $x_i^n$ belong to the compact set $\itoverline{\Lambda B}_0$.
Using \cite[Lemma~4.34]{BBbook}, we can find a subchain \[ x={\hat{x}}_0^n, {\hat{x}}_1^n, \ldots, {\hat{x}}_{m_n}^n=y, \quad \text{satisfying }\varepsilon_n \le d({\hat{x}}_{i-1}^n,{\hat{x}}_i^n)\le 3\varepsilon_n, \] which implies that $m_n\le 3\Lambda d(x,y)/\varepsilon_n = 2^n$. Letting $S_n=\{2^{-n}i: i=0,1,\ldots,2^n\}$, the function $\gamma_n:S_n\to \itoverline{\Lambda B}_0$, defined by $\gamma_n(2^{-n}i)={\hat{x}}^n_{\min\{i,m_n\}}$, is easily shown to be $9\Lambda d(x,y)$-Lipschitz.
Using a diagonal argument, we can from the sequence $\{\gamma_n\}_{n=1}^\infty$ choose a subsequence, which converges on $S=\bigcup_{n=1}^\infty S_n$. Since each $\gamma_n$ is $9\Lambda d(x,y)$-Lipschitz, so is the limiting function $\gamma:S\to \itoverline{\Lambda B}_0$. Finally, $\gamma$ extends to a $9\Lambda d(x,y)$-Lipschitz function on $[0,1]$, which after reparameterization provides us with the desired curve. \end{proof}
\begin{proof}[Proof of Proposition~\ref{prop-ex-curve-in-B0}] Let $x_0\in X$ and find a ball $B_0'$ centred at $x_0$ so that ${\itoverline{B_0'}}$ is compact and the {$p\mspace{1mu}$}-Poincar\'e inequality and the doubling property for $\mu$ hold within $B_0'$, with constants $C_{\rm PI}$ and $C_\mu$. Let $\Lambda=3C_\mu^3C_{\rm PI}$ and $B_0=\Lambda^{-1}B_0'$. Now, if $x,y\in \tfrac15 B_0$ then $B(x,2d(x,y))\subset B_0$ and $\itoverline{\Lambda B}_0$ is compact. Hence Lemma~\ref{lem-ex-curve-in-B0} shows the existence of a connecting curve of length at most $9\Lambda d(x,y)$. Thus $X$ is locally quasiconvex, which proves the first part of the proposition.
Now assume that $X$ is in addition proper and connected. Let $B_0=B(x_0,r_0)$ be arbitrary. By Proposition~\ref{prop-semilocal-doubling}, $\mu$ is semilocally doubling. Furthermore, Lemma~\ref{lem-PI-small-r-OK} implies that for some $0<r'\le r_0$, the {$p\mspace{1mu}$}-Poincar\'e inequality~\eqref{eq-def-local-PI-intro} holds for all $B=B(x,r)\subset 5B_0$ with $0<r\le 2r'$.
Since $\overline{5\Lambda B}_0$ is compact, by the properness of $X$, Lemma~\ref{lem-ex-curve-in-B0} yields that every pair of points $x,y\in\itoverline{B}_0$ with $d(x,y)\le r'$ can be connected by a curve of length at most $L'd(x,y)$, where $L'$ depends only on the doubling and Poincar\'e constants provided for $5B_0$ by Proposition~\ref{prop-semilocal-doubling-intro} and Lemma~\ref{lem-PI-small-r-OK}.
By compactness, $\itoverline{B}_0$ can be covered by finitely many balls $B(x_k,r')$ with $x_k\in\itoverline{B}_0$, $k=1,2,\ldots, N$. As $X$ is connected and locally quasiconvex, it is rectifiably connected. Hence, there is for each pair $x_j,x_k$, $j\ne k$, a rectifiable curve $\gamma_{j,k}\subset X$ connecting $x_j$ to $x_k$. Since there are only finitely many such pairs, it follows that $M:=\sup_{j,k} l_{\gamma_{j,k}} < \infty$.
Finally, let $x,y \in B_0$ be arbitrary. If $d(x,y) \le r'$, then we already know that $x$ and $y$ can be connected by a curve of length at most $L'd(x,y)$. If $d(x,y) > r'$, then $x$ and $y$ can be connected to some $x_j$ and $x_k$, respectively, by curves of lengths at most $L'r'$. Adding $\gamma_{j,k}$ to these curves produces a curve from $x$ to $y$ of length at most $2L'r' + M < (2L'+M/r')d(x,y)$. \end{proof}
The local quasiconvexity proved in Proposition~\ref{prop-ex-curve-in-B0} can be further bootstrapped by the following result.
\begin{lem} \label{lem-rect-components} Let $B_0$ be a ball such that $\mu$ is locally doubling and supports a local {$p\mspace{1mu}$}-Poincar\'e inequality on $B_0$ {\rm(}as the underlying space\/{\rm)}. Assume that the {$p\mspace{1mu}$}-Poincar\'e inequality \eqref{eq-def-local-PI-intro} holds for $B_0$ \textup{(}in place of $B$\textup{)} with dilation constant $\lambda\ge1$ and that $\lambda B_0$ is locally compact.
Then $B_0$ is rectifiably connected within $\lambda B_0$, i.e.\ any two points $x,y\in B_0$ can be connected by a rectifiable curve lying within $\lambda B_0$. \end{lem}
\begin{proof} Divide $\lambda B_0$ into its rectifiable components, i.e.\ $x,y \in \lambda B_0$ belong to the same rectifiable component if there is a rectifiable curve within $\lambda B_0$ from $x$ to $y$. For each $x\in\lambda B_0$, let $G_x$ denote the rectifiable component containing $x$, which is measurable by J\"arvenp\"a\"a--J\"arvenp\"a\"a--Rogovin--Rogovin-- Shan\-mu\-ga\-lin\-gam~\cite[Corollary~1.9 and Remark~3.1]{JJRRS}, since $\lambda B_0$ is locally compact. The local assumptions on $B_0$, together with Proposition~\ref{prop-ex-curve-in-B0}, imply that the sets $G_x\cap B_0$ are open for all $x$.
Let $G$ be such a rectifiable component intersecting $B_0$. We shall show that $B_0 \subset G$. Assume on the contrary that $B_0\setminus G \ne \varnothing$. Since all $G_x\cap B_0$ are open, so is $B_0\setminus G=\bigcup_{x\notin G} (G_x\cap B_0)$. Let $u=\chi_{G}$, which has $g \equiv 0$ as an upper gradient in the open set $\lambda B_0$ as there are no rectifiable curves in $\lambda B_0$ from $G$ to $\lambda B_0 \setminus G$. Since both $B_0 \cap G$ and $B_0\setminus G$ are nonempty and open, they both have positive measure. Hence, by the {$p\mspace{1mu}$}-Poincar\'e inequality for $B_0$, \begin{equation} \label{eq-PI-contradiction}
0 < \vint_{B_0} |u-u_{B_0}| \,d\mu
\le C r_{B_0} \biggl( \vint_{\lambda B_0} g^p \,d\mu \biggr)^{1/p}
= 0, \end{equation} a contradiction. \end{proof}
The following lemma makes it possible to lift the Poincar\'e inequality from small to large sets.
\begin{lem} \label{lem-iteration-with-Q} Let $1 \le q < \infty$ and $A,E\subset X$ be such that $\mu(A\cap E)\ge \theta \mu(E)$ for some $\theta>0$. Also assume that for some $Q\ge0$ and a measurable function $u$, \[
\|u-u_A\|_{L^q(A)} \le Q \quad \text{and} \quad
\|u-u_E\|_{L^q(E)} \le Q. \] Then \[
\|u-u_{A\cup E}\|_{L^q(A\cup E)} \le 4(1+\theta^{-1/q})Q. \] \end{lem}
\begin{proof} To start with, we have by the triangle inequality, \begin{align*}
|u_A-u_E| &=
\frac{\|u_A - u_E\|_{L^q(A \cap E)}}{\mu(A\cap E)^{1/q}} \\
&\le \frac{ \|u-u_A\|_{L^q(A)} + \|u-u_E\|_{L^q(E)}}{\mu(A\cap E)^{1/q}} \le \frac{2Q}{\mu(A\cap E)^{1/q}}. \end{align*} This yields \begin{align*}
\|u-u_A\|_{L^q(A \cup E)} &\le \|u-u_A\|_{L^q(A)} + \|u-u_E\|_{L^q(E)}
+ \mu(E)^{1/q} |u_A-u_E| \\ & \le 2Q + 2Q \biggl( \frac{\mu(E)}{\mu(A\cap E)}\biggr)^{1/q}
\le 2 (1+ \theta^{-1/q}) Q. \end{align*} Finally, Lemma~4.17 in \cite{BBbook} allows us to replace $u_A$ on the left-hand side by $u_{A\cup E}$, at the cost of an extra factor 2 on the right-hand side. \end{proof}
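The final replacement of $u_A$ by $u_{A\cup E}$ can also be seen directly from the elementary estimate, valid for every constant $c$ and every measurable set $D$ with $0<\mu(D)<\infty$,
\[
\|u-u_D\|_{L^q(D)} \le \|u-c\|_{L^q(D)} + \mu(D)^{1/q}|c-u_D|
\le 2\|u-c\|_{L^q(D)},
\]
since $|c-u_D| \le \vint_D |u-c|\,d\mu \le \bigl( \vint_D |u-c|^q\,d\mu \bigr)^{1/q}$ by H\"older's inequality; here it is applied with $D=A\cup E$ and $c=u_A$.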
We are now ready to conclude the proof of the semilocal $(q,p)$-Poincar\'e inequality.
\begin{proof}[Proof of Theorem~\ref{thm-PI}] Let $B_0=B(x_0,r_0)$ be fixed. As $X$ is proper and connected, Proposition~\ref{prop-ex-curve-in-B0} shows that there is $\sigma \ge 1$ such that every pair of points in $\itoverline{B}_0$ can be connected by a rectifiable curve within $\sigma B_0$. By Lemma~\ref{lem-PI-small-r-OK}, there exist $r', C>0$ and $\lambda\ge 1$ such that the $(q,p)$-Poincar\'e inequality~\eqref{eq-def-local-PI} holds for every ball $B$ with radius $r_B\le r'$ and centre in $\overline{\sigma B}_0$. By decreasing $r'$, if necessary, we may assume that $\lambda r' \le r_0$.
Next, let $B\subset B_0$ be an arbitrary ball. If $r_B\le r'$, then the $(q,p)$-Poincar\'e inequality~\eqref{eq-def-local-PI} holds for $B$. Assume therefore that $r_B>r'$. By compactness, $\itoverline{B}_0$ can be covered by finitely many balls $B'_j$, $j=1,2,\ldots,M$, with radius $r'$ and centres $x_j\in \itoverline{B}_0$. Proposition~\ref{prop-ex-curve-in-B0} provides us for each $j$ with a rectifiable curve $\gamma_j$ in $\sigma B_0$ connecting $x_j$ to $x_{j+1}$. Following this curve, we define balls \[ B_{j,k}=B(\gamma_j(kr'/2),r'), \quad k=1,2,\ldots \le \frac{2l_{\gamma_j}}{r'}. \] Note that $\tfrac12 B_{j,k+1}\subset B_{j,k}$. Add all these balls to the chain $\{B'_j\}_{j=1}^M$, in between $B'_j$ and $B'_{j+1}$, and renumber the sequence as $B_1, B_2, \ldots, B_N$. Note that all the balls $B_j$ have the same radius $r'$ and that $\tfrac12 B_{j+1} \subset B_j\cap B_{j+1}$.
Next, let $A_j=\bigcup_{i=1}^j B_i$. Note that $B_0\subset A_N$. Then $A_j\cap B_{j+1} \supset \tfrac12 B_{j+1}$ and the semilocal doubling property of $\mu$, provided by Proposition~\ref{prop-semilocal-doubling-intro}, implies that for some $\theta>0$ (depending on $2 \sigma B_0$), \[ \mu(A_j\cap B_{j+1}) \ge \mu\bigl(\tfrac12 B_{j+1}\bigr) \ge \theta \mu(B_{j+1}). \] Since all the balls $B_j$ have radius $r'$, the $(q,p)$-Poincar\'e inequality~\eqref{eq-def-local-PI} holds for them, and we have \begin{align*}
\biggl( \int_{B_j} |u-u_{B_j}|^q \,d\mu \biggr)^{1/q}
& \le Cr' \mu(B_j)^{1/q} \biggl( \vint_{\lambda B_j} g_u^p\,d\mu \biggr)^{1/p}\\ &\le C'r' \mu(2\sigma B_0)^{1/q}\biggl( \vint_{2\sigma B_0} g_u^p\,d\mu \biggr)^{1/p} =: Q, \end{align*} where we have used the semilocal doubling property, together with the fact that $\lambda B_j\subset 2\sigma B_0$ and that $r'$ and $r_0$ are fixed. Lemma~\ref{lem-iteration-with-Q} with $A=A_1$ and $E=B_2$ now yields \[
\biggl( \int_{A_2} |u-u_{A_2}|^q \,d\mu \biggr)^{1/q} \le 4(1+\theta^{-1/q}) Q=:\gamma Q. \] Another application of Lemma~\ref{lem-iteration-with-Q} with $A=A_2$ and $E=B_3$ then gives \[
\biggl( \int_{A_3} |u-u_{A_3}|^q \,d\mu \biggr)^{1/q} \le \gamma^2 Q. \] Continuing in this way along the whole sequence $\{B_j\}_{j=1}^N$, we can conclude after finitely many iterations that \begin{align*}
\biggl( \int_B |u-u_{A_N}|^q \,d\mu \biggr)^{1/q}
&\le \biggl( \int_{A_N} |u-u_{A_N}|^q \,d\mu \biggr)^{1/q} \\ &\le \gamma^N Q
= C'' r' \mu(2\sigma B_0)^{1/q}
\biggl( \vint_{2\sigma B_0} g_u^p\,d\mu \biggr)^{1/p}. \end{align*} Let $\lambda'=3\sigma r_0/r'$. Since $r_B\ge r'$, the measures of the balls $B \subset 2\sigma B_0 \subset \lambda' B$ are all comparable, due to the semilocal doubling property of $\mu$, and thus \[
\biggl( \vint_B |u-u_{A_N}|^q \,d\mu \biggr)^{1/q} \le C''' r_B
\biggl( \vint_{\lambda' B} g_u^p\,d\mu \biggr)^{1/p}. \] Finally, \cite[Lemma~4.17]{BBbook} allows us to replace $u_{A_N}$ by $u_B$ on the left-hand side (at the cost of an extra factor 2 on the right-hand side), which completes the proof. \end{proof}
We will also need the following lemma, which shows the reverse doubling condition under suitable local assumptions.
\begin{lem} \label{lem-rev-doubl} Assume that the doubling property for $\mu$ holds within $B_0$ in the sense of Definition~\ref{def-local-doubl-mu}, with doubling constant $C_\mu$, and that the {$p\mspace{1mu}$}-Poincar\'e inequality \eqref{eq-def-local-PI-intro} holds for $B_0$ \textup{(}in place of $B$\textup{)}. Let $B \subset 2B \subset B_0$ be a ball with $r_B < \frac{2}{3} \diam B_0$. Then there is $\theta<1$, only depending on $C_\mu$, such that \[
\mu\bigl(\tfrac{1}{2}B\bigr) \le \theta \mu(B). \] \end{lem}
\begin{proof} Assume that $B=B(x,r)$. If there were no $y$ such that $d(x,y)=\frac{3}{4}r$, then zero would be an upper gradient of $\chi_{\frac{3}{4}B}$, which
would violate the {$p\mspace{1mu}$}-Poincar\'e inequality for the ball $B_0$, as in \eqref{eq-PI-contradiction}, since $\diam \overline{\tfrac{3}{4}B} \le \tfrac{3}{2}r < \diam B_0$ and thus $B_0 \setminus \overline{\tfrac{3}{4}B} \ne \varnothing$. Hence there is $y$ such that $d(x,y)=\frac{3}{4}r$.
As $B(y,r) \subset 2B \subset B_0$, we get from the doubling property within $B_0$ that \[
\mu\bigl(\tfrac{1}{2}B\bigr) \le \mu(B(y,2r))
\le C_\mu^3 \mu\bigl(B\bigl(y,\tfrac{1}{4}r\bigr)\bigr) \] and thus \[
\mu(B) \ge \mu\bigl(\tfrac{1}{2}B\bigr) + \mu\bigl(B\bigl(y,\tfrac{1}{4}r\bigr)\bigr)
\ge (1+C_\mu^{-3}) \mu\bigl(\tfrac{1}{2}B\bigr).
\qedhere \] \end{proof}
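In particular, the above proof provides the explicit value
\[
\theta = \frac{1}{1+C_\mu^{-3}} = \frac{C_\mu^3}{C_\mu^3+1} < 1.
\]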
\section{Better Poincar\'e inequalities} \label{sect-better-PI}
There are two types of better Poincar\'e inequalities. The first type of result is the Sobolev--Poincar\'e inequality, due to Haj\l asz--Koskela~\cite{HaKo-CR}, \cite{HaKo}, which strengthens the left-hand side of the inequality. The arguments in \cite{HaKo-CR} and \cite{HaKo} also show how, in sufficiently nice spaces such as $\mathbf{R}^n$, the dilation constant $\lambda>1$ in the Poincar\'e inequality can be improved to $\lambda=1$. These results were originally proved under global doubling and Poincar\'e assumptions, but since all the considerations in the proof are of local nature, they can also be obtained under (semi)local assumptions in the following form.
\begin{thm} \label{thm-(q,p)-PI} \textup{(Local Sobolev--Poincar\'e inequality)} Let $B_0$ be a ball such that the {$p\mspace{1mu}$}-Poincar\'e inequality\/ \textup{(}with dilation constant $\lambda$\textup{)} and the doubling property for $\mu$ hold within $B_0$ in the sense of Definitions~\ref{def-local-doubl-mu} and~\ref{def-PI}. Assume that the dimension condition \begin{equation} \label{eq-local-dim-cond} \frac{\mu(B')}{\mu(B)} \ge C_0 \Bigl( \frac{r_{B'}}{r_B} \Bigr)^s \end{equation} holds for some $C_0,s>0$ and all balls $X\ne B'\subset B \subset B_0$.
Then there exists $C$, depending only on $p$, the doubling constant and both constants in the {$p\mspace{1mu}$}-Poincar\'e inequality within $B_0$, such that for all balls $B$ with $5\lambda B \subset B_0$, all integrable functions $u$ on $2\lambda B$, and all {$p\mspace{1mu}$}-weak upper gradients $g$ of~$u$, \begin{equation} \label{eq-(q,p-PI-2la}
\biggl(\vint_{B} |u-u_B|^{q} \,d\mu\biggr)^{1/q}
\le C r_B \biggl( \vint_{2\lambda B} g^{p} \,d\mu \biggr)^{1/p}, \end{equation} where $q=p^*:=sp/(s-p)>p$ if $p < s$ while $q<\infty$ is arbitrary when $p\ge s$ \textup{(}in which case $C$ depends also on $q$\textup{)}.
If $L$ is a local quasiconvexity constant for $B_0$, in the sense that every pair of points $x,y\in B_0$ can be connected {\rm(}in $X${\rm)} by a curve of length at most $Ld(x,y)$, then~\eqref{eq-(q,p-PI-2la} holds for all balls $B$ with $\tfrac52 LB \subset B_0$, and the dilation constant $2\lambda$ in~\eqref{eq-(q,p-PI-2la} can be replaced by $L$.
In particular, if $\mu$ is\/ \textup{(}semi\/\textup{)}locally doubling and supports a\/ \textup{(}semi\/\textup{)}local {$p\mspace{1mu}$}-Poincar\'e inequality then $X$ supports a\/ \textup{(}semi\/\textup{)}local $(p,p)$-Poincar\'e inequality. \end{thm}
\begin{proof} It suffices to consider the case $s>p$ since when $s\le p$, one can instead assume the dimension condition~\eqref{eq-local-dim-cond} with $s$ replaced by $p+\varepsilon$ for arbitrarily small $\varepsilon>0$.
We will follow the arguments in the proof of \cite[Theorem~4.39]{BBbook} and prove~\eqref{eq-(q,p-PI-2la} under the assumption of local $L$-quasiconvexity, i.e.\ with $LB$ in the right-hand side. To obtain~\eqref{eq-(q,p-PI-2la} with $2\lambda B$ in the right-hand side, replace the balls in the chain below by the balls $B^{0,0}:=B(x,2r)$, $B^{i,0}:=B(x',2^{-(i+1)}r)$, $i=1,2,\ldots$, and $\widehat{B}^{i}:=\lambda B^{i,0}$. Note that $\frac52 \widehat{B}^{i} \subset B_0$, $i=0,1,\ldots$.
Let $B\bigl(x,\tfrac52Lr\bigr)\subset B_0$ be arbitrary. We can assume that $Lr\le\diam B_0$. Let $\rho_0=Lr/2\lambda$ and $\rho_i=2^{-i} \rho_0$, $i=1,2,\ldots$\,. For $x'\in B(x,r)$, consider an $L$-quasiconvex curve $\gamma$ from $x$ to $x'$, i.e.\ $\gamma(0)=x$ and $\gamma(l_\gamma)=x'$, where $l_\gamma\le Ld(x,x')$ is the length of $\gamma$. Find the smallest integer $i'\ge0$ such that $2\lambda\rho_{i'}\le L(r-d(x,x'))$. For each $i=0,1,\ldots,i'-1$, consider all the integers $j\ge0$, such that \[ t_{i,j}:= (1-2^{-i})l_\gamma+ j\rho_i < (1-2^{-(i+1)})l_\gamma, \] and let $x_{i,j}=\gamma(t_{i,j})$. There are at most $\lambda$ such $x_{i,j}$'s for each $i$. Similarly, for $i=i'$ there are at most $2\lambda$ integers $j\ge0$ and points $x_{i,j}=\gamma(t_{i,j})$ such that $t_{i,j}<l_\gamma$. For $i>i'$, let $j=0$ and $x_{i,j}=x'$.
It is now easily verified that $d(x,x_{i,j})+\lambda\rho_i<Lr$ and hence \[ B^{i,j}:=B(x_{i,j},\rho_i)\subset \lambda B^{i,j} \subset B(x,Lr). \]
Ordering the balls $B^{i,j}$ lexicographically, we obtain a chain of balls from $x$ to $x'$, with substantial overlaps. Assuming that $x'$ is a Lebesgue point of $u$, a standard telescoping argument using the {$p\mspace{1mu}$}-Poincar\'e inequality for each $B^{i,j}\subset B_0$, as in \eqref{eq-std-telescope}, then yields the estimate \begin{equation} \label{eq-PI-appl}
|u(x')-u_{B^{0,0}}|
\le C_1 \sum_B r_B \biggl( \vint_{\lambda B} g_u^p\,d\mu \biggr)^{1/p}, \end{equation} where the sum is taken over all balls $B$ in the chain. Note that, because of the doubling property within $B_0$, the balls $\lambda B$ and $B(x',\lambda r_B)$ have comparable measures. The dimension condition~\eqref{eq-local-dim-cond} therefore yields \[
|u(x')-u_{B^{0,0}}|
\le C_2r \sum_B \frac{\mu(B(x',\lambda r_B))^{1/s-1/p}}{\mu(B(x,Lr))^{1/s}}
\biggl( \int_{\lambda B} g_u^p\,d\mu \biggr)^{1/p} =: \Sigma' + \Sigma'', \] where the summations in $\Sigma'$ and $\Sigma''$ are over $B$ with $r_B>\rho_{i_0}$ and $r_B\le\rho_{i_0}$, respectively (and $i_0\ge0$ will be chosen later). Since $\lambda \rho_i \le \lambda \rho_0 = \frac{1}{2} Lr \le \frac{1}{2} \diam B_0$, Lemma~\ref{lem-rev-doubl} implies that there exists $\theta\in(0,1)$, independent of $x'$ and $i$, such that \[ \mu(B(x',\lambda\rho_i)) \ge \theta^{i-i_0} \mu(B(x',\lambda\rho_{i_0})) \quad \text{for } i\le i_0 \] and hence, as $1/s-1/p<0$, \[ \Sigma' \le C_3 r \biggl( \frac{\mu(B(x',\lambda \rho_{i_0}))} {\mu(B(x,Lr))}
\biggr)^{1/s-1/p}
\biggl( \vint_{B(x,Lr)} g_u^p\,d\mu \biggr)^{1/p}. \] Similarly, \[ \mu(B(x',\lambda\rho_i)) \le \theta^{i-i_0} \mu(B(x',\lambda\rho_{i_0})) \quad \text{for } i>i_0 \] and hence, as $1/s>0$, \[ \Sigma'' \le C_4r \biggl( \frac{\mu(B(x',\lambda \rho_{i_0}))} {\mu(B(x,Lr))}
\biggr)^{1/s} M(x')^{1/p}, \] where $M(x'):= M^*_{B(x,Lr),B_0} g_u^p (x')$ is the noncentred local maximal function given by~\eqref{eq-def-local-max-fn}. Here we have also used that both $x'$ and $\lambda B^{i,j}$ are contained in $\widehat{B}^{i}:=B(x_{i,0},2\lambda\rho_i)\subset B(x,Lr)$, and $\tfrac52\widehat{B}^{i}\subset B\bigl(x,\tfrac52 Lr\bigr)\subset B_0$, $i\ge0$. Choosing $i_0\ge0$ so that \[ \frac{\mu(B(x',\lambda \rho_{i_0}))} {\mu(B(x,Lr))} \quad \text{is comparable to} \quad \frac{1}{M(x')} \vint_{B(x,Lr)} g_u^p\,d\mu \le1, \] we can conclude that \begin{align*}
|u(x')-u_{B^{0,0}}| \le \Sigma' + \Sigma'' &\le C_5r \biggl( \vint_{B(x,Lr)} g_u^p\,d\mu \biggr)^{1/s} M(x')^{1/p-1/s}, \end{align*} which gives a lower bound for $M(x')$. Proposition~\ref{prop-maximal-fn} then yields the level set estimate \[
\mu(\{x'\in B(x,r): |u(x')-u_{B^{0,0}}|\ge t \})
\le \frac{C_6r^q}{t^q} \mu(B(x,Lr))
\biggl( \vint_{B(x,Lr)} g_u^p\,d\mu \biggr)^{q/p}, \] which in turn implies~\eqref{eq-(q,p-PI-2la} with $B(x,Lr)$ in the right-hand side, by \cite[Lemma~4.25]{BBbook}.
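Note that, with the above choice of $i_0$, the quantity $A:=\mu(B(x',\lambda \rho_{i_0}))/\mu(B(x,Lr))$ is comparable to $M(x')^{-1}\vint_{B(x,Lr)} g_u^p\,d\mu$, so that
\[
A^{1/s-1/p}\biggl( \vint_{B(x,Lr)} g_u^p\,d\mu \biggr)^{1/p}
\quad \text{and} \quad
A^{1/s} M(x')^{1/p}
\]
are both comparable to $\bigl( \vint_{B(x,Lr)} g_u^p\,d\mu \bigr)^{1/s} M(x')^{1/p-1/s}$; this is how the bounds for $\Sigma'$ and $\Sigma''$ were combined into the above estimate for $|u(x')-u_{B^{0,0}}|$.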
To conclude the statement of the theorem under local assumptions, let $x_0\in X$ be arbitrary and find $r_0>0$ so that the local assumptions hold within $B(x_0,r_0)$. Then choose a radius $0<r_0'\le (11\lambda)^{-1}r_0$ so that $B_0':=B(x_0,r_0') \ne X$ and $\dist(x_0,X \setminus B_0')=r_0'$. For $B=B(x,r)\subset B_0'$ it then follows that $r_B\le2r_0'$ and hence $5\lambda B \subset B(x_0,r_0)$. The already proved first part of the theorem then implies that~\eqref{eq-(q,p-PI-2la} holds for $B$.
Under semilocal assumptions, let $B_0':=B(x_0,r_0')\ne X$ be arbitrary and such that $\dist(x_0,X \setminus B_0')=r_0'$. (If $B_0'=X$, the proof is similar.) Then $r_B\le2r_0'$ whenever $B=B(x,r)\subset B_0'$, and hence $2B\subset 5B_0'$. Note that above, when proving \eqref{eq-(q,p-PI-2la} with $2 \lambda B$ on the right-hand side, the {$p\mspace{1mu}$}-Poincar\'e inequality is only applied to balls within $2B$ (to obtain \eqref{eq-PI-appl}), while \eqref{eq-local-dim-cond} and the doubling property are applied to balls within $5\lambda B$, where $\lambda$ is the dilation constant in the {$p\mspace{1mu}$}-Poincar\'e inequality within $2B$. Thus, to obtain~\eqref{eq-(q,p-PI-2la} for $B\subset B_0'$ we need to apply the {$p\mspace{1mu}$}-Poincar\'e inequality with $\lambda$ and $C_{\rm PI}$ determined by $5B_0'$, followed by \eqref{eq-local-dim-cond} and the doubling property with constants determined by $11\lambda B_0'$. This can be done because of the semilocal assumptions, since the doubling property for $\mu$ within $11\lambda B_0'$ implies~\eqref{eq-local-dim-cond} within $11\lambda B_0'$ for some $s>0$.
\begin{remark} \label{rem-s-and-q} There is a converse relation between $s$ and $q$ in Theorem~\ref{thm-(q,p)-PI} as well, namely if the $(q,p)$-Poincar\'e inequality \eqref{eq-(q,p-PI-2la} holds for all balls $B$ with $5 \lambda B \subset B_0$, and $\mu$ is doubling within $B_0$, then \eqref{eq-local-dim-cond} holds with $s=qp/(q-p)$ for all balls $X \ne B'\subset B$ with $15\lambda B \subset B_0$; this follows from the proof of \cite[Proposition~4.20]{BBbook}. Note that the formulas $s(q)$ and $q(s)$ are inverse functions of each other, if $p<s$. In particular, if we let \begin{align*} {\hat{s}} &= \sup_{x \in X}\lim_{r\to0} \inf \{s>0: \eqref{eq-local-dim-cond} \text{ holds for all balls } B'\subset B\subset B(x,r)\}. \\ {\hat{q}} & = \sup \{q \ge p : X \text{ supports a local $(q,p)$-Poincar\'e inequality}\}, \end{align*} then \[
{\hat{q}} = \begin{cases}
\displaystyle \frac{{\hat{s}} p}{{\hat{s}} -p}, & \text{if } p < {\hat{s}} < \infty, \\
p, & \text{if } {\hat{s}}= \infty, \\
\infty, & \text{if } {\hat{s}} \le p.
\end{cases} \]
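For instance, for Lebesgue measure on $\mathbf{R}^n$, the dimension condition \eqref{eq-local-dim-cond} holds (with $C_0=1$) precisely when $s \ge n$, so that ${\hat{s}}=n$ and, when $p<n$, ${\hat{q}}=np/(n-p)$ is the classical Sobolev exponent, while ${\hat{q}}=\infty$ when $p \ge n$.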
Note that there need
not exist an optimal
$s$ (for a given $B_0$ in Theorem~\ref{thm-(q,p)-PI}), i.e.\ the set of values of $s$ for which \eqref{eq-local-dim-cond} holds may be an open interval, cf.\ Example~3.1 in Bj\"orn--Bj\"orn--Lehrb\"ack~\cite{BBLeh1}. Similarly there need not be an optimal $q$. \end{remark}
The second type of self-improvement for Poincar\'e inequalities is the open-ended property due to Keith--Zhong~\cite[Theorem~1.0.1]{KZ} which strengthens the right-hand side of the inequality, see also Heinonen--Koskela--Shanmugalingam--Tyson~\cite[Theorem~12.3.9]{HKSTbook}, Eriksson-Bique~\cite{Eriksson-Bique} and Kinnunen--Lehrb\"ack--V\"a\-h\"a\-kan\-gas--Zhong~\cite{KLVZ}. A careful analysis of the proof (in \cite{KZ} or \cite{HKSTbook}) shows that all the balls considered therein lie within a constant dilation of the ball in the Poincar\'e inequality under consideration. This makes it possible to prove the following local version.
\begin{thm} \label{thm-KZ-proper} Assume that $p>1$ and let $B_0=B(x_0,r_0)$ be a ball such that $\itoverline{B}_0$ is compact and the {$p\mspace{1mu}$}-Poincar\'e inequality and the doubling property for $\mu$ hold within $B_0$ in the sense of Definitions~\ref{def-local-doubl-mu} and~\ref{def-PI}.
Then there exist constants $C$, $\lambda$ and $q<p$, depending only on $p$, the doubling constant and both constants in the {$p\mspace{1mu}$}-Poincar\'e inequality within $B_0$, such that for all balls $B$ with $\lambda B\subset B_0$, all integrable functions $u$ on $\lambda B$, and all $q$-weak upper gradients $g$ of $u$, \begin{equation} \label{eq-KZ-proper}
\vint_{B} |u-u_B| \,d\mu
\le C r_B \biggl( \vint_{\lambda B} g^{q} \,d\mu \biggr)^{1/q}. \end{equation} \end{thm}
In the proof below, we will use the \emph{inner metric} which is defined by \[
d_{\rm in}(x,y)=\inf \length(\gamma), \] where the infimum is taken over all curves $\gamma$ connecting $x$ and $y$. If there are no such curves then $d_{\rm in}(x,y)=\infty$. As $X$ may be disconnected, this is not always a metric, but we will nevertheless use the name ``inner metric''. Balls with respect to $d_{\rm in}$, defined in the obvious way, will be denoted by $B_{\rm in}$.
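For instance, in the slit plane $X=\mathbf{R}^2 \setminus ([0,\infty)\times\{0\})$, the points $(t,\varepsilon)$ and $(t,-\varepsilon)$, where $0<\varepsilon<t$, satisfy $d((t,\varepsilon),(t,-\varepsilon))=2\varepsilon$, while $d_{\rm in}((t,\varepsilon),(t,-\varepsilon)) \ge 2t$, since every curve between them has to pass around the tip of the slit.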
\begin{proof} Let $\lambda'$ be the dilation constant in the {$p\mspace{1mu}$}-Poincar\'e inequality within $B_0$. Lemma~\ref{lem-ex-curve-in-B0} shows that there exist $\Lambda\ge1$ and $L=9\Lambda$ such that \[ d(x,y)\le d_{\rm in}(x,y) \le Ld(x,y) \] whenever $B(x,2\Lambda d(x,y))\subset B_0$. It follows that if $2\Lambda B(x,r)\subset B_0$, then \begin{equation} \label{eq-compare-balls} B(x,r/L) \subset B_{\rm in}(x,r) \subset B(x,r) \subset B_{\rm in}(x,Lr) \subset B(x,Lr). \end{equation}
We will now explain how the arguments in the proof of \cite[Theorem~4.39]{BBbook} can be used to show that for every inner ball $B_{\rm in}=B_{\rm in}(x,r)$ such that $B(x,2r)\subset B_0$, the following inner {$p\mspace{1mu}$}-Poincar\'e inequality with dilation constant $1$ holds: \begin{equation} \label{eq-PI-with-1}
\vint_{B_{\rm in}} |u-u_{B_{\rm in}}| \,d\mu
\le C' r \biggl( \vint_{B_{\rm in}} g^p \,d\mu \biggr)^{1/p}. \end{equation} (Since we only assume a local {$p\mspace{1mu}$}-Poincar\'e inequality with respect to ordinary balls, Theorem~4.39 in \cite{BBbook} cannot be applied directly and care has to be taken when comparing ordinary and inner balls.)
More precisely, let $\rho_0=r/2\lambda' L$ and $\rho_i=2^{-i} \rho_0$, $i=1,2,\ldots$\,. For $x'\in B_{\rm in}(x,r)$, consider a $d_{\rm in}$-geodesic $\gamma$ from $x$ to $x'$, i.e.\ $\gamma(0)=x$ and $\gamma(d')=x'$, where $d'=d_{\rm in}(x,x')<r$ is the length of $\gamma$. Such a geodesic exists by Ascoli's theorem and the compactness of $\itoverline{B}_0$. Find the smallest integer $i'\ge0$ such that $2\lambda'\rho_{i'}\le r-d'$. For each $i=0,1,\ldots,i'-1$, consider all the integers $j\ge0$, such that \[ t_{i,j}:= (1-2^{-i})d'+ j\rho_i < (1-2^{-(i+1)})d', \] and let $x_{i,j}=\gamma(t_{i,j})$. There are at most $\lambda'L$ such $x_{i,j}$'s for each $i$. Similarly, for $i=i'$, there are at most $2\lambda'L$ integers $j\ge0$ and points $x_{i,j}=\gamma(t_{i,j})$ such that $t_{i,j}<d'$. For $i>i'$, let $j=0$ and $x_{i,j}=x'$.
It is now easily verified that $d_{\rm in}(x,x_{i,j})+\lambda'L\rho_i<r$ and hence, with $B^{i,j}=B(x_{i,j},\rho_i)$, \[ 2\lambda'\Lambda B^{i,j}\subset \lambda' LB^{i,j} \subset B(x,r) \subset B_0, \] so \eqref{eq-compare-balls} implies that \[ B^{i,j} \subset \lambda'B^{i,j} \subset B_{\rm in}(x_{i,j},\lambda'L\rho_i) \subset B_{\rm in}(x,r) \subset B_0. \] For later reference, let $y_i=\gamma((1-2^{-(i+1)})d')$ when $i\le i'$ and $y_i=x'$ otherwise. Then each ball $\widehat{B}^{i,j}:=B(y_i,(L+1)\lambda'\rho_i)$ contains both $\lambda'B^{i,j}$ and $x'$. Moreover, \[ d(x,y_i) + \tfrac52 (L+1)\lambda'\rho_i < (1+2^{-(i+2)}(3+5/L))r < 2r, \] so $\tfrac52\widehat{B}^{i,j} \subset B(x,2r)\subset B_0$.
Ordering the balls $B^{i,j}$ lexicographically, we obtain a chain of balls from $x$ to $x'$, with substantial overlaps. Assuming that $x'$ is a Lebesgue point of $u$, a standard telescoping argument using the {$p\mspace{1mu}$}-Poincar\'e inequality for each $B^{i,j}\subset B_0$, as in \eqref{eq-std-telescope}, then yields the estimate \begin{equation} \label{eq-telescope}
|u(x')-u_{B^{0,0}}|
\le C_1 \sum_B r_B \biggl( \vint_{\lambda'B} g_u^p\,d\mu \biggr)^{1/p}, \end{equation} where the sum is taken over all balls $B$ in the above chain. We can now estimate the measure of the set \[
E_t=\{x'\in B_{\rm in}(x,r): |u(x')-u_{B^{0,0}}|>t\} \] as follows. Writing $t = C_2t\sum_B r_B/r$ and comparing with~\eqref{eq-telescope}, we can for every Lebesgue point $x'\in E_t$ find some ball $B_{x'}:=B^{i,j}$ from the corresponding chain so that \begin{equation} \label{eq-choose-Bx} \biggl( \vint_{\lambda'B_{x'}} g_u^p\,d\mu \biggr)^{1/p} \ge \frac{C_3t}{r}. \end{equation} By the above, the corresponding ball $\widehat{B}_{x'}:=\widehat{B}^{i,j}$ contains both $\lambda' B_{x'}$ and $x'$. Hence, using the 5-covering lemma we can from the collection $\widehat{B}_{x'}$, where $x'\in E_t$ are Lebesgue points of $u$, extract a countable pairwise disjoint subcollection $\widehat{B}_{x'_k}$, $k=1,2,\ldots$, such that \[ \mu(E_t) \le \mu \biggl( \bigcup_{k=1}^\infty 5\widehat{B}_{x'_k} \biggr) \le C_4 \sum_{k=1}^\infty \mu(\widehat{B}_{x'_k}), \] where in the last inequality we have used that $\tfrac52\widehat{B}_{x'_k}\subset B_0$, so that the doubling condition can be applied. Note also that the measures of $\lambda'B_{x'_k}$ and $\widehat{B}_{x'_k}$ are comparable. Estimating the balls in the last sum using~\eqref{eq-choose-Bx}, together with the fact that the balls $\lambda' B_{x'_k}\subset \widehat{B}_{x'_k} \cap B_{\rm in}(x,r)$ are disjoint, now yields the level set estimate \[ t^p \mu(E_t) \le C_5r^p \sum_{k=1}^\infty \int_{\lambda'B_{x'_k}} g_u^p\,d\mu \le C_6r^p \int_{B_{\rm in}(x,r)} g_u^p\,d\mu, \] which in turn implies~\eqref{eq-PI-with-1}, by \cite[Lemma~4.25]{BBbook}.
Next, still with respect to $d_{\rm in}$ and within $B_0$, the proof in \cite[Theorem~12.3.9]{HKSTbook} (or Keith--Zhong~\cite{KZ}), which is written for geodesic spaces, can be applied to show that there exists $q<p$ such that for every inner ball $B_{\rm in}=B_{\rm in}(x,r)$ with $1280B(x,r)\subset B_0$, \begin{equation} \label{eq-q-PI-Bin}
\vint_{B_{\rm in}} |u-u_{B_{\rm in}}| \,d\mu
\le C'' r \biggl( \vint_{256B_{\rm in}} g^q \,d\mu \biggr)^{1/q}. \end{equation} (Here it is also used that if $B_{\rm in}(x',r')\subset 256B_{\rm in}$ then $B_{\rm in}(x',r')=B_{\rm in}(x',r'')$ for some $r'' \le 512 r$, and hence \[ B(x',2r'')\subset 1280 B(x,r)\subset B_0, \] so \eqref{eq-PI-with-1} holds for every such inner ball $B_{\rm in}(x',r') \subset 256B_{\rm in}$ and can be used in the arguments leading to \cite[Theorem~12.3.9]{HKSTbook}.) Now, by \eqref{eq-compare-balls}, \[ B(x,r/L)\subset B_{\rm in} \quad \text{and} \quad 256B_{\rm in}\subset 256B(x,r)\subset 1280B(x,r), \] all with comparable measures (depending on $L$). Hence, \eqref{eq-q-PI-Bin} yields~\eqref{eq-KZ-proper} with $\lambda=1280L$ and $B$ replaced by $B(x,r/L)$. \end{proof}
In Heinonen--Koskela--Shanmugalingam--Tyson~\cite[Proposition~12.3.10]{HKSTbook} it is explained how (under global assumptions) the properness of $X$ in Keith--Zhong~\cite{KZ} can be relaxed to local compactness, at the price that the resulting $q$-Poincar\'e inequality only holds for $u \in N^{1,p}(\lambda B)$, which is however enough for many applications. A counterexample by Koskela~\cite{Koskela} shows that one cannot deduce a standard $q$-Poincar\'e inequality in this case. A similar improvement can be proved under local assumptions and in this case we \emph{do} conclude a standard local $q$-Poincar\'e inequality, even though $q$ may vary from ball to ball. Under semiuniformly local assumptions there is even a \emph{fixed} $q<p$, see Theorem~\ref{thm-PI-uniform-q-intro} whose
proof is given in Section~\ref{sect-local-unif} below.
\begin{thm} \label{thm-loc-cpt-q-PI} If $X$ is locally compact and supports a local {$p\mspace{1mu}$}-Poincar\'e inequality, $p>1$, and $\mu$ is locally doubling, then for every $x_0\in X$ there is a ball $B_0' \ni x_0$,
and $q<p$, such that a $q$-Poincar\'e inequality holds within $B_0'$ in the sense of Definition~\ref{def-PI}.
If $X$ is in addition proper and connected, then the conclusion is semilocal, i.e.\ it holds for all balls $B_0'\subset X$. \end{thm}
Note that for a semilocal conclusion it is \emph{not} enough to assume that $X$ is locally compact and that the doubling and Poincar\'e assumptions are semilocal. This is another consequence of the counterexample in Koskela~\cite{Koskela}.
\begin{proof} Let $x_0\in X$ be arbitrary and find $r_0>0$ so that $\itoverline{B(x_0,r_0)}$ is compact and the local assumptions hold within $B(x_0,r_0)$. Let $\lambda$ be given by Theorem~\ref{thm-KZ-proper}. Then choose a radius $0<r_0'\le (3\lambda)^{-1}r_0$ so that $B_0':=B(x_0,r_0') \ne X$ and $\dist(x_0,X \setminus B_0')=r_0'$. For $B\subset B_0'$ it then follows that $r_B\le2r_0'$ and hence $\lambda B \subset B(x_0,r_0)$. The first statement then follows from Theorem~\ref{thm-KZ-proper}.
If $X$ is in addition proper and connected, then let $B_0':=B(x_0,r_0')$ be arbitrary and assume that $\dist(x_0,X \setminus B_0')=r_0'$ (the proof is similar for $B_0'=X$). Since $\lambda$ in Theorem~\ref{thm-KZ-proper} depends on $B_0$, we cannot directly obtain a semilocal conclusion from it.
Instead, let $L$ be provided by Lemma~\ref{lem-ex-curve-in-B0} with $B_0$ replaced by $5B_0'$. Then for every ball $B(x,r)\subset B_0'$, we have $r\le2r_0'$ and hence $B(x,1280Lr)\subset 2561LB_0' =:B_0$. Because the {$p\mspace{1mu}$}-Poincar\'e inequality and the doubling property for $\mu$ hold within $B_0$ (by Proposition~\ref{prop-semilocal-doubling-intro} and Theorem~\ref{thm-PI-intro}), the proof of Theorem~\ref{thm-KZ-proper} (with constants dictated by $B_0$) yields \eqref{eq-q-PI-Bin}. In particular, as $B(x,1280Lr) \subset B_0$, the inner $q$-Poincar\'e inequality \eqref{eq-q-PI-Bin} holds with $B_{\rm in}$ replaced by $B_{\rm in}(x,Lr)$ and with constants $q<p$ and $C''>0$ depending on $B_0$.
Now, $B(x,2r) \subset 5B_0'$ (as $r\le2r_0'$) and hence, since $X$ is proper, Lemma~\ref{lem-ex-curve-in-B0} implies that \[ B(x,r)\subset B_{\rm in}(x,Lr) \quad \text{and} \quad B_{\rm in}(x,256Lr)\subset B(x,256Lr), \] all with comparable measures (by the semilocal doubling property of $\mu$ provided by Proposition~\ref{prop-semilocal-doubling-intro}). We can therefore conclude that~\eqref{eq-KZ-proper} holds for every $B(x,r) \subset B_0'$ with $\lambda =256 L$. \end{proof}
\section{Semiuniformly local assumptions} \label{sect-local-unif}
A possible strengthening of our local assumptions is to also require uniformity in the constants and/or the radii.
\begin{deff} The measure $\mu$ is \emph{semiuniformly locally doubling} if there is a (uniform) constant $C$ such that for each $x \in X$ there is $r>0$ so that $\mu(2B)\le C \mu(B)$ for all balls $B \subset B(x,r)$. If $r$ is independent of $x$, then $\mu$ is \emph{uniformly locally doubling}. \end{deff}
Note that there is no uniformity of the radii when $\mu$ is semiuniformly locally doubling. (Semi)uniformly local Poincar\'e inequalities are defined similarly, with uniform constants $C$ and $\lambda$. Semiuniformly local assumptions were used by Holopainen--Shan\-mu\-ga\-lin\-gam~\cite{HoSh}.
It may seem more natural to impose uniformly local assumptions but, as we shall see, semiuniformly local assumptions are sometimes enough. The semiuniformly local assumptions also have the advantage that they are inherited by open subsets, and in particular are satisfied on all open subsets of spaces supporting global assumptions. Moreover, any strictly positive continuous weight on $\mathbf{R}^n$
is semiuniformly locally doubling and supports a semiuniformly local $1$-Poincar\'e inequality. On the other hand, our local assumptions are more general, as seen in Example~\ref{ex-local-unif} below, and sufficient for many purposes, as demonstrated in this paper. However, under semiuniformly local assumptions the constants and exponents in the local (but not necessarily the semilocal) results in Section~\ref{sect-better-PI} are also uniform. We now make this more precise.
\begin{thm} If $\mu$ is semiuniformly locally doubling and $X$ supports a local\/ \textup{(}resp.\ semiuniformly local\/\textup{)}
{$p\mspace{1mu}$}-Poincar\'e inequality,
then there is $q>p$ such that
$X$ supports a local\/ \textup{(}resp.\ semiuniformly local\/\textup{)}
$(q,p)$-Poincar\'e inequality. \end{thm}
\begin{proof} This is a direct consequence of Theorem~\ref{thm-(q,p)-PI}, since the exponent $q$, given by an explicit formula, only depends on the local doubling constant. \end{proof}
\begin{proof}[Proof of Theorem~\ref{thm-PI-uniform-q-intro}] This follows from Theorem~\ref{thm-loc-cpt-q-PI}, since the improvement $p-q$ in the exponent depends only on the local doubling constant and the constants $p$, $C$ and $\lambda'$ in the local {$p\mspace{1mu}$}-Poincar\'e inequality within $B_0'$. \end{proof}
Another consequence of semiuniformly local assumptions is apparent in Lemma~\ref{lem-ex-curve-in-B0}: the constants $\Lambda$ and $L$ therein are uniform. It would be interesting to know for which other results it is essential to require semiuniformly local assumptions, especially when the consequences for the conclusions are not just the uniformity of constants.
\begin{example} \label{ex-local-unif} For $k \ge 3$, let \begin{align*}
E_k &= \bigcup_{j=2}^\infty \bigl([k-2k^{-j},k-k^{-j}]\cup[k+k^{-j},k+2k^{-j}]\bigr)
\times [0,k^{1-j}], \\
X &= \bigl(\mathbf{R} \times (-\infty,0]\bigr) \cup \bigcup_{k=3}^\infty E_k, \end{align*} i.e.\ $X$ is the closed lower half-plane together with countably many ``skyscraper cities'' $E_k$ near each point $x_k=(k,0)$, $k\ge3$. Note that each $E_k$ is self-similar with the factor $1/k$ and centre $x_k$.
Equip $X$ with the Euclidean distance and the $2$-dimensional Lebesgue measure~$\mu$. Then every $x\in X$ has a neighbourhood $V_x$ (with respect to $X$) whose interior (with respect to $\mathbf{R}^2$) is a uniform subdomain of $\mathbf{R}^2$. This implies that $\mu$ is doubling and supports a 1-Poincar\'e inequality on $\overline{V}_x$, by Theorem~4.4 in Bj\"orn--Shanmugalingam~\cite{BjShJMAA} and Proposition~7.1 in Aikawa--Shanmugalingam~\cite{AikSh05}. Thus $\mu$ is locally doubling and supports a local 1-Poincar\'e inequality.
For $j,k\ge4$, let $r_{j,k}=k^{-j}$ and $x_{j,k}=(k+r_{j,k},kr_{j,k})\in E_k$. Then it is easily seen that $\mu(B(x_{j,k},kr_{j,k}))$ is comparable to $k^{1-2j}$, while $\mu(B(x_{j,k},2kr_{j,k}))$ is comparable to $k^{2(1-j)}$. It follows that the local doubling constant near $x_k$ is at least comparable to $k$.
Similarly, the ball $B(x_{j,k},3r_{j,k})$ is disconnected and contains the point $(k-r_{j,k},kr_{j,k})$. Lemma~\ref{lem-rect-components} then implies that the local dilation constant $\lambda_k$ in any Poincar\'e inequality around the point $x_k$ satisfies $\lambda_k \ge \tfrac13 k$.
Letting $k\to\infty$ shows that $\mu$ is not semiuniformly locally doubling and does not support any semiuniformly local Poincar\'e inequality. On the other hand, since $X$ is proper and connected, it follows from Proposition~\ref{prop-semilocal-doubling} and Theorem~\ref{thm-PI} that $\mu$ is semilocally doubling and supports a semilocal 1-Poincar\'e inequality.
If one is satisfied with a (semi)local {$p\mspace{1mu}$}-Poincar\'e inequality only for $p>2$, rather than a 1-Poincar\'e inequality, then a simpler example can be constructed by replacing each ``city'' $E_k$ with a wedge \[
V_k=\{(x,y)\in\mathbf{R}^2: k^2|x-k|\le y \le 2k^2|x-k|\le 1 \}. \] In this case, the validity of a local {$p\mspace{1mu}$}-Poincar\'e inequality for $p>2$ follows from \cite[Example~A.23]{BBbook}.
It is also possible to make $X$ bounded if $\mathbf{R}\times(-\infty,0]$ in its construction is replaced by $(-1,1)\times(-1,0]$ and the ``cities'' or ``wedges'' are attached at the points $(1-8k^{-1},0)$ rather than at $x_k$, $k\ge8$. In this case, $X$ is not complete and does not support semilocal assumptions, because they would automatically imply global assumptions as $X$ is bounded, and thus semiuniformly local assumptions would follow as well. \end{example}
\section{Lebesgue points} \label{sect-Leb}
Using the better Poincar\'e inequalities of Section~\ref{sect-better-PI} it can be shown that Newtonian functions have $L^p$-Lebesgue points q.e.\ also under local assumptions. By H\"older's inequality every $L^p$-Lebesgue point is an $L^r$-Lebesgue point for all $1 \le r \le p$.
\begin{thm} \label{thm-Leb-pt} Assume that $p>1$, that $\mu$ is locally doubling and that one of the following conditions is satisfied\/\textup{:} \begin{enumerate} \item \label{Leb-a} $X$ is locally compact and supports a local {$p\mspace{1mu}$}-Poincar\'e inequality\/\textup{;} \item \label{Leb-b} for every $x \in X$ there are $r>0$ and $q<p$ such that a $q$-Poincar\'e inequality holds within $B(x,r)$ in the sense of Definition~\ref{def-PI}. \end{enumerate} If $u \in N^{1,p}\loc(X)$, then q.e.\ $x$ is an $L^p$-Lebesgue point of $u$, i.e. \[
\lim_{r \to 0} \vint_{B(x,r)} |u-u(x)|^p\, d\mu=0. \] \end{thm}
On metric spaces with global assumptions such results have been obtained by Kinnunen--Latvala~\cite{KiLa02} and Bj\"orn--Bj\"orn--Parviainen~\cite{BBP}. Traditionally (for Sobolev functions), as well as in \cite{KiLa02} and \cite{BBP}, this is shown using the density of continuous functions. Here we offer a different approach based on the fact that Newtonian functions are more precisely defined than arbitrary a.e.-representatives.
Note that, by Theorem~\ref{thm-locLip-dense-Om} below, locally Lipschitz functions are dense in $N^{1,p}\loc(X)$ under the assumptions in Theorem~\ref{thm-Leb-pt}, even though we do not use this fact here. In case~\ref{Leb-a} it also follows from Theorem~\ref{thm-when-qcont-new} below that the functions in $N^{1,p}\loc(X)$ are quasicontinuous. Even though this is not known in case~\ref{Leb-b}, they are still, by their Newtonian definition, more precisely defined than arbitrary a.e.-representatives, which enables us to prove the existence of Lebesgue points q.e.
\begin{proof}[Proof of Theorem~\ref{thm-Leb-pt}] Theorem~\ref{thm-loc-cpt-q-PI} shows that the assumptions~\ref{Leb-a} imply~\ref{Leb-b}. Thus in both cases we can consider a ball $B_0$ such that $u \in N^{1,p}(B_0)$ and a $q$-Poincar\'e inequality and the doubling property for $\mu$ hold within $B_0$, where $q<p$ depends on $B_0$. Theorem~\ref{thm-(q,p)-PI} shows that if $q<p$ is chosen close enough to $p$, then the Sobolev--Poincar\'e inequality \begin{equation} \label{eq-(q,q*)-PI}
\biggl(\vint_{B} |u-u_B|^{p} \,d\mu\biggr)^{1/p}
\le C r_B \biggl( \vint_{\lambda B} g^{q} \,d\mu \biggr)^{1/q}, \end{equation} with some $\lambda\ge2$, holds for all balls $B$ with $\frac52\lambda B \subset B_0$ and for every upper gradient $g$ of $u$.
For $x\in B_0$, let $r_j=2^{-j}$ and $v(x) = \limsup_{j\to\infty} v_j(x)$, where \begin{equation} \label{eq-def-v}
v_j(x) = \biggl( \vint_{B(x,r_j)} |u-u(x)|^{p}\,d\mu \biggr)^{1/p},
\quad j=0,1,\ldots. \end{equation} Note that $u\in L^{p}(B_0)$ and hence, by the Lebesgue differentiation theorem (Theorem~\ref{thm-Leb-ae}), $v=0$ a.e.\ in $B_0$. We shall show that $v\in N^{1,p}(B_0)$ and hence $v=0$ q.e.\ in $B_0$, by \cite[Proposition~1.59]{BBbook}.
To this end, let $\gamma \colon [0,l_\gamma] \to B_0$ be a nonconstant rectifiable curve (parameterized by arc length) and $g \in L^p(B_0)$ be an upper gradient of $u$. By splitting $\gamma$ into parts, if necessary, and by considering sufficiently large $j$, we can assume that $\frac{1}{2}r_j \le l_\gamma \le r_j$ and that $\tfrac52\lambda B\subset B_0$, where $B:=B(x,2r_j)$ with $x=\gamma(0)$ and $y=\gamma(l_\gamma)$ being the endpoints of~$\gamma$. Since $u\in L^{p}(B_0)$, both $v_j(x)$ and $v_j(y)$ are finite, and hence \begin{align*}
|v_j(x)-v_j(y)| &
\le \bigl|v_j(x)-|u_B-u(x)|\bigr| + \bigl|v_j(y)-|u_B-u(y)|\bigr|
+ |u(x)-u(y)| \\ &
\le \biggl( \vint_{B(x,r_j)} |u-u_B|^{p}\,d\mu \biggr)^{1/p}
+ \biggl( \vint_{B(y,r_j)} |u-u_B|^{p}\,d\mu \biggr)^{1/p}
+ \int_\gamma g\,ds, \end{align*} where we have used that by the triangle inequality for the normalized $L^p$-norm, \begin{align*}
\bigl|v_j(x)-|u_B-u(x)|\bigr| \kern -2em & \\ &
= \biggl| \biggl( \vint_{B(x,r_j)} |u-u(x)|^{p}\,d\mu \biggr)^{1/p}
- \biggl( \vint_{B(x,r_j)} |u_B-u(x)|^{p}\,d\mu \biggr)^{1/p} \biggr| \\
&\le \biggl( \vint_{B(x,r_j)} |(u-u(x))-(u_B-u(x))|^{p}\,d\mu \biggr)^{1/p}, \end{align*}
and similarly for $\bigl|v_j(y)-|u_B-u(y)|\bigr|$.
The local doubling property within $B_0$ and the Sobolev--Poincar\'e inequality~\eqref{eq-(q,q*)-PI} then imply that \begin{align}
|v_j(x)-v_j(y)| &\le C' \biggl( \vint_B |u-u_B|^{p}\,d\mu \biggr)^{1/p}
+ \int_\gamma g\,ds \nonumber \\
& \le C'' r_j \biggl( \vint_{\lambda B} g^{q}\,d\mu
\biggr)^{1/q} + \int_\gamma g\,ds \label{eq-q-Leb}\\
&\le C'' r_j \inf_{z \in \lambda B} g_M(z)
+ \int_\gamma g\,ds \nonumber\\
&\le C''' \int_\gamma (g_M+g) \,ds, \nonumber \end{align} where $C'''$ is independent of $j$ and $g_M^q:=M^*_{B_0,B_0}g^q$ is the noncentred local maximal function defined by \eqref{eq-def-local-max-fn}. Glueing together all the parts of $\gamma$ and by applying \cite[Lemma~1.52]{BBbook} we conclude that $C'''(g_M+g)$ is a {$p\mspace{1mu}$}-weak upper gradient of $v$ in $B_0$. Since $q<p$, the noncentred local maximal operator is bounded on $L^{p/q}(B_0)$, by Proposition~\ref{prop-maximal-fn}, which yields that \[ \int_{B_0} g_M^p \, d\mu \le C_0 \int_{B_0} g^p \,d\mu, \] and hence $v\in N^{1,p}(B_0)$, as required.
As $X$ is Lindel\"of there is a countable cover of $X$ by balls $B_0$ of the type above, and since the capacity is countably subadditive we conclude the existence of Lebesgue points q.e. \end{proof}
\begin{remark} \label{rem-Lq-Leb}
Since Lebesgue points are of a local nature, the proof of Theorem~\ref{thm-Leb-pt} can be modified so that \begin{equation} \label{eq-Leb-q(x)}
\lim_{r \to 0} \vint_{B(x,r)} |u-u(x)|^{q(x)}\, d\mu=0 \end{equation} holds for q.e.\ $x$ and all \[ q(x)< \begin{cases}
\displaystyle \frac{s(x)p}{s(x)-p}, &\text{if } s(x)>p, \\
\infty, &\text{if } s(x)\le p, \end{cases} \] where \[ s(x) = \lim_{r\to0} \inf \{s>0: \eqref{eq-local-dim-cond} \text{ holds for all balls } B'\subset B\subset B(x,r) \}, \] cf.\ Remark~\ref{rem-s-and-q}. If $\mu$ is semiuniformly locally doubling, then ${\hat{s}}:=\sup_x s(x) < \infty$, and we can use ${\hat{s}}$ instead of $s(x)$ to find a common $q=q(x)>p$ so that \eqref{eq-Leb-q(x)} holds for q.e.\ $x$.
In fact, if $s(x)$ is ``attained'' then it is even possible to reach the borderline case, as in Heinonen--Koskela--Shanmugalingam--Tyson~\cite[Theorem~9.2.8]{HKSTbook}. More precisely, if there is $r_0>0$ such that \[
\eqref{eq-local-dim-cond}
\text{ holds for all balls } B'\subset B\subset B(x,r_0)
\text{ with } s=s(x)>p, \] then we may let $q(x)=s(x)p/(s(x)-p)$. To see this, one uses the following estimate with $q=q(x)$ and $y \in B(x,r_0)$, \begin{align*} &
\limsup_{r\to0} \biggl( \vint_{B(y,r)} |u-u(y)|^{q}\,d\mu \biggr)^{1/q} \\ & \quad \quad \quad
\le \limsup_{r\to0}\biggl( \vint_{B(y,r)} |u-u_{B(y,r)}|^{q}\,d\mu \biggr)^{1/q}
+ \limsup_{r\to0}|u_{B(y,r)}-u(y)| \\ & \quad \quad \quad \le C \limsup_{r\to0}\biggl( r^p \vint_{B(y,r)} g_u^p\,d\mu \biggr)^{1/p}
+ \limsup_{r\to0} \vint_{B(y,r)} |u-u(y)|\,d\mu, \end{align*} where the $(q,p)$-Poincar\'e inequality within $B(x,r_0)$ is provided by Theorem~\ref{thm-(q,p)-PI}. The second term on the right-hand side tends to $0$ q.e., as we already know that $u$ has $L^1$-Lebesgue points q.e., while the first term tends to $0$ q.e.\ in $B(x,r_0)$ by Lemma~9.2.4 in Heinonen--Koskela--Shanmugalingam--Tyson~\cite{HKSTbook}.
In particular, if for each $x \in X$ there is $r>0$ such that \eqref{eq-local-dim-cond} holds for all balls $B'\subset B\subset B(x,r)$ with the same $s>0$ (independent of $x$), then $u$ has $L^q$-Lebesgue points q.e., with $q=sp/(s-p)$ for $p<s$ and all $q<\infty$ for $p\ge s$, provided that the assumptions in Theorem~\ref{thm-Leb-pt} are fulfilled. \end{remark}
\begin{remark} The proof of Theorem~\ref{thm-Leb-pt} shows that the assumptions can be further relaxed with a somewhat weaker conclusion. Namely, if $\mu$ is locally doubling and only supports a local {$p\mspace{1mu}$}-Poincar\'e inequality (which need not improve to a $q$-Poincar\'e inequality as $X$ is not necessarily locally compact), then it can be verified that the function $v$, defined by~\eqref{eq-def-v}, belongs to $N^{1,q}(B_0)$ for every $1\le q<p$. To see this one replaces $q$ by $p$ in \eqref{eq-q-Leb}, and then uses the $L^1$ to weak-$L^1$ boundedness \eqref{eq-max-weak-L1} of the noncentred local maximal operator, which yields that $g_M$ belongs to weak-$L^p(B_0)$ and thus to $L^q(B_0)$ for all $q<p$.
An immediate consequence is that $v=0$ outside a set of zero $q$-capacity and hence every $u\in N^{1,p}_{\rm loc}(X)$ has Lebesgue points $q$-quasieverywhere. Since it is not clear whether $\mu$ supports a local $q$-Poincar\'e inequality for some $q<p$, this cannot be deduced directly from the inclusion $N^{1,p}\loc(X) \subset N^{1,q}\loc(X)$ as we have no $q$-quasieverywhere Lebesgue point result available for functions in $N^{1,q}\loc(X)$ under these assumptions.
\section{Density of Lipschitz functions}
Density of smooth functions is a useful property of Sobolev spaces, with many important consequences (which we will discuss in Section~\ref{sect-qcont}). In metric spaces the smoothest functions to consider are (locally) Lipschitz functions. There are two types of density results for $N^{1,p}(X)$ in the literature. The first one is due to Shanmugalingam~\cite{Sh-rev} (it can also be found in \cite[Theorem~5.1]{BBbook} and \cite[Theorem~8.2.1]{HKSTbook}).
\begin{thm} \label{thm-dense-Nages} \textup{(Shanmugalingam~\cite[Theorem~4.1]{Sh-rev})} If $\mu$ is globally doubling and supports a global {$p\mspace{1mu}$}-Poincar\'e inequality, then Lipschitz functions are dense in $N^{1,p}(X)$. \end{thm}
The following result was recently obtained by Ambrosio--Colombo--Di Marino~\cite{AmbCD} and Ambrosio--Gigli--Savar\'e~\cite{AmbGS}. In fact, it is not explicitly spelled out in either paper, but it is a direct consequence of a combination of results in the two papers. Below we explain this in some detail, see Remark~\ref{rmk-Amb}. Note that by density we always mean norm-density in the $N^{1,p}$ norm, with the exception of Theorem~\ref{thm-Lip-weak-dense}.
\begin{thm} \textup{(Ambrosio et al.~\cite{AmbCD},~\cite{AmbGS})} \label{thm-AmbCDGS} Assume that $X$ is a complete globally doubling metric space and that $p >1$. Then Lipschitz functions are dense in $N^{1,p}(X)$. \end{thm}
The main difference in these two results is that the former assumes doubling for $\mu$ (and not just for $X$) together with a Poincar\'e inequality, while the latter requires completeness and $p>1$. Note that even though the doubling property and the Poincar\'e inequality extend from $X$ to its closure $\itoverline{X}$, it need not be true that $N^{1,p}(\itoverline{X})=N^{1,p}(X)$, cf.\ \cite[Lemma~8.2.3]{HKSTbook}. In other words, completeness (or at least local compactness) is not a negligible assumption. Thus both results have their pros and cons. Our aim in this section is to extend both of these results to local assumptions and to combine them into a unified result (when $p>1$).
\begin{remark} \label{rmk-Lip-dense} Without both completeness and a global Poincar\'e inequality, Lipschitz functions are not necessarily dense in $N^{1,p}(X)$, consider e.g.\ $X=\mathbf{R}\setminus\{0\}$ or the slit disc in $\mathbf{R}^2$, both of which support a local 1-Poincar\'e inequality. This also shows that the completeness assumption in Theorem~\ref{thm-AmbCDGS} cannot be dropped nor replaced by local compactness.
It is therefore natural to obtain density of locally Lipschitz functions in most of our results below. It should be mentioned that there is no known example of a metric space $X$ such that locally Lipschitz functions are not dense in $N^{1,p}(X)$. (A function $u: X \to \mathbf{R}$ is \emph{locally Lipschitz}
if for every $x \in X$ there is $r>0$ such that $u|_{B(x,r)}$ is Lipschitz.) \end{remark}
The following is a local generalization of Theorem~\ref{thm-dense-Nages}.
\begin{thm} \label{thm-locLip-dense-Om} If $\mu$ is locally doubling and supports a local {$p\mspace{1mu}$}-Poincar\'e inequality, then locally Lipschitz functions are dense in $N^{1,p}_{\rm loc}(X)$. \end{thm}
Note that if $\Omega$ is an open subset of $X$, then the local assumptions are inherited by $\Omega$ and hence we can also directly conclude the density of locally Lipschitz functions in $N^{1,p}\loc(\Omega$). The same is true for Theorem~\ref{thm-locLip-dense-Om-Amb-new} below.
To prove Theorem~\ref{thm-locLip-dense-Om} we will need the following lemma.
\begin{lem} \label{lem-Lip-dense-Np-2B} Assume that the {$p\mspace{1mu}$}-Poincar\'e inequality and the doubling property for $\mu$ hold within a ball $2B_0$ in the sense of Definitions~\ref{def-local-doubl-mu} and~\ref{def-PI}. Then every $u\in N^{1,p}(2B_0)$ can be approximated in the $N^{1,p}(B_0)$-norm by Lipschitz functions $u_k$ with support in $2B_0$. Moreover, $\mu(\{x\in B_0: u(x)\ne u_k(x)\}) \to 0$ as $k\to\infty$. \end{lem}
\begin{proof} By Proposition~\ref{prop-doubling-mu-2/3}, $B_0$ can be covered by finitely many balls $B_j$ with centres in $B_0$ and radii $r' =r_{B_0}/(22\lambda)$, where $\lambda$ is the dilation constant in the {$p\mspace{1mu}$}-Poincar\'e inequality within $2B_0$. Let $\{\varphi_j\}_j$ be a Lipschitz partition of unity on $\bigcup_j B_j$ subordinate to $2B_j$, e.g.\ one constructed as in the proof of Theorem~\ref{thm-locLip-dense-Om} below.
Let $u\in N^{1,p}(2B_0)$ be arbitrary and let $g\in L^p(2B_0)$ be an upper gradient of $u$. Noting that $\max\{\min\{u,k\},-k\} \to u$ in $N^{1,p}(2B_0)$, as $k \to \infty$, we can assume without loss of generality that
$|u| \le 1$. Using the noncentred local maximal function in \eqref{eq-def-local-max-fn}, let \[ E_t = \{x\in 2B_0: M^*_{2B_0,2B_0} g^p(x)>t^p\}. \] Proposition~\ref{prop-maximal-fn} then implies that \begin{equation} \label{eq-tp-mu-Et-0} t^p\mu(E_t) \le C_1 \int_{E_t} g^p\,d\mu \to0, \quad \text{as } t\to\infty. \end{equation} For a fixed $j$ and $x\in2B_j\setminus E_t$, we get for all $0<\tfrac12r\le\rho\le r\le 8r'$ that \begin{align*}
|u_{B(x,\rho)} - u_{B(x,r)}|
&\le \vint_{B(x,\rho)} |u - u_{B(x,r)}| \,d\mu
\le C_2 \vint_{B(x,r)} |u - u_{B(x,r)}| \,d\mu \\
&\le C_3 r \biggl( \vint_{\lambda B(x,r)} g^p \,d\mu \biggr)^{1/p}
\le C_3 r \bigl(M^*_{2B_0,2B_0} g^p(x)\bigr)^{1/p} \le C_3tr, \end{align*} since for such radii, $\tfrac52\lambda B(x,r)\subset22\lambda B_j \subset 2B_0$. A telescopic argument as in the proof of \cite[Theorem~5.1]{BBbook} then shows that the limit $\bar{u}(x)=\lim_{r\to0} u_{B(x,r)}$ exists everywhere in $2B_j\setminus E_t$ and is $C_4t$-Lipschitz therein. Also, by Lebesgue's differentiation theorem (Theorem~\ref{thm-Leb-ae}), $\bar{u}=u$ a.e.\ in $2B_j\setminus E_t$. Using e.g.\ truncated McShane extensions, $\bar{u}$ extends to a $C_4t$-Lipschitz function
$u_{t,j}$ on $2B_j$ such that $|u_{t,j}| \le 1$. Then also $u_t=\sum_j \varphi_j u_{t,j}$ equals $u$ a.e.\ in $B_0\setminus E_t$. In view of \cite[Corollary~2.21]{BBbook}, it follows that $(g+C_4t)\chi_{E_t \cap B_0}$ is a {$p\mspace{1mu}$}-weak upper gradient of $u-u_t$ and hence \[
\|u-u_t\|_{N^{1,p}(B_0)}^p
\le 2^p \mu(E_t \cap B_0) +
\bigl( \|g\|_{L^p(E_t \cap B_0)} + C_4 t \mu(E_t \cap B_0)^{1/p}
\bigr)^p. \] Since $g\in L^p(2B_0)$, we conclude from~\eqref{eq-tp-mu-Et-0} that $u_t\to u$ in $N^{1,p}(B_0)$. By construction, $u_t$ is Lipschitz in $2B_0$ and $\supp u_t \subset \bigcup_j 2B_j \subset 2B_0$. \end{proof}
\begin{proof}[Proof of Theorem~\ref{thm-locLip-dense-Om}] Let $u\in N^{1,p}_{\rm loc}(X)$. For every $x\in X$, let $B_x=B(x,r_x)$ be a ball such that $u \in N^{1,p}(B_x)$ and such that the {$p\mspace{1mu}$}-Poincar\'e inequality and the doubling property for $\mu$ hold within $B_x$. As $X$ is Lindel\"of, we can find a countable subcollection such that $X=\bigcup_{j=1}^\infty \tfrac14 B_{x_j}$. Let $B_j=\tfrac14 B_{x_j}$, $j=1,2,\ldots$.
We construct a suitable Lipschitz partition of unity. For each $j$ we find $\psi_j\in\Lip(X)$ such that $\chi_{B_j} \le \psi_j \le \chi_{2B_j}$. Let inductively $\varphi_1=\psi_1$ and \[
\varphi_j:= \psi_j \biggl( 1- \sum_{k=1}^{j-1} \varphi_k\biggr), \quad j\ge2. \] Then $ \sum_{k=1}^j \varphi_k=1$ in $B_j$, and hence $\varphi_k \equiv 0$ in $B_j$ for $k >j$, so that \[
\sum_{k=1}^\infty \varphi_k=1 \quad \text{in } B_j. \] As this holds for all $B_j$ we see that $\{\varphi_j\}_{j=1}^\infty$ is a partition of unity.
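Indeed, a simple induction yields the explicit formulas
\[
\varphi_j = \psi_j \prod_{k=1}^{j-1}(1-\psi_k)
\quad \text{and} \quad
1-\sum_{k=1}^{j} \varphi_k = \prod_{k=1}^{j}(1-\psi_k),
\quad j=1,2,\ldots,
\]
from which these properties, as well as $0\le\varphi_j\le1$, follow directly, since $\psi_j=1$ in $B_j$.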
Since $4B_j = B_{x_j}$, the assumptions of Lemma~\ref{lem-Lip-dense-Np-2B} are satisfied for each $2B_j$ (in place of $B_0$). Hence, for every $\varepsilon>0$ and each $j$, there is $v_j\in\Lip(2B_j)$ with \[
\|u-v_j\|_{N^{1,p}(2B_j)} \le \frac{2^{-j}\varepsilon}{1+L_j}, \] where $L_j$ is the Lipschitz constant of $\varphi_j$. Then also \begin{align*}
\|\varphi_j (u-v_j)\|_{N^{1,p}(2B_j)}^p
&\le \|u-v_j\|_{L^p(2B_j)}^p
+ (L_j\|u-v_j\|_{L^p(2B_j)} + \|g_{u-v_j}\|_{L^p(2B_j)})^p \\
& \le 2(1+L_j)^p \|u-v_j\|_{N^{1,p}(2B_j)}^p
\le 2^{1-pj} \varepsilon^p. \end{align*} As $v:=\sum _{j=1}^\infty \varphi_j v_j$ is a locally finite sum of Lipschitz functions, $v$ is locally Lipschitz. Combining this with the above estimate we conclude that \[
\|u-v\|_{N^{1,p}(X)} \le \sum_{j=1}^\infty \|\varphi_j(u-v_j)\|_{N^{1,p}(2B_j)} \le 2^{1/p}\varepsilon. \qedhere \] \end{proof}
We now turn to Theorem~\ref{thm-AmbCDGS}, which we generalize in the following way, also taking into account Theorem~\ref{thm-dense-Nages}. Note that the set of points for which \ref{Amb-a} resp.\ \ref{Amb-b} below holds is open. Thus, if $X$ is connected, these two sets cannot be disjoint.
\begin{thm} \label{thm-locLip-dense-Om-Amb-new} Let $p>1$ and assume that for every $x \in X$ there is a ball $B_x \ni x$ such that either \begin{enumerate} \item \label{Amb-a} $B_x$ is locally compact and globally doubling\/\textup{;} or \item \label{Amb-b} the {$p\mspace{1mu}$}-Poincar\'e inequality and the doubling property for $\mu$ hold within $B_x$ in the sense of Definitions~\ref{def-local-doubl-mu} and~\ref{def-PI}. \end{enumerate} Then locally Lipschitz functions are dense in $N^{1,p}\loc(X)$.
In particular, this holds if $p>1$ and $X$ is locally compact and locally doubling. \end{thm}
\begin{proof} Without loss of generality we can replace the assumption \ref{Amb-a} by \begin{enumerate} \renewcommand{\theenumi}{\textup{(\alph{enumi}$'$)}} \item \label{Amb-a'} $\itoverline{B}_x$ is compact and globally doubling. \end{enumerate}
The proof is now the same as the proof of Theorem~\ref{thm-locLip-dense-Om}, but with appeal to Theorem~\ref{thm-AmbCDGS} instead of Lemma~\ref{lem-Lip-dense-Np-2B} for the balls $B_x$ satisfying \ref{Amb-a'}. \end{proof}
So far we have deduced the density for locally Lipschitz functions. A natural question is when density can be obtained for Lipschitz functions. Note that under the assumptions in Theorems~\ref{thm-locLip-dense-Om} and~\ref{thm-locLip-dense-Om-Amb-new} it can happen that Lipschitz functions are not dense, see Remark~\ref{rmk-Lip-dense}.
\begin{thm} \label{thm-semi-dense} Assume that $\mu$ is semilocally doubling and supports a semilocal {$p\mspace{1mu}$}-Poincar\'e inequality. Then Lipschitz functions with bounded support are dense in $N^{1,p}(X)$. If $X$ is also proper, then $\overline{{\Lip_c}(X)} =N^{1,p}(X)$. \end{thm}
${\Lip_c}(X)$ denotes the space of Lipschitz functions with compact support.
\begin{proof} Let $u \in N^{1,p}(X)$ and $\varepsilon >0$. Find a sufficiently large ball $B$ with $r_B>1$ such that
$\|u\|_{N^{1,p}(X \setminus \itoverline{B})} < \varepsilon$. By Lemma~\ref{lem-Lip-dense-Np-2B} and the semilocal assumptions, there is $v\in\Lip(X)$ so that $\|u-v\|_{N^{1,p}(2B)}<\varepsilon$.
For $\eta(x)=(1-\dist(x,B))_+$ we then get that
$v_\varepsilon:=v\eta\in\Lip(X)$ with $\supp v_\varepsilon \subset 2B$ and $u-v_\varepsilon = u(1-\eta) + (u-v)\eta$. Since \[
g_{u(1-\eta)} \le |u| g_\eta + (1-\eta)g_u
\quad \text{and}\quad g_{(u-v)\eta} \le \eta g_{u-v} + |u-v|g_\eta, \]
a simple calculation yields $\|u-v_\varepsilon\|_{N^{1,p}(X)} <6\varepsilon$, which concludes the proof of the first part. If $X$ is also proper then $v_\varepsilon$ has compact support and thus $\overline{{\Lip_c}(X)} =N^{1,p}(X)$. \end{proof}
\begin{proof}[Proof of Theorem~\ref{thm-Lipc-dense-intro}.] The assumptions in Theorem~\ref{thm-semi-dense} are satisfied because of Proposition~\ref{prop-semilocal-doubling-intro} and Theorem~\ref{thm-PI-intro}, and hence the result follows. \end{proof}
\begin{thm} \label{thm-Lip-dense-compl} Assume that $X$ is a complete semilocally doubling metric space and that $p>1$. Then $\overline{{\Lip_c}(X)} =N^{1,p}(X)$. \end{thm}
Note that, by Lemma~\ref{lem-proper-equiv-complete} and Proposition~\ref{prop-semilocal-doubling}, a metric space is complete and semilocally doubling if and only if it is proper and locally doubling. A natural question is if Theorem~\ref{thm-Lip-dense-compl} holds when $X$ is only complete and locally doubling. The completeness in Theorem~\ref{thm-Lip-dense-compl} cannot be replaced by local compactness, see Remark~\ref{rmk-Lip-dense}.
\begin{proof} This is quite similar to the proof of Theorem~\ref{thm-semi-dense}. Let $u \in N^{1,p}(X)$ and $\varepsilon >0$. Find a sufficiently large ball $B$ with $r_B>1$ such that
$\|u\|_{N^{1,p}(X \setminus \itoverline{B})} < \varepsilon$. By the semilocal assumptions, $\overline{2B}$ is a complete globally doubling metric space. Hence by Theorem~\ref{thm-AmbCDGS}
there is $v\in\Lip(\overline{2B})$ so that $\|u-v\|_{N^{1,p}(2B)}<\varepsilon$. The rest of the proof is identical to the second half of the proof of Theorem~\ref{thm-semi-dense}, since $X$ is proper by Lemma~\ref{lem-proper-equiv-complete}. \end{proof}
\begin{remark} \label{rmk-Amb} We will now explain how Theorem~\ref{thm-AmbCDGS} follows from Ambrosio--Colombo--Di Marino~\cite{AmbCD} and Ambrosio--Gigli--Savar\'e~\cite{AmbGS}. In~\cite{AmbGS}, a function $g$ is called a {$p\mspace{1mu}$}-upper gradient of a function $f$ if there is $\tilde{f}$ such that $\tilde{f}=f$ a.e.\ and $g$ is a {$p\mspace{1mu}$}-weak upper gradient of $\tilde{f}$ in the sense of Definition~\ref{deff-ug}. In \cite{AmbGS} they also define {$p\mspace{1mu}$}-weak upper gradients (different from ours), {$p\mspace{1mu}$}-relaxed upper gradients and {$p\mspace{1mu}$}-relaxed slopes. Furthermore, they show that if a function $f \in L^p(X)$ has a gradient $g\in L^p(X)$ in any of these four senses, then it has one in each of the four senses and the a.e.-minimal ones coincide. This is shown in \cite[Theorem~7.4 and Section~8.1]{AmbGS} assuming completeness of $X$.
In Ambrosio--Colombo--Di Marino~\cite{AmbCD}, there is a different definition of {$p\mspace{1mu}$}-relaxed slope and the same definition of {$p\mspace{1mu}$}-weak upper gradient as in \cite{AmbGS}. In~\cite[Theorem~6.1]{AmbCD} it is shown that if $X$ is complete and $f \in L^p(X)$ has a gradient $g \in L^p(X)$ in either of these two senses, then it has one in the other and the a.e.-minimal ones coincide. So in conclusion the a.e.-minimal gradients in $L^p(X)$ in all five senses exist simultaneously and then coincide, assuming that $f \in L^p(X)$ and that $X$ is complete. Moreover, the Sobolev space $W^{1,p}(X)$ in \cite[Corollary~7.5]{AmbCD} thus coincides with \begin{equation} \label{eq-hNp-deff}
\widehat{N}^{1,p} (X) = \{u : u=v \text{ a.e. for some } v \in N^{1,p}(X)\} \end{equation} (recall that functions in $N^{1,p}(X)$ are defined pointwise everywhere).
The following density result is a special case of the equivalence result from \cite[Theorem~7.4 and Section~8.1]{AmbGS}. (Note that this is not a norm-density result as elsewhere in this section.)
\begin{thm} \label{thm-Lip-weak-dense} Assume that $X$ is complete and $p>1$. Let $f \in N^{1,p}(X)$. Then there exist Lipschitz functions $f_n$ such that \[
\lim_{n \to \infty} \biggl(\int_X |f_n-f|^p \,d\mu
+\int_X |{\Lip f_n-g_f}|^p \,d\mu \biggr)=0, \] where \[
\Lip f_n(x) := \limsup_{r\to0} \sup_{y\in B(x,r)} \frac{|f_n(y)-f_n(x)|}{r} \] is the \emph{local upper Lipschitz constant} \textup{(}also called\/ \emph{upper pointwise dilation}\textup{)} of $f_n$, and $g_f$ is the minimal {$p\mspace{1mu}$}-weak upper gradient of $f$ \textup{(}in the sense of Definition~\ref{deff-ug}\textup{)}. \end{thm}
As explained at the bottom of p.\ 1 of \cite{AmbCD}, once we know reflexivity, the norm-density of Lipschitz functions follows directly using Mazur's lemma. This reflexivity was obtained in Corollary~7.5 in \cite{AmbCD} when $X$ is complete and globally doubling, and thus Theorem~\ref{thm-AmbCDGS} follows.
Completeness is not assumed explicitly in \cite[Corollary~7.5]{AmbCD}, but the result relies on other results in \cite{AmbCD} (e.g.\ Theorem~7.4) for which completeness is assumed. And indeed, the completeness assumption cannot be dropped or replaced by assuming that $X$ is merely locally compact for the density result, since norm-density of Lipschitz functions then can fail, see Remark~\ref{rmk-Lip-dense}. The same counterexamples show that Theorem~\ref{thm-Lip-weak-dense} cannot hold (in general) in locally compact spaces with Lipschitz functions $f_n$. Whether it can hold in locally compact spaces, if $f_n$ are just required to be locally Lipschitz, is not clear because the ``partition of unity'' technique used in the proof of Theorem~\ref{thm-locLip-dense-Om} cannot be used to construct such an extension, at least not in such an easy way as here, since it would require controlling
$\|{\Lip v - g_{f}}\|$ in terms of
$\|{\Lip (\varphi_j v_j) - g_{f_j}}\|$.
A slight word of warning: one may get the impression that as a consequence of Theorem~\ref{thm-Lip-weak-dense} one can deduce that $g_f= \Lip f$ a.e.\ for Lipschitz functions. This is not so, and indeed it is not true in general, as seen by considering e.g.\ the von Koch snowflake curve, on which $g_f \equiv 0$ for all functions, because of the lack of rectifiable curves. The equality $g_f= \Lip f$ a.e.\ for Lipschitz functions is however true if $X$ is complete, $p>1$ and $\mu$ is globally doubling and supports a global {$p\mspace{1mu}$}-Poincar\'e inequality, by Theorem~6.1 in Cheeger~\cite{Cheeg}. \end{remark}
\begin{remark}
Occasionally it may be interesting to know when (locally) Lipschitz
or continuous functions are dense in $N^{1,p}(X)$ even when $X\ne\supp\mu$, in which case our general condition that all balls have positive measure is invalid.
It turns out that this happens if and only if they are dense
in $N^{1,p}(\supp \mu)$.
For Lipschitz and continuous functions this is Lemma~5.19\,(e) in
\cite{BBbook}.
For locally Lipschitz functions this can be proved similarly,
provided that one uses the locally Lipschitz extensions
due to Luukkainen--V\"ais\"al\"a~\cite[Theorem~5.7]{LuukkV77}. (Note that the class LIP in \cite{LuukkV77} consists of locally Lipschitz functions.)
For quasicontinuity, which will be discussed in the next section,
a similar equivalence is also true, by \cite[Lemma~5.19\,(d)]{BBbook}. \end{remark}
\section{Quasicontinuity and other consequences} \label{sect-qcont}
Having established the density of continuous (or more exactly locally Lipschitz) functions we can now draw a number of qualitative conclusions about Newtonian functions and capacities.
Throughout this section, $\Omega \subset X$ is open. A function $u:\Omega \to {\overline{\R}}$ is \emph{quasicontinuous} if for every $\varepsilon>0$ there is an open set $G$
with ${C_p}(G)<\varepsilon$ such that $u|_{\Omega \setminus G}$ is continuous. See the recent paper Bj\"orn--Bj\"orn--Mal\'y~\cite{BBMaly} for several different characterizations of quasicontinuity, and in particular that one can equivalently replace the condition ${C_p}(G) <\varepsilon$ by ${C_p^\Omega}(G) < \varepsilon$, where ${C_p^\Omega}$ is the capacity associated with $N^{1,p}(\Omega)$ rather than with $N^{1,p}(X)$. Note also that, in the following theorem, $X$ can be replaced by any open subset of $X$. Moreover, the conditions in \ref{q-a} and \ref{q-b} below are inherited by open subsets.
\begin{thm} \label{thm-when-qcont-new} Assume that $X$ is locally compact and that one of the following conditions is satisfied\/\textup{:} \begin{enumerate} \item \label{q-a} $\mu$ is locally doubling and supports a local {$p\mspace{1mu}$}-Poincar\'e inequality, \item \label{q-b} $p>1$ and $X$ is locally doubling, or more generally the conditions in Theorem~\ref{thm-locLip-dense-Om-Amb-new} are satisfied\/\textup{;} \item \label{q-c} continuous functions are dense in $N^{1,p}(X)$. \end{enumerate}
Then every $u \in N^{1,p}\loc(X)$ is quasicontinuous in $X$ and hence ${C_p}$ is an outer capacity, i.e.\ \[ {C_p}(E)=\inf_{\substack{ G \supset E \\ G \text{ open}}} {C_p}(G) \quad \text{for every } E \subset X. \] \end{thm}
Quasicontinuity has been established for Newtonian functions under various assumptions in Bj\"orn--Bj\"orn--Shanmugalingam~\cite{BBS5}, Bj\"orn--Bj\"orn--Lehrb\"ack~\cite{BBLeh1} and Heinonen--Koskela--Shanmugalingam--Tyson~\cite{HKSTbook} for open subsets. See also Shanmugalingam~\cite{Sh-rev}. Assuming that $X$ is complete and that $\mu$ is globally doubling and supports a global {$p\mspace{1mu}$}-Poincar\'e inequality, quasicontinuity was also established for functions in $N^{1,p}(U)$ when $U$ is a quasiopen subset of $X$, by Bj\"orn--Bj\"orn--Latvala~\cite{BBLat3} (when $p>1$) and Bj\"orn--Bj\"orn--Mal\'y~\cite{BBMaly}.
For $p>1$, quasicontinuity also implies that ${C_p}$ is a Choquet capacity and thus, if $X$ is locally compact, that all Borel sets are capacitable, i.e.\ \[ {C_p}(E)=\sup_{\substack{ K \subset E \\ K \text{ compact}}} {C_p}(K) \quad \text{for every Borel set } E \subset X, \] see e.g.\ Aikawa--Ess\'en~\cite[Part~2, Section~10]{AE} together with \cite[Theorems~6.4 and 6.7\,(viii)]{BBbook}. It should be mentioned that there is no known example of a Newtonian function which is not quasicontinuous, nor of a metric space $X$ such that continuous functions are not dense in $N^{1,p}(X)$.
\begin{proof}[Proof of Theorem~\ref{thm-when-qcont-new}] By Theorems~\ref{thm-locLip-dense-Om} and~\ref{thm-locLip-dense-Om-Amb-new}, \ref{q-a} $\imp$ \ref{q-c} and \ref{q-b} $\imp$ \ref{q-c}. So assume that \ref{q-c} holds. By \cite[Theorem~5.21]{BBbook} every $u \in N^{1,p}\loc(X)$ has a quasicontinuous representative. As $X$ is locally compact, Proposition~4.7 in Bj\"orn--Bj\"orn--Lehrb\"ack~\cite{BBLeh1} then shows that every $u \in N^{1,p}\loc(X)$ is quasicontinuous. The outer capacity property then follows from Bj\"orn--Bj\"orn--Shanmugalingam~\cite[Corollary~1.3]{BBS5} (or \cite[Theorem~5.31]{BBbook}). \end{proof}
Quasicontinuity of $N^{1,p}(X)$ is inherited by open subsets in the
following way. Recall the definition of $\widehat{N}^{1,p}(\Omega)$ in \eqref{eq-hNp-deff}.
\begin{prop} \label{prop-qcont-consequences} If every $u \in N^{1,p}(X)$ is quasicontinuous, then $N^{1,p}_{\rm loc}(\Omega)$ consists exactly of those $u\in\widehat{N}^{1,p}_{\rm loc} (\Omega)$ which are quasicontinuous, and similarly for $N^{1,p}(\Omega)$. \end{prop}
\begin{proof} Clearly, $N^{1,p}_{\rm loc}(\Omega)\subset\widehat{N}^{1,p}_{\rm loc}(\Omega)$. Multiplying $u\in N^{1,p}_{\rm loc}(\Omega)$ by Lipschitz cut-off functions shows that for each $x \in \Omega$ there is $r_x>0$ such that $u$ is quasicontinuous in $B(x,r_x)$. As $X$ is Lindel\"of, a countable covering of $\Omega$ by such balls yields quasicontinuity in $\Omega$.
Conversely, by an argument due to Kilpel\"ainen~\cite{Kilpqe}, every quasicontinuous $u\in\widehat{N}^{1,p}_{\rm loc}(\Omega)$ is q.e.\ equal to a Newtonian function and hence itself in $N^{1,p}_{\rm loc}(\Omega)$, cf.\ \cite[Propositions~5.22 and~5.23]{BBbook}. \end{proof}
Quasicontinuity, or rather the outer capacity property following from it, provides us with a short proof of the following fact, cf.\ Kallunki--Shanmugalingam~\cite{KaSh} where it was proved under stronger assumptions. A similar statement is not true if we drop the assumption that $K$ be compact. (Let e.g.\ $K$ be a countable dense subset of a ball in $\mathbf{R}^n$ and $p\le n$.)
\begin{prop} Assume that ${C_p}$ is an outer capacity and that continuous resp.\ {\rm(}locally\/{\rm)} Lipschitz functions are dense in $N^{1,p}(X)$. If $K\subset X$ is compact, then \[
{C_p}(K) = \inf
\|u\|_{N^{1,p}(X)}^p, \] where the infimum is taken over all continuous resp.\ {\rm(}locally\/{\rm)} Lipschitz $u$ such that $u\ge1$ on $K$. \end{prop}
Note that if continuous functions are dense in $N^{1,p}(X)$, then the condition that ${C_p}$ is an outer capacity is equivalent to requiring that all functions in $N^{1,p}(X)$ are quasicontinuous, see Theorems~5.20, 5.31 and Proposition~5.32 in \cite{BBbook}.
\begin{proof} Given $\varepsilon>0$, there exist an open set $G\supset K$ and $u\inN^{1,p}(X)$ such that $u=1$ on $G$ and \begin{equation} \label{eq-CpK}
\|u\|_{N^{1,p}(X)}^p < {C_p}(K)+\varepsilon. \end{equation} Since $K$ is compact, there exists $0<\delta\le1$ such that $\dist(K,X\setminus G)>2\delta$. Let $\eta(x):=\min\{1,\dist(x,K)/\delta\}$
and set $\tilde{v}=u+\eta(v-u)$, where $v$ is continuous resp.\ (locally) Lipschitz in $X$ and such that $\|v-u\|_{N^{1,p}(X)}<\varepsilon\delta$. Then $\tilde{v}=1$ on $K$ and, as $g_{\eta(v-u)}\le\eta g_{v-u}+|v-u|g_\eta$, we also have \[
\|g_{\eta(v-u)}\|_{L^p(X)} \le \|g_{v-u}\|_{L^p(X)} + \frac{1}{\delta} \|v-u\|_{L^p(X)}
\le \frac{2}{\delta} \|v-u\|_{N^{1,p}(X)} < 2\varepsilon. \] It then follows that \begin{align*}
{C_p}(K) &\le \|\tilde{v}\|_{N^{1,p}(X)}^p \\
&\le (\|u\|_{L^p(X)}+ \|v-u\|_{L^p(X)}) ^p + (\|g_u\|_{L^p(X)}
+ \|g_{\eta(v-u)}\|_{L^p(X)}) ^p \\
&\le (\|u\|_{L^p(X)}+ \varepsilon \delta) ^p + (\|g_u\|_{L^p(X)}
+ 2\varepsilon) ^p, \end{align*} which, by \eqref{eq-CpK}, tends to ${C_p}(K)$ as $\varepsilon \to 0$.
Finally, $\tilde{v}=1-\eta+\eta v$ is continuous resp.\ (locally) Lipschitz in $G$ (as $u\equiv1$ therein), while $\tilde{v}=v$ is continuous resp.\ (locally) Lipschitz in the open set \[ X\setminus\supp(1-\eta)\supset X\setminus G. \] It follows that $\tilde{v}$ is continuous resp.\ locally
Lipschitz in $X$ and, as $\dist(X\setminus G,\supp(1-\eta))>\delta$,
also Lipschitz in $X$ whenever $v$ is Lipschitz. \end{proof}
\section{Conclusions for \texorpdfstring{{$p\mspace{1mu}$}}{p}-harmonic functions}
Nonlinear potential theory associated with {$p\mspace{1mu}$}-harmonic functions and quasiminimizers, $p>1$, has been extensively studied during the last 20 years on complete metric spaces equipped with globally doubling measures supporting a global {$p\mspace{1mu}$}-Poincar\'e inequality, see e.g.\ \cite{BBbook} and the references therein. It is therefore natural to see to which extent this theory can be extended to local assumptions.
In much of this theory the properness of $X$ plays an important role and even though some of the theory has already been developed on noncomplete spaces (see in particular Kinnunen--Shanmugalingam~\cite{KiSh01}, Bj\"orn~\cite{Bj02} and Bj\"orn--Marola~\cite{BMarola}), we will in this section restrict ourselves to \emph{proper $X$}. (See Bj\"orn--Bj\"orn~\cite[Section~6]{BBnoncomp} for a similar discussion without the properness assumption.) \emph{We will also assume that $X$ is connected, that $\mu$ is locally doubling and supports a local {$p\mspace{1mu}$}-Poincar\'e inequality, and that $p>1$.}
As we have seen, it then follows from Proposition~\ref{prop-semilocal-doubling-intro} and Theorem~\ref{thm-PI-intro} that $\mu$ is semilocally doubling and supports a semilocal {$p\mspace{1mu}$}-Poincar\'e inequality. The results in this paper show that most of the essential tools needed to develop the potential theory on metric spaces are available also under these assumptions.
\begin{deff} \label{def-quasimin} A function $u \in N^{1,p}\loc(\Omega)$ is a \emph{$Q$-quasi\/\textup{(}super\/\textup{)}minimizer}, $Q \ge 1$, in $\Omega$ if \begin{equation} \label{eq-deff-qmin}
\int_{\varphi \ne 0} g^p_u \, d\mu
\le Q \int_{\varphi \ne 0} g_{u+\varphi}^p \, d\mu \end{equation} for all (nonnegative) $\varphi \in {\Lip_c}(\Omega)$.
If $Q=1$ in \eqref{eq-deff-qmin} then $u$ is a \emph{\textup{(}super\/\textup{)}minimizer}. A \emph{{$p\mspace{1mu}$}-harmonic function} is a continuous minimizer. \end{deff}
See Bj\"orn~\cite[Proposition~3.2]{ABkellogg} for equivalent ways of defining quasisuperminimizers; those equivalences also extend to spaces with our local assumptions. (Here Theorem~\ref{thm-semi-dense} is needed.) Our first observation is that interior regularity is preserved under local assumptions. A function $u$ on $\Omega$ is \emph{lsc-regularized} if $u(x)=\essliminf_{y \to x} u(y)$ for all $x \in \Omega$.
\begin{thm} \label{thm-semiloc-int-reg} Let $u$ be a quasi\/\textup{(}super\/\textup{)}minimizer in $\Omega$. Then $u$ has a representative ${\tilde{u}}$ which is continuous \textup{(}resp.\ lsc-regularized\/\textup{)}.
Moreover, the weak Harnack inequalities for quasi\/\textup{(}super\/\textup{)}minimizers hold within every ball $B_0\subset X$, i.e.\ for every ball $B \subset B_0$ with $\Lambda B\subset\Omega$, where $\Lambda$ and the weak Harnack constants depend only on $B_0$. \end{thm}
See e.g.\ Kinnunen--Martio~\cite{KiMa02}, \cite{KiMa03}, Kinnunen--Shanmugalingam~\cite{KiSh01}, Bj\"orn--Marola~\cite{BMarola} and Bj\"orn--Bj\"orn~\cite{BBbook} for formulations of the weak Harnack inequalities. There are various types of weak Harnack inequalities in these papers and under different assumptions. In \cite{KiMa02}, \cite{KiMa03} and \cite{KiSh01} a $q$-Poincar\'e inequality for some $q<p$ is assumed, which under our assumptions is provided by Theorem~\ref{thm-loc-cpt-q-PI}. Here $q$ will depend on the ball $B_0$.
Note that some weak Harnack inequalities in \cite{KiMa02}, \cite{KiMa03} and \cite{KiSh01} need to be modified, taking into account the dilation constant $\lambda$ from the {$p\mspace{1mu}$}-Poincar\'e inequality, see Bj\"orn--Marola~\cite[Section~10]{BMarola}. This is reflected in the constant $\Lambda\ge1$ in Theorem~\ref{thm-semiloc-int-reg} in the following way: Several of the weak Harnack inequalities in \cite{BBbook} contain a requirement that $50 \lambda B \subset \Omega$. (The factor $50$ is not the same in all the papers.) For a fixed ball $B_0\subset X$, we let $C_{\rm PI}$ and $\lambda$ be the constants in the {$p\mspace{1mu}$}-Poincar\'e inequality (or $q$-Poincar\'e inequality) within $50 B_0$, and $C_\mu$ be the doubling constant within $50 \lambda B_0$. The weak Harnack inequality then holds for every ball $B \subset B_0$ provided that $50 \lambda B \subset\Omega$ and with a constant depending only on $C_{\rm PI}$, $\lambda$, $C_\mu$ and $p$ (and $q$).
\begin{proof}[Proof of Theorem~\ref{thm-semiloc-int-reg}] The arguments in \cite[Section~4 and Theorem~5.1]{KiMa02}, \cite[Section~5]{KiMa03} and \cite{KiSh01} are all local, so local assumptions are enough. They do rely on a better $q$-Poincar\'e inequality but a suitable version is provided by Theorem~\ref{thm-KZ-proper}, as continuity is a local property. For the lsc-regularity of quasisuperminimizers also a Lebesgue point result is needed, which is justified by Theorem~\ref{thm-Leb-pt}.
For the weak Harnack inequalities it is explained above how the semilocal dependence on the constants is achieved. \end{proof}
Apart from the fact that the parameters may only be semilocal, the results in Chapters~7--14 in \cite{BBbook} all hold, with the exception of the Liouville theorem, which we look at in Example~\ref{ex-Liouville} below. This is because all the other results are of a local or semilocal nature, i.e.\ either in a bounded domain or concerning a local or semilocal property. In particular, in addition to the interior regularity in Theorem~\ref{thm-semiloc-int-reg}, one can prove various convergence results, minimum and maximum principles, solve the Dirichlet and the obstacle problem on bounded domains and obtain boundary regularity and resolutivity for suitable boundary data.
On the other hand, for results of a global nature, such as the Dirichlet problem on unbounded domains (as in Hansevi~\cite{Hansevi1}, \cite{Hansevi2}) or global singular functions (as in Holopainen--Shanmugalingam~\cite{HoSh}), it is far from clear whether they hold under (semi)local assumptions. Thus, it is precisely when studying ``global'' properties that it is really interesting to know if the results hold with only local assumptions, possibly (semi)uniform ones, or if perhaps other properties of the space play a vital role. As an example of a ``global'' result, we will now have a look at the Liouville theorem, and show that it does not hold in the generality considered here,
not even under uniformly local assumptions, cf.\ Section~\ref{sect-local-unif}.
\begin{example} \label{ex-Liouville} Let $d\mu =w\, dx$ on $\mathbf{R}$, where $\alpha \in \mathbf{R}$ and \[
w(x)=\begin{cases}
1, & |x| \le 1, \\
|x|^\alpha, & |x| \ge 1.
\end{cases} \] The measure $\mu$ is globally doubling and supports a uniformly local $1$-Poincar\'e inequality, and thus a semilocal $1$-Poincar\'e inequality. (And a global {$p\mspace{1mu}$}-Poincar\'e inequality if and only if $\alpha < p-1$.) A simple calculation shows that a function $u$ is {$p\mspace{1mu}$}-harmonic on $(\mathbf{R},\mu)$ if and only if there is a constant $c$ such that $u'(x)= c w(x)^{1/(1-p)}$. If $\alpha>p-1$, this gives bounded nonconstant {$p\mspace{1mu}$}-harmonic functions on $(\mathbf{R},\mu)$, namely \[
u(x)=\begin{cases}
cx+b, & |x| \le 1, \\
\displaystyle b + c \biggl( \frac{1}{\beta} +1
- \frac{1}{\beta|x|^\beta} \biggr) \sgn x, & |x| \ge 1,
\end{cases} \quad \text{where } \beta=\frac{\alpha-(p-1)}{p-1}>0 \] and $b,c \in \mathbf{R}$ are arbitrary constants. This shows that the Liouville theorem does not hold under semilocal assumptions, nor under uniformly local ones. \end{example}
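For the reader's convenience, we sketch the calculation behind Example~\ref{ex-Liouville}. On the weighted line $(\mathbf{R},\mu)$ the minimal {$p\mspace{1mu}$}-weak upper gradient of a locally absolutely continuous function $u$ is $|u'|$ $\mu$-a.e., since $w$ is locally bounded away from $0$ and $\infty$. Hence $u$ is a minimizer in the sense of Definition~\ref{def-quasimin} if and only if it is a weak solution of \[
(w|u'|^{p-2}u')'=0, \quad \text{i.e.}\quad w|u'|^{p-2}u'\equiv \mathrm{const},
\] which amounts to $u'=cw^{1/(1-p)}$ for some $c\in\mathbf{R}$. Integrating this identity, with $w$ as in Example~\ref{ex-Liouville}, yields the formula for $u$ stated there, and the nonconstant solutions are bounded exactly when $\beta>0$, i.e.\ when $\alpha>p-1$.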
\end{document} | arXiv |
Intuition of error in Taylor approximation and finding the error in the approximation of a function by a constant function
I am reading up on Taylor approximation of a function and I'm trying to develop the intuition for the remainder when approximating a function, which has a continuous $(n+1)^{th}$ derivative, by an $n^{th}$ degree polynomial; the remainder is given by $\frac{1}{n!}\int_{a}^{x} (x - t)^nf^{(n+1)}(t)\,dt$
My intuition of linear approximation is this: we used a constant first derivative to evaluate at $x$ (since we approximate $f$ at $a$). Hence, we have to use the information about the rate of the rate of change at any point $t \in (a,x)$ to compensate for this error. Specifically, the second derivative gives the difference between the first derivatives at two successive points and scales it over a unit interval. Therefore, $f''(t)$ corrects for the error at $t$ but introduces a new error on $(t, x)$, which is corrected with the same logic at the next point. Thus, the integral given above. Is this correct?
My reasoning is that if I begin the approximation using a constant function, then by using the rate of change at every point, the function can be reconstructed starting from any point. But if I try to use the above integral to compute the error of the constant approximation, it doesn't seem to work because of the $(-t)$. Is there a formula for the error that also covers the constant case?
I understand the proof of the integral form using integration by parts (and that the continuity of $f^{(n+1)}$ is required in order to be able to use the first fundamental theorem).
Can you please help me fix my intuition of the integral?
calculus sequences-and-series taylor-expansion approximation linear-approximation
Srini Vas
$\begingroup$ Where did you get that integral? $\endgroup$
– gen-ℤ ready to perish
$\begingroup$ @ChaseRyanTaylor I encountered it in Apostol Chapter 7. I learned that it is called the integral form of the remainder $\endgroup$
– Srini Vas
$\begingroup$ It can be found here, for example: math.upenn.edu/~kazdan/361F15/Notes/Taylor-integral.pdf $\endgroup$
Here is some intuition: Let $j_a^nf$ be the $n^{\rm th}$ Taylor polynomial of $f$ at $a$. If the $(n+1)^{\rm st}$ derivative of $f$ were identically zero then this polynomial $j_a^nf$ would produce the exact value of $f$ for all $x$, and the error $R(x):=f(x)-j_a^nf(x)$ would be identically zero. It follows that in the case of nonzero error the nonvanishing of $f^{(n+1)}(t)$ in certain points of the interval $[a,x]$ (when $x>a$) has to be the culprit. Linearity then would indicate that we have a formula of the form $$R_n(x)=\int_a^x w(t)f^{(n+1)}(t)\>dt$$ with a certain more or less "universal" weight function $w$. That this weight function has the simple form appearing in "Taylor's theorem with integral remainder" is due to the secret of partial integration, an intuitive explanation of which I still have to see.
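For instance, in the simplest case $n=0$ (approximation by the constant $j_a^0f=f(a)$) the weight is $w(t)=(x-t)^0/0!=1$, and the formula reduces to the fundamental theorem of calculus: $$R_0(x)=f(x)-f(a)=\int_a^x f'(t)\,dt.$$ So the constant case is already covered by the integral form of the remainder; the factor $(x-t)^n$ simply disappears when $n=0$.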
Christian Blatter
$\begingroup$ Thanks for the answer. To me the weight function was the obvious part. Does the following make sense? Suppose we work with a linear approximation; then we use the first derivative of f at a to compute P(x), where P is the first degree polynomial approximation of f. As a result, we have incurred an error if the first derivative is not constant on (a, x). The error can be corrected with the second derivative, because the difference quotient gives the difference between f'(x+h) and f'(x) and normalizes it over a unit interval (by dividing by h). $\endgroup$
$\begingroup$ Thus, we scale this difference by (x - t), which is the distance from t to x (let t be the point immediately after a). This corrects for the error at the point t arising from using the constant first derivative from a, but it introduces an error in the estimate on (t, x], since we have used the constant difference f'(t) - f'(a). This is corrected by the second derivative at t', the point immediately after t. This process is done at every point in [a,x], giving rise to the said integral. $\endgroup$
| CommonCrawl |
Malliavin calculus
In probability theory and related fields, Malliavin calculus is a set of mathematical techniques and ideas that extend the mathematical field of calculus of variations from deterministic functions to stochastic processes. In particular, it allows the computation of derivatives of random variables. Malliavin calculus is also called the stochastic calculus of variations. P. Malliavin first initiated the calculus on infinite-dimensional space; significant contributors such as S. Kusuoka, D. Stroock, J.-M. Bismut, S. Watanabe and I. Shigekawa then completed its foundations.
Malliavin calculus is named after Paul Malliavin whose ideas led to a proof that Hörmander's condition implies the existence and smoothness of a density for the solution of a stochastic differential equation; Hörmander's original proof was based on the theory of partial differential equations. The calculus has been applied to stochastic partial differential equations as well.
The calculus allows integration by parts with random variables; this operation is used in mathematical finance to compute the sensitivities of financial derivatives. The calculus has applications in, for example, stochastic filtering.
Overview and history
Malliavin introduced Malliavin calculus to provide a stochastic proof that Hörmander's condition implies the existence of a density for the solution of a stochastic differential equation; Hörmander's original proof was based on the theory of partial differential equations. His calculus enabled Malliavin to prove regularity bounds for the solution's density. The calculus has been applied to stochastic partial differential equations.
Invariance principle
The usual invariance principle for Lebesgue integration over the whole real line is that, for any real number ε and integrable function f, the following holds
$\int _{-\infty }^{\infty }f(x)\,d\lambda (x)=\int _{-\infty }^{\infty }f(x+\varepsilon )\,d\lambda (x)$ and hence $\int _{-\infty }^{\infty }f'(x)\,d\lambda (x)=0.$
This can be used to derive the integration by parts formula since, setting f = gh, it implies
$0=\int _{-\infty }^{\infty }f'\,d\lambda =\int _{-\infty }^{\infty }(gh)'\,d\lambda =\int _{-\infty }^{\infty }gh'\,d\lambda +\int _{-\infty }^{\infty }g'h\,d\lambda .$
A similar idea can be applied in stochastic analysis for the differentiation along a Cameron-Martin-Girsanov direction. Indeed, let $h_{s}$ be a square-integrable predictable process and set
$\varphi (t)=\int _{0}^{t}h_{s}\,ds.$
If $X$ is a Wiener process, the Girsanov theorem then yields the following analogue of the invariance principle:
$E(F(X+\varepsilon \varphi ))=E\left[F(X)\exp \left(\varepsilon \int _{0}^{1}h_{s}\,dX_{s}-{\frac {1}{2}}\varepsilon ^{2}\int _{0}^{1}h_{s}^{2}\,ds\right)\right].$
Differentiating with respect to ε on both sides and evaluating at ε=0, one obtains the following integration by parts formula:
$E(\langle DF(X),\varphi \rangle )=E{\Bigl [}F(X)\int _{0}^{1}h_{s}\,dX_{s}{\Bigr ]}.$
Here, the left-hand side is the Malliavin derivative of the random variable $F$ in the direction $\varphi $ and the integral appearing on the right hand side should be interpreted as an Itô integral.
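As a simple consistency check (assuming $h$ is deterministic), take the evaluation functional $F(X)=X(1)$. The left-hand side is then $E(\langle DF(X),\varphi \rangle )=\varphi (1)=\int _{0}^{1}h_{s}\,ds$, while the right-hand side equals $E{\bigl [}X(1)\int _{0}^{1}h_{s}\,dX_{s}{\bigr ]}=\int _{0}^{1}h_{s}\,ds$ by the Itô isometry, so the two sides agree.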
Clark–Ocone formula
Main article: Clark–Ocone theorem
One of the most useful results from Malliavin calculus is the Clark–Ocone theorem, which allows the process in the martingale representation theorem to be identified explicitly. A simplified version of this theorem is as follows:
Consider the standard Wiener measure on the canonical space $C[0,1]$, equipped with its canonical filtration. For $F:C[0,1]\to \mathbb {R} $ satisfying $E(F(X)^{2})<\infty $ which is Lipschitz and such that F has a strong derivative kernel, in the sense that for $\varphi $ in C[0,1]
$\lim _{\varepsilon \to 0}{\frac {1}{\varepsilon }}(F(X+\varepsilon \varphi )-F(X))=\int _{0}^{1}F'(X,dt)\varphi (t)\ \mathrm {a.e.} \ X$
then
$F(X)=E(F(X))+\int _{0}^{1}H_{t}\,dX_{t},$
where H is the previsible projection of F'(x, (t,1]) which may be viewed as the derivative of the function F with respect to a suitable parallel shift of the process X over the portion (t,1] of its domain.
This may be more concisely expressed by
$F(X)=E(F(X))+\int _{0}^{1}E(D_{t}F\mid {\mathcal {F}}_{t})\,dX_{t}.$
Much of the work in the formal development of the Malliavin calculus involves extending this result to the largest possible class of functionals F by replacing the derivative kernel used above by the "Malliavin derivative" denoted $D_{t}$ in the above statement of the result.
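As a simple example, take $F(X)=X(1)^{2}$. Then $E(F(X))=1$, the derivative is $D_{t}F=2X(1)$ for $t\in [0,1]$, and its previsible projection is $E(D_{t}F\mid {\mathcal {F}}_{t})=2X(t)$, so the formula reads $X(1)^{2}=1+\int _{0}^{1}2X(t)\,dX_{t}$, in agreement with Itô's formula applied to $X(t)^{2}$.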
Skorokhod integral
Main article: Skorokhod integral
The Skorokhod integral operator, conventionally denoted δ, is defined as the adjoint of the Malliavin derivative, here in the white noise case when the Hilbert space is an $L^{2}$ space. Thus, for u in the domain of the operator, which is a subset of $L^{2}([0,\infty )\times \Omega )$, and for F in the domain of the Malliavin derivative, we require
$E(\langle DF,u\rangle )=E(F\delta (u)),$
where the inner product is that on $L^{2}[0,\infty )$ viz
$\langle f,g\rangle =\int _{0}^{\infty }f(s)g(s)\,ds.$
The existence of this adjoint follows from the Riesz representation theorem for linear operators on Hilbert spaces.
It can be shown that if u is adapted then
$\delta (u)=\int _{0}^{\infty }u_{t}\,dW_{t},$
where the integral is to be understood in the Itô sense. Thus this provides a method of extending the Itô integral to non adapted integrands.
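A standard example of a non-adapted integrand is the constant-in-time process $u_{t}=W_{1}$ for $t\in [0,1]$, where $W$ is a standard Wiener process. Using the product rule $\delta (Fh)=F\int _{0}^{1}h_{s}\,dW_{s}-\langle DF,h\rangle $ (valid for deterministic h and sufficiently regular F, see Nualart 2006) with $F=W_{1}$ and $h\equiv 1$ gives $\delta (u)=W_{1}^{2}-1$.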
Applications
The calculus allows integration by parts with random variables; this operation is used in mathematical finance to compute the sensitivities of financial derivatives. The calculus has applications for example in stochastic filtering.
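A minimal Monte Carlo sketch of this use of the integration-by-parts formula (an illustration only: the Black–Scholes model, the parameter values and the variable names below are assumptions, not taken from the references) estimates the delta of a European call with the Malliavin weight $W_{T}/(S_{0}\sigma T)$, so that the non-smooth payoff never has to be differentiated:

import math
import numpy as np

# Sketch: Black-Scholes delta of a European call via the Malliavin weight,
#   delta = E[ e^{-rT} * max(S_T - K, 0) * W_T / (S0 * sigma * T) ].
rng = np.random.default_rng(seed=0)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_paths = 1_000_000

W_T = math.sqrt(T) * rng.standard_normal(n_paths)            # Brownian motion at time T
S_T = S0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * W_T)  # terminal stock price
payoff = np.exp(-r * T) * np.maximum(S_T - K, 0.0)           # discounted call payoff

delta_malliavin = float(np.mean(payoff * W_T / (S0 * sigma * T)))

# Closed-form Black-Scholes delta, for comparison
d1 = (math.log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
delta_exact = 0.5 * (1.0 + math.erf(d1 / math.sqrt(2.0)))

print(f"Malliavin-weight estimate: {delta_malliavin:.4f}")
print(f"Closed-form delta:         {delta_exact:.4f}")

The two numbers should agree up to Monte Carlo error; the same weight works for any square-integrable payoff, which is the practical appeal of computing sensitivities by integration by parts rather than by differentiating the payoff.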
References
• Kusuoka, S. and Stroock, D. (1981) "Applications of Malliavin Calculus I", Stochastic Analysis, Proceedings Taniguchi International Symposium Katata and Kyoto 1982, pp 271–306
• Kusuoka, S. and Stroock, D. (1985) "Applications of Malliavin Calculus II", J. Faculty Sci. Uni. Tokyo Sect. 1A Math., 32 pp 1–76
• Kusuoka, S. and Stroock, D. (1987) "Applications of Malliavin Calculus III", J. Faculty Sci. Univ. Tokyo Sect. 1A Math., 34 pp 391–442
• Malliavin, Paul and Thalmaier, Anton. Stochastic Calculus of Variations in Mathematical Finance, Springer 2005, ISBN 3-540-43431-3
• Nualart, David (2006). The Malliavin calculus and related topics (Second ed.). Springer-Verlag. ISBN 978-3-540-28328-7.
• Bell, Denis. (2007) The Malliavin Calculus, Dover. ISBN 0-486-44994-7; ebook
• Sanz-Solé, Marta (2005) Malliavin Calculus, with applications to stochastic partial differential equations. EPFL Press, distributed by CRC Press, Taylor & Francis Group.
• Schiller, Alex (2009) Malliavin Calculus for Monte Carlo Simulation with Financial Applications. Thesis, Department of Mathematics, Princeton University
• Øksendal, Bernt K.(1997) An Introduction To Malliavin Calculus With Applications To Economics. Lecture Notes, Dept. of Mathematics, University of Oslo (Zip file containing Thesis and addendum)
• Di Nunno, Giulia, Øksendal, Bernt, Proske, Frank (2009) "Malliavin Calculus for Lévy Processes with Applications to Finance", Universitext, Springer. ISBN 978-3-540-78571-2
External links
• Quotations related to Malliavin calculus at Wikiquote
• Friz, Peter K. (2005-04-10). "An Introduction to Malliavin Calculus" (PDF). Archived from the original (PDF) on 2007-04-17. Retrieved 2007-07-23. Lecture Notes, 43 pages
• Zhang, H. (2004-11-11). "The Malliavin Calculus" (PDF). Retrieved 2004-11-11. Thesis, 100 pages
| Wikipedia |
\begin{document}
\begin{abstract} We are concerned with the inverse problem of determining both the potential and the damping coefficient in a dissipative wave equation from boundary measurements. We establish stability estimates of logarithmic type when the measurements are given by the operator which maps the initial condition to the Neumann boundary trace of the solution of the corresponding initial-boundary value problem. We build a method combining an observability inequality with a spectral decomposition. We also apply this method to a clamped Euler-Bernoulli beam equation. Finally, we indicate how the present approach can be adapted to a heat equation. \end{abstract}
\noindent {\bf Keywords}: Damping coefficient, potential, dissipative wave equation, boundary measurements, boundary observability, initial-to-boundary operator.
\noindent {\bf MSC}: 93C25, 93B07, 93C20, 35R30. \end{abstract}
\maketitle
\tableofcontents
\section{Introduction} We consider the following initial-boundary value problem (abbreviated to IBVP in the sequel) for the wave equation: \begin{equation}\label{wd} \left\{ \begin{array}{lll}
\partial _t^2 u - \Delta u + q(x)u + a(x) \partial_t u = 0 \;\; &\mbox{in}\; Q=\Omega \times (0,\tau),
\\ u = 0 &\mbox{on}\; \Sigma =\partial \Omega \times (0,\tau), \\ u(\cdot ,0) = u_0,\; \partial_t u (\cdot ,0) = u_1, \end{array} \right. \end{equation} where $\Omega \subset \mathbb{R}^n$, $n\geq 1$, is a bounded domain with $C^2$-smooth boundary $\partial \Omega$ and $\tau > 0$.
We assume in this text that the coefficients $q$ and $a$ are real-valued.
Under the assumption that $q,a\in L^\infty (\Omega )$, for each $\tau >0$ and $\left(\begin{array}{cc}u_0\\u_1\end{array}\right)\in H_0^1(\Omega )\times L^2(\Omega )$, the IBVP \eqref{wd} has a unique solution $u_{q,a}\in C([0,\tau ],H_0^1(\Omega ))$ such that $\partial _tu_{q,a}\in C([0,\tau ],L^2(\Omega ))$ (e.g. \cite[pages 699-702]{DL}). On the other hand, by a classical energy estimate, we have \[
\|u_{q,a}\|_{C([0,\tau ],H_0^1(\Omega ))}+\|\partial _t u_{q,a}\|_{C([0,\tau ],L^2(\Omega ))}\leq C(\|u_0\|_{1,2}+\|u_1\|_0). \]
Here and henceforth, $\| \cdot \|_p$ and $\|\cdot \|_{s,p}$, $1\leq p\leq \infty$, $s\in \mathbb{R}$, denote respectively the usual $L^p$-norm and the $W^{s,p}$-norm.
We note that the constant $C$ above is a non decreasing function of $\|q\|_\infty +\|a\|_\infty$.
Now, since $u_{q,a}$ coincides with the solution of the IBVP \eqref{wd} in which $- q(x)u_{q,a} -a(x) \partial_t u_{q,a}$ is seen as a right-hand side, we can apply \cite[Theorem 2.1]{LLT} to get that $\partial _\nu u_{q,a}$, the derivative of $u_{q,a}$ in the direction of $\nu$, the unit outward normal vector to $\partial \Omega$, belongs to $L^2(\Sigma )$. Additionally, the mapping
Let $\Gamma$ be a non empty open subset of $\partial \Omega$ and $\Upsilon =\Gamma \times (0,\tau )$. To $q,a\in L^\infty (\Omega )$, we associate the initial-to-boundary (abbreviated to IB in the following) operator $\Lambda _{q,a}$ defined by \[
\Lambda _{q,a}: \left(\begin{array}{cc}u_0\\ u_1\end{array}\right)\in H_0^1(\Omega )\times L^2(\Omega ) \longrightarrow \partial _\nu u_{q,a}{_{|\Upsilon}}\in L^2(\Upsilon ). \] Clearly, from the preceding discussion, $\Lambda _{q,a}\in \mathscr{B}\left(H_0^1(\Omega )\times L^2(\Omega ),L^2(\Upsilon )\right)$.
We also consider two partial IB operators $\Lambda _q$ and $\tilde{\Lambda} _{q,a}$ which are given by \[ \Lambda _q(u_0)=\Lambda_{q,0}\left(\begin{array}{cc}u_0\\ 0\end{array}\right),\quad \tilde{\Lambda} _{q,a}(u_1)=\Lambda_{q,a}\left(\begin{array}{cc}0\\ u_1\end{array}\right). \] Therefore, $\Lambda _q\in \mathscr{B}\left(H_0^1(\Omega ),L^2(\Upsilon )\right)$ and $\tilde{\Lambda} _{q,a} \in \mathscr{B}\left( L^2(\Omega ),L^2(\Upsilon )\right)$.
Next, we see that $\partial _tu$ is the solution of the IBVP \eqref{wd} corresponding to the initial conditions $u_1$ and $\Delta u_0-qu_0-au_1$. Hence, repeating the preceding analysis with $\partial _tu$ in place of $u$, we get \[ \Lambda _{q,a}\in \mathscr{B}\left(\left[H_0^1(\Omega )\cap H^2(\Omega )\right]\times H_0^1(\Omega ), H^1((0,\tau ),L^2(\Gamma ))\right). \] Consequently, \begin{align*} &\Lambda _q\in \mathscr{B}\left(H_0^1(\Omega )\cap H^2(\Omega ), H^1((0,\tau ),L^2(\Gamma ))\right), \\ &\tilde{\Lambda} _{q,a} \in \mathscr{B}\left( H_0^1(\Omega ),H^1((0,\tau ),L^2(\Gamma ))\right). \end{align*}
We are interested in the stability issue for the inverse problem consisting in the determination of both the potential $q$ and the damping coefficient $a$, appearing in the IBVP \eqref{wd}, from the IB map $\Lambda_{q,a}$. We succeed in proving logarithmic stability estimates for determining $q$ from $\Lambda _q$, $a$ from $\widetilde{\Lambda}_{q,a}$ and $(q,a)$ from $\Lambda_{q,a}$.
We introduce the unbounded operators, defined on $H_0^1(\Omega )\times L^2(\Omega )$, as follows \[ \mathcal{A}_0=\left( \begin{array}{cc} 0 & I \\ \Delta & 0 \\
\end{array}
\right),\;\; D(\mathcal{A}_0)=\left[H^2(\Omega )\cap H_0^1(\Omega )\right]\times H_0^1(\Omega )
\]
and $\mathcal{A}=\mathcal{A}_{q,a}=\mathcal{A}_0+\mathcal{B}$ with $D(\mathcal{A})=D(\mathcal{A}_0)$, where
\[ \mathcal{B}=\mathcal{B}_{q,a}=\left( \begin{array}{cc} 0 & 0 \\ -q & -a \\
\end{array}
\right).
\] Let \[ \mathcal{C}: D(\mathcal{A}_0)\rightarrow L^2(\Sigma ): \left(\begin{array}{cc}\varphi \\ \psi\end{array}\right)\longrightarrow \partial _\nu\varphi . \]
Since we deal with the wave equation, it is necessary to make assumptions on $\Gamma$ and $\tau$ in order to guarantee that our system is observable. To this end, we assume that $\Gamma$ is chosen in such a way that there is $\tau _0$ such that the pair $(\mathcal{A},\mathcal{C})$ is exactly observable for any $\tau \geq \tau _0$. We formulate the precise definition of exact observability in the next section in an abstract framework.
We give sufficient conditions ensuring that the pair $(\mathcal{A},\mathcal{C})$ is exactly observable. We fix $x_0\in \mathbb{R}^n\setminus \overline{\Omega}$ and we set \[
\Gamma _0 =\{x\in \partial \Omega ;\; \nu (x)\cdot (x-x_0)> 0\}\;\; \mbox{and}\;\; d=\max_{x\in \overline{\Omega}}|x-x_0|. \] Let us assume that $\Gamma \supset \Gamma _0$. Following \cite[Theorem 7.2.3, page 233]{tucsnakweiss}, $(\mathcal{A}_0,\mathcal{C})$ is exactly observable with $\tau \geq \tau _0=2d$. In light of \cite[Theorem 7.3.2, page 235]{tucsnakweiss} and the remark following it, we conclude that $(\mathcal{A},\mathcal{C})$ is also exactly observable for $\tau \geq \tau _0$, again with $\tau _0=2d$.
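For instance, if $\Omega$ is the unit ball $B(0,1)$ of $\mathbb{R}^n$ and $x_0=(R,0,\ldots ,0)$ with $R>1$, then $\nu (x)=x$ on $\partial \Omega$ and \[
\Gamma _0=\{x\in \partial \Omega ;\; 1-x\cdot x_0>0\}=\{x=(x_1,\ldots ,x_n)\in \partial \Omega ;\; x_1<1/R\},
\] that is, the whole boundary sphere except a spherical cap around $x_0/R$; moreover $d=R+1$, so that any $\Gamma \supset \Gamma _0$ is admissible with $\tau _0=2(R+1)$.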
We mention that sharp sufficient conditions on $\Gamma$ and $\tau _0$ were given in a work by Bardos, Lebeau and Rauch \cite{blr}.
Unless otherwise stated, for the sake of simplicity, all operator norms will be denoted by $\| \cdot \|$. Also, $B_p$ (resp. $B_{s,p}$) denotes the unit ball of $L^p(\Omega )$ (resp. $W^{s,p}(\Omega )$).
We aim to prove in the present work the following theorem.
\begin{theorem}\label{mt1} We assume that $(\mathcal{A}_0,\mathcal{C})$ is exactly observable for $\tau \geq \tau_0$, for some $\tau_0>0$. Let $0\leq q_0 \in L^\infty(\Omega )$. Then there is a constant $\delta>0$ so that \begin{equation}\label{est1}
\|q-q_0\|_2\leq C\left| \ln \left(C^{-1} \|\Lambda _{q_0}-\Lambda_q\|\right)\right|^{-1/2},\quad q\in q_0+\delta B_{1,\infty}, \end{equation} and, for any $m>0$, \begin{align}
& \|a\|_2\leq C\left| \ln \left(C^{-1}\| \tilde{\Lambda} _{q_0,a}-\tilde{\Lambda} _{q_0,0}\|\right)\right|^{-1/2},\quad a\in \left[\delta B_\infty \right]\cap \left[mB_{1,2}\right],\label{est2} \\
& \|q-q_0\|_2+ \|a\|_0\leq C\left| \ln \left(C^{-1}\| \Lambda _{q,a}-\Lambda _{q_0,0}\|\right)\right|^{-1/2},\quad q\in q_0+\delta B_{1,\infty} ,\; a\in \left[\delta B_\infty \right]\cap \left[mB_{1,2}\right].\label{est3} \end{align} Here, $C$ is a generic constant not depending on $q$ and $a$. \end{theorem}
Theorem \ref{mt1} gives only stability estimates at zero damping coefficient. The difficulty of stability estimates at a non zero damping coefficient is related to the fact that the operator $\mathcal{A}$ is not necessarily diagonalizable. The main reason is that, contrary to case where $a=0$, this operator is no longer skew-adjoint. We detail the stability estimate at a non zero damping coefficient in a separate section.
The problem of determining the potential in a wave equation from the so-called Dirichlet-to-Neumann (usually abbreviated to DN) map was initiated by Rakesh and Symes \cite{RS} (see also \cite{CM} and \cite{Is}). They prove that the potential can be recovered uniquely from the DN map provided that the length of the time interval is larger than the diameter of the space domain. The key point in their method is the construction of special solutions, called beam solutions. A sharp uniqueness result was proved by the so-called boundary control method. More details on this method can be found for instance in \cite {B} and \cite{KKL}. Also, Sun \cite{Su} establishes H\"older stability estimates and, most recently, Bao and Yun \cite{BY} improve the result of \cite{Su}. Specifically, they prove a nearly Lipschitz stability estimate. An extension was obtained by Bellassoued, Choulli and Yamamoto \cite{BCY} in the case of a partial DN map by a method built on the quantification of the continuation of the solution of the wave equation from partial Cauchy data. We refer to the introduction of \cite{BCY} for a short overview of inverse problems related to the wave equation. We finally quote a very recent paper by Bao and Zhang \cite{BZ} dealing with sensitivity analysis of an inverse problem for the wave equation with caustics.
It is worthwhile to mention that contrary to hyperbolic inverse problems, for which the stability can be of Lipschitz, H\"older or logarithmic type, elliptic and parabolic inverse problems are always severely ill-posed. That is, the corresponding stability estimates are in most cases of logarithmic type. In \cite{Al}, Alessandrini gives an example in non destructive testing showing that the logarithmic stability is the best possible.
This text is organized as follows. We consider in Section 2 the inverse source problem for exactly observable systems in an abstract framework. This material is necessary to establish stability estimates for the determination of the potential and the damping coefficient appearing in the IBVP \eqref{wd}. We devote Section 3 to the proof of Theorem \ref{mt1} and we give in Section 4 a sufficient condition which guarantees that $\mathcal{A}$ is diagonalizable. The condition that $\mathcal{A}$ is diagonalizable is used in an essential way to get a variant of Theorem \ref{mt1} at a non zero damping coefficient. We apply in Section 5 our approach to a clamped Euler-Bernoulli beam equation. The possible adaptation of our method to a heat equation is discussed in Section 6. Due to the fact that a heat equation is not exactly observable but only observable at final time, we obtain a stability estimate only when we perturb the unknown coefficient by a finite dimensional subspace.
\section{An abstract framework for the inverse source problem} \label{back}
Let $H$ be a Hilbert space and $A :D(A) \subset H \rightarrow H$ be the generator of a continuous semigroup $T(t)$. An operator $C \in \mathscr{B}(D(A),Y)$, where $Y$ is a Hilbert space which is identified with its dual space, is called an admissible observation for $T(t)$ if for some (and hence for all) $\tau >0$, the operator $\Psi \in \mathscr{B}(D(A),L^2((0,\tau ),Y))$ given by \[ (\Psi x)(t)=CT(t)x,\;\; t\in [0,\tau ],\;\; x\in D(A), \] has a bounded extension to $H$.
We introduce the notion of exact observability for the system \begin{align}\label{2.1} &z'(t)=Az(t),\;\; z(0)=x, \\ &y(t)=Cz(t),\label{2.2} \end{align} where $C$ is an admissible observation for $T(t)$. Following the usual definition, the pair $(A,C)$ is said to be exactly observable at time $\tau >0$ if there is a constant $\kappa $ such that the solution $(z,y)$ of \eqref{2.1} and \eqref{2.2} satisfies \[
\int_0^\tau \|y(t)\|_Y^2dt\geq \kappa ^2 \|x\|_H^2,\;\; x\in D(A). \] Or equivalently \begin{equation}\label{2.3}
\int_0^\tau \|(\Psi x)(t)\|_Y^2dt\geq \kappa ^2 \|x\|_H^2,\;\; x\in D(A). \end{equation}
We consider the Cauchy problem \begin{equation}\label{2.4} z'(t)=Az(t)+\lambda (t)x,\;\; z(0)=0, \end{equation} and we set \begin{equation}\label{2.5} y(t)=Cz(t),\;\; t\in [0,\tau ]. \end{equation}
By Duhamel's formula, we have \begin{equation}\label{2.6} y(t)=\int_0^t \lambda (t-s)CT(s)xds=\int_0^t\lambda (t-s)(\Psi x)(s)ds. \end{equation}
Let \[ H^1_\ell ((0,\tau), Y) = \left\{u \in H^1((0,\tau), Y); \; u(0) = 0 \right\}. \]
We define the operator $S:L^2((0,\tau), Y)\longrightarrow H^1_\ell ((0,\tau ) ,Y)$ by \begin{equation}\label{2.7} (Sh)(t)=\int_0^t\lambda (t-s)h(s)ds. \end{equation}
If $E =S\Psi$, then \eqref{2.6} takes the form \[ y(t)=(E x)(t). \]
\begin{theorem}\label{theorem2.1} We assume that $(A,C)$ is exactly observable for $\tau \geq \tau_0$, for some $\tau_0>0$. Let $\lambda \in H^1((0,\tau ))$ satisfy $\lambda (0)\ne 0$. Then $E$ is one-to-one from $H$ onto $H^1_\ell ((0,\tau), Y)$ and \begin{equation}\label{2.8}
\frac{\kappa |\lambda (0)|}{\sqrt{2}}e^{-\tau \frac{\|\lambda '\|^2_{L^2((0,\tau))}}{|\lambda (0)|^2 }}\|x\|_H\leq \|Ex\|_{H^1_\ell ((0,\tau), Y)},\;\; x\in H. \end{equation} \end{theorem}
\begin{proof} First, taking the derivative with respect to $t$ of each side of the integral equation \[ \int_0^t \lambda (t-s)\varphi (s) ds=\psi (t), \] we get a Volterra equation of second kind \[ \lambda (0)\varphi (t) +\int_0^t\lambda '(t-s)\varphi (s)ds=\psi '(t). \] Mimicking the proof of \cite[Theorem 2, page 33]{Ho}, we obtain that this integral equation has a unique solution $\varphi \in L^2((0,\tau ) ,Y)$ and \begin{align*}
\|\varphi \|_{L^2((0,\tau ),Y)}&\leq C \|\psi '\|_{L^2((0,\tau ),Y)}\\ &\leq C \|\psi \|_{H^1_\ell ((0,\tau ),Y)}. \end{align*} Here $C =C(\lambda )$ is a constant.
Next, we estimate the constant $C$ above. From the elementary convexity inequality $(a+b)^2\leq 2(a^2+b^2)$, we deduce \[
\| |\lambda (0)|\varphi (t)\|_Y^2\leq 2\left( \int_0^t\frac{|\lambda '(t-s)|}{|\lambda (0)|}\left[|\lambda (0)|\| \varphi (s)\|_Y\right]ds \right)^2+2\|\psi '(t)\|_Y^2. \] Thus, \[
|\lambda (0)|^2\| \varphi (t)\|_Y^2\leq 2\frac{\|\lambda '\|_{L^2((0,\tau))}^2}{|\lambda (0)|^2}\int_0^t|\lambda (0)|^2\| \varphi (s)\|_Y^2ds +2\|\psi '(t)\|_Y^2 \] by the Cauchy-Schwarz inequality. Therefore, using Gronwall's lemma, we obtain in a straightforward manner that \[
\| \varphi \|_{L^2((0,\tau ),Y)}\leq \frac{\sqrt{2}}{|\lambda (0)|}e^{\tau \frac{\|\lambda '\|_{L^2((0,\tau))}^2}{|\lambda (0)|^2}}\|\psi '\|_{L^2((0,\tau ),Y)} \] and then \[
\| \varphi \|_{L^2((0,\tau ),Y)}\leq \frac{\sqrt{2}}{|\lambda (0)|}e^{\tau\frac{\|\lambda '\|_{L^2((0,\tau))}^2}{|\lambda (0)|^2}}\|S\varphi \|_{H^1_\ell ((0,\tau ),Y)}. \] In light of \eqref{2.3}, we end up getting \[
\|Ex\|_{H^1_\ell ((0,\tau), Y)}\geq \frac{\kappa |\lambda (0)|}{\sqrt{2}}e^{-\tau \frac{\|\lambda '\|^2_{L^2((0,\tau))}}{|\lambda (0)|^2 }} \|x\|_H. \] \end{proof}
We shall need a variant of Theorem \ref{theorem2.1}. If $(A,C)$ is as in Theorem \ref{theorem2.1}, then it follows from \cite[Proposition 6.3.3, page 189]{tucsnakweiss} that there is $\delta >0$ such that for any $P\in \mathscr{B}(H)$ satisfying $\|P\|\leq \delta$, $(A+P,C)$ is exactly observable with $\kappa (P+A)\geq \kappa/2$.
We define $E ^P$ similarly to $E$ by replacing $A$ by $A+P$.
\begin{theorem}\label{theorem2.2}
We assume that $(A,C)$ is exactly observable for $\tau \geq \tau_0$, for some $\tau_0>0$. Let $\lambda \in H^1((0,\tau ))$ satisfy $\lambda (0)\ne 0$. There is $\delta >0$ such that, for any $P\in \mathscr{B}(H)$ satisfying $\|P\|\leq \delta$, $E^P$ is one-to-one from $H$ onto $H^1_\ell ((0,\tau), Y)$ and \begin{equation}\label{2.10}
\frac{\kappa |\lambda (0)|}{2\sqrt{2}}e^{-\tau \frac{\|\lambda '\|^2_{L^2((0,\tau))}}{|\lambda (0)|^2 }}\|x\|_H\leq \|E ^Px\|_{H^1_\ell ((0,\tau), Y)},\;\; x\in H. \end{equation} \end{theorem}
We now apply the preceding theorem to the following IBVP for the wave equation \begin{equation}\label{2.9} \left\{ \begin{array}{lll}
\partial _t^2 u - \Delta u + q(x)u + a(x) \partial_t u = \lambda (t)f(x) \;\; &\mbox{in}\; Q,
\\ u = 0 &\mbox{on}\; \Sigma, \\ u(\cdot ,0) = 0,\; \partial_t u (\cdot ,0) = 0. \end{array} \right. \end{equation}
We recall that \[ \mathcal{A}_0=\left( \begin{array}{cc} 0 & I \\ \Delta & 0 \\
\end{array}
\right),\;\; D(\mathcal{A}_0)=\left[H^2(\Omega )\cap H_0^1(\Omega )\right]\times H_0^1(\Omega )
\]
and $\mathcal{A}=\mathcal{A}_{q,a}=\mathcal{A}_0+\mathcal{B}_{q,a}$ with $D(\mathcal{A})=D(\mathcal{A}_0)$, where
\[ \mathcal{B}_{q,a}=\left( \begin{array}{cc} 0 & 0 \\ -q & -a \\
\end{array}
\right).
\] Also \[ \mathcal{C}: D(\mathcal{A}_0)\rightarrow L^2(\Gamma ): \left(\begin{array}{cc}\varphi \\ \psi \end{array}\right)\longrightarrow \partial _\nu\varphi . \]
We fix $q_0,a_0\in L^\infty (\Omega )$ and we assume that $(\mathcal{A}_{q_0,a_0},\mathcal{C})$ is exactly observable with constant $\kappa $. This is the case when $\Gamma \supset\Gamma _0 =\{x\in \partial \Omega ;\; \nu (x)\cdot (x-x_0)> 0\}$ (see for instance \cite[Theorem 1.2, page 141]{FI}\footnote{We note that from the proof of this theorem it is not possible to extract the dependence of $\kappa $ on $q_0$ and $a_0$.}).
\begin{corollary}\label{corollary2.1} There is $\delta >0$ such that, for any $q\in q_0+\delta B_{1,\infty}$ and $a\in a_0+\delta B_\infty $, we have \[
\|f\|_2\leq \frac{2\sqrt{2}}{\kappa |\lambda (0)|}e^{\tau \frac{\|\lambda '\|_{L^2((0,\tau))}^2}{|\lambda (0)|^2}}\| \partial _\nu u_f\|_{H^1((0,\tau ),L^2(\Gamma ))}, \] where $u_f$ is the solution of the IBVP \eqref{2.9}. \end{corollary}
This is nothing else but a Lipschitz stability estimate for the inverse problem of determining the source term $f$ from the boundary data $\partial _\nu u _f{_{|\Upsilon}}$, when $\lambda$ is supposed to be known.
\section{Proof of Theorem \ref{mt1}}
\begin{proof}[Proof of Theorem \ref{mt1}] Let $(\lambda _k)$ and $(\phi _k)$ be respectively the sequence of Dirichlet eigenvalues of $-\Delta +q_0$, counted according to their multiplicity, and the corresponding eigenvectors. We assume that the sequence $(\phi _k)$ forms an orthonormal basis of $L^2(\Omega )$.
We recall that according to the min-max principle, the following two-sided estimates hold \begin{equation}\label{ineq1} c^{-1}k^{2/n}\leq \lambda _k\leq ck^{2/n}. \end{equation} Here, the constant $c>1$ depends only on $\Omega$ and $q_0$.
Let $u_q$ be the solution of the IBVP \eqref{wd} corresponding to $q$, $a=0$, $u_0=\phi _k$ and $u_1=0$. Taking into account that $u_{q_0}=\cos (t\sqrt{\lambda _k})\phi _k$ is the solution of the IBVP \eqref{wd} corresponding to $q=q_0$, $a=0$, $u_0=\phi _k$ and $u_1=0$, we see that $u=u_q -u_{q_0}$ is the solution of the IBVP \begin{equation}\label{e2} \left\{ \begin{array}{lll} \partial _t^2u - \Delta u + qu = -(q-q_0) \cos(t \sqrt{\lambda_k}) \phi_k \;\; &\mbox{in}\; Q, \\ u= 0 &\mbox{on}\; \Sigma , \\ u(\cdot ,0) = 0,\; \partial _tu (\cdot ,0) = 0. \end{array} \right. \end{equation}
In the remaining part of this proof, $C$ is a generic constant independent of $k$.
Let $\delta$ be as in Corollary \ref{corollary2.1}. If $q\in q_0+\delta B_{1,\infty}$, we get by applying Corollary \ref{corollary2.1} \[
\|(q-q_0) \phi_k\|_2\leq Ce^{C\lambda _k}\|\partial _\nu u\|_{H^1 ((0,\tau),L^2 (\Gamma))}. \]
Since $|(q-q_0,\phi_k)|\leq |\Omega |^{1/2}\|(q-q_0) \phi_k\|_{L^2(\Omega )}$ by Cauchy-Schwarz's inequality, the last inequality entails \[
|(q-q_0,\phi_k)|\leq Ce^{C\lambda _k}\|\partial _\nu u\|_{H^1 ((0,\tau),L^2 (\Gamma))}. \] But, $\partial _\nu u=(\Lambda _{q_0}-\Lambda_q)\phi _k$. Therefore \begin{equation}\label{e1}
|(q-q_0,\phi_k)|^2\leq Ce^{C\lambda _k}\| \Lambda _{q_0}-\Lambda_q\|^2. \end{equation}
Let $\lambda \geq \lambda _1$ and $N=N(\lambda )$ be the smallest integer so that $\lambda _N\leq \lambda <\lambda _{N+1}$. Then \begin{align*}
\|q-q_0\|_2^2 &=\sum_k |(q-q_0,\phi_k)|^2 \\
&=\sum_{k\leq N} |(q-q_0,\phi_k)|^2+\sum_{k>N}|(q-q_0,\phi_k)|^2 \\
&\leq \sum_{k\leq N} |(q-q_0,\phi_k)|^2+\frac{1}{\lambda }\sum_{k>N}\lambda _k |(q-q_0,\phi_k)|^2 \\
&\leq \sum_{k\leq N} |(q-q_0,\phi_k)|^2+\frac{C\delta^2}{\lambda }. \end{align*} Here we used the fact that $\left( \sum_{k\geq 1} (1+\lambda _k)(\cdot ,\phi _k)^2\right)^{1/2}$ defines an equivalent norm on $H^1(\Omega )$.
In light of \eqref{e1}, we get \[
\|q-q_0\|_2^2\leq CNe^{C\lambda }\| \Lambda _{q_0}-\Lambda_q\|^2+\frac{C\delta^2}{\lambda }. \] By \eqref{ineq1}, $N\leq C\lambda ^{n/2}$. Hence \[
\|q-q_0\|_2^2\leq Ce^{C\lambda }\| \Lambda _{q_0}-\Lambda_q\|^2+\frac{C\delta^2}{\lambda }. \]
Minimizing with respect to $\lambda$, we obtain that there is $\delta _0>0$ such that if $\| \Lambda _{q_0}-\Lambda_q\|\leq \delta _0$, then \[
\|q-q_0\|_2\leq C\left| \ln (C^{-1} \|\Lambda _{q_0}-\Lambda_q\|)\right|^{-1/2}. \] Estimate \eqref{est1} then follows from the continuity of the mapping \[ q\in L^\infty (\Omega )\rightarrow \Lambda _q \in \mathscr{B}(H_0^1(\Omega )\cap H^2(\Omega ),H^1((0,\tau ),L^2(\Gamma ))).\]
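For the reader's convenience, we indicate one admissible choice of $\lambda$ in the minimization step above; this is only a sketch, with generic constants. Writing $\epsilon =\| \Lambda _{q_0}-\Lambda_q\|$ and taking for instance $\lambda =C^{-1}\left| \ln (C^{-1}\epsilon )\right|$, which satisfies $\lambda \geq \lambda _1$ when $\epsilon$ is sufficiently small, we obtain $Ce^{C\lambda }\epsilon ^2\leq C^2\epsilon$ and hence \[
\|q-q_0\|_2^2\leq C^2\epsilon +C^2\delta ^2\left| \ln (C^{-1}\epsilon )\right|^{-1}\leq C'\left| \ln (C^{-1}\epsilon )\right|^{-1},
\] since $\epsilon \left| \ln (C^{-1}\epsilon )\right|$ remains bounded for $\epsilon \leq \delta _0$. Taking square roots gives the estimate above.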
We proceed similarly for proving \eqref{est2}. In the present case we have to replace the previous $u_{q_0}$ by $u_{q_0}=\lambda _k^{-1/2}\sin (t\sqrt{\lambda _k})\phi _k$, corresponding to the initial conditions $u_0=0$ and $u_1=\phi _k$. Therefore, we have in place of \eqref{e2} \begin{equation}\label{e3} \left\{ \begin{array}{lll} \partial _t^2u - \Delta u + q_0u+ a \partial_t u = -a \cos(t \sqrt{\lambda_k}) \phi_k \;\; &\mbox{in}\; Q, \\ u= 0 &\mbox{on}\; \Sigma , \\ u(\cdot ,0) = 0,\; \partial _tu (\cdot ,0) = 0. \end{array} \right. \end{equation} We continue as in the preceding case by establishing the estimate \[
|(a,\phi_k)|^2\leq Ce^{C\lambda _k}\| \widetilde{\Lambda} _{q_0,a}-\widetilde{\Lambda} _{q_0,0}\|^2 \] and we complete the proof of \eqref{est2} as above.
We end the proof by showing how we proceed for proving \eqref{est3}. Taking into account that the solution corresponding to $q=q_0$, $a=0$, $u_0=\phi _k$ and $u_1=i\sqrt{\lambda _k}\phi _k$ is $u_{q_0}=e^{i\sqrt{\lambda _k}t}\phi _k$, then in place of \eqref{e2} we have the following IBVP \begin{equation}\label{e4} \left\{ \begin{array}{lll} \partial _t^2u - \Delta u + qu +a\partial _tu = -[(q-q_0)+ i\sqrt{\lambda _k}a]e^{i\sqrt{\lambda _k}t} \phi_k, \;\; &\mbox{in}\; Q, \\ u= 0 \;\; &\mbox{on}\; \Sigma , \\ u(\cdot ,0) = 0,\; \partial _tu (\cdot ,0) = 0.
\end{array} \right. \end{equation}
We can argue one more time as in the proof of \eqref{est1}. We find
\[
|(q-q_0,\phi _k)+i\sqrt{\lambda _k}(a,\phi _k)|^2\leq Ce^{C\lambda _k}\|\Lambda_{q,a}-\Lambda_{q_0,0}\|^2,
\]
entailing
\begin{align*}
&|(q-q_0,\phi _k)|^2\leq Ce^{C\lambda _k}\|\Lambda_{q,a}-\Lambda_{q_0,0}\|^2,
\\
&|(a,\phi _k)|^2\leq Ce^{C\lambda _k}\|\Lambda_{q,a}-\Lambda_{q_0,0}\|^2.
\end{align*} We end up getting \eqref{est3} by mimicking the rest of the proof of estimate \eqref{est1}. \end{proof}
\section{Stability around a non zero damping coefficient}
We limit ourselves to the one-dimensional case and, for the sake of simplicity, we take $q$ identically equal to zero. However, the analysis we carry out in the present section still applies to any non-negative bounded potential.
We assume in the present section that $\Omega =(0,\pi )$. We introduced in the first section the unbounded operators, defined on $\mathcal{H}=H_0^1(\Omega )\times L^2(\Omega )$, \[ \mathcal{A}_0=\left( \begin{array}{cc} 0 & I \\ \frac{d^2}{d x^2} & 0 \\
\end{array}
\right),\;\; D(\mathcal{A}_0)=\left[H^2(\Omega )\cap H_0^1(\Omega )\right]\times H_0^1(\Omega ):=\mathcal{H}_1
\]
and $\mathcal{A}_a=\mathcal{A}_0+\mathcal{B}_a$ with $D(\mathcal{A}_a)=D(\mathcal{A}_0)$, where
\[ \mathcal{B}_a=\left( \begin{array}{cc} 0 & 0 \\ 0 & -a \\
\end{array}
\right).
\]
From \cite[Proposition 6.2.1, page 180]{tucsnakweiss}, $(\mathcal{A}_0,\mathcal{C})$ is exactly observable for any $\tau \geq 2\pi$ when \[ \mathcal{C}: \left(\begin{array}{cc}\varphi \\ \psi \end{array}\right)\in D(\mathcal{A}_0)\longrightarrow \frac{d\varphi}{dx}(0). \] On the other hand, it follows from \cite[Proposition 3.7.7, page 101]{tucsnakweiss} that the skew-adjoint operator $\mathcal{A}_0$ is diagonalizable with eigenvalues $\lambda _k=ik$, $k\in \mathbb{Z}^\ast$, corresponding to the orthonormal basis $(g_k)$, where \[ g_k=\frac{1}{\sqrt{2}}\left( \begin{array}{cc} \frac{f_k}{ik}\\ f_k\end{array}\right),\;\; k\in \mathbb{Z}^\ast , \] where $(f_k)_{k\in \mathbb{N}}$ is an orthonormal basis of $L^2(\Omega )$ consisting of eigenfunctions of the unbounded operator $A_0=\frac{d^2}{dx^2}$ under Dirichlet boundary condition and $f_{-k}=-f_k$, $k\in \mathbb{N}^\ast$.
Let $\mathcal{H}_{\pm}$ be the closure in $\mathcal{H}$ of ${\rm span}\{ g_{\pm k};\; k\in \mathbb{N}^\ast\}$. Clearly, $\mathcal{H}=\mathcal{H}_+\oplus \mathcal{H}_-$ and $\mathcal{H}_\pm$ is invariant under $\mathcal{A}_0$. Let then $\mathcal{A}_0^\pm:\mathcal{H}_\pm \rightarrow \mathcal{H}_\pm$ be the unbounded operator given by $\mathcal{A}_0^\pm=\mathcal{A}_0{_{|\mathcal{H}_\pm}}$ and \[
D(\mathcal{A}_0^\pm )=\{u\in \mathcal{H}_\pm ;\; \sum_{k\in \mathbb{N}^\ast}k^2|\langle u,g _{\pm k}\rangle|^2<\infty \}. \] Here $\langle \cdot ,\cdot \rangle$ is the scalar product in $\mathcal{H}$.
Let $\mathcal{A}_{a_0}^\pm=\mathcal{A}_0^\pm + \mathcal{B}_{a_0}$ and set \[ \varrho=\sum_{k\geq 1}\frac{1}{(2k+1)^2}\;\; \mbox{and}\;\; \alpha =\frac{1}{2\sqrt{2(1+\varrho )}}. \]
In light of \cite[Theorem 2 and Lemma 10]{Shk}, we get
\begin{theorem}\label{t4.1} Under the assumption \[
\rho :=\|a_0\|_\infty <\alpha , \] the spectrum of $\pm \mathcal{A}_{a_0}^\pm$ consists of a sequence $(i\mu _k^\pm)$ such that, for any $\delta \in (0,1-\rho^2/\alpha ^2)$, there is an integer $\widetilde{k}$ such that \[
|i\mu_k^\pm -ik|\leq \overline{\alpha}=\overline{\alpha}(a_0):=\frac{\rho }{\sqrt{4\rho^2+\delta}},\;\; k\geq \widetilde{k}. \] In addition, $\mathcal{H}_\pm$ admits a Riesz basis $(\phi _k^\pm)=\left(\left( \begin{array}{c}\varphi _k^\pm \\ i\mu _k^\pm \varphi _k^\pm \end{array}\right)\right)_{k\in \mathbb{N}^\ast}$, each $\phi_k^\pm$ is an eigenfunction corresponding to $i\mu _k^\pm$. \end{theorem}
We denote by $(\widetilde{\phi}_k^\pm)$ the Riesz basis biorthogonal to $(\phi _k^\pm)$ and define the sequence $(\phi _k)_{k\in \mathbb{Z}^\ast}$ (resp. $(\widetilde{\phi}_k)_{k\in \mathbb{Z}^\ast}$) as follows: $\phi_{-k}=-\phi _k^-$ and $\phi_k=\phi _k^+$ (resp. $\widetilde{\phi}_{-k}=-\widetilde{\phi }_k^-$ and $\widetilde{\phi}_k=\widetilde{\phi} _k^+$), $k\in \mathbb{N}^\ast$. Set also $\mu_{-k}=-\mu_k^-$ and $\mu_k=\mu_k^+$, $k\in \mathbb{N}^\ast$. Therefore, $\mathcal{A}_{a_0}\phi_k=i\mu_k\phi_k$, $k\in \mathbb{Z}^\ast$, and, for any $u\in \mathcal{H}$, \[ u=\sum_{k\in \mathbb{Z}^\ast}\langle u,\widetilde{\phi}_k\rangle \phi_k =\sum_{k\in \mathbb{Z}^\ast}\langle u,\phi_k\rangle \widetilde{\phi}_k. \] Additionally, \begin{equation}\label{4.1}
\alpha \|u\|^2_{\mathcal{H}}\leq \sum_{k\in \mathbb{Z}^\ast}|\langle u,\widetilde{\phi}_k\rangle|^2,\; \sum_{k\in \mathbb{Z}^\ast}|\langle u,\phi_k\rangle|^2 \leq \beta \|u\|^2_{\mathcal{H}}, \end{equation} where the constants $\alpha$ and $\beta$ do not depend on $u$ (see for instance \cite[Lemma 252, page 37]{tucsnakweiss}).
We pick $a_0$ as in the preceding theorem. Then it is straightforward to check that $u_{a_0}=e^{i\mu _kt}\varphi_k$, $k\in \mathbb{Z}^\ast$, is the solution of the IBVP \eqref{wd} with $q=0$, $a=a_0$, $\left( \begin{array}{c}u_0 \\ u_1 \end{array}\right)=\phi_k$. If $u_a$ is the solution of the IBVP \eqref{wd}, then $u=u_a-u_{a_0}$ is the solution of the IBVP
\begin{equation}\label{4.2} \left\{ \begin{array}{lll}
\partial _t^2 u - \Delta u + a(x) \partial_t u = (a_0-a)i\mu_ke^{i\mu _kt}\varphi _k \;\; &\mbox{in}\; Q,
\\ u = 0 &\mbox{on}\; \Sigma, \\ u(\cdot ,0) =0 ,\; \partial_t u (\cdot ,0) = 0. \end{array} \right. \end{equation}
We fix $\delta$ as in the statement of Theorem \ref{t4.1}. Then, for some integer $\widetilde{k}$, \begin{align*}
&\left| e^{i\mu _kt}\right|\leq e^{|i\mu _k-i|k||t}\left|e^{i|k|t}\right|\leq e^{\overline{\alpha}\tau},\;\; |k|\geq \widetilde{k}, \\
&|\mu _k|\leq |k|+\overline{\alpha},\;\; |k|\geq \widetilde{k}. \end{align*}
With these estimates at hand, we can proceed as in the previous section to get, with $\psi _k=i\mu _k\varphi _k$, \begin{equation}\label{4.3}
|(a-a_0, \psi _k)|^2=\left| \left\langle \left( \begin{array}{c} 0\\ a-a_0 \end{array}\right),\phi_k \right\rangle \right|^2\leq Ce^{Ck^2}\|\widetilde{\Lambda}_a-\widetilde{\Lambda}_{a_0}\|^2.
\end{equation}
It follows from \eqref{4.1}, \begin{equation}\label{4.4}
\alpha \|a-a_0\|^2_2=\alpha \left\| \left( \begin{array}{c} 0\\ a-a_0 \end{array}\right)\right\|_{\mathcal{H}}^2\leq \sum_{|k|\geq 1}
\left| \left\langle \left( \begin{array}{c} 0\\ a-a_0 \end{array}\right),\phi_k \right\rangle \right|^2. \end{equation}
In light of \eqref{4.3} and \eqref{4.4}, we have \begin{align}
\alpha \|a-a_0\|^2_2&\leq CNe^{C\lambda }\|\widetilde{\Lambda}_a-\widetilde{\Lambda}_{a_0}\|^2+\frac{1}{\lambda }\sum_{|k|>N}k^2 |(a-a_0, \psi _k)|^2\nonumber \\
&\leq CNe^{C\lambda }\|\widetilde{\Lambda}_a-\widetilde{\Lambda}_{a_0}\|^2+\frac{1}{\lambda }\sum_{|k|\geq 1}k^2 |(a-a_0, \psi _k)|^2.\label{4.5} \end{align*} Here $\lambda \geq \lambda _1$ and $N=N(\lambda )$ is the smallest integer satisfying $N^2\leq \lambda <(N+1)^2$.
We note that we cannot pursue the proof similarly to that of \eqref{est1} because $(\psi _k)$ is not necessarily an orthonormal basis of $L^2(\Omega )$. So instead of the boundedness of $a-a_0$ in $H^1(\Omega )$, we make the assumption, where $m>0$ is fixed, \begin{equation}\label{4.6}
\sum_{|k|\geq 1}k^2 |(a-a_0, \psi _k)|^2\leq m. \end{equation}
Under the assumption \eqref{4.6}, \eqref{4.5} entails \[
\alpha \|a-a_0\|^2_2\leq Ce^{C\lambda }\|\widetilde{\Lambda}_a-\widetilde{\Lambda}_{a_0}\|^2+\frac{m}{\lambda }, \] where $\widetilde{\Lambda}_a=\widetilde{\Lambda}_{0,a}$ and $\widetilde{\Lambda}_{a_0}=\widetilde{\Lambda}_{0,a_0}$.
The same minimization argument used in the proof of \eqref{est1} (see Section 3) allows us to prove the following theorem.
\begin{theorem}\label{theorem4.2} There exist two constants $C>0$ and $\delta >0$ so that \[
\|a-a_0\|_2\leq C\left| \ln \left(C^{-1}\| \widetilde{\Lambda} _a-\widetilde{\Lambda} _{a_0}\|\right)\right|^{-1/2},\quad a \in a_0+\delta B_\infty \; \mbox{and \eqref{4.6} holds}. \] \end{theorem}
\begin{remark}\label{remark4.1} Let us explain briefly why the result of this section cannot be extended to a higher dimensional case. The main reason is that, even for simple geometries, the eigenvalues of the unperturbed operators $\mathcal{A}_0^\pm$ do not satisfy a gap condition, which is the main assumption in \cite[Theorem 2]{Shk}. If $(\rho _k)$, $\rho_k=k$, is the sequence of eigenvalues of $\pm \mathcal{A}_0^\pm $, we used in an essential way that \[ \rho_{k+1} -\rho_k =1. \] When $\Omega = (0,a)\times (0,b)$, the eigenvalues of the operator $\mathcal{A}_0^+$ consist of the sequence $\left( \pi ^2\left( \frac{k^2}{a^2}+\frac{\ell ^2}{b^2}\right) \right)_{k,\ell \in \mathbb{N}^\ast}$. These eigenvalues are simple when $\frac{a^2}{b^2}\not \in \mathbb{Q}$ but can accumulate in a finite interval and therefore do not satisfy a gap condition as in the one dimensional case. \end{remark}
\section{An application to clamped Euler-Bernoulli beam}
For the same reason as in the preceding section, we limit our analysis to the one dimensional case. So we let $\Omega =(0,1)$.
We introduce the following spaces \begin{align*} & H_0=L^2(0,1), \\ &H_{1/2}=H_0^2(\Omega ), \\ &H_1=H^4(0,1)\cap H_0^2(\Omega ). \end{align*}
The natural norm of $H_s$ will be denoted by $\|\cdot \|_s$, $s\in \{0,1/2,1\}$.
On $\mathcal{H}=H_{1/2}\times H_0$, we introduce the unbounded operator $\mathcal{A}$ given by \[ \mathcal{A}=\left( \begin{array}{cc} 0&I\\- \frac{d^4}{dx^4}&0\end{array} \right),\;\; D(\mathcal{A})=H_1\times H_{1/2}:=\mathcal{H}_1. \]
We consider a torque observation at an end point. We then define $\mathcal{C}:\mathcal{H}_1\rightarrow \mathbb{C}$ by \[ \mathcal{C}\left(\begin{array}{cc}\varphi \\\psi \end{array}\right)= \frac{d^2\varphi}{dx^2}(0). \]
We are concerned with the following IBVP for the clamped Euler-Bernoulli beam equation \begin{equation}\label{eb0} \left\{ \begin{array}{lll}
\partial _t^2 u + \partial _x^4 u = 0 \;\; &\mbox{in}\; Q, \\ u(0,\cdot ) = u(1,\cdot )= 0 \;\; &\mbox{on}\; (0,\tau ), \\ \partial _xu(0,\cdot ) = \partial _xu(1,\cdot )= 0 \;\; &\mbox{on}\; (0,\tau ), \\ u(\cdot ,0) = u_0,\; \partial_t u (\cdot ,0) = u_1. \end{array} \right. \end{equation}
From \cite[Proposition 3.7.6, page 100]{tucsnakweiss}, $\mathcal{A}$ is skew-adjoint and therefore it generates a unitary group on $\mathcal{H}$. Consequently, for any $\left( \begin{array}{c} u_0\\u_1\end{array} \right)\in \mathcal{H}_1$ the IBVP \eqref{eb0} has a unique solution $u$ so that $(u,u')\in C([0,\tau ],\mathcal{H}_1)\cap C^1([0,\tau ],\mathcal{H})$. Moreover, by \cite[Proposition 6.10.1, page 270]{tucsnakweiss}, $(\mathcal{A},\mathcal{C})$ is exactly observable for any $\tau >0$ and there is a constant $\kappa$ such that \begin{equation}\label{p1}
\kappa ^2(\|u_0\|_{1/2}^2+\|u_1\|_0^2)\leq \| \partial _x^2u(0,\cdot )\|^2_{L^2((0,\tau ))}. \end{equation} Here the constant $\kappa$ is independent of $u_0$ and $u_1$.
Let $\mathcal{B}_a$ be the operator, where $a=a(x)$, \[ \mathcal{B}_a=\left( \begin{array}{cc} 0&0\\0& - a\end{array} \right). \] This operator is bounded on $\mathcal{H}$ whenever $a\in L^\infty (\Omega )$. Therefore, bearing in mind that $\mathcal{A}+\mathcal{B}_a$ generates a continuous semigroup, the IBVP
\begin{equation}\label{eb1} \left\{ \begin{array}{lll}
\partial _t^2 u + \partial _x^4 u +a(x)\partial _tu= 0 \;\; &\mbox{in}\; Q, \\ u(0,\cdot ) = u(1,\cdot )= 0 \;\; &\mbox{on}\; (0,\tau ), \\ \partial _xu(0,\cdot ) = \partial _xu(1,\cdot )= 0 \;\; &\mbox{on}\; (0,\tau ), \\ u(\cdot ,0) = u_0,\; \partial_t u (\cdot ,0) = u_1 \end{array} \right. \end{equation}
has a unique solution $u=u_a(u_0,u_1)$ satisfying $(u,u')\in C([0,\tau ],\mathcal{H}_1)\cap C^1([0,\tau ],\mathcal{H})$, for any $\left( \begin{array}{c} u_0\\u_1\end{array} \right)\in \mathcal{H}_1$. Moreover, the same perturbation argument used in the proof of Theorem \ref{theorem2.2} enables us to show that $(\mathcal{A}+\mathcal{B}_a,\mathcal{C})$ is exactly observable with constant $\widetilde{\kappa}^2\geq \kappa ^2/2$ provided the norm of the operator $\mathcal{B}_a$ is sufficiently small. That is, there is $\delta >0$ such that for any $\mathcal{B}_a\in \mathscr{B}(\mathcal{H})$ with $\| \mathcal{B}_a\|\leq \delta$, we have \begin{equation}\label{p2}
(1/2)\kappa ^2(\|u_0\|_{1/2}^2+\|u_1\|_0^2)\leq \| \partial _x^2u(0,\cdot )\|^2_{L^2((0,\tau ))}. \end{equation}
In light of \cite[Lemma 6.10.2, page 218]{tucsnakweiss}, the spectrum of $\mathcal{A}$ consists of a sequence of simple eigenvalues $(i\rho _k)_{k\in \mathbb{Z}^\ast}$, where \[ \rho _k=\pi^2\left( k-\frac{1}{2} \right)^2+a_k, \;\; k\in \mathbb{N}^\ast , \] $(a_k)$ being a sequence converging exponentially to $0$, and $\rho_{-k}=-\rho_k$, $k\in \mathbb{N}^\ast$.
Let $A_0$ be the unbounded operator on $L^2(\Omega )$ defined by $A_0=\frac{d^4}{dx^4}$ and $D(A_0)=H^4(\Omega )\cap H_0^2(\Omega )$. Then $A_0$ is diagonalizable with eigenvalues $(\rho_k^2)_{k\in \mathbb{N}^\ast}$. Let $(f_k)_{k\in \mathbb{N}^\ast}$ be an orthonormal basis of eigenfunctions, each $f_k$ being an eigenfunction corresponding to $\rho _k^2$. Let \[ g_k=\frac{1}{\sqrt{2}}\left(\begin{array}{cc} \frac{f_k}{i\rho _k}\\ f_k\end{array}\right),\;\; \textrm{and}\;\; g_{-k}=-g_k,\;\; k\in \mathbb{N}^\ast . \] With the help of \cite[Lemma 3.7.7, page 101]{tucsnakweiss}, we get that $(g_k)_{k\in \mathbb{Z}^\ast}$ is an orthonormal basis of $\mathcal{H}$ consisting of eigenvectors of $\mathcal{A}$.
Define $\mathcal{H}_\pm$ as the closure of $\textrm{span}\{ g_{\pm k};\; k\in \mathbb{N}^\ast \}$. Then $\mathcal{H}=\mathcal{H}_-\oplus \mathcal{H}_+$ and $\mathcal{H}_\pm$ is invariant under $\mathcal{A}$. We consider $\mathcal{A}_0^\pm:\mathcal{H}_\pm \rightarrow \mathcal{H}_\pm$, the unbounded operator given by $\mathcal{A}_0^\pm=\mathcal{A}{_{|\mathcal{H}_\pm}}$ and \[
D(\mathcal{A}_0^\pm )=\{u\in \mathcal{H}_\pm ;\; \sum_{k\in \mathbb{N}^\ast}\rho _k^2|\langle u,g _{\pm k}\rangle|^2<\infty \}, \] where $\langle \cdot ,\cdot \rangle$ is the scalar product in $\mathcal{H}$, and we set $\mathcal{A}_{a_0}^\pm =\mathcal{A}_0^\pm +\mathcal{B}_{a_0}$.
Since $\rho_{k+1}-\rho_k\rightarrow +\infty$ as $k\rightarrow +\infty$, $(\rho _k)_{k\in \mathbb{N}^\ast}$ satisfies a gap condition. Precisely, there exists $d>0$ so that \[ \rho_{k+1}-\rho_k\geq d,\;\; k\in \mathbb{N}^\ast. \]
Set \[ \alpha '=\frac{d}{2\sqrt{2(1+\varrho )}}, \] where $\varrho$ is as in Section 4.
Similarly to Theorem \ref{t4.1}, we have the following result.
\begin{theorem}\label{t5.1} Under the assumption \[
\rho :=\|a_0\|_\infty <\alpha ', \] the spectrum of $\pm\mathcal{A}_{a_0}^\pm$ consists of a sequence $(i\mu _k^\pm)$ such that, for any $\delta \in (0,1-\rho^2/(\alpha ')^2)$, there is an integer $\widetilde{k}$ such that \[
|i\mu_k^\pm -i\rho_k|\leq \overline{\alpha}=\overline{\alpha}(a_0):=\frac{\rho d}{\sqrt{4\rho^2+d^2\delta}},\;\; k\geq \widetilde{k}. \] In addition, $\mathcal{H}^\pm $ admits a Riesz basis $(\phi _k^\pm )=\left(\left( \begin{array}{c}\varphi _k^\pm \\ i\mu _k^\pm\varphi _k^\pm \end{array}\right)\right)$, each $\phi_k^\pm $ is an eigenfunction corresponding to $i\mu _k^\pm$. \end{theorem}
We define the IB operator $\tilde{\Lambda} _a$ by \[
\tilde{\Lambda} _a : u_1\in H_{1/2}\longrightarrow \partial _x^2u_a(0,u_1)(0,\cdot )\in L^2(0,\tau ). \] One can prove in a straightforward manner that $ \tilde{\Lambda} _a$ is a bounded operator between $\mathcal{H}_1$ and $H^1((0,\tau ))$ and that its norm can be uniformly bounded, with respect to $a$, by a constant, provided that the $L^\infty$-norm of $a$ is sufficiently small.
We carry out a similar analysis to that after Theorem \ref{t4.1} to get the following stability estimate.
\begin{theorem}\label{theorem5.2} Given $m>0$, there exist constants $C>0$ and $\delta >0$ so that \[
\|a-a_0\|_0\leq C\left| \ln \left(C^{-1}\| \tilde{\Lambda} _a-\tilde{\Lambda} _{a_0}\|\right)\right|^{-1/4}, \] if $a\in a_0+\delta B_{1,\infty}$ and \[
\sum_{|k|\geq 1}\lambda_k |(a-a_0, \psi _k)|^2\leq m, \] where $\psi _{\pm k}=i\mu _k^\pm\varphi _k^\pm$, $k\in \mathbb{N}^\ast$. \end{theorem}
We mention that the method used in this section and in the previous one is easily adaptable to a Schr\"odinger equation.
\section{The case of a heat equation}
We consider the following IBVP for the heat equation \begin{equation}\label{he} \left\{ \begin{array}{lll}
\partial _t u - \Delta u + q(x)u = 0 \;\; &\mbox{in}\; Q,
\\ u = 0 &\mbox{on}\; \Sigma, \\ u(\cdot ,0) = u_0. \end{array} \right. \end{equation} Let $H^{2,1}(Q)=L^2((0,\tau ),H^2(\Omega ))\cap H^1((0,\tau ), L^2(\Omega ))$. From \cite[Theorem 1.43, page 27]{choulli}, for any $q\in L^\infty (\Omega )$ and $u_0\in H_0^1(\Omega )$, the IBVP has a unique solution $u_q=u_q(u_0)\in H^{2,1}(Q)$ and, for any $m>0$,
\[
\|u_q\|_{H^{2,1}(Q)}\leq C\|u_0\|_{1,2}, \]
where the constant $C=C(m)$ is independent of $q$ with $\|q\|_\infty \leq m$.
Let $\Gamma$ be an arbitrary nonempty open subset of $\partial \Omega$ and set $\Upsilon =\Gamma \times (0,\tau )$. Using the trace theorem in \cite[page 26]{choulli}, we obtain that the following IB mapping \[ \Lambda _q :u_0\in H_0^1(\Omega )\longrightarrow \partial _\nu u_q(u_0)\in L^2(\Upsilon) \] is bounded.
The following lemma will be useful in the sequel. Its proof is sketched in Appendix A.
\begin{lemma}\label{lemmah1} Let $q_0,q\in L^\infty (\Omega )$ so that $q\in q_0+W^{1,\infty}(\Omega )$. Then $\Lambda _q-\Lambda _{q_0}$ defines a bounded operator from $H_0^1(\Omega )$ into $H^1((0,\tau );L^2(\Gamma ))$. Additionally, for each $m>0$, there exists $C>0$ so that \[
\| \Lambda _q-\Lambda _{q_0}\|\leq C, \]
for all $q_0,q\in mB_\infty$. Here, $\| \Lambda _q-\Lambda _{q_0}\|$ is the norm of $\Lambda _q-\Lambda _{q_0}$ in $\mathscr{B}(H_0^1(\Omega );H^1((0,\tau );L^2(\Gamma )))$. \end{lemma}
In the sequel $\Lambda _q-\Lambda _{q_0}$ is considered as an operator acting from $H_0^1(\Omega )$ into $H^1((0,\tau );L^2(\Gamma ))$.
We assume, without loss of generality, that $q\geq 0$. Indeed, substituting $u$ by $ue^{-\|q\|_\infty t}$, we see that $q$ in \eqref{he} is changed to $q+\|q\|_\infty$. So, we fix $q_0\in L^\infty (\Omega )$ satisfying $0\leq q_0$ and we let $0<\lambda _1<\lambda _2\leq \ldots \leq \lambda _k\rightarrow +\infty$ be the sequence of eigenvalues, counted according to their multiplicity, of $-\Delta +q_0$ under Dirichlet boundary condition. An orthonormal basis consisting of the corresponding eigenfunctions is denoted by $(\varphi _k)$.
Let $q\in mB_\infty \cap \left(q_0+W^{1,\infty}(\Omega )\right)$. We pick a positive integer $k$. Taking into account that $u_{q_0}(\varphi_k)=e^{-\lambda _k t}\varphi _k$, we obtain that $u=u_q(\varphi _k)-u_{q_0}(\varphi_k)$ is the solution of the IBVP \begin{equation}\label{h1} \left\{ \begin{array}{lll}
\partial _t u - \Delta u + q(x)u = (q_0-q)e^{-\lambda _kt}\varphi _k \;\; &\mbox{in}\; Q,
\\ u = 0 &\mbox{on}\; \Sigma, \\ u(\cdot ,0) = 0. \end{array} \right. \end{equation}
We set $f=(q-q_0)\varphi _k$ and $\lambda (t)=e^{-\lambda _kt}$. Therefore \eqref{h1} becomes \begin{equation}\label{h2} \left\{ \begin{array}{lll}
\partial _t u - \Delta u + q(x)u = \lambda (t) f(x) \;\;& \mbox{in}\; Q,
\\ u = 0 &\mbox{on}\; \Sigma , \\ u(\cdot ,0) = 0. \end{array} \right. \end{equation}
It is straightforward to check that
\begin{equation}\label{h3}
u(x,t)=\int_0^t\lambda (t-s)v(x,s)\, ds,
\end{equation}
where $v$ is the solution of the IBVP
\[ \left\{ \begin{array}{lll}
\partial _t v - \Delta v + q(x)v = 0 \;\; &\mbox{in}\; Q,
\\ v = 0 &\mbox{on}\; \Sigma , \\ v(\cdot ,0) = f. \end{array} \right. \]
In light of the Carleman estimate in \cite[Theorem 3.4, page 165]{choulli}, we can extend \cite[Proposition 3.5, page 170]{choulli} in order to get the following final time observability inequality \begin{equation}\label{h4}
\|v(\cdot ,\tau )\|_{H_0^1(\Omega )}\leq C\|\partial _\nu v\|_{L^2(\Upsilon)}. \end{equation} Here $C$ is a constant depending on $m$ but not on $q$.
By the continuity of the trace operator $w\in H^{2,1}(Q)\rightarrow \partial _\nu w_{|\Upsilon}\in L^2(\Upsilon)$, we get from \eqref{h3} \[
\partial _\nu u(x,t)_{|\Upsilon}=\int_0^t\lambda (t-s)\partial _\nu v(x,s)_{|\Upsilon}\, ds. \] We proceed as in the beginning of the proof of Theorem \ref{theorem2.1} to deduce the following estimate \begin{equation}\label{h7}
\|\partial _\nu v\|_{L^2(\Upsilon )}\leq \sqrt{2}e^{\tau ^2\lambda _k^2}\|\partial _\nu u\|_{H^1((0,\tau ), L^2(\Gamma ))}. \end{equation}
On the other hand \begin{equation}\label{h5} v(x,t)=\sum_{\ell \geq 1}e^{-\lambda _\ell t}(f,\varphi _\ell )\varphi _\ell . \end{equation} Hence \[
\|v(\cdot ,\tau )\|_2^2= \sum_{\ell \geq 1}e^{-2\lambda _\ell \tau }|(f,\varphi _\ell )|^2. \] Arguing as in Section 3, we get, for any $\lambda \geq \lambda _1$ and $N=N(\lambda )$ satisfying $\lambda _N\leq \lambda <\lambda_{N+1}$, \begin{equation}\label{h6}
\|f\|_2^2\leq e^{2\lambda \tau }\|v(\cdot ,\tau )\|_2^2+\frac{1}{\lambda ^2}\sum_{\ell >N}\lambda _\ell ^2|(f,\varphi _\ell )|^2. \end{equation}
By Green's formula, we obtain \[ \lambda _\ell (f,\varphi _\ell)=-\int_\Omega \Delta (q-q_0)\varphi_\ell \varphi_kdx -2\int_\Omega \nabla (q-q_0)\cdot \nabla \varphi_k\varphi_\ell dx +\lambda _k(f,\varphi _\ell). \]
Therefore, under the assumption that $q\in q_0+ W^{2,\infty}(\Omega )$ and $\| q-q_0\|_{2,\infty}\leq m$, \[
\lambda _\ell |(f,\varphi _\ell)|\leq (1+\sqrt{\lambda _k})m +\lambda _k|(f,\varphi _\ell)|. \]
This estimate in \eqref{h6} yields \begin{align*}
\|f\|_2^2&\leq e^{2\lambda \tau}\|v(\cdot ,\tau)\|_2^2+\frac{2(1+\lambda_k)m^2+\lambda _k^2}{\lambda ^2}\sum_{\ell >N}|(f,\varphi _\ell )|^2 \\
&\leq e^{2\lambda \tau}\|v(\cdot ,\tau)\|_2^2+\frac{2(1+\lambda_k)m^2+\lambda _k^2}{\lambda ^2}\|f\|_2^2 \\
&\leq e^{2\lambda \tau}\|v(\cdot ,\tau)\|_2^2+\frac{C\lambda _k^2}{\lambda ^2}\|f\|_2^2 \\
&\leq e^{2\lambda \tau}\|v(\cdot ,\tau)\|_2^2+\frac{C\lambda _k^2}{\lambda ^2}. \end{align*} This inequality together with \eqref{h7} implies
\[
\|(q-q_0)\varphi _k\|_2^2=\|f\|_2^2\leq \sqrt{2}e^{2\tau ^2\lambda _k^2+2\lambda \tau}\|\partial _\nu u\|^2_{H^1((0,\tau ), L^2(\Gamma ))}+\frac{C\lambda _k^2}{\lambda ^2}. \]
But \[
\|\partial _\nu u\|_{H^1((0,\tau ), L^2(\Gamma ))}=\|\Lambda_q(\varphi _k)-\Lambda _{q_0}(\varphi _k)\|_{H^1((0,\tau ), L^2(\Gamma ))}\leq \|\Lambda_q-\Lambda _{q_0}\| \| \varphi _k\|_{H_0^1(\Omega )}\leq \sqrt{\lambda_k}\|\Lambda_q-\Lambda _{q_0}\|. \] Whence, \begin{equation}\label{h8}
\|(q-q_0)\varphi _k\|_2^2=\|f\|_2^2\leq \sqrt{2}\lambda _ke^{2\tau ^2\lambda _k^2+2\lambda \tau} \|\Lambda_q-\Lambda _{q_0}\|^2+\frac{C\lambda _k^2}{\lambda ^2}. \end{equation}
Now the usual way consists in minimizing, with respect to $\lambda$, the right hand side of the inequality above. By a straightforward computation, one can see that the minimization argument is possible only if \[
\frac{\lambda _ke^{-2\tau ^2\lambda _k^2}}{\|\Lambda_q-\Lambda _{q_0}\|^2}\gg 1. \]
But this estimate does not guarantee that $\|\Lambda_q-\Lambda _{q_0}\|$ can be chosen arbitrarily small uniformly in $k$. However, the minimization argument works if we perturb $q_0$ within a finite dimensional subspace. That is what we will discuss now.
Let $I>0$ be a given integer and $E_I=\textrm{span}\{\varphi_1,\ldots ,\varphi _I\}$. Since $|(q-q_0,\varphi _k)|^2\leq |\Omega |\|(q-q_0)\varphi _k\|_2^2$ by Cauchy-Schwarz's inequality, we get from \eqref{h8} \[
\|q-q_0\|_2^2=\sum_{k=1}^I|((q-q_0),\varphi _k)|^2\leq C_I\left(e^{2\lambda \tau}\|\Lambda_q-\Lambda _{q_0}\|^2+\frac{1}{\lambda ^2}\right), \] for some constant $C_I$ depending on $I$. We observe that, according to the preceding analysis, $C_I$ blows up when $I\rightarrow +\infty$.
Minimizing with respect to $\lambda$ the right hand side of the inequality above, we get \[
\|q-q_0\|_2\leq C_I\left|\ln \left(\|\Lambda_q-\Lambda _{q_0}\|\right)\right|^{-1}, \]
provided that $\|\Lambda_q-\Lambda _{q_0}\|$ is sufficiently small. By a simple continuity argument, we see that $\|\Lambda_q-\Lambda _{q_0}\|$ is small whenever $\|q-q_0\|_{1,\infty}$ is small. If $\Lambda ^I_q=\Lambda _q{_{|E_I}}$, we end up getting
\begin{theorem}\label{theorem6.1} Under the preceding notations and assumptions, there exist two constants $C_I$ and $\delta _I$ so that \[
\|q-q_0\|_2\leq C_I\left|\ln \left(\|\Lambda ^I _q-\Lambda^I_{q_0}\|\right)\right|^{-1}, \]
if $\|q-q_0\|_{1,\infty}\le \delta _I$. \end{theorem}
\begin{remark}\label{remark6.1} We consider on $L^2(\Omega )$ the following norm, which is weaker than its natural norm, \[
\|w\|_\ast =\left(\sum_{k\geq 1} e^{-3\tau ^2\lambda _k^2}|(w,\varphi _k)|^2\right)^{1/2},\;\; w\in L^2(\Omega ). \] Then \eqref{h8} yields \[
\|q-q_0\|_\ast^2\leq \sqrt{2}e^{2\tau \lambda}\|\Lambda_q-\Lambda _{q_0}\|^2+\frac{C}{\lambda ^2}. \] We get by minimizing the right hand side with respect to $\lambda$ \[
\|q-q_0\|_\ast \leq C\left|\ln \left(\|\Lambda _q-\Lambda_{q_0}\|\right)\right|^{-1}.
\] \end{remark}
\appendix \section{}
\begin{proof}[Proof of Lemma \ref{lemmah1}] We start by considering the following IBVP for the heat equation \begin{equation}\label{he1} \left\{ \begin{array}{lll}
\partial _t u - \Delta u + q(x)u = \phi \;\; &\mbox{in}\; Q,
\\ u = 0 &\mbox{on}\; \Sigma, \\ u(\cdot ,0) = \psi . \end{array} \right. \end{equation} Let $\phi\in L^2(Q)$ and $\psi \in H_0^1(\Omega )$. We obtain by applying one more time \cite[Theorem 1.43, page 27]{choulli} that the IBVP \eqref{he1} has a unique solution $u_{\phi ,\psi}\in H^{2,1}(Q)$ provided that $q\in L^\infty (\Omega )$. Moreover, for any $m>0$, there exists $C>0$ so that \begin{equation}\label{h9}
\|u_{\phi,\psi}\|_{H^{2,1}(Q)}\leq C\left(\|\psi\|_{H^1(\Omega )} +\|\phi\|_{L^2(Q)}\right), \end{equation} uniformly in $q\in mB_\infty$.
Let $A_q$ be the unbounded operator on $L^2(\Omega )$ defined by \[ A_q=-\Delta +q,\quad D(A_q)=H^2(\Omega )\cap H_0^1(\Omega ). \] Then the solution of \eqref{he1} is given by \[ u_{\phi ,\psi}(\cdot ,t)=e^{-tA_q}\psi +\int_0^te^{-sA_q}\phi (\cdot ,t-s)ds, \] where $e^{-tA_q}$ is the semigroup generated by $-A_q$.
In the sequel, we write $u_\phi$ for $u_{\phi ,0}$.
Let $\phi \in C^1([0,\tau ]; L^2(\Omega ))$ so that $\phi (\cdot ,0)\in H_0^1(\Omega )$. Then \[ \partial _tu_\phi (\cdot ,t)=e^{-tA_q}\phi(\cdot ,0)+\int_0^te^{-sA_q}\partial _t\phi (\cdot ,t-s)ds. \] In other words, $\partial _tu_\phi=u_{\partial _t\phi ,\phi(\cdot ,0)}$. Thus, estimate \eqref{h9} entails \begin{equation}\label{h10}
\|\partial _tu_\phi\|_{H^{2,1}(Q)}\leq C\left(\|\phi (\cdot ,0)\|_{H^1(\Omega )} +\|\partial _t\phi\|_{L^2(Q)}\right). \end{equation}
Next, let $\phi \in H^1((0,\tau );L^2(\Omega ))$ with $\phi (\cdot ,0)\in H_0^1(\Omega )$. Observing that $u_\phi =u_{\widetilde{\phi}}+u_{\phi(\cdot ,0 )}$, where $\widetilde{\phi}=\phi -\phi (\cdot ,0)$, and $\partial_tu_{\phi (\cdot ,0)}=u_{0,\phi (\cdot ,0)}$, we see that it is sufficient to consider the case $\phi (\cdot ,0)=0$.
By density, there exists a sequence $(\phi_k)$ in $C_0^\infty ((0,\tau ];L^2(\Omega ))$ converging to $\phi \in H^1((0,\tau );L^2(\Omega ))$. Armed with \eqref{h9}, we get in a straightforward manner that $u_{\phi _k}$ and $u_{\partial _t\phi_k}$ converge respectively to $u_\phi$ and $u_{\partial _t\phi}$ in $H^{2,1}(Q)$. But, in light of the smoothness of $\phi_k$, $\partial _tu_{\phi _k}=u_{\partial _t\phi_k}$. Therefore, we have $\partial _tu_\phi=u_{\partial _t\phi}$ and \eqref{h10} holds true for all $\phi \in H^1((0,\tau );L^2(\Omega ))$ with $\phi (\cdot ,0)\in H_0^1(\Omega )$.
Now, let $q_0,q\in mB_\infty$ so that $q\in q_0+W^{1,\infty }(\Omega )$. Let $u_0\in H_0^1(\Omega )$. By an elementary computation, we get that $u:=u_q(u_0)-u_{q_0}(u_0)=u_\phi$, with $\phi =(q_0-q)u_{q_0}(u_0)$. Consequently, from the preceding discussion, $u,\partial _tu\in H^{2,1}(Q)$ and \begin{equation}\label{h11}
\|u\|_{H^{2,1}(Q)} +\|\partial _tu\|_{H^{2,1}(Q)}\leq C\|u_0\|_{H^1(\Omega )}, \end{equation} for some constant $C=C(m)$.
Finally, using the continuity of the trace operator $w\in H^{2,1}(Q)\rightarrow \partial _\nu w\in L^2(\Upsilon )$, we obtain from \eqref{h11} \[
\|\Lambda_q(u_0)-\Lambda_{q_0}(u_0)\|_{H^1((0,\tau );L^2(\Gamma ))}\leq C\|u_0\|_{H^1(\Omega )}. \] That is, we proved \[
\|\Lambda_q-\Lambda_{q_0}\|\leq C, \]
where $\|\Lambda_q-\Lambda_{q_0}\|$ is the norm of $\Lambda_q-\Lambda_{q_0}$ in $\mathscr{B}(H_0^1(\Omega ),H^1((0,\tau );L^2(\Gamma )))$. \end{proof}
\end{document}
Graded-commutative ring
In algebra, a graded-commutative ring (also called a skew-commutative ring) is a graded ring that is commutative in the graded sense; that is, homogeneous elements x, y satisfy
$xy=(-1)^{|x||y|}yx,$
where |x | and |y | denote the degrees of x and y.
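As an elementary illustration of the sign rule (a standard observation, included here only as an example): if x and y both have odd degree, then $xy=-yx$; taking y = x gives $x^{2}=-x^{2}$, hence $2x^{2}=0$, and so $x^{2}=0$ whenever 2 is invertible in the ring. Homogeneous elements of even degree, by contrast, commute with all homogeneous elements.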
A commutative (non-graded) ring, with trivial grading, is a basic example. An exterior algebra is an example of a graded-commutative ring that is not commutative in the non-graded sense.
A cup product on cohomology satisfies the skew-commutative relation; hence, a cohomology ring is graded-commutative. In fact, many examples of graded-commutative rings come from algebraic topology and homological algebra.
References
• David Eisenbud, Commutative Algebra. With a view toward algebraic geometry, Graduate Texts in Mathematics, vol 150, Springer-Verlag, New York, 1995. ISBN 0-387-94268-8
• Beck, Kristen A.; Sather-Wagstaff, Keri Ann (2013-07-01). "A somewhat gentle introduction to differential graded commutative algebra". arXiv:1307.0369 [math.AC].
See also
• DG algebra
• graded-symmetric algebra
• alternating algebra
• supercommutative algebra
September 2021, 26(9): 4645-4661. doi: 10.3934/dcdsb.2020306
Impulses in driving semigroups of nonautonomous dynamical systems: Application to cascade systems
Everaldo de Mello Bonotto 1,, , Matheus Cheque Bortolan 2, , Rodolfo Collegari 3, and José Manuel Uzal 4,
Instituto de Ciências Matemáticas e de Computação, Universidade de São Paulo, São Carlos - SP, 13566-590, Brazil
Departamento de Matemática, Centro de Ciências Físicas e Matemáticas, Universidade Federal de Santa Catarina, Florianópolis-SC, 88040-900, Brazil
Faculdade de Matemática, Universidade Federal de Uberlândia, Uberlândia-MG, 38400-902, Brazil
Departamento de Estatística, Análise Matemática e Optimización & Instituto de Matemáticas, Universidade de Santiago de Compostela, Santiago de Compostela, Spain
Received April 2020 Revised August 2020 Published September 2021 Early access October 2020
Fund Project: The first author is partially supported by FAPESP grant 2016/24711-1 and CNPq grant 310497/2016-7. The second author is partially supported by CNPq, project # 407635/2016-5. The third author is partially supported by FAPEMIG, project # APQ-00371-18. The fourth author is partially supported by the predoctoral contract BES-2017-082334.
In this paper we investigate the long time behavior of a nonautonomous dynamical system (cocycle) when its driving semigroup is subjected to impulses. We provide conditions to ensure the existence of global attractors for the associated impulsive skew-product semigroups, uniform attractors for the coupled impulsive cocycle and pullback attractors for the associated evolution processes. Finally, we illustrate the theory with an application to cascade systems.
Keywords: Impulses, global attractor, pullback attractor, skew-product semiflow, cascade systems.
Mathematics Subject Classification: Primary: 35B41; Secondary: 34A37, 35R12.
Citation: Everaldo de Mello Bonotto, Matheus Cheque Bortolan, Rodolfo Collegari, José Manuel Uzal. Impulses in driving semigroups of nonautonomous dynamical systems: Application to cascade systems. Discrete & Continuous Dynamical Systems - B, 2021, 26 (9) : 4645-4661. doi: 10.3934/dcdsb.2020306
# Integers
### Properties of Addition and Subtraction of Integers
We have learnt about whole numbers and integers in Class VI. We have also learnt about addition and subtraction of integers.
#### Closure under Addition
We have learnt that sum of two whole numbers is again a whole number. For example, $17+24=41$ which is again a whole number. We know that, this property is known as the closure property for addition of the whole numbers.
Let us see whether this property is true for integers or not.
Following are some pairs of integers. Observe the following table and complete it.
| Statement | Observation |
| :---: | :---: |
| (i) $17+23=40$ | Result is an integer |
| (ii) $(-10)+3=$ | |
| (iii) $(-75)+18=$ | |
| (iv) $19+(-25)=-6$ | Result is an integer |
| (v) $27+(-27)=$ | |
| (vi) $(-20)+0=$ | |
| (vii) $(-35)+(-10)=$ | |
What do you observe? Is the sum of two integers always an integer?
Did you find a pair of integers whose sum is not an integer?
Since addition of integers gives integers, we say integers are closed under addition.
In general, for any two integers $a$ and $b, a+b$ is an integer.
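If you have access to a computer, here is a short, optional Python check of the closure property (this is not part of the textbook, and the sample pairs are arbitrary):

```python
# Closure under addition: the sum of any two integers is again an integer.
pairs = [(13, 28), (-14, 9), (-60, 25), (8, -19), (44, -44), (-31, 0), (-12, -45)]
for a, b in pairs:
    total = a + b
    # isinstance(total, int) is always True: the sum never leaves the integers
    print(f"{a} + ({b}) = {total}  -> integer? {isinstance(total, int)}")
```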
#### Closure under Subtraction
What happens when we subtract an integer from another integer? Can we say that their difference is also an integer?
Observe the following table and complete it:
| Statement | Observation |
| :---: | :---: |
| (i) $7-9=-2$ | Result is an integer |
| (ii) $17-(-21)=$ | |
| (iii) $(-8)-(-14)=6$ | Result is an integer |
| (iv) $(-21)-(-10)=$ | |
| (v) $32-(-17)=$ | |
| (vi) $(-18)-(-18)=$ | |
| (vii) $(-29)-0=$ | |
What do you observe? Is there any pair of integers whose difference is not an integer? Can we say integers are closed under subtraction? Yes, we can see that integers are closed under subtraction.
Thus, if $a$ and $b$ are two integers then $a-b$ is also an integer. Do the whole numbers satisfy this property?
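An optional Python illustration (not from the textbook, with arbitrary sample pairs): the difference of two integers is always an integer, while the difference of two whole numbers can fall outside the whole numbers.

```python
# Integers are closed under subtraction ...
int_pairs = [(5, 11), (20, -7), (-6, -19), (-30, -12), (41, -9)]
for a, b in int_pairs:
    print(f"{a} - ({b}) = {a - b}")   # always an integer

# ... but whole numbers (0, 1, 2, ...) are not: here the difference is negative.
a, b = 3, 9
print(f"{a} - {b} = {a - b}, which is not a whole number")
```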
#### Commutative Property
We know that $3+5=5+3=8$, that is, the whole numbers can be added in any order. In other words, addition is commutative for whole numbers.
Can we say the same for integers also?
We have $5+(-6)=-1$ and $(-6)+5=-1$
So, $5+(-6)=(-6)+5$
Are the following equal?
(i) $(-8)+(-9)$ and $(-9)+(-8)$
(ii) $(-23)+32$ and $32+(-23)$
(iii) $(-45)+0$ and $0+(-45)$
Try this with five other pairs of integers. Do you find any pair of integers for which the sums are different when the order is changed? Certainly not. We say that addition is commutative for integers. In general, for any two integers $a$ and $b$, we can say
$$
a+b=b+a
$$
- We know that subtraction is not commutative for whole numbers. Is it commutative for integers?
Consider the integers 5 and (-3).
Is $5-(-3)$ the same as $(-3)-5$ ? No, because $5-(-3)=5+3=8$, and $(-3)-5$ $=-3-5=-8$.
Take at least five different pairs of integers and check this.
We conclude that subtraction is not commutative for integers.
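The following optional Python snippet (an illustration only, not part of the text) compares $a+b$ with $b+a$ and $a-b$ with $b-a$ for a few pairs:

```python
# Addition of integers is commutative; subtraction is not.
samples = [(5, -6), (-8, -9), (-23, 32), (-45, 0), (7, 13)]
for a, b in samples:
    print(f"{a} + ({b}) = {a + b}   and   {b} + ({a}) = {b + a}")   # always equal
    print(f"{a} - ({b}) = {a - b}   but   {b} - ({a}) = {b - a}")   # different unless a == b
```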
#### Associative Property
Observe the following examples:
Consider the integers $-3,-2$ and -5 .
Look at $(-5)+[(-3)+(-2)]$ and $[(-5)+(-3)]+(-2)$.
In the first sum (-3) and (-2) are grouped together and in the second $(-5)$ and $(-3)$ are grouped together. We will check whether we get different results.
$$
(-5)+[(-3)+(-2)]
$$
$$
[(-5)+(-3)]+(-2)
$$
In both the cases, we get -10 .
i.e.,
$$
(-5)+[(-3)+(-2)]=[(-5)+(-3)]+(-2)
$$
Similarly consider $-3,1$ and -7 .
$$
\begin{aligned}
& (-3)+[1+(-7)]=-3+ \\
& {[(-3)+1]+(-7)=-2+}
\end{aligned}
$$
Is $(-3)+[1+(-7)]$ same as $[(-3)+1]+(-7)$ ?
Take five more such examples. You will not find any example for which the sums are different. Addition is associative for integers.
In general for any integers $a, b$ and $c$, we can say
$$
a+(b+c)=(a+b)+c
$$
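Here is an optional Python check of the associative property (not part of the textbook), using the triple from the text and a few randomly chosen ones:

```python
import random

# Associativity of addition: a + (b + c) equals (a + b) + c.
triples = [(-5, -3, -2)]
triples += [tuple(random.randint(-50, 50) for _ in range(3)) for _ in range(5)]
for a, b, c in triples:
    left = a + (b + c)
    right = (a + b) + c
    print(f"a={a}, b={b}, c={c}:  a+(b+c) = {left},  (a+b)+c = {right}")
    assert left == right   # the two groupings always agree
```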
#### Additive Identity
When we add zero to any whole number, we get the same whole number. Zero is an additive identity for whole numbers. Is it an additive identity again for integers also?
Observe the following and fill in the blanks:
(i) $(-8)+0=-8$
(ii) $0+(-8)=-8$
(iii) $(-23)+0=$
(iv) $0+(-37)=-37$
(v) $0+(-59)=$
(vi) $0+$ $=-43$
(vii) $-61+$ $=-61$
(viii) $+0=$
The above examples show that zero is an additive identity for integers.
You can verify it by adding zero to any other five integers.
In general, for any integer $a$
$$
a+0=a=0+a
$$
## Try These
1. Write a pair of integers whose sum gives
(a) a negative integer
(c) an integer smaller than both the integers.
(e) an integer greater than both the integers.
(b) zero
(d) an integer smaller than only one of the integers.
2. Write a pair of integers whose difference gives
(a) a negative integer.
(b) zero.
(c) an integer smaller than both the integers.
(d) an integer greater than only one of the integers.
(e) an integer greater than both the integers.
Example 1 Write down a pair of integers whose
(a) sum is -3
(b) difference is -5
(c) difference is 2
(d) sum is 0
$$
\begin{array}{llll}
\text { Solution } & \text { (a) }(-1)+(-2)=-3 & \text { or } & (-5)+2=-3 \\
& \text { (b) }(-9)-(-4)=-5 & \text { or } & (-2)-3=-5 \\
& \text { (c) }(-7)-(-9)=2 & \text { or } & 1-(-1)=2 \\
& \text { (d) }(-10)+10=0 & \text { or } & 5+(-5)=0
\end{array}
$$
Can you write more pairs in these examples?

1. Write down a pair of integers whose:
(a) sum is -7
(b) difference is -10
(c) sum is 0
2. (a) Write a pair of negative integers whose difference gives 8 .
(b) Write a negative integer and a positive integer whose sum is -5 .
(c) Write a negative integer and a positive integer whose difference is -3 .
3. In a quiz, team A scored - 40, 10, 0 and team B scored 10, 0, - 40 in three successive rounds. Which team scored more? Can we say that we can add integers in any order?
4. Fill in the blanks to make the following statements true:
(i) $(-5)+(-8)=(-8)+(\ldots \ldots \ldots \ldots)$
(ii) $-53+\ldots \ldots \ldots . .=-53$
(iii) $17+\ldots \ldots \ldots . .=0$
(iv) $[13+(-12)]+(\ldots \ldots \ldots .)=.13+[(-12)+(-7)]$
(v) $(-4)+[15+(-3)]=[-4+15]+$
### Multiplication of Integers
We can add and subtract integers. Let us now learn how to multiply integers.
#### Multiplication of a Positive and a Negative Integer
We know that multiplication of whole numbers is repeated addition. For example,
$$
5+5+5=3 \times 5=15
$$
Can you represent addition of integers in the same way?
We have from the following number line, $(-5)+(-5)+(-5)=-15$
But we can also write
$$
(-5)+(-5)+(-5)=3 \times(-5)
$$
Therefore, $3 \times(-5)=-15$.
## Try These
Find:
$$
\begin{aligned}
& 4 \times(-8), \\
& 8 \times(-2), \\
& 3 \times(-7), \\
& 10 \times(-1)
\end{aligned}
$$
using a number line.

Similarly, $(-4)+(-4)+(-4)+(-4)+(-4)=5 \times(-4)=-20$
And
$$
(-3)+(-3)+(-3)+(-3)=
$$
Also,
$$
(-7)+(-7)+(-7)=
$$
Let us see how to find the product of a positive integer and a negative integer without using number line.
Let us find $3 \times(-5)$ in a different way. First find $3 \times 5$ and then put minus sign $(-)$ before the product obtained. You get -15 . That is we find $-(3 \times 5)$ to get -15 .
Similarly,
$$
5 \times(-4)=-(5 \times 4)=-20 \text {. }
$$
Find in a similar way,
$$
\begin{array}{ll}
4 \times(-8)= & \quad, 3 \times(-7)= \\
6 \times(-5)= & , 2 \times(-9)=
\end{array}
$$
## Try These
Find:
(i) $6 \times(-19)$
(ii) $12 \times(-32)$
(iii) $7 \times(-22)$
Using this method we thus have,
$$
10 \times(-43)=-(10 \times 43)=-430
$$
Till now we multiplied integers as (positive integer) $\times$ (negative integer).
Let us now multiply them as (negative integer) $\times$ (positive integer).
We first find $-3 \times 5$.
To find this, observe the following pattern:
We have,
$$
\begin{aligned}
3 \times 5 & =15 \\
2 \times 5 & =10=15-5 \\
1 \times 5 & =5=10-5 \\
0 \times 5 & =0=5-5 \\
-1 \times 5 & =0-5=-5 \\
-2 \times 5 & =-5-5=-10 \\
-3 \times 5 & =-10-5=-15
\end{aligned}
$$
So, $(-3) \times 5=-15$.

We already have
$$
3 \times(-5)=-15
$$
So we get
$$
(-3) \times 5=-15=3 \times(-5)
$$
Using such patterns, we also get $(-5) \times 4=-20=5 \times(-4)$
Using patterns, find $(-4) \times 8,(-3) \times 7,(-6) \times 5$ and $(-2) \times 9$
Check whether $(-4) \times 8=4 \times(-8)$, $(-3) \times 7=3 \times(-7)$, $(-6) \times 5=6 \times(-5)$ and $(-2) \times 9=2 \times(-9)$.

Using this we get,

$$
(-33) \times 5=33 \times(-5)=-165
$$
We thus find that while multiplying a positive integer and a negative integer, we multiply them as whole numbers and put a minus sign (-) before the product. We thus get a negative integer.
## Try These
1. Find:
$$
\begin{aligned}
& \text { (a) } 15 \times(-16) \\
& \text { (c) }(-42) \times 12
\end{aligned}
$$
(b) $21 \times(-32)$
(d) $-55 \times 15$
2. Check if (a) $25 \times(-21)=(-25) \times 21$
(b) $(-23) \times 20=23 \times(-20)$
Write five more such examples.
In general, for any two positive integers $a$ and $b$ we can say
$$
a \times(-b)=(-a) \times b=-(a \times b)
$$
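As an optional illustration (not part of the textbook, with arbitrary sample products), the short Python snippet below compares repeated addition with the rule $a \times(-b)=-(a \times b)$:

```python
# A positive integer times a negative integer, three ways:
# repeated addition, the "multiply and put a minus sign" rule, and direct multiplication.
for a, b in [(3, 5), (5, 4), (7, 11), (12, 25)]:
    repeated = sum([-b] * a)     # (-b) added a times
    rule = -(a * b)              # multiply as whole numbers, then attach the minus sign
    direct = a * (-b)
    print(f"{a} x (-{b}):  repeated addition = {repeated},  rule = {rule},  direct = {direct}")
```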
#### Multiplication of two Negative Integers
Can you find the product $(-3) \times(-2)$ ?
Observe the following:
$$
\begin{aligned}
& -3 \times 4=-12 \\
& -3 \times 3=-9=-12-(-3) \\
& -3 \times 2=-6=-9-(-3) \\
& -3 \times 1=-3=-6-(-3) \\
& -3 \times 0=0=-3-(-3) \\
& -3 \times-1=0-(-3)=0+3=3 \\
& -3 \times-2=3-(-3)=3+3=6
\end{aligned}
$$
Do you see any pattern? Observe how the products change.
Based on this observation, complete the following:
$$
-3 \times-3=
$$
$-3 \times-4=$
Now observe these products and fill in the blanks:
$$
\begin{aligned}
& -4 \times 4=-16 \\
& -4 \times 3=-12=-16+4 \\
& -4 \times 2=\square=-12+4
\end{aligned}
$$
$$
\begin{aligned}
& -4 \times 1= \\
& -4 \times 0= \\
& -4 \times(-1)= \\
& -4 \times(-2)= \\
& -4 \times(-3)=
\end{aligned}
$$
## Try These
(i) Starting from $(-5) \times 4$, find $(-5) \times(-6)$
(ii) Starting from $(-6) \times 3$, find $(-6) \times(-7)$
From these patterns we observe that,
$(-3) \times(-1)=3=3 \times 1$
$(-3) \times(-2)=6=3 \times 2$
$(-3) \times(-3)=9=3 \times 3$
and $\quad(-4) \times(-1)=4=4 \times 1$
So, $(-4) \times(-2)=4 \times 2=$
$(-4) \times(-3)=$ $=$
So observing these products we can say that the product of two negative integers is a positive integer. We multiply the two negative integers as whole numbers and put the positive sign before the product.
Thus, we have $\quad(-10) \times(-12)=+120=120$
Similarly
$$
(-15) \times(-6)=+90=90
$$
In general, for any two positive integers $a$ and $b$,
$$
(-a) \times(-b)=a \times b
$$
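The optional Python snippet below (not part of the textbook) reproduces the pattern idea for $(-3)\times n$ and then checks the rule $(-a)\times(-b)=a\times b$ on a few arbitrary pairs:

```python
# Pattern: as n decreases by 1, the product (-3) x n increases by 3.
for n in range(4, -3, -1):
    print(f"(-3) x {n:2d} = {(-3) * n:3d}")

# General rule: the product of two negative integers is positive.
for a, b in [(10, 12), (15, 6), (9, 14)]:
    print(f"(-{a}) x (-{b}) = {(-a) * (-b)}  and  {a} x {b} = {a * b}")
```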
Try These: Find $(-31) \times(-100)$, $(-25) \times(-72)$, $(-83) \times(-28)$.
## Game 1
(i) Take a board marked from -104 to 104 as shown in the figure.
(ii) Take a bag containing two blue and two red dice. Number of dots on the blue dice indicate positive integers and number of dots on the red dice indicate negative integers.
(iii) Every player will place his/her counter at zero.
(iv) Each player will take out two dice at a time from the bag and throw them.
(v) After every throw, the player has to multiply the numbers marked on the dice.
| 104 | 103 | 102 | 101 | 100 | 99 | 98 | 97 | 96 | 95 | 94 |
| ---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| 83 | 84 | 85 | 86 | 87 | 88 | 89 | 90 | 91 | 92 | 93 |
| 82 | 81 | 80 | 79 | 78 | 77 | 76 | 75 | 74 | 73 | 72 |
| 61 | 62 | 63 | 64 | 65 | 66 | 67 | 68 | 69 | 70 | 71 |
| 60 | 59 | 58 | 57 | 56 | 55 | 54 | 53 | 52 | 51 | 50 |
| 39 | 40 | 41 | 42 | 43 | 44 | 45 | 46 | 47 | 48 | 49 |
| 38 | 37 | 36 | 35 | 34 | 33 | 32 | 31 | 30 | 29 | 28 |
| 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 |
| 16 | 15 | 14 | 13 | 12 | 11 | 10 | 9 | 8 | 7 | 6 |
| -5 | -4 | -3 | -2 | -1 | 0 | 1 | 2 | 3 | 4 | 5 |
| -6 | -7 | -8 | -9 | -10 | -11 | -12 | -13 | -14 | -15 | -16 |
| -27 | -26 | -25 | -24 | -23 | -22 | -21 | -20 | -19 | -18 | -17 |
| -28 | -29 | -30 | -31 | -32 | -33 | -34 | -35 | -36 | -37 | -38 |
| -49 | -48 | -47 | -46 | -45 | -44 | -43 | -42 | -41 | -40 | -39 |
| -50 | -51 | -52 | -53 | -54 | -55 | -56 | -57 | -58 | -59 | -60 |
| -71 | -70 | -69 | -68 | -67 | -66 | -65 | -64 | -63 | -62 | -61 |
| -72 | -73 | -74 | -75 | -76 | -77 | -78 | -79 | -80 | -81 | -82 |
| -93 | -92 | -91 | -90 | -89 | -88 | -87 | -86 | -85 | -84 | $-83 \times$ |
| -94 | -95 | -96 | -97 | -98 | -99 | -100 | -101 | -102 | -103 | -104 |
(vi) If the product is a positive integer then the player will move his counter towards 104; if the product is a negative integer then the player will move his counter towards -104.
(vii) The player who reaches either-104 or 104 first is the winner.
### Properties of Multiplication of Integers
#### Closure under Multiplication
1. Observe the following table and complete it:
| Statement | Inference |
| :---: | :---: |
| $(-20) \times(-5)=100$ | Product is an integer |
| $(-15) \times 17=-255$ | Product is an integer |
| $(-30) \times 12=$ | |
| $(-15) \times(-23)=$ | |
| $(-14) \times(-13)=$ | |
| $12 \times(-30)=$ | |
What do you observe? Can you find a pair of integers whose product is not an integer? No. This gives us an idea that the product of two integers is again an integer. So we can say that integers are closed under multiplication.
In general,
$$
a \times b \text { is an integer, for all integers } a \text { and } b .
$$
Find the product of five more pairs of integers and verify the above statement.
#### Commutativity of Multiplication
We know that multiplication is commutative for whole numbers. Can we say, multiplication is also commutative for integers?
Observe the following table and complete it:
| Statement 1 | Statement 2 | Inference |
| :---: | :---: | :---: |
| $3 \times(-4)=-12$ | $(-4) \times 3=-12$ | $3 \times(-4)=(-4) \times 3$ |
| $(-30) \times 12=$ | $12 \times(-30)=$ | |
| $(-15) \times(-10)=150$ | $(-10) \times(-15)=150$ | |
| $(-35) \times(-12)=\ldots$ | $(-12) \times(-35)=$ | |
| $(-17) \times 0=\ldots$ | | |
| | $(-1) \times(-15)=$ | |
What are your observations? The above examples suggest multiplication is commutative for integers. Write five more such examples and verify.
In general, for any two integers $a$ and $b$,
$$
a \times b=b \times a
$$
#### Multiplication by Zero
We know that any whole number when multiplied by zero gives zero. Observe the following products of negative integers and zero. These are obtained from the patterns done earlier.
$$
\begin{aligned}
& (-3) \times 0=0 \\
& 0 \times(-4)=0 \\
& -5 \times 0= \\
& 0 \times(-6)=
\end{aligned}
$$
This shows that the product of a negative integer and zero is zero.
In general, for any integer $a$,
$$
a \times 0=0 \times a=0
$$
#### Multiplicative Identity
We know that 1 is the multiplicative identity for whole numbers.
Check that 1 is the multiplicative identity for integers as well. Observe the following products of integers with 1.
$$
\begin{array}{ll}
(-3) \times 1=-3 & 1 \times 5=5 \\
(-4) \times 1= & 1 \times 8= \\
1 \times(-5)= & 3 \times 1= \\
1 \times(-6)= & 7 \times 1=
\end{array}
$$
This shows that 1 is the multiplicative identity for integers also.
In general, for any integer $a$ we have,
$$
a \times 1=1 \times a=a
$$
What happens when we multiply any integer with -1 ? Complete the following:
$$
(-3) \times(-1)=3
$$
$3 \times(-1)=-3$
$(-6) \times(-1)=$
$(-1) \times 13=$
$(-1) \times(-25)=$
$18 \times(-1)=$
What do you observe?
Can we say -1 is a multiplicative identity of integers? No.
0 is the additive identity whereas 1 is the multiplicative identity for integers. We get the additive inverse of an integer $a$ when we multiply it by $(-1)$, i.e. $a \times(-1)=(-1) \times a=-a$.
#### Associativity for Multiplication
Consider $-3,-2$ and 5 .
Look at $[(-3) \times(-2)] \times 5$ and $(-3) \times[(-2) \times 5]$.
In the first case $(-3)$ and $(-2)$ are grouped together and in the second $(-2)$ and 5 are grouped together.
We see that $[(-3) \times(-2)] \times 5=6 \times 5=30$
and
$$
(-3) \times[(-2) \times 5]=(-3) \times(-10)=30
$$
So, we get the same answer in both the cases.
Thus,
$$
[(-3) \times(-2)] \times 5=(-3) \times[(-2) \times 5]
$$
Look at this and complete the products:
$$
\begin{aligned}
& {[(7) \times(-6)] \times 4=\square} \\
& 7 \times[(-6) \times 4]=7 \times(-24)=\square \\
& \text { Is }[7 \times(-6)] \times 4=7 \times[(-6) \times 4] ?
\end{aligned}
$$
Does the grouping of integers affect the product of integers? No. In general, for any three integers $a, b$ and $c$
$$
(a \times b) \times c=a \times(b \times c)
$$
Take any five values for $a, b$ and $c$ each and verify this property.
Thus, like whole numbers, the product of three integers does not depend upon the grouping of integers and this is called the associative property for multiplication of integers.
#### Distributive Property
We know
$$
16 \times(10+2)=(16 \times 10)+(16 \times 2) \quad \text { [Distributivity of multiplication over addition] }
$$
Let us check if this is true for integers also.
Observe the following:
(a) $(-2) \times(3+5)=-2 \times 8=-16$
$$
\begin{array}{ll}
\text { and } & {[(-2) \times 3]+[(-2) \times 5]=(-6)+(-10)=-16} \\
\text { So, } & (-2) \times(3+5)=[(-2) \times 3]+[(-2) \times 5]
\end{array}
$$
(b) $(-4) \times[(-2)+7]=(-4) \times 5=-20$
and $[(-4) \times(-2)]+[(-4) \times 7]=8+(-28)=-20$
So, $\quad(-4) \times[(-2)+7]=[(-4) \times(-2)]+[(-4) \times 7]$
(c) $(-8) \times[(-2)+(-1)]=(-8) \times(-3)=24$
and $[(-8) \times(-2)]+[(-8) \times(-1)]=16+8=24$
So, $\quad(-8) \times[(-2)+(-1)]=[(-8) \times(-2)]+[(-8) \times(-1)]$ Can we say that the distributivity of multiplication over addition is true for integers also? Yes.
In general, for any integers $a, b$ and $c$,
$$
a \times(b+c)=a \times b+a \times c
$$
Take at least five different values for each of $a, b$ and $c$ and verify the above distributive property.
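For readers who would like to check the properties met so far for many values at once, here is a small optional Python sketch (the random ranges are arbitrary choices) that tests commutativity, associativity and distributivity on randomly chosen integers.

```python
import random

# Spot-check commutativity, associativity and distributivity
# of multiplication for randomly chosen integers.
for _ in range(1000):
    a, b, c = (random.randint(-50, 50) for _ in range(3))
    assert a * b == b * a                      # commutativity
    assert (a * b) * c == a * (b * c)          # associativity
    assert a * (b + c) == a * b + a * c        # distributivity over addition
    assert a * (b - c) == a * b - a * c        # distributivity over subtraction

print("All 1000 random checks passed.")
```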
## TRY These
(i) Is $10 \times[6+(-2)]=10 \times 6+10 \times(-2)$ ?
(ii) Is $(-15) \times[(-7)+(-1)]=(-15) \times(-7)+(-15) \times(-1)$ ?
Now consider the following:
Can we say $4 \times(3-8)=4 \times 3-4 \times 8 ?$
Let us check:
$$
\begin{aligned}
& 4 \times(3-8)=4 \times(-5)=-20 \\
& 4 \times 3-4 \times 8=12-32=-20
\end{aligned}
$$
So, $4 \times(3-8)=4 \times 3-4 \times 8$.
Look at the following:
$$
\begin{aligned}
& (-5) \times[(-4)-(-6)]=(-5) \times 2=-10 \\
& {[(-5) \times(-4)]-[(-5) \times(-6)]=20-30=-10}
\end{aligned}
$$
So, $(-5) \times[(-4)-(-6)]=[(-5) \times(-4)]-[(-5) \times(-6)]$
Check this for $(-9) \times[10-(-3)]$ and $[(-9) \times 10]-[(-9) \times(-3)]$
You will find that these are also equal.
In general, for any three integers $a, b$ and $c$,
$$
a \times(b-c)=a \times b-a \times c
$$
Take at least five different values for each of $a, b$ and $c$ and verify this property.
## TRY These
(i) Is $10 \times[6-(-2)]=10 \times 6-10 \times(-2)$ ?
(ii) Is $(-15) \times[(-7)-(-1)]=(-15) \times(-7)-(-15) \times(-1)$ ?
## EXERCISE 1.2
1. Find each of the following products:
(a) $3 \times(-1)$
(b) $(-1) \times 225$
(c) $(-21) \times(-30)$
(d) $(-316) \times(-1)$
(e) $(-15) \times 0 \times(-18)$
(f) $(-12) \times(-11) \times(10)$
(g) $9 \times(-3) \times(-6)$
(h) $(-18) \times(-5) \times(-4)$
(i) $(-1) \times(-2) \times(-3) \times 4$
(j) $(-3) \times(-6) \times(-2) \times(-1)$
2. Verify the following:
(a) $18 \times[7+(-3)]=[18 \times 7]+[18 \times(-3)]$
(b) $(-21) \times[(-4)+(-6)]=[(-21) \times(-4)]+[(-21) \times(-6)]$
3. (i) For any integer $a$, what is $(-1) \times a$ equal to?
(ii) Determine the integer whose product with $(-1)$ is
(a) -22
(b) 37
(c) 0
4. Starting from $(-1) \times 5$, write various products showing some pattern to show $(-1) \times(-1)=1$.
### Division of Integers
We know that division is the inverse operation of multiplication. Let us see an example for whole numbers.
Since $3 \times 5=15$
So $15 \div 5=3$ and $15 \div 3=5$
Similarly, $4 \times 3=12$ gives $12 \div 4=3$ and $12 \div 3=4$
We can say for each multiplication statement of whole numbers there are two division statements.
Can you write a multiplication statement and its corresponding division statements for integers?
Observe the following and complete it.
| Multiplication Statement | Corresponding Division Statements | |
| :---: | :---: | :---: |
| $2 \times(-6)=(-12)$ | $(-12) \div(-6)=2$ | $(-12) \div 2=(-6)$ |
| $(-4) \times 5=(-20)$ | $(-20) \div 5=(-4)$ | $(-20) \div(-4)=5$ |
| $(-8) \times(-9)=72$ | $72 \div \ldots=\ldots$ | $72 \div \ldots=\ldots$ |
| $(-3) \times(-7)=\ldots$ | $\ldots \div(-3)=\ldots$ | $\ldots$ |
| $(-8) \times 4=\ldots$ | $\ldots$ | $\ldots$ |
| $5 \times 0=\ldots$ | $\ldots$ | $\ldots$ |
| $5 \times(-9)=\ldots$ | $\ldots$ | $\ldots$ |
| $(-10) \times(-5)=\ldots$ | $\ldots$ | $\ldots$ |
From the above we observe that :
$$
\begin{aligned}
& (-12) \div 2=(-6) \\
& (-20) \div 5=(-4) \\
& (-32) \div 4=(-8) \\
& (-45) \div 5=(-9)
\end{aligned}
$$
## TRY THESE
Find:
(a) $(-100) \div 5$
(b) $(-81) \div 9$
(c) $(-75) \div 5$
(d) $(-32) \div 2$
We observe that when we divide a negative integer by a positive integer, we divide them as whole numbers and then put a minus sign (-) before the quotient.
- We also observe that:
$$
\begin{array}{lll}
72 \div(-8)=-9 & \text { and } & 50 \div(-10)=-5 \\
72 \div(-9)=-8 & & 50 \div(-5)=-10
\end{array}
$$
So we can say that when we divide a positive integer by a negative integer, we first divide them as whole numbers and then put a minus sign (-) before the quotient.
In general, for any two positive integers $a$ and $b$
$$
a \div(-b)=(-a) \div b \quad \text { where } b \neq 0
$$
Can we say that
$$
(-48) \div 8=48 \div(-8) ?
$$
Let us check. We know that
$$
(-48) \div 8=-6
$$
and $48 \div(-8)=-6$
So $(-48) \div 8=48 \div(-8)$
Check this for
(i) $90 \div(-45)$ and $(-90) \div 45$
(ii) $(-136) \div 4$ and $136 \div(-4)$
## TRY These
Find: (a) $125 \div(-25)$
(b) $80 \div(-5)$
(c) $64 \div(-16)$
- Lastly, we observe that
$$
(-12) \div(-6)=2 ;(-20) \div(-4)=5 ;(-32) \div(-8)=4 ;(-45) \div(-9)=5
$$
So, we can say that when we divide a negative integer by a negative integer, we first divide them as whole numbers and then put a positive sign (+).
In general, for any two positive integers $a$ and $b$
$$
(-a) \div(-b)=a \div b \quad \text { where } b \neq 0
$$
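These sign rules can be seen quickly with a short computation. The following optional Python sketch (the sample pairs are arbitrary and chosen so that each division is exact) prints the quotients for different sign combinations.

```python
# Sign rules for division of integers, checked on exactly divisible pairs.
examples = [(-100, 5), (72, -8), (-45, -9), (48, 8)]

for a, b in examples:
    # // gives the exact quotient here because each pair divides evenly
    print(f"{a} divided by {b} = {a // b}")
```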
## TRY These
Find:
(a) $(-36) \div(-4)$
(b) $(-201) \div(-3)$
(c) $(-325) \div(-13)$
### Properties of Division of Integers
Observe the following table and complete it:

| Statement | Inference | Statement | Inference |
| :---: | :--- | :---: | :---: |
| $(-8) \div(-4)=2$ | Result is an integer | $(-8) \div 3=\frac{-8}{3}$ | |
| $(-4) \div(-8)=\frac{-4}{-8}$ | Result is not an integer | $3 \div(-8)=\frac{3}{-8}$ | |

What do you observe? We observe that integers are not closed under division.
Justify it by taking five more examples of your own.
- We know that division is not commutative for whole numbers. Let us check it for integers also.
You can see from the table that $(-8) \div(-4) \neq(-4) \div(-8)$.
Is $(-9) \div 3$ the same as $3 \div(-9)$ ?
Is $(-30) \div(-6)$ the same as $(-6) \div(-30) ?$
Can we say that division is commutative for integers? No.
You can verify it by taking five more pairs of integers.
- Like whole numbers, any integer divided by zero is meaningless and zero divided by an integer other than zero is equal to zero i.e., for any integer $a$, $a \div 0$ is not defined but $0 \div a=0$ for $a \neq 0$.
- When we divide a whole number by 1 it gives the same whole number. Let us check whether it is true for negative integers also.
Observe the following:
$(-8) \div 1=(-8)$
$(-11) \div 1=-11$
$(-13) \div 1=-13$
$(-25) \div 1=$
$(-37) \div 1=$
$(-48) \div 1=$
This shows that negative integer divided by 1 gives the same negative integer. So, any integer divided by 1 gives the same integer.
In general, for any integer $a$,
$$
a \div 1=a
$$
- What happens when we divide any integer by $(-1)$ ? Complete the following table
$$
\begin{array}{lll}
(-8) \div(-1)=8 & 11 \div(-1)=-11 & 13 \div(-1)= \\
(-25) \div(-1)= & (-37) \div(-1)= & -48 \div(-1)=
\end{array}
$$
## TRY THESE
Is (i) $1 \div a=1$ ?
(ii) $a \div(-1)=-a$ ? for any integer $a$.
Take different values of $a$ and check.
What do you observe?
We can say that if any integer is divided by $(-1)$ it does not give the same integer.
- Can we say $[(-16) \div 4] \div(-2)$ is the same as $(-16) \div[4 \div(-2)]$ ?
We know that $[(-16) \div 4] \div(-2)=(-4) \div(-2)=2$
$$
\begin{array}{ll}
\text { and } & (-16) \div[4 \div(-2)]=(-16) \div(-2)=8 \\
\text { So } & {[(-16) \div 4] \div(-2) \neq(-16) \div[4 \div(-2)]}
\end{array}
$$
Can you say that division is associative for integers? No.
Verify it by taking five more examples of your own.
EXAMPLE 2 In a test (+5) marks are given for every correct answer and (-2) marks are given for every incorrect answer. (i) Radhika answered all the questions and scored 30 marks though she got 10 correct answers. (ii) Jay also answered all the questions and scored (-12) marks though he got 4 correct answers. How many incorrect answers had they attempted?
## Solution
(i) Marks given for one correct answer $=5$
So, marks given for 10 correct answers $=5 \times 10=50$
Radhika's score $=30$
Marks obtained for incorrect answers $=30-50=-20$
Marks given for one incorrect answer $=(-2)$
Therefore, number of incorrect answers $=(-20) \div(-2)=10$
(ii) Marks given for 4 correct answers $=5 \times 4=20$
Jay's score $=-12$
Marks obtained for incorrect answers $=-12-20=-32$
Marks given for one incorrect answer $=(-2)$
Therefore number of incorrect answers $=(-32) \div(-2)=16$
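The same working can be expressed as a tiny program. The sketch below is an optional Python illustration (the helper function name is our own) that recomputes the two answers of Example 2.

```python
def incorrect_answers(total_score, correct, marks_correct=5, marks_incorrect=-2):
    """Number of incorrect answers, given the total score and the correct count."""
    marks_from_incorrect = total_score - correct * marks_correct
    return marks_from_incorrect // marks_incorrect

print(incorrect_answers(30, 10))   # Radhika: 10 incorrect answers
print(incorrect_answers(-12, 4))   # Jay: 16 incorrect answers
```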
EXAMPLE 3 A shopkeeper earns a profit of ₹ 1 by selling one pen and incurs a loss of 40 paise per pencil while selling pencils of her old stock.
(i) In a particular month she incurs a loss of ₹ 5 . In this period, she sold 45 pens. How many pencils did she sell in this period?
(ii) In the next month she earns neither profit nor loss. If she sold 70 pens, how many pencils did she sell?
## Solution
(i) Profit earned by selling one pen $=₹ 1$
Profit earned by selling 45 pens $=₹ 45$, which we denote by $+₹ 45$
Total loss given $=₹ 5$, which we denote by $-₹ 5$
Profit earned + Loss incurred $=$ Total loss
Therefore, Loss incurred $=$ Total Loss - Profit earned
₹ $(-5-45)=$ ₹ $(-50)=-5000$ paise
Loss incurred by selling one pencil $=40$ paise which we write as -40 paise
So, number of pencils sold $=(-5000) \div(-40)=125$
(ii) In the next month there is neither profit nor loss.
So, Profit earned + Loss incurred $=0$
i.e., Profit earned $=-$ Loss incurred.
Now, profit earned by selling 70 pens $=₹ 70$
Hence, loss incurred by selling pencils $=₹ 70$ which we indicate by $-₹ 70$ or $-7,000$ paise.
Total number of pencils sold $=(-7000) \div(-40)=175$ pencils.
## EXERCISE 1.3
1. Evaluate each of the following:
(a) $(-30) \div 10$
(b) $50 \div(-5)$
(c) $(-36) \div(-9)$
(d) $(-49) \div(49)$
(e) $13 \div[(-2)+1]$
(f) $0 \div(-12)$
(g) $(-31) \div[(-30)+(-1)]$
(h) $[(-36) \div 12] \div 3$
(i) $[(-6)+5] \div[(-2)+1]$
2. Verify that $a \div(b+c) \neq(a \div b)+(a \div c)$ for each of the following values of $a, b$ and $c$.
(a) $a=12, b=-4, c=2$
(b) $a=(-10), b=1, c=1$
3. Fill in the blanks:
(a) $369 \div \ldots=369$
(b) $(-75) \div \ldots=-1$
(c) $(-206) \div \ldots=1$
(d) $-87 \div \ldots=87$
(e) $\ldots \div 1=-87$
(f) $\ldots \div 48=-1$
(g) $20 \div \ldots=-2$
(h) $\ldots \div(4)=-3$
4. Write five pairs of integers $(a, b)$ such that $a \div b=-3$. One such pair is $(6,-2)$ because $6 \div(-2)=(-3)$
5. The temperature at 12 noon was $10^{\circ} \mathrm{C}$ above zero. If it decreases at the rate of $2^{\circ} \mathrm{C}$ per hour until midnight, at what time would the temperature be $8^{\circ} \mathrm{C}$ below zero? What would be the temperature at mid-night?
6. In a class test $(+3)$ marks are given for every correct answer and (-2) marks are given for every incorrect answer and no marks for not attempting any question. (i) Radhika scored 20 marks. If she has got 12 correct answers, how many questions has she attempted incorrectly? (ii) Mohini scores -5 marks in this test, though she has got 7 correct answers. How many questions has she attempted incorrectly?
7. An elevator descends into a mine shaft at the rate of $6 \mathrm{~m} / \mathrm{min}$. If the descent starts from $10 \mathrm{~m}$ above the ground level, how long will it take to reach $-350 \mathrm{~m}$.
## What have We Discussed?
1. We now study the properties satisfied by addition and subtraction.
(a) Integers are closed for addition and subtraction both. That is, $a+b$ and $a-b$ are again integers, where $a$ and $b$ are any integers.
(b) Addition is commutative for integers, i.e., $a+b=b+a$ for all integers $a$ and $b$.
(c) Addition is associative for integers, i.e., $(a+b)+c=a+(b+c)$ for all integers $a, b$ and $c$.
(d) Integer 0 is the identity under addition. That is, $a+0=0+a=a$ for every integer $a$.
2. We studied, how integers could be multiplied, and found that product of a positive and a negative integer is a negative integer, whereas the product of two negative integers is a positive integer. For example, $-2 \times 7=-14$ and $-3 \times-8=24$.
3. Product of even number of negative integers is positive, whereas the product of odd number of negative integers is negative.
4. Integers show some properties under multiplication.
(a) Integers are closed under multiplication. That is, $a \times b$ is an integer for any two integers $a$ and $b$.
(b) Multiplication is commutative for integers. That is, $a \times b=b \times a$ for any integers $a$ and $b$.
(c) The integer 1 is the identity under multiplication, i.e., $1 \times a=a \times 1=a$ for any integer $a$.
(d) Multiplication is associative for integers, i.e., $(a \times b) \times c=a \times(b \times c)$ for any three integers $a, b$ and $c$.
5. Under addition and multiplication, integers show a property called distributive property. That is, $a \times(b+c)=a \times b+a \times c$ for any three integers $a, b$ and $c$.
6. The properties of commutativity, associativity under addition and multiplication, and the distributive property help us to make our calculations easier.
7. We also learnt how to divide integers. We found that,
(a) When a positive integer is divided by a negative integer, the quotient obtained is negative and vice-versa.
(b) Division of a negative integer by another negative integer gives positive as quotient.
8. For any integer $a$, we have
(a) $a \div 0$ is not defined
(b) $a \div 1=a$
## Fractions and Decimals
### Multiplication of Fractions
You know how to find the area of a rectangle. It is equal to length $\times$ breadth. If the length and breadth of a rectangle are $7 \mathrm{~cm}$ and $4 \mathrm{~cm}$ respectively, then what will be its area? Its area would be $7 \times 4=28 \mathrm{~cm}^{2}$.
What will be the area of the rectangle if its length and breadth are $7 \frac{1}{2} \mathrm{~cm}$ and $3 \frac{1}{2} \mathrm{~cm}$ respectively? You will say it will be $7 \frac{1}{2} \times 3 \frac{1}{2}=\frac{15}{2} \times \frac{7}{2} \mathrm{~cm}^{2}$. The numbers $\frac{15}{2}$ and $\frac{7}{2}$ are fractions. To calculate the area of the given rectangle, we need to know how to multiply fractions. We shall learn that now.
#### Multiplication of a Fraction by a Whole Number
Fig 2.1
Observe the pictures at the left (Fig 2.1). Each shaded part is $\frac{1}{4}$ part of a circle. How much will the two shaded parts represent together? They will represent $\frac{1}{4}+\frac{1}{4}=2 \times \frac{1}{4}$.
Combining the two shaded parts, we get Fig 2.2. What part of a circle does the shaded part in Fig 2.2 represent? It represents $\frac{2}{4}$ part of a circle.
Fig 2.2 The shaded portions in Fig 2.1 taken together are the same as the shaded portion in Fig 2.2, i.e., we get Fig 2.3.
Fig 2.3
or
$$
2 \times \frac{1}{4}=\frac{2}{4}
$$
Can you now tell what this picture will represent? (Fig 2.4)
And this? (Fig 2.5)
Fig 2.4
Fig 2.5
Let us now find $3 \times \frac{1}{2}$.
We have
$3 \times \frac{1}{2}=\frac{1}{2}+\frac{1}{2}+\frac{1}{2}=\frac{3}{2}$
We also have $\frac{1}{2}+\frac{1}{2}+\frac{1}{2}=\frac{1+1+1}{2}=\frac{3 \times 1}{2}=\frac{3}{2}$
So
$$
3 \times \frac{1}{2}=\frac{3 \times 1}{2}=\frac{3}{2}
$$
Similarly
$$
\frac{2}{3} \times 5=\frac{2 \times 5}{3}=\text { ? }
$$
Can you tell
$$
3 \times \frac{2}{7}=? \quad 4 \times \frac{3}{5}=?
$$
The fractions that we considered till now, i.e., $\frac{1}{2}, \frac{2}{3}, \frac{2}{7}$ and $\frac{3}{5}$ were proper fractions. For improper fractions also we have,
$$
2 \times \frac{5}{3}=\frac{2 \times 5}{3}=\frac{10}{3}
$$
Try: $3 \times \frac{8}{7}=?$ and $4 \times \frac{7}{5}=?$
Thus, to multiply a whole number with a proper or an improper fraction, we multiply the whole number with the numerator of the fraction, keeping the denominator same.
## TRY THESE
1. Find: (a) $\frac{2}{7} \times 3$
(b) $\frac{9}{7} \times 6$
(c) $3 \times \frac{1}{8}$
(d) $\frac{13}{11} \times 6$
If the product is an improper fraction express it as a mixed fraction.
2. Represent pictorially: $2 \times \frac{2}{5}=\frac{4}{5}$
## TRY These
Find: (i) $5 \times 2 \frac{3}{7}$
(ii) $1 \frac{4}{9} \times 6$
To multiply a mixed fraction by a whole number, first convert the mixed fraction to an improper fraction and then multiply.
Therefore, $\quad 3 \times 2 \frac{5}{7}=3 \times \frac{19}{7}=\frac{57}{7}=8 \frac{1}{7}$.
Similarly, $2 \times 4 \frac{2}{5}=2 \times \frac{22}{5}=?$
Fraction as an operator 'of'
Observe these figures (Fig 2.6)
The two squares are exactly similar.
Each shaded portion represents $\frac{1}{2}$ of 1 .
So, both the shaded portions together will represent $\frac{1}{2}$ of 2 .
Combine the 2 shaded $\frac{1}{2}$ parts. It represents 1.
So, we say $\frac{1}{2}$ of 2 is 1 . We can also get it as $\frac{1}{2} \times 2=1$.
Thus, $\frac{1}{2}$ of $2=\frac{1}{2} \times 2=1$
Fig 2.6
Also, look at these similar squares (Fig 2.7).
Each shaded portion represents $\frac{1}{2}$ of 1 .
So, the three shaded portions represent $\frac{1}{2}$ of 3 .
Combine the 3 shaded parts.
It represents $1 \frac{1}{2}$ i.e., $\frac{3}{2}$.
So, $\frac{1}{2}$ of 3 is $\frac{3}{2}$. Also, $\frac{1}{2} \times 3=\frac{3}{2}$.
Fig 2.7
Thus, $\frac{1}{2}$ of $3=\frac{1}{2} \times 3=\frac{3}{2}$.
So we see that 'of' represents multiplication.
Farida has 20 marbles. Reshma has $\frac{1}{5}$ th of the number of marbles that Farida has. How many marbles does Reshma have? As 'of' indicates multiplication, Reshma has $\frac{1}{5} \times 20=4$ marbles.
Similarly, we have $\frac{1}{2}$ of 16 is $\frac{1}{2} \times 16=\frac{16}{2}=8$.
## TRY THESE
Can you tell, what is (i) $\frac{1}{2}$ of 10 ?, (ii) $\frac{1}{4}$ of 16 ?, (iii) $\frac{2}{5}$ of $25 ?$
EXAMPLE 1 In a class of 40 students $\frac{1}{5}$ of the total number of students like to study English, $\frac{2}{5}$ of the total number like to study Mathematics and the remaining students like to study Science.
(i) How many students like to study English?
(ii) How many students like to study Mathematics?
(iii) What fraction of the total number of students like to study Science?
Solution Total number of students in the class $=40$.
(i) Of these $\frac{1}{5}$ of the total number of students like to study English. Thus, the number of students who like to study English $=\frac{1}{5}$ of $40=\frac{1}{5} \times 40=8$.
(ii) Try yourself.
(iii) The number of students who like English and Mathematics $=8+16=24$. Thus, the number of students who like Science $=40-24=16$.
Thus, the required fraction is $\frac{16}{40}$.
## EXERCISE 2.1
1. Which of the drawings (a) to (d) show :
(i) $2 \times \frac{1}{5}$
(ii) $2 \times \frac{1}{2}$
(iii) $3 \times \frac{2}{3}$
(iv) $3 \times \frac{1}{4}$
(a)
(c)
(b)
(d)
2. Some pictures (a) to (c) are given below. Tell which of them show:
(i) $3 \times \frac{1}{5}=\frac{3}{5}$
(ii) $2 \times \frac{1}{3}=\frac{2}{3}$
(iii) $3 \times \frac{3}{4}=2 \frac{1}{4}$
(a)
(b)
(c)
3. Multiply and reduce to lowest form and convert into a mixed fraction:
(i) $7 \times \frac{3}{5}$
(ii) $4 \times \frac{1}{3}$
(iii) $2 \times \frac{6}{7}$
(iv) $5 \times \frac{2}{9}$
(v) $\frac{2}{3} \times 4$
(vi) $\frac{5}{2} \times 6$
(vii) $11 \times \frac{4}{7}$
(viii) $20 \times \frac{4}{5}$
(x) $15 \times \frac{3}{5}$
4. Shade: (i) $\frac{1}{2}$ of the circles in box (a) (ii) $\frac{2}{3}$ of the triangles in box (b) (iii) $\frac{3}{5}$ of the squares in box (c).
(a)
(b)
(c)
5. Find:
(a) $\frac{1}{2}$ of
(i) 24
(ii) 46
(b) $\frac{2}{3}$ of
(i) 18
(ii) 27
(c) $\frac{3}{4}$ of
(i) 16
(ii) 36
(d) $\frac{4}{5}$ of
(i) 20
(ii) 35
6. Multiply and express as a mixed fraction:
(a) $3 \times 5 \frac{1}{5}$
(b) $5 \times 6 \frac{3}{4}$
(c) $7 \times 2 \frac{1}{4}$
(d) $4 \times 6 \frac{1}{3}$
(e) $3 \frac{1}{4} \times 6$
(f) $3 \frac{2}{5} \times 8$
8. Vidya and Pratap went for a picnic. Their mother gave them a water bottle that contained 5 litres of water. Vidya consumed $\frac{2}{5}$ of the water. Pratap consumed the remaining water.
(i) How much water did Vidya drink?
(ii) What fraction of the total quantity of water did Pratap drink?
#### Multiplication of a Fraction by a Fraction
Farida had a $9 \mathrm{~cm}$ long strip of ribbon. She cut this strip into four equal parts. How did she do it? She folded the strip twice. What fraction of the total length will each part represent?
Each part will be $\frac{9}{4} \mathrm{~cm}$ long. She took one part and divided it in two equal parts by folding the part once. What will one of the pieces represent? It will represent $\frac{1}{2}$ of $\frac{9}{4}$ or $\frac{1}{2} \times \frac{9}{4}$.
Fig 2.8
Fig 2.9
Let us now see how to find the product of two fractions like $\frac{1}{2} \times \frac{9}{4}$.
To do this we first learn to find the products like $\frac{1}{2} \times \frac{1}{3}$.
(a) How do we find $\frac{1}{3}$ of a whole? We divide the whole in three equal parts. Each of the three parts represents $\frac{1}{3}$ of the whole. Take one part of these three parts, and shade it as shown in Fig 2.8.
(b) How will you find $\frac{1}{2}$ of this shaded part? Divide this one-third $\left(\frac{1}{3}\right)$ shaded part into two equal parts. Each of these two parts represents $\frac{1}{2}$ of $\frac{1}{3}$ i.e., $\frac{1}{2} \times \frac{1}{3}$ (Fig 2.9).
Take out 1 part of these two and name it 'A'. 'A' represents $\frac{1}{2} \times \frac{1}{3}$.
(c) What fraction is ' $A$ ' of the whole? For this, divide each of the remaining $\frac{1}{3}$ parts also in two equal parts. How many such equal parts do you have now?
There are six such equal parts. ' $A$ ' is one of these parts.
So, ' $A$ ' is $\frac{1}{6}$ of the whole. Thus, $\frac{1}{2} \times \frac{1}{3}=\frac{1}{6}$.
How did we decide that ' $A$ ' was $\frac{1}{6}$ of the whole? The whole was divided in $6=2 \times 3$ parts and $1=1 \times 1$ part was taken out of it.
Thus,
$$
\frac{1}{2} \times \frac{1}{3}=\frac{1}{6}=\frac{1 \times 1}{2 \times 3}
$$
or
$$
\frac{1}{2} \times \frac{1}{3}=\frac{1 \times 1}{2 \times 3}
$$
The value of $\frac{1}{3} \times \frac{1}{2}$ can be found in a similar way. Divide the whole into two equal parts and then divide one of these parts in three equal parts. Take one of these parts. This will represent $\frac{1}{3} \times \frac{1}{2}$ i.e., $\frac{1}{6}$.
Therefore
$$
\frac{1}{3} \times \frac{1}{2}=\frac{1}{6}=\frac{1 \times 1}{3 \times 2} \text { as discussed earlier. }
$$
Hence
$$
\frac{1}{2} \times \frac{1}{3}=\frac{1}{3} \times \frac{1}{2}=\frac{1}{6}
$$
Find $\frac{1}{3} \times \frac{1}{4}$ and $\frac{1}{4} \times \frac{1}{3} ; \frac{1}{2} \times \frac{1}{5}$ and $\frac{1}{5} \times \frac{1}{2}$ and check whether you get
$$
\frac{1}{3} \times \frac{1}{4}=\frac{1}{4} \times \frac{1}{3} ; \frac{1}{2} \times \frac{1}{5}=\frac{1}{5} \times \frac{1}{2}
$$
## TRY THESE
Fill in these boxes:
(i) $\frac{1}{2} \times \frac{1}{7}=\frac{1 \times 1}{2 \times 7}=\square$
(ii) $\frac{1}{5} \times \frac{1}{7}=\square=\square$
(iii) $\frac{1}{7} \times \frac{1}{2}=\square=\square$
(iv) $\frac{1}{7} \times \frac{1}{5}=\square=\square$
EXAMPLE 2 Sushant reads $\frac{1}{3}$ part of a book in 1 hour. How much part of the book will he read in $2 \frac{1}{5}$ hours?
Solution The part of the book read by Sushant in 1 hour $=\frac{1}{3}$.
So, the part of the book read by him in $2 \frac{1}{5}$ hours $=2 \frac{1}{5} \times \frac{1}{3}$
$$
=\frac{11}{5} \times \frac{1}{3}=\frac{11 \times 1}{5 \times 3}=\frac{11}{15}
$$
Let us now find $\frac{1}{2} \times \frac{5}{3}$. We know that $\frac{5}{3}=\frac{1}{3} \times 5$.
$$
\text { So, } \frac{1}{2} \times \frac{5}{3}=\frac{1}{2} \times \frac{1}{3} \times 5=\frac{1}{6} \times 5=\frac{5}{6}
$$
$$
\text { Also, } \frac{5}{6}=\frac{1 \times 5}{2 \times 3} \text {. Thus, } \frac{1}{2} \times \frac{5}{3}=\frac{1 \times 5}{2 \times 3}=\frac{5}{6} \text {. }
$$
This is also shown by the figures drawn below. Each of these five equal shapes (Fig 2.10) are parts of five similar circles. Take one such shape. To obtain this shape we first divide a circle in three equal parts. Further divide each of these three parts in two equal parts. One part out of it is the shape we considered. What will it represent? It will represent $\frac{1}{2} \times \frac{1}{3}=\frac{1}{6}$. The total of such parts would be $5 \times \frac{1}{6}=\frac{5}{6}$.
Fig 2.10
## Try These
Find: $\frac{1}{3} \times \frac{4}{5} ; \frac{2}{3} \times \frac{1}{5}$
$$
\text { Similarly } \frac{3}{5} \times \frac{1}{7}=\frac{3 \times 1}{5 \times 7}=\frac{3}{35} \text {. }
$$
So, we find that we multiply two fractions as $\frac{\text { Product of Numerators }}{\text { Product of Denominators }}$.
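Python's fractions module multiplies fractions by exactly this rule, so it can be used to check such products. A brief optional sketch (the fractions chosen are arbitrary):

```python
from fractions import Fraction

# Product of numerators over product of denominators.
print(Fraction(1, 2) * Fraction(5, 3))   # 5/6
print(Fraction(3, 5) * Fraction(1, 7))   # 3/35
print(Fraction(2, 3) * Fraction(4, 5))   # 8/15
```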
## Value of the Products
You have seen that the product of two whole numbers is bigger than each of the two whole numbers. For example, $3 \times 4=12$ and $12>4,12>3$. What happens to the value of the product when we multiply two fractions?
## TRY THESE
Find: $\frac{8}{3} \times \frac{4}{7} ; \quad \frac{3}{4} \times \frac{2}{3}$.
Let us first consider the product of two proper fractions. We have:

| $\frac{2}{3} \times \frac{4}{5}=\frac{8}{15}$ | $\frac{8}{15}<\frac{2}{3}, \frac{8}{15}<\frac{4}{5}$ | Product is less than each of the fractions |
| :---: | :---: | :---: |
| $\frac{1}{5} \times \frac{2}{7}=\ldots$ | | |
| $\frac{3}{5} \times \frac{\square}{8}=$ | | |
| $\frac{2}{\square} \times \frac{4}{9}=\frac{8}{45}$ | | |

You will find that when two proper fractions are multiplied, the product is less than each of the fractions. Or, we say the value of the product of two proper fractions is smaller than each of the two fractions.
Check this by constructing five more examples.
Let us now multiply two improper fractions.
We find that the product of two improper fractions is greater than each of the two fractions.
Or, the value of the product of two improper fractions is more than each of the two fractions.
Construct five more examples for yourself and verify the above statement.
Let us now multiply a proper and an improper fraction, say $\frac{2}{3}$ and $\frac{7}{5}$.
We have $\quad \frac{2}{3} \times \frac{7}{5}=\frac{14}{15}$. Here, $\frac{14}{15}<\frac{7}{5}$ and $\frac{14}{15}>\frac{2}{3}$
The product obtained is less than the improper fraction and greater than the proper fraction involved in the multiplication.
Check it for $\frac{6}{5} \times \frac{2}{8}, \frac{8}{3} \times \frac{4}{5}$.
## EXERCISE 2.2
1. Find:
(i) $\frac{1}{4}$ of
(a) $\frac{1}{4}$
(b) $\frac{3}{5}$
(c) $\frac{4}{3}$
(ii) $\frac{1}{7}$ of
(a) $\frac{2}{9}$
(b) $\frac{6}{5}$
(c) $\frac{3}{10}$
2. Multiply and reduce to lowest form (if possible) :
(i) $\frac{2}{3} \times 2 \frac{2}{3}$
(ii) $\frac{2}{7} \times \frac{7}{9}$
(iii) $\frac{3}{8} \times \frac{6}{4}$
(iv) $\frac{9}{5} \times \frac{3}{5}$
(v) $\frac{1}{3} \times \frac{15}{8}$
(vi) $\frac{11}{2} \times \frac{3}{10}$
(vii) $\frac{4}{5} \times \frac{12}{7}$
3. Multiply the following fractions:
(i) $\frac{2}{5} \times 5 \frac{1}{4}$
(ii) $6 \frac{2}{5} \times \frac{7}{9}$
(iii) $\frac{3}{2} \times 5 \frac{1}{3}$
(iv) $\frac{5}{6} \times 2 \frac{3}{7}$
(v) $3 \frac{2}{5} \times \frac{4}{7}$
(vi) $2 \frac{3}{5} \times 3$
(vii) $3 \frac{4}{7} \times \frac{3}{5}$
4. Which is greater:
(i) $\frac{2}{7}$ of $\frac{3}{4}$ or $\frac{3}{5}$ of $\frac{5}{8}$
(ii) $\frac{1}{2}$ of $\frac{6}{7}$ or $\frac{2}{3}$ of $\frac{3}{7}$
5. Saili plants 4 saplings, in a row, in her garden. The distance between two adjacent saplings is $\frac{3}{4} \mathrm{~m}$. Find the distance between the first and the last sapling.
6. Lipika reads a book for $1 \frac{3}{4}$ hours everyday. She reads the entire book in 6 days. How many hours in all were required by her to read the book?
7. A car runs $16 \mathrm{~km}$ using 1 litre of petrol. How much distance will it cover using $2 \frac{3}{4}$ litres of petrol.
8. (a) (i) Provide the number in the box $\square$, such that $\frac{2}{3} \times \square=\frac{10}{30}$.
(ii) The simplest form of the number obtained in $\square$ is
(b) (i) Provide the number in the box $\square$, such that $\frac{3}{5} \times \square=\frac{24}{75}$.
(ii) The simplest form of the number obtained in $\square$ is
### Division of Fractions
John has a paper strip of length $6 \mathrm{~cm}$. He cuts this strip in smaller strips of length $2 \mathrm{~cm}$ each. You know that he would get $6 \div 2=3$ strips. John cuts another strip of length $6 \mathrm{~cm}$ into smaller strips of length $\frac{3}{2} \mathrm{~cm}$ each. How many strips will he get now? He will get $6 \div \frac{3}{2}$ strips.
A paper strip of length $\frac{15}{2} \mathrm{~cm}$ can be cut into smaller strips of length $\frac{3}{2} \mathrm{~cm}$ each to give $\frac{15}{2} \div \frac{3}{2}$ pieces.
So, we are required to divide a whole number by a fraction or a fraction by another fraction. Let us see how to do that.
#### Division of Whole Number by a Fraction
Let us find $1 \div \frac{1}{2}$.
We divide a whole into a number of equal parts such that each part is half of the whole.
The number of such half $\left(\frac{1}{2}\right)$ parts would be $1 \div \frac{1}{2}$. Observe the figure (Fig 2.11). How many half parts do you see?
There are two half parts.
So, $\quad 1 \div \frac{1}{2}=2$. Also, $1 \times \frac{2}{1}=1 \times 2=2$
Similarly, $3 \div \frac{1}{4}=$ number of $\frac{1}{4}$ parts obtained when each of the 3 wholes is divided into $\frac{1}{4}$ equal parts $=12$ (from Fig 2.12)
Fig 2.12
Observe also that, $3 \times \frac{4}{1}=3 \times 4=12$. Thus, $3 \div \frac{1}{4}=3 \times \frac{4}{1}=12$.
Find in a similar way, $3 \div \frac{1}{2}$ and $3 \times \frac{2}{1}$
## Reciprocal of a fraction
The number $\frac{2}{1}$ can be obtained by interchanging the numerator and denominator of $\frac{1}{2}$ or by inverting $\frac{1}{2}$. Similarly, $\frac{3}{1}$ is obtained by inverting $\frac{1}{3}$.
Let us first see about the inverting of such numbers.
Observe these products and fill in the blanks :
$$
\begin{array}{l|l}
7 \times \frac{1}{7}=1 & \frac{5}{4} \times \frac{4}{5}=\ldots \\
\frac{1}{9} \times 9=\ldots & \frac{2}{7} \times \ldots=1 \\
\frac{2}{3} \times \frac{3}{2}=\frac{2 \times 3}{3 \times 2}=\frac{6}{6}=1 & \ldots \times \ldots=1
\end{array}
$$
Multiply five more such pairs.
The non-zero numbers whose product with each other is 1 are called the reciprocals of each other. So the reciprocal of $\frac{5}{9}$ is $\frac{9}{5}$ and the reciprocal of $\frac{9}{5}$ is $\frac{5}{9}$. What is the reciprocal of $\frac{1}{9}$ ? Of $\frac{2}{7}$ ?
You will see that the reciprocal of $\frac{2}{3}$ is obtained by inverting it. You get $\frac{3}{2}$.
## Think, Discuss and Write
(i) Will the reciprocal of a proper fraction be again a proper fraction?
(ii) Will the reciprocal of an improper fraction be again an improper fraction?
Therefore, we can say that
$1 \div \frac{1}{2}=1 \times \frac{2}{1}=1 \times$ reciprocal of $\frac{1}{2}$.
$3 \div \frac{1}{4}=3 \times \frac{4}{1}=3 \times$ reciprocal of $\frac{1}{4}$.
$3 \div \frac{1}{2}=----$
So, $2 \div \frac{3}{4}=2 \times$ reciprocal of $\frac{3}{4}=2 \times \frac{4}{3}$.
$$
5 \div \frac{2}{9}=5 \times
$$
Thus, to divide a whole number by any fraction, multiply that whole number by the reciprocal of that fraction.
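This reciprocal rule is easy to check with the same module. An optional Python sketch (the helper `reciprocal` is our own name) compares dividing by a fraction with multiplying by its reciprocal.

```python
from fractions import Fraction

def reciprocal(f: Fraction) -> Fraction:
    # Interchange the numerator and the denominator.
    return Fraction(f.denominator, f.numerator)

half = Fraction(1, 2)
print(3 / half, 3 * reciprocal(half))               # 6 and 6
print(Fraction(1, 3) / Fraction(6, 5),
      Fraction(1, 3) * reciprocal(Fraction(6, 5)))  # 5/18 and 5/18
```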
## TRY These
Find: (i) $7 \div \frac{2}{5}$
(ii) $6 \div \frac{4}{7}$
(iii) $2 \div \frac{8}{9}$
- While dividing a whole number by a mixed fraction, first convert the mixed fraction into improper fraction and then solve it.
Thus, $4 \div 2 \frac{2}{5}=4 \div \frac{12}{5}=$ ? Also, $5 \div 3 \frac{1}{3}=5 \div \frac{10}{3}=$ ?
## TRY THESE
Find: $7 \div 2 \frac{4}{7}$
#### Division of a Fraction by a Whole Number
- What will be $\frac{3}{4} \div 3$ ?
Based on our earlier observations we have: $\frac{3}{4} \div 3=\frac{3}{4} \div \frac{3}{1}=\frac{3}{4} \times \frac{1}{3}=\frac{3}{12}=\frac{1}{4}$
So, $\frac{2}{3} \div 7=\frac{2}{3} \times \frac{1}{7}=$ ?
What is $\frac{5}{7} \div 6, \frac{2}{7} \div 8 ?$
- While dividing mixed fractions by whole numbers, convert the mixed fractions into improper fractions. That is,
$$
2 \frac{2}{3} \div 5=\frac{8}{3} \div 5=\ldots ; \quad 4 \frac{2}{5} \div 3=\ldots ; \quad 2 \frac{3}{5} \div 2=\ldots
$$
#### Division of a Fraction by Another Fraction
We can now find $\frac{1}{3} \div \frac{6}{5}$.
$\frac{1}{3} \div \frac{6}{5}=\frac{1}{3} \times$ reciprocal of $\frac{6}{5}=\frac{1}{3} \times \frac{5}{6}=\frac{5}{18}$
Similarly, $\frac{8}{5} \div \frac{2}{3}=\frac{8}{5} \times$ reciprocal of $\frac{2}{3}=$ ?
and, $\frac{1}{2} \div \frac{3}{4}=?$
## TRY THESE
Find: (i) $\frac{3}{5} \div \frac{1}{2}$
(ii) $\frac{1}{2} \div \frac{3}{5}$
(iii) $2 \frac{1}{2} \div \frac{3}{5}$
(iv) $5 \frac{1}{6} \div \frac{9}{2}$
## EXERCISE 2.3
1. Find:
(i) $12 \div \frac{3}{4}$
(ii) $14 \div \frac{5}{6}$
(iii) $8 \div \frac{7}{3}$
(iv) $4 \div \frac{8}{3}$
(v) $3 \div 2 \frac{1}{3}$
(vi) $5 \div 3 \frac{4}{7}$
2. Find the reciprocal of each of the following fractions. Classify the reciprocals as proper fractions, improper fractions and whole numbers.
(i) $\frac{3}{7}$
(ii) $\frac{5}{8}$
(iii) $\frac{9}{7}$
(iv) $\frac{6}{5}$
(v) $\frac{12}{7}$
(vi) $\frac{1}{8}$
(vii) $\frac{1}{11}$
3. Find:
(i) $\frac{7}{3} \div 2$
(ii) $\frac{4}{9} \div 5$
(iii) $\frac{6}{13} \div 7$
(iv) $4 \frac{1}{3} \div 3$
(v) $3 \frac{1}{2} \div 4$
(vi) $4 \frac{3}{7} \div 7$
4. Find:
(i) $\frac{2}{5} \div \frac{1}{2}$
(ii) $\frac{4}{9} \div \frac{2}{3}$
(iii) $\frac{3}{7} \div \frac{8}{7}$
(iv) $2 \frac{1}{3} \div \frac{3}{5}$
(v) $3 \frac{1}{2} \div \frac{8}{3}$
(vi) $\frac{2}{5} \div 1 \frac{1}{2}$
(vii) $3 \frac{1}{5} \div 1 \frac{2}{3}$ (viii) $2 \frac{1}{5} \div 1 \frac{1}{5}$
### Multiplication of Decimal Numbers
Reshma purchased $1.5 \mathrm{~kg}$ vegetable at the rate of ₹ 8.50 per kg. How much money should she pay? Certainly it would be ₹ $(8.50 \times 1.50)$. Both 8.5 and 1.5 are decimal numbers. So, we have come across a situation where we need to know how to multiply two decimals. Let us now learn the multiplication of two decimal numbers.
First we find $0.1 \times 0.1$.
Now, $0.1=\frac{1}{10}$. So, $0.1 \times 0.1=\frac{1}{10} \times \frac{1}{10}=$ $\frac{1 \times 1}{10 \times 10}=\frac{1}{100}=0.01$
Let us see its pictorial representation (Fig 2.13). The fraction $\frac{1}{10}$ represents 1 part out of 10 equal parts.
Fig 2.13
The shaded part in the picture represents $\frac{1}{10}$.
We know that $\frac{1}{10} \times \frac{1}{10}$ means $\frac{1}{10}$ of $\frac{1}{10}$. So, divide this $\frac{1}{10}$ th part into 10 equal parts and take one part out of it.
Thus, we have (Fig 2.14).
Fig 2.14
The dotted square is one part out of 10 of the $\frac{1}{10}^{\text {th }}$ part. That is, it represents $\frac{1}{10} \times \frac{1}{10}$ or $0.1 \times 0.1$
Can the dotted square be represented in some other way?
How many small squares do you find in Fig 2.14?
There are 100 small squares. So the dotted square represents one out of 100 or 0.01 .
Hence, $0.1 \times 0.1=0.01$.
Note that 0.1 occurs two times in the product. In 0.1 there is one digit to the right of the decimal point. In 0.01 there are two digits (i.e., $1+1$ ) to the right of the decimal point. Let us now find $0.2 \times 0.3$.
We have, $0.2 \times 0.3=\frac{2}{10} \times \frac{3}{10}$
As we did for $\frac{1}{10} \times \frac{1}{10}$, let us divide the square into 10 equal parts and take three parts out of it, to get $\frac{3}{10}$. Again divide each
Fig 2.15 of these three equal parts into 10 equal parts and take two from each. We get $\frac{2}{10} \times \frac{3}{10}$. The dotted squares represent $\frac{2}{10} \times \frac{3}{10}$ or $0.2 \times 0.3$. (Fig 2.15)
Since there are 6 dotted squares out of 100, they also represent 0.06. Thus, $0.2 \times 0.3=0.06$.
Observe that $2 \times 3=6$ and the number of digits to the right of the decimal point in 0.06 is $2(=1+1)$.
Check whether this applies to $0.1 \times 0.1$ also.
Find $0.2 \times 0.4$ by applying these observations.
While finding $0.1 \times 0.1$ and $0.2 \times 0.3$, you might have noticed that first we multiplied them as whole numbers ignoring the decimal point. In $0.1 \times 0.1$, we found $01 \times 01$ or $1 \times$ 1. Similarly in $0.2 \times 0.3$ we found $02 \times 03$ or $2 \times 3$.
Then, we counted the number of digits starting from the rightmost digit and moved towards left. We then put the decimal point there. The number of digits to be counted is obtained by adding the number of digits to the right of the decimal point in the decimal numbers that are being multiplied.
Let us now find $1.2 \times 2.5$.
Multiply 12 and 25. We get 300. In both 1.2 and 2.5, there is 1 digit to the right of the decimal point. So, count $1+1=2$ digits from the rightmost digit (i.e., 0) in 300 and move towards left. We get 3.00 or 3.
Find in a similar way $1.5 \times 1.6,2.4 \times 4.2$.
While multiplying 2.5 and 1.25, you will first multiply 25 and 125. For placing the decimal in the product obtained, you will count $1+2=3$ (Why?) digits starting from the rightmost digit. Thus, $2.5 \times 1.25=3.125$
Find $2.7 \times 1.35$
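The digit-counting rule itself can be written as a small procedure: multiply the numbers as whole numbers, then place the decimal point. The sketch below is an optional Python illustration (it assumes plain positive decimals written like 2.5 or 1.25; the function name is our own).

```python
def multiply_decimals(x: str, y: str) -> str:
    """Multiply two decimal numbers the way the digit-counting rule describes."""
    # Count digits to the right of the decimal point in each number.
    places = ((len(x.split(".")[1]) if "." in x else 0)
              + (len(y.split(".")[1]) if "." in y else 0))
    # Multiply them as whole numbers, ignoring the decimal points.
    whole = int(x.replace(".", "")) * int(y.replace(".", ""))
    digits = str(whole).rjust(places + 1, "0")
    # Put the decimal point 'places' digits from the right.
    return (digits[:-places] + "." + digits[-places:]) if places else digits

print(multiply_decimals("0.2", "0.3"))    # 0.06
print(multiply_decimals("2.5", "1.25"))   # 3.125
print(multiply_decimals("1.2", "2.5"))    # 3.00
```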
## TRY THESE
1. Find:
(i) $2.7 \times 4$
(ii) $1.8 \times 1.2$
(iii) $2.3 \times 4.35$
2. Arrange the products obtained in (1) in descending order.
EXAMPLE 3 The side of an equilateral triangle is $3.5 \mathrm{~cm}$. Find its perimeter.
Solution All the sides of an equilateral triangle are equal. So, length of each side $=3.5 \mathrm{~cm}$
Thus, perimeter $=3 \times 3.5 \mathrm{~cm}=10.5 \mathrm{~cm}$
EXAMPLE 4 The length of a rectangle is $7.1 \mathrm{~cm}$ and its breadth is $2.5 \mathrm{~cm}$. What is the area of the rectangle?
SOLUTION Length of the rectangle $=7.1 \mathrm{~cm}$
Breadth of the rectangle $=2.5 \mathrm{~cm}$
Therefore, area of the rectangle $=7.1 \times 2.5 \mathrm{~cm}^{2}=17.75 \mathrm{~cm}^{2}$
#### Multiplication of Decimal Numbers by 10,100 and 1000
Reshma observed that $2.3=\frac{23}{10}$ whereas $2.35=\frac{235}{100}$. Thus, she found that depending on the position of the decimal point the decimal number can be converted to a fraction with denominator 10 or 100 . She wondered what would happen if a decimal number is multiplied by 10 or 100 or 1000 .
Let us see if we can find a pattern of multiplying numbers by 10 or 100 or 1000 .
Have a look at the table given below and fill in the blanks:
| $1.76 \times 10=\frac{176}{100} \times 10=17.6$ | $2.35 \times 10=\square$ | $12.356 \times 10=\square$ |
| :--- | :--- | :--- |
| $1.76 \times 100=\frac{176}{100} \times 100=176$ or 176.0 | $2.35 \times 100=\_$ | $12.356 \times 100=$ |
| $1.76 \times 1000=\frac{176}{100} \times 1000=1760$ or 1760.0 | $2.35 \times 1000=\ldots$ | $12.356 \times 1000=$ |
| $0.5 \times 10=\frac{5}{10} \times 10=5 ; 0.5 \times 100=\ldots ;$ | $0.5 \times 1000=$ | |
Observe the shift of the decimal point of the products in the table. Here the numbers are multiplied by 10,100 and 1000 . In $1.76 \times 10=17.6$, the digits are same i.e., 1,7 and 6 . Do you observe this in other products also? Observe 1.76 and 17.6. To which side has the decimal point shifted, right or left? The decimal point has shifted to the right by one place. Note that 10 has one zero over 1 .
In $1.76 \times 100=176.0$, observe 1.76 and 176.0 . To which side and by how many digits has the decimal point shifted? The decimal point has shifted to the right by two places.
Note that 100 has two zeros over one.
Do you observe similar shifting of the decimal point in other products also? So we say, when a decimal number is multiplied by 10, 100 or 1000, the digits in the product are the same as in the decimal number but the decimal point in the product is shifted to the right by as many places as there are zeros over one.
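This shifting of the decimal point can be observed with exact decimal arithmetic. An optional Python sketch (the sample values are arbitrary):

```python
from decimal import Decimal

for n in (Decimal("1.76"), Decimal("2.35"), Decimal("0.07")):
    # Multiplying by 10, 100, 1000 shifts the decimal point to the right
    # by one, two and three places respectively.
    print(n, n * 10, n * 100, n * 1000)
```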
## TRY THESE
Find: (i) $0.3 \times 10$
(ii) $1.2 \times 100$
(iii) $56.3 \times 1000$
Based on these observations we can now say
$$
0.07 \times 10=0.7,0.07 \times 100=7 \text { and } 0.07 \times 1000=70 .
$$
Can you now tell $2.97 \times 10=? \quad 2.97 \times 100=? \quad 2.97 \times 1000=?$
Can you now help Reshma to find the total amount, i.e., ₹ $8.50 \times 1.50$, that she has to pay?
## EXERCISE 2.4
1. Find:
(i) $0.2 \times 6$
(ii) $8 \times 4.6$
(iii) $2.71 \times 5$
(iv) $20.1 \times 4$
(v) $0.05 \times 7$
(vi) $211.02 \times 4$
(vii) $2 \times 0.86$
2. Find the area of rectangle whose length is $5.7 \mathrm{~cm}$ and breadth is $3 \mathrm{~cm}$.
3. Find:
(i) $1.3 \times 10$
(ii) $36.8 \times 10$
(iii) $153.7 \times 10$
(iv) $168.07 \times 10$
(v) $31.1 \times 100$
(vi) $156.1 \times 100$
(vii) $3.62 \times 100$
(viii) $43.07 \times 100$
(ix) $0.5 \times 10$
(x) $0.08 \times 10$
(xi) $0.9 \times 100$
(xii) $0.03 \times 1000$
4. A two-wheeler covers a distance of $55.3 \mathrm{~km}$ in one litre of petrol. How much distance will it cover in 10 litres of petrol?
5. Find:
(i) $2.5 \times 0.3$
(ii) $0.1 \times 51.7$
(iii) $0.2 \times 316.8$
(iv) $1.3 \times 3.1$
(v) $0.5 \times 0.05$
(vi) $11.2 \times 0.15$
(vii) $1.07 \times 0.02$
(viii) $10.05 \times 1.05$
(ix) $101.01 \times 0.01$
(x) $100.01 \times 1.1$
### Division of Decimal Numbers
Savita was preparing a design to decorate her classroom. She needed a few coloured strips of paper of length $1.9 \mathrm{~cm}$ each. She had a strip of coloured paper of length $9.5 \mathrm{~cm}$. How many pieces of the required length will she get out of this strip? She thought it would be $\frac{9.5}{1.9} \mathrm{~cm}$. Is she correct?
Both 9.5 and 1.9 are decimal numbers. So we need to know the division of decimal numbers too!
#### Division by 10,100 and 1000
Let us find the division of a decimal number by 10,100 and 1000 .
Consider $31.5 \div 10$. $31.5 \div 10=\frac{315}{10} \times \frac{1}{10}=\frac{315}{100}=3.15$
Similarly, $31.5 \div 100=\frac{315}{10} \times \frac{1}{100}=\frac{315}{1000}=0.315$
Let us see if we can find a pattern for dividing numbers by 10,100 or 1000 . This may help us in dividing numbers by 10,100 or 1000 in a shorter way.
Take $31.5 \div 10=3.15$. In 31.5 and 3.15 , the digits are same i.e., 3,1 , and 5 but the decimal point has shifted in the quotient. To which side and by how many digits? The decimal point has shifted to the left by one place. Note that 10 has one zero over 1.
Consider now $31.5 \div 100=0.315$. In 31.5 and 0.315 the digits are same, but what about the decimal point in the quotient?
## Try These
Find: (i) $235.4 \div 10$
(ii) $235.4 \div 100$
(iii) $235.4 \div 1000$
It has shifted to the left by two places. Note that 100 has two zeros over 1.
So we can say that, while dividing a number by 10, 100 or 1000, the digits of the number and the quotient are same but the decimal point in the quotient shifts to the left by as many places as there are zeros over 1 . Using this observation let us now quickly find: $\quad 2.38 \div 10=0.238,2.38 \div 100=0.0238,2.38 \div 1000=0.00238$
#### Division of a Decimal Number by a Whole Number
Let us find $\frac{6.4}{2}$. Remember we also write it as $6.4 \div 2$.
So, $6.4 \div 2=\frac{64}{10} \div 2=\frac{64}{10} \times \frac{1}{2}$ as learnt in fractions.
$$
=\frac{64 \times 1}{10 \times 2}=\frac{1 \times 64}{10 \times 2}=\frac{1}{10} \times \frac{64}{2}=\frac{1}{10} \times 32=\frac{32}{10}=3.2
$$
Or, let us first divide 64 by 2. We get 32. There is one digit to the right of the decimal point in 6.4. Place the decimal in 32 such that there would be one digit to its right. We get 3.2 again.
## TRY THESE
Find: (i) $35.7 \div 3$ (ii) $25.5 \div 3$
To find $19.5 \div 5$, first find $195 \div 5$. We get 39. There is one digit to the right of the decimal point in 19.5. Place the decimal point in 39 such that there would be one digit to its right. You will get 3.9.
## TRY THESE
(i) $43.15 \div 5=$ ?;
(ii) $82.44 \div 6=$ ?
## Try These
Find: (i) $15.5 \div 5$
(ii) $126.35 \div 7$
$$
\text { Now, } 12.96 \div 4=\frac{1296}{100} \div 4=\frac{1296}{100} \times \frac{1}{4}=\frac{1}{100} \times \frac{1296}{4}=\frac{1}{100} \times 324=3.24
$$
Or, divide 1296 by 4 . You get 324. There are two digits to the right of the decimal in 12.96. Making similar placement of the decimal in 324, you will get 3.24.
Note that here and in the next section, we have considered only those divisions in which, ignoring the decimal, the number would be completely divisible by another number to give remainder zero. Like, in $19.5 \div 5$, the number 195 when divided by 5 , leaves remainder zero.
However, there are situations in which the number may not be completely divisible by another number, i.e., we may not get remainder zero. For example, $195 \div 7$. We deal with such situations in later classes.
EXAMPLE 5 Find the average of 4.2, 3.8 and 7.6.
Solution The average of 4.2, 3.8 and 7.6 is $\frac{4.2+3.8+7.6}{3}=\frac{15.6}{3}=5.2$.
#### Division of a Decimal Number by another Decimal Number
Let us find $\frac{25.5}{0.5}$ i.e., $25.5 \div 0.5$.
We have $25.5 \div 0.5=\frac{255}{10} \div \frac{5}{10}=\frac{255}{10} \times \frac{10}{5}=51$. Thus, $25.5 \div 0.5=51$
What do you observe? For $\frac{25.5}{0.5}$, we find that there is one digit to the right of the decimal in 0.5 . This could be converted to whole number by dividing by 10 . Accordingly 25.5 was also converted to a fraction by dividing by 10 .
Or, we say the decimal point was shifted by one place to the right in 0.5 to make it 5 . So, there was a shift of one decimal point to the right in 25.5 also to make it 255 .
Thus, $\quad 22.5 \div 1.5=\frac{22.5}{1.5}=\frac{225}{15}=15$
Find $\quad \frac{20.3}{0.7}$ and $\frac{15.2}{0.8}$ in a similar way.
## TRY THESE
Find: (i) $\frac{7.75}{0.25}$ (ii) $\frac{42.8}{0.02}$ (iii) $\frac{5.6}{1.4}$
Consider now, $\frac{20.55}{1.5}$. We can write it as $205.5 \div 15$, as discussed above. We get 13.7. Find $\frac{3.96}{0.4}, \frac{2.31}{0.3}$. Consider now, $\frac{33.725}{0.25}$. We can write it as $\frac{3372.5}{25}$ (How?) and we get the quotient as 134.9. How will you find $\frac{27}{0.03}$ ? We know that 27 can be written as 27.00 .
So, $\quad \frac{27}{0.03}=\frac{27.00}{0.03}=\frac{2700}{3}=900$
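The same idea, shifting the decimal point in both numbers until the divisor becomes a whole number, can be written as a short procedure. An optional Python sketch (the function name is our own; it assumes the numbers are typed as plain decimals):

```python
from decimal import Decimal

def divide_decimals(x: str, y: str) -> Decimal:
    """Divide x by y after shifting both decimal points to make y a whole number."""
    shift = len(y.split(".")[1]) if "." in y else 0
    a = Decimal(x) * (10 ** shift)   # e.g. 25.5  -> 255
    b = Decimal(y) * (10 ** shift)   # e.g. 0.5   -> 5
    return a / b

print(divide_decimals("25.5", "0.5"))     # 51
print(divide_decimals("33.725", "0.25"))  # 134.9
print(divide_decimals("27", "0.03"))      # 900
```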
EXAMPLE 6 Each side of a regular polygon is $2.5 \mathrm{~cm}$ in length. The perimeter of the polygon is $12.5 \mathrm{~cm}$. How many sides does the polygon have?
Solution The perimeter of a regular polygon is the sum of the lengths of all its equal sides $=12.5 \mathrm{~cm}$.
Length of each side $=2.5 \mathrm{~cm}$. Thus, the number of sides $=\frac{12.5}{2.5}=\frac{125}{25}=5$
The polygon has 5 sides.
EXAMPLE 7 A car covers a distance of $89.1 \mathrm{~km}$ in 2.2 hours. What is the average distance covered by it in 1 hour?
Solution Distance covered by the car $=89.1 \mathrm{~km}$.
Time required to cover this distance $=2.2$ hours.
So distance covered by it in 1 hour $=\frac{89.1}{2.2}=\frac{891}{22}=40.5 \mathrm{~km}$.
## EXERCISE 2.5
1. Find:
(i) $0.4 \div 2$
(ii) $0.35 \div 5$
(iii) $2.48 \div 4$
(iv) $65.4 \div 6$
(v) $651.2 \div 4$
(vi) $14.49 \div 7$
(vii) $3.96 \div 4$
(viii) $0.80 \div 5$
2. Find:
(i) $4.8 \div 10$
(ii) $52.5 \div 10$
(iii) $0.7 \div 10$
(iv) $33.1 \div 10$
(v) $272.23 \div 10$
(vi) $0.56 \div 10$
(vii) $3.97 \div 10$
3. Find:
(i) $2.7 \div 100$
(ii) $0.3 \div 100$
(iii) $0.78 \div 100$
(iv) $432.6 \div 100$
(v) $23.6 \div 100$
(vi) $98.53 \div 100$
4. Find:
(i) $7.9 \div 1000$
(ii) $26.3 \div 1000$
(iii) $38.53 \div 1000$
(iv) $128.9 \div 1000$
(v) $0.5 \div 1000$
5. Find:
(i) $7 \div 3.5$
(ii) $36 \div 0.2$
(iii) $3.25 \div 0.5$
(iv) $30.94 \div 0.7$
(v) $0.5 \div 0.25$
(vi) $7.75 \div 0.25$
(vii) $76.5 \div 0.15$
(viii) $37.8 \div 1.4$
(ix) $2.73 \div 1.3$
6. A vehicle covers a distance of $43.2 \mathrm{~km}$ in 2.4 litres of petrol. How much distance will it cover in one litre of petrol?
## What have We Discussed?
1. We have learnt how to multiply fractions. Two fractions are multiplied by multiplying their numerators and denominators separately and writing the product as $\frac{\text { product of numerators }}{\text { product of denominators }}$. For example, $\frac{2}{3} \times \frac{5}{7}=\frac{2 \times 5}{3 \times 7}=\frac{10}{21}$.
2. A fraction acts as an operator 'of'. For example, $\frac{1}{2}$ of 2 is $\frac{1}{2} \times 2=1$.
3. (a) The product of two proper fractions is less than each of the fractions that are multiplied.
(b) The product of a proper and an improper fraction is less than the improper fraction and greater than the proper fraction.
(c) The product of two improper fractions is greater than the two fractions.
4. A reciprocal of a fraction is obtained by inverting it upside down.
5. We have seen how to divide two fractions.
(a) While dividing a whole number by a fraction, we multiply the whole number with the reciprocal of that fraction.
For example, $2 \div \frac{3}{5}=2 \times \frac{5}{3}=\frac{10}{3}$
(b) While dividing a fraction by a whole number we multiply the fraction by the reciprocal of the whole number.
For example, $\frac{2}{3} \div 7=\frac{2}{3} \times \frac{1}{7}=\frac{2}{21}$
(c) While dividing one fraction by another fraction, we multiply the first fraction by the reciprocal of the other. So, $\frac{2}{3} \div \frac{5}{7}=\frac{2}{3} \times \frac{7}{5}=\frac{14}{15}$.
6. We also learnt how to multiply two decimal numbers. While multiplying two decimal numbers, first multiply them as whole numbers. Count the number of digits to the right of the decimal point in both the decimal numbers. Add the number of digits counted. Put the decimal point in the product by counting the digits from its rightmost place. The count should be the sum obtained earlier.
For example, $0.5 \times 0.7=0.35$
7. To multiply a decimal number by 10, 100 or 1000, we move the decimal point in the number to the right by as many places as there are zeros over 1.
Thus $0.53 \times 10=5.3, \quad 0.53 \times 100=53, \quad 0.53 \times 1000=530$
8. We have seen how to divide decimal numbers.
(a) To divide a decimal number by a whole number, we first divide them as whole numbers. Then place the decimal point in the quotient as in the decimal number.
For example, $8.4 \div 4=2.1$
Note that here we consider only those divisions in which the remainder is zero.
(b) To divide a decimal number by 10, 100 or 1000 , shift the digits in the decimal number to the left by as many places as there are zeros over 1 , to get the quotient.
So, $23.9 \div 10=2.39,23.9 \div 100=0.239,23.9 \div 1000=0.0239$
(c) While dividing two decimal numbers, first shift the decimal point to the right by equal number of places in both, to convert the divisor to a whole number. Then divide. Thus, $2.4 \div 0.2=24 \div 2=12$.
## Data Handling
### Representative Values
You might be aware of the term average and would have come across statements involving the term 'average' in your day-to-day life:
- Isha spends on an average of about 5 hours daily for her studies.
- The average temperature at this time of the year is about 40 degree celsius.
- The average age of pupils in my class is 12 years.
- The average attendance of students in a school during its final examination was 98 per cent.
Many more of such statements could be there. Think about the statements given above. Do you think that the child in the first statement studies exactly for 5 hours daily?
Or, is the temperature of the given place during that particular time always 40 degrees?
Or, is the age of each pupil in that class 12 years? Obviously not.
Then what do these statements tell you?
By average we understand that Isha, usually, studies for 5 hours. On some days, she may study for less number of hours and on the other days she may study longer.
Similarly, the average temperature of 40 degree celsius, means that, very often, the temperature at this time of the year is around 40 degree celsius. Sometimes, it may be less than 40 degree celsius and at other times, it may be more than $40^{\circ} \mathrm{C}$.
Thus, we realise that average is a number that represents or shows the central tendency of a group of observations or data. Since the average lies between the highest and the lowest value of the given data, we say that average is a measure of the central tendency of the group of data. Different forms of data need different forms of representative or central value to describe it. One of these representative values is the "Arithmetic mean". You will learn about the other representative values in the later part of the chapter.
### Arithmetic Mean
The most common representative value of a group of data is the arithmetic mean or the mean. To understand this in a better way, let us look at the following example:
Two vessels contain 20 litres and 60 litres of milk respectively. What is the amount that each vessel would have, if both share the milk equally? When we ask this question we are seeking the arithmetic mean.
In the above case, the average or the arithmetic mean would be
$$
\frac{\text { Total quantity of milk }}{\text { Number of vessels }}=\frac{20+60}{2} \text { litres }=40 \text { litres. }
$$
Thus, each vessel would have 40 litres of milk.
The average or Arithmetic Mean (A.M.) or simply mean is defined as follows:
$$
\text { mean }=\frac{\text { Sum of all observations }}{\text { Number of observations }}
$$
Consider these examples.
EXAMPLE 1 Ashish studies for 4 hours, 5 hours and 3 hours respectively on three consecutive days. How many hours does he study daily on an average?
Solution The average study time of Ashish would be
$$
\frac{\text { Total number of study hours }}{\text { Number of days for which he studied }}=\frac{4+5+3}{3} \text { hours }=4 \text { hours per day }
$$
Thus, we can say that Ashish studies for 4 hours daily on an average.
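For a quick check of such averages, Python's statistics module computes the same arithmetic mean defined above. An optional sketch:

```python
from statistics import mean

milk = [20, 60]             # litres in the two vessels
study_hours = [4, 5, 3]     # Example 1

print(mean(milk))           # 40 litres in each vessel after sharing equally
print(mean(study_hours))    # 4 hours per day
```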
EXAMPLE 2 A batsman scored the following number of runs in six innings:
$$
36,35,50,46,60,55
$$
Calculate the mean runs scored by him in an inning.
Solution Total runs $=36+35+50+46+60+55=282$.
To find the mean, we find the sum of all the observations and divide it by the number of observations.
Therefore, in this case, mean $=\frac{282}{6}=47$. Thus, the mean runs scored in an inning are 47.
Where does the arithmetic mean lie?
## TRY THESE
How would you find the average of your study hours for the whole week?
## Think, Discuss and Write
Consider the data in the above examples and think on the following:
- Is the mean bigger than each of the observations?
- Is it smaller than each observation?
Discuss with your friends. Frame one more example of this type and answer the same questions.
You will find that the mean lies in between the greatest and the smallest observations.
In particular, the mean of two numbers will always lie between the two numbers. For example the mean of 5 and 11 is $\frac{5+11}{2}=8$, which lies between 5 and 11 .
Can you use this idea to show that between any two fractional numbers, you can find as many fractional numbers as you like? For example, between $\frac{1}{2}$ and $\frac{1}{4}$ you have their average $\frac{\frac{1}{2}+\frac{1}{4}}{2}=\frac{3}{8}$ and then between $\frac{1}{2}$ and $\frac{3}{8}$, you have their average $\frac{7}{16}$ and so on.
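The idea of repeatedly averaging to find new fractions between two given fractions can be tried with exact arithmetic. The following is an illustrative Python sketch (not from the textbook) using the standard `fractions` module; the names `a` and `b` are ours:

```python
from fractions import Fraction

a, b = Fraction(1, 2), Fraction(1, 4)
# Keep replacing one end point by the average of the two end points.
for _ in range(4):
    avg = (a + b) / 2
    print(avg)          # 3/8, 7/16, 15/32, 31/64, ...
    b = avg             # squeeze towards a = 1/2
```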
## Try These
1. Find the mean of your sleeping hours during one week.
2. Find at least 5 numbers between $\frac{1}{2}$ and $\frac{1}{3}$.
#### Range
The difference between the highest and the lowest observation gives us an idea of the spread of the observations. This can be found by subtracting the lowest observation from the highest observation. We call the result the range of the observation. Look at the following example:
EXAMPLE 3 The ages in years of 10 teachers of a school are:
$$
32,41,28,54,35,26,23,33,38,40
$$
(i) What is the age of the oldest teacher and that of the youngest teacher?
(ii) What is the range of the ages of the teachers?
(iii) What is the mean age of these teachers?
## Solution
(i) Arranging the ages in ascending order, we get:
$23,26,28,32,33,35,38,40,41,54$
We find that the age of the oldest teacher is 54 years and the age of the youngest teacher is 23 years.
(ii) Range of the ages of the teachers $=(54-23)$ years $=31$ years
(iii) Mean age of the teachers
$$
\begin{aligned}
& =\frac{23+26+28+32+33+35+38+40+41+54}{10} \text { years } \\
& =\frac{350}{10} \text { years }=35 \text { years }
\end{aligned}
$$
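For Example 3, the range and the mean can also be computed directly from the list of ages. A minimal Python sketch (ours, not the textbook's) is:

```python
ages = [32, 41, 28, 54, 35, 26, 23, 33, 38, 40]   # ages (in years) of the 10 teachers

oldest, youngest = max(ages), min(ages)
age_range = oldest - youngest                      # range = highest - lowest observation
mean_age = sum(ages) / len(ages)

print(oldest, youngest)   # 54 23
print(age_range)          # 31 years
print(mean_age)           # 35.0 years
```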
## EXERCISE 3.1
1. Find the range of heights of any ten students of your class.
2. Organise the following marks in a class assessment, in a tabular form.
$$
4,6,7,5,3,5,4,5,2,6,2,5,1,9,6,5,8,4,6,7
$$
(i) Which number is the highest?
(ii) Which number is the lowest?
(iii) What is the range of the data?
(iv) Find the arithmetic mean.
3. Find the mean of the first five whole numbers.
4. A cricketer scores the following runs in eight innings:
$$
58,76,40,35,46,45,0,100 .
$$
Find the mean score.
5. Following table shows the points of each player scored in four games:
| Player | $\begin{array}{c}\text { Game } \\ \mathbf{1}\end{array}$ | $\begin{array}{c}\text { Game } \\ \mathbf{2}\end{array}$ | $\begin{array}{c}\text { Game } \\ \mathbf{3}\end{array}$ | $\begin{array}{c}\text { Game } \\ \mathbf{4}\end{array}$ |
| :---: | :---: | :---: | :---: | :---: |
| A | 14 | 16 | 10 | 10 |
| B | 0 | 8 | 6 | 4 |
| $\mathbf{C}$ | 8 | 11 | $\begin{array}{c}\text { Did not } \\ \text { Play }\end{array}$ | 13 |
Now answer the following questions:
(i) Find the mean to determine A's average number of points scored per game.
(ii) To find the mean number of points per game for $\mathrm{C}$, would you divide the total points by 3 or by 4 ? Why?
(iii) B played in all the four games. How would you find the mean?
(iv) Who is the best performer?
6. The marks (out of 100) obtained by a group of students in a science test are 85,76 , 90, 85, 39, 48, 56, 95, 81 and 75. Find the:
(i) Highest and the lowest marks obtained by the students. (ii) Range of the marks obtained.
(iii) Mean marks obtained by the group.
7. The enrolment in a school during six consecutive years was as follows: $1555,1670,1750,2013,2540,2820$
Find the mean enrolment of the school for this period.
8. The rainfall (in $\mathrm{mm}$ ) in a city on 7 days of a certain week was recorded as follows:
| Day | Mon | Tue | Wed | Thurs | Fri | Sat | Sun |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| $\begin{array}{l}\text { Rainfall } \\ \text { (in mm) }\end{array}$ | 0.0 | 12.2 | 2.1 | 0.0 | 20.5 | 5.5 | 1.0 |
(i) Find the range of the rainfall in the above data.
(ii) Find the mean rainfall for the week.
(iii) On how many days was the rainfall less than the mean rainfall.
9. The heights of 10 girls were measured in $\mathrm{cm}$ and the results are as follows: $135,150,139,128,151,132,146,149,143,141$.
(i) What is the height of the tallest girl? (ii) What is the height of the shortest girl?
(iii) What is the range of the data?
(iv) What is the mean height of the girls?
(v) How many girls have heights more than the mean height.
### Mode
As we have said Mean is not the only measure of central tendency or the only form of representative value. For different requirements from a data, other measures of central tendencies are used.
## Look at the following example
To find out the weekly demand for different sizes of shirt, a shopkeeper kept records of sales of sizes $90 \mathrm{~cm}, 95 \mathrm{~cm}, 100 \mathrm{~cm}, 105 \mathrm{~cm}, 110 \mathrm{~cm}$. Following is the record for a week:
| Size | $90 \mathrm{~cm}$ | $95 \mathrm{~cm}$ | $100 \mathrm{~cm}$ | $105 \mathrm{~cm}$ | $110 \mathrm{~cm}$ | Total |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: |
| Number of Shirts Sold | 8 | 22 | 32 | 37 | 6 | $\mathbf{105}$ |
If he found the mean number of shirts sold, do you think that he would be able to decide which shirt sizes to keep in stock?
$$
\text { Mean of total shirts sold }=\frac{\text { Total number of shirts sold }}{\text { Number of different sizes of shirts }}=\frac{105}{5}=21
$$
Should he obtain 21 shirts of each size? If he does so, will he be able to cater to the needs of the customers? The shopkeeper, on looking at the record, decides to procure shirts of sizes $95 \mathrm{~cm}$, $100 \mathrm{~cm}, 105 \mathrm{~cm}$. He decided to postpone the procurement of the shirts of other sizes because of their small number of buyers.
## Look at another example
The owner of a readymade dress shop says, "The most popular size of dress I sell is the size $90 \mathrm{~cm}$."
Observe that here also, the owner is concerned about the number of dresses of different sizes sold. She is, however, looking at the dress size that is sold the most. This is another representative value for the data. The highest occurring event is the sale of size $90 \mathrm{~cm}$. This representative value is called the mode of the data.
**The mode of a set of observations is the observation that occurs most often.**
Example 4 Find the mode of the given set of numbers: 1, 1, 2, 4, 3, 2, 1, 2, 2, 4
Solution Arranging the numbers with same values together, we get
$$
1,1,1,2,2,2,2,3,4,4
$$
Mode of this data is 2 because it occurs more frequently than other observations.
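Counting how often each value occurs is exactly what finding the mode amounts to. Here is a small illustrative Python sketch (not part of the textbook) that uses `collections.Counter` for the data of Example 4:

```python
from collections import Counter

data = [1, 1, 2, 4, 3, 2, 1, 2, 2, 4]
counts = Counter(data)            # how many times each observation occurs
print(counts)                     # Counter({2: 4, 1: 3, 4: 2, 3: 1})
print(counts.most_common(1))      # [(2, 4)] -> the mode is 2, occurring 4 times
```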
#### Mode of Large Data
Putting the same observations together and counting them is not easy if the number of observations is large. In such cases we tabulate the data. Tabulation can begin by putting tally marks and finding the frequency, as you did in your previous class.
Look at the following example:
EXAMPLE 5 Following are the margins of victory in the football matches of a league.
$$
\begin{aligned}
& 1,3,2,5,1,4,6,2,5,2,2,2,4,1,2,3,1,1,2,3,2 \\
& 6,4,3,2,1,1,4,2,1,5,3,3,2,3,2,4,2,1,2
\end{aligned}
$$
Find the mode of this data.
Solution Let us put the data in a tabular form:
## TRY THESE
Find the mode of
(i) 2, 6, 5, 3, 0, 3, 4, 3, 2, 4, 5, 2,4
| Margins of Victory | Tally Bars | Number of Matches |
| :---: | :---: | :---: |
| 1 | \|\|\|\|\| \|\|\|\| | 9 |
| 2 | \|\|\|\|\| \|\|\|\|\| \|\|\|\| | 14 |
| 3 | \|\|\|\|\| \|\| | 7 |
| 4 | \|\|\|\|\| | 5 |
| 5 | \|\|\| | 3 |
| 6 | \|\| | 2 |
| | Total | 40 |
(ii) $2,14,16,12,14,14,16,14,10,14,18,14$

Looking at the table, we can quickly say that 2 is the 'mode' since 2 has occurred the highest number of times. Thus, most of the matches have been won with a victory margin of 2 goals.
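The tabulation done above with tally marks is the same as building a frequency table. A short Python sketch (ours, for illustration only) reproduces the table and the mode for the victory-margin data:

```python
from collections import Counter

margins = [1, 3, 2, 5, 1, 4, 6, 2, 5, 2, 2, 2, 4, 1, 2, 3, 1, 1, 2, 3, 2,
           6, 4, 3, 2, 1, 1, 4, 2, 1, 5, 3, 3, 2, 3, 2, 4, 2, 1, 2]

freq = Counter(margins)                       # frequency of each margin of victory
for margin in sorted(freq):
    print(margin, freq[margin])               # 1:9, 2:14, 3:7, 4:5, 5:3, 6:2
print("mode =", freq.most_common(1)[0][0])    # mode = 2
```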
## Think, Discuss and Write
Can a set of numbers have more than one mode?
EXAMPLE 6 Find the mode of the numbers: 2, 2, 2, 3, 3, 4, 5, 5, 5, 6, 6, 8
Solution Here, 2 and 5 both occur three times. Therefore, they both are modes of the data.
## Do This
1. Record the age in years of all your classmates. Tabulate the data and find the mode.
2. Record the heights in centimetres of your classmates and find the mode.
## TRY THESE
1. Find the mode of the following data:
$12,14,12,16,15,13,14,18,19,12,14,15,16,15,16,16,15$,
$17,13,16,16,15,15,13,15,17,15,14,15,13,15,14$
2. Heights (in $\mathrm{cm}$ ) of 25 children are given below:
$168,165,163,160,163,161,162,164,163,162,164,163,160,163,160$, $165,163,162,163,164,163,160,165,163,162$
What is the mode of their heights? What do we understand by mode here?
Whereas mean gives us the average of all observations of the data, the mode gives that observation which occurs most frequently in the data.
Let us consider the following examples:
(a) You have to decide upon the number of chapattis needed for 25 people called for a feast.
(b) A shopkeeper selling shirts has decided to replenish her stock.
(c) We need to find the height of the door needed in our house.
(d) When going on a picnic, if only one fruit can be bought for everyone, which is the fruit that we would get.
In which of these situations can we use the mode as a good estimate?
Consider the first statement. Suppose the number of chapattis needed by each person
is $2,3,2,3,2,1,2,3,2,2,4,2,2,3,2,4,4,2,3,2,4,2,4,3,5$. The mode of the data is 2 chapattis. If we use mode as the representative value for this data, then we need 50 chapattis only, 2 for each of the 25 persons. However, the total number would clearly be inadequate. Would mean be an appropriate representative value?
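You can check this reasoning with a quick computation. The following Python sketch (not from the textbook) compares the total number of chapattis actually needed with the estimates that mode and mean would give:

```python
chapattis = [2, 3, 2, 3, 2, 1, 2, 3, 2, 2, 4, 2, 2, 3, 2, 4, 4, 2, 3, 2, 4, 2, 4, 3, 5]

total_needed = sum(chapattis)             # 68 chapattis are actually needed
mode_estimate = 2 * len(chapattis)        # mode (2) for each of 25 persons = 50: too few
mean = sum(chapattis) / len(chapattis)    # 2.72 chapattis per person
mean_estimate = mean * len(chapattis)     # 68.0: matches the total requirement

print(total_needed, mode_estimate, mean_estimate)
```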
For the third statement the height of the door is related to the height of the persons using that door. Suppose there are 5 children and 4 adults using the door and the height of each of the 5 children is around $135 \mathrm{~cm}$. The mode for the heights is $135 \mathrm{~cm}$. Should we get a door that is $144 \mathrm{~cm}$ high? Would all the adults be able to go through that door? It is clear that mode is not the appropriate representative value for this data. Would mean be an appropriate representative value here?
Why not? Which representative value of height should be used to decide the door height?
Similarly analyse the rest of the statements and find the representative value useful for that issue.
## Try These
Discuss with your friends and give
(a) Two situations where mean would be an appropriate representative value to use, and
(b) Two situations where mode would be an appropriate representative value to use.
### Median
We have seen that in some situations, arithmetic mean is an appropriate measure of central tendency whereas in some other situations, mode is the appropriate measure of central tendency.
Let us now look at another example. Consider a group of 17 students with the following heights (in cm): 106, 110, 123, 125, 117, 120, 112, 115, 110, 120, 115, 102, 115, 115, 109, 115, 101.
The games teacher wants to divide the class into two groups so that each group has
equal number of students, one group has students with height lesser than a particular height and the other group has students with heights greater than the particular height. How would she do that?
Let us see the various options she has:
(i) She can find the mean. The mean is
$$
\begin{aligned}
& \frac{106+110+123+125+117+120+112+115+110+120+115+102+115+115+109+115+101}{17} \\
& =\frac{1930}{17}=113.5
\end{aligned}
$$
So, if the teacher divides the students into two groups on the basis of this mean height, such that one group has students of height less than the mean height and the other group has students with height more than the mean height, then the groups would be of unequal size. They would have 7 and 10 members respectively.
(ii) The second option for her is to find mode. The observation with highest frequency is $115 \mathrm{~cm}$, which would be taken as mode.
There are 7 children below the mode and 10 children at the mode and above the mode. Therefore, we cannot divide the group into equal parts.
Let us therefore think of an alternative representative value or measure of central tendency. For doing this we again look at the given heights (in $\mathrm{cm}$ ) of students and arrange them in ascending order. We have the following observations:
$101,102,106,109,110,110,112,115,115,115,115,115,117,120,120,123,125$
The middle value in this data is 115 because this value divides the students into two equal groups of 8 students each. This value is called the Median. Median refers to the value which lies in the middle of the data (when arranged in an increasing or decreasing order) with half of the observations above it and the other half below it. The games teacher decides to keep the middle student as a referee in the game.

## TRY THESE

Your friend found the median and the mode of a given data. Describe and correct your friend's error, if any:
$$
35,32,35,42,38,32,34
$$
Median $=42$, Mode $=32$
Here, we consider only those cases where number of observations is odd.
Thus, in a given data, arranged in ascending or descending order, the median gives us the middle observation.
Note that in general, we may not get the same value for median and mode.
Thus we realise that mean, mode and median are the numbers that are the representative values of a group of observations or data. They lie between the minimum and maximum values of the data. They are also called the measures of the central tendency.
EXAMPLE 7 Find the median of the data: 24, 36, 46, 17, 18, 25, 35
Solution We arrange the data in ascending order, we get 17, 18, 24, 25, 35, 36,46
Median is the middle observation. Therefore 25 is the median.
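For an odd number of observations, the median is simply the middle entry of the sorted data. A minimal Python sketch (ours) for Example 7:

```python
data = [24, 36, 46, 17, 18, 25, 35]

data.sort()                  # 17, 18, 24, 25, 35, 36, 46
middle = len(data) // 2      # index of the middle observation (3 for 7 observations)
print(data[middle])          # 25 is the median
```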
## EXERCISE 3.2
1. The scores in mathematics test (out of 25) of 15 students is as follows:
$$
19,25,23,20,9,20,15,10,5,16,25,20,24,12,20
$$
Find the mode and median of this data. Are they same?
2. The runs scored in a cricket match by 11 players is as follows:
$$
6,15,120,50,100,80,10,15,8,10,15
$$
Find the mean, mode and median of this data. Are the three same?
3. The weights (in kg) of 15 students of a class are:
$$
38,42,35,37,45,50,32,43,43,40,36,38,43,38,47
$$
(i) Find the mode and median of this data.
(ii) Is there more than one mode?
4. Find the mode and median of the data: $13,16,12,14,19,12,14,13,14$
5. Tell whether the statement is true or false:
(i) The mode is always one of the numbers in a data.
(ii) The mean is one of the numbers in a data.
(iii) The median is always one of the numbers in a data.
(iv) The data $6,4,3,8,9,12,13,9$ has mean 9.
### Use of Bar Graphs with a Different Purpose
We have seen last year how information collected could be first arranged in a frequency distribution table and then this information could be put as a visual representation in the form of pictographs or bar graphs. You can look at the bar graphs and make deductions about the data. You can also get information based on these bar graphs. For example, you can say that the mode is the longest bar if the bar represents the frequency.
#### Choosing a Scale
We know that a bar graph is a representation of numbers using bars of uniform width and the lengths of the bars depend upon the frequency and the scale you have chosen. For example, in a bar graph where numbers in units are to be shown, the graph represents one unit length for one observation and if it has to show numbers in tens or hundreds, one unit length can represent 10 or 100 observations. Consider the following examples:
EXAMPLE 8 Two hundred students of $6^{\text {th }}$ and $7^{\text {th }}$ classes were asked to name their favourite colour so as to decide upon what should be the colour of their school building. The results are shown in the following table. Represent the given data on a bar graph.
| Favourite Colour | Red | Green | Blue | Yellow | Orange |
| :--- | :---: | :---: | :---: | :---: | :---: |
| Number of Students | 43 | 19 | 55 | 49 | 34 |
Answer the following questions with the help of the bar graph:
(i) Which is the most preferred colour and which is the least preferred?
(ii) How many colours are there in all? What are they?
SOLUTION Choose a suitable scale as follows:
Start the scale at 0 . The greatest value in the data is 55 , so end the scale at a value greater than 55 , such as 60 . Use equal divisions along the axes, such as increments of 10 . You
know that all the bars would lie between 0 and 60 . We choose the scale such that the length between 0 and 60 is neither too long nor too small. Here we take 1 unit for 10 students.
We then draw and label the graph as shown. From the bar graph we conclude that
(i) Blue is the most preferred colour (Because the bar representing Blue is the tallest).
(ii) Green is the least preferred colour. (Because the bar representing Green is the shortest).
(iii) There are five colours. They are Red, Green, Blue, Yellow and Orange. (These are observed on the horizontal line)
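If you want to draw such a bar graph on a computer, the widely used `matplotlib` library (not something the textbook uses) can do it; the scale of 1 unit = 10 students corresponds to the y-axis ticks below. This is only an illustrative sketch:

```python
import matplotlib.pyplot as plt

colours = ["Red", "Green", "Blue", "Yellow", "Orange"]
students = [43, 19, 55, 49, 34]

plt.bar(colours, students)
plt.yticks(range(0, 61, 10))          # scale from 0 to 60, in steps of 10 students
plt.xlabel("Favourite colour")
plt.ylabel("Number of students")
plt.title("Favourite colours of 200 students")
plt.show()
```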
EXAMPLE 9Following data gives total marks (out of 600) obtained by six children of a particular class. Represent the data on a bar graph.
| Students | Ajay | Bali | Dipti | Faiyaz | Geetika | Hari |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: |
| Marks Obtained | 450 | 500 | 300 | 360 | 400 | 540 |
## Solution
(i) To choose an appropriate scale we make equal divisions taking increments of 100. Thus 1 unit will represent 100 marks. (What would be the difficulty if we choose one unit to represent 10 marks?)
(ii) Now represent the data on the bar graph.
## Drawing double bar graph
Consider the following two collections of data giving the average daily hours of sunshine in two cities, Aberdeen and Margate, for all the twelve months of the year. These cities are at high latitudes and hence have only a few hours of sunshine on many days of the year.
| Average hours of Sunshine | Jan. | Feb. | Mar. | April | May | June | July | Aug. | Sept. | Oct. | Nov. | Dec. |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| In Margate | 2 | $3 \frac{1}{4}$ | 4 | 4 | $7 \frac{3}{4}$ | 8 | $7 \frac{1}{2}$ | 7 | $6 \frac{1}{4}$ | 6 | 4 | 2 |
| In Aberdeen | $1 \frac{1}{2}$ | 3 | $3 \frac{1}{2}$ | 6 | $5 \frac{1}{2}$ | $6 \frac{1}{2}$ | $5 \frac{1}{2}$ | 5 | $4 \frac{1}{2}$ | 4 | 3 | $1 \frac{3}{4}$ |
By drawing individual bar graphs you could answer questions like
(i) In which month does each city have maximum sunlight? or
(ii) In which months does each city have minimum sunlight?
However, to answer questions like "In a particular month, which city has more sunshine hours", we need to compare the average hours of sunshine of both the cities. To do this we will learn to draw what is called a double bar graph giving the information of both cities side-by-side.
This bar graph (Fig 3.1) shows the average sunshine of both the cities.
Fig 3.1
For each month we have two bars, the heights of which give the average hours of sunshine in each city. From this we can infer that except for the month of April, there is always more sunshine in Margate than in Aberdeen. You could put together a similar bar graph for your area or for your city.
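The same month-by-month comparison that the double bar graph shows visually can be done with a small computation. The following Python sketch (ours; the decimal values stand for the fractions in the table) prints which city is sunnier in each month:

```python
months   = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
            "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
margate  = [2, 3.25, 4, 4, 7.75, 8, 7.5, 7, 6.25, 6, 4, 2]
aberdeen = [1.5, 3, 3.5, 6, 5.5, 6.5, 5.5, 5, 4.5, 4, 3, 1.75]

for month, m, a in zip(months, margate, aberdeen):
    sunnier = "Margate" if m > a else "Aberdeen"
    print(month, sunnier)     # Aberdeen only in April; Margate in every other month
```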
Let us look at another example more related to us.
EXAMPLE 10 A mathematics teacher wants to see whether the new technique of teaching she applied after the quarterly test was effective or not. She takes
## Try These
1. The bar graph (Fig 3.2) shows the result of a survey to test water resistant watches made by different companies.
Each of these companies claimed that their watches were water resistant. After a test the above results were revealed.
Fig 3.2
(a) Can you work out a fraction of the number of watches that leaked to the number tested for each company?
(b) Could you tell on this basis which company has better watches?
2. Sale of English and Hindi books in the years 1995, 1996, 1997 and 1998 are given below:
| Years | $\mathbf{1 9 9 5}$ | $\mathbf{1 9 9 6}$ | $\mathbf{1 9 9 7}$ | $\mathbf{1 9 9 8}$ |
| :--- | :---: | :---: | :---: | :---: |
| English | 350 | 400 | 450 | 620 |
| Hindi | 500 | 525 | 600 | 650 |
Draw a double bar graph and answer the following questions:
(a) In which year was the difference in the sale of the two language books least?
(b) Can you say that the demand for English books rose faster? Justify.
## EXERCISE 3.3
1. Use the bar graph (Fig 3.3) to answer the following questions.
(a) Which is the most popular pet?
(b) How many students have dog as a pet?
Fig 3.3
Fig 3.4
2. Read the bar graph (Fig 3.4) which shows the number of books sold by a bookstore during five consecutive years and answer the following questions:
(i) About how many books were sold in 1989? 1990? 1992?
(ii) In which year were about 475 books sold? About 225 books sold?
(iii) In which years were fewer than 250 books sold?
(iv) Can you explain how you would estimate the number of books sold in 1989?
3. Number of children in six different classes are given below. Represent the data on a bar graph.
| Class | Fifth | Sixth | Seventh | Eighth | Ninth | Tenth |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: |
| Number of Children | 135 | 120 | 95 | 100 | 90 | 80 |
(a) How would you choose a scale?
(b) Answer the following questions:
(i) Which class has the maximum number of children? And the minimum?
(ii) Find the ratio of the number of students of class sixth to the number of students of class eighth.
4. The performance of a student in $1^{\text {st }}$ Term and $2^{\text {nd }}$ Term is given. Draw a double bar graph choosing appropriate scale and answer the following:
| Subject | English | Hindi | Maths | Science | S. Science |
| :--- | :---: | :---: | :---: | :---: | :---: |
| $\mathbf{1}^{\text {st }}$ Term (M.M. 100) | 67 | 72 | 88 | 81 | 73 |
| 2 $^{\text {nd }}$ Term (M.M. 100) | 70 | 65 | 95 | 85 | 75 |
(i) In which subject, has the child improved his performance the most?
(ii) In which subject is the improvement the least?
(iii) Has the performance gone down in any subject?
5. Consider this data collected from a survey of a colony.
| Favourite Sport | Cricket | Basket Ball | Swimming | Hockey | Athletics |
| :--- | :---: | :---: | :---: | :---: | :---: |
| Watching | 1240 | 470 | 510 | 430 | 250 |
| Participating | 620 | 320 | 320 | 250 | 105 |
(i) Draw a double bar graph choosing an appropriate scale.
What do you infer from the bar graph?
(ii) Which sport is most popular?
(iii) Which is more preferred, watching or participating in sports?
6. Take the data giving the minimum and the maximum temperature of various cities given in the beginning of this Chapter (Table 3.1). Plot a double bar graph using the data and answer the following:
(i) Which city has the largest difference in the minimum and maximum temperature on the given date?
(ii) Which is the hottest city and which is the coldest city?
(iii) Name two cities where maximum temperature of one was less than the minimum temperature of the other.
(iv) Name the city which has the least difference between its minimum and the maximum temperature.
## What HAVE We Discussed?
1. Average is a number that represents or shows the central tendency of a group of observations or data.
2. Arithmetic mean is one of the representative values of data.
3. Mode is another form of central tendency or representative value. The mode of a set of observations is the observation that occurs most often.
4. Median is also a form of representative value. It refers to the value which lies in the middle of the data with half of the observations above it and the other half below it.
5. A bar graph is a representation of numbers using bars of uniform widths.
6. Double bar graphs help to compare two collections of data at a glance.
## Simple Equations

### A Mind-Reading Game!
The teacher has said that she would be starting a new chapter in mathematics and it is going to be simple equations. Appu, Sarita and Ameena have revised what they learnt in algebra chapter in Class VI. Have you? Appu, Sarita and Ameena are excited because they have constructed a game which they call mind reader and they want to present it to the whole class.
The teacher appreciates their enthusiasm and invites them to present their game. Ameena begins; she asks Sara to think of a number, multiply it by 4 and add 5 to the product. Then, she asks Sara to tell the result. She says it is 65 . Ameena instantly declares that the number Sara had thought of is 15. Sara nods. The whole class including Sara is surprised.
It is Appu's turn now. He asks Balu to think of a number, multiply it by 10 and subtract 20 from the product. He then asks Balu what his result is? Balu says it is 50. Appu immediately tells the number thought by Balu. It is 7 , Balu confirms it.
Everybody wants to know how the 'mind reader' presented by Appu, Sarita and Ameena works. Can you see how it works? After studying this chapter and chapter 12, you will very well know how the game works.
### Setting up of an Equation
Let us take Ameena's example. Ameena asks Sara to think of a number. Ameena does not know the number. For her, it could be anything $1,2,3, \ldots, 11, \ldots, 100, \ldots$ Let us denote this unknown number by a letter, say $x$. You may use $y$ or $t$ or some other letter in place of $x$. It does not matter which letter we use to denote the unknown number Sara has thought of. When Sara multiplies the number by 4 , she gets $4 x$. She then adds 5 to the product, which gives $4 x+5$. The value of $(4 x+5)$ depends on the value of $x$. Thus if $x=1,4 x+5=4 \times 1+5=9$. This means that if Sara had 1 in her mind, her result would have been 9 . Similarly, if she thought of 5 , then for $x=5,4 x+5=4 \times 5+5=25$; Thus if Sara had chosen 5, the result would have been 25 . To find the number thought by Sara let us work backward from her answer 65 . We have to find $x$ such that
$$
4 x+5=65 \qquad \text{(4.1)}
$$
Solution to the equation will give us the number which Sara held in her mind.
Let us similarly look at Appu's example. Let us call the number Balu chose as $y$. Appu asks Balu to multiply the number by 10 and subtract 20 from the product. That is, from $y$, Balu first gets $10 y$ and from there $(10 y-20)$. The result is known to be 50 .
Therefore,
$$
10 y-20=50 \qquad \text{(4.2)}
$$
The solution of this equation will give us the number Balu had thought of.
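Working backward from the result is what the mind reader does. As a rough illustration (the function names are ours, not from the textbook), the two rules can be inverted like this:

```python
# Ameena's rule: result = 4*x + 5,  so the number is x = (result - 5) / 4
# Appu's rule:   result = 10*y - 20, so the number is y = (result + 20) / 10

def ameena_mind_reader(result):
    return (result - 5) / 4

def appu_mind_reader(result):
    return (result + 20) / 10

print(ameena_mind_reader(65))   # 15.0  (Sara's number)
print(appu_mind_reader(50))     # 7.0   (Balu's number)
```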
### Review of what We Know
Note, (4.1) and (4.2) are equations. Let us recall what we learnt about equations in Class VI. An equation is a condition on a variable. In equation (4.1), the variable is $x$; in equation (4.2), the variable is $y$.
The word variable means something that can vary, i.e. change. A variable takes on different numerical values; its value is not fixed. Variables are usually denoted by letters of the alphabet, such as $x, y, z, l, m, n, p$, etc. From variables, we form expressions. The expressions are formed by performing operations like addition, subtraction, multiplication and division on the variables. From $x$, we formed the expression $(4 x+5)$. For this, first we multiplied $x$ by 4 and then added 5 to the product. Similarly, from $y$, we formed the expression $(10 y-20)$. For this, we multiplied $y$ by 10 and then subtracted 20 from the product. All these are examples of expressions.
The value of an expression thus formed depends upon the chosen value of the variable. As we have already seen, when $x=1$, $4x+5=9$; when $x=5$, $4x+5=25$. Similarly,
$$
\begin{aligned}
\text{when } x &= 15, & 4x+5 &= 4 \times 15+5=65; \\
\text{when } x &= 0, & 4x+5 &= 4 \times 0+5=5; \text{ and so on.}
\end{aligned}
$$
Equation (4.1) is a condition on the variable $x$. It states that the value of the expression $(4 x+5)$ is 65 . The condition is satisfied when $x=15$. It is the solution to the equation $4 x+5=65$. When $x=5,4 x+5=25$ and not 65 . Thus $x=5$ is not a solution to the equation. Similarly, $x=0$ is not a solution to the equation. No value of $x$ other than 15 satisfies the condition $4 x+5=65$.
## TRY THESE
The value of the expression $(10 y-20)$ depends on the value of $y$. Verify this by giving five different values to $y$ and finding for each $y$ the value of $(10 y-20)$. From the different values of $(10 y-20)$ you obtain, do you see a solution to $10 y-20=50$ ? If there is no solution, try giving more values to $y$ and find whether the condition $10 y-20=50$ is met.
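One way to do this Try These systematically is to tabulate the value of the expression for several trial values, as in this small Python sketch (ours):

```python
for y in [0, 3, 5, 7, 10]:        # a few trial values of y
    print(y, 10 * y - 20)         # the expression equals 50 only when y = 7
```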
### What is an Equation?
In an equation there is always an equality sign. The equality sign shows that the value of the expression to the left of the sign (the left hand side or LHS) is equal to the value of the expression to the right of the sign (the right hand side or RHS). In equation (4.1), the LHS is $(4 x+5)$ and the RHS is 65 . In equation (4.2), the LHS is $(10 y-20)$ and the RHS is 50.
If there is some sign other than the equality sign between the LHS and the RHS, it is not an equation. Thus, $4 x+5>65$ is not an equation.
It says that, the value of $(4 x+5)$ is greater than 65 .
Similarly, $4 x+5<65$ is not an equation. It says that the value of $(4 x+5)$ is smaller than 65.
In equations, we often find that the RHS is just a number. In Equation (4.1), it is 65 and in equation (4.2), it is 50 . But this need not be always so. The RHS of an equation may be an expression containing the variable. For example, the equation
$$
4 x+5=6 x-25
$$
has the expression $(4 x+5)$ on the left and $(6 x-25)$ on the right of the equality sign.
In short, an equation is a condition on a variable. The condition is that two expressions should have equal value. Note that at least one of the two expressions must contain the variable.
We also note a simple and useful property of equations. The equation $4 x+5=65$ is the same as $65=4 x+5$. Similarly, the equation $6 x-25=4 x+5$ is the same as $4 x+5=6 x-25$. An equation remains the same, when the expressions on the left and on the right are interchanged. This property is often useful in solving equations.
EXAMPLE 1 Write the following statements in the form of equations:
(i) The sum of three times $x$ and 11 is 32 .
(ii) If you subtract 5 from 6 times a number, you get 7 .
(iii) One fourth of $m$ is 3 more than 7.
(iv) One third of a number plus 5 is 8.
## Solution
(i) Three times $x$ is $3 x$.
Sum of $3 x$ and 11 is $3 x+11$. The sum is 32 .
The equation is $3 x+11=32$.
(ii) Let us say the number is $z ; z$ multiplied by 6 is $6 z$.
Subtracting 5 from $6 z$, one gets $6 z-5$. The result is 7 .
The equation is $6 z-5=7$.
(iii) One fourth of $m$ is $\frac{m}{4}$.
It is greater than 7 by 3 . This means the difference $\left(\frac{m}{4}-7\right)$ is 3 .
The equation is $\frac{m}{4}-7=3$.
(iv) Take the number to be $n$. One third of $n$ is $\frac{n}{3}$. This one-third plus 5 is $\frac{n}{3}+5$. It is 8 . The equation is $\frac{n}{3}+5=8$.
EXAMPLE 2 Convert the following equations into statement form:
(i) $x-5=9$
(ii) $5 p=20$
(iii) $3 n+7=1$
(iv) $\frac{m}{5}-2=6$
Solution (i) Taking away 5 from $x$ gives 9.
(ii) Five times a number $p$ is 20 .
(iii) Add 7 to three times $n$ to get 1 .
(iv) You get 6, when you subtract 2 from one-fifth of a number $m$.
What is important to note is that for a given equation, not just one, but many statement forms can be given. For example, for Equation (i) above, you can say:
Subtract 5 from $x$, you get 9.
or The number $x$ is 5 more than 9.
or The number $x$ is greater by 5 than 9.
or The difference between $x$ and 5 is 9, and so on.

## TRY THESE

Write at least one other form for each of the equations (ii), (iii) and (iv).
EXAMPLE 3 Consider the following situation:
Raju's father's age is 5 years more than three times Raju's age. Raju's father is 44 years old. Set up an equation to find Raju's age.
Solution We do not know Raju's age. Let us take it to be $y$ years. Three times Raju's age is $3 y$ years. Raju's father's age is 5 years more than $3 y$; that is, Raju's father is $(3 y+5)$ years old. It is also given that Raju's father is 44 years old.
Therefore,
$$
3 y+5=44
$$
This is an equation in $y$. It will give Raju's age when solved.
EXAMPLE 4 A shopkeeper sells mangoes in two types of boxes, one small and one large. A large box contains as many as 8 small boxes plus 4 loose mangoes. Set up an equation which gives the number of mangoes in each small box. The number of mangoes in a large box is given to be 100 .
SOLUTION Let a small box contain $m$ mangoes. A large box contains 4 more than 8 times $m$, that is, $8 m+4$ mangoes. But this is given to be 100 . Thus
$$
8 m+4=100
$$
You can get the number of mangoes in a small box by solving this equation.
## EXERCISE 4.1
1. Complete the last column of the table.
| $\begin{array}{c}\text { S. } \\ \text { No. }\end{array}$ | Equation | Value | $\begin{array}{c}\text { Say, whether the Equation } \\ \text { is Satisfied. (Yes/ No) }\end{array}$ |
| :---: | :---: | :---: | :---: |
| (i) | $x+3=0$ | $x=3$ | |
| (ii) | $x+3=0$ | $x=0$ | |
| (iii) | $x+3=0$ | $x=-3$ | |
| (iv) | $x-7=1$ | $x=7$ | |
| (v) | $x-7=1$ | $x=8$ | |
| (vi) | $5 x=25$ | $x=0$ | |
| (vii) | $5 x=25$ | $x=5$ | |
| (viii) | $5 x=25$ | $x=-5$ | |
| (ix) | $\frac{m}{3}=2$ | $m=-6$ | |
| (x) | $\frac{m}{3}=2$ | $m=0$ | |
| (xi) | $\frac{m}{3}=2$ | $m=6$ | |
2. Check whether the value given in the brackets is a solution to the given equation or not:
(a) $n+5=19(n=1)$
(b) $7 n+5=19(n=-2)$
(c) $7 n+5=19(n=2)$
(d) $4 p-3=13(p=1)$
(e) $4 p-3=13(p=-4)$
(f) $4 p-3=13(p=0)$
3. Solve the following equations by trial and error method:
(i) $5 p+2=17$
(ii) $3 m-14=4$
4. Write equations for the following statements:
(i) The sum of numbers $x$ and 4 is 9 .
(ii) 2 subtracted from $y$ is 8 .
(iii) Ten times $a$ is 70 .
(iv) The number $b$ divided by 5 gives 6 .
(v) Three-fourth of $t$ is 15 .
(vi) Seven times $m$ plus 7 gets you 77.
(vii) One-fourth of a number $x$ minus 4 gives 4 .
(viii) If you take away 6 from 6 times $y$, you get 60 .
(ix) If you add 3 to one-third of $z$, you get 30 .
5. Write the following equations in statement forms:
(i) $p+4=15$
(ii) $m-7=3$
(iii) $2 m=7$
(iv) $\frac{m}{5}=3$
(v) $\frac{3 m}{5}=6$
(vi) $3 p+4=25$
(vii) $4 p-2=18$
(viii) $\frac{p}{2}+2=8$
6. Set up an equation in the following cases:
(i) Irfan says that he has 7 marbles more than five times the marbles Parmit has. Irfan has 37 marbles. (Take $m$ to be the number of Parmit's marbles.)
(ii) Laxmi's father is 49 years old. He is 4 years older than three times Laxmi's age. (Take Laxmi's age to be $y$ years.)
(iii) The teacher tells the class that the highest marks obtained by a student in her class is twice the lowest marks plus 7 . The highest score is 87 . (Take the lowest score to be $l$.)
(iv) In an isosceles triangle, the vertex angle is twice either base angle. (Let the base angle be $b$ in degrees. Remember that the sum of angles of a triangle is 180 degrees).
#### Solving an Equation
Consider an equality
$$
8-3=4+1 \qquad \text{(4.5)}
$$
The equality (4.5) holds, since both its sides are equal (each is equal to 5).
- Let us now add 2 to both sides; as a result
LHS $=8-3+2=5+2=7 \quad$ RHS $=4+1+2=5+2=7$.
Again the equality holds (i.e., its LHS and RHS are equal).
Thus if we add the same number to both sides of an equality, it still holds.
- Let us now subtract 2 from both the sides; as a result,
$$
\text { LHS }=8-3-2=5-2=3 \quad \mathrm{RHS}=4+1-2=5-2=3 \text {. }
$$
Again, the equality holds.
Thus if we subtract the same number from both sides of an equality, it still holds.
- Similarly, if we multiply or divide both sides of the equality by the same non-zero number, it still holds.
For example, let us multiply both the sides of the equality by 3 , we get
LHS $=3 \times(8-3)=3 \times 5=15, \quad$ RHS $=3 \times(4+1)=3 \times 5=15$.
The equality holds.
Let us now divide both sides of the equality by 2.
$\mathrm{LHS}=(8-3) \div 2=5 \div 2=\frac{5}{2}$
$\operatorname{RHS}=(4+1) \div 2=5 \div 2=\frac{5}{2}=\mathrm{LHS}$
Again, the equality holds.
If we take any other equality, we shall find the same conclusions.
Suppose we do not observe these rules. Specifically, suppose we add different numbers to the two sides of an equality. We shall find in this case that the equality does not hold (i.e., its two sides are not equal). For example, let us take again equality (4.5),
$$
8-3=4+1
$$
add 2 to the LHS and 3 to the RHS. The new LHS is $8-3+2=5+2=7$ and the new RHS is $4+1+3=5+3=8$. The equality does not hold, because the new LHS and RHS are not equal.
Thus if we fail to do the same mathematical operation with the same number on both sides of an equality, the equality may not hold.
An equality that involves variables is an equation.
These conclusions are also valid for equations, as in each equation the variable represents a number only.
Often an equation is said to be like a weighing balance. Doing a mathematical operation on an equation is like adding weights to or removing weights from the pans of a weighing balance.
An equation is like a weighing balance with equal weights on both its pans, in which case the arm of the balance is exactly horizontal. If we add the same weights to both the pans, the arm remains horizontal. Similarly, if we remove the same weights from both the pans, the arm remains horizontal. On the other hand if we add different weights to the pans or remove different weights from them, the balance is tilted; that is, the arm of the balance does not remain horizontal.
We use this principle for solving an equation. Here, of course, the balance is imaginary and numbers can be used as weights that can be physically balanced against each other. This is the real purpose in presenting the principle. Let us take some examples.
- Consider the equation: $x+3=8 \qquad (4.6)$
We shall subtract 3 from both sides of this equation.
The new LHS is $x+3-3=x$ and the new RHS is $8-3=5$
Since this does not disturb the balance, we have
$$
\text { New LHS }=\text { New RHS } \quad \text { or } \quad x=5
$$
which is exactly what we want, the solution of the equation (4.6).

Why should we subtract 3, and not some other number? Try adding 3. Will it help? Why not? It is because subtracting 3 reduces the LHS to just $x$.

To confirm whether we are right, we shall put $x=5$ in the original equation. We get LHS $=x+3=5+3=8$, which is equal to the RHS as required.
By doing the right mathematical operation (i.e., subtracting 3 ) on both the sides of the equation, we arrived at the solution of the equation.
- Let us look at another equation $x-3=10 \qquad (4.7)$
What should we do here? We should add 3 to both the sides. By doing so, we shall retain the balance and also the LHS will reduce to just $x$.
New LHS $=x-3+3=x, \quad$ New RHS $=10+3=13$
Therefore, $x=13$, which is the required solution.
By putting $x=13$ in the original equation (4.7) we confirm that the solution is correct:
LHS of original equation $=x-3=13-3=10$
This is equal to the RHS as required.
Similarly, let us look at the equations
$$
\begin{aligned}
& 5 y=35 & \text{(4.8)} \\
& \frac{m}{2}=5 & \text{(4.9)}
\end{aligned}
$$
In the first case, we shall divide both the sides by 5 . This will give us just $y$ on LHS
$$
\text { New LHS }=\frac{5 y}{5}=\frac{5 \times y}{5}=y, \quad \text { New RHS }=\frac{35}{5}=\frac{5 \times 7}{5}=7
$$
Therefore,
$$
y=7
$$
This is the required solution. We can substitute $y=7$ in Eq. (4.8) and check that it is satisfied.
In the second case, we shall multiply both sides by 2 . This will give us just $m$ on the LHS
The new LHS $=\frac{m}{2} \times 2=m$. The new RHS $=5 \times 2=10$.
Hence, $m=10$ (It is the required solution. You can check whether the solution is correct). One can see that in the above examples, the operation we need to perform depends on the equation. Our attempt should be to get the variable in the equation separated. Sometimes, for doing so we may have to carry out more than one mathematical operation. Let us solve some more equations with this in mind.
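The two steps used above (undo the addition or subtraction, then undo the multiplication) work for any equation of the form $ax + b = c$. The sketch below (ours, for illustration; it assumes $a \neq 0$) performs exactly those operations on both sides:

```python
def solve_linear(a, b, c):
    """Solve a*x + b = c by doing the same operation on both sides."""
    rhs = c - b        # Step 1: subtract b from both sides  ->  a*x = c - b
    return rhs / a     # Step 2: divide both sides by a      ->  x = (c - b) / a

print(solve_linear(1, 3, 8))    # 5.0, the solution of x + 3 = 8
print(solve_linear(5, 0, 35))   # 7.0, the solution of 5y = 35
print(solve_linear(4, 5, 65))   # 15.0, the solution of 4x + 5 = 65
```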
EXAMPLE 5 Solve: (a) $3 n+7=25$ (b) $2 p-1=23$
## Solution
(a) We go stepwise to separate the variable $n$ on the LHS of the equation. The LHS is $3 n+7$. We shall first subtract 7 from it so that we get $3 n$. From this, in the next step we shall divide by 3 to get $n$. Remember we must do the same operation on both sides of the equation. Therefore, subtracting 7 from both sides,
$$
3 n+7-7=25-7
$$
$$
3 n=18
$$
Now divide both sides by 3 ,
$$
\frac{3 n}{3}=\frac{18}{3}
$$
or
$$
n=6 \text {, which is the solution. }
$$
(b) What should we do here? First we shall add 1 to both the sides:
$$
2 p-1+1=23+1
$$
or
$$
2 p=24
$$
Now divide both sides by 2 , we get $\frac{2 p}{2}=\frac{24}{2}$
or
$$
p=12 \text {, which is the solution. }
$$
One good practice you should develop is to check the solution you have obtained. Although we have not done this for (a) above, let us do it for this example.
Let us put the solution $p=12$ back into the equation.
$$
\begin{aligned}
\text { LHS } & =2 p-1=2 \times 12-1=24-1 \\
& =23=\text { RHS }
\end{aligned}
$$
The solution is thus checked for its correctness.
Why do you not check the solution of (a) also?
We are now in a position to go back to the mind-reading game presented by Appu, Sarita, and Ameena and understand how they got their answers. For this purpose, let us look at the equations (4.1) and (4.2) which correspond respectively to Ameena's and Appu's examples.
First consider the equation $4 x+5=65$.
Subtracting 5 from both sides, $4 x+5-5=65-5$.
i.e. $4 x=60$
Divide both sides by 4 ; this will separate $x$. We get $\frac{4 x}{4}=\frac{60}{4}$
or $\quad x=15$, which is the solution. (Check, if it is correct.)
- Now consider, $10 y-20=50$
Adding 20 to both sides, we get $10 y-20+20=50+20$ or $10 y=70$
Dividing both sides by 10 , we get $\frac{10 y}{10}=\frac{70}{10}$
or $\quad y=7$, which is the solution. (Check if it is correct.)
You will realise that exactly these were the answers given by Appu, Sarita and Ameena. They had learnt to set up equations and solve them. That is why they could construct their mind reader game and impress the whole class. We shall come back to this in Section 4.7.
## EXERCISE 4.2
1. Give first the step you will use to separate the variable and then solve the equation:
(a) $x-1=0$
(b) $x+1=0$
(c) $x-1=5$
(d) $x+6=2$
(e) $y-4=-7$
(f) $y-4=4$
(g) $y+4=4$
(h) $y+4=-4$
2. Give first the step you will use to separate the variable and then solve the equation:
(a) $3 l=42$
(b) $\frac{b}{2}=6$
(c) $\frac{p}{7}=4$
(d) $4 x=25$
(e) $8 y=36$
(f) $\frac{z}{3}=\frac{5}{4}$
(g) $\frac{a}{5}=\frac{7}{15}$
(h) $20 t=-10$
3. Give the steps you will use to separate the variable and then solve the equation:
(a) $3 n-2=46$
(b) $5 m+7=17$
(c) $\frac{20 p}{3}=40$
(d) $\frac{3 p}{10}=6$
4. Solve the following equations:
(a) $10 p=100$
(b) $10 p+10=100$
(c) $\frac{p}{4}=5$
(d) $\frac{-p}{3}=5$
(e) $\frac{3 p}{4}=6$
(f) $3 s=-9$
(g) $3 s+12=0$
(h) $3 s=0$
(i) $2 q=6$
(j) $2 q-6=0$
(k) $2 q+6=0$
(l) $2 q+6=12$
### More Equations
Let us practise solving some more equations. While solving these equations, we shall learn about transposing a number, i.e., moving it from one side to the other. We can transpose a number instead of adding or subtracting it from both sides of the equation.
EXAMPLE 6 Solve: $12 p-5=25$
## Solution
- Adding 5 on both sides of the equation,
$$
12 p-5+5=25+5 \quad \text { or } \quad 12 p=30
$$
- Dividing both sides by 12 ,
$$
\frac{12 p}{12}=\frac{30}{12} \text { or } p=\frac{5}{2}
$$
Check Putting $p=\frac{5}{2}$ in the LHS of the equation,
$$
\begin{aligned}
\text { LHS } & =12 \times \frac{5}{2}-5=6 \times 5-5 \\
& =30-5=25=\mathrm{RHS}
\end{aligned}
$$
Note, adding 5 to both sides is the same as changing the side of $(-5)$.
$$
\begin{aligned}
& 12 p-5=25 \\
& 12 p=25+5
\end{aligned}
$$
Changing side is called transposing. While transposing a number, we change its sign. As we have seen, while solving equations one commonly used operation is adding or subtracting the same number on both sides of the equation. Transposing a number (i.e., changing the side of the number) is the same as adding or subtracting the number from both sides. In doing so, the sign of the number has to be changed. What applies to numbers also applies to expressions. Let us take two more examples of transposing.
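These transposition steps can also be checked symbolically on a computer. As an aside (the textbook does not use any software), the `sympy` library solves such equations directly:

```python
from sympy import symbols, Eq, solve

p, n = symbols("p n")

print(solve(Eq(12 * p - 5, 25), p))   # [5/2]
print(solve(Eq(3 * n + 7, 25), n))    # [6]
```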
We shall now solve two more equations. As you can see they involve brackets, which have to be solved before proceeding.
EXAMPLE 7 Solve
(a) $4(m+3)=18$
(b) $-2(x+3)=8$
## Solution
(a) $4(m+3)=18$
Let us divide both the sides by 4. This will remove the brackets in the LHS We get,
$$
m+3=\frac{18}{4} \text { or } \quad m+3=\frac{9}{2}
$$
or $\quad m=\frac{9}{2}-3($ transposing 3 to $\mathrm{RHS})$
or $\quad m=\frac{3}{2} \quad$ (required solution) $\left(\right.$ as $\left.\frac{9}{2}-3=\frac{9}{2}-\frac{6}{2}=\frac{3}{2}\right)$
Check LHS $=4\left[\frac{3}{2}+3\right]=4 \times \frac{3}{2}+4 \times 3=2 \times 3+4 \times 3$ [put $m=\frac{3}{2}$ ]
$$
=6+12=18=\mathrm{RHS}
$$
(b) $-2(x+3)=8$
We divide both sides by (-2), so as to remove the brackets in the LHS, we get,
$x+3=-\frac{8}{2} \quad$ or $\quad x+3=-4$
i.e., $x=-4-3 \quad$ (transposing 3 to RHS) $\quad$ or $\quad x=-7 \quad$ (required solution)
Check
$$
\begin{aligned}
\mathrm{LHS} & =-2(-7+3)=-2(-4) \\
& =8=\mathrm{RHS} \text { as required. }
\end{aligned}
$$
### Applications of Simple Equations to Practical Situations
We have already seen examples in which we have taken statements in everyday language and converted them into simple equations. We also have learnt how to solve simple equations. Thus we are ready to solve puzzles/problems from practical situations. The method is first to form equations corresponding to such situations and then to solve those equations to give the solution to the puzzles/problems. We begin with what we have already seen [Example 1 (i) and (iii), Section 4.2].
Example 8 The sum of three times a number and 11 is 32 . Find the number.
## Solution
- If the unknown number is taken to be $x$, then three times the number is $3 x$ and the sum of $3 x$ and 11 is 32 . That is, $3 x+11=32$
- To solve this equation, we transpose 11 to RHS, so that
$3 x=32-11$ or $3 x=21$
Now, divide both sides by 3
So
$$
x=\frac{21}{3}=7
$$
The required number is 7 . (We may check it by taking 3 times 7 and adding 11 to it. It gives 32 as required.)
EXAMPLE 9 Find a number, such that one-fourth of the number is 3 more than 7.
## TRY THESE
(i) When you multiply a number by 6 and subtract 5 from the product, you get 7 . Can you tell what the number is?
(ii) What is that number one third of which added to 5 gives 8 ?
## Solution
- Let us take the unknown number to be $y$; one-fourth of $y$ is $\frac{y}{4}$.
This number $\left(\frac{y}{4}\right)$ is more than 7 by 3 .
Hence we get the equation for $y$ as $\frac{y}{4}-7=3$
- To solve this equation, first transpose 7 to the RHS. We get $\frac{y}{4}=3+7=10$. We then multiply both sides of the equation by 4, to get
$$
\frac{y}{4} \times 4=10 \times 4 \quad \text { or } \quad y=40 \quad \text { (the required number) }
$$
Let us check the equation formed. Putting the value of $y$ in the equation,
$$
\mathrm{LHS}=\frac{40}{4}-7=10-7=3=\mathrm{RHS}, \quad \text { as required. }
$$
EXAMPLE 10 Raju's father's age is 5 years more than three times Raju's age. Find Raju's age, if his father is 44 years old.
## Solution
As given in Example 3 earlier, the equation that gives Raju's age is
$$
3 y+5=44
$$
- To solve it, we first transpose 5, to get $3 y=44-5=39$
Dividing both sides by 3 , we get $\quad y=13$
That is, Raju's age is 13 years. (You may check the answer.)
## TRY THEse
There are two types of boxes containing mangoes. Each box of the larger type contains 4 more mangoes than the number of mangoes contained in 8 boxes of the smaller type. Each larger box contains 100 mangoes. Find the number of mangoes contained in a smaller box.
## EXERCISE 4.3
1. Set up equations and solve them to find the unknown numbers in the following cases:
(a) Add 4 to eight times a number; you get 60 .
(b) One-fifth of a number minus 4 gives 3.
(c) If I take three-fourths of a number and add 3 to it, I get 21.
(d) When I subtracted 11 from twice a number, the result was 15 .
(e) Munna subtracts thrice the number of notebooks he has from 50 , he finds the result to be 8 .
(f) Ibenhal thinks of a number. If she adds 19 to it and divides the sum by 5 , she will get 8 .
(g) Anwar thinks of a number. If he takes away 7 from $\frac{5}{2}$ of the number, the result is 23.
2. Solve the following:
(a) The teacher tells the class that the highest marks obtained by a student in her class is twice the lowest marks plus 7. The highest score is 87. What is the lowest score?
(b) In an isosceles triangle, the base angles are equal. The vertex angle is $40^{\circ}$. What are the base angles of the triangle? (Remember, the sum of three angles of a triangle is $180^{\circ}$.)
(c) Sachin scored twice as many runs as Rahul. Together, their runs fell two short of a double century. How many runs did each one score?
3. Solve the following:
(i) Irfan says that he has 7 marbles more than five times the marbles Parmit has. Irfan has 37 marbles. How many marbles does Parmit have?
(ii) Laxmi's father is 49 years old. He is 4 years older than three times Laxmi's age. What is Laxmi's age?
(iii) People of Sundargram planted trees in the village garden. Some of the trees were fruit trees. The number of non-fruit trees were two more than three times the number of fruit trees. What was the number of fruit trees planted if the number of non-fruit trees planted was 77 ?
4. Solve the following riddle:
I am a number,
Tell my identity!
Take me seven times over
And add a fifty!
To reach a triple century
You still need forty!
## What have We Discussed?
1. An equation is a condition on a variable such that two expressions in the variable should have equal value.
2. The value of the variable for which the equation is satisfied is called the solution of the equation.
3. An equation remains the same if the LHS and the RHS are interchanged.
4. In case of the balanced equation, if we
(i) add the same number to both the sides, or (ii) subtract the same number from both the sides, or (iii) multiply both sides by the same number, or (iv) divide both sides by the same number, the balance remains undisturbed, i.e., the value of the LHS remains equal to the value of the RHS
5. The above property gives a systematic method of solving an equation. We carry out a series of identical mathematical operations on the two sides of the equation in such a way that on one of the sides we get just the variable. The last step is the solution of the equation.
6. Transposing means moving to the other side. Transposition of a number has the same effect as adding the same number to (or subtracting the same number from) both sides of the equation. When you transpose a number from one side of the equation to the other side, you change its sign. For example, transposing +3 from the LHS to the RHS in the equation $x+3=8$ gives $x=8-3(=5)$. We can carry out the transposition of an expression in the same way as the transposition of a number.
7. We also learnt how, using the technique of doing the same mathematical operation (for example adding the same number) on both sides, we could build an equation starting from its solution. Further, we also learnt that we could relate a given equation to some appropriate practical situation and build a practical word problem/puzzle from the equation.
## Lines and Angles
### Introduction
You already know how to identify different lines, line segments and angles in a given shape. Can you identify the different line segments and angles formed in the following figures? (Fig 5.1)
Fig 5.1 (i), (ii), (iii), (iv)
Can you also identify whether the angles made are acute or obtuse or right?
Recall that a line segment has two end points. If we extend the two end points in either direction endlessly, we get a line. Thus, we can say that a line has no end points. On the other hand, recall that a ray has one end point (namely its starting point). For example, look at the figures given below:
Fig 5.2 (i), (ii), (iii)
Here, Fig 5.2 (i) shows a line segment, Fig 5.2 (ii) shows a line and Fig 5.2 (iii) shows a ray. A line segment $\mathrm{PQ}$ is generally denoted by the symbol $\overline{\mathrm{PQ}}$, a line $\mathrm{AB}$ is denoted by the symbol $\overleftrightarrow{\mathrm{AB}}$ and the ray $\mathrm{OP}$ is denoted by $\overrightarrow{\mathrm{OP}}$. Give some examples of line segments and rays from your daily life and discuss them with your friends.

Again recall that an angle is formed when lines or line segments meet. In Fig 5.1, observe the corners. These corners are formed when two lines or line segments intersect at a point. For example, look at the figures given below:
Fig 5.3 (i) and (ii)
In Fig 5.3 (i) line segments $\mathrm{AB}$ and $\mathrm{BC}$ intersect at $\mathrm{B}$ to form angle $\mathrm{ABC}$, and again line segments $\mathrm{BC}$ and $\mathrm{AC}$ intersect at $\mathrm{C}$ to form angle $\mathrm{ACB}$ and so on. Whereas, in Fig 5.3 (ii), lines PQ and RS intersect at $\mathrm{O}$ to form four angles POS, SOQ, QOR and ROP. An angle ABC is represented by the symbol $\angle \mathrm{ABC}$. Thus, in Fig 5.3 (i), the three angles formed are $\angle \mathrm{ABC}, \angle \mathrm{BCA}$ and $\angle \mathrm{BAC}$, and in Fig 5.3 (ii), the four angles formed are $\angle \mathrm{POS}, \angle \mathrm{SOQ}, \angle \mathrm{QOR}$ and $\angle \mathrm{POR}$. You have already studied how to classify angles as acute, obtuse or right angles.

## TRY THESE

List ten figures around you and identify the acute, obtuse and right angles found in them.
Note: While referring to the measure of an angle $A B C$, we shall write $\mathrm{m} \angle \mathrm{ABC}$ as simply $\angle A B C$. The context will make it clear, whether we are referring to the angle or its measure.
### Related Angles
#### Complementary Angles
When the sum of the measures of two angles is $90^{\circ}$, the angles are called complementary angles.
Fig 5.4 shows four pairs of angles, (i) to (iv). For each pair ask: Are these two angles complementary? (The answer is Yes for some of the pairs and No for the others.)
Whenever two angles are complementary, each angle is said to be the complement of the other angle. In the above diagram (Fig 5.4), the ' $30^{\circ}$ angle' is the complement of the ' $60^{\circ}$ angle' and vice versa.
1. Can two acute angles be complement to each other?
2. Can two obtuse angles be complement to each other?
3. Can two right angles be complement to each other?
## TRY These
1. Which pairs of following angles are complementary? (Fig 5.5)
Fig 5.5 (i) to (iv)
2. What is the measure of the complement of each of the following angles?
(i) $45^{\circ}$
(ii) $65^{\circ}$
(iii) $41^{\circ}$
(iv) $54^{\circ}$
3. The difference in the measures of two complementary angles is $12^{\circ}$. Find the measures of the angles.
#### Supplementary Angles
Let us now look at the following pairs of angles (Fig 5.6):
Fig 5.6 (i) to (iv)
Do you notice that the sum of the measures of the angles in each of the above pairs (Fig 5.6) comes out to be $180^{\circ}$ ? Such pairs of angles are called supplementary angles. When two angles are supplementary, each angle is said to be the supplement of the other.
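Since complementary angles add up to 90° and supplementary angles add up to 180°, the complement and supplement of an angle are easy to compute. A small illustrative Python sketch (ours, not the textbook's):

```python
def complement(angle):
    return 90 - angle        # meaningful only for angles less than 90 degrees

def supplement(angle):
    return 180 - angle       # meaningful only for angles less than 180 degrees

print(complement(30))                    # 60, so 30 and 60 are complementary
print(supplement(110))                   # 70, so 110 and 70 are supplementary
print(30 + 60 == 90, 110 + 70 == 180)    # True True
```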
## Think, Discuss and Write
1. Can two obtuse angles be supplementary?
2. Can two acute angles be supplementary?
3. Can two right angles be supplementary?
## TRY THESE
1. Find the pairs of supplementary angles in Fig 5.7:
Fig 5.7 (i) to (iv)
2. What will be the measure of the supplement of each one of the following angles?
(i) $100^{\circ}$
(ii) $90^{\circ}$
(iii) $55^{\circ}$
(iv) $125^{\circ}$
3. Among two supplementary angles the measure of the larger angle is $44^{\circ}$ more than the measure of the smaller. Find their measures.
## EXERCISE 5.1

1. Find the complement of each of the following angles:
(i) (ii) (iii) (angles shown in the figure)
2. Find the supplement of each of the following angles:
(i) (ii) (iii) (angles shown in the figure)
3. Identify which of the following pairs of angles are complementary and which are supplementary.
(i) $65^{\circ}, 115^{\circ}$
(ii) $63^{\circ}, 27^{\circ}$
(iii) $112^{\circ}, 68^{\circ}$
(iv) $130^{\circ}, 50^{\circ}$
(v) $45^{\circ}, 45^{\circ}$
(vi) $80^{\circ}, 10^{\circ}$
4. Find the angle which is equal to its complement.
5. Find the angle which is equal to its supplement.
6. In the given figure, $\angle 1$ and $\angle 2$ are supplementary angles.
If $\angle 1$ is decreased, what changes should take place in $\angle 2$ so that both the angles still remain supplementary.
7. Can two angles be supplementary if both of them are:
(i) acute?
(ii) obtuse?
(iii) right?
8. An angle is greater than $45^{\circ}$. Is its complementary angle greater than $45^{\circ}$, equal to $45^{\circ}$, or less than $45^{\circ}$?
9. Fill in the blanks:
(i) If two angles are complementary, then the sum of their measures is _______.
(ii) If two angles are supplementary, then the sum of their measures is _______.
(iii) If two adjacent angles are supplementary, they form a _______.
10. In the adjoining figure, name the following pairs of angles.
(i) Obtuse vertically opposite angles
(ii) Adjacent complementary angles
(iii) Equal supplementary angles
(iv) Unequal supplementary angles
(v) Adjacent angles that do not form a linear pair
### Pairs of Lines
#### Intersecting Lines
Fig 5.8
The blackboard on its stand, the letter Y made up of line segments and the grill-door of a window (Fig 5.8), what do all these have in common? They are examples of intersecting lines.
Two lines $l$ and $\mathrm{m}$ intersect if they have a point in common. This common point $\mathrm{O}$ is their point of intersection.
## THINK, DISCUSS AND WRITE
In Fig 5.9, $\mathrm{AC}$ and $\mathrm{BE}$ intersect at $\mathrm{P}$.
$\mathrm{AC}$ and $\mathrm{BC}$ intersect at $\mathrm{C}, \mathrm{AC}$ and $\mathrm{EC}$ intersect at $\mathrm{C}$.
Try to find another ten pairs of intersecting line segments.
Should any two lines or line segments necessarily intersect? Can you find two pairs of non-intersecting line segments in the figure?
Can two lines intersect in more than one point? Think about it.
Fig 5.9
## TRY THESE
1. Find examples from your surroundings where lines intersect at right angles.
2. Find the measures of the angles made by the intersecting lines at the vertices of an equilateral triangle.
3. Draw any rectangle and find the measures of angles at the four vertices made by the intersecting lines.
4. If two lines intersect, do they always intersect at right angles?
#### Transversal
You might have seen a road crossing two or more roads or a railway line crossing several other lines (Fig 5.10). These give an idea of a transversal.
(i)
(ii)
A line that intersects two or more lines at distinct points is called a transversal. In the Fig 5.11, $p$ is a transversal to the lines $l$ and $m$.
Fig 5.11
Fig 5.12
In Fig 5.12 the line $p$ is not a transversal, although it cuts two lines $l$ and $m$. Can you say, 'why'?
#### Angles made by a Transversal
In Fig 5.13, you see lines $l$ and $m$ cut by transversal $p$. The eight angles marked 1 to 8 have their special names:
Fig 5.13
## Try These
1. Suppose two lines are given. How many transversals can you draw for these lines?
2. If a line is a transversal to three lines, how many points of intersection are there?
3. Try to identify a few transversals in your surroundings.
| Interior angles | $\angle 3, \angle 4, \angle 5, \angle 6$ |
| :---: | :---: |
| Exterior angles | $\angle 1, \angle 2, \angle 7, \angle 8$ |
| Pairs of Corresponding angles | $\angle 1$ and $\angle 5, \angle 2$ and $\angle 6$, |
| | $\angle 3$ and $\angle 7, \angle 4$ and $\angle 8$ |
| Pairs of Alternate interior angles | $\angle 3$ and $\angle 6, \angle 4$ and $\angle 5$ |
| Pairs of Alternate exterior angles | $\angle 1$ and $\angle 8, \angle 2$ and $\angle 7$ |
| $\begin{array}{l}\text { Pairs of interior angles on the } \\ \text { same side of the transversal }\end{array}$ | $\angle 3$ and $\angle 5, \angle 4$ and $\angle 6$ |
Note: Corresponding angles (like $\angle 1$ and $\angle 5$ in Fig 5.14)
(i) have different vertices
(ii) are on the same side of the transversal and
(iii) are in 'corresponding' positions (above or below, left or right) relative to the two lines.
Fig 5.14
Alternate interior angles (like $\angle 3$ and $\angle 6$ in Fig 5.15 )
(i) have different vertices
(ii) are on opposite sides of the transversal and
(iii) lie 'between' the two lines.
## TRY THESE
Name the pairs of angles in each figure:
#### Transversal of Parallel Lines
Do you remember what parallel lines are? They are lines on a plane that do not meet anywhere. Can you identify parallel lines in the following figures? (Fig 5.16)
Fig 5.16
Transversals of parallel lines give rise to quite interesting results.
## Do THIS
Take a ruled sheet of paper. Draw (in thick colour) two parallel lines $l$ and $m$. Draw a transversal $t$ to the lines $l$ and $m$. Label $\angle 1$ and $\angle 2$ as shown [Fig 5.17(i)]. Place a tracing paper over the figure drawn. Trace the lines $l, m$ and $t$.
Slide the tracing paper along $t$, until $l$ coincides with $m$.
You find that $\angle 1$ on the traced figure coincides with $\angle 2$ of the original figure. In fact, you can see all the following results by similar tracing and sliding activity.
(i) $\angle 1=\angle 2$
(ii) $\angle 3=\angle 4$
(iii) $\angle 5=\angle 6$
(iv) $\angle 7=\angle 8$
(i)
(iii)
(ii)
(iv)
Fig 5.17
This activity illustrates the following fact:
If two parallel lines are cut by a transversal, each pair of corresponding angles are equal in measure.
We use this result to get another interesting result. Look at Fig 5.18.
When $t$ cuts the parallel lines, $l, m$, we get, $\angle 3=\angle 7$ (vertically opposite angles).
But $\angle 7=\angle 8$ (corresponding angles). Therefore, $\angle 3=\angle 8$
You can similarly show that $\angle 1=\angle 6$. Thus, we have the following result :
If two parallel lines are cut by a transversal, each pair of alternate interior angles are equal.
This second result leads to another interesting property. Again, from Fig 5.18.
Fig 5.18
$\angle 3+\angle 1=180^{\circ}$ ($\angle 3$ and $\angle 1$ form a linear pair)
But $\angle 1=\angle 6$ (A pair of alternate interior angles)
Therefore, we can say that $\angle 3+\angle 6=180^{\circ}$.
Similarly, $\angle 1+\angle 8=180^{\circ}$. Thus, we obtain the following result:
If two parallel lines are cut by a transversal, then each pair of interior angles on the same side of the transversal are supplementary.
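For instance, suppose a transversal makes an angle of $70^{\circ}$ with one of two parallel lines (a sample value, not taken from any figure here). Then the corresponding angle and the alternate interior angle at the other line also measure $70^{\circ}$, while the interior angle on the same side of the transversal measures

$$
180^{\circ}-70^{\circ}=110^{\circ}
$$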
You can very easily remember these results if you can look for relevant 'shapes'.
The F-shape stands for corresponding angles:
The Z-shape stands for alternate angles.
## Do THIS
Draw a pair of parallel lines and a transversal. Verify the above three statements by actually measuring the angles.
## Try These
Lines $l \| m$; $t$ is a transversal. $\angle x=$ ?

Lines $l \| m$; $t$ is a transversal. $\angle z=$ ?

Lines $a \| b$; $c$ is a transversal. $\angle y=$ ?

Lines $l \| m$; $t$ is a transversal. $\angle x=$ ?

$t$ is a transversal. Is $\angle 1=\angle 2$ ?
### Checking for Parallel Lines
If two lines are parallel, then you know that a transversal gives rise to pairs of equal corresponding angles, equal alternate interior angles and interior angles on the same side of the transversal being supplementary.
When two lines are given, is there any method to check if they are parallel or not? You need this skill in many life-oriented situations.
A draftsman uses a carpenter's square and a straight edge (ruler) to draw these segments (Fig 5.19). He claims they are parallel. How?
Are you able to see that he has kept the corresponding angles to be equal? (What is the transversal here?)
Thus, when a transversal cuts two lines, such that pairs of corresponding angles are equal, then the lines have to be parallel.
Look at the letter Z (Fig 5.20). The horizontal segments here are parallel, because the alternate angles are equal.
When a transversal cuts two lines, such that pairs of alternate interior angles are equal, the lines have to be parallel.
Fig 5.19

Lines $l \| m$, $p \| q$; find $a, b, c, d$.

Fig 5.20

Draw a line $l$ (Fig 5.21).
Draw a line $m$, perpendicular to $l$. Again draw a line $p$, such that $p$ is perpendicular to $m$.
Thus, $p$ is perpendicular to a perpendicular to $l$.
You find $p \| l$. How? This is because you draw $p$ such that $\angle 1+\angle 2=180^{\circ}$.
Fig 5.21
Thus, when a transversal cuts two lines, such that pairs of interior angles on the same side of the transversal are supplementary, the lines have to be parallel.
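For instance, if a transversal makes interior angles of $120^{\circ}$ and $60^{\circ}$ on the same side of two lines (sample values, not from the figures below), then $120^{\circ}+60^{\circ}=180^{\circ}$, so the lines must be parallel. If instead the angles were $120^{\circ}$ and $70^{\circ}$, their sum would be $190^{\circ}$ and the lines could not be parallel.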
## TRY These
Is $l \| m$ ? Why?
Is $l \| m$ ? Why?
## EXERCISE 5.2
1. State the property that is used in each of the following statements?
(i) If $a \| b$, then $\angle 1=\angle 5$.
(ii) If $\angle 4=\angle 6$, then $a \| b$.
(iii) If $\angle 4+\angle 5=180^{\circ}$, then $a \| b$.
2. In the adjoining figure, identify
(i) the pairs of corresponding angles.
(ii) the pairs of alternate interior angles.
(iii) the pairs of interior angles on the same side of the transversal.
(iv) the vertically opposite angles.
3. In the adjoining figure, $p \| q$. Find the unknown angles.
4. Find the value of $x$ in each of the following figures if $l \| m$.
(i)
(ii)
5. In the given figure, the arms of two angles are parallel.
If $\angle \mathrm{ABC}=70^{\circ}$, then find
(i) $\angle \mathrm{DGC}$
(ii) $\angle \mathrm{DEF}$
6. In the given figures below, decide whether $l$ is parallel to $m$.
(i)
(ii)
(iii)
## What have We Discussed?
1. We recall that (i) A line-segment has two end points.
(ii) A ray has only one end point (its initial point); and
(iii) A line has no end points on either side.
2. When two lines $l$ and $m$ meet, we say they intersect; the meeting point is called the point of intersection.
When lines drawn on a sheet of paper do not meet, however far produced, we call them to be parallel lines.
## The Triangle and its Properties
### Introduction
A triangle, you have seen, is a simple closed curve made of three line segments. It has three vertices, three sides and three angles. Here is $\triangle \mathrm{ABC}$ (Fig 6.1). It has
Sides: $\quad \overline{\mathrm{AB}}, \overline{\mathrm{BC}}, \overline{\mathrm{CA}}$
Angles: $\quad \angle \mathrm{BAC}, \angle \mathrm{ABC}, \angle \mathrm{BCA}$
Vertices: A, B, C
Fig 6.1
The side opposite to the vertex A is BC. Can you name the angle opposite to the side AB? You know how to classify triangles based on the (i) sides (ii) angles.
(i) Based on Sides: Scalene, Isosceles and Equilateral triangles.
(ii) Based on Angles: Acute-angled, Obtuse-angled and Right-angled triangles.
Make paper-cut models of the above triangular shapes. Compare your models with those of your friends and discuss about them.
## TRY These
1. Write the six elements (i.e., the 3 sides and the 3 angles) of $\triangle \mathrm{ABC}$.
2. Write the:
(i) Side opposite to the vertex $\mathrm{Q}$ of $\triangle \mathrm{PQR}$
(ii) Angle opposite to the side LM of $\triangle \mathrm{LMN}$
(iii) Vertex opposite to the side RT of $\triangle \mathrm{RST}$
3. Look at Fig 6.2 and classify each of the triangles according to its
(a) Sides
(b) Angles
(i)
(iv)
(ii)
(v)
(iii)
(vi)
Fig 6.2
Now, let us try to explore something more about triangles.
### Medians of a Triangle
Given a line segment, you know how to find its perpendicular bisector by paper folding. Cut out a triangle $\mathrm{ABC}$ from a piece of paper (Fig 6.3). Consider any one of its sides, say, $\overline{\mathrm{BC}}$. By paper-folding, locate the perpendicular bisector of $\overline{\mathrm{BC}}$. The folded crease meets $\overline{\mathrm{BC}}$ at $\mathrm{D}$, its mid-point. Join $\mathrm{AD}$.
Fig 6.3
The line segment $\mathrm{AD}$, joining the mid-point of $\overline{\mathrm{BC}}$ to its opposite vertex $\mathrm{A}$ is called a median of the triangle.
Consider the sides $\overline{\mathrm{AB}}$ and $\overline{\mathrm{CA}}$ and find two more medians of the triangle.
A median connects a vertex of a triangle to the mid-point of the opposite side.
## Think, Discuss and Write
1. How many medians can a triangle have?
2. Does a median lie wholly in the interior of the triangle? (If you think that this is not true, draw a figure to show such a case).
### Altitudes of a Triangle
Make a triangular shaped cardboard ABC. Place it upright on a table. How 'tall' is the triangle? The height is the distance from vertex A (in the Fig 6.4) to the base $\overline{\mathrm{BC}}$.
From A to $\overline{\mathrm{BC}}$, you can think of many line segments (see the next Fig 6.5). Which among them
will represent its height?
The height is given by the line segment that starts from A, comes straight down to $\overline{\mathrm{BC}}$, and is perpendicular to $\overline{\mathrm{BC}}$. This line segment $\overline{\mathrm{AL}}$ is an altitude of the triangle.
An altitude has one end point at a vertex of the triangle and the other on the line containing the opposite side. Through each vertex, an altitude can be drawn.

Fig 6.5
## Think, Discuss and Write
1. How many altitudes can a triangle have?
2. Draw rough sketches of altitudes from $A$ to $\overline{\mathrm{BC}}$ for the following triangles (Fig 6.6):
(i)
(ii)
Obtuse-angled
(iii)
Fig 6.6
3. Will an altitude always lie in the interior of a triangle? If you think that this need not be true, draw a rough sketch to show such a case.
4. Can you think of a triangle in which two altitudes of the triangle are two of its sides?
5. Can the altitude and median be same for a triangle?
(Hint: For Q.No. 4 and 5, investigate by drawing the altitudes for every type of triangle).
## Do THIS
Take several cut-outs of
(i) an equilateral triangle
(ii) an isosceles triangle and
(iii) a scalene triangle.
Find their altitudes and medians. Do you find anything special about them? Discuss it with your friends.
## EXERCISE 6.1
1. In $\triangle \mathrm{PQR}, \mathrm{D}$ is the mid-point of $\overline{\mathrm{QR}}$.
$\overline{\mathrm{PM}}$ is _________.
$\mathrm{PD}$ is _________.
Is $\mathrm{QM}=\mathrm{MR}$ ?
2. Draw rough sketches for the following:
(a) In $\triangle \mathrm{ABC}, \mathrm{BE}$ is a median.
(b) In $\triangle \mathrm{PQR}, \mathrm{PQ}$ and $\mathrm{PR}$ are altitudes of the triangle.
(c) In $\triangle X Y Z, Y L$ is an altitude in the exterior of the triangle.
3. Verify by drawing a diagram if the median and altitude of an isosceles triangle can be same.
### Exterior Angle of a Triangle and its Property
## Do THIS
1. Draw a triangle $\mathrm{ABC}$ and produce one of its sides, say BC as shown in Fig 6.7. Observe the angle ACD formed at the point $C$. This angle lies in the exterior of $\triangle \mathrm{ABC}$. We call it an exterior angle of the $\triangle \mathrm{ABC}$ formed at vertex $C$.
Fig 6.7

Clearly $\angle \mathrm{BCA}$ is an adjacent angle to $\angle \mathrm{ACD}$. The remaining two angles of the triangle namely $\angle \mathrm{A}$ and $\angle \mathrm{B}$ are called the two interior opposite angles or the two remote interior angles of $\angle \mathrm{ACD}$. Now cut out (or make trace copies of) $\angle \mathrm{A}$ and $\angle \mathrm{B}$ and place them adjacent to each other as shown in Fig 6.8.
Do these two pieces together entirely cover $\angle \mathrm{ACD}$ ?
Can you say that
$m \angle \mathrm{ACD}=m \angle \mathrm{A}+m \angle \mathrm{B}$ ?
2. As done earlier, draw a triangle $\mathrm{ABC}$ and form an exterior angle ACD. Now take a protractor and measure $\angle \mathrm{ACD}, \angle \mathrm{A}$ and $\angle \mathrm{B}$.
Find the sum $\angle \mathrm{A}+\angle \mathrm{B}$ and compare it with the measure of $\angle \mathrm{ACD}$. Do you observe that $\angle \mathrm{ACD}$ is equal (or nearly equal, if there is an error in measurement) to $\angle \mathrm{A}+\angle \mathrm{B}$ ?

Fig 6.8

You may repeat the two activities as mentioned by drawing some more triangles along with their exterior angles. Every time, you will find that the exterior angle of a triangle is equal to the sum of its two interior opposite angles.
A logical step-by-step argument can further confirm this fact.
An exterior angle of a triangle is equal to the sum of its interior opposite angles.
Given: Consider $\triangle \mathrm{ABC}$.
$\angle \mathrm{ACD}$ is an exterior angle.
To Show: $m \angle \mathrm{ACD}=m \angle \mathrm{A}+m \angle \mathrm{B}$
Through C draw $\overline{\mathrm{CE}}$, parallel to $\overline{\mathrm{BA}}$.
Fig 6.9
## Justification

| Steps | Reasons |
| :--- | :--- |
| (a) $\angle 1=\angle x$ | $\overline{\mathrm{BA}}$ is parallel to $\overline{\mathrm{CE}}$ and $\overline{\mathrm{AC}}$ is a transversal. Therefore, alternate angles should be equal. |
| (b) $\angle 2=\angle y$ | $\overline{\mathrm{BA}}$ is parallel to $\overline{\mathrm{CE}}$ and $\overline{\mathrm{BD}}$ is a transversal. Therefore, corresponding angles should be equal. |
| (c) $\angle 1+\angle 2=\angle x+\angle y$ | Adding (a) and (b). |
| (d) Now, $\angle x+\angle y=m \angle \mathrm{ACD}$ | From Fig 6.9. |
| Hence, $\angle 1+\angle 2=\angle \mathrm{ACD}$ | |
The above relation between an exterior angle and its two interior opposite angles is referred to as the Exterior Angle Property of a triangle.
## Think, Discuss and Write
1. Exterior angles can be formed for a triangle in many ways. Three of them are shown here (Fig 6.10)
Fig 6.10
There are three more ways of getting exterior angles. Try to produce those rough sketches.
2. Are the exterior angles formed at each vertex of a triangle equal?
3. What can you say about the sum of an exterior angle of a triangle and its adjacent interior angle?

Example 1 Find angle $x$ in Fig 6.11.
Solution Sum of interior opposite angles $=$ Exterior angle
or $\quad 50^{\circ}+x=110^{\circ}$
or $\quad x=60^{\circ}$
Fig 6.11
## Think, Discuss and Write
1. What can you say about each of the interior opposite angles, when the exterior angle is (i) a right angle? (ii) an obtuse angle? (iii) an acute angle?
2. Can the exterior angle of a triangle be a straight angle?
## TRY THESE
1. An exterior angle of a triangle is of measure $70^{\circ}$ and one of its interior opposite angles is of measure $25^{\circ}$. Find the measure of the other interior opposite angle.
2. The two interior opposite angles of an exterior angle of a triangle are $60^{\circ}$ and $80^{\circ}$. Find the measure of the exterior angle.
3. Is something wrong in this diagram (Fig 6.12)? Comment.
Fig 6.12
## EXERCISE 6.2
1. Find the value of the unknown exterior angle $x$ in the following diagrams:
(i)
(iv)
(iii)
2. Find the value of the unknown interior angle $x$ in the following figures:
(i)
(ii)
(iii)
(iv)
(v)
(vi)

### Angle Sum Property of a Triangle
There is a remarkable property connecting the three angles of a triangle. You are going to see this through the following four activities.
1. Draw a triangle. Cut on the three angles. Rearrange them as shown in Fig 6.13 (i), (ii). The three angles now constitute one angle. This angle is a straight angle and so has measure $180^{\circ}$.
(i)
(ii)
Fig 6.13
Thus, the sum of the measures of the three angles of a triangle is $180^{\circ}$.
2. The same fact you can observe in a different way also. Take three copies of any triangle, say $\triangle \mathrm{ABC}$ (Fig 6.14).
Fig 6.14
Arrange them as in Fig 6.15.
What do you observe about $\angle 1+\angle 2+\angle 3$ ?
(Do you also see the 'exterior angle property'?)
Fig 6.15
3. Take a piece of paper and cut out a triangle, say, $\triangle \mathrm{ABC}$ (Fig 6.16).
Make the altitude AM by folding $\triangle \mathrm{ABC}$ such that it passes through $\mathrm{A}$.
Fold now the three corners such that all the three vertices A, B and $\mathrm{C}$ touch at $\mathrm{M}$.
(i)
(ii)
(iii)
Fig 6.16
You find that all the three angles form together a straight angle. This again shows that the sum of the measures of the three angles of a triangle is $180^{\circ}$.
4. Draw any three triangles, say $\triangle \mathrm{ABC}, \triangle \mathrm{PQR}$ and $\triangle \mathrm{XYZ}$ in your notebook.
Use your protractor and measure each of the angles of these triangles.
Tabulate your results
| Name of $\Delta$ | Measures of Angles | $\begin{array}{c}\text { Sum of the Measures } \\ \text { of the three Angles }\end{array}$ |
| :--- | :--- | :--- |
| $\triangle \mathrm{ABC}$ | $\mathrm{m} \angle \mathrm{A}=\mathrm{m} \angle \mathrm{B}=\mathrm{m} \angle \mathrm{C}=$ | $\mathrm{m} \angle \mathrm{A}+\mathrm{m} \angle \mathrm{B}+\mathrm{m} \angle \mathrm{C}=$ |
| $\Delta \mathrm{PQR}$ | $\mathrm{m} \angle \mathrm{P}=\mathrm{m} \angle \mathrm{Q}=\mathrm{m} \angle \mathrm{R}=$ | $\mathrm{m} \angle \mathrm{P}+\mathrm{m} \angle \mathrm{Q}+\mathrm{m} \angle \mathrm{R}=$ |
| $\Delta \mathrm{XYZ}$ | $\mathrm{m} \angle \mathrm{X}=\mathrm{m} \angle \mathrm{Y}=\mathrm{m} \angle \mathrm{Z}=$ | $\mathrm{m} \angle \mathrm{X}+\mathrm{m} \angle \mathrm{Y}+\mathrm{m} \angle \mathrm{Z}=$ |
Allowing marginal errors in measurement, you will find that the last column always gives $180^{\circ}$ (or nearly $180^{\circ}$ ).
When perfect precision is possible, this will also show that the sum of the measures of the three angles of a triangle is $180^{\circ}$.
You are now ready to give a formal justification of your assertion through logical argument.
Statement The total measure of the three angles of a triangle is $180^{\circ}$.
To justify this let us use the exterior angle property of a triangle.
Fig 6.17
Given $\quad \angle 1, \angle 2, \angle 3$ are angles of $\triangle \mathrm{ABC}$ (Fig 6.17).
$\angle 4$ is the exterior angle when $\mathrm{BC}$ is extended to $\mathrm{D}$.
Justification $\angle 1+\angle 2=\angle 4$ (by exterior angle property) $\angle 1+\angle 2+\angle 3=\angle 4+\angle 3$ (adding $\angle 3$ to both the sides)
But $\angle 4$ and $\angle 3$ form a linear pair, so $\angle 4+\angle 3=180^{\circ}$. Therefore, $\angle 1+\angle 2+\angle 3=180^{\circ}$.
Let us see how we can use this property in a number of ways.
Example 2 In the given figure (Fig 6.18) find $\mathrm{m} \angle \mathrm{P}$.
Solution By angle sum property of a triangle, $\mathrm{m} \angle \mathrm{P}+47^{\circ}+52^{\circ}=180^{\circ}$
Therefore
$$
\begin{aligned}
\mathrm{m} \angle \mathrm{P} & =180^{\circ}-47^{\circ}-52^{\circ} \\
& =180^{\circ}-99^{\circ}=81^{\circ}
\end{aligned}
$$
Fig 6.18
## EXERCISE 6.3

1. Find the value of the unknown $x$ in the following diagrams:
(i)
(ii)
(iii)
(iv)
(v)
(vi)
2. Find the values of the unknowns $x$ and $y$ in the following diagrams:
(i)
(ii)
(iii)
(iv)
(v)
(vi)
## TRY THESE
1. Two angles of a triangle are $30^{\circ}$ and $80^{\circ}$. Find the third angle.
2. One of the angles of a triangle is $80^{\circ}$ and the other two angles are equal. Find the measure of each of the equal angles.
3. The three angles of a triangle are in the ratio 1:2:1. Find all the angles of the triangle. Classify the triangle in two different ways.
## Think, Discuss and Write
1. Can you have a triangle with two right angles?
2. Can you have a triangle with two obtuse angles?
3. Can you have a triangle with two acute angles?
4. Can you have a triangle with all the three angles greater than $60^{\circ}$ ?
5. Can you have a triangle with all the three angles equal to $60^{\circ}$ ?
6. Can you have a triangle with all the three angles less than $60^{\circ}$ ?
### Two Special Triangles: Equilateral and Isosceles

A triangle in which all the three sides are of equal lengths is called an equilateral triangle.
Take two copies of an equilateral triangle ABC (Fig 6.19). Keep one of them fixed. Place the second triangle on it. It fits exactly into the first. Turn it round in any way and still they fit with one another exactly. Are you able to see that when the three sides of a triangle have equal lengths then the three angles are also of the same size?
We conclude that in an equilateral triangle:
(i) all sides have same length.
(ii) each angle has measure $60^{\circ}$.
(i)
(ii)
Fig 6.19
A triangle in which two sides are of equal lengths is called an isosceles triangle.
(i)
(ii)
Fig 6.20
From a piece of paper cut out an isosceles triangle $X Y Z$, with $X Y=X Z$ (Fig 6.20). Fold it such that $Z$ lies on $Y$. The line $X M$ through $X$ is now the axis of symmetry (which you will read in Chapter 14). You find that $\angle Y$ and $\angle Z$ fit on each other exactly. $X Y$ and $\mathrm{XZ}$ are called equal sides; $\mathrm{YZ}$ is called the base; $\angle \mathrm{Y}$ and $\angle \mathrm{Z}$ are called base angles and these are also equal.
Thus, in an isosceles triangle:
(i) two sides have same length.
(ii) base angles opposite to the equal sides are equal.
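For example, suppose one base angle of an isosceles triangle measures $50^{\circ}$ (a sample value). Then the other base angle is also $50^{\circ}$, and by the angle sum property the third angle is

$$
180^{\circ}-50^{\circ}-50^{\circ}=80^{\circ}
$$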
## TRY THESE
1. Find angle $x$ in each figure:
(i)
(ii)
(iv)
(v)
(vii)
(viii)
(ix)
2. Find angles $x$ and $y$ in each figure.
(i)
(ii)
(iii)
### Sum of the Lengths of Two Sides of a Triangle
1. Mark three non-collinear spots A, B and C in your playground. Using lime powder mark the paths $\mathrm{AB}, \mathrm{BC}$ and $\mathrm{AC}$.
Ask your friend to start from $\mathrm{A}$ and reach $\mathrm{C}$, walking along one or more of these paths. She can, for example, walk first along $\overline{\mathrm{AB}}$ and then along $\overline{\mathrm{BC}}$ to reach $\mathrm{C}$; or she can walk straight along $\overline{\mathrm{AC}}$. She will naturally prefer the direct path $\mathrm{AC}$. If she takes the other path $(\overline{\mathrm{AB}}$ and then $\overline{\mathrm{BC}}$ ), she will have to walk more. In other words,
Fig 6.21
$$
\mathrm{AB}+\mathrm{BC}>\mathrm{AC}
$$
Similarly, if one were to start from B and go to A, he or she will not take the route $\overline{\mathrm{BC}}$ and $\overline{\mathrm{CA}}$ but will prefer $\overline{\mathrm{BA}}$. This is because
$$
\mathrm{BC}+\mathrm{CA}>\mathrm{AB}
$$
By a similar argument, you find that
$$
\mathrm{CA}+\mathrm{AB}>\mathrm{BC}
$$
These observations suggest that the sum of the lengths of any two sides of a triangle is greater than the third side.
2. Collect fifteen small sticks (or strips) of different lengths, say, $6 \mathrm{~cm}, 7 \mathrm{~cm}, 8 \mathrm{~cm}$, $9 \mathrm{~cm}, \ldots, 20 \mathrm{~cm}$.
Take any three of these sticks and try to form a triangle. Repeat this by choosing different combinations of three sticks.
Suppose you first choose two sticks of length $6 \mathrm{~cm}$ and $12 \mathrm{~cm}$. Your third stick has to be of length more than $12-6=6 \mathrm{~cm}$ and less than $12+6=18 \mathrm{~cm}$. Try this and find out why it is so.
To form a triangle you will need any three sticks such that the sum of the lengths of any two of them will always be greater than the length of the third stick.
This also suggests that the sum of the lengths of any two sides of a triangle is greater than the third side.

3. Draw any three triangles, say $\triangle \mathrm{ABC}, \triangle \mathrm{PQR}$ and $\triangle \mathrm{XYZ}$ in your notebook (Fig 6.22).
(i)
(ii)
(iii)
Fig 6.22
Use your ruler to find the lengths of their sides and then tabulate your results as follows:
This also strengthens our earlier guess. Therefore, we conclude that sum of the lengths of any two sides of a triangle is greater than the length of the third side.
We also find that the difference between the length of any two sides of a triangle is smaller than the length of the third side.

Example 3 Is there a triangle whose sides have lengths $10.2 \mathrm{~cm}, 5.8 \mathrm{~cm}$ and $4.5 \mathrm{~cm}$ ?
Solution Suppose such a triangle is possible. Then the sum of the lengths of any two sides would be greater than the length of the third side. Let us check this.
$$
\begin{array}{ll}
\text { Is } 4.5+5.8>10.2 ? & \text { Yes } \\
\text { Is } 5.8+10.2>4.5 ? & \text { Yes } \\
\text { Is } 10.2+4.5>5.8 ? & \text { Yes }
\end{array}
$$
Therefore, the triangle is possible.
EXAMPLE 4 The lengths of two sides of a triangle are $6 \mathrm{~cm}$ and $8 \mathrm{~cm}$. Between which two numbers can length of the third side fall?
Solution We know that the sum of two sides of a triangle is always greater than the third.
Therefore, third side has to be less than the sum of the two sides. The third side is thus, less than $8+6=14 \mathrm{~cm}$.
The side cannot be less than the difference of the two sides. Thus, the third side has to be more than $8-6=2 \mathrm{~cm}$.
The length of the third side could be any length greater than 2 and less than $14 \mathrm{~cm}$.
## EXERCISE 6.4
1. Is it possible to have a triangle with the following sides?
(i) $2 \mathrm{~cm}, 3 \mathrm{~cm}, 5 \mathrm{~cm}$
(ii) $3 \mathrm{~cm}, 6 \mathrm{~cm}, 7 \mathrm{~cm}$
(iii) $6 \mathrm{~cm}, 3 \mathrm{~cm}, 2 \mathrm{~cm}$
2. Take any point $\mathrm{O}$ in the interior of a triangle $P Q R$. Is
(i) $\mathrm{OP}+\mathrm{OQ}>\mathrm{PQ}$ ?
(ii) $\mathrm{OQ}+\mathrm{OR}>\mathrm{QR}$ ?
(iii) $\mathrm{OR}+\mathrm{OP}>\mathrm{RP}$ ?
3. $A M$ is a median of a triangle $A B C$.
Is $\mathrm{AB}+\mathrm{BC}+\mathrm{CA}>2 \mathrm{AM}$ ?
(Consider the sides of triangles
$\triangle \mathrm{ABM}$ and $\triangle \mathrm{AMC}$.)
4. $\mathrm{ABCD}$ is a quadrilateral.
Is $\mathrm{AB}+\mathrm{BC}+\mathrm{CD}+\mathrm{DA}>\mathrm{AC}+\mathrm{BD}$ ?
5. $A B C D$ is quadrilateral. Is
$\mathrm{AB}+\mathrm{BC}+\mathrm{CD}+\mathrm{DA}<2(\mathrm{AC}+\mathrm{BD}) ?$
## Think, Discuss and Write

1. Is the sum of any two angles of a triangle always greater than the third angle?
### Right-angled Triangles and Pythagoras Property
Pythagoras, a Greek philosopher of sixth century B.C. is said to have found a very important and useful property of right-angled triangles given in this section. The property is, hence, named after him. In fact, this property was known to people of many other countries too. The Indian mathematician Baudhayan has also given an equivalent form of this property. We now try to explain the Pythagoras property.
In a right-angled triangle, the sides have some special names. The side opposite to the right angle is called the hypotenuse; the other two sides are known as the legs of the right-angled triangle.
In $\triangle \mathrm{ABC}$ (Fig 6.23), the right-angle is at B. So, $\mathrm{AC}$ is the hypotenuse. $\overline{\mathrm{AB}}$ and $\overline{\mathrm{BC}}$ are the legs of $\triangle \mathrm{ABC}$.
Make eight identical copies of a right-angled triangle of any size you prefer. For example, you make a right-angled triangle whose hypotenuse is $a$ units long and the legs are of lengths $b$ units and $c$ units (Fig 6.24).
Draw two identical squares on a sheet with sides of lengths $b+c$.
You are to place four triangles in one square and the remaining four triangles in the other square, as shown in the following diagram (Fig 6.25).
Square A
Square B
Fig 6.25
Fig 6.23
Fig 6.24
The squares are identical; the eight triangles inserted are also identical.
Hence the uncovered area of square $A=$ Uncovered area of square B.
i.e., Area of inner square of square $\mathrm{A}=$ The total area of two uncovered squares in square $\mathrm{B}$.
$$
a^{2}=b^{2}+c^{2}
$$
This is Pythagoras property. It may be stated as follows:
In a right-angled triangle, the square on the hypotenuse $=$ sum of the squares on the legs.
Pythagoras property is a very useful tool in mathematics. It is formally proved as a theorem in later classes. You should be clear about its meaning.
It says that for any right-angled triangle, the area of the square on the hypotenuse is equal to the sum of the areas of the squares on the legs.
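The same labels $a, b$ and $c$ used in the activity above also give a quick algebraic check (the algebra is only a preview of what you will study later). Each big square has area $(b+c) \times(b+c)$, and each triangle has area $\frac{1}{2} b c$. So,

$$
\text { Area of square } \mathrm{A}=4 \times \frac{1}{2} b c+a^{2} \quad \text { and } \quad \text { Area of square } \mathrm{B}=4 \times \frac{1}{2} b c+b^{2}+c^{2}
$$

Since the two squares have equal areas, we again get $a^{2}=b^{2}+c^{2}$.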
Draw a right triangle, preferably on a square sheet, construct squares on its sides, compute the area of these squares and verify the theorem practically (Fig 6.26).
If you have a right-angled triangle, the Pythagoras property holds. If the Pythagoras property holds for some triangle, will the triangle be rightangled? (Such problems are known as converse problems). We will try to answer this. Now, we will show that, if there is a triangle such that sum of the squares on two of its sides is equal to the square of the third side, it must
be a right-angled triangle.
Fig 6.26
## Do THIS
1. Have cut-outs of squares with sides $4 \mathrm{~cm}$, $5 \mathrm{~cm}, 6 \mathrm{~cm}$ long. Arrange to get a triangular shape by placing the corners of the squares suitably as shown in the figure (Fig 6.27). Trace out the triangle formed. Measure each angle of the triangle. You find that there is no right angle at all.
In fact, in this case each angle will be acute! Note that $4^{2}+5^{2} \neq 6^{2}, 5^{2}+6^{2} \neq 4^{2}$ and $6^{2}+4^{2} \neq 5^{2}$.
Fig 6.27
2. Repeat the above activity with squares whose sides have lengths $4 \mathrm{~cm}, 5 \mathrm{~cm}$ and $7 \mathrm{~cm}$. You get an obtuse-angled triangle! Note that
$$
4^{2}+5^{2} \neq 7^{2} \text { etc. }
$$
This shows that Pythagoras property holds if and only if the triangle is right-angled. Hence we get this fact:
If the Pythagoras property holds, the triangle must be right-angled.
Example 5 Determine whether the triangle whose lengths of sides are $3 \mathrm{~cm}, 4 \mathrm{~cm}$, $5 \mathrm{~cm}$ is a right-angled triangle.
Solution $3^{2}=3 \times 3=9 ; 4^{2}=4 \times 4=16 ; 5^{2}=5 \times 5=25$
We find $3^{2}+4^{2}=5^{2}$.
Therefore, the triangle is right-angled.
Note: In any right-angled triangle, the hypotenuse happens to be the longest side. In this example, the side with length $5 \mathrm{~cm}$ is the hypotenuse.
Example 6 $\triangle \mathrm{ABC}$ is right-angled at $\mathrm{C}$. If $\mathrm{AC}=5 \mathrm{~cm}$ and $\mathrm{BC}=12 \mathrm{~cm}$, find the length of $\mathrm{AB}$.
Solution A rough figure will help us (Fig 6.28).
By Pythagoras property,
Fig 6.28
$$
\begin{aligned}
\mathrm{AB}^{2} & =\mathrm{AC}^{2}+\mathrm{BC}^{2} \\
& =5^{2}+12^{2}=25+144=169=13^{2}
\end{aligned}
$$
or
$$
\mathrm{AB}^{2}=13^{2} \text {. So, } \mathrm{AB}=13
$$
or the length of $\mathrm{AB}$ is $13 \mathrm{~cm}$.
Note: To identify perfect squares, you may use prime factorisation technique.
## TRY THESE
Find the unknown length $x$ in the following figures (Fig 6.29):
(i)
(ii)
(iii)
(iv)
(v)
(vi)
Fig 6.29
## EXERCISE 6.5
1. $\mathrm{PQR}$ is a triangle, right-angled at $\mathrm{P}$. If $\mathrm{PQ}=10 \mathrm{~cm}$ and $P R=24 \mathrm{~cm}$, find $\mathrm{QR}$.
2. $\mathrm{ABC}$ is a triangle, right-angled at $\mathrm{C}$. If $\mathrm{AB}=25$ $\mathrm{cm}$ and $\mathrm{AC}=7 \mathrm{~cm}$, find $\mathrm{BC}$.
3. A $15 \mathrm{~m}$ long ladder reached a window $12 \mathrm{~m}$ high from the ground on placing it against a wall at a distance $a$. Find the distance of the foot of the ladder from the wall.
4. Which of the following can be the sides of a right triangle?
(i) $2.5 \mathrm{~cm}, 6.5 \mathrm{~cm}, 6 \mathrm{~cm}$.
(ii) $2 \mathrm{~cm}, 2 \mathrm{~cm}, 5 \mathrm{~cm}$.
(iii) $1.5 \mathrm{~cm}, 2 \mathrm{~cm}, 2.5 \mathrm{~cm}$.
In the case of right-angled triangles, identify the right angles.
5. A tree is broken at a height of $5 \mathrm{~m}$ from the ground and its top touches the ground at a distance of $12 \mathrm{~m}$ from the base of the tree. Find the original height of the tree.
6. Angles $\mathrm{Q}$ and $\mathrm{R}$ of a $\triangle \mathrm{PQR}$ are $25^{\circ}$ and $65^{\circ}$. Write which of the following is true:
(i) $\mathrm{PQ}^{2}+\mathrm{QR}^{2}=\mathrm{RP}^{2}$
(ii) $\mathrm{PQ}^{2}+\mathrm{RP}^{2}=\mathrm{QR}^{2}$
(iii) $\mathrm{RP}^{2}+\mathrm{QR}^{2}=\mathrm{PQ}^{2}$
7. Find the perimeter of the rectangle whose length is $40 \mathrm{~cm}$ and a diagonal is $41 \mathrm{~cm}$.
8. The diagonals of a rhombus measure $16 \mathrm{~cm}$ and $30 \mathrm{~cm}$. Find its perimeter.
## Think, Discuss and Write
1. Which is the longest side in the triangle $P Q R$, right-angled at $P$ ?
2. Which is the longest side in the triangle $A B C$, right-angled at $B$ ?
3. Which is the longest side of a right triangle?
4. 'The diagonal of a rectangle produce by itself the same area as produced by its length and breadth'- This is Baudhayan Theorem. Compare it with the Pythagoras property.
## Do THIS
## Enrichment activity
There are many proofs for Pythagoras theorem, using 'dissection' and 'rearrangement' procedure. Try to collect a few of them and draw charts explaining them.
## What have We Discussed?
1. The six elements of a triangle are its three angles and the three sides.
2. The line segment joining a vertex of a triangle to the mid point of its opposite side is called a median of the triangle. A triangle has 3 medians.
3. The perpendicular line segment from a vertex of a triangle to its opposite side is called an altitude of the triangle. A triangle has 3 altitudes.
4. An exterior angle of a triangle is formed, when a side of a triangle is produced. At each vertex, you have two ways of forming an exterior angle.
5. A property of exterior angles:
The measure of any exterior angle of a triangle is equal to the sum of the measures of its interior opposite angles.
6. The angle sum property of a triangle:
The total measure of the three angles of a triangle is $180^{\circ}$.
7. A triangle is said to be equilateral, if each one of its sides has the same length. In an equilateral triangle, each angle has measure $60^{\circ}$.
8. A triangle is said to be isosceles, if at least two of its sides are of the same length. The non-equal side of an isosceles triangle is called its base; the base angles of an isosceles triangle have equal measure.
9. Property of the lengths of sides of a triangle:
The sum of the lengths of any two sides of a triangle is greater than the length of the third side.
The difference between the lengths of any two sides is smaller than the length of the third side. This property is useful to know if it is possible to draw a triangle when the lengths of the three sides are known.
10. In a right angled triangle, the side opposite to the right angle is called the hypotenuse and the other two sides are called its legs.
11. Pythagoras property:
In a right-angled triangle,
the square on the hypotenuse $=$ the sum of the squares on its legs.
If a triangle is not right-angled, this property does not hold good. This property is useful to decide whether a given triangle is right-angled or not.
## Comparing Quantities
### Percentage - Another Way of Comparing Quantities
Anita said that she has done better as she got 320 marks whereas Rita got only 300 . Do you agree with her? Who do you think has done better?
Mansi told them that they cannot decide who has done better by just comparing the total marks obtained because the maximum marks out of which they got the marks are not the same.
She said why don't you see the Percentages given in your report cards?
Anita's Percentage was 80 and Rita's was 83.3. So, this shows Rita has done better. Do you agree?
Percentages are numerators of fractions with denominator 100 and have been used in comparing results. Let us try to understand in detail about it.
#### Meaning of Percentage
Per cent is derived from Latin word 'per centum' meaning 'per hundred'.
Per cent is represented by the symbol $\%$ and means hundredths too. That is $1 \%$ means 1 out of hundred or one hundredth. It can be written as: $1 \%=\frac{1}{100}=0.01$
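In the same way, for example,

$$
3 \%=\frac{3}{100}=0.03 \quad \text { and } \quad 17 \%=\frac{17}{100}=0.17
$$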
To understand this, let us consider the following example. Rina made a table top of 100 different coloured tiles. She counted yellow, green, red and blue tiles separately and filled the table below. Can you help her complete the table?
| Colour | $\begin{array}{c}\text { Number } \\ \text { of Tiles }\end{array}$ | $\begin{array}{c}\text { Rate per } \\ \text { Hundred }\end{array}$ | Fraction | Written as | Read as |
| :---: | :---: | :---: | :---: | :---: | :---: |
| Yellow | 14 | 14 | $\frac{14}{100}$ | $14 \%$ | 14 per cent |
| Green | 26 | 26 | $\frac{26}{100}$ | $26 \%$ | 26 per cent |
| Red | 35 | 35 | ---- | ---- | ---- |
| Blue | 25 | ------ | --- | --- | -- |
| Total | $\mathbf{1 0 0}$ | | | | |
## TRY These
1. Find the Percentage of children of different heights for the following data.
| Height | Number of Children | In Fraction | In Percentage |
| :--- | :---: | :--- | :--- |
| $110 \mathrm{~cm}$ | 22 | | |
| $120 \mathrm{~cm}$ | 25 | | |
| $128 \mathrm{~cm}$ | 32 | | |
| $130 \mathrm{~cm}$ | 21 | | |
| Total | $\mathbf{1 0 0}$ | | |
2. A shop has the following number of shoe pairs of different sizes.
Size $2: 20 \quad$ Size $3: 30 \quad$ Size $4: 28$
Size $5: 14 \quad$ Size $6: 8$
Write this information in tabular form as done earlier and find the Percentage of each shoe size available in the shop.
## Percentages when total is not hundred
In all these examples, the total number of items adds up to 100. For example, Rina had 100 tiles in all, there were 100 children and 100 shoe pairs. How do we calculate the Percentage of an item if the total number of items does not add up to 100? In such cases, we need to convert the fraction to an equivalent fraction with denominator 100. Consider the following example. You have a necklace with twenty beads in two colours.
| Colour | $\begin{array}{c}\text { Number } \\ \text { of Beads }\end{array}$ | Fraction | Denominator Hundred | In Percentage |
| :---: | :---: | :---: | :---: | :---: |
| Red | 8 | $\frac{8}{20}$ | $\frac{8}{20} \times \frac{100}{100}=\frac{40}{100}$ | $40 \%$ |
| Blue | 12 | $\frac{12}{20}$ | $\frac{12}{20} \times \frac{100}{100}=\frac{60}{100}$ | $60 \%$ |
| Total | $\mathbf{2 0}$ | | | |
## Anwar found the Percentage of red beads like this
Out of 20 beads, the number of red beads is 8 .
Hence, out of 100 , the number of red beads is
$$
\frac{8}{20} \times 100=40(\text { out of hundred })=40 \%
$$
Asha does it like this
$$
\begin{aligned}
& \frac{8}{20}=\frac{8 \times 5}{20 \times 5} \\
& =\frac{40}{100}=40 \%
\end{aligned}
$$
We see that these three methods can be used to find the Percentage when the total does not add to give 100. In the method shown in the table, we multiply the fraction by $\frac{100}{100}$. This does not change the value of the fraction. Subsequently, only 100 remains in the denominator.
Anwar has used the unitary method. Asha has multiplied by $\frac{5}{5}$ to get 100 in the denominator. You can use whichever method you find suitable. May be, you can make your own method too.
The method used by Anwar can work for all ratios. Can the method used by Asha also work for all ratios? Anwar says Asha's method can be used only if you can find a natural number which on multiplication with the denominator gives 100. Since denominator was 20 , she could multiply it by 5 to get 100 . If the denominator was 6 , she would not have been able to use this method. Do you agree?
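For example, suppose 2 chips out of 6 were green (sample numbers, not the ones in the questions that follow). Asha's method does not work here, since no natural number multiplied by 6 gives 100, but Anwar's method still does:

$$
\frac{2}{6} \times 100=\frac{100}{3}=33 \frac{1}{3}(\text { out of hundred })=33 \frac{1}{3} \%
$$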
## TRY These
1. A collection of 10 chips with different colours is given .
| Colour | Number | Fraction | Denominator Hundred | In Percentage |
| :--- | :--- | :--- | :--- | :--- |
| Green | | | | |
| Blue | | | | |
| Red | | | | |
| Total | | | | |
Fill the table and find the percentage of chips of each colour.
2. Mala has a collection of bangles. She has 20 gold bangles and 10 silver bangles. What is the percentage of bangles of each type? Can you put it in the tabular form as done in the above example?
## Think, Discuss and Write
1. Look at the examples below and in each of them, discuss which is better for comparison.

In the atmosphere, $1 \mathrm{~g}$ of air contains:

$$
\begin{aligned}
& .78 \mathrm{~g} \text { Nitrogen } \\
& .21 \mathrm{~g} \text { Oxygen } \\
& .01 \mathrm{~g} \text { Other gas }
\end{aligned}
$$

or

$$
\begin{aligned}
& 78 \% \text { Nitrogen } \\
& 21 \% \text { Oxygen } \\
& 1 \% \text { Other gas }
\end{aligned}
$$

2. A shirt has:

$$
\begin{aligned}
& \frac{3}{5} \text { Cotton } \\
& \frac{2}{5} \text { Polyester }
\end{aligned}
$$

or

$$
\begin{aligned}
& 60 \% \text { Cotton } \\
& 40 \% \text { Polyester }
\end{aligned}
$$
#### Converting Fractional Numbers to Percentage
Fractional numbers can have different denominators. To compare fractional numbers, we need a common denominator and we have seen that it is more convenient to compare if our denominator is 100. That is, we are converting the fractions to Percentages. Let us try converting different fractional numbers to Percentages.
EXAMPLE 1 Write $\frac{1}{3}$ as per cent.
Solution We have, $\frac{1}{3}=\frac{1}{3} \times \frac{100}{100}=\frac{1}{3} \times 100 \%$
$$
=\frac{100}{3} \%=33 \frac{1}{3} \%
$$
Example 2 Out of 25 children in a class, 15 are girls. What is the percentage of girls?
Solution Out of 25 children, there are 15 girls.
Therefore, percentage of girls $=\frac{15}{25} \times 100=60$. There are $60 \%$ girls in the class.
Example 3 Convert $\frac{5}{4}$ to per cent.
Solution We have, $\frac{5}{4}=\frac{5}{4} \times 100 \%=125 \%$
From these examples, we find that the percentages related to proper fractions are less than 100 whereas percentages related to improper fractions are more than 100.
## Think, Discuss and Write
(i) Can you eat $50 \%$ of a cake?
Can you eat $100 \%$ of a cake?
Can you eat $150 \%$ of a cake?
(ii) Can a price of an item go up by $50 \%$ ? Can a price of an item go up by $100 \%$ ?
Can a price of an item go up by $150 \%$ ?
#### Converting Decimals to Percentage
We have seen how fractions can be converted to per cents. Let us now find how decimals can be converted to per cents.
Example 4 Convert the given decimals to per cents:
(a) 0.75
(b) 0.09
(c) 0.2
## Solution
(a) $0.75=0.75 \times 100 \%=\frac{75}{100} \times 100 \%=75 \%$

(b) $0.09=\frac{9}{100}=9 \%$

(c) $0.2=\frac{2}{10} \times 100 \%=20 \%$
## Try These
1. Convert the following to per cents:
(a) $\frac{12}{16}$
(b) 3.5
(c) $\frac{49}{50}$
(d) $\frac{2}{2}$
(e) 0.05
2. (i) Out of 32 students, 8 are absent. What per cent of the students are absent?
(ii) There are 25 radios, 16 of them are out of order. What per cent of radios are out of order?
(iii) A shop has 500 items, out of which 5 are defective. What per cent are defective?
(iv) There are 120 voters, 90 of them voted yes. What per cent voted yes?
#### Converting Percentages to Fractions or Decimals
We have so far converted fractions and decimals to percentages. We can also do the reverse. That is, given per cents, we can convert them to decimals or fractions. Look at the table, observe and complete it:
| Percent | $1 \%$ | $10 \%$ | $25 \%$ | $50 \%$ | $90 \%$ | $125 \%$ | $250 \%$ |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Fraction | $\frac{1}{100}$ | $\frac{10}{100}=\frac{1}{10}$ | | | | | |
| Decimal | 0.01 | 0.10 | | | | | |
## Parts always add to give a whole
In the examples for coloured tiles, for the heights of children and for gases in the air, we find that when we add the Percentages we get 100. All the parts that form the whole when added together gives the whole or $100 \%$. So, if we are given one part, we can always find out the other part. Suppose,
$30 \%$ of a given number of students are boys.
This means that if there were 100 students, 30 out of them would be boys and the remaining would be girls.
Then girls would obviously be $(100-30) \%=70 \%$.
## TRY THESE
1. $35 \%+$ _______ $\%=100 \%$
$64 \%+20 \%+$ _______ $\%=100 \%$
$45 \%=100 \%-$ _______ $\%$, $\quad 70 \%=$ _______ $\%-30 \%$
2. If $65 \%$ of students in a class have a bicycle, what per cent of the student do not have bicycles?
3. We have a basket full of apples, oranges and mangoes. If $50 \%$ are apples, $30 \%$ are oranges, then what per cent are mangoes?
Consider the expenditure made on a dress
$20 \%$ on embroidery, $50 \%$ on cloth, $30 \%$ on stitching.
Can you think of more such examples?
#### Fun with Estimation
Percentages help us to estimate the parts of an area.
Example 5 What per cent of the adjoining figure is shaded?
Solution We first find the fraction of the figure that is shaded. From this fraction, the percentage of the shaded part can be found.
You will find that half of the figure is shaded. And, $\frac{1}{2}=\frac{1}{2} \times 100 \%=50 \%$
Thus, $50 \%$ of the figure is shaded.
## TRY These
What per cent of these figures are shaded?
(i)
(ii)
Tangram
You can make some more figures yourself and ask your friends to estimate the shaded parts.
### Use of Percentages
#### Interpreting Percentages
We saw how percentages were helpful in comparison. We have also learnt to convert fractional numbers and decimals to percentages. Now, we shall learn how percentages can be used in real life. For this, we start with interpreting the following statements:
- $5 \%$ of the income is saved by Ravi.
- $20 \%$ of Meera's dresses are blue in colour.
- Rekha gets $10 \%$ on every book sold by her.
What can you infer from each of these statements?
By $5 \%$ we mean 5 parts out of 100 or we write it as $\frac{5}{100}$. It means Ravi is saving $₹ 5$ out of every ₹ 100 that he earns. In the same way, interpret the rest of the statements given above.
#### Converting Percentages to "How Many"
Consider the following examples:
Example 6 A survey of 40 children showed that $25 \%$ liked playing football. How many children liked playing football?
Solution Here, the total number of children are 40 . Out of these, $25 \%$ like playing football. Meena and Arun used the following methods to find the number. You can choose either method.
## Arun does it like this
Out of 100, 25 like playing football. So out of 40, the number of children who like playing football $=\frac{25}{100} \times 40=10$
## Meena does it like this
$$
\begin{aligned}
25 \% \text { of } 40 & =\frac{25}{100} \times 40 \\
& =10
\end{aligned}
$$
Hence, 10 children out of 40 like playing football.
## Try These
1. Find:
(a) $50 \%$ of 164
(b) $75 \%$ of 12
(c) $12 \frac{1}{2} \%$ of 64
2. $8 \%$ of the children of a class of 25 like getting wet in the rain. How many children like getting wet in the rain?
EXAMPLE 7 Rahul bought a sweater and saved ₹ 200 when a discount of $25 \%$ was given. What was the price of the sweater before the discount?
Solution Rahul has saved ₹ 200 when price of sweater is reduced by $25 \%$. This means that $25 \%$ reduction in price is the amount saved by Rahul. Let us see how Mohan and Abdul have found the original cost of the sweater.
## Mohan's solution
$25 \%$ of the original price $=₹ 200$
Let the price (in ₹) be $P$
So, $25 \%$ of $P=200$ or $\frac{25}{100} \times P=200$
or, $\frac{P}{4}=200$ or $P=200 \times 4$
Therefore, $P=800$
## Abdul's solution
₹ 25 is saved for every ₹ 100
Amount for which ₹ 200 is saved
$$
=\frac{100}{25} \times 200=₹ 800
$$
Thus both obtained the original price of sweater as ₹ 800 .
## TRY THESE
1. 9 is $25 \%$ of what number?
2. $75 \%$ of what number is 15 ?
## Exercise 7.1
1. Convert the given fractional numbers to per cents.
(a) $\frac{1}{8}$
(b) $\frac{5}{4}$
(c) $\frac{3}{40}$
(d) $\frac{2}{7}$
2. Convert the given decimal fractions to per cents.
(a) 0.65
(b) 2.1
(c) 0.02
(d) 12.35
3. Estimate what part of the figures is coloured and hence find the per cent which is coloured.
(i)
(ii)
(iii)
4. Find:
(a) $15 \%$ of 250
(b) $1 \%$ of 1 hour
(c) $20 \%$ of ₹ 2500
(d) $75 \%$ of $1 \mathrm{~kg}$
5. Find the whole quantity if
(a) $5 \%$ of it is 600 .
(b) $12 \%$ of it is ₹ 1080 .
(c) $40 \%$ of it is $500 \mathrm{~km}$.
(d) $70 \%$ of it is 14 minutes.
(e) $8 \%$ of it is 40 litres.
6. Convert given per cents to decimal fractions and also to fractions in simplest forms:
(a) $25 \%$
(b) $150 \%$
(c) $20 \%$
(d) $5 \%$
7. In a city, $30 \%$ are females, $40 \%$ are males and remaining are children. What per cent are children?
8. Out of 15,000 voters in a constituency, $60 \%$ voted. Find the percentage of voters who did not vote. Can you now find how many actually did not vote?
9. Meeta saves ₹ 4000 from her salary. If this is $10 \%$ of her salary. What is her salary?
10. A local cricket team played 20 matches in one season. It won $25 \%$ of them. How many matches did they win?
#### Ratios to Percents
Sometimes, parts are given to us in the form of ratios and we need to convert those to percentages. Consider the following example:
EXAMPLE 8 Reena's mother said, to make idlis, you must take two parts rice and one part urad dal. What percentage of such a mixture would be rice and what percentage would be urad dal?
Solution In terms of ratio we would write this as Rice : Urad dal $=2: 1$.
Now, $2+1=3$ is the total of all parts. This means $\frac{2}{3}$ part is rice and $\frac{1}{3}$ part is urad dal.
Then, percentage of rice would be $\frac{2}{3} \times 100 \%=\frac{200}{3} \%=66 \frac{2}{3} \%$.

Percentage of urad dal would be $\frac{1}{3} \times 100 \%=\frac{100}{3} \%=33 \frac{1}{3} \%$.

Example 9 If ₹ 250 is to be divided amongst Ravi, Raju and Roy, so that Ravi gets two parts, Raju three parts and Roy five parts, how much money will each get? What will it be in percentages?
Solution The parts which the three boys are getting can be written in terms of ratios as $2: 3: 5$.
Total of the parts is $2+3+5=10$.
Amounts received by each
$$
\begin{aligned}
& \frac{2}{10} \times ₹ 250=₹ 50 \\
& \frac{3}{10} \times ₹ 250=₹ 75 \\
& \frac{5}{10} \times ₹ 250=₹ 125
\end{aligned}
$$
## Percentages of money for each
$$
\begin{aligned}
& \text { Ravi gets } \frac{2}{10} \times 100 \%=20 \% \\
& \text { Raju gets } \frac{3}{10} \times 100 \%=30 \% \\
& \text { Roy gets } \frac{5}{10} \times 100 \%=50 \%
\end{aligned}
$$
## TRY THESE
1. Divide 15 sweets between Manu and Sonu so that they get $20 \%$ and $80 \%$ of them respectively.
2. If the angles of a triangle are in the ratio $2: 3: 4$, find the value of each angle.
#### Increase or Decrease as Per Cent
There are times when we need to know the increase or decrease in a certain quantity as percentage. For example, if the population of a state increased from 5,50,000 to $6,05,000$. Then the increase in population can be understood better if we say, the population increased by $10 \%$.
How do we convert the increase or decrease in a quantity as a percentage of the initial amount? Consider the following example.
Example 10 A school team won 6 games this year against 4 games won last year. What is the per cent increase?
Solution The increase in the number of wins (or amount of change) $=6-4=2$.
Percentage increase $=\frac{\text { amount of change }}{\text { original amount or base }} \times 100$
$$
=\frac{\text { increase in the number of wins }}{\text { original number of wins }} \times 100=\frac{2}{4} \times 100=50
$$
EXAMPLE 11 The number of illiterate persons in a country decreased from 150 lakhs to 100 lakhs in 10 years. What is the percentage of decrease?
SOLUTION Original amount $=$ the number of illiterate persons initially $=150$ lakhs. Amount of change $=$ decrease in the number of illiterate persons $=150-100=50$ lakhs Therefore, the percentage of decrease
$$
=\frac{\text { amount of change }}{\text { original amount }} \times 100=\frac{50}{150} \times 100=33 \frac{1}{3}
$$
## TRY THESE
1. Find Percentage of increase or decrease:
- Price of shirt decreased from ₹ 280 to ₹ 210.
- Marks in a test increased from 20 to 30.
2. My mother says, in her childhood petrol was $₹ 1$ a litre. It is $₹ 52$ per litre today. By what Percentage has the price gone up?
### Prices Related to an Item or Buying and Selling
I bought it for ₹ 600 and will sell it for ₹ 610.
The buying price of any item is known as its cost price. It is written in short as CP. The price at which you sell is known as the selling price or in short SP.
What would you say is better: to sell the item at a lower price, the same price or a higher price than your buying price? You can decide whether the sale was profitable or not depending on the $\mathrm{CP}$ and $\mathrm{SP}$. If $\mathrm{CP}<\mathrm{SP}$ then you made a profit $=\mathrm{SP}-\mathrm{CP}$.
If $\mathrm{CP}=\mathrm{SP}$ then you are in a no profit no loss situation. If $\mathrm{CP}>\mathrm{SP}$ then you have a loss $=\mathrm{CP}-\mathrm{SP}$.
Let us try to interpret the statements related to prices of items.
- A toy bought for ₹ 72 is sold at ₹ 80 .
- A T-shirt bought for ₹ 120 is sold at ₹ 100.
- A cycle bought for ₹ 800 is sold for ₹ 940 .
Let us consider the first statement.
The buying price (or CP) is ₹ 72 and the selling price (or SP) is ₹ 80. This means SP is more than CP. Hence profit made $=\mathrm{SP}-\mathrm{CP}=₹ 80-₹ 72=₹ 8$
Now try interpreting the remaining statements in a similar way.
#### Profit or Loss as a Percentage
The profit or loss can be converted to a percentage. It is always calculated on the CP. For the above examples, we can find the profit $\%$ or loss $\%$.
Let us consider the example related to the toy. We have $\mathrm{CP}=₹ 72, \mathrm{SP}=₹ 80$, Profit $=₹ 8$. To find the percentage of profit, Neha and Shekhar have used the following methods.

## Neha does it this way

$$
\begin{aligned}
\text { Profit per cent } & =\frac{\text { Profit }}{\mathrm{CP}} \times 100 \\
& =\frac{8}{72} \times 100 \\
& =\frac{100}{9}=11 \frac{1}{9}
\end{aligned}
$$

## Shekhar does it this way

On $₹ 72$ the profit is $₹ 8$

On ₹ 100, profit $=\frac{8}{72} \times 100$

$=11 \frac{1}{9}$. Thus, profit per cent $=11 \frac{1}{9}$

Now consider the T-shirt, for which $\mathrm{CP}=₹ 120$ and $\mathrm{SP}=₹ 100$, so Loss $=₹ 20$. Here,

$$
\begin{aligned}
\text { Loss per cent } & =\frac{\text { Loss }}{\mathrm{CP}} \times 100 \\
& =\frac{20}{120} \times 100 \\
& =\frac{50}{3}=16 \frac{2}{3}
\end{aligned}
$$

Try the last case.
Now we see that given any two out of the three quantities related to prices that is, CP, SP, amount of Profit or Loss or their percentage, we can find the rest.
Example 12 The cost of a flower vase is $₹ 120$. If the shopkeeper sells it at a loss of $10 \%$, find the price at which it is sold.
SOLUTION We are given that $\mathrm{CP}=₹ 120$ and Loss per cent $=10$. We have to find the SP.
## Sohan does it like this
Loss of $10 \%$ means if CP is ₹ 100,
Loss is ₹ 10
Therefore, SP would be
$₹(100-10)=₹ 90$
When CP is ₹ 100, SP is ₹ 90.
Therefore, if CP were $₹ 120$ then
SP $=\frac{90}{100} \times 120=₹ 108$
Another way: the loss is $10 \%$ of the cost price.

$$
\begin{aligned}
\text { Loss } & =\frac{10}{100} \times 120=₹ 12 \\
\text { So, } \mathrm{SP} & =₹ 120-₹ 12=₹ 108
\end{aligned}
$$

Thus, in both ways the selling price is ₹ 108.
Example 13 Selling price of a toy car is ₹ 540. If the profit made by the shopkeeper is $20 \%$, what is the cost price of this toy?
Solution We are given that $\mathrm{SP}=₹ 540$ and the Profit $=20 \%$. We need to find the $\mathrm{CP}$.
## Arun does it like this
$20 \%$ profit will mean if $\mathrm{CP}$ is $₹ 100$, profit is ₹ 20
Therefore, $\mathrm{SP}=100+20=120$
Now, when SP is ₹ 120 , then $\mathrm{CP}$ is ₹ 100.
Therefore, when SP is ₹ 540 , then $\mathrm{CP}=\frac{100}{120} \times 540=₹ 450$

You can also take the cost price to be $₹ P$. Then $\mathrm{SP}=P+20 \%$ of $P=\frac{120}{100} \times P$. So, $\frac{120}{100} \times P=540$, which gives $P=\frac{100}{120} \times 540=450$.

Thus, by both methods, the cost price is ₹ 450 .
## TRY These
1. A shopkeeper bought a chair for $₹ 375$ and sold it for $₹ 400$. Find the gain Percentage.
2. Cost of an item is ₹ 50 . It was sold with a profit of $12 \%$. Find the selling price.
3. An article was sold for $₹ 250$ with a profit of $5 \%$. What was its cost price?
4. An item was sold for $₹ 540$ at a loss of $5 \%$. What was its cost price?
### Charge Given on Borrowed Money or Simple Interest
Sohini said that they were going to buy a new scooter. Mohan asked her whether they had the money to buy it. Sohini said her father was going to take a loan from a bank. The money you borrow is known as sum borrowed or principal.
This money would be used by the borrower for some time before it is returned. For keeping this money for some time the borrower has to pay some extra money to the bank. This is known as Interest.
You can find the amount you have to pay at the end of the year by adding the sum borrowed and the interest. That is, Amount = Principal + Interest.
Interest is generally given in per cent for a period of one year. It is written as say $10 \%$ per year or per annum or in short as $10 \%$ p.a. (per annum).
$10 \%$ p.a. means on every ₹ 100 borrowed, ₹ 10 is the interest you have to pay for one year. Let us take an example and see how this works.
Example 14 Anita takes a loan of ₹ 5,000 at $15 \%$ per year as rate of interest. Find the interest she has to pay at the end of one year.

Solution The sum borrowed $=₹ 5,000$, Rate of interest $=15 \%$ per year.
This means if ₹ 100 is borrowed, she has to pay ₹ 15 as interest for one year. If she has borrowed ₹ 5,000 , then the interest she has to pay for one year
$$
=₹ \frac{15}{100} \times 5000=₹ 750
$$
So, at the end of the year she has to give an amount of ₹ $5,000+₹ 750=₹ 5,750$.
We can write a general relation to find interest for one year. Take $P$ as the principal or sum and $R \%$ as Rate per cent per annum.
Now on every ₹ 100 borrowed, the interest paid is ₹ $R$
Therefore, on ₹ $P$ borrowed, the interest paid for one year would be $\frac{R \times P}{100}=\frac{P \times R}{100}$.
#### Interest for Multiple Years
If the amount is borrowed for more than one year the interest is calculated for the period the money is kept for. For example, if Anita returns the money at the end of two years and the rate of interest is the same then she would have to pay twice the interest i.e., ₹ 750 for the first year and ₹ 750 for the second. This way of calculating interest where principal is not changed is known as simple interest. As the number of years increase the interest also increases. For ₹ 100 borrowed for 3 years at $18 \%$, the interest to be paid at the end of 3 years is $18+18+18=3 \times 18=₹ 54$.
We can find the general form for simple interest for more than one year.
We know that on a principal of $₹ P$ at $R \%$ rate of interest per year, the interest paid for one year is $\frac{R \times P}{100}$. Therefore, interest $I$ paid for $T$ years would be
$$
\frac{T \times R \times P}{100}=\frac{P \times R \times T}{100} \text { or } \frac{P R T}{100}
$$
And amount you have to pay at the end of $T$ years is $A=P+I$
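For instance (the numbers here are chosen only for illustration, not taken from an exercise), if $₹ 2,000$ is borrowed for 3 years at $10 \%$ p.a., then

$$
I=\frac{P \times R \times T}{100}=\frac{2000 \times 10 \times 3}{100}=₹ 600 \quad \text { and } \quad A=P+I=₹ 2000+₹ 600=₹ 2,600
$$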
## TRY These
1. $₹ 10,000$ is invested at $5 \%$ interest rate p.a. Find the interest at the end of one year.
2. ₹ 3,500 is given at $7 \%$ p.a. rate of interest. Find the interest which will be received at the end of two years.
3. ₹ 6,050 is borrowed at $6.5 \%$ rate of interest p.a. Find the interest and the amount to be paid at the end of 3 years.
4. ₹ 7,000 is borrowed at $3.5 \%$ rate of interest p.a. borrowed for 2 years. Find the amount to be paid at the end of the second year.
Just as in the case of prices related to items, if you are given any two of the three quantities in the relation $I=\frac{P \times T \times R}{100}$, you can find the remaining quantity.

Example 15 If Manohar pays an interest of $₹ 750$ for 2 years on a sum of $₹ 4,500$, find the rate of interest.
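One way to solve Example 15 is to rearrange $I=\frac{P \times R \times T}{100}$ for $R$ and substitute the given values:

$$
R=\frac{100 \times I}{P \times T}=\frac{100 \times 750}{4500 \times 2}=\frac{75000}{9000}=8 \frac{1}{3}
$$

Therefore, the rate of interest is $8 \frac{1}{3} \%$ per annum.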
## TrY These
1. You have $₹ 2,400$ in your account and the interest rate is $5 \%$. After how many years would you earn $₹ 240$ as interest?
2. On a certain sum the interest paid after 3 years is $₹ 450$ at $5 \%$ rate of interest per annum. Find the sum.
## EXERCISE 7.2
1. Tell what is the profit or loss in the following transactions. Also find profit per cent or loss per cent in each case.
(a) Gardening shears bought for ₹ 250 and sold for ₹ 325 .
(b) A refrigerator bought for ₹ 12,000 and sold at ₹ 13,500.
(c) A cupboard bought for ₹ 2,500 and sold at ₹ 3,000.
(d) A skirt bought for ₹ 250 and sold at ₹ 150 .
2. Convert each part of the ratio to percentage:
(a) $3: 1$
(b) $2: 3: 5$
(c) $1: 4$
(d) $1: 2: 5$
3. The population of a city decreased from 25,000 to 24,500 . Find the percentage decrease.
4. Arun bought a car for ₹ 3,50,000. The next year, the price went up to $₹ 3,70,000$. What was the percentage of price increase?
5. I buy a T.V. for ₹ 10,000 and sell it at a profit of $20 \%$. How much money do I get for it?
6. Juhi sells a washing machine for ₹ 13,500 . She loses $20 \%$ in the bargain. What was the price at which she bought it?
7. (i) Chalk contains calcium, carbon and oxygen in the ratio 10:3:12. Find the percentage of carbon in chalk.
(ii) If in a stick of chalk, carbon is $3 \mathrm{~g}$, what is the weight of the chalk stick?

8. Amina buys a book for $₹ 275$ and sells it at a loss of $15 \%$. How much does she sell it for?
9. Find the amount to be paid at the end of 3 years in each case:
(a) Principal $=₹ 1,200$ at $12 \%$ p.a.
(b) Principal $=₹ 7,500$ at $5 \%$ p.a.
10. What rate gives $₹ 280$ as interest on a sum of $₹ 56,000$ in 2 years?
11. If Meena gives an interest of $₹ 45$ for one year at $9 \%$ rate p.a., what is the sum she has borrowed?
## What HAVE WE Discussed?
1. A way of comparing quantities is percentage. Percentages are numerators of fractions with denominator 100. Per cent means per hundred.
For example $82 \%$ marks means 82 marks out of hundred.
2. Fractions can be converted to percentages and vice-versa.
For example, $\frac{1}{4}=\left(\frac{1}{4} \times 100\right) \%=25 \%$ whereas, $75 \%=\frac{75}{100}=\frac{3}{4}$
3. Decimals too can be converted to percentages and vice-versa.
For example, $0.25=(0.25 \times 100) \%=25 \%$
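The reverse conversion works the same way; for example,

$$
65 \%=\frac{65}{100}=0.65
$$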
4. Percentages are widely used in our daily life,
(a) We have learnt to find exact number when a certain per cent of the total quantity is given.
(b) When parts of a quantity are given to us as ratios, we have seen how to convert them to percentages.
(c) The increase or decrease in a certain quantity can also be expressed as percentage.
(d) The profit or loss incurred in a certain transaction can be expressed in terms of percentages.
(e) While computing interest on an amount borrowed, the rate of interest is given in terms of per cents. For example, $₹ 800$ borrowed for 3 years at $12 \%$ per annum.
## Rational Numbers

### Introduction
You began your study of numbers by counting objects around you. The numbers used for this purpose were called counting numbers or natural numbers. They are $1,2,3,4, \ldots$ By including 0 to natural numbers, we got the whole numbers, i.e., $0,1,2,3, \ldots$ The negatives of natural numbers were then put together with whole numbers to make up integers. Integers are ..., $-3,-2,-1,0,1,2,3, \ldots$. We, thus, extended the number system, from natural numbers to whole numbers and from whole numbers to integers.
You were also introduced to fractions. These are numbers of the form $\frac{\text { numerator }}{\text { denominator }}$, where the numerator is either 0 or a positive integer and the denominator, a positive integer. You compared two fractions, found their equivalent forms and studied all the four basic operations of addition, subtraction, multiplication and division on them.
In this Chapter, we shall extend the number system further. We shall introduce the concept of rational numbers along with their addition, subtraction, multiplication and division operations.
### Need for Rational Numbers
Earlier, we have seen how integers could be used to denote opposite situations involving numbers. For example, if the distance of $3 \mathrm{~km}$ to the right of a place was denoted by 3 , then the distance of $5 \mathrm{~km}$ to the left of the same place could be denoted by -5 . If a profit of ₹ 150 was represented by 150 then a loss of ₹ 100 could be written as -100 .
There are many situations similar to the above situations that involve fractional numbers. You can represent a distance of $750 \mathrm{~m}$ above sea level as $\frac{3}{4} \mathrm{~km}$. Can we represent $750 \mathrm{~m}$ below sea level in km? Can we denote the distance of $\frac{3}{4} \mathrm{~km}$ below sea level by $\frac{-3}{4}$ ? We can see $\frac{-3}{4}$ is neither an integer, nor a fractional number. We need to extend our number system to include such numbers.
### What are Rational Numbers?
The word 'rational' arises from the term 'ratio'. You know that a ratio like 3:2 can also be written as $\frac{3}{2}$. Here, 3 and 2 are natural numbers.
Similarly, the ratio of two integers $p$ and $q(q \neq 0)$, i.e., $p: q$ can be written in the form $\frac{p}{q}$. This is the form in which rational numbers are expressed.
A rational number is defined as a number that can be expressed in the form $\frac{p}{q}$, where $p$ and $q$ are integers and $q \neq 0$.
Thus, $\frac{4}{5}$ is a rational number. Here, $p=4$ and $q=5$.
Is $\frac{-3}{4}$ also a rational number? Yes, because $p=-3$ and $q=4$ are integers.
- You have seen many fractions like $\frac{3}{8}, \frac{4}{8}, 1 \frac{2}{3}$ etc. All fractions are rational numbers. Can you say why?
How about decimal numbers like $0.5,2.3$, etc.? Each of these can be written as an ordinary fraction and, hence, is a rational number. For example, $0.5=\frac{5}{10}$, $0.333=\frac{333}{1000}$ etc.
## TRY THESE
1. Is the number $\frac{2}{-3}$ rational? Think about it.

2. List ten rational numbers.
## Numerator and Denominator
In $\frac{p}{q}$, the integer $p$ is the numerator, and the integer $q(\neq 0)$ is the denominator.
Thus, in $\frac{-3}{7}$, the numerator is -3 and the denominator is 7 .
Mention five rational numbers each of whose
(a) Numerator is a negative integer and denominator is a positive integer.
(b) Numerator is a positive integer and denominator is a negative integer.
(c) Numerator and denominator both are negative integers.
(d) Numerator and denominator both are positive integers.
- Are integers also rational numbers?
Any integer can be thought of as a rational number. For example, the integer -5 is a rational number, because you can write it as $\frac{-5}{1}$. The integer 0 can also be written as $0=\frac{0}{2}$ or $\frac{0}{7}$ etc. Hence, it is also a rational number.
Thus, rational numbers include integers and fractions.
## Equivalent rational numbers
A rational number can be written with different numerators and denominators. For example, consider the rational number $\frac{-2}{3}$.
$$
\begin{aligned}
& \frac{-2}{3}=\frac{-2 \times 2}{3 \times 2}=\frac{-4}{6} \text {. We see that } \frac{-2}{3} \text { is the same as } \frac{-4}{6} \text {. } \\
& \text { Also, } \quad \frac{-2}{3}=\frac{(-2) \times(-5)}{3 \times(-5)}=\frac{10}{-15} \text {. So, } \frac{-2}{3} \text { is also the same as } \frac{10}{-15} \text {. }
\end{aligned}
$$
Thus, $\frac{-2}{3}=\frac{-4}{6}=\frac{10}{-15}$. Such rational numbers that are equal to each other are said to be equivalent to each other.
$$
\text { Again, } \quad \frac{10}{-15}=\frac{-10}{15} \text { (How?) }
$$
By multiplying the numerator and denominator of a rational number by the same non-zero integer, we obtain another rational number equivalent to the given rational number. This is exactly like obtaining equivalent fractions.

## Try These

Fill in the boxes:

(i) $\frac{5}{4}=\frac{\square}{16}=\frac{25}{\square}=\frac{-15}{\square}$

(ii) $\frac{-3}{7}=\frac{\square}{14}=\frac{9}{\square}=\frac{-6}{\square}$
Just as with multiplication, dividing the numerator and denominator by the same non-zero integer also gives an equivalent rational number. For example,
$$
\begin{gathered}
\frac{10}{-15}=\frac{10 \div(-5)}{-15 \div(-5)}=\frac{-2}{3}, \quad \frac{-12}{24}=\frac{-12 \div 12}{24 \div 12}=\frac{-1}{2} \\
\text { We write } \frac{-2}{3} \text { as }-\frac{2}{3}, \frac{-10}{15} \text { as }-\frac{10}{15}, \text { etc. }
\end{gathered}
$$
### Positive and Negative Rational Numbers
Consider the rational number $\frac{2}{3}$. Both the numerator and denominator of this number are positive integers. Such a rational number is called a positive rational number. So, $\frac{3}{8}, \frac{5}{7}, \frac{2}{9}$ etc. are positive rational numbers.

## Try These

1. Is 5 a positive rational number?

2. List five more positive rational numbers.
The numerator of $\frac{-3}{5}$ is a negative integer, whereas the denominator is a positive integer. Such a rational number is called a negative rational number. So, $\frac{-5}{7}, \frac{-3}{8}, \frac{-9}{5}$ etc. are negative rational numbers.

- Is $\frac{8}{-3}$ a negative rational number? We know that $\frac{8}{-3}=\frac{8 \times-1}{-3 \times-1}=\frac{-8}{3}$, and $\frac{-8}{3}$ is a negative rational number. So, $\frac{8}{-3}$ is a negative rational number. Similarly, $\frac{5}{-7}, \frac{6}{-5}, \frac{2}{-9}$ etc. are all negative rational numbers. Note that their numerators are positive and their denominators negative.
- The number 0 is neither a positive nor a negative rational number.
- What about $\frac{-3}{-5}$ ?
You will see that $\frac{-3}{-5}=\frac{-3 \times(-1)}{-5 \times(-1)}=\frac{3}{5}$. So, $\frac{-3}{-5}$ is a positive rational number. Thus, $\frac{-2}{-5}, \frac{-5}{-3}$ etc. are positive rational numbers.
## TRY THESE
1. Is -8 a negative rational number?
2. List five more negative rational numbers.
## TRY These
Which of these are negative rational numbers?
(i) $\frac{-2}{3}$
(ii) $\frac{5}{7}$
(iii) $\frac{3}{-5}$
(iv) 0
(v) $\frac{6}{11}$
(vi) $\frac{-2}{-9}$
### Rational Numbers on a Number Line
You know how to represent integers on a number line. Let us draw one such number line.
The points to the right of 0 are denoted by + sign and are positive integers. The points to the left of 0 are denoted by - sign and are negative integers.
Representation of fractions on a number line is also known to you.
Let us see how the rational numbers can be represented on a number line.
Let us represent the number $-\frac{1}{2}$ on the number line.
As done in the case of positive integers, the positive rational numbers would be marked on the right of 0 and the negative rational numbers would be marked on the left of 0 .
To which side of 0 will you mark $-\frac{1}{2}$ ? Being a negative rational number, it would be marked to the left of 0 .
You know that while marking integers on the number line, successive integers are marked at equal intervals. Also, from 0 , the pair 1 and -1 is equidistant. So are the pairs 2 and $-2,3$ and -3 . In the same way, the rational numbers $\frac{1}{2}$ and $-\frac{1}{2}$ would be at equal distance from 0 . We know how to mark the rational number $\frac{1}{2}$. It is marked at a point which is half the distance between 0 and 1 . So, $-\frac{1}{2}$ would be marked at a point half the distance between 0 and -1 .
We know how to mark $\frac{3}{2}$ on the number line. It is marked on the right of 0 and lies halfway between 1 and 2. Let us now mark $\frac{-3}{2}$ on the number line. It lies on the left of 0 and is at the same distance as $\frac{3}{2}$ from 0 .
In decreasing order, we have, $\frac{-1}{2}, \frac{-2}{2}(=-1), \frac{-3}{2}, \frac{-4}{2}(=-2)$. This shows that $\frac{-3}{2}$ lies between -1 and -2 . Thus, $\frac{-3}{2}$ lies halfway between -1 and -2 .
$$
\longleftrightarrow \frac{-4}{2}=(-2) \quad \frac{-3}{2} \quad \frac{-2}{2}=(-1) \quad \frac{-1}{2} \quad \frac{0}{2}=(0) \quad \frac{1}{2} \quad \frac{2}{2}=(1) \quad \frac{3}{2} \quad \frac{4}{2}=(2)
$$

Mark $\frac{-5}{2}$ and $\frac{-7}{2}$ in a similar way.
Similarly, $-\frac{1}{3}$ is to the left of zero and at the same distance from zero as $\frac{1}{3}$ is to the right. So as done above, $-\frac{1}{3}$ can be represented on the number line. Once we know how to represent $-\frac{1}{3}$ on the number line, we can go on representing $-\frac{2}{3},-\frac{4}{3},-\frac{5}{3}$ and so on. All other rational numbers with different denominators can be represented in a similar way.
### Rational Numbers in Standard Form
Observe the rational numbers $\frac{3}{5}, \frac{-5}{8}, \frac{2}{7}, \frac{-7}{11}$.
The denominators of these rational numbers are positive integers and 1 is the only common factor between the numerators and denominators. Further, the negative sign occurs only in the numerator.
Such rational numbers are said to be in standard form. A rational number is said to be in the standard form if its denominator is a positive integer and the numerator and denominator have no common factor other than 1.
If a rational number is not in the standard form, then it can be reduced to the standard form.
Recall that for reducing fractions to their lowest forms, we divided the numerator and the denominator of the fraction by the same non zero positive integer. We shall use the same method for reducing rational numbers to their standard form.
Example 1 Reduce $\frac{-45}{30}$ to the standard form.
Solution We have, $\frac{-45}{30}=\frac{-45 \div 3}{30 \div 3}=\frac{-15}{10}=\frac{-15 \div 5}{10 \div 5}=\frac{-3}{2}$
We had to divide twice. First time by 3 and then by 5 . This could also be done as
$$
\frac{-45}{30}=\frac{-45 \div 15}{30 \div 15}=\frac{-3}{2}
$$
In this example, note that 15 is the HCF of 45 and 30.
Thus, to reduce the rational number to its standard form, we divide its numerator and denominator by their HCF ignoring the negative sign, if any. (The reason for ignoring the negative sign will be studied in Higher Classes)
If there is a negative sign in the denominator, divide by '$-\mathrm{HCF}$'.
EXAMPLE 2 Reduce to standard form:
(i) $\frac{36}{-24}$
(ii) $\frac{-3}{-15}$
## SOLUTION
(i) The HCF of 36 and 24 is 12.
Thus, its standard form would be obtained by dividing by -12 .
$$
\frac{36}{-24}=\frac{36 \div(-12)}{-24 \div(-12)}=\frac{-3}{2}
$$
(ii) The HCF of 3 and 15 is 3 .
Thus, $\frac{-3}{-15}=\frac{-3 \div(-3)}{-15 \div(-3)}=\frac{1}{5}$
## TRY THESE
Find the standard form of (i) $\frac{-18}{45}$
(ii) $\frac{-12}{18}$
### Comparison of Rational Numbers
We know how to compare two integers or two fractions and tell which is smaller or which is greater among them. Let us now see how we can compare two rational numbers.
- Two positive rational numbers, like $\frac{2}{3}$ and $\frac{5}{7}$ can be compared as studied earlier in the case of fractions.
- Mary compared two negative rational numbers $-\frac{1}{2}$ and $-\frac{1}{5}$ using number line. She knew that the integer which was on the right side of the other integer, was the greater integer.
For example, 5 is to the right of 2 on the number line and $5>2$. The integer -2 is on the right of -5 on the number line and $-2>-5$.
She used this method for rational numbers also. She knew how to mark rational numbers on the number line. She marked $-\frac{1}{2}$ and $-\frac{1}{5}$ as follows:
Has she correctly marked the two points? How and why did she convert $-\frac{1}{2}$ to $-\frac{5}{10}$ and $-\frac{1}{5}$ to $-\frac{2}{10}$ ? She found that $-\frac{1}{5}$ is to the right of $-\frac{1}{2}$. Thus, $-\frac{1}{5}>-\frac{1}{2}$ or $-\frac{1}{2}<-\frac{1}{5}$. Can you compare $-\frac{3}{4}$ and $-\frac{2}{3} ?-\frac{1}{3}$ and $-\frac{1}{5} ?$
We know from our study of fractions that $\frac{1}{5}<\frac{1}{2}$. And what did Mary get for $-\frac{1}{2}$ and $-\frac{1}{5}$ ? Was it not exactly the opposite?
You will find that, $\frac{1}{2}>\frac{1}{5}$ but $-\frac{1}{2}<-\frac{1}{5}$.
Do you observe the same for $-\frac{3}{4},-\frac{2}{3}$ and $-\frac{1}{3},-\frac{1}{5}$ ? Mary remembered that in integers she had studied $4>3$ but $-4<-3$, $5>2$ but $-5<-2$ etc.

- The case of pairs of negative rational numbers is similar. To compare two negative rational numbers, we compare them ignoring their negative signs and then reverse the order.
For example, to compare $-\frac{7}{5}$ and $-\frac{5}{3}$, we first compare $\frac{7}{5}$ and $\frac{5}{3}$.
We get $\frac{7}{5}<\frac{5}{3}$ and conclude that $\frac{-7}{5}>\frac{-5}{3}$.
Take five more such pairs and compare them.
Which is greater: $-\frac{3}{8}$ or $-\frac{2}{7}$? $-\frac{4}{3}$ or $-\frac{3}{2}$?
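Here is the working for the first pair; the second pair can be checked in the same way:

$$
\frac{3}{8}=\frac{21}{56} \text { and } \frac{2}{7}=\frac{16}{56}, \text { so } \frac{3}{8}>\frac{2}{7} \text { and hence }-\frac{3}{8}<-\frac{2}{7}
$$

Thus, $-\frac{2}{7}$ is greater.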
- Comparison of a negative and a positive rational number is obvious. A negative rational number is to the left of zero whereas a positive rational number is to the right of zero on a number line. So, a negative rational number will always be less than a positive rational number.
Thus, $-\frac{2}{7}<\frac{1}{2}$.
- To compare rational numbers $\frac{-3}{-5}$ and $\frac{-2}{-7}$ reduce them to their standard forms and then compare them.
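Carrying this out:

$$
\frac{-3}{-5}=\frac{3}{5}=\frac{21}{35} \text { and } \frac{-2}{-7}=\frac{2}{7}=\frac{10}{35}, \text { so } \frac{-3}{-5}>\frac{-2}{-7}
$$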
Example 3 Do $\frac{4}{-9}$ and $\frac{-16}{36}$ represent the same rational number?
Solution Yes, because $\frac{4}{-9}=\frac{4 \times(-4)}{(-9) \times(-4)}=\frac{-16}{36}$ or $\frac{-16}{36}=\frac{-16 \div(-4)}{36 \div(-4)}=\frac{4}{-9}$.
### Rational Numbers Between Two Rational Numbers
Reshma wanted to count the whole numbers between 3 and 10. From her earlier classes, she knew there would be exactly 6 whole numbers between 3 and 10. Similarly, she wanted to know the total number of integers between -3 and 3 . The integers between -3 and 3 are $-2,-1,0,1,2$. Thus, there are exactly 5 integers between -3 and 3 .
Are there any integers between -3 and -2 ? No, there is no integer between -3 and -2 . Between two successive integers the number of integers is 0 .
Thus, we find that the number of integers between two integers is limited (finite). Will the same happen in the case of rational numbers also?
Reshma took two rational numbers $\frac{-3}{5}$ and $\frac{-1}{3}$.
She converted them to rational numbers with same denominators.
So
$$
\frac{-3}{5}=\frac{-9}{15} \text { and } \frac{-1}{3}=\frac{-5}{15}
$$
We have

$$
\frac{-9}{15}<\frac{-8}{15}<\frac{-7}{15}<\frac{-6}{15}<\frac{-5}{15} \text { or } \frac{-3}{5}<\frac{-8}{15}<\frac{-7}{15}<\frac{-6}{15}<\frac{-1}{3}
$$

So she could find the rational numbers $\frac{-8}{15}, \frac{-7}{15}, \frac{-6}{15}$ between $\frac{-3}{5}$ and $\frac{-1}{3}$.

Are the numbers $\frac{-8}{15}, \frac{-7}{15}, \frac{-6}{15}$ the only rational numbers between $-\frac{3}{5}$ and $-\frac{1}{3}$ ?

We have $\frac{-3}{5}=\frac{-18}{30}$ and $\frac{-8}{15}=\frac{-16}{30}$

And

$$
\frac{-18}{30}<\frac{-17}{30}<\frac{-16}{30} \text {, i.e., } \frac{-3}{5}<\frac{-17}{30}<\frac{-8}{15}
$$
Hence
$$
\frac{-3}{5}<\frac{-17}{30}<\frac{-8}{15}<\frac{-7}{15}<\frac{-6}{15}<\frac{-1}{3}
$$
So, we could find one more rational number between $\frac{-3}{5}$ and $\frac{-1}{3}$.
By using this method, you can insert as many rational numbers as you want between two different rational numbers.
$$
\text { For example, } \quad \frac{-3}{5}=\frac{-3 \times 30}{5 \times 30}=\frac{-90}{150} \text { and } \frac{-1}{3}=\frac{-1 \times 50}{3 \times 50}=\frac{-50}{150}
$$
$$
\text { We get } 39 \text { rational numbers }\left(\frac{-89}{150}, \ldots, \frac{-51}{150}\right) \text { between } \frac{-90}{150} \text { and } \frac{-50}{150}
$$
i.e., between $\frac{-3}{5}$ and $\frac{-1}{3}$. You will find that the list is unending.
## TRY THESE
Find five rational numbers between $\frac{-5}{7}$ and $\frac{-3}{8}$. Can you list five rational numbers between $\frac{-5}{3}$ and $\frac{-8}{7}$ ?
We can find an unlimited number of rational numbers between any two rational numbers.

Example 4 List three rational numbers between -2 and -1 .
Solution Let us write -1 and -2 as rational numbers with denominator 5. (Why?)
We have, $\quad-1=\frac{-5}{5}$ and $-2=\frac{-10}{5}$
So, $\quad \frac{-10}{5}<\frac{-9}{5}<\frac{-8}{5}<\frac{-7}{5}<\frac{-6}{5}<\frac{-5}{5}$ or $-2<\frac{-9}{5}<\frac{-8}{5}<\frac{-7}{5}<\frac{-6}{5}<-1$
The three rational numbers between -2 and -1 would be, $\frac{-9}{5}, \frac{-8}{5}, \frac{-7}{5}$
(You can take any three of $\frac{-9}{5}, \frac{-8}{5}, \frac{-7}{5}, \frac{-6}{5}$ )
EXAMPLE 5 Write four more numbers in the following pattern:
$$
\frac{-1}{3}, \frac{-2}{6}, \frac{-3}{9}, \frac{-4}{12}, \ldots
$$
Solution We have,
$$
\frac{-2}{6}=\frac{-1 \times 2}{3 \times 2}, \frac{-3}{9}=\frac{-1 \times 3}{3 \times 3}, \frac{-4}{12}=\frac{-1 \times 4}{3 \times 4}
$$
or
$$
\frac{-1 \times 1}{3 \times 1}=\frac{-1}{3}, \frac{-1 \times 2}{3 \times 2}=\frac{-2}{6}, \frac{-1 \times 3}{3 \times 3}=\frac{-3}{9}, \frac{-1 \times 4}{3 \times 4}=\frac{-4}{12}
$$
Thus, we observe a pattern in these numbers.
The other numbers would be $\frac{-1 \times 5}{3 \times 5}=\frac{-5}{15}, \frac{-1 \times 6}{3 \times 6}=\frac{-6}{18}, \frac{-1 \times 7}{3 \times 7}=\frac{-7}{21}$.
## EXerCISe 8.1
1. List five rational numbers between:
(i) -1 and 0
(ii) -2 and -1
(iii) $\frac{-4}{5}$ and $\frac{-2}{3}$
(iv) $-\frac{1}{2}$ and $\frac{2}{3}$
2. Write four more rational numbers in each of the following patterns:
(i) $\frac{-3}{5}, \frac{-6}{10}, \frac{-9}{15}, \frac{-12}{20}, \ldots .$.
(ii) $\frac{-1}{4}, \frac{-2}{8}, \frac{-3}{12}, \ldots$.
(iii) $\frac{-1}{6}, \frac{2}{-12}, \frac{3}{-18}, \frac{4}{-24}, \ldots$.
(iv) $\frac{-2}{3}, \frac{2}{-3}, \frac{4}{-6}, \frac{6}{-9}, \ldots$.
3. Give four rational numbers equivalent to:
(i) $\frac{-2}{7}$
(ii) $\frac{5}{-3}$
(iii) $\frac{4}{9}$
4. Draw the number line and represent the following rational numbers on it:
(i) $\frac{3}{4}$
(ii) $\frac{-5}{8}$
(iii) $\frac{-7}{4}$
(iv) $\frac{7}{8}$
5. The points $P, Q, R, S, T, U, A$ and $B$ on the number line are such that, $T R=R S=S U$ and $\mathrm{AP}=\mathrm{PQ}=\mathrm{QB}$. Name the rational numbers represented by $\mathrm{P}, \mathrm{Q}, \mathrm{R}$ and $\mathrm{S}$.
6. Which of the following pairs represent the same rational number?
(i) $\frac{-7}{21}$ and $\frac{3}{9}$
(ii) $\frac{-16}{20}$ and $\frac{20}{-25}$
(iii) $\frac{-2}{-3}$ and $\frac{2}{3}$
(iv) $\frac{-3}{5}$ and $\frac{-12}{20}$
(v) $\frac{8}{-5}$ and $\frac{-24}{15}$
(vi) $\frac{1}{3}$ and $\frac{-1}{9}$
(vii) $\frac{-5}{-9}$ and $\frac{5}{-9}$
7. Rewrite the following rational numbers in the simplest form:
(i) $\frac{-8}{6}$
(ii) $\frac{25}{45}$
(iii) $\frac{-44}{72}$
(iv) $\frac{-8}{10}$
8. Fill in the boxes with the correct symbol out of $>,<$, and $=$.
(i) $\frac{-5}{7} \square \frac{2}{3}$
(ii) $\frac{-4}{5} \square \frac{-5}{7}$
(iii) $\frac{-7}{8} \square \frac{14}{-16}$
(iv) $\frac{-8}{5} \square \frac{-7}{4}$
(v) $\frac{1}{-3} \square \frac{-1}{4}$
(vi) $\frac{5}{-11} \square \frac{-5}{11}$
9. Which is greater in each of the following:
(i) $\frac{2}{3}, \frac{5}{2}$
(ii) $\frac{-5}{6}, \frac{-4}{3}$
(iii) $\frac{-3}{4}, \frac{2}{-3}$
(iv) $\frac{-1}{4}, \frac{1}{4}$
(v) $-3 \frac{2}{7},-3 \frac{4}{5}$
10. Write the following rational numbers in ascending order:
(i) $\frac{-3}{5}, \frac{-2}{5}, \frac{-1}{5}$
(ii) $\frac{-1}{3}, \frac{-2}{9}, \frac{-4}{3}$
(iii) $\frac{-3}{7}, \frac{-3}{2}, \frac{-3}{4}$
### Operations on Rational Numbers
You know how to add, subtract, multiply and divide integers as well as fractions. Let us now study these basic operations on rational numbers.
#### Addition
- Let us add two rational numbers with same denominators, say $\frac{7}{3}$ and $\frac{-5}{3}$.
We find $\frac{7}{3}+\left(\frac{-5}{3}\right)$
On the number line, we have:
The distance between two consecutive points is $\frac{1}{3}$. So adding $\frac{-5}{3}$ to $\frac{7}{3}$ will mean, moving to the left of $\frac{7}{3}$, making 5 jumps. Where do we reach? We reach at $\frac{2}{3}$. So, $\quad \frac{7}{3}+\left(\frac{-5}{3}\right)=\frac{2}{3}$.
Let us now try this way:
$$
\frac{7}{3}+\frac{(-5)}{3}=\frac{7+(-5)}{3}=\frac{2}{3}
$$
We get the same answer.
Find $\frac{6}{5}+\frac{(-2)}{5}, \frac{3}{7}+\frac{(-5)}{7}$ in both ways and check if you get the same answers.
What do you get?
Also, $\quad \frac{-7}{8}+\frac{5}{8}=\frac{-7+5}{8}=?$ Are the two values same?
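For checking your answers, either way you should reach:

$$
\frac{6}{5}+\frac{(-2)}{5}=\frac{6+(-2)}{5}=\frac{4}{5}, \quad \frac{3}{7}+\frac{(-5)}{7}=\frac{3+(-5)}{7}=\frac{-2}{7}, \quad \frac{-7}{8}+\frac{5}{8}=\frac{-2}{8}=\frac{-1}{4}
$$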
## TrY These
Find: $\frac{-13}{7}+\frac{6}{7}, \frac{19}{5}+\left(\frac{-7}{5}\right)$
So, we find that while adding rational numbers with same denominators, we add the numerators keeping the denominators same.
Thus, $\quad \frac{-11}{5}+\frac{7}{5}=\frac{-11+7}{5}=\frac{-4}{5}$
- How do we add rational numbers with different denominators? As in the case of fractions, we first find the LCM of the two denominators. Then, we find the equivalent rational numbers of the given rational numbers with this LCM as the denominator. Then, add the two rational numbers.
## TrY THEse
Find:
(i) $\frac{-3}{7}+\frac{2}{3}$
(ii) $\frac{-5}{6}+\frac{-3}{11}$
For example, let us add $\frac{-7}{5}$ and $\frac{-2}{3}$.
## LCM of 5 and 3 is 15.
So, $\frac{-7}{5}=\frac{-21}{15}$ and $\frac{-2}{3}=\frac{-10}{15}$
Thus, $\frac{-7}{5}+\frac{(-2)}{3}=\frac{-21}{15}+\frac{(-10)}{15}=\frac{-31}{15}$
## Additive Inverse
What will be $\frac{-4}{7}+\frac{4}{7}=?$
$$
\frac{-4}{7}+\frac{4}{7}=\frac{-4+4}{7}=0 . \text { Also, } \frac{4}{7}+\left(\frac{-4}{7}\right)=0 .
$$
Similarly, $\frac{-2}{3}+\frac{2}{3}=0=\frac{2}{3}+\left(\frac{-2}{3}\right)$
In the case of integers, we call -2 as the additive inverse of 2 and 2 as the additive inverse of -2 .
For rational numbers also, we call $\frac{-4}{7}$ as the additive inverse of $\frac{4}{7}$ and $\frac{4}{7}$ as the additive inverse of $\frac{-4}{7}$. Similarly, $\frac{-2}{3}$ is the additive inverse of $\frac{2}{3}$ and $\frac{2}{3}$ is the additive inverse of $\frac{-2}{3}$.
## TRY THESE
What will be the additive inverse of $\frac{-3}{9} ?, \frac{-9}{11} ?, \frac{5}{7} ?$
Example 6 Satpal walks $\frac{2}{3} \mathrm{~km}$ from a place P, towards east and then from there $1 \frac{5}{7} \mathrm{~km}$ towards west. Where will he be now from P?
Solution Let us denote the distance travelled towards east by positive sign. So, the distances towards west would be denoted by negative sign.
Thus, distance of Satpal from the point $\mathrm{P}$ would be
$$
\begin{gathered}
\frac{2}{3}+\left(-1 \frac{5}{7}\right)=\frac{2}{3}+\frac{(-12)}{7}=\frac{2 \times 7}{3 \times 7}+\frac{(-12) \times 3}{7 \times 3} \\
=\frac{14-36}{21}=\frac{-22}{21}=-1 \frac{1}{21}
\end{gathered}
$$
Since it is negative, it means Satpal is at a distance $1 \frac{1}{21} \mathrm{~km}$ towards west of $\mathrm{P}$.
#### Subtraction
Savita found the difference of two rational numbers $\frac{5}{7}$ and $\frac{3}{8}$ in this way:
$$
\frac{5}{7}-\frac{3}{8}=\frac{40-21}{56}=\frac{19}{56}
$$
Farida knew that for two integers $a$ and $b$ she could write $a-b=a+(-b)$ She tried this for rational numbers also and found, $\frac{5}{7}-\frac{3}{8}=\frac{5}{7}+\frac{(-3)}{8}=\frac{19}{56}$. Both obtained the same difference.
Try to find $\frac{7}{8}-\frac{5}{9}, \frac{3}{11}-\frac{8}{7}$ in both ways. Did you get the same answer?
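Either way, you should get:

$$
\frac{7}{8}-\frac{5}{9}=\frac{63-40}{72}=\frac{23}{72} \quad \text { and } \quad \frac{3}{11}-\frac{8}{7}=\frac{21-88}{77}=\frac{-67}{77}
$$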
So, we say while subtracting two rational numbers, we add the additive inverse of the rational number that is being subtracted, to the other rational number.
Thus, $1 \frac{2}{3}-2 \frac{4}{5}=\frac{5}{3}-\frac{14}{5}=\frac{5}{3}+$ additive inverse of $\frac{14}{5}=\frac{5}{3}+\frac{(-14)}{5}$
$$
=\frac{-17}{15}=-1 \frac{2}{15} \text {. }
$$
## TRY THESE

Find:

(i) $\frac{7}{9}-\frac{2}{5}$

(ii) $2 \frac{1}{5}-\frac{(-1)}{3}$

What will be $\frac{2}{7}-\left(\frac{-5}{6}\right)$ ?

$$
\frac{2}{7}-\left(\frac{-5}{6}\right)=\frac{2}{7}+\text { additive inverse of }\left(\frac{-5}{6}\right)=\frac{2}{7}+\frac{5}{6}=\frac{47}{42}=1 \frac{5}{42}
$$
#### Multiplication
Let us multiply the rational number $\frac{-3}{5}$ by 2 , i.e., we find $\frac{-3}{5} \times 2$.
On the number line, it will mean two jumps of $\frac{3}{5}$ to the left.
Where do we reach? We reach at $\frac{-6}{5}$. Let us find it as we did in fractions.
$$
\frac{-3}{5} \times 2=\frac{-3 \times 2}{5}=\frac{-6}{5}
$$
We arrive at the same rational number.
Find $\frac{-4}{7} \times 3, \frac{-6}{5} \times 4$ using both ways. What do you observe?

So, we find that while multiplying a rational number by a positive integer, we multiply the numerator by that integer, keeping the denominator unchanged.
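For the two products suggested above, either way you should reach:

$$
\frac{-4}{7} \times 3=\frac{-4 \times 3}{7}=\frac{-12}{7} \quad \text { and } \quad \frac{-6}{5} \times 4=\frac{-6 \times 4}{5}=\frac{-24}{5}
$$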
Let us now multiply a rational number by a negative integer,
$$
\frac{-2}{9} \times(-5)=\frac{-2 \times(-5)}{9}=\frac{10}{9}
$$
Remember, -5 can be written as $\frac{-5}{1}$.
So, $\quad \frac{-2}{9} \times \frac{-5}{1}=\frac{10}{9}=\frac{-2 \times(-5)}{9 \times 1}$
Similarly, $\frac{3}{11} \times(-2)=\frac{3 \times(-2)}{11 \times 1}=\frac{-6}{11}$
## TRY THESE
What will be
(i) $\frac{-3}{5} \times 7 ?$
(ii) $\frac{-6}{5} \times(-2)$ ?
Based on these observations, we find that, $\frac{-3}{8} \times \frac{5}{7}=\frac{-3 \times 5}{8 \times 7}=\frac{-15}{56}$
So, as we did in the case of fractions, we multiply two rational numbers in the following way:
Step 1 Multiply the numerators of the two rational numbers.
Step 2 Multiply the denominators of the two rational numbers.
Step 3 Write the product as $\frac{\text { Result of Step } 1}{\text { Result of Step } 2}$
Thus, $\quad \frac{-3}{5} \times \frac{2}{7}=\frac{-3 \times 2}{5 \times 7}=\frac{-6}{35}$.
## TRY These
Find:
(i) $\frac{-3}{4} \times \frac{1}{7}$
(ii) $\frac{2}{3} \times \frac{-5}{9}$
Also, $\quad \frac{-5}{8} \times \frac{-9}{7}=\frac{-5 \times(-9)}{8 \times 7}=\frac{45}{56}$
#### Division
We have studied reciprocals of a fraction earlier. What is the reciprocal of $\frac{2}{7}$ ? It will be
$\frac{7}{2}$. We extend this idea of reciprocals to non-zero rational numbers also.
The reciprocal of $\frac{-2}{7}$ will be $\frac{7}{-2}$ i.e., $\frac{-7}{2}$; that of $\frac{-3}{5}$ would be $\frac{-5}{3}$.
## TRY These
What will be the reciprocal of $\frac{-6}{11} ?$ and $\frac{-8}{5}$ ?
## Product of reciprocals
The product of a rational number with its reciprocal is always 1 .
For example, $\quad \frac{-4}{9} \times\left(\right.$ reciprocal of $\left.\frac{-4}{9}\right)$
$$
=\frac{-4}{9} \times \frac{-9}{4}=1
$$
Similarly, $\frac{-6}{13} \times \frac{-13}{6}=1$
Try some more examples and confirm this observation.
Savita divided a rational number $\frac{4}{9}$ by another rational number $\frac{-5}{7}$ as,
$$
\frac{4}{9} \div \frac{-5}{7}=\frac{4}{9} \times \frac{7}{-5}=\frac{-28}{45}
$$
She used the idea of reciprocal as done in fractions.
Arpit first divided $\frac{4}{9}$ by $\frac{5}{7}$ and got $\frac{28}{45}$.
He finally said $\frac{4}{9} \div \frac{-5}{7}=\frac{-28}{45}$. How did he get that?
He divided them as fractions, ignoring the negative sign and then put the negative sign in the value so obtained.
Both of them got the same value $\frac{-28}{45}$. Try dividing $\frac{2}{3}$ by $\frac{-5}{7}$ both ways and see if you get the same answer.
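Both ways should give the same value:

$$
\frac{2}{3} \div \frac{-5}{7}=\frac{2}{3} \times \frac{7}{-5}=\frac{14}{-15}=\frac{-14}{15}
$$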
This shows, to divide one rational number by the other non-zero rational number we multiply the rational number by the reciprocal of the other.
Thus, $\quad \frac{-6}{5} \div \frac{-2}{3}=\frac{6}{-5} \times$ reciprocal of $\left(\frac{-2}{3}\right)=\frac{6}{-5} \times \frac{3}{-2}=\frac{18}{10}$
## TRY THESE
Find: (i) $\frac{2}{3} \div \frac{-7}{8}$

(ii) $\frac{-6}{7} \div \frac{5}{7}$
## Exercise 8.2
1. Find the sum:
(i) $\frac{5}{4}+\left(\frac{-11}{4}\right)$
(ii) $\frac{5}{3}+\frac{3}{5}$
(iii) $\frac{-9}{10}+\frac{22}{15}$
(iv) $\frac{-3}{-11}+\frac{5}{9}$
(v) $\frac{-8}{19}+\frac{(-2)}{57}$
(vi) $\frac{-2}{3}+0$
(vii) $-2 \frac{1}{3}+4 \frac{3}{5}$
2. Find
(i) $\frac{7}{24}-\frac{17}{36}$
(ii) $\frac{5}{63}-\left(\frac{-6}{21}\right)$
(iii) $\frac{-6}{13}-\left(\frac{-7}{15}\right)$
(iv) $\frac{-3}{8}-\frac{7}{11}$
(v) $-2 \frac{1}{9}-6$
3. Find the product:
(i) $\frac{9}{2} \times\left(\frac{-7}{4}\right)$
(ii) $\frac{3}{10} \times(-9)$
(iii) $\frac{-6}{5} \times \frac{9}{11}$
(iv) $\frac{3}{7} \times\left(\frac{-2}{5}\right)$
(v) $\frac{3}{11} \times \frac{2}{5}$
(vi) $\frac{3}{-5} \times \frac{-5}{3}$
4. Find the value of:
(i) $(-4) \div \frac{2}{3}$
(ii) $\frac{-3}{5} \div 2$
(iii) $\frac{-4}{5} \div(-3)$
(iv) $\frac{-1}{8} \div \frac{3}{4}$
(v) $\frac{-2}{13} \div \frac{1}{7}$
(vi) $\frac{-7}{12} \div\left(\frac{-2}{13}\right)$
(vii) $\frac{3}{13} \div\left(\frac{-4}{65}\right)$
## What have We Discussed?
1. A number that can be expressed in the form $\frac{p}{q}$, where $p$ and $q$ are integers and $q \neq 0$, is called a rational number. The numbers $\frac{-2}{7}, \frac{3}{8}, 3$ etc. are rational numbers.
2. All integers and fractions are rational numbers.
3. If the numerator and denominator of a rational number are multiplied or divided by a non-zero integer, we get a rational number which is said to be equivalent to the given rational number. For example $\frac{-3}{7}=\frac{-3 \times 2}{7 \times 2}=\frac{-6}{14}$. So, we say $\frac{-6}{14}$ is the equivalent form of $\frac{-3}{7}$. Also note that $\frac{-6}{14}=\frac{-6 \div 2}{14 \div 2}=\frac{-3}{7}$.
4. Rational numbers are classified as Positive and Negative rational numbers. When the numerator and denominator, both, are positive integers, it is a positive rational number. When either the numerator or the denominator is a negative integer, it is a negative rational number. For example, $\frac{3}{8}$ is a positive rational number whereas $\frac{-8}{9}$ is a negative rational number.
5. The number 0 is neither a positive nor a negative rational number.
6. A rational number is said to be in the standard form if its denominator is a positive integer and the numerator and denominator have no common factor other than 1.
The numbers $\frac{-1}{3}, \frac{2}{7}$ etc. are in standard form.
7. There are unlimited number of rational numbers between two rational numbers.
8. Two rational numbers with the same denominator can be added by adding their numerators, keeping the denominator same. Two rational numbers with different denominators are added by first taking the LCM of the two denominators and then converting both the rational numbers to their equivalent forms having the LCM as the denominator. For example, $\frac{-2}{3}+\frac{3}{8}=\frac{-16}{24}+\frac{9}{24}=\frac{-16+9}{24}=\frac{-7}{24}$. Here, LCM of 3 and 8 is 24.
9. While subtracting two rational numbers, we add the additive inverse of the rational number to be subtracted to the other rational number.
Thus, $\frac{7}{8}-\frac{2}{3}=\frac{7}{8}+$ additive inverse of $\frac{2}{3}=\frac{7}{8}+\frac{(-2)}{3}=\frac{21+(-16)}{24}=\frac{5}{24}$.

10. To multiply two rational numbers, we multiply their numerators and denominators separately, and write the product as $\frac{\text { product of numerators }}{\text { product of denominators }}$.
11. To divide one rational number by the other non-zero rational number, we multiply the rational number by the reciprocal of the other. Thus,
$$
\frac{-7}{2} \div \frac{4}{3}=\frac{-7}{2} \times\left(\text { reciprocal of } \frac{4}{3}\right)=\frac{-7}{2} \times \frac{3}{4}=\frac{-21}{8} .
$$
## Perimeter and Area
### Area of a Parallelogram
We come across many shapes other than squares and rectangles.
How will you find the area of a land which is a parallelogram in shape?
Let us find a method to get the area of a parallelogram.
Can a parallelogram be converted into a rectangle of equal area?
Draw a parallelogram on a graph paper as shown in Fig 9.1(i). Cut out the parallelogram. Draw a line from one vertex of the parallelogram perpendicular to the opposite side [Fig 9.1(ii)]. Cut out the triangle. Move the triangle to the other side of the parallelogram.
(i)
(ii)
(iii)
Fig 9.1
What shape do you get? You get a rectangle. Is the area of the parallelogram equal to the area of the rectangle formed?
Yes, area of the parallelogram $=$ area of the rectangle formed
What are the length and the breadth of the rectangle?
We find that the length of the rectangle formed is equal to the base of the parallelogram and the breadth of the rectangle is equal to the height of the parallelogram (Fig 9.2).
Now, Area of parallelogram $=$ Area of rectangle
$$
=\text { length } \times \text { breadth }=l \times b
$$
But the length $l$ and breadth $b$ of the rectangle are exactly the base $b$ and the height $h$, respectively of the parallelogram.
Thus, the area of parallelogram $=$ base $\times$ height $=b \times h$.
Consider the following parallelograms (Fig 9.3).
Fig 9.3
Find the areas of the parallelograms by counting the squares enclosed within the figures and also find the perimeters by measuring the sides. Complete the following table:
| Parallelogram | Base | Height | Area | Perimeter |
| :---: | :--- | :---: | :---: | :---: |
| (a) | 5 units | 3 units | 15 sq units | |
| (b) | | | | |
| (c) | | | | |
| (d) | | | | |
| (e) | | | | |
| (f) | | | | |
| (g) | | | | |
You will find that all these parallelograms have equal areas but different perimeters. Now, consider the following parallelograms with sides $7 \mathrm{~cm}$ and $5 \mathrm{~cm}$ (Fig 9.4).
## Fig 9.4
Find the perimeter and area of each of these parallelograms. Analyse your results. You will find that these parallelograms have different areas but equal perimeters.
To find the area of a parallelogram, you need to know only the base and the corresponding height of the parallelogram.
## TrY These
Find the area of following parallelograms:
(i)
(ii)
(iii) In a parallelogram $\mathrm{ABCD}, \mathrm{AB}=7.2 \mathrm{~cm}$ and the perpendicular from $\mathrm{C}$ on $\mathrm{AB}$ is $4.5 \mathrm{~cm}$.
### Area of a triangle
A gardener wants to know the cost of covering the whole of a triangular garden with grass.
In this case we need to know the area of the triangular region. Let us find a method to get the area of a triangle.
Draw a scalene triangle on a piece of paper. Cut out the triangle. Place this triangle on another piece of paper and cut out another triangle of the same size.
So now you have two scalene triangles of the same size.
Are both the triangles congruent?
Superpose one triangle on the other so that they match. You may have to rotate one of the two triangles.
Now place both the triangles such that a pair of corresponding sides is joined as shown in Fig 9.5.
Is the figure thus formed a parallelogram?
Compare the area of each triangle to the area of the parallelogram.
Compare the base and height of the triangles with the base and height of the parallelogram.
You will find that the sum of the areas of both the triangles is equal to the area of the parallelogram. The base and the height of the triangle are the same as the base and the height of the parallelogram, respectively.
Area of each triangle $=\frac{1}{2}($ Area of parallelogram $)$
$$
\begin{aligned}
& =\frac{1}{2}(\text { base } \times \text { height })(\text { Since area of a parallelogram }=\text { base } \times \text { height }) \\
& =\frac{1}{2}(b \times h)\left(\text { or } \frac{1}{2} b h, \text { in short }\right)
\end{aligned}
$$
## Try These
1. Try the above activity with different types of triangles.
2. Take different parallelograms. Divide each of the parallelograms into two triangles by cutting along any of its diagonals. Are the triangles congruent?
In the figure (Fig 9.6) all the triangles are on the base $\mathrm{AB}=6 \mathrm{~cm}$.
What can you say about the height of each of the triangles corresponding to the base $\mathrm{AB}$ ?
Can we say all the triangles are equal in area? Yes. Are the triangles congruent also? No.
We conclude that all the congruent triangles are equal in area but the triangles equal in area need not be congruent.
Fig 9.6
Fig 9.7
Consider the obtuse-angled triangle $\mathrm{ABC}$ of base $6 \mathrm{~cm}$ (Fig 9.7).
Its height $\mathrm{AD}$ which is perpendicular from the vertex $\mathrm{A}$ is outside the triangle.
Can you find the area of the triangle?
Example 1 One of the sides and the corresponding height of a parallelogram are $4 \mathrm{~cm}$ and $3 \mathrm{~cm}$ respectively. Find the area of the parallelogram (Fig 9.8).
Solution Given that length of base $(b)=4 \mathrm{~cm}$, height $(h)=3 \mathrm{~cm}$
Area of the parallelogram $=b \times h$
$$
=4 \mathrm{~cm} \times 3 \mathrm{~cm}=12 \mathrm{~cm}^{2}
$$
Example 2 Find the height ' $x$ ' if the area of the parallelogram is $24 \mathrm{~cm}^{2}$ and the base is $4 \mathrm{~cm}$.
Fig 9.8
Fig 9.9

Solution Area of parallelogram $=b \times h$
Therefore, $24=4 \times x$ (Fig 9.9)
$$
\text { or } \quad \frac{24}{4}=x \text { or } \quad x=6 \mathrm{~cm}
$$
So, the height of the parallelogram is $6 \mathrm{~cm}$.

Example 3 The two sides of the parallelogram $A B C D$ are $6 \mathrm{~cm}$ and $4 \mathrm{~cm}$. The height corresponding to the base $\mathrm{CD}$ is $3 \mathrm{~cm}$ (Fig 9.10). Find the
(i) area of the parallelogram.
(ii) the height corresponding to the base $\mathrm{AD}$.
## Solution
(i) Area of parallelogram $=b \times h$

$$
=6 \mathrm{~cm} \times 3 \mathrm{~cm}=18 \mathrm{~cm}^{2}
$$

(ii) base $(b)=4 \mathrm{~cm}$, height $=x$ (say), Area $=18 \mathrm{~cm}^{2}$

Area of parallelogram $=b \times x$

$$
18=4 \times x \quad \text { or } \quad \frac{18}{4}=x
$$

Therefore, $x=4.5 \mathrm{~cm}$
Thus, the height corresponding to base AD is $4.5 \mathrm{~cm}$.
Example 4 Find the area of the following triangles (Fig 9.11).
(i)
Fig 9.11
(ii)
## Solution
(i) Area of triangle $=\frac{1}{2} b h=\frac{1}{2} \times \mathrm{QR} \times \mathrm{PS}$
$$
=\frac{1}{2} \times 4 \mathrm{~cm} \times 2 \mathrm{~cm}=4 \mathrm{~cm}^{2}
$$
(ii) Area of triangle $=\frac{1}{2} b h=\frac{1}{2} \times \mathrm{MN} \times \mathrm{LO}$
$$
=\frac{1}{2} \times 3 \mathrm{~cm} \times 2 \mathrm{~cm}=3 \mathrm{~cm}^{2}
$$
Example 5 Find $B C$, if the area of the triangle $A B C$ is $36 \mathrm{~cm}^{2}$ and the height $\mathrm{AD}$ is $3 \mathrm{~cm}$ (Fig 9.12).
Solution Height $=3 \mathrm{~cm}$, Area $=36 \mathrm{~cm}^{2}$

Area of the triangle $\mathrm{ABC}=\frac{1}{2} b h$
Fig 9.12
or
$$
36=\frac{1}{2} \times b \times 3 \text { i.e., } \quad b=\frac{36 \times 2}{3}=24 \mathrm{~cm}
$$
So,
$$
\mathrm{BC}=24 \mathrm{~cm}
$$
Example 6 In $\triangle \mathrm{PQR}, \mathrm{PR}=8 \mathrm{~cm}, \mathrm{QR}=4$ $\mathrm{cm}$ and $P L=5 \mathrm{~cm}$ (Fig 9.13). Find:
(i) the area of the $\triangle \mathrm{PQR}$
(ii) QM
## Solution
(i) $\mathrm{QR}=$ base $=4 \mathrm{~cm}, \mathrm{PL}=$ height $=5 \mathrm{~cm}$
Fig 9.13
$$
\begin{aligned}
\text { Area of the triangle } \mathrm{PQR} & =\frac{1}{2} b h \\
& =\frac{1}{2} \times 4 \mathrm{~cm} \times 5 \mathrm{~cm}=10 \mathrm{~cm}^{2}
\end{aligned}
$$
(ii) $\mathrm{PR}=$ base $=8 \mathrm{~cm}$
$$
\mathrm{QM}=\text { height = ? }
$$
$$
\begin{aligned}
\text { Area of triangle } &=\frac{1}{2} \times b \times h, \text { i.e., } 10=\frac{1}{2} \times 8 \times h \\
h &=\frac{10}{4}=\frac{5}{2}=2.5
\end{aligned}
$$

So, $\mathrm{QM}=2.5 \mathrm{~cm}$
## Exercise 9.1
1. Find the area of each of the following parallelograms:
(a)
(b)
(c)
(d)
(e)
2. Find the area of each of the following triangles:
(a)
(b)
(c)
(d)
3. Find the missing values:
| S.No. | Base | Height | Area of the Parallelogram |
| :---: | :---: | :---: | :---: |
| a. | $20 \mathrm{~cm}$ | | $246 \mathrm{~cm}^{2}$ |
| b. | | $15 \mathrm{~cm}$ | $154.5 \mathrm{~cm}^{2}$ |
| c. | | $8.4 \mathrm{~cm}$ | $48.72 \mathrm{~cm}^{2}$ |
| d. | $15.6 \mathrm{~cm}$ | | $16.38 \mathrm{~cm}^{2}$ |
4. Find the missing values:
| Base | Height | Area of Triangle |
| :---: | :---: | :---: |
| $15 \mathrm{~cm}$ | - | $87 \mathrm{~cm}^{2}$ |
| | $31.4 \mathrm{~mm}$ | $1256 \mathrm{~mm}^{2}$ |
| $22 \mathrm{~cm}$ | - | $170.5 \mathrm{~cm}^{2}$ |
Fig 9.14
5. PQRS is a parallelogram (Fig 9.14). QM is the height from $\mathrm{Q}$ to $\mathrm{SR}$ and $\mathrm{QN}$ is the height from $\mathrm{Q}$ to PS. If $\mathrm{SR}=12 \mathrm{~cm}$ and $\mathrm{QM}=7.6 \mathrm{~cm}$, find:
(a) the area of the parallelogram PQRS
(b) $\mathrm{QN}$, if $\mathrm{PS}=8 \mathrm{~cm}$
6. $\mathrm{DL}$ and $\mathrm{BM}$ are the heights on sides $\mathrm{AB}$ and $\mathrm{AD}$ respectively of parallelogram $A B C D$ (Fig 9.15). If
Fig 9.15 the area of the parallelogram is $1470 \mathrm{~cm}^{2}, \mathrm{AB}=35 \mathrm{~cm}$ and $\mathrm{AD}=$ $49 \mathrm{~cm}$, find the length of BM and DL.
7. $\triangle \mathrm{ABC}$ is right angled at $\mathrm{A}$ (Fig 9.16). $\mathrm{AD}$ is perpendicular to $\mathrm{BC}$. If $\mathrm{AB}=5 \mathrm{~cm}$, $B C=13 \mathrm{~cm}$ and $A C=12 \mathrm{~cm}$, Find the area of $\triangle A B C$. Also find the length of $\mathrm{AD}$.
Fig 9.16
Fig 9.17
8. $\triangle \mathrm{ABC}$ is isosceles with $\mathrm{AB}=\mathrm{AC}=7.5 \mathrm{~cm}$ and $\mathrm{BC}=9 \mathrm{~cm}$ (Fig 9.17). The height $\mathrm{AD}$ from $\mathrm{A}$ to $\mathrm{BC}$, is $6 \mathrm{~cm}$. Find the area of $\triangle \mathrm{ABC}$. What will be the height from $\mathrm{C}$ to $\mathrm{AB}$ i.e., $\mathrm{CE}$ ?
### Circles
A racing track is semi-circular at both ends (Fig 9.18).
Can you find the distance covered by an athlete if he takes two rounds of a racing track? We need to find a method to find the distances around when a shape is circular.
Fig 9.18
#### Circumference of a Circle
Tanya cut different cards, in curved shape from a cardboard. She wants to put lace around to decorate these cards. What length of the lace does she require for each? (Fig 9.19)
(a)
(b)
(c)
Fig 9.19
Fig 9.20
You cannot measure the curves with the help of a ruler, as these figures are not "straight". What can you do?
Here is a way to find the length of lace required for shape in Fig 9.19(a). Mark a point on the edge of the card and place the card on the table. Mark the position of the point on the table also (Fig 9.20).
Now roll the circular card on the table along a straight line till the marked point again touches the table. Measure the distance along the line. This is the length of the lace required (Fig 9.21). It is also the distance along the edge of the card from the marked point back to the marked point.
You can also find the distance by putting a string on the edge of the circular object and taking all round it.
## The distance around a circular region is known as its circumference.
## Do This
Take a bottle cap, a bangle or any other circular object and find the circumference.
Now, can you find the distance covered by the athlete on the track by this method?
Still, it will be very difficult to find the distance around the track or any other circular object by measuring through string. Moreover, the measurement will not be accurate.
So, we need some formula for this, as we have for rectilinear figures or shapes.
Let us see if there is any relationship between the diameter and the circumference of the circles.
Draw six circles of different radii and find their circumference by using a string. Also find the ratio of the circumference to the diameter. Consider the following table:
| Circle | Radius | Diameter | Circumference | $\begin{array}{c}\text { Ratio of Circumference } \\ \text { to Diameter }\end{array}$ |
| :---: | :---: | :---: | :---: | :---: |
| 1. | $3.5 \mathrm{~cm}$ | $7.0 \mathrm{~cm}$ | $22.0 \mathrm{~cm}$ | $\frac{22}{7}=3.14$ |
| 2. | $7.0 \mathrm{~cm}$ | $14.0 \mathrm{~cm}$ | $44.0 \mathrm{~cm}$ | $\frac{44}{14}=3.14$ |
| 3. | $10.5 \mathrm{~cm}$ | $21.0 \mathrm{~cm}$ | $66.0 \mathrm{~cm}$ | $\frac{66}{21}=3.14$ |
| 4. | $21.0 \mathrm{~cm}$ | $42.0 \mathrm{~cm}$ | $132.0 \mathrm{~cm}$ | $\frac{132}{42}=3.14$ |
| 5. | $5.0 \mathrm{~cm}$ | $10.0 \mathrm{~cm}$ | $32.0 \mathrm{~cm}$ | $\frac{32}{10}=3.2$ |
| 6. | $15.0 \mathrm{~cm}$ | $30.0 \mathrm{~cm}$ | $94.0 \mathrm{~cm}$ | $\frac{94}{30}=3.13$ |
What do you infer from the above table? Is this ratio approximately the same? Yes.
Can you say that the circumference of a circle is always more than three times its diameter? Yes.
This ratio is a constant and is denoted by $\pi$ (pi). Its approximate value is $\frac{22}{7}$ or 3.14 .
So, we can say that $\frac{\mathrm{C}}{d}=\pi$, where ' $\mathrm{C}$ ' represents circumference of the circle and ' $d$ ' its diameter.
or
$$
\mathrm{C}=\pi d
$$
We know that diameter $(d)$ of a circle is twice the radius $(r)$ i.e., $d=2 r$
So,
$\mathrm{C}=\pi d=\pi \times 2 r$
or $\mathrm{C}=2 \pi r$.
## TRY THESE
In Fig 9.22,
(a) Which square has the larger perimeter?
(b) Which is larger, perimeter of smaller square or the circumference of the circle?
Fig 9.22
## Do This
Take one each of quarter plate and half plate. Roll once each of these on a table-top. Which plate covers more distance in one complete revolution? Which plate will take fewer revolutions to cover the length of the table-top?

Example 7 What is the circumference of a circle of diameter $10 \mathrm{~cm}$ (Take $\pi=3.14$ )?
Solution Diameter of the circle $(d)=10 \mathrm{~cm}$
Circumference of circle $=\pi d$
$$
=3.14 \times 10 \mathrm{~cm}=31.4 \mathrm{~cm}
$$
So, the circumference of the circle of diameter $10 \mathrm{~cm}$ is $31.4 \mathrm{~cm}$.
Example 8 What is the circumference of a circular disc of radius $14 \mathrm{~cm}$ ?
$$
\left(\text { Use } \pi=\frac{22}{7}\right)
$$
Solution Radius of circular disc $(r)=14 \mathrm{~cm}$
Circumference of disc $=2 \pi r$
$$
=2 \times \frac{22}{7} \times 14 \mathrm{~cm}=88 \mathrm{~cm}
$$
So, the circumference of the circular disc is $88 \mathrm{~cm}$.
EXAMPLE 9 The radius of a circular pipe is $10 \mathrm{~cm}$. What length of a tape is required to wrap once around the pipe $(\pi=3.14)$ ?
Solution Radius of the pipe $(r)=10 \mathrm{~cm}$
Length of tape required is equal to the circumference of the pipe.
Circumference of the pipe $=2 \pi r$
$$
\begin{aligned}
& =2 \times 3.14 \times 10 \mathrm{~cm} \\
& =62.8 \mathrm{~cm}
\end{aligned}
$$
Therefore, length of the tape needed to wrap once around the pipe is $62.8 \mathrm{~cm}$.
Example 10 Find the perimeter of the given shape (Fig 9.23) (Take $\pi=\frac{22}{7}$ ).
Solution In this shape we need to find the circumference of semicircles on each side of the square. Do you need to find the perimeter of the square also? No. The outer boundary of this figure is made up of semicircles. Diameter of each semicircle is $14 \mathrm{~cm}$.

We know that:
Circumference of the circle $=\pi d$
Circumference of the semicircle $=\frac{1}{2} \pi d$
$$
=\frac{1}{2} \times \frac{22}{7} \times 14 \mathrm{~cm}=22 \mathrm{~cm}
$$
Circumference of each of the semicircles is $22 \mathrm{~cm}$
Therefore, perimeter of the given figure $=4 \times 22 \mathrm{~cm}=88 \mathrm{~cm}$
Fig 9.23

Example 11 Sudhanshu divides a circular disc of radius $7 \mathrm{~cm}$ in two equal parts. What is the perimeter of each semicircular shape disc? (Use $\pi=\frac{22}{7}$ )

Solution To find the perimeter of the semicircular disc (Fig 9.24), we need to find (i) the circumference of the semicircular shape and (ii) the diameter.

Given that radius $(r)=7 \mathrm{~cm}$. We know that the circumference of a circle $=2 \pi r$

So, the circumference of the semicircle $=\frac{1}{2} \times 2 \pi r=\pi r=\frac{22}{7} \times 7 \mathrm{~cm}=22 \mathrm{~cm}$

and the diameter of the circle $=2 r=2 \times 7 \mathrm{~cm}=14 \mathrm{~cm}$

Fig 9.24

Thus, perimeter of each semicircular disc $=22 \mathrm{~cm}+14 \mathrm{~cm}=36 \mathrm{~cm}$
#### Area of Circle
Consider the following:
- A farmer dug a flower bed of radius $7 \mathrm{~m}$ at the centre of a field. He needs to purchase fertiliser. If $1 \mathrm{~kg}$ of fertiliser is required for 1 square metre area, how much fertiliser should he purchase?
- What will be the cost of polishing a circular table-top of radius $2 \mathrm{~m}$ at the rate of $₹ 10$ per square metre?
Can you tell what we need to find in such cases, Area or Perimeter? In such cases we need to find the area of the circular region. Let us find the area of a circle, using graph paper.
Draw a circle of radius $4 \mathrm{~cm}$ on a graph paper (Fig 9.25). Find the area by counting the number of squares enclosed.
As the edges are not straight, we get a rough estimate of the area of circle by this method. There is another way of finding the area of a circle.
Draw a circle and shade one half of the circle [Fig 9.26(i)]. Now fold the circle into eighths and cut along the folds [Fig 9.26(ii)].
(i)
(ii)
Fig 9.26
Fig 9.27
Arrange the separate pieces as shown, in Fig 9.27, which is roughly a parallelogram.
The more sectors we have, the nearer we get to a parallelogram. As done above, if we divide the circle into 64 sectors and arrange these sectors, we get nearly a rectangle (Fig 9.28).
Fig 9.28
What is the breadth of this rectangle? The breadth of this rectangle is the radius of the circle, i.e., ' $r$ '.
As the whole circle is divided into 64 sectors and on each side we have 32 sectors, the length of the rectangle is the length of the 32 sectors, which is half of the circumference. (Fig 9.28)
Area of the circle $=$ Area of rectangle thus formed $=l \times b$
$=($ Half of circumference $) \times$ radius $=\left(\frac{1}{2} \times 2 \pi r\right) \times r=\pi r^{2}$
So, the area of the circle $=\pi r^{2}$
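As a quick check against the square-counting estimate for the circle of radius $4 \mathrm{~cm}$ drawn earlier (taking $\pi=3.14$ ):

$$
\text { Area }=\pi r^{2}=3.14 \times 4 \times 4=50.24 \mathrm{~cm}^{2}
$$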
## TRY THEse
Draw circles of different radii on a graph paper. Find the area by counting the number of squares. Also find the area by using the formula. Compare the two answers.
Example 12Find the area of a circle of radius $30 \mathrm{~cm}$ (use $\pi=3.14$ ).
SOLUTION Radius, $r=30 \mathrm{~cm}$
Area of the circle $=\pi r^{2}=3.14 \times 30^{2}=2,826 \mathrm{~cm}^{2}$
Example 13Diameter of a circular garden is $9.8 \mathrm{~m}$. Find its area.
Solution Diameter, $d=9.8 \mathrm{~m}$. Therefore, radius $r=9.8 \div 2=4.9 \mathrm{~m}$
Area of the circle $=\pi r^{2}=\frac{22}{7} \times(4.9)^{2} \mathrm{~m}^{2}=\frac{22}{7} \times 4.9 \times 4.9 \mathrm{~m}^{2}=75.46 \mathrm{~m}^{2}$
Example 14 The adjoining figure shows two circles with the same centre. The radius of the larger circle is $10 \mathrm{~cm}$ and the radius of the smaller circle is $4 \mathrm{~cm}$.
Find: (a) the area of the larger circle
(b) the area of the smaller circle
(c) the shaded area between the two circles. $(\pi=3.14)$
## Solution
(a) Radius of the larger circle $=10 \mathrm{~cm}$
So, area of the larger circle $=\pi r^{2}$
$$
=3.14 \times 10 \times 10=314 \mathrm{~cm}^{2}
$$
(b) Radius of the smaller circle $=4 \mathrm{~cm}$
Area of the smaller circle $=\pi r^{2}$
$$
=3.14 \times 4 \times 4=50.24 \mathrm{~cm}^{2}
$$
(c) Area of the shaded region $=(314-50.24) \mathrm{cm}^{2}=263.76 \mathrm{~cm}^{2}$
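The shaded area can also be found in one step by taking $\pi$ common (an optional rearrangement, not the method shown above):
$$
\pi R^{2}-\pi r^{2}=\pi\left(R^{2}-r^{2}\right)=3.14 \times\left(10^{2}-4^{2}\right) \mathrm{~cm}^{2}=3.14 \times 84 \mathrm{~cm}^{2}=263.76 \mathrm{~cm}^{2}
$$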
## Exercise 9.2
1. Find the circumference of the circles with the following radius: (Take $\pi=\frac{22}{7}$ )
(a) $14 \mathrm{~cm}$
(b) $28 \mathrm{~mm}$
(c) $21 \mathrm{~cm}$
2. Find the area of the following circles, given that:
(a) radius $=14 \mathrm{~mm}\left(\right.$ Take $\left.\pi=\frac{22}{7}\right)$
(b) diameter $=49 \mathrm{~m}$
(c) radius $=5 \mathrm{~cm}$
3. If the circumference of a circular sheet is $154 \mathrm{~m}$, find its radius. Also find the area of the sheet. (Take $\pi=\frac{22}{7}$ )
4. A gardener wants to fence a circular garden of diameter $21 \mathrm{~m}$. Find the length of the rope he needs to purchase, if he makes 2 rounds of fence. Also find the cost of the rope, if it costs $₹ 4$ per meter. (Take $\pi=\frac{22}{7}$ )
5. From a circular sheet of radius $4 \mathrm{~cm}$, a circle of radius $3 \mathrm{~cm}$ is removed. Find the area of the remaining sheet. (Take $\pi=3.14$ )
6. Saima wants to put a lace on the edge of a circular table cover of diameter $1.5 \mathrm{~m}$. Find the length of the lace required and also find its cost if one meter of the lace costs ₹ 15. (Take $\pi=3.14$ )
7. Find the perimeter of the adjoining figure, which is a semicircle including its diameter.
8. Find the cost of polishing a circular table-top of diameter $1.6 \mathrm{~m}$, if the rate of polishing is ₹ $15 / \mathrm{m}^{2}$. (Take $\pi=3.14$ )
9. Shazli took a wire of length $44 \mathrm{~cm}$ and bent it into the shape of a circle. Find the radius of that circle. Also find its area. If the same wire is bent into the shape of a square, what will be the length of each of its sides? Which figure encloses more
area, the circle or the square? (Take $\pi=\frac{22}{7}$ )
10. From a circular card sheet of radius $14 \mathrm{~cm}$, two circles of radius $3.5 \mathrm{~cm}$ and a rectangle of length $3 \mathrm{~cm}$ and breadth $1 \mathrm{~cm}$ are removed (as shown in the adjoining figure). Find the area of the remaining sheet. (Take $\pi=\frac{22}{7}$ )
11. A circle of radius $2 \mathrm{~cm}$ is cut out from a square piece of an aluminium sheet of side $6 \mathrm{~cm}$. What is the area of the left over aluminium sheet? (Take $\pi=3.14$ )
12. The circumference of a circle is $31.4 \mathrm{~cm}$. Find the radius and the area of the circle. (Take $\pi=3.14$ )
13. A circular flower bed is surrounded by a path $4 \mathrm{~m}$ wide. The diameter of the flower bed is $66 \mathrm{~m}$. What is the area of this path? $(\pi=3.14)$
14. A circular flower garden has an area of $314 \mathrm{~m}^{2}$. A sprinkler at the centre of the garden can cover an area that has a radius of $12 \mathrm{~m}$. Will the sprinkler water the entire garden? (Take $\pi=3.14$ )
15. Find the circumference of the inner and the outer circles, shown in the adjoining figure. (Take $\pi=3.14$ )
16. How many times must a wheel of radius $28 \mathrm{~cm}$ rotate to go $352 \mathrm{~m}$ ? (Take $\pi=\frac{22}{7}$ )
17. The minute hand of a circular clock is $15 \mathrm{~cm}$ long. How far does the tip of the minute hand move in 1 hour? (Take $\pi=3.14$ )
## What have We Discussed?
1. Area of a parallelogram $=$ base $\times$ height
2. Area of a triangle $=\frac{1}{2}$ (area of the parallelogram generated from it)
$$
=\frac{1}{2} \times \text { base } \times \text { height }
$$
3. The distance around a circular region is known as its circumference.
Circumference of a circle $=\pi d$, where $d$ is the diameter of a circle and $\pi=\frac{22}{7}$ or 3.14 (approximately).
4. Area of a circle $=\pi r^{2}$, where $r$ is the radius of the circle.
## Algebraic Expressions
### Introduction
We have already come across simple algebraic expressions like $x+3, y-5,4 x+5$, $10 y-5$ and so on. In Class VI, we have seen how these expressions are useful in formulating puzzles and problems. We have also seen examples of several expressions in the chapter on simple equations.
Expressions are a central concept in algebra. This Chapter is devoted to algebraic expressions. When you have studied this Chapter, you will know how algebraic expressions are formed, how they can be combined, how we can find their values and how they can be used.
### How are Expressions Formed?
We now know very well what a variable is. We use letters $x, y, l, m, \ldots$ etc. to denote variables. A variable can take various values. Its value is not fixed. On the other hand, a constant has a fixed value. Examples of constants are: $4,100,-17$, etc.
We combine variables and constants to make algebraic expressions. For this, we use the operations of addition, subtraction, multiplication and division. We have already come across expressions like $4 x+5,10 y-20$. The expression $4 x+5$ is obtained from the variable $x$, first by multiplying $x$ by the constant 4 and then adding the constant 5 to the product. Similarly, $10 y-20$ is obtained by first multiplying $y$ by 10 and then subtracting 20 from the product.
The above expressions were obtained by combining variables with constants. We can also obtain expressions by combining variables with themselves or with other variables. Look at how the following expressions are obtained:
$$
x^{2}, 2 y^{2}, 3 x^{2}-5, x y, 4 x y+7
$$
(i) The expression $x^{2}$ is obtained by multiplying the variable $x$ by itself;
$$
x \times x=x^{2}
$$
Just as $4 \times 4$ is written as $4^{2}$, we write $x \times x=x^{2}$. It is commonly read as $x$ squared. (Later, when you study the chapter 'Exponents and Powers' you will realise that $x^{2}$ may also be read as $x$ raised to the power 2).
In the same manner, we can write $\quad x \times x \times x=x^{3}$
Commonly, $x^{3}$ is read as ' $x$ cubed'. Later, you will realise that $x^{3}$ may also be read as $x$ raised to the power 3 .
$x, x^{2}, x^{3}, \ldots$ are all algebraic expressions obtained from $x$.
(ii) The expression $2 y^{2}$ is obtained from $y$ : $2 y^{2}=2 \times y \times y$
Here by multiplying $y$ with $y$ we obtain $y^{2}$ and then we multiply $y^{2}$ by the constant 2 .
(iii) In $\left(3 x^{2}-5\right)$ we first obtain $x^{2}$, and multiply it by 3 to get $3 x^{2}$. From $3 x^{2}$, we subtract 5 to finally arrive at $3 x^{2}-5$.
(iv) In $x y$, we multiply the variable $x$ with another variable $y$. Thus, $x \times y=x y$.
(v) In $4 x y+7$, we first obtain $x y$, multiply it by 4 to get $4 x y$ and add 7 to $4 x y$ to get the expression.
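As one more illustration (an expression made up here, not one from the list above), the expression $2 a b+3$ is built in steps:
$$
a \times b=a b, \qquad 2 \times a b=2 a b, \qquad 2 a b+3 .
$$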
### Terms of an Expression
We shall now put in a systematic form what we have learnt above about how expressions are formed. For this purpose, we need to understand what terms of an expression and their factors are.
Consider the expression $(4 x+5)$. In forming this expression, we first formed $4 x$ separately as a product of 4 and $x$ and then added 5 to it. Similarly consider the expression $\left(3 x^{2}+7 y\right)$. Here we first formed $3 x^{2}$ separately as a product of $3, x$ and $x$. We then formed $7 y$ separately as a product of 7 and $y$. Having formed $3 x^{2}$ and $7 y$ separately, we added them to get the expression.
You will find that the expressions we deal with can always be seen this way. They have parts which are formed separately and then added. Such parts of an expression which are formed separately first and then added are known as terms. Look at the expression $\left(4 x^{2}-3 x y\right)$. We say that it has two terms, $4 x^{2}$ and $-3 x y$. The term $4 x^{2}$ is a product of $4, x$ and $x$, and the term (-3xy) is a product of $(-3), x$ and $y$.
Terms are added to form expressions. Just as the terms $4 x$ and 5 are added to form the expression $(4 x+5)$, the terms $4 x^{2}$ and $(-3 x y)$ are added to give the expression $\left(4 x^{2}-3 x y\right)$. This is because $4 x^{2}+(-3 x y)=4 x^{2}-3 x y$.
Note, the minus sign $(-)$ is included in the term. In the expression $4 x^{2}-3 x y$, we took the term as $(-3 x y)$ and not as (3xy). That is why we do not need to say that terms are 'added or subtracted' to form an expression; just 'added' is enough.
## Factors of a term
We saw above that the expression $\left(4 x^{2}-3 x y\right)$ consists of two terms $4 x^{2}$ and $-3 x y$. The term $4 x^{2}$ is a product of $4, x$ and $x$; we say that $4, x$ and $x$ are the factors of the term $4 x^{2}$. A term is a product of its factors. The term $-3 x y$ is a product of the factors $-3, x$ and $y$.
We can represent the terms and factors of the terms of an expression conveniently and elegantly by a tree diagram. The tree for the expression $\left(4 x^{2}-3 x y\right)$ is as shown in the adjacent figure.
Note, in the tree diagram, we have used dotted lines for factors and continuous lines for terms. This is to avoid mixing them.
Let us draw a tree diagram for the expression $5 x y+10$.
The factors are such that they cannot be further factorised. Thus we do not write $5 x y$ as $5 \times x y$, because $x y$ can be further factorised. Similarly, if $x^{3}$ were a term, it would be written as $x \times x \times x$ and not $x^{2} \times x$. Also, remember that 1 is not taken as a separate factor.
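A rough text version of the tree for $5 x y+10$ (a sketch only; the book shows this as a figure):
$$
5 x y+10 \longrightarrow \text { terms } 5 x y \text { and } 10 ; \quad \text { factors of } 5 x y: 5, x, y .
$$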
## TRY THESE
1. What are the terms in the following expressions? Show how the terms are formed. Draw a tree diagram for each expression:
$$
8 y+3 x^{2}, 7 m n-4,2 x^{2} y .
$$
2. Write three expressions each having 4 terms.
## Coefficients
We have learnt how to write a term as a product of factors. One of these factors may be numerical and the others algebraic (i.e., they contain variables). The numerical factor is said to be the numerical coefficient or simply the coefficient of the term. It is also said to be the coefficient of the rest of the term (which is obviously the product of algebraic factors of the term). Thus in $5 x y, 5$ is the coefficient of the term. It is also the coefficient of $x y$. In the term $10 x y z, 10$ is the coefficient of $x y z$, in the term $-7 x^{2} y^{2},-7$ is the coefficient of $x^{2} y^{2}$.
When the coefficient of a term is +1 , it is usually omitted. For example, $1 x$ is written as $x ; 1 x^{2} y^{2}$ is written as $x^{2} y^{2}$ and so on. Also, the coefficient (-1) is indicated only by the minus sign. Thus (-1) $x$ is written as $-x ;(-1) x^{2} y^{2}$ is written as $-x^{2} y^{2}$ and so on.
Sometimes, the word 'coefficient' is used in a more general way. Thus we say that in the term $5 x y$, 5 is the coefficient of $x y$, $x$ is the coefficient of $5 y$ and $y$ is the coefficient of $5 x$. In $10 x y^{2}$, 10 is the coefficient of $x y^{2}$, $x$ is the coefficient of $10 y^{2}$ and $y^{2}$ is the coefficient of $10 x$. Thus, in this more general way, a coefficient may be either a numerical factor or an algebraic factor or a product of two or more factors. It is said to be the coefficient of the product of the remaining factors.
## TRY THESE
Identify the coefficients of the terms of the following expressions:
$4 x-3 y, a+b+5,2 y+5,2 x y$
Example 1 Identify, in the following expressions, terms which are not constants. Give their numerical coefficients:
$$
x y+4,13-y^{2}, 13-y+5 y^{2}, 4 p^{2} q-3 p q^{2}+5
$$
## Solution
| S. No. | Expression | Term (which is not a Constant) | Numerical Coefficient |
| :---: | :---: | :---: | :---: |
| (i) | $x y+4$ | $x y$ | 1 |
| (ii) | $13-y^{2}$ | $-y^{2}$ | -1 |
| (iii) | $13-y+5 y^{2}$ | $-y$ | -1 |
| | | $5 y^{2}$ | 5 |
| (iv) | $4 p^{2} q-3 p q^{2}+5$ | $4 p^{2} q$ | 4 |
| | | $-3 p q^{2}$ | -3 |
## Example 2
(a) What are the coefficients of $x$ in the following expressions?
$$
4 x-3 y, 8-x+y, y^{2} x-y, 2 z-5 x z
$$
(b) What are the coefficients of $y$ in the following expressions?
$$
4 x-3 y, 8+y z, y z^{2}+5, m y+m
$$
## Solution
(a) In each expression we look for a term with $x$ as a factor. The remaining part of that term is the coefficient of $x$.
| S. No. | Expression | Term with Factor $\boldsymbol{x}$ | Coefficient of $\boldsymbol{x}$ |
| :---: | :---: | :---: | :---: |
| (i) | $4 x-3 y$ | $4 x$ | 4 |
| (ii) | $8-x+y$ | $-x$ | -1 |
| (iii) | $y^{2} x-y$ | $y^{2} x$ | $y^{2}$ |
| (iv) | $2 z-5 x z$ | $-5 x z$ | $-5 z$ |
(b) The method is similar to that in (a) above.
| S. No. | Expression | Term with factor $\boldsymbol{y}$ | Coefficient of $\boldsymbol{y}$ |
| :---: | :---: | :---: | :---: |
| (i) | $4 x-3 y$ | $-3 y$ | -3 |
| (ii) | $8+y z$ | $y z$ | $z$ |
| (iii) | $y z^{2}+5$ | $y z^{2}$ | $z^{2}$ |
| (iv) | $m y+m$ | $m y$ | $m$ |
### LiKe AND UnLIKe Terms
When terms have the same algebraic factors, they are like terms. When terms have different algebraic factors, they are unlike terms. For example, in the expression $2 x y-3 x+5 x y-4$, look at the terms $2 x y$ and $5 x y$. The factors of $2 x y$ are $2, x$ and $y$. The factors of $5 x y$ are $5, x$ and $y$. Thus their algebraic factors (i.e., those which contain variables) are the same and hence they are like terms. On the other hand, the terms $2 x y$ and $-3 x$ have different algebraic factors. They are unlike terms. Similarly, the terms $2 x y$ and 4 are unlike terms. Also, the terms $-3 x$ and 4 are unlike terms.
## TRY THESE
Group the like terms together from the following:
$12 x, 12,-25 x,-25,-25 y, 1, x, 12 y, y$
### Monomials, Binomials, Trinomials and Polynomials
An expression with only one term is called a monomial; for example, $7 x y,-5 m$, $3 z^{2}, 4$ etc.
## TRY THESE
Classify the following expressions as a monomial, a binomial or a trinomial: $a$, $a+b$, $a b+a+b$, $a b+a+b-5$, $x y$, $x y+5$, $5 x^{2}-x+2$, $4 p q-3 q+5 p$, $7$, $4 m-7 n+10$, $4 m n+7$
An expression which contains two unlike terms is called a binomial; for example, $x+y, m-5, m n+4 m, a^{2}-b^{2}$ are binomials. The expression $10 p q$ is not a binomial; it is a monomial. The expression $(a+b+5)$ is not a binomial. It contains three terms.
An expression which contains three terms is called a trinomial; for example, the expressions $x+y+7, a b+a+b$, $3 x^{2}-5 x+2, m+n+10$ are trinomials. The expression $a b+a+b+5$ is, however not a trinomial; it contains four terms and not three. The expression $x+y+5 x$ is not a trinomial as the terms $x$ and $5 x$ are like terms.
In general, an expression with one or more terms is called a polynomial. Thus a monomial, a binomial and a trinomial are all polynomials.
EXample 3 State with reasons, which of the following pairs of terms are of like terms and which are of unlike terms:
(i) $7 x, 12 y$
(ii) $15 x,-21 x$
(iii) $-4 a b, 7 b a$
(iv) $3 x y, 3 x$
(v) $6 x y^{2}, 9 x^{2} y$
(vi) $p q^{2},-4 p q^{2}$
(vii) $m n^{2}, 10 m n$
## Solution
| $\begin{array}{l}\text { S. } \\ \text { No. }\end{array}$ | Pair | Factors | $\begin{array}{c}\text { Algebraic } \\ \text { factors same } \\ \text { or different }\end{array}$ | $\begin{array}{l}\text { Like/ } \\ \text { Unlike } \\ \text { terms }\end{array}$ | Remarks |
| :---: | :---: | :---: | :---: | :---: | :---: |
| (i) | $\begin{array}{l}7 x \\ 12 y\end{array}$ | $\begin{array}{l}7, x \\ 12, y\end{array}$ | Different | Unlike | $\begin{array}{l}\text { The variables in the } \\ \text { terms are different. }\end{array}$ |
| (ii) | $\begin{array}{r}15 x \\ -21 x\end{array}$ | $\begin{array}{r}15, x \\ -21, x\end{array}$ | Same | Like | |
| (iii) | $\begin{array}{c}-4 a b \\ 7 b a\end{array}$ | $\begin{array}{c}-4, a, b \\ 7, a, b\end{array}$ | Same | Like | $\begin{array}{c}\text { Remember } \\ a b=b a\end{array}$ |
| (iv) | $\begin{array}{c}3 x y \\ 3 x\end{array}$ | $\begin{array}{c}3, x, y \\ 3, x\end{array}$ | Different | Unlike | $\begin{array}{c}\text { The variable } y \text { is only } \\ \text { in one term. }\end{array}$ |
| (v) | $\begin{array}{l}6 x y^{2} \\ 9 x^{2} y\end{array}$ | $\begin{array}{l}6, x, y, y \\ 9, x, x, y\end{array}$ | Different | Unlike | $\begin{array}{l}\text { The variables in the two } \\ \text { terms match, but their } \\ \text { powers do not match. }\end{array}$ |
| (vi) | $\begin{array}{c}p q^{2} \\ -4 p q^{2}\end{array}$ | $\begin{array}{r}1, p, q, q \\ -4, p, q, q\end{array}$ | Same | Like | $\begin{array}{c}\text { Note, numerical } \\ \text { factor } 1 \text { is not shown }\end{array}$ |
| (vii) | $\begin{array}{c}m n^{2} \\ 10 m n\end{array}$ | $\begin{array}{c}1, m, n, n \\ 10, m, n\end{array}$ | Different | Unlike | $\begin{array}{c}\text { The powers of } n \text { in the } \\ \text { two terms do not match. }\end{array}$ |
Following simple steps will help you to decide whether the given terms are like or unlike terms:
(i) Ignore the numerical coefficients. Concentrate on the algebraic part of the terms.
(ii) Check the variables in the terms. They must be the same.
(iii) Next, check the powers of each variable in the terms. They must be the same.
Note that in deciding like terms, two things do not matter (1) the numerical coefficients of the terms and (2) the order in which the variables are multiplied in the terms.
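For instance, applying these steps to the pair $3 a^{2} b$ and $-5 b a^{2}$ (a pair made up here for illustration):
$$
3 a^{2} b: \text { factors } 3, a, a, b \qquad -5 b a^{2}: \text { factors }-5, a, a, b
$$
The algebraic factors and their powers match, so they are like terms; the coefficients 3 and $-5$ and the order of multiplication do not matter.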
## Exercise 10.1
1. Get the algebraic expressions in the following cases using variables, constants and arithmetic operations.
(i) Subtraction of $z$ from $y$.
(ii) One-half of the sum of numbers $x$ and $y$.
(iii) The number $z$ multiplied by itself.
(iv) One-fourth of the product of numbers $p$ and $q$.
(v) Numbers $x$ and $y$ both squared and added.
(vi) Number 5 added to three times the product of numbers $m$ and $n$.
(vii) Product of numbers $y$ and $z$ subtracted from 10.
(viii) Sum of numbers $a$ and $b$ subtracted from their product.
2. (i) Identify the terms and their factors in the following expressions Show the terms and factors by tree diagrams.
(a) $x-3$
(b) $1+x+x^{2}$
(c) $y-y^{3}$
(d) $5 x y^{2}+7 x^{2} y$
(e) $-a b+2 b^{2}-3 a^{2}$
(ii) Identify terms and factors in the expressions given below:
(a) $-4 x+5$
(b) $-4 x+5 y$
(c) $5 y+3 y^{2}$
(d) $x y+2 x^{2} y^{2}$
(e) $p q+q$
(f) $1.2 a b-2.4 b+3.6 a$
(g) $\frac{3}{4} x+\frac{1}{4}$
(h) $0.1 p^{2}+0.2 q^{2}$
3. Identify the numerical coefficients of terms (other than constants) in the following expressions:
(i) $5-3 t^{2}$
(ii) $1+t+t^{2}+t^{3}$
(iii) $x+2 x y+3 y$
(iv) $100 m+1000 n$
(v) $-p^{2} q^{2}+7 p q$
(vi) $1.2 a+0.8 b$
(vii) $3.14 r^{2}$
(viii) $2(l+b)$
(ix) $0.1 y+0.01 y^{2}$
4. (a) Identify terms which contain $x$ and give the coefficient of $x$.
(i) $y^{2} x+y$
(ii) $13 y^{2}-8 y x$
(iii) $x+y+2$
(iv) $5+z+z x$
(v) $1+x+x y$
(vi) $12 x y^{2}+25$
(vii) $7 x+x y^{2}$
(b) Identify terms which contain $y^{2}$ and give the coefficient of $y^{2}$.
(i) $8-x y^{2}$
(ii) $5 y^{2}+7 x$
(iii) $2 x^{2} y-15 x y^{2}+7 y^{2}$
5. Classify into monomials, binomials and trinomials.
(i) $4 y-7 z$
(ii) $y^{2}$
(iii) $x+y-x y$
(iv) 100
(v) $a b-a-b$
(vi) $5-3 t$
(vii) $4 p^{2} q-4 p q^{2}$
(viii) $7 \mathrm{mn}$
(ix) $z^{2}-3 z+8$
(x) $a^{2}+b^{2}$
(xi) $z^{2}+z$
(xii) $1+x+x^{2}$
6. State whether a given pair of terms is of like or unlike terms.
(i) 1,100
(ii) $-7 x, \frac{5}{2} x$
(iii) $-29 x,-29 y$
(iv) $14 x y, 42 y x$
(v) $4 m^{2} p, 4 m p^{2}$
(vi) $12 x z, 12 x^{2} z^{2}$
7. Identify like terms in the following:
(a) $-x y^{2},-4 y x^{2}, 8 x^{2}, 2 x y^{2}, 7 y,-11 x^{2},-100 x,-11 y x, 20 x^{2} y$, $-6 x^{2}, y, 2 x y, 3 x$
(b) $10 p q, 7 p, 8 q,-p^{2} q^{2},-7 q p,-100 q,-23,12 q^{2} p^{2},-5 p^{2}, 41,2405 p, 78 q p$, $13 p^{2} q, q p^{2}, 701 p^{2}$
### Finding the Value of an Expression
We know that the value of an algebraic expression depends on the values of the variables forming the expression. There are a number of situations in which we need to find the value of an expression, such as when we wish to check whether a particular value of a variable satisfies a given equation or not.
We find values of expressions, also, when we use formulas from geometry and from everyday mathematics. For example, the area of a square is $l^{2}$, where $l$ is the length of a side of the square. If $l=5 \mathrm{~cm}$, the area is $5^{2} \mathrm{~cm}^{2}$ or $25 \mathrm{~cm}^{2}$; if the side is $10 \mathrm{~cm}$, the area is $10^{2} \mathrm{~cm}^{2}$ or $100 \mathrm{~cm}^{2}$ and so on. We shall see more such examples in the next section.
Example 4 Find the values of the following expressions for $x=2$.
(i) $x+4$
(ii) $4 x-3$
(iii) $19-5 x^{2}$
(iv) $100-10 x^{3}$
Solution Putting $x=2$
(i) In $x+4$, we get the value of $x+4$, i.e., $x+4=2+4=6$
(ii) In $4 x-3$, we get
$4 x-3=(4 \times 2)-3=8-3=5$
(iii) In $19-5 x^{2}$, we get $19-5 x^{2}=19-\left(5 \times 2^{2}\right)=19-(5 \times 4)=19-20=-1$
(iv) In $100-10 x^{3}$, we get
$$
\begin{aligned}
& 100-10 x^{3}=100-\left(10 \times 2^{3}\right)=100-(10 \times 8)\left(\text { Note } 2^{3}=8\right) \\
& =100-80=20
\end{aligned}
$$
Example 5 Find the value of the following expressions when $n=-2$.
(i) $5 n-2$
(ii) $5 n^{2}+5 n-2$
(iii) $n^{3}+5 n^{2}+5 n-2$
## Solution
(i) Putting the value of $n=-2$, in $5 n-2$, we get,
$$
5(-2)-2=-10-2=-12
$$
(ii) In $5 n^{2}+5 n-2$, we have,
for $n=-2,5 n-2=-12$
$$
\text { and } \left.5 n^{2}=5 \times(-2)^{2}=5 \times 4=20 \quad \text { [as }(-2)^{2}=4\right]
$$
Combining,
$$
5 n^{2}+5 n-2=20-12=8
$$
(iii) Now, for $n=-2$,
$$
\begin{aligned}
& 5 n^{2}+5 n-2=8 \text { and } \\
& n^{3}=(-2)^{3}=(-2) \times(-2) \times(-2)=-8
\end{aligned}
$$
Combining,
$$
n^{3}+5 n^{2}+5 n-2=-8+8=0
$$
We shall now consider expressions of two variables, for example, $x+y$, $x y$. To work out the numerical value of an expression of two variables, we need to give the values of both variables. For example, the value of $(x+y)$, for $x=3$ and $y=5$, is $3+5=8$.
Example 6 Find the value of the following expressions for $a=3, b=2$.
(i) $a+b$
(ii) $7 a-4 b$
(iii) $a^{2}+2 a b+b^{2}$
(iv) $a^{3}-b^{3}$
Solution Substituting $a=3$ and $b=2$ in
(i) $a+b$, we get
$$
a+b=3+2=5
$$
(ii) $7 a-4 b$, we get
$$
7 a-4 b=7 \times 3-4 \times 2=21-8=13 \text {. }
$$
(iii) $a^{2}+2 a b+b^{2}$, we get
$$
a^{2}+2 a b+b^{2}=3^{2}+2 \times 3 \times 2+2^{2}=9+2 \times 6+4=9+12+4=25
$$
(iv) $a^{3}-b^{3}$, we get
$$
a^{3}-b^{3}=3^{3}-2^{3}=3 \times 3 \times 3-2 \times 2 \times 2=9 \times 3-4 \times 2=27-8=19
$$
## Exercise 10.2
1. If $m=2$, find the value of:
(i) $m-2$
(ii) $3 m-5$
(iii) $9-5 m$
(iv) $3 m^{2}-2 m-7$
(v) $\frac{5 m}{2}-4$
2. If $p=-2$, find the value of:
(i) $4 p+7$
(ii) $-3 p^{2}+4 p+7$
(iii) $-2 p^{3}-3 p^{2}+4 p+7$
3. Find the value of the following expressions, when $x=-1$ :
(i) $2 x-7$
(ii) $-x+2$
(iii) $x^{2}+2 x+1$
(iv) $2 x^{2}-x-2$
4. If $a=2, b=-2$, find the value of:
(i) $a^{2}+b^{2}$
(ii) $a^{2}+a b+b^{2}$
(iii) $a^{2}-b^{2}$
5. When $a=0, b=-1$, find the value of the given expressions:
(i) $2 a+2 b$
(ii) $2 a^{2}+b^{2}+1$
(iii) $2 a^{2} b+2 a b^{2}+a b$
(iv) $a^{2}+a b+2$
6. Simplify the expressions and find the value if $x$ is equal to 2
(i) $x+7+4(x-5)$
(ii) $3(x+2)+5 x-7$
(iii) $6 x+5(x-2)$
(iv) $4(2 x-1)+3 x+11$
7. Simplify these expressions and find their values if $x=3, a=-1, b=-2$.
(i) $3 x-5-x+9$
(ii) $2-8 x+4 x+4$
(iii) $3 a+5-8 a+1$
(iv) $10-3 b-4-5 b$
(v) $2 a-2 b-4-5+a$
8. (i) If $z=10$, find the value of $z^{3}-3(z-10)$.
(ii) If $p=-10$, find the value of $p^{2}-2 p-100$
9. What should be the value of $a$ if the value of $2 x^{2}+x-a$ equals 5 , when $x=0$ ?
10. Simplify the expression and find its value when $a=5$ and $b=-3$.
$$
2\left(a^{2}+a b\right)+3-a b
$$
## What have We Discussed?
1. Algebraic expressions are formed from variables and constants. We use the operations of addition, subtraction, multiplication and division on the variables and constants to form expressions. For example, the expression $4 x y+7$ is formed from the variables $x$ and $y$ and constants 4 and 7 . The constant 4 and the variables $x$ and $y$ are multiplied to give the product $4 x y$ and the constant 7 is added to this product to give the expression.
2. Expressions are made up of terms. Terms are added to make an expression. For example, the addition of the terms $4 x y$ and 7 gives the expression $4 x y+7$.
3. A term is a product of factors. The term $4 x y$ in the expression $4 x y+7$ is a product of factors $x, y$ and 4. Factors containing variables are said to be algebraic factors.
4. The coefficient is the numerical factor in the term. Sometimes, any one factor in a term is called the coefficient of the remaining part of the term.
5. Any expression with one or more terms is called a polynomial. Specifically a one term expression is called a monomial; a two-term expression is called a binomial; and a three-term expression is called a trinomial.
6. Terms which have the same algebraic factors are like terms. Terms which have different algebraic factors are unlike terms. Thus, terms $4 x y$ and $-3 x y$ are like terms; but terms $4 x y$ and $-3 x$ are not like terms.
7. In situations such as solving an equation and using a formula, we have to find the value of an expression. The value of the expression depends on the value of the variable from which the expression is formed. Thus, the value of $7 x-3$ for $x=5$ is 32 , since $7(5)-3=35-3=32$.
## Exponents and Powers
### Introduction
Do you know what the mass of earth is? It is
$5,970,000,000,000,000,000,000,000 \mathrm{~kg}$ !
Can you read this number?
Mass of Uranus is 86,800,000,000,000,000,000,000,000 kg.
Which has greater mass, Earth or Uranus?
Distance between Sun and Saturn is 1,433,500,000,000 $\mathrm{m}$ and distance between Saturn and Uranus is 1,439,000,000,000 m. Can you read these numbers? Which distance is less?
These very large numbers are difficult to read, understand and compare. To make these numbers easy to read, understand and compare, we use exponents. In this Chapter, we shall learn about exponents and also learn how to use them.
### Exponents
We can write large numbers in a shorter form using exponents.
Observe $\quad 10,000=10 \times 10 \times 10 \times 10=10^{4}$
The short notation $10^{4}$ stands for the product $10 \times 10 \times 10 \times 10$. Here ' 10 ' is called the base and ' 4 ' the exponent. The number $10^{4}$ is read as 10 raised to the power of 4 or simply as the fourth power of 10. $10^{4}$ is called the exponential form of 10,000 .
We can similarly express 1,000 as a power of 10 . Note that
$$
1000=10 \times 10 \times 10=10^{3}
$$
Here again, $10^{3}$ is the exponential form of 1,000 .
Similarly, $1,00,000=10 \times 10 \times 10 \times 10 \times 10=10^{5}$
$10^{5}$ is the exponential form of $1,00,000$
In both these examples, the base is 10 ; in case of $10^{3}$, the exponent is 3 and in case of $10^{5}$ the exponent is 5 . We have used numbers like 10,100, 1000 etc., while writing numbers in an expanded form. For example, $47561=4 \times 10000+7 \times 1000+5 \times 100+6 \times 10+1$
This can be written as $4 \times 10^{4}+7 \times 10^{3}+5 \times 10^{2}+6 \times 10+1$.
Try writing these numbers in the same way $172,5642,6374$.
In all the above given examples, we have seen numbers whose base is 10 . However the base can be any other number also. For example:
$81=3 \times 3 \times 3 \times 3$ can be written as $81=3^{4}$, here 3 is the base and 4 is the exponent.
Some powers have special names. For example,
$10^{2}$, which is 10 raised to the power 2 , also read as ' 10 squared' and
$10^{3}$, which is 10 raised to the power 3 , also read as ' 10 cubed'.
Can you tell what $5^{3}(5$ cubed) means?
$$
5^{3}=5 \times 5 \times 5=125
$$
So, we can say 125 is the third power of 5 .
What is the exponent and the base in $5^{3}$ ?
Similarly, $2^{5}=2 \times 2 \times 2 \times 2 \times 2=32$, which is the fifth power of 2 .
In $2^{5}, 2$ is the base and 5 is the exponent.
In the same way,
$$
\begin{aligned}
243 & =3 \times 3 \times 3 \times 3 \times 3=3^{5} \\
64 & =2 \times 2 \times 2 \times 2 \times 2 \times 2=2^{6} \\
625 & =5 \times 5 \times 5 \times 5=5^{4}
\end{aligned}
$$
## TRY THESE
Find five more such examples, where a number is expressed in exponential form. Also identify the base and the exponent in each case.
You can also extend this way of writing when the base is a negative integer.
What does $(-2)^{3}$ mean?
It is $\quad(-2)^{3}=(-2) \times(-2) \times(-2)=-8$
Is $\quad(-2)^{4}=16 ?$ Check it.
Instead of taking a fixed number let us take any integer $a$ as the base, and write the numbers as,
$$
a \times a=a^{2} \text { (read as ' } a \text { squared' or ' } a \text { raised to the power 2') }
$$
$a \times a \times a=a^{3}$ (read as ' $a$ cubed' or ' $a$ raised to the power 3')
$a \times a \times a \times a=a^{4}\left(\mathrm{read}\right.$ as $a$ raised to the power 4 or the $4^{\text {th }}$ power of $\left.a\right)$
$a \times a \times a \times a \times a \times a \times a=a^{7}$ (read as $a$ raised to the power 7 or the $7^{\text {th }}$ power of $a$ ) and so on.
$a \times a \times a \times b \times b$ can be expressed as $a^{3} b^{2}$ (read as $a$ cubed $b$ squared)
$a \times a \times b \times b \times b \times b$ can be expressed as $a^{2} b^{4}$ (read as $a$ squared into $b$ raised to the power of 4).
## TRY THESE
Express:
(i) 729 as a power of 3
(ii) 128 as a power of 2
(iii) 343 as a power of 7
Example 1 Express 256 as a power of 2.
Solution We have $256=2 \times 2 \times 2 \times 2 \times 2 \times 2 \times 2 \times 2$.
So we can say that $256=2^{8}$
Example 2 Which one is greater $2^{3}$ or $3^{2}$ ?
Solution We have, $2^{3}=2 \times 2 \times 2=8$ and
$3^{2}=3 \times 3=9$.
Since $9>8$, so, $3^{2}$ is greater than $2^{3}$
EXAMPLE 3 Which one is greater $8^{2}$ or $2^{8}$ ?
Solution $8^{2}=8 \times 8=64$
$$
2^{8}=2 \times 2 \times 2 \times 2 \times 2 \times 2 \times 2 \times 2=256
$$
Clearly, $2^{8}>8^{2}$
Example 4 Expand $a^{3} b^{2}, a^{2} b^{3}, b^{2} a^{3}, b^{3} a^{2}$. Are they all same?
SOLUTION $a^{3} b^{2}=a^{3} \times b^{2}$
$$
\begin{aligned}
& =(a \times a \times a) \times(b \times b) \\
& =a \times a \times a \times b \times b
\end{aligned}
$$
$$
a^{2} b^{3}=a^{2} \times b^{3}
$$$$
=a \times a \times b \times b \times b
$$
$b^{2} a^{3}=b^{2} \times a^{3}$
$$
=b \times b \times a \times a \times a
$$
$b^{3} a^{2}=b^{3} \times a^{2}$
$$
=b \times b \times b \times a \times a
$$
Note that in the case of terms $a^{3} b^{2}$ and $a^{2} b^{3}$ the powers of $a$ and $b$ are different. Thus $a^{3} b^{2}$ and $a^{2} b^{3}$ are different.
On the other hand, $a^{3} b^{2}$ and $b^{2} a^{3}$ are the same, since the powers of $a$ and $b$ in these two terms are the same. The order of factors does not matter.
Thus, $a^{3} b^{2}=a^{3} \times b^{2}=b^{2} \times a^{3}=b^{2} a^{3}$. Similarly, $a^{2} b^{3}$ and $b^{3} a^{2}$ are the same.
EXAMPLE 5 Express the following numbers as a product of powers of prime factors:
(i) 72
(ii) 432
(iii) 1000
(iv) 16000
## Solution
(i) $72=2 \times 36=2 \times 2 \times 18$
$$
\begin{aligned}
& =2 \times 2 \times 2 \times 9 \\
& =2 \times 2 \times 2 \times 3 \times 3=2^{3} \times 3^{2}
\end{aligned}
$$
Thus, $72=2^{3} \times 3^{2}$
(required prime factor product form)
| 2 | 72 |
| :--- | :--- |
| 2 | 36 |
| 2 | 18 |
| 3 | 9 |
| | 3 |
(ii) $432=2 \times 216=2 \times 2 \times 108=2 \times 2 \times 2 \times 54$
$$
\begin{aligned}
& =2 \times 2 \times 2 \times 2 \times 27=2 \times 2 \times 2 \times 2 \times 3 \times 9 \\
& =2 \times 2 \times 2 \times 2 \times 3 \times 3 \times 3
\end{aligned}
$$
or $432=2^{4} \times 3^{3} \quad$ (required form)
(iii) $1000=2 \times 500=2 \times 2 \times 250=2 \times 2 \times 2 \times 125$
$$
=2 \times 2 \times 2 \times 5 \times 25=2 \times 2 \times 2 \times 5 \times 5 \times 5
$$
or
$$
1000=2^{3} \times 5^{3}
$$
Atul wants to solve this example in another way:
$$
\begin{aligned}
1000 & =10 \times 100=10 \times 10 \times 10 \\
& =(2 \times 5) \times(2 \times 5) \times(2 \times 5) \quad(\text { Since } 10=2 \times 5) \\
& =2 \times 5 \times 2 \times 5 \times 2 \times 5=2 \times 2 \times 2 \times 5 \times 5 \times 5
\end{aligned}
$$
or $\quad 1000=2^{3} \times 5^{3}$
Is Atul's method correct?
(iv) $16,000=16 \times 1000=(2 \times 2 \times 2 \times 2) \times 1000=2^{4} \times 10^{3}$ (as $\left.16=2 \times 2 \times 2 \times 2\right)$
$$
\begin{aligned}
&=(2 \times 2 \times 2 \times 2) \times(2 \times 2 \times 2 \times 5 \times 5 \times 5)=2^{4} \times 2^{3} \times 5^{3} \\
&(\text { Since } 1000=2 \times 2 \times 2 \times 5 \times 5 \times 5) \\
&=(2 \times 2 \times 2 \times 2 \times 2 \times 2 \times 2) \times(5 \times 5 \times 5)
\end{aligned}
$$
or, $\quad 16,000=2^{7} \times 5^{3}$
Example 6 Work out $(1)^{5},(-1)^{3},(-1)^{4},(-10)^{3},(-5)^{4}$.
## SOLUTION
(i) We have $(1)^{5}=1 \times 1 \times 1 \times 1 \times 1=1$
In fact, you will realise that 1 raised to any power is 1 .
(ii) $(-1)^{3}=(-1) \times(-1) \times(-1)=1 \times(-1)=-1$
(iii) $(-1)^{4}=(-1) \times(-1) \times(-1) \times(-1)=1 \times 1=1$
You may check that $(-1)$ raised to any odd power is $(-1)$,
and $(-1)$ raised to any even power is $(+1)$.
(iv) $(-10)^{3}=(-10) \times(-10) \times(-10)=100 \times(-10)=-1000$
(v) $(-5)^{4}=(-5) \times(-5) \times(-5) \times(-5)=25 \times 25=625$
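As a quick check of the odd/even rule noted in part (iii), with exponents chosen here for illustration:
$$
(-1)^{5}=-1, \qquad(-1)^{6}=1
$$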
## Exercise 11.1
1. Find the value of:
(i) $2^{6}$
(ii) $9^{3}$
(iii) $11^{2}$
(iv) $5^{4}$
2. Express the following in exponential form:
(i) $6 \times 6 \times 6 \times 6$
(ii) $t \times t$
(iii) $b \times b \times b \times b$
(iv) $5 \times 5 \times 7 \times 7 \times 7$
(v) $2 \times 2 \times a \times a$
(vi) $a \times a \times a \times c \times c \times c \times c \times d$
3. Express each of the following numbers using exponential notation:
(i) 512
(ii) 343
(iii) 729
(iv) 3125
4. Identify the greater number, wherever possible, in each of the following?
(i) $4^{3}$ or $3^{4}$
(ii) $5^{3}$ or $3^{5}$
(iii) $2^{8}$ or $8^{2}$
(iv) $100^{2}$ or $2^{100}$
(v) $2^{10}$ or $10^{2}$
5. Express each of the following as product of powers of their prime factors:
(i) 648
(ii) 405
(iii) 540
(iv) 3,600
6. Simplify:
(i) $2 \times 10^{3}$
(ii) $7^{2} \times 2^{2}$
(iii) $2^{3} \times 5$
(iv) $3 \times 4^{4}$
(v) $0 \times 10^{2}$
(vi) $5^{2} \times 3^{3}$
(vii) $2^{4} \times 3^{2}$
(viii) $3^{2} \times 10^{4}$
7. Simplify:
(i) $(-4)^{3}$
(ii) $(-3) \times(-2)^{3}$
(iii) $(-3)^{2} \times(-5)^{2}$
(iv) $(-2)^{3} \times(-10)^{3}$
8. Compare the following numbers:
(i) $2.7 \times 10^{12} ; 1.5 \times 10^{8}$
(ii) $4 \times 10^{14} ; 3 \times 10^{17}$
### Laws of Exponents
#### Multiplying Powers with the Same Base
(i) Let us calculate $2^{2} \times 2^{3}$
$$
\begin{aligned}
2^{2} \times 2^{3} & =(2 \times 2) \times(2 \times 2 \times 2) \\
& =2 \times 2 \times 2 \times 2 \times 2=2^{5}=2^{2+3}
\end{aligned}
$$
Note that the base in $2^{2}$ and $2^{3}$ is the same and the sum of the exponents, i.e., 2 and 3, is 5
(ii) $(-3)^{4} \times(-3)^{3}=[(-3) \times(-3) \times(-3) \times(-3)] \times[(-3) \times(-3) \times(-3)]$
$$
\begin{aligned}
& =(-3) \times(-3) \times(-3) \times(-3) \times(-3) \times(-3) \times(-3) \\
& =(-3)^{7} \\
& =(-3)^{4+3}
\end{aligned}
$$
Again, note that the base is the same and the sum of the exponents, i.e., 4 and 3, is 7
(iii) $a^{2} \times a^{4}=(a \times a) \times(a \times a \times a \times a)$
$$
=a \times a \times a \times a \times a \times a=a^{6}
$$
(Note: the base is the same and the sum of the exponents is $2+4=6$ )
Similarly, verify:
$$
\begin{aligned}
& 4^{2} \times 4^{2}=4^{2+2} \\
& 3^{2} \times 3^{3}=3^{2+3}
\end{aligned}
$$
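One way to verify these is by direct arithmetic (a check added here for illustration):
$$
4^{2} \times 4^{2}=16 \times 16=256=4^{4}, \qquad 3^{2} \times 3^{3}=9 \times 27=243=3^{5}
$$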
Can you write the appropriate number in the box?
$$
\begin{aligned}
(-11)^{2} \times(-11)^{6} & =(-11)^{\square} \\
b^{2} \times b^{3} & =b^{\square}(\text { Remember, base is same; } b \text { is any integer). } \\
c^{3} \times c^{4} & =c^{\square}(c \text { is any integer }) \\
d^{10} \times d^{20} & =d^{\square}
\end{aligned}
$$
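Putting these observations together, the general rule (it is also recorded in the chapter summary) is:
$$
a^{m} \times a^{n}=a^{m+n} \quad \text { (for any non-zero integer } a \text { and whole numbers } m \text { and } n \text { ) }
$$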
## TRY THESE
Simplify and write in exponential form:
(i) $2^{5} \times 2^{3}$
(ii) $p^{3} \times p^{2}$
(iii) $4^{3} \times 4^{2}$
(iv) $a^{3} \times a^{2} \times a^{7}$
(v) $5^{3} \times 5^{7} \times 5^{12}$
(vi) $(-4)^{100} \times(-4)^{20}$
## Caution!
Consider $2^{3} \times 3^{2}$
Can you add the exponents? No! Do you see why? The base of $2^{3}$ is 2 and the base of $3^{2}$ is 3 . The bases are not the same.
#### Dividing Powers with the Same Base
Let us simplify $3^{7} \div 3^{4}$.
Thus
$$
\begin{aligned}
3^{7} \div 3^{4} & =\frac{3^{7}}{3^{4}}=\frac{3 \times 3 \times 3 \times 3 \times 3 \times 3 \times 3}{3 \times 3 \times 3 \times 3} \\
& =3 \times 3 \times 3=3^{3}=3^{7-4}
\end{aligned}
$$
(Note, in $3^{7}$ and $3^{4}$ the base is same and $3^{7} \div 3^{4}$ becomes $3^{7-4}$ )
Similarly,
or
$$
\begin{aligned}
5^{6} \div 5^{2} & =\frac{5^{6}}{5^{2}}=\frac{5 \times 5 \times 5 \times 5 \times 5 \times 5}{5 \times 5} \\
& =5 \times 5 \times 5 \times 5=5^{4}=5^{6-2}
\end{aligned}
$$
Let $a$ be a non-zero integer, then,
$$
a^{4} \div a^{2}=\frac{a^{4}}{a^{2}}=\frac{a \times a \times a \times a}{a \times a}=a \times a=a^{2}=a^{4-2}
$$
Now can you answer quickly?
$$
\begin{aligned}
10^{8} \div 10^{3} & =10^{8-3}=10^{5} \\
7^{9} \div 7^{6} & =7^{\square} \\
a^{8} \div a^{5} & =a^{\square}
\end{aligned}
$$
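In general (this rule too appears in the chapter summary):
$$
a^{m} \div a^{n}=a^{m-n} \quad \text { (for any non-zero integer } a \text { and whole numbers } m, n \text { with } m>n \text { ) }
$$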
## TRY THESE
Simplify and write in exponential form: (e.g., $11^{6} \div 11^{2}=11^{4}$ )
(i) $2^{9} \div 2^{3}$
(ii) $10^{8} \div 10^{4}$
(iii) $9^{11} \div 9^{7}$
(iv) $20^{15} \div 20^{13}$
(v) $7^{13} \div 7^{10}$
#### Taking Power of a Power
Consider the following
Simplify $\left(2^{3}\right)^{2} ;\left(3^{2}\right)^{4}$
Now, $\left(2^{3}\right)^{2}$ means $2^{3}$ is multiplied two times with itself.
$$
\begin{aligned}
\left(2^{3}\right)^{2} & =2^{3} \times 2^{3} \\
& =2^{3+3}\left(\text { Since } a^{m} \times a^{n}=a^{m+n}\right) \\
& =2^{6}=2^{3 \times 2}
\end{aligned}
$$
Thus
$$
\left(2^{3}\right)^{2}=2^{3 \times 2}
$$
Similarly
$$
\begin{aligned}
\left(3^{2}\right)^{4} & =3^{2} \times 3^{2} \times 3^{2} \times 3^{2} \\
& =3^{2+2+2+2} \\
& \left.=3^{8} \text { (Observe } 8 \text { is the product of } 2 \text { and } 4\right) . \\
& =3^{2 \times 4}
\end{aligned}
$$
Can you tell what $\left(7^{2}\right)^{10}$ would be equal to?
So
$$
\begin{aligned}
\left(2^{3}\right)^{2} & =2^{3 \times 2}=2^{6} \\
\left(3^{2}\right)^{4} & =3^{2 \times 4}=3^{8} \\
\left(7^{2}\right)^{10} & =7^{2 \times 10}=7^{20} \\
\left(a^{2}\right)^{3} & =a^{2 \times 3}=a^{6} \\
\left(a^{m}\right)^{3} & =a^{m \times 3}=a^{3 m}
\end{aligned}
$$
## TRY THESE
Simplify and write the answer in exponential form:
(i) $\left(6^{2}\right)^{4}$
(ii) $\left(2^{2}\right)^{100}$
(iii) $\left(7^{50}\right)^{2}$
(iv) $\left(5^{3}\right)^{7}$
From this we can generalise for any non-zero integer ' $a$ ', where ' $m$ ' and ' $n$ ' are whole numbers,
$$
\left(a^{m}\right)^{n}=a^{m n}
$$
EXample 7 Can you tell which one is greater $\left(5^{2}\right) \times 3$ or $\left(5^{2}\right)^{3}$ ?
Solution $\left(5^{2}\right) \times 3$ means $5^{2}$ is multiplied by 3 i.e., $5 \times 5 \times 3=75$
but $\left(5^{2}\right)^{3}$ means $5^{2}$ is multiplied by itself three times i.e.,
$$
5^{2} \times 5^{2} \times 5^{2}=5^{6}=15,625
$$
Therefore
$$
\left(5^{2}\right)^{3}>\left(5^{2}\right) \times 3
$$
#### Multiplying Powers with the Same Exponents
Can you simplify $2^{3} \times 3^{3}$ ? Notice that here the two terms $2^{3}$ and $3^{3}$ have different bases, but the same exponents.
Now,
$$
\begin{aligned}
2^{3} \times 3^{3} & =(2 \times 2 \times 2) \times(3 \times 3 \times 3) \\
& =(2 \times 3) \times(2 \times 3) \times(2 \times 3) \\
& =6 \times 6 \times 6 \\
& =6^{3} \quad(\text { Observe } 6 \text { is the product of bases } 2 \text { and } 3)
\end{aligned}
$$
Consider $4^{4} \times 3^{4}$
$$
\begin{aligned}
4^{4} \times 3^{4} & =(4 \times 4 \times 4 \times 4) \times(3 \times 3 \times 3 \times 3) \\
& =(4 \times 3) \times(4 \times 3) \times(4 \times 3) \times(4 \times 3) \\
& =12 \times 12 \times 12 \times 12 \\
& =12^{4}
\end{aligned}
$$
## TRY THESE
Put into another form using $a^{m} \times b^{m}=(a b)^{m}$ :
(i) $4^{3} \times 2^{3}$
(ii) $2^{5} \times b^{5}$
(iii) $a^{2} \times t^{2}$
(iv) $5^{6} \times(-2)^{6}$
(v) $(-2)^{4} \times(-3)^{4}$
Consider, also, $3^{2} \times a^{2}$
$$
\begin{aligned}
3^{2} \times a^{2} & =(3 \times 3) \times(a \times a) \\
& =(3 \times a) \times(3 \times a) \\
& =(3 \times a)^{2} \\
& =(3 a)^{2} \quad(\text { Note: } 3 \times a=3 a)
\end{aligned}
$$
Similarly, $a^{4} \times b^{4}$
$=(a \times a \times a \times a) \times(b \times b \times b \times b)$
$$
\begin{aligned}
& =(a \times b) \times(a \times b) \times(a \times b) \times(a \times b) \\
& =(a \times b)^{4} \\
& =(a b)^{4} \quad(\text { Note } a \times b=a b)
\end{aligned}
$$
In general, for any non-zero integer $a$
$$
a^{m} \times b^{m}=(a b)^{m} \quad \text { (where } m \text { is any whole number) }
$$
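A quick numerical check of this rule, with numbers chosen here for illustration:
$$
2^{3} \times 5^{3}=8 \times 125=1000=10^{3}=(2 \times 5)^{3}
$$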
Example 8 Express the following terms in the exponential form:
(i) $(2 \times 3)^{5}$
(ii) $(2 a)^{4}$
(iii) $(-4 m)^{3}$
## Solution
(i) $(2 \times 3)^{5}=(2 \times 3) \times(2 \times 3) \times(2 \times 3) \times(2 \times 3) \times(2 \times 3)$
$$
\begin{aligned}
& =(2 \times 2 \times 2 \times 2 \times 2) \times(3 \times 3 \times 3 \times 3 \times 3) \\
& =2^{5} \times 3^{5}
\end{aligned}
$$
## TRY THESE
Put into another form
using $a^{m} \div b^{m}=\left(\frac{a}{b}\right)^{m}$ :
(i) $4^{5} \div 3^{5}$
(ii) $2^{5} \div b^{5}$
(iii) $(-2)^{3} \div b^{3}$
(iv) $p^{4} \div q^{4}$
(v) $5^{6} \div(-2)^{6}$
(ii) $(2 a)^{4}=2 a \times 2 a \times 2 a \times 2 a$
$=(2 \times 2 \times 2 \times 2) \times(a \times a \times a \times a)$
$=2^{4} \times a^{4}$
(iii) $(-4 m)^{3}=(-4 \times m)^{3}$
$$
=(-4 \times m) \times(-4 \times m) \times(-4 \times m)
$$$$
=(-4) \times(-4) \times(-4) \times(m \times m \times m)=(-4)^{3} \times(m)^{3}
$$
#### Dividing Powers with the Same Exponents
Observe the following simplifications:
(i) $\frac{2^{4}}{3^{4}}=\frac{2 \times 2 \times 2 \times 2}{3 \times 3 \times 3 \times 3}=\frac{2}{3} \times \frac{2}{3} \times \frac{2}{3} \times \frac{2}{3}=\left(\frac{2}{3}\right)^{4}$
(ii) $\frac{a^{3}}{b^{3}}=\frac{a \times a \times a}{b \times b \times b}=\frac{a}{b} \times \frac{a}{b} \times \frac{a}{b}=\left(\frac{a}{b}\right)^{3}$
From these examples we may generalise
$$
a^{m} \div b^{m}=\frac{a^{m}}{b^{m}}=\left(\frac{a}{b}\right)^{m} \text { where } a \text { and } b \text { are any non zero integers }
$$
and $m$ is a whole number.
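A quick numerical check, with numbers chosen here for illustration:
$$
\frac{6^{2}}{3^{2}}=\frac{36}{9}=4=2^{2}=\left(\frac{6}{3}\right)^{2}
$$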
Example 9 Expand: (i) $\left(\frac{3}{5}\right)^{4}$ (ii) $\left(\frac{-4}{7}\right)^{5}$
What is $a^{0}$ ?
Observe the following pattern:
$2^{6}=64$
$2^{5}=32$
$2^{4}=16$
$2^{3}=8$
$2^{2}=$ ?
$2^{1}=$ ?
$2^{0}=$ ?
You can guess the value of $2^{0}$ by just studying the pattern!
You find that $2^{0}=1$
If you start from $3^{6}=729$, and proceed as shown above finding $3^{5}, 3^{4}, 3^{3}, \ldots$ etc, what will be $3^{0}=$ ?
## SOLUTION
(i) $\left(\frac{3}{5}\right)^{4}=\frac{3^{4}}{5^{4}}=\frac{3 \times 3 \times 3 \times 3}{5 \times 5 \times 5 \times 5}$
(ii) $\left(\frac{-4}{7}\right)^{5}=\frac{(-4)^{5}}{7^{5}}=\frac{(-4) \times(-4) \times(-4) \times(-4) \times(-4)}{7 \times 7 \times 7 \times 7 \times 7}$
## Numbers with exponent zero
Can you tell what $\frac{3^{5}}{3^{5}}$ is equal to?
$$
\frac{3^{5}}{3^{5}}=\frac{3 \times 3 \times 3 \times 3 \times 3}{3 \times 3 \times 3 \times 3 \times 3}=1
$$
by using laws of exponents
$$
3^{5} \div 3^{5}=3^{5-5}=3^{0}
$$
So
$$
3^{0}=1
$$
Can you tell what $7^{0}$ is equal to?
$$
7^{3} \div 7^{3}=7^{3-3}=7^{0}
$$
And
$$
\frac{7^{3}}{7^{3}}=\frac{7 \times 7 \times 7}{7 \times 7 \times 7}=1
$$
Therefore
$$
7^{0}=1
$$
Similarly
$$
a^{3} \div a^{3}=a^{3-3}=a^{0}
$$
And
$$
a^{3} \div a^{3}=\frac{a^{3}}{a^{3}}=\frac{a \times a \times a}{a \times a \times a}=1
$$
Thus
$$
a^{0}=1 \text { (for any non-zero integer } a \text { ) }
$$
So, we can say that any number (except 0 ) raised to the power (or exponent) 0 is 1.
### Miscellaneous Examples using the Laws of Exponents
Let us solve some examples using rules of exponents developed.
Example 10 Write exponential form for $8 \times 8 \times 8 \times 8$ taking base as 2 .
Solution We have, $8 \times 8 \times 8 \times 8=8^{4}$
But we know that
Therefore
$$
\begin{aligned}
8 & =2 \times 2 \times 2=2^{3} \\
8^{4} & =\left(2^{3}\right)^{4}=2^{3} \times 2^{3} \times 2^{3} \times 2^{3} \\
& =2^{3 \times 4} \quad\left[\text { You may also use }\left(a^{m}\right)^{n}=a^{m n}\right] \\
& =2^{12}
\end{aligned}
$$
EXAMPLE 11 Simplify and write the answer in the exponential form.
(i) $\left(\frac{3^{7}}{3^{2}}\right) \times 3^{5}$
(ii) $2^{3} \times 2^{2} \times 5^{5}$
(iii) $\left(6^{2} \times 6^{4}\right) \div 6^{3}$
(iv) $\left[\left(2^{2}\right)^{3} \times 3^{6}\right] \times 5^{6}$
(v) $8^{2} \div 2^{3}$
## Solution
(i) $\left(\frac{3^{7}}{3^{2}}\right) \times 3^{5}=\left(3^{7-2}\right) \times 3^{5}=3^{5} \times 3^{5}=3^{5+5}=3^{10}$
(ii) $2^{3} \times 2^{2} \times 5^{5}=2^{3+2} \times 5^{5}$
$$
=2^{5} \times 5^{5}=(2 \times 5)^{5}=10^{5}
$$
(iii) $\left(6^{2} \times 6^{4}\right) \div 6^{3}=6^{2+4} \div 6^{3}$
$$
=\frac{6^{6}}{6^{3}}=6^{6-3}=6^{3}
$$
(iv) $\left[\left(2^{2}\right)^{3} \times 3^{6}\right] \times 5^{6}=\left[2^{6} \times 3^{6}\right] \times 5^{6}$
$$
\begin{aligned}
& =(2 \times 3)^{6} \times 5^{6} \\
& =(2 \times 3 \times 5)^{6}=30^{6}
\end{aligned}
$$
(v) $8=2 \times 2 \times 2=2^{3}$
Therefore $8^{2} \div 2^{3}=\left(2^{3}\right)^{2} \div 2^{3}$
$$
=2^{6} \div 2^{3}=2^{6-3}=2^{3}
$$
Example 12 Simplify:
(i) $\frac{12^{4} \times 9^{3} \times 4}{6^{3} \times 8^{2} \times 27}$
(ii) $2^{3} \times a^{3} \times 5 a^{4}$
(iii) $\frac{2 \times 3^{4} \times 2^{5}}{9 \times 4^{2}}$
## Solution
(i) We have
$$
\begin{aligned}
\frac{12^{4} \times 9^{3} \times 4}{6^{3} \times 8^{2} \times 27} & =\frac{\left(2^{2} \times 3\right)^{4} \times\left(3^{2}\right)^{3} \times 2^{2}}{(2 \times 3)^{3} \times\left(2^{3}\right)^{2} \times 3^{3}} \\
& =\frac{\left(2^{2}\right)^{4} \times(3)^{4} \times 3^{2 \times 3} \times 2^{2}}{2^{3} \times 3^{3} \times 2^{2 \times 3} \times 3^{3}}=\frac{2^{8} \times 2^{2} \times 3^{4} \times 3^{6}}{2^{3} \times 2^{6} \times 3^{3} \times 3^{3}} \\
& =\frac{2^{8+2} \times 3^{4+6}}{2^{3+6} \times 3^{3+3}}=\frac{2^{10} \times 3^{10}}{2^{9} \times 3^{6}} \\
& =2^{10-9} \times 3^{10-6}=2^{1} \times 3^{4} \\
& =2 \times 81=162
\end{aligned}
$$
(ii) $2^{3} \times a^{3} \times 5 a^{4}=2^{3} \times a^{3} \times 5 \times a^{4}$
$$
\begin{aligned}
& =2^{3} \times 5 \times a^{3} \times a^{4}=8 \times 5 \times a^{3+4} \\
& =40 a^{7}
\end{aligned}
$$
(iii) $\frac{2 \times 3^{4} \times 2^{5}}{9 \times 4^{2}}=\frac{2 \times 3^{4} \times 2^{5}}{3^{2} \times\left(2^{2}\right)^{2}}=\frac{2 \times 2^{5} \times 3^{4}}{3^{2} \times 2^{2 \times 2}}$
$$
\begin{aligned}
& =\frac{2^{1+5} \times 3^{4}}{2^{4} \times 3^{2}}=\frac{2^{6} \times 3^{4}}{2^{4} \times 3^{2}}=2^{6-4} \times 3^{4-2} \\
& =2^{2} \times 3^{2}=4 \times 9=36
\end{aligned}
$$
Note: In most of the examples that we have taken in this Chapter, the base of a power was taken to be an integer. But all the results of the chapter apply equally well to a base which is a rational number.
## Exercise 11.2
1. Using laws of exponents, simplify and write the answer in exponential form:
(i) $3^{2} \times 3^{4} \times 3^{8}$
(ii) $6^{15} \div 6^{10}$
(iii) $a^{3} \times a^{2}$
(iv) $7^{x} \times 7^{2}$
(v) $\left(5^{2}\right)^{3} \div 5^{3}$
(vi) $2^{5} \times 5^{5}$
(vii) $a^{4} \times b^{4}$
(viii) $\left(3^{4}\right)^{3}$
(ix) $\left(2^{20} \div 2^{15}\right) \times 2^{3}$
(x) $8^{t} \div 8^{2}$
2. Simplify and express each of the following in exponential form:
(i) $\frac{2^{3} \times 3^{4} \times 4}{3 \times 32}$
(ii) $\left(\left(5^{2}\right)^{3} \times 5^{4}\right) \div 5^{7}$
(iii) $25^{4} \div 5^{3}$
(iv) $\frac{3 \times 7^{2} \times 11^{8}}{21 \times 11^{3}}$
(v) $\frac{3^{7}}{3^{4} \times 3^{3}}$
(vi) $2^{0}+3^{0}+4^{0}$
(vii) $2^{0} \times 3^{0} \times 4^{0}$
(viii) $\left(3^{0}+2^{0}\right) \times 5^{0}$
(ix) $\frac{2^{8} \times a^{5}}{4^{3} \times a^{3}}$
(x) $\left(\frac{a^{5}}{a^{3}}\right) \times a^{8}$
(xi) $\frac{4^{5} \times a^{8} b^{3}}{4^{5} \times a^{5} b^{2}}$
(xii) $\left(2^{3} \times 2\right)^{2}$
3. Say true or false and justify your answer:
(i) $10 \times 10^{11}=100^{11}$
(ii) $2^{3}>5^{2}$
(iii) $2^{3} \times 3^{2}=6^{5}$
(iv) $3^{0}=(1000)^{0}$
4. Express each of the following as a product of prime factors only in exponential form:
(i) $108 \times 192$
(ii) 270
(iii) $729 \times 64$
(iv) 768
5. Simplify:
(i) $\frac{\left(2^{5}\right)^{2} \times 7^{3}}{8^{3} \times 7}$
(ii) $\frac{25 \times 5^{2} \times t^{8}}{10^{3} \times t^{4}}$
(iii) $\frac{3^{5} \times 10^{5} \times 25}{5^{7} \times 6^{5}}$
### Decimal Number System
Let us look at the expansion of 47561 , which we already know:
$$
47561=4 \times 10000+7 \times 1000+5 \times 100+6 \times 10+1
$$
We can express it using powers of 10 in the exponent form:
Therefore, $47561=4 \times 10^{4}+7 \times 10^{3}+5 \times 10^{2}+6 \times 10^{1}+1 \times 10^{0}$
(Note $10,000=10^{4}, 1000=10^{3}, 100=10^{2}, 10=10^{1}$ and $1=10^{0}$ )
Let us expand another number:
$$
\begin{aligned}
104278 & =1 \times 100,000+0 \times 10,000+4 \times 1000+2 \times 100+7 \times 10+8 \times 1 \\
& =1 \times 10^{5}+0 \times 10^{4}+4 \times 10^{3}+2 \times 10^{2}+7 \times 10^{1}+8 \times 10^{0} \\
& =1 \times 10^{5}+4 \times 10^{3}+2 \times 10^{2}+7 \times 10^{1}+8 \times 10^{0}
\end{aligned}
$$
Notice how the exponents of 10 start from a maximum value of 5 and go on decreasing by 1 at a step from the left to the right upto 0 .
### Expressing Large Numbers in the Standard Form
Let us now go back to the beginning of the chapter. We said that large numbers can be conveniently expressed using exponents. We have not as yet shown this. We shall do so now.
1. Sun is located $300,000,000,000,000,000,000 \mathrm{~m}$ from the centre of our Milky Way Galaxy.
2. Number of stars in our Galaxy is $100,000,000,000$.
3. Mass of the Earth is 5,976,000,000,000,000,000,000,000 kg.
These numbers are not convenient to write and read. To make it convenient we use powers.
Observe the following:
$$
\begin{aligned}
59 & =5.9 \times 10=5.9 \times 10^{1} \\
590 & =5.9 \times 100=5.9 \times 10^{2} \\
5900 & =5.9 \times 1000=5.9 \times 10^{3} \\
59000 & =5.9 \times 10000=5.9 \times 10^{4} \text { and so on. }
\end{aligned}
$$
## Try These
Expand by expressing powers of 10 in the exponential form:
(i) 172
(ii) 5,643
(iii) 56,439
(iv) $1,76,428$
We have expressed all these numbers in the standard form. Any number can be expressed as a decimal number between 1.0 and 10.0 (including 1.0) multiplied by a power of 10. Such a form of a number is called its standard form. Thus,
$$
5,985=5.985 \times 1,000=5.985 \times 10^{3} \text { is the standard form of } 5,985 .
$$
Note, 5,985 can also be expressed as $59.85 \times 100$ or $59.85 \times 10^{2}$. But these are not the standard forms of 5,985. Similarly, $5,985=0.5985 \times 10,000=0.5985 \times 10^{4}$ is also not the standard form of 5,985.
We are now ready to express the large numbers we came across at the beginning of the chapter in this form.
The distance of the Sun from the centre of our Galaxy, i.e.,
$300,000,000,000,000,000,000 \mathrm{~m}$ can be written as
$$
3.0 \times 100,000,000,000,000,000,000=3.0 \times 10^{20} \mathrm{~m}
$$
Now, can you express 40,000,000,000 in the similar way?
Count the number of zeros in it. It is 10 .
So, $\quad 40,000,000,000=4.0 \times 10^{10}$
Mass of the Earth $=5,976,000,000,000,000,000,000,000 \mathrm{~kg}$
$$
=5.976 \times 10^{24} \mathrm{~kg}
$$
Do you agree with the fact, that the number when written in the standard form is much easier to read, understand and compare than when the number is written with 25 digits? Now,
$$
\begin{gathered}
\text { Mass of Uranus }=86,800,000,000,000,000,000,000,000 \mathrm{~kg} \\
=8.68 \times 10^{25} \mathrm{~kg}
\end{gathered}
$$
Simply by comparing the powers of 10 in the above two, you can tell that the mass of Uranus is greater than that of the Earth.
The distance between Sun and Saturn is $1,433,500,000,000 \mathrm{~m}$ or $1.4335 \times 10^{12} \mathrm{~m}$. The distance between Saturn and Uranus is $1,439,000,000,000 \mathrm{~m}$ or $1.439 \times 10^{12} \mathrm{~m}$. The distance between Sun and Earth is $149,600,000,000 \mathrm{~m}$ or $1.496 \times 10^{11} \mathrm{~m}$.
Can you tell which of the three distances is smallest?
EXAMPLE 13 Express the following numbers in the standard form:
(i) 5985.3
(ii) 65,950
(iii) $3,430,000$
(iv) $70,040,000,000$
## SOLUTION
(i) $5985.3=5.9853 \times 1000=5.9853 \times 10^{3}$
(ii) $65,950=6.595 \times 10,000=6.595 \times 10^{4}$
(iii) $3,430,000=3.43 \times 1,000,000=3.43 \times 10^{6}$
(iv) $70,040,000,000=7.004 \times 10,000,000,000=7.004 \times 10^{10}$
A point to remember is that one less than the digit count (number of digits) to the left of the decimal point in a given number is the exponent of 10 in the standard form. Thus, in $70,040,000,000$ there is no decimal point shown; we assume it to be at the (right) end. From there, the count of the places (digits) to the left is 11 . The exponent of 10 in the standard form is $11-1=10$. In 5985.3 there are 4 digits to the left of the decimal point and hence the exponent of 10 in the standard form is $4-1=3$.
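As a small check of this rule against part (ii) of Example 13 above:
$$
65,950 \text { has } 5 \text { digits, so the exponent is } 5-1=4, \text { matching } 6.595 \times 10^{4} .
$$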
## Exercise 11.3
1. Write the following numbers in the expanded forms:
279404, 3006194, 2806196, 120719, 20068
2. Find the number from each of the following expanded forms:
(a) $8 \times 10^{4}+6 \times 10^{3}+0 \times 10^{2}+4 \times 10^{1}+5 \times 10^{0}$
(b) $4 \times 10^{5}+5 \times 10^{3}+3 \times 10^{2}+2 \times 10^{0}$
(c) $3 \times 10^{4}+7 \times 10^{2}+5 \times 10^{0}$
(d) $9 \times 10^{5}+2 \times 10^{2}+3 \times 10^{1}$
3. Express the following numbers in standard form:
(i) $5,00,00,000$
(ii) $70,00,000$
(iii) $3,18,65,00,000$
(iv) $3,90,878$
(v) 39087.8
(vi) 3908.78
4. Express the number appearing in the following statements in standard form.
(a) The distance between Earth and Moon is $384,000,000 \mathrm{~m}$.
(b) Speed of light in vacuum is $300,000,000 \mathrm{~m} / \mathrm{s}$.
(c) Diameter of the Earth is 1,27,56,000 $\mathrm{m}$.
(d) Diameter of the Sun is 1,400,000,000 m.
(e) In a galaxy there are on an average 100,000,000,000 stars.
(f) The universe is estimated to be about 12,000,000,000 years old.
(g) The distance of the Sun from the centre of the Milky Way Galaxy is estimated to be 300,000,000,000,000,000,000 $\mathrm{m}$.
(h) $60,230,000,000,000,000,000,000$ molecules are contained in a drop of water weighing $1.8 \mathrm{gm}$.
(i) The earth has 1,353,000,000 cubic km of sea water.
(j) The population of India was about 1,027,000,000 in March, 2001.
## What have We Discussed?
1. Very large numbers are difficult to read, understand, compare and operate upon. To make all these easier, we use exponents, converting many of the large numbers in a shorter form.
2. The following are exponential forms of some numbers:
$$
\begin{aligned}
10,000 & =10^{4}(\text { read as } 10 \text { raised to } 4) \\
243 & =3^{5}, 128=2^{7} .
\end{aligned}
$$
Here, 10, 3 and 2 are the bases, whereas 4, 5 and 7 are their respective exponents. We also say, 10,000 is the $4^{\text {th }}$ power of 10; 243 is the $5^{\text {th }}$ power of 3, etc.
3. Numbers in exponential form obey certain laws, which are:
For any non-zero integers $a$ and $b$ and whole numbers $m$ and $n$,
(a) $a^{m} \times a^{n}=a^{m+n}$
(b) $a^{m} \div a^{n}=a^{m-n}, \quad m>n$
(c) $\left(a^{m}\right)^{n}=a^{m n}$
(d) $a^{m} \times b^{m}=(a b)^{m}$
(e) $a^{m} \div b^{m}=\left(\frac{a}{b}\right)^{m}$
(f) $a^{0}=1$
(g) $(-1)^{\text {even number }}=1$
$$
(-1)^{\text {odd number }}=-1
$$
## Symmetry

### Introduction
Symmetry is an important geometrical concept, commonly exhibited in nature and used in almost every field of activity. Artists, professionals, designers of clothing or jewellery, car manufacturers, architects and many others make use of the idea of symmetry. The beehives, the flowers, the tree-leaves, religious symbols, rugs, and handkerchiefs: everywhere you find symmetrical designs.
Architecture
Engineering
Nature
You have already had a 'feel' of line symmetry in your previous class.
A figure has line symmetry if there is a line about which the figure may be folded so that the two parts of the figure coincide.
You might like to recall these ideas. Here are some activities to help you.
Compose a picture-album showing symmetry.
Create some colourful Ink-dot devils
Make some symmetrical paper-cut designs. Enjoy identifying lines (also called axes) of symmetry in the designs you collect.
Let us now strengthen our ideas on symmetry further. Study the following figures in which the lines of symmetry are marked with dotted lines. [Fig 12.1 (i) to (iv)]
(i)
(ii)
(iii)
(iv)
Fig 12.1
### Lines of Symmetry for Regular Polygons
You know that a polygon is a closed figure made of several line segments. The polygon made up of the least number of line segments is the triangle. (Can there be a polygon that you can draw with still fewer line segments? Think about it).
A polygon is said to be regular if all its sides are of equal length and all its angles are of equal measure. Thus, an equilateral triangle is a regular polygon of three sides. Can you name the regular polygon of four sides?
An equilateral triangle is regular because each of its sides has same length and each of its angles measures $60^{\circ}$ (Fig 12.2).
Fig 12.2
A square is also regular because all its sides are of equal length and each of its angles is a right angle (i.e., $90^{\circ}$ ). Its diagonals are seen to be perpendicular bisectors of one another (Fig 12.3).
Fig 12.3
If a pentagon is regular, naturally, its sides should have equal length. You will, later on, learn that the measure of each of its angles is $108^{\circ}$ (Fig 12.4).
Fig 12.4
Fig 12.5
A regular hexagon has all its sides equal and each of its angles measures $120^{\circ}$. You will learn more of these figures later (Fig 12.5).
The regular polygons are symmetrical figures and hence their lines of symmetry are quite interesting.
Each regular polygon has as many lines of symmetry as it has sides [Fig 12.6 (i) - (iv)]. We say, they have multiple lines of symmetry.
three lines of symmetry
Equilateral Triangle
(i)
Square
(ii)
Regular Pentagon
(iii)
Regular Hexagon
(iv)
Fig 12.6
Fig 12.7
Perhaps, you might like to investigate this by paper folding. Go ahead!
The concept of line symmetry is closely related to mirror reflection. A shape has line symmetry when one half of it is the mirror image of the other half (Fig 12.7). A mirror line, thus, helps to visualise a line of symmetry (Fig 12.8).
Is the dotted line a mirror line? No.
Is the dotted line a mirror line? Yes.
While dealing with mirror reflection, care is needed to note down the left-right changes in the orientation, as seen in the figure here (Fig 12.9).
(i)
(ii)
Fig 12.9
The shape is same, but the other way round!
## Play this punching game!
Fold a sheet into two halves. Punch a hole. You get two holes, symmetric about the fold.
Fig 12.10
The fold is a line (or axis) of symmetry. Study about punches at different locations on the folded paper and the corresponding lines of symmetry (Fig 12.10).
## ExERCISE 12.1
1. Copy the figures with punched holes and find the axes of symmetry for the following:
(a)
(e)
(b)
(f)
(c)
(g)
(d)
(i)
(j)
(k)
(l)
2. Given the line(s) of symmetry, find the other hole(s):
(a)
(b)
(c)
(d)
(e)
3. In the following figures, the mirror line (i.e., the line of symmetry) is given as a dotted line. Complete each figure performing reflection in the dotted (mirror) line. (You might perhaps place a mirror along the dotted line and look into the mirror for the image). Are you able to recall the name of the figure you complete?
(a)
(b)
(c)
(d)
(e)
(f)
4. The following figures have more than one line of symmetry. Such figures are said to have multiple lines of symmetry.
(a)
(b)
(c)
Identify multiple lines of symmetry, if any, in each of the following figures:
(a)
(b)
(c)
(d)
(e)
(f)
(g)
(h)
5. Copy the figure given here.
Take any one diagonal as a line of symmetry and shade a few more squares to make the figure symmetric about a diagonal. Is there more than one way to do that? Will the figure be symmetric about both the diagonals?
6. Copy the diagram and complete each shape to be symmetric about the mirror line(s):
(a)
(b)
(c)
7. State the number of lines of symmetry for the following figures:
(a) An equilateral triangle
(b) An isosceles triangle
(c) A scalene triangle
(d) A square
(e) A rectangle
(f) A rhombus
(g) A parallelogram
(h) A quadrilateral
(i) A regular hexagon
(j) A circle
8. What letters of the English alphabet have reflectional symmetry (i.e., symmetry related to mirror reflection) about.
(a) a vertical mirror
(b) a horizontal mirror
(c) both horizontal and vertical mirrors
9. Give three examples of shapes with no line of symmetry.
10. What other name can you give to the line of symmetry of
(a) an isosceles triangle?
(b) a circle?
### Rotational Symmetry
What do you say when the hands of a clock go round?
You say that they rotate. The hands of a clock rotate in only one direction, about a fixed point, the centre of the clock-face.
Rotation, like movement of the hands of a clock, is called a clockwise rotation; otherwise it is said to be anticlockwise.
What can you say about the rotation of the blades of a ceiling fan? Do they rotate clockwise or anticlockwise? Or do they rotate both ways?
If you spin the wheel of a bicycle, it rotates. It can rotate in either way: both clockwise and anticlockwise. Give three examples each for (i) a clockwise rotation and (ii) anticlockwise rotation.
When an object rotates, its shape and size do not change. The rotation turns an object about a fixed point. This fixed point is the centre of rotation. What is the centre of rotation of the hands of a clock? Think about it.
The angle of turning during rotation is called the angle of rotation. A full turn, you know, means a rotation of $360^{\circ}$. What is the degree measure of the angle of rotation for (i) a half-turn? (ii) a quarter-turn?
A half-turn means rotation by $180^{\circ}$; a quarter-turn is rotation by $90^{\circ}$.
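In other words (a small worked step added here for illustration; it is not part of the original text), the angle for a fraction of a turn is that fraction of $360^{\circ}$:
$$
\text {half-turn}=\frac{360^{\circ}}{2}=180^{\circ}, \qquad \text {quarter-turn}=\frac{360^{\circ}}{4}=90^{\circ}
$$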
When it is 12 O'clock, the hands of a clock are together. By 3 O'clock, the minute hand would have made three complete turns; but the hour hand would have made only a quarter-turn. What can you say about their positions at 6 O'clock?
Have you ever made a paper windmill? The Paper windmill in the picture looks symmetrical (Fig 12.11); but you do not find any line of symmetry. No folding can help you to have coincident halves. However if you rotate it by $90^{\circ}$ about the fixed point, the windmill will look exactly the same. We say the windmill has a rotational symmetry.
Fig 12.12
In a full turn, there are precisely four positions (on rotation through the angles $90^{\circ}$, $180^{\circ}, 270^{\circ}$ and $360^{\circ}$ ) when the windmill looks exactly the same. Because of this, we say it has a rotational symmetry of order 4.
Here is one more example for rotational symmetry.
Consider a square with $\mathrm{P}$ as one of its corners (Fig 12.13).
Let us perform quarter-turns about the centre of the square marked $\mathbf{x}$.
(i)
(ii)
(iii)
(iv)
(v)
Fig 12.13
Fig 12.13 (i) is the initial position. Rotation by $90^{\circ}$ about the centre leads to Fig 12.13 (ii). Note the position of $\mathrm{P}$ now. Rotate again through $90^{\circ}$ and you get Fig 12.13 (iii). In this way, when you complete four quarter-turns, the square reaches its original position. It now looks the same as Fig 12.13 (i). This can be seen with the help of the positions taken by $\mathrm{P}$.
Thus a square has a rotational symmetry of order 4 about its centre. Observe that in this case,
(i) The centre of rotation is the centre of the square.
(ii) The angle of rotation is $90^{\circ}$.
(iii) The direction of rotation is clockwise.
(iv) The order of rotational symmetry is 4.
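These four observations fit one simple relation: the order of rotational symmetry multiplied by the angle of rotation gives $360^{\circ}$ (one full turn). The short Python sketch below is an added illustration (it is not part of the textbook) and assumes the angle divides $360^{\circ}$ exactly:

```python
# If a figure looks the same after every `angle_of_rotation` degrees,
# its order of rotational symmetry is the number of such turns in 360 degrees.
def order_of_rotational_symmetry(angle_of_rotation):
    return 360 // angle_of_rotation

print(order_of_rotational_symmetry(90))   # square: 4 positions in a full turn
print(order_of_rotational_symmetry(120))  # a 120 degree turn: 3 positions
```

For the square this gives $4 \times 90^{\circ}=360^{\circ}$; the same check gives order 3 for a $120^{\circ}$ turn, which you will meet in the next activity.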
## TRY These
1. (a) Can you now tell the order of the rotational symmetry for an equilateral triangle? (Fig 12.14)
(i)
(ii)
(iii)
(iv)
Fig 12.14
(b) How many positions are there at which the triangle looks exactly the same, when rotated about its centre by $120^{\circ}$ ?
2. Which of the following shapes (Fig 12.15) have rotational symmetry about the marked point?
(i)
(ii)
(iii)
(iv)
Fig 12.15
## Do This
Draw two identical parallelograms, one ABCD on a piece of paper and the other A' B' C' D' on a transparent sheet. Mark the points of intersection of their diagonals, $\mathrm{O}$ and $\mathrm{O}^{\prime}$ respectively (Fig 12.16).
Place the parallelograms such that A' lies on A, B' lies on B and so on. O' then falls on $\mathrm{O}$.
Fig 12.16
Stick a pin into the shapes at the point $\mathrm{O}$.
Now turn the transparent shape in the clockwise direction.
How many times do the shapes coincide in one full round?
What is the order of rotational symmetry?
The point where we have the pin is the centre of rotation. It is the intersecting point of the diagonals in this case.
Every object has a rotational symmetry of order 1, as it occupies the same position after a rotation of $360^{\circ}$ (i.e., one complete revolution). Such cases are of no interest to us.
You have around you many shapes, which possess rotational symmetry (Fig 12.17).
Fruit
(i)
Road sign
(ii)
Wheel
(iii)
Fig 12.17
For example, when you slice certain fruits, the cross-sections are shapes with rotational symmetry. This might surprise you when you notice them [Fig 12.17(i)].
Then there are many road signs that exhibit rotational symmetry. Next time when you walk along a busy road, try to identify such road signs and find about the order of rotational symmetry [Fig 12.17(ii)].
Think of some more examples for rotational symmetry. Discuss in each case:
(i) the centre of rotation (ii) the angle of rotation
(iii) the direction in which the rotation is effected and
(iv) the order of the rotational symmetry.
## TRY THESE
Give the order of the rotational symmetry of the given figures about the point marked $\times$ (Fig 12.18).
(i)
(ii)
(iii)
Fig 12.18
## Exercise 12.2
1. Which of the following figures have rotational symmetry of order more than 1 :
(a)
(b)
(c)
(d)
(e)
(f)
2. Give the order of rotational symmetry for each figure:
(a)
(e)
(b)
(f)
(c)
$(\mathrm{g})$
(d)
(h)
### Line Symmetry and Rotational Symmetry
You have been observing many shapes and their symmetries so far. By now you would have understood that some shapes have only line symmetry, some have only rotational symmetry and some have both line symmetry and rotational symmetry. For example, consider the square shape (Fig 12.19).
How many lines of symmetry does it have?
Does it have any rotational symmetry?
If 'yes', what is the order of the rotational symmetry?
Think about it.
Fig 12.19
The circle is the most perfect symmetrical figure, because it can be rotated around its centre through any angle and at the same time it has an unlimited number of lines of symmetry. Observe any circle pattern. Every line through the centre (that is every diameter) forms a line of (reflectional) symmetry and it has rotational symmetry around the centre for every angle.
## Do THis
Some of the English alphabets have fascinating symmetrical structures. Which capital letters have just one line of symmetry (like $\mathbf{E}$ )? Which capital letters have a rotational symmetry of order 2 (like I)?
By attempting to think on such lines, you will be able to fill in the following table:
| $\begin{array}{c}\text { Alphabet } \\ \text { Letters }\end{array}$ | $\begin{array}{c}\text { Line } \\ \text { Symmetry }\end{array}$ | $\begin{array}{c}\text { Number of Lines of } \\ \text { Symmetry }\end{array}$ | $\begin{array}{c}\text { Rotational } \\ \text { Symmetry }\end{array}$ | $\begin{array}{c}\text { Order of Rotational } \\ \text { Symmetry }\end{array}$ |
| :---: | :---: | :---: | :---: | :---: |
| Z | No | 0 | Yes | 2 |
| S | | | | |
| H | Yes | | Yes | |
| O | Yes | | Yes | |
| E | Yes | | | |
| N | | | Yes | |
| C | | | | |
## Exercise 12.3
1. Name any two figures that have both line symmetry and rotational symmetry.
2. Draw, wherever possible, a rough sketch of
(i) a triangle with both line and rotational symmetries of order more than 1 .
(ii) a triangle with only line symmetry and no rotational symmetry of order more than 1.
(iii) a quadrilateral with a rotational symmetry of order more than 1 but not a line symmetry.
(iv) a quadrilateral with line symmetry but not a rotational symmetry of order more than 1.
3. If a figure has two or more lines of symmetry, should it have rotational symmetry of order more than 1 ?
4. Fill in the blanks:
| Shape | Centre of Rotation | Order of Rotation | Angle of Rotation |
| :--- | :--- | :--- | :--- |
| Square | | | |
| Rectangle | | | |
| Rhombus | | | |
| $\begin{array}{l}\text { Equilateral } \\ \text { Triangle }\end{array}$ | | | |
| $\begin{array}{l}\text { Regular } \\ \text { Hexagon }\end{array}$ | | | |
| Circle | | | |
| Semi-circle | | | |
5. Name the quadrilaterals which have both line and rotational symmetry of order more than 1 .
6. After rotating by $60^{\circ}$ about a centre, a figure looks exactly the same as its original position. At what other angles will this happen for the figure?
7. Can we have a rotational symmetry of order more than 1 whose angle of rotation is (i) $45^{\circ} ?$
(ii) $17^{\circ} ?$
## What have We Discussed?
1. A figure has line symmetry, if there is a line about which the figure may be folded so that the two parts of the figure will coincide.
2. Regular polygons have equal sides and equal angles. They have multiple (i.e., more than one) lines of symmetry.
3. Each regular polygon has as many lines of symmetry as it has sides.
| $\begin{array}{l}\text { Regular } \\ \text { Polygon }\end{array}$ | $\begin{array}{c}\text { Regular } \\ \text { hexagon }\end{array}$ | $\begin{array}{c}\text { Regular } \\ \text { pentagon }\end{array}$ | Square | $\begin{array}{c}\text { Equilateral } \\ \text { triangle }\end{array}$ |
| :--- | :---: | :---: | :---: | :---: |
| $\begin{array}{l}\text { Number of lines } \\ \text { of symmetry }\end{array}$ | 6 | 5 | 4 | 3 |
4. Mirror reflection leads to symmetry, under which the left-right orientation has to be taken care of.
5. Rotation turns an object about a fixed point.
This fixed point is the centre of rotation.
The angle by which the object rotates is the angle of rotation.
A half-turn means rotation by $180^{\circ}$; a quarter-turn means rotation by $90^{\circ}$. Rotation may be clockwise or anticlockwise.
6. If, after a rotation, an object looks exactly the same, we say that it has a rotational symmetry.
7. In a complete turn (of $360^{\circ}$ ), the number of times an object looks exactly the same is called the order of rotational symmetry. The order of symmetry of a square, for example, is 4 while, for an equilateral triangle, it is 3.
8. Some shapes have only one line of symmetry, like the letter $\mathrm{E}$; some have only rotational symmetry, like the letter $\mathrm{S}$; and some have both symmetries like the letter $\mathrm{H}$.
The study of symmetry is important because of its frequent use in day-to-day life and more because of the beautiful designs it can provide us.
## Visualising Solid Shapes
### Introduction: Plane Figures and Solid Shapes
In this chapter, you will classify figures you have seen in terms of what is known as dimension.
In our day to day life, we see several objects like books, balls, ice-cream cones etc., around us which have different shapes. One thing common about most of these objects is that they all have some length, breadth and height or depth.
That is, they all occupy space and have three dimensions.
Hence, they are called three dimensional shapes.
Do you remember some of the three dimensional shapes (i.e., solid shapes) we have seen in earlier classes?
## Try These
Match the shape with the name (Fig 13.1):
(i)
(ii)
(iii)
(iv)
(v)
(vi)
(a) Cuboid
(b) Cylinder
(c) Cube
(d) Sphere
(e) Pyramid
(f) Cone
Try to identify some objects shaped like each of these.
By a similar argument, we can say figures drawn on paper which have only length and breadth are called two dimensional (i.e., plane) figures. We have also seen some two dimensional figures in the earlier classes.
Match the 2 dimensional figures with the names (Fig 13.2):
(i)
(ii)
(iii)
(iv)
(v)
(a) Circle
(b) Rectangle
(c) Square
(d) Quadrilateral
(e) Triangle
Fig 13.2
Note: We can write 2-D in short for 2-dimension and 3-D in short for 3-dimension.
### Faces, Edges and Vertices
Do you remember the Faces, Vertices and Edges of solid shapes, which you studied earlier? Here you see them for a cube:
(i)
(ii)
(iii)
Fig 13.3
The 8 corners of the cube are its vertices. The 12 line segments that form the skeleton of the cube are its edges. The 6 flat square surfaces that are the skin of the cube are its faces.
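As a compact illustration added here (it is not part of the textbook), these three counts can be recorded and printed with a few lines of Python:

```python
# Faces (F), Edges (E) and Vertices (V) of a cube, as counted above.
cube = {"faces": 6, "edges": 12, "vertices": 8}

for name, count in cube.items():
    print(f"A cube has {count} {name}.")
```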
## Do This
Complete the following table:
Table 13.1
| Faces $(\mathbf{F})$ | 6 |
| :--- | :---: |
| Edges $(\mathbf{E})$ | 12 |
| Vertices $(\mathbf{V})$ | 8 |
Can you see that, the two dimensional figures can be identified as the faces of the three dimensional shapes? For example, a cylinder has two faces which are circles, and a pyramid, shaped like the one shown, has triangles as its faces.
We will now try to see how some of these 3-D shapes can be visualised on a 2-D surface, that is, on paper.
In order to do this, we would like to get familiar with three dimensional objects closely. Let us try forming these objects by making what are called nets.
### Nets For Building 3-D Shapes
Take a cardboard box. Cut the edges to lay the box flat. You have now a net for that box. A net is a sort of skeleton-outline in 2-D [Fig13.4 (i)], which, when folded [Fig13.4 (ii)], results in a 3-D shape [Fig13.4 (iii)].
(i)
(ii)
(iii)
Fig 13.4
Here you got a net by suitably separating the edges. Is the reverse process possible?
Here is a net pattern for a box (Fig 13.5). Copy an enlarged version of the net and try to make the box by suitably folding and gluing together. (You may use suitable units). The box is a solid. It is a 3-D object with the shape of a cuboid.
Similarly, you can get a net for a cone by cutting a slit along its slant surface (Fig 13.6).
You have different nets for different shapes. Copy enlarged versions of the nets given (Fig 13.7) and try to make the 3-D shapes indicated. (You may also like to prepare skeleton models using strips of cardboard fastened with paper clips).
Cut along here
Fig 13.6
Cube
(i)
Cylinder
(ii)
Cone
(iii)
Fig 13.7
We could also try to make a net for making a pyramid like the Great Pyramid in Giza (Egypt) (Fig 13.8). That pyramid has a square base and triangles on the four sides.
Fig 13.8
See if you can make it with the given net (Fig 13.9).
Fig 13.9
## Try These
Here you find four nets (Fig 13.10). There are two correct nets among them to make a tetrahedron. See if you can work out which nets will make a tetrahedron.
Fig 13.10
## Exercise 13.1
1. Identify the nets which can be used to make cubes (cut out copies of the nets and try it):
(i)
(ii)
(iii)
(iv)
(v)
(vi)
2. Dice are cubes with dots on each face. Opposite faces of a die always have a total of seven dots on them. Here are two nets to make dice (cubes); the numbers inserted in each square indicate the number of dots in that box.
Insert suitable numbers in the blanks, remembering that the number on the opposite faces should total to 7.
3. Can this be a net for a die?
Explain your answer.
4. Here is an incomplete net for making a cube. Complete it in at least two different ways. Remember that a cube has six faces. How many are there in the net here? (Give two separate diagrams. If you like, you may use a squared sheet for easy manipulation.)
5. Match the nets with appropriate solids:
(a)
(b)
(c)
(d)
(i)
(ii)
(iii)
(iv)
## Play this game
You and your friend sit back-to-back. One of you reads out a net to make a 3-D shape, while the other attempts to copy it and sketch or build the described 3-D object.
### Drawing Solids on a Flat Surface
Your drawing surface is paper, which is flat. When you draw a solid shape, the images are somewhat distorted to make them appear three-dimensional. It is a visual illusion. You will find here two techniques to help you.
#### Oblique Sketches
Here is a picture of a cube (Fig 13.11). It gives a clear idea of what the cube looks like when seen from the front. You do not see certain faces. In the drawn picture, the lengths are not equal, as they should be in a cube. Still, you are able to recognise it as a cube. Such a sketch of a solid is called an oblique sketch.
Fig 13.11
How can you draw such sketches? Let us attempt to learn the technique.
You need a squared (lines or dots) paper. Initially practising to draw on these sheets will later make it easy to sketch them on a plain sheet (without the aid of squared lines or dots!) Let us attempt to draw an oblique sketch of a $3 \times 3 \times 3$ (each edge is 3 units) cube (Fig 13.12).
Step 1
Draw the front face.
Step 2
Draw the opposite face. Sizes of the faces have to be the same, but the sketch is somewhat off-set from Step 1.
Step 3
Join the corresponding corners.
Step 4
Redraw using dotted lines for hidden edges. (It is a convention.)
The sketch is ready now.
Fig 13.12
In the oblique sketch above, did you note the following?
(i) The sizes of the front faces and its opposite are same; and
(ii) The edges, which are all equal in a cube, appear so in the sketch, though the actual measures of edges are not taken so.
You could now try to make an oblique sketch of a cuboid (remember the faces in this case are rectangles)
Note: You can draw sketches in which measurements also agree with those of a given solid. To do this we need what is known as an isometric sheet. Let us try to make a cuboid with dimensions $4 \mathrm{~cm}$ length, $3 \mathrm{~cm}$ breadth and $3 \mathrm{~cm}$ height on given isometric sheet.
#### Isometric Sketches
Have you seen an isometric dot sheet? (A sample is given at the end of the book). Such a sheet divides the paper into small equilateral triangles made up of dots or lines. To draw sketches in which measurements also agree with those of the solid, we can use isometric dot sheets. [Given on inside of the back cover (3rd cover page).]
Let us attempt to draw an isometric sketch of a cuboid of dimensions $4 \times 3 \times 3$ (which means the edges forming length, breadth and height are 4, 3, 3 units respectively) (Fig 13.13).
Step 1
Draw a rectangle to show the front face.
Step 2
Draw four parallel line segments of length 3 starting from the four corners of the rectangle.
Step 3
Connect the matching corners with appropriate line segments.
Step 4
This is an isometric sketch of the cuboid.
Fig 13.13
Note that the measurements are of exact size in an isometric sketch; this is not so in the case of an oblique sketch.
Example 1 Here is an oblique sketch of a cuboid [Fig 13.14(i)]. Draw an isometric sketch that matches this drawing.
## Solution
Here is the solution [Fig 13.14(ii)]. Note how the measurements are taken care of.
Fig 13.14 (ii)
How many units have you taken along (i) 'length'? (ii) 'breadth'? (iii) 'height'? Do they match with the units mentioned in the oblique sketch?
## ExERCISE 13.2
1. Use isometric dot paper and make an isometric sketch for each one of the given shapes:
(i)
(iii)
(ii)
(iv)
Fig 13.15
2. The dimensions of a cuboid are $5 \mathrm{~cm}, 3 \mathrm{~cm}$ and $2 \mathrm{~cm}$. Draw three different isometric sketches of this cuboid.
3. Three cubes each with $2 \mathrm{~cm}$ edge are placed side by side to form a cuboid. Sketch an oblique or isometric sketch of this cuboid.
4. Make an oblique sketch for each one of the given isometric shapes:
5. Give (i) an oblique sketch and (ii) an isometric sketch for each of the following:
(a) A cuboid of dimensions $5 \mathrm{~cm}, 3 \mathrm{~cm}$ and $2 \mathrm{~cm}$. (Is your sketch unique?)
(b) A cube with an edge $4 \mathrm{~cm}$ long.
An isometric sheet is attached at the end of the book. You could try to make on it some cubes or cuboids of dimensions specified by your friend.
#### Visualising Solid Objects
## Do This
Sometimes when you look at combined shapes, some of them may be hidden from your view.
Here are some activities you could try in your free time to help you visualise some solid objects and how they look. Take some cubes and arrange them as shown in Fig 13.16.
(i)
(ii)
(iii)
Fig 13.16
Now ask your friend to guess how many cubes there are when observed from the view shown by the arrow mark.
## TRY These
Try to guess the number of cubes in the following arrangements (Fig 13.17).
(i)
(ii)
(iii)
Fig 13.17
Such visualisation is very helpful. Suppose you form a cuboid by joining such cubes. You will be able to guess what the length, breadth and height of the cuboid would be.
Fig 13.18
Example 2 If two cubes of dimensions $2 \mathrm{~cm}$ by $2 \mathrm{~cm}$ by $2 \mathrm{~cm}$ are placed side by side, what would the dimensions of the resulting cuboid be?
Solution As you can see (Fig 13.18), when kept side by side, the length is the only measurement which increases; it becomes $2+2=4 \mathrm{~cm}$.
The breadth $=2 \mathrm{~cm}$ and the height $=2 \mathrm{~cm}$.
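The same reasoning extends to any number of identical cubes placed in a row: only the length adds up, while the breadth and height stay equal to the edge of one cube. The Python sketch below is an added illustration (it is not part of the textbook), and the function name is made up for this example:

```python
# Dimensions of the cuboid formed by placing `count` cubes of edge `edge` cm in a row.
def cuboid_from_cubes(edge, count):
    length = edge * count  # lengths add up along the row
    breadth = edge         # unchanged
    height = edge          # unchanged
    return length, breadth, height

print(cuboid_from_cubes(2, 2))  # two 2 cm cubes -> (4, 2, 2), as in Example 2
print(cuboid_from_cubes(3, 2))  # two 3 cm cubes -> (6, 3, 3)
```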
## TRY THESE
1. Two dice are placed side by side as shown (Fig 13.19): Can you say what the total would be on the face opposite to
(a) $5+6$
(b) $4+3$
(Remember that in a die sum of numbers on opposite faces is 7)
2. Three cubes each with $2 \mathrm{~cm}$ edge are placed side by side to form a cuboid. Try to make an oblique sketch and say what could be its length, breadth and height.
### Viewing Different Sections of a Solid
Now let us see how an object which is in 3-D can be viewed in different ways.
#### One Way to View an Object is by Cutting or Slicing
## Slicing game
Here is a loaf of bread (Fig 13.20). It is like a cuboid with a square face. You 'slice' it with a knife.
Fig 13.20
When you give a 'vertical' cut, you get several pieces, as shown in the Figure 13.20. Each face of the piece is a square! We call this face a 'cross-section' of the whole bread. The cross section is nearly a square in this case.
Beware! If your cut is not 'vertical' you may get a different cross section! Think about it. The boundary of the cross-section you obtain is a plane curve. Do you notice it?
## A kitchen play
Have you noticed cross-sections of some vegetables when they are cut for the purposes of cooking in the kitchen? Observe the various slices and get aware of the shapes that result as cross-sections.
## Play this
Make clay (or plasticine) models of the following solids and make vertical or horizontal cuts. Draw rough sketches of the cross-sections you obtain. Name them wherever you can.
Fig 13.21
## Exercise 13.3
1. What cross-sections do you get when you give a
(i) vertical cut
(ii) horizontal cut
to the following solids?
(a) A brick
(b) A round apple
(c) A die
(d) A circular pipe
(e) An ice cream cone
#### Another Way is by Shadow Play
## A shadow play
Shadows are a good way to illustrate how three-dimensional objects can be viewed in two dimensions. Have you seen a shadow play? It is a form of entertainment using solid articulated figures in front of an illuminated back-drop to create the illusion of moving
images. It makes some indirect use of ideas in Mathematics.
You will need a source of light and a few solid shapes for this activity. (If you have an overhead projector, place the solid under the lamp and do these investigations.)
Keep a torchlight, right in front of a Cone. What type of shadow does it cast on the screen? (Fig 13.23)
The solid is three-dimensional; what is the dimension of the shadow?
If, instead of a cone, you place a cube in the above game, what type of shadow will
you get?
Experiment with different positions of the source of light and with different positions of the solid object. Study their effects on the shapes and sizes of the shadows you get.
Here is another funny experiment that you might have tried already: Place a circular plate in the open when the Sun at the noon time is just right above it as shown in Fig 13.24 (i). What is the shadow that you obtain?
Will it be the same during
(a) mornings?
(b) evenings?
Fig 13.24 (i) - (iii)
Study the shadows in relation to the position of the Sun and the time of observation.
## ExERCISE 13.4
1. A bulb is kept burning just right above the following solids. Name the shape of the shadows obtained in each case. Attempt to give a rough sketch of the shadow. (You may try to experiment first and then answer these questions).
A ball
(i)
A cylindrical pipe
(ii)
A book
(iii)
2. Here are the shadows of some 3-D objects, when seen under the lamp of an overhead projector. Identify the solid(s) that match each shadow. (There may be multiple answers for these!)
A circle
(i)
A square
(ii)
A triangle
(iii)
A rectangle
(iv)
3. Examine if the following are true statements:
(i) The cube can cast a shadow in the shape of a rectangle.
(ii) The cube can cast a shadow in the shape of a hexagon.
#### A Third Way is by Looking at it from Certain Angles to Get Different Views
One can look at an object standing in front of it or by the side of it or from above. Each time one will get a different view (Fig 13.25).
Front view
Side view
Top view
Fig 13.25
Here is an example of how one gets different views of a given building. (Fig 13.26)
Building
Front view
Side view
Top view
Fig 13.26
You could do this for figures made by joining cubes.
Fig 13.27
Try putting cubes together and then making such sketches from different sides.
## TRY These
1. For each solid, the three views (1), (2), (3) are given. Identify for each solid the corresponding top, front and side views.
## Solid
Its views
2. Draw a view of each solid as seen from the direction indicated by the arrow.
(i)
(ii)
(iii)
## What have We Discussed?
1. The circle, the square, the rectangle, the quadrilateral and the triangle are examples of plane figures; the cube, the cuboid, the sphere, the cylinder, the cone and the pyramid are examples of solid shapes.
2. Plane figures are of two-dimensions (2-D) and the solid shapes are of three-dimensions (3-D).
3. The corners of a solid shape are called its vertices; the line segments of its skeleton are its edges; and its flat surfaces are its faces.
4. A net is a skeleton-outline of a solid that can be folded to make it. The same solid can have several types of nets.
5. Solid shapes can be drawn on a flat surface (like paper) realistically. We call this a 2-D representation of a 3-D solid.
6. Two types of sketches of a solid are possible:
(a) An oblique sketch does not have proportional lengths. Still it conveys all important aspects of the appearance of the solid.
(b) An isometric sketch is drawn on an isometric dot paper, a sample of which is given at the end of this book. In an isometric sketch of the solid, the measurements are kept proportional.
7. Visualising solid shapes is a very useful skill. You should be able to see 'hidden' parts of the solid shape.
8. Different sections of a solid can be viewed in many ways:
(a) One way is to view by cutting or slicing the shape, which would result in the cross-section of the solid.
(b) Another way is by observing a 2-D shadow of a 3-D shape.
(c) A third way is to look at the shape from different angles; the front-view, the side-view and the top-view can provide a lot of information about the shape observed.
## Symmetry
### Introduction
Symmetry is an important geometrical concept, commonly exhibited in nature and is used almost in every field of activity. Artists, professionals, designers of clothing or jewellery, car manufacturers, architects and many others make use of the idea of symmetry. The beehives, the flowers, the tree-leaves, religious symbols, rugs, and handkerchiefs - everywhere you find symmetrical designs.
Architecture
Engineering
Nature
You have already had a 'feel' of line symmetry in your previous class.
A figure has a line symmetry, if there is a line about which the figure may be folded so that the two parts of the figure will coincide.
You might like to recall these ideas. Here are some activities to help you.
Compose a picture-album showing symmetry.
Create some colourful Ink-dot devils
Make some symmetrical paper-cut designs. Enjoy identifying lines (also called axes) of symmetry in the designs you collect.
Let us now strengthen our ideas on symmetry further. Study the following figures in which the lines of symmetry are marked with dotted lines. [Fig 14.1 (i) to (iv)]
(i)
(ii)
(iii)
(iv)
Fig 14.1
### Lines of Symmetry for Regular Polygons
You know that a polygon is a closed figure made of several line segments. The polygon made up of the least number of line segments is the triangle. (Can there be a polygon that you can draw with still fewer line segments? Think about it).
A polygon is said to be regular if all its sides are of equal length and all its angles are of equal measure. Thus, an equilateral triangle is a regular polygon of three sides. Can you name the regular polygon of four sides?
An equilateral triangle is regular because each of its sides has same length and each of its angles measures $60^{\circ}$ (Fig 14.2).
Fig 14.2
A square is also regular because all its sides are of equal length and each of its angles is a right angle (i.e., $90^{\circ}$ ). Its diagonals are seen to be perpendicular bisectors of one another (Fig 14.3).
Fig 14.3
If a pentagon is regular, naturally, its sides should have equal length. You will, later on, learn that the measure of each of its angles is $108^{\circ}$ (Fig 14.4).
Fig 14.4
A regular hexagon has all its sides equal and each of its angles measures $120^{\circ}$. You will learn more of these figures later (Fig 14.5).
Fig 14.5
The regular polygons are symmetrical figures and hence their lines of symmetry are quite interesting.
Each regular polygon has as many lines of symmetry as it has sides [Fig 14.6 (i) - (iv)]. We say, they have multiple lines of symmetry.
three lines of symmetry
Equilateral Triangle
(i)
Square
(ii)
Regular Pentagon
(iii)
Regular Hexagon
(iv)
six lines of symmetry
Fig 14.6
Perhaps, you might like to investigate this by paper folding. Go ahead!
The concept of line symmetry is closely related to mirror reflection. A shape has line symmetry when one half of it is the mirror image of the other half (Fig 14.7). A mirror line, thus, helps to visualise a line of symmetry (Fig 14.8).
Is the dotted line a mirror line? No.
Fig 14.7
While dealing with mirror reflection, care is needed to note down the left-right changes in the orientation, as seen in the figure here (Fig 14.9).
(i)
(ii)
Fig 14.9
The shape is same, but the other way round!
## Play this punching game!
Fold a sheet into two halves. Punch a hole. You get two holes, symmetric about the fold.
Fig 14.10
The fold is a line (or axis) of symmetry. Study about punches at different locations on the folded paper and the corresponding lines of symmetry (Fig 14.10).
## EXERCISE 14.1
1. Copy the figures with punched holes and find the axes of symmetry for the following:
(a)
(e)
(b)
(f)
(c)
(g)
(d)
(i)
(j)
(k)
(l)
2. Given the line(s) of symmetry, find the other hole(s):
(a)
(b)
(c)
(d)
(e)
3. In the following figures, the mirror line (i.e., the line of symmetry) is given as a dotted line. Complete each figure performing reflection in the dotted (mirror) line. (You might perhaps place a mirror along the dotted line and look into the mirror for the image). Are you able to recall the name of the figure you complete?
(a)
(b)
(c)
(d)
(e)
(f)
4. The following figures have more than one line of symmetry. Such figures are said to have multiple lines of symmetry.
(a)
(b)
(c)
Identify multiple lines of symmetry, if any, in each of the following figures:
(a)
(b)
(c)
(d)
(e)
(f)
(g)
(h)
5. Copy the figure given here.
Take any one diagonal as a line of symmetry and shade a few more squares to make the figure symmetric about a diagonal. Is there more than one way to do that? Will the figure be symmetric about both the diagonals?
6. Copy the diagram and complete each shape to be symmetric about the mirror line(s):
(a)
(b)
(c)
7. State the number of lines of symmetry for the following figures:
(a) An equilateral triangle
(b) An isosceles triangle
(c) A scalene triangle
(d) A square
(e) A rectangle
(f) A rhombus
(g) A parallelogram
(h) A quadrilateral
(i) A regular hexagon
(j) A circle
8. What letters of the English alphabet have reflectional symmetry (i.e., symmetry related to mirror reflection) about.
(a) a vertical mirror
(b) a horizontal mirror
(c) both horizontal and vertical mirrors
9. Give three examples of shapes with no line of symmetry.
10. What other name can you give to the line of symmetry of
(a) an isosceles triangle?
(b) a circle?
### Rotational Symmetry
What do you say when the hands of a clock go round?
You say that they rotate. The hands of a clock rotate in only one direction, about a fixed point, the centre of the clock-face.
Rotation, like movement of the hands of a clock, is called a clockwise rotation; otherwise it is said to be anticlockwise.
What can you say about the rotation of the blades of a ceiling fan? Do they rotate clockwise or anticlockwise? Or do they rotate both ways?
If you spin the wheel of a bicycle, it rotates. It can rotate in either way: both clockwise and anticlockwise. Give three examples each for (i) a clockwise rotation and (ii) anticlockwise rotation.
When an object rotates, its shape and size do not change. The rotation turns an object about a fixed point. This fixed point is the centre of rotation. What is the centre of rotation of the hands of a clock? Think about it.
The angle of turning during rotation is called the angle of rotation. A full turn, you know, means a rotation of $360^{\circ}$. What is the degree measure of the angle of rotation for (i) a half-turn? (ii) a quarter-turn?
A half-turn means rotation by $180^{\circ}$; a quarter-turn is rotation by $90^{\circ}$.
When it is $12 \mathrm{O}^{\prime}$ clock, the hands of a clock are together. By 3 O'clock, the minute hand would have made three complete turns; but the hour hand would have made only a quarter-turn. What can you say about their positions at 6 O'clock?
Have you ever made a paper windmill? The Paper windmill in the picture looks symmetrical (Fig 14.11); but you do not find any line of symmetry. No folding can help you to have coincident halves. However if you rotate it by $90^{\circ}$ about the fixed point, the windmill will look exactly the same. We say the windmill has a rotational symmetry.
Fig 14.11
Fig 14.12
In a full turn, there are precisely four positions (on rotation through the angles $90^{\circ}$, $180^{\circ}, 270^{\circ}$ and $360^{\circ}$ ) when the windmill looks exactly the same. Because of this, we say it has a rotational symmetry of order 4.
Here is one more example for rotational symmetry.
Consider a square with $\mathrm{P}$ as one of its corners (Fig 14.13).
Let us perform quarter-turns about the centre of the square marked $x$.
(i)
(ii)
(iii)
(iv)
(v)
Fig 14.13
Fig 14.13 (i) is the initial position. Rotation by $90^{\circ}$ about the centre leads to Fig 14.13 (ii). Note the position of $\mathrm{P}$ now. Rotate again through $90^{\circ}$ and you get Fig 14.13 (iii). In this way, when you complete four quarter-turns, the square reaches its original position. It now looks the same as Fig 14.13 (i). This can be seen with the help of the positions taken by $\mathrm{P}$.
Thus a square has a rotational symmetry of order 4 about its centre. Observe that in this case,
(i) The centre of rotation is the centre of the square.
(ii) The angle of rotation is $90^{\circ}$.
(iii) The direction of rotation is clockwise.
(iv) The order of rotational symmetry is 4 .
## TRY These
1. (a) Can you now tell the order of the rotational symmetry for an equilateral triangle? (Fig 14.14)
(i)
(ii)
(iii)
(iv)
Fig 14.14
(b) How many positions are there at which the triangle looks exactly the same, when rotated about its centre by $120^{\circ}$ ?
2. Which of the following shapes (Fig 14.15) have rotational symmetry about the marked point?
(i)
(ii)
(iii)
(iv)
Fig 14.15
## Do THis
Draw two identical parallelograms, one-ABCD on a piece of paper and the other A' B' C' D' on a transparent sheet. Mark the points of intersection of their diagonals, $\mathrm{O}$ and $\mathrm{O}^{\prime}$ respectively (Fig 14.16).
Place the parallelograms such that $\mathrm{A}^{\prime}$ lies on $\mathrm{A}, \mathrm{B}^{\prime}$ lies on $\mathrm{B}$ and so on. O' then falls on $\mathrm{O}$. Stick a pin into the shapes at the point $\mathrm{O}$.
Now turn the transparent shape in the clockwise direction.
How many times do the shapes coincide in one full round?
What is the order of rotational symmetry?
The point where we have the pin is the centre of rotation. It is the intersecting point of the diagonals in this case.
Every object has a rotational symmetry of order 1, as it occupies the same position after a rotation of $360^{\circ}$ (i.e., one complete revolution). Such cases are of no interest to us.
You have around you many shapes, which possess rotational symmetry (Fig 14.17).
Fig 14.16
Fruit
(i)
Road sign
(ii)
Wheel
(iii)
Fig 14.17
For example, when you slice certain fruits, the cross-sections are shapes with rotational symmetry. This might surprise you when you notice them [Fig 14.17(i)].
Then there are many road signs that exhibit rotational symmetry. Next time when you walk along a busy road, try to identify such road signs and find about the order of rotational symmetry [Fig 14.17(ii)].
Think of some more examples for rotational symmetry. Discuss in each case:
(i) the centre of rotation (ii) the angle of rotation
(iii) the direction in which the rotation is effected and
(iv) the order of the rotational symmetry.
## TRY These
Give the order of the rotational symmetry of the given figures about the point marked $\times$ (Fig 14.18).
(i)
(ii)
(iii)
Fig 14.18
## EXercise 14.2
1. Which of the following figures have rotational symmetry of order more than 1 :
(a)
(b)
(c)
(d)
(e)
(f)
2. Give the order of rotational symmetry for each figure:
(a)
(e)
(b)
(f)
(c)
(g)
(d)
(h)
### Line Symmetry and Rotational Symmetry
Fig 14.19
You have been observing many shapes and their symmetries so far. By now you would have understood that some shapes have only line symmetry, some have only rotational symmetry and some have both line symmetry and rotational symmetry.
- For example, consider the square shape (Fig 14.19).
How many lines of symmetry does it have?
Does it have any rotational symmetry?
If 'yes', what is the order of the rotational symmetry?
Think about it.
The circle is the most perfect symmetrical figure, because it can be rotated around its centre through any angle and at the same time it has an unlimited number of lines of symmetry. Observe any circle pattern. Every line through the centre (that is every diameter) forms a line of (reflectional) symmetry and it has rotational symmetry around the centre for every angle.
## Do THIS
Some of the English alphabets have fascinating symmetrical structures. Which capital letters have just one line of symmetry (like $\mathbf{E}$ )? Which capital letters have a rotational symmetry of order 2 (like I)?
By attempting to think on such lines, you will be able to fill in the following table:
| $\begin{array}{c}\text { Alphabet } \\ \text { Letters }\end{array}$ | $\begin{array}{c}\text { Line } \\ \text { Symmetry }\end{array}$ | $\begin{array}{c}\text { Number of Lines of } \\ \text { Symmetry }\end{array}$ | $\begin{array}{c}\text { Rotational } \\ \text { Symmetry }\end{array}$ | $\begin{array}{c}\text { Order of Rotational } \\ \text { Symmetry }\end{array}$ |
| :---: | :---: | :---: | :---: | :---: |
| Z | No | 0 | Yes | 2 |
| S | | | | |
| H | Yes | | Yes | |
| O | Yes | | Yes | |
| E | Yes | | | |
| N | | | Yes | |
| C | | | | |
## EXercise 14.3
1. Name any two figures that have both line symmetry and rotational symmetry.
2. Draw, wherever possible, a rough sketch of
(i) a triangle with both line and rotational symmetries of order more than 1.
(ii) a triangle with only line symmetry and no rotational symmetry of order more than 1.
(iii) a quadrilateral with a rotational symmetry of order more than 1 but not a line symmetry.
(iv) a quadrilateral with line symmetry but not a rotational symmetry of order more than 1.
3. If a figure has two or more lines of symmetry, should it have rotational symmetry of order more than 1 ?
4. Fill in the blanks:
| Shape | Centre of Rotation | Order of Rotation | Angle of Rotation |
| :--- | :--- | :--- | :--- |
| Square | | | |
| Rectangle | | | |
| Rhombus | | | |
| $\begin{array}{l}\text { Equilateral } \\ \text { Triangle }\end{array}$ | | | |
| $\begin{array}{l}\text { Regular } \\ \text { Hexagon }\end{array}$ | | | |
| Circle | | | |
| Semi-circle | | | |
5. Name the quadrilaterals which have both line and rotational symmetry of order more than 1.
6. After rotating by $60^{\circ}$ about a centre, a figure looks exactly the same as its original position. At what other angles will this happen for the figure?
7. Can we have a rotational symmetry of order more than 1 whose angle of rotation is (i) $45^{\circ} ?$
(ii) $17^{\circ} ?$
## What have We Discussed?
1. A figure has line symmetry, if there is a line about which the figure may be folded so that the two parts of the figure will coincide.
2. Regular polygons have equal sides and equal angles. They have multiple (i.e., more than one) lines of symmetry.
3. Each regular polygon has as many lines of symmetry as it has sides.
| $\begin{array}{l}\text { Regular } \\ \text { Polygon }\end{array}$ | $\begin{array}{c}\text { Regular } \\ \text { hexagon }\end{array}$ | $\begin{array}{c}\text { Regular } \\ \text { pentagon }\end{array}$ | Square | $\begin{array}{c}\text { Equilateral } \\ \text { triangle }\end{array}$ |
| :--- | :---: | :---: | :---: | :---: |
| $\begin{array}{l}\text { Number of lines } \\ \text { of symmetry }\end{array}$ | 6 | 5 | 4 | 3 |
4. Mirror reflection leads to symmetry, under which the left-right orientation has to be taken care of.
5. Rotation turns an object about a fixed point.
This fixed point is the centre of rotation.
The angle by which the object rotates is the angle of rotation.
A half-turn means rotation by $180^{\circ}$; a quarter-turn means rotation by $90^{\circ}$. Rotation may be clockwise or anticlockwise.
6. If, after a rotation, an object looks exactly the same, we say that it has a rotational symmetry.
7. In a complete turn (of $360^{\circ}$ ), the number of times an object looks exactly the same is called the order of rotational symmetry. The order of symmetry of a square, for example, is 4 while, for an equilateral triangle, it is 3.
8. Some shapes have only one line of symmetry, like the letter E; some have only rotational symmetry, like the letter S; and some have both symmetries like the letter $\mathrm{H}$.
The study of symmetry is important because of its frequent use in day-to-day life and more because of the beautiful designs it can provide us.
## Visualising Solid Shapes
### Introduction: Plane Figures and Solid Shapes
In this chapter, you will classify figures you have seen in terms of what is known as dimension.
In our day to day life, we see several objects like books, balls, ice-cream cones etc., around us which have different shapes. One thing common about most of these objects is that they all have some length, breadth and height or depth.
That is, they all occupy space and have three dimensions.
Hence, they are called three dimensional shapes.
Do you remember some of the three dimensional shapes (i.e., solid shapes) we have seen in earlier classes?
## TRY These
Try to identify some objects shaped like each of these.
By a similar argument, we can say figures drawn on paper which have only length and breadth are called two dimensional (i.e., plane) figures. We have also seen some two dimensional figures in the earlier classes.
Match the 2 dimensional figures with the names (Fig 15.2):
(i)
(ii)
(iii)
(iv)
(v)
(a) Circle
(b) Rectangle
(c) Square
(d) Quadrilateral
(e) Triangle
Fig 15.2
Note: We can write 2-D in short for 2-dimension and 3-D in short for 3-dimension.
### Faces, Edges and Vertices
Do you remember the Faces, Vertices and Edges of solid shapes, which you studied earlier? Here you see them for a cube:
(i)
(ii)
(iii)
Fig 15.3
The 8 corners of the cube are its vertices. The 12 line segments that form the skeleton of the cube are its edges. The 6 flat square surfaces that are the skin of the cube are its faces.
## Do THIS
Complete the following table:
Table 15.1
| Faces $(\mathbf{F})$ | 6 |
| :--- | :---: |
| Edges $(\mathbf{E})$ | 12 |
| Vertices $(\mathbf{V})$ | 8 |
Can you see that, the two dimensional figures can be identified as the faces of the three dimensional shapes? For example a cylinder $\square$ has two faces which are circles, and a pyramid, shaped like this has triangles as its faces.
We will now try to see how some of these 3-D shapes can be visualised on a 2-D surface, that is, on paper.
In order to do this, we would like to get familiar with three dimensional objects closely. Let us try forming these objects by making what are called nets.
### Nets For Building 3-D Shapes
Take a cardboard box. Cut the edges to lay the box flat. You have now a net for that box. A net is a sort of skeleton-outline in 2-D [Fig 15.4 (i)], which, when folded [Fig 15.4 (ii)], results in a 3-D shape [Fig 15.4 (iii)].
(i)
(ii)
Here you got a net by suitably separating the edges. Is the reverse process possible?
Here is a net pattern for a box (Fig 15.5). Copy an enlarged version of the net and try to make the box by suitably folding and gluing together. (You may use suitable units). The box is a solid. It is a 3-D object with the shape of a cuboid.
Similarly, you can get a net for a cone by cutting a slit along its slant surface (Fig 15.6).
Cut along here
Fig 15.6
Cube
Cylinder
Cone
(i)
(iii)
Fig 15.7
We could also try to make a net for making a pyramid like the Great Pyramid in Giza (Egypt) (Fig 15.8). That pyramid has a square base and triangles on the four sides.
Fig 15.8
Fig 15.9
See if you can make it with the given net (Fig 15.9).
## Try These
Here you find four nets (Fig 15.10). There are two correct nets among them to make a tetrahedron. See if you can work out which nets will make a tetrahedron.
Fig 15.10
## Exercise 15.1
1. Identify the nets which can be used to make cubes (cut out copies of the nets and try it):
(i)
(ii)
(iii)
(iv)
(v)
(vi)
2. Dice are cubes with dots on each face. Opposite faces of a die always have a total of seven dots on them.
Here are two nets to make dice (cubes); the numbers inserted in each square indicate the number of dots in that box.
Insert suitable numbers in the blanks, remembering that the number on the opposite faces should total to 7 .
3. Can this be a net for a die?
Explain your answer.
4. Here is an incomplete net for making a cube. Complete it in at least two different ways. Remember that a cube has six faces. How many are there in the net here? (Give two separate diagrams. If you like, you may use a squared sheet for easy manipulation.)
5. Match the nets with appropriate solids:
(a)
(b)
(c)
(d)
(i)
(ii)
(iii)
(iv)
## Play this game
You and your friend sit back-to-back. One of you reads out a net to make a 3-D shape, while the other attempts to copy it and sketch or build the described 3-D object.
### Drawing Solids on a Flat Surface
Your drawing surface is paper, which is flat. When you draw a solid shape, the images are somewhat distorted to make them appear three-dimensional. It is a visual illusion. You will find here two techniques to help you.
#### Oblique Sketches
Here is a picture of a cube (Fig 15.11). It gives a clear idea of what the cube looks like when seen from the front. You do not see certain faces. In the drawn picture, the lengths are not equal, as they should be in a cube. Still, you are able to recognise it as a cube. Such a sketch of a solid is called an oblique sketch.
Fig 15.11
How can you draw such sketches? Let us attempt to learn the technique.
You need a squared (lines or dots) paper. Initially practising to draw on these sheets will later make it easy to sketch them on a plain sheet (without the aid of squared lines or dots!) Let us attempt to draw an oblique sketch of a $3 \times 3 \times 3$ (each edge is 3 units) cube (Fig 15.12).
Step 1
Draw the front face.
Step 2
Draw the opposite face. Sizes of the faces have to be the same, but the sketch is somewhat off-set from Step 1.
Step 3
Join the corresponding corners.
Step 4
Redraw using dotted lines for hidden edges. (It is a convention.)
The sketch is ready now.
Fig 15.12
In the oblique sketch above, did you note the following?
(i) The sizes of the front faces and its opposite are same; and
(ii) The edges, which are all equal in a cube, appear so in the sketch, though the actual measures of edges are not taken so.
You could now try to make an oblique sketch of a cuboid (remember the faces in this case are rectangles)
Note: You can draw sketches in which measurements also agree with those of a given solid. To do this we need what is known as an isometric sheet. Let us try to make a cuboid with dimensions $4 \mathrm{~cm}$ length, $3 \mathrm{~cm}$ breadth and $3 \mathrm{~cm}$ height on given isometric sheet.
#### Isometric Sketches
Have you seen an isometric dot sheet? (A sample is given at the end of the book). Such a sheet divides the paper into small equilateral triangles made up of dots or lines. To draw sketches in which measurements also agree with those of the solid, we can use isometric dot sheets. [Given on inside of the back cover (3rd cover page).]
Let us attempt to draw an isometric sketch of a cuboid of dimensions $4 \times 3 \times 3$ (which means the edges forming length, breadth and height are 4, 3, 3 units respectively) (Fig 15.13).
Step 1
Draw a rectangle to show the front face.
Step 2
Draw four parallel line segments of length 3 starting from the four corners of the rectangle.
Step 3
Connect the matching corners with appropriate line segments.
Step 4
This is an isometric sketch of the cuboid.
Fig 15.13
Note that the measurements are of exact size in an isometric sketch; this is not so in the case of an oblique sketch.
Example 1 Here is an oblique sketch of a cuboid [Fig 15.14(i)]. Draw an isometric sketch that matches this drawing.
Here is the solution [Fig 15.14(ii)]. Note how the measurements are taken care of.
Fig 15.14 (i)
Fig 15.14 (ii)
How many units have you taken along (i) 'length'? (ii) 'breadth'? (iii) 'height'? Do they match with the units mentioned in the oblique sketch?
## Exercise 15.2
1. Use isometric dot paper and make an isometric sketch for each one of the given shapes:
(i)
(iii)
Fig 15.15
(ii)
(iv)
2. The dimensions of a cuboid are $5 \mathrm{~cm}, 3 \mathrm{~cm}$ and $2 \mathrm{~cm}$. Draw three different isometric sketches of this cuboid.
3. Three cubes each with $2 \mathrm{~cm}$ edge are placed side by side to form a cuboid. Sketch an oblique or isometric sketch of this cuboid.
4. Make an oblique sketch for each one of the given isometric shapes:
5. Give (i) an oblique sketch and (ii) an isometric sketch for each of the following:
(a) A cuboid of dimensions $5 \mathrm{~cm}, 3 \mathrm{~cm}$ and $2 \mathrm{~cm}$. (Is your sketch unique?)
(b) A cube with an edge $4 \mathrm{~cm}$ long.
An isometric sheet is attached at the end of the book. You could try to make on it some cubes or cuboids of dimensions specified by your friend.
#### Visualising Solid Objects
## Do THIs
Sometimes when you look at combined shapes, some of them may be hidden from your view.
Here are some activities you could try in your free time to help you visualise some solid objects and how they look. Take some cubes and arrange them as shown in Fig 15.16.
(i)
(ii)
(iii)
Fig 15.16
Now ask your friend to guess how many cubes there are when observed from the view shown by the arrow mark.
## TRY These
Try to guess the number of cubes in the following arrangements (Fig 15.17).
(i)
(ii)
(iii)
Fig 15.17
Such visualisation is very helpful. Suppose you form a cuboid by joining such cubes. You will be able to guess what the length, breadth and height of the cuboid would be.
Example 2 If two cubes of dimensions $2 \mathrm{~cm}$ by $2 \mathrm{~cm}$ by $2 \mathrm{~cm}$ are placed side by side, what would the dimensions of the resulting cuboid be?
Solution As you can see (Fig 15.18) when kept side by side, the length is the only measurement which increases, it becomes $2+2=4 \mathrm{~cm}$.
Fig 15.18
The breadth $=2 \mathrm{~cm}$ and the height $=2 \mathrm{~cm}$.
## Try These
1. Two dice are placed side by side as shown: Can you say what the total would be on the face opposite to
(a) $5+6$
(b) $4+3$
(Remember that in a die sum of numbers on opposite faces is 7 )
Fig 15.19
2. Three cubes each with $2 \mathrm{~cm}$ edge are placed side by side to form a cuboid. Try to make an oblique sketch and say what could be its length, breadth and height.
### Viewing Different Sections of a Solid
Now let us see how an object which is in 3-D can be viewed in different ways.
#### One Way to View an Object is by Cutting or Slicing
## Slicing game
Here is a loaf of bread (Fig 15.20). It is like a cuboid with a square face. You 'slice' it with a knife.
When you give a 'vertical' cut, you get several pieces, as shown in the Figure 15.20. Each face of the piece is a square! We call this face a 'cross-section' of the whole bread. The cross section is nearly a square in this case.
Beware! If your cut is not 'vertical' you may get a different cross section! Think about it. The boundary of the cross-section you obtain is a plane curve. Do you notice it?
Fig 15.20
## A kitchen play
Have you noticed cross-sections of some vegetables when they are cut for the purposes of cooking in the kitchen? Observe the various slices and get aware of the shapes that result as cross-sections.
## Play this
Make clay (or plasticine) models of the following solids and make vertical or horizontal cuts. Draw rough sketches of the cross-sections you obtain. Name them wherever you can.
Fig 15.21
## EXercise 15.3
1. What cross-sections do you get when you give a
(i) vertical cut
(ii) horizontal cut
to the following solids?
(a) A brick
(b) A round apple
(c) A die
(d) A circular pipe
(e) An ice cream cone
#### Another Way is by Shadow Play
## A shadow play
Shadows are a good way to illustrate how three-dimensional objects can be viewed in two dimensions. Have you seen a shadow play? It is a form of entertainment using solid articulated figures in front of an illuminated back-drop to create the illusion of moving images. It makes some indirect use of ideas in Mathematics.
Fig 15.22
Fig 15.23
You will need a source of light and a few solid shapes for this activity. (If you have an overhead projector, place the solid under the lamp and do these investigations.)
Keep a torchlight, right in front of a Cone. What type of shadow does it cast on the screen? (Fig 15.23)
The solid is three-dimensional; what is the dimension of the shadow?
If, instead of a cone, you place a cube in the above game, what type of
shadow will you get?
Experiment with different positions of the source of light and with different positions of the solid object. Study their effects on the shapes and sizes of the shadows you get.
Here is another funny experiment that you might have tried already: Place a circular plate in the open when the Sun at the noon time is just right above it as shown in Fig 15.24 (i). What is the shadow that you obtain?
Will it be the same during (a) early mornings? (b) evenings?
Fig 15.24 (i) - (iii)
Study the shadows in relation to the position of the Sun and the time of observation.
## EXercise 15.4
1. A bulb is kept burning just right above the following solids. Name the shape of the shadows obtained in each case. Attempt to give a rough sketch of the shadow. (You may try to experiment first and then answer these questions).
A ball
(i)
A cylindrical pipe
A book
(ii)
2. Here are the shadows of some 3-D objects, when seen under the lamp of an overhead projector. Identify the solid(s) that match each shadow. (There may be multiple answers for these!)
A circle
A square
A triangle
A rectangle
(i)
(ii)
(iii)
3. Examine if the following are true statements:
(i) The cube can cast a shadow in the shape of a rectangle.
(ii) The cube can cast a shadow in the shape of a hexagon.
#### A Third Way is by Looking at it from Certain Angles to Get Different Views
One can look at an object standing in front of it or by the side of it or from above. Each time one will get a different view (Fig 15.25).
Front view
Side view
Top view
Fig 15.25
Here is an example of how one gets different views of a given building. (Fig 15.26)
Building
Front view
Side view
Top view
Fig 15.26
You could do this for figures made by joining cubes.
Fig 15.27
Try putting cubes together and then making such sketches from different sides.
## Try These
1. For each solid, the three views (1), (2), (3) are given. Identify for each solid the corresponding top, front and side views.
## Solid
## Its views
(2)
Front
(3)
Front
2. Draw a view of each solid as seen from the direction indicated by the arrow.
(i)
(ii)
## What have We Discussed?
1. The circle, the square, the rectangle, the quadrilateral and the triangle are examples of plane figures; the cube, the cuboid, the sphere, the cylinder, the cone and the pyramid are examples of solid shapes.
2. Plane figures are of two-dimensions (2-D) and the solid shapes are of three-dimensions (3-D).
3. The corners of a solid shape are called its vertices; the line segments of its skeleton are its edges; and its flat surfaces are its faces.
4. A net is a skeleton-outline of a solid that can be folded to make it. The same solid can have several types of nets.
5. Solid shapes can be drawn on a flat surface (like paper) realistically. We call this a 2-D representation of a 3-D solid.
6. Two types of sketches of a solid are possible:
(a) An oblique sketch does not have proportional lengths. Still it conveys all important aspects of the appearance of the solid.
(b) An isometric sketch is drawn on an isometric dot paper, a sample of which is given at the end of this book. In an isometric sketch of the solid, the measurements are kept proportional.
7. Visualising solid shapes is a very useful skill. You should be able to see 'hidden' parts of the solid shape.
8. Different sections of a solid can be viewed in many ways:
(a) One way is to view by cutting or slicing the shape, which would result in the cross-section of the solid.
(b) Another way is by observing a 2-D shadow of a 3-D shape.
(c) A third way is to look at the shape from different angles; the front-view, the side-view and the top-view can provide a lot of information about the shape observed.
\begin{document}
\title{Generalised Interpolation by Solving Recursion-Free
Horn Clauses}
\begin{abstract}
In this paper we present \textsc{InterHorn}\xspace, a solver for recursion-free Horn clauses. The main application domain of \textsc{InterHorn}\xspace lies in solving interpolation problems arising in software verification. We show how a range of interpolation problems, including path, transition, nested, state/transition and well-founded interpolation can be handled directly by \textsc{InterHorn}\xspace. By detailing these interpolation problems and their Horn clause representations, we hope to encourage the emergence of a common back-end interpolation interface useful for diverse verification tools. \end{abstract}
\section{Introduction}
Interpolation is a key ingredient of a wide range of software verification tools that is used to compute approximations of sets and relations over program states, see e.g.~\cite{McMillanCAV06,AlbarghouthiVMCAI12,AlbarghouthiCAV12,WeissenbacherCAV11,SharyginaATVA12,HeizmannPOPL10,POPL11,PLDI12,TACAS12,JaffarCP09}. These approximations come in different forms, e.g., as path interpolation~\cite{JhalaPOPL04}, transition interpolation~\cite{JhalaCAV05}, nested interpolation~\cite{HeizmannPOPL10}, state/transition interpolation~\cite{AlbarghouthiVMCAI12}, or well-founded interpolation~\cite{SAS05}. As a result algorithms and tools for solving interpolation problems have become an important area of research contributing to the advances in state-of-the-art of software verification.
In this paper we present \textsc{InterHorn}\xspace, a solver for constraints in form of recursion-free Horn clauses that can be applied on various interpolation problems occurring in software verification. \textsc{InterHorn}\xspace takes as input clauses whose literals are either assertions in the theory of linear arithmetic or unknown relations. In addition, \textsc{InterHorn}\xspace also accepts well-foundedness conditions on the unknown relations. The set of input clauses can represent either a DAG or a tree of dependencies between interpolants to be discovered. The output of \textsc{InterHorn}\xspace is either an interpretation of unknown relations in terms of linear arithmetic assertions that turns the input clauses into valid implications over rationals/reals and satisfies well-foundedness conditions, or the statement that no such interpretation exists. \textsc{InterHorn}\xspace is sound and complete for clauses without well-foundedness conditions. (\textsc{InterHorn}\xspace is incomplete when well-foundedness conditions are present, since it relies on synthesis of linear ranking functions.) \textsc{InterHorn}\xspace is a part of a general solver for recursive Horn clauses~\cite{PLDI12} and has already demonstrated its practicability in a software verification competition~\cite{SVCOMP12}. The main novelty offered by \textsc{InterHorn}\xspace wrt.\ existing interpolating procedures~\cite{GriggioTACAS11,PrincessIJCAR10,OpenSmtTACAS10,HoenickeSPIN12} lies in the ability to declaratively specify the interpolation problem as a set of recursion-free Horn clauses and the support for well-foundedness conditions.
\section{Interpolation by solving recursion-free Horn clauses} \label{sec-illustration}
In this section we provide examples of how interpolation related problems arising in software verification can be formulated as solving of recursion-free Horn clauses. This collection of examples is not exhaustive and serves as an illustration of the approach. We omit any description of how interpolation is used by verification methods, since it is out of scope of this paper, and rather focus on the form of interpolation problems and their representation as recursion-free Horn clauses. Further examples can be found in the literature, e.g.,~\cite{PLDI12}, as well as are likely to emerge in the future.
\paragraph{\bf Path interpolation}
Interpolation can be used for the approximation of sets of states reachable by a program along a given path, see e.g.~\cite{JhalaPOPL04}. A flat program (transition system) consists of program variables $v$, an initiation condition $\mathit{init}(v)$, a set of program transitions $\setOf{\mathit{next}_1(v, v'), \dots, \mathit{next}_N(v, v')}$, and a description of safe states~$\mathit{safe}(v)$. A path is a sequence of program transitions.
Given a path $\mathit{next}_1(v, v'), \dots, \mathit{next}_n(v, v')$, the path interpolation problem is to find assertions $I_0(v), I_1(v), \dots, I_n(v)$ such that:
\begin{equation*}
\begin{array}[t]{@{}l@{\qquad}l@{}}
\mathit{init}(v) \rightarrow I_0(v), & \\[\jot]
I_{k-1}(v) \land \mathit{next}_k(v, v') \rightarrow I_k(v'),
& \text{for each $k \in 1..n$} \\[\jot]
I_n(v) \rightarrow \mathit{safe}(v). &
\end{array} \end{equation*}
We observe that there are no recursive dependencies induced by the above implications between the interpolants to be discovered, i.e., $I_0(v)$ does not depend on any other interpolant, while $I_1(v)$ depends on $I_0(v)$, and $I_n(v)$ depends on $I_0(v), \dots, I_{n-1}(v)$. \textsc{InterHorn}\xspace leverages such absence of dependency cycles in our solving algorithm, see Section~\ref{sec-algo}.
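As an aside that is not part of \textsc{InterHorn}\xspace itself, a recursion-free system of this shape can also be handed to any Horn solver. The following sketch (our own, assuming the \texttt{z3py} bindings and their \texttt{HORN} logic) encodes a one-transition path interpolation problem for a toy program with $\mathit{init}(x) \equiv (x=0)$, the transition $x'=x+1$ and $\mathit{safe}(x) \equiv (x\geq 0)$; the program, the variable names and the choice of solver are illustrative assumptions.
\begin{verbatim}
from z3 import (Ints, Function, IntSort, BoolSort,
                ForAll, Implies, And, SolverFor)

x, xp = Ints('x xp')
I0 = Function('I0', IntSort(), BoolSort())   # interpolant after init
I1 = Function('I1', IntSort(), BoolSort())   # interpolant after next_1

s = SolverFor("HORN")
s.add(ForAll(x, Implies(x == 0, I0(x))))                          # init -> I0
s.add(ForAll([x, xp], Implies(And(I0(x), xp == x + 1), I1(xp))))  # I0 and next_1 -> I1
s.add(ForAll(x, Implies(I1(x), x >= 0)))                          # I1 -> safe

print(s.check())   # sat: the clauses are solvable
print(s.model())   # one possible interpretation of I0, I1
\end{verbatim}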
\paragraph{\bf Transition interpolation}
Interpolation can be applied to compute over-approximation of program transitions, see e.g.~\cite{JhalaCAV05}. Given a path $\mathit{next}_1(v, v'), \dots, \mathit{next}_n(v, v')$, a transition interpolation problem is to find $T_1(v, v'), \dots, T_n(v, v')$ such that:
\begin{equation*}
\begin{array}[t]{@{}l@{\qquad}l@{}}
\mathit{next}_k(v, v') \rightarrow T_k(v, v'),
& \text{for each $k \in 1..n$} \\[\jot]
\mathit{init}(v_0) \land T_1(v_0, v_1) \land \dots \land
T_n(v_{n-1}, v_n) \rightarrow \mathit{safe}(v_n). &
\end{array} \end{equation*}
Again, we note there are no recursive dependencies between the assertions to be computed.
\paragraph{\bf Well-founded interpolation}
We can also use interpolation in combination with additional well-foundedness constraints when proving program termination, see e.g.~\cite{SAS05}. We assume a path $\mathit{stem}_1(v, v'), \dots, \mathit{stem}_m(v, v')$ that contains transitions leading to a loop entry point, and a path $\mathit{loop}_1(v, v'), \dots, \mathit{loop}_n(v, v')$ around the loop. A well-founded interpolation problem amounts to finding $I_0(v), I_1(v), \dots, I_m(v)$, and $T_1(v,v'), \dots, T_n(v,v')$ such that:
\begin{equation*}
\begin{array}[t]{@{}l@{\qquad}l@{}}
\mathit{init}(v) \rightarrow I_0(v), & \\[\jot]
I_{k-1}(v) \land \mathit{stem}_k(v, v') \rightarrow I_k(v'),
& \text{for each $k \in 1..m$}\\[\jot]
I_m(v) \land \mathit{loop}_1(v, v') \rightarrow T_1(v, v'), & \\[\jot]
T_{k-1}(v, v') \land \mathit{loop}_k(v', v'') \rightarrow T_k(v, v''),
& \text{for each $k\in 2..n$} \\[\jot]
\mathit{wf}(T_n(v, v')). &
\end{array} \end{equation*}
Note that the last clause, which is a unit clause, requires that the relation $T_n(v, v')$ is well-founded, i.e., does not admit any infinite chains.
\iffalse \paragraph{\bf Example for well-founded interpolation}
We use as example the program from Figure~2 of \cite{SAS05}. The program has two nested loops, with program variables $x$~, $y$ and the program counter $pc$~. The initiation condition is $\mathit{init}(v)=(pc=\ell_0)$ and its transition relation follows.
\begin{equation*}
\begin{array}[t]{l@{\;\;=\;\;}l}
\mathit{next}_1(v,v') & (x\geq0 \land x'=x+1 \land y'=1 \land pc=\ell_0
\land pc'=\ell_1)\\[\jot]
\mathit{next}_2(v,v') & (y>x \land x'=x-2 \land y'=y \land pc=\ell_1
\land pc'=\ell_0)\\[\jot]
\mathit{next}_3(v,v') & (y\leq x \land x'=x \land y'=y+1 \land
pc=\ell_1 \land pc'=\ell_1)
\end{array} \end{equation*}
The first interpolation problem obtained from the abstraction refinement algorithm \cite{SAS05} corresponds to a counterexample potentially looping at location $\ell_1$~. The interpolation problem required for checking the spuriousness of the counterexample corresponds to the following set of Horn clauses:
\begin{equation*}
\begin{array}[t]{ll}
\mathit{init}(v) \rightarrow I_0(v),~ \\[\jot]
I_0(v) \land \mathit{next}_1(v,v') \rightarrow I_1(v'),~ \\[\jot]
I_1(v) \land \mathit{next}_3(v,v') \rightarrow T_1(v,v'),~
\\[\jot]
\mathit{wf}(T_1(v,v'))~.
\end{array} \end{equation*}
The solution computed by \textsc{InterHorn}\xspace ensures that this counterexample is spurious: $I_0(v) = \mathit{true}$~, $I_1(v) = \mathit{true}$~ and $T_1(v,v')=(x \geq y \land y'-y>x'-x)$~. A second set of Horn clauses corresponds to a counterexample potentially looping between locations $\ell_1$ and $\ell_0$~:
\begin{equation*}
\begin{array}[t]{ll}
\mathit{init}(v) \rightarrow I_0(v)~, \\[\jot]
I_0(v) \land \mathit{next}_1(v,v') \rightarrow I_1(v')~, \\[\jot]
I_1(v) \land \mathit{next}_2(v,v') \rightarrow T_1(v,v')~,
\\[\jot]
T_1(v,v') \land \mathit{next}_1(v',v'') \rightarrow T_2(v,
v'')~, \\[\jot]
\mathit{wf}(T_2(v,v'))~.
\end{array} \end{equation*}
\noindent \textsc{InterHorn}\xspace computes the following solution: $I_0(v) = \mathit{true}$~, $I_1(v) = \mathit{true}$~, $T_1(v,v') = (x'=x-2)$ and $T_2(v,v') = (x\geq 2 \land x'=x-1)$~. Eventually, after solving similar interpolation problems, the given example can be proven terminating. \fi
\paragraph{\bf Search tree interpolation}
Interpolation has been used for optimizing the search for solutions of a constraint programming goal \cite{JaffarCP09}. That work considers the case where the search tree corresponds to the state space exploration of an imperative program in order to prove some safety property. A node of the tree is labeled with a formula $s(v)$ that is a symbolic representation of the reachable states at a program point. The tree structure corresponds to program transitions: a node $n$ has as many children as there are transitions starting at the program point corresponding to $n$~, i.e., $\mathit{next}_1(v,v'), \dots, \mathit{next}_m(v,v')$~. To optimize the search, symbolic states are generalized by computing interpolants in a post-order tree traversal. During the traversal, for a node $n$~, initially labeled $s_0$~, with children labeled $s_1$ to $s_m$~, a generalized label of the node $n$ is computed as $I_1(v) \land \dots \land I_m(v)$ and is subject to the following implications:
\begin{equation*}
\begin{array}[t]{@{}l@{\qquad}l@{}}
s_0(v) \rightarrow I_1(v) \land \dots \land I_m(v) & \\[\jot]
I_k(v) \rightarrow (\mathit{next}_k(v,v') \rightarrow s_k(v'))
& \text{for each $k \in 1..m$}
\end{array} \end{equation*}
\noindent These implications correspond to the following recursion-free Horn clauses,
\begin{equation*}
\begin{array}[t]{@{}l@{\qquad}l@{}}
s_0(v) \rightarrow I_k(v), & \text{for each $k \in 1..m$} \\[\jot]
I_k(v) \rightarrow (\exists v': \mathit{next}_k(v,v') \rightarrow s_k(v')),
& \text{for each $k \in 1..m$}
\end{array} \end{equation*}
\noindent where the quantifier elimination in $\exists v': \mathit{next}_k(v,v') \rightarrow s_k(v')$ can be automated for $\mathit{next}_k$ and $s_k$ background constraints in the theory of linear arithmetic.
\iffalse \paragraph{\bf Example for search tree interpolation}
We use as example the program from Figure~2 of \cite{JaffarCP09}. The program has one variable $x$. One interpolation problem obtained from the search tree traversal algorithm \cite{JaffarCP09} corresponds to the following set of Horn clauses:
\begin{equation*}
\begin{array}[t]{ll}
x=0 \rightarrow I_{D1}(v) \land I_{D2}(v), \\[\jot]
I_{D1}(v) \rightarrow (\exists v': x'=x \rightarrow x'\leq 7), \\[\jot]
I_{D2}(v) \rightarrow (\exists v': x'=x+4 \rightarrow x'\leq 7),
\end{array} \end{equation*}
\noindent where the formula $\mathit{node}(D) = (x=0)$ appears in the body of the first clause, while the formulas $\mathit{node}(F) = (x \leq 7)$ and $\mathit{node}(G) = (x \leq 7)$ appear in the head of the second and respectively third clause. Our implementation produces different solutions than those optimal for optimizing the search tree traversal, however we note that the interpolation problems obtained from the algorithm \cite{JaffarCP09} are expressible as recursion-free Horn clauses and solvable by \textsc{InterHorn}\xspace. \fi
\paragraph{\bf Nested interpolation}
For programs with procedures, interpolation can compute over-approximations of sets of program states that are expressed over variables that are in scope at respective program locations, see e.g.~\cite{JhalaPOPL04,HeizmannPOPL10}. A procedural program consists of a set of procedures $P$ including the main procedure $\mathit{main}$, global program variables $g$ that include a dedicated variable for return value passing, as well as procedure descriptions. For each procedure $p\in P$ we provide its local variables $l_p$, a finite set of intra-procedural program transitions of the form $\mathit{inst}^p(g, l_p, g', l'_p)$, a finite set of call transitions of the form $\mathit{call}^{p, q}(g, l_p, l_q)$ where $q\in P$ is the name of the callee, a finite set of return transitions of the form~$\mathit{ret}^p(g, l_p, g')$, as well as a description of safe states~$\mathit{safe}^p(g, l_p)$.
A path in a procedural program is a sequence of program transitions (including intra-procedural, call and return transitions) that respects the calling discipline, which we do not formalize here.
Given a path $\mathit{next}_1(v, v'), \dots, \mathit{next}_n(v, v')$, the nested interpolation problem is to find $I_0(v_0), I_1(v_1), \dots, I_n(v_n)$, where $v_0, \dots, v_n$ are determined through the following implications, such that:
\begin{equation*}
\begin{array}[t]{@{}l@{}}
\mathit{init}(g, l_\mathit{main}) \rightarrow I_0(g, l_\mathit{main}),
\\[\jot]
\begin{array}[t]{@{}l@{\;}l@{}}
I_{k-1}(g, l_p) \mathrel{\land}
\begin{cases}
\begin{array}[t]{@{}l@{\;}l@{}}
\mathit{inst}^p(g, l_p, g', l'_p) \rightarrow I_k(g', l'_p),
&
\text{if $\mathit{next}_k(v, v') = \mathit{inst}^p(g, l_p, g',
l'_p)$}
\\[\jot]
\mathit{call}^{p,q}(g, l_p, l_q) \rightarrow I_k(g, l_q),
&
\text{if $\mathit{next}_k(v, v') = \mathit{call}^{p,q}(g, l_p,
l_q)$}
\\[\jot]
\mathit{ret}^p(g, l_p, g') \rightarrow I_k(g', l_q),
&
\begin{array}[t]{@{}l@{}}
\text{if $\mathit{next}_k(v, v') = \mathit{ret}^p(g, l_p,
g')$ returns to $q$} \\[\jot]
\end{array}
\end{array}
\end{cases}\\
\end{array}\\
\text{for each $k\in 1..n$}\\[\jot]
I_n(g, l_p) \rightarrow \mathit{safe}^p(g, l_p),
\quad
\text{when $\mathit{next}_n(v, v')$ occurs in procedure $p$. }
\end{array} \end{equation*}
Similarly to the previously described interpolation problems, there are no recursive dependencies in the above clauses.
\paragraph{\bf State/transition interpolation}
As illustrated by the example of well-founded interpolation, interpolants can represent over-approximations of sets of states as well as binary relations. The Whale algorithm provides a further example of such usage~\cite{AlbarghouthiVMCAI12}. Given a sequence of assertions $\mathit{next}_1(v, v'), \dots, \mathit{next}_n(v, v')$ that represents an under-approximation of a path through a procedure with a guard $\mathit{g}(v)$ and a summary $\mathit{s}(v, v')$, the task is to find guards $G_1(v), \dots, G_n(v)$ and summaries $S_1(v, v'), \dots, S_n(v, v')$ such that:
\begin{equation*}
\begin{array}[t]{@{}l@{\qquad}l@{}}
\mathit{next}_k(v, v') \rightarrow S_k(v, v'),
& \text{for each $k \in 1..n$} \\[\jot]
g(v) \rightarrow G_1(v), & \\[\jot]
G_k(v) \land S_k(v, v') \rightarrow G_{k+1}(v'),
& \text{for each $k\in 1..n-1$}
\\[\jot]
G_n(v) \land S_n(v, v') \rightarrow s(v, v'). &
\end{array} \end{equation*}
There are no recursive dependencies among the unknown guards and summaries.
\paragraph{\bf Solving unfoldings of recursive Horn clauses}
A variety of reachability and termination verification problems for programs with procedures, multi-threaded programs, and functional programs can be formulated as the satisfiability of a set of recursive Horn clauses, e.g.,~\cite{CAV11,POPL11,PLDI12}. These clauses are obtained from the program during a so-called constraint generation step. The satisfiability checking performed during the constraint solving step amounts to the inference of inductive invariants, procedure summaries, function types and other required auxiliary assertions. Existing solvers, e.g., HSF~\cite{PLDI12} and $\mu$\textit{Z}~\cite{GenPDRSAT12}, rely on solving recursion-free unfoldings when iteratively constructing a solution for recursive Horn clauses.
We illustrate the generation of a recursion-free unfolding using an invariance proof rule for flat programs. This rule can be formalised as follows. For a given program, find an invariant $\mathit{Inv}(v)$ such that
\begin{equation*}
\begin{array}[t]{@{}l@{}}
\mathit{init}(v) \rightarrow \mathit{Inv}(v), \\[\jot]
\mathit{Inv}(v) \land \mathit{next}(v, v')\rightarrow \mathit{Inv}(v'),
\quad \text{for each program transition $\mathit{next(v, v')}$}\\[\jot]
\mathit{Inv}(v) \rightarrow \mathit{safe}(v).
\end{array} \end{equation*}
An unfolding of these recursive clauses introduces auxiliary relations that refer to $\mathit{Inv}(v)$ at each intermediate step. For example we consider an unfolding that starts with the first clause above and then applies a clause from the second line for a transition $\mathit{next}_1(v, v')$ and then for a transition $\mathit{next}_2(v, v')$ before traversing the last clause. This unfolding is represented by the following recursion-free clauses:
\begin{equation*}
\begin{array}[t]{@{}l@{}}
\mathit{init}(v) \rightarrow \mathit{Inv}_0(v),\;
\mathit{Inv}_0(v) \land \mathit{next}_1(v, v')\rightarrow\mathit{Inv}_1(v'),\\[\jot]
\mathit{Inv}_1(v) \land \mathit{next}_2(v, v')\rightarrow \mathit{Inv}_2(v'),\;
\mathit{Inv}_2(v) \rightarrow \mathit{safe}(v).
\end{array} \end{equation*}
A solution for these clauses contributes to solving the recursive clauses.
\section{Algorithm overview} \label{sec-algo}
In this section we briefly describe how \textsc{InterHorn}\xspace solves recursion-free Horn clauses. We refer to \cite[Section~7]{POPL11} for a solving algorithm for clauses over linear rational arithmetic, to \cite{APLAS11} for a treatment of a combined theory of linear rational arithmetic and uninterpreted functions, and to \cite{TACAS12} for a support of well-foundedness conditions.
\textsc{InterHorn}\xspace critically relies on the following two observations. First, applying resolution on clauses that describe the interpolation problem terminates and yields an assertion that does not contain any unknown relations. For example, resolution of clauses in Section~\ref{sec-illustration} that describe path, transition, nested and state/transition interpolation results in the implication of the form $\mathit{init}(v_0) \land (\bigwedge_{k=1}^n \mathit{next}_k(v_{k-1}, v_k)) \rightarrow \mathit{safe}(v_n)$. Second, the obtained assertion is valid if and only if the set of clauses is satisfiable. From the proof of validity (or alternatively, from the proof of unsatisfiability of the negated assertion) we construct the solutions.
\paragraph{\bf Clauses without well-foundedness conditions}
\textsc{InterHorn}\xspace goes through three main steps when given a set of recursion-free clauses that does not contain any well-foundedness condition. For example, we consider the following recursion-free clauses as input:
\begin{equation*} x \geq 10 \rightarrow p(x),\;\;
p(u) \land w = u+v \rightarrow q(v, w),\;\;
q(y, z) \land y \leq 0 \rightarrow z \geq y. \end{equation*}
During the first step we apply resolution on the set of clauses. Since the clauses are recursion-free, the resolution application terminates. The result is an assertion that only contains constraints from the background theory. After applying resolution we obtain for our example (note that we use fresh variables here to stress the fact that clauses are implicitly universally quantified): $a \geq 10 \land c = a+b \land b \leq 0 \rightarrow c \geq b$.
The second step amounts to checking the validity of the obtained assertion.\footnote{Instead of validity checking we can check satisfiability of the negated assertion.} If the assertion is not valid then we report that the original set of clauses imposes constraints that cannot be satisfied. Otherwise we produce a proof of validity. In our example the proof of validity can be represented as a weighted sum of the inequalities in the antecedent of the implication, with the weights 1, $-1$, and 0, respectively.
The third step traverses the input clauses and computes the solution assignment by taking the proof into account. For the clause $x\geq 10 \rightarrow p(x)$ we determine that $x \geq 10$ contributes to $p(x)$ with a weight 1, since during the resolution $x \geq 10$ gave rise to $a \geq 10$ whose weight is~1. Thus we obtain $p(x) = (x \geq 10)$. For the clause $p(u) \land w = u +v \rightarrow q(v, w)$ we combine $p(u)$ and $w = u +v$ with the weight of the latter set to $-1$, since $w = u +v$ yielded a contribution to the proof with weight~$-1$. This leads to $q(v, w) = (u\geq10)+(-1)*(w=u+v)=(w\geq 10+v)$.
Finally, \textsc{InterHorn}\xspace outputs the solution:
\begin{equation*} p(x) = (x \geq 10),\;\; q(v,w) = (w \geq 10+v). \end{equation*}
We observe that the substitution of the solutions into the input clauses produces valid implications: $x \geq 10 \rightarrow x \geq 10,$ $u \geq 10 \land w = u+v \rightarrow w \geq 10+v,$ and $z \geq 10+y \land y \leq 0 \rightarrow z \geq y$.
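The combination performed in the third step can be pictured with a small script (again our own sketch, not \textsc{InterHorn}\xspace code). Linear constraints are kept as coefficient maps, each meaning ``linear term $\geq 0$''; the equality $w = u+v$ is oriented as $u+v-w = 0$ so that the weight $-1$ extracted from the proof applies to it directly.
\begin{verbatim}
def combine(constraints, weights):
    # constraints: list of {var: coeff, '1': const}, each meaning  sum >= 0
    out = {}
    for c, w in zip(constraints, weights):
        for var, coeff in c.items():
            out[var] = out.get(var, 0) + w * coeff
    return out

p_u = {'u': 1, '1': -10}            # p(u) = (u - 10 >= 0)
eq  = {'u': 1, 'v': 1, 'w': -1}     # w = u + v, oriented as u + v - w = 0

print(combine([p_u, eq], [1, -1]))
# {'u': 0, '1': -10, 'v': -1, 'w': 1}, i.e. w - v - 10 >= 0, hence w >= 10 + v
\end{verbatim}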
\paragraph{\bf Clauses with a well-foundedness condition}
In case of a well-foundedness condition occurring in the input, \textsc{InterHorn}\xspace introduces additional steps to take this condition into account. For example, we consider the following recursion-free clauses with a well-foundedness condition as input:
\begin{equation*}
\begin{array}[t]{@{}c@{}}
x \geq 10 \rightarrow p(x),\quad
p(u) \land w = u+v \rightarrow q(v, w),\quad
q(y, z) \land y \leq 0 \rightarrow r(y, z), \\[\jot]
\mathit{wf}(r(s, t)).
\end{array} \end{equation*}
The first step is again the resolution of the given clauses that produces a clause providing an under-approximation for the relation that is subject to the well-foundedness condition. For our example, we obtain: $a \geq 10 \land c = a+b \land b \leq 0 \rightarrow r(b, c)$.
The second step attempts to find a well-founded relation that over-approximates the projection of the antecedent of the clause obtained by resolution on the variables in its head. For our example this projection amounts to performing an existential quantifier elimination on $\exists a: a \geq 10 \land c = a+b \land b \leq 0$, which gives~$c \geq 10+b \land b \leq 0$. This relation is well-founded, which is witnessed by a ranking relation over $b$ and $c$ with a bound component $b \leq 0$ and the decrease component $c \geq b+1$.
The third step uses the well-founded over-approximation to construct a clause that introduces an upper bound on the relation under the well-foundedness condition. This clause replaces the well-foundedness condition by an approximation condition wrt.\ an assertion. For our example, the clause $\mathit{wf}(r(s,t))$ is replaced by the clause $r(s, t) \rightarrow (s \leq 0 \land t \geq s+1)$.
Lastly, we apply the solving method for clauses without well-foundedness conditions described previously. In our example, the set of clauses to be solved becomes:
\begin{equation*}
\begin{array}[t]{@{}c@{}}
x \geq 10 \rightarrow p(x),\quad
p(u) \land w = u+v \rightarrow q(v, w),\quad
q(y, z) \land y \leq 0 \rightarrow r(y, z), \\[\jot]
r(s, t) \rightarrow (s \leq 0 \land t \geq s+1).
\end{array} \end{equation*}
Finally, \textsc{InterHorn}\xspace outputs the solution:
\begin{equation*} p(x) = (x\geq10),\;\; q(v,w) = (w\geq10+v),\;\; r(s,t) = (s\leq0 \land t\geq s+10). \end{equation*}
\iffalse
\subsubsection{Path Interpolation}
There are no recursive dependencies. Resolution terminates and yields
\begin{equation*}
\mathit{init}(v_0)
\land \mathit{next}_1(v_0, v_1)
\land
\dots
\land \mathit{next}_n(v_{n-1}, v_n)
\rightarrow \mathit{safe}(v_n). \end{equation*}
We compute a proof of unsatisfiability for
\begin{equation*}
\mathit{init}(v_0)
\land
(\bigwedge_{k=1}^n \mathit{next}_{k-1}(v_{k-1}, v_k))
\land \neg\mathit{safe}(v_n). \end{equation*}
Traversal of the proof and taking the clauses into consideration results in $I_0(v), I_1(v), \dots, I_n(v)$.
\subsubsection{Well-founded Interpolation}
~\cite{TACAS12}
Given a path $\mathit{stem}_1(v, v'), \dots, \mathit{stem}_m(v, v')$ that contains a transitions leading to a loop entry point, and a path $\mathit{loop}_1(v, v'), \dots, \mathit{loop}_n(v, v')$ that contains around the loop. Find $I_0(v), I_1(v), \dots, I_m(v)$, and $T_1(v,v'), \dots, T_n(v,v')$ such that:
\begin{equation*}
\begin{array}[t]{@{}l@{\qquad}l@{}}
\mathit{init}(v) \rightarrow I_0(v), & \\[\jot]
I_{k-1}(v) \land \mathit{stem}_k(v, v') \rightarrow I_k(v'),
& \text{for each $k \in 1..m$}\\[\jot]
I_m(v) \land \mathit{loop}_1(v, v') \rightarrow T_1(v, v'), & \\[\jot]
T_{k-1}(v, v') \land \mathit{loop}_k(v', v'') \rightarrow T_k(v, v''),
& \text{for each $k\in 2..n$} \\[\jot]
\mathit{wf}(T_n(v, v')). &
\end{array} \end{equation*}
\subsubsection{Transition Interpolants}
There are no recursive dependencies. Resolution terminates and yields
\begin{equation*}
\mathit{init}(v_0)
\land \mathit{next}_1(v_0, v_1)
\land
\dots
\land \mathit{next}_n(v_{n-1}, v_n)
\rightarrow \mathit{safe}(v_n). \end{equation*}
We compute a proof of unsatisfiability for
\begin{equation*}
\mathit{init}(v_0)
\land
(\bigwedge_{k=1}^n \mathit{next}_{k-1}(v_{k-1}, v_k))
\land \neg\mathit{safe}(v_n). \end{equation*}
Traversal of the proof and taking the clauses into consideration results in $T_1(v, v'), \dots, T_n(v, v')$.
\subsubsection{Nested Interpolants}
There are no recursive dependencies. Resolution terminates and yields $\dots$. We compute a proof of unsatisfiability for $\dots$. Traversal of the proof and taking the clauses into consideration results in $I_0(v_0), I_1(v_1), \dots, I_n(v_n)$.
\subsubsection{State/transition interpolants}
\fi
\section{Implementation}
\textsc{InterHorn}\xspace is implemented in SICStus Prolog~\cite{sicstus}. For computing proofs of validity (resp.\ unsatisfiability) over linear rational arithmetic theory, \textsc{InterHorn}\xspace relies on a proof producing version of a simplex algorithm~\cite{GuptaPhD}. For computing well-founded approximations (also over linear rational arithmetic theory), \textsc{InterHorn}\xspace uses a linear ranking functions synthesis algorithm~\cite{VMCAI04}. \textsc{InterHorn}\xspace can be downloaded from \url{http://www7.in.tum.de/tools/interhorn/}, accepts input in form of Prolog terms and outputs an appropriately formatted result.
\section{Conclusion}
We presented \textsc{InterHorn}\xspace, a solver for recursion-free Horn clauses that can be used to deal with various interpolation problems. The main directions for the future development include adding support for uninterpreted functions, along the lines of~\cite{APLAS11}, and integer arithmetic. After developing our work, we became aware of a related work highlighting the relation between interpolation and recursion-free Horn clauses \cite{RummerVSTTE13}. The authors of \cite{RummerVSTTE13} show that some interpolation problems correspond to various fragments of recursion-free Horn clauses and establish complexity results for these fragments assuming the background theory of linear integer arithmetic. Our work is less concerned with the different fragments of recursion-free Horn clauses and more with how interpolation problems arise in software verification. The well-founded interpolation problem is beyond the scope of \cite{RummerVSTTE13}.
\end{document}
\begin{definition}[Definition:Nicely Normed Star-Algebra]
Let $A = \left({A_F, \oplus}\right)$ be a star-algebra whose conjugation is denoted $*$.
Then $A$ is a '''nicely normed $*$-algebra''' {{iff}}:
:$\forall a \in A: a + a^* \in \R$
:$\forall a \in A, a \ne 0: 0 < a \oplus a^* = a^* \oplus a \in \R$
\end{definition}
AI News, Difference between revisions of "Artificial Neural Networks/Activation Functions"
On Thursday, October 4, 2018
Difference between revisions of "Artificial Neural Networks/Activation Functions"
There are a number of common activation functions in use with neural networks.
The output is a certain value, A1, if the input sum is above a certain threshold and A0 if the input sum is below a certain threshold.
These kinds of step activation functions are useful for binary classification schemes.
In other words, when we want to classify an input pattern into one of two groups, we can use a binary classifier with a step activation function.
Each identifier would be a small network that would output a 1 if a particular input feature is present, and a 0 otherwise.
Combining multiple feature detectors into a single network would allow a very complicated clustering or classification problem to be solved.
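A minimal sketch of such a step (threshold) activation, written here in Python with an assumed threshold of 0 and outputs A1 = 1 and A0 = 0, looks like this:

def step(weighted_sum, threshold=0.0, a1=1, a0=0):
    # Output A1 when the input sum reaches the threshold, A0 otherwise.
    return a1 if weighted_sum >= threshold else a0

print(step(0.7))   # 1  -> "feature present" / class 1
print(step(-0.3))  # 0  -> "feature absent"  / class 0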
A linear combination is where the weighted sum input of the neuron plus a linearly dependent bias becomes the system output.
In these cases, the sign of the output is considered to be equivalent to the 1 or 0 of the step function systems, which enables the two methods to be equivalent when the bias takes the place of the threshold.
This is called the log-sigmoid because a sigmoid can also be constructed using the hyperbolic tangent function instead of this relation, in which case it would be called a tan-sigmoid.
Sigmoid functions in this respect are very similar to the input-output relationships of biological neurons, although not exactly the same.
Sigmoid functions are also prized because their derivatives are easy to calculate, which is helpful for calculating the weight updates in certain training algorithms.
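For concreteness, here is a small sketch (assuming NumPy) of the log-sigmoid together with the derivative that makes those weight updates cheap; the identity sigma'(x) = sigma(x) * (1 - sigma(x)) is the property referred to above.

import numpy as np

def sigmoid(x):
    # Log-sigmoid: squashes any real input into (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_prime(x):
    # Derivative expressed through the function itself, which is what
    # makes gradient computations in training algorithms inexpensive.
    s = sigmoid(x)
    return s * (1.0 - s)

x = np.array([-2.0, 0.0, 2.0])
print(sigmoid(x))        # approximately [0.119 0.5   0.881]
print(sigmoid_prime(x))  # approximately [0.105 0.25  0.105]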
The softmax activation function is useful predominantly in the output layer of a clustering system.
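A corresponding sketch of softmax (again assuming NumPy; subtracting the maximum is only for numerical stability) shows how the outputs form a probability-like vector summing to 1, which is what makes it suitable for an output layer.

import numpy as np

def softmax(z):
    # Shift by the maximum for numerical stability, then normalise.
    e = np.exp(z - np.max(z))
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])
print(softmax(scores))        # approximately [0.659 0.242 0.099]
print(softmax(scores).sum())  # 1.0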
On Saturday, October 6, 2018
Understanding Activation Functions in Neural Networks
Recently, a colleague of mine asked me a few questions like "why do we have so many activation functions?", "why is that one works better than the other?", "how do we know which one to use?", "is it hardcore maths?" and so on.
So I thought, why not write an article on it for those who are familiar with neural network only at a basic level and is therefore, wondering about activation functions and their "why-how-mathematics!".
Simply put, it calculates a "weighted sum" of its input, adds a bias and then decides whether it should be "fired" or not ( yeah right, an activation function does this, but let's go with the flow for a moment ).
We learnt it from biology: that's the way the brain works, and the brain is a working testimony of an awesome and intelligent system.
The activation function's job is to check the Y value produced by a neuron and decide whether outside connections should consider this neuron as "fired" or not.
You would want the network to activate only 1 neuron and others should be 0 ( only then would you be able to say it classified properly/identified the class ).
And then if more than 1 neuron activates, you could find which neuron has the "highest activation" and so on ( better than max, a softmax, but let's leave that for now ).
But..since there are intermediate activation values for the output, learning can be smoother and easier ( less wiggly ) and chances of more than 1 neuron being 100% activated is lesser when compared to step function while training ( also depending on what you are training and the data ).
Ok, so we want something to give us intermediate ( analog ) activation values rather than saying "activated" or not ( binary ).
A linear function is a straight line function where activation is proportional to input ( which is the weighted sum from the neuron ).
We can definitely connect a few neurons together and if more than 1 fires, we could take the max ( or softmax) and decide based on that.
If there is an error in prediction, the changes made by back propagation are constant and do not depend on the change in input delta(x)!
That activation in turn goes into the next layer as input, the second layer calculates a weighted sum of that input, and it in turn fires based on another linear activation function.
No matter how many layers we have, if all are linear in nature, the final activation function of the last layer is nothing but a linear function of the input of the first layer!
No matter how we stack, the whole network is still equivalent to a single layer with linear activation ( a combination of linear functions in a linear manner is still another linear function ).
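That claim is easy to check numerically. In the sketch below (NumPy assumed, with made-up weights) two stacked linear layers collapse into a single equivalent linear layer.

import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)   # first linear layer
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)   # second linear layer

x = rng.normal(size=3)
two_layers = W2 @ (W1 @ x + b1) + b2

W, b = W2 @ W1, W2 @ b1 + b2       # the same mapping as one linear layer
one_layer = W @ x + b

print(np.allclose(two_layers, one_layer))   # True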
It tends to bring the activations to either side of the curve ( above x = 2 and below x = -2 for example).
Another advantage of this activation function is that, unlike the linear function, its output is always going to be in the range (0,1), compared to (-inf, inf) for the linear function.
The network refuses to learn further or is drastically slow ( depending on use case and until gradient /computation gets hit by floating point value limits ).
Imagine a network with randomly initialized ( or normalised ) weights where almost 50% of the network yields 0 activation because of the characteristic of ReLU ( output 0 for negative values of x ).
That means, those neurons which go into that state will stop responding to variations in error/ input ( simply because gradient is 0, nothing changes ).
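A short sketch (NumPy assumed) makes the point visible: for negative inputs plain ReLU has zero output and zero gradient, while a leaky variant keeps a small slope so that such neurons can still update.

import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def relu_grad(x):
    # Zero gradient for x < 0: the "dying ReLU" region described above.
    return (x > 0).astype(float)

def leaky_relu(x, alpha=0.01):
    # A small slope alpha on the negative side keeps the gradient non-zero.
    return np.where(x > 0, x, alpha * x)

x = np.array([-3.0, -0.5, 0.5, 3.0])
print(relu(x))        # [0.  0.  0.5  3.]
print(relu_grad(x))   # [0.  0.  1.   1.]
print(leaky_relu(x))  # [-0.03  -0.005  0.5  3.]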
When you know the function you are trying to approximate has certain characteristics, you can choose an activation function which will approximate the function faster leading to faster training process.
For example, a sigmoid works well for a classifier ( see the graph of sigmoid, doesn't it show the properties of an ideal classifier? ).
Activation function
In artificial neural networks, the activation function of a node defines the output of that node given an input or set of inputs.
A standard computer chip circuit can be seen as a digital network of activation functions that can be 'ON' (1) or 'OFF' (0), depending on input.
This is similar to the behavior of the linear perceptron in neural networks.
However, only nonlinear activation functions allow such networks to compute nontrivial problems using only a small number of nodes.
In artificial neural networks this function is also called the transfer function.
In biologically inspired neural networks, the activation function is usually an abstraction representing the rate of action potential firing in the cell.[1]
In its simplest form, this function is binary—that is, either the neuron is firing or not.
The function looks like $\phi(v_i) = U(v_i)$, where $U$ is the Heaviside step function.
In this case many neurons must be used in computation beyond linear separation of categories.
A line of positive slope may be used to reflect the increase in firing rate that occurs as input current increases.
Such a function would be of the form $\phi(v_i) = \mu v_i$, where $\mu$ is the slope.
This activation function is linear, and therefore has the same problems as the binary function.
In addition, networks constructed using this model have unstable convergence because neuron inputs along favored paths tend to increase without bound, as this function is not normalizable.
All problems mentioned above can be handled by using a normalizable sigmoid activation function.
One realistic model stays at zero until input current is received, at which point the firing frequency increases quickly at first, but gradually approaches an asymptote at 100% firing rate.
Mathematically, this looks like $\phi(v_i) = U(v_i)\tanh(v_i)$, where the hyperbolic tangent function can be replaced by any sigmoid function.
This behavior is realistically reflected in the neuron, as neurons cannot physically fire faster than a certain rate.
This model runs into problems, however, in computational networks as it is not differentiable, a requirement to calculate backpropagation.
The final model, then, that is used in multilayer perceptrons is a sigmoidal activation function in the form of a hyperbolic tangent.
Two forms of this function are commonly used: $\phi(v_i) = \tanh(v_i)$, whose range is normalized from $-1$ to $1$, and $\phi(v_i) = (1+\exp(-v_i))^{-1}$, which is vertically translated to normalize from $0$ to $1$.
The latter model is often considered more biologically realistic, but it runs into theoretical and experimental difficulties with certain types of computational problems.
A special class of activation functions known as radial basis functions (RBFs) is used in RBF networks, which are extremely efficient as universal function approximators.
These activation functions can take many forms, but they are usually found as one of three functions (Gaussian, multiquadratic, or inverse multiquadratic), each written in terms of a vector $c_i$ representing the function center and parameters $a$ and $\sigma$ affecting the spread of the radius.
Support vector machines (SVMs) can effectively utilize a class of activation functions that includes both sigmoids and RBFs.
In this case, the input is transformed to reflect a decision boundary hyperplane based on a few training inputs called support vectors $x$.
The activation function for the hidden layer of these machines is referred to as the inner product kernel, $K(v_i, x) = \phi(v_i)$.
The support vectors are represented as the centers in RBFs with the kernel equal to the activation function, but they take a unique form in the perceptron, in which the parameter $\beta_0$ must satisfy certain conditions for convergence.
These machines can also accept arbitrary-order polynomial activation functions.
The original article continues with a list of desirable properties for activation functions and with tables comparing the properties of common activation functions, both those that are functions of a single fold $x$ from the previous layer and those that are not (such as softmax, whose derivative involves the Kronecker delta $\delta_{ij}$).
Fundamentals of Deep Learning – Activation Functions and When to Use Them?
The Internet provides access to a plethora of information today.
When our brain is fed with a lot of information simultaneously, it tries hard to understand and classify the information into useful and not-so-useful information.
Let us go through these activation functions, how they work, and figure out which activation function fits well into what kind of problem statement.
Before I delve into the details of activation functions, let's do a little review of what are neural networks and how they function.
A neural network is a very powerful machine learning mechanism which basically mimics how a human brain learns.
The brain receives the stimulus from the outside world, does the processing on the input, and then generates the output.
As the task gets complicated multiple neurons form a complex network, passing information among themselves.
The black circles in the picture above are neurons. Each neuron is characterized by its weight, bias and activation function.
A linear equation is simple to solve but is limited in its capacity to solve complex problems. A neural network without an activation function is essentially just a linear regression model.
The activation function does the non-linear transformation to the input making it capable to learn and perform more complex tasks.
We would want our neural networks to work on complicated tasks like language translations and image classifications.
Activation functions make the back-propagation possible since the gradients are supplied along with the error to update the weights and biases.
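As a minimal illustration of that statement (my own sketch, not from the article), a single sigmoid neuron trained on one example updates its weights by multiplying the error with the gradient of the activation.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x, target = np.array([0.5, -1.0]), 1.0
w, b, lr = np.array([0.1, 0.2]), 0.0, 1.0

for _ in range(500):
    z = w @ x + b
    y = sigmoid(z)
    error = y - target              # gradient of the squared error w.r.t. y (up to a factor)
    grad_z = error * y * (1.0 - y)  # chain rule through the sigmoid
    w = w - lr * grad_z * x         # weight update driven by the supplied gradient
    b = b - lr * grad_z

print(sigmoid(w @ x + b))   # has moved from about 0.46 towards the target 1.0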
If the value Y is above a given threshold value then activate the neuron else leave it deactivated.
When we simply need to say yes or no for a single class, step function would be the best choice, as it would either activate the neuron or leave it to zero.
The function is more theoretical than practical since in most cases we would be classifying the data into multiple classes than just a single class.
This makes the step function not so useful, since during back-propagation the gradients of the activation functions are needed in the error calculations that improve and optimize the results.
The gradient of the step function reduces everything to zero, so improvement of the model doesn't really happen.
We saw the problem with the step function: with the gradient being zero, it was impossible to update the weights during backpropagation.
Now if each layer has a linear transformation, no matter how many layers we have the final output is nothing but a linear transformation of the input.
Our choice of using sigmoid or tanh would basically depend on the requirement of gradient in the problem statement.
First things first, the ReLU function is non linear, which means we can easily backpropagate the errors and have multiple layers of neurons being activated by the ReLU function.
But ReLU also falls prey to the gradients moving towards zero. If you look at the negative side of the graph, the gradient is zero, which means that for activations in that region the weights are not updated during back propagation.
So in this case the gradient of the left side of the graph is non zero and so we would no longer encounter dead neurons in that region.
The parametrised ReLU function is used when the leaky ReLU function still fails to solve the problem of dead neurons and the relevant information is not successfully passed to the next layer.
The softmax function is also a type of sigmoid function but is handy when we are trying to handle classification problems.
The softmax function would squeeze the outputs for each class between 0 and 1 and would also divide by the sum of the outputs.
The softmax function is ideally used in the output layer of the classifier where we are actually trying to attain the probabilities to define the class of each input.
Now that we have seen so many activation functions, we need some logic / heuristics to know which activation function should be used in which situation.
However depending upon the properties of the problem we might be able to make a better choice for easy and quicker convergence of the network.
In this article I have discussed the various types of activation functions and what are the types of problems one might encounter while using each of them.
Artificial Neural Networks/Activation Functions
A visual proof that neural nets can compute any function
One of the most striking facts about neural networks is that they can compute any function at all.
No matter what the function, there is guaranteed to be a neural network so that for every possible input, $x$, the value $f(x)$ (or some close approximation) is output from the network, e.g.:
For instance, here's a network computing a function with $m = 3$ inputs and $n = 2$ outputs:
What's more, this universality theorem holds even if we restrict our networks to have just a single layer intermediate between the input and the output neurons - a so-called single hidden layer.
For instance, one of the original papers proving the result is Approximation by superpositions of a sigmoidal function, by George Cybenko (1989). Another important early paper is Multilayer feedforward networks are universal approximators, by Kurt Hornik, Maxwell Stinchcombe, and Halbert White (1989).
Again, that can be thought of as computing a function (actually, computing one of many functions, since there are often many acceptable translations of a given piece of text).
Or consider the problem of taking an mp4 movie file and generating a description of the plot of the movie, and a discussion of the quality of the acting.
Two caveats
Before explaining why the universality theorem is true, I want to mention two caveats to the informal statement 'a neural network can compute any function'.
To make this statement more precise, suppose we're given a function $f(x)$ which we'd like to compute to within some desired accuracy $\epsilon > 0$.
The guarantee is that by using enough hidden neurons we can always find a neural network whose output $g(x)$ satisfies $|g(x) - f(x)| < \epsilon$ for all inputs $x$.
If a function is discontinuous, i.e., makes sudden, sharp jumps, then it won't in general be possible to approximate using a neural net.
Summing up, a more precise statement of the universality theorem is that neural networks with a single hidden layer can be used to approximate any continuous function to any desired precision.
In this chapter we'll actually prove a slightly weaker version of this result, using two hidden layers instead of one.
In the problems I'll briefly outline how the explanation can, with a few tweaks, be adapted to give a proof which uses only a single hidden layer.
Universality with one input and one output
To understand why the universality theorem is true, let's start by understanding how to construct a neural network which approximates a function with just one input and one output:
To build insight into how to construct a network to compute $f$, let's start with a network containing just a single hidden layer, with two hidden neurons, and an output layer containing a single output neuron:
In the diagram below, click on the weight, $w$, and drag the mouse a little ways to the right to increase $w$.
As we learnt earlier in the book, what's being computed by the hidden neuron is $\sigma(wx + b)$, where $\sigma(z) \equiv 1/(1+e^{-z})$ is the sigmoid function.
But for the proof of universality we will obtain more insight by ignoring the algebra entirely, and instead manipulating and observing the shape shown in the graph.
This won't just give us a better feel for what's going on, it will also give us a proof (strictly speaking, the visual approach I'm taking isn't what's traditionally thought of as a proof).
Occasionally, there will be small gaps in the reasoning I present: places where I make a visual argument that is plausible, but not quite rigorous.
We can simplify our analysis quite a bit by increasing the weight so much that the output really is a step function, to a very good approximation.
It's easy to analyze the sum of a bunch of step functions, but rather more difficult to reason about what happens when you add up a bunch of sigmoid shaped curves.
With a little work you should be able to convince yourself that the position of the step is proportional to $b$, and inversely proportional to $w$.
It will greatly simplify our lives to describe hidden neurons using just a single parameter, $s$, which is the step position, $s = -b/w$.
As noted above, we've implicitly set the weight $w$ on the input to be some large value - big enough that the step function is a very good approximation.
We can easily convert a neuron parameterized in this way back into the conventional model, by choosing the bias $b = -w s$.
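Here is a tiny check of that conversion (my own sketch, assuming NumPy): pick a step point $s$, set $w$ to a large value and $b = -w s$, and the sigmoid output is essentially $0$ below $s$ and $1$ above it.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

s, w = 0.4, 1000.0   # step point and a large input weight
b = -w * s           # bias recovering the conventional parameterization

for x in [0.3, 0.39, 0.41, 0.5]:
    print(x, round(float(sigmoid(w * x + b)), 4))
# 0.3 0.0, 0.39 0.0, 0.41 1.0, 0.5 1.0 -- a near-perfect step at s = 0.4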
In particular, we'll suppose the hidden neurons are computing step functions parameterized by step points $s_1$ (top neuron) and $s_2$ (bottom neuron).
Here, $a_1$ and $a_2$ are the outputs from the top and bottom hidden neurons, respectively. (Note, by the way, that the output from the whole network is $\sigma(w_1 a_1+w_2 a_2 + b)$, where $b$ is the bias on the output neuron. We're going to focus on the weighted output from the hidden layer right now, and only later will we think about how that relates to the output from the whole network.)
You'll see that the graph changes shape when this happens, since we have moved from a situation where the top hidden neuron is the first to be activated to a situation where the bottom hidden neuron is the first to be activated.
Similarly, try manipulating the step point $s_2$ of the bottom hidden neuron, and get a feel for how this changes the combined output from the hidden neurons.
You'll notice, by the way, that we're using our neurons in a way that can be thought of not just in graphical terms, but in more conventional programming terms, as a kind of if-then-else statement, e.g.:
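Here is one way to write that behaviour down (my own sketch; the step point and output weight are assumed parameters):

def hidden_neuron_contribution(x, step_point, output_weight):
    # If the input is past the step point, contribute the output weight;
    # otherwise contribute nothing.
    if x >= step_point:
        return output_weight
    return 0.0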
In particular, we can divide the interval $[0, 1]$ up into a large number, $N$, of subintervals, and use $N$ pairs of hidden neurons to set up peaks of any desired height.
Apologies for the complexity of the diagram: I could hide the complexity by abstracting away further, but I think it's worth putting up with a little complexity, for the sake of getting a more concrete feel for how these networks work.
I didn't say it at the time, but what I plotted is actually the function \begin{eqnarray} f(x) = 0.2+0.4 x^2+0.3x \sin(15 x) + 0.05 \cos(50 x), \tag{113}\end{eqnarray} plotted over $x$ from $0$ to $1$, and with the $y$ axis taking values from $0$ to $1$.
The solution is to design a neural network whose hidden layer has a weighted output given by $\sigma^{-1} \circ f(x)$, where $\sigma^{-1}$ is just the inverse of the $\sigma$ function.
If we can do this, then the output from the network as a whole will be a good approximation to $f(x)$* *Note that I have set the bias on the output neuron to $0$..
How well you're doing is measured by the average deviation between the goal function and the function the network is actually computing.
It's only a coarse approximation, but we could easily do much better, merely by increasing the number of pairs of hidden neurons, allowing more bumps.
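A compact sketch of that construction (my own, assuming NumPy): divide $[0, 1]$ into $N$ subintervals, use one pair of steep sigmoid neurons per subinterval to make a bump of the desired height, and the sum of the bumps approximates the target weighted output.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bump_approximation(x, heights, w=1000.0):
    # heights[i] is the desired bump height on the i-th of N equal subintervals.
    N = len(heights)
    edges = np.linspace(0.0, 1.0, N + 1)
    total = np.zeros_like(x)
    for i, h in enumerate(heights):
        # One pair of step neurons: +h switches on at the left edge of the
        # subinterval, -h switches on again at the right edge.
        total += h * sigmoid(w * (x - edges[i])) - h * sigmoid(w * (x - edges[i + 1]))
    return total

x = np.linspace(0.0, 1.0, 11)
print(bump_approximation(x, heights=[0.2, 0.8, 0.5, 0.1, 0.9]))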
So, for instance, for the second hidden neuron $s = 0.2$ becomes $b = -1000 \times 0.2 = -200$.
So, for instance, the value you've chosen above for the first $h$ means that the output weights from the top two hidden neurons are $h$ and $-h$, respectively.
Just as in our earlier discussion, as the input weight gets larger the output approaches a step function.
Here, we assume the weight on the $x$ input has some large value - I've used $w_1 = 1000$ - and the weight $w_2 = 0$.
Of course, it's also possible to get a step function in the $y$ direction, by making the weight on the $y$ input very large (say, $w_2 = 1000$), and the weight on the $x$ equal to $0$, i.e., $w_1 = 0$:
The number on the neuron is again the step point, and in this case the little $y$ above the number reminds us that the step is in the $y$ direction.
But do keep in mind that the little $y$ marker implicitly tells us that the $y$ weight is large, and the $x$ weight is $0$.
That reminds us that they're producing $y$ step functions, not $x$ step functions, and so the weight is very large on the $y$ input, and zero on the $x$ input, not vice versa.
If we choose the threshold appropriately - say, a value of $3h/2$, which is sandwiched between the height of the plateau and the height of the central tower - we could squash the plateau down to zero, and leave just the tower standing.
This is a bit tricky, so if you think about this for a while and remain stuck, here's two hints: (1) To get the output neuron to show the right kind of if-then-else behaviour, we need the input weights (all $h$ or $-h$) to be large;
Even for this relatively modest value of $h$, we get a pretty good tower function.
To make the respective roles of the two sub-networks clear I've put them in separate boxes, below: each box computes a tower function, using the technique described above.
In particular, by making the weighted output from the second hidden layer a good approximation to $\sigma^{-1} \circ f$, we ensure the output from our network will be a good approximation to any desired function, $f$.
The $s_1, t_1$ and so on are step points for neurons - that is, all the weights in the first layer are large, and the biases are set to give the step points $s_1, t_1, s_2, \ldots$.
Of course, such a function can be regarded as just $n$ separate real-valued functions, $f^1(x_1, \ldots, x_m), f^2(x_1, \ldots, x_m)$, and so on.
As a hint, try working in the case of just two input variables, and showing that: (a) it's possible to get step functions not just in the $x$ or $y$ directions, but in an arbitrary direction;
(b) by adding up many of the constructions from part (a) it's possible to approximate a tower function which is circular in shape, rather than rectangular;
To do part (c) it may help to use ideas from a bit later in this chapter.
Recall that in a sigmoid neuron the inputs $x_1, x_2, \ldots$ result in the output $\sigma(\sum_j w_j x_j + b)$, where $w_j$ are the weights, $b$ is the bias, and $\sigma$ is the sigmoid function:
That is, we'll assume that if our neuron has inputs $x_1, x_2, \ldots$, weights $w_1, w_2, \ldots$ and bias $b$, then the output is $s(\sum_j w_j x_j + b)$.
It should be pretty clear that if we add all these bump functions up we'll end up with a reasonable approximation to $\sigma^{-1} \circ f(x)$, except within the windows of failure.
Suppose that instead of using the approximation just described, we use a set of hidden neurons to compute an approximation to half our original goal function, i.e., to $\sigma^{-1} \circ f(x) / 2$.
And suppose we use another set of hidden neurons to compute an approximation to $\sigma^{-1} \circ f(x)/ 2$, but with the bases of the bumps shifted by half the width of a bump:
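A rough sketch of this averaging trick is below; the goal function is an arbitrary smooth function with values in $(0,1)$ chosen purely for illustration, and each call builds bumps of height $\sigma^{-1} \circ f(x) / 2$, with the second set shifted by half a bump width:

```python
import numpy as np

def sigma(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigma_inv(y):
    return np.log(y / (1.0 - y))

def step(x, s, w=1000.0):
    return sigma(w * (x - s))

def half_approx(goal, x, n=10, shift=0.0):
    # bumps of height sigma^{-1}(goal)/2 over n equal intervals, optionally shifted
    width = 1.0 / n
    edges = shift + width * np.arange(n + 1)
    out = np.zeros_like(x)
    for a, b in zip(edges[:-1], edges[1:]):
        h = 0.5 * sigma_inv(goal(0.5 * (a + b)))
        out += h * (step(x, a) - step(x, b))
    return out

goal = lambda u: 0.5 + 0.35 * np.sin(2.0 * np.pi * u)   # illustrative, values in (0,1)
x = np.linspace(0.05, 0.95, 400)
weighted = half_approx(goal, x) + half_approx(goal, x, shift=0.05)
output = sigma(weighted)   # ~ goal(x), except inside the small windows of failure
```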
Although the result isn't directly useful in constructing networks, it's important because it takes off the table the question of whether any particular function is computable using a neural network.
As argued in Chapter 1, deep networks have a hierarchical structure which makes them particularly well adapted to learn the hierarchies of knowledge that seem to be useful in solving real-world problems.
Put more concretely, when attacking problems such as image recognition, it helps to use a system that understands not just individual pixels, but also increasingly more complex concepts: from edges to simple geometric shapes, all the way up through complex, multi-object scenes.
In later chapters, we'll see evidence suggesting that deep networks do a better job than shallow networks at learning such hierarchies of knowledge.
And empirical evidence suggests that deep networks are the networks best adapted to learn the functions useful in solving many real-world problems.
\begin{document}
\title{\small{Phase transition of anisotropic \\Ginzburg--Landau equation}}
\author{Yuning Liu} \address{NYU Shanghai, 1555 Century Avenue, Shanghai 200122, China, and NYU-ECNU Institute of Mathematical Sciences at NYU Shanghai, 3663 Zhongshan Road North, Shanghai, 200062, China} \email{[email protected]}
\begin{abstract} \normalsize In this work, we study the effective geometric motions of an anisotropic Ginzburg--Landau equation with a small parameter $\varepsilon>0$ which characterizes the width of the transition layer. Given a classical solution to the curve-shortening flow, we assume that the initial datum of the Ginzburg--Landau equation undergoes phase transitions across the initial (closed) curve. We show that the small $\varepsilon$ asymptotics of the solutions correspond to a planar field $\mathbf{u}(x,t)$ which is of unit length on one side of the free boundary determined by the flow, and is zero on the other side. The proof is achieved by deriving differential inequalities of several modulated energies and by exploring their coercivity properties using weak convergence methods. In particular, by a (boundary) blow-up argument we show that $\mathbf{u}$ must be tangential to the free boundary. Under additional assumptions we prove that $\mathbf{u}$ solves a geometric evolution equation for the Oseen--Frank model in liquid crystals.
\noindent \textbf{Keywords:} modulated energy, weak convergence methods, blow-up analysis, mean curvature flow, phase-transition.
\end{abstract} \date{\today}
\maketitle
\tableofcontents
\section{Introduction}
In the study of liquid crystals one often encounters elastic energies with anisotropy, i.e. energies with distinct coefficients multiplying the square of the divergence and
the curl of the order parameters. Typical examples involve the Oseen--Frank model \cite{HardtKinderlehrerLin1986}, Ericksen's model \cite{MR1294333,lin2020isotropic} and the Landau--De Gennes model \cite{ball2017mathematics}. From a microscopic point of view, the anisotropy of these models can be interpreted as excluded volume potential of molecular interaction \cite{HLWZZ}. Anisotropic models also arise in the theory of superconductivity, cf. \cite{MR1313011}.
The anisotropy brings various new challenges to the studies of both the variational problems and their gradient flows of the aforementioned models.
In particular, compared with the isotropic models, the powerful analytic tools such as maximum principle and monotonicity formula are usually invalid for anisotropic ones.
The aim of this work is to study an anisotropic system modeling the isotropic-nematic phase transition of a liquid crystal droplet. Let $\Omega\subset \mathbb{R}^2$ be a bounded domain with smooth boundary. We consider the anisotropic Ginzburg--Landau type energy \begin{equation}\label{GL energy}
A_\varepsilon (\mathbf{u})=\int_{\Omega} \(\frac{\varepsilon }2 \mu |\operatorname{div} \mathbf{u}|^2+\frac\varepsilon 2 |\nabla \mathbf{u}|^2+\frac{1}\varepsilon F(\mathbf{u}) \)\, dx. \end{equation}
Here $\mathbf{u}:\Omega\to \mathbb{R}^2$ is the order parameter describing the state of the system. The function $F(\mathbf{u})$ is a double-well type potential with equally deep wells which permits the isotropic-nematic phase transition, i.e. it attains its global minimum value $0$ exactly on $\{ 0\}\cup {\mathbb{S}^1}$. An example of $F$ is the Chern--Simons--Higgs potential
$F(\mathbf{u})= |\mathbf{u}|^2(1-|\mathbf{u}|^2)^2$. See e.g. \cite{MR1050529,MR1050530} for the physics and
\cite{MR1324400,MR4076075,MR3910590} for the mathematical analysis of related variational problems. The parameter $\varepsilon >0 $, which is usually quite small, denotes the relative intensity of the elastic and bulk energies. The parameter $\mu\in \mathbb{R}^+$ is a material-dependent constant which measures the degree of anisotropy.
The variational investigations of
the isotropic-nematic phase transition involving \eqref{GL energy} were first done by Golovaty et al. \cite{MR4076075,MR3910590} in the static case. In the current work we shall study its $L^2$-gradient flow. To be more precise, we consider the flow \begin{subequations} \label{GL system} \begin{align} \partial_t \mathbf{u}_\varepsilon -\mu \nabla\operatorname{div} \mathbf{u}_\varepsilon &= \Delta \mathbf{u}_\varepsilon - \frac 1{\varepsilon ^2}\partial F(\mathbf{u}_\varepsilon )&&~\text{in}~ \Omega\times (0,T),\label{Ginzburg-Landau}\\ \mathbf{u}_\varepsilon (x,0)&=\mathbf{u}_\varepsilon ^{in}(x)&&~\text{in}~\Omega,\\ \mathbf{u}_\varepsilon (x,t)&=0&&~ \text{on}~\partial\Omega\times (0,T),\label{bc of omega} \end{align}
\end{subequations} where $\partial F(\mathbf{u} )$ is the differential of $F(\mathbf{u})$. We shall study the small $\varepsilon $-asymptotics of this system for initial datum $\mathbf{u}_\varepsilon ^{in}$ undergoing a sharp transition near a smooth interface.
To state the main result, we assume that \begin{equation}\label{interface}
I=\bigcup_{t\in [0,T]} I_t \times \{t\}~\text{is a smoothly evolving simple closed curve in}~\Omega, \end{equation}
starting from a closed smooth curve $I_0\subset \Omega$. Let $\Omega^+_t$ be the domain enclosed by $I_t$, and $d_I(x,t)$ be the signed-distance from $x$ to $I_t$ which takes positive values in $\Omega^+_t$, and negative values in $\Omega^-_t=\Omega\backslash \overline{\Omega^+_t}$. Equivalently,
\begin{equation}\label{def:omegapm} \Omega^\pm_t:= \{x\in\Omega\mid d_I(x,t)\gtrless0\}. \end{equation}
The space-time `cylinders' separated by $I$ are denoted by
\[\Omega^\pm:=\bigcup_{t\in [0,T]}\Omega_t^\pm\times \{t\}\end{equation} respectively. For $\delta>0$, the (open) $\delta$-neighborhood of $I_t$ is denoted by \begin{equation}
I_t(\delta):= \{x\in\Omega: | d_I(x,t)|<\delta\}. \end{equation} Let $\delta_0\in (0,1)$ be sufficiently small so that the nearest point projection $P_{I}(\cdot,t): I_t(4\delta_0) \rightarrow I_t$ is smooth for any $t\in [0,T]$, and that the interface \eqref{interface} stays at least $4\delta_0$ distance away from the physical boundary $\partial\Omega$. A more detailed description of the geometry can be found in the beginning of Section \ref{sec entropy} or \cite{MR2754215}.
The first step to study the singular limit of \eqref{GL system} is to construct a Lyapunov functional which encodes a distance between the approximate solution \eqref{Ginzburg-Landau} and the set \eqref{interface} where the solution gradient $\nabla \mathbf{u}_\varepsilon $ will be concentrated, and then to derive a differential inequality of the functional.
To this end, we denote by $\boldsymbol{\xi }(x,t)$ an appropriate extension of the inner normal vector $\mathbf{n}(x,t)$ of $I_t$ to the whole computational domain $\Omega$ (see \eqref{def:xi} below for its precise definition). Following \cite{MR3353807,MR4072686,fischer2020convergence}, we introduce the modulated energy \begin{align}\label{entropy}
E_\varepsilon [\mathbf{u}_\varepsilon | I](t):= & \int_\Omega \frac{\varepsilon }2 \mu |\operatorname{div} \mathbf{u}_\varepsilon (\cdot,t)|^2\, dx\nonumber\\
&+\int_\Omega\(\frac{\varepsilon }{2}\left|\nabla \mathbf{u}_\varepsilon (\cdot,t)\right|^2+\frac{1}{\varepsilon } {F (\mathbf{u}_\varepsilon (\cdot,t))}- \boldsymbol{\xi }\cdot\nabla \psi_\varepsilon (\cdot,t) \)\, dx, \end{align} where $\psi_\varepsilon $ is defined by \begin{align}
\psi_\varepsilon (x,t):= \int_0^{| \mathbf{u}_\varepsilon (x,t)|} g(s)\, ds .\label{psi} \end{align} We shall work with a general class of potentials $F(\mathbf{u})$ under standard structural assumptions (see e.g. \cite{MR1237490,MR1425577}): we assume \begin{align}\label{bulk}
F(\mathbf{u}) =f(|\mathbf{u}|)= g^2(|\mathbf{u}|)/2,
\end{align}
where $f$ and $g$ satisfy
\begin{subequations}\label{bulk assump} \begin{align} &g\geqslant 0\text{ and is Lipschitz continuous}, ~g(0)=g(1)=0;\label{g lip}\\ & f\in C^\infty( \mathbb{R}_+\cup \{0\}),\quad f(s)>0\text{ for } s\in \mathbb{R}_+\backslash \{1\};\\ &\exists s_0\in (0,1) \text{ s.t. } f'(s) >0\text{ on } (0,s_0), f'(s) <0\text{ on } ( s_0,1);\label{f increase}\\ &f'(0)= f'(1)=0,\qquad f''(0), f''(1)>0;\label{bulk stable}\\ & \exists c_0\in \mathbb{R}^+ \text{ s.t. } f(s)\geqslant 2c_0^2 s^2\text{ for }s\geqslant 1.\label{growth f} \end{align} \end{subequations}
It is obvious that the Chern--Simons--Higgs potential corresponds to the choice $g(s)=|s||s^2-1|$.
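As a quick illustration (not needed in what follows), for this choice \eqref{bulk} gives
\begin{align*}
f(s)=\tfrac 12 s^2(1-s^2)^2,\qquad f'(s)=s(1-s^2)(1-3s^2),
\end{align*}
so \eqref{f increase} holds with $s_0=1/\sqrt 3$, and $f''(0)=1>0$, $f''(1)=4>0$, in accordance with \eqref{bulk stable}.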
Now we state the main result of this work: \begin{theorem}\label{main thm} Assume that the moving interface $I$ \eqref{interface} is a curve-shortening flow \footnote{It is also called one-dimensional mean curvature flow} and that the initial data of \eqref{GL system} satisfies \begin{align} \mathbf{u}_\varepsilon ^{in}(x) = \mathbf{u}^{in}(x) & \quad \text{in}~ \Omega^+_0\backslash I_0(\delta_0), \text{ and } \mathbf{u}_\varepsilon ^{in}(x)=0 \text{ in } \Omega^-_0\backslash I_0(\delta_0),\label{initial out} \end{align} for some $\mathbf{u}^{in}\in H^1(\Omega;{\mathbb{S}^1})$. Assume further that there exists a constant $c_1>0$ which is independent of $\varepsilon $ so that \begin{subequations}\label{initial assupms} \begin{align}
&A_\varepsilon (\mathbf{u}_\varepsilon ^{in})\leqslant c_1,\qquad \|\mathbf{u}_\varepsilon ^{in}\|_{L^\infty(\Omega)}\leqslant 1, \label{bdd1}\\
& E_\varepsilon [\mathbf{u}_\varepsilon ^{in} | I_0]+\int_{I_0(\delta_0)} |\psi_\varepsilon ^{in}-\mathbf{1}_{\Omega_0^+}| |d_{I_0}| \, dx\leqslant c_1\varepsilon .\label{initial} \end{align} \end{subequations} Then there exists $C_1,C_2>0$ which depend only on the geometry of $I$ so that
\begin{align}\label{initial preserve}
&\sup_{t\in [0,T]}E_\varepsilon [\mathbf{u}_\varepsilon | I](t)+\sup_{t\in [0,T]}\int_{I_t(\delta_0)} |\psi_\varepsilon -\mathbf{1}_{\Omega_t^+}| |d_{I}|\, dx\leqslant C_1\varepsilon ,\\
&\sup_{t\in [0,T]}\int_{\Omega}| \psi_\varepsilon -\mathbf{1}_{\Omega_t^+}| \, dx \leqslant C_2\varepsilon ^{1/3}.\label{con psi} \end{align} Moreover, up to extraction of a subsequence $\varepsilon _k\downarrow 0$, we have \begin{align}
\mathbf{u}_{\varepsilon _k}\xrightarrow{k\to\infty} \mathbf{1}_{\Omega_t^+} \mathbf{u}~&\text{ in }~ C([0,T];L^2_{loc}(\Omega^\pm_t)),\label{reg limit} \end{align} where $\mathbf{u}\in H^1( \Omega^+;{\mathbb{S}^1})$ and satisfies the anchoring boundary condition \begin{align} \mathbf{u}\cdot\mathbf{n}=0~\qquad \text{ for } \mathcal{H}^1\text{-a.e. }x\in \partial \Omega_t^+.\label{anchoring bc} \end{align}
\end{theorem}
We make a few remarks about the conditions \eqref{initial assupms} on the initial data. The first condition in \eqref{bdd1} ensures that the system has finite energy for all time. The second one, $\|\mathbf{u}_\varepsilon ^{in}\|_{L^\infty}\leqslant 1$, is imposed to avoid technical issues but can be relaxed. Indeed, in the proof of Theorem \ref{sharp l1} below, it is simply a convenient condition guaranteeing well-prepared initial data for a Gr\"{o}nwall argument. The essential assumption is \eqref{initial}, which is used to obtain the differential inequality in Theorem \ref{thm close energy with div} below. Such an inequality is at the heart of this work and is the source of various modulated inequalities.
Construction of initial data satisfying \eqref{initial} is summarized in the following proposition. For the convenience of the readers, its proof will be given in Appendix \ref{app initial}. \begin{prop} \label{prop initial data}
Let $I_0\subset \mathbb{R}^2$ be a smooth simple closed curve with inner normal vector $\mathbf{n}$. Then for every $\mathbf{u}^{in}\in H^1(\Omega,{\mathbb{S}^1})$ with trace $\mathbf{u}^{in}|_{I_0}\cdot \mathbf{n}=0$, there exists $\mathbf{u}_\varepsilon ^{in} \in H^1(\Omega)\cap L^\infty(\Omega)$ s.t. \eqref{initial out} holds. Moreover, there exists a constant $c_1>0$ which only depends on $I_0$ and $\|\mathbf{u}^{in}\|_{H^1(\Omega)}$ such that $\mathbf{u}_\varepsilon ^{in}$ is well-prepared in the sense of \eqref{initial assupms}.
\end{prop}
In the bulk region $\Omega^+$, we can show further that the limit $\mathbf{u}$ in \eqref{reg limit} solves a geometric evolution equation under the following choice of $f$ \eqref{bulk}:
\begin{theorem}\label{main thm oseen frank}
Let the assumptions of Theorem \ref{main thm} be in place. Assume further that
\begin{align}\label{bulk2}
&f(s)= s^2 \text{ for } s\leqslant 1/4; f(s)= (s-1)^2 \text{ for } s\geqslant 3/4;\nonumber\\
& f(s)\geqslant 1/{16} \text{ for } s\in (1/4, 3/4).
\end{align}
Then there exists a sufficiently small $\mu_0$ (independent of $\varepsilon $) so that for any $\mu\in (0,\mu_0)$, the vector field $\mathbf{u}$ in \eqref{reg limit} satisfies the weak formulation \begin{align} \int_\Omega \partial_t \mathbf{u}\wedge \mathbf{u} \,\varphi\, dx=-\mu \int_\Omega \(\operatorname{div} \mathbf{u}\) \operatorname{rot} (\varphi\, \mathbf{u})\, dx- \int_\Omega \nabla\varphi\cdot \nabla \mathbf{u}\wedge \mathbf{u}\, dx\label{weak OF flow} \end{align} for almost every $t\in (0,T)$ and every $\varphi (x)\in C^1_c(\Omega_t^+)$.
\end{theorem} In the above equation $\wedge$ denotes the standard wedge product in $\mathbb{R}^2$ and \[\operatorname{rot} \mathbf{u}=-\partial_{x_2}u_1+\partial_{x_1} u_2\quad \text{ for } \mathbf{u}=(u_1,u_2).\end{equation} The equation \eqref{weak OF flow} is the weak formulation of an Oseen--Frank flow
\[\partial_t \mathbf{u}=\Delta \mathbf{u}+\mu (I_2-\mathbf{u}\otimes \mathbf{u}) \nabla \operatorname{div} \mathbf{u}+|\nabla \mathbf{u}|^2\mathbf{u},\qquad t\in [0,T], x\in\Omega_t^+,\label{OF flow}\end{equation}
Note that if $\mathbf{u}$ is regular enough, then \eqref{weak OF flow} is equivalent to \eqref{OF flow}. Complementing \eqref{OF flow} with the boundary condition in \eqref{anchoring bc} leads to a complete parabolic system in $\Omega^+$. It is worth mentioning that \eqref{OF flow} is the $L^2$-gradient flow of the variational problem \[\inf\int_{U} \mu |\operatorname{div} \mathbf{u}|^2+ |\nabla \mathbf{u}|^2\, dx,\label{OS variational}\end{equation} where the infimum is taken among mappings $\mathbf{u}\in H^1(U;{\mathbb{S}^1})$ fulfilling certain boundary conditions on $\partial U$ (see \cite{HardtKinderlehrerLin1986} for the analysis of the full Oseen--Frank model).
There is a large body of literature on the codimension-one scaling limit of Allen--Cahn type equations to the (two-phase) mean curvature flow. To list a few, we mention the convergence to a Brakke flow in \cite{MR1803974,MR1237490,MR3495430,MR2253464} and the convergence to the viscosity solution in \cite{MR1177477,MR1674799} and the references therein. All these works consider the scalar equation and make heavy use of the maximum principle and the comparison principle. In contrast, there are far fewer works on the vectorial Allen--Cahn equation, i.e. the time-dependent Ginzburg--Landau equation (see for instance \cite{MR1443865,MR978829}).
To the best of the author's knowledge, there are mainly two strategies for the convergence in the vectorial case, both assuming that the limiting two-phase problem has a classical solution. One is the asymptotic expansion method introduced in \cite{MR1672406}, which has been used in \cite{fei2021matrix,MR4059996} for matrix-valued Allen--Cahn equations. The other is the relative entropy method introduced in \cite{fischer2020convergence}, motivated by \cite{MR4072686,MR3353807}; its generalization to the matrix-valued case was carried out in \cite{MR4284534}.
The current work is intended to generalize the methods of \cite{fischer2020convergence,MR4284534} to the anisotropic system \eqref{GL system}. This system can be considered as a dynamical version of the one studied in recent works by Golovaty et al. \cite{MR4076075,MR3910590}. Concerning the method we use, at least in obtaining the anchoring boundary condition \eqref{anchoring bc}, which is one of the main novelties of this work, we are inspired by a recent work of Lin--Wang \cite{lin2020isotropic}. There the authors studied isotropic-nematic phase transitions in the static case based on an anisotropic Ericksen model.
This work will be organized as follows: In Section \ref{sec entropy}, we shall adapt the relative entropy method of \cite{fischer2020convergence} to the vectorial and anisotropic system \eqref{GL system}, and then derive a differential inequality, i.e. Proposition \ref{gronwallprop}. Such an inequality was first derived in \cite{MR4284534} for a matrix-valued equation. However, when applied to \eqref{GL system}, it will include a term which does not have an obvious sign due to the additional $\operatorname{div}$ term. This problem will be solved in Section \ref{sec close} during the proof of the differential inequality in Theorem \ref{thm close energy with div}. This theorem, which proves the first part of Theorem \ref{main thm}, is a major novelty of this work, and is also the main reason that we restrict ourselves to planar vector fields rather than mappings in higher dimensional spaces. This differential inequality is used in Section \ref{sec level} Theorem \ref{sharp l1} to derive an $L^1$ estimate of $\psi_\varepsilon $ (see \eqref{con psi}), which in turn will lead to estimates of the level sets of $\psi_\varepsilon $ in Lemma \ref{sharp est of bdy}. With this key lemma, we derive in Section \ref{anchoring} the anchoring boundary condition \eqref{anchoring bc}.
Section \ref{sec har} is devoted to the proof of Theorem \ref{main thm oseen frank}.
Unless specified otherwise $C>0$ is a generic constant whose value might change from line to line, and will depend only on the geometry of the interface \eqref{interface} but not on $\varepsilon $ or $t\in [0,T]$. In order to simplify the presentation, we shall sometimes abbreviate the estimates like $X\leqslant CY$ by $X\lesssim Y$ for some non-negative quantities $X,Y$. For two square matrices $A,B$, their Frobenius inner product is defined by $A:B:= \operatorname{tr} A^{\mathsf{T}} B$ . We shall also use the following abbreviations for the partial derivatives with respect to $t$ and $x$: \begin{equation} \partial_0=\partial_t,\quad \partial_1=\partial_{x_1}, \quad \partial_2=\partial_{x_2}, \quad \nabla=(\partial_1,\partial_2). \end{equation} For a function of $\mathbf{u}$, like $F(\mathbf{u})$, its gradient will be denoted by $\partial F=(\partial_{u_1} F,\partial_{u_2} F)$.
\section{Preliminaries}\label{sec entropy}
As the gradient flow of \eqref{GL energy}, the system \eqref{GL system} enjoys the energy dissipation law \begin{equation}\label{dissipation}
A_\varepsilon (\mathbf{u}_\varepsilon (\cdot,T))+ \int_0^T \int_\Omega \varepsilon |\partial_t \mathbf{u}_\varepsilon |^2 \,d x \,d t=A_\varepsilon (\mathbf{u}_\varepsilon ^{in}(\cdot))~\text{for all}~ T> 0. \end{equation} For initial data undergoing a phase transition near the initial interface $I_0$, we shall show that $\nabla \mathbf{u}_\varepsilon $ will be concentrated on $I_t$. To this end, the law \eqref{dissipation} alone is not sufficient to imply the (strong) convergence of $\mathbf{u}_\varepsilon $, not even in the domain away from $I_t$. Following a recent work of Fischer et al. \cite{fischer2020convergence} we shall develop in this section a differential inequality, which modulates the concentration and leads to the compactness of solutions in Sobolev spaces.
We first set up the geometry of the moving interface \eqref{interface}. Under a local parametrization $\boldsymbol{\varphi }_t(s):\mathbb{T}^1\to I_t$, the curve-shortening flow reads \[\partial_t \boldsymbol{\varphi }_t(s)\cdot \mathbf{n}(s,t)=\mathbf{H}(\boldsymbol{\varphi }_t(s),t)\cdot \mathbf{n}(s,t) \label{csf}\end{equation} where $\mathbf{H}$ is the (mean) curvature vector pointing to the inner normal $\mathbf{n}(s,t)$.
For any $t\in [0,T]$, we assume that the nearest-point projection $P_I(\cdot,t):I_t(4\delta_0)\mapsto I_t$ is smooth for some sufficiently small $\delta_0\in (0,1)$ which only depends on the geometry of $I_t$. Analytically we have $P_I(x,t) =x-\nabla d_I(x,t) d_I(x,t)$. So for each fixed $t\in [0,T]$, any point $x\in I_t(4\delta_0)$ corresponds to a unique pair $(r,s)$ with $r=d_I(x,t)$ and $s\in \mathbb{T}^1$, and thus the identity $$d_I(\boldsymbol{\varphi }_t(s)+r\mathbf{n}(s,t), t)\equiv r$$ holds with independent variables $(r,s,t)$.
Differentiating this identity with respect to $r$ and $t$ leads to the following identities:
\[\nabla d_I(x,t)= \mathbf{n}(s,t),\qquad -\partial_t d_I(x,t)=\partial_t \boldsymbol{\varphi }_t(s)\cdot\mathbf{n}(s,t)=: V(s,t).\label{velocity}\end{equation}
This extends the normal vector and the normal velocity of $I_t$ to a neighborhood of it. We shall extend $\mathbf{n}$ to the whole computational domain $\Omega$ by \[\boldsymbol{\xi } (x,t)=\phi \( \frac{d_I(x,t)}{\delta_0}\)\nabla d_I(x,t)\label{def:xi}\end{equation} where $\phi(x)\geqslant 0$ is an even, smooth function on $\mathbb{R}$ that decreases for $x\in [0,1]$, and satisfies \begin{equation} \begin{cases}
\phi(x)>0~&\text{for}~|x|< 1, \\
\phi(x)=0~&\text{for}~|x|\geqslant 1, \\
1-4 x^2\leqslant \phi(x)\leqslant 1-\frac 12 x^2~&\text{for}~|x|\leqslant 1/2. \end{cases}\label{phi func control} \end{equation} To fulfill these requirements, we can simply choose
\[\phi(x)=e^{\frac 1{x^2-1}+1}~\text{for}~|x|< 1~\text{and}~\phi(x)=0~\text{for}~|x|\geqslant 1.\label{phi func}\end{equation}
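As a quick sanity check, and purely for illustration (this is not part of any proof), the bounds in \eqref{phi func control} for this particular choice of $\phi$ can be confirmed numerically, e.g. with the following short numpy snippet.
\begin{verbatim}
import numpy as np

# Illustrative check of 1 - 4x^2 <= phi(x) <= 1 - x^2/2 on [0, 1/2]
# for phi(x) = exp(1/(x^2 - 1) + 1) = exp(x^2/(x^2 - 1)).
x = np.linspace(0.0, 0.5, 501)
phi = np.exp(x**2 / (x**2 - 1.0))
assert np.all(1.0 - 4.0 * x**2 <= phi + 1e-12)
assert np.all(phi <= 1.0 - 0.5 * x**2 + 1e-12)
\end{verbatim}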
\picdis{\begin{tikzpicture}[scale = 0.8] \begin{axis}[axis equal,axis lines = left,
]
\addplot[domain=0: 0.9999,color=red,samples=100]{exp(x^2/(x^2-1))} node[above] {$\phi(x)$}; \addplot[domain=0:0.5,color=black]{1-4*x^2} node[above] {$1-4x^2$}; \addplot[domain=0:1,color=black]{1-0.5*x^2} node[below] {$1-\frac{x^2}2$}; \end{axis}
\end{tikzpicture} \qquad \begin{tikzpicture}[scale = 1] \begin{axis}[axis equal,
axis lines=none,
xtick=\empty,
ytick=\empty,
]
\draw (30, 100) node {$\Omega_t^-$};
\addplot[samples=8, domain=1.8:pi-0.3,
variable=\t,
quiver={
u={-cos(deg(t))},
v={-sin(deg(t))},
scale arrows=0.09},
->,black]
({cos(deg(t))}, {sin(deg(t))});
\addplot[samples=10, domain=1.8:pi-0.3,
variable=\t,
quiver={
u={-1.5*cos(deg(t))},
v={-1.5*sin(deg(t))},
scale arrows=0.07},
->,black]
({1.5*cos(deg(t))}, {1.5*sin(deg(t))});
\addplot[samples=10, domain=1.8:pi-0.3,
variable=\t,
quiver={
u={-1.25*cos(deg(t))},
v={-1.25*sin(deg(t))},
scale arrows=0.13},
->,black]
({1.25*cos(deg(t))}, {1.25*sin(deg(t))}) ;
\addplot[samples=100, domain=1.5:pi-0.2,dashed]
({1.5*cos(deg(x))}, {1.5*sin(deg(x))});
\addplot[samples=100, domain=1.5:pi-0.2, dashed,black]
({cos(deg(x))}, {sin(deg(x))}) ;
\addplot[samples=100, domain=1.5:pi-0.2, very thick,red]
({1.25*cos(deg(x))}, {1.25*sin(deg(x))}) node[right]{$I_t$};
\draw (100, 45) node {$\boldsymbol{\xi } $};
\draw (100, 10) node {$\Omega_t^+$}; \end{axis} \end{tikzpicture}
}
We proceed with the extension of the (mean) curvature. Choosing a cut-off function
\[\eta_0\in C_c^\infty(I_t(2\delta_0))~\text{ with }\eta_0=1\text{ in }I_t(\delta_0),\label{cut-off eta delta}\end{equation}
we extend the (mean) curvature vector $\mathbf{H}$ by \[ \mathbf{H}(x,t)=\kappa \nabla d_I(x,t) \quad\text{with}\quad \kappa(x,t)=-\Delta d_I (P_I(x,t))\eta_0(x,t).\label{def:H}\end{equation} As $\mathbf{H}$ \eqref{def:H} is extended constantly in the normal direction, we have
\begin{align}
(\mathbf{n}\cdot\nabla )\mathbf{H}&=0\text{ and } (\boldsymbol{\xi } \cdot\nabla )\mathbf{H}=0\quad \text{in}~I_t(\delta_0).\label{normal H}
\end{align}
Moreover, by \eqref{def:xi} we have
\begin{equation}\label{bc n and H}
\boldsymbol{\xi } =0~\text{on}~\partial\Omega~\text{and}~\mathbf{H}=0~\text{on}~\partial\Omega.
\end{equation} We claim also the following:
\begin{subequations}\label{xi der}
\begin{align}
|\nabla\cdot \boldsymbol{\xi } +\mathbf{H} \cdot \boldsymbol{\xi }| \lesssim &~ | d_I|\quad \text{in}~I_t(\delta_0),\label{div xi H}\\
\partial_t d_I(x,t) +(\mathbf{H}( x,t)\cdot\nabla) d_I(x,t) &=0\quad \text{in}~I_t(\delta_0),\label{mcf}\\ \partial_t \boldsymbol{\xi } +\left(\mathbf{H} \cdot \nabla\right) \boldsymbol{\xi } +\left(\nabla \mathbf{H}\right)^{\mathsf{T}} \boldsymbol{\xi } &=0\quad \text{in}~I_t(\delta_0),\label{xi der1} \end{align}
\end{subequations} where $\nabla \mathbf{H}:=\{\partial_j H_i\}_{1\leqslant i,j\leqslant 2}$ is a matrix with $i$ being the row index.
\begin{proof}[Proof of \eqref{xi der}]
Recalling \eqref{def:xi}, $\phi_0(\tau):=\phi (\frac \tau{\delta_0})$ is an even function. So it follows from $\phi_0'(0)=0$ and Taylor's expansion in $d_I$ that \begin{align*}
\nabla\cdot \boldsymbol{\xi } &=|\nabla d_I|^2 \phi_0'(d_I)+\phi_0(d_I)\Delta d_I(x,t) \\&=O(d_I) +\phi_0 (d_I)\Delta d_I(P_I(x,t),t), \end{align*} and this together with \eqref{def:H} leads to \eqref{div xi H}. Using \eqref{velocity} and \eqref{def:H}, we can write \eqref{csf} as the transport equation \eqref{mcf}. By \eqref{mcf} we have the following identities in $I_t(\delta_0)$: \begin{align*} \partial_t \nabla d_I+(\mathbf{H} \cdot \nabla) \nabla d_I +(\nabla \mathbf{H} )^{\mathsf{T}} \nabla d_I=0,\\ \partial_t \phi_0(d_I )+ (\mathbf{H} \cdot\nabla) \phi_0(d_I)=0. \end{align*}
These two equations together imply \eqref{xi der1}.
\end{proof}
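For orientation, and although it is not needed in the sequel, these identities are easy to check by hand in the model case where $I_t$ is a circle of radius $R(t)$ centered at the origin and shrinking by curvature, i.e. $R'(t)=-1/R(t)$. Then near $I_t$,
\begin{align*}
d_I(x,t)=R(t)-|x|,\qquad \nabla d_I=-\frac{x}{|x|},\qquad \Delta d_I=-\frac 1{|x|},
\end{align*}
so that $\kappa=1/R(t)$ in $I_t(\delta_0)$ by \eqref{def:H}, $V=-\partial_t d_I=1/R(t)$ by \eqref{velocity}, and
\begin{align*}
\partial_t d_I+(\mathbf{H}\cdot\nabla) d_I=R'(t)+\kappa\,|\nabla d_I|^2=-\frac 1{R(t)}+\frac 1{R(t)}=0,
\end{align*}
which is precisely \eqref{mcf}.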
For our purposes, it is more convenient to write $\psi_\varepsilon =d_F\circ \mathbf{u}_\varepsilon $ where
\[d_F (\mathbf{u}):=\int_0^{|\mathbf{u}|} g(s)\, ds\overset{\eqref{g lip}}\in C^1(\mathbb{R}^2)\quad \text{ and }\quad \partial d_F(0) =0.\label{df def}\end{equation}
Without loss of generality, we assume \[ \int_0^1 g(s)\, ds=1.\label{normalization}\end{equation} For a fixed $\varepsilon >0$, $\mathbf{u}_\varepsilon (x,t)$ is differentiable. So by chain rule
\begin{align}
\label{ADM chain rule}
\partial_i\psi_\varepsilon (x,t) = \partial_i \mathbf{u}_\varepsilon (x,t)\cdot\partial d_F\(\mathbf{u}_\varepsilon (x,t)\). \end{align} We define the phase-field analogues of the normal vector and the mean curvature vector respectively by \begin{subequations} \begin{align}
\mathbf{n}_\varepsilon (x,t)&:=\begin{cases}
\frac{\nabla \psi_\varepsilon }{|\nabla \psi_\varepsilon |}(x,t)&\text{ if } \nabla \psi_\varepsilon (x,t)\neq 0,\\ 0& \text{otherwise}.
\end{cases} \label{normal diff}\\ \mathbf{H}_\varepsilon (x,t)&:=\begin{cases}
-\left(\varepsilon \Delta \mathbf{u}_\varepsilon -\frac{1}{\varepsilon }\partial F(\mathbf{u}_\varepsilon ) \right)\cdot\frac{\nabla \mathbf{u}_\varepsilon }{\left|\nabla \mathbf{u}_\varepsilon \right|} &\text{ if } \nabla \mathbf{u}_\varepsilon \neq 0,\\ 0&\text{otherwise}. \end{cases}
\label{mean curvature app} \end{align} \end{subequations}
Note that in \eqref{mean curvature app}, the inner product is made with the column vectors of $\nabla \mathbf{u}_\varepsilon =(\partial_1 \mathbf{u}_\varepsilon ,\partial_2 \mathbf{u}_\varepsilon )$. We also define the projection: \begin{align} \label{projection1} \Pi_{\mathbf{u}_\varepsilon } \partial_i \mathbf{u}_\varepsilon := \begin{cases}
\(\partial_i \mathbf{u}_\varepsilon \cdot\frac{\mathbf{u}_\varepsilon }{| \mathbf{u}_\varepsilon | }\) \frac{\mathbf{u}_\varepsilon }{| \mathbf{u}_\varepsilon | }&~\text{if}~ \mathbf{u}_\varepsilon \neq 0,\\ 0,&~\text{otherwise}. \end{cases} \end{align} Using \eqref{df def}, \eqref{ADM chain rule} and \eqref{projection1}, we deduce \[\label{projectionnorm}
|\nabla \psi_\varepsilon | = |\Pi_{\mathbf{u}_\varepsilon } \nabla \mathbf{u}_\varepsilon | |\partial d_F (\mathbf{u}_\varepsilon )|\quad \text{ for any }(x,t).\end{equation} We claim the following identity: \begin{align} & \label{projection}
\Pi_{\mathbf{u}_\varepsilon } \nabla \mathbf{u}_\varepsilon =\frac{|\nabla\psi_\varepsilon |} {|\partial d_F (\mathbf{u}_\varepsilon )|^2}\partial d_F (\mathbf{u}_\varepsilon )\otimes \mathbf{n}_\varepsilon \quad \text{ if } g(\mathbf{u}_\varepsilon )> 0. \end{align} Indeed, by \eqref{df def} $g(\mathbf{u}_\varepsilon )>0$ implies $\partial d_F(\mathbf{u}_\varepsilon )\neq 0$. So we have \begin{align}
\frac{|\nabla\psi_\varepsilon |} {|\partial d_F (\mathbf{u}_\varepsilon )|^2}\partial d_F (\mathbf{u}_\varepsilon )\otimes \mathbf{n}_\varepsilon \overset{
\eqref{normal diff}}=\frac{\partial d_F (\mathbf{u}_\varepsilon ) } {|\partial d_F (\mathbf{u}_\varepsilon )|^2}\otimes \nabla\psi_\varepsilon \overset{\eqref{ADM chain rule}}= \frac{\mathbf{u}_\varepsilon }{| \mathbf{u}_\varepsilon |}\otimes \nabla |\mathbf{u}_\varepsilon |, \end{align} and this implies \eqref{projection} in view of \eqref{projection1}.
The following lemma establishes coercivity properties of $E_\varepsilon [\mathbf{u}_\varepsilon | I]$ \eqref{entropy}. As we shall not integrate the time variable throughout this section, we shall suppress its dependence, and abbreviate $\int_\Omega$ by $\int$. \begin{lemma}\label{lemma:energy bound} There exists a universal constant $C>0$ which is independent of $t\in [0,T)$ and $\varepsilon $ such that the following estimates hold for every $t\in (0,T)$: \begin{subequations} \label{energy bound} \begin{align}
\int \(\frac{\varepsilon }{2} \left|\nabla \mathbf{u}_\varepsilon \right|^2+\frac{1}{\varepsilon } F (\mathbf{u}_\varepsilon )-|\nabla \psi_\varepsilon | \) \, d x & \leqslant E_\varepsilon [\mathbf{u}_\varepsilon | I] , \label{energy bound-1}\\
\varepsilon \int \( \mu |\operatorname{div} \mathbf{u}_\varepsilon |^2+\left|\nabla \mathbf{u}_\varepsilon -\Pi_{\mathbf{u}_\varepsilon }\nabla \mathbf{u}_\varepsilon \right|^2 \)\, d x & \leqslant 2 E_\varepsilon [\mathbf{u}_\varepsilon | I] ,\label{energy bound0}\\
\int\left(\sqrt{\varepsilon }\left|\Pi_{\mathbf{u}_\varepsilon }\nabla \mathbf{u}_\varepsilon \right|-\frac1{\sqrt{\varepsilon }} \left|\partial d_F (\mathbf{u}_\varepsilon )\right| \right)^{2}\, d x & \leqslant 2 E_\varepsilon [ \mathbf{u}_\varepsilon | I] ,\label{energy bound2}\\
\int\( {\frac{\varepsilon }{2}}\left| \nabla \mathbf{u}_\varepsilon \right|^{2} +\frac{1}{\varepsilon } F (\mathbf{u}_\varepsilon )+\left|\nabla \psi_\varepsilon \right|\)\left(1-\boldsymbol{\xi } \cdot\mathbf{n}_\varepsilon \right) \, d x & \leqslant C E_\varepsilon [ \mathbf{u}_\varepsilon | I] ,\label{energy bound1} \\
\int \(\frac{\varepsilon }2 \left|\nabla \mathbf{u}_\varepsilon \right| ^{2} +\frac{1}{\varepsilon } F (\mathbf{u}_\varepsilon )+|\nabla\psi_\varepsilon |\) \min\(d^2_I,1\)\, d x & \leqslant C E_\varepsilon [ \mathbf{u}_\varepsilon | I]. \label{energy bound3}
\end{align} \end{subequations} \end{lemma} \begin{proof}
The case $\mu\equiv 0$ was proved in \cite{MR4284534}, and the proof carries over to the current case. Using \eqref{normal diff}, we obtain $\nabla\psi_\varepsilon =|\nabla\psi_\varepsilon |\mathbf{n}_\varepsilon $. Note also that \eqref{projection1} implies
\[\left|\nabla \mathbf{u}_\varepsilon -\Pi_{\mathbf{u}_\varepsilon }\nabla \mathbf{u}_\varepsilon \right|^2+\left| \Pi_{\mathbf{u}_\varepsilon }\nabla \mathbf{u}_\varepsilon \right|^2=\left|\nabla \mathbf{u}_\varepsilon \right|^2.\label{gougudingli}\end{equation}
Altogether, we can write
\begin{align}
E_\varepsilon [\mathbf{u}_\varepsilon | I] = & \int \frac{\varepsilon }2 \mu |\operatorname{div} \mathbf{u}_\varepsilon |^2+ \frac \varepsilon 2\left| \nabla \mathbf{u}_\varepsilon \right|^2 +\frac{1}{\varepsilon } F (\mathbf{u}_\varepsilon )-|\nabla \psi_\varepsilon | +\int |\nabla\psi_\varepsilon | (1-\boldsymbol{\xi } \cdot\mathbf{n}_\varepsilon )\nonumber\\
= & \frac{\varepsilon }2 \int \mu |\operatorname{div} \mathbf{u}_\varepsilon |^2+ \left|\nabla \mathbf{u}_\varepsilon -\Pi_{\mathbf{u}_\varepsilon }\nabla \mathbf{u}_\varepsilon \right|^2 \nonumber \\
&+ \int \frac \varepsilon 2\left| \Pi_{\mathbf{u}_\varepsilon }\nabla \mathbf{u}_\varepsilon \right|^2 +\frac{1}{\varepsilon } F (\mathbf{u}_\varepsilon )-|\nabla \psi_\varepsilon |\nonumber \\
& +\int |\nabla\psi_\varepsilon | (1-\boldsymbol{\xi } \cdot\mathbf{n}_\varepsilon ).\label{E decom1} \end{align}
We claim that the second last integral has non-negative integrand. Indeed, by \eqref{df def} and \eqref{bulk} we have $|\partial d_F (\mathbf{u})| = \sqrt{2 F (\mathbf{u})}$, and the claim follows from \eqref{projectionnorm} and completing a square. This also yields \eqref{energy bound-1}, \eqref{energy bound0} and \eqref{energy bound2}.
Combining \eqref{energy bound-1} with
$E_\varepsilon [ \mathbf{u}_\varepsilon | I]\geqslant \int\left(1-\boldsymbol{\xi } \cdot\mathbf{n}_\varepsilon \right)\left|\nabla \psi_\varepsilon \right|$
and $1-\boldsymbol{\xi } \cdot\mathbf{n}_\varepsilon \leqslant 2$ yields \eqref{energy bound1}. Finally, by \eqref{phi func control} and $\delta_0\in (0,1)$ we have \[1-\boldsymbol{\xi } \cdot\mathbf{n}_\varepsilon \geqslant 1-\phi\(\frac {d_I}{\delta_0}\) \geqslant \min \(\frac {d^2_I}{2\delta_0^2}, 1-\phi(\tfrac 1 2)\)\geqslant C_{\phi,\delta_0} \min(d^2_I,1).\label{lowerbdcali}\end{equation} This together with \eqref{energy bound1} implies \eqref{energy bound3}. \end{proof}
The following result was first proved in \cite{fischer2020convergence} for the scalar Allen-Cahn equation, and was generalized to the vectorial case in \cite{MR4284534}. \begin{prop}\label{gronwallprop}
There exists a generic constant $C>0$ depending only on the geometry of the interface $I_t$ so that
\begin{align}
\frac{d}{d t} E_\varepsilon [ \mathbf{u}_\varepsilon | I] &+\frac 1{2\varepsilon }\int \(\varepsilon ^2 \left| \partial_t \mathbf{u}_\varepsilon \right|^2-|\mathbf{H}_\varepsilon |^2\)\,dx+\frac 1{2\varepsilon }\int \left| \varepsilon \partial_t \mathbf{u}_\varepsilon -(\nabla\cdot \boldsymbol{\xi } )\partial d_F (\mathbf{u}_\varepsilon ) \right|^2\,dx\nonumber \\
&+\frac 1{2\varepsilon }\int \Big| \mathbf{H}_\varepsilon -\varepsilon |\nabla \mathbf{u}_\varepsilon |\mathbf{H} \Big|^2\,dx \leqslant CE_\varepsilon [ \mathbf{u}_\varepsilon | I]. \label{gronwall}
\end{align} \end{prop}
We present the proof in Appendix \ref{appendix} for the convenience of the readers.
\section{Uniform estimates of solutions}\label{sec close}
The second term on the left-hand side of \eqref{gronwall} does not have an obvious sign. When $\mu=0$, it is non-negative by \eqref{mean curvature app} and \eqref{Ginzburg-Landau}.
The main task of this section is to show that it is controllable for any fixed $\mu>0$ in 2D. \begin{theorem}\label{thm close energy with div} Under the assumptions of Theorem \ref{main thm}, there exists a generic constant $C>0$ such that
\begin{align}\label{energy bound4}
\sup_{t\in [0,T]} \varepsilon ^{-1} E_\varepsilon [ \mathbf{u}_\varepsilon | I]+&\int_0^T\int_\Omega |\partial_t \mathbf{u}_\varepsilon +(\mathbf{H} \cdot\nabla) \mathbf{u}_\varepsilon |^2 + \left|\partial_t \mathbf{u}_\varepsilon -\Pi_{\mathbf{u}_\varepsilon }\partial_t \mathbf{u}_\varepsilon \right|^2 \, dxdt \leqslant e^{(1+T)C(I_0)}.
\end{align}
\end{theorem} To proceed we need an estimate of tangential derivatives. \begin{lemma} Let $\eta_1 \in C_c^\infty(I_t(4\delta_0))$ be a cut-off function which $\equiv 1$ in $I_t(3\delta_0)$ and $\mathbf{n}=\nabla d_I$. There exists a universal constant $C>0$ which is independent of $t\in (0,T)$ and $\varepsilon $ such that
\begin{align}
\int \eta_1\left| \nabla \mathbf{u}_\varepsilon \(I_2-\mathbf{n}\otimes \mathbf{n} \) \right|^2\leqslant C \varepsilon ^{-1}
E_\varepsilon [ \mathbf{u}_\varepsilon | I](t)\qquad \forall t\in (0,T).\label{tan est of Q2} \end{align}
\end{lemma} \begin{proof} We can use \eqref{projection} to estimate the norm of the matrix product \begin{align*}
&\left| \Pi_{\mathbf{u}_\varepsilon } \nabla \mathbf{u}_\varepsilon (I_2- \mathbf{n}_\varepsilon \otimes\boldsymbol{\xi } ) \right|^2\\
\overset{\eqref{projection}}=&\,\left|\frac{|\nabla\psi_\varepsilon |} {|\partial d_F (\mathbf{u}_\varepsilon )|^2}\partial d_F (\mathbf{u}_\varepsilon )\otimes (\mathbf{n}_\varepsilon -\boldsymbol{\xi } )\right|^2\qquad \text{ on the set }~\{ x\mid g(\mathbf{u}_\varepsilon )>0\}\\
\overset{\eqref{projectionnorm}}\leqslant & \, 2(1-\boldsymbol{\xi } \cdot\mathbf{n}_\varepsilon )\left| \Pi_{\mathbf{u}_\varepsilon } \nabla \mathbf{u}_\varepsilon \right|^2\\
\leqslant &\, 2(1-\boldsymbol{\xi } \cdot\mathbf{n}_\varepsilon )\left| \nabla \mathbf{u}_\varepsilon \right|^2. \end{align*}
On the set $\{ x\mid |\mathbf{u}_\varepsilon |=0\}$ we find $\Pi_{\mathbf{u}_\varepsilon } \nabla \mathbf{u}_\varepsilon =0$ by \eqref{projection1}. On $ \{ x\mid |\mathbf{u}_\varepsilon |=1\}$ we deduce from
\cite[Theorem 4.4]{MR3409135} that $\Pi_{\mathbf{u}_\varepsilon } \nabla \mathbf{u}_\varepsilon =0$ a.e. Thus the above estimate holds a.e. in $\Omega$.
This together with \eqref{energy bound1} implies
\[\label{tan est of Q}
\int \Big| \Pi_{\mathbf{u}_\varepsilon } \nabla \mathbf{u}_\varepsilon (I_2- \mathbf{n}_\varepsilon \otimes\boldsymbol{\xi } ) \Big|^2 \leqslant C \varepsilon ^{-1}
E_\varepsilon [ \mathbf{u}_\varepsilon | I].\end{equation} In $I_t(4\delta_0)$ where $\mathbf{n}=\nabla d_I$ is well-defined, we have the decomposition
\[I_2-\mathbf{n}_\varepsilon \otimes \mathbf{n} =I_2- \mathbf{n}_\varepsilon \otimes \boldsymbol{\xi }+ \mathbf{n}_\varepsilon \otimes ( \boldsymbol{\xi } -\mathbf{n}).\end{equation} It follows from \eqref{def:xi} and \eqref{phi func control} that
\[| \boldsymbol{\xi }-\mathbf{n} |^2\leqslant 2| \boldsymbol{\xi }-\mathbf{n} |=2\(1-\phi(\tfrac{d_I}{\delta_0})\)\lesssim \min \(d^2_I ,1\).\label{diff normal d2}\end{equation}
These inequalities together with \eqref{energy bound3} lead to
\[ \int \eta_1\Big| \Pi_{\mathbf{u}_\varepsilon } \nabla \mathbf{u}_\varepsilon (I_2- \mathbf{n}_\varepsilon \otimes\mathbf{n} ) \Big|^2\leqslant C \varepsilon ^{-1}
E_\varepsilon [ \mathbf{u}_\varepsilon | I].\label{tan est of Q1}\end{equation} By \eqref{tan est of Q1} and the formula $$(I_2-\mathbf{n}\otimes \mathbf{n})-(I_2-\mathbf{n}_\varepsilon \otimes \mathbf{n})=(\mathbf{n}_\varepsilon -\boldsymbol{\xi })\otimes \mathbf{n}+(\boldsymbol{\xi }-\mathbf{n})\otimes \mathbf{n}~\text{ in }~I_t(4\delta_0),$$
the proof of \eqref{tan est of Q2} is finally done by
$$ \int | \nabla \mathbf{u}_\varepsilon |^2 \Big( |\mathbf{n}_\varepsilon -\boldsymbol{\xi }|^2+|\boldsymbol{\xi }-\mathbf{n}|^2\Big) \leqslant C \varepsilon ^{-1}
E_\varepsilon [ \mathbf{u}_\varepsilon | I],$$ which follows from \eqref{diff normal d2}, \eqref{energy bound1} and \eqref{energy bound3}.
\end{proof} To proceed we need an $L^4$-estimate of $\mathbf{u}_\varepsilon $.
\begin{lemma}\label{L infinity bound}
Under the assumption \eqref{bdd1}, there exists a constant $C=C(c_1)>0$ such that \begin{subequations} \begin{align}
& \sup_{t\in [0,T]} A_\varepsilon (\mathbf{u}_\varepsilon (\cdot,t)) + \sup_{t\in [0,T]} \|\nabla\psi_\varepsilon (\cdot, t)\|_{L^1(\Omega) } \leqslant C,\label{nablapsiest}\\ \label{L infinity bound1}
& \sup_{t\in [0,T]} \|\mathbf{u}_\varepsilon (\cdot, t) \|_{L^4(\Omega)}\leqslant ~ C.\end{align} \end{subequations} \end{lemma} \begin{proof} It follows from \eqref{dissipation}, \eqref{projection1} and the Cauchy--Schwarz inequality that
$$A_\varepsilon (\mathbf{u}_\varepsilon )\geqslant \int_\Omega \(\frac \varepsilon 2|\Pi_{\mathbf{u}_\varepsilon } \nabla \mathbf{u}_\varepsilon |^2+\frac 1\varepsilon F(\mathbf{u}_\varepsilon )\)\, dx\geqslant \int_\Omega |\nabla\psi_\varepsilon |\, dx,$$
and \eqref{nablapsiest} is proved. To prove \eqref{L infinity bound1}, we first note that if $|\mathbf{u}_\varepsilon |> 2$, then
$$\psi_\varepsilon =\int_0^2g(s) \, ds+\int_2^{|\mathbf{u}_\varepsilon |}g(s) \, ds\overset{\eqref{growth f}}\geqslant c_0 (|\mathbf{u}_\varepsilon |^2-4).$$
This combined with Sobolev's embedding in 2D and $\psi_\varepsilon |_{\partial\Omega}=0$ (cf. \eqref{bc of omega}) leads to \begin{align*}
\int_\Omega |\mathbf{u}_\varepsilon |^4\, dx &\lesssim 1+\int_{\{x: |\mathbf{u}_\varepsilon |> 2\}} |\mathbf{u}_\varepsilon |^4\, dx\nonumber\\
&\lesssim 1+\|\psi_\varepsilon \|^2_{L^2(\Omega)}\lesssim 1+\|\nabla \psi_\varepsilon \|^2_{L^1(\Omega)}\overset{ \eqref{nablapsiest}}\leqslant C. \end{align*}
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm close energy with div}] We shall use the following notation \begin{subequations}\label{2d operator} \begin{align} &\nabla^\perp =(-\partial_2 , \partial_1), \\ & \mathbf{w}^\bot=(-w_2,w_1)~\text{ for }~\mathbf{w}=(w_1,w_2)\in C^1(\Omega;\mathbb{R}^2),\\ &\operatorname{rot} \mathbf{w}=\nabla^\bot \cdot \mathbf{w}=-\partial_2 w_1+\partial_1 w_2.\label{curl} \end{align} \end{subequations} We shall also employ the Einstein summation convention by summing over repeated Latin indices. It follows from \eqref{gronwall} that \begin{align}\label{energy1}
\frac 2 \varepsilon \frac{d}{d t} E_\varepsilon [ \mathbf{u}_\varepsilon | I] &+\frac 1{\varepsilon ^2 }\int \Big(\varepsilon ^2 \left| \partial_t \mathbf{u}_\varepsilon \right|^2-|\mathbf{H}_\varepsilon |^2\Big)+\Big| \mathbf{H}_\varepsilon -\varepsilon |\nabla \mathbf{u}_\varepsilon |\mathbf{H} \Big|^2\,dx\nonumber\\
&+\frac 1{\varepsilon ^2 }\int \left| \varepsilon \partial_t \mathbf{u}_\varepsilon -\partial d_F (\mathbf{u}_\varepsilon )(\nabla\cdot \boldsymbol{\xi } ) \right|^2\,dx \leqslant \frac C \varepsilon E_\varepsilon [ \mathbf{u}_\varepsilon | I].
\end{align} By the orthogonal projection \eqref{projection1}, we can write \begin{align*}
&\left| \varepsilon \partial_t \mathbf{u}_\varepsilon -\partial d_F (\mathbf{u}_\varepsilon ) (\nabla\cdot\boldsymbol{\xi } ) \right|^2\\=
&\left| \varepsilon \partial_t \mathbf{u}_\varepsilon -\varepsilon \Pi_{\mathbf{u}_\varepsilon } \partial_t \mathbf{u}_\varepsilon \right|^2+\left| \varepsilon \Pi_{\mathbf{u}_\varepsilon } \partial_t \mathbf{u}_\varepsilon -\partial d_F (\mathbf{u}_\varepsilon ) (\nabla\cdot\boldsymbol{\xi } ) \right|^2. \end{align*} Substituting this into \eqref{energy1} yields \begin{align}\label{energy2}
\frac 2 \varepsilon \frac{d}{d t} E_\varepsilon [ \mathbf{u}_\varepsilon | I] &+\frac 1{\varepsilon ^2 }\int \Big(\varepsilon ^2 \left| \partial_t \mathbf{u}_\varepsilon \right|^2-|\mathbf{H}_\varepsilon |^2\Big)+\Big| \mathbf{H}_\varepsilon -\varepsilon |\nabla \mathbf{u}_\varepsilon |\mathbf{H} \Big|^2\,dx\nonumber\\
&+ \int \left|\partial_t \mathbf{u}_\varepsilon -\Pi_{\mathbf{u}_\varepsilon }\partial_t \mathbf{u}_\varepsilon \right|^2\,dx \leqslant \frac C \varepsilon E_\varepsilon [ \mathbf{u}_\varepsilon | I] \end{align} To estimate the second term on the left-hand side, we use \eqref{Ginzburg-Landau}, \eqref{mean curvature app} and write
\[ \mathbf{H}_\varepsilon =- \varepsilon ( \partial_t \mathbf{u}_\varepsilon -\mu \nabla\operatorname{div} \mathbf{u}_\varepsilon )\cdot\frac{\nabla \mathbf{u}_\varepsilon }{|\nabla \mathbf{u}_\varepsilon |}.\end{equation} Note that the inner product is made with the column vectors of $\nabla \mathbf{u}_\varepsilon =(\partial_1 \mathbf{u}_\varepsilon ,\partial_2 \mathbf{u}_\varepsilon )$. Using the above formula, we expand the integrands of \eqref{energy2} and apply the Cauchy-Schwarz inequality: \begin{align*}
&~\varepsilon ^2 \left| \partial_t \mathbf{u}_\varepsilon \right|^2-|\mathbf{H}_\varepsilon |^2+ \Big| \mathbf{H}_\varepsilon -\varepsilon |\nabla \mathbf{u}_\varepsilon | \mathbf{H}\Big|^2\\
=&~\varepsilon ^2 \left| \partial_t \mathbf{u}_\varepsilon \right|^2+\varepsilon ^2 |\mathbf{H}|^2 |\nabla \mathbf{u}_\varepsilon |^2+2\varepsilon ^2 \partial_t \mathbf{u}_\varepsilon \cdot (\mathbf{H}\cdot \nabla) \mathbf{u}_\varepsilon -2\varepsilon ^2\mu \nabla \operatorname{div} \mathbf{u}_\varepsilon \cdot(\mathbf{H}\cdot\nabla ) \mathbf{u}_\varepsilon \\
= &~\varepsilon ^2|\partial_t \mathbf{u}_\varepsilon +(\mathbf{H} \cdot\nabla) \mathbf{u}_\varepsilon |^2+\varepsilon ^2 \( |\mathbf{H}|^2 |\nabla \mathbf{u}_\varepsilon |^2- |(\mathbf{H} \cdot\nabla) \mathbf{u}_\varepsilon |^2 \) -2\varepsilon ^2\mu \nabla \operatorname{div} \mathbf{u}_\varepsilon \cdot (\mathbf{H}\cdot\nabla ) \mathbf{u}_\varepsilon . \end{align*} Note that the second term in the last display is non-negative due to Cauchy-Schwarz's inequality, and this implies that \begin{align*}
& \int |\partial_t \mathbf{u}_\varepsilon +(\mathbf{H} \cdot\nabla) \mathbf{u}_\varepsilon |^2 \, dx\nonumber\\
\leqslant & \frac 1{\varepsilon ^2} \int \Big(\varepsilon ^2 \left| \partial_t \mathbf{u}_\varepsilon \right|^2-|\mathbf{H}_\varepsilon |^2\Big)+ \Big| \mathbf{H}_\varepsilon -\varepsilon |\nabla \mathbf{u}_\varepsilon | \mathbf{H} \Big |^2\, dx
+2\mu \int \nabla \operatorname{div} \mathbf{u}_\varepsilon \cdot (\mathbf{H}\cdot\nabla ) \mathbf{u}_\varepsilon \, dx \end{align*} Adding the above inequality to \eqref{energy2} yields
\begin{align}\label{lower bound relative2}
2\varepsilon ^{-1} \frac{d}{d t} E_\varepsilon [ \mathbf{u}_\varepsilon | I] &+\int |\partial_t \mathbf{u}_\varepsilon +(\mathbf{H} \cdot\nabla) \mathbf{u}_\varepsilon |^2 \, dx + \int \left|\partial_t \mathbf{u}_\varepsilon -\Pi_{\mathbf{u}_\varepsilon }\partial_t \mathbf{u}_\varepsilon \right|^2\,dx \nonumber \\
&\leqslant C \varepsilon ^{-1} E_\varepsilon [ \mathbf{u}_\varepsilon | I]+2\mu \int \nabla \operatorname{div} \mathbf{u}_\varepsilon \cdot (\mathbf{H}\cdot\nabla ) \mathbf{u}_\varepsilon \, dx. \end{align}
In the sequel, we denote $\mathbf{u}_\varepsilon =(u^1_\varepsilon ,u^2_\varepsilon )$ and $\mathbf{H}=(H^1,H^2)$.
To estimate the last term we use integration by parts and \eqref{bc n and H} \begin{align}\label{control is div2} &-\int \nabla \operatorname{div} \mathbf{u}_\varepsilon \cdot (\mathbf{H}\cdot\nabla ) \mathbf{u}_\varepsilon \\ = & \int (\operatorname{div} \mathbf{u}_\varepsilon ) (\mathbf{H}\cdot\nabla ) \operatorname{div} \mathbf{u}_\varepsilon + \int \operatorname{div} \mathbf{u}_\varepsilon (\partial_j \mathbf{H}\cdot\nabla ) u_\varepsilon ^j\nonumber\\ =&-\frac 12 \int (\operatorname{div} \mathbf{H} ) (\operatorname{div} \mathbf{u}_\varepsilon )^2+\int (\operatorname{div} \mathbf{u}_\varepsilon ) \partial_k H^j \partial_k u_\varepsilon ^j+\int (\operatorname{div} \mathbf{u}_\varepsilon ) (\partial_j H^k-\partial_k H^j) \partial_k u_\varepsilon ^j.\nonumber \end{align}
Using \eqref{energy bound0}, the first integral in the last line of \eqref{control is div2} is bounded by $\varepsilon ^{-1} E_\varepsilon [ \mathbf{u}_\varepsilon | I]$ up to a constant. The second integral can be treated by decomposing $\nabla u_\varepsilon ^j$ along $\mathbf{n}$: \begin{align} \label{tan est1} &\int (\operatorname{div} \mathbf{u}_\varepsilon ) \,\nabla H^j\cdot \nabla u_\varepsilon ^j\nonumber\\ = & \int (\operatorname{div} \mathbf{u}_\varepsilon ) \,\nabla H^j\cdot \((I_2-\mathbf{n}\otimes \mathbf{n} ) \nabla u_\varepsilon ^j\)+\int (\operatorname{div} \mathbf{u}_\varepsilon ) \, (\mathbf{n} \cdot \nabla H^j) \( \mathbf{n} \cdot \nabla u_\varepsilon ^j\)\nonumber\\
\overset{\eqref{normal H}}\lesssim & \int |\operatorname{div} \mathbf{u}_\varepsilon |^2 + \int |\nabla\mathbf{H}|^2 |(I_2-\mathbf{n}\otimes \mathbf{n} )\cdot \nabla u_\varepsilon ^j|^2+\int |\nabla \mathbf{u}_\varepsilon |^2 \min\(d^2_I ,1\). \end{align}
By \eqref{def:H}, we can choose $\eta_1=|\nabla\mathbf{H}|^2$ and use \eqref{tan est of Q2} to estimate the second integral in the last display. The other two terms can be controlled by $\varepsilon ^{-1} E_\varepsilon [ \mathbf{u}_\varepsilon | I]$ using \eqref{energy bound0} and \eqref{energy bound3} respectively. To summarize we deduce from \eqref{lower bound relative2}, \eqref{control is div2} and \eqref{tan est1} that \begin{align}\label{lower bound relative3}
2\varepsilon ^{-1} \frac{d}{d t} E_\varepsilon [ \mathbf{u}_\varepsilon | I] &+\int |\partial_t \mathbf{u}_\varepsilon +(\mathbf{H} \cdot\nabla) \mathbf{u}_\varepsilon |^2 \, dx + \int \left|\partial_t \mathbf{u}_\varepsilon -\Pi_{\mathbf{u}_\varepsilon }\partial_t \mathbf{u}_\varepsilon \right|^2\,dx \nonumber \\&\leqslant C \varepsilon ^{-1} E_\varepsilon [ \mathbf{u}_\varepsilon | I]-2\mu\int (\operatorname{div} \mathbf{u}_\varepsilon ) (\partial_j H^k-\partial_k H^j) \partial_k u_\varepsilon ^j.\end{align} It remains to estimate the last integral in \eqref{lower bound relative3}. By matrix decomposition \footnote{For a square matrix $A$, the decomposition $A=\frac{A+A^T}2+\frac{A-A^T}2$ is orthogonal under the Frobenius inner product $A:B\triangleq \operatorname{tr} A^T B$. } and \eqref{curl}, $$ (\partial_j H^k-\partial_k H^j) \partial_k u_\varepsilon ^j=-( \operatorname{rot} \mathbf{u}_\varepsilon ) \operatorname{rot} \mathbf{H}.$$
So integrating by parts and using \eqref{Ginzburg-Landau} and \eqref{2d operator}, we obtain \begin{align*} &~\mu\int (\operatorname{div} \mathbf{u}_\varepsilon ) (\partial_j H^k-\partial_k H^j) \partial_k u_\varepsilon ^j\\ =&-\mu\int (\operatorname{div} \mathbf{u}_\varepsilon ) ( \operatorname{rot} \mathbf{u}_\varepsilon ) \operatorname{rot} \mathbf{H}\\ \overset{\eqref{curl}}=&~\mu\int (\operatorname{div} \mathbf{u}_\varepsilon ) \mathbf{u}_\varepsilon \cdot\nabla^\bot\operatorname{rot} \mathbf{H}-\int \mu \nabla \operatorname{div} \mathbf{u}_\varepsilon \cdot \mathbf{u}_\varepsilon ^\bot \operatorname{rot} \mathbf{H}\\ \overset{\eqref{Ginzburg-Landau}}=&~\mu\int (\operatorname{div} \mathbf{u}_\varepsilon ) \mathbf{u}_\varepsilon \cdot\nabla^\bot\operatorname{rot} \mathbf{H} -\int (\partial_t \mathbf{u}_\varepsilon - \Delta \mathbf{u}_\varepsilon ) \cdot \mathbf{u}_\varepsilon ^\bot \operatorname{rot} \mathbf{H} \\ =& \mu\int (\operatorname{div} \mathbf{u}_\varepsilon ) \mathbf{u}_\varepsilon \cdot\nabla^\bot\operatorname{rot} \mathbf{H}-\int \Big(\partial_t \mathbf{u}_\varepsilon +(\mathbf{H} \cdot\nabla) \mathbf{u}_\varepsilon \Big) \cdot \mathbf{u}_\varepsilon ^\bot \operatorname{rot} \mathbf{H}\\ &+\int (\mathbf{H} \cdot\nabla) \mathbf{u}_\varepsilon \cdot \mathbf{u}_\varepsilon ^\bot \operatorname{rot} \mathbf{H}+\int \Delta \mathbf{u}_\varepsilon \cdot \mathbf{u}_\varepsilon ^\bot \operatorname{rot} \mathbf{H}. \end{align*} Submitting this identity into \eqref{lower bound relative3}, and using the Cauchy--Schwarz inequality, \eqref{L infinity bound1} and \eqref{energy bound0}, we find \begin{align}\label{lower bound relative4}
&2\varepsilon ^{-1} \frac{d}{d t} E_\varepsilon [ \mathbf{u}_\varepsilon | I] +\frac 12 \int |\partial_t \mathbf{u}_\varepsilon +(\mathbf{H} \cdot\nabla) \mathbf{u}_\varepsilon |^2 \, dx + \int \left|\partial_t \mathbf{u}_\varepsilon -\Pi_{\mathbf{u}_\varepsilon }\partial_t \mathbf{u}_\varepsilon \right|^2\,dx \nonumber \\
& \leqslant C\(1+ \varepsilon ^{-1} E_\varepsilon [ \mathbf{u}_\varepsilon | I]\)+\int (\mathbf{H} \cdot\nabla) \mathbf{u}_\varepsilon \cdot \mathbf{u}_\varepsilon ^\bot \operatorname{rot} \mathbf{H}+\int \Delta \mathbf{u}_\varepsilon \cdot \mathbf{u}_\varepsilon ^\bot \operatorname{rot} \mathbf{H} \nonumber\\
& = C\(1+ \varepsilon ^{-1} E_\varepsilon [ \mathbf{u}_\varepsilon | I]\)+\int H_k \(\partial_k \mathbf{u}_\varepsilon -\Pi_{\mathbf{u}_\varepsilon }\partial_k \mathbf{u}_\varepsilon \) \cdot \mathbf{u}_\varepsilon ^\bot \operatorname{rot} \mathbf{H}\nonumber\\
&\qquad\qquad\qquad\qquad -\int (\partial_k \operatorname{rot} \mathbf{H})\(\partial_k \mathbf{u}_\varepsilon -\Pi_{\mathbf{u}_\varepsilon } \partial_k \mathbf{u}_\varepsilon \) \cdot \mathbf{u}_\varepsilon ^\bot.
\end{align}
Note that in the last step we used integration by parts, the equation $\Pi_{\mathbf{u}_\varepsilon }\partial_k \mathbf{u}_\varepsilon \cdot \mathbf{u}_\varepsilon ^\bot =0$,
which follows from \eqref{projection1}, and $ \partial_k \mathbf{u}_\varepsilon \cdot \partial_k \mathbf{u}_\varepsilon ^\bot=0$. Finally applying the Cauchy--Schwarz inequality and then \eqref{energy bound0} and \eqref{L infinity bound1} to the last two integrals of \eqref{lower bound relative4} yields
\begin{align}
2\varepsilon ^{-1} &\frac{d}{d t} E_\varepsilon [ \mathbf{u}_\varepsilon | I] +\frac 12 \int |\partial_t \mathbf{u}_\varepsilon +(\mathbf{H} \cdot\nabla) \mathbf{u}_\varepsilon |^2 \, dx + \int \left|\partial_t \mathbf{u}_\varepsilon -\Pi_{\mathbf{u}_\varepsilon }\partial_t \mathbf{u}_\varepsilon \right|^2\,dx \nonumber \\
&\lesssim 1+ \varepsilon ^{-1} E_\varepsilon [ \mathbf{u}_\varepsilon | I]. \end{align} This combined with \eqref{initial} and Gr\"{o}nwall's inequality leads to \eqref{energy bound4}. \end{proof}
Using \eqref{energy bound3} and \eqref{energy bound4}, we immediately obtain the following:
\begin{coro}\label{coro space-time der bound}
There exists a generic constant $C>0$ such that
\begin{subequations}
\begin{align}
\sup_{t\in [0,T]}\int_{\Omega^\pm_t\backslash I_t(\delta)}\(|\nabla \mathbf{u}_\varepsilon |^2+\frac 1{\varepsilon ^2}
F(\mathbf{u}_\varepsilon )+\frac 1\varepsilon |\nabla\psi_\varepsilon | \)\, dx & \leqslant C\delta^{-2} ,\label{space der bound local}\\
\int_0^T\int_{\Omega^\pm_t\backslash I_t(\delta)} |\partial_t \mathbf{u}_\varepsilon |^2\, dx dt &\leqslant C\delta^{-2},\label{time der bound local} \end{align}
\end{subequations}
holds for each fixed $\delta\in (0, 4\delta_0)$.
\end{coro}
Indeed, the estimate of $\nabla \mathbf{u}_\varepsilon $ in \eqref{space der bound local} together with the bound on $\partial_t \mathbf{u}_\varepsilon +(\mathbf{H} \cdot\nabla) \mathbf{u}_\varepsilon $ in \eqref{energy bound4} leads to \eqref{time der bound local}. Another consequence of \eqref{energy bound4} is the following lemma concerning the unit vector field \begin{align}\label{def uhat} \widehat{\mathbf{u}}_\varepsilon =\begin{cases}
\frac{\mathbf{u}_\varepsilon }{|\mathbf{u}_\varepsilon |}&\text{ if } \mathbf{u}_\varepsilon \neq 0,\\ 0& \text{otherwise}. \end{cases} \end{align}
\begin{lemma} We have the following estimates for $\widehat{\mathbf{u}}_\varepsilon $: \begin{subequations} \begin{align}
&\sup_{t\in [0,T]}\int |\mathbf{u}_\varepsilon |^2 \left| \nabla \widehat{\mathbf{u}}_\varepsilon \right|^2\, dx+ \sup_{t\in [0,T]}\int \left| \widehat{\mathbf{u}}_\varepsilon \cdot \nabla |\mathbf{u}_\varepsilon | \right|^2\, dx\leqslant C \label{bound degree+orien},\\
&\sup_{t\in [0,T]}\int \(\widehat{\mathbf{u}}_\varepsilon \cdot \mathbf{n}_\varepsilon \)^2 |\nabla \psi_\varepsilon |\, dx\leqslant C \varepsilon . \label{cos law} \end{align} \end{subequations}
\end{lemma} \begin{proof} We first deduce from \eqref{energy bound4} and \eqref{energy bound0} that
\[\sup_{t\in [0,T]} \int \( \mu |\operatorname{div} \mathbf{u}_\varepsilon |^2+\left|\nabla \mathbf{u}_\varepsilon -\Pi_{\mathbf{u}_\varepsilon }\nabla \mathbf{u}_\varepsilon \right|^2 \)\, d x\leqslant C.\label{energy bound7}\end{equation}
By \eqref{def uhat} we can write $\mathbf{u}_\varepsilon =|\mathbf{u}_\varepsilon |\widehat{\mathbf{u}}_\varepsilon $ and using \eqref{projection1}, we have the formula
\[\nabla \mathbf{u}_\varepsilon -\Pi_{\mathbf{u}_\varepsilon }\nabla \mathbf{u}_\varepsilon =|\mathbf{u}_\varepsilon | \nabla\widehat{\mathbf{u}}_\varepsilon ~ \text{ if }\mathbf{u}_\varepsilon \neq 0.\end{equation} Substituting this formula into \eqref{energy bound7} yields the estimate of the first term in \eqref{bound degree+orien}. To obtain the second one, we use the following formula which follows from \eqref{projection1}:
\[\operatorname{tr} \nabla \mathbf{u}_\varepsilon -\operatorname{tr} \(\Pi_{\mathbf{u}_\varepsilon }\nabla \mathbf{u}_\varepsilon \)=\operatorname{div} \mathbf{u}_\varepsilon -\widehat{\mathbf{u}}_\varepsilon \cdot \nabla |\mathbf{u}_\varepsilon |~ \text{ if }\mathbf{u}_\varepsilon \neq 0.\end{equation}
Note that on the set $\{ x\mid \mathbf{u}_\varepsilon =0\}$, there holds $\nabla |\mathbf{u}_\varepsilon |=0$ a.e., and thus the above formula still holds. This together with the inequality $|\operatorname{tr} A |^2\leqslant 2 |A|^2$ for any $A\in \mathbb{R}^{2\times 2}$ and \eqref{energy bound7} yields the estimate of $\widehat{\mathbf{u}}_\varepsilon \cdot \nabla |\mathbf{u}_\varepsilon |$ and \eqref{bound degree+orien} is proved.
Regarding \eqref{cos law}, it suffices to estimate over the set $\{ x\mid \nabla\psi_\varepsilon \neq 0\}$ because the integral over the complement vanishes. By \eqref{normal diff}, we have $\mathbf{n}_\varepsilon =\frac{\nabla |\mathbf{u}_\varepsilon |}{|\nabla |\mathbf{u}_\varepsilon ||}$. Denoting $\widehat{\mathbf{u}}_\varepsilon \cdot \mathbf{n}_\varepsilon =\cos\theta_\varepsilon $, we obtain for any $t\in[0,T]$ that
\[\mu \int \cos^2 \theta_\varepsilon \big|\nabla |\mathbf{u}_\varepsilon |\big|^2\, dx=\mu \int \left| \widehat{\mathbf{u}}_\varepsilon \cdot \mathbf{n}_\varepsilon \right|^2 \big|\nabla |\mathbf{u}_\varepsilon |\big|^2\, dx\overset{\eqref{bound degree+orien}}\leqslant C(\mu). \end{equation}
This combined with \eqref{energy bound-1} and the obvious inequality $\big|\nabla |\mathbf{u}_\varepsilon |\big|\leqslant |\nabla \mathbf{u}_\varepsilon |$ yields \begin{align*}
C(\mu) &\geqslant \int \frac \mu 2 \cos^2 \theta_\varepsilon \big|\nabla |\mathbf{u}_\varepsilon |\big|^2\, dx+\int \(\frac{1}{2} \Big|\nabla |\mathbf{u}_\varepsilon | \Big|^2+\frac{1}{\varepsilon ^2 } F (\mathbf{u}_\varepsilon )-\frac 1{\varepsilon }|\nabla \psi_\varepsilon | \) \, d x \\
&\geqslant \frac 1{\varepsilon } \int \(\sqrt{ \mu\cos^2 \theta_\varepsilon +1} \,\,\Big|\nabla |\mathbf{u}_\varepsilon |\Big|\sqrt{2F (\mathbf{u}_\varepsilon )}- |\nabla \psi_\varepsilon | \) \, d x \\
&= \frac 1{\varepsilon } \int \(\sqrt{\mu \cos^2 \theta_\varepsilon +1} -1\) |\nabla \psi_\varepsilon | \, d x \qquad\forall t\in [0,T]. \end{align*}
Note that in the last step we used $\big|\nabla |\mathbf{u}_\varepsilon |\big|\sqrt{2F (\mathbf{u}_\varepsilon )}=|\nabla\psi_\varepsilon |$. Since multiplying by the conjugate gives
\begin{align*}
\sqrt{\mu \cos^2 \theta_\varepsilon +1} -1=\frac{\mu \cos^2 \theta_\varepsilon }{\sqrt{\mu \cos^2 \theta_\varepsilon +1} +1}\geqslant \frac{\mu }{\sqrt{\mu +1} +1}\,\cos^2 \theta_\varepsilon ,
\end{align*}
the estimate \eqref{cos law} follows. \end{proof}
\section{Estimates of level sets}\label{sec level} The main result of this section is the following $L^1$-estimate of $\psi_\varepsilon $.
\begin{theorem}\label{sharp l1} Under the assumptions of Theorem \ref{main thm}, there exists $C>0$ independent of $\varepsilon $ so that
\[\sup_{t\in [0,T]}\int_{\Omega}| \psi_\varepsilon -\mathbf{1}_{\Omega_t^+}| \, dx \leqslant C\varepsilon ^{1/3}.\label{volume convergence}\end{equation} \end{theorem}
\begin{proof}[Proof of Theorem \ref{sharp l1}] Our goal is to estimate $2\psi_\varepsilon -1-\chi$ where $\chi(x,t)=\pm 1$ in $\Omega_t^\pm$.
{\it Step 1: introducing two weighted errors.}
In the proof, we shall denote the positive and negative parts of a function $h$ by $h^+$ and $h^-$ respectively. By \cite[pp. 153]{MR3409135}, for any $h\in W^{1,1}(U)$, we have \[\partial_i (h(x))^+= (\partial_i h(x)) \mathbf{1}_{\{x:h(x)>0\}}(x)\quad \text{for } a.e.~~x\in U.\label{der positive part}\end{equation}
Let $\eta(\cdot)$ be a truncation of the identity map \[\eta(s)=\left\{ \begin{array}{rl} s\qquad \text{when } &s \in [-\delta_0, \delta_0],\\
\delta_0\qquad \text{when } &s\geqslant \delta_0,\\
-\delta_0\qquad \text{when } &s\leqslant -\delta_0. \end{array} \right.\label{truncation eta}\end{equation}
and $ |\eta|(s)=:\zeta(s)$. Using the formula $h=h^+-h^-$, we can write
\[2\psi_\varepsilon -1=2(\psi_\varepsilon -1)^++ \(1 -2(\psi_\varepsilon -1)^-\), \label{psi deco}\end{equation}
and we shall estimate its difference with $\chi$. This will be done by
establishing differential inequalities of the following energies \begin{subequations} \begin{align} g_\varepsilon (t):=&\int ( \psi_\varepsilon -1)^+\zeta(d_I) \, dx,\label{gronwall1}\\ h_\varepsilon (t):=&\int \Big(\chi-[1- 2(\psi_\varepsilon -1)^-] \Big)\eta(d_I) \, dx.\label{gronwall2} \end{align} \end{subequations} The integrand of \eqref{gronwall1} is non-negative. As $\psi_\varepsilon \geqslant 0$, we have $(\psi_\varepsilon -1)^-\in [0,1]$ and thus $[1 -2(\psi_\varepsilon -1)^-]$ ranges in $[-1,1]$.
Using $\eta \chi=|\eta|$, we deduce that the integrand of \eqref{gronwall2} is also non-negative and
\[h_\varepsilon (t)=\int \Big|1 -2(\psi_\varepsilon -1)^--\chi\Big| \zeta(d_I).\label{gronwall4}\end{equation}
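In detail, since $\chi^2=1$, $\eta(d_I)\chi=\zeta(d_I)$ and $\big|\chi\,[1 -2(\psi_\varepsilon -1)^-]\big|\leqslant 1$, one has pointwise
\begin{align*}
\Big(\chi-\big[1- 2(\psi_\varepsilon -1)^-\big] \Big)\eta(d_I)
=\Big(1-\chi\big[1- 2(\psi_\varepsilon -1)^-\big] \Big)\zeta(d_I)
=\Big|\chi-\big[1- 2(\psi_\varepsilon -1)^-\big] \Big|\zeta(d_I)\geqslant 0,
\end{align*}
which is precisely the non-negativity of the integrand of \eqref{gronwall2} together with the identity \eqref{gronwall4}.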
Concerning their initial data, it follows from \eqref{normalization} and \eqref{initial assupms} that $0\leqslant \psi_\varepsilon (x,0)\leqslant 1$ a.e. in $\Omega$. This implies $g_\varepsilon (0)=0$. Moreover, by \eqref{initial out} we have $\psi_\varepsilon (x,0)\equiv 1$ in $\Omega^+_0\backslash I_0(\delta_0)$ and $\equiv 0$ in $\Omega^-_0\backslash I_0(\delta_0)$. Altogether, we obtain
\begin{align}\label{gronwallinitial1}
h_\varepsilon (0)
=\int_{I_0(\delta_0)} \Big(\chi(x,0)+1- 2 \psi_\varepsilon ^{in}(x) \Big) d_{I_0}(x) \, dx\overset{\eqref{initial}}\leqslant 2c_1\varepsilon .
\end{align}
{\it Step 2: estimates of weighted errors.}
Using \eqref{psi} and \eqref{bulk}, we have \begin{align}\label{volume evo1} \partial_t \psi_\varepsilon
=&(\partial_t \mathbf{u}_\varepsilon +(\mathbf{H}\cdot\nabla)\mathbf{u}_\varepsilon )\cdot\frac{\mathbf{u}_\varepsilon }{|\mathbf{u}_\varepsilon |} \sqrt{2F(\mathbf{u}_\varepsilon )}-\mathbf{H}\cdot\nabla\psi_\varepsilon . \end{align} So we can calculate the evolution of \eqref{gronwall1} by \begin{align*} g_\varepsilon '(t)
\overset{\eqref{der positive part}}=&\int_{\{ \psi_\varepsilon > 1\}} (\partial_t \mathbf{u}_\varepsilon +(\mathbf{H}\cdot\nabla)\mathbf{u}_\varepsilon )\cdot\frac{\mathbf{u}_\varepsilon }{|\mathbf{u}_\varepsilon |} \sqrt{2F(\mathbf{u}_\varepsilon )}\zeta(d_I) \\ &-\int_{\{ \psi_\varepsilon > 1\}} \mathbf{H}\cdot \nabla\psi_\varepsilon \zeta(d_I) +\int ( \psi_\varepsilon -1)^+ \partial_t \zeta(d_I) \\
\overset{\eqref{der positive part} }{=}&\int_{\{ \psi_\varepsilon > 1\}} (\partial_t \mathbf{u}_\varepsilon +(\mathbf{H}\cdot\nabla)\mathbf{u}_\varepsilon )\cdot\frac{\mathbf{u}_\varepsilon }{|\mathbf{u}_\varepsilon |} \sqrt{2F(\mathbf{u}_\varepsilon )}\zeta(d_I) \\ &-\int \mathbf{H}\cdot \nabla ( \psi_\varepsilon -1)^+ \zeta(d_I) -\int ( \psi_\varepsilon -1)^+ \mathbf{H}\cdot\nabla \zeta(d_I) \\ &+\int \Big(\partial_t \zeta(d_I)+\mathbf{H}\cdot\nabla \zeta(d_I)\Big) ( \psi_\varepsilon -1)^+.\end{align*} We combine the second and the third integrals in the last step via integration by parts and find
\begin{align*} g_\varepsilon '(t)
=&\int_{\{ \psi_\varepsilon > 1\}} (\partial_t \mathbf{u}_\varepsilon +(\mathbf{H}\cdot\nabla)\mathbf{u}_\varepsilon )\cdot\frac{\mathbf{u}_\varepsilon }{|\mathbf{u}_\varepsilon |} \sqrt{2F(\mathbf{u}_\varepsilon )}\zeta(d_I) \\ &+\int (\operatorname{div} \mathbf{H}) ( \psi_\varepsilon -1)^+ \zeta(d_I) +\int \Big(\partial_t \zeta(d_I)+\mathbf{H}\cdot\nabla \zeta(d_I)\Big) ( \psi_\varepsilon -1)^+\nonumber\\
\overset{\eqref{mcf}}\leqslant & \int \varepsilon \Big|\partial_t \mathbf{u}_\varepsilon +(\mathbf{H}\cdot\nabla)\mathbf{u}_\varepsilon \Big|^2+ \int \frac 1{\varepsilon }{F(\mathbf{u}_\varepsilon )}\zeta^2(d_I) +Cg_\varepsilon (t) \\
\overset{\eqref{energy bound3}}\leqslant & \int \varepsilon \Big|\partial_t \mathbf{u}_\varepsilon +(\mathbf{H}\cdot\nabla)\mathbf{u}_\varepsilon \Big|^2+C E_\varepsilon [\mathbf{u}_\varepsilon |I] +Cg_\varepsilon (t). \end{align*} Using $g_\varepsilon (0)=0$ and \eqref{energy bound4}, we can apply the Gr\"{o}nwall lemma and obtain $g_\varepsilon (t)\leqslant C \varepsilon $ for some $C$ which is independent of $\varepsilon $ and $t\in [0,T]$. Concerning $h_\varepsilon $, for simplicity we denote $\chi-[1- 2(\psi_\varepsilon -1)^-]=:w_\varepsilon $. Using $\partial_i \chi\eta (d_I)\equiv 0$ (as a distribution), we find
\[\partial_i w_\varepsilon \eta(d_I)=2\partial_i \psi_\varepsilon \mathbf{1}_{\{\psi_\varepsilon < 1\}}\eta(d_I).\end{equation} So by the same calculation for $g_\varepsilon $ we obtain \begin{align*} h_\varepsilon '(t)
=&\int_{\{ \psi_\varepsilon < 1\}} 2(\partial_t \mathbf{u}_\varepsilon +(\mathbf{H}\cdot\nabla)\mathbf{u}_\varepsilon )\cdot\frac{\mathbf{u}_\varepsilon }{|\mathbf{u}_\varepsilon |} \sqrt{2F(\mathbf{u}_\varepsilon )}\eta(d_I) \\ &+\int (\operatorname{div} \mathbf{H})w_\varepsilon \eta(d_I) +\int \Big(\partial_t \eta(d_I)+\mathbf{H}\cdot\nabla \eta(d_I)\Big) w_\varepsilon \\
\leqslant & \int \varepsilon \Big|\partial_t \mathbf{u}_\varepsilon +(\mathbf{H}\cdot\nabla)\mathbf{u}_\varepsilon \Big|^2+C E_\varepsilon [\mathbf{u}_\varepsilon |I] +Ch_\varepsilon (t).\end{align*} Using \eqref{gronwallinitial1} and \eqref{energy bound4}, we can apply the Gr\"{o}nwall lemma and obtain $h_\varepsilon (t)\leqslant C\varepsilon $. Finally, \begin{align}
&\int |2\psi_\varepsilon -1-\chi|\zeta(d_I)\nonumber\\
& \overset{\eqref{psi deco}}\leqslant \int 2(\psi_\varepsilon -1)^+\zeta(d_I) + \int \Big|1 -2(\psi_\varepsilon -1)^--\chi\Big| \zeta(d_I)\nonumber\\
&\overset{\eqref{gronwall4}}= 2g_\varepsilon +h_\varepsilon \leqslant C\varepsilon .\label{gronwall3} \end{align}
{\it Step 3: Remove the weight.}
First note that \eqref{gronwall3} implies $\eqref{volume convergence}$ with $\Omega$ being replaced by $\Omega\backslash I_t(\delta_0)$. So we shall focus on the estimate in $I_t(\delta_0)$. In the sequel, we denote $\chi_\varepsilon =2\psi_\varepsilon -1$ and $\delta_0=\delta$.
For fixed $t\in [0,T]$ and $p\in I_t$ with normal vector $\mathbf{n}$, applying Lemma \ref{Lemma cubic est} below with $f(r,p,t)=\left|\chi\left(p + r \mathbf{n}, t\right)-\chi_\varepsilon (p+ r \mathbf{n}, t )\right|$, we find \begin{align*}
& \(\int_{I_t (\delta)}\left|\chi(x,t)-\chi_\varepsilon (x, t)\right| \, dx\)^{3/2} \\ =& \( \int_{I_t } \int_{-\delta}^{\delta}f(r,p,t) \, dr \, d S(p)\)^{3/2} \\ \overset{\text{H\"{o}lder}}\lesssim& \int_{I_t } \( \int_{-\delta}^{\delta}f(r,p,t) \, dr \)^{3/2} d S(p) \\
\overset{\eqref{cubic est}}\lesssim& \int_{I_t }\|f(\cdot,p,t)\|_{L^2(-\delta,\delta)}\sqrt{\int_{-\delta}^{\delta}f(r,p,t) |r| \, dr} \, dS(p) \\
\overset{\text{H\"{o}lder}}\lesssim& \|f(\cdot,t)\|_{L^2(I_t(\delta))}\sqrt{\int_{I_t }\int_{-\delta}^{\delta}f(r,p,t) |r| \, dr dS(p)}. \end{align*}
In view of \eqref{psi} and \eqref{bc of omega} we have $\psi_\varepsilon |_{\partial\Omega}=0$. So by Sobolev's embedding, \begin{align*}
& \(\int_{I_t (\delta)}\left|\chi(x,t)-\chi_\varepsilon (x, t)\right| \, dx\)^3 \\
\lesssim \,& \, \( \|\chi\|^2_{L^2(\Omega)}+\|\chi_\varepsilon \|^2_{L^2(\Omega)}\) \int_{\Omega} |\chi_\varepsilon -\chi | \zeta(d_I) \, dx \\
\lesssim \, & \, (1+ \|\nabla \psi_\varepsilon \|^2_{L^1(\Omega)}) \int_{\Omega} |\chi_\varepsilon -\chi | \zeta(d_I) \, dx\overset{\eqref{gronwall3}}\lesssim \varepsilon . \end{align*} This gives the desired estimate in $I_t(\delta_0)$ and thus the proof of \eqref{volume convergence} is finished. \end{proof} \begin{lemma}\label{Lemma cubic est} For any integrable function $f:[-\delta,\delta]\to \mathbb{R}^+\cup \{0\}$, we have
\[\(\int_{-\delta}^\delta f(r)\, dr \)^3 \leqslant C \|f\|^{2}_{L^2(-\delta,\delta)} \int_{-\delta}^\delta |r| f(r)\, dr.\label{cubic est} \end{equation} \end{lemma} \begin{proof}
By symmetry and the Cauchy--Schwarz inequality, \begin{align*}
\| f\|^4_{L^1(0,\delta)} &= \int_{[0,\delta]^4} f(x_1)f(x_2)f(y_1)f(y_2)\, dx_1dx_2dy_1dy_2\\ &=2\int_{[0,\delta]^4\cap \{x_1+x_2\leqslant y_1+y_2\}} f(x_1)f(x_2)f(y_1)f(y_2)\, dx_1dx_2dy_1dy_2\\ &=2 \int_{[0,\delta]^2} \(\int_{[0,\delta]^2\cap \{x_1+x_2\leqslant y_1+y_2\}} 1\cdot f(x_1)f(x_2)\, dx_1dx_2 \)f(y_1)f(y_2)dy_1dy_2\\ &\leqslant C \int_{[0,\delta]^2} (y_1+y_2)\sqrt{ \int_{[0,\delta]^2} f^2(x_1)f^2(x_2)\, dx_1dx_2 }f(y_1)f(y_2)dy_1dy_2\\
&= C\| f\|^2_{L^2(0,\delta)}\| f\|_{L^1(0,\delta)} \int_0^\delta r f(r)\, dr. \end{align*}
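For the reader's convenience, we record the remaining routine steps. In the fourth line above, the Cauchy--Schwarz inequality was applied on the set $[0,\delta]^2\cap \{x_1+x_2\leqslant y_1+y_2\}$, whose area is at most $(y_1+y_2)^2/2$. Dividing the resulting estimate by $\| f\|_{L^1(0,\delta)}$ (the case $\| f\|_{L^1(0,\delta)}=0$ being trivial) gives
\begin{align*}
\| f\|^3_{L^1(0,\delta)}\leqslant C\,\| f\|^2_{L^2(0,\delta)}\int_0^\delta r f(r)\, dr ,
\end{align*}
and the same bound holds on $(-\delta,0)$ after the reflection $r\mapsto -r$. Since $(a+b)^3\leqslant 4(a^3+b^3)$ for $a,b\geqslant 0$, summing the two one-sided estimates yields \eqref{cubic est}.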
\end{proof}
In order to study the level sets of $\psi_\varepsilon $, we need the following lemma.
\begin{lemma}\label{sharp est of bdy} Let $t\in [0,T]$ be fixed. For any $\delta\in (0,1/8)$, there exists $b=b_{\varepsilon ,\delta}\in [\frac 1 2-\delta,\frac 1 2+\delta]$ s.t. the set $\{x:\psi_\varepsilon (x,t) >b\}$ has finite perimeter and
\[\left|\mathcal{H}^1(\{x: \psi_\varepsilon (x,t) =b\})-\mathcal{H}^1 (I_t)\right|\leqslant C \varepsilon ^{1/6}\delta^{-1} \end{equation}
where $C>0$ is independent of $t, \varepsilon $ and $\delta$. \end{lemma}
\begin{proof} We consider the set
\[S_t^{\varepsilon ,\delta} =\{x\in \Omega: |2\psi_\varepsilon (x,t)- 1|\leqslant 2\delta \}, \qquad \forall \delta \in (0,1/8).\label{local set}\end{equation} It follows from the co-area formula of BV function \cite[section 5.5]{MR3409135} that $ S_t^{\varepsilon ,\delta}$ has finite perimeter for almost every $\delta$. Moreover, by \eqref{energy bound4} and \eqref{energy bound1}, we have for almost every $\delta \in (0,1/8)$ that \begin{align*}
C\varepsilon & \geqslant \int_{S_t^{\varepsilon ,\delta}} \(|\nabla\psi_\varepsilon |- \boldsymbol{\xi } \cdot \nabla\psi_\varepsilon \)\, dx\qquad (\geqslant 0)\\ &=\int_{\frac 12 -\delta}^{\frac 12 +\delta} \mathcal{H}^1\(\{x:\psi_\varepsilon =s\}\)\, ds-\int_{\partial S_t^{\varepsilon ,\delta}} \boldsymbol{\xi } \cdot \nu \psi_\varepsilon \, d\mathcal{H}^1+\int_{S_t^{\varepsilon ,\delta}} (\operatorname{div} \boldsymbol{\xi } ) \psi_\varepsilon \, dx, \end{align*} where $\nu$ is the outward unit normal of the set $ S_t^{\varepsilon ,\delta}$, defined on its (measure-theoretic) boundaries. By \eqref{nablapsiest} and Sobolev's embedding, we can control
$$\left|\int_{S_t^{\varepsilon ,\delta}} (\operatorname{div} \boldsymbol{\xi } ) \psi_\varepsilon \, dx\right|\leqslant |S_t^{\varepsilon ,\delta}|^{\frac 12 } \|\nabla\psi_\varepsilon \|_{L^1(\Omega)}\overset{\eqref{nablapsiest}}\lesssim |S_t^{\varepsilon ,\delta}|^{\frac 12 }.$$
Here $|A|=\mathcal{L}^2 (A)$ is the 2D Lebesgue measure of a set $A$. So we have
\[\left|\int_{\frac 12 -\delta}^{\frac 12 +\delta} \mathcal{H}^1\(\{x:\psi_\varepsilon =s\}\)\, ds- \int_{\partial S_t^{\varepsilon ,\delta}} \boldsymbol{\xi } \cdot \nu \psi_\varepsilon \, d\mathcal{H}^1\right|\lesssim \varepsilon + |S_t^{\varepsilon ,\delta}|^{\frac 12 }.\label{volume est1}\end{equation} By the divergence theorem and the orientation of $S_t^{\varepsilon ,\delta}$, \begin{align*} \int_{\partial S_t^{\varepsilon ,\delta}} \boldsymbol{\xi } \cdot \nu \psi_\varepsilon \, d\mathcal{H}^1&\overset{\eqref{local set}}= -\(\frac 12-\delta \) \int_{\{x: \psi_\varepsilon < \frac 12-\delta\}} \operatorname{div} \boldsymbol{\xi } \, dx- \(\frac 12 +\delta \)\int_{\{x: \psi_\varepsilon > \frac 12+\delta\}} \operatorname{div} \boldsymbol{\xi } \, dx,\\ -2\delta \mathcal{H}^1 (I_t)&\overset{\eqref{def:xi}}=\quad \(\frac 12-\delta \)\int_{\Omega_t^-}\operatorname{div} \boldsymbol{\xi } \, dx\qquad\quad +\(\frac 12+\delta \) \int_{\Omega_t^+}\operatorname{div} \boldsymbol{\xi } \, dx. \end{align*} Adding the above two equations to \eqref{volume est1}, we find \begin{align}\label{volume est3}
&\left| \int_{\frac 12 -\delta}^{\frac 12 +\delta} \mathcal{H}^1\(\{x:\psi_\varepsilon =s\}\)\, ds-2\delta \mathcal{H}^1 (I_t)\right|\nonumber\\
&\lesssim \varepsilon + |S_t^{\varepsilon ,\delta}|^{1/2}+ | \Omega_t^- \triangle {\{x: \psi_\varepsilon <\tfrac 1 2-\delta\}} |+ | \Omega_t^+\triangle {\{x: \psi_\varepsilon >\tfrac1 2+\delta\}}| , \end{align} where $A\triangle B=(A-B)\cup (B-A)$ denotes the symmetric difference of two sets $A,B$. To estimate the last two terms, we compute \begin{align*}
r_\varepsilon ^+\triangleq&~| \Omega_t^+- {\{x: \psi_\varepsilon >\tfrac1 2+\delta\}}|+| {\{x: \psi_\varepsilon >\tfrac1 2+\delta\}}- \Omega_t^+|\\
=&~| \Omega_t^+- {\{x\in \Omega_t^+: \psi_\varepsilon >\tfrac1 2+\delta\}}- {\{x\in \Omega_t^-: \psi_\varepsilon >\tfrac1 2+\delta\}}|\\
& +| {\{x\in \Omega_t^-: \psi_\varepsilon >\tfrac1 2+\delta\}}|\\
\leqslant &~| {\{x\in \Omega_t^+: \psi_\varepsilon \leqslant \tfrac1 2+\delta\}}|+2| {\{x\in \Omega_t^-: \psi_\varepsilon >\tfrac1 2+\delta\}}|. \end{align*}
Since $|\psi_\varepsilon -1|\geqslant \tfrac 12-\delta\geqslant \tfrac 38$ on $\{x\in \Omega_t^+: \psi_\varepsilon \leqslant \tfrac1 2+\delta\}$ and $\psi_\varepsilon \geqslant \tfrac 12+\delta\geqslant \tfrac 12$ on $\{x\in \Omega_t^-: \psi_\varepsilon >\tfrac1 2+\delta\}$, Chebyshev's inequality and \eqref{volume convergence} yield $r_\varepsilon ^+\lesssim \varepsilon ^{1/3} $. Similar estimates apply to $ |S_t^{\varepsilon ,\delta}|$ and $r_\varepsilon ^-=| \Omega_t^- \triangle {\{x: \psi_\varepsilon <\tfrac 1 2-\delta\}} |$. Substituting these estimates into \eqref{volume est3} yields \begin{align}\label{volume est4}
&\left| \frac 1{ 2\delta} \int_{\frac 12 -\delta}^{\frac 12 +\delta} \mathcal{H}^1\(\{x:\psi_\varepsilon =s\}\)\, ds- \mathcal{H}^1 (I_t)\right|\leqslant C\varepsilon ^{1/6}\delta^{-1}. \end{align} So the desired result follows from Fubini's theorem. \end{proof}
\begin{lemma}\label{sharp est of bdy1} Let $t\in [0,T]$ be fixed. For any $\delta\in (0,1/8)$, there exists $q=q_{\varepsilon ,\delta}\in [ 2-\delta, 2+\delta]$ s.t. the set $\{x:\psi_\varepsilon (x,t) <q \}$ has finite perimeter and
\[ \mathcal{H}^1(\{x: \psi_\varepsilon (x,t) =q\}) \leqslant C \varepsilon ^{1/6}\delta^{-1} \end{equation}
where $C>0$ is independent of $t, \varepsilon $ and $\delta$. \end{lemma} \begin{proof} We consider the set
\[Q_t^{\varepsilon ,\delta} =\{x\in \Omega: |\psi_\varepsilon (x,t)- 2|\leqslant \delta \}, \qquad \forall \delta \in (0,1/8).\label{local set1}\end{equation}
Using \eqref{energy bound4}, \eqref{energy bound1} and the co-area formula, we have for almost every $\delta \in (0,1/8)$ that \begin{align*}
C\varepsilon & \geqslant \int_{Q_t^{\varepsilon ,\delta}} \(|\nabla\psi_\varepsilon |- \boldsymbol{\xi } \cdot \nabla\psi_\varepsilon \)\, dx\qquad (\geqslant 0)\\ &=\int_{ 2 -\delta}^{ 2 +\delta} \mathcal{H}^1\(\{x:\psi_\varepsilon =s\}\)\, ds-\int_{\partial Q_t^{\varepsilon ,\delta}} \boldsymbol{\xi } \cdot \nu \psi_\varepsilon \, d\mathcal{H}^1+\int_{Q_t^{\varepsilon ,\delta}} (\operatorname{div} \boldsymbol{\xi } ) \psi_\varepsilon \, dx, \end{align*} where $\nu$ is the outward unit normal of the set $Q_t^{\varepsilon ,\delta}$. By \eqref{nablapsiest} and Sobolev's embedding,
$$\left|\int_{Q_t^{\varepsilon ,\delta}} (\operatorname{div} \boldsymbol{\xi } ) \psi_\varepsilon \, dx\right|\leqslant |Q_t^{\varepsilon ,\delta}|^{\frac 12 } \|\nabla\psi_\varepsilon \|_{L^1(\Omega)}\overset{\eqref{nablapsiest}}\lesssim |Q_t^{\varepsilon ,\delta}|^{\frac 12 }.$$ So we have
\[\int_{ 2 -\delta}^{ 2 +\delta} \mathcal{H}^1\(\{x:\psi_\varepsilon =s\}\)\, ds\leqslant \left| \int_{\partial Q_t^{\varepsilon ,\delta}} \boldsymbol{\xi } \cdot \nu \psi_\varepsilon \, d\mathcal{H}^1\right|+C \varepsilon + C |Q_t^{\varepsilon ,\delta}|^{\frac 12 }.\label{volume est1new}\end{equation} Using \eqref{bc n and H}, we have $\int_{\Omega}\operatorname{div} \boldsymbol{\xi }\, dx=0$, and thus \begin{align*} \int_{\partial Q_t^{\varepsilon ,\delta}} \boldsymbol{\xi } \cdot \nu \psi_\varepsilon \, d\mathcal{H}^1&\overset{\eqref{local set1}}= \( 2-\delta \) \int_{\{x: \psi_\varepsilon \geqslant 2-\delta\}} \operatorname{div} \boldsymbol{\xi } \, dx+ \( 2 +\delta \)\int_{\{x: \psi_\varepsilon \leqslant 2+\delta\}} \operatorname{div} \boldsymbol{\xi } \, dx\\
&= \( 2-\delta \) \int_{\{x: \psi_\varepsilon \geqslant 2-\delta\}} \operatorname{div} \boldsymbol{\xi } \, dx- \( 2 +\delta \)\int_{\{x: \psi_\varepsilon > 2+\delta\}} \operatorname{div} \boldsymbol{\xi } \, dx. \end{align*} By Chebyshev's inequality and \eqref{volume convergence}, we deduce
$$|Q_t^{\varepsilon ,\delta}|^{\frac 12 }+\left| \int_{\partial Q_t^{\varepsilon ,\delta}} \boldsymbol{\xi } \cdot \nu \psi_\varepsilon \, d\mathcal{H}^1\right|\lesssim \varepsilon ^{1/6}. $$ This combined with \eqref{volume est1new} and Fubini's theorem leads to the desired estimate. \end{proof}
\section{Proof of Theorem \ref{main thm}: Anchoring boundary condition}\label{anchoring}
The following result was first proved in \cite{MR4284534} for a liquid crystal model. To state it we use
$\wedge$ for the wedge product in $\mathbb{R}^2$: $\mathbf{a}\wedge \mathbf{b}=\mathbf{a}^\bot\cdot \mathbf{b}$ where $\mathbf{a}^\bot=(-a_2,a_1)$. \begin{prop}\label{global control prop} There exists a sequence of $\varepsilon _k\downarrow 0$ such that $\mathbf{u}_k:=\mathbf{u}_{\varepsilon _k}$ satisfy \begin{subequations}\label{global control} \begin{align}\label{global control1}
\partial_t \mathbf{u}_k\wedge\mathbf{u}_k \xrightarrow{k\to \infty}& g_0(x,t)~\text{weakly in}~ L^2(0,T;L^{4/3}(\Omega)),\\
\partial_i \mathbf{u}_k\wedge \mathbf{u}_k \xrightarrow{k\to \infty}& g_i(x,t)~\text{weakly-star in}~ L^\infty(0,T;L^{4/3}(\Omega)) \label{global control t} \end{align} \end{subequations} for $1\leqslant i\leqslant 2$. Moreover, there exists $\mathbf{u}=\mathbf{u}(x,t)$ such that \begin{subequations} \begin{align} &\mathbf{u}\in L^\infty(0,T; W^{1,4/3}\cap H^1_{loc}(\Omega^+_t;{\mathbb{S}^1})),\label{orientation integrable1} \\ &\partial_t \mathbf{u}\in L^2(0,T;L^{4/3}\cap L^2_{loc}(\Omega^+_t;{\mathbb{S}^1})),\label{orientation integrable2}\\ &\mathbf{u}=0 \text{ for a.e. } t\in (0,T), x\in \Omega_t^-,\label{u equal 0 region}\\
& g_i=\partial_i \mathbf{u}\wedge \mathbf{u} ~\text{a.e.}~t\in (0,T),~ x \in \Omega^\pm_t,\label{g is what} \end{align} \end{subequations} for $ 0\leqslant i\leqslant 2$. Furthermore, \begin{subequations}\label{weak strong convergence} \begin{align} \partial_t \mathbf{u}_k\xrightarrow{ k\to\infty } \partial_t \mathbf{u} &~\text{weakly in}~ L^2(0,T;L^2_{loc}(\Omega^\pm_t)),\label{deri con2}\\ \nabla \mathbf{u}_k\xrightarrow{k\to\infty } \nabla \mathbf{u} &~\text{weakly in}~ L^\infty(0,T;L^2_{loc}(\Omega^\pm_t)),\label{deri con}\\
\mathbf{u}_k\xrightarrow{k\to\infty } \mathbf{u} & ~\text{strongly in}~ C([0,T];L^2_{loc}(\Omega^\pm_t)).\label{deri con1} \end{align} \end{subequations} \end{prop} \begin{proof}
We denote $\Omega^\pm:=\bigcup_{t\in [0,T]}\Omega_t^\pm\times \{t\}$. By \eqref{projection1} we find \begin{equation}
\Pi_{\mathbf{u}_\varepsilon }\partial_i \mathbf{u}_\varepsilon (x,t)\wedge \mathbf{u}_\varepsilon (x,t)=0~\text{ for a.e. } ~(x,t)\in \Omega\times (0,T)\label{wedge trick} \end{equation} for $0\leqslant i\leqslant 2$ where $\partial_0:=\partial_t$. Using \eqref{L infinity bound1}, \eqref{energy bound4} and \eqref{energy bound0}, we find \begin{align}
&\| \partial_t \mathbf{u}_\varepsilon \wedge\mathbf{u}_\varepsilon \|_{L^2(0,T;L^{4/3}(\Omega))}+\| \nabla \mathbf{u}_\varepsilon \wedge \mathbf{u}_\varepsilon \|_{L^\infty(0,T;L^{4/3}(\Omega))}\nonumber\\
\overset{\eqref{wedge trick}}=&\| (\partial_t \mathbf{u}_\varepsilon -\Pi_{\mathbf{u}_\varepsilon }\partial_t \mathbf{u}_\varepsilon )\wedge \mathbf{u}_\varepsilon \|_{L^2(0,T;L^{4/3}(\Omega))}\nonumber\\
&+\| (\nabla \mathbf{u}_\varepsilon -\Pi_{\mathbf{u}_\varepsilon }\nabla \mathbf{u}_\varepsilon ) \wedge \mathbf{u}_\varepsilon \|_{L^\infty(0,T;L^{4/3}(\Omega))}\leqslant C \end{align} for some $C$ independent of $\varepsilon $. So \eqref{global control} follows from weak compactness.
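The exponent $4/3$ above is dictated by H\"{o}lder's inequality: if, as in \eqref{L infinity bound1}, $\sup_{t\in [0,T]}\|\mathbf{u}_\varepsilon (t)\|_{L^4(\Omega)}\leqslant C$, then for a.e. $t$
\begin{align*}
\big\| (\partial_t \mathbf{u}_\varepsilon -\Pi_{\mathbf{u}_\varepsilon }\partial_t \mathbf{u}_\varepsilon )\wedge \mathbf{u}_\varepsilon \big\|_{L^{4/3}(\Omega)}\leqslant \big\| \partial_t \mathbf{u}_\varepsilon -\Pi_{\mathbf{u}_\varepsilon }\partial_t \mathbf{u}_\varepsilon \big\|_{L^{2}(\Omega)}\,\big\| \mathbf{u}_\varepsilon \big\|_{L^{4}(\Omega)},
\end{align*}
since $|\mathbf{a}\wedge \mathbf{b}|\leqslant |\mathbf{a}|\,|\mathbf{b}|$ and $\tfrac 34=\tfrac 12+\tfrac 14$; the same computation applies to the spatial derivatives, and this is where \eqref{energy bound4} and \eqref{energy bound0} enter.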
It follows from \eqref{L infinity bound1}, \eqref{space der bound local} and \eqref{time der bound local} that, for any $\delta\in (0,\delta_0)$, there exists a subsequence $\varepsilon _k=\varepsilon _k(\delta)>0$ such that \begin{subequations}\label{udelta conv} \begin{align}
\mathbf{u}_{\varepsilon _k}\xrightarrow{k\to\infty } \mathbf{u}&~\text{weakly-star in}~ L^\infty(0,T ;L^4(\Omega))\label{convergence L4},\\ \partial_t \mathbf{u}_{\varepsilon _k}\xrightarrow{ k\to\infty } \partial_t \bar{\mathbf{u}}_\delta&~\text{weakly in}~ L^2(0,T;L^2(\Omega^\pm_t\backslash I_t(\delta))),\label{convergence weak time der}\\ \nabla \mathbf{u}_{\varepsilon _k}\xrightarrow{k\to\infty } \nabla \bar{\mathbf{u}}_\delta&~\text{weakly-star in}~ L^\infty(0,T;L^2(\Omega^\pm_t\backslash I_t(\delta))),\label{convergence weak gradient}
\end{align} \end{subequations}
and $\mathbf{u}=\bar{\mathbf{u}}_\delta$ a.e. in $U^\pm(\delta):=\cup_{t\in [0,T]} \(\Omega_t^\pm\backslash I_t(\delta)\)\times \{t\}$. This combined with \eqref{convergence weak time der} and \eqref{convergence weak gradient} leads to \begin{equation}\label{regular limit} \mathbf{u}\in L^\infty(0,T;H^1_{loc}(\Omega^\pm_t)) ~\text{with}~\partial_t \mathbf{u}\in L^2(0,T;L^2_{loc}(\Omega^\pm_t)). \end{equation}
By a diagonal argument we obtain \eqref{weak strong convergence}. Using the Aubin--Lions lemma, we also obtain
\[ \mathbf{u}_{\varepsilon _k}\xrightarrow{k\to\infty } \bar{\mathbf{u}}_\delta,~\text{strongly in}~ C([0,T];L^2(\Omega^\pm_t\backslash I_t(\delta))).\label{convergence strong conti}\end{equation} Using \eqref{space der bound local}, \eqref{convergence strong conti} and Fatou's lemma, we deduce that $F(\mathbf{u})=F(\bar{\mathbf{u}}_\delta)=0$ a.e. in $U^\pm(\delta)$. Since this holds for any $\delta\in (0,\delta_0)$, by \eqref{bulk} we find that
$ \mathbf{u}$ ranges in $\{0\}\cup {\mathbb{S}^1}$ a.e. in $\Omega\times (0,T)$. This combined with \eqref{volume convergence} and \eqref{regular limit} yields \eqref{u equal 0 region} and
\begin{align} \mathbf{u}\in L^\infty (0,T;H^1_{loc}(\Omega^+_t;{\mathbb{S}^1})) \text{ with }\partial_t \mathbf{u} \in L^2(0,T;L^2_{loc}(\Omega^+_t;{\mathbb{S}^1})).\label{orientation} \end{align}
It remains to show the integrability of $\nabla_{x,t} \mathbf{u}$ up to the boundary: choose a sequence \begin{equation}\label{cut-off} \{\eta^k(\cdot,t)\}\subset C_c^\infty(\Omega^+_t)~\text{ such that }~\eta^k(x,t)\xrightarrow{k\to\infty} \mathbf{1}_{\Omega^+_t}(x). \end{equation} By \eqref{global control1}, \eqref{global control t} and \eqref{weak strong convergence}, we deduce that for $0\leqslant i\leqslant 2$, \begin{equation}\label{precise S} \eta^k g_i= \eta^k \partial_i \mathbf{u}\wedge \mathbf{u} \text{ a.e. in } \Omega\times (0,T). \end{equation} Since $ g_i\in L^1(\Omega\times (0,T))$, by the dominated convergence theorem, we get \eqref{g is what}. Recalling that $\mathbf{u}$ maps $\Omega^+$ into ${\mathbb{S}^1}$, we deduce
\[ |\partial_i \mathbf{u}|^2=|\partial_i \mathbf{u}\wedge \mathbf{u}|^2~a.e.~\text{in}~\Omega^+,~0\leqslant i\leqslant 2.\end{equation} So we improve \eqref{orientation} to \eqref{orientation integrable1} and \eqref{orientation integrable2}. \end{proof}
Given the subsequence $\varepsilon _k$ in Proposition \ref{global control prop}, we shall abbreviate $\psi_{\varepsilon _k}, \mathbf{u}_{\varepsilon _k}$ by $\psi_k$ and $\mathbf{u}_k$ respectively.
Recalling Lemma \ref{sharp est of bdy} and Lemma \ref{sharp est of bdy1}, by choosing $\delta_k=\varepsilon _k^{\frac 1{12}}$ we find
$$b_k:=b_{\delta_k}\in [\tfrac 1 2-\delta_k,\tfrac 1 2+\delta_k],\quad q_k:=q_{\delta_k}\in [2-\delta_k,2+\delta_k]$$ with $(b_k,q_k)\xrightarrow{k\to \infty}(\tfrac 12 ,2)$ such that the set
\[\Omega_t^k:=\{x\in\Omega: b_k<\psi_k(x,t) < q_k\}\label{bk def}\end{equation} has finite perimeter for each fixed $t\in [0,T]$. Moreover, for some $C>0$ independent of $t, \varepsilon $ and $\delta$, there holds \begin{align}
\mathcal{H}^1(\{x: \psi_k(x,t) =q_k\}) &\leqslant C\varepsilon _k^{\frac 1{12}},\label{shrinking length}\\
\left|\mathcal{H}^1(\partial\Omega_t^k)-\mathcal{H}^1 (I_t)\right| & \leqslant C \varepsilon _k^{\frac 1{12}}. \label{sharp bdy est2} \end{align}
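Here the exponent $\tfrac 1{12}$ is simply what the two lemmas above give for the choice $\delta_k=\varepsilon _k^{1/12}$:
\begin{align*}
C\,\varepsilon _k^{1/6}\,\delta_k^{-1}=C\,\varepsilon _k^{1/6-1/12}=C\,\varepsilon _k^{1/12}.
\end{align*}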
The convergence of $\mathbf{u}_k$ up to the boundary is given in the following proposition. Note that \eqref{weak strong convergence} states convergence away from the boundary.
\begin{prop}\label{prop con sbv}
Let $\mathbf{u}$ be the limit vector field in Proposition \ref{global control prop}.
For any fixed $t\in [0,T]$, up to extraction of a subsequence, there holds \begin{subequations} \begin{align}
\mathbf{1}_{ \Omega_t^k} \widehat{\mathbf{u}}_k(x,t) &\xrightarrow{k\to\infty } \mathbf{1}_{\Omega_t^+}\mathbf{u} ~\text{ weakly-star in } BV(\Omega),\label{strong convergence of u1}\\ \mathbf{1}_{ \Omega_t^k} \nabla \widehat{\mathbf{u}}_k(x,t) &\xrightarrow{k\to\infty } \mathbf{1}_{\Omega_t^+}\nabla \mathbf{u} ~\text{ weakly in } L^1(\Omega),\label{convergence diffusive}\\
\mathbf{1}_{ \Omega_t^k} \widehat{\mathbf{u}}_k &\xrightarrow{k\to\infty } \mathbf{1}_{\Omega_t^+}\mathbf{u} ~\text{ strongly in } L^p(\Omega),\text{ for any fixed } p\in [1,\infty).\label{strong convergence of u3} \end{align} \end{subequations}
\end{prop} \begin{proof}
We first claim that there exists a constant $C_f>0$ so that the following holds for
any $\delta\in (0,1/8)$ and $t\in [0,T]$: \begin{align}
|\mathbf{u}_\varepsilon (x,t)|\geqslant C_f\delta\qquad &\qquad \forall x\in \{x:\psi_\varepsilon \geqslant \delta\}.\label{push away} \end{align} Indeed, by \eqref{f increase}, $f$ (and also $g$) is increasing on $[0,s_0]$.
If $|\mathbf{u}_\varepsilon |\geqslant s_0$, we are done. Otherwise
$$\delta \leqslant \psi_\varepsilon =\int^{|\mathbf{u}_\varepsilon |}_0g(s) \, ds\leqslant |\mathbf{u}_\varepsilon | g(s_0),$$ which implies \eqref{push away}. This combined with \eqref{bound degree+orien} and \eqref{bk def} yields
\[\sup_{t\in [0,T]}\int_{\Omega_t^k} \left| \nabla \widehat{\mathbf{u}}_k\right|^2\, dx\leqslant C\label{local gradient} \end{equation} for $k$ sufficiently large. This implies that the sequence $ \mathbf{v}_k(\cdot,t):= \mathbf{1}_{ \Omega_t^k} \widehat{\mathbf{u}}_k(\cdot,t)$ is bounded in $L^\infty(\Omega)$, whose distributional derivatives have no Cantor part. Moreover, the absolutely continuous parts and the jump parts enjoy the estimates \eqref{local gradient} and \eqref{sharp bdy est2} respectively. So it follows from \cite{ambrosio1995new} (or \cite[Section 4.1]{MR1857292}) that $\{\mathbf{v}_k(\cdot,t)\}$ is compact in $SBV(\Omega)$, the class of special functions of bounded variation on $\Omega$. More precisely, there exists $\mathbf{v}\in SBV(\Omega)$ s.t. $\mathbf{v}_k\to\mathbf{v}$ weakly-star in $BV(\Omega)$ as $k\to \infty$, and the absolutely continuous part of the gradient $\nabla^{a} \mathbf{v}_k=\mathbf{1}_{ \Omega_t^k} \nabla \widehat{\mathbf{u}}_k$ converges weakly in $L^1(\Omega)$ to $\nabla^{a} \mathbf{v}$. To identify $\mathbf{v}$, we use \eqref{volume convergence} to deduce that $\mathbf{1}_{\Omega_t^k}\to \mathbf{1}_{\Omega_t^+}$ in $L^1(\Omega)$ as $k\to\infty$. This combined with \eqref{deri con1} yields $\mathbf{v}=\mathbf{1}_{\Omega_t^+}\mathbf{u}$, and thus \eqref{strong convergence of u1} and \eqref{convergence diffusive} are proved. Finally, by \eqref{strong convergence of u1}, the compact embedding of $BV$ functions and the $L^\infty$ bound, we get \eqref{strong convergence of u3}.
\end{proof}
Now we finish the proof of Theorem \ref{main thm} by proving \eqref{anchoring bc}.
\begin{theorem}\label{vtangen} For any fixed $t\in [0,T]$, we have $\mathbf{u}\cdot\boldsymbol{\xi }=0$ for $\mathcal{H}^1$-a.e. $x\in I_t$. \end{theorem} The proof here is inspired by the blow-up argument in \cite{lin2020isotropic}. See also \cite{MR1218685} for the applications of such a method in the study of quasi-convex functionals. To proceed we define the following measures for any Borel set $A\subset \Omega$: \begin{subequations}\label{def measures} \begin{align} \theta(A)&=\mathcal{H}^1 (A\cap I_t),\label{thetameasure}\\
\theta_k(A)&=\int_{A\cap \Omega_t^k} |\nabla \psi_k|\, dx,\label{def measure mue1}\\\mu_k(A)&=\int_A \(\widehat{\mathbf{u}}_k\cdot\mathbf{n}_k \)^2 \,d\theta_k,\label{def measure mue} \end{align} \end{subequations}
where $\mathbf{n}_k=\nabla \psi_k/|\nabla \psi_k|$. \begin{lemma} For any fixed $t\in [0,T]$, there exists a Radon measure $\mu$ so that
\[\theta_k \xrightharpoonup{k\to \infty} \frac 12 \theta,\quad \mu_k\xrightharpoonup{k\to \infty} \mu \quad \text{ weakly-star as Radon measures.} \label{weak radom}\end{equation} \end{lemma} \begin{proof} By \eqref{nablapsiest} and \eqref{cos law}, the families of measures
$\{\theta_k\}, \{\mu_k\}$ have uniformly bounded total variation. So their weak-star convergence follows from weak compactness \cite[Theorem 1.59]{MR1857292}.
It remains to show that the limit of $\theta_k$ is $\frac 12 \theta$.
To this end, we define truncation functions \begin{align}\label{truncation h} T_k(s)=\left\{ \begin{array}{rl} 0\qquad \text{when } &s\leqslant b_k\\
s-b_k \qquad \text{when } &b_k\leqslant s\leqslant q_k\\
q_k-b_k\qquad \text{when } &s\geqslant q_k \end{array} \right.,\\
T(s)=\left\{ \begin{array}{rl} 0\qquad \text{when } &s\leqslant \frac 12\\
s-1/2 \qquad \text{when } &\frac 12\leqslant s\leqslant 2\\ 3/2 \qquad \text{when } &s\geqslant 2 \end{array} \right.. \end{align}
It is obvious that $T_k\xrightarrow{k\to \infty}T$ uniformly on $\mathbb{R}$. Moreover, \begin{align} &\nabla T_k(\psi_k)=\nabla\psi_k \mathbf{1}_{\Omega_t^k}\qquad a.e. ~x\in\Omega,\label{psi-bc}\\ &T_k(\psi_k)\xrightarrow{k\to \infty} \tfrac 12 \mathbf{1}_{\Omega_t^+}\text{ strongly in }L^p(\Omega)\qquad\text{for any fixed}~p\in [1,\infty).\label{psi-b} \end{align} Indeed, by \eqref{df def} we know that $\psi_k(\cdot,t)\in C^1(\Omega)$. Also by \eqref{bk def} we have $T_k'(\psi_k)=\mathbf{1}_{\Omega_t^k}$ for a.e. $x\in\Omega$. Therefore \eqref{psi-bc} follows from the chain rule (cf. \cite[Proposition 3.24]{MR3099262} for a more general setting). Also \eqref{psi-b} follows from \eqref{volume convergence} and the dominated convergence theorem. Now using \eqref{energy bound4} and \eqref{energy bound1}, for any $g(x)\in C^1_c(\Omega)$, we have
\begin{align*}
\int g\, d\theta_k=\int_{ \Omega_t^k} g |\nabla \psi_k|\, dx&\overset{\eqref{energy bound1}}=O(\varepsilon _k)+ \int_{ \Omega_t^k} g \, \boldsymbol{\xi }\cdot \nabla \psi_k \, dx\\
&\overset{\eqref{psi-bc}}=O(\varepsilon _k)+ \int_{\Omega} g \, \boldsymbol{\xi }\cdot \nabla T_k(\psi_k)\, dx \\
&~=O(\varepsilon _k)- \int_{\Omega} \operatorname{div} (g \boldsymbol{\xi }) T_k(\psi_k)\, dx .
\end{align*}
Using \eqref{psi-b} and the fact that $\boldsymbol{\xi }$ coincides with the unit inner normal along $I_t$, we can pass to the limit in the above identities and obtain
\begin{align*}
\lim_{k\to \infty}\int g\, d\theta_k= -\frac 12 \int_{\Omega_t^+} \operatorname{div} (g \boldsymbol{\xi }) \, dx =\frac 12\int_{I_t} g\, d\mathcal{H}^1\overset{\eqref{thetameasure}}=\frac 12 \int g\, d\theta\quad \forall g(x)\in C^1_c(\Omega).
\end{align*}
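The second equality in the last display is the divergence theorem: since $g$ has compact support in $\Omega$ and $\boldsymbol{\xi }$ coincides with the unit inner normal of $\Omega_t^+$ along $I_t$ (cf. \eqref{def:xi}),
\begin{align*}
-\int_{\Omega_t^+} \operatorname{div} (g \boldsymbol{\xi })\, dx=-\int_{\partial \Omega_t^+} g\, \boldsymbol{\xi }\cdot \nu_{\rm out}\, d\mathcal{H}^1=\int_{I_t} g\, |\boldsymbol{\xi }|^2\, d\mathcal{H}^1=\int_{I_t} g\, d\mathcal{H}^1 ,
\end{align*}
where $\nu_{\rm out}=-\boldsymbol{\xi }$ denotes the outward unit normal on $I_t$ and the portion of $\partial \Omega_t^+$ contained in $\partial\Omega$ gives no contribution.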
By approximation, one can pass from $C^1_c(\Omega)$ to $C_c(\Omega)$,
and this proves \eqref{weak radom}. \end{proof}
\begin{proof}[Proof of Theorem \ref{vtangen}]
Recall the Radon measures $\mu,\theta$ in \eqref{weak radom}. By the Besicovitch derivation theorem \cite[Theorem 2.22]{MR1857292}, there exists a singular non-negative measure $\mu^s$ so that \begin{align} \mu=&(D_\theta \mu)\theta+\mu^s,\text{ with }\mu^s\perp \theta,\label{radom-nikodym}\\ D_\theta \mu(x_0)=&\lim_{r\downarrow 0}\frac{\mu(B_r(x_0))}{\theta(B_r(x_0))}\text{ for }\theta-a.e.~x_0\in {\rm supp}(\theta)=I_t.\label{besico} \end{align} These, combined with the lower semi-continuity of Radon measures on open sets under weak-star convergence, yield
\[\int_\Omega D_\theta \mu \,d\theta\overset{\eqref{radom-nikodym}}\leqslant \mu(\Omega)\leqslant \liminf_{k\to\infty} \mu_k(\Omega)\overset{\eqref{cos law}}=0.\label{lowersemi3}\end{equation} Now by \eqref{key lowersemi} below, we find \[\frac 12 \int_\Omega \(\mathbf{u}\cdot \boldsymbol{\xi } \)^2\, d\theta\leqslant \int_\Omega D_\theta \mu\, d\theta.\end{equation} This combined with \eqref{lowersemi3} leads to the desired result.
\end{proof}
\begin{lemma} Let $t\in [0,T]$ be fixed. For $\theta$-a.e. $x_0\in \operatorname{supp}(\theta)=I_t$, there holds \[\frac 12 \(\mathbf{u}\cdot \boldsymbol{\xi } \)^2 (x_0)\leqslant (D_\theta \mu)(x_0).\label{key lowersemi}\end{equation} \end{lemma} \begin{proof} During the proof, we shall suppress the dependence on $t$. For instance we shall abbreviate $\mathbf{u}_k(x,t)$ by $\mathbf{u}_k(x)$. For any $x_0 \in I_t$ and any $R>0$, it follows from \eqref{strong convergence of u3}, \eqref{psi-b} and the dominated convergence theorem that \begin{align*}
& \lim_{k\to \infty} \int_{ B_R(x_0) } \mathbf{1}_{\Omega_t^k}\widehat{\mathbf{u}}_k(x) \cdot \frac{x-x_0}{|x-x_0|} T_k(\psi_k) \,dx
= \frac 12 \int_{ B_R(x_0) } \mathbf{1}_{\Omega^+_t}\mathbf{u} (x) \cdot \frac{x-x_0}{|x-x_0|} \,dx.
\end{align*}
We can use polar coordinates to rewrite the above two integrals in the form $\int_0^R\int_{\partial B_r(x_0)}\cdot \,dSdr$, and then apply Fubini's theorem. Therefore, we find $r_{j} \downarrow 0$ such that for each $j$, there holds \begin{align} \lim _{k\to\infty} \int_{\partial B_{r_{j}}(x_0) \cap \Omega_t^k } \widehat{\mathbf{u}}_k \cdot \nu T_k(\psi_k)\,d\mathcal{H}^1 = \frac 12 \int_{\partial B_{r_{j}}(x_0) \cap \Omega^+_t} \mathbf{u} \cdot \nu\, d \mathcal{H}^{1}\label{boundary concentration} \end{align} where $\nu$ is the outer normal.
Moreover, we can arrange $r_j$ so that $\theta(\partial B_{r_{j}}(x_0))=0$ and $\mu\left(\partial B_{r_{j}}(x_0)\right)=0$ (cf. \cite[Proposition 1.62]{MR1857292} and the discussion afterwards). This combined with \eqref{weak radom} yields
\[
\lim_{k\to \infty}\theta_k(B_{r_{j}}(x_0)) = \frac 12 \theta(B_{r_{j}}(x_0)),\quad
\lim_{k\to \infty}\mu_k(B_{r_{j}}(x_0)) = \mu(B_{r_{j}}(x_0)). \label{weak radom1}\end{equation}
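For the reader's convenience, we recall why such radii are abundant: for a finite Borel measure such as $\theta$,
\begin{align*}
\theta\big(\partial B_r(x_0)\big)=\lim_{s\downarrow r}\theta\big(B_s(x_0)\big)-\lim_{s\uparrow r}\theta\big(B_s(x_0)\big),
\end{align*}
which can be non-zero only at the (at most countably many) discontinuity points of the nondecreasing map $r\mapsto \theta(B_r(x_0))$; the same applies to $\mu$.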
To proceed, we use convexity to write, for suitable $a_m,c_m\in \mathbb{R}$ (for instance $a_m$ enumerating the rationals and $c_m=-a_m^2/4$, since $s^2\geqslant a s-a^2/4$ for every $a\in \mathbb{R}$, with equality at $s=a/2$), that
\[ s^2=\sup_{m\in\mathbb{N}^+}\, (a_m s+c_m)\qquad \forall s\in \mathbb{R}.\label{radom convex}\end{equation}
(cf. \cite[Proposition 2.31]{MR1857292}). For $\theta$-a.e. $x_0\in {\rm supp}(\theta)=I_t$, we deduce from \eqref{besico}, \eqref{weak radom1}, \eqref{def measure mue} and \eqref{radom convex} that \begin{align} D_{\theta} \mu(x_0)&\overset{\eqref{besico}}{=}\lim _{j \rightarrow \infty} \frac{\mu\left(B_{r_{j}}(x_0)\right)}{\theta\left(B_{r_{j}}(x_0)\right)}\overset{\eqref{weak radom1}}{=}\lim _{j \rightarrow \infty} \lim_{k\to \infty}\frac{\mu_k\left(B_{r_{j}}(x_0)\right)}{\theta\left(B_{r_{j}}(x_0)\right)}\nonumber\\ &\overset{\eqref{def measure mue}}{=}\lim _{j \rightarrow \infty} \lim_{k\to \infty}\frac{1}{\theta\left(B_{r_{j}}(x_0)\right)}\int_{B_{r_{j}}(x_0)} \(\widehat{\mathbf{u}}_k\cdot\mathbf{n}_k \)^2\, d\theta_k\nonumber\\ &\overset{\eqref{radom convex}}{\geqslant} \lim _{j \rightarrow \infty} \lim_{k\to \infty}\frac{1}{\theta\left(B_{r_{j}}(x_0)\right)}\int_{B_{r_{j}}(x_0)} \(a_m \widehat{\mathbf{u}}_k\cdot\mathbf{n}_k +c_m\) \, d\theta_k \end{align} for each $m\geqslant 1$. By \eqref{def measure mue1}, \eqref{weak radom1} and \eqref{psi-bc}, we obtain \begin{align}
D_{\theta} \mu(x_0)\overset{\eqref{def measure mue1}}\geqslant & \lim _{j \rightarrow \infty} \lim_{k\to \infty}\left[\frac{a_m}{\theta\left(B_{r_{j}}(x_0)\right)}\int_{B_{r_{j}}(x_0)} \mathbf{1}_{\Omega_t^k} \widehat{\mathbf{u}}_k\cdot\mathbf{n}_k |\nabla\psi_k|\, dx +c_m\frac{ \theta_k (B_{r_{j}}(x_0))}{\theta\left(B_{r_{j}}(x_0)\right)} \right]\nonumber\\ \overset{\eqref{weak radom1},\eqref{psi-bc}}= & \lim _{j \rightarrow \infty} \frac{a_m}{\theta\left(B_{r_{j}}(x_0)\right)}\lim_{k\to \infty}\int_{B_{r_{j}}(x_0)\cap \Omega_t^k} \widehat{\mathbf{u}}_k\cdot\nabla T_k(\psi_k)\, dx + \frac {c_m}2.\label{recover bk} \end{align}
Note that in the last step we used $\mathbf{n}_k|\nabla \psi_k|=\nabla \psi_k$ (cf. \eqref{normal diff} ). It remains to compute the last integral under the limit $k\to \infty$ for fixed $j,m$. We employ \eqref{psi-bc} and integrate by parts to find \begin{align}\label{lower last1}
&\int_{B_{r_{j}}(x_0)\cap \Omega_t^k} \widehat{\mathbf{u}}_k \cdot \nabla T_k(\psi_k) \, dx\\
=& \int_{\partial \(B_{r_{j}}(x_0)\cap \Omega_t^k\)} \widehat{\mathbf{u}}_k \cdot \nu T_k(\psi_k) \, d\mathcal{H}^1- \int_{B_{r_{j}}(x_0)} \mathbf{1}_{\Omega_t^k}(\operatorname{div} \widehat{\mathbf{u}}_k) T_k(\psi_k) \, dx\nonumber\\
=&:A_k-B_k. \nonumber \end{align} Note that the integrand of $A_k$ is uniformly bounded in $L^\infty$. To compute the limit of $A_k$, we first deduce from \eqref{truncation h} that $T_k(\psi_k)=0$ on the set $\{x\in \Omega\mid \psi_k(x,t)=b_k\}$ which has finite perimeter (cf. \eqref{sharp bdy est2}). So we employ \eqref{bk def} to find \begin{align}
A_k= & \int_{\partial B_{r_{j}}(x_0)\cap \Omega_t^k} \widehat{\mathbf{u}}_k \cdot \nu T_k(\psi_k) \, d\mathcal{H}^1+\int_{ B_{r_{j}}(x_0)\cap \{x|\psi_k(x)=q_k\} } \widehat{\mathbf{u}}_k \cdot \nu T_k(\psi_k) \, d\mathcal{H}^1.
\end{align}
The limit of the first integral is given in \eqref{boundary concentration}, and that of the second vanishes by \eqref{shrinking length}. So we conclude that
\begin{align}\label{con ak} \lim_{k\to \infty}A_k= \frac 12 \int_{\partial B_{r_{j}}(x_0) \cap \Omega^+_t} \mathbf{u} \cdot \nu\, d \mathcal{H}^{1}. \end{align} Concerning the integral $B_k$, by \eqref{convergence diffusive}
$\mathbf{1}_{\Omega_t^k}\operatorname{div} \widehat{\mathbf{u}}_k$ converges weakly in $L^1(\Omega)$. Moreover, $T_k(\psi_k)$ is uniformly bounded in $L^\infty$, and converges a.e. in $\Omega$, due to \eqref{psi-b}. Therefore applying the Product Limit Theorem (cf. \cite{MR1014927} or \cite[p. 169]{MR2683475}) we obtain \begin{align} \lim_{k\to \infty}B_k=\frac 12 \int_{ B_{r_{j}}(x_0) \cap \Omega^{+}_t} (\operatorname{div} \mathbf{u})\, dx.\label{lower last} \end{align} Using \eqref{con ak} and \eqref{lower last}, we can compute the limit in \eqref{lower last1} and find \begin{align}
&\lim_{k\to\infty}\int_{B_{r_{j}}(x_0)\cap \Omega_t^k} \widehat{\mathbf{u}}_k \cdot \nabla T_k(\psi_k) \, dx \nonumber\\
=& \frac 12 \int_{\partial B_{r_{j}}(x_0) \cap \Omega^+_t} \mathbf{u} \cdot \nu \, d \mathcal{H}^{1} -\frac 12 \int_{\partial\( B_{r_{j}}(x_0) \cap \Omega^{+}_t\) } \mathbf{u}\cdot\nu\, d \mathcal{H}^{1}\nonumber\\ =& \frac 12 \int_{ B_{r_{j}}(x_0) \cap \partial \Omega^+_t} \mathbf{u} \cdot \boldsymbol{\xi }\, d \mathcal{H}^{1}. \label{lower last2} \end{align} Note that $\boldsymbol{\xi }$ is the inner normal according to \eqref{def:xi}. Substituting \eqref{lower last2} into
\eqref{recover bk} yields \begin{align} D_{\theta} \mu(x_0)&\geqslant \lim _{j \rightarrow \infty} \frac{a_m}{\theta\left(B_{r_{j}}(x_0)\right)}\frac 12 \int_{ B_{r_{j}}(x_0) \cap I_t} \mathbf{u} \cdot\boldsymbol{\xi }\, d \mathcal{H}^{1} +\frac{c_m}2\nonumber\\ &\overset{\eqref{thetameasure}}= \frac {a_m}2 (\mathbf{u}\cdot\boldsymbol{\xi } )(x_0)+\frac{c_m}2,\qquad \forall m\in \mathbb{N}^+. \end{align}
This together with \eqref{radom convex} implies \eqref{key lowersemi}, and the proof is finished.
\end{proof}
\section{Proof of Theorem \ref{main thm oseen frank}: Oseen--Frank limit in the bulk}\label{sec har} The method here is inspired by \cite{MR4163316,du2020weak,MR3518239}.
We denote $\partial_t \mathbf{u}_\varepsilon =: \boldsymbol{\tau}_\varepsilon $ and we write \eqref{Ginzburg-Landau} equivalently by \begin{align}\label{Ginzburg-Landau stat} \boldsymbol{\tau}_\varepsilon &=\mu \nabla\operatorname{div} \mathbf{u}_\varepsilon +\Delta \mathbf{u}_\varepsilon - \frac 1{\varepsilon ^2}\partial F(\mathbf{u}_\varepsilon )\,\,\,\text{in}~ \Omega\times (0,T). \end{align} By Corollary \ref{coro space-time der bound}, for a.e. $t\in (0,T)$ and for any compact set $K\subset \subset \Omega_t^+$, we have \begin{align}
\int_K |\boldsymbol{\tau}_\varepsilon |^2\, dx+ \int_{K}\(\frac 12 |\nabla \mathbf{u}_\varepsilon |^2+\frac 1{\varepsilon ^2} F( \mathbf{u}_\varepsilon ) \)\, dx & \leqslant C(K)\quad \text{ at }t.\label{kortum bound big ball} \end{align}
\begin{prop}\label{partial regularity} Under the assumption \eqref{kortum bound big ball}, there exist absolute constants $\eta_0,\mu_0>0$ which enjoy the following property: for any $\mu\in (0,\mu_0)$, if $B_{2r}(x_0)\subset K$ satisfies \begin{align}\label{small energy of GL}
\int_{B_{2r}(x_0)}\(\frac 12 |\nabla \mathbf{u}_\varepsilon |^2+\frac 1{\varepsilon ^2} F( \mathbf{u}_\varepsilon ) \)\, dx & \leqslant \eta_0^2, \end{align} then up to extraction of a subsequence there holds \[\nabla \mathbf{u}_\varepsilon (\cdot,t)\xrightarrow{\varepsilon \to 0} \nabla \mathbf{u}(\cdot,t)\text{ strongly in }L^2(B_{r/2}(x_0)).\end{equation} \end{prop}
\begin{lemma} Assume \eqref{kortum bound big ball} and \eqref{small energy of GL} for sufficiently small $\eta_0$. Then
\[3/4\leqslant |\mathbf{u}_\varepsilon (\cdot,t)| \leqslant 5/4 \text{ on } B_r(x_0).\label{rho near 11}\end{equation} \end{lemma}
\begin{proof} Without loss of generality, we assume $x_0=0$. For brevity, we write $B_r(0)$ as $B_r$.
{\bf Step 1.} We prove a H\"{o}lder estimate on small balls:
\[\forall x_1 \in B_r, \quad |\mathbf{u}_\varepsilon (x)-\mathbf{u}_\varepsilon (y)|\lesssim \sqrt{\frac{|x-y|}\varepsilon },\quad \forall x,y\in B_{\varepsilon /2}(x_1).\label{kortum holder}\end{equation} To prove \eqref{kortum holder}, let $\hat{\mathbf{u}}_\varepsilon (x)=\mathbf{u}_\varepsilon (x_1+\varepsilon x): B_1\to \mathbb{R}^2$. Then we can write \eqref{Ginzburg-Landau stat} by \[\mu \nabla\operatorname{div} \hat{\mathbf{u}}_\varepsilon (x) +\Delta \hat{\mathbf{u}}_\varepsilon (x)=\varepsilon ^2\boldsymbol{\tau}_\varepsilon (x_1+\varepsilon x ) +\partial F(\hat{\mathbf{u}}_\varepsilon (x) ), \quad x\in B_1(0).\label{kortum est p f} \end{equation}
It follows from \eqref{kortum bound big ball} and a change of variable that $\varepsilon ^2\boldsymbol{\tau}_\varepsilon $ is uniformly bounded in $L^2(B_1)$. To estimate $\partial F(\hat{\mathbf{u}}_\varepsilon )$, we need the following inequality \[|f'(s)|^2\leqslant C_1 f(s),\qquad \forall s\geqslant 0,\label{bulk trick1}\end{equation} which follows from \eqref{bulk2}. Therefore,
\[ \|\partial F(\hat{\mathbf{u}}_\varepsilon )\|^2_{L^2(B_1)}\overset{\eqref{bulk}}=\varepsilon ^{-2}\|f'(|\mathbf{u}_\varepsilon | )\|^2_{L^2(B_\varepsilon (x_1))}\overset{\eqref{bulk trick1}}\leqslant \varepsilon ^{-2}C_1 \int_{B_\varepsilon (x_1)}
f(|\mathbf{u}_\varepsilon | )\, dx\overset{\eqref{kortum bound big ball}}\lesssim 1. \end{equation} Altogether, we prove that the right hand side of \eqref{kortum est p f} is bounded in $L^2(B_1)$.
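For instance, the claimed bound on the $\boldsymbol{\tau}_\varepsilon $-term follows from the two-dimensional change of variables $y=x_1+\varepsilon x$ (using $\varepsilon <r$, so that $B_\varepsilon (x_1)\subset K$):
\begin{align*}
\big\|\varepsilon ^{2}\boldsymbol{\tau}_\varepsilon (x_1+\varepsilon \,\cdot\,)\big\|_{L^2(B_1)}^2=\varepsilon ^{4}\cdot\varepsilon ^{-2}\int_{B_\varepsilon (x_1)}|\boldsymbol{\tau}_\varepsilon |^2\, dy\overset{\eqref{kortum bound big ball}}\leqslant C(K)\,\varepsilon ^{2}.
\end{align*}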
By interior estimates for elliptic systems (see \cite[Theorem 4.9]{MR3099262}), we obtain
\[\|\hat{\mathbf{u}}_\varepsilon \|_{H^2(B_{1/2})}\lesssim 1+\|\hat{\mathbf{u}}_\varepsilon \|_{L^2(B_1)}.\label{morreyh2}\end{equation} To estimate the last term, we employ \begin{align}
\|\hat{\mathbf{u}}_\varepsilon \|_{L^2(B_1)}^2 & \lesssim 1+ \varepsilon ^{-2} \int_{B_\varepsilon (x_1) \cap \{ |\mathbf{u}_\varepsilon |\geqslant 1\}} \( | \mathbf{u}_\varepsilon |-1\)^2 \nonumber\\
& \overset{\eqref{bulk2}}\leqslant 1+ \varepsilon ^{-2} \int_{B_\varepsilon (x_1)} f (| \mathbf{u}_\varepsilon |) \overset{\eqref{kortum bound big ball}}\lesssim 1.\label{chebychev1} \end{align} So \eqref{morreyh2} and Morrey's embedding $H^2\hookrightarrow C^{1/2}$ imply \eqref{kortum holder}.
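Spelling out the last step: by \eqref{morreyh2} and \eqref{chebychev1}, $\|\hat{\mathbf{u}}_\varepsilon \|_{H^2(B_{1/2})}\lesssim 1$, and in two dimensions $H^2(B_{1/2})\hookrightarrow C^{0,1/2}(\overline{B_{1/2}})$, so that
\begin{align*}
|\mathbf{u}_\varepsilon (x)-\mathbf{u}_\varepsilon (y)|=\Big|\hat{\mathbf{u}}_\varepsilon \Big(\tfrac{x-x_1}\varepsilon \Big)-\hat{\mathbf{u}}_\varepsilon \Big(\tfrac{y-x_1}\varepsilon \Big)\Big|\lesssim \Big(\tfrac{|x-y|}\varepsilon \Big)^{1/2}\qquad \forall x,y\in B_{\varepsilon /2}(x_1),
\end{align*}
which is \eqref{kortum holder}.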
{\bf Step 2:}
We claim that when $\eta_0$ and $\varepsilon $ are sufficiently small, there holds \eqref{rho near 11} or \begin{align}
|\mathbf{u}_\varepsilon | \leqslant 1/4 \text{ on } B_r.\label{rho near 0} \end{align} Indeed, if neither of them held, then (by the intermediate value theorem, since $|\mathbf{u}_\varepsilon |$ is continuous and $B_r$ is connected)
there would exist a point $x_1\in B_r$ with \[ |\mathbf{u}_\varepsilon (x_1)|\in (1/4,3/4)\cup (5/4,+\infty).\end{equation}
According to \eqref{bulk2}, this implies $f(|\mathbf{u}_\varepsilon (x_1)|)\geqslant \frac 1{16}$. By \eqref{kortum holder}, then there would exist $\theta\in (0,1/2)$ so that
\[F(\mathbf{u}_\varepsilon (x))=f(|\mathbf{u}_\varepsilon (x)|)\geqslant \frac 1{32}\qquad \forall x\in B_{\varepsilon \theta}(x_1).\end{equation} Integrating this inequality over $B_{\varepsilon \theta}(x_1)$ would contradict \eqref{small energy of GL} when $\eta_0$ is sufficiently small. So we have either \eqref{rho near 11} or \eqref{rho near 0}. However, with \eqref{rho near 0} we deduce from \eqref{bulk2} that \begin{align}\label{GL equ near 0}
\mu \nabla\operatorname{div} \mathbf{u}_\varepsilon +\Delta \mathbf{u}_\varepsilon - \varepsilon ^{-2} \mathbf{u}_\varepsilon =\boldsymbol{\tau}_\varepsilon \,\,\,\text{in}~ B_r. \end{align} So we can apply interior estimates for elliptic systems and deduce
\[\| \mathbf{u}_\varepsilon \|_{H^2(B_{r/2 })}+\varepsilon ^{-2}\| \mathbf{u}_\varepsilon \|_{L^2(B_{r/2 })}\lesssim \|\boldsymbol{\tau}_\varepsilon \|_{L^2(B_r)}+\| \mathbf{u}_\varepsilon \|_{L^2(B_r)}.\label{H^2 ue}\end{equation}
Indeed, one can adapt the proof of \cite[Theorem 4.9]{MR3099262} to include the `good' term involving $\varepsilon ^{-2}$. By a similar estimate as done in \eqref{chebychev1}, we can show $\| \mathbf{u}_\varepsilon \|_{L^2(B_r)}\lesssim 1$, and thus close the estimate \eqref{H^2 ue} which would imply the strong convergence $\mathbf{u}_\varepsilon \to 0$ in $L^2(B_{r/2})$. However, by \eqref{deri con1} $\mathbf{u}_{\varepsilon _k}$ converges to a unit vector field in $K\subset\subset \Omega_t^+$. Therefore, we rule out \eqref{rho near 0} and obtain \eqref{rho near 11}. \end{proof}
By \eqref{rho near 11}, we can polar decompose $ \mathbf{u}_\varepsilon =\rho_\varepsilon \mathbf{v}_\varepsilon $ where
\[ \rho_\varepsilon =|\mathbf{u}_\varepsilon |,\quad \mathbf{v}_\varepsilon =e^{i\theta_\varepsilon }=\mathbf{u}_\varepsilon /|\mathbf{u}_\varepsilon |\text{ on } B_r(x_0).\label{polar decom}\end{equation}
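For later use we record the elementary consequence of \eqref{polar decom}: since $\partial_i \mathbf{v}_\varepsilon =\partial_i \theta_\varepsilon \,\mathbf{v}_\varepsilon ^\bot$, $|\mathbf{v}_\varepsilon |=|\mathbf{v}_\varepsilon ^\bot|=1$ and $\mathbf{v}_\varepsilon \cdot \mathbf{v}_\varepsilon ^\bot=0$,
\begin{align*}
\partial_i \mathbf{u}_\varepsilon =(\partial_i \rho_\varepsilon )\,\mathbf{v}_\varepsilon +\rho_\varepsilon (\partial_i \theta_\varepsilon )\,\mathbf{v}_\varepsilon ^\bot,\qquad |\nabla \mathbf{u}_\varepsilon |^2=|\nabla \rho_\varepsilon |^2+\rho_\varepsilon ^2\,|\nabla \theta_\varepsilon |^2\quad \text{on } B_r(x_0).
\end{align*}
In particular, by \eqref{rho near 11}, $|\nabla (\theta_\varepsilon ,\rho_\varepsilon )|\leqslant 2|\nabla \mathbf{u}_\varepsilon |$ on $B_r(x_0)$; this is the identity behind \eqref{OS flow1} below.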
\begin{lemma}
Assume \eqref{kortum bound big ball} and \eqref{small energy of GL} hold with $\eta_0$ sufficiently small.
Then, under the polar decomposition \eqref{polar decom}, the equation \eqref{Ginzburg-Landau stat} is equivalent to the following system for $\mathbf{w}_\varepsilon =(\theta_\varepsilon , \rho_\varepsilon )$ in $B_r(x_0)$: \begin{subequations}\label{polar equ} \begin{align} \label{polar decompose equ1}
\Delta \rho_\varepsilon -\varepsilon ^{-2} f'(\rho_\varepsilon )&+\mu \mathbf{v}_\varepsilon ^{\otimes 2}: \nabla^2\rho_\varepsilon +\mu\rho_\varepsilon (\mathbf{v}_\varepsilon \otimes \mathbf{v}_\varepsilon ^\bot):\nabla^2\theta_\varepsilon \nonumber\\
&=\boldsymbol{\tau}_\varepsilon \cdot \mathbf{v}_\varepsilon + \mathcal{N}_{1,\varepsilon }(\nabla\mathbf{w}_\varepsilon ,\nabla\mathbf{w}_\varepsilon ),\\
\rho_\varepsilon \Delta\theta_\varepsilon +& \rho_\varepsilon \mu (\mathbf{v}_\varepsilon ^{\bot})^{\otimes 2}: \nabla^2\theta_\varepsilon +\mu (\mathbf{v}_\varepsilon \otimes \mathbf{v}_\varepsilon ^\bot):\nabla^2\rho_\varepsilon \nonumber \\
&= \boldsymbol{\tau}_\varepsilon \cdot \mathbf{v}_\varepsilon ^\bot+ \mathcal{N}_{2,\varepsilon }(\nabla\mathbf{w}_\varepsilon ,\nabla\mathbf{w}_\varepsilon ),\label{polar decompose equ2} \end{align} \end{subequations} where $\mathcal{N}_{k,\varepsilon }(\cdot,\cdot):\mathbb{R}^{2\times 2}\times \mathbb{R}^{2\times 2}\mapsto \mathbb{R}$ (with $k=1,2$) are quadratic forms with uniformly bounded coefficients depending on $(\rho_\varepsilon ,\mathbf{v}_\varepsilon )$.
\end{lemma} \begin{proof} To simplify the presentation we will suppress the subscript $\varepsilon $. By \eqref{polar decom} we have
\[\Delta \mathbf{v}\cdot\mathbf{v}=-|\nabla \mathbf{v}|^2=-|\nabla\theta|^2,\label{bocher}\end{equation}
and $\partial_i \mathbf{v}=\partial_i \theta \mathbf{v}^\bot$. So we have \begin{align} (\mathbf{v}\cdot \nabla)\mathbf{v}&=(\mathbf{v}\cdot \nabla \theta)\mathbf{v}^\bot ,\quad (\mathbf{v}\cdot \nabla)\mathbf{v}^\bot=-(\mathbf{v}\cdot \nabla \theta)\mathbf{v},\label{convection e}\\ \partial_i \operatorname{div} \mathbf{v}&=\partial_i \nabla \theta\cdot \mathbf{v}^\bot+\nabla\theta\cdot\partial_i \mathbf{v}^\bot,\qquad 1\leqslant i\leqslant 2.\label{nabladiv e1} \end{align} Substituting \eqref{polar decom} into \eqref{Ginzburg-Landau stat} yields \begin{align}\label{polar decompose equ} \boldsymbol{\tau} =&~\Delta \rho \mathbf{v} +2\nabla\rho \cdot \nabla \mathbf{v} +\rho \Delta \mathbf{v} -\varepsilon ^{-2} f'(\rho ) \mathbf{v} \nonumber\\ &+\mu \nabla^2\rho \cdot \mathbf{v} +\mu \rho \nabla \operatorname{div} \mathbf{v} +\mu \(\nabla \rho \cdot \partial_i \mathbf{v} \)_{1\leqslant i\leqslant 2} +\mu\nabla\rho \operatorname{div} \mathbf{v}. \end{align} Testing \eqref{polar decompose equ} by $\mathbf{v} $ and using \eqref{bocher} and \eqref{nabladiv e1}, we obtain \begin{align}\label{polar decompose equ3} \boldsymbol{\tau} \cdot \mathbf{v}
=& \Delta \rho -\rho |\nabla \theta |^2-\varepsilon ^{-2} f'(\rho )\nonumber \\ &+\mu \nabla^2\rho : \mathbf{v} ^{\otimes 2} +\mu\rho \nabla^2\theta :(\mathbf{v} \otimes \mathbf{v} ^\bot)\nonumber\\ &+ \mu \left[ (\nabla\rho\cdot\mathbf{v}^\bot)(\nabla\theta\cdot \mathbf{v})-\rho(\nabla\theta\cdot\mathbf{v})^2+(\nabla\rho\cdot\mathbf{v})(\nabla\theta\cdot \mathbf{v}^\bot)\right]. \end{align} Note that the second term and those in the last line of the above equation are quadratic in $(\nabla \rho,\nabla \theta)$.
So we can write \eqref{polar decompose equ3} as \eqref{polar decompose equ1}. The derivation of the $\theta$-equation is similar: we first test \eqref{polar decompose equ} by $\mathbf{v} ^\bot$ and then use $\Delta \mathbf{v} \cdot \mathbf{v} ^\bot=\Delta \theta $ and \eqref{nabladiv e1}, and finally insert the quadratic terms into $\mathcal{N}_2(\cdot,\cdot)$ \begin{align}
\boldsymbol{\tau}\cdot \mathbf{v} ^\bot =&~ \rho \Delta\theta+\mu \nabla^2\rho : (\mathbf{v} \otimes \mathbf{v} ^\bot) +\mu \rho \nabla^2\theta:(\mathbf{v}^\bot\otimes \mathbf{v}^\bot) - \mathcal{N}_2(\nabla\mathbf{w},\nabla\mathbf{w}).\end{align}
\end{proof} \begin{lemma} Assume \eqref{kortum bound big ball} and \eqref{small energy of GL} hold with $\eta_0$ sufficiently small. Then there exists $\mu_0>0$ so that the following inequality holds for any $\mu\in (0,\mu_0)$: \begin{align}\label{CZ est3}
\|\nabla^2 (\theta_\varepsilon , \rho_\varepsilon )\|_{L^{4/3}(B_{r/2}(x_0))} \lesssim 1. \end{align} \end{lemma}
\begin{proof} Denoting $\mathbf{w}_\varepsilon =(\theta_\varepsilon , \rho_\varepsilon )$, we deduce from \eqref{small energy of GL} and \eqref{rho near 11} that
\[\|\nabla \mathbf{w}_\varepsilon \|_{L^2( B_r(x_0))}\leqslant C\eta_0.\label{OS flow1} \end{equation} For brevity we write the space $L^p( B_r(x_0))$ by $L^p$. Let $\chi$ be a cut-off function which $\equiv 1$ in $B_{r/2}(x_0)$ and $\equiv 0$ in $\Omega\backslash B_r(x_0)$, and let \[ \bar{\mathbf{w}}_\varepsilon =(\bar{\theta}_\varepsilon ,\bar{\rho}_\varepsilon )~\text{ with }~\bar{\rho}_\varepsilon =\chi (\rho_\varepsilon -1)\quad \text{ and }\quad \bar{\theta}_\varepsilon =\chi \theta_\varepsilon .\label{bar def}\end{equation} Then multiplying \eqref{polar decompose equ2} by $\chi$ and rearranging the terms, we obtain \begin{align}\label{CZ est} \rho_\varepsilon \Delta\bar{\theta}_\varepsilon +\rho_\varepsilon [\chi,\Delta] \theta_\varepsilon +&\mu \rho_\varepsilon (\mathbf{v}_\varepsilon ^{\bot})^{\otimes 2}: \nabla^2\bar{\theta}_\varepsilon +\mu \rho_\varepsilon (\mathbf{v}_\varepsilon ^{\bot})^{\otimes 2}: [\chi,\nabla^2]\theta_\varepsilon \nonumber\\&+\mu (\mathbf{v}_\varepsilon \otimes \mathbf{v}_\varepsilon ^\bot):\nabla^2\bar{\rho}_\varepsilon +\mu (\mathbf{v}_\varepsilon \otimes \mathbf{v}_\varepsilon ^\bot):[\chi,\nabla^2]\rho_\varepsilon \nonumber \\
&= \chi \boldsymbol{\tau}_\varepsilon \cdot \mathbf{v}_\varepsilon ^\bot+ \mathcal{N}_{2,\varepsilon }(\chi \nabla\mathbf{w}_\varepsilon ,\nabla\mathbf{w}_\varepsilon ). \end{align} Note that the commutators involve at most first order derivatives of $\mathbf{w}_\varepsilon =(\theta_\varepsilon ,\rho_\varepsilon )$, which enjoys \eqref{OS flow1}, and $\boldsymbol{\tau}_\varepsilon $ enjoys \eqref{kortum bound big ball}. Moreover, by \eqref{rho near 11}, $3/4\leqslant \rho_\varepsilon \leqslant 5/4$. So applying the $L^p$-estimate of elliptic equation \cite[pp. 109]{MR2435520} to \eqref{CZ est} yields
\begin{align}\label{polar decompose equ4}
\|\nabla^2 \bar{\theta}_\varepsilon \|_{L^{4/3}}
\lesssim ~& 1+ \mu \|\nabla^2 (\bar{\theta}_\varepsilon ,\bar{\rho}_\varepsilon )\|_{L^{4/3}}+\left\| \mathcal{N}_{2,\varepsilon }(\chi \nabla\mathbf{w}_\varepsilon ,\nabla\mathbf{w}_\varepsilon )\right\|_{L^{4/3}}. \end{align} To estimate the last term, we employ the bi-linearity of $\mathcal{N}_{k,\varepsilon }$: \begin{align*}
&\left\| \mathcal{N}_{k,\varepsilon }(\chi \nabla\mathbf{w}_\varepsilon ,\nabla\mathbf{w}_\varepsilon )\right\|_{L^{4/3}}\\
\leqslant ~& \left\| \mathcal{N}_{k,\varepsilon }( \nabla\bar{\mathbf{w}}_\varepsilon ,\nabla\mathbf{w}_\varepsilon )\right\|_{L^{4/3}}+\left\| \mathcal{N}_{k,\varepsilon }( \nabla\chi\otimes\mathbf{w}_\varepsilon ,\nabla\mathbf{w}_\varepsilon )\right\|_{L^{4/3}}\\
\overset{\eqref{OS flow1}}\lesssim & \|\nabla\bar{\mathbf{w}}_\varepsilon \|_{L^4}\|\nabla\mathbf{w}_\varepsilon \|_{L^2}+1\\
\lesssim ~& \|\nabla^2\bar{\mathbf{w}}_\varepsilon \|_{L^{4/3}}\|\nabla\mathbf{w}_\varepsilon \|_{L^2}+1\qquad \text{by Sobolev's embedding}. \end{align*} Combining the above two inequalities we obtain the $L^p$-estimate of $\bar{\theta}_\varepsilon $:
\begin{align}\label{CZ est1}
\|\nabla^2 \bar{\theta}_\varepsilon \|_{L^{4/3}}
\lesssim ~& 1+ \mu \|\nabla^2 (\bar{\theta}_\varepsilon ,\bar{\rho}_\varepsilon )\|_{L^{4/3}}+ \|\nabla\bar{\mathbf{w}}_\varepsilon \|_{L^4}\|\nabla\mathbf{w}_\varepsilon \|_{L^2}. \end{align} Now we turn to the estimate of $\rho_\varepsilon $.
Using \eqref{rho near 11} and \eqref{bulk2}, we have $f'(\rho)=2(\rho-1)$ and we can write \eqref{polar decompose equ1} as \begin{align*} &-2\varepsilon ^{-2} \bar{\rho}_\varepsilon +\Delta \bar{\rho}_\varepsilon +[\chi,\Delta] (\rho_\varepsilon -1) +\mu \mathbf{v}_\varepsilon ^{\otimes 2}: \nabla^2\bar{\rho}_\varepsilon \\ &+\mu \mathbf{v}_\varepsilon ^{\otimes 2}: [\chi,\nabla^2](\rho_\varepsilon -1) +\mu\rho_\varepsilon (\mathbf{v}_\varepsilon \otimes \mathbf{v}_\varepsilon ^\bot):\nabla^2\bar{\theta}_\varepsilon +\mu\rho_\varepsilon (\mathbf{v}_\varepsilon \otimes \mathbf{v}_\varepsilon ^\bot):[\chi,\nabla^2]\theta_\varepsilon \nonumber\\
&=\chi \boldsymbol{\tau}_\varepsilon \cdot \mathbf{v}_\varepsilon + \mathcal{N}_{1,\varepsilon }(\chi \nabla\mathbf{w}_\varepsilon ,\nabla\mathbf{w}_\varepsilon ). \end{align*} In the same way that we did for \eqref{polar decompose equ4}, we find
\begin{align*}
\|\nabla^2 \bar{\rho}_\varepsilon \|_{L^{4/3}}
\lesssim ~& 1+ \mu \|\nabla^2 (\bar{\theta}_\varepsilon ,\bar{\rho}_\varepsilon )\|_{L^{4/3}}+\left\| \mathcal{N}_{1,\varepsilon }(\chi \nabla\mathbf{w}_\varepsilon ,\nabla\mathbf{w}_\varepsilon )\right\|_{L^{4/3}}\\
\lesssim ~& 1+ \mu \|\nabla^2 (\bar{\theta}_\varepsilon ,\bar{\rho}_\varepsilon )\|_{L^{4/3}}+ \|\nabla^2\bar{\mathbf{w}}_\varepsilon \|_{L^{4/3}}\|\nabla\mathbf{w}_\varepsilon \|_{L^2}. \end{align*} Combining this with \eqref{CZ est1} and \eqref{OS flow1} leads to \begin{align*}
\|\nabla^2 (\bar{\theta}_\varepsilon ,\bar{\rho}_\varepsilon )\|_{L^{4/3}(B_r(x_0))}
\lesssim 1+ \eta_0\|\nabla^2(\bar{\theta}_\varepsilon ,\bar{\rho}_\varepsilon )\|_{L^{4/3}(B_r(x_0))}. \end{align*} Therefore by \eqref{bar def}, we obtain \eqref{CZ est3} for $\eta_0$ sufficiently small. \end{proof}
\begin{proof}[Proof of Proposition \ref{partial regularity}] By \eqref{CZ est3} and the Rellich--Kondrachov theorem, we find \begin{align*} \(\nabla \mathbf{v}_{\varepsilon _k},\nabla \rho_{\varepsilon _k} \) \xrightarrow{k\to \infty} \( \nabla \mathbf{v}, 0\) \text{ strongly in } L^2(B_{r/2}(x_0)). \end{align*}
Finally, using \eqref{rho near 11} and $|\mathbf{v}_\varepsilon |=1$ we obtain the strong convergence \begin{align*} \nabla \mathbf{u}_{\varepsilon _k}=\nabla \mathbf{v}_{\varepsilon _k} \rho_{\varepsilon _k}+\mathbf{v}_{\varepsilon _k} \nabla\rho_{\varepsilon _k} \xrightarrow{k\to \infty} \nabla \mathbf{u}\text{ strongly in }L^2(B_{r/2}(x_0)). \end{align*}
\end{proof}
\begin{proof}[Proof of Theorem \ref{main thm oseen frank}] We employ the covering argument of \cite{MR990191}. For any test function $\varphi(x)\in C_c^1(\Omega_t^+)$, we choose the compact set $K:=\overline{\mathrm{supp} (\varphi)}\subset\subset \Omega_t^+$. By Proposition \ref{partial regularity} we define the set of singular points at time $t \in(0, T]$ by \[
\Sigma_t:=\bigcap_{r>0}\left\{x \in K: B_{2r}(x)\subset K, \liminf _{k \rightarrow \infty} \int_{B_{2r} (x )} \frac{1}{2}\left| \nabla \mathbf{u}_{\varepsilon _k}\right|^{2}+\frac{F( \mathbf{u}_{\varepsilon _k})}{ \varepsilon _{k}^2}>\frac{\eta_0^2} 2\right\}. \end{equation} By \eqref{kortum bound big ball}, we deduce that $\Sigma_t$ is discrete. Therefore w.l.o.g. we can assume $\Sigma_t\cap K=\{x_0\}$ and $B_{2r}(x_0)\subset K$. Let $\eta(x)\in C_c^1(B_2(0))$ be a cut-off function which $\equiv 1$ in $B_1(0)$ and \[\varphi_\delta(x):=\varphi(x)\(1-\eta(\tfrac{ x-x_0}\delta)\)\xrightarrow{\delta\to 0} \varphi(x)\text{ for }x\neq x_0.\label{varphi a.e. cov}\end{equation}
It is obvious that $\varphi_\delta=0$ in $B_{\delta}(x_0)$, and outside $B_{\delta}(x_0)$ the strong convergence of $\nabla \mathbf{u}_{\varepsilon _k}$ holds, due to Proposition \ref{partial regularity}. So we apply $ \wedge \mathbf{u}_{\varepsilon _k} \varphi_\delta$ to the equation \eqref{Ginzburg-Landau}, integrate by parts and pass to the limit $k\to \infty$: \begin{align}\label{varphi conv} &\int_\Omega \partial_t \mathbf{u} \wedge \mathbf{u} \,\varphi_\delta+\mu \int_\Omega \(\operatorname{div} \mathbf{u}\) (\operatorname{rot} \mathbf{u}) \varphi_\delta\nonumber\\ &+\int_\Omega \nabla\varphi_\delta\cdot \nabla \mathbf{u} \wedge \mathbf{u}+\mu \int_\Omega \(\operatorname{div} \mathbf{u} \) \nabla^\bot \varphi_\delta\cdot \mathbf{u}=0. \end{align} By \eqref{varphi a.e. cov} and the regularity of $\mathbf{u}$ stated in Proposition \ref{global control prop} we can pass to the limit $\delta\to 0$ in the first and second integral using the dominated convergence theorem. Concerning the third one, we have \begin{align*}
\int_\Omega \nabla\varphi_\delta\cdot \nabla \mathbf{u} \wedge \mathbf{u}&= \int_\Omega \(1-\eta(\tfrac{ x-x_0}\delta)\)\nabla\varphi\cdot \nabla \mathbf{u} \wedge \mathbf{u}\\ &- \int_{B_{2\delta}(x_0)} \frac {1}\delta \varphi(x) (\nabla\eta)(\tfrac{ x-x_0}\delta) \cdot \nabla \mathbf{u} \wedge \mathbf{u}. \end{align*}
By \eqref{varphi a.e. cov}, the first integral on the right-hand side converges to $ \int_\Omega \nabla\varphi\cdot \nabla \mathbf{u} \wedge \mathbf{u}$. We claim that the second integral vanishes as $\delta\to 0$. Indeed, by $|B_{2\delta}(x_0)|=4\pi \delta^2$ and the Cauchy--Schwarz inequality we have \begin{align}
\left|\int_{B_{2\delta}(x_0)} \frac {1}\delta \varphi(x) (\nabla\eta)(\tfrac{ x-x_0}\delta) \cdot \nabla \mathbf{u} \wedge \mathbf{u}\right| \lesssim \| \nabla \mathbf{u}\|_{L^2(B_{2\delta}(x_0))}\xrightarrow{\delta\to 0}0.
\end{align} A similar argument applies to the fourth integral in \eqref{varphi conv}. So we finally obtain \begin{align} &\int_\Omega \partial_t \mathbf{u} \wedge \mathbf{u} \,\varphi +\mu \int_\Omega \(\operatorname{div} \mathbf{u}\) (\operatorname{rot} \mathbf{u}) \varphi\nonumber\\ &+\int_\Omega \nabla\varphi\cdot \nabla \mathbf{u} \wedge \mathbf{u}+\mu \int_\Omega \(\operatorname{div} \mathbf{u} \) \nabla^\bot \varphi\cdot \mathbf{u}=0, \end{align} which is equivalent to \eqref{weak OF flow}.
\end{proof}
\appendix
\section{Proof of Proposition \ref{prop initial data}}\label{app initial}
\begin{proof}[Proof of Proposition \ref{prop initial data}]
Let $I_0\subset \Omega$ be the initial interface and $\eta_0(z)$ be the cut-off function \eqref{cut-off eta delta}. Then we define \begin{equation}\label{cut-off initial} s_\varepsilon (x):= \eta_0\( x \)\theta\(\frac{d_{I_0}(x)}\varepsilon \)+\Big(1-\eta_0\( x \)\Big)\mathbf{1}_{\Omega^+_0}, \end{equation} where $\theta(z)$ is the solution of the ODE \begin{align} -\theta''(z)+f'(\theta)=0,\quad \theta(-\infty)=0,~\theta(+\infty)=1.\label{travelling wave} \end{align}
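(For orientation, let us record what this profile looks like in the simplest case: if one takes the prototypical double-well potential $f(s)=s^2(1-s)^2$ --- consistent with the linearization $f'(\rho)=2(\rho-1)$ near $\rho=1$ used below, though the potential fixed in \eqref{bulk} need not be exactly this one --- then the first integral $\tfrac 12 \theta'^2=f(\theta)$ of \eqref{travelling wave} gives $\theta'=\sqrt{2}\,\theta(1-\theta)$, whose increasing solution is the logistic profile
\begin{equation*}
\theta(z)=\frac{1}{1+e^{-\sqrt{2}\,z}},
\end{equation*}
which converges exponentially to $0$ and $1$ as $z\to\mp\infty$; this is the exponential decay used repeatedly below.)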
For every $\mathbf{u}^{in}\in H^1(\Omega,{\mathbb{S}^1})$ with trace $\mathbf{u}^{in}|_{I_0}\cdot \mathbf{n}_{I_0}=0$, the initial datum defined by \begin{equation}\label{sharp initial} \mathbf{u}_\varepsilon ^{in} (x):=s_\varepsilon (x) \mathbf{u}^{in}(x) \end{equation} satisfies $\mathbf{u}_\varepsilon ^{in}\in H^1(\Omega)\cap L^\infty(\Omega)$ and \begin{align}\label{transition initial data} \mathbf{u}_\varepsilon ^{in}(x)=\left\{ \begin{array}{rl}
\mathbf{u}^{in} & \quad \text{if}~ x\in \Omega^+_0\backslash I_0(2\delta_0),\\ \theta\(\frac{d_{I_0}(x)}\varepsilon \) \mathbf{u}^{in} & \quad \text{if}~ x\in I_0(\delta_0),\\ 0&\quad \text{if}~ x\in \Omega^-_0\backslash I_0(2\delta_0). \end{array} \right. \end{align} To verify \eqref{initial}, we first compute the modulated energy \eqref{entropy} of the initial data $\mathbf{u}_\varepsilon ^{in} $. We write \eqref{cut-off initial} by \begin{equation}\label{shat} s_\varepsilon (x)= \theta\(\frac{d_{I_0}(x)}\varepsilon \) +\hat{s}_\varepsilon (x), \end{equation} where $ \hat{s}_\varepsilon (x):=\(1-\eta_0\( x \)\)\(\mathbf{1}_{\Omega^+_0} -\theta\(\frac{d_{I_0}(x)}\varepsilon \)\) $. By \eqref{cut-off eta delta} and the exponential decay of $\theta$, which solves \eqref{travelling wave}, we deduce that \begin{equation}\label{hat S decay}
\|\hat{s}_\varepsilon \|_{L^\infty(\Omega)}+\|\nabla \hat{s}_\varepsilon \|_{L^\infty(\Omega)}\leqslant Ce^{-\frac C\varepsilon }, \end{equation}
for some constant $C>0$ that only depends on $I_0$. Thus by \eqref{def:xi} $$F(\mathbf{u}_\varepsilon ^{in} )=f(\theta +\hat{s}_\varepsilon )=f(\theta)+ O(e^{-C/\varepsilon }).$$ With \eqref{sharp initial}, \eqref{shat} and \eqref{hat S decay}, we obtain $$
|\nabla \mathbf{u}_\varepsilon ^{in} |^2= \varepsilon ^{-2}\theta'^2 +\theta^2 |\nabla\mathbf{u}^{in}|^2+O(e^{-C/\varepsilon })(|\nabla\mathbf{u}^{in}|^2+1). $$ Recalling \eqref{bulk}, we have $\psi_\varepsilon =\int_0^{\theta+\hat{s}_\varepsilon }\sqrt{2f(s)}\, ds$.
So the integrand of $E_\varepsilon [\mathbf{u}_\varepsilon | I](0)$ writes \begin{align}\label{initial cal}
&\frac{\varepsilon}2 \left|\nabla \mathbf{u}_\varepsilon ^{in}\right|^2+\frac{1}{\varepsilon} F(\mathbf{u}_\varepsilon ^{in})-\boldsymbol{\xi }\cdot\nabla \psi_\varepsilon \nonumber \\
=&\frac 1{2\varepsilon } \theta'^2 +\frac{1}{\varepsilon} f(\theta )-\varepsilon ^{-1} \boldsymbol{\xi }\cdot \mathbf{n}_{I_0} \theta' \sqrt{2f(\theta )}+\frac \varepsilon 2\theta^2 |\nabla\mathbf{u}^{in}|^2+O(e^{-C/\varepsilon })(|\nabla\mathbf{u}^{in}|^2+1)
\end{align}
By \eqref{def:xi} we know $1-\boldsymbol{\xi }\cdot \mathbf{n}_{I_0}=O(d_I^2)$. So we have
$$\varepsilon ^{-1} \boldsymbol{\xi }\cdot \mathbf{n}_{I_0} \theta' \sqrt{2f(\theta )}=\varepsilon ^{-1} \theta' \sqrt{2f(\theta)}+O(e^{-C/\varepsilon })+ \varepsilon ^{-1} O(d_{I_0}^2) \theta' \sqrt{2f(\theta)}.$$
Note that the last term can be written as $$\varepsilon ^{-1} O(d_{I_0}^2) \theta' \sqrt{2f(\theta)}= O(\varepsilon )z^2 \theta'(z) \sqrt{2f(\theta(z))}\big|_{z= \frac{d_{I_0}(x)}{\varepsilon } }.$$
Substituting the above two equations into \eqref{initial cal} yields
\begin{align}\label{initial cal1}
&\int\frac{\varepsilon}2 \left|\nabla \mathbf{u}_\varepsilon ^{in}\right|^2+\frac{1}{\varepsilon} F(\mathbf{u}_\varepsilon ^{in})-\boldsymbol{\xi }\cdot\nabla \psi_\varepsilon \nonumber \\ =&\int \underbrace{\frac 1{2\varepsilon } \theta'^2 +\frac{1}{\varepsilon} f(\theta )-\frac 1{\varepsilon } \theta' \sqrt{2f(\theta) }}_{=0}\nonumber\\
&+\int \frac \varepsilon 2\theta^2 |\nabla\mathbf{u}^{in}|^2+O(e^{-C/\varepsilon })\int (|\nabla\mathbf{u}^{in}|^2+1) +O(\varepsilon ).
\end{align}
Note that the integrand of the first integral vanishes because $\theta'^2(z)=2f(\theta(z))$, as a consequence of \eqref{travelling wave}.
Now we turn to the first term in \eqref{entropy}. Using \eqref{hat S decay}
\[ |\operatorname{div} \mathbf{u}_\varepsilon ^{in}|^2\leqslant |\nabla\theta\cdot\mathbf{u}^{in}|^2+\theta^2|\operatorname{div} \mathbf{u}^{in}|^2+O(e^{-C/\varepsilon }).\label{div cal}\end{equation}
Using the exponential decay of $\theta'$ and $\mathbf{u}^{in}\cdot\mathbf{n}_{I_0}\big|_{I_0}=0$, we can write
\[|\nabla\theta\cdot\mathbf{u}^{in}|=\Big|\frac{d_{I_0}(x)}\varepsilon \theta'\(\frac{d_{I_0}(x)}\varepsilon \)\frac{\mathbf{u}^{in}\cdot\mathbf{n}_{I_0}}{d_{I_0}(x)}\Big|\leqslant C\Big| \frac{\mathbf{u}^{in}\cdot\mathbf{n}_{I_0}}{d_{I_0}}\Big|.\end{equation}
Applying Hardy's inequality \cite{MR1655516} (in the normal direction) yields
\[\int_\Omega |\nabla\theta\cdot\mathbf{u}^{in}|^2\, dx \leqslant C \int_\Omega | \nabla \mathbf{u}^{in} |^2\, dx.\end{equation}
Combining this with \eqref{div cal} and \eqref{initial cal1} leads to $E_\varepsilon [\mathbf{u}_\varepsilon ^{in} | I_0]\leqslant C\varepsilon $. To verify the second estimate in \eqref{initial preserve}, we shall only give the estimate in $\Omega_t^+$, while the one for $\Omega_t^-$ follows in the same way. We use \eqref{normalization} to deduce that at $t=0$, \begin{align}
&\int_{I_t(\delta_0)\cap \Omega_t^+} |\psi_\varepsilon -1| d_{I}(x)\, dx\Big|_{t=0}\nonumber\\
&\overset{\eqref{sharp initial},\eqref{normalization}}= \int_{I_t(\delta_0)\cap \Omega_t^+}\(\int_{s_\varepsilon (x)}^1 \sqrt{2f(s)}\, ds\) d_{I}(x)\, dx\Big|_{t=0}\nonumber\\
&\overset{\eqref{shat},\eqref{hat S decay}}=\varepsilon \int_{I_t(\delta_0)\cap \Omega_t^+}\(\int_{\theta(\frac{d_{I}(x)}\varepsilon )}^1 \sqrt{2f(s)}\, ds\) \frac{d_{I}(x)}\varepsilon \, dx\Big|_{t=0}+O(e^{-C/\varepsilon }). \end{align} By the exponential decay of $z\int_{\theta(z)}^1\sqrt{2f(s)}\, ds$ to $0$ as $z\uparrow \infty$, we obtain \eqref{initial}. \end{proof}
\section{Proof of Proposition \ref{gronwallprop}}\label{appendix}
\begin{lemma}\label{lemma:expansion 1} The following identity holds \begin{align}
&\int \nabla \mathbf{H}: (\boldsymbol{\xi } \otimes\mathbf{n}_\varepsilon )\left|\nabla \psi_\varepsilon \right| \, d x
-\int (\nabla \cdot \mathbf{H}) \, \boldsymbol{\xi } \cdot \nabla \psi_\varepsilon \, d x\nonumber\\
=&\int \nabla \mathbf{H}: (\boldsymbol{\xi } -\mathbf{n}_\varepsilon ) \otimes\mathbf{n}_\varepsilon \left|\nabla \psi_\varepsilon \right|\, d x+\int \mathbf{H}_\varepsilon \cdot\mathbf{H} |\nabla \mathbf{u}_\varepsilon |\, d x \nonumber\\
&+\int\nabla\cdot\mathbf{H} \( \frac{\varepsilon }2 |\nabla \mathbf{u}_\varepsilon |^2 +\frac{1}\varepsilon F (\mathbf{u}_\varepsilon ) -|\nabla \psi_\varepsilon | \)\, d x +\int\nabla\cdot\mathbf{H} ( |\nabla\psi_\varepsilon |-\boldsymbol{\xi } \cdot\nabla\psi_\varepsilon )\, d x\nonumber\\
&-\int (\nabla \mathbf{H})_{ij} \,\varepsilon \(\partial_i \mathbf{u}_\varepsilon \cdot \partial_j \mathbf{u}_\varepsilon \)\, d x +\int \nabla \mathbf{H}: (\mathbf{n}_\varepsilon \otimes\mathbf{n}_\varepsilon )\left|\nabla \psi_\varepsilon \right|\, d x\label{expansion1}
\end{align} \end{lemma} \begin{proof}
We
introduce the energy stress tensor $T_\varepsilon $
\begin{equation*}
(T_\varepsilon )_{ij}= \( \frac{\varepsilon }2 |\nabla \mathbf{u}_\varepsilon |^2 +\frac{1}{\varepsilon } F (\mathbf{u}_\varepsilon ) \) \delta_{ij} - \varepsilon \partial_i \mathbf{u}_\varepsilon \cdot \partial_j \mathbf{u}_\varepsilon .
\end{equation*} By \eqref{mean curvature app},
we have the identity
$\nabla \cdot T_\varepsilon
=\mathbf{H}_\varepsilon |\nabla \mathbf{u}_\varepsilon |.$
Testing this identity by $\mathbf{H}$, integrating by parts and using \eqref{bc n and H}, we obtain
\begin{equation*}
\begin{split}
&\int \mathbf{H}_\varepsilon \cdot\mathbf{H} |\nabla \mathbf{u}_\varepsilon |\,d x =- \int \nabla\mathbf{H} \colon T_\varepsilon \,d x,\\
&=- \int\nabla\cdot\mathbf{H} \( \frac{\varepsilon }2 |\nabla \mathbf{u}_\varepsilon |^2 +\frac{1}{\varepsilon } F (\mathbf{u}_\varepsilon ) \)\, dx+ \int (\nabla\mathbf{H})_{ij} \, \varepsilon \(\partial_i \mathbf{u}_\varepsilon \cdot \partial_j \mathbf{u}_\varepsilon \) d x.
\end{split}
\end{equation*}
So adding zero leads to
\begin{equation*}
\begin{split}
&\int \nabla \mathbf{H}: \mathbf{n}_\varepsilon \otimes\mathbf{n}_\varepsilon \left|\nabla \psi_\varepsilon \right|d x\\
&=\int \mathbf{H}_\varepsilon \cdot\mathbf{H} |\nabla \mathbf{u}_\varepsilon |\, dx+\int\nabla\cdot\mathbf{H} \( \frac{\varepsilon }2 |\nabla \mathbf{u}_\varepsilon |^2 +\frac{1}{\varepsilon } F (\mathbf{u}_\varepsilon ) -|\nabla \psi_\varepsilon | \)\,d x+\int\nabla\cdot\mathbf{H} |\nabla\psi_\varepsilon |\,d x\\
&-\int (\nabla\mathbf{H})_{ij}\,\varepsilon \(\partial_i \mathbf{u}_\varepsilon \cdot \partial_j \mathbf{u}_\varepsilon \) d x+\int (\nabla \mathbf{H}): (\mathbf{n}_\varepsilon \otimes\mathbf{n}_\varepsilon )\left|\nabla \psi_\varepsilon \right|d x.
\end{split}
\end{equation*}
This easily implies \eqref{expansion1}. \end{proof} The second lemma gives the expansion of the time derivative of \eqref{entropy}.
\begin{lemma}\label{lemma exact dt relative entropy}
Under the assumptions of Theorem \ref{main thm}, the following identity holds \begin{subequations}\label{time deri 4}
\begin{align}
\frac{d}{d t} E&\left[\mathbf{u}_\varepsilon | I\right]
+\frac 1{2\varepsilon }\int \(\varepsilon ^2 \left| \partial_t \mathbf{u}_\varepsilon \right|^2-|\mathbf{H}_\varepsilon |^2\)\,dx\nonumber\\
&+\frac 1{2\varepsilon }\int \Big| \varepsilon \partial_t \mathbf{u}_\varepsilon -(\nabla \cdot \boldsymbol{\xi } ) \partial d_F (\mathbf{u}_\varepsilon ) \Big|^2d x
+\frac 1{2\varepsilon }\int \Big| \mathbf{H}_\varepsilon -\varepsilon |\nabla \mathbf{u}_\varepsilon | \mathbf{H} \Big|^2\,d x\nonumber \\
=&\frac 1{2\varepsilon } \int \Big| (\nabla \cdot \boldsymbol{\xi } ) |\partial d_F (\mathbf{u}_\varepsilon )|\mathbf{n}_\varepsilon +\varepsilon |\Pi_{\mathbf{u}_\varepsilon } \nabla \mathbf{u}_\varepsilon | \mathbf{H}\Big|^2\,d x\label{tail1}
\\&+\frac \varepsilon {2} \int |\mathbf{H}|^2\(|\nabla \mathbf{u}_\varepsilon |^2-|\Pi_{\mathbf{u}_\varepsilon }\nabla \mathbf{u}_\varepsilon |^2\)\,d x
-\int \nabla \mathbf{H}\cdot (\boldsymbol{\xi } -\mathbf{n}_\varepsilon )^{\otimes 2}\left|\nabla \psi_\varepsilon \right|\,d x\label{tail2}\\
& +\int \(\nabla\cdot\mathbf{H}\) \( \frac{\varepsilon }2 |\nabla \mathbf{u}_\varepsilon |^2 +\frac{1}\varepsilon F (\mathbf{u}_\varepsilon ) -|\nabla \psi_\varepsilon | \)\,d x\label{tail4}\\
&+\int\(\nabla\cdot\mathbf{H}\) \(1-\boldsymbol{\xi } \cdot \mathbf{n}_\varepsilon \)|\nabla\psi_\varepsilon |\, d x+ \int J_\varepsilon ^1+J_\varepsilon ^2\, d x,\label{tail3}
\end{align}
\end{subequations}
where $J_\varepsilon ^1, J_\varepsilon ^2$ are given by
\begin{align}
J_\varepsilon ^1
:=& \nabla \mathbf{H}: \mathbf{n}_\varepsilon \otimes\mathbf{n}_\varepsilon \(|\nabla \psi_\varepsilon |-\varepsilon |\nabla \mathbf{u}_\varepsilon |^2\)\nonumber\\
&+\varepsilon \nabla \mathbf{H}:(\mathbf{n}_\varepsilon \otimes \mathbf{n}_\varepsilon )\(
|\nabla \mathbf{u}_\varepsilon |^2-|\Pi_{\mathbf{u}_\varepsilon } \nabla \mathbf{u}_\varepsilon |^2\) \nonumber\\
&-\sum_{ij}\varepsilon (\nabla \mathbf{H})_{ij} \Big((\partial_i \mathbf{u}_\varepsilon -\Pi_{\mathbf{u}_\varepsilon } \partial_i \mathbf{u}_\varepsilon )\cdot(\partial_j \mathbf{u}_\varepsilon -\Pi_{\mathbf{u}_\varepsilon } \partial_j \mathbf{u}_\varepsilon )\Big)\, \label{J1}\\
J_\varepsilon ^2:= &- \(\partial_t \boldsymbol{\xi } +\left(\mathbf{H} \cdot \nabla\right) \boldsymbol{\xi } +\left(\nabla \mathbf{H}\right)^{\mathsf{T}} \boldsymbol{\xi } \)\cdot \nabla \psi_\varepsilon \label{J2}
\end{align} \end{lemma} \begin{proof}[Proof of Lemma \ref{lemma exact dt relative entropy}] We shall employ the Einstein summation convention by summing over repeated Latin indices.
Using the energy dissipation law \eqref{dissipation} and adding zero, we compute the time derivative of the energy \eqref{entropy} by
\begin{align}
&\frac{d}{d t} E_\varepsilon [ \mathbf{u}_\varepsilon | I]
+\varepsilon \int |\partial_t \mathbf{u}_\varepsilon |^2\,d x-\int (\nabla \cdot \boldsymbol{\xi } ) \partial d_F (\mathbf{u}_\varepsilon )\cdot \partial_t \mathbf{u}_\varepsilon \,d x\nonumber\\
=&\int \left(\mathbf{H} \cdot \nabla\right) \boldsymbol{\xi } \cdot\nabla \psi_\varepsilon \,d x
+\int \left(\nabla \mathbf{H} \right)^{\mathsf{T}} \boldsymbol{\xi } \cdot\nabla \psi_\varepsilon \,d x+ \int J_\varepsilon ^2\, d x
\label{time deri 1}
\end{align}
Due to the symmetry of the Hessian of $\psi_\varepsilon $ and the boundary conditions \eqref{bc n and H}, we have
\begin{align*}
\int \nabla \cdot (\boldsymbol{\xi } \otimes \mathbf{H} ) \cdot \nabla \psi_\varepsilon \, d x = \int \nabla \cdot (\mathbf{H} \otimes \boldsymbol{\xi } ) \cdot \nabla \psi_\varepsilon \, d x.
\end{align*}
Hence, the first integral on the right-hand side of \eqref{time deri 1} can be rewritten as
\begin{align*}
&\int\left(\mathbf{H} \cdot \nabla\right) \,\boldsymbol{\xi } \cdot \nabla \psi_\varepsilon \, d x\nonumber \\
&=\int \nabla \cdot (\boldsymbol{\xi } \otimes \mathbf{H} ) \cdot \nabla \psi_\varepsilon \, d x
-\int (\nabla \cdot \mathbf{H} ) \,\boldsymbol{\xi } \cdot \nabla \psi_\varepsilon \, d x\nonumber \\
&= \int(\nabla \cdot \boldsymbol{\xi } ) \,\mathbf{H} \cdot \nabla \psi_\varepsilon \, d x
+\int(\boldsymbol{\xi } \cdot \nabla) \,\mathbf{H} \cdot \nabla \psi_\varepsilon \, d x
-\int (\nabla \cdot \mathbf{H} ) \,\boldsymbol{\xi } \cdot \nabla \psi_\varepsilon \,d x.
\end{align*}
Therefore
\begin{equation*}
\begin{split}
&\frac{d}{d t} E_\varepsilon [ \mathbf{u}_\varepsilon | I]
+\varepsilon \int |\partial_t \mathbf{u}_\varepsilon |^2\,d x-\int (\nabla \cdot \boldsymbol{\xi } ) \partial d_F (\mathbf{u}_\varepsilon )\cdot \partial_t \mathbf{u}_\varepsilon \,d x \\
=& \int (\nabla \cdot \boldsymbol{\xi } )\, \mathbf{H} \cdot \nabla \psi_\varepsilon d x+\int (\boldsymbol{\xi } \cdot \nabla) \,\mathbf{H} \cdot \nabla \psi_\varepsilon \,d x
-\int (\nabla \cdot \mathbf{H} )\, \boldsymbol{\xi } \cdot \nabla \psi_\varepsilon \,d x\\
& +\int \nabla \mathbf{H} : \(\boldsymbol{\xi } \otimes\mathbf{n}_\varepsilon \)\left|\nabla \psi_\varepsilon \right| d x+ \int J_\varepsilon ^2\, d x.
\end{split}
\end{equation*}
Using \eqref{expansion1} to replace the 3rd and 4th integrals on the right-hand side above yields \begin{align}\label{time deri 3-}
&\frac{d}{d t} E_\varepsilon [ \mathbf{u}_\varepsilon | I]
+\varepsilon \int |\partial_t \mathbf{u}_\varepsilon |^2\,d x-\int (\nabla \cdot \boldsymbol{\xi } ) \partial d_F (\mathbf{u}_\varepsilon )\cdot \partial_t \mathbf{u}_\varepsilon \,d x
\\ =& \int (\nabla \cdot \boldsymbol{\xi } ) \,\mathbf{H} \cdot \nabla \psi_\varepsilon \,d x
+\int (\boldsymbol{\xi } \cdot \nabla)\, \mathbf{H} \cdot \nabla \psi_\varepsilon \,d x
+\int \nabla \mathbf{H} : (\boldsymbol{\xi } -\mathbf{n}_\varepsilon ) \otimes\mathbf{n}_\varepsilon \left|\nabla \psi_\varepsilon \right|d x\nonumber\\
& +\int \mathbf{H}_\varepsilon \cdot \mathbf{H} |\nabla \mathbf{u}_\varepsilon |\,d x
+ \int \nabla\cdot\mathbf{H} \( \frac{\varepsilon }2 |\nabla \mathbf{u}_\varepsilon |^2 +\frac{1}\varepsilon F (\mathbf{u}_\varepsilon ) -|\nabla \psi_\varepsilon | \)\,d x\nonumber\\&
+\int \nabla\cdot\mathbf{H} \(|\nabla\psi_\varepsilon |-\boldsymbol{\xi } \cdot \nabla\psi_\varepsilon \)\,d x-\int (\nabla \mathbf{H})_{ij} \,\varepsilon \(\partial_i \mathbf{u}_\varepsilon \cdot \partial_j \mathbf{u}_\varepsilon \)\, d x\nonumber\\
&
+\int \nabla \mathbf{H} : \mathbf{n}_\varepsilon \otimes\mathbf{n}_\varepsilon \left|\nabla \psi_\varepsilon \right|\,d x +\int J_\varepsilon ^2\, dx\nonumber
\end{align}
We claim that $J_\varepsilon ^1$ arises from the 2nd- and 3rd-to-last integrals. Indeed, on the set $\{ x \mid g(|\mathbf{u}_\varepsilon |)>0\}$, it follows from \eqref{projection1} and \eqref{projection} that \begin{align*}
\Pi_{\mathbf{u}_\varepsilon } \partial_i \mathbf{u}_\varepsilon \cdot \Pi_{\mathbf{u}_\varepsilon } \partial_j \mathbf{u}_\varepsilon |\partial d_F(\mathbf{u}_\varepsilon )|^2=\partial_i \psi_\varepsilon \partial_j \psi_\varepsilon =n_\varepsilon ^i n_\varepsilon ^j |\Pi_{\mathbf{u}_\varepsilon } \nabla \mathbf{u}_\varepsilon |^2|\partial d_F(\mathbf{u}_\varepsilon )|^2 \end{align*} So by adding zero we find
\begin{equation*}
\begin{split}
& \nabla \mathbf{H} : \mathbf{n}_\varepsilon \otimes\mathbf{n}_\varepsilon \left|\nabla \psi_\varepsilon \right|- (\nabla \mathbf{H})_{ij} \,\varepsilon \(\partial_i \mathbf{u}_\varepsilon \cdot \partial_j \mathbf{u}_\varepsilon \)\\
\overset{\eqref{projection1}}=& \nabla \mathbf{H} : \mathbf{n}_\varepsilon \otimes\mathbf{n}_\varepsilon \left|\nabla \psi_\varepsilon \right|- \varepsilon (\nabla \mathbf{H})_{ij}(\Pi_{\mathbf{u}_\varepsilon } \partial_i \mathbf{u}_\varepsilon \cdot \Pi_{\mathbf{u}_\varepsilon } \partial_j \mathbf{u}_\varepsilon ) \\
& - (\nabla \mathbf{H})_{ij} \,\varepsilon \Big((\partial_i \mathbf{u}_\varepsilon -\Pi_{\mathbf{u}_\varepsilon } \partial_i \mathbf{u}_\varepsilon )\cdot(\partial_j \mathbf{u}_\varepsilon -\Pi_{\mathbf{u}_\varepsilon } \partial_j \mathbf{u}_\varepsilon )\Big) \overset{\eqref{J1}}= J_\varepsilon ^1
\end{split}
\end{equation*}
On the set $\{ x\mid |\mathbf{u}_\varepsilon |=1\}$, by \eqref{projection1} and \eqref{ADM chain rule} we have $\Pi_{\mathbf{u}_\varepsilon } \partial_j \mathbf{u}_\varepsilon =0$ and $\nabla \psi_\varepsilon =0$ a.e. So we obtain the above identity without using \eqref{projection}. On $\{ x\mid \mathbf{u}_\varepsilon =0\}$ we employ \cite[Theorem 4.4]{MR3409135} to deduce that $\nabla \mathbf{u}_\varepsilon =0$ a.e., and thus the identity still holds.
Using the definition \eqref{normal diff} of $\mathbf{n}_\varepsilon $, we merge the 2nd and 3rd integrals on the right-hand side of \eqref{time deri 3-} and use $ \nabla \mathbf{H} :(\boldsymbol{\xi } \otimes \boldsymbol{\xi } )$ (due to \eqref{normal H}) to obtain
\begin{align}\label{time deri 3}
&\frac{d}{d t} E_\varepsilon [ \mathbf{u}_\varepsilon | I]
=-\varepsilon \int |\partial_t \mathbf{u}_\varepsilon |^2\, dx+\int (\nabla \cdot \boldsymbol{\xi } ) \partial d_F (\mathbf{u}_\varepsilon )\cdot \partial_t \mathbf{u}_\varepsilon \, dx\nonumber\\
&+ \int (\nabla\cdot \boldsymbol{\xi } ) \,\mathbf{H} \cdot \nabla \psi_\varepsilon \, dx+\int \mathbf{H}_\varepsilon \cdot \mathbf{H} |\nabla \mathbf{u}_\varepsilon | \, dx -\int \nabla \mathbf{H} : (\boldsymbol{\xi } -\mathbf{n}_\varepsilon )^{\otimes 2}\left|\nabla \psi_\varepsilon \right|\, dx\nonumber\\
& +\int \(\nabla\cdot\mathbf{H}\) \Big( \frac{\varepsilon }2 |\nabla \mathbf{u}_\varepsilon |^2 +\frac{1}\varepsilon F (\mathbf{u}_\varepsilon ) -|\nabla \psi_\varepsilon | \Big)\, dx\nonumber\\&+\int (\nabla\cdot\mathbf{H}) \(1-\boldsymbol{\xi } \cdot \mathbf{n}_\varepsilon \)|\nabla\psi_\varepsilon |\, dx+ \int J_\varepsilon ^1+J_\varepsilon ^2 \, dx
\end{align}
Now we complete squares for the first four terms on the right-hand side of \eqref{time deri 3}.
Reordering terms, we have
\begin{align*}
\notag-&\varepsilon |\partial_t \mathbf{u}_\varepsilon |^2+ (\nabla \cdot \boldsymbol{\xi } ) \partial d_F (\mathbf{u}_\varepsilon )\cdot \partial_t \mathbf{u}_\varepsilon
+ (\nabla \cdot \boldsymbol{\xi } ) \mathbf{H} \cdot \nabla \psi_\varepsilon + \mathbf{H}_\varepsilon \cdot \mathbf{H} |\nabla \mathbf{u}_\varepsilon |
\\\notag&= -\frac1{2\varepsilon } \Big( |\varepsilon \partial_t \mathbf{u}_\varepsilon |^2 -2(\nabla \cdot \boldsymbol{\xi } ) \partial d_F (\mathbf{u}_\varepsilon )\cdot \varepsilon \partial_t \mathbf{u}_\varepsilon
+(\nabla \cdot \boldsymbol{\xi } )^2 | \partial d_F (\mathbf{u}_\varepsilon )|^2 \Big)
\\\notag&\quad - \frac1{2\varepsilon } |\varepsilon \partial_t \mathbf{u}_\varepsilon |^2 + \frac1{2\varepsilon }(\nabla \cdot \boldsymbol{\xi } )^2 | \partial d_F (\mathbf{u}_\varepsilon )|^2
+ (\nabla \cdot \boldsymbol{\xi } ) \mathbf{H} \cdot \nabla \psi_\varepsilon
\\\notag&\quad - \frac1{2\varepsilon } \Big( |\mathbf{H}_\varepsilon |^2 - 2 \varepsilon |\nabla \mathbf{u}_\varepsilon | \mathbf{H}_\varepsilon \cdot \mathbf{H} + \varepsilon ^2 |\nabla \mathbf{u}_\varepsilon |^2 |\mathbf{H}|^2\Big)
+ \frac1{2\varepsilon } \Big( |\mathbf{H}_\varepsilon |^2 + \varepsilon ^2 |\nabla \mathbf{u}_\varepsilon |^2 |\mathbf{H}|^2\Big)
\\&\notag = -\frac1{2\varepsilon } \Big|\varepsilon \partial_t \mathbf{u}_\varepsilon - (\nabla \cdot \boldsymbol{\xi } ) \partial d_F (\mathbf{u}_\varepsilon ) \Big|^2
- \frac1{2\varepsilon } \Big|\mathbf{H}_\varepsilon - \varepsilon |\nabla \mathbf{u}_\varepsilon | \mathbf{H} \Big|^2
- \frac1{2\varepsilon } |\varepsilon \partial_t \mathbf{u}_\varepsilon |^2 +\frac1{2\varepsilon } |\mathbf{H}_\varepsilon |^2
\\&\quad + \frac1{2\varepsilon } \Big( (\nabla \cdot \boldsymbol{\xi } )^2 |\partial d_F (\mathbf{u}_\varepsilon )|^2 + 2\varepsilon (\nabla \cdot \boldsymbol{\xi } ) \nabla \psi_\varepsilon \cdot \mathbf{H} + |\varepsilon \Pi_{\mathbf{u}_\varepsilon }\nabla \mathbf{u}_\varepsilon |^2 |\mathbf{H}|^2 \Big)
\\\notag&\quad +\frac\varepsilon {2} \left(|\nabla \mathbf{u}_\varepsilon |^2- | \Pi_{\mathbf{u}_\varepsilon }\nabla \mathbf{u}_\varepsilon |^2\right) |\mathbf{H}|^2.
\end{align*}
Using \eqref{normal diff} and the chain rule \eqref{projectionnorm}, the terms above form the last missing square.
Integrating over the domain $\Omega$ and substituting into \eqref{time deri 3} we arrive at \eqref{time deri 4}.
\end{proof}
\begin{proof}[Proof of Proposition \ref{gronwallprop}]
The proof here is exactly the same as in the case $\mu=0$, done in \cite[Lemma 4.4]{MR4284534}. This is because the form of the energy dissipation law \eqref{dissipation} remains unchanged in the presence of the divergence term in \eqref{Ginzburg-Landau}. Note that in the statement of Lemma 4.4 of \cite{MR4284534} the second term on the left-hand side of \eqref{time deri 4} is missing, though the proof of the identity there is correct. See \cite[equation (4.33)]{MR4284534}.
We first estimate the right-hand side of \eqref{time deri 4} by $E_\varepsilon [\mathbf{u}_\varepsilon | I]$ up to a constant that only depends on $I_t$. We start with \eqref{tail1}: it follows from the triangle inequality that
\begin{equation*}
\begin{split}
\int& \left|\frac1{\sqrt{\varepsilon }} (\nabla \cdot \boldsymbol{\xi } ) |\partial d_F (\mathbf{u}_\varepsilon )|\mathbf{n}_\varepsilon +\sqrt{\varepsilon } |\Pi_{\mathbf{u}_\varepsilon } \nabla \mathbf{u}_\varepsilon | \mathbf{H}\right|^2d x
\\ \leqslant &\int \left|(\nabla\cdot \boldsymbol{\xi } )
\left( \frac{1}{\sqrt{\varepsilon }} |\partial d_F (\mathbf{u}_\varepsilon )|-\sqrt{\varepsilon } |\Pi_{\mathbf{u}_\varepsilon } \nabla \mathbf{u}_\varepsilon |
\right)\mathbf{n}_\varepsilon \right|^2\,d x\\&+\int \Big|(\nabla\cdot \boldsymbol{\xi } )\sqrt{\varepsilon } |\Pi_{\mathbf{u}_\varepsilon } \nabla \mathbf{u}_\varepsilon |(\mathbf{n}_\varepsilon -\boldsymbol{\xi } )\Big|^2\, d x \\&+ \int \left|
\big((\nabla \cdot \boldsymbol{\xi } ) \boldsymbol{\xi } +\mathbf{H} \big) \sqrt{\varepsilon } |\Pi_{\mathbf{u}_\varepsilon } \nabla \mathbf{u}_\varepsilon | \right|^2\,d x.
\end{split}
\end{equation*}
The first integral on the right-hand side of the above inequality is controlled by \eqref{energy bound2}. Due to the elementary inequality $|\boldsymbol{\xi } - \mathbf{n}_\varepsilon |^2 \leqslant 2 (1-\mathbf{n}_\varepsilon \cdot\boldsymbol{\xi } )$, the second integral is controlled by \eqref{energy bound1}. The third integral can be treated using the relation $\mathbf{H}=(\mathbf{H}\cdot\boldsymbol{\xi } ) \boldsymbol{\xi } +O(d_I(x,t))$ and \eqref{div xi H}. So it can be controlled by \eqref{energy bound3}.
The integrals in \eqref{tail2} can be controlled using \eqref{energy bound2} and \eqref{energy bound1}. The integral in \eqref{tail4} is controlled by \eqref{energy bound-1}. The first term in \eqref{tail3} can be controlled using \eqref{energy bound1}. It remains to estimate \eqref{J1} and \eqref{J2}. The last two terms on the right-hand side of $J_\varepsilon ^1$ can be bounded using \eqref{energy bound0}. Therefore,
\begin{align*}
\int J_\varepsilon ^1\, dx
\overset{\eqref{energy bound0}}\leqslant &\int \nabla \mathbf{H}: \( \mathbf{n}_\varepsilon \otimes (\mathbf{n}_\varepsilon -\boldsymbol{\xi } )\) \(|\nabla \psi_\varepsilon |-\varepsilon |\nabla \mathbf{u}_\varepsilon |^2\)\, dx\nonumber\\
&+\int (\boldsymbol{\xi } \cdot \nabla) \mathbf{H}\cdot \mathbf{n}_\varepsilon \(|\nabla \psi_\varepsilon |-\varepsilon |\nabla \mathbf{u}_\varepsilon |^2\)\, dx+ C E_\varepsilon [\mathbf{u}_\varepsilon | I]\nonumber\\
\overset{\eqref{normal H}}\lesssim & \int |\mathbf{n}_\varepsilon -\boldsymbol{\xi } | \Big(\varepsilon |\nabla \mathbf{u}_\varepsilon |^2-\varepsilon |\Pi_{\mathbf{u}_\varepsilon }\nabla \mathbf{u}_\varepsilon |^2\Big)\, dx\\
&+\int |\mathbf{n}_\varepsilon -\boldsymbol{\xi } | \left|\varepsilon |\Pi_{\mathbf{u}_\varepsilon }\nabla \mathbf{u}_\varepsilon |^2-|\nabla \psi_\varepsilon |\right| \, dx\nonumber\\
&+ \int\min \(d^2_I ,1\) \(|\nabla \psi_\varepsilon |+\varepsilon |\nabla \mathbf{u}_\varepsilon |^2\)\, dx+ E_\varepsilon [\mathbf{u}_\varepsilon | I].
\end{align*}
The first and the third integral in the last step can be estimated using \eqref{energy bound0} and \eqref{energy bound3} respectively.
Then we employ \eqref{projectionnorm} to obtain
\begin{align*}
\int J_\varepsilon ^1\, dx
\lesssim &\, \int |\mathbf{n}_\varepsilon -\boldsymbol{\xi } | \left| \varepsilon |\Pi_{\mathbf{u}_\varepsilon }\nabla \mathbf{u}_\varepsilon |^2-|\nabla \psi_\varepsilon |\right|\, dx+ E_\varepsilon [\mathbf{u}_\varepsilon | I]\nonumber\\
\overset{\eqref{projectionnorm}}= &\, \int |\mathbf{n}_\varepsilon -\boldsymbol{\xi } | \sqrt{\varepsilon } |\Pi_{\mathbf{u}_\varepsilon }\nabla \mathbf{u}_\varepsilon | \left| \sqrt{\varepsilon } |\Pi_{\mathbf{u}_\varepsilon }\nabla \mathbf{u}_\varepsilon |-\frac{1}{\sqrt{\varepsilon }}|\partial d_F (\mathbf{u}_\varepsilon )|\right|\, dx+ E_\varepsilon [\mathbf{u}_\varepsilon | I].
\end{align*}
Finally applying the Cauchy-Schwarz inequality and then \eqref{energy bound2} and \eqref{energy bound1}, we obtain
$\int J_\varepsilon ^1 \lesssim E_\varepsilon [\mathbf{u}_\varepsilon | I].$
As for $J_\varepsilon ^2$ \eqref{J2}, we employ \eqref{xi der1} and \eqref{energy bound3} to obtain a similar estimate $\int J_\varepsilon ^2 \lesssim E_\varepsilon [\mathbf{u}_\varepsilon | I].$
All in all, we proved that the right-hand side of \eqref{time deri 4} is bounded by $E_\varepsilon [\mathbf{u}_\varepsilon | I]$ up to a multiplicative constant which only depends on $I_t$. \end{proof}
\noindent{\it Acknowledgements}.
Y. Liu is partially supported by NSF of China under Grant 11971314.
\end{document}
Walsh Fourier transform of the Möbius function
This question is related to this previous question where I asked about ordinary Fourier coefficients.
##Special case: is Möbius nearly orthogonal to Morse
Harold Calvin Marston Morse (24 March 1892 – 22 June 1977), August Ferdinand Möbius (November 17, 1790 – September 26, 1868)
Consider the sequence of values of the Möbius functions on nonnegative integers. (Starting with 0 for 0.)
0, 1, −1, −1, 0, −1, 1, −1, 0, 0, 1, −1, 0, −1, 1, 1, 0, −1, 0, −1, 0, 1, 1, −1, 0, 0, ...
And the Morse sequence
1, -1, -1, 1, -1, 1, 1, -1, -1, 1, 1, -1, 1, -1, -1, 1, -1, 1, 1, -1, 1, -1, -1, 1, 1, -1, -1, 1
Are these two sequences nearly orthogonal?
Remark: This case of the general problem follows from the solution by Mauduit and Rivat of a 1968 conjecture of Gelfond. They show that primes are equally likely to have odd or even digit sum in base 2. See Ben Green's remark below.
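For readers who want to experiment with this special case numerically, here is a rough sketch (Python; the trial-division Möbius routine, the helper names and the cutoffs are mine and are only meant for small-scale experiments). It computes the normalized correlation $\frac 1N\sum_{0\le m<N}\mu(m)(-1)^{s_2(m)}$, where $s_2(m)$ is the binary digit sum, so one can watch it drift towards $0$ as $N$ grows.

```python
def mobius(m: int) -> int:
    # naive trial-division Mobius function, with mu(0) = 0 as above
    if m == 0:
        return 0
    result, d = 1, 2
    while d * d <= m:
        if m % d == 0:
            m //= d
            if m % d == 0:   # m has a square factor
                return 0
            result = -result
        d += 1
    return -result if m > 1 else result

def morse_sign(m: int) -> int:
    # the Morse sequence: +1 for even binary digit sum, -1 for odd
    return -1 if bin(m).count("1") % 2 else 1

def correlation(N: int) -> float:
    # (1/N) * sum_{m < N} mu(m) * (-1)^{s_2(m)}
    return sum(mobius(m) * morse_sign(m) for m in range(N)) / N

for N in (2**10, 2**14, 2**18):
    print(N, correlation(N))
```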
Start with the Möbius function $\mu (m)$. (Thus $\mu(m)=0$ unless all prime factors of $m$ appear once and $\mu (m)=(-1)^r$ if $m$ has $r$ distinct prime factors.) Now, for an $n$-digit positive number $m$, regard the Möbius function as a Boolean function $\mu(x_1,x_2,\dots,x_n)$ where $x_1,x_2,\dots,x_n$ are the binary digits of $m$.
For example, $\mu (0,1,0,1)=\mu(2+8)=\mu(10)=1$. We write $\Omega_n$ for the set of 0-1 vectors $x=(x_1,x_2,\dots,x_n)$ of length $n$. We also write $[n]=\{1,2,\dots,n\}$ and $N=2^n$.
Next consider for some natural number $n$ the Walsh-Fourier transform
$$\hat \mu (S)= \frac{1}{2^n} \sum _{x\in \Omega_n} \mu(x_1,x_2,\dots,x_n)(-1)^{\sum\{x_i:i\in S\}}.$$
So $\sum_{S \subset [n]}|\hat \mu (S)|^2$ is roughly $6/\pi ^2$, and the prime number theorem asserts that $\hat \mu(\emptyset)=o(1)$. In fact, the known strong form of the prime number theorem asserts that
$$|\hat \mu (\emptyset )| \lt n^{-A} =(\log N)^{-A},$$
for every $A>0$. (Note that $|\hat \mu (\emptyset)=\sum_{k=0}^{N-1}\mu(k)$.)
Is it true that the individual coefficients tend to 0? Is it even known that $|\hat \mu (S)| \le n^{-A}$ for every $A>0$?
Solved in the positive by Jean Bourgain (April 12, 2011): Moebius-Walsh correlation bounds and an estimate of Mauduit and Rivat; (Dec, 2011) For even stronger results see Bourgain's paper On the Fourier-Walsh Spectrum on the Moebius Function.
Is it the case that
$$(*) \sum \{ \hat \mu ^2(S)~:~|S|<(\log n)^A \} =o(1), $$ for every $A>0$.
(This does not seem to follow from bounds we can expect unconditionally on individual coefficients.) Solved in the positive by Ben Green (March 12, 2011): On (not) computing the Mobius function using bounded depth circuits. (See Green's answer below.)
The Riemann Hypothesis asserts that $$|\hat \mu (\emptyset )| < N^{-1/2+\epsilon}.$$
Does it follows from the GRH that for some $c>0$, $$| \hat \mu (S)| < N^{-c},$$ for every $S$?
An upper bound of $(\log N)^{-{\log \log N}^A}$ suffices to get the desired application.
##The motivation
The motivation for these questions comes from a certain computational complexity extension of the prime number theorem. It asserts that every function on the positive integers that can be represented by a bounded-depth Boolean circuit in terms of the binary expansion has diminishing correlation with the Möbius function. This conjecture, which we can refer to as the $AC^0$ prime number conjecture, is discussed here, on my blog, and here, on Dick Lipton's blog. The conjecture follows from formula (*) by a result of Linial, Mansour and Nisan on Walsh-Fourier coefficients of $AC^0$ functions.
Question 3 suggests that perhaps we can deduce the $AC^0$ prime number conjecture from the GRH which would be of interest. Of course, it will be best to prove it unconditionally. (Ben Green proved it unconditionally).
For polynomial-size formulas, namely for functions expressible by depth-2 polynomial-size circuits, we may need even less. A result of Mansour shows that the inequality $|\hat \mu (S)| \le n^{-(\log \log n)^A}$, for every $A>0$, would suffice! Moreover, a conjecture of Mansour (which also follows from a more general conjecture called the influence/entropy conjecture; see this blog post for a description of both conjectures) implies that it will be enough to prove that
$$|\hat \mu (S)| \le n^{-A}$$
for every $A>0$, to deduce the PNT for formulas.
Let me mention that the question follows to a large extent a line of research associating $AC^0$ formulas with number theoretic questions. See the papers by Anna Bernasconi and Igor Shparlinski and the paper by Eric Allender Mike Saks Igor Shparlinski, and the paper Complexity of some arithmetic problems for binary polynomials by Eric Allender, Anna Bernasconi, Carsten Damm, Joachim von zur Gathen, Michael Saks, and Igor Shparlinski.
Related MO question: Odd-bit primes ratio
analytic-number-theory
computational-number-theory
computational-complexity
nt.number-theory
co.combinatorics
Gil Kalai
$\begingroup$ Gil, fascinating question. If $|S|$ is very large (say $S = [n]$) then this is basically the work of Mauduit and Rivat, who show that primes are equally likely to have odd or even digit sum in base 2. On the other hand if $|S|$ is very small (e.g. $|S| = 1$) then you've got to sum $\mu$ over a set that looks like a GAP, $P$, of small dimension; this should be possible decomposing $1_P$ into exponentials. Question is what happens in the middle! I'll think about it... $\endgroup$
– Ben Green
Mar 6, 2011 at 17:42
$\begingroup$ P.S. could you edit your post a bit? I think there are some $n^A$s that should be $n^{-A}$, etc. $\endgroup$
$\begingroup$ Thats very interesting Ben, for question (2) we need to worry only about S of size polylog (n) $\endgroup$
– Gil Kalai
$\begingroup$ Dear Ben, I do not see why the result of Mauduit and Rivat that you mentioned implies the statement about [n]. (The [n] case is a nice statement about Morse sequence being "orthogonal" to the Mobius function.) $\endgroup$
An update to my earlier answer. I've written a proof of this "AC0 prime number conjecture" as a short paper, which can be found here.
https://arxiv.org/abs/1103.4991
I thought a bit about establishing a nontrivial bound on the Fourier-Walsh coefficients $\hat{\mu}(S)$ for all sets $S$. My paper does this when $|S| < cn^{1/2}/\log n$ (here $S \subseteq \{1,\dots,n\}$). On the GRH it works for $|S| = O(n/\log n)$. I remarked before that the extreme case $S = \{1,\dots,n\}$ follows from work of Mauduit and Rivat.
I still believe that there is hope of proving such a bound in general, but this does seem to be pretty tough. At the very least one has to combine the work of Mauduit and Rivat with the material in my note above, and neither of these (especially the former) is that easy.
Ben Green
$\begingroup$ Very nice! I said "(This does not seem to follow from bounds we can expect unconditionally on individual coefficients.)" But I was wrong, it does seem to follow if you extend the bound for the empty coefficients to all other small coefficients. (Also I am a bit confused: what is the best known bound for SUM_{k=1}^N \mu(k)?) $\endgroup$
$\begingroup$ Gil, Well it's a bit more complicated than that. I'll try and write some proper notes on the whole thing. If it works out I'm not sure it's really publishable in the sense that it's the crudest possible joining of two things already in the literature. But then again, those are two quite different literatures, so perhaps the link should be properly noted. I think there are some more general observations I can extract from the Harman-Katai paper and I shall do so in due course. Best wishes Ben $\endgroup$
$\begingroup$ Regarding proving it for all Fourier coefficients: it is sort of interesting on its own, and it will imply Mobius randomness for functions of the binary digits whose Walsh expansions we understand. If you can prove (say under GRH) that the Fourier coefficient $\hat \mu(S)$ is below $X^{-1/4 +\epsilon}$ for every $S$ with $|S|\le n^{1/2+\epsilon}$, then it will show the "monotone PNT", namely that every function expressed by a monotone Boolean function (regardless of its computational complexity) of the binary digits is orthogonal to the Mobius function. $\endgroup$
$\begingroup$ Dear Ben, also maybe the results in the paper in the "Some background" already contain part (or all, although I did not find it) of what is needed. $\endgroup$
$\begingroup$ One obvious motivation for understanding the general case of Walsh functions is that we can believe that the AC^0 result extends to Boolean circuits which also allow addition mod 2 gates. This is the next step beyond AC^0 where still a lot is known and a PNT would be very welcome. If we can combine some facts from analytic number theory with the Razborov-Smolensky methods for this type of functions this will be very nice. $\endgroup$
Rather than updating the question, let me devote a separate answer to discussing the emerging knowledge. (Please correct me if I make any mistake.) I will update this answer when necessary.
First, the Prime Number Theorem in its strongest known form asserts that
$$\Big|\sum_{k=1}^X \mu(k)\Big| \le X/e^{\sqrt {\log X}}. $$
And the RH asserts that $$|\sum_{k=1}^X \mu(k)| \le X^{1/2+\epsilon}.$$
Let $X=2^n$, the Prime number theorem deals with the Walsh coefficient $\hat \mu(\emptyset)$.
(Remark: I am still a little confused about the situation, since the upper bound for the ordinary discrete Fourier coefficients in this answer by Matt Young is not as strong as the statement for the 0th coefficient given by the PNT. This is now clarified by Ben's remark below.)
##The second question and the $AC^0$-prime number conjecture (resolved by Ben Green, March 12).
Ben wrote a paper showing that $$\hat \mu (S) \le X/e^{\sqrt {\log X}}, $$ whenever $|S| \le n^{1/2-\epsilon}$, using the Harman-Kátai method. This is more than enough to imply a positive answer to question 2.
Ben's positive answer to question 2 implies the $AC^0$ Prime Number Conjecture (a.k.a. the Sarnak-Kalai conjecture)! In my opinion, this is a very nice result.
Ben's number-theoretic argument is rather delicate, but from it the implication is rather direct. Hastad's switching lemma implies that the total influence (a.k.a. average sensitivity) of an $AC^0$ Boolean function is polynomial in $\log n$, and this implies that most of the Fourier-Walsh weight sits below the polylog level, which together with an affirmative answer to question 2 gives the $AC^0$ PNC. The connection between $AC^0$ circuits and the Walsh expansion was first explored by Linial, Mansour and Nisan, and their full result (which was later improved a little by Hastad) asserts that the Fourier-Walsh coefficients decay exponentially above their expected value. The exponential decay does not play a role here, but it would imply stronger orthogonality consequences with better upper bounds on the Walsh coefficients of the Möbius function.
##The first question (Update April 12, resolved by Jean Bourgain)
The first question was if $\hat \mu(S)$ tends to zero uniformly with X and at what rate.
A special case of interest was the correlation between the Möbius function and the Morse function (which is $\hat \mu ([n])$). Ben Green noted that the method of Mauduit and Rivat gives directly that
$$\hat \mu ([n]) \le X^{-c}, $$ for some positive constant $c$.
Also according to Ben the results and methods of Harman and Kátai will give that $\hat \mu (S)$ uniformly tends to zero whenever $|S| \le n^{1/2-\epsilon}$ (in fact they give a stronger result mentioned below).
According to Ben, the techniques of Mauduit-Rivat are likely to work unless $S\cap[n/3]$ is very ``thin'', and the remaining cases may be within reach with more effort, combining Mauduit-Rivat and Harman-Kátai.
Update (April 12): Jean Bourgain proved (private communication) that for every Walsh function $W_S$ we have
$\sum_{m=1}^{X}\mu(m) W_S(m) \le X \cdot e^{-(\log X)^{1/10}}.$
In other words, $\hat \mu(S) \le e^{-(\log X)^{1/10}}.$
Jean also showed that under GRH
$\sum_{m=1}^X\mu(m)W_S(m) \le X^{1-(c/(\log\log X)^2)}.$
In other words, $\hat \mu(S) \le X^{-c/(\log \log X)^2}.$
This result suffices to show under the GRH the "monotone prime number conjecture."
Update Sept 14: Bourgain's paper is now arxived.
##The relation with known CS literature?
In the question there are links to several papers which deal with the related question of the inability of $AC^0$ functions to compute certain number-theoretic functions. These papers rely heavily on the Fourier expansion of $AC^0$ circuits, Linial-Mansour-Nisan, Hastad, etc.
It seems that the paper by Anna Bernasconi and Igor Shparlinski (Wayback Machine) and some papers cited there are most relevant. It looks as if there is a proof there controlling much of the Fourier weight of a function expressing square-freeness (which is close to Möbius but seems easier).
##Follow up questions
Give an affirmative answer to question (1).
Extend the PNT to functions expressed by ACC[p] circuits, namely bounded-depth Boolean circuits with mod $p$ gates. Note that question (1) is a very special case of ACC[2]. It would be nice to "merge" the Razborov-Smolensky method for dealing with ACC[p] functions with some ANT. Now that Ben has settled the PNC for $AC^0$ functions, this is a natural next step. I will elaborate on this question below.
Give an affirmative answer to question 3. It would imply that under GRH the AC^0 PNT extends "almost" all the way to log-depth. (Update: The new result of Bourgain comes very close to that.)
Showing that $\hat \mu (S) \le X^{-1/3}$ will imply "the prime number conjecture for monotone Boolean functions", namely that the Mobius function is asymptotically orthogonal to every function described by a monotone Boolean function of the digits. (No complexity assumptions.) (Update: The new result of Bourgain appears to imply this under GRH.)
(This probably implies statements like: if you consider a random sequence of integers $0=X_1,...,X_n$ so that $X_{i+1}$ is obtained from $X_i$ by switching a digit from 0 to 1, then the Mobius function will change sign many times along the sequence.)
It would be interesting to see if appropriate low-level complexity classes (also allowing random inputs to the circuits) account for other known results about "Mobius randomness". Interesting examples: standard L-functions, the Green-Tao bracketed polynomials, non-deterministic sequences in the sense of Peter Sarnak.
There is no special reason to state the AC^0 prime number theorem just for the binary digit expansion. Can it be extended to expansions w.r.t. other $p$'s?
Low degree polynomials over Z/2Z
The "Walsh-Fourier" functions considered in the question are high degree monomials over the real but they can be considered as linear functions over Z/2Z. For that, replace the values {-1,+1} by {0,1} both in the domain and range of our Boolean functions? What about low degree polynomials instead of linear polynomials?
If we can extend the results to polynomials over Z/2Z of degree at most polylog(n), this will imply, by a result of Razborov, Mobius randomness for AC0(2) circuits. (This is interesting also under GRH.)
Update (12 April): While this question is still open, Jean Bourgain was able to prove Mobius randomness for AC0(2) circuits of certain sublinear size. Jean also noted that the fact that the Mobius function itself is non-approximable by an AC0(2) circuit (namely, you cannot reach correlation of, say, 0.99) can easily be derived from the Razborov-Smolensky theorem, since an easy computation shows that $\mu(3x)^2$ has correlation $>0.8$ with the $0 \pmod 3$ function.
Moreover (as explained by Avi Wigderson), if we can show that certain functions have very low correlations with all low-degree polynomials over Z/2Z, this will already be a groundbreaking result in computational complexity. (Say, correlation smaller than $1/n$.) This will be interesting even under GRH.
Let me just say what low-degree polynomials are. You have a bunch of sets of variables; all the sets are small (smaller than $(\log n)^t$), and your function is the parity of the number of sets for which all variables have value '1'. (If the sets are singletons we are back to the Walsh functions.)
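Concretely, in the $\{0,1\}$ picture such a polynomial is just the parity of how many of the given sets are fully switched on; a tiny sketch (Python; the particular sets, the 0-based indexing and the sample input are placeholders of my own):

```python
def f2_polynomial(x, sets):
    # value over Z/2Z at the 0-1 vector x: parity of the number of monomials
    # (sets of variable indices) whose variables all equal 1
    return sum(1 for S in sets if all(x[i] for i in S)) % 2

# a degree-3 example on 6 variables; singleton sets would recover Walsh functions
sets = [{0, 2, 5}, {1, 3}, {4}]
print(f2_polynomial([1, 0, 1, 1, 0, 1], sets))   # prints 1
```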
##More updates, more questions (December 2011)
Update: Jean Bourgain has now proved that every monotone Boolean function is asymptotically orthogonal to the Mobius function, unconditionally.
Questions: In addition to questions mentioned above it will be interesting
To relate these results with other recent results on Mobius randomness.
To see if the results about Mobius randomness translate to results about primes. Namely, can results of the form
(*) A $\pm 1$ function $f$ is asymptotically orthogonal to the Mobius function
be "transformed" into results of the form:
(**) There are infinitely many primes $p$ such that $f(p)=1$?
Of course, we will probably also need to assume that the density of $\{n:f(n)=1\}$ is not too small. See also this MO question.
$\begingroup$ Gil, Regarding the "normal" Fourier coefficients of $\mu$. It basically has to do with the fact that the error term in the plain prime number theorem is $e^{-C\sqrt{\log X}}$. But in the prime number theorem in $a \mod{q}$, there is an additional term coming from a so-called ``Siegel zero''; a zero of a Dirichlet $L$-function $L(s,\chi)$ anomalously close to $1$. But these only exist for a very few values of $q$, and the observation that Harman-Katai make is that if $q$ ranges over powers of $2$ then you get to ignore them (to simplify rather dramatically). $\endgroup$
$\begingroup$ Sorry, when I say "exist" I mean "not known not to exist". Of course GRH implies there are no such zeros. $\endgroup$
$\begingroup$ @Gil 'promes'? Made me laugh. $\endgroup$
– David Roberts ♦
$\begingroup$ More seriously, the link to Bourgain's September paper (pdf from arxiv.de) was broken, so I gave a link to the paper's abstract page on the home arXiv site. And I've added the link to the December paper. $\endgroup$
$\begingroup$ Are the $e^{ \sqrt{ \log X}}$s in the first and third displayed equation meant to be $X / e^{ \sqrt{\log X}}$? $\endgroup$
– Will Sawin
The 2D Parker mean-field dynamo equations with various distributions of the $\alpha$- and $\omega$-effects are considered. We show that smooth profiles of $\alpha$ and $\omega$ can produce a dipole configuration of the magnetic field with a realistic magnetic energy spectrum. We emphasize that a fluctuating $\alpha$-effect leads to an increase of the magnetic energy at small scales, breaking the dipole configuration of the field. The considered geostrophic profiles of $\alpha$ and $\omega$ correspond to small-scale polarward/equatorward travelling waves with a small dipole field contribution. The same result is observed for the dynamic form of the $\alpha$-quenching, where the two branches of weak and strong solutions coexist.
Received 28 July 2014; accepted 30 July 2014; published 8 August 2014.
Citation: Reshetnyak M. Yu. (2014), The mean-field dynamo model in geodynamo, Russ. J. Earth Sci., 14, ES2001, doi:10.2205/2014ES000539.
Moduli stack of vector bundles
In algebraic geometry, the moduli stack of rank-n vector bundles Vectn is the stack parametrizing vector bundles (or locally free sheaves) of rank n over some reasonable spaces.
It is a smooth algebraic stack of negative dimension $-n^{2}$.[1] Moreover, viewing a rank-n vector bundle as a principal $GL_{n}$-bundle, Vectn is isomorphic to the classifying stack $BGL_{n}=[{\text{pt}}/GL_{n}].$
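The dimension can be read off from this quotient presentation: for a quotient stack one has $\dim [X/G]=\dim X-\dim G$, so $\dim \operatorname {Vect} _{n}=\dim [{\text{pt}}/GL_{n}]=0-\dim GL_{n}=-n^{2}.$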
Definition
For the base category, let C be the category of schemes of finite type over a fixed field k. Then $\operatorname {Vect} _{n}$ is the category where
1. an object is a pair $(U,E)$ of a scheme U in C and a rank-n vector bundle E over U
2. a morphism $(U,E)\to (V,F)$ consists of $f:U\to V$ in C and a bundle-isomorphism $f^{*}F{\overset {\sim }{\to }}E$.
Let $p:\operatorname {Vect} _{n}\to C$ be the forgetful functor. Via p, $\operatorname {Vect} _{n}$ is a prestack over C. That it is a stack over C is precisely the statement "vector bundles have the descent property". Note that each fiber $\operatorname {Vect} _{n}(U)=p^{-1}(U)$ over U is the category of rank-n vector bundles over U where every morphism is an isomorphism (i.e., each fiber of p is a groupoid).
See also
• classifying stack
• moduli stack of principal bundles
References
1. Behrend 2002, Example 20.2.
• Behrend, Kai (2002). "Localization and Gromov-Witten Invariants". In de Bartolomeis; Dubrovin; Reina (eds.). Quantum Cohomology. Lecture Notes in Mathematics. Lecture Notes in Mathematics. Vol. 1776. Berlin: Springer. pp. 3–38.
Korosensei is fast, but can he do better?
Korosensei can move at Mach 20 and takes advantage of this fact to make frequent trips around the world. Obviously, in the interest of efficiency, he often takes the opportunity to make multiple stops per trip. But what is the most efficient way for him to do so?
We can think about this using graphs. Suppose all the possible destinations are vertices and edges with costs connect the vertices, representing the cost of traveling from some destination $u$ to another destination $v$. At first glance, it might make sense to use physical distance in our case, but that is not necessarily a measure of the "best" way to get there. For instance, taking a bus or train between two cities may get you to the same place, but the actual cost depends on the path that's taken and you may choose to include other criteria like the cost of tickets and other stuff like that.
So the problem is this: Korosensei has a list of places he wants to hit up during lunch hour. What is the most cost-effective way of getting to each of those places exactly once and getting back to school? This is what's known as the Traveling Salesman problem (TSP).
As it turns out, this problem, like many graph theory problems, is NP-complete, meaning it's quite computationally hard and taxing for computers to solve. Formally, if we're given a graph $G = (V,E)$ and a cost function $d: V \times V \to \mathbb R$, the problem is to find a cycle that visits every vertex exactly once with smallest possible cost. Usually, if we have $n$ vertices, then $G$ is the complete graph on $n$ vertices, that is, a graph where every vertex has an edge to every other vertex. Otherwise, it may not be possible to find such a cycle (and the problem of finding such a cycle is also an NP-complete problem, Hamiltonian cycle).
It's called the Traveling Salesman problem because the idea is that if you're a traveling salesman, you'd want to find that least-cost cycle because it'd be the most cost-effective way to hit up all the cities you need to go to in order to sell whatever it is that you're selling. Of course, the hardness of the problem means that our poor salesman or Korosensei is stuck having to check the cost of every cycle. How many are there? Well, since we're dealing with a complete graph, this means we can get to any vertex from any vertex, so we can take any permutation of our vertices and that's a cycle, and there are $n!$ permutations. This means that as $n$ gets larger, the difficulty of solving this problem grows factorially, which is even faster than exponentially.
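To make that factorial blow-up concrete, here's a minimal brute-force solver. This is only a sketch for illustration (the destinations and costs below are made up, not anything Korosensei-approved): it literally checks every permutation, which is fine for a handful of stops and hopeless beyond a dozen or so.

```python
from itertools import permutations

def brute_force_tsp(cities, cost):
    """Return the cheapest tour visiting every city exactly once and returning home.

    cities: list of vertices; cost: dict mapping (u, v) pairs to edge costs.
    Checks all permutations, so runtime grows like n!; only viable for small n.
    """
    start, rest = cities[0], cities[1:]
    best_tour, best_cost = None, float("inf")
    for perm in permutations(rest):
        tour = (start,) + perm + (start,)
        total = sum(cost[(a, b)] for a, b in zip(tour, tour[1:]))
        if total < best_cost:
            best_tour, best_cost = tour, total
    return best_tour, best_cost

# Hypothetical lunch-hour destinations with symmetric, invented costs.
cities = ["school", "paris", "rio", "hawaii"]
pairs = {("school", "paris"): 4, ("school", "rio"): 7, ("school", "hawaii"): 3,
         ("paris", "rio"): 6, ("paris", "hawaii"): 8, ("rio", "hawaii"): 5}
cost = {**pairs, **{(b, a): c for (a, b), c in pairs.items()}}

print(brute_force_tsp(cities, cost))
# (('school', 'paris', 'rio', 'hawaii', 'school'), 18)
```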
One way we can attempt to make NP-complete problems feasible to solve is to loosen our demands a bit. Right now, we want the absolute least-cost cycle, but what if it's not necessary for us to get the least-cost cycle? What if we're okay with getting something that's within a factor of 2 or 3 or $f(n)$ for some function $f$ that depends on the size of the input? These are called approximation algorithms, in that they approximate the optimal solution by some factor. Unfortunately this doesn't work for TSP with an arbitrary cost function.
Why not? Suppose we had an approximation algorithm that's guaranteed to get within a factor of $f(n)$ of the optimal tour. Take any graph $G$ on $n$ vertices that we'd like to test for a Hamiltonian cycle, keep its edges at cost 1, and complete the graph by adding every missing edge at some enormous cost, bigger than $n \cdot f(n)$. If $G$ has a Hamiltonian cycle, then the optimal tour costs exactly $n$, so the approximation algorithm has to return a tour of cost at most $n \cdot f(n)$, and a tour that cheap can only use the original edges, which makes it a Hamiltonian cycle of $G$. On the other hand, the algorithm might return a tour costing more than $n \cdot f(n)$, meaning it must've used one of the new expensive edges I added and so we know the original graph didn't have a Hamiltonian cycle. Either way, we'd have solved Hamiltonian cycle efficiently, which we don't believe is possible, so no such approximation algorithm exists for general TSP unless P = NP.
Now this is a bit discouraging, but Korosensei would encourage us to keep on thinking about how to assassinate the hell out of this problem. In the most general case, the cost function $d$ has no constraints and the way that I've initially motivated Korosensei's problem, $d$ can be arbitrary, with costs adjusted to his needs. However, some of the attempts to make TSP more computationally feasible have to do with making some reasonable restrictions on our cost function. This is another fairly standard approach to making computationally hard problems easier to deal with: figure out some cases of the problem that are still useful but might help to simplify or restrict the problem a bit.
One such restriction that makes sense here is to require that the cost function satisfy the triangle inequality: for any three destinations $u$, $v$, and $w$, we ask that $d(u,w) \leq d(u,v) + d(v,w)$, so that going directly from $u$ to $w$ is never more expensive than detouring through $v$. It's called the triangle inequality because of what I just described: draw a triangle out of $u$, $v$, and $w$ and the distance between any two should be shorter than going through the third point.
I say should because for an arbitrary cost function, this isn't always the case. One example is flight pricing. Sometimes, it's more expensive to fly from $u$ to $w$ than it is to fly from $u$ to $v$, even if the flight stops over at $w$. Anyhow, Korosensei doesn't have to deal with silly things like flight pricing and so we impose this reasonable triangle inequality condition on his cost function. Does it help? It turns out it does and we can get an approximation that'll be at most twice the cost of the optimal solution.
How? Well, first we find a minimum spanning tree. A spanning tree of a graph is a tree (a connected graph that contains no cycles) that connects all of the vertices of the graph together. The minimum spanning tree (MST) is the spanning tree with the least total weight out of all the spanning trees. The nice thing about MSTs is that we know how to compute them efficiently, and once we have one, the triangle inequality lets us turn it into a tour that costs at most twice the optimum, as sketched below.
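Here's roughly how the rest of that standard 2-approximation (sometimes called the double-tree algorithm) goes: walk the MST in preorder, skip any vertex you've already visited, and return home. Deleting one edge of the optimal tour leaves a spanning tree, so the MST costs no more than the optimal tour; the walk traverses each MST edge at most twice, and the triangle inequality guarantees the shortcuts never increase the cost, so the tour costs at most twice the optimum. A minimal sketch, with invented coordinates (Euclidean distances automatically satisfy the triangle inequality) and assuming the networkx library is available:

```python
import math
import networkx as nx

def double_tree_tsp(points, start):
    """2-approximation for metric TSP: build an MST, walk it in preorder, shortcut repeats.

    points: dict mapping vertex name to (x, y); Euclidean distances obey the
    triangle inequality, so skipping already-visited vertices never adds cost.
    """
    G = nx.Graph()
    names = list(points)
    for i, u in enumerate(names):
        for v in names[i + 1:]:
            G.add_edge(u, v, weight=math.dist(points[u], points[v]))

    mst = nx.minimum_spanning_tree(G, weight="weight")
    # Preorder DFS of the MST is the shortcut version of walking every edge twice.
    order = list(nx.dfs_preorder_nodes(mst, source=start))
    tour = order + [start]
    total = sum(G[u][v]["weight"] for u, v in zip(tour, tour[1:]))
    return tour, total

# Hypothetical coordinates for Korosensei's lunch stops, for illustration only.
points = {"school": (0, 0), "paris": (2, 6), "rio": (7, 1), "hawaii": (5, 5)}
print(double_tree_tsp(points, start="school"))
```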
Given how fast Korosensei is, this is probably good enough for him and is a decent tradeoff between the time it takes to solve the problem and his actual travel time.
This entry was posted in Anime, math and tagged Anime, assassination classroom, manga, math, traveling salesman by blkmage. Bookmark the permalink. | CommonCrawl |
Monetary Policy, Natural Resources, and Federal Redistribution
Ohad Raveh1
Environmental and Resource Economics (2020)
Can monetary policy shocks induce redistribution across natural resource rich and poor states within a federation? We conjecture that resource-rich states are capital intensive, hence their investment is more responsive to changes in monetary policy. Consequently, contractionary monetary policy shocks (e.g., increases in the interest rate) may induce redistribution from resource-poor states to resource-rich ones, via an equalizing federal transfer scheme, because investment is reduced more strongly in the latter. We test these hypotheses using a panel of U.S. states covering several decades, and find that: (1) resource-rich states are significantly and persistently more capital intensive; (2) contractionary monetary policy shocks induce a relative drop (increase) in investment (federal transfers) in resource-rich states, over the course of four years; (3) these patterns are driven by resource-induced differences in the capital share in the economy. We estimate that a one standard deviation contractionary monetary shock induces, within the first year, federal redistribution of approximately \(\$2.5\) billion from the resource-poor to the resource-rich states, representing about \(11\%\) of the total average annual federal transfers received by the latter states.
See Ramey (2016) for a detailed review.
See, e.g., Ledoit (2011), Brunnermeier and Sannikov (2013), Doepke et al. (2015), Gornemann et al. (2016), and Coibion et al. (2016). Notably, this was also accompanied by a shift in policymakers' attention to the relation between monetary policy and economic inequality (see, e.g., Mersch 2014; Bullard 2014; Bernanke 2015; Forbes 2015).
Previous research studied the heterogeneous impacts of monetary policy across different intra-federal (cross-state) dimensions, including for instance the extent of price rigidities, and the characteristics of the housing market. We elaborate on this when reviewing the related literature in the next section.
Notably, such a scheme is a standard feature of virtually all federations, via which federal governments forward fiscal transfer payments to local governments, partially in an attempt to redistribute resources equitably across regions. Importantly, it represents the primary, direct federal redistribution mechanism [see, e.g., Martinez-Vazquez and Searle (2007)], and consequently, the focus of our analysis.
Capital in this context refers to reproducible capital, namely the component of capital that is financed through investment, and thus does not include natural capital related to mineral production (i.e., capital from which point-source resources, on which we focus, are produced).
For instance, data from the EUKLEMS data set (O'Mahony and Timmer 2009) indicate that the mining sector has the largest average share of capital compensation in total compensation among the major NAICS industries, over the period of 1970–2007.
Evidence for these input–output linkages between oil and gas and various manufacturing sectors at the U.S. county level is provided by Allcott and Keniston (2018).
Grants under the Medicaid system are allocated to the states in accordance with the Federal Medicaid Assistance Percentages (FMAP) formula, where: \(FMAP=1-0.45\times \left(\frac{\text{state per capita income}}{\text{U.S. per capita income}}\right)^{2}\).
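Purely as an illustration of how this matching formula redistributes toward lower-income states, here is a minimal sketch. All income figures are invented, and in practice the statute also bounds the resulting percentage, which the raw formula here ignores.

```python
def fmap(state_income_pc, us_income_pc):
    """FMAP from the formula in the note above:
    FMAP = 1 - 0.45 * (state per-capita income / U.S. per-capita income)^2.
    (In practice the statute also bounds the rate; that is ignored here.)"""
    return 1 - 0.45 * (state_income_pc / us_income_pc) ** 2

# Invented per-capita incomes, purely for illustration.
us_income = 50_000
for label, income in [("higher-income state", 60_000),
                      ("average state", 50_000),
                      ("lower-income state", 40_000)]:
    print(f"{label}: FMAP = {fmap(income, us_income):.1%}")
# A state whose income falls relative to the U.S. average sees its federal
# match rate rise, which is the equalizing channel discussed in the text.
```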
In the empirical part we show that such reactions are observed as quickly as within the same fiscal year as the income-affecting shocks, and up to several years after their occurrence. This persistence is also a feature of some of the programs; for instance, Medicaid's formula often adopts income measures from the preceding three years, thus impacting the patterns of redistribution both in the short and medium terms.
We undertake a sectoral-level analysis to be able to isolate the mining sector, which as will be evident is important for effectively excluding measures related to natural capital.
Severance taxes are levied by U.S. states on the exploitation of natural resources located in their territories.
This set of additional controls includes the following state indicators, in addition to the capital share: real GSP per capita, population, real per capita government expenditures, and the unemployment rate.
Various studies, however, have shown that the interest rate is an important transmission channel of redistribution, including Bassetto (2014), Costinot et al. (2014), and Hurst et al. (2016).
Ferrero and Seneca (2015) examine the connection between monetary policy and resource richness; however, they do so within the context of deriving the optimal monetary policy under a shock to the oil price. In contrast, we consider the implications of exogenous resource-richness and monetary policy shocks to federal redistribution and other economic indicators at the local level.
Notably, the share of capital compensation in value added is a direct measure of \(\alpha\) in a standard CRS aggregate production function, \(F(K,L)=K^{\alpha }L^{\beta }\), which in turn points at the extent of the intensity of capital usage in production. In addition, it is consistent with the measure employed by related studies [see, e.g., Karabarbounis and Neiman (2014)].
Garofalo and Yamarik (2002) applied their methodology to construct a state-level panel data series of capital stocks.
Notably, the EUKLEMS methodology regards capital compensation as the value added residual after subtracting labor compensation from it. This definition, in turn, suggests that the capital share measure is equivalent to one minus the value added share of labor compensation. Consequently, it excludes taxes and accounts for amortization.
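A one-line illustration of that accounting identity, with invented numbers:

```python
def capital_share(value_added, labor_compensation):
    """Capital share following the convention described above: capital
    compensation is the value-added residual after labor compensation,
    so the share equals 1 minus labor compensation over value added."""
    return 1.0 - labor_compensation / value_added

# Invented state-sector aggregates (e.g., $ millions), for illustration only.
print(round(capital_share(value_added=1_000, labor_compensation=620), 3))  # 0.38
```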
This includes all the states that have an average share of severance tax income in total tax income of at least \(10\%\) over the sample period.
While our focus is on U.S. states, this observation is also more generally applicable across countries. Figure A1 in the Online Appendix presents a similar graph for a sample of 217 countries, which points at a similar difference in the capital share of resource rich and poor countries (see the Online Appendix for the list of countries, classifications, and sources).
We are concerned with the natural capital that is related to mineral production specifically because the latter represents the key difference between the resource rich and poor states, by construction.
The mining sector represents the sector in which subsoil minerals are produced.
This measure may, to some extent, capture differences in tax and extraction rates, technologies, or characteristics related to the mobile tax bases (labor and capital). However, these differences are relatively marginal in terms of affecting the cross-state differences in natural endowments, given the federal context (which exhibits significant convergence, mobility, and tax competition across states), and the long term perspective that considers averages over several decades. Nonetheless, we also examine different resource measures, and consider the impact of cross-state differences in more detail, when testing the robustness of the baseline specifications.
This methodology is based on the notion that changes in the oil prices induce differential impacts across resource abundance levels, hence yielding concurrent cross-sectional and time variation that is not swept away by the fixed-effects. This approach has been adopted by various studies in the related literature, including James (2015) and Perez-Sebastian and Raveh (2016), among others.
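To make the identification idea concrete, a rough sketch of such a specification is given below. This is not the paper's actual estimation; all variable and file names are placeholders, and pandas/statsmodels are simply assumed to be available. The point is only that the interaction of a national price with a fixed state characteristic varies across both states and years, so it survives two-way fixed effects.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical state-year panel with columns:
#   state, year, transfers_pc (outcome), oil_price (national), res_share (state average)
df = pd.read_csv("state_panel.csv")  # placeholder file name

# oil_price alone is absorbed by year effects and res_share by state effects,
# but their product is identified.
df["price_x_resource"] = df["oil_price"] * df["res_share"]

model = smf.ols(
    "transfers_pc ~ price_x_resource + C(state) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["state"]})

print(model.params["price_x_resource"])
```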
Although absorbed by the state and year fixed effects, CS and Oil are outlined separately in (4) for completeness.
Notably, 29 states have coastal access, 17 states are located in the American South, and 14 states have no gubernatorial term limits. See the Data Appendix for the list of states included in each case. Together with states' land size, the plausible exogeneity of each measure is motivated by its cross-sectional nature.
We execute this procedure utilizing lagged levels of the explanatory variables, dated \(t-1\) to \(t-3\), as instruments. We assume that the endogenous variables are persistent in the medium term, and that level lags affect the dependent variable only via their impact on the corresponding endogenous variables, suggesting that the proposed instruments yield a viable first stage, and that they follow the exclusion restriction. The results are robust to employing different lagged periods for the set of instruments, including the maximum available lags provided by the sample.
The period covered is limited by the availability of the main measure of monetary policy shocks employed, as we describe below.
We annualize their raw quarterly series by summing up the corresponding quarterly observations.
Romer and Romer (2004) estimate these shocks in two steps. First, they derive intended changes in the Federal Funds Rate from narrative records of internal briefings to the Federal Open Market Committee. Second, they regress predicted developments in interest rates on changes in the Federal Reserve's Greenbook forecasts to derive a typical response function. The policy shocks are then deviations from this function, expressed in percentage points.
Importantly, motivated by the plausible exogeneity of the international price of oil, the extent of state mineral production is not endogenous to the monetary policy. We provide supportive evidence for this conjecture in Table A4 in the Online Appendix which shows that, controlling for real GSP per capita, state fixed effects, and the international oil price (and with the exclusion of year fixed effects, which otherwise absorb the monetary shocks and oil price) monetary policy shocks are not directly associated with the resource abundance proxies tested in the main analysis.
Notably, this methodology follows that of other recent studies that have also examined the heterogeneous state effects of national shocks, by testing the impact of an interaction between state factors and a national shock, including de Ridder and Pfajfar (2017), Perez-Sebastian et al. (2019), and Williams and Liu (2019), among others.
Considering the unequal burden of federal taxation (Albouy 2009), we control for income in the analysis presented in a later sub-section. As will be evident, this addition does not alter the main results.
States' capital expenditures are financed mostly through (state) public debt, which is responsive to contemporaneous changes in the interest rate. By being matched to these local capital expenditures, related federal transfers are hence similarly affected by changes in the interest rate, despite financing long-term projects.
Although absorbed by the state fixed effects, CS is outlined separately in (6) for completeness.
Similar to monetary, T is added separately in (7), despite being absorbed by the time fixed effects, for completeness.
We employ annualized versions of these shocks, by either aggregating their quarterly values in the cases of the TFP, tax, and fiscal changes, or considering a recession in case one is reported in at least one of the quarters within the given year. Similar to the case of monetary, the plausible exogeneity of these shocks is motivated by their national perspective, under which no specific state is sufficiently large to induce changes.
For instance, Perez-Sebastian et al. (2019) report that federal tax shocks induce systematically different impacts on the vertical tax reactions of resource rich and poor U.S. states.
The length of the examined horizon is based on the repeated observation in the related literature that the effects of monetary policy are observed, and measurable, over the medium-term horizon of approximately four years, after which additional long-term factors (e.g., technology) play a dominant role [see, e.g., Christiano et al. (2005)].
Notably, consistent with the observation that there is convergence to zero over the given horizon [see, e.g., Christiano et al. (2005)], the observed differential patterns imply that there are persistent differences in the rate of convergence across the state-groups.
See, e.g., Auray and Eyquem (2014) for a reminiscent illustration within a standard New-Keynesian framework of a monetary union.
Albouy D (2009) The unequal geographic burden of federal taxation. J Politi Econ 117(4):635–667
Allcott H, Keniston D (2018) Dutch disease or agglomeration? The local economic effects of natural resource booms in modern america. Rev Econ Stud 85(2):596–731
Arellano B, Bond S (1991) Some tests of specification for panel data: Monte Carlo evidence and an application to employment equations. Rev Econ Stud 58(2):277–297
Auclert A (2019) Monetary policy and the redistribution channel. Am Econ Rev 109(6):2333–2367
Auray S, Eyquem A (2014) Welfare reversals in a monetary union. Am Econ J Macroecon 6(4):246–90
Bassetto M (2014) Optimal fiscal policy with heterogeneous agents. Quant Econ 5(3):675–704
Benigno P (2004) Optimal monetary policy in a currency area. J Int Econ 63:298–320
Benigno P, Lopez-Salido D (2006) Inflation persistence and optimal monetary policy in the Euro area. J Money Credit Bank 38:587–614
Bernanke B (2015) Monetary policy and inequality. Ben Bernanke's Blog, 1 June 2015
Besley T, Case A (2003) Political institutions and policy choices: evidence from the United States. J Econ Lit 41:7–73
Bishop A, Formby P, Thustle D (1992) Convergence of the South and non-South income distributions, 1969–1979. Am Econ Rev 82(1):262–272
Brunnermeier MK, Sannikov Y (eds) (2013) Redistributive monetary policy. Jackson hole. The Federal Reserve Bank of Kansas City, Kansas
Bullard J (2014) Income inequality and monetary policy: a framework with answers to three questions. Speech at C. Peter McColough series on international economics, council on foreign relations, 26 June 2015
Carlino G, DeFina R (1998) The differential regional effects of monetary policy. Rev Econ Stat 80(4):572–587
Carlino G, DeFina R (1999) The differential regional effects of monetary policy: evidence from the U.S. States. J Reg Sci 39(2):339–358
Carvalho C (2006) Heterogeneity in price stickiness and the real effects of monetary shocks. B.E. J Macroecon 6:1–58
Christiano LJ, Eichenbaum M, Evans CL (2005) Nominal rigidities and the dynamic effects of a shock to monetary policy. J Politi Econ 113(1):1–45
Coibion O, Gorodnichenko Y, Kueng L, Silvia J (2016) Innocent bystanders? monetary policy and inequality in the US. In: Working paper 18170, National Bureau of Economic Research
Costinot A, Lorenzoni G, Werning I (2014) A theory of capital controls as dynamic terms-of-trade manipulation. J Politi Econ 122:77–128
de Ridder M, Pfajfar D (2017) Policy shocks and wage rigidities: empirical evidence from regional effects of national shocks. In: Cambridge-INET Working Paper, No. 2017/06
Doepke M, Schneider M (2006) Inflation and the redistribution of nominal wealth. J Politi Econ 114:1069–1097
Doepke M, Schneider M, Selezneva V (2015) Distributional effects of monetary policy. Hutchins Center Working Papers
Evers MP (2012) Federal fiscal transfer rules in monetary unions. Eur Econ Rev 56(3):507–525
Evers MP (2015) Fiscal federalism and monetary unions: a quantitative assessment. J Int Econ 97(1):59–75
Farhi E, Werning I (2014) Fiscal Unions. NBER Working Paper No. 18280
Fernald J (2014) A quarterly, utilization-adjusted series on total factor productivity. Federal Reserve Bank of San Francisco Working Paper 2012–19
Ferrero A, Seneca M (2015) Notes on the underground: monetary policy in resource-rich economies. OxCarre Working Paper No. 158
Forbes K (2015) Low interest rates: king midas' golden touch?. Speech at the Institute of Economic Affairs, 24 Feb 2015
Garofalo G, Yamarik S (2002) Regional convergence: evidence from a new state-by-state capital stock series. Rev Econ Stat 84:316–323
Gilchrist S, Schoenle R, Sim J, Zakrajsek E (2016) Financial heterogeneity and monetary union. Working paper, Boston University
Gornemann N, Kuester K, Nakajima M (2016) Doves for the rich, hawks for the poor? distributional consequences of monetary policy
Hurst E, Keys BJ, Seru A, Vavra J (2016) Regional redistribution through the US mortgage market. Am Econ Rev 106(10):2982–3028
James A (2015) US state fiscal policy and natural resources. Am Econ J Econ Policy 7(3):238–257
Jorda O (2005) Estimation and inference of impulse responses by local projections. Am Econ Rev 95:161–182
Karabarbounis L, Neiman B (2014) The global decline of the labor share. Q J Econ 129(1):61–103
Ledoit O (2011) The redistributive effects of monetary policy. University of Zurich Department of Economics Working Paper No. 44
Martinez-Vazquez J, Searle B (2007) Fiscal equalization: challenges in the design of intergovernmental transfers. Springer, New York
McKinnon RI (1963) Optimum currency areas. Am Econ Rev 53(4):717–725
Mersch Y (2014) Monetary policy and economic inequality. Keynote speech, corporate credit conference, 17 Oct 2014
Mundell RA (1961) A theory of optimum currency areas. Am Econ Rev 51(4):657–665
Nickell S (1981) Biases in dynamic models with fixed effects. Econometrica 49:1417–1426
O'Mahony M, Timmer MP (2009) Output, input and productivity measures at the industry level: the EU KLEMS database. Econ J 119(538):F374–F403
Perez-Sebastian F, Raveh O (2016) The natural resource curse and fiscal decentralization. Am J Agric Econ 98(1):212–230
Perez-Sebastian F, Raveh O, Reingewertz Y (2019) Heterogeneous vertical tax externalities and macroeconomic effects of federal tax changes: the role of fiscal advantage. J Urban Econ 112:85–110
Ramey V (2016) Macroeconomic shocks and their propagation. In: Handbook of macroeconomics (Forthcoming), Elsevier
Ramey V, Zubairy S (2018) Government spending multipliers in good times and in bad: evidence from U.S. historical data. J Politi Econ 126(2):850–901
Romer DC, Romer HD (2004) A new measure of monetary shocks: derivation and implications. Am Econ Rev 94(4):1055–1084
Romer DC, Romer HD (2010) The macroeconomic effects of tax changes: estimates based on a new measure of fiscal shocks. Am Econ Rev 100:763–801
Rubio M (2014) Housing-market heterogeneity in a monetary union. J Int Money Finance 40:163–184
Sims AC, Zha T (2006) Were there regime switches in US monetary policy? Am Econ Rev 96(1):54–81
Tenreyro S, Thwaites G (2016) Pushing on a string: US monetary policy is less powerful in recessions. Am Econ J Macroecon 8(4):43–74
van der Ploeg F (2011) Natural resources: curse or blessing? J Econ Lit 49(2):366–420
Van der Ploeg F, Poelhekke S (2016) The impact of natural resources: survey of recent quantitative evidence. J Dev Stud 53(2):205–216
Venables A (2016) Using natural resources for development: why has it proven so difficult? J Econ Perspect 30:161–184
Verstegen L, Meijdam L (2016) The effectiveness of a fiscal transfer mechanism in a monetary union: a DSGE model for the Euro area. Working Paper, Tilburg University
Williams N, Liu C (2019) State-level implications of federal tax policies. J Monet Econ 105:74–90
Department of Environmental Economics and Management, Faculty of Agriculture, Hebrew University of Jerusalem, Rehovot, 76100, Israel
Ohad Raveh
Correspondence to Ohad Raveh.
Supplementary material 1 (pdf 257 KB)
Data Appendix
We use an annual, state-level panel that covers the 50 U.S. states over the period 1969–2007. Unless otherwise specified, variables are based on data from the U.S. Bureau of Economic Analysis and the U.S. Census Bureau. Descriptive statistics for all variables appear in Table 7.
Variable Definitions
Real per capita GSP Real Gross State Product divided by state population.
Hours worked per capita Number of hours worked per week divided by state population. The sample excludes Alaska and Arizona, and covers the period 2000–2007 (Source: variable 'UHRSWORK', IPUMS-USA).
Resource income Share of severance tax revenues in states' total tax revenues.
Real per capita capital stock State-level measure of capital stock, divided by state population, in constant prices (Source: Garofalo and Yamarik 2002, including an extension of it available at the second author's homepage).
Mining share GSP share of the mining sector.
Price measure An interaction of the real international oil price in year t and the average share of severance tax revenues in total state tax revenues over the complete sample period (1969–2007).
Resource dummy A dummy variable that captures the states that have an average share of severance tax revenues in total state tax revenues (computed over 1969–2007) above 10%; these include: AK, LA, MT, ND, NM, OK, TX, and WY.
Monetary policy shocks Monetary policy shocks à la Romer and Romer (2004) [Source: Tenreyro and Thwaites (2016)].
Sims–Zha shocks Contractionary monetary policy shocks [Source: Sims and Zha (2006)], available for the period 1963–2003.
Capital share Capital compensation divided by value added, available for the period 1970–2000 (Source: constructed by the author, as outlined in the text).
Real federal transfers per capita Real transfers from the federal to the state governments, divided by state population, examined in total, as well as in the following categories: public welfare, health and hospitals, employment security administration, education, highways, and natural resources.
Real investment per capita Real investment, divided by state population [Source: Garofalo and Yamarik (2002), including an extension of it available at the second author's homepage].
Real consumption per capita Real consumption, divided by state population. Sample covers the period 1997–2007.
Real income per capita Real income (wages and salaries) divided by state population.
Population State population.
Unemployment rate The number of unemployed individuals in the state, divided by state population.
Real government expenditures per capita Real state government expenditures, divided by state population.
Land State size in square miles.
North–South (NS) An indicator for whether the state is located in the American South, as defined by the U.S. Census Bureau, including: AL, AR, DE, FL, GA, KY, LA, MD, MO, MS, NC, OK, SC, TN, TX, VA, WV.
Coast An indicator for whether the state has a coast, including: AL, AK, CA, CT, DE, FL, GA, HA, IL, LA, ME, MD, MA, MI, MN, MS, NH, NJ, NY, NC, OH, OR, PA, RI, SC, TX, VA, WA, WI.
Term limits (TL) An indicator for whether the state has no gubernatorial term limits, including: CT, IA, ID, IL, MA, MN, ND, NH, NY, TX, UT, VT, WA, WI.
TFP shocks Aggregate, national TFP shocks, aggregated to an annual level [Source: Fernald (2014)].
Federal tax shocks (Fed) Narrative-based federal tax shocks, aggregated to an annual level, and normalized by U.S. GDP [Source: Romer and Romer (2010)].
Business cycles (Cycle) An indicator for whether the U.S. economy is in a recession (Source: U.S. Federal Reserve).
Defense shocks Narrative-based national defense news shocks, in real $ billions, aggregated to an annual level [Source: Ramey and Zubairy (2018)].
Table 7 Descriptive statistics
Raveh, O. Monetary Policy, Natural Resources, and Federal Redistribution. Environ Resource Econ (2020) doi:10.1007/s10640-020-00400-9
Accepted: 02 January 2020
DOI: https://doi.org/10.1007/s10640-020-00400-9
Monetary shocks
Natural resource abundance
Capital share
JEL Classification | CommonCrawl |
Stimulus dependent diversity and stereotypy in the output of an olfactory functional unit
Ezequiel M. Arneodo1,
Kristina B. Penikis1,2,
Neil Rabinowitz2,3,6,
Angela Licata1,
Annika Cichy4,
Jingji Zhang4,
Thomas Bozza ORCID: orcid.org/0000-0002-6574-81544 &
Dmitry Rinberg1,2
Nature Communications volume 9, Article number: 1347 (2018)
Olfactory system
Olfactory inputs are organized in an array of functional units (glomeruli), each relaying information from sensory neurons expressing a given odorant receptor to a small population of output neurons, mitral/tufted (MT) cells. MT cells respond heterogeneously to odorants, and how the responses encode stimulus features is unknown. We recorded in awake mice responses from "sister" MT cells that receive input from a functionally characterized, genetically identified glomerulus, corresponding to a specific receptor (M72). Despite receiving similar inputs, sister MT cells exhibit temporally diverse, concentration-dependent, excitatory and inhibitory responses to most M72 ligands. In contrast, the strongest known ligand for M72 elicits temporally stereotyped, early excitatory responses in sister MT cells, consistent across a range of concentrations. Our data suggest that information about ligand affinity is encoded in the collective stereotypy or diversity of activity among sister MT cells within a glomerular functional unit in a concentration-tolerant manner.
Objects in the world are represented by complex patterns of activity in peripheral sensory neurons. Prior to reaching cortical areas, these representations are transformed and reformatted. One of the central challenges in sensory neuroscience is to understand the functional role and computational logic of these transformations in extracting salient information about the environment.
In mammals, the olfactory bulb is the single interface between primary olfactory sensory neurons (OSNs) and higher brain regions such as piriform cortex. OSNs carry information about odors to the olfactory bulb via a vast array of glomeruli. Each glomerulus is a functional unit, collecting input from OSNs that express a single olfactory receptor gene1 and that share similar response properties2. Each glomerulus provides exclusive excitatory input to a set of 10–20 mitral/tufted (MT) cells, which project to higher brain areas3. The output of a given MT cell depends not only on the response of the glomerulus providing its input but also on the activity of the complex network of inhibitory interneurons within which it is embedded3.
It is still not understood how odor information is represented by MT cells. As an odor is inhaled, a unique subset of glomeruli is activated, resulting in a spatiotemporal pattern that evolves over the course of the respiration cycle4,5. Once this input reaches the MT layer, however, there is substantial heterogeneity among cellular responses. The population of MT cells responds to a given odor with various combinations of temporally patterned excitation and inhibition6,7. Recent observations from anesthetized animals suggest that MT cells that are connected to the same glomerulus (sister MT cells) respond to odors with variable excitation, inhibition, and response timing8,9,10. However, it is not clear how the complexity and diversity of MT responses relate to specific attributes of the odor stimulus. What determines whether sister MT cells show uniform or divergent responses to a given odorant? Are these response properties stable under natural variation in the odor signal, such as changes to odor concentration? Given that sister MT cells do not always behave in a unified way, what information can this subpopulation of cells convey about an odor?
Here we provide an answer to these questions by assessing the odor representation at the input and output of a glomerular functional unit in awake mice. Using a combination of mouse genetics, electrophysiology, and imaging, we define the functional properties of inputs to a genetically tagged glomerulus, and then use optogenetics to identify MT cells that get input from this glomerulus. We observe, for the first time, stimulus-dependent diversity or stereotypy among sister MT cell responses in awake animals. We find that relative ligand affinity for a given odorant receptor is a major determinant of whether the MT cells respond in a uniform manner, and whether individual cell responses are consistent across concentrations. Our results directly link a fundamental stimulus property with a robust, concentration-invariant response feature, and suggest a novel way of looking at olfactory coding.
Inputs and outputs of the M72 glomerulus
To study how a single channel in the olfactory bulb, an ensemble of MT cells connected to the same glomerulus, processes stimulus information, we characterized the inputs and outputs of the mouse M72 glomerulus.
First, to characterize the input, we measured the responses of genetically identified M72-expressing OSNs (M72-OSNs) to a defined set of M72 ligands in a semi-intact preparation of the olfactory epithelium11. The dendritic knobs of fluorescently labeled OSNs from M72-GFP mice12 were targeted for recording via perforated patch (Fig. 1a,b). The sensitivities of M72-OSNs to the ligands covered a large range: concentration at half-maximal response (EC50) values of the seven odorants spanned three orders of magnitude, from 0.03 to 36 µM (Fig. 1c, Supplementary Table 1). In all figures, we present odors rank-ordered by M72-OSN sensitivity, from least sensitive (high EC50) on the left to most sensitive (low EC50) on the right.
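As an illustration of the kind of fit involved (not the authors' analysis code; the dose-response points below are synthetic), a normalized Hill function can be fitted to the data to recover the EC50:

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(c, ec50, n):
    """Normalized Hill equation: response = c^n / (c^n + EC50^n)."""
    return c**n / (c**n + ec50**n)

# Synthetic dose-response data (concentrations in uM), for illustration only.
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])
resp = np.array([0.02, 0.10, 0.30, 0.62, 0.85, 0.95, 0.99])

(ec50, n_hill), _ = curve_fit(hill, conc, resp, p0=[0.3, 1.0])
print(f"EC50 ~ {ec50:.2f} uM, Hill coefficient ~ {n_hill:.2f}")
```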
Characterizing information in a single channel of the mouse olfactory bulb. Central insert: schematic of the olfactory bulb network. Axons from OSNs expressing the same receptor gene converge to form glomeruli, each providing the sole excitatory input to a few MT cells. Odor signals are subject to significant modification by a network of inhibitory neurons (small gray dots). a Experimental setup for characterizing OSN responses to odor. Patch clamp recordings are made from dendrites of fluorescently labeled OSNs expressing the M72 receptor. b Example traces of OSN odor responses. c Normalized dose–response curves for seven M72 ligands fitted by the Hill equation (n = 5–7 OSNs per odorant; mean ± SEM); EC50 values indicated in linear plot above. Odors used: 2-hydroxyacetophenone (2HA); ethyl tiglate (ETG); 4-methyl acetophenone (4MA); acetophenone (ACP); menthone (MEN); benzaldehyde (BNZ); and 2,4-dimethyl acetophenone (DMA). EC50 values are given in Supplementary Table 1. d Experimental setup for imaging. An awake, head-fixed mouse (OMP-GCaMP + M72-RFP) with implanted window above the OB is positioned under the microscope. e Left: image of a RFP M72 glomerulus. Right: Ca2+ image of glomerular response to an odor (2HA). M72 glomerulus here and further is marked by magenta arrow. f Experimental setup for in vivo recording of odor responses from MT cells connected to the M72 glomerulus. A head-fixed mouse is positioned in front of the odor port. The sniff signal is recorded by a pressure sensor via a cannula implanted in the nasal cavity. Brief pulses of blue light are delivered to the ChR2-expressing M72 glomerulus through an optical fiber positioned above the glomerulus. MT cell responses are recorded with a Si-probe inserted nearby. g Example of MT cell excitation following laser stimulation of the M72 glomerulus. Raster plot (upper panel) and PSTH (lower panel) around the onset of a 1 ms pulse showing the stimulus response (black line) and the baseline activity (gray line). h Distribution of response latencies to a 1 ms, 5–10 mW light pulse. Light-responsive cells with latencies longer than 20 ms (colored gray in the histogram) were excluded from the analysis
Second, to confirm that the M72 ligands would drive MT cells in vivo, we imaged presynaptic OSN activity in identified M72 glomeruli in awake mice (Fig. 1d, e). We used a strain of mice in which all the OSNs express the calcium activity indicator GCaMP3, and in which M72-OSNs also express the red fluorescent protein (RFP). This allowed us to assess the level of activation of M72 (and surrounding) glomeruli for each of the odorant stimuli and concentrations used to record MT cell activity.
Third, to characterize the output of the M72 glomerulus, we measured responses of M72-MT cells to the same odorants. To do so, we developed a novel method to identify these cells in awake, freely breathing animals (Fig. 1f). We used a strain of mice in which M72-OSNs express a channelrhodopsin2-yellow fluorescent protein fusion protein (ChR2-YFP) and are therefore light-sensitive13. We periodically stimulated the M72 glomerulus with a 473 nm light pulse while recording extracellular activity in the olfactory bulb (Fig. 1f). Those cells whose firing rate increased shortly after light stimulation were considered putative M72-MT cells (Fig. 1g). The distribution of light-evoked response latencies had its mode and median at 6 ms (Fig. 1h, see Methods); we excluded cells with latencies slower than 20 ms as likely being more than one synapse from the M72 glomerulus. Most of these putative M72-MT cells were recorded from different animals. To compare M72-MT cells with the general MT population, we also recorded generic (i.e., non-M72) cells. No differences were evident between M72-MT and generic MT cell populations in the distributions of spontaneous firing rate, preferred sniff phase, and recording depth (Supplementary Fig. 1). In total, we recorded N = 53 M72-MT cells and 312 generic MT cells.
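A simplified stand-in for that classification step might look as follows; this is illustrative only, and the exact statistical criterion behind "firing rate increased shortly after light stimulation" is not reproduced here.

```python
import numpy as np

def light_response_latency(spike_times, pulse_times, bin_ms=1.0, win_ms=50.0, k_sd=3.0):
    """Latency (ms) of the first post-pulse PSTH bin exceeding baseline mean + k_sd * SD.

    spike_times, pulse_times: numpy arrays in ms. The baseline is taken from the
    window preceding each pulse. Returns np.nan if no bin crosses the threshold.
    This is a simplified stand-in for the criterion described in the text.
    """
    edges = np.arange(-win_ms, win_ms + bin_ms, bin_ms)
    counts = np.zeros(len(edges) - 1)
    for t0 in pulse_times:
        counts += np.histogram(spike_times - t0, bins=edges)[0]
    rate = counts / (len(pulse_times) * bin_ms / 1000.0)   # spikes/s in each bin
    pre = rate[edges[:-1] < 0]
    thresh = pre.mean() + k_sd * pre.std()
    post_idx = np.where((edges[:-1] >= 0) & (rate > thresh))[0]
    return edges[post_idx[0]] if post_idx.size else np.nan

# A cell would be treated as a putative M72-MT cell if its latency is non-negative
# and at most 20 ms, mirroring the exclusion rule described above.
```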
Functional characterization of MT cells
MT cell activity is strongly influenced by the temporal dynamics of respiration6,14 and the duration of odor exposure. In freely breathing, head-fixed mice, there is considerable variability in sniff frequency and duration (Fig. 2a, b, c). Such variability causes peri-stimulus time histograms (PSTHs) of MT cell odor responses to be temporally smeared (Fig. 2c)6, and makes it difficult to compare MT responses between different mice with different sniff patterns. Here we monitored MT responses in relation to the sniff cycle and focused our analyses on slower sniffs—those with an inhalation duration > 100 ms (Fig. 2c)—because they comprised 75% of all sniffs across all mice, while the rarer, fast sniffs seemed to mark a distinct behavioral state (Supplementary Fig. 2). MT responses during fast sniffs showed the same general trends as during slow sniffs, but the lower number of events precluded a rigorous analysis (Supplementary Fig. 3). Finally, to avoid adaptation effects, we restricted our analyses to the first sniff cycle after odor onset.
MT cell activity depends on sniff dynamics. a A pressure signal showing one complete sniff cycle, recorded from the mouse nasal cavity. b Scatter plot of all inhalation and sniff durations, collated across mice. Top: marginal histogram of sniff durations, across all sessions and mice (black), and from one example session (green). Right: marginal histogram of inhalation durations. c Spiking of MT cells depends on inhalation duration. Black dots: raster of spike times of an example M72-MT cell across 2100 inhalations, during baseline (no odor) condition. Responses are aligned to the onset of inhalation, and are sorted (rows) by inhalation duration. Colored background shows sniff phase: red = inhalation; blue = exhalation. The color map is positioned in a, aligned with the pressure axis. Horizontal dashed line demarcates the rarer fast inhalations (below) from the more common slower inhalations (above). d Snifflet model of MT cell responses. Top: pressure traces of three successive sniffs of short, medium, and longer inhalation duration. Inhalation periods illustrated with thickened lines. Middle: a model fit of the sniff-induced firing rate of the MT cell following a particular temporal pattern, denoted as a "snifflet". Bottom: the observed spikes. The time courses of the snifflets are the free parameters of the model; these are fitted to each cell, for each stimulus condition, given the observed spikes. e Snifflet fit to an example M72-MT cell response to a single odor. Left: estimated snifflet for this cell/odor. Shaded region: ±1 SEM. Here and further: the time axis for a snifflet is shown as a thick bar corresponding to the normalized duration of inhalation, followed by a thin bar for the rest of the normalized sniff. Right: gray bars show the trial-averaged PSTH; thick black line shows average firing rate across trials; thin green lines show the model-fitted firing rates on each trial, given the dilations induced by different inhalations
To further account for variability due to sniff dynamics, we developed a statistical model for the responses of MT cells that factored in both the dependency on the stimulus and on the pattern of sniffing. We modeled the spiking response of an MT cell as arising due to an odor-dependent firing rate pattern, a "snifflet", that gets temporally dilated as a function of the duration of each sniff (Fig. 2d). The model fits best when the temporal dilation is a function of the inhalation duration (Supplementary Fig. 4). To characterize how each cell responds to each odor, we estimate the corresponding snifflet from the observed spiking data (Fig. 2e), which we accomplish using fast Bayesian methods15,16. This snifflet representation factored out the variability in sniff duration, allowing us to compare activity across cells and across mice.
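To make the dilation idea concrete, here is a heavily simplified sketch, not the authors' Bayesian fitting code: a snifflet defined on a normalized sniff axis is stretched according to each sniff's inhalation duration, and the stretched rate is scored against the observed spikes with a Poisson log-likelihood. The inhalation fraction of the normalized axis and all parameter names are illustrative assumptions.

```python
import numpy as np

def dilated_rate(snifflet, t, inhale_dur, total_dur, inhale_frac=0.5):
    """Evaluate a snifflet (a rate template on a normalized 0-1 sniff axis) at real times t.

    The first inhale_frac of the normalized axis is mapped onto [0, inhale_dur] and the
    remainder onto (inhale_dur, total_dur], so the template is stretched according to the
    observed inhalation duration. inhale_frac is an illustrative choice, not a fitted value.
    """
    phase = np.where(
        t <= inhale_dur,
        inhale_frac * t / inhale_dur,
        inhale_frac + (1 - inhale_frac) * (t - inhale_dur) / (total_dur - inhale_dur),
    )
    grid = np.linspace(0, 1, len(snifflet))
    return np.interp(phase, grid, snifflet)

def poisson_loglik(snifflet, spike_times, inhale_dur, total_dur, dt=0.005):
    """Poisson log-likelihood (up to an additive constant) of one sniff's spikes
    under the dilated snifflet, with time discretized into bins of width dt seconds."""
    t = np.arange(0, total_dur, dt)
    lam = np.maximum(dilated_rate(snifflet, t, inhale_dur, total_dur), 1e-9) * dt
    counts = np.histogram(spike_times, bins=np.append(t, total_dur))[0]
    return float(np.sum(counts * np.log(lam) - lam))

# Fitting would maximize the log-likelihood summed over sniffs with respect to the
# snifflet values (the free parameters), e.g. with scipy.optimize.minimize.
```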
From this point forward, we characterize the odor-evoked MT cell responses by comparing the corresponding snifflets. Our results, however, do not depend on the specifics of these modeling decisions: when we used the snifflet model without temporal dilation, the results were qualitatively identical (Supplementary Fig. 5).
Diversity and stereotypy of MT cell responses
Despite receiving input from functionally similar OSNs, we observed a striking degree of response diversity across M72-MT cells. This diversity was evident directly in the raw patterns of activity, illustrated in the snifflet and raster plots in Fig. 3. Diversity was observed for most M72 ligands, presented at the same approximate concentration, 0.075 ± 0.01 µM. Interestingly, responses to a particular ligand, 2-hydroxyacetophenone (2HA), were less variable across the M72-MT cell population (right column of Fig. 3). Almost every cell responded to this odor with a short-latency increase in firing rate. After this robust, reproducible burst, the responses diverged and exhibited considerable variability. Notably, 2HA is the strongest ligand yet identified for M72, with an EC50 that is two orders of magnitude lower than that of any other identified M72 ligand11 (Fig. 1c, Supplementary Table 1).
MT cell responses to odor presentation. a Raster plots and snifflet estimates for three example M72-MT cells, for each odor (sorted by affinity) and baseline conditions. Rasters are shown in real time, as in Fig. 2, for the first sniff of each trial. Trials are sorted by inhalation duration, background color corresponds to sniff phase (as in Fig. 2), and inhalation offset is marked in red. The corresponding snifflet for each odor and cell (below, blue) was estimated based on the first sniff after odor onset. Baseline snifflets (first column, black) were estimated from activity in the three seconds prior to each odor onset. Baseline snifflets are overlaid in gray in each odor panel for comparison. Time axis for snifflets as in Fig. 2. b Top row: estimated snifflets from all recorded M72-MT cells. The snifflet of each cell is normalized such that a cell's mean snifflet value across odors at inhalation onset is zero, and its maximum peak across odors is unitary. Snifflets are omitted if the cell's responses were best described as constant over the duration of the sniff. Note the substantial variability in snifflet shapes for all odors except 2HA. Bottom row: all snifflets from the population of non-M72-MT cells
The differences in M72-MT behavior for 2HA and other ligands cannot be attributed simply to different levels of parent glomerulus activation, since 2HA and other odorants (like 2,4-dimethyl acetophenone, acetophenone, and 4-methyl acetophenone) evoked similar magnitude responses in the M72 glomerulus at the concentrations presented (Fig. 4a). Thus, direct feedforward activation of M72-MT cells via their parent glomerulus does not alone determine whether their responses are diverse or stereotyped.
Response variability/stereotypy across the M72-MT population. a Map of Ca2+ activity of OSN terminals in the vicinity of M72 glomerulus (magenta arrow) for all odors. Vertical bars indicate the average level of M72 glomerulus activation for each odor. b Mean snifflet responses. Left (for all following): schematic of measurement. Right: mean normalized snifflets for each odor (as in Fig. 3b), averaged across cells. M72-MT cells in blue; generic MT cells in pink; mean baseline snifflets in gray. Shaded region denotes ±1 SEM. c Polarity of the first significant response. Right: histograms of first response polarity (yellow: excitatory; green: inhibitory; gray: no significant rate change) for M72-MT cells (left bar; bold) and generic MT cells (right bar; desaturated). Statistical tests: Pearson's chi-squared, comparing number of excitatory/inhibitory responses for M72 and generic cells (here and further: *p < 0.05; **p < 0.01; and ***p < 0.001). d Latency to first significant response. Significance is measured as a 3σ difference between the odor-evoked snifflet and the baseline snifflet. Right: cumulative distribution functions (CDFs) of response latencies among the M72-MT (blue) and generic MT (pink) populations. Cells for which odor-evoked activity did not significantly deviate from baseline are omitted. For all odors except 2HA (last column), there is no significant difference between the latency distributions of the M72-MT population and of the generic MT population (Kolmogorov–Smirnoff two-sample tests)
To quantify the diversity across cell responses, we constructed several metrics (Fig. 4b–d). First, for each odor, we computed the mean response of the ensemble of MT cells. We normalized each cell's set of snifflets by the snifflet with the largest amplitude and then averaged these across cells. The mean responses of M72-MT cells to most odorants were barely distinguishable from their respective mean responses in the baseline (no odor) condition (Fig. 4b). In contrast, the excitatory response to the strongest ligand (2HA) was still present in the mean activity.
Second, we compared the polarity of the response of each cell to each odor. For each cell and odor, we labeled the response as excitatory or inhibitory, based on the sign of the first significant (3σ) deviation of the cell's odor-evoked snifflet from its baseline (i.e., no odor) snifflet (Fig. 4c, left). For all odors except 2HA, there was considerable diversity amongst M72-MT cells in first response polarity, with roughly one-third of cells having an excitatory response, one-third having an inhibitory response, and the remainder being unresponsive to the odor (i.e., no significant deviation from baseline). These distributions were indistinguishable from those found amongst the generic MT cell population (Fig. 4c, right; Pearson's chi-squared tests). Again, the exception was 2HA, for which almost every M72-MT cell's first significant response was excitatory.
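Stated as code, that labeling rule is roughly the following (an illustrative sketch; the authors' exact significance computation is not reproduced):

```python
import numpy as np

def first_response(odor_snifflet, base_snifflet, base_sd, n_sigma=3.0, t_axis=None):
    """Classify a cell/odor response from its first n_sigma deviation from baseline.

    Returns (polarity, latency): 'excitatory' or 'inhibitory' plus the time of the
    first significant bin, or ('none', nan) if no bin deviates. All inputs are
    arrays over the normalized sniff; base_sd is the baseline snifflet's SD per bin.
    """
    diff = np.asarray(odor_snifflet) - np.asarray(base_snifflet)
    sig = np.abs(diff) > n_sigma * np.asarray(base_sd)
    if not sig.any():
        return "none", np.nan
    i = int(np.argmax(sig))                      # index of the first significant bin
    latency = t_axis[i] if t_axis is not None else i
    return ("excitatory" if diff[i] > 0 else "inhibitory"), latency
```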
Third, we compared the onset latencies of odor responses. We computed these as the time of first significant deviation of a cell's odor-evoked snifflet from its baseline snifflet (Fig. 4d, left). For all odors but 2HA, the distributions of latencies for M72-MT cells were indistinguishable from those seen amongst the generic MT cell population (Fig. 4d, right; Kolmogorov–Smirnov tests). For 2HA, response latencies amongst M72-MT cells were consistently short (Fig. 4d).
Finally, we found that these properties did not significantly covary with the depth of the recording site (Supplementary Fig. 1a), mean spontaneous firing rate (Supplementary Fig. 1b), or preferred phase of firing during baseline sniffing (Supplementary Fig. 1c). This observation suggests that response differences are not attributable to differences in neuron types (i.e., mitral vs tufted cells).
In summary, although the M72-MT cells receive common input from sensory neurons, their responses to ligands of M72-OSNs are typically as diverse as the rest of the MT population. The exception to this pattern is a high-affinity M72 ligand, 2HA, to which M72-MT cells respond with an initially stereotyped temporal profile, characterized by a strong, short-latency, excitatory transient.
Population response stereotypy across concentration
The experiment above reveals two different response modes for the M72-MT population: cells can either respond with similar temporal profiles (as we see for 2HA); or with a diverse range of temporal profiles (as we see for all other odors). But which feature of odor stimuli determines the population response mode? Is it the identity of a stimulus (i.e., ligand affinity) or the effective concentration of a stimulus?
To address this question, we selected two odors—menthone (MEN), a weaker ligand, and 2HA, the strongest ligand—and presented them at concentrations spanning two orders of magnitude (N = 14–16 M72 and 107–167 generic MT cells; not every cell tested on every odor/concentration condition). With respect to the originally tested concentration, we decreased the concentration of 2HA 10-fold (C−1) and 100-fold (C−2), and both increased and decreased the concentration of MEN 10-fold (C+1 and C−1, respectively).
As the concentration of MEN changed, the level of M72 glomerulus activation varied from almost no response at the lowest concentration to a near saturating response at the highest concentration (Fig. 5a). Despite this significant change in glomerular activation, the M72-MT responses remained as diverse as the generic MT responses (Fig. 5b–d, left). In contrast, M72-MT responses to 2HA remained stereotyped at all concentrations (Fig. 5b–d, right). Thus, the diversity of M72-MT cell responses is dependent on odor identity, and not on concentration.
Robustness of population diversity/stereotypy across odor concentrations. Panels as in Fig. 4. a Map of Ca2+ activity for each odor, and vertical bars indicating the average level of M72 glomerulus activation. Left three columns show increasing concentrations of MEN, a weak ligand for M72-OSNs. Right three columns show increasing concentrations of 2HA, a strong ligand for M72-OSNs. C0 denotes concentrations equivalent to stimuli presented in Fig. 4; C−2, C−1, and C1 denote concentrations of 0.01C0, 0.1C0, and 10C0, respectively. (Note: due to differences in the experimental setups, in imaging experiments the concentrations for MEN were 1.8× lower than the corresponding concentrations in electrophysiological experiments, and 2× higher for 2HA.) b Mean snifflet responses for M72-MT (blue) and other (pink) populations. c Distribution of the polarities of the first significant responses for M72 (left bar; bold) and other (right bar; desaturated) MT cells. d CDFs of the latency to first significant response
Single-cell response stereotypy across concentration
Thus far, we have shown that the M72-MT cell population responds in a stereotyped way to a strong ligand, but with considerable diversity to other ligands. This raises the question of whether individual MT cells respond in a different manner to these two classes of stimuli.
Analyzing individual cells from the second dataset above, we found that changing the concentration of a single odorant could affect single MT cell responses in different ways. As shown in Fig. 6a, the responses of two M72-MT cells to 2HA were consistent at different concentrations: these cells displayed an early excitatory transient at all three concentrations, with the onset latency decreasing as concentration increased, similar to recent reports17. Conversely, responses of the same cells to MEN significantly changed with concentration (Fig. 6a, left): increasing the concentration of MEN could attenuate or even reverse an excitatory response observed at a lower concentration.
Individual M72-MT cells' responses are consistent across concentrations for 2HA but not for MEN. a Odor responses of two example M72-MT cells to three concentrations of MEN (weak ligand) and 2HA (strong ligand). Shaded areas denote ±1 SEM; light gray trace shows baseline response. b Distributions of cells by category of concentration dependency for MEN (left) and 2HA (right). We define three categories: "consistent" (cyan)—the response polarity is the same for all three concentrations; "dropped" (red)—there is at least one concentration at which the response did not significantly differ from baseline; and "flipped" (orange)—the response polarity is different at different concentrations. Cells for which the response was not significantly different from baseline in all three concentrations were omitted. Pairwise comparisons: chi-squared tests
To quantify these observations, we categorized the concentration dependency of each cell's response to each odor, based on its first significant deviation from the baseline. We assigned the label "consistent" if the cell's first significant response to the odor was always excitatory or always inhibitory across concentrations; "dropped", if there was a response for one or two of the concentrations, but no significant response for the other(s); or "flipped", if responses of opposite polarity were observed at different concentrations. We found that the majority of M72-MT cells had consistent initial responses across concentrations of 2HA, while the distribution of response categories was mixed for MEN and for the generic MT population (Fig. 6b).
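Given per-concentration polarity labels (for example, from the first_response sketch above), the categorization can be written in a few illustrative lines; where a cell both drops and flips, the precedence below is a guess, since the text does not specify one.

```python
def concentration_category(polarities):
    """Categorize a cell's responses across concentrations of one odor.

    polarities: list of 'excitatory' / 'inhibitory' / 'none' labels, one per
    concentration. Returns 'unresponsive', 'dropped', 'flipped', or 'consistent',
    matching the definitions in the text.
    """
    significant = [p for p in polarities if p != "none"]
    if not significant:
        return "unresponsive"      # such cells were omitted from the analysis
    if "none" in polarities:
        return "dropped"           # significant at some but not all concentrations
    if len(set(significant)) > 1:
        return "flipped"           # polarity reverses across concentrations
    return "consistent"

# Example: excitatory at two concentrations, no response at the third -> 'dropped'
print(concentration_category(["excitatory", "none", "excitatory"]))
```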
Thus, a high-affinity ligand not only elicits stereotypic responses across M72-MT cells but also evokes temporal response patterns within cells that are robust to changes in concentration across two orders of magnitude. These observations do not appear to hold for other ligands of M72, nor amongst cells of the general MT population.
Glomeruli are considered functional units in early olfactory processing. Here we have studied for the first time the odor-evoked responses in MT cells that receive input from a genetically identified glomerulus in awake, freely breathing animals. We optogenetically identified sister MT cells that receive excitatory drive from the glomerulus of the M72 odorant receptor, recorded their responses to a set of well-characterized M72 ligands, and analyzed the data using novel statistical tools (Figs. 1 and 2). Despite receiving excitatory drive from functionally similar sensory neurons, M72-MT cells responded to most odorants with highly diverse temporal patterns that were as heterogeneous as those found in the generic MT population. However, this response diversity was not observed in response to 2HA, which has the highest apparent affinity of all identified M72 receptor agonists (Figs. 3 and 4). M72-MT responses to 2HA almost always included a stereotyped early excitatory transient, after which response diversity resumed. These odor-specific patterns of response diversity and stereotypy remained unchanged across odor concentration (Figs. 5 and 6), suggesting that they do not depend on how strongly the glomerulus is activated. Our data indicate that MT cells within a specific olfactory functional unit encode a strong ligand in a markedly different way than weaker ligands.
Previous studies6,7 have demonstrated that responses amongst randomly selected MT cells in awake mice are diverse in their polarity and timing across the sniff cycle. Our results suggest that this diversity cannot be attributed solely to different sources of glomerular feedforward input, as we observed that MT cells sharing a common glomerulus show similar response diversity to most odorants.
There are multiple potential sources that could account for the observed MT cell response diversity: (1) variability across animals; (2) variation in intrinsic biophysical properties across cells18,19, including differences between mitral and tufted cells20; (3) non-homogeneity of excitatory synaptic connections within a glomerulus; and (4) heterogeneity of inhibitory network connectivity. While the first three factors may play some role in the observed diversity of the responses, they cannot easily explain the fact that this diversity vanishes with a strong ligand, nor can they explain that a strong ligand (but not a weak ligand) evokes stereotypical responses over a range of concentrations.
A significant source of between-cell response variability comes from the rich inhibitory network, which includes granule cells, periglomerular, and other inhibitory cells in the olfactory bulb. The connections from granule cells to MT cells are sparse and heterogeneously distributed3,21,22, as are the connections from the periglomerular inhibitory network23,24. Functionally, too, the activity of each MT cell appears to be influenced by a handful of glomeruli that are spatially sparse and can be very distant25. Diversity across M72-MT cell responses to a single odor could therefore result from each cell having different connectivity within the lateral network.
Is this diversity/stereotypy phenomenon unique to the M72 glomerulus, or is it a general feature of glomerular channels of the olfactory bulb? Previous observations in an anesthetized preparation suggest that the effect we observe exists for another glomerulus. Tan et al.8 recorded firing rate responses of MT cells receiving input from the I7 odorant receptor. They showed that the strongest known ligand for that receptor consistently evoked high spike counts in I7 MT cells, but other odorants (not necessarily ligands for I7) typically did not. When we analyzed our data using their method, we observed the same pattern (Fig. 7), implying that this phenomenon may be a common feature of all glomerular channels in the active, functioning bulb. Moreover, we revealed a temporal specificity of this effect, namely that the consistent response to a strong ligand manifests as a temporally stereotyped, early excitatory transient.
Odor-evoked rate responses also display stereotypy only for the strongest ligand. a Odor response profiles of three M72-MT cells, calculated as the mean change in spike count over the duration of the entire first sniff. Each cell's responses are normalized such that 0 (horizontal line) is the spike count during baseline (air) and 1 is the maximum spike count across odors. b Odor response profiles averaged across the population, for M72-MT cells (top) and generic MT cells (bottom). The M72-MT response is significantly higher than the generic MT response only for 2HA (Wilcoxon signed-rank test). Error bars = SEM
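A sketch of the normalization used for these rate response profiles is given below (Python/NumPy; the input format, arrays of first-sniff spike counts per trial, is an assumption made for illustration):

```python
import numpy as np

# Sketch of the profile normalization described in the caption: for each cell, the
# response to an odor is the mean spike count over the first sniff, rescaled so that
# the baseline (air) count maps to 0 and the maximum across odors maps to 1.
# Input shapes and variable names are assumptions, not the published analysis code.

def normalized_rate_profile(odor_counts, air_counts):
    """odor_counts: dict mapping odor name -> array of first-sniff spike counts (per trial).
       air_counts: array of first-sniff spike counts on clean-air trials."""
    baseline = np.mean(air_counts)
    mean_counts = {odor: np.mean(c) for odor, c in odor_counts.items()}
    span = max(mean_counts.values()) - baseline
    if span == 0:
        return {odor: 0.0 for odor in mean_counts}
    return {odor: (m - baseline) / span for odor, m in mean_counts.items()}
```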
Dhawale et al.10, working in anesthetized mice, found that sister MT cells typically responded to an odor with substantial temporal diversity, yet responded coherently to the common drive provided by optogenetic stimulation of the parent glomerulus. By studying a glomerulus with functionally defined inputs, we show for the first time that response diversity among sister MT cells exists even for known ligands of the parent glomerulus, and that a coherent response can in fact be elicited by an odor stimulus. We posit a parallel between activation of a glomerulus with a strong ligand and artificial optogenetic stimulation—in both cases, the specific glomerular excitation could be transmitted to MT cells unperturbed by preceding activity in other channels, thereby producing synchronous activity in the corresponding sister MT cells.
Assuming universality of the observed phenomenon, what could underlie the observed patterns of diversity and stereotypy? We propose that the relative timing of activity entering the olfactory bulb could be responsible for the observed patterns of diversity and stereotypy in sister MT cells. When an odorant is presented, glomeruli corresponding to the highest-affinity receptors may be excited first, while those with lower affinities may be activated later (Fig. 8a). Such an activation sequence could result from several mechanisms: (1) a gradual rise in odorant concentration during inhalation, over tens to hundreds of milliseconds, resulting in different OSN types reaching threshold at different times26,27; and (2) a cellular signal integration process by which OSNs activated by stronger ligands reach firing threshold faster28. The earliest activated glomeruli would confer excitatory drive to downstream sister MT cells, which would further propagate signals into downstream inhibitory networks. This inhibitory activity will then feed back to the population of all MT cells in different ways (due to the heterogeneity of lateral connectivity), thus diversifying responses in subsequently activated glomerular channels (Fig. 8b). For a given odorant, the net effect of these feedforward and recurrent dynamics will thus be different for MT cells in early- and late-responding channels. The initial input experienced by MT cells of early-responding channels will be dominated by feedforward excitation, causing these cells to produce a stereotypical burst of action potentials early in the sniff cycle (Fig. 8c). Sister MT cells associated with late-responding channels will receive heterogeneous inputs from the inhibitory network coincidentally with the feedforward excitation, resulting in diverse responses.
A proposed mechanism responsible for stereotypy/diversity among sister MT cell odor responses. a In the presence of a given odor, OSN receptors' relative sensitivities to the odorant determine the relative response latencies of their corresponding glomeruli. As inhalation carries the odor into the nose (top), odor concentration rises gradually (middle, left). More sensitive olfactory receptors reach their activation threshold earlier (middle, right), and thus respond earlier (bottom). The labels 1 to N denote the temporal order of OSN channel activation. b The ensuing flow of activity in the bulb. The schematized olfactory bulb network is shown as in Fig. 1: glomeruli (large colored circles); MT cells (colored triangles); and inhibitory neurons (small gray circles; top = periglomerular layer, bottom = granule layer). Thin gray lines represent inhibitory connections. The glomeruli are depicted horizontally in the order of the temporal sequence of activation (1 to N), as shown in a. MT cells connected to glomerulus 1 (red) receive early feedforward (FF) excitation, which propagates through the inhibitory network. MT cells connected to glomerulus N (green) receive both FF excitation and lateral inhibition. c Responses of MT cells connected to early-activated (red) and late-activated (green) glomeruli. Driven by a common excitatory input, MT cells connected to early-activated glomeruli share an initial short-latency excitatory response. MT cells connected to later-activated glomeruli are subject to both excitatory drive and heterogeneous inhibitory influences, and thus show diverse responses
The results of our experiment are consistent with this hypothesis. Although we do not know the exact timing of M72-OSN activation relative to other channels for each odor, the fact that 2HA is such a strong ligand for the receptor suggests that the M72 channel is one of the first to be activated in response to this odor. Conversely, the relatively weak sensitivity of M72 receptors to the other ligands predicts that the M72-OSNs would be activated relatively late.
Moreover, this model provides a simple explanation for why these results do not change with odor concentration. As odor concentration decreases, OSNs are typically activated later4. However, decreasing odor concentration would not change the relative timing of glomerular activation within our model. The MT cells connected to later-activated glomeruli would receive diverse and concentration-dependent inhibitory drive; thus, sister MT cells would still respond to the odor differently from one another, but also would show variable responses across concentration (Figs. 5 and 6).
Our data are consistent with this hypothesis, but there are a few limitations to our conclusions. First, showing that the relationship between diversity/stereotypy and odor affinity holds for all glomeruli would require recording from MT cells across many different channels that have known high-affinity ligands, and testing odorants across concentration—currently such experiments are technically very challenging.
Second, gauging the relative sensitivity of all odorant receptors to a given odorant, or the temporal sequence of glomerular activation in vivo, is also quite challenging. Here we used the inverse approach, assuming that the relative sensitivity of M72 receptors to multiple odorants is a proxy for the temporal ordering. It is possible that other receptors could have an even higher affinity to 2HA than M72. It is tempting to assume that MT cells should respond earlier for a higher-affinity ligand than for an intermediate or weak ligand. While this relationship is observed in our data (Supplementary Fig. 6), the absolute latency of response to a given ligand likely depends not only on receptor affinity but also on the physical and chemical properties of the ligand. For example, a high concentration of a hydrophilic ligand may evoke earlier responses than a weak concentration of a hydrophobic ligand. Thus, we avoid drawing conclusions about latency differences across different odors, and focus instead on latency differences for a given odor between cells.
Third, our methods may introduce sampling biases. It is possible that we are only recording from a subset of M72-MT neurons. Our optogenetic technique only identifies MT cells that receive a dominant feedforward excitatory connection from the M72 glomerulus; this, however, accounts for a particularly relevant subset of cells in conveying odor information. It is also plausible that a small fraction of our putative M72-MT population is made up of granule cells, although we consider this unlikely due to their location and size relative to MT cells29.
And lastly, while validation of the mechanism proposed above is beyond the scope of this report, our model provides a set of hypotheses. For instance, under this model, blocking inhibition (optogenetically or pharmacologically) would increase the number of odors evoking a stereotyped response in sister MT cells. Future experiments are needed to explore this possibility and the mechanisms underlying the conditions of diversity and stereotypy in MT output.
How might stimulus-dependent stereotypy be incorporated into a broader olfactory code? We imagine a downstream decoder that is particularly sensitive to synchrony between sister MT cells of a common glomerulus. Such a decoder need not depend on a topographical map between the olfactory bulb and the piriform cortex, as it could be implemented through random projections30. For strong ligands, sister MT cells will all respond with a relatively short-latency excitatory transient, and the decoding neuron will thus fire as well. For weaker ligands, the temporal diversity amongst sister MT cells would fail to provide sufficient coherent input to drive the decoding neuron. In such a scheme, lateral inhibition preserves information in early-responding channels and scrambles information in late-responding channels. This configuration would act to sharpen and sparsify the odor representation, reducing the dimensionality of the peripheral combinatorial code to one that is dominated by the most sensitive glomerular channels. Moreover, our observation that the temporal stereotypy/diversity of responses is robust to changes in concentration means that the readout representation would also be concentration-tolerant, and thus could encode odor identity independent of concentration. This model is consistent with a recently proposed primacy coding hypothesis31, which emphasizes the role of the most sensitive glomeruli as those responsible for concentration-invariant odor identification. The stereotypy of MT cell responses driven by strong ligands is also consistent with recent observations of concentration-invariant early odor responses of cortical cells32. The phenomenon of diversity/stereotypy of MT cell responses described here may provide a mechanism by which olfactory bulb circuitry could implement a primacy coding model.
For electrophysiological experiments, we used adult homozygous M72-IRES-ChR2-YFP mice (strain Olfr160tm1.1(COP4*/EYFP)Tboz). Data for MT cell in vivo odor responses were collected in 17 animals (13 males and 4 females). Animals were 6–10 weeks old at the beginning of the experiment and were maintained on a 12-h light/dark cycle (lights on at 20:00 h) in isolated cages in a temperature- and humidity-controlled animal facility. All animal care and experimental procedures were in strict accordance with protocols approved by the New York University Langone Medical Center and Northwestern University Institutional Animal Care and Use Committees.
OSN electrophysiology
Perforated patch recordings were made from the dendritic knobs of fluorescently labeled M72-expressing OSNs as described previously11,13,33. In short, the olfactory epithelium from neonatal mice was removed and kept in oxygenated artificial cerebrospinal fluid (95% O2 and 5% CO2), containing 124 mM NaCl, 3 mM KCl, 1.3 mM MgSO4, 2 mM CaCl2, 26 mM NaHCO3, 1.25 mM NaHPO4, and 15 mM glucose, pH 7.4, 305 mOsm. The epithelium was transferred to a recording chamber at 20–23 °C and imaged using an upright fluorescence IR-DIC microscope equipped with a charge-coupled device (CCD) camera and a 40× water-immersion objective. Perforated patch clamp was performed by including 260 μM amphotericin B in the recording pipette, which was filled with 70 mM KCl, 53 mM KOH, 30 mM methanesulfonic acid, 5 mM EGTA, 10 mM HEPES, and 70 mM sucrose, pH 7.2, 310 mOsm. The electrodes had tip resistances ranging from 8 to 10 MΩ, and liquid junction potentials were corrected in all experiments. Signals were acquired at 10 kHz and low-pass filtered at 2.9 kHz. Odorants were applied by pressure ejection through a multi-barrel pipette placed 20 μm downstream of the cell. Odorants were dissolved in dimethyl sulfoxide and diluted in bath solution to achieve desired concentrations.
Gene targeting
OMP-GCaMP3: The coding sequence of GCaMP333,34 was flanked by AscI sites and cloned into a targeting vector for the olfactory marker protein (omp) locus34,35 so that the coding sequence of OMP is replaced by that of GCaMP3, followed by a self-excising neomycin selection cassette35,36. The targeting vector was electroporated into a 129 ES line, and clones were screened for recombination by long-range PCR. Chimeras were generated from recombinant clones by aggregation with C57BL/6 embryos.
M72-RFP (M72-IRES-tauCherry): A cassette containing an internal ribosome entry site (IRES), followed by the coding sequence for a fusion of bovine tau and mCherry12,36 and a self-excising neomycin selection cassette, was inserted into an AscI site located three nucleotides downstream of the M72 coding sequence in an M72 (olfr160)-targeting vector12. The targeting vector was electroporated into a 129 ES line, and clones were screened for recombination by long-range PCR. Chimeras were generated from recombinant clones by injection into C57BL/6 blastocysts.
Olfactory bulb imaging
Awake in vivo imaging: Imaging was done in 7- to 8-week-old naive male mice that were heterozygous for the OMP-GCaMP3 and homozygous for the M72-RFP allele, and that had been implanted with chronic optical imaging windows and head bars. Mice were first anesthetized with isoflurane (2–3%) in oxygen and administered buprenorphine (0.1 mg kg−1) as analgesic; bupivacaine (2 mg kg−1) as a local anesthetic at the incision site; and dexamethasone (2 mg kg−1) to reduce cerebral edema. The animal was secured in a stereotaxic head holder (Kopf instruments) and the bone overlying the olfactory bulbs was thinned to transparency using a dental drill. Two micro-screws were placed into the skull to structurally support the head bar. A custom-built titanium head bar (3 mm × 15 mm, <1 g) was attached to the skull using Vetbond cyanoacrylate glue and cemented in place using dental cement (Dental Cement, Pearson Dental Supply). Black Ortho Jet dental acrylic (Lang Dental Manufacturing) was extended from the head cap around the thinned bone forming a small chamber. The area overlying the olfactory bulbs was covered with multiple thin layers of prism clear cyanoacrylate glue (Loctite #411) as described11,13,34.
Following complete recovery from surgery, mice were placed on a water restriction schedule (1 ml per day). After 7–10 days of water restriction, mice were slowly habituated to the imaging setup where they were trained to lick for a water reward. During imaging sessions, mice were positioned on a custom-built wheel and secured with the head bar in a custom-built holder.
Light excitation was provided using a 200 W metal-halide lamp (Prior Scientific) filtered through standard filter sets for RFP (49008, Chroma) and green fluorescent protein (GFP; 96343, Nikon). Optical signals for GCaMP were recorded using a CCD camera (NeuroCCD SM256; RedShirtImaging) at 25 Hz with 4× temporal binning. Each recording trial was 16 s long, consisting of a 6 s pre-stimulus interval, a 4 s odor pulse, and a 6 s post-stimulus interval. Only one odorant per day was tested to avoid cross contamination of different odorants. Different concentrations of the same odorant were interleaved with clean air trials to identify potential contamination.
Response maps
Response maps were obtained by temporally averaging the response signal over a 0.5 s window around the time of maximum response, and subtracting a pre-stimulus response baseline (1.6 s window). For low concentrations, stimuli were presented at least three times and averaged to obtain response maps. Images were processed and analyzed in Neuroplex (RedShirtImaging) and Image J (NIH) software.
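The map computation amounts to a pixel-wise average around the response peak followed by a baseline subtraction; a minimal NumPy sketch is given below (the 25 Hz frame rate comes from the imaging description above, while the array layout and function signature are assumptions):

```python
import numpy as np

# Sketch of the response-map computation: average the signal in a 0.5 s window centred
# on the peak of the response and subtract the mean of a 1.6 s pre-stimulus baseline,
# independently for every pixel. The (frames, y, x) array layout is an assumption.

def response_map(movie, stim_onset_frame, frame_rate=25.0,
                 response_win_s=0.5, baseline_win_s=1.6):
    """movie: 3-D array (frames, y, x) of fluorescence for one trial (or a trial average)."""
    n_base = int(baseline_win_s * frame_rate)
    baseline = movie[stim_onset_frame - n_base:stim_onset_frame].mean(axis=0)

    post = movie[stim_onset_frame:]
    peak_frame = int(np.argmax(post.mean(axis=(1, 2))))   # time of maximum mean response

    half = int(response_win_s * frame_rate / 2)
    lo, hi = max(peak_frame - half, 0), peak_frame + half + 1
    return post[lo:hi].mean(axis=0) - baseline
```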
Response amplitudes were measured from a region of interest drawn around the M72 glomerulus in the RFP image. Only the first trial of each odor concentration was used to obtain the response amplitude to avoid potential adaptation effects.
Implantation surgery
Mice were anesthetized using isoflurane gas anesthesia. A diamond-shaped bar for head fixation37, a reference electrode, and a pressure cannula for sniff recording6 were implanted. To implant the sniffing cannula, which was a thin 8.5-mm-long stainless steel capillary (gauge 23, Small Parts capillary tubing), a small hole was drilled in the nasal bone, into which the cannula was inserted, fixed with glue, and stabilized with dental cement. The reference electrode was implanted in the cerebellum. The mice were given a minimum of 5 days to recover after surgery.
Setup and odor habituation
After recovery, mice entered a regime of water restriction, with 1 ml administered every day. Five days into this regime, the mice were placed in a head-fixation setup for lick training6,17,37. To reduce stress to the animals and movement artifact during recordings, mice were positioned on a running wheel. Mice could stand still or walk on the wheel as desired. The first few sessions were brief (10–20 min) and served purely to acclimate the animals to head fixation and the running wheel. Mice typically remained mostly quiescent after one to two sessions of head fixation, after which lick training sessions began. A lick spout was placed in front of the animal, which delivered a droplet of water every time the animal licked it. Mice typically learned the water-rewarding nature of the head-fix setup within one to three sessions. We then removed control of the water delivery from the mouse and started delivering one out of seven odors in pseudo-random sequence, with an average inter-stimulus interval of 8 s and stimulus duration of 1000–4000 ms. A drop of water was delivered to the mouse automatically every three to five odor presentations. Animals underwent three to five sessions of odor exposure (200–400 trials each) of this type before recordings. This procedure served several purposes: (i) it reduced the distress of mice in the setup; (ii) it reduced the movement artifact during recordings; and (iii) it habituated the animals to the set of odorants used in the experiment, thus eliminating any novelty effects.
Water delivery was based on gravitational flow controlled by a pinch valve (98302–12, Cole-Parmer) connected via Tygon tubing to a stainless steel cannula (gauge 21, Small Parts capillary tubing), which served as a lick tube. The lick tube was mounted on a micromanipulator and positioned near the mouse's mouth. The water volume was calibrated to give approximately 2.5 μl per valve opening. Licks were detected by the closing of an electrical circuit through the grounded mouse (the circuit was open until the mouse connected the metal cannula to ground).
Behavioral and stimulus delivery control
All behavioral events (odor and final valve opening, laser stimulation, water delivery, and lick detection) were monitored and controlled by a real-time (1 ms), Arduino platform-based, behavioral controller box, developed at Janelia Farm Research Campus, HHMI. In each trial, the behavioral controller read trial parameters, and sent trial results together with a continuous sniffing signal to a PC running a custom-written Python program, Voyeur (partially developed by Physion Consulting, Cambridge, MA). Voyeur is a trial-based, behavioral experiment control and acquisition software that allows behavioral protocols to compute parameters of trials and send them to embedded real-time hardware systems. The Arduino code and Python application source is available as a GitHub repository (search for Voyeur in GitHub). Every stimulus and behavioral event had an associated trigger signal that was sent to the recording system for precise synchronization with neural activity recordings.
Sniff recording
To monitor the sniff signal, the implanted sniffing cannula was connected to a pressure sensor through an 8–12 cm-long polyethylene tube (801000, A-M Systems). The pressure was transduced with a pressure sensor (24PCEFJ6G, Honeywell) and homemade preamplifier circuit. The signal from the preamplifier was recorded together with electrophysiological data on one of the data acquisition channels. The timing of the pressure signal was calibrated with a hot wire anemometer (mini CTA 5439, Dantec Dynamics, Denmark) as in Shusterman et al.6. The time differences between pressure signal and the flow signal during calibration did not exceed 2–3 ms. The cannula was capped when not in use.
Light stimulation
Light stimulation was produced via a 100 µm multimodal fiber coupled to a 473-nm diode laser (model FTEC2471-M75YY0, Blue Sky Research). The end of the fiber was cut flat and polished. The light stimulus power at the open end was measured by a power meter (Model, PM100D, Thorlabs), and calibrated to adjust the amplitude of the voltage pulses sent to the laser, to achieve a consistent power output across experiments.
Odor delivery
For odor stimulus delivery for electrophysiological experiments, we used an eight-odor air dilution olfactometer. Approximately 1 s prior to odor delivery, a stream of nitrogen was diverted through one of the odorant vials at a rate between 10 and 100 ml min−1, and then merged into a clean air stream, flowing at a rate between 900 and 990 ml min−1, thus providing 10- to 100-fold air dilution. Gas flows were controlled by mass flow controllers (Alicat MC series) with 0.5% accuracy. The odorized stream of 1000 ml min−1 was homogenized in a long thin capillary before reaching the final valve. Between stimuli, a steady stream of clean air at the same rate flowed to the odor port continuously, and the flow from the olfactometer was directed to an exhaust. During stimulus delivery, a final valve (four-way Teflon valve, NResearch, SH360T042) switched the odor flow to the odor port, and diverted the clean airflow to the exhaust (Supplementary Fig. 7). The temporal odor concentration profile was checked with a mini photoionization detector (PID) (Aurora Scientific, model 200B). The concentration reached a steady state 95–210 ms (depending on the specific odor) after final valve opening. To minimize pressure shocks and provide temporally precise, reproducible, and fast odor delivery, we matched the flow impedances of the odor port and exhaust lines, and the flow rates from the olfactometer and clean air lines. As sniff activity was monitored in real time, the final valve was activated at the onset of exhalation, so that the odor reached steady-state concentration before the next inhalation. At the end of the odor delivery (duration 1–4 s) the final valve was deactivated, and the nitrogen flow was diverted from the odor vial to an empty line. The inter-odor delivery interval was 7–14 s, during which clean air was flowing through all Teflon tubing.
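The resulting dilution factor follows directly from the two flow rates; the short sketch below only restates that arithmetic (flow values are those quoted above, the function itself is illustrative):

```python
# Sketch of the air-dilution arithmetic: the odorized nitrogen stream (10-100 ml/min)
# is merged with clean air so that the total flow is 1000 ml/min; the air dilution
# factor is the total flow divided by the odor-stream flow.

def air_dilution_factor(odor_flow_ml_min, total_flow_ml_min=1000.0):
    return total_flow_ml_min / odor_flow_ml_min

for odor_flow in (100.0, 50.0, 10.0):
    clean_air = 1000.0 - odor_flow
    print(f"{odor_flow:5.1f} ml/min odor + {clean_air:5.1f} ml/min air -> "
          f"{air_dilution_factor(odor_flow):.0f}-fold dilution")
```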
All odorants (see Supplementary Table 1, purchased from Sigma-Aldrich) were diluted in mineral oil and stored in liquid phase in dark vials. The level of dilution of each odorant was estimated to achieve equal concentrations for all odorants of 0.075 ± 0.01 µM after 10-fold air dilution11,13,38. Each vial contained 5 ml of mineral oil with diluted odorant and 45 ml of headspace.
For concentration series experiments for two odorants, 2HA and MEN, we changed the dilution level across approximately two orders of magnitude. The final desired concentrations were calibrated daily, immediately before the experiment began, and were achieved by tuning the air dilution and matching PID signals between vials with different liquid dilutions.
The odor delivery system for imaging experiments was almost identical. However, due to differences in dilution procedure the matching concentration for 2HA was 2× higher in the imaging setup than in electrophysiological experiments, and for MEN the matching concentration was 1.8× lower.
Olfactory bulb electrophysiology
MT cell spiking activity was recorded using 16- or 32-channel Si-probes (NeuroNexus, model: a2x2-tet-3mm-150-150-121(F16), Buzsaki32(F32)). Cells were recorded in the dorsal mitral cell layer. The identity of MT cells was established on the basis of criteria formulated in previous work10,39 (while we cannot rule out granule cells, it is unlikely that we recorded from them with our extracellular technique, based on their location and significantly smaller soma size29). The data were acquired using a 32-channel data acquisition system (HHMI Janelia Farm Research Campus, Applied Physics and Instruments Group) with widely open broadband filters and a sampling frequency of 19 531 Hz.
Initial preparation: At the beginning of a recording session, a mouse was anesthetized with isoflurane gas and placed in the head-restraint setup. The running wheel was locked and a heating pad was placed under the animal. The lateral M72 glomerulus in either the right or left olfactory bulb was located using a fluorescent dissecting microscope, and the overlying bone was thinned. The open end of the fiber used for optical stimulation was positioned above the glomerulus, making contact with the thinned bone but without pressing on it.
A craniotomy was made just medial to the glomerulus, the dura removed, and the silicon probe was inserted at an angle (25–45° from vertical), driven by a digital micromanipulator (MP-285, Sutter Instruments). The insertion point was chosen so that at a depth of ~300–500 µm from the brain surface, the tip of the most posterior shank in the probe would be roughly in line with the glomerulus in the medial/lateral axis. The anterior/posterior position was varied (following anatomical data from Liu and Urban, unpublished).
Search for MT cells putatively connected to M72 glomerulus: The anesthesia was removed and once the animal awakened, the probe was lowered to the external plexiform layer and advanced at ~5 µm intervals. At each position, a light pulse (0.5–15 mW power, 1–2 ms duration) was delivered to the glomerulus, triggered on the onset of inhalation. The peri-stimulus activity on all sites of the Si-probe was monitored. A spiking increase with short latency after the light pulse (below 20 ms, typically 5–10 ms) indicated the presence of a cell receiving input from the stimulated glomerulus. If no light-responsive cells were found upon reaching a depth of ~700–800 µm, the electrode was raised, reinserted, and the search repeated.
Recording odor responses: After locating a putative M72-MT cell, an odor recording session was initiated. Multiple odorant stimuli with fixed concentrations or two odorant stimuli with multiple concentrations were presented pseudo-randomly with a 7–14 s inter-trial interval. After every 2–4 odor trials, a light pulse was delivered to re-confirm the presence of the M72-MT cell. For each odor stimulus, 20–35 trials were collected.
All sites of the Si-probe were used to monitor activity of other, non-light-responsive units during M72-MT recording sessions. In addition, to increase the pool of non-M72-MT cells (other cells), we anesthetized the animal again, performed a new craniotomy, placed the probe at a new site (usually further anterior), and performed recordings with the same stimulus set.
Spike extraction
Acquired electrophysiological data were filtered and spike sorted. We used the Klusta suite software package for spike detection and spike sorting6,40 and software written by E.M.A. and D.R.
Identification of M72-MT cells
We defined MT cells functionally connected to the M72 glomerulus as units that displayed an excitatory, short-latency response to light stimulation (1 ms, 5–15 mW) of the ChR2-expressing M72 glomerulus. While in general it is difficult to establish monosynaptic connectivity using optogenetic stimulation41, we capitalized on the known anatomy of the olfactory bulb: MT cells receive excitatory input from a single glomerulus, and interactions between MT cells connected to different glomeruli are inhibitory3,42.
We compared the PSTHs of MT cells with and without light stimulation. PSTHs with 4 ms temporal bins were referenced to the onset of inhalation, when the light stimulation was presented. The MT cell was considered light responsive if light-evoked activity exceeded activity in the no-light condition by at least one standard deviation, in at least one 4 ms temporal bin, within 50 ms after the onset of the light pulse. The latency was estimated as the first time point at which such a deviation occurred. The distribution of latencies is shown in Fig. 1e. The majority of the responses occurred with latencies shorter than 22 ms (within 1.5 interquartile ranges (IQR)). Two cells responded with latencies larger than 1.5 IQR (22 ms) and were removed from the pool of cells used in this study.
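The criterion can be written compactly as below (Python/NumPy sketch; how the no-light variability is estimated, here as the per-bin standard deviation across no-light trials, is our assumption):

```python
import numpy as np

# Sketch of the light-responsiveness test: a unit is light responsive if, in any 4 ms
# bin within 50 ms of the light pulse, the light-trial PSTH exceeds the no-light PSTH
# by at least one standard deviation; the first such bin gives the latency.

BIN_MS, WINDOW_MS = 4, 50

def light_response_latency(light_trials, nolight_trials, pulse_onset_bin):
    """light_trials, nolight_trials: arrays (trials, bins) of spike counts in 4 ms bins,
       aligned to inhalation onset. Returns latency in ms, or None if not responsive."""
    light_psth = light_trials.mean(axis=0)
    nolight_psth = nolight_trials.mean(axis=0)
    nolight_sd = nolight_trials.std(axis=0)

    for k in range(WINDOW_MS // BIN_MS):
        b = pulse_onset_bin + k
        if light_psth[b] > nolight_psth[b] + nolight_sd[b]:
            return k * BIN_MS          # first deviating bin defines the latency
    return None
```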
Recording of generic MT cells
Most generic cells were recorded simultaneously with the M72-MT cells, and identified as those that did not respond to light stimulation of the M72 glomerulus. Note that the distance between the shanks ranges from 150 to 600 μm, and that inhibitory connections within MT cell networks have been found to be spatially sparse, long-range, and heterogeneous. Occasionally, penetrations were made at a random location.
Estimation of unit depth
We identified the group of sites of the array in which the unit was detected, and estimated its centroid. We then computed its distance to the tip of the probe, corrected by the angle of insertion to project onto the dorsal/ventral axis. The dorsal/ventral position of the probe tip, relative to the brain surface at the insertion point, was tracked and recorded during the experiment.
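A minimal sketch of this geometry (Python; the input format, site positions measured along the shank from the probe tip, is an assumption):

```python
import numpy as np

# Sketch of the depth estimate: take the centroid of the sites on which the unit was
# detected, measure its offset from the probe tip along the shank, and project that
# offset onto the dorsal/ventral axis using the insertion angle (measured from vertical).

def unit_depth_um(site_offsets_um, tip_depth_um, insertion_angle_deg):
    """site_offsets_um: distances of the detecting sites from the tip, along the shank.
       tip_depth_um: dorsal/ventral depth of the probe tip below the brain surface.
       insertion_angle_deg: insertion angle from vertical (25-45 degrees here)."""
    centroid_along_shank = float(np.mean(site_offsets_um))
    dv_offset = centroid_along_shank * np.cos(np.radians(insertion_angle_deg))
    return tip_depth_um - dv_offset

# Example: sites 150 and 300 um above the tip, tip at 450 um depth, 30 degree insertion.
print(unit_depth_um([150.0, 300.0], tip_depth_um=450.0, insertion_angle_deg=30.0))
```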
Estimation of mean firing rate and preferred phase of spontaneous spiking
We computed the mean firing rate of the cell across sniff cycles prior to odor presentation. We estimated the preferred phase of the spontaneous firing as the time when the baseline (no odor) snifflet rate was maximum.
Snifflet analysis of the response profiles
We built probabilistic models to describe the encoding of odor stimuli by MT neurons. These models take the form of Generalized Linear Models43,44, and describe a generative model for the spiking data. For a given cell and a given odor (or the baseline condition, with no odor presentation), we assumed that the firing rate of the cell changes over the course of the sniff according to a specific temporal pattern, which we call a "snifflet". Given that the observed spiking patterns during individual sniffs depend on the inhalation duration (Fig. 2d), we built this dependency into the model. In particular, we assumed that during an individual sniff, the firing rate is generated by temporally dilating the snifflet by a sniff-dependent factor. The value of this dilation factor, α, depends on the duration of the inhalation phase of that sniff. More formally, we write the rate r(t) as
$$r(t) = \exp\left[ \sum_{i = 1}^{n} \psi\left( \alpha_i (t - \tau_i) \right) \cdot \Pi\left( \tau_i \le t < \tau_{i+1} \right) \right]$$
where ψ(t) is the odor-evoked snifflet, α_i is the temporal dilation factor for the i-th sniff, τ_i is the onset time of the i-th sniff, n is the total number of sniffs, and Π(·) is an indicator function, such that the response pattern is reset at the onset of the next sniff. Removing the indicator function, so that snifflets from successive sniffs overlap, does not qualitatively change any of the major results in the main text.
The model requires a choice of dilation factors, α_i, for each sniff. Motivated by the work of Shusterman et al.6, we fixed the dilation factors as the reciprocal of the inhalation durations, \(\alpha _i = 1/d_i^{\mathrm{inh}}\), where \(d_i^{\mathrm{inh}}\) is the duration of the inhalation phase of the i-th sniff cycle. This produced better model fits than alternative choices, such as dilating with the reciprocal of the full sniff durations, separate dilation for the inhalation and rest-of-sniff components of the response, or fixing α_i = 1 (i.e., no temporal dilation) throughout (Supplementary Fig. 4).
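A sketch of how the model rate is evaluated for one cell and one odor, given a snifflet and the sniff timing (Python/NumPy; storing the snifflet on a discrete grid and interpolating onto it are implementation choices, not taken from the paper):

```python
import numpy as np

# Sketch of the snifflet rate model defined above: within each sniff, the log firing
# rate equals the snifflet psi evaluated at the dilated lag alpha_i * (t - tau_i), with
# alpha_i = 1 / (inhalation duration of sniff i), and is reset at the next sniff onset.

def model_rate(t, psi, psi_dt, sniff_onsets, inhalation_durations):
    """t: time points (s); psi: log-rate snifflet sampled every psi_dt units of dilated
       time; sniff_onsets (tau_i) and inhalation_durations (d_i^inh) in seconds."""
    t = np.asarray(t, dtype=float)
    log_rate = np.zeros_like(t)
    grid = np.arange(len(psi)) * psi_dt
    for i, tau in enumerate(sniff_onsets):
        tau_next = sniff_onsets[i + 1] if i + 1 < len(sniff_onsets) else np.inf
        in_sniff = (t >= tau) & (t < tau_next)      # indicator over the i-th sniff
        alpha = 1.0 / inhalation_durations[i]       # temporal dilation factor
        log_rate[in_sniff] = np.interp(alpha * (t[in_sniff] - tau), grid, psi)
    return np.exp(log_rate)
```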
The free parameters of the model are the snifflet time course, ψ(t), for each cell and odor. We parameterized the snifflets as a length-KD vector (for integers K and D), such that the first D components represent the evolution of the cell's firing rate during the inhalation period, and the remaining (K − 1)D components represent the evolution of the cell's firing rate during the remainder of the sniff. The integer D thus defines the sampling resolution for the snifflet, and K the relative duration of post-inhalation response to model. We use D = 30 and K = 4 in the main text, but other values produced similar results.
We also placed priors over the components of ψ(t), to constrain the snifflets to evolve smoothly in time. We used the Automatic Smoothness Determination prior45 and learned the hyper-parameters via evidence optimization16,46. Including the prior dramatically increased the quality of model predictions on held-out data (Supplementary Fig. 4).
We solve for ψ(t) by maximizing its posterior probability. Given a point estimate of the hyper-parameters, and using a fixed scheme for determining \(\alpha\) (above), this is a convex problem15, which we solve using conventional Newton methods. We approximate the posterior on ψ(t) using a Laplace approximation. For the purposes of illustration, we show the snifflets in the main text in their exponentiated form (i.e., in terms of firing rate, rather than log firing rate). Where error bars on individual snifflets are shown (Fig. 2e), the shaded areas illustrate only the marginal variance of the approximate posterior at each time point, rather than the joint covariance across time. Statistical comparisons between odor-evoked and baseline snifflets (Figs. 4 and 5) were performed in log firing rate space; we again consider only the marginal variance at each time point.
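For concreteness, a compressed sketch of the MAP fit is given below (Python/NumPy). It assumes the counts have already been binned and assigned to snifflet bins via a design matrix, and it takes the prior covariance as given rather than reproducing the ASD prior and its evidence optimization; the absence of Newton step-size control is a further simplification.

```python
import numpy as np

# Sketch of the MAP snifflet estimate: Poisson likelihood for binned spike counts with
# log-rate X @ psi (X maps each time bin to its dilated snifflet bin), a zero-mean
# Gaussian smoothness prior with covariance C_prior, and plain Newton updates on the
# concave log posterior. The Laplace posterior covariance is returned alongside psi.

def fit_snifflet_map(X, y, C_prior, bin_dt, n_newton=20):
    """X: (n_time_bins, K*D) design matrix; y: spike counts per time bin;
       C_prior: (K*D, K*D) prior covariance; bin_dt: time-bin width in seconds."""
    C_inv = np.linalg.inv(C_prior)
    psi = np.zeros(X.shape[1])
    for _ in range(n_newton):
        rate = bin_dt * np.exp(X @ psi)              # expected counts per bin
        grad = X.T @ (y - rate) - C_inv @ psi        # gradient of the log posterior
        hess = -(X.T * rate) @ X - C_inv             # Hessian (negative definite)
        psi = psi - np.linalg.solve(hess, grad)      # Newton update
    rate = bin_dt * np.exp(X @ psi)
    post_cov = np.linalg.inv((X.T * rate) @ X + C_inv)   # Laplace approximation
    return psi, post_cov
```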
The snifflet model provides a parameterization of how inhalation duration (a nuisance variable) affects spiking response, allowing us to factor this relationship out from our results and study the differences across odors and cells. To verify that our results did not depend on the particulars of the snifflet model, we fitted the spiking data without adjusting for variations in sniff duration (Supplementary Fig. 5).
Mouse strain availability
OMP-GCaMP3 and M72-RFP strains will be made available through The Jackson Laboratory (Stock #029581 and #029637).
Code availability
Code developed for this work is available in the following github repositories: https://github.com/admiracle/Voyeur (stimulus delivery system control); https://github.com/zekearneodo/ephys-tools (post-recording data preparation, pre-processing and initial sniff analysis); and https://github.com/rabbitmcrabbit/snifflet (snifflet analysis).
The electrophysiological dataset is available at https://doi.org/10.6084/m9.figshare.5877474. The complete datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request.
Mombaerts, P. et al. Visualizing an olfactory sensory map. Cell 87, 675–686 (1996).
Bozza, T., Feinstein, P., Zheng, C. & Mombaerts, P. Odorant receptor expression defines functional units in the mouse olfactory system. J. Neurosci. 22, 3033–3043 (2002).
Shepherd, G. M. The Synaptic Organization of the Brain (Oxford Univ. Press, USA, 2004).
Spors, H. & Grinvald, A. Spatio-temporal dynamics of odor representations in the mammalian olfactory bulb. Neuron 34, 301–315 (2002).
Carey, R. M., Verhagen, J. V., Wesson, D. W., Pírez, N. & Wachowiak, M. Temporal structure of receptor neuron input to the olfactory bulb imaged in behaving rats. J. Neurophysiol. 101, 1073–1088 (2009).
Shusterman, R., Smear, M. C., Koulakov, A. A. & Rinberg, D. Precise olfactory responses tile the sniff cycle. Nat. Neurosci. 14, 1039–1044 (2011).
Cury, K. M. & Uchida, N. Robust Odor coding via inhalation-coupled transient activity in the mammalian olfactory bulb. Neuron 68, 570–585 (2010).
Tan, J., Savigner, A., Ma, M. & Luo, M. Odor information processing by the olfactory bulb analyzed in gene-targeted mice. Neuron 65, 912–926 (2010).
Kikuta, S., Fletcher, M. L., Homma, R., Yamasoba, T. & Nagayama, S. Odorant response properties of individual neurons in an olfactory glomerular module. Neuron 77, 1122–1135 (2013).
Dhawale, A. K., Hagiwara, A., Bhalla, U. S., Murthy, V. N. & Albeanu, D. F. Non-redundant odor coding by sister mitral cells revealed by light addressable glomeruli in the mouse. Nat. Neurosci. 13, 1404–1412 (2010).
Zhang, J., Huang, G., Dewan, A., Feinstein, P. & Bozza, T. Uncoupling stimulus specificity and glomerular position in the mouse olfactory system. Mol. Cell. Neurosci. 51, 79–88 (2012).
Potter, S. M. et al. Structure and emergence of specific olfactory glomeruli in the mouse. J. Neurosci. 21, 9713–9723 (2001).
Smear, M., Resulaj, A., Zhang, J., Bozza, T. & Rinberg, D. Multiple perceptible signals from a single olfactory glomerulus. Nat. Neurosci. 16, 1687–1691 (2013).
Buonviso, N., Amat, C. & Litaudon, P. Respiratory modulation of olfactory neurons in the rodent brain. Chem. Senses 31, 145–154 (2006).
Paninski, L. Maximum likelihood estimation of cascade point-process neural encoding models. Network 15, 243–262 (2004).
Park, M. & Pillow, J. W. Receptive field inference with localized priors. PLoS Comput. Biol. 7, e1002219 (2011).
Sirotin, Y. B., Shusterman, R. & Rinberg, D. Neural coding of perceived odor intensity. eNeuro 2, e0083 (2015).
Padmanabhan, K. & Urban, N. N. Intrinsic biophysical diversity decorrelates neuronal firing while increasing information content. Nat. Neurosci. 13, 1276–1282 (2010).
Angelo, K. et al. A biophysical signature of network affiliation and sensory processing in mitral cells. Nature 488, 375–378 (2012).
Fukunaga, I., Berning, M., Kollo, M., Schmaltz, A. & Schaefer, A. T. Two distinct channels of olfactory bulb output. Neuron 75, 320–329 (2012).
Egger, V. & Urban, N. N. Dynamic connectivity in the mitral cell-granule cell microcircuit. Semin. Cell Dev. Biol. 17, 424–432 (2006).
Kato, H. K., Gillet, S. N., Peters, A. J., Isaacson, J. S. & Komiyama, T. Parvalbumin-expressing interneurons linearly control olfactory bulb output. Neuron 80, 1218–1231 (2013).
Chen, T.-W., Lin, B.-J. & Schild, D. Odor coding by modules of coherent mitral/tufted cells in the vertebrate olfactory bulb. Proc. Natl Acad. Sci. USA 106, 2401–2406 (2009).
Whitesell, J. D., Sorensen, K. A., Jarvie, B. C., Hentges, S. T. & Schoppa, N. E. Interglomerular lateral inhibition targeted on external tufted cells in the olfactory bulb. J. Neurosci. 33, 1552–1563 (2013).
Fantana, A. L., Soucy, E. R. & Meister, M. Rat olfactory bulb mitral cells receive sparse glomerular inputs. Neuron 59, 802–814 (2008).
Mozell, M. M. Evidence for a chromatographic model of olfaction. J. Gen. Physiol. 56, 46–63 (1970).
Jiang, J. & Zhao, K. Airflow and nanoparticle deposition in rat nose under various breathing and sniffing conditions: a computational evaluation of the unsteady effect. J. Aerosol Sci. 41, 1030–1043 (2010).
Ghatpande, A. S. & Reisert, J. Olfactory receptor neuron responses coding for rapid odour sampling. J. Physiol. (Lond.) 589, 2261–2273 (2011).
Cazakoff, B. N., Lau, B. Y. B., Crump, K. L., Demmer, H. S. & Shea, S. D. Broadly tuned and respiration-independent inhibition in the olfactory bulb of awake mice. Nat. Neurosci. 17, 569–576 (2014).
Ganguli, S. & Sompolinsky, H. Compressed sensing, sparsity, and dimensionality in neuronal information processing and data analysis. Annu. Rev. Neurosci. 35, 485–508 (2012).
Wilson, C. D., Serrano, G. O., Koulakov, A. A. & Rinberg, D. A primacy code for odor identity. Nat. Commun. 8, 1477 (2017).
Bolding, K. A. & Franks, K. M. Complementary codes for odor identity and intensity in olfactory cortex. eLife 6, e22630 (2017).
Tian, L. et al. Imaging neural activity in worms, flies and mice with improved GCaMP calcium indicators. Nat. Methods 6, 875–881 (2009).
Bozza, T., McGann, J. P., Mombaerts, P. & Wachowiak, M. In vivo imaging of neuronal activity by targeted expression of a genetically encoded probe in the mouse. Neuron 42, 9–21 (2004).
Bunting, M., Bernstein, K. E., Greer, J. M., Capecchi, M. R. & Thomas, K. R. Targeting genes for self-excision in the germ line. Genes Dev. 13, 1524–1528 (1999).
Shaner, N. C. et al. Improved monomeric red, orange and yellow fluorescent proteins derived from Discosoma sp. red fluorescent protein. Nat. Biotechnol. 22, 1567–1572 (2004).
Osborne, J. E. & Dudman, J. T. RIVETS: a mechanical system for in vivo and in vitro electrophysiology and imaging. PLoS ONE 9, e89007 (2014).
Wojcik, P. T. & Sirotin, Y. B. Single scale for odor intensity in rat olfaction. Curr. Biol. 24, 568–573 (2014).
Rinberg, D., Koulakov, A. & Gelperin, A. Sparse odor coding in awake behaving mice. J. Neurosci. 26, 8857–8865 (2006).
Hazan, L., Zugaro, M. & Buzsáki, G. Klusters, NeuroScope, NDManager: a free software suite for neurophysiological data processing and visualization. J. Neurosci. Methods 155, 207–216 (2006).
Lima, S. Q., Hromádka, T., Znamenskiy, P. & Zador, A. M. PINP: a new method of tagging neuronal populations for identification during in vivo electrophysiological recording. PLoS ONE 4, e6099 (2009).
Urban, N. N. & Sakmann, B. Reciprocal intraglomerular excitation and intra- and interglomerular lateral inhibition between mouse olfactory bulb mitral cells. J. Physiol. (Lond.) 542, 355–367 (2002).
Truccolo, W., Eden, U. T., Fellows, M. R., Donoghue, J. P. & Brown, E. N. A point process framework for relating neural spiking activity to spiking history, neural ensemble, and extrinsic covariate effects. J. Neurophysiol. 93, 1074–1089 (2005).
Pillow, J. W. et al. Spatio-temporal correlations and visual signalling in a complete neuronal population. Nature 454, 995–999 (2008).
Sahani, M. & Linden, J. F. Evidence optimization techniques for estimating stimulus-response functions. Neural Inf. Process. Syst. 15, 317 (2003).
Park, M. & Pillow, J. W. Bayesian inference for low rank spatiotemporal neural receptive fields. Neural Inf. Process. Syst. 26, 2688–2696 (2013).
We would like to acknowledge Chris Wilson, Sasha Devore, Dion Khodagholy, and Eero Simoncelli for discussions; Kathy Nagel and Alex Koulakov for comments on the manuscript; and Admir Resulaj and Gaby Serrano for technical assistance. Thanks to Loren Looger for sharing GCaMP3 plasmids prior to publication. The work was supported by grants from the Whitehall Foundation (D.R.), and NIH/NICDC grants: R01DC014366 and R01DC013797 (D.R.); R01DC013576 (T.B.); and F31DC014903 (K.B.P.). E.M.A. was supported by a Pew Latin American Fellowship in Biomedical Sciences, N.R. was supported by the Howard Hughes Medical Institute, and A.C. was supported by the German Research Foundation.
Ezequiel M. Arneodo
Present address: Biocircuits Institute, University of California San Diego, 9500 Gilman Dr, La Jolla, CA, USA
These authors contributed equally: Ezequiel M. Arneodo, Kristina B. Penikis, Neil Rabinowitz.
Neuroscience Institute, NYU Langone Health, 435 E 30th St, New York, NY, 10016, USA
Ezequiel M. Arneodo, Kristina B. Penikis, Angela Licata & Dmitry Rinberg
Center for Neural Science, New York University, 4 Washington Place, New York, NY, 10003, USA
Kristina B. Penikis, Neil Rabinowitz & Dmitry Rinberg
Howard Hughes Medical Institute, New York, 10003, NY, USA
Neil Rabinowitz
Department of Neurobiology, Northwestern University, 2205 Tech Drive, Evanston, IL, 60208, USA
Annika Cichy, Jingji Zhang & Thomas Bozza
DeepMind, 6 Pancras Square, London, N1C 4AG, UK
E.M.A., T.B., and D.R. conceived of the study. E.M.A and D.R. built the experimental setup. E.M.A., K.B.P., N.R. and D.R. shaped the experimental design. T.B. initiated the transgenic approach and generated the gene-targeted mice. E.M.A., K.B.P. and A.L. performed electrophysiological recording in awake mice. A.C. performed in vivo imaging experiments. J.Z. performed OSN recordings. N.R. developed and performed snifflet analysis. E.M.A., K.B.P., N.R., A.C., T.B. and D.R. wrote the manuscript. T.B. supervised optical imaging and OSN recordings. D.R. supervised the project.
Correspondence to Dmitry Rinberg.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Arneodo, E.M., Penikis, K.B., Rabinowitz, N. et al. Stimulus dependent diversity and stereotypy in the output of an olfactory functional unit. Nat Commun 9, 1347 (2018). https://doi.org/10.1038/s41467-018-03837-1
\begin{document}
\title[Specht's problem over commutative Noetherian rings] {Specht's problem for associative affine algebras over commutative Noetherian rings}
\author{Alexei Belov-Kanel} \author{Louis Rowen} \author{Uzi Vishne}
\address{Department of Mathematics, Bar-Ilan University, Ramat-Gan 52900,Israel} \email{\{belova, rowen, vishne\}@math.biu.ac.il}
\thanks{This work was supported by the Israel Science Foundation (grant no. 1207/12).}
\begin{abstract} In a series of papers \cite{BRV1}, \cite{BRV2}, \cite{BRV3} we introduced full quivers and pseudo-quivers of representations of algebras, and used them as tools in describing PI-varieties of algebras. In this paper we apply them to obtain
a complete proof of Belov's solution of Specht's problem for affine algebras over an arbitrary Noetherian ring. The inductive step relies on a theorem that enables one to find a ``$\bar q$-characteristic coefficient-absorbing polynomial in each T-ideal $\Gamma$,'' i.e., a non-identity of the representable algebra~$A$ arising from $\Gamma$, whose ideal of evaluations in $A$ is closed under multiplication by $\bar q$-powers of the characteristic coefficients of matrices corresponding to the generators of $A$, where $\bar q$ is a suitably large power of the order of the base field. The passage to an arbitrary Noetherian base ring $C$ involves localizing at finitely many elements of $C$, and reducing to the field case by a local-global principle. \end{abstract}
\maketitle
\newcommand\LL[2]{{\stackrel{\mbox{#1}}{\mbox{#2}}}} \newcommand\LLL[3]{\stackrel{\stackrel{\mbox{#1}}{\mbox{#2}}}{\mbox{#3}}} \newcommand\LLLL[4] {\stackrel {\stackrel{\mbox{#1}}{\mbox{#2}}} {\stackrel{\mbox{#3}}{{\mbox{#4}}}} } \newcommand\AR[1]{{\begin{matrix}#1\end{matrix}}}
\setcounter{tocdepth}{1} {\small \tableofcontents}
\section{Introduction}
Until \S\ref{nonassoc}, all algebras are presumed to be associative (not necessarily with unit element), over a given commutative ring $C$ having unit element 1. The free (associative) algebra is denoted by $C\{x\}$, whose elements are called \textbf{polynomials}. The \textbf{T-ideal} of a set of polynomials \textbf{in an algebra} $A$ is the ideal generated by all substitutions of these polynomials in $A$. For example, the set $\operatorname{id}(A)$ of polynomial identities of an algebra~ $A$ is a T-ideal of $C\{ x\}$. A T-ideal is \textbf{finitely based} if it is generated as a T-ideal by finitely many polynomials. For example, when $A$ is a commutative algebra over a field of characteristic 0, $\operatorname{id}(A)$ is finitely based, by the single polynomial $[x_1,x_2]= x_1x_2-x_2x_1.$
Our objective in this paper is to complete the affirmative proof of the affine case of Specht's problem, that any affine PI-algebra over an arbitrary commutative Noetherian ring satisfies the ACC (ascending chain condition) on T-ideals, or, equivalently, any T-ideal is finitely based. In characteristic $0$ over fields, this is the celebrated theorem of Kemer~\cite{Kem1}. When $\operatorname{char} (F)>0$ there are non-affine counterexamples \cite{B1,G}, with a straightforward exposition given in \cite{BRV4}, so the best one could hope for is a positive result for affine PI-algebras. Kemer \cite{Kem11} proved this result for affine PI-algebras over infinite fields, and Belov extended the theorem to affine PI-algebras over arbitrary commutative Noetherian rings, in his second dissertation, with the main ideas given in \cite{B2}. We give full details of the proof (over arbitrary commutative Noetherian rings), cutting through combinatoric complications by utilizing the full strength of the theory of full quivers as expounded in \cite{BRV1},\cite{BRV2}, and~\cite{BRV3}. Actually, working over arbitrary commutative Noetherian base rings raises the question of the ACC for T-ideals of algebras that do not satisfy a PI (because the coefficients of the identities need not be invertible), but we still can obtain a positive solution in Theorem~\ref{SpechtNoeth1}.
Note that there is no hope for such a result over a non-Noetherian commutative base ring $C$, because of the following observation:
\begin{lem}\label{Tid} If $\mathcal I \triangleleft C$, then $\mathcal I A$ is a T-ideal of $A.$ In particular, $\mathcal I C\{ x\} $ is a T-ideal of~$C\{ x\}.$ \end{lem} \begin{proof} Clearly $\mathcal I A \triangleleft A$, and is closed under endomorphisms. \end{proof} Consequently, any chain of ideals of $C$ gives rise to a corresponding chain of T-ideals.
The positive solution to Specht's problem has structural applications, extending Braun's Theorem on the nilpotence of the radical of a relatively free algebra to the case where the base ring is Noetherian, cf.~Theorem~\ref{Jnilp}.
It might be instructive to indicate briefly where our approach differs from Kemer's characteristic 0 approach. Kemer first obtains his deep \textbf{Finite Dimensionality Theorem} that any algebra is PI-equivalent to a finite dimensional algebra $A$. Extending the base field, one may assume the base field $K$ is algebraically closed, so Wedderburn's principal theorem enables one to decompose $A = \bar A \oplus J$ as vector spaces, where $J$ is the radical of $A$ and $\bar A $ can be identified with the algebra $A/J$. In two deep lemmas, exposed in \cite[Section~4.4]{BR}, Kemer shows that the nilpotence index of $J$ and the vector space dimension of $\bar A$ over $K$ can be described as invariants in terms of evaluations of polynomials on $A$, and then working combinatorically he shows that these computational invariants can be used to prove his Finite Dimensionality Theorem. Kemer's Finite Dimensionality Theorem fails for algebras over finite fields. Thus, we need some other technique, and we turn to the theory of quivers of representations of algebras into matrices, which were described computationally in \cite{BRV2} and \cite{BRV3}.
Recall \cite[pp.~28ff.]{BR} that an algebra $A$ over an integral domain $C$ is {\bf representable} if it can be embedded as a $C$-subalgebra of $\M[n](K)$ for a suitable faithful commutative $C$-algebra $K\supset C$ (which can be much larger than $C$).
In \cite{BRV2} we considered the {\bf full quiver} of a representation of an associative algebra over a field, and determined properties of full quivers by means of a close examination of the structure of Zariski closed algebras, studied in \cite{BRV1}.
The full quiver (or pseudo-quiver) is a directed, loopless graph without cycles, in which vertices correspond to simple subalgebras and edges to elements of the radical. A maximal subpath of this graph is called a {\bf branch}. In place of Kemer's lemmas, we utilize the combinatorics of the full quiver to compute the invariants described above.
Our affirmation of Specht's problem (in the affine case) is divided into two stages: First we assume that the base ring $C$ is a field $F$ of order $q$ (where $q$ could be infinity). Recall, when $q< \infty$, that $q$ is a power of $p = \operatorname{char} (F)< \infty,$ and the Frobenius map $a \mapsto a^q$ is an $F$-algebra endomorphism.
In the second stage, using ring-theoretic methods, we reduce from the case of a general ring $C$ to the situation of the first stage.
The main difficulty in this approach is to discern whether the algebras we are working with actually are representable. When the base ring is an infinite field $F$, Kemer~\cite{Kem1} proved that any relatively free affine $F$-algebra is representable; this is also treated in \cite{B2} for $F$ finite, but the proof is rather difficult. Consequently, we plan to treat the representability theorem in a separate paper. Although this decision enables us to provide a quicker and more transparent proof of Specht's problem, it forces us to consider T-ideals $\mathcal I$ for which $C\{ x \}/\mathcal I$ need not a priori be representable. Accordingly, we need some method for ``carving out'' T-ideals $\mathcal I$ for which $C\{ x \}/\mathcal I$ is representable.
In \cite{BRV3}, a \textbf{trace-absorbing polynomial} for an algebra $A$ is defined as a non-identity of $A$ whose T-ideal is also an ideal of the algebra $\hat A$ obtained by taking $A$ together with the traces adjoined. The main result of \cite{BRV3} was that such polynomials exist for relatively free algebras. Explicitly, we proved the following:
\begin{itemize} \item Trace Adjunction Theorem~(\cite[Theorem~5.16]{BRV3}). Any branch of a basic full quiver of a relatively free algebra $A$ naturally gives rise to a trace-absorbing polynomial of $A$. \end{itemize}
The Trace Adjunction Theorem provides a powerful inductive tool. For example, as indicated in~\cite{BRV4}, it streamlines the proof of the rationality of Hilbert series of relatively free algebras.
In this paper we need to consider more generally the
\textbf{characteristic coefficients} of a matrix $a$, by which we mean the coefficients of its characteristic polynomial ${\lambda} ^n + \sum _{k=0}^{n-1} {\alpha}_k {\lambda} ^k.$ The $k$-th \textbf{characteristic coefficient} of $a$ is ${\alpha} _k.$ For example, the trace and determinant are respectively the $(n-1)$-th and $0$-th characteristic coefficients. In characteristic 0 one can recover all the characteristic coefficients from the traces, which is why we only dealt with traces in \cite{BRV3}. But here we need to generalize the result to arbitrary characteristic coefficients. Furthermore, since the multilinearization process cannot be reversed over a finite field, we cannot prove theorems about absorbing arbitrary characteristic coefficients in this case, but must content ourselves with absorbing $\bar q$-powers of characteristic coefficients, which we call $\bar q$-\textbf{characteristic coefficients}, for $\bar q$ a suitable power of $q = |F|$. (In fact, for technical reasons involving the quiver, we need to use an idea of Drensky~\cite{D} and consider \textbf{symmetrized} characteristic coefficients, cf.~Definition~\ref{sym}.)
Thus, we generalize ``trace-absorbing polynomials'' to ``$\bar q$-characteristic coefficient\-absorbing polynomials,'' cf.~Definition~\ref{absorp}, for the inductive step in the solution of Specht's problem. We do not prove here that affine PI-algebras are representable (one of the keystones of Kemer's theory in characteristic 0); this involves a more intense study of full quivers, which we leave for a later paper. Nevertheless, once we have answered Specht's problem affirmatively for representable relatively free, affine algebras in Theorem~\ref{Spechtfin}, the passage to Noetherian base rings enables us to verify Specht's problem for all affine PI-algebras in Theorem~\ref{SpechtNoeth}, and an elementary module-theoretic argument yields the result for all varieties in Theorem~\ref{SpechtNoeth1}.
Our approach parallels \cite{BRV3}, but with an emphasis on working inside the set of evaluations of a given non-identity $f$ of the algebra $A$. Although as formulated in \cite[Theorem~5.16]{BRV3}, the Trace Adjunction Theorem enables us to obtain characteristic coefficient-absorbing non-identities, here we need to find a nonzero substitution inside~$f$. This is done in Theorem~\ref{traceq2}, but at the cost of a considerably more involved proof than that of \cite[Theorem~5.12]{BRV3}. For starters, in characteristic $p$, the multilinearization procedure degenerates in the sense that one cannot recover a polynomial from specializations of its multilinearizations. This means that one could have a proper inclusion of T-ideals which contain exactly the same multilinear polynomials (seen for example by taking the Boolean identity $x^2 +x$ in characteristic $2$, whose multilinearization is just the identity of commutativity), so the inductive step in Specht's problem requires coping with $A$-quasi-linear (and $A$-homogeneous) polynomials rather than just with multilinear polynomials. Ironically, working in characteristic $p$ does yield one step that is easier, given in \Lref{barq}.
Recall that the trace-absorbing polynomial of \cite[Theorem~5.12]{BRV3} is obtained by means of a ``hiking procedure'' which forces substitutions of the polynomial into the radical. The main innovation needed here in hiking arises from the necessity to deal with several monomials of our polynomial $f$ at a time, which was not the case in \cite{BRV3}. Thus, we introduce hiking of ``higher stages,'' in particular \textbf{stage 2} hiking, which eliminates substitutions of $f$ in the ``wrong'' matrix components, \textbf{stage 3} hiking, which differentiates the sizes of the base fields of the different components of maximal matrix degree, and \textbf{stage 4} hiking, which removes hidden radical substitutions.
Two actions by characteristic coefficients (one defined in terms of matrix computations, the other in terms of polynomial evaluations) can be defined on the T-ideal generated by this polynomial, which is thus a common ideal of $A$ and the algebra~$\hat A$ obtained by adjoining traces to $A$; moreover, $\hat A$ is Noetherian by Shirshov's Theorem \cite[Chapter 2]{BR}. We carry out the same reasoning for $\bar q$-characteristic coefficient-absorbing polynomials, but we also need stage 4 hiking, in \Lref{complem}, in order to identify these two actions. This enables us to pass to Noetherian algebras and conclude the verification of Specht's problem for algebras over arbitrary fields in Theorem~\ref{Spechtfin}.
The extension to algebras over an arbitrary commutative Noetherian base ring $C$ is given in Theorems~\ref{SpechtNoeth} and~\ref{SpechtNoeth1}. The proof has a different flavor: considerations about $C$-torsion lead to a formal reduction to the case that $C$ is an integral domain, and in that case we repeatedly apply a version of a local-global principle, concluding by passing to the field of fractions of $C$ and applying the results of the previous paragraph.
\section{Preliminary material}\label{backg}
Let us start by reviewing the background, especially about full quivers, their relationship with relatively free algebras, and the polynomials that they yield.
\subsection{Characteristic coefficients of matrices}
We start with some observations about characteristic coefficients of matrices, which we need to utilize in characteristic $p$.
Any matrix $a \in \M[n](K)$ can be viewed either as a linear transformation on the $n$-dimensional space $V = K^{(n)}$, with Hamilton-Cayley polynomial $f_a$ of degree~$n$, or (via left multiplication) as a linear transformation $\tilde a$ on the $n^2$-dimensional space $\tilde V = \M[n](K)$, with Hamilton-Cayley polynomial $f_{\tilde a}$ of degree $n^2$.
\begin{rem}\label{eig} The matrix $\tilde a$ can be identified with the matrix $$a \otimes I \in \M[n](K) \otimes \M[n](K) \cong \M[n^2](K),$$ so its eigenvalues have the form $\beta \otimes 1 = \beta$ for each eigenvalue $\beta$ of $a$. \end{rem}
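For instance (a trivial illustration, included only for orientation), if $n = 2$ and $a$ is diagonal with eigenvalues $\beta_1, \beta_2$, then $\tilde a = a \otimes I$ has eigenvalues $\beta_1, \beta_1, \beta_2, \beta_2$, so that $$f_{\tilde a} = ({\lambda} - \beta_1)^2({\lambda} - \beta_2)^2 = \big(({\lambda} - \beta_1)({\lambda} - \beta_2)\big)^2 = f_a^2,$$ a special case of the next lemma.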
\begin{lem}\label{noZub0} Notation as above, $f_{\tilde a}= f_a^n$, over any integral domain of arbitrary characteristic. \end{lem} \begin{proof} By a standard specialization argument, it is enough to check the equality over the free commutative ring $\mathbb Z [ \xi_1, \xi_2, \dots],$ which can be embedded into an algebraically closed field of characteristic 0. By Zariski density, we may assume that $a$ is diagonal, in which case it is clear that the determinant of $\tilde a$ is $\det(a)^n.$ But then we conclude by taking ${\lambda} I -a$ instead of $a$. \end{proof} \Lref{noZub0} is often used in conjunction with the next observation.
\begin{lem}\label{obv1} Suppose $a \in \M[n](F),$ with $\operatorname{char}(F) = p$, and $f_a = |{\lambda} I - a| = \sum {\alpha} _i {\lambda} ^i$ is the characteristic polynomial of $a$. Then, for any $p$-power $\bar q,$ $\sum {\alpha} _i ^{\bar q} {\lambda} ^i$ is the characteristic polynomial of $a^{\bar q}$. \end{lem} \begin{proof}
Follows from $f_{a^p}({\lambda}^p) = |{\lambda}^p I - a^p| = |{\lambda} I - a|^p = f_{a}({\lambda})^p$. \end{proof}
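As a quick sanity check (for $n = 2$ and $\bar q = p = 2$): if $a$ has eigenvalues $\beta_1,\beta_2$, so that $f_a = {\lambda}^2 - (\beta_1+\beta_2){\lambda} + \beta_1\beta_2$, then $a^{2}$ has eigenvalues $\beta_1^{2},\beta_2^{2}$, and in characteristic $2$ $$f_{a^{2}} = {\lambda}^2 - (\beta_1^{2}+\beta_2^{2}){\lambda} + \beta_1^{2}\beta_2^{2} = {\lambda}^2 - (\beta_1+\beta_2)^{2}{\lambda} + (\beta_1\beta_2)^{2},$$ whose coefficients are the squares of those of $f_a$, as asserted.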
\begin{prop}\label{obv2} Suppose $a \in \M[n](F).$ Then the characteristic coefficients of $a$ are integral over the $F$-algebra $C$ generated by the characteristic coefficients of $\tilde a$. \end{prop} \begin{proof} The integral closure $\bar C$ of $C$ contains all the eigenvalues of $\tilde a,$ which are the eigenvalues of $a,$ so the characteristic coefficients of $a$ also belong to $\bar C.$ \end{proof}
\begin{defn}\label{cc1} The
${\alpha} _i ^{\bar q}$ of \Lref{obv1} are called the $\bar q$-\textbf{characteristic coefficients} of $a$. \end{defn}
\begin{rem}\label{qwhat} We choose $\bar q$ sufficiently large so that the theory will run smoothly. By \cite[Remark~2.35 and Lemma~2.36]{BR}, when $\bar q >n$ (so that $\bar q$ is greater than the nilpotence index in the Jordan decomposition), the matrix $a^{\bar q}$ is semisimple. \end{rem}
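For example, for a single $2\times 2$ Jordan block $a = \beta e_{11} + e_{12} + \beta e_{22}$ in characteristic $p$, writing $a = \beta I + e_{12}$ we get $a^{\bar q} = \beta^{\bar q} I + e_{12}^{\bar q} = \beta^{\bar q} I$ for any $p$-power $\bar q \ge 2$, which is indeed semisimple.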
\subsection{Varieties of PI-algebras}
We work with polynomials in the free algebra built from a countable set of indeterminates over the given commutative Noetherian base ring $C$. The set $\operatorname{id}(A)$ of identities of $A$ is well known to be a T-ideal of the free algebra $C\set{x}$. More generally, given a polynomial $f$, we define $\langle f(A) \rangle$ to be the ideal of $A$ generated by the evaluations of $f$ on $A$. Thus, $\langle f(A) \rangle = 0$ iff $f\in \operatorname{id}(A)$.
A polynomial is \textbf{blended} if each indeterminate appearing nontrivially in the polynomial appears in each of its monomials. As noted in \cite[Remark~22.18]{BRV3}, any T-ideal is additively spanned by T-ideals of blended polynomials, and we only consider blended polynomials throughout this paper.
Given a T-ideal $\mathcal I$ of the free algebra $C\{ x\},$ we can form the \textbf{relatively free algebra} $C\{ x\}/\mathcal I,$ which is free in the class of all PI-algebras $A$ for which $\mathcal I \subseteq \operatorname{id} (A).$ Using this correspondence, it is enough to classify relatively free algebras.
We continue by taking our base ring to be a field $F$, and we investigate relatively free PI algebras in terms of the full quivers of their representations, making use of \textbf{generic elements}, as constructed in \cite[Construction 7.14]{BRV1} and studied in \cite[Theorem 7.15]{BRV1}. (A generic element of a finite dimensional algebra having base $\{ b_1, \dots, b_n \}$ over an infinite field is just an element of the form $\sum \xi_i b_i,$ where the $ \xi_i$ are indeterminates, but the situation for algebras over a finite field becomes considerably more intricate.)
As in \cite{BRV3}, we rely heavily on the {\bf Capelli polynomial} $$c_k(x_1, \dots, x_k; y_1, \dots, y_k)= \sum_{\pi \in S_k} \operatorname{sgn} (\pi) x_{\pi (1)}y_1 \cdots x_{\pi (k)} y_k$$ of degree $2k$ (cf.~\cite{BR}). Any $C$-subalgebra of $\M[n](K)$ satisfies the identities $c_k$ for all $k > n^2$.
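For instance, for $k = 2$, $$c_2(x_1,x_2;y_1,y_2) = x_1y_1x_2y_2 - x_2y_1x_1y_2,$$ which vanishes whenever $x_1$ and $x_2$ are specialized to the same element.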
Recall that a polynomial $f$ which is linear in the first $t$ variables is $t$-\textbf{alternating} if substituting $x_j \mapsto x_i$ results in $0$ for any $1\leq i < j \leq t$. \begin{defn}\label{cent0} We denote by $h_n$ the $n^2$-alternating central polynomial on $n\times n$ matrices \cite[p.~25]{BR}. (We define formally $h_0 = 1$, and also have $h_1 = x_1$ and $h_2 = c_4 g$, where $g$ is the multilinearization of the central polynomial $[y_1,y_2]^2$ for $2 \times 2$ matrices and fresh indeterminates are used in $c_4$.)
When appropriate, we write $h_n(x)$ to emphasize that $h_n$ is evaluated on indeterminates $x_1, x_2, \dots.$ \end{defn} The central polynomial $h_n$ is a crucial polynomial for our deliberations, since it is always central for $\M[n](C)$ regardless of the commutative base ring $C$.
\subsection{Quasi-linear functions and quasi-linearizations}
Although the theory works most smoothly for multilinear polynomials, in characteristic $p$ we do not have the luxury of being able to recover a (blended) polynomial from its multilinearization, the way we can in characteristic 0. For example, one cannot recover the Boolean identity $x^2-x$ from its multilinearization $x_1x_2+x_2x_1$, which holds in any commutative algebra of characteristic 2. Thus, we must stop the linearization process before arriving at multilinear identities.
It is convenient at times to work slightly more generally with functions rather than polynomials, in order to be able to apply linear transformations.
\begin{defn} A function $f$ is $i$-\textbf{quasi-linear} on $A$ if $$f(\dots, a_i + a_i', \dots) = f(\dots, a_i , \dots)+f(\dots, a_i', \dots)$$ for all $a_i, a_i' \in A; $ $f$ is $A$-\textbf{quasi-linear} if $f$ is $i$-quasi-linear on $A$ for all $i$. \end{defn}
Quasi-linear polynomials are used heavily by Kemer in \cite{Kem11}.
In contrast to \cite{BRV3}, in which quasi-linear polynomials played a somewhat secondary role, here they are at the forefront of the theory, since we cannot avoid finite fields. Accordingly, we need to develop them here, expanding on \cite[Exercises~1.9 and~1.10]{BR}.
\begin{rem} When $\operatorname{char}(F) = p,$ any $p^t$-th power of an $A$-quasi-linear central polynomial is $A$-quasi-linear, since its values are central and the $p^t$-power map is additive on commuting elements. \end{rem}
Any identity of $A$ is obviously $A$-quasi-linear, since the only values are 0, so quasi-linear polynomials are only interesting for non-identities.
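A simple example to keep in mind: if $\operatorname{char}(F) = p$ and $A$ is commutative (e.g., $A = F$), then $f(x_1) = x_1^{p^t}$ is an $A$-quasi-linear non-identity, since $(a_1 + a_1')^{p^t} = a_1^{p^t} + (a_1')^{p^t}$ for commuting elements.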
\begin{lem} If $f$ is $A$-quasi-linear in $x_i$, then $$f(a_1, a_2, \dots, a_{i,1}+\cdots+ a_{i,d_i}, \dots)= \sum _{j=1}^{d_i} f(a_1, a_2, \dots, a_{i,j}, \dots), \quad \forall a_i \in A.$$\end{lem} \begin{proof} The assertion is immediate from the definition.\end{proof}
As usual, for any monomial $h(x_1, x_2, \dots ), $ define $\deg _i h$ to be the degree of $h$ in $x_i$. For any polynomial $f(x_1, x_2, \dots ) , $ define $\deg _i f$ to be the maximal degree $\deg _i h$ of its monomials; the sum of all such monomials is called the \textbf{leading $i$-part} of $f$.
\begin{defn} Suppose $f(x_1, x_2, \dots) \in C\{ x\}$ has degree $d_i$ in $x_i$. The \textbf{$i$-partial linearization} of $f $ is \begin{equation}\label{partlin}\Delta_i f := f(x_1, x_2, \dots, x_{i,1}+\cdots+ x_{i,d_i}, \dots)- \sum _{j=1}^{d_i} f(x_1, x_2, \dots, x_{i,j}, \dots)\end{equation} where the substitutions are made in the $i$-th position, and $x_{i,1},\dots,x_{i,d_i}$ are new variables. \end{defn}
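To illustrate \eqref{partlin} in the simplest case, for $f = x_1^2$ (so $d_1 = 2$) we get $$\Delta_1 f = (x_{1,1}+x_{1,2})^2 - x_{1,1}^2 - x_{1,2}^2 = x_{1,1}x_{1,2} + x_{1,2}x_{1,1},$$ which is linear (in particular quasi-linear) in each of $x_{1,1}$ and $x_{1,2}$.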
Note that the $i$-partial linearization procedure lowers the degree of the polynomial in the various indeterminates, in the sense that the degree in each $x_{i,j}$ is less than~ $d_i$. It follows at once that applying the $i$-partial linearization procedure repeatedly, if necessary, to each $x_i$ in turn in any polynomial $f$, yields a polynomial that is $A$-quasi-linear.
In \cite{Kem11} and \cite{BRV3}, quasi-linearizations had been defined slightly differently, as homogeneous components of partial linearizations as defined in \eqref{partlin}; when $f$ belongs to a variety $V$ defined over an infinite field, these remain in $V$. However, in \cite[Example~2.2]{BRV4} we saw that over a finite field these homogeneous components might not necessarily stay in the same variety, which is the reason we have modified the customary definition.
Formally, this procedure is slightly stronger than that given in \cite{BRV3}, but yields the following nice result:
\begin{prop}\label{targil} Suppose
$\operatorname{char}(F) = p$ and $d_i = \deg _i f$ is not a $p$-power. Then the leading $i$-part of $f$ can be recovered from a suitable specialization of the leading $k$-part of an $i$-partial linearization of $f $, for suitable $k$. \end{prop} \begin{proof} Taking $k$ with $0 < k < d_i$ such that $\binom{d_i}k$ is not divisible by $p$ (such a $k$ exists, by Lucas' theorem, precisely because $d_i$ is not a $p$-power), we note that \eqref{partlin} has $\binom{d_i}k$ terms of degree $k$ in $x_{i,1}$ and degree $d_i -k$ in $x_{i,2}$, so we specialize $x_{i,1}\mapsto x_i$, $x_{i,2}\mapsto 1,$ and all other $x_{i,j}\mapsto 0.$ \end{proof}
\begin{cor} For any polynomial $f$ which is not an identity of $A$, the T-ideal generated by $f$ contains an $A$-quasi-linear non-identity for which the degree in each indeterminate is a $p$-power, where $p = \operatorname{char} (F).$ \end{cor} \begin{proof} Apply Proposition~\ref{targil} repeatedly, until the degree in each indeterminate is a $p$-power. \end{proof}
\subsection{Radical and semisimple substitutions}
\begin{rem}\label{delpt} When studying a representation $\rho {\,{:}\,} A \to M_n(K)$ of an algebra $A$, we usually identify $A$ with its image. In case $A$ is an algebra over a field, we write $A = S \oplus J$, the Wedderburn decomposition into the semisimple part $S$ and the radical $J$. Then we can choose the representation such that the Zariski closure of $A$ has the \textbf{Wedderburn block decomposition} of \cite[Theorem~5.7]{BRV1}, in which the semisimple part $S$ is written as matrix blocks along the diagonal.
\end{rem}
\begin{defn} Write $A = S \oplus J$, the Wedderburn decomposition into the semisimple part $S$ and the radical $J$. A {\bf semisimple substitution} (into a {Zariski closed}\ algebra $A$) is a substitution into an element of $S$ in some Wedderburn block of $A$, and a {\bf radical substitution} is a substitution into an element of $J$ in some Wedderburn block. A {\bf pure substitution} is a substitution that is either a semisimple substitution or a radical substitution, i.e., into $S \cup J$. \end{defn}
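For instance, for the algebra $A$ of upper triangular $2\times 2$ matrices over $F$, we have $S = Fe_{11}\oplus Fe_{22}$ and $J = Fe_{12}$; the substitution $x\mapsto e_{11}$ is semisimple, $x\mapsto e_{12}$ is radical, and $x\mapsto e_{11}+e_{12}$ is not pure.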
\begin{rem}\label{sub1} By \cite[Remark~2.20]{BRV3}, one can check whether an $A$-quasi-linear polynomial $f(x)$ is a PI of $A$ merely by specializing the indeterminates $x_i$ to pure substitutions.
More generally, let $f$ be any polynomial. Given a substitution $f(\overline{x_1}; \overline{x_2},\dots)$, if we specialize $\overline{x_1} \mapsto \overline{x_{1,1}} +\overline{x_{1,2}} $, then \begin{equation}\label{quasi1} f(\overline{x_1}; \overline{x_2}, \dots) = f(\overline{x_{1,1}} ; \overline{x_2}, \dots)+ f(\overline{x_{1,2}} ; \overline{x_2}, \dots)+ \Delta f(\overline{x_{1,1}} ,\overline{x_{1,2}} ; \overline{x_2}, \dots),\end{equation} where $\Delta f$ is obtained from the $1$-partial linearization by specializing $x_{1,j}\mapsto 0$ for all $j>2$ and then discarding these $x_{1,j}$ from the notation. \end{rem} One can interpret Equation~\eqref{quasi1} as follows: \begin{lem}\label{sub2} Suppose $x_1$ has some specialization $x_1 \mapsto \sum \overline{x_{1,j}}$ where the $\overline{x_{1,j}}$ are pure substitutions. (For example, some of them might be semisimple and others radical.) Then all specializations involving ``mixing'' the $\overline{x_{1,j}}$ occur in $ \Delta f(\overline{x_{1,1}} ,\overline{x_{1,2}} , \overline{x_2}, \dots)$. \end{lem} \begin{proof} The ``mixed'' substitutions do not occur in the first two terms of the right side of Equation~\eqref{quasi1}.\end{proof}
\Lref{sub2} enables us to apply the quasi-linearization procedure on specific substitutions of $A$, rather than on all of $A$, and will be needed when studying specific specializations of a polynomial $f$. If $f$ were linear in $x_1$ then we could separate these into distinct specializations of $f$. But when $f$ is non-linear in $x_1$, we often need to turn to \Lref{sub2}.
In \cite{Kem11}, the definition of quasi-linear also included homogeneity, which can be obtained automatically over infinite fields. Here again, since we are working over finite fields, we need to be careful. We say that a function $f$ is $i$-\textbf{quasi-homogeneous} of degree~$s_i$ on~$A$ if $$f(\dots, {\alpha} a_i, \dots) = {\alpha} ^ {s_i} f(\dots, a_i , \dots)$$ for all ${\alpha} \in F, a_i \in A; $ $f(x_1, \dots, x_t; y_1, \dots, y_m)$ is $A$-\textbf{quasi-homogeneous} of degree $s$ on~$A$, if $f$ is $i$-quasi-homogeneous on $A$ of degree $s_i$ for all $1 \le i\le t,$ with $s = s_1 \cdots s_t$.
The next lemma shows the philosophy of our approach, although we cannot use it directly because we are working over finite fields.
\begin{rem}\label{quasi0} Suppose $f = \sum f_j \in \mathcal I$, where each $f_j$ is $i$-quasi-homogeneous of degree $ s_{i,j}$ on $A$. Then fixing some $j_0$ and taking $s= s_{i,j_0}$ yields $$f(\dots, {\alpha} a_i, \dots) - {\alpha} ^ s f(\dots, a_i , \dots) = \sum _{j \ne j_0} ({\alpha} ^ {s_{i,j}} - {\alpha} ^ s)f_j(\dots, a_i , \dots).$$ This lowers the number of $i$-homogeneous components of $f$, and provides an inductive procedure for reducing to quasi-homogeneous functions.\end{rem}
\begin{lem}\label{quasi} Given any T-ideal $\mathcal I$ and any polynomial $f\in \mathcal I$ which is a non-identity of $A$, we can obtain an $A$-quasi-homogeneous polynomial in $\mathcal I$. \end{lem} \begin{proof} By Remark~\ref{quasi0}, taking $s$ to be the degree of $x_i$ in some monomial, this monomial cancels in $$f(\dots, {\alpha} a_i, \dots) - {\alpha} ^ s f(\dots, a_i , \dots),$$ so one concludes by induction. \end{proof}
\begin{defn} A specialization \textbf{radically annihilates} the polynomial $f$ if the number of radical substitutions is at least the nilpotence index of $J$. \end{defn}
In case a substitution radically annihilates $f$, each monomial of $f$ must evaluate to~0. One main idea here is that the nilpotence of the radical forces an evaluation to be 0 when the specialization radically annihilates the polynomial $f$.
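For instance, if $A$ is the algebra of upper triangular $2\times 2$ matrices, then $J = Fe_{12}$ has nilpotence index $2$; any monomial evaluation containing two radical substitutions is a product of elements of $A$ in which two factors lie in $J$, and since $J$ is an ideal, such a product lies in $J^2 = 0$.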
At the outset, for full quivers defined over a field, the semisimple part $S$ is the sum of the diagonal Wedderburn blocks of $A$, and $J$ is the sum of the off-diagonal Wedderburn blocks. However, after ``gluing up to infinitesimals,'' some of the radical $J$ might be transferred to the diagonal blocks. For example, when $A$ is a local algebra, there is a single block, which thus contains all of $J$.
\begin{defn}\label{intern} A radical substitution is \textbf{internal} if it occurs in a diagonal block (after ``gluing up to infinitesimals''); otherwise it is \textbf{external}. \end{defn}
\subsubsection{The Hamilton-Cayley equation applied to quasi-linear polynomials}
One of the key techniques used here (and throughout combinatorial PI-theory) is to absorb characteristic coefficients into some (multilinear) alternating polynomial $f(x_1, \dots, x_t; y_1, \dots, y_t),$ as exemplified in \cite[Theorem~J, p.~25]{BR}. Since we must cope with quasi-linear polynomials in this paper, we need to extend the theory to quasi-linear polynomials. Accordingly, we need another definition.
\begin{defn} A polynomial $f(x_1, \dots, x_t; y_1, \dots, y_t)$ is \textbf{$(A;t;\bar q)$-quasi-alternating} if $f$ is $A$-quasi-linear in $x_1, \dots, x_t$ and quasi-homogeneous of degree~$\bar q$, and $f$ becomes 0 whenever $x_i$ is substituted throughout for $x_j$, for any $1 \le i< j \le t.$\end{defn}
Fortunately, the task of working with quasi-linear polynomials over infinite fields was already done in Kemer's verification of \cite[Equation~(40)]{Kem11}; he uses the terminology \textbf{forms} for our characteristic coefficients. If $f$ is $(A;t;\bar q)$-quasi-alternating, then we still get Kemer's conclusion. This can also be stated in the language of \cite[Theorem~J, Equation~1.19, page 27]{BR} (with the same proof), as follows: \begin{equation}\label{traceab0} \alpha _k ^{\bar q}f(a_1, \dots, a_t, r_1, \dots, r_m) = \sum f(T^{k_1}a_1, \dots, T^{k_t}a_t, r_1, \dots, r_m),\end{equation} summed over all vectors $(k_1, \dots, k_t)$ with each $k_i \in \{ 0, 1\}$ and $k_1 + \dots + k_t = k,$ where $\alpha _k $ is the $k$-th characteristic coefficient of a linear transformation $T {\,{:}\,} V \to V,$ and $f$ is $(A;t;\bar q)$-quasi-alternating. Of course, when applying \eqref{traceab0} in arbitrary characteristic, we must consider all characteristic coefficients and not just the traces.
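For orientation, a familiar special case of \eqref{traceab0} (with $\bar q = 1$, $k=1$, and $f$ multilinear and $t$-alternating, where $t = \dim V$) is the classical trace-absorption identity $$\operatorname{tr}(T)\, f(a_1, \dots, a_t, r_1, \dots, r_m) = \sum_{i=1}^t f(a_1, \dots, Ta_i, \dots, a_t, r_1, \dots, r_m).$$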
We want to determine a value of $\bar q$ for which our polynomial $f$ will be $(A;t;\bar q)$-quasi-alternating. When dealing with a representable affine algebra $A$, which has a finite number of generators, we may assume that the base field $F$ is finite, and thus any element, viewed as a matrix in $\M[n](K)$, must have all of its characteristic values in a finite field extension $\bar F$ of $F$, of some order $\bar q.$ The idea is to take the characteristic polynomial of the matrix $a^{\bar q}$ instead of the characteristic polynomial of the matrix $a$.
\begin{rem}\label{noZub} There is a delicate issue here, insofar as Amitsur's proof of \cite[Theorem~J]{BR} relies on $T$ acting on a vector space $V$. If we take $V = \M[n](K)$ then its dimension is $n^2$, but we can bypass this difficulty by appealing to the upcoming \Lref{obv3}. (Note also that when $n$ is a power of $p$, then $n^2$ is still a power of $p$. In view of \Lref{noZub0}, we can just replace $\bar q$ by $\bar q^2$. Likewise, we could replace $T$ by any $p$-power of $T$.) \end{rem}
\begin{lem}\label{obv3} Suppose $C$ is an algebra containing the $\bar q$-characteristic coefficients of a matrix $a \in \M[n](C)$. Then $a$ is integral over $C$.\end{lem} \begin{proof} By assumption $a^{\bar q}$ is integral over $C$, implying at once that $a$ is integral over $C$. \end{proof}
\begin{rem}\label{Shirsh} For our applications of Shirshov's Theorem we only need to adjoin finitely many ${\bar q}$-characteristic coefficients to a given affine $C$-algebra $A$ to obtain an algebra $\hat A$ integral of bounded degree over the commutative algebra $\hat C_{\bar q}$ obtained by adjoining the same ${\bar q}$-characteristic coefficients to~$C$.
Thus, when we are given a representation $\rho {\,{:}\,} A \to \M[n](C)$, we stipulate that the generators of $\hat C_{\bar q}$ include all ${\bar q}$-characteristic coefficients of products of a given finite set of generators of $\rho(A)$ (viewed as a matrix in $\M[n](C)$). \end{rem}
\begin{defn} We call $\hat C_{\bar q}$ of Remark~\ref{Shirsh} the ${\bar q}$-\textbf{characteristic closure} of $C$. \end{defn}
\begin{lem}\label{ShirshL} $\hat C_{\bar q} $ is its own ${\bar q}$-characteristic closure. \end{lem} \begin{proof} We appeal to a result of Amitsur~\cite[Theorem~A]{Am}, which shows that the characteristic coefficients of a linear combination $\sum \beta_i r_i$ of matrices lie in the subalgebra generated by the characteristic coefficients of products of the $r_i$. \end{proof}
\begin{rem} If $f(y_1, y_2, \dots)$ is $A$-quasi-linear in $y_1$ and $g(x_1, x_2, \dots)$ is $(A;t;\bar q)$-quasi-alternating, then $$f(g(x_1, x_2, \dots)y_1, y_2,\dots), \qquad f(y_1g(x_1, x_2, \dots), y_2,\dots)$$ are $(A;t;\bar q)$-quasi-alternating. \end{rem}
\begin{rem}[{\cite[Remark~G, p.~25]{BR}}] Let $f(x_1, \dots, x_{t+1};y)$ be any $(A;t;\bar q)$-quasi-alternating polynomial. Then the polynomial \begin{equation}\label{alt1} \sum _{i=1}^{t+1} (-1)^i f(x_1, \dots, x_{i-1}, x_{i+1}, \dots, x_{t+1}, x_i ; y)\end{equation} is $(t+1)$-$A$-quasi-alternating. \end{rem}
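For instance, for $t = 1$ the polynomial \eqref{alt1} is just $f(x_1,x_2;y) - f(x_2,x_1;y)$, which clearly vanishes whenever $x_1$ is substituted for $x_2$.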
\subsubsection{Zubrilin's theory applied to quasi-linear polynomials}
Even without the trick of Remark~\ref{noZub}, we could resort to a more polynomial-oriented approach developed by~Zubrilin, expounded in \cite{BR}, which generalizes \cite[Theorem~J, p.~25]{BR}. Since Zubrilin's theory as developed in \cite{BR} requires us to start with multilinear polynomials, whereas we must cope with quasi-linear polynomials in this paper, we need to extend it accordingly. As before, the idea is to use the characteristic polynomial of the matrix $a^{\bar q}$ instead of that of $a$; Zubrilin's theory can be considered to be the case $\bar q=1,$ and extends readily to the general case. Unfortunately, the theory as developed in \cite{BR} requires many computations, so here we only indicate where the proofs are modified in this more general situation.
Recall \cite[Definition~2.40]{BR} that if $f(x_1,\dots,x_n;y)$ is multilinear in the variables $x_i$, then $(\delta_jf)(x_1,\dots,x_n;y;z)$ is the sum over all possible substitutions of $zx_i$ for $x_i$ in $j$~out of the first $n$ places. Explicitly, let $f(x_1,\ldots, x_{n}, \vec y, \vec t)$ be multilinear in the $x_i$ (and perhaps involving additional variables summarized as $\vec{y}$ and $\vec{t}$). Take $0\le k\le n$, and expand $$ f^*=f((z+1)x_1,\ldots,(z+1)x_n,\vec y, \vec t), $$ where $z$ is a new variable. Then we write $\delta^{(x,n)}_{k,z}(f) := \delta^{(x,n)}_{k,z}(f)(x_1,\ldots, x_{n},z)$ for the homogeneous component of $f^*$ of degree $k$ in the noncommutative variable $z$.
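For instance (a minimal illustration), for $n = 2$ and $f = x_1x_2$ we have $$f^* = (z+1)x_1(z+1)x_2 = x_1x_2 + zx_1x_2 + x_1zx_2 + zx_1zx_2,$$ so $\delta^{(x,2)}_{0,z}(f) = x_1x_2$, $\,\delta^{(x,2)}_{1,z}(f) = zx_1x_2 + x_1zx_2$, and $\delta^{(x,2)}_{2,z}(f) = zx_1zx_2$.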
\begin{prop}[{\cite[Corollary~2.45]{BR}, \cite{zubrilin.1}}]\label{Zub} Let $f(x_1, \dots, x_n, x_{n+1};y)$ be any $(A;n;\bar q)$-quasi-alternating polynomial which is linear in $x_{n+1}$. Also suppose the polynomial of \eqref{alt1} is an identity of $A$. Then $A$ also satisfies the identity \begin{equation}\label{alt2} \sum_{j=0}^n (-1)^j \delta_{j,z}^{(n)}(f(x_1,\ldots,x_n,z^{n-j}x_{n+1}))\equiv 0 \quad \text{modulo } \mathcal{CAP}_{n+1}.\end{equation} \end{prop}
We will need to use Proposition~\ref{Zub} in the general case (Theorem \ref{SpechtNoeth}), even though Remark~\ref{noZub} suffices for the field-theoretic case.
\section{Review of quivers of representations}
Our main tool is the quiver of a representation, which we recall from \cite{BRV2} and \cite{BRV3}. (This differs from the customary definition of quiver, since it is not Morita invariant but takes into account the matrix size.)
\subsection{Full quivers and pseudo-quivers}
\begin{rem}\label{linop} Any representable algebra $A \subseteq \M[n](K)$ has its Wedderburn block form described in detail in \cite{BRV1} and \cite[Definition~3.10]{BRV2}, which is the keystone of \cite{BRV2}. This Wedderburn block form induces an action of $A$ on $\M[n](K),$ by which we view each element $a\in A$ as a linear operator $\ell_a$ on $V = K^n$ via left multiplication. (Likewise, we also have a right action via right multiplication.) In the sequel, we usually consider the algebra $A$ in this context.\end{rem}
For further reference, we also bring in the slightly more general notion of pseudo-quiver, to enable linear changes of basis in the representation.
See \cite{BRV2} and \cite{BRV3} for details about full quivers and pseudo-quivers. It is useful to formulate the definition purely geometrically, without reference to the original algebra.
\begin{defn} An \textbf{(abstract) full quiver} (as well as \textbf{(abstract) pseudo-quiver}) is a directed graph $\Gamma$, without double edges and without cycles, having the following information attached to the vertices and edges:
\begin{enumerate} \item The vertices are ordered, say from $\bf 1$ to $\bf k$, and an edge can only take a vertex to a vertex of higher order. There also are identifications of vertices and of edges, called \textbf{gluing}. Gluing of vertices is of one of the following types: \begin{itemize}\item \textbf{Identical gluing}, which identifies matrix entries in the corresponding blocks; \item \textbf{Frobenius gluing}, which identifies matrix entries in one block with their $q$-th power in another block, where $q$ is a power of $p$;\item \textbf{Gluing up to infinitesimals} described in \cite[Definition 2.3]{BRV3}; \item (in the case of pseudo-quivers) Linear relations among the vertices, cf.~Remark~\ref{myst} below. \end{itemize}
Each vertex is labelled with a roman numeral ($I$, ${I\!\!\,I}$ etc.); glued vertices are labelled with the same
roman numeral.
The first vertex listed in a glued component of vertices is also given a pair of subscripts $(n_{\bf i},t_{\bf i})$: the \textbf{matrix degree} $n_{\bf i}$ and the \textbf{cardinality} $t_{\bf i}$ of the corresponding field extension of $F$.
\item Off-diagonal gluing (i.e., gluing among the edges) has several possible types, including \textbf{Frobenius gluing} and \textbf{proportional gluing} with an accompanying \textbf{scaling factor}. Absence of a scaling factor indicates scaling factor 1; such gluing is called \textbf{identical gluing} when there is no Frobenius twist indicated.
Frobenius gluing of a block with itself and gluing up to infinitesimals can be viewed as modifying the base ring, yielding a commutative affine algebra over a field instead of a field.
Very briefly, the \textbf{quiver of a representation} is obtained by taking the Wedderburn block form of the image, associating vertices to the diagonal blocks and arrows to the blocks above the diagonal. Gluing corresponds to identification of matrix components in the algebra. The pseudo-quiver is obtained when we make extra identifications of the vertices (which results in extra gluing).
\end{enumerate}
\end{defn}
\begin{rem}\label{myst0} (i) Any representation of an algebra $A$ into Wedderburn block form gives rise to a full quiver, constructed as sketched in the definition above; cf.~also Remark~\ref{myst} below.
(ii) Conversely, any abstract full quiver gives rise to a $C$-subalgebra $A$ of $M_n(K)$ in Wedderburn block form, where we read off the diagonal blocks from the vertices (together with the matrix size, base field, and gluing), and then write down the off-diagonal parts from the arrows together with the relations that are registered together with the quiver.
Note that this observation does not require that $C$ be a field. In this way, we can define the algebra of a quiver over an arbitrary integral domain. \end{rem}
\begin{rem}\label{myst} Any full quiver of a representation of an algebra $A$ gives
rise to a pseudo-quiver. Starting with the $k$ vertices $v_1,
\dots, v_k$ corresponding to central idempotents of the blocks, we
proceed as in Remark~\ref{linop}. We identify the entries of the
respective blocks according to gluing (with identical gluing
identifying entries and with Frobenius gluing corresponding to the
Frobenius automorphism). For any two idempotents $e_{\bf i},e_{\bf
j}$ we choose a base of $e_{\bf i} A e_{\bf j}$ for the arrows
between ${\bf i}$ and ${\bf j}$. Then by definition, any two
consecutive vertices have only a single arrow joining them,
although now we must accept new gluing (corresponding to linear dependence) of vertices. \end{rem}
\subsection{Degree vectors}
\begin{defn}\label{dv} The {\bf length} of a path $\mathcal B$ in a pseudo-quiver is its number of arrows, excluding loops, which equals its number of vertices minus $1$. Thus, a typical path has vertices $r_1, \dots, r_{\ell+1}$, where the vertex $r_j$ has matrix degree $n_j$. We call $(n_1, \dots, n_{\ell+1})$ the \textbf{degree vector} of the path $\mathcal B$. We order the degree vectors according to the largest $n_j$ which appears in the distinct glued components, counting multiplicity. More precisely, for any degree vector we discard any duplications (due to gluing), and then let $d_k$ denote the number of components of matrix degree $k$. We order the degree vectors according to these sequences of $d_k$, taken lexicographically (starting from the largest $k$).
For example, the degree vector $(3,1,3,3)$ with no gluing of vertices is greater than $(3,2,2,3,2,3,3)$ with the fourth, sixth, and seventh vertices glued since 3 appears three times unglued
in the first degree vector but only twice in the second degree vector. \end{defn}
\begin{rem}\label{dv2} We also define a secondary order on $\mathcal B$ with respect to the grade defined above, because further gluing will lower the number of elements in the grading monoid. \end{rem}
\subsubsection{Degenerate gluing between branches}$ $
{\bf Degenerate gluing} is the situation in which each edge of one branch is glued to the corresponding edge of another branch; then the two branches produce the same values when we multiply out the elements in the corresponding algebra. We can eliminate degenerate gluing by passing to the pseudo-quiver, but it often is more convenient for us to make use of \cite[Proposition~3.13]{BRV3}:
\begin{prop}\label{degglu} Any representable, relatively free algebra has a representation
whose full quiver has no degenerate gluing. \end{prop}
Since we are working with T-ideals, which correspond to relatively free algebras, this result enables us to bypass pseudo-quivers.
As shown in \cite[Lemma 2.8]{BRV3}, gluing of two vertices of a pseudo-quiver can often be eliminated simply by joining the vertices (since the arrows now are linear operators).
We call this process \textbf{reducing} the pseudo-quiver,
and we always assume in the sequel that our pseudo-quivers are reduced.
Conversely, given a quiver or pseudo-quiver $\Gamma$, one can take an arbitrary commutative Noetherian algebra $K$ and build a representable algebra $A$ into $\M[n](K)$ from $\Gamma$. One theme of \cite{BRV3} is how the geometric properties of $\Gamma$ yield identities and non-identities of~$A$.
Linear relations among the vertices of a pseudo-quiver can only occur if all paths between these vertices have the same ``grade,'' as described in \cite[\S 2.7]{BRV3}. This becomes somewhat intricate in characteristic $p$, in the presence of Frobenius gluing, so we review the idea here. Later on, we will need an inductive procedure which applies when the gluing is strengthened. In principle, this is obvious, because any further gluing which does not lower the degree vector must lower the power of $q$ used in the Frobenius twist, and so this must terminate after a finite number of steps. To state this formally requires some technical details, which we review from \cite{BRV3}.
Take $A$ to be the Zariski closure of a representable relatively free algebra, having full quiver $\Gamma$. We write $\mathcal M_\infty$ for the multiplicative monoid $\set{1,q,q^2,\dots,\epsilon}$, where $\epsilon a = \epsilon$ for every $a \in \mathcal M_{\infty}$. (In other words, $\epsilon$ is the ``zero'' element adjoined to the multiplicative monoid $\langle q \rangle$.) When the base field $F$ is infinite, the full quiver can be separated (replacing ${\alpha}$ by ${\alpha} \gamma$ where $\gamma^q \ne \gamma$), and the algebra can be embedded in a graded algebra. In other words, each diagonal block is $\mathcal M_\infty$-graded, where the indeterminates~${\lambda}_{\bf i}$ are given degree 1; hence, $A$ is naturally $\overline{\mathcal M}$-graded.
When $F$ is a finite field of order $q$, $\mathcal M_m$ denotes the monoid obtained by adjoining a ``zero'' element $ \epsilon$ to the subgroup $\langle q \rangle$ of the Euler group $U(\Z_{q^m-1})$, namely $\mathcal M_{m} = \set{1,q,q^2,\dots,q^{m-1}, \epsilon}$
where $\epsilon a = \epsilon$ for every $a \in \mathcal M_{m}$.
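For example, for $q = 2$ and $m = 3$ we have $q^m - 1 = 7$, the subgroup $\langle 2 \rangle$ of $U(\Z_{7})$ is $\{1,2,4\}$, and $\mathcal M_{3} = \{1,2,4,\epsilon\}$, with (for instance) $2\cdot 4 = 1$ and $\epsilon a = \epsilon$ for all $a$.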
Let
$\overline{\mathcal M}$ be the semigroup $\mathcal M /\!\!\sim$, where $\sim$ is the equivalence relation obtained by matching the degrees of glued variables: When two vertices have Frobenius gluing $\alpha {\rightarrow} \phi^i(\alpha)$, we identify $1$ with $q^i$ in their respective components.
By \cite[Lemma~2.7]{BRV3}, every $\mathcal M_{m}$ is a quotient of $\mathcal M_{\infty}$, and more generally whenever $m {\!\,{|}\!\,} m'$, the natural group projection $\mathbb Z_{q^{m'}-1} \to \mathbb Z_{q^m-1}$ extends to a monoid homomorphism $\mathcal M_{m'} \to \mathcal M_{m}.$
The diagonal blocks $S_r$ of $A$ (under multiplication) can be viewed as $\mathcal M_{t_r}$-modules, where we define the product $[q^i]a$ to be $a^{q^i}$, and $[\epsilon]a = 0$. $S_r$ itself is not graded, and we need to pass to the larger algebra $B$ arising from the sub-Peirce decomposition of~$A$, cf.~\cite[ Remark~2.33]{BRV3}.
Let $A_{r,r'}$ denote the $(r,r')$-sub-Peirce component of $A$. Then $A_{r,r'}$ is naturally a left $F_r$-module and right $F_{r'}$-module, so $A_{r,r'}$ is graded by the monoid $\mathcal M_{\hat r}$ obtained from $F_{\hat r}$, the compositum of $F_r$ and $F_{r'}$. (In other words, if $F_r$ has $q^t$ elements and $F_{r'}$ has $q^{t'}$ elements, then $F_{\hat r}$ has $q^{\hat t}$ elements, where $\hat t = \operatorname{lcm}(t,t').$) Since the free module is graded and the monoid $\mathcal M_{\hat r}$ is invariant under the Frobenius relations, we see that the Frobenius relations preserve the grade under $\mathcal M_{\hat r}$.
\subsection{Canonization Theorems}
\begin{defn} A full quiver (resp.~ pseudo-quiver) is \textbf{basic} if it has a unique initial vertex $r$ and unique terminal vertex $s$. A basic full quiver (resp.~ pseudo-quiver) $\Gamma$ is \textbf{canonical} if it has vertices $r'$ and $s'$ satisfying the following properties:
\begin{itemize} \item $\Gamma$ starts with a unique path $p_0$ from $r$ to $r'.$ Thus, every vertex not on $p_0$ succeeds~$r'$. \item $\Gamma$ ends with a unique path $p_0'$ from $s'$ to $s.$ Thus, every vertex not on $p_0'$ precedes~$s'$. \item Any two paths from the vertex $r'$ to the vertex $s'$ have the same grade.
\end{itemize}
An \textbf{enhanced canonical full quiver (resp.~pseudo-quiver)} is a canonical full quiver (resp.~pseudo-quiver) with uniform grade. \end{defn}
We recall the following ``canonization theorems:''
\begin{itemize} \item \cite[Theorem~6.12]{BRV2}. Any relatively free affine PI-algebra $A$ has a representation for whose full quiver all gluing is Frobenius proportional.
\item \cite[Theorem~3.5]{BRV3}. Any basic full quiver (resp. pseudo-quiver) of a relatively free algebra can be modified (via a change of base) to an enhanced canonical full quiver (resp.~ pseudo-quiver).
\item \cite[Corollary~3.6]{BRV3}. Any relatively free algebra is a subdirect product of algebras with representations whose full quivers (resp.~pseudo-quivers) are enhanced canonical.
\item \cite[Theorem~3.12]{BRV3}. For any $C$-closed T-ideal $\mathcal I$ of a relatively free algebra $A$, the full quiver of $A'= A/\mathcal I$ is obtained by means of the following elementary operations on the full quiver of $A$: gluing, new linear dependences on the vertices, and new relations on the base ring. \end{itemize}
Thus, the PI-theory can be reduced to the case of enhanced canonical full quivers. Accordingly, all of the full quivers and pseudo-quivers that we consider in this paper will be enhanced canonical.
\section{Evaluations of polynomials arising from algebras of full quivers}
We get to the crucial point of this paper, which is how to evaluate a polynomial $f(x_1,x_2 ,\dots )$ on a representable algebra $A$, in terms of the full quiver $\Gamma$ of $A$. This question is quite difficult in general, but we note that the quasi-linearization of $f$ (as defined above) is in the T-ideal generated by $f$, so we may assume that $f$ is quasi-linear. Thus, by Remark~\ref{sub1}, the evaluations of a quasi-linear polynomial $f(x)$ are spanned by the evaluations obtained by pure specializations of the indeterminates $x_i$ to~$S\cup J$.
The reader should already note that all of the proofs of this section are algorithmic, involving only a finite number of steps. This observation will be needed below in the reduction of the base ring from an integral domain to a field, in the second proof of Theorem~\ref{SpechtNoeth}.
\begin{rem}\label{expans} Any nonzero evaluation arises from a string of substitutions $x_i \mapsto \overline{x_i}$ to elements corresponding to some path of the full quiver $\Gamma$. (We are permitted to have substitutions repeating in the same matrix block, i.e., the vertex repeats via a loop.) The $\overline{x_i}$ connect two vertices, say of matrix degree $n_i$ and $n'_{i}.$ Suppose $t$ is the nilpotence index of the radical $J$. Then any string involving $t$ radical substitutions is~0. If we replace $x_i$ by $h_{n_i}x_{i,1}\cdots x_{i,t} h_{n_i}x_{i}$, then we still get the same evaluation when $x_{i,1}, \cdots, x_{i,t}$ are specialized to the identity matrices in $n_i \times n_i$ matrix blocks, which in particular are semisimple substitutions. Note that the $h_{n_i}$ were inserted in order to locate the semisimple substitutions of the $x_{i,1}$ inside matrix blocks of size at least $n_i \times n_i$.
On the other hand, the number of radical substitutions must be at most the nilpotence index of $A$, so at least one of these extra substitutions must be semisimple, if we are still to have a nonzero evaluation. By taking $h_{n_i}x_{i,1}^{\bar q}x_{i,2}\cdots x_{i,t} h_{n_i}x_{i}$ ($\bar q$ as in Remark~\ref{qwhat}), we force this radical substitution to come from $\overline{x_{i,1}}\, ^{\bar q}.$
Thus, this process eventually will yield a polynomial having a nonzero specialization corresponding to a path whose degree vector involves a semisimple substitution at each matrix block.\end{rem}
\subsection{Characteristic coefficient-absorbing polynomials inside T-ideals}\label{trab}
The main goal of \cite{BRV3} was to show that any relatively free representable algebra $A$ has a T-ideal in common with a Noetherian algebra $\tilde A$ which is a finite module over a commutative affine algebra when $A$ is affine. This T-ideal yields a non-identity of a branch of $A$. The method was to define some action of characteristic coefficients (i.e., coefficients of the characteristic polynomial) of elements of a Zariski-closed algebra $A$, such that the values of the nonidentity obtained from its full quiver are closed under multiplication by these characteristic coefficients. This enabled us to preserve Hamilton-Cayley type properties in the evaluations of diagonal blocks. Here we use the same techniques, but need to refine them in order to obtain the desired polynomial within a given T-ideal obtained from an arbitrary polynomial (not necessarily arising from a branch).
Because we are working with quasi-linear polynomials instead of multilinear polynomials, we must utilize only $\bar q$-characteristic coefficients instead of all characteristic coefficients. Let us recall, for example, \cite[Lemma~5.1]{BRV3}:
\begin{lem}\label{barq} Suppose $A$ is a representable algebra (say $A \subseteq \M[n](K)$) over a field of characteristic $p$. If, for some $m$,
$\bar q = p^m >n$ (so that $\bar q$ is greater than the nilpotence index of the Jacobson radical), then the element $a^{\bar q}$ is semisimple for every $a \in A$. \end{lem}
This was enough to reduce various problems in \cite{BRV3} and \cite{BRV4} to the characteristic 0 case, effectively replacing $\bar q$ by 1, but does not suffice here to reduce Specht's problem to the characteristic 0 case. Nevertheless, it is still a key element of the proof, when we keep track of $\bar q$.
Again, our objective is to apply the celebrated theorem of
Shirshov \cite[Chapter 2]{BR} to adjoin $\bar q$-characteristic coefficients to $A$ and obtain
an algebra finite as a module over a commutative affine
$F$-algebra. For technical reasons, we only succeed in adjoining
$\bar q$-powers of characteristic coefficients, so we formulate the following definition, modified from \cite{BRV3}:
\begin{defn}\label{absorp} Given a quasi-linear polynomial $f(x;y)$ in indeterminates labelled $x_i,y_i$, we say $f$ is {\bf $\bar q$-characteristic coefficient-absorbing} with respect to a full quiver $\Gamma = \Gamma(A)$ if the following properties hold:
\begin{enumerate} \item $f$ specializes to 0 under any substitution in which at least one of the $x_i$ is specialized to a radical element of $A$. (In other words, the only nonzero values of $f$ are obtained when all substitutions of the $x_i$ are semisimple.)
\item $f(\mathcal A(\Gamma))^+$
absorbs multiplication by any $\bar q$-characteristic coefficient of any
element in a simple (diagonal)
matrix block of $\mathcal A(\Gamma)$.
\end{enumerate}
\end{defn}
There are two ways of obtaining intrinsically the coefficients of the characteristic polynomial $$f_a = {\lambda}^n + \sum_{k=1}^{n-1} (-1)^k {\alpha} _k (a) {\lambda} ^{n-k}$$ of a matrix $a$. Fixing $k$, we write ${\alpha}$ for ${\alpha}_k.$ (For example, if $k=1$ then ${\alpha} (a) = \operatorname{tr} (a)$.)
Recall from \cite[Definition~2.23]{BRV3} that we defined \begin{equation}\label{trmat} \operatorname{tr}_{\operatorname{mat}}(a) = \sum _{i,j = 1}^n e_{ij} a e_{ji},\end{equation} called the {\bf matrix definition} of trace. We need a generalization.
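Before giving it, note that over a commutative base ring a direct computation justifies the terminology: $e_{ij} a e_{ji} = a_{jj} e_{ii}$, so $$\operatorname{tr}_{\operatorname{mat}}(a) = \sum_{i,j=1}^n e_{ij} a e_{ji} = \operatorname{tr}(a)\cdot 1,$$ the scalar matrix corresponding to the usual trace.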
\begin{defn} \label{trmat0} In any matrix ring $\M[n](W)$, we define \begin{equation}
\a_{\operatorname{mat}}(a) := \sum _{j=1}^n \sum e_{j,i_1} a e_{i_2,i_2} a \cdots a e_{i_k,i_k} a
e_{i_1,j},\end{equation} where the inner sum is taken over all vectors $(i_1,\dots,i_k)$ of length $k$. \end{defn}
Of course, these characteristic coefficients $ \a_{\operatorname{mat}}(a) $ commute iff $W$ is a commutative ring. This is a key issue that we will need to address.
\begin{lem}\label{L1} For any $\M[n](F)$-quasi-linear polynomial $f(x_1, x_2, \dots)$ which is also $\M[n](F)$-quasi-homogeneous of degree $\bar q$ in $x_1$, the polynomial $$\hat f = f(c_{n^2}(y) x_1 c_{n^2}(z), x_2, \dots)$$ is $\bar q$-characteristic coefficient absorbing in $x_1$. \end{lem} \begin{proof} The same proof as in \cite[Theorem~J, Equation~1.19, page~27]{BR}, when the assertion is formulated as: \begin{equation}\label{traceab00} \alpha_k ^{\bar q}f(a_1, \dots, a_t, r_1, \dots, r_m) = \sum f(T^{k_1}a_1, \dots, T^{k_t}a_t, r_1, \dots, r_m),\end{equation} summed over all vectors $(k_1, \dots, k_t)$ with each $k_i \in \{ 0, 1\}$ and $k_1 + \dots + k_t = k,$ where $\alpha_k $ is the $k$-th characteristic coefficient of a linear transformation $T {\,{:}\,} V \to V,$ and $f$ is $(A;t;\bar q)$-quasi-alternating. \end{proof}
\begin{rem}\label{CHid} Notation as in \eq{traceab00}, the Cayley-Hamilton identity for $n_i \times n_i$ matrices which are evaluations of $f$ is $$0 = \sum_{k=0}^{n_i} \alpha_k ^{\bar q}f(a_1, \dots, a_t, r_1, \dots, r_m) = \sum_{k_1,\dots,k_t} f(T^{k_1}a_1, \dots, T^{k_t}a_t, r_1, \dots, r_m),$$ which is thus an identity in the T-ideal generated by $f$. \end{rem}
Note that this is the same argument as used by Zubrilin in the proof of Proposition~\ref{Zub}.
Iteration yields:
\begin{prop}\label{q2} For any polynomial $f(x_1, x_2, \dots)$ quasi-linear in $x_1$ with respect to a matrix algebra $\M[n](F)$, there is a polynomial $\hat f $ in the T-ideal generated by $f$ which is $\bar q$-characteristic coefficient absorbing. \end{prop}
\begin{defn} Fixing $0 \le k <n,$ we denote the $k$-th $\bar q$-characteristic coefficient of $a$, defined implicitly in \Lref{L1}, as $\operatorname{\alpha}_{\operatorname{pol}}^{\bar q}(a)$. (Strictly speaking, $k$ should be included in the notation, but since $k$ is taken arbitrarily in our results, we do not bother to specify it.) \end{defn}
\begin{defn}\label{HCind} We call the identity $\sum_{k_1,\dots,k_t} f(T^{k_1}x_1, \dots, T^{k_t}x_t, r_1, \dots, r_m)$ obtained in \Rref{CHid} the {\bf Hamilton-Cayley identity induced by $f$}. \end{defn}
The following result holds for arbitrary algebras of paths.
\begin{rem}\label{trac3} We need an action of matrix characteristic coefficients (computed on the diagonal components of the given representation of $A$) on the T-ideal of~$f$. To do this, one computes the characteristic coefficient as in Definition~\ref{trmat0}, and applies this on each (glued) matrix component, i.e., the Peirce component corresponding to vertices on each side of an arrow. More precisely, suppose we have two Peirce components, whose idempotents are $e_r = \sum_k e_{r,k}$ and $e_s = \sum_\ell e_{s,\ell}$. For any arrow $\alpha$ from (non-glued) vertices $r_i$ to $s_\ell$, we consider the matrix $(a_{uv})$ corresponding to $\alpha$, and take characteristic coefficients on the $r_k$-diagonal component on the left, and the $s_\ell$-diagonal component on the right. In other words, if the vertex corresponding to $r$ has matrix degree $n_i$, taking an $n_i \times n_i$ matrix $w$, we define ${\operatorname{\alpha}_{\operatorname{pol}}^{\bar q}} _u(w)$ as in the action of \Lref{L1} and then the left action \begin{equation}\label{mtr1} a_{u,v} \mapsto {\operatorname{\alpha}_{\operatorname{pol}}^{\bar q}}_u (w) a_{u,v}. \end{equation} Likewise, for an $n_j \times n_j$ matrix $w$ we define the right action \begin{equation}\label{mtr2} a_{u,v} \mapsto a_{u,v} {\operatorname{\alpha}_{\operatorname{pol}}^{\bar q}} _v (w). \end{equation} (However, we only need the action when the vertex is non-empty; we forego the action for empty vertices.) \end{rem}
We can proceed further whenever these two $\bar q$-characteristic coefficient actions coincide on the T-ideal of $f$.
\begin{lem} \label{onecomp0}
$\a_{\operatorname{mat}}(a)^{\bar q} = \operatorname{\alpha}_{\operatorname{pol}}^{\bar q}(a)$ in $\M[n](C)$ for $C$ commutative.
Thus, left multiplication by $\a_{\operatorname{mat}}(a)$ acts on the set of evaluations of any $n_i^2$-alternating polynomial $f(x;y)$ on an $n_i \times n_i$ matrix component. \end{lem} \begin{proof} Follows at once from Equation ~\eqref{traceab00}. \end{proof}
\subsection{Identification of matrix actions for unmixed substitutions} Our main objective is to introduce $\bar q$-characteristic coefficient-absorbing polynomials corresponding to all
canonical full quivers, in order to identify these two notions of $\bar q$-characteristic coefficients, working with matrix substitutions inside a given polynomial. We have the natural bimodule action of $\bar q$-characteristic coefficients on $A$ given in terms of the full quiver, which we can identify with $\operatorname{\alpha}_{\operatorname{pol}}^{\bar q}(a),$ defined in Equation~\eqref{traceab00}, whenever the matrix characteristic coefficients commute. The theory subdivides into two cases: \begin{itemize} \item The substitution of an indeterminate is to sums of elements in the same glued Wedderburn component. \item The substitution of an indeterminate is to sums of elements in different glued Wedderburn components. \end{itemize} The techniques are different. We start with the first sort of situation, which we call ``unmixed,'' and which can be treated via the argument of \Lref{onecomp0}. For the second sort of situation, which we call ``mixed,'' \Lref{sub2} is applicable, but requires an intricate ``hiking procedure'' (defined presently) on the quasi-linearization of a polynomial. \begin{rem}\label{onecomp} The argument of \Lref{onecomp0} holds for a single diagonal matrix component over a commutative ring. \end{rem}
Recall that the T-\textbf{space} of a polynomial $f$ on an $F$-algebra $A$ is defined as the $F$-subspace of $A$ spanned by the evaluations of $f$ on $A$. Ironically, the sophisticated hiking procedure fails to handle the unmixed case, since it relies on a T-space argument, which must fail in view of Shchigolev's counterexample to the ACC for T-spaces \cite{Sh}. So the theory actually requires separate treatment of the ``degenerate'' unmixed case. The proof of Remark~\ref{onecomp} involves the full force of T-ideals rather than T-spaces.
\subsection{Hiking}
We arrive at the main new idea of this paper. Let $A$ be a Zariski closed algebra, and let $f$ be a quasi-linear non-identity. The goal is to replace $f$ by a better structured non-identity in its T-ideal, for which the $\bar q$-characteristic coefficients of the matrix blocks defined in components of the full quiver commute with each other and also with radical substitutions of arrows connecting glued vertices. This enables us to compute these $\bar q$-characteristic coefficients in terms of polynomial evaluations. We must cope with the possibility that our semisimple substitution
has been sent to the `wrong' component, either because its matrix degree is too large or the base field is of the wrong size.
We write $[a,b]$ for the additive commutator $ab-ba,$ and $[a,b]_q$ for the \textbf{Frobenius commutator} $ab -b^q a$. \begin{lem}\label{hike00} If $f(x_1, \dots, x_n)$ is any
polynomial quasi-linear in $x_i$, then \begin{equation}\label{comm1} f(a_1, \dots, [a,a_i],\dots, a_n) = f(a_1, \dots, aa_i,\dots, a_n)- f(a_1, \dots, a_i a,\dots, a_n),\end{equation} and, more generally, \begin{equation}\label{comm11} f(a_1, \dots, [a,a_{i_1}\cdots a_{i_k}],\dots, a_n) = \sum _{j=1}^k f(a_1, \dots, a_{i_1}\cdots [a, a_{i_j}]\cdots a_{i_k},\dots, a_n),\end{equation} for all substitutions in $A$. \end{lem} \begin{proof} By quasi-linearity, we may assume that $f$ is a monomial, in which case we see that all of the intermediate terms cancel. \end{proof} We recall the hiking procedure of \cite[Lemma~5.8]{BRV3}, but the hiking procedure here is quite subtle, and requires four different stages.
\subsubsection{Stage 1 hiking}
\begin{lem}\label{prehike} Suppose a quasi-linear nonidentity $f$ of a Zariski closed algebra $A$ has a nonzero value for some semisimple substitution of some $x_i$ in $A$, corresponding to an arrow in the full quiver whose initial vertex is labelled by $(n_i, t_i)$ and whose terminal vertex is labelled by $(n_i', t_i')$. Replacing $x_i$ by $[x_i, h_{n_i}]$ (where the $h_{n_i}$ involve new indeterminates) yields a quasi-linear polynomial \begin{equation}\label{hik01} f(\dots, [x_i, h_{n_i}], \dots)\end{equation} in which any substitution of $x_i$ into this diagonal block yields 0.\end{lem} \begin{proof} The evaluations of $h_{n_i}$ in the semisimple part are central; hence, any nonzero value in $ f(\dots, [x_i, h_{n_i}],\dots)$ forces us into a radical substitution.\end{proof}
\begin{lem}\label{hike} For $f$ as in \Lref{prehike}, \begin{equation}\label{hik02} \nabla_i f := f(\dots, [x_i, h_{\max\{n_i, n_{i'}\}}], \dots)\end{equation} also does not vanish on $A$. In the case of Frobenius gluing $x \mapsto x^{q^\ell}$, we instead use the Frobenius commutator, taking $$\nabla_i f := f (\dots, [x_i, h_{\max\{n_i, n_{i'}\}}]_{q^\ell}, \dots).$$ \end{lem} \begin{proof} There are substitutions in the appropriate diagonal block (the one whose degree is $\max\{n_i, n_{i'}\}$) for which
$h_{\max\{n_i, n_{i'}\}}$ is a nonzero scalar, and we specialize $x_i$ to an element which passes from one block to the other. \end{proof}
\begin{cor}
Any nonidentity can be hiked via successive stage 1 hiking to ensure semisimple substitutions
in each matrix component. \end{cor} \begin{proof} Once we have a nonzero substitution of~$f$ with external radical substitutions in all the hiked positions, we may continue to hike as much as we want without affecting the fact that we have a nonzero substitution, i.e., that $f$ is a nonidentity.\end{proof} \begin{exmpl} Stage 1 hiking is illustrated via the full quiver given for the Grassmann algebra on two generators: \begin{equation}\label{G2+} \xymatrix@C=40pt@R=32pt{ I \ar@/^0pt/[r]^{\alpha} \ar@/_0pt/[d]^{\beta} & I \ar@/^0pt/[d]^{-\beta} \\ I \ar@/^0pt/[r]^{\alpha} & I } \end{equation}
Clearly the critical nonidentity for each branch is $[x_1, x_2],$ and we get the Grassmann identity $[[x_1, x_2],x_3]$ by taking $f= x_1$ and hiking.
\end{exmpl}
We call this procedure (application of \eqref{hik02})
\textbf{stage 1} hiking, since we also need other forms of hiking which we call \textbf{stage 2}, \textbf{stage 3} and \textbf{stage 4} hiking.
\begin{rem}
Stage 1 hiking absorbs all internal radical substitutions, cf.~Definition~\ref{intern}, because of the use of the central polynomial $h_{\max\{n_i, n_{i'}\}},$ so when working with fully hiked polynomials, we need consider only the Peirce decomposition (and not the more complicated sub-Peirce decomposition, see \cite{BRV1}). In this manner, stage 1 hiking leads us to external radical substitutions for $x_i$, say from a block of degree $n_i$ to a block of degree $n_{i+1}.$ \end{rem}
Explicitly, after stage 1 hiking, we have obtained expressions of the form \begin{equation}\label{hik2} g_i(x,y,z) = z_{i,1}[ h_{\max\{n_i, n_{i'}\}}(x_{i,1}, x_{i,2},\dots),y_i]z_{i,2}.\end{equation} To simplify notation, we assume $n_{i+1} = n_i'.$ Now we define \begin{equation}\label{fullv1} \tilde f
= f(h_{n_1}, g_1, h_{n_2}, g_2, \cdots, g_{\ell} ,h_{n_{\ell}+1}),\end{equation} where different indeterminates are used in each polynomial, in which we get the term \begin{equation}\label{fullv2} h_{n_1} g_1 h_{n_2} g_2 \cdots g_{\ell}
h_{n_{\ell}+1}.\end{equation}
Since the radical of a Zariski closed algebra $A$ is nilpotent, we can perform stage 1 hiking on $f$ only at a finite number of different positions (bounded by the nilpotence index of $A$) before getting an identity. Stopping before the last such hike gives us a nonidentity which would become an identity after any further hike of stage 1.
Unfortunately, our polynomial~$f$ has several different monomials, and when we hike with respect to one of these monomials, some other monomial will give us some permutation of \eqref{fullv2}, in which the substitutions might go into the ``wrong component.'' The difficulty that we will encounter is that any matrix component can be embedded naturally into a larger matrix component, so a given matrix substitution could be viewed as being in this larger component, thereby ruining our attempts to compute with polynomials on each individual matrix component. Indeed, even a radical substitution could be replaced by a semisimple substitution in a larger component.
\subsubsection{Stage 2 hiking}
In view of the previous paragraph, we also need a second stage of hiking, to take care of substitutions into the ``wrong'' component.
Given a nonzero specialization of a monomial of $f$ under the substitutions $x_i \mapsto \overline{x_i}$, $i \ge 1,$ where $\overline{x_i} \in \M[n_i](K)$, consider the specialization of another (permuted) monomial of $f$ under the substitutions $x_i \mapsto \overline{x_i}'$, $i \ge 1,$ where $\overline{x_i}' \in \M[n_j](K)$ (and perhaps $j \ne i$).
\begin{exmpl} Consider the algebra $$\set{\left(\begin{matrix} {\alpha} & * & *& * \\ 0 & \beta & * & * \\ 0 & 0 & * & * \\
0 & 0 & * & *
\end{matrix}\right) : {\alpha}, \beta \in F},$$ where $*$ denotes an arbitrary element in $K$. The corresponding full quiver $I_{(1,1)} \to {I\!\!\,I}_{(1,1)}\to {I\!\!\,I\!\!\,I}_2$ would normally give us the polynomial $$ z_{1,1}[ x_{1,1},y_1]z_{1,2}z_{2,1}[ h_2(x_{2,1}, \dots),y_2] z_{3,1},$$ which could be condensed to $[ x_{1,1},y_1 ]z [ h_2(x_{2,1}, \dots),y_2]$ since various indeterminates can be specialized to 1. But if $f(x_1,x_2,x_3, \dots)$ has both monomials $x_1x_2x_3\cdots $ and $x_3x_2x_1\cdots $ then hiking in the second monomial yields the permuted term $$[ h_2(x_{2,1}, \dots),y_2]z[ x_{1,1},y_1 ],$$ which permits a nonzero evaluation with all substitutions in the lower $2\times 2$ matrix component, and we cannot get a proper hold on the substitutions. \end{exmpl}
We need to hike $f$ further, to guarantee that the specialization of some unintended monomial of~$f$ in \eqref{fullv1} does not land in a subsequent matrix component $\M[n_j](K)$ for $n_j>n_i$; our next stage of hiking eliminates all such specializations.
\begin{lem}\label{hike2} Let $H$ denote the central polynomial $h_{n_{j}}^{\bar q}$ of $M_{n_j}(K),$ and take $$z_{i,1}[ h_{n_i}(x_{i,1}, x_{i,2},\dots),y_i]z_{i,2}g_{i+1} \cdots g_{j-1} H^{q_1} - z_{i,1}H^{q_2}[ h_{n_i}(x_{i,1}, x_{i,2},\dots), y_i]z_{i,2}g_{i+1} \cdots g_{j-1},$$ (for each pair $(q_1,q_2)$ that occurs in Frobenius twists in the branch; in characteristic 0 we would just take $q_1=q_2= 1$). The product of these terms, taken over all the $q_1$ and~$q_2$, becomes nonzero iff the substitution is to the $i$ component (of matrix size $n_i$). \end{lem} \begin{proof} Specializing this expression into the $j$-component (of size $n_j$) would yield two equal terms which cancel, since $H$ takes on scalar values, and thus yield 0. But specializing into the $i$ component (of size $n_i$) would yield one term nonzero and the other~0 since $H$ is an identity on $n_i \times n_i$ matrices, so their difference would be nonzero. \end{proof}
In this way, we eliminate the ``wrong'' specializations in other monomials of $f$ while preserving the ``correct'' ones. The modification of $f$ according to this specialization is called \textbf{stage 2 hiking of the polynomial~$f$}.
\subsubsection{Stage 3 hiking} So far we have guaranteed the specializations to be in the matrix components of the correct size, but we need to fine-tune still further because the centers of the components may be of different sizes. Next, we want to reduce to the case where, for any two branches, the base field for each vertex has the same order. Suppose $\mathcal B'$ is another branch with the same degree vector. If the corresponding base fields for the $i$-th vertex of $\mathcal B$ and $\mathcal B'$ have orders $q^{n_i}$ and $q^{n'_i}$ respectively, we take $t_i = q^{n'_i}$ and replace $x_i$ by $(h_{n_i}^{t_i}-h_{n_i})x_i.$ The modification of $f$ according to this specialization is called \textbf{stage~3 hiking of the polynomial~$f$}.
\subsubsection{Stage 4 hiking}
Some of the radical substitutions are \textbf{internal} in the sense that they occur in a diagonal block (after ``gluing up to infinitesimals''). Hiking absorbs all internal radical substitutions, because of the use of the central polynomial~$h_{n_i}$, so when working with fully hiked polynomials, we need consider only the Peirce decomposition (and not the more complicated sub-Peirce decomposition; cf.~\cite{BRV1}).
\begin{lem}\label{modhike1} There is a substitution to hike $f$ further such that \begin{equation}\label{hikemore0}\tilde c_{n_i^2}(y) x_i c_{{n'_i}^2}(y)\tilde c_{n_i^2}({\alpha} _k y) x_i c_{{n'_i}^2}({\alpha} _k y) - \tilde c_{n_i^2}({\alpha} _k y) x_i c_{{n'_i}^2}({\alpha} _k y) \tilde c_{n_i^2}(y) x_i c_{{n'_i}^2}(y)\end{equation} vanishes under any specialization to the $n_i, n'_i$ blocks. \end{lem}\begin{proof} By Proposition~\ref{q2} there is a Capelli polynomial $\tilde c_{n_i^2}$ and $p$-power $\bar q$ such that \begin{equation}\label{CC=CC} \tilde c_{n_i^2}( {\alpha}_k y) x_i c_{{n'_i}^2}(y) = {\alpha}_k ^{\bar q}(y_1)c_{n_i^2}(y) x_i c_{{n'_i}^2}(y)\end{equation} on the diagonal blocks.
Since $\bar q$-characteristic coefficients commute on any diagonal block, we see from this that \begin{equation}\label{hikemore}\tilde c_{n_i^2}(y) x_i c_{{n'_i}^2}(y)\tilde c_{n_i^2}({\alpha} _k y) x_i c_{{n'_i}^2}({\alpha} _k y) - \tilde c_{n_i^2}({\alpha} _k y) x_i c_{{n'_i}^2}({\alpha} _k y) \tilde c_{n_i^2}(y) x_i c_{{n'_i}^2}(y)\end{equation} vanishes identically on any diagonal block, where $z = {\alpha}_k y$.
One concludes from this that substituting~\eqref{hikemore} for $x_i$ would hike our polynomial one step further. But there are only finitely many ways of performing this hiking procedure. Thus, after a finite number of hikes, we arrive at a polynomial in which we have complete control of the substitutions and the $\bar q$-characteristic coefficients commute. \end{proof}
\subsection{Admissible polynomials}$ $
Although hiking is the key tool in this analysis, one must note that the only time that the hiking procedure makes the two actions of Remark~\ref{trac3} coincide is when it is applied to the components of maximal matrix degree. In other words, the polynomial $f$ is required to have a nonzero evaluation on a vector of maximal matrix degree. Thus we must consider the following sort of polynomial.
\begin{defn} A polynomial $f$ is $A$-\textbf{admissible} on a Zariski-closed algebra $A$ if $f$ takes on some nonzero evaluation on a vector of maximal matrix degree. We denote such a vector as $v_\mathcal B $, where $\mathcal B $ is the branch of the full quiver which gives rise to $v_\mathcal B $, and call $v_\mathcal B $ the \textbf{matrix vector of} $f$. \end{defn}
If all of the substitutions of an indeterminate are to glued edges of the full quiver, connecting vertices corresponding to the same matrix degree $n_1$, then the notion of admissibility is irrelevant, since one could just replace $f$ by $c_{n_1^2}f$ and obtain the desired action on the matrix component from equation~\eqref{traceab00}.
But there is a subtle difficulty. Without gluing, we could proceed directly via Remark~\ref{trac3}. But gluing between branches leads to complications in applying Remark~\ref{trac3}. For example, one could have two strings $I \to {I\!\!\,I} \to {I\!\!\,I\!\!\,I}$ and $I \to {I\!\!\,I\!\!\,I} \to {I\!\!\,I}$, whereby the substitutions in $f$ go to incompatible components. The following definition, inspired by Drensky~\cite{D}, enables us to bypass this difficulty in the proof of the next theorem.
\begin{defn}\label{sym} Given matrices $a_1, \dots, a_k,$ the \textbf{symmetrized} $(t;j)$ characteristic coefficient is the $j$-elementary symmetric function applied to the $t$-characteristic coefficients of $a_1, \dots, a_k.$ \end{defn} For example, taking $t=1$, the symmetrized $(1;j)$-characteristic coefficients are $$\sum_{j=1}^k \operatorname{trace}(a_j),\quad \sum _{j_1 >j_2} \operatorname{trace}(a_{j_1})\operatorname{trace}(a_{j_2}),\quad \dots , \quad \prod _{j=1}^k \operatorname{trace}(a_{j}).$$
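For instance, when $k=2$ and $t=1$, the symmetrized $(1;1)$- and $(1;2)$-characteristic coefficients are $\operatorname{trace}(a_1)+\operatorname{trace}(a_2)$ and $\operatorname{trace}(a_1)\operatorname{trace}(a_2)$; both are visibly unchanged when $a_1$ and $a_2$ are interchanged, which is the point of the symmetrization.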
\begin{thm}\label{traceq2} Suppose $f $ is an $A$-admissible non-identity of a representable, relatively free algebra~ $A$. Then the T-ideal $\mathcal I$ generated by $f$ contains a $\bar q$-characteristic coefficient-absorbing $A$-admissible non-identity $\tilde f$.
Furthermore, the T-ideal $\mathcal I_\mathcal B$
of all fully hiked $A$-admissible polynomials obtained from the degree vector $v_\mathcal B$ consists of evaluations of $\bar q$-characteristic coefficient-absorbing polynomials, which in turn are sums of evaluations on pure specializations in $\mathcal B$. \end{thm} \begin{proof} Let $\Gamma$ be the full quiver of $A$. We follow the proof of \cite[Theorem~5.8]{BRV3}, and in particular the process of building the polynomial $\Phi$ of \cite[Definition~4.11]{BRV3}, but we must be more careful. Whereas in \cite[Theorem~5.12]{BRV3} we were looking for any trace-absorbing non-identity and thus could hike the polynomial of an arbitrary maximal path of a pseudo-quiver, now we need to work within the polynomial $f$ belonging to a given T-ideal and thus work simultaneously with all maximal paths of the pseudo-quiver of the corresponding relatively free algebra.
Our polynomial $f$ need not be multilinear, although we can make it quasi-linear with respect to $A$. Since the other polynomials we insert are multilinear, $\tilde f$ remains $A$-quasi-linear. But $f$ need not even be $A$-quasi-homogeneous, so multiplying some variable by a $\bar q$-characteristic coefficient might throw $f$ out of the T-ideal $\mathcal I$.
If the base field $F$ were infinite, we could apply \Lref{quasi} to replace $f$ by an $A$-quasi-homogeneous polynomial, but in general this cannot be done so easily. Accordingly, we adopt the following strategy in characteristic $p$:
We say that two strings in $A := A(\Gamma)$ are \textbf{compatible} if their degree vectors are the same. (This means that the matrix sizes of their semisimple substitutions match.) Our overall goal is to modify $f$ so that all non-zero substitutions in $f$ are compatible, and also to match the Frobenius twists, thereby enabling us to define characteristic coefficient-absorption.
\begin{itemize} \item We work with symmetrized $\bar q$-characteristic coefficients instead of traces. (The reason is given before Definition~\ref{sym}; we take $\bar q$-powers to make sure that we are working with semisimple matrices.)
\item We pass to $\bar q$ powers of transformations for a suitable power $\bar q$ of $p$.
\item We pick a particular branch $\mathcal B$ of $A$ on which we can place the substitutions of a nonzero evaluation of $f$, and modify $\tilde f$ so that all branches that do not have the same degree vector $v_\mathcal B $ as that of $\mathcal B$, cf.~Definition~\ref{dv}, become incompatible.
\item We make branches incompatible to $\mathcal B$ when they do not have the same size base fields as $\mathcal B$ for the corresponding vertices.
\item We eliminate all Frobenius twists which do not match those of $\mathcal B$. \end{itemize}
The hiking procedure only involves a finite number of polynomials, and is all implemented in the following technical lemma: \begin{lem}[Compatibility Lemma]\label{complem} For any $A$-admissible
non-identity $f$ of a representable Zariski-closed algebra $A$, the T-ideal $\mathcal I$ generated by the polynomial $f$ contains a symmetrized $\bar q$-characteristic coefficient-absorbing polynomial $\bar f$, not an identity of $A$, in which all substitutions providing nonzero evaluations of $f$ are compatible. \end{lem} \begin{proof} If all the vertices of the full quiver $\Gamma$ of $A$ are glued, we are done by Remark~\ref{onecomp}. Thus, we assume that $\Gamma$ has non-glued vertices, and thus has strings with degree vectors of length $>1$.
Since applying the hiking procedure of \Lref{modhike1} does not change the hypotheses, we may assume that $\tilde f$ is fully hiked. We choose $\bar q$ according to \Lref{barq}.
We consider all substitutions $x_i \mapsto \overline{x_i } $ to semisimple and radical elements. Of all substitutions which do not annihilate $f$, some monomial of $f$ then specializes to a nonzero evaluation, i.e., a path in the full quiver, and we choose such a substitution whose path $\mathcal P$ has maximal degree vector, and, after modifying $f$ along the lines of Remark~\ref{expans}, we may assume that the largest component $n_i$ of the degree vector involves some semisimple substitution which, with slight abuse of notation, we denote as $\overline{x_i}.$ We want to make any other substitution $x_i \mapsto \overline{x_i' } $ compatible with our given substitution.
Suppose first that $ \overline{x_i }$ is a substitution connecting vertices of matrix degree $n_{i,1}$ and $n_{i,2}$. In particular, since any two consecutive arrows have a common vertex, if $n_{i,1} \ne n_{i,2}$ then $\overline{x_i }$ must be an external radical substitution; if $ \overline{x_i }$ is a semisimple substitution or an internal radical substitution, then $n_{i,1} = n_{i,2}$.
Take any substitution $ \overline{x_i '}$, connecting vertices of degree $n_{i,1}'$ and $n_{i,2}'$. Replacing $x_i$ by $h_{n_i} x_i h_{n_i}$ would annihilate any substitution to a component of smaller matrix degree, so we may assume that $n_{i,1}' \ge n_{i,1} $ and $n_{i,2}' \ge n_{i,2} .$
Take $i$ such that $n_i$ is maximal. Since the path $\mathcal P$ is assumed to have maximal degree vector, we must have $n_{i,1}' = n_{i} = n_{i,2}'.$ Furthermore, replacing $x_i$ by $x_i^{\bar q}$ forces the semisimple substitution $ \overline{x_i '}^{\bar q}.$
We will force all our new substitutions $ \overline{x_j '}$ to be compatible with the original ones along $\mathcal P$, by working around this vertex.
Inductively (working backwards), we assume that $n_{k}' = n_{k } $ for all $j < k \le i.$ We already know that $n_{j }' \ge n_{j }$, but want to force $n_{j }' = n_{j }$. It is enough to check this for any external radical substitution $ \overline{x_j }$, since these fix the degree vectors.
If $\overline{x_j}'$ also is an external radical substitution, then $n_{j }' = n_{j }$ by maximality of the degree vector, so we are done unless $\overline{x_j}'$ is in the diagonal matrix block of degree $n_k \times n_k$; in other words, $n_{j }' = n_k$ whereas $n_j < n_k.$
Now we apply stage 2 hiking (as described in \Lref{hike2}), which preserves the compatible substitutions and annihilates the incompatible substitutions.
Thus, we only need concern ourselves with substitutions $\overline{x_j }$ into the same matrix component along the diagonal.
Since $f$ is presumed to be fully hiked, for any internal radical substitution $ \overline{x_j }$, we may assume that $ \overline{x_j}'$ also is an internal radical substitution; otherwise further hiking will not affect an evaluation along $\mathcal P$ but will make the other evaluation 0.
As noted above, when $ \overline{x_j }$ is a semisimple substitution, taking $x_j ^{\bar q}$ forces the semisimple substitution $ \overline{x_j '}^{\bar q}.$
In conclusion, by modifying $f$ we have forced all the substitutions $ \overline{x '}$ to be compatible with the original substitutions $ \overline{x }$.
Next, we want to reduce to the case where for any two branches, the base field for each vertex has the same order. Suppose $\mathcal B'$ is another branch with the same degree vector, but with a base field of different order. Stage 3 hiking will zero out all substitutions of $x_i$ in $\mathcal B'$, and thus make $\mathcal B'$ incompatible.
Since by definition any further hike of $\tilde f$ yields an identity, \Lref{modhike1} dictates
that polynomial $\bar q$-characteristic coefficients defined in terms of $f$ via Equation \eqref{traceab00} must commute. There is a subtlety involved when the sub-Peirce component involves consecutively glued vertices, since then we need the radical substitution $b$ of an arrow connecting glued vertices to commute with ${\alpha}_{\operatorname{pol}}^{\bar q} (a)$. But this holds because taking any commutator hikes the polynomial further and thus is 0.
Now, as before, \Lref{modhike1} dictates that polynomial $\bar q$-characteristic coefficients defined in terms of $f$ via Equation \eqref{traceab00} must commute, and thus the symmetrized $\bar q$-characteristic coefficients commute, and applying \Lref{hike00} shows that they commute with any radical substitutions of arrows connecting glued vertices. Thus, the polynomial action on $\mathcal I$ coincides with the well-defined matrix action as described in Remark~\ref{trac3}(3).
We still might encounter two different branches with the same degree vector but with ``crossover gluing;'' i.e., $I \to {I\!\!\,I} \to {I\!\!\,I\!\!\,I}$ together with ${I\!\!\,I\!\!\,I} \to {I\!\!\,I} \to I.$ Applying the same polynomial $f$ to these two different branches might produce different results. To sidestep this difficulty, assuming there are $k$ such glued branches, we consider all possible matrices $a_1, \dots, a_k$ appearing in the corresponding position of these $k$ branches, and take the elementary symmetric functions $\sigma_1, \dots, \sigma_k$ on their $\bar q$-characteristic coefficients; in other words, we take the symmetrized $\bar q$-characteristic coefficients, as defined above. Now the polynomial action clearly is the same on the $\sigma_j$, and thus on all symmetric functions on the characteristic coefficients of $a_1, \dots, a_k$.
Since the nonidentities all contain an $n_i^2$-alternating polynomial at the component of matrix degree $n_i$ for each $i$, and we apply the $\bar q$-characteristic coefficient action simultaneously to each of these polynomials, their T-spaces
are closed under multiplication by $\bar q$-characteristic coefficients of the
simple components of semisimple substitutions, so we have a $\bar q$-characteristic coefficient-absorbing polynomial.
\end{proof}
Note that to apply the lemma we have to take Frobenius gluing into account. When substituting into blocks with Frobenius gluing, we get characteristic coefficients with different Frobenius twists, and then we do the symmetrization.
The lemma yields the proof of the first assertion of \Tref{traceq2}, since we have obtained the desired $\bar q$-characteristic coefficient-absorbing $A$-admissible non-identity.
For the last assertion of the theorem, we note that the $\bar q$-characteristic coefficient-absorbing properties of the lemma were proved by means only of the properties of the degree vector $v_\mathcal B$ and do not depend on the specific polynomial $\tilde f$, and $\tilde f$ vanishes for all pure substitutions to branches other than $\mathcal B$. But $x \tilde f$ also is $\bar q$-characteristic coefficient-absorbing with respect to the same degree vector, and likewise the $\bar q$-characteristic coefficient-absorbing properties pass to sums and homomorphic images. \end{proof}
We have completed the first step in our program:
\begin{thm}[$\bar q$-Characteristic Value Adjunction Theorem]\label{tradj} Let $\hat A$ be the algebra obtained by adjoining to $A$ the matrix symmetrized $\bar q$-characteristic coefficients of products of the sub-Peirce components of the generic generators of $A$ (of length up to the bound of Shirshov's Theorem~ \cite[Chapter 2]{BR}), and let $\hat C$ be the algebra obtained by adjoining to $F$ these symmetrized $\bar q$-characteristic coefficients. For any nonidentity $f$ of a representable relatively free affine algebra $A$, the T-ideal $\mathcal I$ generated by the polynomial~$f$ contains a nonzero T-ideal which is also an ideal of the algebra $\hat A$.
Also, $\hat A$ is a finite module over $\hat C$, and in particular is Noetherian. \end{thm} \begin{proof} We follow the proof of \cite[Theorem~5.16]{BRV3}. First we apply the partial linearization procedure to make $f$ $A$-quasi-linear. In view of \cite[Theorem~7.20 and Corollary~7.21]{BRV1}, we may assume that the generators of $A$ are generic elements, say $X_1, \dots, X_t$. We adjoin the Peirce components of these generic elements, noting that because the polynomial obtained by Theorem~\ref{traceq2} is fully hiked, all substitutions involving these Peirce components involve a product of a maximal number of radical elements and thus still are in its T-ideal $\mathcal I$.
Let $\hat C'$ be the commutative algebra generated by all the characteristic coefficients in the statement of the theorem. Then $\hat C'$ is a finite module over $\hat C$, implying $\hat A$ is finite over $\hat C$, in view of Shirshov's Theorem. \end{proof}
Note that in the affine case we can work with finitely many entries, and thus $\hat C'$ is a finite module; this simple argument fails in the non-affine case, which explains why this theory only applies to affine algebras.
We want to adjoin the matrix $\bar q$-characteristic coefficients by means of evaluations of the hiked polynomial $\tilde f.$ This can be done for the largest size Peirce components of these new components (applied {\it simultaneously} to each generator and each Peirce component) to obtain an algebra $\hat A$, and we obtain a finite submodule via the following modification of Shirshov's theorem:
\begin{defn} Given a module $M$ over a $C$-algebra $A$ and a commutative subalgebra $C_1\subset A$, we say that an element $a\in A$ is \textbf{integral} over $C_1$ with respect to $M$ if there is some monic polynomial $f \in C_1[{\lambda}]$ such that $f(a)M = 0$. $C_1$ is \textbf{integral} with respect to $M$ if each element of $C_1$ is integral with respect to $M$. \end{defn}
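For a minimal illustration of this definition: if $a \in A$ satisfies $a^k M = 0$ for some $k$, then $a$ is integral over $C_1$ with respect to $M$, witnessed by the monic polynomial $f({\lambda}) = {\lambda}^k \in C_1[{\lambda}]$.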
\begin{thm}\label{Shir} Suppose $A = C\{ a_1, \dots, a_\ell\}$ and $M$ is an $A$-module, and $A$ contains a commutative (not necessarily central) subalgebra $C_1$ such that each word in the generators of length at most the PI-degree is integral over $C_1$ with respect to $M$, and furthermore $(a_i c - ca_i)M = 0$ for each $1 \le i \le \ell$ and each $c \in C_1$. (In other words, $A/\!\operatorname{Ann} _A M$ is a $C_1$-algebra in the natural way.) Then $A/\!\operatorname{Ann}_A M$ is finite as a $C_1$-algebra. \end{thm}
\begin{proof} Apply Shirshov's height theorem \cite[Theorem~2.3]{BR} to $A/\!\operatorname{Ann}_A M$. \end{proof}
Inspired by \cite[Theorem~3.11]{BRV3}, we formulate the following definition.
\begin{defn}\label{reduct} Suppose $\Gamma$ is the full quiver of an algebra $A$. A {\bf reduction} of $\Gamma$ is a pseudo-quiver $\Gamma'$ obtained by at least one of the following possible procedures:
\begin{enumerate} \item New relations on the base ring and its pseudo-quiver $\Gamma'$ obtained by appropriate new gluing. This means: \begin{itemize}\item Gluing, perhaps up to infinitesimals, with or without a Frobenius twist (when the gluing is of a block with itself, with a Frobenius twist, it must become finite); \item New quasi-linear relations on arrows, perhaps up to infinitesimals; \item Reducing the matrix degree of a block attached to a vertex. \end{itemize}
\item New linear dependencies on vertices (which could include canceling extraneous vertices) between which any two paths must have the same grade. \end{enumerate}
A {\bf subdirect reduction} $\{\Gamma'_1, \dots, \Gamma'_m\}$ of $\Gamma$ is a finite set of reductions of $\Gamma$. A quiver is {\bf subdirectly irreducible} if it has no proper subdirect reduction. \end{defn}
\begin{lem}\label{induc} Every descending chain of reductions of our original pseudo-quiver must terminate after a finite number of steps. \end{lem} \begin{proof} By definition, any reduction erases or identifies vertices or arrows (after sufficiently many new quasi-linear relations), or lowers the degree vectors of the branches lexicographically, or lowers their grades (Remark~\ref{dv2}). Each of these processes must terminate, so the reduction procedure must terminate.\end{proof}
This is the key to our discussion of Specht's problem, since it enables us to formulate proofs by induction on the reduction of a pseudo-quiver.
\begin{lem}\label{addid} Suppose the algebra $A$ and the polynomial $\bar f$ are as in \Lref{complem}. Then the full quiver $\Gamma'$ corresponding to the T-ideal $\operatorname{id}(A)\cup \{ \bar f \}$ is a reduction of the full quiver $\Gamma$ corresponding to the $\operatorname{id}(A).$ \end{lem} \begin{proof} By construction, $\bar f$ has nonzero evaluations along the algebra of $\Gamma$, so the full quiver $\Gamma'$ could not be $\Gamma$, and thus must be a reduction.\end{proof}
\begin{rem}\label{addid1} In summary, given a T-ideal $\mathcal I$, the Zariski closure $A$ of its relatively free algebra $F\{ x \}/\mathcal I$ has some full quiver $\Gamma$. Any $A$-admissible non-identity $f$ gives rise to a symmetrized $\bar q$-characteristic coefficient-absorbing polynomial $\bar f$, not an identity of $A$. Letting $\mathcal I' $ be the T-ideal generated by $\mathcal I \cup \{ \bar f \}$, we see that the full quiver $\Gamma'$ of the Zariski closure of the relatively free algebra $F\{ x \}/\mathcal I'$ is a reduction of $\Gamma.$\end{rem}
\section{Solution of Specht's problem for affine algebras over finite fields}
Our verification of Specht's problem over finite fields involves an inductive procedure on full quivers. After getting started, we need each chain of reductions of a full quiver to terminate after a finite number of steps. To do so, we must cope with infinitesimals, which appear in Definition~\ref{reduct}, requiring a few observations about Noetherian modules in order to wrap up the proof. In all of the applications here, the Noetherian module will be a relatively free associative algebra, but for future applications to nonassociative algebras we rely only on the module structure and state these basic observations without referring to the algebra multiplication. Since Specht's problem was solved by Kemer in characteristic 0, we assume throughout this section that the base field has characteristic $p$.
\subsection{Torsion over fields of characteristic $p$}
Throughout this subsection, we assume that $F$ is a field of characteristic $p$, and $C$ is a commutative Noetherian $F$-algebra.
\begin{defn}\label{tor0} Let $M$ be a $C$-module. For any $a\in M$, an element $c\in C$ is $a$-\textbf{torsion} if there is $k > 0$ such that $c^k a = 0$. An element $c\in C$ is $M$-\textbf{torsion} if it is $a$-torsion for each $a \in M$. We define $\operatorname{tor}(C)_a = \set{c \in C \,:\, \mbox{$c$ is $a$-torsion}}$. \end{defn}
\begin{lem}\label{shrink}
For any finite $C$-module~$M$ and any $a\in M$, $\operatorname{tor}(C)_a$ is an ideal of $C$. Furthermore, define $\operatorname{tor}(C)_{a;k} = \{ c \in \operatorname{tor}(C)_a : c ^{p^k}a = 0 \}$. Then $\operatorname{tor}(C)_a = \operatorname{tor}(C)_{a;k}$ for some $k$.\end{lem} \begin{proof} $\operatorname{tor}(C)_a$ is an ideal, since we are in characteristic $p$. Then the series $\operatorname{tor}(C)_{a;1} \subseteq \operatorname{tor}(C)_{a;2} \subseteq \cdots$ stabilizes, so $\operatorname{tor}(C)_a = \operatorname{tor}(C)_{a;k}$ for some $k$. \end{proof}
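For a concrete illustration, take $C = F[z]$ and $M = C/z^pC$, with $a$ the image of $1$. Then $\operatorname{tor}(C)_a = zC$, and since $c^{p} \in z^{p}C$ for every $c \in zC$, already $\operatorname{tor}(C)_a = \operatorname{tor}(C)_{a;1}$.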
\begin{prop}\label{4.3} Suppose $\hat A = \hat C \{ a_1, \dots, a_t \}$ is a relatively free, affine algebra over a commutative Noetherian $F$-algebra $\hat C$. Then $\hat A$ is a finite subdirect product of an algebra~$\hat A'$ defined over the $\hat C/{\operatorname{tor}(\hat C) _{a_i}}$, $1 \le i \le t,$ together with the $\left\{ \hat A/ c ^j \hat A : { c \in \hat C,\ j <k} \right\}$ where $k $ is the maximum of the torsion indices of $a_1, \dots, a_t$. \end{prop} \begin{proof} Let $\hat A_{i}$ denote the direct product of the localizations of $\hat A$ at the (finitely many) minimal prime ideals of $\operatorname{Ann} _C a_i$. There is a natural map $$\phi {\,{:}\,} \hat A \to \bigoplus \hat A_{i}\oplus \left(\bigoplus _{ c \in \hat C,\ j <k} \hat A/ c ^j \hat A \right).$$
If $a\in \ker \phi$, then looking at the first component we see that $a$ is annihilated by some power of some $c\in \hat C$, but this is preserved in one of the other components of $\phi(\hat A)$; hence, $a = 0$. In other words, $\phi$ is an injection. \end{proof}
We quote \cite[Lemma~3.10]{BRV3}, in order to continue:
{\it Suppose $A$ is a relatively free PI-algebra with pseudo-quiver $\Gamma$ with respect to a representation $\rho {\,{:}\,} A \mapsto \M[n](C)$, and $\mathcal I = \operatorname{id}(A)$ is a $C$-closed T-ideal. Then $A$ is PI-equivalent to the algebra of the pseudo-quiver $\Gamma$.}
\subsection{The main theorem in the field-theoretic case}
We are ready to solve Specht's problem for affine algebras over finite fields. Let us recall a key result from {\cite[Corollary~4.9]{BR}}. \begin{prop}\label{Lewin} Any T-ideal of an affine $F$-algebra contains the set of identities of some finite dimensional algebra, and thus of $\M[n](F)$ for some $n$. \end{prop} (The proof is characteristic free: The radical is nilpotent by the theorem of Braun-Kemer-Razmyslov, so one can display the relatively free algebra $A$ as a homomorphic image of a generalized upper triangular matrix algebra, by a theorem of Lewin, which satisfies the identities of $n\times n$ matrices.)
\begin{lem}\label{induction} Suppose $A$ is a relatively free affine algebra in the variety of a {Zariski closed}\ algebra $B$.
Consider a maximal path in the full quiver of $B$ with the corresponding degree vector~$v_A$. Let $\mathcal J$ be the ideal generated by the homogeneous elements of the degree vector~$v_A$. Then $A/\mathcal J$ is the relatively free algebra of a {Zariski closed}\ algebra, and hence representable, and its full quiver has fewer maximal paths of degree $v_A$ than $A$. \end{lem} \begin{proof} The proof is similar to that of the Second Canonization Theorem, \cite[Theorem~3.7]{BRV3}. Consider a maximal graded component in $A$. Add characteristic coefficients of the generators of the generic algebra constructed from $B$, and note that they agree with the grading of the paths.
Factoring out the product corresponding to the maximal degree vector
we obtain a representable
algebra, $B'$. Construct the full quiver of $B'$ as in the proof of \cite[Theorem~3.7]{BRV3}.
Then the full quiver of $B'$ has fewer maximal paths of degree vector $v_A$, and $A/\mathcal J$ is the relatively free algebra of $B'$. \end{proof}
We say that a T-ideal $\mathcal I$ of $F\{ x\}$ is \textbf{representable} if $F\{ x\}/\mathcal I$ is a representable algebra.
\begin{thm}\label{Spechtfin} Suppose $A$ is a relatively free, affine PI-algebra over a field $F$. Then any chain of T-ideals in the free algebra $F\{ x\}$ ascending from $\operatorname{id}(A)$ must terminate. \end{thm}
\begin{proof} First, we need to move to representable affine algebras. But, by Proposition~\ref{Lewin}, the T-ideal of $A$ contains the T-ideal of a finite dimensional algebra, so we can replace $A$ by that algebra.
We want to show that any ascending chain of T-ideals \begin{equation}\label{asch} \mathcal I_1 \subseteq \mathcal I_2 \subseteq \mathcal I_3 \subseteq \cdots\end{equation} in the free algebra $F\{x\},$ with $\mathcal I_1 =\operatorname{id}(A)$, stabilizes. For each $j$, let $\mathcal I_{j}^{(0)}\subseteq \mathcal I_{j}$ denote the T-ideal of $A$ generated by symmetrized $\bar{q}$-characteristic coefficient-absorbing polynomials of $\mathcal I_j$ having a non-zero specialization with maximal degree vector.
Then we get the chain \begin{equation}\label{asch1} {\mathcal I_1}^{(0)} \subseteq {\mathcal I_2}^{(0)} \subseteq {\mathcal I_3}^{(0)} \subseteq \cdots .\end{equation}
Let $\Gamma$ be the full quiver of $A$, so $\operatorname{id}(A) = \operatorname{id}(\Gamma).$ Let $v_A$ denote the maximal degree vector for a non-zero evaluation of some polynomial in $A$. Since $\mathcal I _{j}^{(0)}$
is Noetherian, by Theorem~\ref{Shir} applied to Theorem~\ref{tradj}, we can define $\mathcal I_j^{(1)}$ to be the maximal T-ideal of $\hat A$ contained in $\mathcal I_j$.
Then, the chain \begin{equation}\label{asch11} {\mathcal I_1}^{(1)} \subseteq {\mathcal I_2}^{(1)} \subseteq {\mathcal I_3}^{(1)} \subseteq \cdots \end{equation} of ideals is in the Noetherian algebra $\hat A$, and thus stabilizes at some $\mathcal I_{j_0}^{(1)}$.
Passing to $A/\mathcal I_{j_0}^{(1)}$, we may assume that $\mathcal I_{j}^{(1)}= 0$ for each $j > j_0$. Hence, $A/\mathcal I_{j_0}^{(1)}\subseteq \hat{A}/\mathcal I_{j_0}^{(1)}$ is representable. If $\mathcal I_j^{(0)}$ were nonzero then the fully hiked polynomial of some $0 \neq f \in \mathcal I_j^{(0)}$ would be in $\mathcal I_j^{(1)}=0$, a contradiction. Thus, $\mathcal I_j^{(0)} = 0$ for each $j > j_0$. In other words, $\mathcal I_j$ has only zero evaluations in degree $v_A$.
Finally, let $\mathcal J$ be the T-ideal defined in \Lref{induction}. Thus $\mathcal J \cap \mathcal I_j = \mathcal I_j^{(0)}$, so passing to $A/\mathcal J$, which is relatively free and representable by \Lref{induction}, we lower the maximum degree vector, and conclude by induction.
But these lift to a chain of ideals of the Noetherian algebra $\hat A$, which must stabilize, and thus \eqref{asch11} stabilizes at some $\mathcal I _{j_0}^{(0)}$. Passing to $A/\mathcal I _{j_0}^{(0)}$ and starting the chain at $j_0$, we may assume that $\mathcal I _{j}^{(0)}= 0$ for each $j$.
We let $v_\mathcal B$ be the degree vector of some $A$-admissible polynomial, and let $\mathcal I_\mathcal B$ be the T-ideal of Theorem~\ref{traceq2}, which shows that $\mathcal I _{j}\cap \mathcal I_\mathcal B = 0$ for each $j> j_0.$ Let $ \overline{\mathcal I_j} = (\mathcal I _{j} + \mathcal I_\mathcal B)/\mathcal I_\mathcal B,$ a T-ideal of the relatively free algebra $ F\{ x\}/ \mathcal I_\mathcal B$ for each $j>j_0.$ By induction on the maximal degree vector of the quiver, applied to the relatively free algebra $ F\{ x\}/ \mathcal I_\mathcal B,$ the chain of T-ideals \begin{equation}\label{asch2} \overline{\mathcal I_{j+1}} \subseteq \overline{\mathcal I_{j+2}}\subseteq \overline{\mathcal I_{j+3}} \subseteq \cdots\end{equation} must stabilize, so we conclude that the original chain of T-ideals must stabilize. \end{proof}
\section{Solution of Specht's problem for PI-proper T-ideals of affine algebras over arbitrary commutative Noetherian rings}\label{nonassoc}
Using the same ideas, we finally can prove Specht's problem for affine PI-algebras over a commutative Noetherian ring $C$. Our strategy is to reduce to algebras over fields, since this
case is already solved. The argument is based on a formal reduction from algebras over rings to algebras over fields, much of which we formulate rather generally for algebras which are not necessarily associative. Accordingly, we fix a given algebraic variety $\mathcal V$ of algebras, and consider Specht's problem for algebras in $\mathcal V$. When necessary, we take $\mathcal V$ to be the variety of associative algebras.
One can construct the free nonassociative algebra, whose elements are polynomials in which parentheses indicate the order of multiplication. The \textbf{T-ideal} of a set of polynomials \textbf{in an algebra} $A$ is the ideal generated by all substitutions of these polynomials in $A$. The variety $\mathcal V$ itself is defined by a T-ideal $\assoc$ of identities of the free nonassociative algebra, and the corresponding factor algebra is the relatively free algebra of the variety $\mathcal V$. (For example, when $\mathcal V$ is the variety of associative algebras, $\assoc$ is generated by the \textbf{associator} $(x_1x_2)x_3 - x_1(x_2 x_3).$) Specht's problem now is whether every ascending chain of T-ideals containing the associator stabilizes, or, equivalently, whether every T-ideal is finitely based modulo the T-ideal $\assoc$ of the associator.
There is an extra technical issue, since usually the definition of PI requires that the ideal of the base ring generated by the coefficients of the PIs is all of $C$. (This is obviously the case when $C$ is a field). We call such a T-ideal {\it PI-proper}, and start in this section with that case. Finally, in \S\ref{notprop} we prove the general result for T-ideals which are not necessarily PI-proper.
\subsection{Reduction to algebras over integral domains}
The considerations of this section apply to arbitrary algebraic varieties (not necessarily associative).
\begin{defn}\label{Spechtobs22}
We say that a Noetherian ring $C$ is $\mathcal V$-\textbf{Specht} if Specht's problem has a positive solution in the variety $\mathcal V$ for PI-proper T-ideals defined over $C$, i.e., any PI-proper T-ideal generated by polynomials $f_1, f_2, \dots$ is finitely based modulo $\assoc$.
We say that $C$ is \textbf{almost $ \mathcal V$-Specht} if $C/I$ is $\mathcal V$-Specht for every nonzero ideal $I$ of $C$. If~$\mathcal V$ is not specified here, it is assumed to be the variety of associative algebras. \end{defn}
\begin{rem}\label{Spechtobs2} Here is the general reduction of Specht's problem to the case when $C$ is an integral domain. We need to show that any T-ideal generated by polynomials $f_1, f_2, \dots$ is finitely based modulo $\assoc$. Let $\mathcal I_{j}$ be the T-ideal generated by $f_1,\dots,f_j$. By Noetherian induction, we may assume that $C$ is almost $\mathcal V$-Specht. Suppose $c_1c_2 = 0$ for $0 \ne c_1, c_2 \in C.$
By Noetherian induction, the system $\{f_j\}$ is finitely based modulo $c_2 A$, so there is some $j_0$ for which each $f_j=g_j+c_2 h_j$ where $g_j\in\mathcal I_{j_0}$ and $h_j$ is arbitrary. The polynomials $f_j$ can be replaced by $c_2 h_j$ for all $j>j_0$. But the T-ideal generated by $\{c_2 h_j :{j>j_0}\}$ in $A/c_1 A$ is finitely based, by Noetherian induction over~$C/c_1C$.
Thus, we are done unless $c_1c_2 \ne 0$ for all $0 \ne c_1, c_2 \in C,$ implying that $C$ is an integral domain. \end{rem}
\subsection{Reduction to prime power torsion}
By Remark~\ref{Spechtobs2}, we may assume from now on that $C$ is an almost $\mathcal V$-Specht integral domain. Also, we assume that $C$ is infinite, since otherwise $C$ is a field, and we have solved the case of fields. To proceed further, we also introduce torsion in
the opposite direction.
\begin{defn}\label{tor00} Let $M$ be a module over a commutative integral domain~$C$. For $z \in C$ define $\operatorname{Ann}_M(z) = \{ a \in M: z a = 0\}$.
$\operatorname{Ann}_M(z) $ is called the $z$-\textbf{torsion} of $M$. $M$ is $z$-\textbf{torsionfree} if $\operatorname{Ann}_M(z) = 0$. For $I \triangleleft C$ and a $C$-module $M$, define $\operatorname{tor} _I(M) = \cup _{0 \ne z \in C} \operatorname{Ann}_M(z) , $ a $C$-submodule of $M$ since $C$ is an integral domain.
The {\bf $z$-torsion index} of $M$ is $k$ if the chain $$\operatorname{Ann}_M(z) {\,\subseteq\,} \operatorname{Ann}_M(z^2) {\,\subseteq\,} \cdots {\,\subseteq\,} \operatorname{Ann}_M(z^k) {\,\subseteq\,} \cdots$$ stabilizes at $k$. Reversing \Dref{tor0}, we say that $M$ is $z'$-\textbf{torsionfree} if its $w$-torsion submodule is 0 for each prime $w$ not dividing $z$. \end{defn}
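For a concrete illustration over the integral domain $\Z$: the module $M = \Z/p^k\Z$ has $p$-torsion index $k$, since $\operatorname{Ann}_M(p^j) = p^{k-j}M$ increases strictly until $j = k$; moreover $M$ is $p'$-torsionfree, its $w$-torsion being $0$ for every prime $w \neq p$.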
\begin{lem}\label{Spechtobs} If $A$ is a relatively free algebra and $c\in C$, then $\operatorname{Ann}_A c $ is a T-ideal, with $cA \cong A/\!\operatorname{Ann}_A c$ as $C$-modules (but not necessarily as $C$-algebras).
We have an inclusion-reversing map $\{$Ideals of $C\} \to \{$T-Ideals of~$A\}$ given by $I \mapsto \operatorname{tor}_I(A). $ \end{lem} \begin{proof} $\operatorname{Ann}_A c$ is clearly a T-ideal, by \Lref{Tid}, and the rest of the first assertion is standard. The second assertion is likewise clear. \end{proof}
In any commutative Noetherian domain, any element can be factored as a finite product of irreducible elements (although not necessarily uniquely).
\begin{prop}\label{subdir1} Any module over an integral domain $C$ whose torsion involves only finitely many irreducible elements is a subdirect product of finitely many $z'$-torsionfree modules, where $z$ ranges over these irreducible elements of $C$. \end{prop} \begin{proof} Follows at once from the lemma. \end{proof}
A closely related result, which we record for reference in future work: \begin{prop}\label{subdir2S} Any Noetherian $\Z$-module is a subdirect product of finitely many $p'$-torsionfree modules, where $p$ ranges over the prime numbers.\end{prop}
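For instance, the decomposition $\Z/6\Z \cong \Z/2\Z \times \Z/3\Z$ exhibits $\Z/6\Z$ as a subdirect product of a $2'$-torsionfree module and a $3'$-torsionfree module.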
\subsection{Homogeneous T-ideals}
Recall that a polynomial is {\bf homogeneous} if, for each indeterminate $x_i$, each of its monomials has the same degree in $x_i$. Since the degrees grade the free algebra, any polynomial has a unique decomposition as a sum of homogeneous polynomials, which we call its {\bf homogeneous components}. Recall that a T-ideal $\mathcal I$ is {\bf{homogeneous}} if it contains all of the homogeneous components of each of its polynomials. \begin{rem} Although any homogeneous T-ideal is clearly generated by homogeneous polynomials, in general, homogeneous polynomials need not generate a homogeneous T-ideal, because of the vagaries of the quasi-linearization procedure (see \cite[Example~2.2]{BRV4} and \cite[Exercise~13.10]{BR}). \end{rem}
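To fix ideas, the polynomial $f = x_1x_2 + x_2x_1 + x_1^2$ has homogeneous components $x_1x_2 + x_2x_1$ (of degree $1$ in each of $x_1,x_2$) and $x_1^2$ (of degree $2$ in $x_1$ and $0$ in $x_2$).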
\begin{defn}\label{5.8} A set of polynomials $S$ is {\bf{ultra-homogeneous}} if it contains the homogeneous component of every element in $S$, as well as of the quasi-linearizations of all polynomials in $S$.
The {\bf ultra-homogeneous closure} $\overline{S}_{\operatorname{uh}}$ of a set $S$ is the intersection of all ultra-homogeneous sets containing it. (The ultra-homogeneous closure of a finite set is finite, since the procedure terminates for each polynomial after finitely many steps.)
The {\bf homogeneous socle} $\underline{\mathcal I}_{\operatorname{soc}}$ of a T-ideal $\mathcal I$ is the union of homogeneous T-ideals contained in $\mathcal I$. \end{defn}
Note that any homogeneous T-ideal $\mathcal I$ contains the homogeneous components of the quasi-linearizations of each of its polynomials, so is automatically ultra-homogeneous.
\begin{prop}\label{5.80} The T-ideal generated by an ultra-homogeneous set of polynomials $S$ is homogeneous. \end{prop} \begin{proof} We need to show that the homogeneous components of any substitution remain in the T-ideal $\mathcal I$ generated by $S = \overline{S}_{\operatorname{uh}}$. By definition of quasi-linearization, it is enough to check this for monomial substitutions. But these are specializations of substitutions of letters (taking a different letter for each monomial), and thus are specializations of the homogeneous components of the quasi-linearizations, which by definition are in~$\mathcal I$. \end{proof}
In particular, every set of multilinear identities generates a homogeneous T-ideal.
\begin{cor}\label{quasi10} Let ${\mathcal V}$ be a variety satisfying the ACC on homogeneous T-ideals. Then every homogeneous $T$-ideal is finitely based. \end{cor} \begin{proof}The ultrahomogeneous closure of a finite set of polynomials is finite. \end{proof}
Let $z \in C$ be any nonzero element. Our overall goal would be to prove formally that if every field is $\mathcal V$-Specht then every commutative Noetherian ring is $\mathcal V$-Specht. Unfortunately, this is not quite in our grasp, since one detail still relies on associativity. We can prove the following theorem:
\begin{thm}\label{firstred} Let $\mathcal V$ be a variety of algebras such that every field is $\mathcal V$-Specht. If an integral domain $C$ is almost $\mathcal V$-Specht, and $\mathcal V$-Specht with respect to homogeneous T-ideals, then $C$ is $\mathcal V$-Specht. \end{thm}
Taking $\mathcal V$ to be the class of associative algebras, we conclude by proving that if a Noetherian ring $C$ is almost Specht and every field is Specht, then $C$ is Specht with respect to homogeneous T-ideals.
This will affirm Specht's problem for affine PI-algebras over an arbitrary Noetherian ring, and together with Theorem~\ref{SpechtNoeth1} below will affirm Specht's problem for arbitrary affine algebras over a Noetherian ring.
We deal with the reduction for other varieties in a subsequent paper.
\subsubsection{Proof of Theorem~\ref{firstred}}
Although we are working in the context of associative algebras, the proof of~Theorem~\ref{firstred} also works analogously for nonassociative algebras.
\begin{lem}\label{quasi1Lem} Suppose $\mathcal I$ is a T-ideal, and $f = \sum f_i\in \mathcal I$ has total degree $n$ (where $f_i$ are the homogeneous components). Then for every Vandermonde determinant $d$ of order $n$, $d \overline{\langle f\rangle}_{\operatorname{uh}} \subseteq \mathcal I$.\end{lem}
\begin{proof} Substitute $\lambda_{j} x_j$ for $x_j$, for fixed $j$. Let $d$ denote the determinant of the Vandermonde matrix $(\lambda_{j}^i)$, where $i = 1,\dots,n$. The homogeneous components of $d f$ are in $\mathcal I$, by the usual Vandermonde argument of multiplying by the adjoint matrix. \end{proof}
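To illustrate the Vandermonde argument in the smallest case, suppose $f = f_1 + f_2$ with $f_i$ homogeneous of degree $i$ in $x_j$. Substituting $\lambda_1 x_j$ and then $\lambda_2 x_j$ for $x_j$ gives $$\lambda_1 f_1 + \lambda_1^2 f_2 \in \mathcal I, \qquad \lambda_2 f_1 + \lambda_2^2 f_2 \in \mathcal I,$$ and multiplying by the adjoint of $\left(\begin{matrix} \lambda_1 & \lambda_1^2 \\ \lambda_2 & \lambda_2^2 \end{matrix}\right)$ yields $d f_1,\, d f_2 \in \mathcal I$ for $d = \lambda_1\lambda_2(\lambda_2 - \lambda_1)$.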
\begin{lem}\label{first1} Suppose $C$ is $\mathcal V$-Specht with respect to homogeneous T-ideals, and $\mathcal I $ is a proper T-ideal that properly contains its homogeneous socle $\mathcal I_0 .$ Then $\mathcal I $ contains a homogeneous T-ideal of the form $ d\mathcal I_1 ,$ where $\mathcal I_1\supset \mathcal I_0$ is a finitely based, proper T-ideal. (Here $d$ is a product of Vandermonde determinants.) \end{lem} \begin{proof} Take a proper polynomial $f\in \mathcal I \setminus \mathcal I_0$. By Lemma~\ref{quasi1Lem} there is $0 \ne d_1 \in C$ such that $d_1 f_i \in \mathcal I $, where $f_i$ are the homogeneous components of the quasi-linearizations of $f$. Continuing with the quasi-linearization procedure, which is finite, we see by induction that there is some $d'$ such that $d' g_{i,j} \in \mathcal I $, for each component $g_{i,j} $ in the various quasi-linearizations, implying $d'd_1 \overline{\langle f \rangle}_{\operatorname{uh}} \subseteq \mathcal I$, as desired. \end{proof}
Note that we had to take the ultra-homogeneous closure of a polynomial, and not a T-ideal, to remain with a finite set of polynomials. We are ready to prove Theorem~\ref{firstred}. \begin{proof} Let $A$ be an algebra in $\mathcal V$ . We introduce some notation: Given $z\in C$ and a T-ideal $\Gamma $, we write $\mathcal I_\Gamma(z)$ for the kernel of the composite map $A \to zA \to zA/(\Gamma \cap zA)$.
By hypothesis we can take $z\in C$ with $\overline{\mathcal I_\Gamma(z)}_{\operatorname{uh}} $ maximal, and we write $z_\Gamma $ for $z$ and $\mathcal J(\Gamma)$ for $\mathcal I_\Gamma(z)$. $\mathcal J(\Gamma) = \underline{\mathcal J(\Gamma)}_{\operatorname{soc}}$, since otherwise we could use Lemma \ref{first1} to increase $\underline{\mathcal J(\Gamma)}_{\operatorname{soc}}$, contrary to its definition. Hence, $\mathcal J(\Gamma)$ is already homogeneous.
Given a chain $\Gamma_1 \subseteq \Gamma_2 \subseteq \cdots$ of T-ideals of $A,$ we see by the hypothesis on homogeneous T-ideals that there is $i $ such that $\mathcal J(\Gamma_j) = \mathcal J(\Gamma_i)$ for all $j\ge i.$ Write $\hat z = \prod _{j\le i} z_{\Gamma_j}$. Then the $\mathcal J(\Gamma_j) \cap \hat z A$ also stabilize. If some $\Gamma_j \cap \hat z A$ properly contains $\mathcal J(\Gamma_i) \cap \hat z A$ then it has a (nonhomogeneous) polynomial $f$ and thus contains $\hat z \overline{f}_{\operatorname{uh}}$, which is impossible unless $\hat z C$ is a proper ideal of $C$.
But $\mathcal I_j(A)/\hat z \mathcal I_j(A)$ is a T-ideal over $C/\hat z C$, so we conclude by Noetherian induction. \end{proof}
\subsection{Conclusion of the solution of Specht's problem for arbitrary affine PI-algebras over Noetherian rings} $ $
We start with some general considerations that can be used for arbitrary varieties. The following well-known fact is a key ingredient, yielding a tool for applying Noetherian induction. \begin{lem}[Baby Fitting Lemma]\label{lem3} Let $M$ be a $C$-module, with $z\in C,$ and take any $k\in \N$. Suppose $\operatorname{Ann}_M(z^{k+1}) {\,\subseteq\,} \operatorname{Ann}_M(z^k)$. Then $z^k M \cap \operatorname{Ann}_M(z) = 0$. \end{lem} \begin{proof} If $z^k a \in \operatorname{Ann}_M(z)$, then $z^{k+1}a = 0$, implying $z^k a = 0$ by assumption. \end{proof}
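For instance, take $M = \Z/p^2\Z \oplus \Z$ over $C = \Z$ with $z = p$ and $k = 2$: then $\operatorname{Ann}_M(p^{3}) = \operatorname{Ann}_M(p^{2}) = \Z/p^2\Z \oplus 0$, while $p^2M = 0 \oplus p^2\Z$ and $\operatorname{Ann}_M(p) = p\Z/p^2\Z \oplus 0$, so indeed $p^2M \cap \operatorname{Ann}_M(p) = 0$.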
We also need some easy facts from module theory.
\begin{lem}\label{easy1} Let $M,N$ be modules over a commutative ring $C$. Let $f {\,{:}\,} M {\rightarrow} N$ be a homomorphism of modules.
(i) For every $z,z' \in C$, if the induced homomorphisms $f' {\,{:}\,} M/z'M {\rightarrow} N/z'N$ and $f'' {\,{:}\,} z'M/zz'M {\rightarrow} z'N/zz'N$ are 1:1, then so is the induced homomorphism $$f''' {\,{:}\,} M/zz'M {\rightarrow} N/zz'N.$$
(ii) If the induced homomorphisms $z^iM/z^{i+1}M {\rightarrow} z^iN/z^{i+1}N$ are 1:1 for every $0\leq i < k$, then the induced homomorphism $M/z^{k}M {\rightarrow} N/z^{k}N$ is 1:1 as well. \end{lem} \begin{proof} (i) If $a \in \ker f''',$ then $a +z'M \in \ker f' = 0,$ implying $a \in \ker f'' = 0$.
(ii) Induction on $k$, taking $z' = z^{k-1}$ in (i).
\end{proof}
Let $S$ denote the monoid generated in $C$ by $z$. Recall that $$S^{-1}M = \set{s^{-1}a \suchthat s \in S, a \in M},$$ where $s^{-1}a=s'^{-1}a'$ if there is $s_0 \in S$ such that $s_0(s'a-sa')=0$. In particular $s^{-1}a=0$ if there is $s_0 \in S$ such that $s_0a=0$. \begin{lem}\label{L3} Let $f {\,{:}\,} M {\rightarrow} N$ be a homomorphism of $C$-modules, and let $z \in C$. Let $f' {\,{:}\,} S^{-1}M {\rightarrow} S^{-1}N$ and $f_i {\,{:}\,} M/z^iM{\rightarrow} N/z^{i}N$ be the induced homomorphisms, where $S$ is the monoid generated by $z$.
\begin{enumerate}
\item Assume $N$ has $z$-torsion index $k$. If every $f_i$ is one-to-one and $f'$ is onto, then the restriction $f|_{z^kM} {\,{:}\,} {z^kM} \to {z^kN}$ is onto.
\item\label{L3.2} Assume $M$ has $z$-torsion index $k$. If $f'$ is one-to-one, then the restriction $f|_{z^kM} {\,{:}\,} {z^kM} \to {z^kN}$ is one-to-one. \item\label{L3.3} Assume $M$ has $z$-torsion index $k$. If $f_k$ and $f'$ are one-to-one, then $f$ is one-to-one. \end{enumerate} \end{lem} \begin{proof} \begin{enumerate} \item Let $b \in N$. By assumption there is an element $z^{-\ell}a \in S^{-1}M$ such that $z^{-\ell}f(a) = f'(z^{-\ell}a) = 1^{-1}b \in S^{-1}N$, so for some $\ell'\geq 0$ we have that $z^{\ell'}f(a) = z^{\ell+\ell'}b$. Then $f_{\ell+\ell'}(z^{\ell'}a+z^{\ell+\ell'}M) = f(z^{\ell'}a)+z^{\ell+\ell'}N = 0$, so $z^{\ell'}a = z^{\ell+\ell'}a'$ for some $a' \in M$. But now $z^{\ell+\ell'}b = f(z^{\ell'}a) = z^{\ell+\ell'}f(a')$, so $z^{\ell+\ell'}(b-f(a')) = 0$. Since the torsion index of $N$ is $k$, we have that $z^kb = f(z^ka')$. \item Let $a \in M$ be such that $f(z^ka)=0$. Then $f'(1^{-1}z^ka) = 1^{-1}z^kf(a) = 0$ in~$S^{-1}N$, so by assumption $1^{-1}z^ka=0$, namely for some $\ell \geq 0$, $z^{k+\ell}a = 0$. Since $k$ is the $z$-torsion index of $M$, we have that $z^ka = 0$. \item Let $a \in M$ be such that $f(a) = 0$. Then $f_k(a+z^kM) = 0+z^kN$, so $a \in z^k M$, but then \eq{L3.2} implies that $a=0$. \end{enumerate} \end{proof}
Finite torsion index is essential in Lemma~\ref{L3} (which is why Lemma~\ref{up} below only applies to homogeneous T-ideals). \begin{exmpl}
(An example where the restrictions $f'$ and $f_i$ are isomorphisms, but $f|_{z^kM} {\,{:}\,} {z^kM} \to {z^kN}$ is neither onto nor one-to-one). Let $C = F[z]$, and $P = F[[z^{-1}]]/F$ with the natural $C$-module structure. Since multiplication by $z$ is onto, $P/z^i P = 0$ for every $i$, and $S^{-1}P = 0$ since $1^{-1}(z^{-i}) = z^{-i}z^i(z^{-i}) = z^{-i}0 = 1^{-1}0$. Let $f$ be the zero map from $P \oplus 0$ to $0 \oplus P$; it is neither one-to-one nor onto, but the induced maps $f_i$ and $f'$ are clearly (trivial) isomorphisms. Indeed, $P$ has infinite $z$-torsion index. \end{exmpl}
\begin{rem} For any $z \in C, $ $zA \cong A/\!\operatorname{Ann} z$ is a T-ideal of $A$.\end{rem}
To progress with the proof over an arbitrary base ring, we first need the special case where the T-ideal contains a representable T-ideal.
\begin{thm}[Small Specht Theorem]\label{SpechtNoeth51} Let $C$ be an almost Specht, commutative Noetherian ring,
and $A$~an affine PI-algebra containing a representable T-ideal $\mathcal I$; i.e., the algebra $A/\mathcal I$ is representable.
Then any chain of T-ideals in the free algebra $C\{ x\}$ ascending from $\operatorname{id}(A)$ stabilizes. \end{thm} \begin{proof} By Remark~\ref{Spechtobs2}, $C$ is an integral domain. We need to show that any ascending chain of PI-proper T-ideals \begin{equation}\label{asch3} \mathcal I_1 {\,\subseteq\,} \mathcal I_2 {\,\subseteq\,} \mathcal I_3 {\,\subseteq\,} \cdots\end{equation} of $A$ stabilizes. Since $\mathcal I \subseteq \mathcal I_1$, we may replace $A$ by $A/\mathcal I, $ and assume that $A$ is representable. We view $A \subseteq M_n(K),$ where $K$ is an algebraically closed field containing $C$. If $C$ is finite, then it is a field, and we are done by Theorem~\ref{Spechtfin}. So we may assume that $C$ is an infinite integral domain. Denoting $AK$ as $A_K$, we work with respect to a quiver $\Gamma$ of $A_K$ as a $K$-algebra.
As in \Tref{Spechtfin}, let $\mathcal I_j^{(1)}$ be the maximal subideal of $\mathcal I_j$ closed under multiplication by $\hat{C}$ of Theorem~\ref{tradj}. Thus \begin{equation}\label{asch31} \mathcal I_1^{(1)} {\,\subseteq\,} \mathcal I_2^{(1)} {\,\subseteq\,} \mathcal I_3^{(1)} {\,\subseteq\,} \cdots\end{equation} are ideals in the Noetherian algebra $\hat{A} = \hat{C}A$, so this chain stabilizes, and we may assume $\mathcal I_{j}^{(1)} = \mathcal I_{j_0}^{(1)}$ for $j > j_0$.
For a $T$-ideal $\mathcal I$ of $A$, let $\overline{\mathcal I} = K \mathcal I$, taken in $A_ K$. Define $\widetilde {\mathcal I} =K \mathcal I \cap A \supseteq \mathcal I$. Let $A' = A/{\mathcal I_{j_0}^{(1)}}$. Passing down to $A'$, we shall pass further to $A/\widetilde{\mathcal I_{j_0}^{(1)}}$.
The quotient $\widetilde{\mathcal I_{j_0}^{(1)}}/{\mathcal I_{j_0}^{(1)}}$ is torsion, so there is $0 \neq z \in \hat{C}$ such that $z \widetilde{ \mathcal I_j^{(1)}} = z \widetilde{ \mathcal I_{j_0}^{(1)}} {\,\subseteq\,} \mathcal I_{j_0}^{(1)}$. The chain $\operatorname{Ann} _{\hat{A} } z {\,\subseteq\,} \operatorname{Ann}_{\hat{A} } z^2 {\,\subseteq\,} \cdots {\,\subseteq\,} \operatorname{Ann} _{\hat{A} } z^k {\,\subseteq\,} \cdots$ stabilizes at some $k$, by the Noetherianity of $\hat A$. Now, applying the Baby Fitting Lemma (\Lref{lem3}) to $\hat{A}/\mathcal I_{j_0}^{(1)}$, we see that $$z^{k}\hat{A} \cap \widetilde{\mathcal I_j^{(1)}} {\,\subseteq\,} \mathcal I_{j_0}^{(1)}.$$
In particular the natural map $$A'{\rightarrow} (A'/z^{k}A') \,\oplus\, (A/\widetilde{\mathcal I_{j_0}^{(1)}})$$ is an injection. The image of the chain~\eqref{asch3} in the first summand on the right stabilizes by applying Noetherian induction. Thus, we pass to the second summand on the right, which has no $C$-torsion. Letting $\mathcal J$ be the ideal constructed in \Lref{induction}, we have for every $j > j_0$ that $\mathcal I_{j} \cap \mathcal J= 0$ in $A_K/A_K\widetilde{\mathcal I_{j_0}^{(1)}}$ as in the last paragraph of the proof of Theorem~\ref{Spechtfin}. Hence, a fortiori, $\mathcal I_{j} \cap \mathcal J= 0$ in $A/\widetilde{\mathcal I_{j_0}^{(1)}}$. We are done by induction on the degree vector. \end{proof}
\begin{lem}\label{up} Suppose $z \in C$ such that $C/zC$ and $C[z^{-1}]$ are $\mathcal V$-Specht. Then $C$ satisfies the ACC on homogeneous T-ideals from $\mathcal V$. \end{lem} \begin{proof} By induction on the length of $z$ as a product of primes, we may assume that $z$ is prime. Let $A$ be an affine algebra over $C$, and let $$\mathcal I_1 {\,\subseteq\,} \mathcal I_2 {\,\subseteq\,} \mathcal I_3 {\,\subseteq\,} \cdots$$ be an ascending chain of T-ideals in $A$.
Let $A_i = A/\mathcal I_i$, and consider the infinite commutative diagram \begin{equation} \xymatrix@R=15pt@C=12pt{ A_1/zA_1 \ar[r] \ar[d] & zA_1/z^2A_1 \ar[r] \ar[d] & z^2A_1/z^3A_1 \ar[r] \ar[d] & z^3A_1/z^4A_1 \ar[r] \ar[d] & \cdots \\ A_2/zA_2 \ar[r] \ar[d] & zA_2/z^2A_2 \ar[r] \ar[d] & z^2A_2/z^3A_2 \ar[r] \ar[d] & z^3A_2/z^4A_2 \ar[r] \ar[d] & \cdots \\ A_3/zA_3 \ar[r] \ar[d] & zA_3/z^2A_3 \ar[r] \ar[d] & z^2A_3/z^3A_3 \ar[r] \ar[d] & z^3A_3/z^4A_3 \ar[r] \ar[d] & \cdots \\ \vdots & \vdots & \vdots & \vdots & \\ } \end{equation} where the left-to-right maps are multiplication by $z$, and the top-to-bottom arrows are the natural projections. In particular, all the maps are onto.
We claim that outside a certain rectangle, all the maps in this infinite matrix are one-to-one. Indeed, the entries are algebras over $C/zC$, so each row stabilizes by assumption. Letting $B_i$ denote the final algebra in row $i$, we obtain a chain of projections $B_1 {\rightarrow} B_2 {\rightarrow} \cdots$ which must also stabilize, proving that for some $k_0$, all of the rows stabilize after $k_0$ steps. We are done since each of the first $k_0$ columns stabilizes. It follows that when $i$ is large enough, all the maps $z^jA_i/ z^{j+1}A_{i} {\rightarrow} z^jA_{i+1}/z^{j+1}A_{i+1}$ are isomorphisms, so by \Lref{easy1}.(ii), the natural projection $A_i {\rightarrow} A_{i+1}$ induces isomorphisms $A_i/z^{j}A_{i} {\rightarrow} A_{i+1} /z^{j}A_{i+1}$ for every $j$.
Similarly, the chain of projections $$C[z^{-1}]A_1 {\rightarrow} C[z^{-1}]A_2 {\rightarrow} \cdots$$ stabilizes by the assumption on $C[z^{-1}]$.
Let $i$ be large enough. Since $\mathcal I_i {\,\subseteq\,} \mathcal I_{i+1}$ are homogeneous, the natural projection $A_i {\rightarrow} A_{i+1}$ preserves the degree grading. Each homogeneous component is finite as a $C$-module since $A$ is affine, and thus Noetherian, and therefore has finite $z$-torsion index. By \Lref{L3}.\eq{L3.3}, the map in each component is one-to-one, proving that $\mathcal I_i = \mathcal I_{i+1}$. \end{proof}
The main idea in the proof given above is a simple version of a spectral sequence. Having proved a special case, we do a more general case (in fact, our most general version holds for an arbitrary variety $\mathcal V$ of algebras).
\begin{thm}\label{SpechtNoethspec1} Suppose the relatively free algebra $A$ with respect to a T-ideal $\mathcal I$ is $z$-torsionfree for some $z\in C$, where $\mathcal I$ is generated by a polynomial all of whose coefficients are $\pm 1$. Then any increasing chain of T-ideals of $A$ starting with $\mathcal I$ must terminate, for any commutative Noetherian ring $C$. \end{thm}
\begin{proof}
Writing $z$ as a product of primes, we may assume that $z$ is prime.
Let $C_0$ be the subring of $C$ generated by 1.
Letting $L$ be the field of fractions of
$C_0/ (C_0 \cap zC),$ we pass to $A \otimes _{C_0} L,$ so
we may assume that $C_0$ is a field. (If there is no $p$-torsion at any step, then we can localize at the natural numbers and reduce to the case of ${\mathbb Q}$-algebras, which was solved by Kemer.) But $C_0[z]$ thus is a PID, and by the argument in Lemma~\ref{up}, we can break up our chain into
chains of T-ideals defined over $C/zC$, so
we conclude
by Lemma~\ref{up}. \end{proof}
So far, these arguments have been applied to arbitrary varieties, and in fact there are Lie, alternative and Jordan versions of Iltyakov \cite{I1,I2} and Vais and Zelmanov~\cite{VZ}; their proofs are rather delicate, in part because it is still unknown whether any alternative, Lie, or Jordan algebra satisfying a Capelli system of identities must satisfy all the identities of a finite dimensional algebra. Belov \cite{B2} obtained a version of the Small Specht Theorem (Theorem~\ref{SpechtNoeth51}) for classes of algebras of characteristic 0 asymptotically close to associative algebras; this includes alternative and Jordan algebras.
Our method here is to develop some theory to take care of torsion in polynomials, to conclude the proof of Theorem \ref{SpechtNoeth} below.
In other words, we need some local-global correspondence that will enable us to pass from the global situation with torsion to the local situation without torsion. Our main tool is Proposition~\ref{subdir1}, but this only enables us to handle a finite number of irreducible elements of $C$ producing torsion, whereas there might be an infinite number of such elements. Thus we need some way of cutting down from infinite to finite.
The most direct argument relies on an (associative) result only available in Russian. Procesi asked whether the kernel of the canonical homomorphism $\operatorname{id}(\M(\Z)) \to \operatorname{id}(\M({\mathbb {Z}}/p\Z))$ is equal to $p \operatorname{id}(\M(\Z))$.
Schelter and later Kemer~\cite{K7} provided counterexamples, but Samo\u{\i}lov~\cite{Sam} showed that if $p > 2d$, the kernel of the canonical homomorphism $\operatorname{id}(\M(\Z)) \to \operatorname{id}(\M({\mathbb {Z}}/p\Z))$ is indeed equal to $p \operatorname{id}(\M(\Z))$. Unfortunately, this result appears so far only in his doctoral dissertation ~\cite{Sam}.
Thus, we give two versions for the conclusion of the proof of the next theorem, the first relying on Samo\u{\i}lov's Theorem, and the second for those readers who would prefer a full proof of Theorem~\ref{SpechtNoeth} in English. Another advantage of the second proof is that its reduction argument works for arbitrary varieties.
\begin{thm}\label{SpechtNoeth} Any PI-proper T-ideal $\mathcal I$ of $C\{x_1, \dots, x_\ell\}$ is finitely based, for any commutative Noetherian ring $C$. \end{thm}
\begin{proof} Let $A$ be the relatively free algebra of $\mathcal I$. We can replace $\mathcal I$ by the T-ideal of a PI-proper polynomial $f$ contained in it. But by Amitsur~\cite[Theorem~3.38]{BR}, any PI-algebra satisfies a power of a standard polynomial, so we may assume $f$ is such a polynomial, and thus has all nonzero coefficients in $\{\pm 1\}$.
By a theorem of Bergman and Dicks~\cite{BerD}, there is a canonical homomorphism of $A$ to a representable algebra, whose kernel $M$, in view of Lewin's theorem, vanishes modulo $p$ for any prime $p$, i.e., when we pass to $A\otimes_{\mathbb {Z}}\Z/p{\mathbb {Z}}$, viewed as a $C_p / p C_p$-algebra. But the map $A {\rightarrow} A\otimes_{\mathbb {Z}}\Z/p{\mathbb {Z}}$ is faithful whenever the kernel of the canonical homomorphism $\operatorname{id}(\M(\Z)) \to \operatorname{id}(\M({\mathbb {Z}}/p\Z))$ is equal to $p \operatorname{id}(\M(\Z))$, where $n$ is the size of the matrices in the representation; by Samo\u{\i}lov's Theorem~\cite{Sam} this happens when $p>2n$, showing that $A$ is $(2n)!$-torsionfree. The claim then follows by Theorem~\ref{SpechtNoethspec1}. \end{proof}
We turn now to the second proof.
\begin{rem}\label{pat} The point here is that in view of Proposition~\ref{Zub}, taking $M$ to be the T-ideal generated by $f$ as notated there, assuming that $A$ satisfies a Capelli identity $c_{n+1}$ and we are in a matrix component of degree $n$, the relatively free algebra $A$ is integral over the affine $C$-algebra $C[\xi]$ where $\xi$ denotes the set of characteristic coefficients formally corresponding to the finitely many $\delta$ operators. Unfortunately, this case can be assured only when $C$ is a field, but by a careful use of localization we can formulate a local-global framework in which we can utilize this situation. \end{rem}
{\it Second proof of Theorem~\ref{SpechtNoeth}}. Let $A = C\{ a_1, \dots, a_\ell\} $ be the relatively free algebra of $\mathcal I$. We formulate an inductive argument in analogy to Theorem~\ref{Spechtfin}. In~order to apply the theory of full quivers, we need to pass to some field. By Remark~\ref{Spechtobs2} we may assume that $C$ is an integral domain. Let $F$ denote the field of fractions of $C$, and let $A_F : = A \otimes_C F$. We consider $\mathcal I \otimes F$ which contains the ideal of identities of $A_F$. Unfortunately, $(\mathcal I \otimes F) \cap C\set{x}$ might properly contain $\mathcal I.$ If the torsion over $C$ only involved finitely many primes we could handle this by means of Proposition~\ref{subdir1}, but this need not be the case. Thus, we need a more delicate argument which enables us to relate $\mathcal I$ with $\mathcal I_F$.
{\it Step 1.} We start with a proper PI of $A$. As mentioned in the first proof, Amitsur~\cite[Theorem~3.38]{BR} says that every PI-algebra satisfies some power of a standard identity, which we denote here as $f$. Let $\mathcal I_0$ denote the $T$-ideal of $ C\{ x\}$ generated by $f$, contained in~$\mathcal I,$ so $\mathcal I_{0} \otimes F$ is the $T$-ideal of $ F\{ x\}$ generated by $f$, contained in~$\mathcal I \otimes F$. The relatively free algebra $F\{ x\}/(\mathcal I_{0} \otimes F)$ has some full quiver $\Gamma_1$. Although $\Gamma_1$ does not have much to do with the original algebra $A$, it provides a base for an inductive argument, as well as a handle for using our field-theoretic results. Since the chain of reductions of any full quiver must terminate after a finite number of steps, we induct on $\Gamma_1$.
We need to show that every chain $\mathcal C = \set{\mathcal I_1 {\,\subseteq\,} \mathcal I_2 {\,\subseteq\,} \mathcal I_3 {\,\subseteq\,} \cdots}$ of $T$-ideals ascending from $\mathcal I_0$ stabilizes. Over the field $F$, we could do this by the argument of Theorem~\ref{Spechtfin}, which we recall is achieved by hiking $f$, obtaining matrix characteristic coefficients for the evaluations of a maximal branch of $\Gamma_1$, redefining these in terms of elements of the T-ideal $\mathcal I_{0} \otimes F$, using Theorem~\ref{Shir} to show that this part of the T-ideal is Noetherian, and then modding it out and applying Noetherian induction. Unfortunately, working over $C$ might involve $C$-torsion which could collapse infinite chains when passing to the field of fractions, $F$. We can use Proposition~\ref{subdir1} to eliminate torsion involved with a given finite set of elements of $C$, so our strategy is to show how the whole process just described can be achieved over a localization of $C$ by a finite number of elements, which are found independently of the specific chain $\mathcal C$. Thus, we can work over this localization just as well as over $F$, and pass back to $C$ by means of Proposition~\ref{subdir1}.
{\it Step 2.} We rely heavily on Proposition~\ref{subdir1} in order to eliminate torsion involved with a given finite set of elements of $C$, with the aim of modifying $A$ in order to make it more compatible with $\Gamma_1$. We say that a T-ideal of $F\{ x\}$ is $C$-\textbf{expanded} if it is generated by polynomials $\subset C\{ x\}$. We extend $f_1 = f$ to a set $\{ f_1, \dots, f_k\} \subset C\{ x\}$ generating a maximal possible $C$-expanded T-ideal of $F\{ x\}$ contained in $\mathcal I \otimes F$. (Such a finite set exists since we already have solved Specht's problem over fields, implying $F\{ x\}$ satisfies the ACC on $C$-expanded T-ideals.)
The coefficients of $f_1, \dots, f_k$ involve only finitely many elements of $C$. Utilizing Proposition~\ref{subdir1}, we can localize at these primes to obtain a new base ring $C'$ and assume that $A$ has no torsion at the coefficients of the polynomials $f_1, \dots, f_k$. Let $\mathcal I'$ (resp.~$\mathcal I' \otimes F$) denote the T-ideal of $C'\{x\}$ (resp.~of $F\{x\}$) generated by $f_1, \dots, f_k$, whose full quiver over $F$ is denoted as $\Gamma_2$. This might increase our $C'$-expanded T-ideal over $F$, requiring us to adjoin more polynomials, and thereby forcing us to localize by finitely many more primes, but the process must stop since $F\{ x\}$ satisfies the ACC on $C$-expanded T-ideals. This achieves our goal of matching a T-ideal over $C$ with a T-ideal over $F$.
{\it Step 3.} Our next goal is to hike to a $\bar q$-characteristic coefficient-absorbing polynomial.
As in Lemma~\ref{induction}, we take a maximal path in the full quiver $\Gamma_2$. Its polynomial can be hiked to a finite set of polynomials $\tilde f_1, \dots, \tilde f_m$. Unfortunately these might involve torsion with new primes of $C'$. But the torsion over $C'$ in localizing these finitely many polynomials involves only finitely many prime elements in $C'$, and by localizing at them we obtain a new base Noetherian ring $C''$ and an algebra $A'' = A \otimes C''$ over it. Now we can appeal again to Proposition~\ref{subdir1} and replace $A$ by $A''$; thereby, we may assume that~$A''$ is $z$-torsion free for the finitely many primes $z$ at which we localized. (Perhaps $F\{x\}$ has more $C''$-expanded T-ideals than $C'$-expanded T-ideals, so we must return to Step 2 and then Step 3, but this loop must terminate since $F\{ x\}$ satisfies the ACC on T-ideals.)
In this way, we avoid all torsion in computing the $\bar q$-characteristic coefficients in the maximal matrix components, and thereby perform these calculations in $A''$. In other words, we can use $\tilde f_1, \dots, \tilde f_m$ (taken over $C''$) to calculate $\bar q$-characteristic coefficients of the products of the generators of $A$ in terms of polynomials.
Starting with $C''$ we let $\mathcal I'' $ (resp.~$\mathcal I'' \otimes F$) be the T-ideal of $C''\{x\}$ (resp.~of~ $F\{x\}$) generated by $f_1, \dots, f_k$ and $\tilde f_1, \dots, \tilde f_m$.
{\it Step 4.} This is the most delicate part of the proof. Our strategy in this case is to go back to mimic the proof of the field-theoretic case (Theorem~\ref{Spechtfin}), removing $C$-torsion step by step when we pass back from $F\{x\}$ to $C''\{x\}$. But we must be careful to do everything in a finite number of steps. We would like to appeal to compactness from logic, but the argument is more subtle, since certain steps cannot be put in quantitative form. In particular, we note that the chain $$\{ \mathcal I_j\otimes_C F :j \in {\mathbb {N}}\}$$ stabilizes at $\mathcal I_{j_0}\otimes _C F$ for some $j_0$, which we take to be $j_0 = 1$, and we
define $\mathcal I_{1;F}^{(0)}\subseteq \mathcal I_1\otimes _C F$ to be the T-ideal of $A_F$ generated by symmetrized $\bar{q}$-characteristic coefficient-absorbing polynomials of $\mathcal I_1 \otimes _C F$ having a non-zero specialization with maximal degree vector, as described in the proof of Theorem~\ref{Spechtfin}.
This is generated by finitely many polynomials of $A_F$, which can be taken from $A''$ and define a T-ideal of $A''$ which we call $\mathcal I_{1}^{(0)}$. Working with $\mathcal I_{1;F}^{(0)}$ enables us to define finitely many characteristic coefficients which we define in terms of polynomials which we now call $g_1, \dots, g_m \in C''\{ x\}.$ Inverting the torsion, i.e., localizing at some $z \in C'',$ we now may assume that the $g_i$ are $C''$-torsion free (and nonzero since they localize to nonzero elements of $F\{ x\}$).
We would like to use $g_1, \dots, g_m$ to define ``characteristic coefficients'' for the elements of $\mathcal I_{1}^{(0)}\subset A'',$ but unfortunately these are no longer central. But inverting the $C''$-torsion of the $g_ig_j -g_jg_i$, $1 \le i,j \le m,$ we may assume that $g_1, \dots, g_m$ all commute, and $\mathcal I_{1}^{(0)}$ is a module over the commutative Noetherian ring $\hat C : = C''[g_1, \dots, g_m].$ This is enough for us to apply Theorem~\ref{Shir} to show that $\mathcal I_{1}^{(0)}$ is a finite module, and thus Noetherian. (We can define the $\delta$-operators via Remark~\ref{pat}, together with the module $M$ which is the T-ideal generated by $\tilde f$. Note that since we only need consider monomials up to a certain length, we need to adjoin only finitely many characteristic coefficients, again via localization and Proposition~\ref{subdir1}.)
In this way, after localizing by finitely many elements of $C$, we pass to finite modules over Noetherian rings.
After all of these localizations we have a new affine base ring $ C''' \supset C''$, and work over $\hat C ''' := C '''[g_1, \dots, g_m]$. We let $\mathcal I''' $ be the T-ideal of $C'''\{x\}$ generated by the new polynomials involved with these extra steps. Thus $\mathcal I''' \otimes F$ (resp. $\mathcal I''' \otimes {\hat C}$) is the T-ideal of $F\{x\}$ (resp.~of~ $\hat C''' \{x\}$) generated by the same new polynomials.
If $\mathcal I' \otimes F \subset \mathcal I''' \otimes F$, then the quiver of the relatively free algebra $F\{ x\}/(\mathcal I''' \otimes F)$ is a reduction of~$\Gamma_1$, so we conclude by induction on the complexity of the quiver, in view of Lemma~\ref{induc}.
Thus, we may assume that $\mathcal I''' \otimes F = \mathcal I' \otimes F$. Next we look at $(\mathcal I''' \otimes {\hat C})/(\mathcal I' \otimes \hat C)$. By assumption, this is a torsion submodule of the Noetherian module $\mathcal I_{1;F}^{(0)}/(\mathcal I' \otimes \hat C)$ and thus is finite, so if nonzero it is annihilated by some nonzero element $z \in C$. We can remove $z$-torsion one final time (again via localization and Proposition~\ref{subdir1}), passing to a new base ring $ C'''' \supset C'''$ and T-ideal $\mathcal I''''$ (resp.~$\mathcal I'''' \otimes F$) of $C''''\{x\}$ (resp.~of~ $F\{x\}$). If $\mathcal I' \otimes F\subset \mathcal I'''' \otimes F$, then the quiver of the relatively free algebra $F\{x\}/(\mathcal I'''' \otimes F)$ is a reduction of $\Gamma_1$, so we conclude by induction on the complexity of the quiver, in view of Lemma~\ref{induc}. Thus we may assume that $\mathcal I'''' \otimes F= \mathcal I' \otimes F$, and since $z$ has been inverted in the localization we conclude that our ascending chain of T-ideals from $\mathcal I'$ lifts to an ascending chain of T-ideals from $\mathcal I''''$ and we are done by the process given in the proof of Theorem~\ref{Spechtfin}. (The point is that the argument of modding out a certain Noetherian submodule of each T-ideal in $A \otimes _C C''''$ is algorithmic, depending on computations involving a finite number of polynomials whose $C$-torsion we have removed.) This concludes the second proof of Theorem~\ref{SpechtNoeth}.
In summary, we have performed various procedures in order to enable us to reduce the quiver. These procedures involve a T-ideal of $F\{x\}$ which might increase because of the procedure, but must eventually terminate because $F\{ x\}$ satisfies the ACC on T-ideals. But at this stage Step 3 does not vitiate Step 2, and we can conclude the proof using Step 4 to carve out representable T-ideals over $C''''$.
Alternatively, one could conclude by applying the compactness in logic to the proof of Kemer's theorem and checking that we only need finitely many elements, which can be computed. Fuller details of the compactness argument are forthcoming when we consider representability and the universal algebra version of Theorem~\ref{SpechtNoeth}.
\section{The case where the T-ideals are not necessarily PI-proper}\label{notprop}
Using the same ideas, we can extend Theorem~\ref{SpechtNoeth} still further, considering the general case where the T-ideals are not PI-proper; in other words, the ideals of $C$ generated by the coefficients of the polynomials in the T-ideals of $C\{ X\}$ do not contain the element 1. Towards this end, given a set $S$ of polynomials in $C\{ X\}$, define its \textbf{coefficient ideal} to be the ideal of $C$ generated by the coefficients of the polynomials in $S$. We need a few observations about the multilinearization procedure.
\begin{lem}\label{SpechtN1} If a T-ideal $\mathcal I$ contains a polynomial $f$ with coefficient $c,$ then $\mathcal I$ also contains a multilinear polynomial with coefficient $c.$ \end{lem} \begin{proof} First we note that one of the blended components of $f$ has coefficient $c,$ and then we multilinearize it. \end{proof}
\begin{lem}\label{SpechtN2} If $c$ is in the coefficient ideal of a T-ideal $\mathcal I$ of $C\{x\},$ then some multilinear $f\in \mathcal I$ has coefficient $c$. \end{lem} \begin{proof} If $c = \sum c_i s_i$ where $s_i$ appears as a coefficient of $f_i \in \mathcal I$, then taking the $f_i = f_i(x_1,\dots x_{m_i})$ to be multilinear, we may assume that $s_i$ is the coefficient of $x_1\cdots x_{m_i}$ in $f_i$. Taking $m > \max\{ m_i \}$,
we see that the coefficient of $x_1\cdots x_{m}$ in
$$\sum _i c_i f_i(x_1,\dots x_{m_i})x_{m_i+1}\cdots x_{m}$$ is $c$. \end{proof}
\begin{cor}\label{SpechtN3} A T-ideal is PI-proper iff its coefficient ideal contains 1. \end{cor}
\begin{prop}\label{SpechtNoet1} Suppose $C$ is a Noetherian integral domain, and $\mathcal I$ is a T-ideal with coefficient ideal $I$. Then there is a polynomial $f \in {\mathbb {Z}}\{x\}$ for which $cf \in \mathcal I$ for all $c\in I.$ \end{prop} \begin{proof} Since $C$ is Noetherian, we can write $I = \sum_{i=1}^t Cc_i,$ and then it is enough to prove the assertion for $c = c_i$, $1 \le i \le t.$
We take the relatively free, countably generated algebra $A$ whose generators $\{ y_1, y_2 \dots, \}$ are given the lexicographic order, and let $M_m$ denote the space of multilinear words of degree $m$ in $\{ y_1, \dots, y_m \}$. In view of Shirshov's Height Theorem \cite[Theorem~2.3]{BR}, the space $\sum _i c_i M_m$ has bounded rank as a $\Z$-module. On the other hand, there is a well-known action of the symmetric group $S_m$ acting on the indices of $y_1, \dots, y_m$ described in \cite[Chapter~5]{BR}. In particular, \cite[Theorem~5.51]{BR} gives us a rectangle such that any multilinear polynomial $f$ whose Young diagram contains this rectangle satisfies $c_i f \in \mathcal I.$ \end{proof}
Write $\mathcal M(K)$ for the generic $n\times n$ matrix algebra with characteristic coefficients over a commutative ring~$K$.
Zubkov~\cite{Z} proved that the canonical map $\mathcal M(\Z) \to \mathcal M(\Z/p\Z)$ has kernel $p \mathcal M(\Z) .$ (As noted above, this is false if one does not adjoin characteristic coefficients.) He also proved that the Hilbert series of the algebra $A\otimes _{\mathbb {Z}}{\mathbb Q}$ over ${\mathbb Q}$ and of $A\otimes _{\mathbb {Z}}\Z/p\Z$ over ${\mathbb {Z}}/p\Z$ coincide, implying $\mathcal I$ is a free $\Z$-module.
\begin{cor}\label{SpechtNoet2} If $\mathcal I$ is a T-ideal with coefficient ideal $I$, there is a PI-proper T-ideal of $C\{x\}$ whose intersection with $I\{x\}$ is contained in $\mathcal I.$ \end{cor}\begin{proof} We need to show that \begin{equation}\label{ver1} \mathcal I \cap c A = c\mathcal I\end{equation} for any $c \in C.$ We take the polynomial $f$ of Proposition~\ref{SpechtNoet1}. In view of Proposition~\ref{Lewin}, the T-ideal of $f$ contains the set of identities of some finite dimensional algebra, and thus of $M_n(C)$ for some $n$. Adjoining characteristic coefficients, we may replace $\mathcal I$ by a T-ideal of the free algebra with characteristic coefficients, and conclude with Zubkov's results ~\cite{Z} quoted above. \end{proof}
\begin{thm}\label{SpechtNoeth1} Any T-ideal in the free algebra $C\{ x\}$ is finitely based, for any commutative Noetherian ring $C$. \end{thm} \begin{proof} By Noetherian induction, we may assume that the theorem holds over $C/I$ for every nonzero ideal $I$ of $C$. Thus, by Remark~\ref{Spechtobs2}, $C$ is an integral domain. If $C$ is finite, then it is a field, and we are done by Theorem~\ref{Spechtfin}. So we may assume that $C$ is an infinite integral domain. We need to show that any T-ideal generated by a given set of polynomials $\{g_1, g_2, \dots \}$ is finitely based. The coefficient ideals of $\{g_1, g_2, \dots g_j\}$ stabilize to some ideal $I$ of $C$ at some $j_0$, since $C$ is Noetherian. We
let $A_0$ denote the relatively free algebra with respect to the T-ideal generated by $g_1, \dots, g_{j_0}.$ Inductively, we let $A_i$ denote the relatively free algebra with respect to the T-ideal generated by $f_{j_0+1}, \dots, f_{j_0+i},$ and take a PI-proper polynomial $f_{i+1}$, not in $\operatorname{id}(A_i)$, such that $cf_{i+1}$ is in the T-ideal generated by $g_{i+1}$ in $A_i$ for all $c$ in the coefficient ideal of~$g_{i+1}$. (Such a polynomial exists in view of Proposition~\ref{SpechtNoet1}.) This gives us an ascending chain of PI-proper T-ideals of $A_0$, which must terminate in view of Theorem~\ref{SpechtNoeth}, a contradiction. \end{proof}
\subsubsection{Digression: Consequences of torsion for relatively free algebras}
Torsion has been so useful in this paper that we collect a few more elementary properties and apply them to relatively free algebras.
\begin{lem}\label{tor4} Suppose $C$ is a Noetherian integral domain, and $A$ is a relatively free affine $C$-algebra. \begin{enumerate} \item $A$ has $p$-torsion for only finitely many prime numbers $p$. \item There is some $k_0$ such that $p^{k}\operatorname{-}\!\operatorname{tor}(A) = p^{k+1}\operatorname{-}\!\operatorname{tor}(A)$ for all $k> k_0$ and all prime numbers $p$. \item Let $\phi_k {\,{:}\,} A \to A \otimes \Z/p^k\Z$ denote the natural homomorphism. If $p^k A \ne p^{k+1} A,$ then $\ker \phi_k \ne \ker \phi_{k+1}.$ \end{enumerate} \end{lem} \begin{proof} $p^k\operatorname{-}\!\operatorname{tor}(A)$ is a T-ideal for each $k$. Let $\mathcal I_k$ be the T-ideal generated by $p^k$-torsion elements. The $\mathcal I_k$ stabilize for some $k_0$, yielding (2), and (3) follows since once the chain stabilizes we have $p^k A = p^{k+1} A$. Likewise, the direct sum of these T-ideals taken over all primes stabilizes, yielding (1). \end{proof}
\subsection{Applications to relatively free algebras}
As Kemer~\cite{Kem2} noted, the ACC on T-ideals formally yields a Noetherian-type theory. We apply this method to Theorem~\ref{SpechtNoeth1}.
\begin{prop}\label{Trad} Any relatively free algebra $A$ over a commutative Noetherian ring has a unique maximal nilpotent T-ideal $N(A)$.\end{prop} \begin{proof} By ACC, there is a maximal nilpotent T-ideal, which is unique since the sum of two nilpotent T-ideals is a nilpotent T-ideal. \end{proof}
\begin{defn}\label{Tpr} The ideal $N(A)$ of Proposition~\ref{Trad} is called the \textbf{T-radical}. An algebra $A$ is \textbf{T-prime} if the product of two nonzero T-ideals is nonzero. A T-ideal $\mathcal I$ of $A$ is \textbf{T-prime} if $A/\mathcal I$ is a T-prime algebra. \end{defn}
\begin{prop}\label{struc1} The T-radical
is the intersection of a finite number of T-prime T-ideals.\end{prop} \begin{proof} Each T-prime T-ideal contains the T-radical, which we thus can mod out. Then just copy the usual argument using Noetherian induction. \end{proof}
Kemer characterized all T-prime algebras of characteristic 0, cf.~\cite[Theorem~6.64]{BR}. The situation in nonzero characteristic is much more difficult, but in general we can reduce to the field case.
\begin{prop} Each T-prime, relatively free algebra $A$ with 1 over a commutative Noetherian ring $C$ is either the free $C$-algebra or is PI-equivalent to a relatively free algebra over a field. In particular, either $A$ is free or PI. \end{prop}\begin{proof} The center $Z$ of $A$ is an integral domain over which $A$ is torsionfree, since if $c \in Z$ has torsion then $0 = (cA)\operatorname{Ann} _A(c)$ implies $cA = 0$ so $c = 0.$ If $Z$ is finite then it is a field and we are done. If $Z$ is infinite then $A$ is PI-equivalent to $A \otimes _Z K$ where $K$ is the field of fractions of $Z$. \end{proof}
In this way we see that this theory, in particular Corollary~\ref{SpechtNoet2}, provides a method for generalizing results about relatively free PI-algebras to relatively free algebras in a variety which is not necessarily PI-proper. For example, let us generalize a celebrated theorem of Braun~\cite{Br}-Kemer-Razmyslov:
\begin{thm}\label{Jnilp} The Jacobson radical $J$ of any relatively free affine algebra $A$ is nilpotent. \end{thm} \begin{proof} Modding out the T-radical, and applying Proposition~\ref{struc1}, we may assume that $A$ is T-prime. If it is free then $J = 0$, so we may assume that $A$ is PI, where we are done by Braun's theorem. \end{proof}
Of course there is no hope to generalize this result to non-relatively free algebras, since the nilradical of an affine algebra need not be nilpotent.
\end{document}
The Bogdanov-Takens bifurcation study of 2m coupled neurons system with \(2m+1\) delays
Yanwei Liu1,
Xia Liu2,
Shanshan Li3,
Ruiqi Wang1 and
Zengrong Liu1
© Liu et al. 2015
Received: 26 March 2015
In this paper the Bogdanov-Takens (BT) bifurcation of a network model of 2m coupled neurons with multiple delays is studied, where excitatory and inhibitory neurons alternate. When the linearization of the model at the origin has a double zero eigenvalue, by using center manifold reduction of delay differential equations (DDEs), the second-order and third-order universal unfoldings of the normal form are deduced. Some bifurcation diagrams and numerical simulations are presented to verify our main results.
Bogdanov-Takens bifurcation
normal form
By using the methods developed in [1–3], codimension-one and codimension-two bifurcations of various neural network models with time delays have been studied; see, for example, [4–19] and the references therein. However, apart from [20, 21], where the Hopf bifurcation of some models with n neurons is carried out, there are few codimension-two bifurcation results for neural network models with n neurons and multiple delays. Recently, the BT bifurcation, Hopf-transcritical, and Hopf-pitchfork bifurcations of the following model:
$$ \begin{aligned} &\dot{u}_{1}(t)=-ku_{1}(t)+a f\bigl(u_{1}(t-r)\bigr)+b_{1}g_{1} \bigl(u_{2}(t-\tau_{2})\bigr), \\ &\dot{u}_{2}(t)=-ku_{2}(t)+a f\bigl(u_{2}(t-r) \bigr)+b_{2}g_{2}\bigl(u_{1}(t-\tau_{1}) \bigr), \end{aligned} $$
have been studied by Yuan and Wei in [10]. Guo et al. in [11] analyzed the fold and Hopf bifurcations, fold-Hopf bifurcations and Hopf-Hopf bifurcations of system (1.1) with \(k=1\), \(g_{1}=g_{2}=f\).
Subsequently, the authors of [12] studied the stability and bifurcation of the following model of four coupled neurons:
$$ \begin{aligned} &\dot{u}_{1}(t)=-ku_{1}(t)+ f\bigl(u_{1}(t-r)\bigr)+g_{1}\bigl(u_{4}(t- \tau_{4})\bigr), \\ &\dot{u}_{2}(t)=-ku_{2}(t)+ f\bigl(u_{2}(t-r) \bigr)+g_{2}\bigl(u_{1}(t-\tau_{1})\bigr), \\ &\dot{u}_{3}(t)=-ku_{3}(t)+ f\bigl(u_{3}(t-r) \bigr)+g_{3}\bigl(u_{2}(t-\tau_{2})\bigr), \\ &\dot{u}_{4}(t)=-ku_{4}(t)+ f\bigl(u_{4}(t-r) \bigr)+g_{4}\bigl(u_{3}(t-\tau_{3})\bigr). \end{aligned} $$
Fan et al. in [17] considered the following coupled model of two neurons:
$$ \begin{aligned} &\dot{u}_{1}(t)=-u_{1}(t)+a \tanh\bigl(u_{1}(t-\tau_{v})\bigr)-a_{12}\tanh \bigl(u_{2}(t-\tau _{2})\bigr), \\ &\dot{u}_{2}(t)=-u_{2}(t)+a_{21}\tanh \bigl(u_{1}(t-\tau_{1})\bigr)-a\tanh \bigl(u_{2}(t- \tau_{v})\bigr), \end{aligned} $$
in which the coupling strengths appear with opposite signs. The authors developed the universal unfolding of the BT bifurcation with \(Z_{2}\) symmetry at the origin of the system (1.3) in the special case of \(\tau_{v}=0\), \(\tau_{1}=\tau_{2}=\tau>0\), and \(a_{12}=a_{21}=b\).
The relation between systems (1.1) and (1.2) motivates us to extend system (1.3) to one involving 2m neurons, i.e., the following system:
$$\begin{aligned}& \dot{u}_{1}(t)=-u_{1}(t)+a f_{1} \bigl(u_{1}(t-\tau_{s})\bigr)-a_{2m,1}g_{2m} \bigl(u_{2m}(t-\tau_{2m})\bigr), \\& \ldots, \\& \dot{u}_{i}(t)=-u_{i}(t)+(-1)^{i+1}a f_{i}\bigl(u_{i}(t-\tau_{s})\bigr)+(-1)^{i}a_{i-1,i}g_{i-1} \bigl(u_{i-1}(t-\tau_{i-1})\bigr), \\& \ldots, \\& \dot{u}_{2m}(t)=-u_{2m}(t)-a f_{2m} \bigl(u_{2m}(t-\tau_{s})\bigr)+a_{2m-1,2m}g_{2m-1} \bigl(u_{2m-1}(t-\tau_{2m-1})\bigr), \end{aligned}$$
where m is an integer, \(a>0\) is the feedback strength, \(\tau_{s}\) is the feedback delay; \(\tau_{1}, \tau_{2}, \ldots, \tau_{2m}\) denote the connection delays, and \(a_{12}, a_{23}, \ldots, a_{2m-1,2m}, a_{2m,1}\) represent the connection strengths. Each neuron has a delayed self-feedback and a delayed connection from its neighboring neuron, and excitatory and inhibitory neurons alternate. The connections among the neurons are shown in Figure 1.
Architecture of the model with 2 m neurons, m is an integer.
For simplicity, we assume
(H1):
\(f_{i}(0)=g_{i}(0)=0\), \(f_{i}'(0)=g_{i}'(0)=1\), \(i=1,2,\ldots,2m\).
The universal unfoldings about the BT bifurcation at the origin of system (1.4) will be given. Therefore, our study is not trivial and our results are general.
The rest of this paper is organized as follows: in Section 2, the conditions under which the origin of system (1.4) is a BT singularity are demonstrated; in Section 3, the second- and third-order normal forms at the BT singularity of the coupled system are presented; in Section 4, some bifurcation diagrams and numerical simulations are shown.
2 The existence of BT singularity
Since the origin is always an equilibrium of system (1.4), linearizing system (1.4) at the origin yields
$$\begin{aligned}& \dot{u}_{1}(t)=-u_{1}(t)+au_{1}(t-\tau_{s})-a_{2m,1}u_{2m}(t-\tau_{2m}), \\& \ldots, \\& \dot{u}_{i}(t)=-u_{i}(t)+(-1)^{i+1}a u_{i}(t-\tau_{s})+(-1)^{i}a_{i-1,i}u_{i-1}(t-\tau_{i-1}), \\& \ldots, \\& \dot{u}_{2m}(t)=-u_{2m}(t)-a u_{2m}(t-\tau_{s})+a_{2m-1,2m}u_{2m-1}(t-\tau_{2m-1}). \end{aligned}$$
Then the corresponding characteristic equation is
$$ F(\lambda)=\bigl((\lambda+1)^{2}-\delta \mathrm{e}^{-2\lambda\tau_{s}}\bigr)^{m}+(-1)^{m+1} \beta^{m} \mathrm{e}^{-2m\lambda\tau_{0}}=0, $$
where \(\delta=a^{2}\), \(\beta^{m}=a_{12}a_{23}a_{34}\cdots a_{2m-1,2m}a_{2m,1}\), \(\tau_{0}=\frac{\tau_{1}+\tau_{2}+\cdots+\tau_{2m}}{2m}\).
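For the reader's convenience, here is a brief sketch of how (2.2) is obtained; it is our own verification and uses only the notation already introduced. Substituting \(u(t)=\mathrm{e}^{\lambda t}v\) into (2.1), the characteristic matrix \(\Delta(\lambda)\) has diagonal entries \(\lambda+1-(-1)^{i+1}a\mathrm{e}^{-\lambda\tau_{s}}\), subdiagonal entries \(\Delta_{i,i-1}=(-1)^{i+1}a_{i-1,i}\mathrm{e}^{-\lambda\tau_{i-1}}\) for \(i=2,\ldots,2m\), and the corner entry \(\Delta_{1,2m}=a_{2m,1}\mathrm{e}^{-\lambda\tau_{2m}}\). Expanding \(\det\Delta(\lambda)\) along the first row, both minors are triangular, so

$$ \det\Delta(\lambda)=\prod_{i=1}^{2m}\bigl(\lambda+1-(-1)^{i+1}a\mathrm{e}^{-\lambda\tau_{s}}\bigr) -a_{2m,1}\mathrm{e}^{-\lambda\tau_{2m}}\prod_{i=2}^{2m}(-1)^{i+1}a_{i-1,i}\mathrm{e}^{-\lambda\tau_{i-1}}. $$

The factors of the first product with odd and even i pair up to give \(\bigl((\lambda+1)^{2}-\delta\mathrm{e}^{-2\lambda\tau_{s}}\bigr)^{m}\), while \(\prod_{i=2}^{2m}(-1)^{i+1}=(-1)^{m}\), so the second term equals \((-1)^{m+1}\beta^{m}\mathrm{e}^{-\lambda(\tau_{1}+\cdots+\tau_{2m})}=(-1)^{m+1}\beta^{m}\mathrm{e}^{-2m\lambda\tau_{0}}\), which is exactly (2.2) with \(F(\lambda)=\det\Delta(\lambda)\).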
By (2.2) we can obtain
$$ \begin{aligned} &F(0) =(1-\delta)^{m}+(-1)^{m+1} \beta^{m}, \\ &F'(0) =2m(1-\delta)^{m-1}(1+\delta\tau_{s})-2(-1)^{m+1}m \tau_{0}\beta^{m}, \\ &F''(0) =4m (m-1 ) (1-\delta)^{m-2}(1+\delta\tau_{s})^{2}+2m(1-\delta)^{m-1}\bigl(1-2\delta\tau_{s}^{2}\bigr) \\ &\hphantom{F''(0) ={}}{} +(-1)^{m+1}4m^{2}\tau_{0}^{2} \beta^{m}. \end{aligned} $$
Solving \(F(0)=0\), we have \(\delta=\beta+1\); then \(F'(0)=0\) gives \(1+\delta\tau_{s}=\tau_{0}\beta\), so that \(\beta=\frac{1+\tau_{s}}{\tau_{0}-\tau_{s}}>0\), which implies \(\delta=\frac{1+\tau_{0}}{\tau_{0}-\tau_{s}}\), and then
$$F''(0)=(-1)^{m-1}{\frac{2m ( 1+\tau_{{s}} ) ^{m-1} ( 2\tau_{{0}}+2 \tau_{{s}}\tau_{{0}}+1+2\tau_{{s}} ) }{ ( \tau_{{0}}-\tau_{{s}} ) ^{m-1}}} \neq0. $$
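As an independent cross-check of these expressions (not part of the original argument), one can verify them symbolically. The short script below is a sketch of ours: it fixes a sample value of m and sample rational delays \(\tau_{s}<\tau_{0}\) (both chosen only for illustration), substitutes \(\delta\) and \(\beta\) from (H2), and confirms that \(F(0)=F'(0)=0\) while \(F''(0)\) agrees with the closed form displayed above.

```python
# Sketch of an independent check (ours): F(0) = F'(0) = 0 and the closed form
# of F''(0), for a sample m and sample rational tau_s < tau_0 (assumptions).
import sympy as sp

lam, ts, t0 = sp.symbols('lambda tau_s tau_0', positive=True)
m = 2                                    # any fixed integer m >= 1
delta = (1 + t0) / (t0 - ts)             # delta from (H2)
beta = (1 + ts) / (t0 - ts)              # beta from (H2)

F = ((lam + 1)**2 - delta*sp.exp(-2*lam*ts))**m \
    + (-1)**(m + 1) * beta**m * sp.exp(-2*m*lam*t0)

vals = {ts: sp.Rational(1, 2), t0: sp.Rational(3, 5)}    # sample tau_s < tau_0
F0 = F.subs(lam, 0).subs(vals)
F1 = sp.diff(F, lam).subs(lam, 0).subs(vals)
F2 = sp.diff(F, lam, 2).subs(lam, 0).subs(vals)
closed = (-1)**(m - 1) * 2*m*(1 + ts)**(m - 1) \
         * (2*t0 + 2*ts*t0 + 1 + 2*ts) / (t0 - ts)**(m - 1)

print(sp.simplify(F0), sp.simplify(F1))      # expected: 0 0
print(sp.simplify(F2 - closed.subs(vals)))   # expected: 0
```

For instance, for \(m=2\), \(\tau_{s}=1/2\), \(\tau_{0}=3/5\) this gives \(F''(0)=-228\neq0\).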
To show that the origin of system (1.4) is a BT singularity, we should investigate the conditions under which all the roots of (2.2), except \(\lambda=0\), have negative real parts.
Let \(\delta=\beta+1\). First, when \(\tau_{s}=\tau_{0}=0\), solving (2.2) one can obtain \(\lambda_{1}=0\) and \(\lambda_{2}=-2\).
Second, when \(\tau_{s}\neq\tau_{0}\), we assume \(\tau_{s}=0\), then (2.2) can be written as
$$ F(\lambda)=\bigl[(\lambda+1)^{2}-(\beta+1) \bigr]^{m}+(-1)^{m+1}\beta ^{m}\mathrm{e}^{-2m\lambda\tau_{0}}=0, $$
if \(\lambda=iq\) (\(q>0\)) is a root of (2.4), then it needs \((iq+1)^{2}-(\beta+1)=-\beta\mathrm{e}^{-2iq\tau_{0}}\), i.e.,
$$ -{q}^{2}-\beta+\beta\cos ( 2q\tau_{{0}} ) +i \bigl( 2 q- \beta\sin ( 2q\tau_{{0}} ) \bigr)=0, $$
by separating the real and imaginary parts of the above equation and a simple computation we have
$$ \left \{ \textstyle\begin{array}{l} \cos(2q\tau_{0})=\frac{\beta+q^{2}}{\beta}, \\ \sin(2q\tau_{0})=\frac{2q}{\beta}. \end{array}\displaystyle \right . $$
Squaring and adding the two equations above gives \((\beta+q^{2})^{2}+4q^{2}=\beta^{2}\), i.e., \(q^{2}(q^{2}+2\beta+4)=0\). Hence q should satisfy \(q^{2}+2\beta+4=0\); since \(\beta>0\), a positive q does not exist.
When \(\tau_{s}\neq0\), then \(\lambda=iw\) (\(w>0\)) is a root of (2.2) if and only if w satisfies the following equation:
$$ (iw+1)^{2}-(\beta+1)\mathrm{e}^{-2iw\tau_{s}}=-\beta \mathrm{e}^{-2iw\tau_{0}}, $$
then one can obtain
$$ \left \{ \textstyle\begin{array}{l} 1-w^{2}=(\beta+1)\cos(2w\tau_{s})-\beta\cos(2w\tau_{0}), \\ 2w=\beta\sin(2w\tau_{0})-(\beta+1)\sin(2w\tau_{s}), \end{array}\displaystyle \right . $$
which yields
$$ \bigl(1-w^{2}\bigr)^{2}+4w^{2}=( \beta+1)^{2}+\beta^{2}-2\beta(\beta+1)\cos\bigl(2w( \tau_{0}-\tau_{s})\bigr). $$
We rewrite (2.7) as
$$ \cos\bigl(2w(\tau_{0}-\tau_{s})\bigr)= \frac{2\beta(\beta+1)-2w^{2}-w^{4}}{2\beta(\beta+1)}. $$
To investigate the existence of a positive root of (2.8), we first consider the following equations:
$$ \cos\bigl(2w(\tau_{0}-\tau_{s})\bigr)=0,\qquad \frac{2\beta(\beta+1)-2w^{2}-w^{4}}{2\beta (\beta+1)}=0, $$
which, respectively, have the positive roots
$$ w_{0}=\frac{\pi}{4(\tau_{0}-\tau_{s})},\qquad w_{*}=\sqrt{-1+\sqrt{ \beta ^{2}+(\beta+1)^{2}}}. $$
It is easy to verify that (2.8) does not have a positive root w if \(w_{*}< w_{0}\) and has a positive root w̄ if \(w_{*}\geq w_{0}\). To make this clear one can see Figure 2.
The distribution of the roots of ( 2.8 ).
Together with the above discussion, we have the following lemma.
Lemma 2.1 Let (H2): \(\delta=\frac{1+\tau_{0}}{\tau_{0}-\tau_{s}}\), \(\beta=\frac{1+\tau_{s}}{\tau_{0}-\tau_{s}}\), \(\tau_{0}>\tau_{s}\), \(w_{*}< w_{0}\) hold. Then all the roots of equation (2.2), except the double root \(\lambda=0\), have negative real parts, i.e., the origin of system (1.4) is a BT singularity, where \(w_{0}=\frac{\pi}{4(\tau_{0}-\tau_{s})}\), \(w_{*}=\sqrt{-1+\sqrt{\beta ^{2}+(\beta+1)^{2}}}\).
3 Normal forms of BT bifurcation
From Lemma 2.1 we know that when (H1) and (H2) hold, system (1.4) undergoes a BT bifurcation at the origin. In the following, we will generalize the methods introduced in [1, 3] to compute the second- and third-order normal forms of the BT bifurcation. For simplicity, as the authors have done in [17], we take \(a_{12}=a_{2m,1}=b\) and choose \(a\) and \(b\) as the bifurcation parameters, i.e., we consider \(a=a_{0}+\alpha_{1}\) and \(a_{12}=a_{2m,1}=b_{0}+\alpha_{2}\), where \((\alpha_{1},\alpha_{2})\) is near \((0,0)\) and \(\delta=a_{0}^{2}=\frac{1+\tau_{0}}{\tau_{0}-\tau_{s}}\), \(\beta=\frac{1+\tau_{s}}{\tau_{0}-\tau_{s}}\), \(\beta^{m}=a_{12}^{0}a_{23}a_{34}\cdots a_{2m-1,2m}a_{2m,1}^{0}=b_{0}^{2}a_{23}a_{34}\cdots a_{2m-1,2m}\). Then system (1.4) becomes
$$ \begin{aligned} &\dot{u}_{1}(t)=-u_{1}(t)+(a_{0}+ \alpha_{1}) f_{1}\bigl(u_{1}(t-\tau_{s}) \bigr)-(b_{0}+\alpha_{2})g_{2m}\bigl(u_{2m}(t- \tau_{2m})\bigr), \\ &\dot{u}_{2}(t)=-u_{2}(t)-(a_{0}+ \alpha_{1}) f_{2}\bigl(u_{2}(t-\tau_{s}) \bigr)+(b_{0}+\alpha_{2})g_{1}\bigl(u_{1}(t- \tau_{1})\bigr), \\ & \ldots, \\ &\dot{u}_{i}(t)=-u_{i}(t)+(-1)^{i+1}(a_{0}+ \alpha_{1}) f_{i}\bigl(u_{i}(t-\tau_{s}) \bigr)+(-1)^{i}a_{i-1,i}g_{i-1}\bigl(u_{i-1}(t- \tau_{i-1})\bigr), \\ &\ldots, \\ &\dot{u}_{2m}(t)=-u_{2m}(t)-(a_{0}+ \alpha_{1}) f_{2m}\bigl(u_{2m}(t- \tau_{s})\bigr)+a_{2m-1,2m}g_{2m-1}\bigl(u_{2m-1}(t- \tau_{2m-1})\bigr). \end{aligned} $$
For simplicity, we rewrite system (3.1) as the following retarded functional differential equation (FDE) with parameters in the phase space \(C=C([-\tau_{1},0]; R^{n})\) [1]:
$$ \dot{U}(t)=L(\alpha)U_{t}+G(U_{t},\alpha), $$
with \(\varphi=(\varphi_{1},\varphi_{2}\cdots,\varphi_{n})^{T}\in C\).
The operator \(L_{0}=L(0)\) has the form
$$ L_{0}(\varphi)=\int_{-\tau_{1}}^{0}d\eta( \theta)\varphi(\theta)=Au(t)+\sum_{l=1}^{2m}B_{l}u(t- \tau_{l})+B_{2m+1}u(t-\tau_{s}). $$
Define Λ to be the set of eigenvalues with zero real part; for a BT bifurcation we have \(\Lambda=\{0,0\}\). We use the formal adjoint theory to decompose the phase space of the FDE. P denotes the generalized eigenspace associated with the eigenvalues in Λ, and \(P^{*}\) is the dual space of P. Then the phase space C can be decomposed by Λ as \(C=P\oplus Q\), where
$$ Q=\bigl\{ \phi\in C:\langle\psi,\phi\rangle=0 \bigr\} . $$
Denote the dual bases of P and \(P^{*}\) by Φ and Ψ, respectively, satisfying \(\langle\Psi(s),\Phi(\theta)\rangle=I_{2}\), \(\dot{\Phi}=\Phi B\) and \(-\dot{\Psi}=B\Psi\), with \(B=\bigl ({\scriptsize\begin{matrix}{} 0&1\cr 0&0 \end{matrix}}\bigr )\). Following similar methods to Lemma 3.1 in [3], we can obtain
$$ \Phi(\theta)= \begin{pmatrix} 1&\theta\\\phi_{21} &\phi_{22} +\theta\phi_{21} \\ \phi_{31} &\phi_{32} +\theta\phi_{31} \\ \cdots&\cdots \\ \phi_{2m,1} &\phi_{2m,2} +\theta\phi_{2m,1} \end{pmatrix} ,\qquad \Psi(0)= \begin{pmatrix} \psi_{11}&\psi_{12}&\cdots&\psi_{1,2m} \\ \psi_{21}& \psi_{22}&\cdots&\psi_{2,2m} \end{pmatrix}, $$
$$\begin{aligned}& \phi_{21}= \frac{b_{0}}{1+a_{0}},\qquad \phi_{31}= \frac{b_{0}a_{23}}{(1+a_{0})(-1+a_{0})},\qquad \ldots, \\& \phi_{i1}= \frac{b_{0}a_{23}a_{34}\cdots a_{i-1,i}}{(1+a_{0})^{\frac{i}{2}}(-1+a_{0})^{\frac{i}{2}-1}} \quad (i \mbox{ is even}), \\& \phi_{i1}= \frac{b_{0}a_{23}a_{34}\cdots a_{i-1,i}}{(1+a_{0})^{\frac{i-1}{2}}(-1+a_{0})^{\frac{i-1}{2}}}\quad (i \mbox{ is odd}), \qquad \ldots, \qquad \phi_{2m,1}=\frac{1+a_{0}}{b_{0}}; \\& \phi_{22}= \frac{b_{0}[a_{0}(\tau_{s}-\tau_{1})-1-\tau_{1}]}{(1+a_{0})^{2}},\qquad \phi_{32}= \frac{b_{0}a_{23}[a_{0}^{2}(2\tau_{s}-\tau_{1}-\tau_{2})+2+\tau_{1}+\tau _{2}]}{(1+a_{0})^{2}(-1+a_{0})^{2}}, \\& \ldots, \\& \phi_{i2}= \frac{b_{0}a_{23}a_{34}\cdots a_{i-1,i} [a_{0}^{2} ((i-1)\tau_{s}-\sum_{l=1}^{i-1}\tau _{l} )-a_{0}(\tau_{s}+1)+i-1+\sum_{l=1}^{i-1}\tau_{l} ]}{ (1+a_{0})^{\frac{i}{2}+1}(-1+a_{0})^{\frac{i}{2}}} \\& \quad (i \mbox{ is even}), \\& \phi_{i2}= \frac{b_{0}a_{23}a_{34}\cdots a_{i-1,i} [a_{0}^{2} ((i-1)\tau_{s}-\sum_{l=1}^{i-1}\tau _{l} )+i-1+\sum_{l=1} ^{i-1}\tau_{l} ]}{(1+a_{0})^{\frac{i+1}{2}}(-1+a_{0})^{\frac{i+1}{2}}}\quad (i \mbox{ is odd}), \\& \ldots, \\& \phi_{2m,2}= \frac{a_{0}(\tau_{2m}-\tau_{s})-(1+\tau_{2m})}{b_{0}};\qquad \psi_{21}= \frac{1+a_{0}}{\xi},\quad \xi=2m\tau_{s}+(1+\tau_{s})\sum _{l=1}^{2m}\tau_{l}+ \frac{2m}{2}, \\& \psi_{22}= -\frac{(-1+a_{0})\psi_{21}}{b_{0}},\qquad \psi_{23}= \frac{(1+a_{0})(-1+a_{0})\psi_{21}}{b_{0}a_{23}},\qquad \ldots, \\& \psi_{2i}= -\frac{(1+a_{0})^{\frac{i}{2}-1}(-1+a_{0})^{\frac{i}{2}}\psi _{21}}{b_{0}a_{23}a_{34}\cdots, a_{i-1,i}}\quad (i \mbox{ is even}), \\& \psi_{2i}= \frac{(1+a_{0})^{\frac{i-1}{2}}(-1+a_{0})^{\frac{i-1}{2}}\psi _{21}}{b_{0}a_{23}a_{34}\cdots a_{i-1,i}} \quad (i \mbox{ is odd}),\qquad \ldots, \\& \psi_{2,2m}= \frac{b_{0}\psi_{21}}{-(1+a_{0})}, \\ & \psi_{11}= \frac{1+a_{0}}{6m\xi^{2}}\Biggl[3a_{0}\xi \Biggl(\sum _{l=1}^{2m}\tau_{l}-2m \tau_{s} \Biggr) -\sum_{l=1}^{2m} \tau_{l} \Biggl(\sum_{l=1}^{2m} \tau_{l}+3m \Biggr) \\ & \hphantom{\psi_{11}={}}{}-\tau_{s} \Biggl(\sum_{l=1}^{2m} \tau_{l}+2m \Biggr) \Biggl(\sum_{l=1}^{2m} \tau_{l}-4m\tau_{s} \Biggr)\Biggr], \\ & \psi_{12}= -\frac{\psi_{11}(-1+a_{0})+\psi_{21}[a_{0}(\tau_{s}-\tau _{1})-1-\tau_{1}]}{b_{0}}, \\ & \psi_{13}= \frac{\psi_{11}(-1+a_{0})(1+a_{0})+\psi_{21}[a_{0}^{2}(\tau_{1}+\tau _{2}-2\tau_{s})-2-\tau_{1}-\tau_{2}]}{b_{0}a_{23}}, \\ & \ldots, \\ & \psi_{1i}= -\frac{\psi_{11}(1+a_{0})^{\frac{i-2}{2}}(-1+a_{0})^{\frac{i}{2}}}{ b_{0}a_{23}a_{34}\cdots a_{i-1,i}} -\frac{\psi_{21}(1+a_{0})^{\frac{i-4}{2}}(-1+a_{0})^{\frac{i-2}{2}} }{b_{0}a_{23}a_{34}\cdots a_{i-1,i}} \\ & \hphantom{\psi_{1i}={}}{}\times \Biggl[a_{0}^{2} \Biggl(\sum _{l=1}^{i-1}\tau _{l}-(i-1) \tau_{s} \Biggr)-a_{0}(\tau_{s}+1)-(i-1)-\sum _{l=1}^{i-1}\tau_{l} \Biggr] \quad (i \mbox{ is even}), \\ & \psi_{1i}= \frac{\psi_{11}(1+a_{0})^{\frac{i-1}{2}}(-1+a_{0})^{\frac{i-1}{2}}}{ b_{0}a_{23}a_{34}\cdots a_{i-1,i}} +\frac{\psi_{21}(1+a_{0})^{\frac{i-3}{2}}(-1+a_{0})^{\frac{i-3}{2}} }{b_{0}a_{23}a_{34}\cdots a_{i-1,i}} \\ & \hphantom{\psi_{1i}={}}{}\times \Biggl[a_{0}^{2} \Biggl(\sum _{l=1}^{i-1}\tau _{l}-(i-1) \tau_{s} \Biggr)-(i-1)-\sum_{l=1}^{i-1} \tau_{l} \Biggr] \quad (i \mbox{ is odd}), \\ & \ldots, \\ & \psi_{1,2m}= -\frac{\psi_{11}(1+a_{0})^{m-1}(-1+a_{0})^{m}}{ b_{0}a_{23}a_{34}\cdots a_{2m-1,2m}} -\frac{\psi_{21}(1+a_{0})^{m-2}(-1+a_{0})^{m-1} }{b_{0}a_{23}a_{34}\cdots a_{2m-1,2m}} \\ & \hphantom{\psi_{1,2m}={}}{}\times \Biggl[a_{0}^{2} \Biggl(\sum _{l=1}^{2m-1}\tau _{l}-(2m-1) \tau_{s} \Biggr)-a_{0}(\tau_{s}+1)-(2m-1)-\sum 
_{l=1}^{2m-1}\tau_{l} \Biggr]. \end{aligned}$$
The representation of \(\Psi(s)\) is given in the Appendix. Denoting the Taylor expansion of \(\widehat{F}(u_{t},\alpha)\) with respect to \(u_{t}\) and α in system (3.1) as \(\widehat{F}(u_{t},\alpha)=\sum_{j\geq2}\frac{1}{j!}\widehat {F}_{j}(u_{t},\alpha)\), we have
$$\begin{aligned} \frac{1}{2}\widehat{F}_{2}(u_{t}, \alpha) =&A_{1}u(t)\alpha_{1}+A_{2}u(t)\alpha _{2}+\sum_{l=1} ^{2m+1} \bigl[B_{l1}u(t-\tau_{l})\alpha_{1} +B_{l2}u(t-\tau_{l})\alpha_{2}\bigr] \\ &{}+\sum_{i=1} ^{2m+1}\sum _{0\leq k\leq j\leq2m+1}D_{ikj}u_{i}(t-\tau_{k})u(t- \tau_{j}), \end{aligned}$$
where \(\tau_{0}=0\).
Using (3.3) and the formulas obtained in [3], we deduce the second-order normal form of the BT bifurcation as follows.
Let (H1) and (H2) hold. Then the delay differential system (3.1) can be reduced to the following two-dimensional system of ODE on the center manifold at \((u_{t},\alpha)=(0,0)\):
$$ \begin{aligned} &\dot{z}_{1}=z_{2}, \\ &\dot{z}_{2}=k_{1}z_{1}+k_{2}z_{2}+ \eta_{1}z_{1}^{2}+\eta_{2}z_{1}z_{2}+ \mathit{h.o.t.}, \end{aligned} $$
$$\begin{aligned}& k_{1}= \psi_{2}^{0} \Biggl(A_{1}+\sum _{l=1} ^{2m+1}B_{l1} \Biggr)\phi _{1}^{0}\alpha_{1}+\psi_{2}^{0} \Biggl(A_{2}+\sum_{l=1} ^{2m+1}B_{l2} \Biggr)\phi_{1}^{0}\alpha_{2}, \\& k_{2}= \Biggl\{ \psi_{1}^{0} \Biggl(A_{1}+ \sum_{l=1} ^{2m+1}B_{l1} \Biggr) \phi_{1}^{0}+\psi_{2}^{0} \Biggl[ \Biggl(A_{1} +\sum_{l=1} ^{2m+1}B_{l1} \Biggr)\phi_{2}^{0}-\sum_{l=1} ^{2m+1}\tau_{l}B_{l1} \phi_{1}^{0} \Biggr] \Biggr\} \alpha_{1} \\& \hphantom{k_{2}={}}{}+ \Biggl\{ \psi_{1}^{0} \Biggl(A_{2}+ \sum_{l=1} ^{2m+1}B_{l2} \Biggr) \phi_{1}^{0}+\psi_{2}^{0} \Biggl[ \Biggl(A_{2} +\sum_{l=1} ^{2m+1}B_{l2} \Biggr)\phi_{2}^{0}-\sum_{l=1} ^{2m+1}\tau_{l}B_{l2} \phi_{1}^{0} \Biggr] \Biggr\} \alpha_{2}, \\& \eta_{1}= \psi_{2}^{0} \Biggl(\sum _{i=1} ^{2m+1}\sum_{0\leq k\leq j\leq2m+1}D_{ikj} \phi_{1}^{0}\phi_{i1}^{0} \Biggr), \\& \eta_{2}= 2\psi_{1}^{0} \Biggl(\sum _{i=1} ^{2m+1}\sum_{0\leq k\leq j\leq m}D_{ikj} \phi_{1}^{0}\phi_{i1}^{0} \Biggr)+ \psi_{2}^{0}\Biggl\{ \sum_{i=1} ^{2m+1}\sum_{0\leq k\leq j\leq2m+1}D_{ikj}\bigl( \phi_{1}^{0}\phi_{2i}^{0}+ \phi_{2}^{0}\phi_{i1}^{0}\bigr) \\& \hphantom{\eta_{2}={}}{}-\sum_{i=1} ^{2m+1}\sum _{0\leq k\leq j\leq2m+1}(\tau_{k}+\tau_{j})D_{ikj} \phi_{1}^{0}\phi_{i1}^{0}\Biggr\} . \end{aligned}$$
If \(\eta_{1}\neq0\) and \(\eta_{2}\neq0\) hold, the bifurcation curves related to the perturbation parameters \(\alpha_{1}\), \(\alpha_{2}\) are as follows [9, 22, 23]:
TB: \(k_{1}=0\) (transcritical bifurcation occurs),
\(H_{0}\): \(k_{2}=0\), \(k_{1}<0\) (Hopf bifurcation from the zero equilibrium point),
\(H_{1}\): \(k_{2}=\frac{\eta_{2}}{\eta_{1}}k_{1}\), \(k_{1}>0\) (a Hopf bifurcation from the non-trivial equilibrium),
\(H_{c}^{0}\): \(k_{2}=\frac{\eta_{2}}{7\eta_{1}}k_{1}\), \(k_{1}<0\) (a homoclinic bifurcation with the zero equilibrium point),
\(H_{c}^{1}\): \(k_{2}=\frac{6\eta_{2}}{7\eta_{1}}k_{1}\), \(k_{1}>0\) (a homoclinic bifurcation from the non-trivial equilibrium).
A numerical example is given in Section 4 (see Figures 3-5).
If \(f_{i}''(0)=g_{i}''(0)=0\), then \(\eta_{1}=\eta_{2}=0\) and system (3.1) is degenerate. To determine the dynamics near the BT bifurcation, we need to calculate the higher-order normal form. As in [1, 22], we have
$$ g_{3}^{1}(z,0,\mu)=\bigl(I-P_{I,3}^{1} \bigr)\tilde{f}_{3}^{1}(z,0,\mu)=\operatorname{ Proj}_{\operatorname{Im}(M_{3}^{1})^{c}}\tilde{f}_{3}^{1}(z,0,\mu), $$
$$\begin{aligned} \tilde{f}_{3}^{1}(z,0,\mu) =&f_{3}^{1}(z,0, \mu)+\frac{3}{2}\bigl[\bigl(D_{z}f_{2}^{1} \bigr) (z,0,\mu )U_{2}^{1}(z,\mu)+\bigl(D_{y}f_{2}^{1} \bigr) (z,0,\mu)U_{2}^{2}(z,\mu) \\ &{}-\bigl(D_{z}U_{2}^{1}\bigr) (z,\mu )g_{2}^{1}(z,0,\mu)\bigr]. \end{aligned}$$
It is easy to obtain
$$ f_{3}^{1}(z,0,\mu)= \left( \textstyle\begin{array}{l} \psi_{11}[a_{0}f_{1}'''\varphi_{1}^{3}(-\tau _{s})-b_{0}g_{2m}'''\varphi_{2m}^{3}(-\tau_{2m})] \\ \quad {}+\psi _{12}[-a_{0}f_{2}'''\varphi_{2}^{3}(-\tau_{s}) +b_{0}g_{1}'''\varphi_{1}^{3}(-\tau_{1})] \\ \quad {}+\sum_{i=3}^{2m}\psi_{1i}[(-1)^{i+1}a_{0}f_{i}'''\varphi_{i}^{3}(-\tau _{s})+(-1)^{i}a_{i-1,i}g_{i-1}'''\varphi_{i-1}^{3}(-\tau_{i-1})] \\ \psi_{21}[a_{0}f_{1}'''\varphi_{1}^{3}(-\tau_{s})-a_{2m,1}g_{2m}'''\varphi _{2m}^{3}(-\tau_{2m})] \\ \quad {}+\psi_{22}[-a_{0}f_{2}'''\varphi_{2}^{3}(-\tau_{s}) +b_{0}g_{1}'''\varphi_{1}^{3}(-\tau_{1})] \\ \quad {}+\sum_{i=3}^{2m}\psi_{2i}[(-1)^{i+1}a_{0}f_{i}'''\varphi_{i}^{3}(-\tau _{s})+(-1)^{i}a_{i-1,i}g_{i-1}'''\varphi_{i-1}^{3}(-\tau_{i-1})] \end{array}\displaystyle \right), $$
where \(f_{i}'''=f_{i}'''(0)\) and \(g_{i}'''=g_{i}'''(0)\), \(i=1,2,3,\ldots,2m\), \(\varphi_{1}(\theta)=z_{1}+\theta z_{2}\), \(\varphi_{j}(\theta)=\phi_{j1}z_{1}+(\phi_{j2}+\theta\phi_{j1})z_{2}\), \(j=2,3,\ldots,2m\).
To obtain the third-order normal form, one needs the decomposition
$$ \mathrm{V}_{3}^{4}\bigl(R^{2}\bigr)= \operatorname{Im}\bigl(M_{3}^{1}\bigr)\oplus\operatorname{Im} \bigl(M_{3}^{1}\bigr)^{c}. $$
Then the canonical basis in \(\mathrm{V}_{3}^{4}(R^{2})\) has 40 elements: \(((z,\alpha)^{3},0)^{T}\), \((0,(z,\alpha)^{3})^{T}\), and for the bases of \(\operatorname{Im}(M_{3}^{1})\) and \(\operatorname{Im}(M_{3}^{1})^{c}\) one can refer to [22]. By the definition of \(\operatorname{Proj}_{\operatorname{Im}(M_{3}^{1})^{c}}\) we have
$$\begin{aligned}& \operatorname{Proj}_{\operatorname{Im}(M_{3}^{1})^{c}}p= \left \{ \textstyle\begin{array}{l@{\quad}l} p, & p\in\operatorname{Im}(M_{3}^{1})^{c}, \\ 0, & p\in\operatorname{Im}(M_{3}^{1}), \end{array}\displaystyle \right .\qquad \operatorname{Proj}_{\operatorname{Im}(M_{3}^{1})^{c}} \begin{pmatrix} z_{1}^{3} \\ 0 \end{pmatrix} = \begin{pmatrix}0\\ 3z_{1}^{2}z_{2} \end{pmatrix} , \\& \operatorname{Proj}_{\operatorname{Im}(M_{3}^{1})^{c}} \begin{pmatrix}z_{1}\alpha_{i}^{2}\\ 0 \end{pmatrix} = \begin{pmatrix}0\\ z_{2}\alpha_{i}^{2} \end{pmatrix} , \qquad \operatorname{Proj}_{\operatorname{Im}(M_{3}^{1})^{c}} \begin{pmatrix}z_{1}\alpha_{1}\alpha_{2}\\ 0 \end{pmatrix} = \begin{pmatrix}0\\ \alpha_{1}\alpha_{2}z_{2} \end{pmatrix} , \\& \operatorname{Proj}_{\operatorname{Im}(M_{3}^{1})^{c}} \begin{pmatrix}z_{1}^{2}\alpha_{i}\\ 0 \end{pmatrix} = \begin{pmatrix}0\\ 2z_{1}z_{2}\alpha_{i} \end{pmatrix} . \end{aligned}$$
Together with (3.6) and by [22, 23] the third-order normal form of system (3.1) can be written as
$$ \begin{aligned} &\dot{z}_{1}=z_{2}, \\ &\dot{z}_{2}=k_{1}z_{1}+k_{2}z_{2}+cz_{1}^{3}+dz_{1}^{2}z_{2}+ \mathit{h.o.t.}, \end{aligned} $$
where \(k_{1}\) and \(k_{2}\) are the same as in (3.4), and
$$\begin{aligned}& c= \frac{1}{6}\Biggl[\psi_{21}\bigl(a_{0}f_{1}'''-b_{0}g_{2m}''' \phi_{2m,1}^{3}\bigr)+\psi _{22}\bigl(b_{0}g_{1}'''-a_{0}f_{1}''' \phi_{21}^{3}\bigr) \\& \hphantom{c={}}{}+\sum_{i=3}^{2m} \psi_{2i}\bigl((-1)^{i+1}a_{0}f_{i}''' \phi_{i1}^{3}+ (-1)^{i}a_{i-1,i}g_{i-1}''' \phi_{i-1,1}^{3}\bigr)\Biggr], \\& d= \frac{1}{2}\Biggl\{ a_{0}f_{1}''' \bigl(\psi_{11}-\psi_{21}\tau_{s}- \psi_{12}\phi _{21}^{3}-\phi_{21}^{2}( \phi_{22}-\tau_{s}\phi_{21})\psi_{22} \bigr) +b_{0}g_{1}'''( \psi_{12}-\tau_{1}\psi_{22}) \\& \hphantom{d={}}{}-b_{0}g_{2m}''' \phi_{2m,1}^{2}\bigl(\psi_{11}\phi_{2m,1}+( \phi_{2m,2}-\tau _{2m}\phi_{2m,1})\psi_{21} \bigr) \\& \hphantom{d={}}{}+\sum_{i=3}^{2m} \bigl[(-1)^{i+1}a_{0}f_{1}''' \phi_{i1}^{2}\bigl(\psi_{1i}\phi _{i1}+ \psi_{2i}(\phi_{i2}-\tau_{s}\phi_{i1}) \bigr) \\& \hphantom{d={}}{}+(-1)^{i}a_{i-1,i}g_{i-1}''' \phi_{i-1,1}^{2}\bigl(\psi_{1i}\phi_{i-1,1}+( \phi _{i-1,2}-\tau_{i-1}\phi_{i-1,1})\psi_{2i} \bigr)\bigr]\Biggr\} . \end{aligned}$$
Let \(\bar{t}=-\frac{|c|}{d}t\), \(w_{1}=\frac{d}{\sqrt{|c|}}z_{1}\), \(w_{2}=-\frac{d^{2}}{|c|\sqrt{|c|}}z_{2}\). Then system (3.7) becomes
$$ \begin{aligned} &\dot{w}_{1}=w_{2}, \\ &\dot{w}_{2}=v_{1}w_{1}+v_{2}w_{2}+sw_{1}^{3}-w_{1}^{2}w_{2}+ \mathit{h.o.t.}, \end{aligned} $$
where \(v_{1}=(\frac{d}{c})^{2}k_{1}\), \(v_{2}=-\frac{d}{|c|}k_{2}\), \(s= \operatorname{sgn}(c)\). From [15] we know the bifurcation of system (3.8) is related to the sign of s. If \(s=1\), we have
S: \(v_{1}=0\), \(v_{2}\in R\) (a pitchfork bifurcation),
H: \(v_{2}=0\), \(v_{1}<0\) (a Hopf bifurcation at the trivial equilibrium),
T: \(v_{2}=-\frac{1}{5}v_{1}\), \(v_{1}<0\) (a heteroclinic bifurcation).
In Section 4, we show a numerical example under the case of \(s=1\) (see Figure 9).
If \(s=-1\), we have
H: \(v_{1}=v_{2}\), \(v_{1}>0\) (a Hopf bifurcation at the non-trivial equilibrium),
T: \(v_{2}=\frac{4}{5}v_{1}\), \(v_{1}>0\) (a homoclinic bifurcation),
\(H_{d}\): \(v_{2}=d_{0}v_{1}\), \(v_{1}>0\), \(d_{0}\approx0.752\) (a double cycle bifurcation).
4 Numerical simulation
To verify our main results in the previous sections, in this section, we choose system parameters and functions \(f_{i}(u_{i})\), \(g_{i}(u_{i})\) satisfying the conditions obtained in Sections 2 and 3 and give some numerical examples and simulations.
First, we take \(m=1\) (i.e., two neurons), \(f_{i}(u_{i})=\frac{(1+\epsilon)(\mathrm{e}^{u_{i}}-1)}{\mathrm{e}^{u_{i}}+\epsilon}\), \(g_{i}(u_{i})=\frac{\tanh (d_{i}u_{i})}{d_{i}}\) [10], \(i=1,2\), \(\epsilon=1.5\), \(d_{1}=2\), \(d_{2}=1\), \(\tau_{1}=6\), \(\tau_{2}=\tau_{s}=0\), \(a_{2,1}=a_{1,2}=\frac{\sqrt{3}}{3}\), \(a_{0}=\frac{2\sqrt{3}}{3}\). Indeed, \(\tau_{0}=\frac{\tau_{1}+\tau_{2}}{2}=3\), so \(\beta=\frac{1}{3}\) and \(\delta=\frac{4}{3}\), which give exactly these values of \(a_{0}\) and \(a_{1,2}=a_{2,1}\). One can verify that all the conditions in (H1) and (H2) are satisfied. Moreover, \(f_{i}(0)=0\), \(f_{i}'(0)=1\), \(f_{i}''(0)\neq0\), \(g_{i}(0)=0\), \(g_{i}'(0)=1\), \(i=1,2\). Thus, the coefficients in (3.4) are \(k_{1}= 0.845299\alpha_{{1}}- 0.845299\alpha_{{2}}\), \(k_{2}=-0.0450298362\alpha _{1}+5.11682660736522\alpha_{2}\), \(\eta_{1}=0.07320508073\), \(\eta_{2}=0.1076706575\). The corresponding bifurcation curves of system (3.1) with \(m=1\) are obtained. (See Figure 3.)
The bifurcation set and phase portraits for ( 3.4 ).
If we take \((\alpha_{1}, \alpha_{2})=(-0.01, -0.001)\) and initial conditions \((u_{1}(0), u_{2}(0))=(0.001, 0.001)\), then, in Figure 4(a), one can see that the equilibrium \((0,0)\) is a locally stable focus. However, when \((\alpha_{1}, \alpha_{2})=(-0.01, 0.007935)\), the origin loses its stability, and a periodic solution is bifurcated from the origin (see Figure 4(b)).
The Hopf bifurcation from the zero equilibrium. (a) The time series with parameters \((\alpha_{1}, \alpha _{2})=(-0.01, -0.001)\) and initial conditions \((u_{1}(0), u_{2}(0))=(0.001, 0.001)\). (b) The time series with parameters \((\alpha_{1}, \alpha_{2})=(-0.01, 0.007935)\) and initial conditions \((u_{1}(0), u_{2}(0))=(0.001, 0.001)\).
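To reproduce the qualitative behavior of Figure 4, a very simple integration scheme already suffices. The following Python sketch is ours (it is not the authors' code): it integrates the two-neuron case of system (3.1) by an explicit Euler method with a constant initial history equal to the stated initial condition; the step size, the integration time, and the constant history are assumptions made for illustration, and a dedicated DDE solver would be preferable for quantitative work.

```python
# Rough sketch (ours, not the authors' code): explicit Euler integration of the
# two-neuron case (m = 1) with tau_1 = 6, tau_2 = tau_s = 0 and the stated f, g.
import math

eps, d1, d2 = 1.5, 2.0, 1.0
a0, b0 = 2*math.sqrt(3)/3, math.sqrt(3)/3
alpha1, alpha2 = -0.01, -0.001          # parameter values of Figure 4(a)
a, b = a0 + alpha1, b0 + alpha2
tau1, dt, T = 6.0, 0.01, 400.0          # dt and T are our choices

def f(u):                               # f_i(u) = (1+eps)(e^u - 1)/(e^u + eps)
    return (1 + eps)*(math.exp(u) - 1)/(math.exp(u) + eps)

def g(u, d):                            # g_i(u) = tanh(d_i*u)/d_i
    return math.tanh(d*u)/d

lag = int(round(tau1/dt))               # delay tau_1 measured in steps
hist = [0.001]*(lag + 1)                # constant history for u1, equal to u1(0)
u1, u2 = 0.001, 0.001
for _ in range(int(T/dt)):
    u1_delayed = hist[0]                # u1(t - tau_1)
    du1 = -u1 + a*f(u1) - b*g(u2, d2)   # tau_s = tau_2 = 0: no delay in these terms
    du2 = -u2 - a*f(u2) + b*g(u1_delayed, d1)
    u1, u2 = u1 + dt*du1, u2 + dt*du2
    hist.pop(0)
    hist.append(u1)
print(u1, u2)  # for these (alpha1, alpha2) the trajectory settles near the origin
```

With \(\alpha_{2}\) changed to 0.007935, the value used in Figure 4(b), the same loop should instead approach a small periodic orbit around the origin.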
Under initial values \((u_{1}(0),u_{2}(0))=(-0.02,-0.00001)\), if we take parameters \((\alpha_{1}, \alpha_{2})=(0.01, 0.00053)\), then system (3.1) has a locally stable non-trivial equilibrium which, however, becomes unstable when the parameters \((\alpha_{1}, \alpha_{2})\) cross the Hopf bifurcation curve \(H_{1}\) to another side. One can see a periodic solution is bifurcated from the non-trivial equilibrium as shown in Figure 5.
The Hopf bifurcation from the non-trivial equilibrium. (a) The time series with parameters \((\alpha_{1}, \alpha_{2})=(0.01, 0.000053)\) and initial conditions \((u_{1}(0), u_{2}(0))=(-0.02, -0.00001)\). (b) The time series with parameters \((\alpha_{1}, \alpha_{2})=(0.01, 0.006053)\) and initial conditions \((u_{1}(0), u_{2}(0))=(-0.02, -0.00001)\).
Second, when \(f_{i}''(0)=g_{i}''(0)=0\), we also give an example with \(m=2\) where \(\tau_{1}=\tau_{2}=\tau_{3}=\tau_{4}=2\), \(\tau_{s}=0\), \(a_{23}=0.7\), \(a_{34}=\frac{\sqrt{2}}{2}\), \(a_{41}=\frac{\sqrt{2}}{2}\), \(a_{0}=\frac{\sqrt{6}}{2}\), \(a_{12}=\frac{\sqrt{2}}{2}\), \(f_{i}(u_{i})=g_{i}(u_{i})=\tanh(u_{i})\), \(i=1,2,3,4\). One can verify \(s=1\), thus, with the parameters \(\alpha _{1}\) and \(\alpha_{2}\) changing in the small neighborhood of \((0,0)\), system (3.1) can undergo a pitchfork bifurcation, a Hopf bifurcation, and a heteroclinic bifurcation. The corresponding bifurcation diagram is exhibited in the parameter plane \((\alpha_{1}, \alpha_{2})\) (see Figure 6).
The bifurcation set and phase portraits for ( 3.7 ) with \(\pmb{m=2}\) .
It can be seen that when \((\alpha_{1}, \alpha_{2})=(-0.001,-0.001)\), three trajectory curves with different initial values consistently converge to the origin \((0,0,0,0)\), i.e., the zero equilibrium is a locally asymptotically stable focus under the given parameters \(\alpha_{1}\) and \(\alpha_{2}\). Keeping the initial conditions fixed, we move the perturbation parameters \((\alpha_{1}, \alpha_{2})\) until they cross the pitchfork bifurcation curve S to the other side; then the origin becomes unstable. Simultaneously, two locally stable non-zero equilibria are bifurcated from the origin, which leads to system (3.1) undergoing a pitchfork bifurcation. In Figure 7, we see that system (3.1) has two locally stable foci when \((\alpha_{1}, \alpha_{2})=(-0.001,-0.006)\).
Pitchfork bifurcation is shown with initial values \(\pmb{(u_{1}(0),u_{2}(0),u_{3}(0),u_{4}(0))=(0.00053, 0.004, 0.0002, 0.0004)}\) (green curve), \(\pmb{(-0.00053, -0.004, -0.0002, -0.0004)}\) (blue curve), \(\pmb{(0.5, 0.4, 0.3, 0.4)}\) (red curve). (a) The time series when \((\alpha_{1},\alpha_{2})=(-0.001, -0.001)\). (b) The time series when \((\alpha_{1},\alpha_{2})=(-0.001, -0.006)\).
Above the line S, we let \((\alpha_{1},\alpha_{2})\) pass the Hopf bifurcation curve H, and take \((\alpha_{1},\alpha_{2})=(-0.001, 0.005)\), then the zero equilibrium loses stability, which yields a stable periodic solution as shown in Figure 8.
Hopf bifurcation from the origin with initial values \(\pmb{(u_{1}(0),u_{2}(0),u_{3}(0),u_{4}(0))=(0.00053, 0.0004, 0.0002, 0.0004)}\) (blue curve), \(\pmb{(-0.153, -0.04, -0.02, -0.04)}\) (green curve), and parameters \(\pmb{(\alpha_{1},\alpha_{2})=(-0.001, 0.005)}\) . (a) The phase portrait in the plane \((u_{4},u_{1})\). (b) The time series in the plane \(({t},u_{1})\).
When the parameters \((\alpha_{1},\alpha_{2})\) are located under the bifurcation of \(S^{+}\), and near \(S^{+}\), all solutions will approach the outer stable periodic solution excluding equilibria and the stable manifold of the trivial equilibrium (see Figure 9(a)). However, when \((\alpha _{1},\alpha_{2})\) are chosen at the upper-left side of the curve S, then the solutions of system (3.1) are attracted to the corresponding non-zero equilibria if the initial conditions are close to one of the two non-zero equilibria (see the green and blue curves in Figure 9(b)). But if the initial conditions are chosen sufficiently far from the two non-zero equilibria, then the solutions approach the outer stable periodic solution (see the red curve in Figure 9).
Initial values \(\pmb{(u_{1}(0),u_{2}(0),u_{3}(0),u_{4}(0))=(0.00053, 0.004, 0.0002, 0.0004)}\) (blue curve), \(\pmb{(0.00053, 0.004, 0.0002, 0.0004)}\) (green curve), \(\pmb{(0.5, 0.4, 0.05, 0.4)}\) (red curve). (a) \((\alpha_{1},\alpha _{2})=(0.001, 0.002)\). (b) \((\alpha_{1},\alpha_{2})=(0.001, 0.0018)\).
The authors would like to thank the editor and the anonymous reviewers for their constructive suggestions and comments, which improved the presentation of the paper. In addition, this research is supported by the National Natural Science Foundation of China (No. 1117206) and Innovation Program of Shanghai Municipal Education Commission (No. 12YZ030).
The bases of P and its dual space \(P^{\ast}\) have the following representations:
$$\begin{aligned}& P=\operatorname{span}\Phi, \qquad \Phi(\theta)=\bigl(\phi_{1}( \theta), \phi_{2}(\theta)\bigr), \\ & P^{\ast}=\operatorname{span}\Psi,\qquad \Psi(s)=\operatorname{col}\bigl( \psi_{1}(s), \psi_{2}(s)\bigr), \end{aligned}$$
where \(\phi_{1}(\theta)=\phi_{1}^{0} \in R^{n}\backslash\{0\}\), \(\phi_{2}(\theta)=\phi_{2}^{0}+\phi_{1}^{0}\theta\), \(\phi_{2}^{0}\in R^{n}\), and \(\psi_{2}(s)=\psi_{2}^{0}\in R^{n\ast}\backslash\{0\}\), \(\psi_{1}(s)=\psi_{1}^{0}-s\psi_{2}^{0}\), \(\psi_{1}^{0}\in R^{n\ast}\), which satisfy
$$\begin{aligned} (1)&\quad \Biggl(A+\sum_{l=1} ^{2m+1}B_{l} \Biggr)\phi_{1}^{0}=0, \\ (2)&\quad \Biggl(A+\sum_{l=1} ^{2m+1}B_{l} \Biggr)\phi_{2}^{0}= \Biggl(\sum _{l=1} ^{2m+1}\tau_{l}B_{l}+I \Biggr)\phi_{1}^{0}, \\ (3)&\quad \psi_{2}^{0} \Biggl(A+\sum _{l=1} ^{2m+1}B_{l} \Biggr)=0, \\ (4)&\quad \psi_{1}^{0} \Biggl(A+\sum _{l=1} ^{2m+1}B_{l} \Biggr)=\psi _{2}^{0} \Biggl(\sum_{l=1} ^{2m+1}\tau_{l}B_{l}+I \Biggr), \\ (5)&\quad \psi_{2}^{0}\phi_{2}^{0}+ \psi_{2}^{0}\sum_{l=1} ^{2m+1}\tau _{l}B_{l}\phi_{2}^{0} -\frac{1}{2}\psi_{2}^{0}\sum _{l=1} ^{2m+1}\tau_{l}^{2}B_{l} \phi _{1}^{0}=1, \\ (6)&\quad \psi_{1}^{0}\phi_{2}^{0}+ \psi_{1}^{0}\sum_{l=1} ^{2m+1}\tau _{l}B_{l}\phi_{2}^{0} -\frac{1}{2}\psi_{1}^{0}\sum _{l=1} ^{2m+1}\tau_{l}^{2}B_{l} \phi_{1}^{0} \\ &\qquad {}-\frac{1}{2}\psi_{2}^{0}\sum _{l=1} ^{2m+1}\tau _{l}^{2}B_{l} \phi_{2}^{0}+ \frac{1}{6}\psi_{2}^{0} \sum_{l=1} ^{2m+1}\tau_{l}^{3}B_{l} \phi_{1}^{0}=0. \end{aligned}$$
YL, XL, RW conceived and designed the research; YL wrote the paper; YL, RW, ZL revised the manuscript; YL, XL, SL implemented numerical simulations; YL, SL, RW, ZL participated in the discussions. All authors read and approved the final manuscript.
Department of Mathematics, Shanghai University, Shanghai, 200444, China
College of Mathematics and Information Science, Henan Normal University, Xinxiang, 453007, China
Institute of Systems Biology, Shanghai University, Shanghai, 200444, China
\begin{document}
\setcounter{page}{1} \thispagestyle{empty}
\begin{abstract} In this paper, we are interested in the cotangent sum $c_0\left(\frac{q}{p}\right)$ related to the Estermann zeta function in the special case $q=1$, and we obtain an explicit formula for its series expansion, which improves the identity $(2.1)$ of Theorem $2.1$ in the recent work \cite{GOUBI4}. \end{abstract}
\maketitle
\section{Introduction and main results} For $p$ a positive integer and $q=1,2,\cdots,p-1$ such that $\left(p,q\right)=1$, consider the cotangent sum \cite{Rassias} \begin{equation}\label{equa1} c_0\left(\frac{q}{p}\right)=-\sum_{k=1}^{p-1}\frac{k}{p}\cot\frac{\pi kq}{p}. \end{equation} The sum $c_0\left(\frac{q}{p}\right)$ appears in the value at $s=0$, \[E_0\left(0,\frac{q}{p}\right)=\frac{1}{4}+\frac{i}{2}c_0\left(\frac{q}{p}\right),\] of the Estermann zeta function \[E_0\left(s,\frac{q}{p}\right)=\sum_{k\geq1}\frac{d(k)}{k^s}\exp\left(\frac{2\pi ikq}{p}\right).\]\\ This sum is directly related to the Vasyunin cotangent sum \cite{VASY}, \begin{equation}\label{equa2} V\left(\frac{q}{p}\right)=\sum_{r=1}^{p-1}\left\{\frac{rq}{p}\right\}\cot\left(\frac{\pi r}{p}\right)=-c_0\left(\frac{\overline{q}}{p}\right), \end{equation} which arises in the study of the Riemann zeta function by virtue of the formula \cite{Bettin,Maier} \begin{eqnarray}\label{zetaint} \begin{split}
&\qquad \frac{1}{2\pi\sqrt{pq}}\int_{-\infty}^{+\infty}\left|\zeta\left(\frac{1}{2}+it\right)\right|^2\left(\frac{q}{p}\right)^{it}\frac{dt}{\frac{1}{4}+t^2}=\\ &\frac{\log2\pi-\gamma}{2}\left(\frac{1}{p}+\frac{1}{q}\right)+\frac{p-q}{2pq}\log\frac{q}{p}-\frac{\pi}{2pq} \left(V\left(\frac{p}{q}\right)+V\left(\frac{q}{p}\right)\right). \end{split} \end{eqnarray} This formula is connected to the approach of Nyman, Beurling and B\'{a}ez-Duarte to the Riemann hypothesis \cite{Maier1,Covalanko}, which states that the Riemann hypothesis is true if and only if $\displaystyle\lim_{N\to\infty}d_N=0$, where
\[d^2_N=\inf_{A_N}\frac{1}{2\pi}\int_{-\infty}^{+\infty}\left|1-\zeta A\left(\frac{1}{2}+it\right)\right|^2\frac{dt}{\frac{1}{4}+t^2}\] and the infimum is taken over all Dirichlet polynomials \[A_N(s)=\sum_{n=1}^{N}\frac{a_n}{n^s}.\]
In the literature, various results about $V(\frac{q}{p})$ and $c_0(\frac{q}{p})$ have been obtained. For more details we refer to \cite{GOUBI3, Rassias, Maier, Bettin2, GOUBI1} and the references therein.\\
Our interest in this work is precisely the case $q=1$, for which we compute explicitly the sequence $b_k$ in the series expansion $(2.1)$ of Theorem $2.1$ in \cite{GOUBI4} for $c_0(\frac{1}{p})$: \begin{equation}\label{equac0} c_0\left(\frac{1}{p}\right)=\frac{p\left(p-1\right)\left(p-2\right)}{\pi}\sum_{k\geq0}\frac{b_k}{\left(k+1\right) \left(k+p+1\right)\left(k+2\right)\left(k+p\right)} \end{equation} where $b_k$ is generated by the function $f\left(x\right)=\frac{1}{\left(1-x\right)^2\left(1-x^p\right)}=\sum_{k\geq0}b_kx^k$. The first terms are $b_0=1$, $b_1=2$, and the others are given by the recursive formulae: \begin{equation}\label{recurse1} b_k-2b_{k-1}+b_{k-2}=0,\ 2\leq k\leq p-1,\ k=p+1, \end{equation} \begin{equation}\label{recurse2} b_p-2b_{p-1}+b_{p-2}=1 \end{equation} and \begin{equation}\label{recurse3} b_k-2b_{k-1}+b_{k-2}-b_{k-p}+2b_{k-p-1}-b_{k-p-2}=0,\ k\geq p+2. \end{equation}
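For instance, when $p=3$ the recursive formulae give $b_0=1$, $b_1=2$, $b_2=3$, $b_3=5$, $b_4=7$, $b_5=9,\ldots$, in agreement with the expansion \[\frac{1}{\left(1-x\right)^2\left(1-x^3\right)}=1+2x+3x^2+5x^3+7x^4+9x^5+\cdots.\]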
\section{Statement of main results} In the following theorem, we prove that $b_k=\left(k+1-\frac{p}{2}\left\lfloor\frac{k}{p}\right \rfloor\right)\left(\left\lfloor\frac{k}{p}\right\rfloor+1\right)$, where $\left\lfloor.\right\rfloor$ is the well-known floor function. \begin{theorem}\label{th1} \begin{equation}\label{equath1} c_0\left(\frac{1}{p}\right)=\frac{p\left(p-1\right)\left(p-2\right)}{\pi}\sum_{k\geq0}\frac{\left(k+1-\frac{p}{2}\left\lfloor\frac{k}{p}\right \rfloor\right)\left(\left\lfloor\frac{k}{p}\right\rfloor+1\right)}{\left(k+1\right)\left(k+p+1\right)\left(k+2\right)\left(k+p\right)}. \end{equation} \end{theorem} Let the sequence $s_n(p)$ be defined by \[s_n(p)=p\left(p-1\right)\left(p-2\right)\sum_{k=0}^{n}\frac{\left(k+1-\frac{p}{2}\left \lfloor\frac{k}{p}\right\rfloor\right)\left(\left\lfloor\frac{k}{p}\right\rfloor+1\right)}{\left(k+1\right)\left(k+p+1\right)\left(k+2\right)\left(k+p\right)}.\] Then $\pi c_0\left(\frac{1}{p}\right)=\lim_{n\to\infty}s_n(p)$, which shows that $\pi c_0\left(\frac{1}{p}\right)$ lies in $\overline{\mathbb{Q}}$. By means of Theorem \ref{th1}, for $q=1$ the identity \eqref{zetaint} becomes
\begin{eqnarray} \begin{split}
&\qquad \frac{1}{2\pi\sqrt{p}}\int_{-\infty}^{+\infty}\left|\zeta\left(\frac{1}{2}+it\right)\right|^2\left(\frac{1}{p}\right)^{it}\frac{dt}{\frac{1}{4}+t^2}=\\ &\frac{p-1}{2p}\left[2\left(\log2\pi-\gamma\right)-\log p+p\left(p-2\right)\sum_{k\geq0}\frac{\left(k+1-\frac{p}{2}\left\lfloor\frac{k}{p}\right \rfloor\right)\left(\left\lfloor\frac{k}{p}\right\rfloor+1\right)}{\left(k+1\right)\left(k+p+1\right)\left(k+2\right)\left(k+p\right)}\right]. \end{split} \end{eqnarray} Consider the arithmetical function of two variables \[\theta(i,r)=\frac{pi^2+\left(p+2r+2\right)i+2r+2} {\left(ip+r+1\right)\left(ip+p+r+1\right)\left(ip+r+2\right)\left(ip+p+r\right)}.\] Then the following proposition is obtained. \begin{proposition}\label{propo1} \begin{equation}\label{equapropo1} c_0\left(\frac{1}{p}\right)=\frac{p\left(p-1\right)\left(p-2\right)}{2\pi}\sum_{i\geq0}\sum_{r=0}^{p-1}\theta(i,r) \end{equation} \end{proposition}
\section{Proof of main results}
\subsection{Proof of Theorem~\ref{th1}}
We take inspiration from the theory of generating functions \cite{GOSPA,GOUBI2} and prove that the sequence $b_k$ generated by the rational function $f(x)$ is given explicitly in the following lemma, from which the identity \eqref{equath1} of Theorem \ref{th1} is deduced. \begin{lemma}\label{genera} \begin{equation}\label{gene} \frac{1}{\left(1-x\right)^2\left(1-x^p\right)}=\sum_{k\geq0}b_kx^k,\
|x|<1 \end{equation} with \begin{equation}\label{gene1} b_k=\left(k+1-\frac{p}{2}\left\lfloor\frac{k}{p}\right\rfloor\right)\left(\left\lfloor\frac{k}{p}\right\rfloor+1\right) \end{equation} \end{lemma} \begin{proof} It is well known that \begin{equation}\label{eqprof1}
\frac{1}{1-x}=\sum_{k\geq0}x^k,\ |x|<1 \end{equation} and \begin{equation}\label{eqprof2}
\frac{1}{1-x^p}=\sum_{k\geq0}b_p(k)x^k,\ |x|<1. \end{equation} with $b_p(k)=1$ if $p$ divides $k$ and zero otherwise. Then \begin{equation*} \frac{1}{\left(1-x\right)^2}=\sum_{k\geq0}\left(k+1\right)x^k,\
|x|<1 \end{equation*} and \begin{equation*} \frac{1}{\left(1-x\right)^2\left(1-x^p\right)}=\sum_{k\geq0}\sum_{j=0}^{k}\left(k-j+1\right)b_p(j)x^k,\
|x|<1. \end{equation*} But \[\left\lfloor\frac{k}{p}\right\rfloor p\leq k<\left(\left\lfloor\frac{k}{p}\right\rfloor+1\right) p\] then \[\sum_{j=0}^{k}\left(k-j+1\right)b_p(j)=\sum_{j=0}^{\left\lfloor\frac{k}{p}\right\rfloor}\left(k-jp+1\right)\] Since \[\sum_{j=0}^{\left\lfloor\frac{k}{p}\right\rfloor}\left(k-jp+1\right)=\left(k+1\right)\left(\left\lfloor\frac{k}{p}\right\rfloor+1\right) -\frac{p}{2}\left\lfloor\frac{k}{p}\right\rfloor\left(\left\lfloor\frac{k}{p}\right\rfloor+1\right).\] Then \[b_k=\left(k+1\right)\left(\left\lfloor\frac{k}{p}\right\rfloor+1\right) -\frac{p}{2}\left\lfloor\frac{k}{p}\right\rfloor\left(\left\lfloor\frac{k}{p}\right\rfloor+1\right).\] Finally \[b_k=\left(k+1-\frac{p}{2}\left\lfloor\frac{k}{p}\right\rfloor\right)\left(\left\lfloor\frac{k}{p}\right\rfloor+1\right)\]
\end{proof}
\subsection{Proof of Proposition \ref{propo1}.}
In this subsection, we present two methods to prove Proposition
\ref{propo1}.\\
{\bf Analytic method.} We begin with the following lemma.
\begin{lemma}\label{lembk}
\begin{eqnarray}\label{bk} b_k= \left\{ \begin{array}{lll} k+1\ &\quad \textrm{if}\ k<p,\\ b_{k-p}+k+1\ &\quad \textrm{if}\ k\geq p. \end{array} \right. \end{eqnarray}
\end{lemma}
\begin{proof}
Recall that for $|x|<1$ we have $f(x)=\frac{1}{\left(1-x\right)^2\left(1-x^p\right)}=\sum_{k\geq0}b_kx^k$. It is well known that
$f^{(k)}(0)=\frac{d^kf(x)}{dx^k}|_{x=0}=k!b_k$. Taking $g(x)=\left(1-x\right)^{-2}$, it is easy to show that $g^{(k)}(x)=\frac{(k+1)!}{(1-x)^{k+2}}$.
Using the Leibniz formula for successive derivatives of an infinitely differentiable function, as explained in \cite{GOUBI2}, and the identity $(1-x^p)f(x)=(1-x)^{-2}$, we deduce that \[\frac{d^k(1-x^p)f(x)}{dx^k}=\frac{(k+1)!}{(1-x)^{k+2}}.\] But
\[\frac{d^k(1-x^p)f(x)}{dx^k}=\sum_{j=0}^{k}{k\choose
j}(1-x^p)^{(j)}f^{(k-j)}(x).\]
Since $(1-x^p)^{(j)}=-(p)_j\sigma_p(j)x^{p-j}$, where
$\sigma_p(j)=1$ if $j\leq p$ and zero otherwise, then
\[(1-x^p)f^{(k)}(x)-\sum_{j=1}^{k}{k\choose
j}(p)_j\sigma_p(j)x^{p-j}f^{(k-j)}(x)=\frac{(k+1)!}{(1-x)^{k+2}}.\] Furthermore $f^{(k)}(0)=(k+1)!$ for $k<p$ and for $k\geq p$ we have \[f^{(k)}(0)-{k\choose
p}p!f^{(k-p)}(0)=(k+1)!.\] Which means that $b_{k}=k+1$ for $k<p$
and $b_k=b_{k-p}+k+1$ for $k\geq p.$ \end{proof} \begin{corollary}\label{corobk} For $0\leq r\leq p-1$ and $i\geq0$ we have \begin{equation}\label{equacorobk} b_{ip+r}=\frac{1}{2}\left(pi^2+\left(p+2r+2\right)i+2r+2\right) \end{equation} \end{corollary} \begin{proof} First, by means of formula \eqref{bk}, we have $b_r=r+1$, $b_{p+r}=p+2r+2$, $b_{2p+r}=3p+3r+3$, $b_{3p+r}=6p+4r+4$ and $b_{4p+r}=10p+5r+5.$ Now, for $i\geq0$, suppose that $b_{ip+r}=\frac{1}{2}i(i+1)p+(i+1)(r+1)$; then \[b_{(i+1)p+r}=b_{ip+r}+(i+1)p+r+1=\frac{1}{2}i(i+1)p+(i+1)(r+1)+(i+1)p+r+1.\] Finally \[b_{(i+1)p+r}=\frac{1}{2}(i+1)(i+2)p+(i+2)(r+1)\] and the result follows. \end{proof} Combining the identities in Lemma \ref{lembk} and Corollary \ref{corobk}, we get the desired result \eqref{equapropo1} in Proposition \ref{propo1}.\\
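For instance, taking $p=3$, $i=2$ and $r=0$ in \eqref{equacorobk} gives $b_6=\frac{1}{2}\left(12+10+2\right)=12$, in agreement with $b_6=b_3+7=12$ obtained from \eqref{bk}.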
{\bf Arithmetic method.} This method is straightforward: writing $k=ip+r$ for the Euclidean division of $k$ by $p$, we have $\left\lfloor\frac{k}{p}\right\rfloor=i$. Substituting this value in the expression of $b_k$ in Lemma \ref{genera}, we get \[b_{ip+r}=\left(\frac{p}{2}i+r+1\right)\left(i+1\right).\] Expanding, we recover the result \eqref{equacorobk} of Corollary \ref{corobk}, which implies the result \eqref{equapropo1} of Proposition \ref{propo1}.
\section{Consequences}
\subsection{Limit at infinity of $c_0(\frac{1}{p})$}
\begin{theorem}\label{th2}
\begin{equation}
0\leq\lim_{p\to+\infty}\frac{1}{p^3}c_0\left(\frac{1}{p}\right)\leq\frac{1}{2}
\end{equation}
\end{theorem}
An interesting open question is the exact value of this limit. Since $c_0\left(\frac{1}{p}\right)$ is directly related to the Riemann hypothesis, which asserts that the real part of the non-trivial zeros of the Riemann zeta function is $\frac{1}{2}$, one can conjecture that $\lim_{p\to+\infty}\frac{1}{p^3}c_0\left(\frac{1}{p}\right)=\frac{1}{2}$.
\subsection{Proof of Theorem \ref{th2}}
We use the notational convention $0^0=1$ and the functions \begin{equation} \varphi_m(r,p)=\sum_{i\geq0}\frac{i^m} {\left(ip+r+1\right)\left(ip+p+r+1\right)\left(ip+r+2\right)\left(ip+p+r\right)} \end{equation}
It is clear that $\varphi_m(r,p)$ converges for $m\in\left\{0,1,2\right\}$ and diverges otherwise. By means of these functions, another expression for $c_0\left(\frac{1}{p}\right)$ is deduced using the identity \eqref{equacorobk} of Corollary \ref{corobk}: \begin{equation*} c_0\left(\frac{1}{p}\right)=\frac{p\left(p-1\right)\left(p-2\right)}{2\pi}\sum_{r=0}^{p-1}\left[p\varphi_2(r,p)+(p+2r+2)\varphi_1(r,p)+ (2r+2)\varphi_0(r,p)\right] \end{equation*}
In the following lemma, we develop some inequalities satisfied by the functions $\varphi_m(r,p)$ for $m\in\left\{0,1,2\right\}.$ \begin{lemma}\label{lemfurth1} \begin{equation} \frac{1}{2p^2(p+1)(2p-1)}+\frac{1}{81p^4}\zeta(4)<\varphi_0(r,p)<\frac{1}{2p(p+1)}+\frac{1}{p^4}\zeta(4) \end{equation} \begin{equation} \frac{1}{81p^4}\zeta(3)<\varphi_1(r,p)<\frac{1}{p^4}\zeta(3) \end{equation} \begin{equation} \frac{1}{81p^4}\zeta(2)<\varphi_2(r,p)<\frac{1}{p^4}\zeta(2) \end{equation} \end{lemma} \begin{proof} To prove Lemma \ref{lemfurth1}, simply remark that for $i\geq1$ we have \[\frac{1}{81i^4p^4}<\frac{1} {\left(ip+r+1\right)\left(ip+p+r+1\right)\left(ip+r+2\right)\left(ip+p+r\right)}<\frac{1}{i^4p^4}\] and for $0\leq r\leq p-1$ \[\frac{1}{2p^2(p+1)(2p-1)}<\frac{1} {\left(r+1\right)\left(p+r+1\right)\left(r+2\right)\left(p+r\right)}<\frac{1}{2p(p+1)},\] from which the stated inequalities follow. \end{proof} To simplify the computations, let us denote by $\phi(r,p)$ the function \[\phi(r,p)=p\varphi_2(r,p)+(p+2r+2)\varphi_1(r,p)+ (2r+2)\varphi_0(r,p).\] Then, on the one hand, we have
\[\phi(r,p)<\frac{1}{p^3}\zeta(2)+\frac{p+2}{p^4}\zeta(3)+\frac{1}{p(p+1)}+\frac{2}{p^4}\zeta(4)+\frac{1}{p}\left( \frac{2}{p^3}\zeta(3)+\frac{1}{p+1}+\frac{2}{p^3}\zeta(4)\right)r\] and \begin{eqnarray*} \sum_{r=0}^{p-1}\phi(r,p)&<&\frac{1}{p^2}\zeta(2)+\frac{p+2}{p^3}\zeta(3)+\frac{1}{p+1}+\frac{2}{p^3}\zeta(4)\\ &+&\left(\frac{1}{p^3}\zeta(3)+\frac{1}{2(p+1)}+\frac{1}{p^3}\zeta(4)\right)(p-1). \end{eqnarray*} It follows that
\begin{eqnarray}\label{eq1}\\ \nonumber c_0\left(\frac{1}{p}\right)<\frac{\left(p-1\right)\left(p-2\right)}{2\pi}\left[\frac{\zeta(2)+2\zeta(3)+\zeta(4)}{p}+\frac{2\left(\zeta(3)+\zeta(4)\right)}{p^2} +1+\frac{1}{2}p \right] \end{eqnarray} On the other hand,
\begin{eqnarray*} \phi(r,p)&>&\frac{\zeta(2)}{81p^3}+\frac{(p+2)\zeta(3)}{81p^4}+\frac{1}{p^2(p+1)(2p-1)}+\frac{2\zeta(4)}{81p^4}\\ &+&\left(\frac{2\zeta(3)}{81p^4}+\frac{1}{p^2(p+1)(2p-1)}+\frac{2\zeta(4)}{81p^4}\right)r \end{eqnarray*} and \begin{eqnarray*} \sum_{r=0}^{p-1}\phi(r,p)&>&\frac{\zeta(3)}{81p^2}+\frac{(p+2)\zeta(3)}{81p^3}+\frac{1}{p(p+1)(2p-1)}+\frac{2\zeta(4)}{81p^3}\\ &+&\left(\frac{\zeta(3)}{81p^3}+\frac{1}{2p(p+1)(2p-1)}+\frac{\zeta(4)}{81p^3}\right)\left(p-1\right) \end{eqnarray*} Furthermore \begin{eqnarray}\label{eq2} \nonumber c_0\left(\frac{1}{p}\right)&>&\frac{\left(p-1\right)\left(p-2\right)}{2\pi}\left[\frac{\zeta(3)}{81p}+\frac{(p+2)\zeta(3)}{81p^2}+ \frac{1}{(p+1)(2p-1)}+\frac{2\zeta(4)}{81p^2}\right]\\ &+&\frac{\left(p-1\right)^2\left(p-2\right)}{2\pi}\left(\frac{\zeta(3)}{81p^2}+\frac{1}{2(p+1)(2p-1)}+\frac{\zeta(4)}{81p^2}\right) \end{eqnarray} The passage to the limit in two last inequalities \eqref{eq1} and \eqref{eq2} states that \begin{equation*} 0\leq\lim_{p\to+\infty}\frac{1}{p^3}c_0\left(\frac{1}{p}\right)\leq\frac{1}{2}. \end{equation*} \subsection{New property of the floor function} Returning back to own recursive formula of $b_k$ and using its expression \eqref{lembk} in Lemma \ref{genera} we obtain \begin{theorem}\label{th3} For $2\leq k\leq p-1$ and $k=p+1$ we have \begin{equation}\label{equa1th3} \sum_{i=0}^{2}{2\choose i}(-1)^{i}\left(k-i+1-\frac{p}{2}\left\lfloor\frac{k-i}{p}\right \rfloor\right)\left(\left\lfloor\frac{k-i}{p}\right\rfloor+1\right)=0, \end{equation} at $k=p$ \begin{equation}\label{equa2th3} \sum_{i=0}^{2}{2\choose i}(-1)^{i}\left(p-i+1-\frac{p}{2}\left\lfloor\frac{p-i}{p}\right \rfloor\right)\left(\left\lfloor\frac{p-i}{p}\right\rfloor+1\right)=1 \end{equation} and for $k\geq p+2$; \begin{equation}\label{equa3th3} \sum_{j=0}^{1}\sum_{i=0}^{2}(-1)^{i+j}{2\choose i}\left[\left(k-jp-i+1-\frac{p}{2}\left\lfloor\frac{k-jp-i}{p}\right\rfloor\right) \left(\left\lfloor\frac{k-jp-i}{p}\right\rfloor+1\right)\right]=0 \end{equation} \end{theorem} \begin{proof} We can write for $2\leq k\leq p+1$ \[b_k-2b_{k-1}+b_{k-2}=\sum_{i=0}^{2}{2\choose i}(-1)^ib_{k-i}\] and for $k\geq p+2$; \[b_k-2b_{k-1}+b_{k-2}-b_{k-p}+2b_{k-p-1}-b_{k-p-2}=\sum_{j=0}^{1}\sum_{i=0}^{2}{2\choose i}(-1)^{i+j}b_{k-jp-i}.\] But from the recursive formulae \eqref{recurse1}, \eqref{recurse2} and \eqref{recurse3} of the sequence $b_k$ we deduce respectively the formulae \eqref{equa1th3}, \eqref{equa2th3} and \eqref{equa3th3}. \end{proof}
\subsection{sum of some numerical series} Inspired from the results $c_0(\frac{1}{3})=\frac{1}{3\sqrt{3}}$, $c_0(\frac{1}{4})=\frac{1}{2}$ and $c_0(\frac{1}{6})=\frac{7}{3\sqrt{3}}$ in \cite{GOUBI1} and $c_0(\frac{1}{5})=\frac{\left(\sqrt{5}-1\right)\sqrt{5 -\sqrt{5}}+3\left(\sqrt{5}+1\right)\sqrt{5 +\sqrt{5}}}{10\sqrt{10}}$ in \cite{Bettin3}, we conclude that \begin{equation} \sum_{k\geq0}\frac{\left(k+1-\frac{3}{2}\left\lfloor\frac{k}{3}\right \rfloor\right)\left(\left\lfloor\frac{k}{3}\right\rfloor+1\right)}{\left(k+1\right)\left(k+4\right)\left(k+2\right)\left(k+3\right)}=\frac{\pi}{18\sqrt{3}}, \end{equation} \begin{equation} \sum_{k\geq0}\frac{\left(k+1-2\left\lfloor\frac{k}{4}\right \rfloor\right)\left(\left\lfloor\frac{k}{4}\right\rfloor+1\right)}{\left(k+1\right)\left(k+5\right)\left(k+2\right)\left(k+4\right)}=\frac{\pi}{48}, \end{equation} \begin{equation} \sum_{k\geq0}\frac{\left(k+1-\frac{5}{2}\left\lfloor\frac{k}{5}\right \rfloor\right)\left(\left\lfloor\frac{k}{5}\right\rfloor+1\right)}{\left(k+1\right)\left(k+6\right)\left(k+2\right)\left(k+5\right)}= \frac{\left(\sqrt{5}-1\right)\sqrt{5-\sqrt{5}}+3\left(\sqrt{5}+1\right)\sqrt{5+\sqrt{5}}}{600\sqrt{10}}\pi \end{equation} and \begin{equation} \sum_{k\geq0}\frac{\left(k+1-3\left\lfloor\frac{k}{6}\right \rfloor\right)\left(\left\lfloor\frac{k}{6}\right\rfloor+1\right)}{\left(k+1\right)\left(k+7\right)\left(k+2\right)\left(k+6\right)}=\frac{7\pi}{360\sqrt{3}}. \end{equation}
\end{document} | arXiv |
Definition of matrix-vector multiplication
I have just learned about matrix-vector multiplication. Is there a particular reason why we multiply a matrix by a column vector instead of a row vector?
For example, $Ax = \begin{bmatrix}a&b\\c&d\end{bmatrix} \begin{bmatrix}x_1\\x_2\end{bmatrix}$
It is basically computing $ax_1+bx_2$ and $cx_1+dx_2$, so why don't we use a row vector here? It seems to me that multiplying a matrix by a row vector would be more intuitive.
Edit: I found this and I think it answers my question of why the multiplication is defined that way: https://math.stackexchange.com/a/271937/231821
linear-algebra
GalaxyVintage
$\begingroup$ You can multiply a matrix by a row vector, but you would have to put the vector on the left. It's a matter of conventions, really. $\endgroup$ – Daniel Robert-Nicoud Jan 23 '16 at 0:20
$\begingroup$ In some mathematical topics (probability transition matrices for Markov chains) the convention is typically a row vector times a matrix. Both of these are just special cases of matrix-matrix multiplication. $\endgroup$ – hardmath Jan 23 '16 at 3:30
$\begingroup$ I suspect the convention of using column vectors comes from the older convention of writing coefficients to the right of the variable instead of the left. Generally in linear algebra, the variables are vectors and the coefficients are matrices. Of course we could have also defined matrix multiplication to be columns from the left times rows from the right instead of vice versa, but the conventional definition is more how we were used to seeing things flow. $\endgroup$ – Paul Sinclair Jan 23 '16 at 5:14
It has already been pointed out that you can multiply a row vector and a matrix. In fact, the only difference between the two multiplications below is that the numeric values in the first result are stacked in a column vector while the same numeric values are listed in a row vector in the second result:
$$\pmatrix{6& -7& 10 & 1 \\ 0& 3& -1 & 4 \\ 0& 5& -7 & 5 \\ 4&1&0&-2} \pmatrix{2\\-2\\-1\\1} = \pmatrix{17\\-1\\2\\4}$$
$$ \pmatrix{2 &-2&-1&1} \pmatrix{6& 0&0&4\\-7& 3&5&1\\10 & -1&-7&0\\1 & 4 & 5&-2} = \pmatrix{17&-1&2&4}$$
One simple pragmatic difference between these two equations is that the second one is a lot wider when it is fully written out.
It seems to me the first equation "fits" more neatly on the page because we have already committed to making an equation that is four rows tall (because of the $4\times4$ matrix, this is unavoidable), so there is no "cost" in also making the vectors four rows tall; and in return we get vectors that are only one column wide instead of four columns each. Now imagine the dimensions of the matrix were $6\times6$; the multiplication by a column vector would still fit neatly on this page but we might have some difficulty with the multiplication that uses row vectors; it might not fit within the margins of this column of text.
It's also possible that the convention is influenced by the interpretation of the matrix as a transformation to be applied to the vector, along with a preference for writing the names of transformations on the left of the thing they transform (much as we like to write a function name to the left of the input parameters of a function, that is, $f(x) = x^2$ rather than $(x)f = x^2$). But I'm not sure there is a more compelling reason behind this particular observation other than collective force of habit, and these patterns are not universal; sometimes people write the name of the transformation on the right.
David K
The multiplication of the matrix $A=\left(\begin{array}{cc} a& b\\ c & d\end{array}\right)$ by the vector $\left(\begin{array}{c} x\\ y\end{array}\right)$ represents the linear map
$$\mathbb{R}^2\to \mathbb{R}^2, \left(\begin{array}{c} x\\ y\end{array}\right)\to \left(\begin{array}{c} ax+by\\ cx+dy\end{array}\right)=\left(\begin{array}{cc} a& b\\ c & d\end{array}\right)\left(\begin{array}{c} x\\ y\end{array}\right).$$ It also can be written as
$$\mathbb{R}^2\to \mathbb{R}^2, (x,y)\to (ax+by,cx+dy)=(x,y)\left(\begin{array}{cc} a & c\\ b & d\end{array}\right).$$
mfl
$\begingroup$ Doesn't it get exhausting to type out matrices as \left(\begin{array}{cc} a& b\\ c & d\end{array}\right) instead of the much easier \pmatrix{a & b \\ c & d}? 😉 $\endgroup$ – user137731 Jan 23 '16 at 0:55
$\begingroup$ @Bye_World You are right. I will use the much shorter \pmatrix. Thanks. $\endgroup$ – mfl Jan 23 '16 at 1:00
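Both conventions are easy to check numerically. A short NumPy sketch (with made-up example values) shows that the two orderings produce the same numbers, just arranged as a column or a row:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
x = np.array([5.0, 6.0])

col = A @ x        # column-vector convention: entries a*x1 + b*x2 and c*x1 + d*x2
row = x @ A.T      # row-vector convention: the row vector multiplies A^T from the right side of x

print(col)         # [17. 39.]
print(row)         # [17. 39.]  -- same values, viewed as a row
```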
3.4: Mathematical Induction - An Introduction
[ "article:topic", "Inductive Step", "Induction", "authorname:hkwong", "license:ccbyncsa", "showtoc:no", "inductive hypothesis", "anchor step", "induction hypothesis" ]
Combinatorics and Discrete Mathematics
Book: A Spiral Workbook for Discrete Mathematics (Kwong)
3: Proof Techniques
Contributed by Harris Kwong
Professors (Mathematics) at State University of New York at Fredonia
Summary and Review
Exercises 3.4
Mathematical induction can be used to prove that an identity is valid for all integers \(n\geq1\). Here is a typical example of such an identity: \[1+2+3+\cdots+n = \frac{n(n+1)}{2}. \nonumber\] More generally, we can use mathematical induction to prove that a propositional function \(P(n)\) is true for all integers \(n\geq1\).
Definition: Mathematical Induction
To show that a propositional function \(P(n)\) is true for all integers \(n\geq1\), follow these steps:
Basis Step: Verify that \(P(1)\) is true.
Inductive Step: Show that if \(P(k)\) is true for some integer \(k\geq1\), then \(P(k+1)\) is also true.
The basis step is also called the anchor step or the initial step. This proof technique is valid because of the next theorem.
Theorem \(\PageIndex{1}\label{thm:pmi}\): Principle of Mathematical Induction
If \(S \subseteq \mathbb{N}\) such that
\(1\in S\), and
\(k\in S \Rightarrow k+1\in S\),
then \(S=\mathbb{N}\).
Here is a sketch of the proof. From (i), we know that \(1\in S\). It then follows from (ii) that \(2\in S\). Applying (ii) again, we find that \(3\in S\). Likewise, \(4\in S\), then \(5\in S\), and so on. Since this argument can go on indefinitely, we find that \(S = \mathbb{N}\).
There is a subtle problem with this argument. It is unclear why "and so on" will work. After all, what does "and so on" or "continue in this manner" really mean? Can it really continue indefinitely? The trouble is, we do not have a formal definition of the natural numbers. It turns out that we cannot completely prove the principle of mathematical induction with just the usual properties for addition and multiplication. Consequently, we will take the theorem as an axiom without giving any formal proof.
Although we cannot provide a satisfactory proof of the principle of mathematical induction, we can use it to justify the validity of the mathematical induction. Let \(S\) be the set of integers \(n\) for which a propositional function \(P(n)\) is true. The basis step of mathematical induction verifies that \(1\in S\). The inductive step shows that \(k\in S\) implies \(k+1\in S\). Therefore, the principle of mathematical induction proves that \(S=\mathbb{N}\). It follows that \(P(n)\) is true for all integers \(n\geq1\).
The basis step and the inductive step, together, prove that \[P(1) \Rightarrow P(2) \Rightarrow P(3) \Rightarrow \cdots . \nonumber\] Therefore, \(P(n)\) is true for all integers \(n\geq1\). Compare induction to falling dominoes. When the first domino falls, it knocks down the next domino. The second domino in turn knocks down the third domino. Eventually, all the dominoes will be knocked down. But it will not happen unless these conditions are met:
The first domino must fall to start the motion. If it does not fall, no chain reaction will occur. This is the basis step.
The distance between adjacent dominoes must be set up correctly. Otherwise, a certain domino may fall down without knocking over the next. Then the chain reaction will stop, and will never be completed. Maintaining the right inter-domino distance ensures that \(P(k)\Rightarrow P(k+1)\) for each integer \(k\geq1\).
To prove the implication \[P(k) \Rightarrow P(k+1) \nonumber\] in the inductive step, we need to carry out two steps: assuming that \(P(k)\) is true, then using it to prove \(P(k+1)\) is also true. So we can refine an induction proof into a 3-step procedure:
Verify that \(P(1)\) is true.
Assume that \(P(k)\) is true for some integer \(k\geq1\).
Show that \(P(k+1)\) is also true.
The second step, the assumption that \(P(k)\) is true, is sometimes referred to as the inductive hypothesis or induction hypothesis. This is how a mathematical induction proof may look:
Proof: We proceed by induction on \(n\). When \(n = 1\), the left-hand side of the identity reduces to . . . , and the right-hand side becomes . . . . Hence, the identity holds when n = 1. Assume the identity holds when n = k for some integer k ≥ 1; that is, assume
\[ \cdots\]
for some integer \(k \geq 1\). We want to show that it also holds when \(n = k + 1\); that is, we want to show that
\[ \cdots . \nonumber\]
Using the inductive hypothesis (3.1), we find
Therefore, the identity also holds when \(n = k + 1\). This completes the induction.
The idea behind mathematical induction is rather simple. However, it must be delivered with precision.
Be sure to say "Assume the identity holds for some integer \(k\geq1\)." Do not say "Assume it holds for all integers \(k\geq1\)." If we already know the result holds for all \(k\geq1\), then there is no need to prove anything at all.
Be sure to specify the requirement \(k\geq1\). This ensures that the chain reaction of the falling dominoes starts with the first one.
Do not say "let \(n=k\)" or "let \(n=k+1\)." The point is, you are not assigning the value of \(k\) and \(k+1\) to \(n\). Rather, you are assuming that the statement is true when \(n\) equals \(k\), and using it to show that the statement also holds when \(n\) equals \(k+1\).
Example \(\PageIndex{1}\label{eg:induct1-01}\)
Use mathematical induction to show that \[1+2+3+\cdots+n = \frac{n(n+1)}{2} \nonumber\] for all integers \(n\geq1\).
In the basis step, it would be easier to check the two sides of the equation separately. The inductive step is the key step in any induction proof, and the last part, the part that proves \(P(k+1)\) is true, is the most difficult part of the entire proof. In this regard, it is helpful to write out exactly what the inductive hypothesis proclaims, and what we really want to prove. In this problem, the inductive hypothesis claims that
\[1+2+3+\cdots+k = \frac{k(k+1)}{2}. \nonumber\]
We want to prove that \(P(k+1)\) is also true. What does \(P(k+1)\) really mean? It says
\[1+2+3+\cdots+(k+1) = \frac{(k+1)(k+2)}{2}. \nonumber\]
Compare the left-hand sides of these two equations. The first one is the sum of \(k\) quantities, and the second is the sum of \(k+1\) quantities, and the extra quantity is the last number \(k+1\). The sum of the first \(k\) terms is precisely what we have on the left-hand side of the inductive hypothesis. Hence, by writing
\[1+2+3+\cdots+(k+1) = 1+2+\cdots+k+(k+1), \nonumber\]
we can regroup the right-hand side as
\[1+2+3+\cdots+(k+1) = [1+2+\cdots+k]+(k+1), \nonumber\]
so that \(1+2+\cdots+k\) can be replaced by \(\frac{k(k+1)}{2}\), according to the inductive hypothesis. With additional algebraic manipulation, we try to show that the sum does equal to \(\frac{(k+1)(k+2)}{2}\).
We proceed by induction on \(n\). When \(n=1\), the left-hand side of the identity reduces to 1, and the right-hand side becomes \(\frac{1\cdot2}{2}=1\); hence, the identity holds when \(n=1\). Assume it holds when \(n=k\) for some integer \(k\geq1\); that is, assume that
\[1+2+3+\cdots+k = \frac{k(k+1)}{2} \nonumber\]
for some integer \(k\geq1\). We want to show that it also holds when \(n=k+1\). In other words, we want to show that
Using the inductive hypothesis, we find
\[\begin{array}{r c l} 1+2+3+\cdots+(k+1) &=& 1+2+3+\cdots+k+(k+1) \\ &=& \frac{k(k+1)}{2}+(k+1) \\ &=& (k+1)\left(\frac{k}{2}+1\right) \\ &=& (k+1)\cdot\frac{k+2}{2}. \end{array} \nonumber\]
Therefore, the identity also holds when \(n=k+1\). This completes the induction.
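A finite check is no substitute for the induction argument above, but it is easy to confirm the identity numerically for small values of \(n\); the short Python sketch below is purely illustrative.

```python
for n in range(1, 11):
    lhs = sum(range(1, n + 1))      # 1 + 2 + ... + n
    rhs = n * (n + 1) // 2
    assert lhs == rhs, (n, lhs, rhs)
print("Identity verified for n = 1, ..., 10")
```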
We can use the summation notation (also called the sigma notation) to abbreviate a sum. For example, the sum in the last example can be written as
\[\sum_{i=1}^n i. \nonumber\]
The letter \(i\) is the index of summation. By putting \(i=1\) under \(\sum\) and \(n\) above, we declare that the sum starts with \(i=1\), and ranges through \(i=2\), \(i=3\), and so on, until \(i=n\). The quantity that follows \(\sum\) describes the pattern of the terms that we are adding in the summation. Accordingly,
\[\sum_{i=1}^{10} i^2 = 1^2+2^2+3^2+\cdots+10^2. \nonumber\]
In general, the sum of the first \(n\) terms in a sequence \(\{a_1,a_2,a_3,\ldots\,\}\) is denoted \(\sum_{i=1}^n a_i\). Observe that
\[\sum_{i=1}^{k+1} a_i = \left(\sum_{i=1}^k a_i\right) + a_{k+1}, \nonumber\]
which provides the link between \(P(k+1)\) and \(P(k)\) in an induction proof.
Use mathematical induction to show that, for all integers \(n\geq1\), \[\sum_{i=1}^n i^2 = 1^2+2^2+3^2+\cdots+n^2 = \frac{n(n+1)(2n+1)}{6}. \nonumber\]
We proceed by induction on \(n\). When \(n=1\), the left-hand side reduces to \(1^2=1\), and the right-hand side becomes \(\frac{1\cdot2\cdot3}{6}=1\); hence, the identity holds when \(n=1\). Assume it holds when \(n=k\) for some integer \(k\geq1\); that is, assume that \[\sum_{i=1}^k i^2 = \frac{k(k+1)(2k+1)}{6} \nonumber\] for some integer \(k\geq1\). We want to show that it still holds when \(n=k+1\). In other words, we want to show that \[\sum_{i=1}^{k+1} i^2 = \frac{(k+1)(k+2)[2(k+1)+1]}{6} = \frac{(k+1)(k+2)(2k+3)}{6}. \nonumber\] From the inductive hypothesis, we find \[\begin{array}{r c l} \sum_{i=1}^{k+1} i^2 &=& \left(\sum_{i=1}^k i^2\right) + (k+1)^2 \\ &=& \frac{k(k+1)(2k+1)}{6}+(k+1)^2 \\ &=& \textstyle\frac{1}{6}\,(k+1)[k(2k+1)+6(k+1)] \\ &=& \textstyle\frac{1}{6}\,(k+1)(2k^2+7k+6) \\ &=& \textstyle\frac{1}{6}\,(k+1)(k+2)(2k+3). \end{array} \nonumber\] Therefore, the identity also holds when \(n=k+1\). This completes the induction.
Use mathematical induction to show that \[3+\sum_{i=1}^n (3+5i) = \frac{(n+1)(5n+6)}{2} \nonumber\] for all integers \(n\geq1\).
Proceed by induction on \(n\). When \(n=1\), the left-hand side reduces to \(3+(3+5)=11\), and the right-hand side becomes \(\frac{2\cdot11}{2} =11\); hence, the identity holds when \(n=1\). Assume it holds when \(n=k\) for some integer \(k\geq1\); that is, assume that \[3+\sum_{i=1}^k (3+5i) = \frac{(k+1)(5k+6)}{2} \nonumber\] for some integer \(k\geq1\). We want to show that it still holds when \(n=k+1\). In other words, we want to show that \[3+\sum_{i=1}^{k+1} (3+5i) = \frac{[(k+1)+1]\,[5(k+1)+6]}{2} = \frac{(k+2)(5k+11)}{2}. \nonumber\] From the inductive hypothesis, we find \[\begin{array}{r c l} 3+\sum_{i=1}^{k+1} (3+5i) &=& \left(3+\sum_{i=1}^k (3+5i)\right) + [3+5(k+1)] \\ &=& \frac{(k+1)(5k+6)}{2} + 5k+8 \\ &=& \textstyle\frac{1}{2}\,[(k+1)(5k+6)+2(5k+8)] \\ &=& \textstyle\frac{1}{2}\,(5k^2+21k+22) \\ &=& \textstyle\frac{1}{2}\,(k+2)(5k+11). \end{array} \nonumber\] This completes the induction.
hands-on exercise \(\PageIndex{1}\label{he:induct1-01}\)
It is time for you to write your own induction proof. Prove that \[1\cdot2 + 2\cdot3 + 3\cdot4 + \cdots + n(n+1) = \frac{n(n+1)(n+2)}{3} \nonumber\] for all integers \(n\geq1\).
We will give you a hand on this one; after that, you will be on your own. We lay out the template below; all you need to do is fill in the blanks.
for some integer \(k\geq1\). We want to show that it also holds when \(n=k+1\); that is, we want to show that
It follows from the inductive hypothesis that \[\begin{aligned} \hskip2in &=& \hskip2in + \hskip1in \\ \hskip2in &=& \hskip1in + \hskip1in \\ \hskip2in &=& \hskip1in . \end{aligned} \nonumber\] This completes the induction.
exercise \(\PageIndex{2}\label{he:induct1-02}\)
Use induction to prove that, for all positive integers \(n\), \[1\cdot2\cdot3 + 2\cdot3\cdot4 + \cdots + n(n+1)(n+2) = \frac{n(n+1)(n+2)(n+3)}{4}. \nonumber\]
exercise \(\PageIndex{3}\label{he:sumfourn}\)
Use induction to prove that, for all positive integers \(n\), \[1+4+4^2+\cdots+4^n = \frac{1}{3}\,(4^{n+1}-1). \nonumber\]
All three steps in an induction proof must be completed; otherwise, the proof may not be correct.
Never attempt to prove \(P(k)\Rightarrow P(k+1)\) by examples alone. Consider \[P(n): \qquad n^2+n+11 \mbox{ is prime}. \nonumber\] In the inductive step, we want to prove that \[P(k) \Rightarrow P(k+1) \qquad\mbox{ for \emph{any} } k\geq1. \nonumber\] The following table verifies that it is true for \(1\leq k\leq 8\): \[\begin{array}{|*{10}{c|}} \hline n & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ \hline n^2+n+11 & 13 & 17 & 23 & 31 & 41 & 53 & 67 & 83 & 101 \\ \hline \end{array} \nonumber\] Nonetheless, when \(n=10\), \(n^2+n+11=121\) is composite. So \(P(9) \nRightarrow P(10)\). The inductive step breaks down when \(k=9\).
The basis step is equally important. Consider proving \[P(n): \qquad 3n+2 = 3q \mbox{ for some integer $q$} \nonumber\] for all \(n\in\mathbb{N}\). Assume \(P(k)\) is true for some integer \(k\geq1\); that is, assume \(3k+2=3q\) for some integer \(q\). Then \[3(k+1)+2 = 3k+3+2 = 3+3q = 3(1+q). \nonumber\] Therefore, \(3(k+1)+2\) can be written in the same form. This proves that \(P(k+1)\) is also true. Does it follow that \(P(n)\) is true for all integers \(n\geq1\)? We know that \(3n+2\) cannot be written as a multiple of 3. What is the problem?
The problem is: we need \(P(k)\) to be true for at least one value of \(k\) so as to start the sequence of implications \[P(1) \Rightarrow P(2), \qquad P(2) \Rightarrow P(3), \qquad P(3) \Rightarrow P(4), \qquad\ldots \nonumber\] The induction fails because we have not established the basis step. In fact, \(P(1)\) is false. Since the first domino does not fall, we cannot even start the chain reaction.
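As a quick illustration of the first pitfall above, a short computation (illustrative Python, with a naive trial-division primality test) confirms that the pattern for \(n^2+n+11\) breaks exactly at \(n=10\):

```python
def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

for n in range(1, 11):
    value = n * n + n + 11
    print(n, value, "prime" if is_prime(value) else "composite")
# n = 1, ..., 9 give primes, but n = 10 gives 121 = 11 * 11,
# so checking examples alone never establishes the inductive step.
```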
Thus far, we have learned how to use mathematical induction to prove identities. In general, we can use mathematical induction to prove a statement about \(n\). This statement can take the form of an identity, an inequality, or simply a verbal statement about \(n\). We shall learn more about mathematical induction in the next few sections.
Mathematical induction can be used to prove that a statement about \(n\) is true for all integers \(n\geq1\).
We have to complete three steps.
In the basis step, verify the statement for \(n=1\).
In the inductive hypothesis, assume that the statement holds when \(n=k\) for some integer \(k\geq1\).
In the inductive step, use the information gathered from the inductive hypothesis to prove that the statement also holds when \(n=k+1\).
Be sure to complete all three steps.
Pay attention to the wording. At the beginning, follow the template closely. When you feel comfortable with the whole process, you can start venturing out on your own.
Exercise \(\PageIndex{1}\label{ex:induct1-01}\)
Use induction to prove that \[1^3+2^3+3^3+\cdots+n^3 = \frac{n^2(n+1)^2}{4} \nonumber\] for all integers \(n\geq1\).
Use induction to prove that the following identity holds for all integers \(n\geq1\): \[1+3+5+\cdots+(2n-1) = n^2. \nonumber\]
Use induction to show that \[1+\frac{1}{3}+\frac{1}{3^2}+\cdots+\frac{1}{3^n} = \frac{3}{2}\left(1-\frac{1}{3^{n+1}}\right) \nonumber\] for all positive integers \(n\).
Use induction to establish the following identity for any integer \(n\geq1\): \[1-3+9-\cdots+(-3)^n = \frac{1-(-3)^{n+1}}{4}. \nonumber\]
Use induction to show that, for any integer \(n\geq1\): \[\sum_{i=1}^n i\cdot i! = (n+1)!-1. \nonumber\]
Use induction to prove the following identity for integers \(n\geq1\): \[\sum_{i=1}^n \frac{1}{(2i-1)(2i+1)} = \frac{n}{2n+1}. \nonumber\]
Evaluate \(\sum_{i=1}^n \frac{1}{i(i+1)}\) for a few values of \(n\). What do you think the result should be? Use induction to prove your conjecture.
Use induction to prove that \[\sum_{i=1}^n (2i-1)^3 = n^2(2n^2-1) \nonumber\] whenever \(n\) is a positive integer.
Use induction to show that, for any integer \(n\geq1\): \[1^2-2^2+3^2-\cdots+(-1)^{n-1}n^2 = (-1)^{n-1}\,\frac{n(n+1)}{2}. \nonumber\]
Exercise \(\PageIndex{10}\label{ex:induct1-10}\)
Use mathematical induction to show that \[\sum_{i=1}^n \frac{i+4}{i(i+1)(i+2)} = \frac{n(3n+7)}{2(n+1)(n+2)} \nonumber\] for all integers \(n\geq1\).
Fractions Formula
The fractions formula helps in conveniently performing the numerous operations on fractions. Compared to ordinary integers, the basic arithmetic operations on fractions follow different rules. Fraction formulas help us to carry out basic operations with fractions easily. The basic arithmetic operations of addition and subtraction require the denominators of the fractions to be equal, and when dividing one fraction by another fraction, the division is transformed into multiplication by taking the reciprocal of the second fraction. Let us learn more about the fraction formulas and solve a few examples in this section.
What is Fractions Formula?
Fractions are one of the most important aspects of arithmetic that we use in our daily life. A fraction represents a numerical value that is a part of a whole and is written using the symbol / (called the fractional line), for example, a/b. Fraction formulas help in framing the rules to be followed while we perform the four main arithmetic operations, i.e., addition, subtraction, multiplication, and division. Listed below are the fractions formulas:
A mixed fraction has a whole number and a fraction associated with it. The mixed fraction is converted into an improper fraction by multiplying the denominator with the whole number and adding it to the numerator, to form the numerator of the improper fraction.
\( A\dfrac{b}{c} = \dfrac{Ac + b}{c} \)
The addition of like fractions is possible by the simple addition of numerators and having the same denominator for the answer. The denominator of the given fractions is equal to the denominator of the final answer.
\( \frac{a}{b} +\frac{c}{b} = \frac{a + c}{b}\)
For the addition of unlike fractions, each of the fractions is multiplied with suitable constants to make the denominators of the two fractions equal. The aim is to get the denominators of the fractions as equal, before performing the addition process.
\(\frac{a}{b} +\frac{c}{d} =\frac{a .d}{b. d} +\frac{c . b}{d . b} = \frac{ad + bc}{bd}\)
Multiplication of fractions is possible by multiplying the numerators and then the denominators of both the fractions and then writing it as a single fraction. Further, this product is simplified and reduced to get the final answer.
\( \frac{a}{b} \times\frac{c}{d} = \frac{ac}{bd}\)
Division of fractions is transformed into a multiplication of fractions by first inverting the fraction in the denominator, and then multiplying it with the numerator fraction.
\(\dfrac{(a/b)}{(c/d)} = \frac{a}{b} \times \frac{d}{c}\)
Fraction Formulas
The other important formulas we use are listed as follows:
Cross multiply rule to check if two fractions are equivalent: if a/b = c/d , then ad = bc
Reciprocal rule to flip numerator and denominator: if a/b is the fraction, then b/a is its reciprocal.
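These rules can be checked quickly with Python's built-in fractions module; the short sketch below (with arbitrary example values) mirrors the addition, multiplication, division, and mixed-fraction formulas above.

```python
from fractions import Fraction

print(Fraction(4, 11) + Fraction(5, 8))          # 87/88  -> (ad + cb)/(bd)
print(Fraction(2, 3) * Fraction(5, 7))           # 10/21  -> (ac)/(bd)
print(Fraction(24, 36) / Fraction(96, 288))      # 2      -> multiply by the reciprocal
print(Fraction(6 * 5 + 2, 5))                    # 32/5   -> mixed number 6 2/5 as (Ac + b)/c
```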
Examples Using Fractions Formula
Example 1: Find the sum of the fractions \(\frac{4}{11} \) and \( \frac{5}{8} \) using fractions formula.
\(\begin{align} \frac{4}{11} + \frac{5}{8} &=\frac{4 \times 8}{11 \times 8} + \frac{5 \times 11}{8 \times 11} \\ &= \frac{32}{88} + \frac{55}{88} \\ &= \frac{32 + 55}{88} \\ &= \frac{87}{88} \end{align} \)
Therefore the sum of the fractions is \(\frac{87}{88}\).
Example 2: Find the value of \(\frac{24}{36} \div \frac{96}{288} \).
\( \begin{align} \frac{24}{36} \div \frac{96}{288} & = \dfrac{\frac{24}{36}}{ \frac{96}{288} } \\ &= \frac{24}{36} \times \frac{288}{96} \\ &= \frac{24}{36} \times \frac{36 \times 8}{24 \times 4} \\ &= 2\end{align} \)
Hence the final value is 2 using the fractions formula.
Example 3: Milk is sold at $16 per gallon. Find the cost of \(6\dfrac{2}{5}\) gallons of milk.
Cost of one gallon of milk = $16
Therefore, the cost of \(6\dfrac{2}{5}\) gallons, i.e., 32/5 gallons, will be 32/5 × 16 = $102.4
Therefore, the cost of \(6\dfrac{2}{5}\) gallons of milk is $102.40.
FAQs on Fractions Formulas
What is Fractions Formula?

The fractions formula helps in conveniently performing the numerous operations on fractions. Compared to ordinary integers, the basic arithmetic operations on fractions follow different rules. Fraction formulas help us to carry out basic operations with fractions easily. The basic arithmetic operations of addition and subtraction require the denominators of the fractions to be equal, and for the division of one fraction by another fraction, the division is transformed into multiplication by taking the reciprocal of the second fraction.
What is Addition Formula Used for Solving Fractions?
There are three different addition formulas used while solving problems on fractions; they are:
\(A\dfrac{b}{c} = \dfrac{Ac + b}{c} \)
\(\frac{a}{b} +\frac{c}{b} = \frac{a + c}{b} \)
\( \frac{a}{b} +\frac{c}{d} =\frac{a .d}{b. d} +\frac{c . b}{d . b} = \frac{ad + bc}{bd} \)
What is Multiplication Formula Used for Solving Fractions?
To multiply the two fractions, multiply the numerators, multiply the denominators. Multiplication of fractions is possible by multiplying the numerators and then the denominators of both the fractions and then writing it as a single fraction. Further, this product is simplified and reduced to get the final answer.
\( \frac{a}{b} \times\frac{c}{d} = \frac{ac}{bd} \)
What is the Division Formula Used for Solving Fractions?
To divide one fraction by another, multiply the first fraction with the reciprocal of the second fraction. Then, multiply the numerators, multiply the denominators. Division of fractions is transformed into a multiplication of fractions by first inverting the fraction in the denominator, and then multiplying it with the numerator fraction. | CommonCrawl |
Session JH: Instrumentation III
Chair: Jolie Cizewski, Rutgers University
JH.00001: Mitigation of Beta-Gamma Summing in a Planar Germanium Double-Sided Strip Detector
Nicole Larson, Sean Liddick, Christopher Prokop, Scott Suchyta, Jeromy Tompkins
Beta-decay spectroscopy experiments at fragmentation facilities are typically performed using a position-sensitive solid-state detector as a stopping medium for radioactive ion implantation. Subsequent beta decays are detected and correlated to the previously implanted ions based on position and time information. The results from these beta-decay spectroscopy experiments are pertinent to nuclear structure and astrophysics applications. To maximize the beta-decay detection efficiency, a novel planar germanium double-sided strip detector (GeDSSD) has been implemented at the National Superconducting Cyclotron Laboratory. While the GeDSSD offers a beta-decay detection efficiency that will be close to 90{\%}, the detector also has a very high efficiency for low-energy gamma rays (15.7{\%} at 250 keV, for example). This leads to a large percentage of events in which the simultaneous energy depositions from the beta decay and the gamma ray sum together in the GeDSSD. In order to mitigate the beta-gamma summing effects and recover the high gamma-ray detection efficiency, an algorithm has been developed in an attempt to separate the energy deposition of beta-decay electrons from that of gamma rays. Results of the algorithm in both GEANT4 simulation and experimental data will be presented.
JH.00002: Commissioning of a new decay-detection array/tape transport station for CARIBU
A.J. Mitchell, C.J. Lister, P. Chowdhury, A.Y. Deo, J.A. Clark, B. Digiovine, M.P. Carpenter, G. Savard, D. Seweryniak, S. Zhu, E.A. McCutchan, S.L. Tabor, R. Dungan
The CARIBU facility [1] at Argonne National Laboratory provides a unique opportunity for research in nuclear structure, nuclear astrophysics, and applied research. A new decay-detection array for performing $\beta $-$\gamma $ coincidence measurements is being commissioned for use with exotic stopped beams. The new array consists of the existing ``X-array,'' with five HPGe detectors for detection of $\gamma $ rays, and a plastic scintillator for $\beta $-particle detection. Two operational modes are possible: ``Mode 1'' utilizes a stand-alone scintillator chamber; ``Mode 2'' incorporates a tape transport system into a modified chamber, offering significant removal of the contamination that would otherwise result from the subsequent decay chain. The design of the tape station has been adopted from a prototype diagnostic system currently installed at CARIBU. Here, a general overview of the apparatus, commissioning runs, and analysis of data collected whilst operating in both modes will be discussed. \\[4pt] [1] G. Savard \textit{et al.}, Nuclear Instruments and Methods in Physics Research B 266 (2008) 4086--4091
JH.00003: Measuring $(d,p\gamma)$ Gamma Decays with Apollo at HELIOS
Aaron Couture, Matthew J. Devlin, Hye Young Lee, John M. O'Donnell, Birger Back, Calem R. Hoffman
The role of neutron capture reactions is critical for nucleosynthesis processes far from stability. Unfortunately, due to the radioactive nature of the target isotopes of interest and the difficulty in producing a neutron target, these reactions will never be amenable to direct measurement. Further, for most astrophysical environments favored for the $r$-process, the required reaction networks are so large as to make direct experimental treatment of all of the reactions of interest beyond the range of what is feasible. Neutron transfer reactions, such as $(d,p)$, combined with intense beams of radioactive ions can help to elucidate the nuclear physics at play. The HELIOS instrument at Argonne National Laboratory has been successfully used to study a range of reactions in inverse kinematics. To complement this effort, we have designed a scintillator array, APOLLO, to be used in conjunction with HELIOS to measure gamma-decay properties following neutron transfer. The design faced challenges related to operation under vacuum and in the 3~T field of HELIOS. The first measurements with this new instrument, including efficiency, resolution, and coincidence efficiency, will be discussed.
JH.00004: Coupling the ORRUBA and Gammasphere Arrays
S.D. Pain, A. Ratkiewicz, S. Burcher, Ian Marsh, J.A. Cizewski, S. Hardy, M.E. Howard, S. Ota, C. Shand, K.L. Jones, W.A. Peters, D.W. Bardayan, M. Matos, M.P. Carpenter, D. Seweryniak, S. Zhu, C.J. Lister, R.L. Kozub, J.C. Blackmon
The measurement of transfer reactions in inverse kinematics using heavy beams poses a number of experimental challenges. Even for nuclei in close proximity to double shell closures, the fragmentation of single-particle strength can result in relatively complex spectra with level spacings as low as tens of keV. Measurement of de-excitation gamma rays in coincidence with the charged reaction products can aid significantly in resolving the states populated, and can provide constraints on numerous other properties, such as spin-parities, branching ratios, and lifetimes of levels. The ORRUBA detector array, with extended angular coverage, is being coupled to Gammasphere in order to facilitate such measurements. The motivation, details, and current status of the coupled arrays will be presented. *This work is supported in part by the U.S. Department of Energy and the National Science Foundation.
JH.00005: SIPT---An Ultrasensitive Mass Spectrometer for Rare Isotopes
Samuel J. Novario, Georg Bollen, David L. Lincoln, Adrian A. Valverde, Ryan Ringle, Stefan Schwarz, Matthew Redshaw
Over the last few decades, advances in radioactive beam facilities like the Coupled Cyclotron Facility at the National Superconducting Cyclotron Laboratory (NSCL) at Michigan State University (MSU) have made short-lived, rare-isotope beams available for study in various science areas, and new facilities, like the Facility for Rare Isotope Beams (FRIB) under construction at MSU, will provide even more exotic rare isotopes. The determination of the masses of these rare isotopes is of utmost importance since it provides a direct measurement of the binding energy of the nucleons in the atomic nucleus. For this purpose we are currently developing a dedicated Single-Ion Penning Trap (SIPT) mass spectrometer at NSCL to handle the specific challenges posed by rare isotopes. These challenges, which include short half-lives and extremely low production rates, are dealt with by employing the narrowband FT-ICR detection method under cryogenic conditions. Used in concert with the 9.4-T time-of-flight mass spectrometer, the 7-T SIPT system will ensure that the LEBIT mass measurement program at MSU will make optimal use of the wide range of rare isotope beams provided by the future FRIB facility.
JH.00006: Testing and Characterization of the JENSA Gas Jet Target
K.A. Chipps
Next generation radioactive ion beam facilities are being planned and built across the globe, and with them an incredible new array of exotic isotopes will be available for study. To keep pace with the state of nuclear physics research, both new detector systems and new target systems are needed. The Jet Experiments in Nuclear Structure and Astrophysics (JENSA) gas jet target is one of these new target systems, designed to provide a target of light gas that is localized, dense, and pure. The JENSA system involves nearly two dozen pumps, a custom-built industrial compressor, and vacuum chambers designed to incorporate large arrays of both charged-particle and gamma-ray detectors. The JENSA gas jet target was originally constructed at Oak Ridge National Laboratory for testing and characterization, and will move to the ReA3 reaccelerated beam hall at the National Superconducting Cyclotron Laboratory (NSCL) for further characterization, optimization, and use. JENSA will form the main target for the proposed SEparator for CApture Reactions (SECAR), and together the two comprise the focus of the low energy experimental nuclear astrophysics community in the United States. Data on gas flow and jet characteristics of the current JENSA target system will be presented.
JH.00007: Developments in precision mass measurements of short-lived r-process nuclei with CARIBU
S.T. Marley, A. Aprahamian, M. Mumpower, A. Nystrom, N. Paul, K. Siegl, S. Strauss, R. Surman, J.A. Clark, A. Perez Galvan, G. Savard, G. Morgan, R. Orford
The confluence of new radioactive beam facilities and modern precision mass spectrometry techniques now makes it possible to measure masses of many neutron-rich nuclei important to nuclear structure and astrophysics. A recent mass sensitivity study (S. Brett \emph{et al.}, Eur. Phys. J. A 48, 184 (2012)) identified the nuclear masses that are the most influential to the final rapid-neutron capture process abundance distributions under various astrophysical scenarios. This work motivated a campaign of precision mass measurements using the Canadian Penning Trap (CPT) installed at the Californium Rare Isotope Breeder Upgrade (CARIBU) facility at Argonne National Laboratory. In order to measure the weakest and most short-lived (t$_{\frac{1}{2}}<$ 150 ms) of these influential nuclei, a series of upgrades to the CARIBU and CPT systems have been developed. The implementation of these upgrades, the $r$-process mass measurements, and the status of the CARIBU facility will be discussed.
JH.00008: Optimizing VANDLE for Decay Spectroscopy
N.T. Brewer, S.Z. Taylor, R. Grzywacz, M. Madurga, S.V. Paulauskas, J.A. Cizewski, W.A. Peters
Understanding the decay properties of neutron-rich isotopes has well-established importance to the path of the r-process [1] and to the total decay heat for reactor physics [2]. Specifically, the half-life, branching ratio and spectra for $\beta$-n decay are of particular interest. With that in mind, we have continued attempts to improve upon the Versatile Array of Neutron Detectors at Low Energy (VANDLE) in terms of efficiency and TOF resolution through the use of new and larger scintillators. Details of the new implementation, design and characterization of the array will be shown and compared to previous results. \\[4pt] [1] M. Madurga \textit{et al.}, Phys. Rev. Lett. {\bf 109}, 112501 (2012)\\[0pt] [2] K. P. Rykaczewski, Physics {\bf 3}, 94 (2010)
JH.00009: Experimental Results from Oak Ridge Isomer Spectrometer and Separator (ORISS)
A. Piechaczek, J.C. Batchelder, H.K. Carter, R.E. Goans, S. Liu, V. Shchepunov, E.F. Zganjar
ORISS is a linear multi-reflection time-of-flight mass analyzer developed by the University Radioactive Ion Beam Consortium. It will be used to separate any isobar and many isomers for decay spectroscopy experiments. The entire system's operation was demonstrated with a less-than-ideal multi-isotopic ion source and achieved a mass resolving power as high as 430,000. To better characterize the system we have installed a monoisotopic $^{133}$Cs ion source. The radiofrequency quadrupole ion cooler and buncher, which serves as the ion injector into ORISS, was tested in stand-alone mode and achieved a longitudinal emittance of 22 $\pi $ eV $\times$ ns and transmission \textgreater 40{\%}. These very good results confirm our expectation that ORISS can achieve the design goals. Using the improved ion source, we expect to demonstrate the complete system's design goals of 400,000 mass resolving power and 50{\%} transmission very soon.
JH.00010: Status of the Beam Thermalization Area at the NSCL
Kortney Cooper, Bradley Barquest, David Morrissey, Jose Alberto Rodriguez, Stefan Schwarz, Chandana Sumithrarachchi, Jeff Kwarsick, Guy Savard
Beam thermalization is a necessary process for the production of low-energy ion beams at projectile fragmentation facilities. Present beam thermalization techniques rely on passing high-energy ion beams through solid degraders followed by a gas cell where the remaining kinetic energy is dissipated through collisions with buffer gas atoms. Recently, the National Superconducting Cyclotron Laboratory (NSCL) upgraded its thermalization area with the implementation of new large acceptance beam lines and~a large RF-gas catcher constructed by Argonne National Lab (ANL). Two high-energy beam lines were commissioned~along with the installation and commissioning of this new device in late~2012. Low-energy radioactive ion beams have been successfully delivered to the Electron Beam Ion Trap (EBIT) charge breeder for the ReA3 reaccelerator, the SuN detector, the Low Energy Beam Ion Trap (LEBIT) Penning trap, and~the Beam Cooler and Laser Spectroscopy (BeCoLa) collinear laser beamline. Construction of a gas-filled reverse cyclotron dubbed the CycStopper is also underway. The status of the beam thermalization area will be presented and the overall efficiency of the system will be~discussed.
Stochastic Backpropagation through Mixture Density Distributions
Alex Graves (2016)
Keywords: cs.NE
Abstract: The ability to backpropagate stochastic gradients through continuous latent distributions has been crucial to the emergence of variational autoencoders and stochastic gradient variational Bayes. The key ingredient is an unbiased and low-variance way of estimating gradients with respect to distribution parameters from gradients evaluated at distribution samples. The "reparameterization trick" provides a class of transforms yielding such estimators for many continuous distributions, including the Gaussian and other members of the location-scale family. However the trick does not readily extend to mixture density models, due to the difficulty of reparameterizing the discrete distribution over mixture weights. This report describes an alternative transform, applicable to any continuous multivariate distribution with a differentiable density function from which samples can be drawn, and uses it to derive an unbiased estimator for mixture density weight derivatives. Combined with the reparameterization trick applied to the individual mixture components, this estimator makes it straightforward to train variational autoencoders with mixture-distributed latent variables, or to perform stochastic variational inference with a mixture density variational posterior.
Summary by Hugo Larochelle
This paper derives an algorithm for passing gradients through a sample from a mixture of Gaussians. While the reparameterization trick allows one to get the gradients with respect to the Gaussian means and covariances, the same trick cannot be invoked for the mixing proportion parameters (essentially because they are the parameters of a multinomial discrete distribution over the Gaussian components, and the reparameterization trick doesn't extend to discrete distributions).
One can think of the derivation as proceeding in 3 steps:
1. Deriving an estimator for gradients of a sample from a 1-dimensional density $f(x)$ that is such that $f(x)$ is differentiable and its cumulative distribution function (CDF) $F(x)$ is tractable (a small numerical check of this estimator is sketched just after this list):
$\frac{\partial \hat{x}}{\partial \theta} = - \frac{1}{f(\hat{x})}\int_{t=-\infty}^{\hat{x}} \frac{\partial f(t)}{\partial \theta} dt$
where $\hat{x}$ is a sample from density $f(x)$ and $\theta$ is any parameter of $f(x)$ (the above is a simplified version of Equation 6). This is probably the most important result of the paper, and is based on a really clever use of the general form of the Leibniz integral rule.
2. Noticing that one can sample from a $D$-dimensional Gaussian mixture by decomposing it with the product rule $f({\bf x}) = \prod_{d=1}^D f(x_d|{\bf x}_{<d})$ and using ancestral sampling, where each $f(x_d|{\bf x}_{<d})$ are themselves 1-dimensional mixtures (i.e. with differentiable densities and tractable CDFs)
3. Using the 1-dimensional gradient estimator (of Equation 6) and the chain rule to backpropagate through the ancestral sampling procedure. This requires computing the integral in the expression for $\frac{\partial \hat{x}}{\partial \theta}$ above, where $f(x)$ is one of the 1D conditional Gaussian mixtures and $\theta$ is a mixing proportion parameter $\pi_j$. As it turns out, this integral has an analytical form (see Equation 22).
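To make step 1 concrete, below is a small self-contained numerical check (my own sketch, not code from the paper): for a two-component 1D Gaussian mixture it compares a finite-difference estimate of $\frac{\partial \hat{x}}{\partial \pi_j}$, obtained by re-inverting the mixture CDF at a fixed uniform draw, against the closed form $-\Phi_j(\hat{x})/f(\hat{x})$ that the displayed formula above gives when $\theta$ is a mixing weight (the integral of $\partial f/\partial \pi_j$ is just the $j$-th component CDF).

```python
# Numerical check of the 1D result above (a sketch of mine, not code from the paper).
# For f(x) = sum_j pi_j N(x; mu_j, sigma_j), sample x_hat by inverse-CDF with a fixed
# uniform u, and compare a finite-difference d x_hat / d pi_0 with -Phi_0(x_hat)/f(x_hat).
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

mu = np.array([-1.0, 2.0])
sigma = np.array([0.5, 1.0])


def pdf(x, pi):
    # Mixture density f(x); pi is treated as a free (unnormalised) parameter vector.
    return np.sum(pi * norm.pdf(x, mu, sigma))


def cdf(x, pi):
    # Mixture CDF F(x).
    return np.sum(pi * norm.cdf(x, mu, sigma))


def sample(u, pi):
    # Invert the mixture CDF for a fixed uniform draw u (1D ancestral sampling step).
    return brentq(lambda x: cdf(x, pi) - u, -50.0, 50.0)


pi = np.array([0.3, 0.7])
u = 0.4
x_hat = sample(u, pi)

# Analytical derivative w.r.t. the first mixing weight: -Phi_0(x_hat) / f(x_hat).
analytic = -norm.cdf(x_hat, mu[0], sigma[0]) / pdf(x_hat, pi)

# Finite-difference derivative: perturb pi_0 only, keeping the same uniform draw u.
eps = 1e-6
numeric = (sample(u, pi + np.array([eps, 0.0])) - x_hat) / eps

print(analytic, numeric)  # the two values should agree closely
```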
**My two cents**
This is a really surprising and neat result. The author mentions it could be applicable to variational autoencoders (to support posteriors that are mixtures of Gaussians), and I'm really looking forward to reading about whether that can be successfully done in practice.
The paper provides the derivation only for mixtures of Gaussians with diagonal covariance matrices. It is mentioned that extending to non-diagonal covariances is doable. That said, ancestral sampling with non-diagonal covariances would become more computationally expensive, since the conditionals under each Gaussian involve a matrix inverse.
Beyond the case of Gaussian mixtures, Equation 6 is super interesting in itself as its application could go beyond that case. This is probably why the paper also derived a sampling-based estimator for Equation 6, in Equation 9. However, that estimator might be inefficient, since it involves sampling from Equation 10 with rejection, and it might take a lot of time to get an accepted sample if $\hat{x}$ is very small. Also, a good estimate of Equation 6 might require *multiple* samples from Equation 10.
Finally, while I couldn't find any obvious problem with the mathematical derivation, I'd be curious to see whether using the same approach to derive a gradient on one of the Gaussian mean or standard deviation parameters gave a gradient that is consistent with what the reparameterization trick provides.
Thanks for the summary. Do you know why Equation 5 (`\frac{\partial F_d (x_d|x_{<d}) }{\partial\theta} = ... = 0`) which exploits the Leibniz rule is set to zero? At first glance, if the assumption is that the PDF `f_d` depends on `\theta`, then so should the CDF `F`, and so wouldn't it generally have a non-zero partial derivative?
Good question! It's because $\hat{x}_d$ in Equation 5 was sampled as $\hat{x}_d = F^{-1}(u_d|{\bf x}_{<d})$ where $u_d\sim U(0,1)$. So $F_d(\hat{x}_d|{\bf x}_{<d}) = u_d$ and $u_d$ was sampled independently of $\theta$ (it's a uniform sample). So the derivative of $F_d(\hat{x}_d|{\bf x}_{<d})$ (i.e. of $u_d$) is 0. Hope this helps! | CommonCrawl |
Jan Trlifaj
Jan Trlifaj (born 30 December 1954) is a Professor of Mathematics at Charles University whose research interests include commutative algebra, homological algebra and representation theory.[1]
Born: December 30, 1954
Nationality: Czech
Alma mater: Charles University
Scientific career
Fields: Algebra, Representation theory
Institutions: Charles University
Thesis: Associative rings and the Whitehead property of modules (1989)
Doctoral advisor: Ladislav Bican
Career and research
Jan Trlifaj studied mathematics at the Faculty of Mathematics and Physics, Charles University, from which he received an MSc in 1979 and a PhD in 1989 under Ladislav Bican,[2] and he was appointed Professor of Mathematics in the field of algebra and number theory in 2009.
In the academic year 1994/95 he held a position as a Postdoctoral Fellow of the Royal Society at the Department of Mathematics of the University of Manchester. In Fall 1998 he received the J.W. Fulbright Scholarship at the Department of Mathematics, University of California at Irvine. During Fall 2002 and 2006 he was a visiting professor at the Centre de Recerca Matemàtica, Barcelona. Since 1990, he has completed numerous short-term visiting appointments and given over 100 invited lectures at conferences and seminars worldwide.[3]
Since 2017, he has been a Fellow of the Learned Society of the Czech Republic.[4]
He served on the organizing committee of the 18th International Conference on Representations of Algebras (ICRA 2018), held with 250 participants from 34 countries in August 2018 in Prague, Czech Republic.[5]
He was elected a Fellow of the American Mathematical Society (AMS) in 2020, for contributions to homological algebra and tilting theory for non-finitely generated modules.[6]
He serves as a member of the science board for the Neuron Prize, which is awarded to the best Czech scientists by the Neuron Endowment Fund.[7]
Selected publications
Papers
• 1994: "Every *-module is finitely generated", Journal of Algebra, 169 (2): 392–398, doi:10.1006/jabr.1994.1291
• 1996: Trlifaj, Jan (1996), "Whitehead test modules", Transactions of the American Mathematical Society, 348 (4): 1521–1554, doi:10.1090/S0002-9947-96-01494-8
• 2001: Eklof, Paul C.; Trlifaj, Jan (2001), "How to make Ext vanish", Bulletin of the London Mathematical Society, 33 (1): 41–51, doi:10.1112/blms/33.1.41, S2CID 11541149 (with Paul C. Eklof), Shelah, Saharon; Trlifaj, Jan (2001), "Spectra of the Γ-invariant of uniform modules", Journal of Pure and Applied Algebra, 162 (2–3): 367–379, doi:10.1016/S0022-4049(00)00118-3 (with Saharon Shelah)
• 2007: Šťovíček, Jan; Trlifaj, Jan (2007), "All tilting modules are of countable type", Bulletin of the London Mathematical Society, 39 (1): 121–132, doi:10.1112/blms/bdl019 (with Jan Šťovíček)
• 2012: Herbera, Dolors; Trlifaj, Jan (2012), "Almost free modules and Mittag-Leffler conditions", Advances in Mathematics, 229 (6): 3436–3467, doi:10.1016/j.aim.2012.02.013 (with Dolors Herbera), Estrada, Sergio; Guil Asensio, Pedro A.; Prest, Mike; Trlifaj, Jan (2012), "Model category structures arising from Drinfeld vector bundles", Advances in Mathematics, 231 (3–4): 1417–1438, doi:10.1016/j.aim.2012.06.011 (with Sergio Estrada, Pedro A. Guil Asensio, and Mike Prest)
• 2014: Angeleri Hügel, Lidia; Pospíšil, David; Šťovíček, Jan; Trlifaj, Jan (2014), "Tilting, cotilting, and spectra of commutative noetherian rings", Transactions of the American Mathematical Society, 366 (7): 3487–3517, doi:10.1090/S0002-9947-2014-05904-7 (with Lidia Angeleri Huegel, David Pospíšil, and Jan Šťovíček)
• 2016: Slávik, Alexander; Trlifaj, Jan (2016), "Very flat, locally very flat, and contraadjusted modules", Journal of Pure and Applied Algebra, 220 (12): 3910–3926, arXiv:1601.00783, doi:10.1016/j.jpaa.2016.05.020, S2CID 119176440 (with Alexander Slávik)
Books
• 2006, 2012: Approximations and Endomorphism Algebras of Modules, de Gruyter Expositions in Mathematics 41, Vol. 1 - Approximations, Vol. 2 - Predictions, W. de Gruyter Berlin - Boston, xxviii + 972 pp. (with Rüdiger Göbel)
Awards and distinctions
• Prize of the Dean of MFF for the best monograph 2006[8]
• MFF UK Silver medal at the Sexagennial anniversary [9][10]
• Fellow of the American Mathematical Society, 2020[11]
References
1. Personal Profile of Prof. Jan Trlifaj at Department of Algebra, Charles University in Prague
2. Jan Trlifaj at the Mathematics Genealogy Project
3. Jan Trlifaj's Personal Page: Lectures
4. Fellows of the Learned Society: Jan Trlifaj
5. Representation Theory and Beyond, 2020
6. 2020 Class of Fellows of the AMS
7. Neuron Prize Scientific panel
8. Report from the 2nd meeting of the Scientific Board of the Faculty of Mathematics and Physics 2007
9. The list of 40 awardees, 60 years of MFF UK
10. MATFYZ – a synonym for excellence in science for 60 years
11. List of Fellows American Mathematical Society
External links
• Jan Trlifaj at the Mathematics Genealogy Project
• Personal web page
\begin{document}
\title{Unitary Transformation in Probabilistic Teleportation} \author{Xiu-Lao Tian\footnote{Corresponding author: E-mail, [email protected]},
~~Wei Zhang,~~Xiao-Qiang Xi\\
{\small School of Science, Xi'an University of Posts and
Telecommunications, Xi'an 710121, China}}
\date{\today} \maketitle
\begin{abstract} We propose a general transformation in probabilistic teleportation, based on different entanglement matching coefficients $K$ that correspond to different unitary evolutions; this provides more flexible evolution methods experimentally. Through an analysis based on Bell basis and generalized Bell basis measurements for two probabilistic teleportation schemes, we suggest a general probability of successful teleportation, which is determined not only by the entanglement degree of the transmission channel and the measurement method, but also by the unitary transformation used in the teleportation process. \end{abstract}
\indent {Keywords: probabilistic teleportation; entanglement matching;
channel parameter matrix} \par{PACS: 03.67.Hk, 03.65.Ta}
\maketitle
\section{Introduction}
Quantum entanglement is one of the most fascinating characteristics of quantum physics. A striking application of entanglement \cite{R Horodecki2009,PRA74 052105,PRA79 024303} is quantum teleportation, which plays a key role in the field of quantum communication. Since the seminal work of Bennett et al. \cite{Bennett1993}, teleportation has attracted wide research interest, and a large body of work, both theoretical and experimental, has been devoted to it \cite{Bou1997,Nielsen1998,JOP2004,PRL Y. Yeo2006,Nature Lett2008, PLA375 922,PLA375 2140}.
Up to now the teleportation has been studied in different branches, such as directly and network controlled teleportation \cite{PRA Karlsson1998,PRA F G Deng 2005,PRA X Z Man2007,OPC S G F2009}; discrete-variables and continuous-variables teleportation \cite{PRA CV Te 1998,PRA P M2006,IJQI P M2008}; prefect and probabilistic teleportation \cite{PRA W L Li2000,PLA B S Shi2000} and so on. In fact, one of the key problem of teleportation is how to construct an usefulness quantum channel, different channels will yield different results, some channels can be used to realize perfect teleportation, while some others can only enable probabilistic teleportation. Because of the inevitable interaction with its surroundings, correlations in quantum states are difficult to maintain \cite{PLA374 3520,AOP}, therefore the probabilistic teleportation \cite{CPL H Lu2001,PRA J Fang2003,CPL T Gao2003,CPL F L Yan2006,CPL W X Jiang2007} has been widely discussed in recent years.
A necessary and sufficient condition for realizing perfect teleportation and successful teleportation has been given in \cite{T1,T2,T3,T4}. Based on the Bell basis measurement, we found that if the channel parameter matrix (CPM) is unitary, then one can always realize a perfect teleportation (i.e., the successful probability $p=1$), if the CPM is invertible but not unitary, however, one can only realize a probabilistic teleportation (i.e., the successful probability $p<1$).
Motivated by the idea of Ref. \cite{PRA W L Li2000}, in which the authors introduced an auxiliary qubit to realize probabilistic teleportation, we propose several unitary transformation methods for probabilistic teleportation in this work. These methods are based on different entanglement matching coefficients $K$, which correspond to different unitary transformations. Through a detailed analysis with the Bell basis and generalized Bell basis measurements in two probabilistic teleportation processes, we suggest a general probability of successful teleportation, which is determined not only by the entanglement degree of the transmission channel and the measurement method, but also by the unitary transformation used during the teleportation process. For example, if we teleport the unknown one-qubit state $|\varphi\rangle_{1}$ via the channel state
$|\varphi\rangle_{2,3}=a|00\rangle +b|11\rangle$, then the whole probability of successful teleportation is $P= 2(Kab)^2$, where
$0<K\leq \min(\frac{1}{|a|},\frac{1}{|b|})$. Our conclusion covers and complements the results of Ref. \cite{PRA W L Li2000}. As different $K$ will give different kinds of evolution methods, one can have more flexible and selectable evolution method experimentally.
\section{Entanglement matching and probabilistic teleportation} Suppose Alice wants to send an unknown one-qubit state
$|\varphi\rangle_{1}$ to Bob \begin{eqnarray}
|\varphi\rangle_{1}=R^{i}|i\rangle
=R^{0}|0\rangle+R^{1}|1\rangle=\alpha|0\rangle+\beta|1\rangle, \end{eqnarray}
where $R^{i}R_{i}^*=|\alpha|^2+|\beta|^2=1$, with $i$ being taken to be 0 or 1 and a repeated index denotes summation.
The general two-qubit state $|\varphi\rangle_{2,3} $ used as the quantum channel can be expressed as follows \begin{eqnarray}
|\varphi\rangle_{2,3}=\frac{1}{\sqrt{2}}X^{jk}|jk\rangle=
\frac{1}{\sqrt{2}}(X^{00}|00\rangle
+X^{01}|01\rangle +X^{10}|10\rangle+X^{11}|11\rangle), \end{eqnarray} where $X^{jk}X_{jk}^*=2$. If Alice adopts the standard Bell basis measurement (BM) $\phi_{ij}^{\lambda}$ $(\lambda=1,2,3,4)$
on her particles, i.e., $\phi_{ij}^{1,2}=(|00\rangle
\pm|11\rangle)/ \sqrt{2}$ and
$\phi_{ij}^{3,4}=(|01\rangle \pm |10\rangle)/ \sqrt{2}$. Then with the denotation of Bell basis \cite{T1,T2,T3}, the total state of the system can be rewritten as \begin{eqnarray}
|\Psi\rangle_{tot}=\frac{1}{\sqrt{2}}R^{i}X^{jk}
{L}^\lambda_{ij}|\lambda k\rangle
=\frac{1}{{2}}R^{i}X^{jk}{T}^\alpha_{ij}|\alpha k\rangle
=\frac{1}{2}R^{i}\sigma^{(\lambda)k}_{i}|\alpha k\rangle, \end{eqnarray} where $\sigma_i^{(\lambda)k}=X^{jk}T_{ij}^{\lambda}=X^{jk}{\sqrt{2}} {L}^\alpha_{ij}$ is the element of \begin{eqnarray}
\sigma^{\alpha}=XT^{\alpha}=
\left(\begin{array}{cc}
\sigma_{0}^{\lambda0}&\sigma_{1}^{\lambda0}\\
\sigma_{0}^{\lambda1}&\sigma_{1}^{\lambda1}
\end{array}\right)=
\left(\begin{array}{cc}
X^{00}&X^{10}\\
X^{01}&X^{11} \end{array}\right) \left(\begin{array}{cc}
T_{00}^{\lambda}&T_{10}^{\lambda}\\
T_{01}^{\lambda}&T_{11}^{\lambda} \end{array}\right). \label{sigmat} \end{eqnarray}
After Alice's measurement, the total state will collapse to \begin{eqnarray}
|\Psi^{\alpha}\rangle_B=\frac{1}{2}R^{i}{\sigma}_i^{(\alpha)k}|k\rangle. \end{eqnarray} Obviously, based on the BM method, all the $T^\alpha$ are unitary. So if the CPM $X$ is unitary, one can always realize the perfect teleportation (i.e., the whole probability $p=1$), if $X$ is invertible but not unitary, one can only realize a probabilistic teleportation (i.e., the whole probability $p<1$).
In Ref. \cite{PRA W L Li2000}, Li et al. presented a protocol of probabilistic teleportation by introducing an auxiliary qubit state
$|0\rangle_{A}$ and performing a unitary transformation on Bob's state. They employ a partially entangled state as the quantum channel, that is \begin{eqnarray}
|\varphi\rangle_{2,3}=\frac{1}{\sqrt{2}}X^{jk}|jk\rangle =a|00\rangle
+b|11\rangle~~(a\neq b), \end{eqnarray} with $a, b$ being real numbers and $a^2+b^2=1$. The CPM $X=\sqrt{2}{\rm diag}(a,b)$ is obviously invertible but not unitary, so Bob cannot directly retrieve the state by acting $(\sigma^{\alpha})^{-1}$
on the collapsed state $|\Psi^{\alpha}\rangle_B$.
With the standard Bell basis $\phi^\lambda_{ij}~(\lambda=1,2,3,4)$, the total state of the system can be rewritten as \begin{eqnarray}
|\Psi\rangle_{tot}&=&\frac{1}{\sqrt{2}}R^{i}X^{jk}T^\lambda_{ij}|\lambda
k\rangle =\frac{1}{\sqrt{2}}R^{i}\sigma^{(\lambda)k}_{i}|\lambda
k\rangle\nonumber\\
&=&\frac{1}{\sqrt{2}}[\phi^1_{1,2}(a\alpha|0\rangle+b\beta|1\rangle)
+\phi^2_{1,2}(a\alpha|0\rangle-b\beta|1\rangle)\nonumber\\
&&+\phi^3_{1,2}(a\beta|0\rangle+b\alpha|1\rangle)
+\phi^4_{1,2}(a\beta|0\rangle-b\alpha|1\rangle)]. \end{eqnarray}
After Alice's BM $\phi^1_{12}$, Bob will get the unnormalized state as follows \begin{eqnarray}
|\Psi^1\rangle_{b}=\frac{1}{\sqrt{2}}R^{i}\sigma^{(1)k}_{i}| k\rangle
=\frac{1}{\sqrt{2}}(a\alpha|0\rangle+b\beta|1\rangle). \end{eqnarray}
In order to obtain the original state, one can introduce an auxiliary qubit state $|0\rangle_{A}$ \cite{PRA W L Li2000} on Bob's side; Bob's state can then be expressed as \begin{eqnarray}
|\Psi^1\rangle_{A,3}=\frac{1}{\sqrt{2}}R^{i}\sigma^{(1)k}_{i}|
k\rangle
=\frac{1}{\sqrt{2}}|0\rangle_{A}(a\alpha|0\rangle+
b\beta|1\rangle)_{3}, \end{eqnarray}
With the standard two-qubit basis $(|00\rangle, |01\rangle, |10\rangle,
|11\rangle)_{A,3}$, we propose a unitary transformation $U$ with parameter $K$ for the particles (A,3) \begin{eqnarray}
U=\left(\begin{array}{cccc}
Kb&0&\sqrt{1-(Kb)^2}&0\\
0&Ka&0&\sqrt{1-(Ka)^2}\\
\sqrt{1-(Kb)^2}&0&-Kb&0 \\
0&\sqrt{1-(Ka)^2}&0&-Ka\\
\end{array}\right). \end{eqnarray}
where we call $K$ the entanglement matching coefficient of Bob's evolution. To ensure that the transformation $U$ is unitary, we require $0<K\leq \min(\frac{1}{|a|},\frac{1}{|b|}) $. Different values of $K$ correspond to different unitary transformation methods.
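For completeness, the unitarity of $U$ is easily verified: each row has unit norm because $(Kb)^2+\left(\sqrt{1-(Kb)^2}\right)^2=1$ and $(Ka)^2+\left(\sqrt{1-(Ka)^2}\right)^2=1$, and the only pairs of rows with overlapping nonzero entries (rows one and three, and rows two and four) are orthogonal, since $Kb\sqrt{1-(Kb)^2}-\sqrt{1-(Kb)^2}\,Kb=0$. Both square roots are real precisely when $0<K\leq \min(\frac{1}{|a|},\frac{1}{|b|})$, which is the origin of this constraint on $K$.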
After Bob's evolution, the state $|\Psi^1\rangle_{A,3}$ turns out to be \begin{eqnarray}
|\Psi^1\rangle_{A,3}=\frac{1}{\sqrt{2}}[Kab
|0\rangle_{A}(\alpha|0\rangle+\beta|1\rangle)_{3}
+a{\sqrt{1-(Kb)^2}}\alpha|1\rangle_{A}|0\rangle_{3}
+b{\sqrt{1-(Ka)^2}}\beta|1\rangle_{A}|1\rangle_{3}]. \end{eqnarray}
Certainly, $|\Psi^1\rangle_{A,3}$ is not normalized. Now Bob performs a measurement on the auxiliary qubit $A$. If the measurement outcome is $|1\rangle_{A}$, the teleportation fails; if the measurement outcome is $|0\rangle_{A}$, the teleportation succeeds and Bob's state becomes \begin{eqnarray}
|\Psi^1\rangle_{3}=\frac{1}{\sqrt{2}}Kab(\alpha|0\rangle+\beta|1\rangle)_{3} \end{eqnarray}
Now we discuss the probability of successful teleportation, which contains both Alice's probability $P_A$ and Bob's probability $P_B$. From Eq. (7) one can obtain the probability of obtaining the Bell state $\phi^1_{1,2}$ as \begin{eqnarray}
P^1_A=\frac{1}{{2}}\left(a\alpha\langle0|+b\beta\langle1|\right)
\left(a\alpha|0\rangle+b\beta|1\rangle\right)_{3}=\frac{1}{{2}}[(a\alpha)^2+(b\beta)^2], \end{eqnarray} Similarly, the probabilities of obtaining the Bell states $\phi^2_{1,2}$, $\phi^3_{1,2}$ and $\phi^4_{1,2}$ are \begin{eqnarray} P^2_A=P^1_A=\frac{1}{{2}}[(a\alpha)^2+(b\beta)^2], P^3_A=P^4_A=\frac{1}{{2}}[(a\beta)^2+(b\alpha)^2], \end{eqnarray} If $a=b=\frac{1}{\sqrt{2}}$, then $P^1_A=P^2_A=P^3_A=P^4_A=\frac{1}{{4}}$, which is just the case of perfect teleportation.
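As a quick consistency check, these probabilities sum to one,
\[
\sum_{\lambda=1}^{4}P^{\lambda}_A=\left[(a\alpha)^2+(b\beta)^2\right]+\left[(a\beta)^2+(b\alpha)^2\right]=(a^2+b^2)\left(\alpha^2+\beta^2\right)=1,
\]
as required for a complete Bell measurement.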
Now we compute the probability that Bob obtains the original state from the state $|\Psi^1\rangle_{A,3}$. The normalized state corresponding to the state in Eq. (11) is \begin{eqnarray}
|\Psi^1\rangle_{(A,3)norm}=\frac{1}{\sqrt{a^2\alpha^2+b^2\beta^2}}|\Psi^1\rangle_{A,3}. \end{eqnarray}
After Bob's successful measurement on $ |0\rangle_{A}$, the probability $P^1_B$ of obtaining the original state from the state $|\Psi^1\rangle_{A,3}$ is \begin{eqnarray} P^1_B=\frac{(Kab)^2}{{(a\alpha)^2+(b\beta)^2}}. \end{eqnarray}
We consider Alice's measurement and Bob's different operations in the teleportation process. For each of Alice's measurement outcomes and Bob's corresponding operation, the probability of obtaining the initial state is \begin{eqnarray} P^1_{AB}=P^1_AP^1_B=\frac{1}{2}(Kab)^2 \end{eqnarray} where $P^1_{AB}$ is just the square of the coefficient of the state
$|\Psi^1\rangle_{3}$ in Eq. (12).
Summing all the contributions of $ P^1_{AB}=P^2_{AB}=P^3_{AB}=P^4_{AB}=\frac{1}{2}(Kab)^2$, we obtain the whole probability of successful teleportation as \begin{eqnarray} P=\sum_{i=1}^4 P^i_{AB}=2(Kab)^2 \end{eqnarray}
where $0<K\leq \min(\frac{1}{|a|},\frac{1}{|b|}) $.
If $a>b $, we take $K=K_{max}=\frac{1}{a}$, then the optimal probability is $ P=2b^2$. When $a=b=\frac{1}{\sqrt{2}},$ then $P_{max}=1$, which is just the case for prefect teleportation, and for this special case $U$ is the same as that in Ref. \cite{PRA W L Li2000}, i.e. \begin{eqnarray} U=\left(\begin{array}{cccc}
b/a&0&\sqrt{1-(b/a)^2}&0\\
0&1&0&0\\
\sqrt{1-(b/a)^2}&0&-b/a&0 \\
0&0&0&-1\\ \end{array}\right). \end{eqnarray}
For different matching coefficients $K$, one can adopt different kinds of unitary transformations. For example, when $K=1$ the whole probability of successful teleportation is $P=2(ab)^2$ and our unitary transformation matrix turns out to be \begin{eqnarray} U=\left(\begin{array}{cccc} b&0&a&0\\ 0&a&0&b\\ a&0&-b&0 \\ 0&b&0&-a\\ \end{array}\right). \end{eqnarray}
Next we discuss some differences between the probability $P=2b^2$ and $P=2(Kab)^2$ for the two kinds of unitary transformation. For arbitrary $a^2+b^2=1$ and $a\neq b$, we always have $P=2b^2>2(ab)^2$, so Eq. (19) is the unitary transformation for obtaining the optimal probability. However, the condition $a^2+b^2=1$ and $a>b$ yields $b^2<1/2$, and therefore one can only obtain a probability $P<1$. When $K=1$ we have $P=2(ab)^2$, and the normalization condition $a^2+b^2=1$ gives rise to $P_{max}=1/2$ with $a^2=b^2=1/2$; thus one can only attain the probability $P=2(ab)^2<1/2$ for $a\neq b$.
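As a concrete numerical illustration (the specific values are chosen only as an example), take $a^2=0.8$ and $b^2=0.2$, so that $a>b$. Then $K_{max}=1/|a|\approx 1.118$ and the optimal probability is $P=2b^2=0.4$, whereas the simpler choice $K=1$ gives
\[
P=2(ab)^2=2\times 0.8\times 0.2=0.32.
\]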
Because $0<K\leq \min(\frac{1}{|a|},\frac{1}{|b|})$, when $1\leq K\leq 2$ (here $a>b$ and $a\sim b$), then $ P=4(ab)^2\sim 2b^2$. In this case there is little difference between the probability $P=2b^2$ and $P=4(ab)^2$ for the two kinds of unitary transformation (see Fig. 1). \begin{center} \epsfxsize 90mm \epsfysize 40mm \epsffile{Graph1.eps}
{\footnotesize Fig. 1. Comparison of the probability with different channel parameters.} \end{center}
Different matching coefficients $K$ correspond to different unitary transformations. Although $P=2b^2>2(Kab)^2$, it provides one with more flexible selection for probabilistic teleportation.
\section{Probabilistic teleportation with generalized BM}
Considering now Alice makes a generalized Bell basis measurement (GBM), these are \begin{eqnarray}
\phi^1_{1,2}=a'|00\rangle+b'|11\rangle,~~
\phi^2_{1,2}=b'|00\rangle-a'|11\rangle,~~
\phi^3_{1,2}=a'|01\rangle+b'|10\rangle,~~
\phi^4_{1,2}=b'|01\rangle-a'|10\rangle. \end{eqnarray} where $a'^{2}+b'^2=1$ and $a'\neq b'.$ The transformation matrix ${T}$
between the generalized Bell basis and the computational basis $\{|00\rangle, |01\rangle, |10\rangle, |11\rangle\}$ is \begin{eqnarray} {T}= \left(\begin{array}{cccc} a'&0&0&b' \\ b'&0&0&-a' \\ 0&a'&b'&0 \\ 0&b'&-a'&0
\end{array}\right).
\end{eqnarray}
Let us reconsider the aforementioned one-qubit teleportation. Under the generalized Bell basis, the total state of the system is \begin{eqnarray}
|\Psi\rangle_{tot}=\frac{1}{\sqrt{2}}R^{i}X^{jk}T^\lambda_{ij}|\lambda k\rangle =\frac{1}{2}R^{i}\sigma^{(\lambda)k}_{i}|\lambda k\rangle. \end{eqnarray}
After Alice's GBM $ \phi^\lambda_{12}$, Bob will get the corresponding
unnormalized state as follows \begin{eqnarray}
&&|\Psi^1\rangle_B=(aa'\alpha|0\rangle+bb'\beta|1\rangle),~~
|\Psi^2\rangle_B=(ab'\alpha|0\rangle-ba'\beta|1\rangle),\nonumber\\
&&|\Psi^3\rangle_B=(aa'\beta|0\rangle+bb'\alpha|1\rangle),~~
|\Psi^4\rangle_B=(ab'\beta|0\rangle-ba'\alpha|1\rangle). \end{eqnarray}
When Bob introduces an auxiliary qubit state $|0\rangle_{A}$ for
$|\Psi^\alpha\rangle_B$ and applies a unitary transformation
$U_1$ to the state $|\Psi^{1}\rangle_{A,3}$, with \begin{eqnarray} U_1=\left(\begin{array}{cccc} Kbb'&0&\sqrt{1-(Kbb')^2}&0\\ 0&Kaa'&0&\sqrt{1-(Kaa')^2}\\ \sqrt{1-(Kbb')^2}&0&-Kbb'&0 \\ 0&\sqrt{1-(Kaa')^2}&0&-Kaa'\\ \end{array}\right), \end{eqnarray}
where $0<K\leq \min(\frac{1}{|aa'|},\frac{1}{|bb'|})$, the state
$|\Psi^1\rangle_{A,3}$ becomes \begin{eqnarray}
|\Psi^1\rangle_{A,3}&=&Kaba'b'|0\rangle_{A}(\alpha|0\rangle+\beta|1\rangle)_{3}\nonumber\\
&&+aa'{\sqrt{1-(Kbb')^2}}\alpha|1\rangle_{A}|0\rangle_{3}
+bb'{\sqrt{1-(Kaa')^2}}\beta|1\rangle_{A}|1\rangle_{3}. \end{eqnarray}
If Bob's measurement outcome is $|0\rangle_{A}$, the teleportation is successfully implemented.
Considering Alice's Bell basis measurement $\phi^1$ and Bob's evolution on $|\Psi^1\rangle_{A,3}$, the probability of successful teleportation is \begin{eqnarray} P^{1}_{AB}=(Kaba'b')^2. \end{eqnarray} Similarly, after Bob's $U_2$, $U_3$ and $U_4$ transformations on the corresponding states and measurement of the auxiliary qubit, the probabilities of successful teleportation are, respectively, \begin{eqnarray} P^2_{AB}=P^3_{AB}=P^4_{AB}=(Kaba'b')^2. \end{eqnarray} Thus the whole probability of successful teleportation is \begin{eqnarray}
P=4(Kaba'b')^2. \end{eqnarray}
Next we discuss the optimal probability of successful teleportation by entanglement matching for the following two cases:
(1) $|a|\geq|a'|\geq|b'|\geq|b|$. For this case, $|aa'|\geq|bb'|$,
$|ab'|\geq|ba'|$, so we take $K=\frac{1}{|aa'|}$ in $U_1$ and $U_3$,
$K=\frac{1}{|ab'|}$ in $U_2$ and $U_4$, for which one can obtain
$P^1_{AB}=P^3_{AB}=|bb'|^2$, $P^2_{AB}=P^4_{AB}=|ba'|^2$. The optimal whole probability is \begin{eqnarray}
P=\sum p_i=|bb'|^2+|ba'|^2+|bb'|^2+|ba'|^2=2|b|^2. \end{eqnarray}
(2) $|a'|\geq|a|\geq|b|\geq|b'|$. In this case, $|aa'|\geq|bb'|$,
$|ba'|\geq|ab'|$, so we take $K_1=\frac{1}{|aa'|}$, $K_2=\frac{1}{|ba'|}$, for which we have $P^1_{AB}=P^3_{AB}=|bb'|^2$,
$P^2_{AB}=P^4_{AB}=|ab'|^2$. The optimal whole probability is \begin{eqnarray}
P=\sum p_i=2|b'|^2. \end{eqnarray}
From the above analysis, one can see that the optimal probability of successful teleportation is determined by the smaller value of $|b|$
and $|b'|$, i.e., the optimal probability is determined by the entanglement degree of Alice's measurement or of the quantum channel. However, for $K<K_{max}=\min(\frac{1}{|aa'|},\frac{1}{|bb'|})$, the whole probability of successful teleportation is $P=4(Kaba'b')^2$. Therefore, the general probability of successful teleportation is not only determined by the channel and the measurement, but is also related to the unitary transformation used during the teleportation process.
\section{Conclusion} The unavoidable influence of the environment always induces degradation of quantum correlations, and therefore the study of probabilistic teleportation is significant for quantum information processing. In this paper, we generalized the protocol of probabilistic teleportation by introducing an auxiliary qubit and a family of unitary transformation methods. Moreover, through an analysis based on the Bell basis and generalized Bell basis measurements in two probabilistic teleportation schemes, we suggested a general probability of successful teleportation, which is determined not only by the entanglement degree of the transmission channel and the measurement method, but also by the unitary transformation used in the teleportation process, i.e., $P=2(Kab)^2$. Although $P=2(Kab)^2<2b^2$ (the optimal $U$ transformation), in experiments it is more important to realize successful teleportation. Since different entanglement matching coefficients $K$ give different $U$ evolution methods, one has a more flexible and selectable evolution method experimentally. \\
\textbf{Acknowledgments}\\ \indent This work was supported in part by the National Natural Science Foundation of China under Grant No. 10902083, the Natural Science Foundation of Shaanxi Province under Grant Nos. 2009JQ8006, 2009JM6001 and 2010JM1011.
\end{document}
\begin{document}
\begin{titlepage} \title{A matrix-based method of moments for fitting multivariate network meta-analysis models with multiple outcomes and random inconsistency effects}
\begin{abstract} Random-effects meta-analyses are very commonly used in medical statistics. Recent methodological developments include multivariate (multiple outcomes) and network (multiple treatments) meta-analysis. Here we provide a new model and corresponding estimation procedure for multivariate network meta-analysis, so that multiple outcomes and treatments can be included in a single analysis. Our new multivariate model is a direct extension of a univariate model for network meta-analysis that has recently been proposed. We allow two types of unknown variance parameters in our model, which represent between-study heterogeneity and inconsistency. Inconsistency arises when different forms of direct and indirect evidence are not in agreement, even having taken between-study heterogeneity into account. However the consistency assumption is often assumed in practice and so we also explain how to fit a reduced model which makes this assumption. Our estimation method extends several other commonly used methods for meta-analysis, including the method proposed by DerSimonian and Laird (1986). We investigate the use of our proposed methods in the context of a real example. \end{abstract} \end{titlepage}
\label{firstpage}
\section{Introduction} Meta-analysis, the statistical process of pooling the results from separate studies, is commonly used in medical statistics and now requires little introduction. The univariate random-effects model is often used for this purpose. This model has recently been extended to the multivariate (multiple outcomes; Jackson {\it et al.}, 2011) and network (multiple treatments; Lu and Ades, 2004) meta-analysis settings. In a network meta-analysis, more than two treatments are included in the same analysis. The main advantage of network meta-analysis is that, by using indirect information contained in the network, more precise and coherent inference is possible, especially when direct evidence for particular treatment comparisons is limited. Here we describe a new model that extends the random-effects modelling framework to the multivariate network meta-analysis setting, so that both multiple outcomes {\it and} multiple treatments may be included in the same analysis.
Other multivariate extensions of univariate methods for network meta-analysis have previously been proposed. For example, Achana {\it et al}. (2014) analyse multiple correlated outcomes in multi-arm studies in public health. Efthimiou {\it et al}. (2014) propose a model for the joint modelling of odds ratios on multiple endpoints. Efthimiou {\it et al}. (2015) develop another model that is a network extension of an alternative multivariate meta-analytic model that was originally proposed by Riley {\it et al}. (2008). A network meta-analysis of multiple outcomes with individual patient data has also been proposed by Hong {\it et al}. (2015) under both contrast-based and arm-based parameterizations, and Hong {\it et al}. (2016) develop a Bayesian framework for multivariate network meta-analysis. These multivariate network meta-analysis models are based on the assumption of consistency in the network, extending the approach introduced by Lu and Ades (2004). In contrast to these previously developed methods, the method proposed here relaxes the consistency assumption. This assumption is sometimes found to be false across the entire network (Veroniki {\it et al.}, 2013). We model the inconsistency using a design-by-treatment interaction, so that different forms of direct and indirect evidence may not agree, even after taking between-study heterogeneity into account. However, we assume that the design-by-treatment interaction terms follow normal distributions, and so conceptualise inconsistency as another source of random variation. This allows us to achieve the dual aim of estimating meaningful treatment effects whilst also allowing for inconsistency in the network.
Although we allow inconsistency in the network, we propose a relatively simple model. Our preference for a simple model is because the between-study covariance structure is typically hard to identify accurately in multivariate meta-analyses (Jackson {\it et al.}, 2011) and also because network meta-analysis datasets are usually small (Nikolakopoulou {\it et al.}, 2014). The new model that we propose for multivariate network meta-analysis is a direct generalisation of the univariate network meta-analysis model proposed by Jackson {\it et al.} (2016), which is a particular form of the design-by-treatment interaction model (Higgins {\it et al.}, 2012). In addition to proposing a new model for multivariate network meta-analysis, we also develop a corresponding new estimation method. This estimation method is based on the method of moments and extends a wide variety of related methods. In particular, we extend the estimation method described by DerSimonian and Laird (1986) by directly extending the matrix based extension of DerSimonian and Laird's estimation method for multivariate meta-analysis (Chen {\it et al.}, 2012; Jackson {\it et al.}, 2013). We adopt the usual two-stage approach to meta-analysis, where the estimated study-specific treatment effects (including the within-study covariance matrices) are computed in the first stage. We give some information about how this first stage is performed but our focus is the second stage, where the meta-analysis model is fitted.
The paper is set out as follows. In section 2, we briefly describe the univariate model for network meta-analysis to motivate our new multivariate network meta-analysis model in section 3. We present our new estimation method in section 4 and we apply our methods to a real dataset in section 5. We conclude with a short discussion in section 6.
\section{A univariate network meta-analysis model} Here we describe our univariate modelling framework for network meta-analysis (Jackson {\it et al.}, 2016; Law {\it et al.}, 2016). Without loss of generality, we take treatment A as the reference treatment for the network meta-analysis. The other treatments are B, C, {\it etc.} We take the design $d$ as referring only to the set of treatments compared in a study. For example, if the first design compares treatments A and B only, then $d=1$ refers to two-arm studies that compare these two treatments. We define $t$ to be the total number of treatments included in the network, and $t_d$ to be the number of treatments included in design $d$. We define $D$ to be the number of different designs, $N_d$ to be the number of studies of design $d$, and $N = \sum\limits_{d=1}^{D} N_d$ to be the total number of studies. We will use the word `contrast' to refer to a particular treatment comparison or effect in a particular study, for example the `AB contrast' in the first study.
We model the estimated relative treatment effects, rather than the average outcomes in each arm, and so perform contrast based analyses. We define ${\bm Y}_{di}$ to be the $c_d \times 1$ column vector of estimated relative treatment effects from the $i$th study of design $d$, where $c_d=t_d-1$. We define $n_d = N_d c_d $ to be the total number of estimated treatment effects that design $d$ contributes to the analysis, and $n = \sum\limits_{d=1}^{D} n_d$ to be the total number of estimated treatment effects that contribute to the analysis. To specify the outcome data ${\bm Y}_{di}$, we choose a baseline treatment in each design $d$. The entries of ${\bm Y}_{di}$ are then the estimated effects of the other $c_d$ treatments included in design $d$ relative to this baseline treatment. For example, if we take $d=2$ to indicate the `CDE design' then $c_2=2$. Taking C as the baseline treatment for this design, the two entries of the ${\bm Y}_{2i}$ vectors are the estimated relative effects of treatment D compared to C and of treatment E compared to C. For example, the entries of the ${\bm Y}_{di}$ could be estimated log-odds ratios or mean differences.
We use normal approximations for the within-study distributions. We define ${\bm S}_{di}$ to be the $c_d \times c_d$ within-study covariance matrix corresponding to ${\bm Y}_{di}$. We treat all ${\bm S}_{di}$ as fixed and known in analysis. Ignoring the uncertainty in the ${\bm S}_{di}$ is acceptable provided that the studies are reasonably large and is conventional in meta-analysis, but this approximation is motivated by pragmatic considerations because this greatly simplifies the modelling. We do not impose any constraints on the form of ${\bm S}_{di}$ other than they must be valid covariance matrices. The lead diagonal entries of the ${\bm S}_{di}$ are within-study variances that can be calculated using standard methods. Assuming that the studies are composed of independent samples for each treatment, the other entries of the ${\bm S}_{di}$ are calculated as the variance of the average outcome (for example the log odds or the sample mean) of the baseline treatment.
We define $\delta_1^{AB}$, $\delta_1^{AC}$, $\cdots$, $\delta_1^{AZ}$, where Z is the final treatment in the network, to be treatment effects relative to the reference treatment A, and call them basic parameters (Lu and Ades, 2006). We use the subscript 1 when defining the basic parameters to emphasise that they are treatment effects for the first (and in this section, only) outcome. We define $c=t-1$ to be the number of basic parameters in the univariate setting. Treatment effects not involving A can be obtained as linear combinations of the basic parameters and are referred to as functional parameters (Lu and Ades, 2006). For example the average treatment effect of treatment E to treatment C, $\delta_1^{CE} = \delta_1^{AE} - \delta_1^{AC}$, is a functional parameter. We define the $c \times 1$ column vector ${\bm \delta}=(\delta_1^{AB}, \delta_1^{AC}, \cdots, \delta_1^{AZ})^T$ and design specific $c_d \times c$ design matrices ${\mathbf Z}_{(d)}$. We use the subscript $(d)$ in these design matrices to emphasise that they apply to each individual study of design $d$; we reserve the subscript $d$ for design matrices that describe regression models for all outcome data from this design. If the $i$th entry of the ${\mathbf Y}_{di}$ are estimated treatment effects of treatment J relative to the reference treatment A then the $i$th row of ${\mathbf Z}_{(d)}$ contains a single nonzero entry: 1 in the $(j-1)$th column, where $j$ is the position of $J$ in the alphabet. If instead the $i$th entry of the ${\mathbf Y}_{di}$ are estimated treatment effects of treatment $J$ relative to treatment $K$, $K \ne A$, then the $i$th row of ${\mathbf Z}_{(d)}$ contains two nonzero entries: 1 in the $(j-1)$th column and -1 in the $(k-1)$th column.
Our univariate model for network meta-analysis is \begin{equation} \label{nine} {\bm Y}_{di} = {\mathbf Z}_{(d)} {\bm \delta} + {\bm \Theta}_{di} + {\bm \Omega}_{d} + {\bm \epsilon}_{di} \end{equation} where ${\bm \Theta}_{di} \sim N( {\bf 0}, \tau^2_\beta \mathbf{P}_{c_{d}})$, ${\bm \Omega}_{d} \sim N( {\bf 0}, \tau^2_\omega \mathbf{P}_{c_{d}})$, ${\bm \epsilon}_{di} \sim N( {\bf 0}, \mathbf{S}_{di})$, all ${\bm \Theta}_{di}$, ${\bm \Omega}_{di}$ and ${\bm \epsilon}_{di}$ are independent, and ${\bf P}_{c_{d}}$ is the $c_{d} \times c_{d}$ matrix with ones on the leading diagonal and halves elsewhere. We refer to $\tau^2_\beta$ and $\tau^2_\omega$ as the between-study variance, and the inconsistency variance, respectively. The term ${\bm \Theta}_{di}$ is a study-by-treatment interaction term that models between-study heterogeneity. The model ${\bm \Theta}_{di} \sim N( {\bf 0}, \tau^2_\beta \mathbf{P}_{c_{d}})$ implies that the heterogeneity variance is the same for all contrasts for every study regardless of whether or not the comparison is relative to the baseline treatment (Lu and Ades, 2004). Other simple choices of $\mathbf{P}_{c_{d}}$, such as allowing the off-diagonal entries to differ from 0.5, violate this symmetry between treatments. For example, in the case $d=2$ indicating the CDE design, the between-study variances for the CD and CE effects in this study are given by the two diagonal entries of $\tau^2_\beta \mathbf{P}_{c_{d}}$, which are both $\tau^2_\beta$. The between-study variance for the effect of E relative to D is $(-1, 1) \tau^2_\beta \mathbf{P}_{c_{d}} (-1, 1)^T$, which is also $\tau^2_\beta$. The ${\bm \Omega}_{d}$ are design-by-treatment interaction terms that model inconsistency in the network. The model ${\bm \Omega}_{d} \sim N( {\bf 0}, \tau^2_\omega \mathbf{P}_{c_{d}})$ implies that the inconsistency variance is the same for all contrasts for every design; other simple choices of $\mathbf{P}_{c_{d}}$ also violate this symmetry.
To describe all estimates from all studies, we stack the ${\bm Y}_{di}$ from the same design to form the $n_d \times 1$ column vector ${\bm Y}_d = ({\bm Y}_{d1}^T, \cdots, {\bm Y}^T_{d N_d})^T$, and we then stack these ${\bm Y}_d$ to form the $n \times 1$ column vector ${\bm Y} = ({\bm Y}_{1}^T, \cdots, {\bm Y}^T_{D})^T$. Jackson {\it et al.} (2016) then use three further matrices that we also define here because they will be required to describe the estimation procedure that follows. The matrix ${\bf M_1}$ is defined as a $n \times n$ square matrix where $m_{1ij}=0$ if the $i$th and $j$th entries of ${\bm Y}$, $i,j=1, \cdots n$, are estimates from different studies; otherwise ${m}_{1ii} =1$, and ${m}_{1ij} =1/2$ for $i \ne j$. The matrix ${\bf M_2}$ is defined as a $n \times n$ square matrix where ${m}_{2ij}=0$ if the $i$th and $j$th entries of ${\bm Y}$, $i,j=1, \cdots n$, are estimates from different designs; otherwise ${m}_{2ij} =1$ if the $i$th and $j$th entries of ${\bm Y}$ are estimates of the same treatment comparison (for example, treatment A compared to treatment B) and ${m}_{2ij} =1/2$ if these entries are estimates of different treatment comparisons. The supplementary materials show a concrete example showing how these two matrices are formed. Jackson {\it et al.} (2016) also define a $n \times c$ univariate design matrix ${\mathbf Z}$, where if the $i$th entry of ${\mathbf Y}$ is an estimated treatment effect of treatment J relative to the reference treatment A then the $i$th row of ${\mathbf Z}$ contains a single nonzero entry: 1 in the $(j-1)$th column, where $j$ is the position of $J$ in the alphabet. If instead the $i$th entry ${\mathbf Y}$ is an estimated treatment effect of treatment $J$ relative to treatment $K$, $K \ne A$, then the $i$th row of ${\mathbf Z}$ contains two nonzero entries: 1 in the $(j-1)$th column and -1 in the $(k-1)$th column. Defining ${\mathbf S}_d=\mbox{diag}({\mathbf S}_{d1}, \cdots, {\mathbf S}_{d N_d} )$, and then ${\mathbf S}=\mbox{diag}({\mathbf S}_{1}, \cdots, {\mathbf S}_{D} )$, model (\ref{nine}) can be presented for the entire dataset as \[ {\mathbf Y} \sim N({\mathbf Z} {\bm \delta}, {\tau^2_\beta{\bf M_1} + {\tau^2_\omega {\bf M_2}} }+ {\mathbf S}) \]
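As a small illustration of these definitions (this toy example is ours, and is simpler than the worked example in the supplementary materials), suppose that the network contains $t=3$ treatments A, B and C, and that the data comprise two studies of the AB design ($d=1$, $c_1=1$) and a single study of the ABC design ($d=2$, $c_2=2$, with A taken as the baseline treatment), so that $n=4$. Writing the entries of ${\bm Y}$ in the order (AB estimate from the first AB study, AB estimate from the second AB study, AB estimate from the three-arm study, AC estimate from the three-arm study), the definitions above give
\[
{\bf M_1}=\left(\begin{array}{cccc} 1&0&0&0\\ 0&1&0&0\\ 0&0&1&1/2\\ 0&0&1/2&1 \end{array}\right), \quad
{\bf M_2}=\left(\begin{array}{cccc} 1&1&0&0\\ 1&1&0&0\\ 0&0&1&1/2\\ 0&0&1/2&1 \end{array}\right), \quad
{\mathbf Z}=\left(\begin{array}{cc} 1&0\\ 1&0\\ 1&0\\ 0&1 \end{array}\right),
\]
so that ${\mathbf Z} {\bm \delta}=(\delta_1^{AB}, \delta_1^{AB}, \delta_1^{AB}, \delta_1^{AC})^T$. Estimates from different studies of the same design share inconsistency terms (through ${\bf M_2}$) but not heterogeneity terms (through ${\bf M_1}$), while estimates from different designs share no random-effect terms and are linked only through the basic parameters ${\bm \delta}$.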
\section{A multivariate network meta-analysis model} We now explain how to extend the univariate model in section 2 to the multivariate setting to handle multiple outcomes. We define $p$ to be the number of outcomes, and so the dimension of the network meta-analysis, so that we now consider the case where $p>1$. The ${\bm Y}_{di}$ are now $p c_d \times 1$ column vectors, where the ${\bm Y}_{di}$ contain $c_d$ column vectors of length $p$. For example, in a $p=5$ dimensional meta-analysis and continuing with the example where $d=2$ indicates the CDE design, we have $c_2=2$. The ${\bm Y}_{2i}$ are then $10 \times 1$ column vectors where, taking C as the baseline treatment for this design, the first five entries of the ${\bm Y}_{2i}$ are estimated relative treatment effect of D compared to C and the second five entries are the same estimate of E compared to C. We define the $pc \times 1$ column vector ${\bm \delta}=(\delta_1^{AB}, \delta_1^{AC}, \cdots, \delta_1^{AZ}, \delta_2^{AB}, \delta_2^{AC}, \cdots, \delta_2^{AZ}, \cdots, \delta_p^{AB}, \delta_p^{AC}, \cdots, \delta_p^{AZ})^T$, so that this vector contains the basic parameters for each outcome in turn. When $p=1$ the vector ${\bm \delta}$ reduces to its definition in the univariate setting, as given in section 2.
We define ${\bf \Sigma}_\beta$ and ${\bf \Sigma}_\omega$ to be $p \times p$ unstructured covariance matrices that are multivariate generalisations of $\tau^2_\beta$ and $\tau^2_\omega$. These two matrices contain the between-study variances and covariances, and the inconsistency variances and covariances, respectively, for all $p$ outcomes. We refer to ${\bf \Sigma}_\beta$ and ${\bf \Sigma}_\omega$ as the between-study covariance matrix, and the inconsistency covariance matrix, respectively. We continue to treat the within-study covariance matrices ${\bm S}_{di}$ as fixed and known in the analysis, but these are now $pc_d \times p c_d$ matrices. The entries of the ${\bm S}_{di}$ matrices that describe the covariance of estimated treatment effects for the same outcome can be obtained as in the univariate setting. However, the other entries of ${\bm S}_{di}$, which describe the covariances between estimated treatment effects for different outcomes, are harder to obtain in practice. A variety of strategies for dealing with this difficulty have been proposed (Jackson {\it et al}., 2011; Wei and Higgins, 2013).
\subsection{The proposed multivariate model for network meta-analysis} In the multivariate setting, to allow correlations between estimated treatment effects for different outcomes, both within studies and designs, we propose that model (\ref{nine}) is generalised to \begin{equation} \label{new_label} {\bm Y}_{di} = {\bm X}_{(d)} {\bm \delta} + {\bm \Theta}_{di} + {\bm \Omega}_{d} + {\bm \epsilon}_{di} \end{equation} where ${\bm X}_{(d)} =(({\bm I}_p \otimes {\bm Z}_{(d)1})^T, \cdots, ({\bm I}_p \otimes {\bm Z}_{(d) c_d})^T)^T $, ${\bm Z}_{(d)i}$ is the $i$th row of ${\bm Z}_{(d)}$, $
{\bm \Theta}_{di} \sim N( {\bf 0}, \mathbf{P}_{c_{d}} \otimes {\bf \Sigma}_\beta)
$,
$
{\bm \Omega}_{d} \sim N( {\bf 0}, \mathbf{P}_{c_{d}} \otimes {\bf \Sigma}_\omega)
$
and
$
{\bm \epsilon}_{di} \sim N( {\bf 0}, {\bm S}_{di})$, where all ${\bm \Theta}_{di}$, ${\bm \Omega}_{d}$ and ${\bm \epsilon}_{di}$ are independent, and $\otimes$ is the Kronecker product. The random ${\bm \Theta}_{di}$ and ${\bm \Omega}_{d}$ continue to model between-study heterogeneity and inconsistency, respectively. Recalling that ${\bm \delta}$ contains the basic parameters for each outcome in turn, the design matrices ${\bm X}_{(d)}$ provide the correct linear combinations of basic parameters to describe the mean of all estimated treatment effects in ${\bm Y}_{di}$. Model (\ref{new_label}) reduces to model (\ref{nine}) in one dimension. The definition of $\mathbf{P}_{c_{d}}$ means that ${\bf \Sigma}_\beta$ and ${\bf \Sigma}_\omega$ are the between-study covariance matrix, and inconsistency covariance matrix, for all contrasts. We continue to define ${\bm Y}$ as in the univariate setting, where ${\bm Y}$ contains $n$ column vectors of estimated treatment effects that are of length $p$, so that ${\bm Y}$ is an $np \times 1$ column vector in the multivariate setting. We define the multivariate $np \times pc $ design matrix ${\mathbf X}= (({\mathbf I}_p \otimes {\mathbf Z}_{1})^T, \cdots, ({\mathbf I}_p \otimes {\mathbf Z}_{n})^T)^T$, where ${\mathbf Z}_{i}$ is the $i$th row of ${\mathbf Z}$. Model (\ref{new_label}) can be presented for the entire dataset as \begin{equation} \label{nine1} {\mathbf Y} \sim N({\mathbf X} {\bm \delta}, {\bf M_1} \otimes {\bf \Sigma}_{\beta} + {\bf M_2} \otimes {\bf \Sigma}_{\omega} + {\mathbf S}) \end{equation}
where we continue to define ${\mathbf S}$ as in the univariate case. Matrices ${\bf M_1}$ and ${\bf M_2}$ are the same as in the univariate setting, and so continue to be $n \times n$ matrices.
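A short sketch of how the multivariate design matrix ${\mathbf X}$ can be assembled from a previously constructed univariate design matrix ${\mathbf Z}$ is given below (Python); this is only an illustration of the definition above.
\begin{verbatim}
# Sketch: X = ((I_p (x) Z_1)^T, ..., (I_p (x) Z_n)^T)^T, where Z_i is the i-th
# row of the n x c univariate design matrix Z and (x) is the Kronecker product.
import numpy as np

def multivariate_design(Z, p):
    n, c = Z.shape
    blocks = [np.kron(np.eye(p), Z[i, :].reshape(1, c)) for i in range(n)]
    return np.vstack(blocks)            # resulting dimension: (n * p) x (p * c)
\end{verbatim}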
Model (\ref{nine1}) is a linear mixed model for network meta-analysis and is conceptually similar to other models of this type (Piepho {\it et al.}, 2012). If ${\bf \Sigma}_{\omega}={\bf 0}$ then all ${\bm \Omega}_{d}= {\bm 0}$ and there is no inconsistency; we refer to this reduced model as the `consistent model'. If both ${\bf \Sigma}_{\beta}={\bf 0}$ and ${\bf \Sigma}_{\omega}={\bf 0}$ then all studies estimate the same effects to within-study sampling error and we refer to this model as the `common-effect and consistent model'.
Missing data (unobserved entries of ${\mathbf Y}$) are common in applications as not all studies may provide data for all outcomes and contrasts. When there are missing outcome data, the model for the observed data is the marginal model for the observed data implied by (\ref{nine1}), where any rows of ${\mathbf Y}$ that contain missing values are discarded. We will use a non-likelihood based approach for making inferences and so assume any data are missing completely at random (Seaman {\it et al.}, 2013). We define the diagonal $np \times np$ missing indicator matrix ${\mathbf R}$, where ${\mathbf R}_{ii}=1$ if ${\mathbf Y}_i$ is observed, ${\mathbf R}_{ii}=0$ if ${\mathbf Y}_i$ is missing, and ${\mathbf R}_{ij}=0$ if $i \ne j$.
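For example, the missing indicator matrix ${\mathbf R}$ could be formed as follows (a sketch in Python; the missingness pattern shown is hypothetical).
\begin{verbatim}
# Sketch: R is diagonal with 1 where the corresponding entry of Y is observed
# and 0 where it is missing.
import numpy as np

observed = np.array([1, 1, 0, 1, 0, 1])   # illustrative missingness pattern
R = np.diag(observed.astype(float))
\end{verbatim}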
\section{Multivariate estimation: a new method of moments}
Our estimation procedure is motivated by the univariate method proposed by DerSimonian and Laird (1986). This was developed in the much simpler setting where each study provides a single estimate $Y_i$, and where the random-effects model $Y_i \sim N(\delta, \tau^2+S_i)$ is assumed. This estimation method for $\tau^2$ uses the $Q$ statistic, where $Q=\sum S_i^{-1}(Y_i -\hat{\delta})^2$ and $\hat{\delta} = \sum S_i^{-1} Y_i/\sum S_i^{-1}$ is the pooled estimate under the common-effect model ($\tau^2=0$).
Now consider an alternative representation of this $Q$ statistic. Taking ${\bf Y} = (Y_1, \cdots, Y_n)^T$, ${\bf S} = \mbox{diag}(S_1, \cdots, S_n)$ and ${ \bf W}={\bf S}^{-1}$ means that $Q=\mbox{tr}({\mathbf W} ({\mathbf Y}-\hat{{\mathbf Y}}) ({\mathbf Y}-\hat{{\mathbf Y}})^{T})$, where $\hat{{\mathbf Y}}$ is obtained under the common-effect model. To obtain a $p \times p$ matrix generalisation of $Q$ for multivariate analyses, we replace the trace operator with the block trace operator in this expression (Jackson {\it et al}., 2013). The block trace operator is a generalisation of the trace that sums over all $n$ of the $p \times p$ matrices along the main block diagonal of an $np \times np$ matrix. This produces a $p \times p$ matrix. In the absence of missing data we can write our multivariate generalisation of the $Q$ statistic, $\mbox{btr}({\mathbf W} ({\mathbf Y}-\hat{{\mathbf Y}}) ({\mathbf Y}-\hat{{\mathbf Y}})^{T})$, as a weighted sum of outer products of $p \times 1$ vectors of residuals under the common-effect and consistent model. Hence the distribution of $\mbox{btr}({\mathbf W} ({\mathbf Y}-\hat{{\mathbf Y}}) ({\mathbf Y}-\hat{{\mathbf Y}})^{T})$ depends directly on the magnitudes of unknown variance components.
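The block trace operator is straightforward to compute directly; a minimal sketch (Python) is given below.
\begin{verbatim}
# Sketch: block trace of an (n*p) x (n*p) matrix, i.e. the sum of the n p x p
# blocks on the main block diagonal, which returns a p x p matrix.
import numpy as np

def btr(A, p):
    n = A.shape[0] // p
    return sum(A[k*p:(k+1)*p, k*p:(k+1)*p] for k in range(n))
\end{verbatim}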
\subsection{A Q matrix for multivariate network meta-analysis} We define a within-study precision matrix ${\mathbf W}$ corresponding to ${\mathbf S}$. If there are no missing outcome data in ${\mathbf Y}$ then we define ${\mathbf W} = {\mathbf S}^{-1}$, where ${\mathbf S}$ is taken from model (\ref{nine1}). If there are missing data in ${\mathbf Y}$ then the entries of ${\mathbf W}$ that correspond to observed data are obtained as the inverse of the corresponding entries of the within-study covariance matrix of reduced dimension (equal to that of the observed data) and the other entries of ${\mathbf W}$ are set to zero. For example, consider the case where ${\mathbf Y}$ is a $6 \times 1$ vector but only the second and fifth entries are observed; this corresponds to much less outcome data than would be used in practice but provides an especially simple example. Then we define ${\mathbf S}_{r}$, where the subscript $r$ indicates a dimension reduction, as a $2 \times 2$ matrix whose entries are the within-study variances and covariances of the two observed entries of ${\mathbf Y}$. The $6 \times 6$ precision matrix ${\mathbf W}$ then has all zero entries in the first, third, fourth and sixth rows and columns. However, the remaining entries of ${\mathbf W}$ are the entries of the $2 \times 2$ matrix ${\mathbf S}_{r}^{-1}$, so that ${\mathbf W}_{22} = ({\mathbf S}_{r}^{-1})_{11}$, ${\mathbf W}_{25} = ({\mathbf S}_{r}^{-1})_{12}$, ${\mathbf W}_{52} = ({\mathbf S}_{r}^{-1})_{21}$, and ${\mathbf W}_{55} = ({\mathbf S}_{r}^{-1})_{22}$. We define $\hat{{\mathbf Y}}$ to be the fitted value of ${{\mathbf Y}}$ under the common-effect and consistent model (${\bf \Sigma}_{\beta} = {\bf \Sigma}_{\omega}= {\bf{0}}$), so that $ \hat{{\mathbf Y}} = {\mathbf H} {\mathbf Y}$ where ${\mathbf H} = {\mathbf X} ({\mathbf X}^T {\mathbf W} {\mathbf X})^{-1} {\mathbf X}^{T} {\mathbf W}$. We also define an asymmetric $np \times np$ matrix (Jackson {\it et al.}, 2013) \begin{equation} \label{eq5} {\mathbf Q}= {\mathbf W} \{{\mathbf R} ({\mathbf Y}-\hat{{\mathbf Y}})\} \{{\mathbf R}({\mathbf Y}-\hat{{\mathbf Y}})\}^{T} = {\mathbf W} ({\mathbf Y}-\hat{{\mathbf Y}}) ({\mathbf Y}-\hat{{\mathbf Y}})^{T} {\mathbf R} \end{equation} Our definitions of ${\mathbf W}$ and ${\mathbf R}$ mean that ${\mathbf W} {\mathbf R}={\mathbf W}$, which results in the simplified version of ${\mathbf Q}$ in (\ref{eq5}). From the first form given in (\ref{eq5}), we have that the residuals ${\mathbf Y}-\hat{\mathbf Y}$ are pre-multiplied by ${\mathbf R}$, so that any residuals that correspond to missing outcome data do not contribute to ${\mathbf Q}$. Furthermore missing outcome data do not contribute to $\hat{\mathbf Y}$ because they have no weight under the common-effect and consistent model. Hence we can impute missing outcome data with any finite value without changing the value of ${\mathbf Q}$. This is merely a convenient way to handle missing data numerically and has no implications for the statistical modelling.
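A sketch of these computations is given below (Python), assuming that ${\mathbf S}$, ${\mathbf Y}$, $\hat{{\mathbf Y}}$ and ${\mathbf R}$ have already been formed and that any missing entries of ${\mathbf Y}$ have been imputed with an arbitrary finite value, as described above.
\begin{verbatim}
# Sketch: within-study precision matrix W under missing outcome data, and the
# Q matrix W (Y - Yhat)(Y - Yhat)^T R defined in section 4.1.
import numpy as np

def precision_with_missing(S, observed):
    # 'observed' is a boolean vector flagging the observed entries of Y;
    # only the observed sub-block of S is inverted, other entries of W are zero.
    W = np.zeros_like(S)
    idx = np.where(observed)[0]
    W[np.ix_(idx, idx)] = np.linalg.inv(S[np.ix_(idx, idx)])
    return W

def Q_matrix(W, Y, Yhat, R):
    resid = (Y - Yhat).reshape(-1, 1)
    return W @ resid @ resid.T @ R
\end{verbatim}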
\subsection{Design specific Q matrices for multivariate network meta-analysis}
In order to identify the full model, we will require design-specific versions of ${\mathbf Q}$ that only use data from a particular design.
As in the univariate setting, we stack the outcome data from design $d$ to form the vector ${\mathbf Y}_d = ({\mathbf Y}^T_{d1}, \cdots, {\mathbf Y}^T_{dN_d})^T$. In the multivariate setting the vector ${\mathbf Y}_d$ contains $n_d$ estimated effects each of length $p$, so that ${\mathbf Y}_d$ is now a $p n_d \times 1$ column vector. We define the design specific $n_d \times n_d$ matrix ${\bf M_1^d}$, where ${m}^d_{1ij}=0$ if the $i$th and $j$th estimated effect (of length $p$) in ${\mathbf Y}_d$, $i,j=1, \cdots, n_d$, are from separate studies; otherwise ${m}^d_{1ii} =1$ and ${m}^d_{1ij} =1/2$ for $i \ne j$. We define the $ p n_d \times p c_d $ design matrix ${\mathbf X}_{d}$ which is obtained by stacking identity matrices of dimension $p c_d$, where we include one such identity matrix for each study of design $d$. Hence ${\mathbf X}_{d} = {\bf 1}_{N_d} \otimes {\mathbf I}_{p c_{d}}$, where ${\bf 1}_{N_d}$ is the $N_d \times 1$ column vector where every entry is one. We also define the $p c_d \times 1$ column vector ${\bm \beta}_{d} = {\bm X}_{(d)} {\bm \delta} + {\bm \Omega}_{d}$.
An identifiable design-specific marginal model for outcome data from design $d$ only, that is implied by model (\ref{new_label}), is \begin{equation} \label{nined} {\mathbf Y}_d \sim N({\mathbf X}_{d} {\bm \beta}_{d}, {{\bf M_1^d} \otimes {\bf \Sigma}_{\beta}}+ {\mathbf S}_d ) \end{equation} where ${\mathbf S}_d=\mbox{diag}({\mathbf S}_{d1}, \cdots,{\mathbf S}_{d N_d})$. We can also calculate design specific versions of (\ref{eq5}) where we calculate all quantities, including the fitted values, using just the data from studies of design $d$. We define these $p n_d \times p n_d$ design specific matrices as
\begin{equation} \label{cactus}
{\mathbf Q}_d = {\mathbf W}_d ({\mathbf Y}_d-\hat{{\mathbf Y}}_d) ({\mathbf Y}_d-\hat{{\mathbf Y}}_d)^{T} {\mathbf R}_d
\end{equation}
where ${\mathbf W}_d$, ${\mathbf R}_d$ and $\hat{{\mathbf Y}}_d$ in (\ref{cactus}) are defined in the same way as ${\mathbf W}$, ${\mathbf R}$ and $\hat{{\mathbf Y}}$ in (\ref{eq5}) but where only data from design $d$ are used. Hence ${\mathbf R}_d$ and ${\mathbf W}_d$ are the missing indicator matrix, and the within-study precision matrix, of ${\mathbf Y}_d$, respectively. We compute $\hat{{\mathbf Y}}_d = {\mathbf H}_d {\mathbf Y}_d$ where ${\mathbf H}_d = {\mathbf X}_d ({\mathbf X}_d^T {\mathbf W}_d {\mathbf X}_d)^{-1} {\mathbf X}_d^{T} {\mathbf W}_d$. When computing ${\mathbf H}_d$ we take the matrix inverse to be the Moore-Penrose pseudoinverse. This is so that any design-specific regression corresponding to this hat matrix that is not fully identifiable (due to missing outcome data) can still contribute to the estimation. We use model (\ref{nined}) to derive the properties of ${\mathbf Q}_d$ in equation (\ref{cactus}).
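A sketch of the design-specific computations (Python), using the Moore-Penrose pseudoinverse as just described, is given below; ${\mathbf X}_d$, ${\mathbf W}_d$, ${\mathbf R}_d$ and ${\mathbf Y}_d$ are assumed to have been constructed already.
\begin{verbatim}
# Sketch: design-specific hat matrix H_d (via the Moore-Penrose pseudoinverse)
# and the corresponding Q_d matrix.
import numpy as np

def Q_design(X_d, W_d, R_d, Y_d):
    H_d = X_d @ np.linalg.pinv(X_d.T @ W_d @ X_d) @ X_d.T @ W_d
    resid = (Y_d - H_d @ Y_d).reshape(-1, 1)
    return W_d @ resid @ resid.T @ R_d
\end{verbatim}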
\subsection{The estimating equations}
We base our estimation on the two $p \times p$ matrices
$\mbox{btr} ({\mathbf Q}) $
and $ \sum\limits_{d=1}^D\mbox{btr} ({\mathbf Q}_d)$, where ${\mathbf Q}$ and ${\mathbf Q}_d$ are given in (\ref{eq5}) and (\ref{cactus}), respectively. Specifically, we match these quantities to their expectations to estimate the unknown variance parameters using the method of moments.
\subsubsection{Evaluating $\mbox{E}[\mbox{btr} ({\mathbf Q})] $ and deriving the first estimating equation} We define ${\mathbf A} = ({\mathbf I}_{np}-{\mathbf H})^T {\mathbf W}$ and ${\mathbf B} = ({\mathbf I}_{np} - {\mathbf H})^{T} {\mathbf R}$, which are known $np \times np$ matrices. We also divide the matrices ${\mathbf A}$ and ${\mathbf B}$ into $n^2$ blocks of $p \times p$ matrices, and write ${\mathbf A}_{i,j}$ and ${\mathbf B}_{i,j}$, $i,j=1, \cdots, n$, to mean the $i$th by $j$th blocks of ${\mathbf A}$ and ${\mathbf B}$ respectively. Hence ${\mathbf A}_{i,j}$ and ${\mathbf B}_{i,j}$ are both $p \times p$ matrices. In the supplementary materials we show that \[ \mbox{E}[\mbox{btr}({\mathbf Q})]= \sum\limits_{i=1}^{n}\sum\limits_{j=1}^{n}\sum\limits_{k=1}^{n} m_{1i j} {\mathbf A}_{k,i} {\bf \Sigma}_\beta {\mathbf B}_{j,k}+ \] \[ \sum\limits_{i=1}^{n}\sum\limits_{j=1}^{n}\sum\limits_{k=1}^{n} m_{2i j} {\mathbf A}_{k,i} {\bf \Sigma}_\omega {\mathbf B}_{j,k} + \mbox{btr}({\mathbf B}). \] We apply the $\mbox{vec}(\cdot)$ operator to both sides of the previous equation and use the identity $\mbox{vec}({\mathbf A}{\mathbf X}{\mathbf B}) = ({\mathbf B}^T \otimes {\mathbf A})\mbox{vec}({\mathbf X})$ (see Henderson and Searle, 1981), to obtain \begin{equation} \label{eq1} \mbox{vec}(\mbox{E}[\mbox{btr}({\mathbf Q})]) = {\mathbf C} \mbox{vec}({\bf \Sigma}_\beta) + {\mathbf D} \mbox{vec}({\bf \Sigma}_\omega) + {\mathbf E} \end{equation} where \[ {\mathbf C} = \sum\limits_{i=1}^{n}\sum\limits_{j=1}^{n}\sum\limits_{k=1}^{n} m_{1i j} {\mathbf B}_{j,k}^T \otimes {\mathbf A}_{k,i} \] \[ {\mathbf D} = \sum\limits_{i=1}^{n}\sum\limits_{j=1}^{n}\sum\limits_{k=1}^{n} m_{2i j} {\mathbf B}_{j,k}^T \otimes {\mathbf A}_{k,i} \] and \[ {\mathbf E} = \mbox{vec}(\mbox{btr}({\bf B})). \] Upon substituting $\mbox{E}[\mbox{btr}({\mathbf Q})] = \mbox{btr}({\mathbf Q})$, ${\bf \Sigma}_\beta = {\hat{\bf \Sigma}}_\beta$ and ${\bf \Sigma}_\omega=\hat{{\bf \Sigma}}_\omega$ in equation (\ref{eq1}), the method of moments gives one estimating equation in the vectorised form of the two unknown covariance matrices. \subsubsection{Evaluating $\mbox{E}[\mbox{btr} ({\mathbf Q}_d)] $ and deriving the second estimating equation} Model (\ref{nined}) depends upon one unknown covariance matrix, ${\bf \Sigma}_{\beta}$. The intuition is that, upon using all $D$ of the ${\mathbf Q}_d$ matrices in (\ref{cactus}) and the method of moments to estimate ${\bf \Sigma}_{\beta}$, we will then be able to estimate the other unknown covariance matrix ${\bf \Sigma}_\omega$ using the first estimating equation. We define design-specific ${\mathbf A}_d = ({\mathbf I}_{p n_d}-{\mathbf H}_d)^T {\mathbf W}_d$ and ${\mathbf B}_d = ({\mathbf I}_{p n_d} - {\mathbf H}_d)^{T} {\mathbf R}_d$, where ${\mathbf A}_d$ and ${\mathbf B}_d$ are known $p n_d \times p n_d$ matrices. We also divide the matrices ${\mathbf A}_d$ and ${\mathbf B}_d$ into $n_d^2$ blocks of $p \times p$ matrices, and write ${\mathbf A}_{d, i,j}$ and ${\mathbf B}_{d, i,j}$, $i,j=1, \cdots, n_d$, to mean the $i$th by $j$th blocks of ${\mathbf A}_d$ and ${\mathbf B}_d$ respectively.
In the supplementary materials we show that \begin{equation} \label{eq3} \mbox{vec}\left(\mbox{E}[\sum\limits_{d=1}^D \mbox{btr}({\mathbf Q}_d)]\right) = \left(\sum\limits_{d=1}^D {\mathbf C}_d\right) \mbox{vec}({\bf \Sigma}_\beta) + \sum\limits_{d=1}^D {\mathbf E}_d \end{equation} where \[ {\mathbf C}_d = \sum\limits_{i=1}^{n_d}\sum\limits_{j=1}^{n_d}\sum\limits_{k=1}^{n_d} m^d_{1 ij} {\mathbf B}_{d, j,k}^T \otimes {\mathbf A}_{d, k,i} \] and \[ {\mathbf E}_d = \mbox{vec}(\mbox{btr}({\bf B}_d)). \] Upon substituting $\mbox{E}[\sum\limits_{d=1}^D \mbox{btr}({\mathbf Q}_d)]=\sum\limits_{d=1}^D \mbox{btr}({\mathbf Q}_d)$ and ${\bf \Sigma}_\beta = \hat{{\bf \Sigma}}_\beta$ in (\ref{eq3}), we obtain a second estimating equation from the method of moments.
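As an illustration of how the coefficient matrices of the first estimating equation might be assembled in practice, a sketch is given below (Python); ${\mathbf A}$, ${\mathbf B}$, ${\bf M_1}$ and ${\bf M_2}$ are assumed to have been computed already, and the design-specific terms ${\mathbf C}_d$ and ${\mathbf E}_d$ are obtained analogously from ${\mathbf A}_d$, ${\mathbf B}_d$ and ${\bf M_1^d}$.
\begin{verbatim}
# Sketch: assemble C, D (p^2 x p^2) and E (p^2 x 1) for the first estimating
# equation from the p x p blocks of A and B and the matrices M1 and M2.
import numpy as np

def block(A, p, i, j):
    return A[i*p:(i+1)*p, j*p:(j+1)*p]

def first_equation_terms(A, B, M1, M2, p):
    n = M1.shape[0]
    C = np.zeros((p*p, p*p))
    D = np.zeros((p*p, p*p))
    for i in range(n):
        for j in range(n):
            if M1[i, j] == 0.0 and M2[i, j] == 0.0:
                continue                     # both terms vanish for this (i, j)
            for k in range(n):
                kron = np.kron(block(B, p, j, k).T, block(A, p, k, i))
                C += M1[i, j] * kron
                D += M2[i, j] * kron
    btrB = sum(block(B, p, k, k) for k in range(n))
    E = btrB.reshape(-1, 1, order="F")       # vec of btr(B), column-stacked
    return C, D, E
\end{verbatim}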
\subsection{Solving the estimating equations and performing inference}
We solve the estimating equation resulting from (\ref{eq3}) for $\mbox{vec}({\bf \hat{\Sigma}}_\beta)$, substitute this estimate into the estimating equation resulting from (\ref{eq1}), and then solve for $\mbox{vec}({\bf \hat{\Sigma}}_\omega)$. \subsubsection{Estimating ${\bf \Sigma}_\beta$ under the consistent model}
Some applied analysts may prefer to assume the consistent model (${\bf \Sigma}_\omega = {\bf 0}$). As in the univariate case (Jackson {\it et al.}, 2016), we have two possible ways of estimating ${\bf \Sigma}_\beta$ under the consistent model: we can use the estimating equation resulting from (\ref{eq1}) with ${\bf \Sigma}_\omega = {\bf 0}$ or the estimating equation resulting from (\ref{eq3}) as in the full model. Also as in the univariate case, we suggest the former option because it uses the information gained by assuming consistency when estimating ${\bf \Sigma}_\beta$. However, this first option is valid only under the consistent model. \subsubsection{`Truncating' the estimates of the unknown covariance matrices so that they are symmetric and positive semi-definite} As in the univariate case, there is the problem that the point estimates of the two unknown covariance matrices are not necessarily positive semi-definite. The method of moments does not even initially enforce the constraint that the point estimates of the unknown covariance matrices are symmetric (Chen {\it et al.}, 2012; Jackson {\it et al.}, 2013). We produce symmetric estimators corresponding to an estimated covariance matrix $\hat{{\bf \Sigma}}$ as $(\hat{{\bf \Sigma}}^T + \hat{{\bf \Sigma}})/2$ (Chen {\it et al.}, 2012; Jackson {\it et al.}, 2013). This also corresponds to taking the average of estimates that result from our ${\bf Q}$ and ${\bf Q}_d$ matrices and their transposes (Jackson {\it et al.}, 2013). We then write these symmetric estimators in terms of their spectral decomposition (Chen {\it et al.}, 2012; Jackson {\it et al.}, 2013) and truncate any negative eigenvalues to zero to provide the final symmetric positive semi-definite estimated covariance matrices. Specifically, we define the truncated estimate corresponding to the symmetric $\hat{{\mathbf{\Sigma}}}$ as $ \hat{{\mathbf{\Sigma}}}^{+} = \sum\limits_{i=1}^{p} \mbox{max}(0, \lambda_i) {\mathbf e}_i {\mathbf e}_i^T, $ where $\lambda_i$ is the $i$th eigenvalue of the symmetric $\hat{\mathbf{\Sigma}}$ and ${\mathbf e}_i$ is the corresponding normalised eigenvector.
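A sketch of this truncation step (Python) is given below; it is an illustration of the spectral decomposition just described, and uses a symmetric eigendecomposition because the symmetrised estimate is a real symmetric matrix.
\begin{verbatim}
# Sketch: symmetrise an estimated covariance matrix and truncate any negative
# eigenvalues to zero, giving a symmetric positive semi-definite estimate.
import numpy as np

def truncate_psd(Sigma_hat):
    S = (Sigma_hat + Sigma_hat.T) / 2.0
    eigval, eigvec = np.linalg.eigh(S)
    eigval = np.clip(eigval, 0.0, None)
    return eigvec @ np.diag(eigval) @ eigvec.T
\end{verbatim}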
\subsubsection{Inference for ${\bm \delta}$} Inference for ${\bm \delta}$ then proceeds as a weighted regression where all weights are treated as fixed and known. Writing $\hat{\bf{V}}$ as the estimated variance of ${\bf Y}$ in (\ref{nine1}), in the absence of missing outcome data we have $ \hat{{\bm \delta}}= ({\bm X}^T \hat{\bf{V}}^{-1} {\bm X})^{-1} {\bm X}^T \hat{\bf{V}}^{-1} {\bm Y} $ where $ \mbox{Var}(\hat{{\bm \delta}}) = ({\bm X}^T \hat{\bf{V}}^{-1} {\bm X})^{-1}. $ In the presence of missing data we can, under our missing completely at random assumption, apply these standard formulae for weighted regression to the observed outcomes. Alternatively and equivalently, we can impute the missing outcome data in ${\bf Y}$ with an arbitrary value and replace $\hat{\bf V}^{-1}$ with the precision matrix corresponding to $\hat{\bf V}$, calculated in the way explained for ${\bm S}$ in section 4.1 (Jackson {\it et al.}, 2011). Approximate confidence intervals and hypothesis tests for all basic parameters for all outcomes then immediately follow by taking $\hat{{\bm \delta}}$ to be approximately normally distributed. Inferences for functional parameters follow by taking appropriate linear combinations of $\hat{{\bm \delta}}$.
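A sketch of this weighted regression step (Python) is given below, where the supplied precision matrix is either $\hat{\bf V}^{-1}$ (no missing data) or the precision matrix corresponding to $\hat{\bf V}$ formed in the way described above when outcome data are missing.
\begin{verbatim}
# Sketch: inference for delta by generalised least squares, treating the
# estimated variance matrix (or its precision analogue) as fixed and known.
import numpy as np

def estimate_delta(X, V_hat_inv, Y):
    cov = np.linalg.inv(X.T @ V_hat_inv @ X)     # Var(delta_hat)
    delta_hat = cov @ X.T @ V_hat_inv @ Y
    return delta_hat, cov
\end{verbatim}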
\subsection{Special cases of the estimation procedure} In the supplementary materials we show that the proposed method reduces to two previous methods in special cases. If all studies are two arm studies and consistency is assumed then the proposed method reduces to the matrix based method for multivariate meta-regression (Jackson {\it et al.}, 2013). The proposed multivariate method reduces to the univariate DerSimonian and Laird method for network meta-analysis (Jackson {\it et al.}, 2016) when $p=1$. \subsection{Model identification}
If the necessary matrix inversions in the estimating equations resulting from (\ref{eq1}) and (\ref{eq3}) cannot be performed then the two unknown variance components cannot both be identified using the proposed method. A minimum requirement for any multivariate modelling is that the common-effect and consistent model must be identifiable. This means that there must be some information (direct or indirect) about each basic parameter for all outcomes. Two or more studies of the same design must provide data for all possible pairs of outcomes to identify ${\bf \Sigma}_{\beta}$. Two or more studies of different designs must provide data for all possible pairs of outcomes to identify ${\bf \Sigma}_{\omega}$. If these conditions are satisfied then the model will be identifiable. In situations where our model is not identifiable we suggest that simpler models should be considered instead. Possible strategies for this include considering models of lower dimension or the consistent model. In practice it is highly desirable to have more than the minimum amount of replication required, both within and between designs, so that the model is well identified. We make some pragmatic decisions in the next section for our example to provide sufficient replication within designs, in order to estimate ${\bf \Sigma}_{\beta}$ with reasonable precision.
\section{Example} The methodology developed in this paper is now applied to an illustrative example in relapsing remitting multiple sclerosis (RRMS). Multiple sclerosis (MS) is an inflammatory disease of the brain and spinal cord and RRMS is a common type of MS. The effectiveness of a new treatment is typically measured to assess its impact on relapse rate and odds of disease progression. Magnetic Resonance Imaging (MRI) allows measurement of the number of new or enlarging lesions in the brain. Three outcomes are included in our analyses, so that $p=3$ in the full three dimensional network meta-analysis. These three outcomes are: (1) the log rate ratio of new or enlarging MRI lesions; (2) the log annualised relapse rate ratio; and (3) log disability progression odds ratio. Relapse is defined as appearance of new, worsening or recurrence of neurological symptoms that can be attributable to MS, accompanied by an increase of a score on the Expanded Disability Status Scale (EDSS) and also functional-systems score(s), lasting at least 24 hours, preceded by neurologic stability for at least 30 days. Disability progression is defined as an increase in EDSS score that was sustained for 12 weeks, with an absence of relapse at the time of assessment. Negative basic parameters indicate that treatments B-F are beneficial compared to treatment A throughout.
Data in this illustrative example were obtained from ten randomised controlled trials of six treatment options (coded in the network data as treatments A to F): placebo (A), interferon beta-1b (B), interferon beta-1a (C), glatiramer (D), and two doses of fingolimod, 0.5mg (E) and 1.25mg (F). Three of the fingolimod trials were three-arm (two doses and a control) and are included as three-arm studies. Three trials of interferon beta (one 1a and two 1b) were three-arm (also two doses and a control), and these were included as separate two-arm trials (each dose against the control, with the number of participants in each control arm halved). This ignores the differences in doses of interferon beta and was a pragmatic decision to help provide an identifiable network. Briefly, in this example there is very little replication within designs, so that identifying ${\bf \Sigma}_\beta$ well is very difficult without making pragmatic decisions such as this. Sormani {\it et al.} (2010) also treat these particular studies as two separate studies in this way, which helps them to identify their meta-regression models. Treating these three studies as separate two-arm trials means that the data are analysed as being from thirteen studies, and a summary of the resulting data structure is shown in Table \ref{tab-ms-data}. There are eight different designs in Table \ref{tab-ms-data} and so there is relatively little replication within designs, even when including three of the three-arm studies as separate two-arm studies. See Bujkiewicz {\it et al}. (2016) for further details of these data. Figure 1 provides network diagrams that show the number of comparisons between each pair of treatments on the edges. In these diagrams the three-arm studies (Table \ref{tab-ms-data}) are taken to contribute three comparisons; for example, the CEF study contributes CE, CF and EF comparisons. However, only two estimated treatment effects from this study contribute to the analyses because C is taken as the baseline; the study's estimated EF treatment effect contains no additional information once its CE and CF contrasts are included in the analysis.
\begin{figure}
\caption{Network diagram for the RRMS dataset. A -- placebo, B -- interferon beta-1b, C -- interferon beta-1a, D -- glatiramer, E -- fingolimod 0.5mg, F -- fingolimod 1.25mg. The left-hand-side network corresponds to studies reporting the log annualised relapse rate ratio and log disability progression odds ratio ($y_2$ and $y_3$), for which data are complete. The right-hand-side network corresponds to studies reporting the log rate ratio of new or enlarging MRI lesions ($y_1$, which is not reported in four studies). The numbers shown on the network edges are the number of direct comparisons of each pair of treatments; the absence of an edge indicates that there is no direct comparison. Three of the thirteen studies are three-arm trials, which are each taken to provide three direct comparisons (a direct comparison between each treatment pair). Hence there are 19 direct comparisons in the left-hand-side network, where there are no missing data. }
\label{netdiag}
\end{figure}
\begin{table}[h] \caption{\label{tab-ms-data}Summary of the relapsing remitting multiple sclerosis dataset.} \begin{tabular}{lll} \hline \textbf{Study} & Design & Outcomes \\ \hline IFNB SG (1) & AB & All three outcomes measured\\ IFNB SG (2) & AB & All three outcomes measured\\ Jacobs/Simon & AC & All three outcomes measured\\ PRISMS (1) & AC & All three outcomes measured\\ PRISMS (2) & AC & All three outcomes measured\\ Johnson & AD & Relapse rate and disability progression only\\ Durelli & BC & Relapse rate and disability progression only \\ O'Connor (1) & BD & Relapse rate and disability progression only \\ O'Connor (2) & BD & Relapse rate and disability progression only \\ Mikol & CD & All three outcomes measured\\ FREEDOMS 1 & AEF & All three outcomes measured \\ FREEDOMS 2 & AEF & All three outcomes measured\\ TRANSFORMS & CEF & All three outcomes measured \\ \hline \end{tabular} \end{table}
\begin{sidewaystable} \caption{\label{tab-ms-results1}Treatment effect estimates of each treatment relative to the reference treatment A (placebo).} \begin{tabular}{lccccc} \hline model & \multicolumn{5}{c}{estimate (se)} \\
& AB & AC & AD & AE & AF \\ \hline \emph{MRI} ($y_1$) & & & & & \\ \hline univariate ($y_1$) & -0.95 (0.39) & -1.00 (0.21) & -0.68 (0.50) & -1.38 (0.26) & -1.52 (0.26) \\ bivariate ($y_1, y_2$) & -0.94 (0.39) & -1.00 (0.21) & -0.68 (0.50) & -1.39 (0.26) & -1.53 (0.26) \\ bivariate ($y_1, y_3$) & -0.96 (0.39) & -0.98 (0.22) & -0.66 (0.50) & -1.38 (0.26) & -1.51 (0.26) \\ trivariate ($y_1, y_2, y_3$)& -0.96 (0.39) & -0.97 (0.22) & -0.67 (0.50) & -1.38 (0.26) & -1.51 (0.26) \\ \hline \emph{Relapse rate} ($y_2$)& & & & & \\ \hline univariate ($y_2$) & -0.35 (0.10) & -0.25 (0.09) & -0.34 (0.11) & -0.81 (0.12) & -0.78 (0.12) \\ bivariate ($y_1, y_2$) & -0.35 (0.10) & -0.25 (0.09) & -0.34 (0.11) & -0.81 (0.12) & -0.78 (0.12) \\ bivariate ($y_2, y_3$) & -0.36 (0.11) & -0.23 (0.10) & -0.33 (0.12) & -0.80 (0.13) & -0.77 (0.13) \\ trivariate ($y_1, y_2, y_3$)& -0.36 (0.11) & -0.23 (0.10) & -0.33 (0.12) & -0.80 (0.13) & -0.77 (0.13) \\ \hline \emph{Disability progression} ($y_3$) & & & & & \\ \hline univariate ($y_3$) & -0.46 (0.25) & -0.11 (0.21) & -0.42 (0.25) & -0.33 (0.25) & -0.37 (0.24) \\ bivariate ($y_2, y_3$) & -0.47 (0.25) & -0.10 (0.21) & -0.43 (0.25) & -0.37 (0.25) & -0.37 (0.25) \\ bivariate ($y_1, y_3$) & -0.46 (0.25) & -0.11 (0.21) & -0.42 (0.25) & -0.34 (0.25) & -0.38 (0.25) \\ trivariate ($y_1, y_2, y_3$)& -0.47 (0.25) & -0.10 (0.21) & -0.43 (0.25) & -0.37 (0.25) & -0.37 (0.25) \\ \hline \end{tabular} \end{sidewaystable}
\begin{table} \caption{\label{tab-ms-results2}Inconsistency and heterogeneity covariance matrices estimates.} \begin{tabular}{lccccccccc} \hline model & $\Sigma_{\omega 11}$ & $\Sigma_{\omega 12}$ & $\Sigma_{\omega 13}$ & $\Sigma_{\omega 22}$ & $\Sigma_{\omega 23}$ & $\Sigma_{\omega 33}$ &&& \\ \hline univariate ($y_1$) & 0.0000 & & & & & &&&\\ univariate ($y_2$) & & & &0.0115 & & &&&\\ univariate ($y_3$) & & & & & & 0.0713 &&&\\ bivariate ($y_1, y_2$) & 0.0002 & 0.0017 & & 0.0125 & & &&& \\ bivariate ($y_1, y_3$) & 0.0018 & & 0.0116 & & & 0.0741 &&&\\ bivariate ($y_2, y_3$) & & & &0.0161 & 0.0344 & 0.0735 &&&\\ trivariate ($y_1, y_2, y_3$)& 0.0027& 0.0066& 0.0143 & 0.0161 & 0.0349& 0.0756 &&& \\ \hline
& $\Sigma_{ \beta 11}$ & $\Sigma_{ \beta 12} $ & $\Sigma_{\beta 13}$ & $\Sigma_{\beta 22 }$ & $\Sigma_{\beta 23} $ & $\Sigma_{\beta 33}$ \\ \hline univariate ($y_1$) & 0.1508 & & & & & &&&\\ univariate ($y_2$) & & & &0.0043 & & &&&\\ univariate ($y_3$) & & & & & & 0.0000 &&&\\ bivariate ($y_1, y_2$) & 0.1523 & -0.0110 & & 0.0047 & & && & \\ bivariate ($y_1, y_3$) & 0.1526 & & -0.0191 & & & 0.0024 & & & \\ bivariate ($y_2, y_3$) & & & &0.0061 & 0.0015 & 0.0004 & & &\\ trivariate ($y_1, y_2, y_3$)& 0.1538 & -0.0116 & -0.0195 & 0.0059 & 0.0024 & 0.0027 & & & \\ \hline \end{tabular} \end{table}
Table \ref{tab-ms-results1} shows the estimates of the basic parameters (treatment effects relative to the reference treatment, placebo) obtained from univariate network meta-analyses, bivariate analyses for all three combinations of pairs of outcomes and the trivariate analysis. The results are similar across all analyses, and conclusions from univariate and multivariate analyses are the same. This is disappointing because multivariate analyses have not resulted in more precise inference. The entries of ${\bf \hat{\Sigma}}_\beta$ and ${\bf \hat{\Sigma}}_\omega$ are shown in Table \ref{tab-ms-results2}. The positive estimates obtained for the unknown variance components suggest that this example exhibits some between-study heterogeneity and inconsistency. In order to assess the impact of the unknown variance components, we also fitted the consistent model and the common-effect and consistent model (results not shown) using all three outcomes ($p=3$). On average, the standard errors of the fifteen basic parameters from the full model are 35\% greater (range: 13\% to 84\%) than those from the consistent model, which in turn are 58\% (range: 8\% to 128\%) greater than those from the common-effect and consistent model. Both the between-study heterogeneity and inconsistency have notable impact.
The multivariate analysis adds to the univariate analyses in two main ways. Firstly, the good agreement between the multivariate and univariate analyses is particularly important for the treatment effects on MRI, where a substantial proportion of data were missing. It has been demonstrated by Kirkham {\it et al}. (2012) that a multivariate approach to meta-analysis can help obtain more accurate estimates in the presence of outcome reporting bias. Hence the multivariate analysis reduces concerns that this univariate analysis is affected by reporting bias. Secondly, joint inferences for all three outcomes are possible under the multivariate model. For example, and as we might anticipate, in our example the estimated log annualised relapse rate ratios and log disability progression odds ratios are highly positively correlated; from $\mbox{Var}(\hat{{\bm \delta}})$ in our three dimensional multivariate meta-analysis, the correlations between the five pairs of estimated basic parameters for these two outcomes are all between 0.63 and 0.75. Medical decision making based jointly on these two outcomes should take this high positive correlation into account, and this is only possible by using a multivariate approach. For example, a formal decision analysis involving these two outcomes should be based on their joint distribution rather than their two marginal distributions.
\section{Discussion} We have proposed a new model for dealing with both multiple treatment contrasts and multiple outcomes, to provide a framework for conducting multivariate network meta-analysis. By using a matrix-based method of moments estimator, our methodology naturally builds on previous work (such as the well-known DerSimonian and Laird approach) and is computationally very fast, relative to other potential estimation approaches such as REML or MCMC; this is especially the case in very high dimensions and so our methodology is particularly advantageous for ambitious analyses of this type. The main disadvantage is that, as a necessary consequence of its semi-parametric nature, the method of moments is not based on sufficient statistics and so is not fully efficient. The loss in efficiency relative to maximum likelihood estimation awaits investigation but we anticipate that this will be less serious for inferences about the average effects than the unknown variance components. Furthermore the within-study normal approximations used in our model are not necessarily very accurate even in moderately sized studies.
Since our analysis uses a general design matrix, the modelling may easily be extended by adding study level covariates to describe and fit multivariate network meta-regressions. In the network meta-analysis setting these regressions have the potential to explain the reasons for inconsistency and model multiple dose level responses. Our method of moments estimation can be combined with approaches that `inflate' confidence intervals from a frequentist random effects meta-analysis (Hartung and Knapp, 2001; Jackson and Riley, 2014).
In conclusion, we have developed a new model and estimation method for multivariate network meta-analysis, which can describe multiple treatments and multiple correlated outcomes.
\section*{References}
Achana, F. A., Cooper, N. J., Bujkiewicz, S., Hubbard, S. J., Kendrick, D., Jones, D. R. and Sutton, A. J. (2014). Network meta-analysis of multiple outcome measures accounting for borrowing of information across outcomes. {\it BMC Medical Research Methodology} {\bf 14,} 92.\\
\noindent Bujkiewicz, S., Thompson, J. R., Riley, R. D. and Abrams, K. R. (2016). Bayesian meta-analytical methods to incorporate multiple surrogate endpoints in drug development process. {\it Statistics in Medicine} {\bf 35,} 1063--1089.\\
\noindent Chen, H., Manning, A. K. and Dupuis, J. (2012). A method of moments estimator for random effect multivariate meta-analysis. {\it Biometrics} {\bf 68,} 1278--1284. \\
\noindent DerSimonian, R. and Laird, N. (1986). Meta-analysis in clinical trials. {\it Controlled Clinical Trials} {\bf 7,} 177--188. \\
\noindent Efthimiou, O., Mavridis, D., Cipriani, A., Leucht, S., Bagos, P. and Salanti, G. (2014). An approach for modelling multiple correlated outcomes in a network of interventions using odds ratios. {\it Statistics in Medicine} {\bf 33,} 2275--2287. \\
\noindent Efthimiou, O., Mavridis, D., Riley, R. D., Cipriani, A. and Salanti, G. (2015). Joint synthesis of multiple correlated outcomes in networks of interventions. {\it Biostatistics} {\bf 16,} 84--97. \\
\noindent Hartung, J. and Knapp, G. (2001). On tests of the overall treatment effect in meta-analysis with normally distributed responses. {\it Statistics in Medicine} {\bf 20,} 1771--1782. \\
\noindent Henderson, H. V. and Searle, S. R. (1981). The vec-permutation matrix, the vec operator and Kronecker products: a review. {\it Linear and Multilinear Algebra} {\bf 9,} 271--288. \\
\noindent Higgins, J.P.T., Jackson, D., Barrett, J.K., Lu, G., Ades, A.E. and White, I.R. (2012). Consistency and inconsistency in network meta-analysis: concepts and models for multi-arm studies. {\it Research Synthesis Methods} {\bf 3,} 98--110. \\
\noindent Hong, H., Fu, H., Price, K. L. and Carlin, B. P. (2015). Incorporation of individual-patient data in network meta-analysis for multiple continuous endpoints, with application to diabetes treatment. {\it Statistics in Medicine} {\bf 34,} 2794--2819. \\
\noindent Hong, H., Chu, H., Zhang, J. and Carlin, B. P. (2016). A Bayesian missing data framework for generalized multiple outcome mixed treatment comparisons. {\it Research Synthesis Methods} {\bf 7,} 6--22. \\
\noindent Jackson, D., Riley, R. and White, I. R. (2011). Multivariate meta-analysis: potential and promise (with discussion). {\it Statistics in Medicine} {\bf 30,} 2481--2510. \\
\noindent Jackson, D., White, I.R. and Riley, R.D. (2013). A matrix-based method of moments for fitting the multivariate random effects model for meta-analysis and meta-regression. {\it Biometrical Journal} {\bf 55,} 231--245. \\
\noindent Jackson, D. and Riley, R. (2014). A refined method for multivariate meta-analysis and meta-regression. {\it Statistics in Medicine} {\bf 33,} 541--554. \\
\noindent Jackson, D., Law, M., Barrett, J.K., Turner, R., Higgins, J.P.T., Salanti, G. and White, I.R. (2016). Extending DerSimonian and Laird's methodology to perform network meta-analyses with random inconsistency effects. {\it Statistics in Medicine} {\bf 35}, 819--839. \\
\noindent Kirkham, J. J., Riley, R. D. and Williamson, P. R. (2012). A multivariate meta-analysis approach for reducing the impact of outcome reporting bias in systematic reviews. {\it Statistics in Medicine} {\bf 31,} 2179--2195. \\
\noindent Kulinskaya, E., Dollinger, M.B. and Bj{\o}rkest{\o}l, K. (2011). Testing for Homogeneity in Meta-Analysis I. The One-Parameter Case: Standardized Mean Difference. {\it Biometrics} {\bf 67,} 203--212. \\
\noindent Law, M., Jackson, D., Turner, R., Rhodes, K. and Viechtbauer, W. (2016). Two new methods to fit models for network meta-analysis with random inconsistency effects. {\it BMC Medical Research Methodology} {\bf 16,} 87. \\
\noindent Lu, G. and Ades, A. (2004). Combination of direct and indirect evidence in mixed treatment comparisons. {\it Statistics in Medicine} {\bf 23,} 3105--3124. \\
\noindent Lu, G. and Ades, A. (2006). Assessing evidence consistency in mixed treatment comparisons. {\it Journal of the American Statistical Association} {\bf 101,} 447--459. \\
\noindent Nikolakopoulou, A., Chaimani, A., Veroniki, A., Vasiliadis, H.S., Schmid, C.H. and Salanti, G. (2014). Characteristics of Networks of Interventions: A Description of a Database of 186 Published Networks. {\it PLoS ONE} {\bf 9}, 1: e86754. \\
\noindent Piepho, H.P., Williams, E.R. and Madden, L.V. (2012). The Use of Two-Way Linear Mixed Models in Multitreatment Meta-Analysis. {\it Biometrics} {\bf 68,} 1269--1277. \\
\noindent Riley, R.D., Thompson, J. R. and Abrams, K. R. (2008). An alternative model for bivariate random-effects meta-analysis when the within-study correlations are unknown. {\it Biostatistics} {\bf 9,} 172--186. \\
\noindent Riley, R.D., Price, M.J., Jackson, D., Wardle, M., Gueyffier, F., Wang, J., Staessen, J.A. and White, I.R. (2015). Multivariate meta-analysis using individual participant data. {\it Research Synthesis Methods} {\bf 6,} 157--174. \\
\noindent Sormani, M.P., Bonzano, L., Roccatagliata, L., Mancardi, G.L., Uccelli, A. and Bruzzi, P. (2010). Surrogate endpoints for EDSS worsening in multiple sclerosis. A meta-analytic approach. {\it Neurology} {\bf 75,} 302--309. \\
\noindent Seaman, S., Galati, J., Jackson, D. and Carlin, J. (2013). What Is Meant by ``Missing at Random?''. {\it Statistical Science} {\bf 28,} 257--268. \\
\noindent Searle, S.R. (1971). Linear Models. Wiley. New York. \\
\noindent Veroniki, A., Vasiliadis, H.S., Higgins, J.P. and Salanti G. (2013). Evaluation of inconsistency in networks of interventions. {\it International Journal of Clinical Epidemiology} {\bf 42,} 332--345. \\
\noindent Wei, Y. and Higgins, J.P.T. (2013). Estimating within-study covariances in multivariate meta-analysis with multiple outcomes. {\it Statistics in Medicine} {\bf 32,} 1191--1205.
\section*{Supplementary Materials} \section*{Multivariate estimation}
\subsection*{An important result}
In order to evaluate the expectations required, we will need to be able to compute expressions of the form $\mbox{btr}({\bf A}({\bf M} \otimes {\bf \Sigma}){\bf B})$, where ${\bf A}$ and ${\bf B}$ are $np \times np$ matrices, ${\bf M}$ is an $n \times n$ matrix and ${\bf \Sigma}$ is a $p \times p$ matrix. We continue to use the notation ${\bf A}_{i,j}$ to denote the $i$th by $j$th block of ${\bf A}$, where these blocks are $p \times p$ matrices. For any three $np \times np$ matrices ${\bf A}$, ${\bf B}$ and ${\bf C}$, we have \[ ({\bf ACB})_{k,l} = \sum\limits_{i=1}^{n}\sum\limits_{j=1}^{n} {\bf A}_{k,i } {\bf C}_{i,j } {\bf B}_{j,l } \] This is just the law of matrix multiplication applied to blocks. Then taking ${\bf C} = {\bf M} \otimes {\bf \Sigma} $ so that, from the definition of the Kronecker product, ${\bf C}_{i,j} = m_{ij} {\bf \Sigma}$, we have \[ ({\bf A({\bf M} \otimes {\bf \Sigma})B})_{k,l} = \sum\limits_{i=1}^{n}\sum\limits_{j=1}^{n} m_{ij} {\bf A}_{k,i } {\bf \Sigma} {\bf B}_{j,l } \] To obtain the block trace, we sum the matrices along the main block diagonal. Hence we take $l=k$, which selects the matrices along the main diagonal, and sum over $k$ to obtain \begin{equation} \label{ap1} \mbox{btr}({\bf A}({\bf M} \otimes {\bf \Sigma}){\bf B}) = \sum\limits_{i=1}^{n}\sum\limits_{j=1}^{n} \sum\limits_{k=1}^{n} m_{ij} {\bf A}_{k,i } {\bf \Sigma} {\bf B}_{j,k } \end{equation} The use of equation (\ref{ap1}), with the appropriate matrices, almost immediately results in the expected values required in section 4.
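Equation (\ref{ap1}) can also be checked numerically; a small sketch (Python, with randomly generated matrices of modest dimension) is given below.
\begin{verbatim}
# Sketch: numerical check of btr(A (M kron Sigma) B) against the triple sum.
import numpy as np

rng = np.random.default_rng(0)
n, p = 4, 3
A = rng.normal(size=(n*p, n*p))
B = rng.normal(size=(n*p, n*p))
M = rng.normal(size=(n, n))
Sigma = rng.normal(size=(p, p))

def blk(X, i, j):
    return X[i*p:(i+1)*p, j*p:(j+1)*p]

lhs = sum(blk(A @ np.kron(M, Sigma) @ B, k, k) for k in range(n))
rhs = sum(M[i, j] * blk(A, k, i) @ Sigma @ blk(B, j, k)
          for i in range(n) for j in range(n) for k in range(n))
assert np.allclose(lhs, rhs)
\end{verbatim}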
\subsection*{The estimating equations }
In this section we prove the results given in section 4 of the main paper. We do not redefine all quantities or give the size of all matrices and vectors; see the main paper for these details. As in the univariate approach of Jackson {\it et al.} (2016), we will base our estimation on the two quantities
$\mbox{btr} ({\mathbf Q}) $
and $ \sum\limits_{d=1}^D\mbox{btr} ({\mathbf Q}_d)$
where $D$ is the number of different designs. We match these quantities to their expectations to estimate the unknown variance parameters. We therefore need to evaluate $\mbox{E}[\mbox{btr} ({\mathbf Q})] $ and $\mbox{E}[\mbox{btr}({\mathbf Q}_d)]$.
\subsubsection*{Evaluating $\mbox{E}[\mbox{btr} ({\mathbf Q})] $ and deriving the first estimating equation} As in Jackson {\it et al.} (2013), by direct calculation we have that ${\mathbf W} {\mathbf H} {\mathbf W}^{-1} = {\mathbf H}^T$ and $(({\mathbf I}_{np} - {\mathbf H})^T)^2= ({\mathbf I}_{np} - {\mathbf H})^T$; if ${\mathbf W}$ is not invertible because outcome data are missing then we can justify the use of the identity ${\mathbf W} {\mathbf H} {\mathbf W}^{-1} = {\mathbf H}^T$, and the expectation that follows, in the limit where the precision attributed to missing data tends towards zero from above (Jackson {\it et al.}, 2013). Furthermore we can use the identity $ {\mathbf W} = {\mathbf S}^{-1}$ in this limit. We also have that ${\mathbf Y} - \hat{{\mathbf Y}} = ({\mathbf I}_{np} - {\mathbf H}){{\mathbf Y}} $ and $\mbox{E}[{\mathbf Y} - \hat{{\mathbf Y}}]= {\mathbf 0}$. Hence from the definition of ${\mathbf Q}$ we have $\mbox{E}[{\mathbf Q}] = {\mathbf W}\mbox{Var}[{\mathbf Y}-\hat{{\mathbf Y}}] {\mathbf R}$. From these results, taking the variance of ${\mathbf Y}$ from model (3) of the main paper, we can evaluate \[ \mbox{E}[{\mathbf Q}]={\mathbf A}({\bf M_1} \otimes {\bf \Sigma}_{\beta} + {\bf M_2}\otimes {\bf \Sigma}_{\omega}) {\mathbf B} + {\mathbf B} \] where \[ {\mathbf A} = ({\mathbf I}_{np}-{\mathbf H})^T {\mathbf W} \] and \[ {\mathbf B} = ({\mathbf I}_{np} - {\mathbf H})^{T} {\mathbf R} \] Here ${\mathbf A}$ and ${\mathbf B}$ are known $np \times np$ matrices. For estimation purposes we require $ \mbox{E}[\mbox{btr}({\mathbf Q})] =\mbox{btr} (\mbox{E}[{\mathbf Q}])$. We write ${\mathbf A}_{i,j}$ and ${\mathbf B}_{i,j}$ to mean the $i$th by $j$th blocks of ${\mathbf A}$ and ${\mathbf B}$ respectively, so that ${\mathbf A}_{i,j}$ and ${\mathbf B}_{i,j}$ are both $p \times p$ matrices. Then, using (\ref{ap1}), we have \[ \mbox{E}[\mbox{btr}({\mathbf Q})]= \sum\limits_{i=1}^{n}\sum\limits_{j=1}^{n}\sum\limits_{k=1}^{n} m_{1i j} {\mathbf A}_{k,i} {\bf \Sigma}_\beta {\mathbf B}_{j,k}+ \] \[ \sum\limits_{i=1}^{n}\sum\limits_{j=1}^{n}\sum\limits_{k=1}^{n} m_{2i j} {\mathbf A}_{k,i} {\bf \Sigma}_\omega {\mathbf B}_{j,k} + \mbox{btr}({\mathbf B}) \]
\subsubsection*{Evaluating $\mbox{E}[\mbox{btr} ({\mathbf Q}_d)] $ and deriving the second estimating equation} We then follow similar, but much simpler, arguments to those in the previous section to derive the result that we require. We define design-specific hat matrices \begin{equation} \label{ap2} {\mathbf H}_d = {\mathbf X}_d ({\mathbf X}_d^T {\mathbf W}_d {\mathbf X}_d)^{-1} {\mathbf X}_d^{T} {\mathbf W}_d \end{equation}
and also the design-specific $p n_d \times p n_d$ matrices \[ {\mathbf A}_d = ({\mathbf I}_{p n_d}-{\mathbf H}_d)^T {\mathbf W}_d \] and \[ {\mathbf B}_d = ({\mathbf I}_{p n_d} - {\mathbf H}_d)^{T} {\mathbf R}_d \] In equation (\ref{ap2}) we take the matrix inverse to be the Moore-Penrose pseudoinverse. This is because, in the presence of missing outcome data, the design-specific regression corresponding to this hat matrix may not be identifiable (for example, if studies of a particular design do not provide data for one or more of the outcomes). In such instances this design may still provide information about some of the unknown between-study variance components and so it is not desirable to exclude the design from this part of the estimation procedure. By computing (\ref{ap2}) using this pseudoinverse we obtain a suitable hat matrix (Searle, 1971; page 221, his equations 126 and 127). Furthermore all the necessary properties of the hat matrix are retained when using the pseudoinverse to compute (\ref{ap2}), and the fitted values remain unbiased (Searle, 1971; page 181).
Following a simpler version of the arguments in the previous section and the main paper, taking the variance of ${\mathbf Y}_d$ from model (5) of the main paper, and upon applying the vec operator, we obtain \begin{equation} \label{ap3} \mbox{vec}(\mbox{E}[\mbox{btr}({\mathbf Q}_d)]) = {\mathbf C}_d \mbox{vec}({\bf \Sigma}_\beta) + {\mathbf E}_d \end{equation} where \[ {\mathbf C}_d = \sum\limits_{i=1}^{n_d}\sum\limits_{j=1}^{n_d}\sum\limits_{k=1}^{n_d} m^d_{1 ij} {\mathbf B}_{d, j,k}^T \otimes {\mathbf A}_{d, k,i} \] and \[ {\mathbf E}_d = \mbox{vec}(\mbox{btr}({\bf B}_d)) \] We then sum equation (\ref{ap3}) across all designs in order to obtain \[ \mbox{vec}\left(\mbox{E}[\sum\limits_{d=1}^D \mbox{btr}({\mathbf Q}_d)]\right) = \left(\sum\limits_{d=1}^D {\mathbf C}_d\right) \mbox{vec}({\bf \Sigma}_\beta) + \sum\limits_{d=1}^D {\mathbf E}_d \]
\subsection*{Special cases of the estimation procedure (an extended version of section 4.5)} The proposed method reduces to two previous methods in special cases. If all studies are two arm studies (and so provide a single contrast) and consistency is assumed then the proposed method reduces to the matrix based method for multivariate meta-regression (Jackson {\it et al.}, 2013). This is because we then have ${\bf \Sigma}_\omega={\bf 0}$, so that the second triple sum in our expression for $\mbox{E}[\mbox{btr}({\mathbf Q})]$ is zero; furthermore the first triple summation in this expression can be reduced to a double summation, because ${\bf M_1}$ is an identity matrix for multivariate meta-regression (Jackson {\it et al.}, 2013; their equation A.1.).
Furthermore the proposed multivariate method also reduces to the univariate DerSimonian and Laird method for network meta-analysis (Jackson {\it et al.}, 2016) when $p=1$. This is because, in one dimension, the ${\mathbf Q}$ matrices all reduce to the $Q$ random scalars used in the estimation procedure suggested by Jackson {\it et al.} (2016). This can be shown by replacing the block trace operator with the more familiar trace of a matrix (btr is the trace when $p=1$) in the definition of the ${\mathbf Q}$ matrices and using the identity $\mbox{tr}({\bf AB}) = \mbox{tr}({\bf BA})$. These two special cases are in turn generalisations of methods such as that proposed by DerSimonian and Laird (1986).
There is however one caveat when stating that the new multivariate method reduces to the univariate method proposed by Jackson {\it et al.} (2016) when $p=1$. This is because the account of Jackson {\it et al.} (2016) does not mention the possibility of missing outcome data and so we have implicitly taken all data to be observed in the argument used in the previous paragraph.
\section*{Example of matrices $\mathbf{M_1}$ and $\mathbf{M_2}$} We provide a concrete example of the matrices $\mathbf{M_1}$ and $\mathbf{M_2}$, in order to clarify how they are computed. We take such an example from Law {\it et al.} (2016), which comprises thirteen studies with the following study designs: AB, BC, BC, BC, BC, BC, BD, BD, CD, CD, ABD, BCD, BCD. This is the same type of network as used in the simulation study below. The two matrices for this example are given explicitly below, where we can see that they are built from blocks of $\mathbf{P}_{c_{d}}$: in $\mathbf{M_1}$ the blocks are formed by studies and in $\mathbf{M_2}$ the blocks are formed by designs.
\begin{center}
\setcounter{MaxMatrixCols}{20}
$\mathbf{M_1} = \left( \begin{array}{c|c|c|c|c|c|c|c|c|c|cc|cc|cc}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\hline
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\hline
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\hline
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\hline
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\hline
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\hline
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\hline
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\hline
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\hline
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
\hline
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & \frac{1}{2} & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{2} & 1 & 0 & 0 & 0 & 0 \\
\hline
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & \frac{1}{2} & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{2} & 1 & 0 & 0 \\
\hline
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & \frac{1}{2} \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{2} & 1 \\
\end{array}
\right)$
\setcounter{MaxMatrixCols}{20}
$\mathbf{M_2} = \left( \begin{array}{c|ccccc|cc|cc|cc|cccc}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\hline
0 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\hline
0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\hline
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
\hline
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & \frac{1}{2} & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{2} & 1 & 0 & 0 & 0 & 0 \\
\hline
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & \frac{1}{2} & 1 & \frac{1}{2} \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{2} & 1 & \frac{1}{2} & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & \frac{1}{2} & 1 & \frac{1}{2} \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{2} & 1 & \frac{1}{2} & 1 \\ \end{array} \right)$ \end{center}
\end{document} | arXiv |
\begin{document}
\title{ Affine translation surfaces in the isotropic 3-space} \author{Muhittin Evren Aydin$^*$, Mahmut Ergut} \address{$^*$ Department of Mathematics, Faculty of Science, Firat University, Elazig, 23200, Turkey} \address{Department of Mathematics, Faculty of Science and Art, Namik Kemal University, Tekirdag, 59000, Turkey} \email{[email protected], [email protected]} \thanks{} \subjclass[2000]{53A35, 53A40, 53B25.} \keywords{Isotropic space, affine translation surface, Weingarten surface.}
\begin{abstract} The isotropic 3-space $\mathbb{I}^{3}$ is a real affine 3-space endowed with the metric $dx^{2}+dy^{2}.$ In this paper we describe Weingarten and linear Weingarten affine translation surfaces in $\mathbb{I}^{3}$. Further, we classify the affine translation surfaces in $\mathbb{I}^{3}$ that satisfy certain equations in terms of the position vector and the Laplace operator. \end{abstract}
\maketitle
\section{Introduction}
It is well-known that a surface in the Euclidean 3-space $\mathbb{R}^{3}$ is called a \textit{translation surface} if it is the graph of a function $z\left( x,y\right) =f\left( x\right) +g\left( y\right) $ with respect to the standard coordinate system of $\mathbb{R}^{3}.$ One of the most famous minimal surfaces of $\mathbb{R} ^{3}$ is Scherk's minimal translation surface, which is the graph of the function \begin{equation*} z\left( x,y\right) =\frac{1}{c}\log \left\vert \frac{\cos \left( cx\right) }{ \cos \left( cy\right) }\right\vert ,\text{ }c\in
\mathbb{R}
^{\ast }:=\mathbb{R}-\left\{ 0\right\} . \end{equation*} For further generalizations of translation surfaces in various ambient spaces, we refer to \cite{4,5,7,12,16,19,20,24,26}.
In 2013, H. Liu and Y. Yu \cite{14} defined the \textit{affine translation surfaces }in $\mathbb{R}^{3}$ as the graph of the function \begin{equation*} z\left( x,y\right) =f\left( x\right) +g\left( y+ax\right) ,\text{ }a\in
\mathbb{R}
^{\ast } \end{equation*} and described the minimal affine translation surfaces which are given by \begin{equation*} z\left( x,y\right) =\frac{1}{c}\log \left\vert \frac{\cos \left( c\sqrt{ 1+a^{2}}x\right) }{\cos \left( c\left[ y+ax\right] \right) }\right\vert , \text{ }a,c\in
\mathbb{R}
^{\ast }. \end{equation*} Such surfaces are called \textit{affine Scherk surfaces}. Subsequently, H. Liu and S.D. Jung \cite{15} classified the affine translation surfaces in $\mathbb{R}^{3}$ with arbitrary constant mean curvature.
In the isotropic 3-space $\mathbb{I}^{3}$, there exist three different classes of translation surfaces given by (see \cite{18,25}) \[ \left\{ \begin{array}{l} z\left( x,y\right) =f\left( x\right) +g\left( y\right) , \\ y\left( x,z\right) =f\left( x\right) +g\left( z\right) , \\ x\left( y,z\right) =\frac{1}{2}\left( f\left( \frac{y+z-\pi }{2}\right) +g\left( \frac{\pi -y+z}{2}\right) \right), \end{array} \right. \] where $x,y,z$ are the standard affine coordinates in $\mathbb{I}^{3}$. These surfaces are respectively called \textit{translation surfaces of Type 1,2,3} in $\mathbb{I}^{3}$. Such surfaces with constant isotropic Gaussian curvature or constant isotropic mean curvature, as well as Weingarten ones, were obtained in \cite{18}.
The translation surfaces of Type 1 in $\mathbb{I}^{3}$ that satisfy the condition \begin{equation*} \bigtriangleup ^{I,II}r_{i}=\lambda _{i}r_{i},\text{ }\lambda _{i}\in
\mathbb{R}
,\text{ }i=1,2,3, \end{equation*} were presented in \cite{13}, where $r_{i}$ is the coordinate function of the position vector and $\bigtriangleup ^{I,II}$ the Laplace operator with respect to the first and second fundamental forms, respectively. This condition is natural, being related to the so-called \textit{submanifolds of finite type}, introduced by B.-Y. Chen in the late 1970's (see \cite{8,9,11}). More details of translation surfaces in the isotropic spaces can be found in \cite{2,3,6}.
In this paper, we investigate the affine translation surfaces of Type 1 in $\mathbb{I} ^{3}$, i.e. the graphs of the function \begin{equation*} z\left( x,y\right) =f\left( ax+by\right) +g\left( cx+dy\right) ,\text{ } ad-bc\neq 0 \end{equation*} and classify those of Weingarten type. Moreover, we describe the affine translation surfaces of Type 1 that satisfy the condition $\bigtriangleup ^{I,II}r_{i}=\lambda _{i}r_{i}$.
\section{Preliminaries}
The isotropic 3-space $\mathbb{I}^{3}$ is a real affine space defined from the projective 3-space $P\left(
\mathbb{R}
^{3}\right) $ with an absolute figure consisting of a plane $\omega$ and two complex-conjugate straight lines $f_{1},f_{2}$ in $ \omega $ (see \cite{1,10,17}, \cite{21}-\cite{23}). Denote the projective coordinates by $\left(X_{0},X_{1},X_{2},X_{3}\right)$
in $P\left(
\mathbb{R}
^{3}\right) $. Then the absolute plane $\omega $ is given by $X_{0}=0$ and the absolute lines $f_{1},f_{2}$ by $ X_{0}=X_{1}+iX_{2}=0,$ $X_{0}=X_{1}-iX_{2}=0.$ The intersection point $ F(0:0:0:1)$ of these two lines is called the absolute point. The group of motions of $\mathbb{I}^{3}$ is a six-parameter group given in the affine coordinates $x=\frac{X_{1}}{X_{0}},$ $y=\frac{X_{2}}{X_{0}},$ $z= \frac{X_{3}}{X_{0}}$ by
\begin{equation*} \left( x,y,z\right) \longmapsto \left( x^{\prime },y^{\prime },z^{\prime }\right) :\left\{ \begin{array}{l} x^{\prime }=a_{1}+x\cos \phi -y\sin \phi , \\ y^{\prime }=a_{2}+x\sin \phi +y\cos \phi , \\ z^{\prime }=a_{3}+a_{4}x+a_{5}y+z, \end{array} \right. \end{equation*} where $a_{1},...,a_{5},\phi \in
\mathbb{R}
.$
The metric of $\mathbb{I}^{3}$ is induced by the absolute figure, i.e. $ds^{2}=dx^{2}+dy^{2}.$ The lines in $ z-$direction are called \textit{isotropic lines}. The planes containing an isotropic line are called \textit{isotropic planes}. Other planes are \textit{non-isotropic}.
Let $M$ be a surface immersed in $\mathbb{I}^{3}$. We call the surface $M$ \textit{admissible} if it has no isotropic tangent planes. Such a surface admits a parameterization of the form \begin{equation} r:D\subseteq \mathbb{R}^{2}\longrightarrow \mathbb{I}^{3}:\text{ }\left( x,y\right) \longmapsto \left( r_{1}\left( x,y\right) ,r_{2}\left( x,y\right) ,r_{3}\left( x,y\right) \right). \notag \end{equation}
The components $E,F,G$ of the first fundamental form $I$ of $M$ can be calculated via the metric induced from $\mathbb{I}^{3}$.
Denote by $\bigtriangleup^I$ the Laplace operator of $M$ with respect to $I$. It is defined as \begin{equation} \bigtriangleup^{I}\phi =\frac{1}{\sqrt{\left\vert W\right\vert }}\left\{ \frac{\partial }{\partial x}\left( \frac{G\phi _{x}-F\phi _{y}}{\sqrt{ \left\vert W\right\vert }}\right) -\frac{\partial }{\partial y}\left( \frac{ F\phi _{x}-E\phi _{y}}{\sqrt{\left\vert W\right\vert }}\right) \right\}, \tag{2.1} \end{equation} where $\phi $ is a smooth function on $M$ and $W=EG-F^2$.
The unit normal vector field of $M$ is completely isotropic, i.e. $ \left( 0,0,1\right) $. Moreover, the components of the second fundamental form $II$ are \begin{equation} L=\frac{\det \left( r_{xx},r_{x},r_{y}\right) }{\sqrt{ EG-F^2}},\text{ }M=\frac{\det \left( r_{xy},r_{x},r_{y}\right) }{\sqrt{ EG-F^2}},\text{ }N=\frac{\det \left( r_{yy},r_{x},r_{y}\right) }{\sqrt{ EG-F^2 }}, \tag{2.2} \end{equation} where $r_{xy}=\frac{\partial ^{2}r}{\partial x\partial y},$ etc.
The \textit{relative curvature} (also called the \textit{isotropic curvature} or \textit{isotropic Gaussian curvature}) and the \textit{ isotropic mean curvature} are respectively defined by \begin{equation} K=\frac{LN-M^2 }{EG-F^2},\text{ }H= \frac{EN-2FM+LG}{2(EG-F^2) }. \tag{2.3} \end{equation}
Assume that $M$ has no parabolic points, i.e. $K\neq 0$ everywhere. Then the Laplace operator with respect to $II$ is given by \begin{equation} \bigtriangleup ^{II}\phi =-\frac{1}{\sqrt{\left\vert w\right\vert }}\left\{ \frac{\partial }{\partial x}\left( \frac{N\phi _{x}-M\phi _{y}}{\sqrt{ \left\vert w\right\vert }}\right) -\frac{\partial }{\partial y}\left( \frac{ M\phi_{x}-L\phi _{y}}{\sqrt{\left\vert w\right\vert }}\right) \right\} \tag{2.4} \end{equation} for a smooth function $\phi $ on $M$ and $w=LN-M^2$.
In particular, if $M$ is a graph surface in $\mathbb{I}^{3}$ of a smooth function $z(x,y)$ then the metric on $M$ induced from $\mathbb{I}^{3}$ is given by $ dx^{2}+dy^{2}.$ Thus its Laplacian turns to \begin{equation} \bigtriangleup ^{I}=\frac{\partial ^{2}}{\partial x^{2}}+\frac{\partial ^{2} }{\partial y^{2}}. \tag{2.5} \end{equation}
The matrix of second fundamental form $II$ of $M$ corresponds to the Hessian matrix $\mathcal{H}\left( z\right) $,\ i.e., \begin{equation*} \begin{pmatrix} L & M \\ M & N \end{pmatrix} = \begin{pmatrix} z_{xx} & z_{xy} \\ z_{xy} & z_{yy} \end{pmatrix} . \end{equation*} Accordingly, the formulas (2.3) reduce to \begin{equation} K=\det \left( \mathcal{H}\left( z\right) \right) ,\text{ }H=\frac{ trace\left( \mathcal{H}\left( z\right) \right) }{2}. \tag{2.6} \end{equation}
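The reduction $\left( 2.6\right)$ of $\left( 2.2\right)$ and $\left( 2.3\right)$ for graph surfaces can also be checked symbolically; the following short SymPy sketch (an illustrative spot check, with our own variable names) confirms that for $r(x,y)=(x,y,z(x,y))$ one indeed obtains $K=\det \left( \mathcal{H}\left( z\right) \right)$ and $2H=z_{xx}+z_{yy}$:
\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y')
z = sp.Function('z')(x, y)

r = sp.Matrix([x, y, z])
rx, ry = r.diff(x), r.diff(y)
rxx, rxy, ryy = rx.diff(x), rx.diff(y), ry.diff(y)

# isotropic metric dx^2 + dy^2: only the first two components enter E, F, G
E = rx[0]**2 + rx[1]**2
F = rx[0]*ry[0] + rx[1]*ry[1]
G = ry[0]**2 + ry[1]**2
W = E*G - F**2

L = sp.Matrix.vstack(rxx.T, rx.T, ry.T).det() / sp.sqrt(W)
M = sp.Matrix.vstack(rxy.T, rx.T, ry.T).det() / sp.sqrt(W)
N = sp.Matrix.vstack(ryy.T, rx.T, ry.T).det() / sp.sqrt(W)

K = (L*N - M**2) / W
H = (E*N - 2*F*M + G*L) / (2*W)

print(sp.simplify(K - sp.hessian(z, (x, y)).det()))                  # 0
print(sp.simplify(H - (sp.diff(z, x, 2) + sp.diff(z, y, 2)) / 2))    # 0
\end{verbatim}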
\section{Weingarten affine translation surfaces}
Let $M$ be the graph surface in $\mathbb{I}^{3}$ of the function $z\left( x,y\right) =f\left( u\right) +g\left( v\right)$, where \begin{equation} u=ax+by, \text{ } v=cx+dy. \tag{3.1} \end{equation} If $ad-bc\neq 0$, we call the surface $M$ an \textit{affine translation surface of Type 1} in $\mathbb{I}^{3}$ and the pair $\left( u,v\right) $
the \textit{affine parameter coordinates}.
In the particular case $a=d=1$ and $b=c=0$ (or $a=d=0$ and $b=c=1$), such a surface reduces to the translation surface of Type 1 in $\mathbb{I}^{3}$. Let us fix some notation to be used in the remaining part: \begin{equation*} \frac{\partial f}{\partial x}=a\frac{df}{du}=af^{\prime },\text{ }\frac{ \partial f}{\partial y}=bf^{\prime },\text{ }\frac{\partial g}{\partial x}=c \frac{dg}{dv}=cg^{\prime },\text{ }\frac{\partial g}{\partial y}=dg^{\prime }, \end{equation*} and so on. By $\left( 2.6\right) ,$ the relative curvature $K$ and the isotropic mean curvature $H$ of $M$ become \begin{equation} K=\left( ad-bc\right) ^{2}f^{\prime \prime }g^{\prime \prime }\text{ and } 2H=\left( a^{2}+b^{2}\right) f^{\prime \prime }+\left( c^{2}+d^{2}\right) g^{\prime \prime }. \tag{3.2} \end{equation}
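The identities $\left( 3.2\right)$ follow from the chain rule; they can be spot-checked symbolically, for instance with the following SymPy fragment (ours), where $\sin$ and $\exp$ serve as sample choices for $f$ and $g$:
\begin{verbatim}
import sympy as sp

x, y, a, b, c, d, t = sp.symbols('x y a b c d t')
f, g = sp.sin, sp.exp            # sample test functions
u, v = a*x + b*y, c*x + d*y

z  = f(u) + g(v)
Hz = sp.hessian(z, (x, y))

fpp = sp.diff(f(t), t, 2).subs(t, u)   # f''(u)
gpp = sp.diff(g(t), t, 2).subs(t, v)   # g''(v)

print(sp.simplify(Hz.det()   - (a*d - b*c)**2 * fpp * gpp))               # 0
print(sp.simplify(Hz.trace() - ((a**2 + b**2)*fpp + (c**2 + d**2)*gpp)))  # 0
\end{verbatim}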
Now we can state the following result to describe the Weingarten affine translation surfaces of Type 1 in $\mathbb{I}^{3}$ that satisfy the condition \begin{equation} K_{x}H_{y}-K_{y}H_{x}=0, \tag{3.3} \end{equation} where the subscript denotes the partial derivative.
\begin{theorem} Let $M$ be a Weingarten affine translation surface of Type 1 in $\mathbb{I} ^{3}.$ Then one of the following occurs:
(i) $M$ is a quadric surface given by \begin{equation*} z\left( x,y\right) =c_{1}u^{2}+\frac{c_{1}\left( a^{2}+b^{2}\right) }{\left( c^{2}+d^{2}\right) }v^{2}+c_{2}u+c_{3}v+c_{4},\text{ }c_{1},...,c_{4}\in
\mathbb{R}
; \end{equation*}
(ii)$\ M$ is of the form either \begin{equation*} z\left( x,y\right) =f\left( u\right) +c_{1}v^{2}+c_{2}v+c_{3},\text{ } f^{\prime \prime \prime} \neq 0, \text{ }c_{1},c_{2},c_{3}\in
\mathbb{R}
\end{equation*} or \begin{equation*} z\left( x,y\right) =g\left( v\right) +c_{1}u^{2}+c_{2}u+c_{3}, \text{ } g^{\prime \prime \prime} \neq 0, \text{ }c_{1},c_{2},c _{3}\in
\mathbb{R}
, \end{equation*} where $\left( u,v\right) $ is the affine parameter coordinates given by (3.1). \end{theorem}
\begin{remark} We point out that a \textit{quadric surface} in $\mathbb{I}^{3}$ is the set of the points satisfying an equation of the second degree. \end{remark}
\begin{proof} It follows from $\left( 3.2\right) $ and $\left( 3.3\right) $ that
\begin{equation} \left[ \left( a^{2}+b^{2}\right) f^{\prime \prime }-\left( c^{2}+d^{2}\right) g^{\prime \prime }\right] f^{\prime \prime \prime }g^{\prime \prime \prime }=0. \tag{3.4} \end{equation} To solve $\left( 3.4\right),$ we have several cases:
\textbf{Case (a). } $\left( a^{2}+b^{2}\right) f^{\prime \prime }=\left( c^{2}+d^{2}\right)g^{\prime \prime }$. Then we derive \begin{equation} z\left( x,y\right) =c_{1}u^{2}+\frac{c_{1}\left( a^{2}+b^{2}\right) }{\left( c^{2}+d^{2}\right) }v^{2}+c_{2}u+c _{3}v+c_{4},\text{ }c_{1},...,c _{4}\in
\mathbb{R}
, \notag \end{equation} which gives the statement (i) of the theorem.
\textbf{Case (b). }$\left( a^{2}+b^{2}\right) f^{\prime \prime } \neq \left( c^{2}+d^{2}\right)g^{\prime \prime }$. Then, by (3.4), the surface has the form either \begin{equation} z\left( x,y\right) =g\left( v\right) +c_{1}u^{2}+c_{2}u+c_{3},\text{ } g^{\prime \prime \prime} \neq 0 \notag \end{equation} or \begin{equation} z\left( x,y\right) =f\left( u\right) +c_{4}v^{2}+c_{5}v+c_{6},\text{ } f^{\prime \prime \prime} \neq 0, \text{ } c_{1},...,c_{6}\in
\mathbb{R}
. \notag \end{equation} This implies the second statement of the theorem. Therefore the proof is completed. \end{proof}
Now we intend to find the linear Weingarten affine translation surfaces of Type 1 in $\mathbb{I}^{3}$ that satisfy \begin{equation} \alpha K+\beta H=\gamma ,\text{ } \alpha ,\beta ,\gamma \in \mathbb{R},\text{ } \left( \alpha ,\beta ,\gamma \right) \neq \left( 0,0,0\right) . \tag{3.5} \end{equation} Without loss of generality, we may assume $\alpha \neq 0$ in $\left( 3.5\right) $ and thus it can be rewritten as \begin{equation} K+2m_{0}H=n_{0},\text{ }2m_{0}=\frac{\beta }{\alpha },\text{ }n_{0}=\frac{ \gamma }{\alpha }. \tag{3.6} \end{equation} Hence the following result can be given.
\begin{theorem} Let $M$ be a linear Weingarten affine translation surface of Type 1 in $ \mathbb{I}^{3}$ that satisfies $\left( 3.6\right) $. Then we have:
(i) $M$ is a quadric surface given by \begin{equation*} z\left( x,y\right) =c_{1}u^{2}+c_{2}v^{2}+c _{3}u+c_{4}v+c _{5},\text{ }c_{1},...,c_{5}\in
\mathbb{R}
; \end{equation*}
(ii) $M$ is of the form either \begin{equation*} z\left( x,y\right) =f\left( u\right) -\frac{m_{0}\left( a^{2}+b^{2}\right) }{ 2\left( ad-bc\right) ^{2}}v^{2}+c_{1}v+c_{2}, \text{ } f^{\prime \prime \prime} \neq 0,\text{ }c_{1},c_{2}\in
\mathbb{R}
\end{equation*} or \begin{equation*} z\left( x,y\right) =g\left( v\right) -\frac{m_{0}\left( c^{2}+d^{2}\right) }{ 2\left( ad-bc\right) ^{2}}u^{2}+c_{1}u+c_{2},\text{ } g^{\prime \prime \prime} \neq 0, \text{ }c_{1},c_{2}\in
\mathbb{R}
, \end{equation*}
where $\left( u,v\right) $ is the affine parameter coordinates given by (3.1). \end{theorem}
\begin{proof} Substituting $\left( 3.2\right) $ in $\left( 3.6\right) $ gives \begin{equation} \left( ad-bc\right) ^{2}f^{\prime \prime }g^{\prime \prime }+m_{0}\left( a^{2}+b^{2}\right) f^{\prime \prime }+m_{0}\left( c^{2}+d^{2}\right) g^{\prime \prime }=n_{0}. \tag{3.7} \end{equation} After taking partial derivative of $\left( 3.7\right) $ with respect to $u$ and $v,$ we deduce $f^{\prime \prime \prime }g^{\prime \prime \prime }=0$. If both $f^{\prime \prime \prime }$ and $g^{\prime \prime \prime }$ are zero then we easily obtain the first statement of the theorem. Otherwise, we have the second statement of the theorem. This proves the theorem. \end{proof}
\begin{example} Consider the affine translation surface of Type 1 in $\mathbb{I}^{3}$ with \begin{equation*} z\left( x,y\right) =\cos \left( x-y\right) +\left( x+y\right) ^{2},\text{ }- \frac{\pi }{6}\leq x,y\leq \frac{\pi }{6}. \end{equation*} This surface, plotted in Fig. 1, satisfies both the Weingarten and the linear Weingarten condition. \end{example}
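This can be verified directly from $\left( 2.6\right)$: here $K=-8\cos \left( x-y\right)$ and $H=-\cos \left( x-y\right) +2$, so that $K_{x}H_{y}-K_{y}H_{x}=0$ and $K-8H+16=0$, i.e. $\left( 3.5\right)$ holds with $\left( \alpha ,\beta ,\gamma \right) =\left( 1,-8,-16\right)$. A short SymPy check (ours):
\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y')
z = sp.cos(x - y) + (x + y)**2

Hz = sp.hessian(z, (x, y))
K, H = Hz.det(), Hz.trace() / 2        # isotropic curvatures, eq. (2.6)

# Weingarten condition (3.3)
print(sp.simplify(sp.diff(K, x)*sp.diff(H, y) - sp.diff(K, y)*sp.diff(H, x)))  # 0
# linear Weingarten relation (3.5) with (alpha, beta, gamma) = (1, -8, -16)
print(sp.simplify(K - 8*H + 16))                                               # 0
\end{verbatim}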
\section{Affine translation surfaces satisfying $\bigtriangleup ^{I,II}r_{i}= \protect\lambda _{i}r_{i}$} This section is devoted to classifying the affine translation surfaces of Type 1 in $ \mathbb{I}^{3}$ that satisfy the conditions $\bigtriangleup ^{I,II}r_{i}=\lambda _{i}r_{i}$, $\lambda _{i}\in
\mathbb{R}
.$ For this, we get a local parameterization on such a surface as follows \begin{equation} \left. \begin{array}{r} r\left( x,y\right) =\left( r_{1}\left( x,y\right) ,r_{2}\left( x,y\right) ,r_{3}\left( x,y\right) \right) \\ =\left( x,y,f\left( ax+by\right) +g\left( cx+dy\right) \right) . \end{array} \right. \tag{4.1} \end{equation} Thus we first give the following result.
\begin{theorem} Let $M$ be an affine translation surface of Type 1 in $\mathbb{I} ^{3} $ that satisfies $\bigtriangleup ^{I}r_{i}=\lambda _{i}r_{i}.$ Then it is congruent to one of the following surfaces:
(i) $\left( \lambda _{1},\lambda _{2},\lambda _{3}\right)=(0,0,0)$ \begin{equation*} z\left( x,y\right) =c_{1}u^{2}-\frac{c_{1}\left( a^{2}+b^{2}\right) }{\left( c^{2}+d^{2}\right) }v^{2}+c _{3}u+c_{4}v+c_{5}; \end{equation*}
(ii) $\left( \lambda _{1},\lambda _{2},\lambda _{3}\right)=(0,0,\lambda>0)$ \begin{equation*} z\left( x,y\right) =c_{1}e^{\sqrt{\frac{\lambda }{a^{2}+b^{2}}} u}+c_{2}e^{-\sqrt{\frac{\lambda }{a^{2}+b^{2}}}u}+c_{3}e^{ \sqrt{\frac{\lambda }{c^{2}+d^{2}}}v}+c_{4}e^{-\sqrt{\frac{\lambda }{ c^{2}+d^{2}}}v} ; \end{equation*}
(iii) $\left( \lambda _{1},\lambda _{2},\lambda _{3}\right)=(0,0,\lambda<0)$ \begin{eqnarray*} z\left( x,y\right) &=&c_{1}\cos \left( \sqrt{\tfrac{-\lambda }{ a^{2}+b^{2}}}u\right) +c_{2}\sin \left( \sqrt{\tfrac{-\lambda }{ a^{2}+b^{2}}}u\right) +c_{3}\cos \left( \sqrt{\tfrac{-\lambda }{c^{2}+d^{2}}}v\right)\\ &&+c_{4}\sin \left( \sqrt{\tfrac{-\lambda }{c^{2}+d^{2}}}v\right) , \end{eqnarray*}
where $\left( u,v\right) $ is the affine parameter coordinates given by (3.1) and $c _{1},...,c_{5}\in
\mathbb{R}
$. \end{theorem}
\begin{proof} It is easy to compute from $\left( 2.5\right) $ and $\left( 4.1\right) $ that \begin{equation} \bigtriangleup ^{I}r_{1}=\bigtriangleup ^{I}r_{2}=0 \tag{4.2} \end{equation} and \begin{equation} \bigtriangleup ^{I}r_{3}=\left( a^{2}+b^{2}\right) f^{\prime \prime }+\left( c^{2}+d^{2}\right) g^{\prime \prime }. \tag{4.3} \end{equation} Assuming $\bigtriangleup ^{I}r_{i}=\lambda _{i}r_{i}$, $i=1,2,3$, in $\left( 4.2\right) $ and $\left( 4.3\right) $ yields $\lambda _{1}=\lambda _{2}=0$ and \begin{equation} \left( a^{2}+b^{2}\right) f^{\prime \prime }+\left( c^{2}+d^{2}\right) g^{\prime \prime }=\lambda \left( f+g\right) ,\text{ }\lambda _{3}=\lambda . \tag{4.4} \end{equation} If $\lambda =0$ in $\left( 4.4\right) ,$ then we derive \begin{equation*} f\left( u\right) =c_{1}u^{2}+c _{2}u+c _{3} \end{equation*} and \begin{equation*} g\left( v\right) =-\frac{c_{1}\left( a^{2}+b^{2}\right) }{\left( c^{2}+d^{2}\right) }v^{2}+c _{4}v+c_{5},\text{ }c _{1},...,c_{5}\in
\mathbb{R}
, \end{equation*} which proves the statement (i) of the theorem.
If $\lambda \neq 0$ then $\left( 4.4\right) $ can be rewritten as \begin{equation} \left( a^{2}+b^{2}\right) f^{\prime \prime }-\lambda f=\mu =-\left( c^{2}+d^{2}\right) g^{\prime \prime }+\lambda g,\text{ }\mu \in
\mathbb{R}
. \tag{4.5} \end{equation} In the case $\lambda >0,$ by solving $\left( 4.5\right) $ we obtain \begin{equation} \left\{ \begin{array}{l} f\left( u\right) =c_{1}\exp \left( \sqrt{\frac{\lambda }{a^{2}+b^{2}}} u\right) +c_{2}\exp \left( -\sqrt{\frac{\lambda }{a^{2}+b^{2}}} u\right) +\frac{\mu }{\lambda }, \\ g\left( v\right) =c_{3}\exp \left( \sqrt{\frac{\lambda }{c^{2}+d^{2}}} v\right) +c_{4}\exp \left( -\sqrt{\frac{\lambda }{c^{2}+d^{2}}} v\right) -\frac{\mu }{\lambda },\text{ } \end{array} \right. \notag \end{equation} where $c_{1},...,c_{4}\in
\mathbb{R}
.$ This gives the statement (ii) of the theorem.
Otherwise, i.e., $\lambda <0,$ then we derive \begin{equation} \left\{ \begin{array}{c} f\left( u\right) =c_{1}\cos \left( \sqrt{\tfrac{-\lambda }{a^{2}+b^{2} }}u\right) +c_{2}\sin \left( \sqrt{\tfrac{-\lambda }{a^{2}+b^{2}}} u\right) +\frac{\mu }{\lambda }, \\ g\left( v\right) =c_{3}\cos \left( \sqrt{\tfrac{-\lambda }{c^{2}+d^{2} }}v\right) +c_{4}\sin \left( \sqrt{\tfrac{-\lambda }{c^{2}+d^{2}}} v\right) -\frac{\mu }{\lambda } \end{array} \right. \notag \end{equation} for $c_{1},...,c_{4}\in
\mathbb{R}
.$ This completes the proof. \end{proof}
\begin{example} Take the affine translation surface of Type 1 in $\mathbb{I}^{3}$ with \begin{equation*} z\left( x,y\right) =\cos \left( x+y\right) +\sin \left( x-y\right) ,\text{ } -\pi \leq x,y\leq \pi. \end{equation*} Then it satisfies $\bigtriangleup ^{I}r_{i}=\lambda _{i}r_{i}$ for $ \lambda _{1}=\lambda _{2}=0$, $\lambda _{3}=-2$ and can be drawn as in Fig. 2. \end{example}
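Indeed, $\bigtriangleup ^{I}z=z_{xx}+z_{yy}=-2\cos \left( x+y\right) -2\sin \left( x-y\right) =-2z$, which can also be confirmed with a short SymPy check (ours):
\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y')
z = sp.cos(x + y) + sp.sin(x - y)
print(sp.simplify(sp.diff(z, x, 2) + sp.diff(z, y, 2) + 2*z))   # 0, i.e. laplacian(z) = -2 z
\end{verbatim}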
Next, we consider the affine translation surface of Type 1 in $\mathbb{I}^{3}$ that satisfies $\bigtriangleup ^{II}r_{i}=\lambda _{i}r_{i}$, $\lambda _{i}\in
\mathbb{R}
.$ Then its Laplace operator with respect to the second fundamental form $II$ has the form \begin{equation} \begin{array}{c} \bigtriangleup ^{II}\phi =\frac{\left( f^{\prime \prime }g^{\prime \prime }\right) ^{-2}}{2\left( ad-bc\right) }\left[ \left( -b\phi _{x}+a\phi _{y}\right) \left( f^{\prime \prime }\right) ^{2}g^{\prime \prime \prime }+\left( d\phi _{x}-c\phi _{y}\right) f^{\prime \prime \prime }\left( g^{\prime \prime }\right) ^{2}\right] \\ +\frac{\left( f^{\prime \prime }g^{\prime \prime }\right) ^{-1}}{\left( ad-bc\right) ^{2}}\left[ \left( 2ab\phi _{xy}-b^{2}\phi _{xx}-a^{2}\phi _{yy}\right) f^{\prime \prime }+\left( 2cd\phi _{xy}-d^{2}\phi _{xx}-c^{2}\phi _{yy}\right)g^{\prime \prime } \right] \end{array} \tag{4.6} \end{equation} for a smooth function $\phi $ and $f^{\prime \prime }g^{\prime \prime }\neq 0.$ Hence we have the following result.
\begin{theorem} Let $M$ be an affine translation surface of Type 1 in $\mathbb{I}^{3}$ that satisfies $\bigtriangleup ^{II}r_{i}=\lambda _{i}r_{i}.$ Then it is congruent to one of the following surfaces:
(i) $\left( \lambda _{1} \neq 0,\lambda _{2} \neq 0,0\right) $ \[ z\left( x,y\right) =\ln \left( x^{\frac{1}{\lambda _{1}}}y^{\frac{1}{\lambda _{2}}}\right) +c_{1},\text{ } c_{1}\in
\mathbb{R}
; \]
(ii) $\left( \lambda \neq 0,\lambda ,0\right) $ \[ z\left( x,y\right) = \ln \left(\left( uv\right)^{\frac{1}{\lambda }}\right) +c_{1},\text{ } c_{1}\in
\mathbb{R}
, \] where $\left( u,v\right) $ is the affine parameter coordinates given by $ \left( 3.1\right) $. \end{theorem}
\begin{proof} Let us assume\ that $\bigtriangleup ^{II}r_{i}=\lambda _{i}r_{i}$, $ \lambda _{i}\in
\mathbb{R}
.$ Then, from $\left( 4.1\right) $ and $\left( 4.6\right) ,$ we obtain the following system \begin{equation} d\frac{f^{\prime \prime \prime }}{\left( f^{\prime \prime }\right) ^{2}}-b \frac{g^{\prime \prime \prime }}{\left( g^{\prime \prime }\right) ^{2}} =2\left( ad-bc\right) \lambda _{1}x, \tag{4.7} \end{equation} \begin{equation} -c\frac{f^{\prime \prime \prime }}{\left( f^{\prime \prime }\right) ^{2}}+a \frac{g^{\prime \prime \prime }}{\left( g^{\prime \prime }\right) ^{2}} =2\left( ad-bc\right) \lambda _{2}y, \tag{4.8} \end{equation} \begin{equation} \frac{f^{\prime \prime \prime }f^{\prime }}{\left( f^{\prime \prime }\right) ^{2}}+\frac{g^{\prime \prime \prime }g^{\prime }}{\left( g^{\prime \prime }\right) ^{2}}-4=2\lambda _{3}\left( f+g\right) . \tag{4.9} \end{equation} In order to solve the above system, we have to distinguish two cases depending on the constants $a,b,c,d$ with $ad-bc \neq 0$.
\textbf{Case (a).} Two of $a,b,c,d$ are zero. Without loss of generality we may assume that $b=c=0$ and $a=d=1$. Then the equations (4.7) and (4.8) reduce to \begin{equation} \frac{f^{\prime \prime \prime }}{\left( f^{\prime \prime }\right) ^{2}} =2\lambda _{1}x \tag{4.10} \end{equation} and \begin{equation} \frac{g^{\prime \prime \prime }}{\left( g^{\prime \prime }\right) ^{2}} =2\lambda _{2}y. \tag{4.11} \end{equation} If $\lambda_{1}=\lambda_{2}=0$ then we obtain a contradiction from (4.9) since $f,g$ are non-constant functions. Thereby we need to consider the remaining cases:
\textbf{Case (a.1).} $\lambda _{1}=0,$ i.e. $f^{\prime \prime \prime }=0.$ Then substituting $\left( 4.10\right) $ and $\left( 4.11\right) $ into $ \left( 4.9\right) $ implies $\lambda _{3}=0$ and \[ g\left( y\right) =\frac{2}{\lambda _{2}}\ln y+c_{1},\text{ }c_{1}\in
\mathbb{R}
. \] Substituting it in (4.11) gives a contradiction.
\textbf{Case (a.2).} $\lambda _{2}=0,$ i.e. $g^{\prime \prime \prime }=0.$ Hence we can similarly obtain $\lambda _{3}=0$ and \[ f\left( x\right) =\frac{2}{\lambda _{1}}\ln x+c_{1},\text{ }c_{1}\in
\mathbb{R}
, \] which gives a contradiction when substituted into (4.10).
\textbf{Case (a.3). }$\lambda _{1}\lambda _{2}\neq 0.$ By substituting $\left( 4.10\right) $ and $\left( 4.11\right) $ into $\left( 4.9\right) $ we deduce \begin{equation} \lambda_{1} xf^{\prime }+\lambda_{2} yg^{\prime }-2=\lambda _{3}\left( f+g\right) . \tag{4.12} \end{equation}
\textbf{Case (a.3.1).} If $\lambda _{3}=0$, then (4.12) reduces to \begin{equation} \lambda_{1} xf^{\prime }+\lambda_{2} yg^{\prime }=2. \tag{4.13} \end{equation} By solving $\left( 4.13\right) $ we find \[ f\left( x\right) =\frac{\xi }{\lambda_{1} }\ln x+c_{1}\text{ and }g\left( v\right) =\frac{2-\xi }{\lambda _{2}}\ln y+c_{2},\text{ }c_{1},c_{2}\in
\mathbb{R}
,\text{ }\xi \in
\mathbb{R}
^{\ast }. \tag{4.14} \] Substituting (4.14) into (4.10) and (4.11) yields $\xi =1$. This proves the first statement of the theorem.
\textbf{Case (a.3.2).} If $\lambda _{3}\neq 0$ in (4.12) then we can rewrite it as \begin{equation} \lambda _{1} xf^{\prime }-\lambda _{3}f-2=c_{1}=-\lambda _{2} yg^{\prime }+\lambda _{3}g,\text{ }c_{1}\in
\mathbb{R}
. \tag{4.15} \end{equation} After solving $\left( 4.15\right) ,$ we conclude \[ f\left( x\right) =-\frac{2+c_{1}}{\lambda _{3}}+c_{2}x^{\frac{\lambda _{3}}{ \lambda _{1}}} \tag{4.16} \] and \[ g\left( y\right) =\frac{c_{1}}{\lambda _{3}}+c_{3}y^{\frac{\lambda _{3}}{ \lambda _{2}}}, \text{ } c_{2},c_{3}\in \mathbb{R}. \tag{4.17} \] Substituting (4.16) and (4.17) into (4.10) and (4.11), respectively, we conclude $\lambda _{3}=0$, which implies that this case is not possible.
\textbf{Case (b).} At most one of $a,b,c,d$ is zero. Suppose that $\lambda _{1}=0$ in (4.7). It follows from $\left( 4.7\right) $ that \begin{equation} \frac{f^{\prime \prime \prime }}{\left( f^{\prime \prime }\right) ^{2}}= \frac{c_{1}}{d}\text{ and }\frac{g^{\prime \prime \prime }}{\left( g^{\prime \prime }\right) ^{2}}=\frac{c_{1}}{b},\text{ }c_{1}\in
\mathbb{R}
, \tag{4.18} \end{equation} where we may assume that $b\neq 0 \neq d$ since at most one of $a,b,c,d$ can vanish. If $c_{1}=0$ then we derive a contradiction from $\left( 4.9\right) $ since $ f^{\prime \prime }g^{\prime \prime }\neq 0.$ Substituting $\left( 4.18\right) $ into $\left( 4.8\right) $ yields $\frac{c_{1}}{bd}=2\lambda _{2}y,$ which is not possible since $y$ is an independent variable. This implies that $\lambda _{1}$ is not zero and it can be similarly shown that $\lambda _{2}$ is not zero. Hence from $\left( 4.7\right) $ and $\left( 4.8\right) $ we can write \begin{equation} \frac{f^{\prime \prime \prime }}{\left( f^{\prime \prime }\right) ^{2}} =2\left( \lambda _{1}ax+\lambda _{2}by\right) \tag{4.19} \end{equation} and \begin{equation} \frac{g^{\prime \prime \prime }}{\left( g^{\prime \prime }\right) ^{2}} =2\left( \lambda _{1}cx+\lambda _{2}dy\right) . \tag{4.20} \end{equation} The compatibility condition in $\left( 4.19\right) $ or $\left( 4.20\right) $ gives $\lambda _{1}=\lambda _{2}.$ Put $\lambda _{1}=\lambda _{2}=\lambda .$ By substituting $\left( 4.19\right) $ and $\left( 4.20\right) $ into $\left( 4.9\right) $ we deduce \begin{equation} \lambda uf^{\prime }+\lambda vg^{\prime }-2=\lambda _{3}\left( f+g\right), \tag{4.21} \end{equation} where $\left( u,v\right) $ is the affine parameter coordinates given by $ \left( 3.1\right) $.
\textbf{Case (b.1).} If $\lambda _{3}=0$, then (4.21) reduces to \begin{equation} \lambda uf^{\prime }+\lambda vg^{\prime }=2. \tag{4.22} \end{equation} By solving $\left( 4.22\right) $ we find \[ f\left( u\right) =\frac{\xi }{\lambda }\ln u+c_{1}\text{ and }g\left( v\right) =\frac{2-\xi }{\lambda }\ln v+c_{2},\text{ }c_{1},c_{2}\in
\mathbb{R}
,\text{ }\xi \in
\mathbb{R}
^{\ast }. \tag{4.23} \] Substituting (4.23) into (4.19) and (4.20) yields $\xi =1$. This proves the second statement of the theorem.
\textbf{Case (b.2).} If $\lambda _{3}\neq 0$ in (4.21) then we can rewrite it as \begin{equation} \lambda uf^{\prime }-\lambda _{3}f-2=c_{1}=-\lambda vg^{\prime }+\lambda _{3}g,\text{ }c_{1}\in
\mathbb{R}
. \tag{4.24} \end{equation} After solving $\left( 4.24\right) ,$ we deduce \[ f\left( u\right) =-\frac{2+c_{1}}{\lambda _{3}}+c_{2}u^{\frac{\lambda _{3}}{ \lambda }} \tag{4.25} \] and \[ g\left( v\right) =\frac{c_{1}}{\lambda _{3}}+c_{3}v^{\frac{\lambda _{3}}{ \lambda }}, \text{ } c_{2},c_{3}\in \mathbb{R}. \tag{4.26} \] Substituting (4.25) and (4.26) into (4.19) and (4.20), respectively, we find $\lambda _{3}=0$, which is a contradiction.
\end{proof}
\begin{example} Consider the affine translation surface of Type 1 in $\mathbb{I}^{3}$ given by \[ z\left( x,y\right) =\ln\left( 2x+y\right) +\ln \left( x-y\right), \quad (u,v) \in [3,5] \times [1,2] . \] It satisfies $\bigtriangleup ^{II}r_{i}=\lambda _{i}r_{i}$ for $\left( \lambda _{1},\lambda _{2},\lambda _{3}\right) =\left( 1,1,0\right) $ and is plotted in Fig. 3. \end{example} \begin{figure}
\caption{A (linear) Weingarten affine translation surface of Type 1.}
\end{figure} \begin{figure}
\caption{An affine translation surface of Type 1 with $\bigtriangleup^{I}r_{i}=\lambda _{i}r_{i},$ $\left(\lambda _{1},\lambda_{2},\lambda_{3}\right)=\left(0,0,-2\right)$.}
\end{figure} \begin{figure}
\caption{An affine translation surface of Type 1 with $\bigtriangleup ^{II}r_{i}=\lambda _{i}r_{i},$ $\left( \lambda _{1},\lambda _{2},\lambda _{3}\right) =\left( 1,1,0\right) .$}
\end{figure}
\end{document} | arXiv |
What's chiral about the Dirac points in graphene?
One of the main interesting properties of graphene is the appearance of Dirac points in its energy band structure, i.e. the presence of points where the valence and conduction bands meet at conical intersections at which the dispersion is (locally) the same as for massless fermions moving at the speed of light.
These Dirac points occur at the vertices of the standard hexagonal Brillouin zone, and they come in two sets of three, normally denoted $K$ and $K'$. Surprisingly, however, these two sets (despite looking much the same) turn out to be inequivalent: as far as the lattice translations know, the three $K$ points are exactly the same, but there's nothing in the lattice translation symmetry that relates them to the $K'$ points.
Even more interestingly, these two Dirac points are generally referred to as having a specific chirality (as an example, see Phys. Rev. Lett. 107, 166803 (2011)). Now, I understand why the two are not fully equivalent (as a simple insight, if you rotate the lattice by 60° about a carbon, you don't get the same lattice, which you do after 120°) and why they're related by a reflection symmetry (obvious in the diagram above, and also by the fact that if you tag the two inequivalent carbons as in this question, a 60° rotation is equivalent to a reflection on a line); since the two are inequivalent but taken to each other by a mirror symmetry, I understand why the designation as 'chiral' applies.
Nevertheless, I normally associate the word chiral (when restricted to 2D objects) as something that can be used to fix an orientation of the plane, or, in other words, something associated with a direction of rotation of that plane, and this is where my intuitive understanding of these $K$ and $K'$ points stops. So, in short:
in what way are the $K$ and $K'$ points of graphene associated with an orientation and/or a direction of rotation on the plane, in both the real and the dual lattices?
Similarly, and more physically, what is it about the $K$ and $K'$ points that makes them respond differently to chiral external drivers, such as magnetic fields or circularly polarized light?
condensed-matter symmetry graphene chirality
Emilio Pisanty
$\begingroup$ I think the chirality of the 'Dirac points' usually refers to a property of the spinors of the Dirac fermions which emerge from an effective relativistic description of the low energy physics of graphene. There is a derivation of this in section II B of this review (Rev. Mod. Phys. 81, 109 (2009)). $\endgroup$
– Stephan
$\begingroup$ @Stephan Yeah, I've seen that one and it has plenty of useful information, but I feel it still lacks some intuitive oomph to really explain that chirality in real-world terms, if you know what I mean. $\endgroup$
I know this question is a bit dated, but I stumbled across the same problem and would like to share my insight. If you have found an answer yourself, please provide it; I would be highly interested.
As you pointed out yourself, there are two sublattices and the honeycomb lattice itself is not a Bravais lattice. To construct the graphene lattice, you would use a unit cell containing two carbon atoms. Those two inequivalent atoms give rise to an additional degree of freedom (Is the electron at site A or B?). It is often referred to as pseudospin as it can be treated mathematically in the same way as the spin-1/2 property. This is the point where chirality comes into play: In graphene, the direction of the pseudospin is linked to the direction of the electron/hole momentum. An electron's pseudospin is parallel to its momentum, while for holes it is antiparallel. Hence, the electron and hole states are called chiral states.
To sum it up, the two sublattices give rise to a spin-like property which is called pseudospin and which is linked to an electron's/a hole's momentum. This is why the property of chirality is introduced. For more detail, these two articles are a good read:
https://www.nature.com/articles/nphys384
Lecture on Graphene's electronic bandstructure and Dirac fermions
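To make the pseudospin–momentum locking above concrete, here is a minimal numerical sketch (my addition, not taken from the linked references), using the standard low-energy Hamiltonian $H_K = v_F(k_x\sigma_x + k_y\sigma_y)$ around one Dirac point with $v_F=1$: the upper-band (electron-like) eigenstate has $\langle\vec\sigma\rangle$ parallel to $\vec k$, while the lower-band eigenstate has it antiparallel.

    import numpy as np

    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

    def pseudospin(kx, ky, band):
        """<sigma> for the band = +1 (upper) / -1 (lower) eigenstate of H_K = kx*sx + ky*sy."""
        H = kx * sx + ky * sy
        vals, vecs = np.linalg.eigh(H)              # eigenvalues sorted ascending
        state = vecs[:, 1] if band == +1 else vecs[:, 0]
        return np.real([state.conj() @ sx @ state, state.conj() @ sy @ state])

    k = np.array([0.3, 0.8])
    khat = k / np.linalg.norm(k)
    print(pseudospin(*k, +1), khat)     # parallel to k
    print(pseudospin(*k, -1), -khat)    # antiparallel to k

Near the other Dirac point the effective Hamiltonian involves $\vec\sigma^{*}$ instead of $\vec\sigma$ (the exact form is convention-dependent), so the corresponding helicity carries the opposite sign; that valley-dependent sign is what chiral external probes can couple to.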
lmr
| CommonCrawl |
\begin{document}
\newtheorem{thm}{Theorem}[section] \newtheorem{prop}[thm]{Proposition} \newtheorem{lem}[thm]{Lemma} \newtheorem{rem}[thm]{Remark} \newtheorem{cor}[thm]{Corollary} \newtheorem{conj}[thm]{Conjecture} \newtheorem{de}[thm]{Definition} \newtheorem{nota}[thm]{Notation}
\title{A natural construction of Borcherds' Fake Baby Monster Lie Algebra}
\begin{abstract}
We use a ${\bf Z}_2$-orbifold of the vertex operator algebra associated to the Niemeier lattice with root lattice $A_3^8$ and the no-ghost theorem of string theory to construct a generalized Kac-Moody algebra. Borcherds' theory of automorphic products allows us to determine the simple roots and identify the algebra with the fake baby monster Lie algebra.
\end{abstract}
\section{Introduction}
Up to now, there are only three generalized Kac-Moody algebras or superalgebras for which natural constructions are known. These are the fake monster Lie algebra~\cite{B-fake} and the monster Lie algebra~\cite{B-moonshine} constructed by Borcherds and the fake monster Lie superalgebra~\cite{S-superfake} constructed by the second author. All these algebras can be interpreted as the physical states of a string moving on a certain target space.
In~\cite{B-moonshine}, a method was also introduced to obtain new generalized Kac-Moody algebras\ from old ones by twisting the denominator identity with some outer automorphism. Such Lie algebras are only defined through generators and relations, as is the case for all other known examples of generalized Kac-Moody algebras\ (see, e.g.,~\cite{GN}). In particular, Borcherds found a generalized Kac-Moody algebra\ of rank~$18$ called the {\em fake baby monster Lie algebra\ } by taking a ${\bf Z}_2$-twist of the fake monster Lie algebra (see~\cite{B-moonshine}, Sect.~14, Example~1) and he asked for a natural construction of it. The purpose of this note is to present such a construction.
The fake monster and the monster Lie algebra are obtained in the following way: Take for $V$ the vertex operator algebra\ (VOA) $V_{\Lambda}$ associated to the Leech lattice $\Lambda$ or the Moonshine module VOA $V^{\natural}$ and let $V_{\II}$ be the vertex algebra of the two dimensional even unimodular Lorentzian lattice $I\!I_{1,1}$. The tensor product $V\otimes V_{\II}$ is a vertex algebra of central charge $26$ with an invariant nonsingular bilinear form. Let $P_n$ be the subspace of Virasoro highest weight vectors of conformal weight $n$, i.e., the space of vectors $v$ satisfying $L_m(v)=0$ for $m>0$ and $L_0(v)=n\cdot v$. Then $P_1/L_{-1}P_0$ is a Lie algebra with an induced invariant bilinear form $(\,,\,)$. The fake monster or monster Lie algebra is defined as the quotient of $P_1/L_{-1}P_0$ by the radical of $(\,,\,)$. The non-degeneracy of the induced bilinear form is used to show that one has indeed obtained a generalized Kac-Moody algebra ${\bf g}$.
Alternatively, one can use the bosonic ghost vertex superalgebra $V_{\rm ghost}$ of central charge $-26$ and define the Lie algebra ${\bf g}$ as the BRST-cohomology group $H^1_{\rm BRST}(V\otimes V_{\II})$ (cf.~\cite{FGZ}).
The above construction can be carried out for any VOA $V$ of central charge~$24$, but until now it was known how to compute the simple roots of the generalized Kac-Moody algebra\ ${\bf g}$ only for $V_{\Lambda}$ and $V^{\natural}$. The Lie algebras obtained from the lattice VOAs $V_K$, where $K$ is any rank $24$ Niemeier lattice, are all isomorphic to the fake monster Lie algebra, because $K\oplus I\!I_{1,1}$ is always equal to the even unimodular Lorentzian lattice $I\!I_{25,1}$. The Moonshine module $V^{\natural}$ was constructed in~\cite{FLM} as a ${\bf Z}_2$-orbifold of $V_{\Lambda}$. This ${\bf Z}_2$-orbifold construction was generalized to any even unimodular lattice instead of $\Lambda$ in~\cite{DGM,DGM2}.
In our construction of the fake baby monster Lie algebra we take for $V$ the ${\bf Z}_2$-orbifold of $V_K$, where $K$ is the Niemeier lattice with root lattice~$A_3^8$. The computation of the root multiplicities is harder than in the previous cases: The weight $1$ part $V_1$ of $V$ is the semisimple Lie algebra of type $A_1^{16}$ and~$V$ forms an integrable highest weight representation of level~$2$ for the affine Kac-Moody algebra of type $\widehat{A}_1^{16}$. The decomposition of $V$ into $\widehat{A}_1^{16}$-modules can be described by the Hamming code ${\cal H}_{16}$ of length~$16$ and its dual. We use this combinatorial description together with the no-ghost theorem to determine the root lattice and root multiplicities of ${\bf g}$. The multiplicities obtained are exactly the exponents of a product expansion of an automorphic form constructed in~\cite{S-twist}. This allows us to interpret the automorphic product as one side of the denominator identity of ${\bf g}$, to determine its simple roots, and finally to identify ${\bf g}$ with the fake baby monster Lie algebra.
The paper is organized in the following way. In Section~\ref{voav}, the construction of the vertex operator algebra\ $V$ is described. We use a formula of Kac and Peterson (cf.~\cite{Kac}, Ch.~13) to express the character of the affine Kac-Moody algebra of type $\widehat{A}_1^{16}$ through string functions and theta series. In the last section, the root lattice and root multiplicities of ${\bf g}$ are computed and ${\bf g}$ is identified as the fake baby monster Lie algebra.
We would like to thank R.~Borcherds for helpful comments on an early version of this paper.
\section{The VOA $V$}\label{voav}
We define a VOA $V$ of central charge~$24$ and compute its character as representation for an affine Kac-Moody algebra.
Recall that there exist exactly $24$ positive definite even unimodular lattices in dimension~$24$. They can be classified by their root sublattice. \begin{de}\rm Let $V=V_K^+\oplus (V_K^T)^+$ be the ${\bf Z}_2$-orbifold of the lattice VOA associated to the Niemeier lattice $K$ with root lattice $A_3^8$. Here, $T$ is the involution in ${\rm Aut}(V_K)$ which is the up to conjugation unique lift of the involution $-1$ in ${\rm Aut}(K)$ to ${\rm Aut}(V_K)$ (cf.~\cite{DGH}, Appendix~D); $V_K^+$ is the fixpoint subVOA of $V_K$ under the action of $T$ and $(V_K^T)^+$ is the fixpoint set of the $T$-twisted module~$V_K^T$. \end{de}
In~\cite{DGM,DGM2} it is shown that $V$ has the structure of a VOA of central charge~$24$.
\begin{thm} Let $V_{A_{1,2}}$ be the VOA which has the integrable level~$2$ representation of highest weight $(2,0)$ for the affine Kac-Moody algebra of type $\widehat{A}_1$ as underlying vector space. The subVOA $\widetilde{V}_{1}$ generated by the weight $1$ subspace $V_1$ of $V$ is isomorphic to the affine Kac-Moody VOA $V_{A_{1,2}^{16}}$, the tensor product of $16$ copies of $V_{A_{1,2}}$. \end{thm}
\noi{\bf Proof.\ \,} \cite{DGM}.
\framebox[2.4mm][t1]{\phantom{x}} \vskip 0.15cm \begin{rem}\rm As noted in the introduction of~\cite{GH}, $V$ is the unique VOA in the genus of the Moonshine module containing $V_{{A}_{1,2}^{16}}$ as a subVOA. (See~\cite{H} for the definition of the genus of a VOA.) \end{rem}
To describe the decomposition of $V$ as a $V_{{A}_{1,2}^{16}}$-module in a convenient way and for some later applications, we explain some well known properties of the binary Hamming code ${\cal H}_{16}$ of length~$16$.
Let ${\cal H}_{16}^{\perp}\subset {\bf F}_2^{16}$ be the binary code spanned by the rows of the matrix $$\left(\begin{array}{c} 1\, 1\, 1\, 1\, 1\, 1\, 1\, 1\, 1\, 1\, 1\, 1\, 1\, 1\, 1\, 1\, \\ 1\, 1\, 1\, 1\, 1\, 1\, 1\, 1\, 0\, 0\, 0\, 0\, 0\, 0\, 0\, 0\, \\ 1\, 1\, 1\, 1\, 0\, 0\, 0\, 0\, 1\, 1\, 1\, 1\, 0\, 0\, 0\, 0\, \\ 1\, 1\, 0\, 0\, 1\, 1\, 0\, 0\, 1\, 1\, 0\, 0\, 1\, 1\, 0\, 0\, \\ 1\, 0\, 1\, 0\, 1\, 0\, 1\, 0\, 1\, 0\, 1\, 0\, 1\, 0\, 1\, 0\, \end{array}\right)_.$$ This code is known as the first order Reed-Muller code of length~$16$ and has ${\rm AGL}(4,2)$ as automorphism group (cf.~\cite{DGH}, Th.~C.3). Its Hamming weight enumerator is $W_{{\cal H}_{16}^{\perp}}(x,y)=x^{16}+ 30\,x^8y^8+ y^{16}$. The Hamming code ${\cal H}_{16}$ is defined as the dual code of ${\cal H}_{16}^{\perp}$: these are the vectors $c\in {\bf F}_2^{16}$ satisfying \hbox{$\sum_{i=1}^{16}c_i\cdot d_i=0$} for all $d\in {\cal H}_{16}^{\perp}$. We easily see that ${\cal H}_{16}$ can correct $1$-bit errors; since all its codewords have even weight (the all-ones vector lies in ${\cal H}_{16}^{\perp}$), the smallest nonzero code vector has weight at least~$4$. Indeed, by the MacWilliams identity we obtain for its weight enumerator: $$ W_{{\cal H}_{16}}(x,y)= x^{16}+y^{16}+140\,(x^4y^{12}+x^{12}y^4)+448\,(x^6y^{10}+x^{10}y^6)+870\,x^8y^8.$$ It follows that the $140$ codewords of weight~$4$ form a Steiner system of type $S(16,4,3)$, i.e., for every $3$-tuple of coordinate positions there is exactly one weight~$4$ code vector with value $1$ at these $3$ positions.
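The weight enumerator of ${\cal H}_{16}$ stated above is easy to reproduce by brute force; the following short Python fragment (an illustrative check, not needed in the sequel) enumerates all vectors of ${\bf F}_2^{16}$ orthogonal to the five generators of ${\cal H}_{16}^{\perp}$ and tallies their weights:
\begin{verbatim}
from collections import Counter

# rows of the generator matrix of H_16^perp = RM(1,4), as 16-bit integers
gens = [0xFFFF, 0xFF00, 0xF0F0, 0xCCCC, 0xAAAA]
parity = lambda n: bin(n).count("1") & 1

# H_16 = dual code = all words orthogonal (mod 2) to every generator
code = [v for v in range(1 << 16) if all(parity(v & g) == 0 for g in gens)]
weights = Counter(bin(v).count("1") for v in code)

print(len(code))                 # 2048 = 2^11 codewords
print(sorted(weights.items()))   # [(0,1),(4,140),(6,448),(8,870),(10,448),(12,140),(16,1)]
\end{verbatim}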
We also need the weight enumerator of all other cosets in the cocode ${\bf F}_2^{16}/{\cal H}_{16}$. The $2^5$ cosets ${\cal H}_{16}+c$ can be represented by vectors $c$ of type $(0^{16})$ (one coset), $(0^{15}1^1)$ (sixteen cosets) and $(0^{14}1^2)$ (fifteen cosets). Indeed, the vectors of type $(0^{16})$ and $(0^{15}1^1)$ must be in different cosets and for every vector of type $(0^{14}1^2)$ there are by the Steiner system property exactly $7$ others contained in the same coset. Since $1+16+{16 \choose 2}/8 =2^5$, all cosets are counted. For the cosets of type $(0^{15}1^1)$, the weight enumerator is $$W_{{\cal H}_{16}+c}(x,y)=\frac{1}{32}\big((x+y)^{16}-(x-y)^{16}\big)$$ $$ = x^{15}y + xy^{15} + 35\, (x^{13}y^3+x^3y^{13} )+
273\, (x^{11}y^5+x^5y^{11}) + 715\,(x^9y^7+x^7y^9) $$ because these cosets contain only vectors of odd weight and ${\rm AGL}(4,2)$ acts transitively on the coordinates. For the cosets of type $(0^{14}1^2)$, the weight enumerator is $$W_{{\cal H}_{16}+c}(x,y)= \frac{1}{15}\big(((x+y)^{16}+(x-y)^{16})/2-W_{{\cal H}_{16}}(x,y)\big)$$ $$=8\,(x^{14}y^2 +x^2y^{14}) + 112\,(x^{12}y^4+x^4y^{12}) +
504\,(x^{10}y^6+x^6y^{10}) + 800\,x^8y^8 $$ because ${\rm AGL}(4,2)$ acts also transitively on pairs of coordinates.
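These coset weight enumerators can be checked in the same way (again a plain brute-force sketch, assuming the generator matrix above):
\begin{verbatim}
from collections import Counter

gens = [0xFFFF, 0xFF00, 0xF0F0, 0xCCCC, 0xAAAA]     # generators of H_16^perp
parity = lambda n: bin(n).count("1") & 1
code = [v for v in range(1 << 16) if all(parity(v & g) == 0 for g in gens)]

for rep in (0b1, 0b11):      # representatives of type (0^15 1^1) and (0^14 1^2)
    dist = Counter(bin(v ^ rep).count("1") for v in code)
    print(bin(rep), sorted(dist.items()))
# 0b1  -> weights 1,15: once;  3,13: 35;  5,11: 273;  7,9: 715
# 0b11 -> weights 2,14: 8;  4,12: 112;  6,10: 504;  8: 800
\end{verbatim}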
We need two other weight enumerators. Let $({\bf F}_2^8)_{0}$ be the subcode of all vectors of even weight in ${\bf F}_2^8$ and $({\bf F}_2^8)_{1}$ be the coset of vectors of odd weight. Their weight enumerators are: \begin{eqnarray*} W_{({\bf F}_2^8)_{0}}(x,y) \!\! & \!=\! & \! \!\frac{1}{2}\big((x+y)^8+(x-y)^8\big) =x^8+y^8+28\,(x^6y^2+x^2y^6)+70\,x^4y^4, \\ W_{({\bf F}_2^8)_{1}}(x,y)\!\! &\! =\! & \!\! \frac{1}{2}\big((x+y)^8-(x-y)^8\big) = 8\,(x^7y+y^7x) + 56\,(x^5y^3+x^3y^5). \end{eqnarray*}
The rational Kac-Moody VOA $V_{A_{1,2}}$ has three irreducible modules $M(0)$, $M(1)$ and $M(2)$ of conformal weight $0$, $1/2$ and $3/16$, respectively. The irreducible modules of $V_{A_{1,2}^{16}}\cong V_{A_{1,2}}^{\otimes 16}$ are the tensor products $M({i_1})\otimes\cdots\otimes M({i_{16}})$, $i_1$, $\ldots$, $i_{16}\in\{0,\,1,\,2\}$, for which we write shortly $M(i_1,\ldots\,,i_{16})$.
\begin{thm}\label{Vdecomp} Up to permutation of the $16$ tensor factors $V_{A_{1,2}}$, the $V_{A_{1,2}^{16}}$-module decomposition of $V$ into isotypical components has the following structure: $$V=\bigoplus_{\delta\in {\cal H}_{16}^{\perp}}K(\delta),$$ where $K(\delta)$ is defined
\begin{tabular}{ll} for ${\rm wt}(\delta)=0$ by & $\bigoplus\limits_{c\in {\cal H}_{16}}M(c)$, \\ for ${\rm wt}(\delta)=8$ by & $\bigoplus\limits_{i_1,\ldots,i_{16}\in\{0,1,2\} } n_{i_1,\ldots,i_{16}}^{\delta} \, M(i_1,\ldots,i_{16})$, \\ \noalign{\noindent here, $n_{i_1,\ldots,i_{16}}^{\delta}=1$, if $[i_k/2]=\delta_k$ (where $[x]$ denotes the Gau\ss bracket of $x$) \newline
for $k=1$, $\ldots$, $16$ and $\# \{k \mid i_k=1\}$ is odd, and $n_{i_1,\ldots,i_{16}}^{\delta}=0$ otherwise,\phantom{$\frac{|}{|}$}
} and for ${\rm wt}(\delta)=16$ by &
$2^3\, M(2,\ldots,2)$. \phantom{$\frac{|}{|}$} \end{tabular}
\end{thm}
\noi{\bf Proof.\ \,} This follows by applying a variation of Th.~4.7 in~\cite{DGH} to the glue code $\Delta$ of the Niemeier lattice with root lattice $A_3^8$. The Virasoro VOA $L_{1/2}$ of central charge $1/2$ there is replaced by the VOA $V_{A_{1,2}}$ which has an isomorphic fusion algebra and the lattice $D_1$ is replaced by the lattice $A_3$ which has an isomorphic discriminant group ${\bf Z}/4{\bf Z}$. Then the theorem remains valid if one substitutes the three irreducible $L_{1/2}$-modules of weight $0$, $1/2$ and $1/16$ by the three irreducible $V_{A_{1,2}}$-modules of weight $0$, $1/2$ and $3/16$, respectively. The explicit description of the decomposition resulting from the above ${\bf Z}/4{\bf Z}$-code~$\Delta$ of length~$8$ was given in the proof of Th.~5.3 in~\cite{DGH}. See also the following Remark~5.4 (2) there. \phantom{xxxx}
\framebox[2.4mm][t1]{\phantom{x}} \vskip 0.15cm
The ${\bf Z}$-grading on the VOA $V=\bigoplus_{n=0}^{\infty}V_n$ is given by the eigenvalues of the Virasoro generator~$L_0$. There is also the action of the Lie algebra of type $A_1^{16}$. For ${\bf s}$ in the weight lattice~$(A'_1)^{16}\cong\big(\frac{1}{\sqrt{2}}{\bf Z}\big)^{16}$ we denote by $V_n({\bf s})$ the subspace of $V_n$ on which the action of the Cartan subalgebra of the Lie algebra $A_1^{16}$ has weight ${\bf s}$. The character of~$V$ defined by $$\chi_V=q^{-1}\sum_{n\in \bf Z}\sum_{{\bf s}\in (A'_1)^{16}}
\dim V_n({\bf s})\,q^n\,e^{{\bf s}}$$ is an element in the ring of formal Laurent series in $q$ with coefficients in the group ring ${\bf C}[(A'_1)^{16}]$.
For the proof of some identities, it is useful to interpret an element $f$ in ${\bf C}[L][[q^{1/k}]][q^{-1/k}]$, where $L$ is a lattice and $k\in{\bf N}$, as a function on ${\cal H}\times (L\otimes {\bf C})$, where ${\cal H}=\{z\in {\bf C} \mid \Im(z)>0\}$ is the complex upper half plane. This is done by the substitutions $q\mapsto e^{2\pi i \tau}$ and $e^{{\bf s}} \mapsto e^{2\pi i({\bf s},{\bf z})}$ for $(\tau,{\bf z})\in {\cal H}\times (L\otimes {\bf C})$ (in the case of convergence). We indicate this by writing $f(\tau,{\bf z})$.
To compute $\chi_V$ with the help of the Weyl-Kac character formula, we need various power series, which are the Fourier expansion of various modular and Jacobi forms. Let $\eta(\tau)=q^{1/24}\prod_{n=1}^{\infty}(1-q^n)$ be the Dedekind eta function.
First, there are the three ``string functions'' $c_0$, $c_1$ and $c_2$, which are modular functions for $\widetilde\Gamma(16)$ of weight $-1/2$. They are defined by \begin{eqnarray*} c_0 \! &\! =\! & \!\frac{1}{2}\left( \frac{\eta(\tau/2)}{\eta(\tau)^2} +\frac{\eta(\tau)}{\eta(2\tau)\eta(\tau/2)}\right) =\, q^{-1/16}\cdot( 1 + q + 3\,q^2 + 5\,q^3 + 10\,q^4 + \cdots ),\\
c_1\! &\! =\! &\! \frac{1}{2}\left( \frac{\eta(\tau/2)}{\eta(\tau)^2} -\frac{\eta(\tau)}{\eta(2\tau)\eta(\tau/2)}\right) =\,q^{-1/16}\cdot(
q^{1/2} + 2\,q^{3/2} + 4\,q^{5/2} + \cdots ), \\ c_2\! &\! =\! &\! \frac{\eta(2\tau)}{\eta(\tau)^2}=\, 1 + 2\,
q + 4\,{q}^2 + 8\,{q}^3 +
14\,{q}^4 + 24\,{q}^5 + 40\,{q}^6 + \cdots\,. \end{eqnarray*}
We denote the dual lattice of an integral lattice $L$ by $L'$. As we are dealing with level~$2$ representations of $\widehat{A}_1$, it will be convenient to define for $\gamma\in L'/L$ the theta function by $$\Theta_{L+\gamma}=
\sum_{{\bf s}\in L+\gamma} q^{{\bf s}^2/4}\,e^{\bf s},$$ writing ${\bf s}^2=({\bf s},{\bf s})$ for the norm of ${\bf s}$ and $L+\gamma$ for the coset of $L$ in $L'$ determined by $\gamma$. In particular, we define the following three theta series: \begin{eqnarray*} \vartheta_0(\tau,z) & = & \Theta_{2A_1}(\tau,z),\\ \vartheta_1(\tau,z) & = & \Theta_{2A_1+ \sqrt{2}}(\tau,z),\\ \vartheta_2(\tau,z) & = & \Theta_{A_1+ \frac{1}{\sqrt{2}}}(\tau,z) \ = \ \Theta_{2A_1+ \frac{1}{\sqrt{2}}}(\tau,z) + \Theta_{2A_1- \frac{1}{\sqrt{2}}}(\tau,z). \end{eqnarray*}
The graded characters $\chi_i=\chi_{M(i)}$ of the three irreducible level $2$ representations $M(i)$, $i=0$, $1$ and $2$, of the affine Kac-Moody Lie algebra of type~$\widehat{A}_1$ can now be expressed in the above series: \begin{prop}[Kac-Peterson{\rm, cf.~\cite{Kac}, Ch.~13}]\label{strings} \begin{eqnarray*} \chi_0 & = & \chi_{M(0)} = c_0\cdot\vartheta_0+c_1\cdot\vartheta_1, \\ \chi_1 & = & \chi_{M(1)} = c_1\cdot\vartheta_0+c_0\cdot\vartheta_1, \\ \chi_2 & = & \chi_{M(2)} = c_2\cdot\vartheta_2. \end{eqnarray*}
\framebox[2.4mm][t1]{\phantom{x}} \vskip 0.15cm \end{prop}
We combine this information with the $V_{A_{1,2}^{16}}$-decomposition of $V$.
\begin{prop}\label{kac-moody-character-codes} For $\delta \in {\bf F}_2^{16}$ and $d\in {\bf F}_2^{n-{\rm wt}(\delta)}$ (identified with the subspace $\{c\in {\bf F}_2^{16} \mid c_i=0\ \hbox{for all\ } i=1,\ldots,16 \hbox{\ with\ }\delta_i=1\}$) we introduce the shorthand notation $$\vartheta_d^{\delta}(\tau,{\bf z})= \prod_{\stack{i\in\{1,\ldots,16\}}{\delta_i=0}}\vartheta_{d_i}(\tau,z_i) \quad\hbox{and}\quad \vartheta_2^{\delta}(\tau,{\bf z}) = \prod_{\stack{i\in\{1,\ldots,16\}}{\delta_i=1}}\vartheta_{2}(\tau,z_i).$$ Then the character of $V$ is \begin{eqnarray*} \chi_V(\tau,{\bf z}) & = & \sum_{d\in {\bf F}_2^{16}}
W_{{\cal H}_{16}+d}(c_1,c_2)\,\vartheta_d^{(0,\,\ldots,\,0)}(\tau,{\bf z})\\ && + \sum_{\stack{\delta \in {\cal H}_{16}^{\perp}}{ {\rm wt}(\delta)=8}}
\sum_{d\in {\bf F}_2^8}
W_{({\bf F}_2^{8})_1+d}(c_1,c_2)\,\vartheta_d^{\delta}(\tau,{\bf z}) \,\cdot \, c_2^8\, \vartheta_2^{\delta}(\tau,{\bf z}) \\ && + 2^3c_2^{16}\, \vartheta_2^{(1,\,\ldots,\,1)}(\tau,{\bf z}). \end{eqnarray*} \end{prop}
\noi{\bf Proof.\ \,} This is a rather trivial resummation. {}From Theorem~\ref{Vdecomp} and the notation there one gets $$\chi_V(\tau,{\bf z}) =
\sum_{c\in {\cal H}_{16}} \chi_{M(c)} +\sum_{\stack{\delta\in{\cal H}_{16}^{\perp}}{\scriptstyle {\rm wt}(\delta)=8}}
\sum_{\stack{ i_1,\ldots,i_{16}}{\phantom{\frac{1}{1}}\in\{0,1,2\}}} n_{i_1,\ldots,i_{16}}^{\delta}\chi_{M(i_1,\ldots,i_{16})}
+2^3\chi_{M(2,\ldots,2)}$$ by Proposition~\ref{strings} \begin{eqnarray*} & = &\sum_{c\in {\cal H}_{16}}\sum_{d\in {\bf F}_2^{16}} c_0^{16-{\rm wt}(c+d)}c_1^{{\rm wt}(c+d)}\prod_{i=1}^{16}\vartheta_{d_i}(\tau,z_i)\\ & & +\sum_{\stack{\delta\in{\cal H}_{16}^{\perp}}{ {\rm wt}(\delta)=8}} \sum_{c \in ({\bf F}_2^8)_1}\sum_{d \in {\bf F}_2^8} c_0^{8-{\rm wt}(c+d)}c_1^{{\rm wt}(c+d)} \ \Big(\prod_{\stack{i\in\{1,\ldots,16\}}{\delta_i=0}} \vartheta_{d_i}(\tau,z_i)\Big)\\ & & \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \cdot\ c_2^8\ \Big( \prod_{\stack{i\in\{1,\ldots,16\}}{\delta_i=1}} \vartheta_{2}(\tau,z_i)\Big) \\ & &\ +2^3c_2^{16}\prod_{i=1}^{16}\vartheta_{2}(\tau,z_i), \end{eqnarray*} which simplifies to the formula given in the proposition.
\framebox[2.4mm][t1]{\phantom{x}} \vskip 0.15cm
We can simplify this expression further. We define three modular functions $h$, $g_0$ and $g_1$ of weight $-8$ for the modular group $\Gamma(2)$: \begin{eqnarray*} h(\tau)\! &\! =\! &\! \eta(\tau)^{-8}\eta(2\tau)^{-8}= q^{-1}+ 8+ 52\,q+ 256\, q^2 + 1122\, q^3 + 4352\, q^4 + \cdots, \\ g_0(\tau)\!&\! =\! &\! \frac{1}{2}\big(h(\tau/2)+h((\tau+1)/2)\big)= 8 + 256\, q + 4352\, q^2 + 52224\,q^3 +\cdots, \\ g_1(\tau)\! &\! =\! &\!\frac{1}{2}\big(h(\tau/2)-h((\tau+1)/2)\big)=
q^{-1/2}+ 52\, q^{1/2} + 1122\, q^{3/2} + \cdots. \end{eqnarray*}
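The displayed $q$-expansions are straightforward to reproduce; for instance, the following small SymPy computation (an auxiliary check only) expands $\eta(\tau)^{-8}\eta(2\tau)^{-8}=q^{-1}\prod_{n\geq 1}(1-q^n)^{-8}(1-q^{2n})^{-8}$ and reads off the coefficients of $h$, $g_0$ and $g_1$:
\begin{verbatim}
import sympy as sp

q = sp.symbols('q')
N = 7                                     # work modulo q^N

def inv_pow8(k):
    # series of (1 - q^k)^(-8) modulo q^N
    return sum(sp.binomial(j + 7, 7) * q**(k*j) for j in range(N//k + 1))

prod = sp.Integer(1)
for n in range(1, N):
    prod = sp.expand(prod * inv_pow8(n) * inv_pow8(2*n))
    prod = sum(prod.coeff(q, m) * q**m for m in range(N))    # truncate

# h = q^(-1) * prod, so the q^n coefficient of prod is the q^(n-1) coefficient of h
print([prod.coeff(q, n) for n in range(6)])          # [1, 8, 52, 256, 1122, 4352]
print("g0:", [prod.coeff(q, n) for n in (1, 3, 5)])  # [8, 256, 4352]
print("g1:", [prod.coeff(q, n) for n in (0, 2, 4)])  # [1, 52, 1122]
\end{verbatim}
Here $g_0$ and $g_1$ collect the coefficients of $h$ at even and odd powers of $q$, respectively, in accordance with $g_0(\tau)=\frac{1}{2}\big(h(\tau/2)+h((\tau+1)/2)\big)$ and $g_1(\tau)=\frac{1}{2}\big(h(\tau/2)-h((\tau+1)/2)\big)$.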
\begin{lem}\label{modulids} One has the following identities between modular functions for~$\widetilde\Gamma(16)$: \begin{eqnarray*} g_0+h &= & W_{{\cal H}_{16}}(c_1,c_2), \\ g_0 & = & W_{{\cal H}_{16}+(1,1,0,\ldots,0)}(c_1,c_2) \ =\ W_{({\bf F}_2^{8})_1}(c_1,c_2)\cdot c_2^8 \ =\ 2^3\,c_2^{16},\\ g_1 & = & W_{{\cal H}_{16}+(1,0,0,\ldots,0)}(c_1,c_2) \ =\ W_{({\bf F}_2^{8})_0}(c_1,c_2)\cdot c_2^8. \end{eqnarray*} \end{lem}
\noi{\bf Proof.\ \,} The space of modular functions for $\widetilde\Gamma(16)$ of weight $-8$ with poles of given order only at the cusps is finite dimensional. Comparing the Fourier expansions on both sides gives the result. The details are left to the reader.
\framebox[2.4mm][t1]{\phantom{x}} \vskip 0.15cm
Let $\pi: A_1^{16} \longrightarrow A_1^{16}/ (2A_1)^{16} \cong {\bf F}_2^{16}$ be the projection map. We define the even integral lattice $N=\frac{1}{\sqrt{2}}\pi^{-1}({\cal H}_{16})$. Its discriminant group $N'/N$ is contained in $(\frac{1}{\sqrt{2}} A_1')^{16}/(\frac{1}{\sqrt{2}}2A_1)^{16} \cong {\bf Z}_4^{16}$ and fits into the short exact sequence $$0\longrightarrow {\bf F}_2^{16}/{\cal H}_{16} \stackrel{\iota}{\longrightarrow} N'/N \longrightarrow {\cal H}_{16}^{\perp}\longrightarrow 0.$$ Since ${\cal H}_{16}^{\perp}\subset {\cal H}_{16}$ one has $N'/N\cong {\bf Z}_2^{10}$, the sequence has a (non-canonical) split $\mu: {\cal H}_{16}^{\perp}\longrightarrow N'/N$ and all the squared lengths of $N'$ are integral, i.e., the induced discriminant form is of type $2^{+10}_{I\!I}$.
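For orientation, the order of this discriminant group can also be read off from determinants: the index of $\pi^{-1}({\cal H}_{16})$ in $A_1^{16}$ equals $[{\bf F}_2^{16}:{\cal H}_{16}]=2^5$, and rescaling by $\frac{1}{\sqrt{2}}$ multiplies the Gram matrix by $\frac{1}{2}$, so that $$\det N = 2^{-16}\det\big(\pi^{-1}({\cal H}_{16})\big) = 2^{-16}\cdot \det\big(A_1^{16}\big)\cdot \big[A_1^{16}:\pi^{-1}({\cal H}_{16})\big]^2 = 2^{-16}\cdot 2^{16}\cdot 2^{10} = 2^{10},$$ in accordance with $|N'/N|=2^{10}$.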
For $\gamma\in N'/N$ define the function $$ f_{\gamma}=\cases{
g_0 +h, & if $\gamma=0$, \cr
g_0, & if \, $\gamma^2\equiv 0 \pmod{2}$ and $\gamma \not=0$,\cr
g_1, & if \, $\gamma^2\equiv 1 \pmod{2}$.} \qquad\qquad(*) $$
We collect now all the theta functions $\vartheta_d^{\delta}(\tau,{\bf z})$ and $\vartheta_2^{\delta}(\tau,{\bf z})$ in $\chi_V$ and arrive at our final expression for the character of $V$.
\begin{thm}\label{kac-moody-character-lattice} $$ \chi_V(\tau,{\bf z}) =
\sum_{\gamma\in N'/N} f_{\gamma}(\tau)\, \Theta_{\sqrt{2}(N+\gamma)}(\tau,{\bf z}). $$ \end{thm} \noi{\bf Proof.\ \,} The expression for $\chi_V(\tau,{\bf z})$ obtained in Proposition~\ref{kac-moody-character-codes} can be rewritten as \begin{eqnarray*} \chi_V(\tau,{\bf z}) & = & \sum_{d\in {\bf F}_2^{16}} W_{{\cal H}_{16}+d}(c_1,c_2)\,\cdot \, \Theta_{\sqrt{2}(\pi^{-1}(d))}(\tau,{\bf z}) \\ && + \sum_{\stack{ \delta \in {\cal H}_{16}^{\perp}}{ {\rm wt}(\delta)=8}}
\sum_{d\in {\bf F}_2^{16}}
W_{({\bf F}_2^{8})_1+d'}(c_1,c_2)\,\cdot \, c_2^8\,\,\cdot \, \Theta_{\sqrt{2}(\pi^{-1}(d)+\mu(\delta))}(\tau,{\bf z}) \\ && + \sum_{d\in {\bf F}_2^{16}}\ 2^3c_2^{16}\,\cdot \, \Theta_{\sqrt{2}(\pi^{-1}(d)+\mu(1,\,\ldots,\,1))}(\tau,{\bf z}), \\ \noalign{ \noindent where $d'$ is the vector of those components $d_i$ of $d$ for which $\delta_i=0$. Using Lemma~\ref{modulids}, one gets}\\
& = & \sum_{\bar d\in {\bf F}_2^{16}/{\cal H}_{16}} f_{\iota(\bar d)}(\tau) \,\cdot \, \Theta_{\sqrt{2}(N+\iota(\bar d))}(\tau,{\bf z}) \\ && + \sum_{\stack{ \delta \in {\cal H}_{16}^{\perp}}{ {\rm wt}(\delta)=8}}
\sum_{\bar d\in {\bf F}_2^{16}/{\cal H}_{16}} f_{\iota(\bar d)}(\tau) \,\cdot \, \Theta_{\sqrt{2}(N+\iota(\bar d)+\mu(\delta))}(\tau,{\bf z}) \\ \nopagebreak[4] &&+\sum_{\bar d\in {\bf F}_2^{16}/{\cal H}_{16}}
f_{\iota(\bar d)}(\tau)\,\cdot \, \Theta_{\sqrt{2}(N+\iota(\bar d) + \mu(1,\,\ldots,\,1))}(\tau,{\bf z}) \\ \pagebreak[4] & = & \sum_{ \delta \in {\cal H}_{16}^{\perp}} \sum_{\bar d\in {\bf F}_2^{16}/{\cal H}_{16}} f_{\iota(\bar d)}(\tau)\,\cdot \, \Theta_{\sqrt{2}(N+\iota(\bar d)+\mu(\delta))}(\tau,{\bf z}) \\ & = & \sum_{\gamma\in N'/N} f_{\gamma}(\tau)\,\cdot\, \Theta_{\sqrt{2}(N+\gamma)}(\tau,{\bf z}). \qquad\qquad\qquad\qquad\qquad\qquad \hbox{\framebox[2.4mm][t1]{\phantom{x}}} \end{eqnarray*}
\section{The generalized Kac-Moody algebra\ ${\bf g}$}\label{fbmla}
In this section, we use the no-ghost theorem to construct a generalized Kac-Moody algebra\ ${\bf g}$ from $V$. Theorem~\ref{kac-moody-character-lattice} allows us to describe its root multiplicities. We determine the simple roots with the help of the singular theta-correspondence and show that ${\bf g}$ is isomorphic to the fake baby monster Lie algebra.
There is an action of the BRST-operator on the tensor product of a vertex algebra $W$ of central charge~$26$ with the bosonic ghost vertex superalgebra $V_{\rm ghost}$ of central charge~$-26$, which defines the BRST-cohomology groups $H^*_{\rm BRST}(W)$. The degree $1$ cohomology group $H^1_{\rm BRST}(W)$ has additionally the structure of a Lie algebra, see~\cite{FGZ,LZ,Z}.
Let $V$ be the VOA of the last section. As it is the case for the Moonshine module, we can assume that $V$ is defined over the field of real numbers. The same holds for the vertex algebra $V_{\II}$ associated to the even unimodular Lorentzian lattice $I\!I_{1,1}$ in dimension~$2$ and for $V_{\rm ghost}$. \begin{de}\rm We define the Lie algebra ${\bf g}$ as $H^1_{\rm BRST}(V\otimes V_{\II}).$ \end{de}
Let $L=N\oplus I\!I_{1,1}$, where $N$ is the even lattice defined in the previous section.
\begin{prop}\label{rootspace} The Lie algebra ${\bf g}$ is a generalized Kac-Moody algebra\ graded by the lattice $N'\oplus I\!I_{1,1}=L'$. Its components ${\bf g}(\alpha)$, for $\alpha=({\bf s},r)\in N'\oplus I\!I_{1,1}$ are isomorphic to $V_{1-r^2/2}(\sqrt{2}\,{\bf s})$ for $\alpha\not = 0$ and to $V_1(0)\oplus {\bf R}^{1,1}\cong {\bf R}^{17,1}$ for $\alpha=0$. \end{prop}
\noi{\bf Proof.\ \,} The vertex algebra $V \otimes V_{I\!I_{1,1}}$ has a canonical invariant bilinear form which can be used to show that the construction of ${\bf g}$ as BRST-cohomology group is equivalent to the so called old covariant construction used in~\cite{B-moonshine} (cf.~\cite{LZ}, section~2.4, cf.~\cite{Z}, section~4). In more detail, $V$ carries an action of the Virasoro algebra of central charge~$24$ and has positive definite bilinear form such that the adjoint of the Virasoro generator $L_n$ is $L_{-n}$ (see~\cite{DGM}). Similarly, $V_{I\!I_{1,1}}$ has an invariant bilinear form (cf.~\cite{S98}, section~2.4), and on $V \otimes V_{I\!I_{1,1}}$, we take the one induced from the tensor product. This allows us to work in the old covariant picture.
The second part now follows from the no-ghost theorem as given in~\cite{B-moonshine}, Th.~5.1, if we use for $G$ a maximal torus of the real Lie group $SU(2)^{16}$ acting on $V$. The proof of the first part is similar to that of Th.~6.2. of~\cite{B-moonshine}.
\framebox[2.4mm][t1]{\phantom{x}} \vskip 0.15cm
The subspace ${\bf g}(0)$ of degree $0 \in L'$ is a Cartan subalgebra for ${\bf g}$.
\begin{thm}\label{rootmult} Let $f_{\gamma}(\tau)=\sum_{n\in{\bf Z}}a_{\gamma}(n)\,q^n$ be the Fourier expansion of the $f_{\gamma}$, \hbox{$\gamma \in N'/N$}, defined in~$(*)$. For a nonzero vector $\alpha\in L'$ the dimension of the component ${\bf g}(\alpha)$ is given by $$ \dim {\bf g}(\alpha)= a_{\gamma}(-\alpha^2/2), $$ where $\gamma$ is the rest class of $\alpha$ in $L'/L\cong N'/N$. The dimension of the Cartan subalgebra is $18$. \end{thm}
\noi{\bf Proof.\ \,} Theorem~\ref{kac-moody-character-lattice}, Proposition~\ref{rootspace}.
\framebox[2.4mm][t1]{\phantom{x}} \vskip 0.15cm
It follows from the Fourier expansion of $f_\gamma$ that the real roots of ${\bf g}$ are the norm~$1$ vectors in $L'$ and the norm~$2$ vectors in $L$ both with multiplicity~$1$. The real roots of ${\bf g}$ generate the Weyl group $W$ of ${\bf g}$ which is also equal to the reflection group of $L'$. Hence the real simple roots of ${\bf g}$ are the simple roots of the reflection group of $L'$.
\begin{prop} There is a primitive norm $0$ vector $\rho$ in $L'$, called the Weyl vector, such that the simple roots of the reflection group of $L'$ are the roots $\alpha$ satisfying $(\rho,\alpha)=-\alpha^2/2$. \end{prop}
\noi{\bf Proof.\ \,} Let $\Lambda_{16}$ be the Barnes-Wall lattice. We write $L(k)$ for the lattice obtained from the lattice $L$ by rescaling all norms by a factor $k$. Since the discriminant forms of the lattices $L=N\oplus I\!I_{1,1}$ and $\Lambda_{16}\oplus I\!I_{1,1}(2)$ are equal, both lattices are in the genus $I\!I_{17,1}(2^{+10}_{I\!I})$. It follows from Eichler's theory of spinor genera that there is only one class in this genus and so both lattices must be isomorphic. For the rescaled dual of the Barnes-Wall lattice we have $\Lambda_{16}'(2)\cong\Lambda_{16}$ so that $L'(2)\cong\Lambda_{16}\oplus I\!I_{1,1}$. The reflection group of $\Lambda_{16}\oplus I\!I_{1,1}$ has a primitive norm~$0$ vector $\rho$ such that the simple roots are the roots satisfying $(\rho,\alpha)=-\alpha^2/2$ (e.g.~\cite{B-theta}, Example~12.4). This implies the statement.
\framebox[2.4mm][t1]{\phantom{x}} \vskip 0.15cm
\begin{rem}\rm \begin{enumerate} \item If we write $L=\Lambda_{16}\oplus I\!I_{1,1}(2)$ with elements $({\bf s},m,n)$, \hbox{${\bf s}\in \Lambda_{16}$}, $m$, $n\in {\bf Z}$ and norm $({\bf s},m,n)^2={\bf s}^2-4mn$ we can take $\rho=({\bf 0},0,1/2)$. Then the simple roots of the reflection group of $L'$ are the norm~$1$ vectors in $L'$ of the form $({\bf s},1/2,({\bf s}^2-1)/2)$, ${\bf s}\in \Lambda_{16}'$, and the norm~$2$ vectors $({\bf s},1,({\bf s}^2-2)/4)$ in $L$, i.e., ${\bf s}\in\Lambda_{16}$
with $4|({\bf s}^2-2)$. \item The automorphism group $\mbox{Aut}(L')^+$ is the semidirect product of the reflection subgroup by a group of diagram automorphisms. Since $\Lambda_{16}$ has no roots, Theorem~3.3 of~\cite{B-leechlike} implies that the group of diagram automorphisms is equal to the group of affine automorphisms of the Barnes-Wall lattice. See also ~\cite{B-lorentz}, p.~345. \end{enumerate} \end{rem} We fix a Weyl vector $\rho$ and the Weyl chamber containing $\rho$. \begin{prop} The positive multiples $n\rho$ of the Weyl vector are imaginary simple roots of ${\bf g}$ with multiplicity $16$ if $n$ is even and $8$ otherwise. \end{prop} \noi{\bf Proof.\ \,} Every simple root has inner product at most $0$ with $n\rho$. In a Lorentzian space the inner product of two vectors of nonpositive norm in the same cone is at most $0$ and $0$ only if both vectors are proportional to the same norm~$0$ vector. This implies that if we write $n\rho$ as sum of simple roots with positive coefficients the only simple roots appearing in the sum are positive multiples of~$\rho$. Since the support of an imaginary root is connected it follows that all the~$n\rho$ are simple roots. Their multiplicities are given in Theorem~\ref{rootmult}. (Cf.~also Lemma~4 in section~3 of~\cite{B-fake}.)
\framebox[2.4mm][t1]{\phantom{x}} \vskip 0.15cm
Now we show that we have already found all the simple roots of ${\bf g}$.
\begin{thm} A set of simple roots for ${\bf g}$ is the following. The real simple roots are the norm~$2$ vectors $\alpha$ in $L$ with $(\rho,\alpha)=-\alpha^2/2$ and the norm~$1$ vectors $\alpha$ in $L'$ with $(\rho,\alpha)=-\alpha^2/2$. The imaginary simple roots are the positive multiples~$n\rho$ of $\rho$ with multiplicity~$16$ for even $n$ and with multiplicity~$8$ for odd~$n$. \end{thm}
\noi{\bf Proof.\ \,} The proof is analogous to the proof of Theorem~7.2 in~\cite{B-moonshine}. Let ${\bf k}$ be the generalized Kac-Moody algebra with root lattice $L'$, Cartan subalgebra $L'\otimes {\bf R}$ and simple roots as stated in the theorem.
In~\cite{S-twist}, Theorem~3.2, product and Fourier expansions of an automorphic form on the Grassmannian ${\rm Gr}_2(M \otimes {\bf R})$ with $M=L\oplus I\!I_{1,1}$ are worked out for different cusps by applying Borcherds' theory of theta lifts to the vector valued modular form $(f_{\gamma})_{\gamma\in M'/M}$. The expansion at the cusp corresponding to a primitive norm $0$ vector in the sublattice $I\!I_{1,1}\subset M$ shows that the denominator identity of ${\bf k}$ is given by $$ e^{\rho}\prod_{\alpha\in L^+}\left(1-e^{\alpha}\right)^{c(-\alpha^2/2)} \prod_{\alpha\in {L'}^+}\left(1-e^{\alpha}\right)^{c(-\alpha^2)} \qquad\qquad\qquad\qquad\qquad $$ $$\qquad\qquad\qquad\qquad\qquad = \sum_{w\in W} \det(w)\, w\left(e^{\rho}\prod_{n=1}^{\infty} \big(1-e^{n\rho}\big)^8 \big(1-e^{2 n\rho}\big)^8\right).$$ Here, $W$ is the reflection group generated by norm~$1$ vectors of $L'$ and the norm~$2$ vectors of $L\subset L'$ and the exponents $c(n)$ are the coefficients of the modular form $h(\tau)=\sum_{n\in{\bf Z}}c(n)q^n$ defined in section~\ref{voav} (cf.~also~\cite{J}).
Using Theorem~\ref{rootmult} and the definition of the $f_{\gamma}$, we see that ${\bf g}$ and ${\bf k}$ have the same root multiplicities. The product in the denominator identity determines the simple roots of ${\bf g}$ because we have fixed a Cartan subalgebra and a fundamental Weyl chamber. It follows that ${\bf g}$ and ${\bf k}$ have the same simple roots and are isomorphic.
\framebox[2.4mm][t1]{\phantom{x}} \vskip 0.15cm
\begin{cor} The denominator identity of ${\bf g}$ is $$ e^{\rho}\prod_{\alpha\in L^+}\left(1-e^{\alpha}\right)^{c(-\alpha^2/2)} \prod_{\alpha\in {L'}^+}\left(1-e^{\alpha}\right)^{c(-\alpha^2)} \qquad\qquad\qquad\qquad\qquad $$ $$\qquad\qquad\qquad\qquad\qquad = \sum_{w\in W} \det(w)\, w\left(e^{\rho}\prod_{n=1}^{\infty} \big(1-e^{n\rho}\big)^8 \big(1-e^{2 n\rho}\big)^8\right)$$ where $W$ is the reflection group generated by norm~$1$ vectors of $L'$ and the norm~$2$ vectors of $L$ and $c(n)$ is the coefficient of $q^n$ in $$\eta(\tau)^{-8}\eta(2\tau)^{-8} = q^{-1}+ 8+ 52\,q+ 256\, q^2 + 1122\, q^3 + 4352\, q^4 + \cdots .$$ \end{cor}
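The first coefficients of $\eta(\tau)^{-8}\eta(2\tau)^{-8}$ can be reproduced by a truncated power series computation, using $q\,\eta(\tau)^{-8}\eta(2\tau)^{-8}=\prod_{n\geq 1}(1-q^n)^{-8}(1-q^{2n})^{-8}$. The following Haskell sketch (the helper names {\tt mul}, {\tt inv8} and {\tt coeffs} are ad hoc and not taken from the text) prints the coefficients of $q^{-1},1,q,\ldots,q^4$:
\begin{verbatim}
-- Truncated power series represented as coefficient lists [a0, a1, ..., an].
mul :: Int -> [Int] -> [Int] -> [Int]
mul n xs ys = [ sum [ xs !! i * ys !! (k - i) | i <- [0 .. k] ] | k <- [0 .. n] ]

-- (1 - q^m)^(-8) truncated at degree n: sum_j binom(j+7,7) * q^(m*j).
inv8 :: Int -> Int -> [Int]
inv8 n m = [ if k `mod` m == 0 then binom (k `div` m + 7) 7 else 0 | k <- [0 .. n] ]
  where binom a b = product [a - b + 1 .. a] `div` product [1 .. b]

-- Coefficients of q * eta(tau)^(-8) * eta(2 tau)^(-8)
--   = prod_{m >= 1} (1 - q^m)^(-8) * (1 - q^(2m))^(-8), up to degree n.
coeffs :: Int -> [Int]
coeffs n = foldl (mul n) (1 : replicate n 0)
             ([ inv8 n m | m <- [1 .. n] ] ++ [ inv8 n (2 * m) | m <- [1 .. n] ])

main :: IO ()
main = print (coeffs 5)  -- [1,8,52,256,1122,4352], i.e. q^-1 + 8 + 52q + 256q^2 + ...
\end{verbatim}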
Using $L'(2)\cong\Lambda_{16} \oplus I\!I_{1,1}(2)$, we see that the denominator identity of ${\bf g}$ is a rescaled version of the denominator identity of Borcherds' fake baby monster Lie algebra determined in~\cite{B-moonshine}, Sect.~14, Example~1. This implies: \begin{cor} The generalized Kac-Moody algebra\ ${\bf g}$ is isomorphic to the fake baby monster Lie algebra. \end{cor}
In a forthcoming paper we will describe similar constructions of some other generalized Kac-Moody algebras.
\small
\end{document} | arXiv |
\begin{definition}[Definition:Steady-State/Electronics]
Consider the electrical circuit $K$ consisting of:
: a resistance $R$
: an inductance $L$
in series with a source of electromotive force $E$ which is a function of time $t$.
:File:CircuitRLseries.png
Let the electric current flowing in $K$ at time $t = 0$ be $I_0$.
Let a constant EMF $E_0$ be imposed upon $K$ at time $t = 0$.
The electric current $I$ in $K$ is given by the equation:
:$(1): \quad I = \dfrac {E_0} R + \left({I_0 - \dfrac {E_0} R}\right) e^{-R t / L}$
The term $\dfrac {E_0} R$ is known as the '''steady-state''' part of $(1)$.
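Indeed, assuming the current in $K$ is governed by the usual equation for a series $R L$ circuit:
:$L \dfrac {\mathrm d I} {\mathrm d t} + R I = E_0$
a direct substitution shows that the expression in $(1)$ satisfies this equation together with $I = I_0$ at $t = 0$:
:$L \dfrac {\mathrm d I} {\mathrm d t} + R I = -R \left({I_0 - \dfrac {E_0} R}\right) e^{-R t / L} + E_0 + R \left({I_0 - \dfrac {E_0} R}\right) e^{-R t / L} = E_0$
As $t \to \infty$ the exponential term vanishes, so $I \to \dfrac {E_0} R$, which is why this term is singled out as the '''steady-state''' part.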
\end{definition} | ProofWiki |
\begin{document}
\title{Relative induction principles for type theories}
\begin{abstract}
We present new induction principles for the syntax of dependent type theories, which we call relative induction principles.
The result of the induction principle relative to a functor $F$ into the syntax is stable over the codomain of $F$.
We rely on the internal language of presheaf categories.
In order to combine the internal languages of multiple presheaf categories, we use Dependent Right Adjoints and Multimodal Type Theory.
Categorical gluing is used to prove these induction principles, but it is not visible in their statements, which involve a notion of model without context extensions.
As example applications of these induction principles, we give short and boilerplate-free proofs of canonicity and normalization for some small type theories, and sketch proofs of other metatheoretic results. \end{abstract}
\section{Introduction}\label{sec:introduction}
\paragraph*{Induction principles}
Syntax without bindings or equations is characterized by its universal property as the initial object of some category of algebras, or equivalently by its induction principle as an inductive type. The same can be said for syntax with equations, \eg quotient inductive-inductive types \cite{QIITs,DBLP:conf/lics/KovacsK20}. As for syntax with bindings, we can encode it using syntax with equations but without bindings by making explicit the contexts and substitutions \cite{popl16}.
While this construction yields induction principles for syntax with bindings, most metatheoretic results are not direct applications of these induction principles. They often involve a second step, in which the contexts over which the result holds are identified. For example, canonicity only holds in the empty context, whereas normalization holds over every context, but is only stable under renamings. This second step is most of the time handled in an ad-hoc manner.
Our main contribution is to show how this second step can be handled in a principled way and to introduce new induction principles for syntactic categories with bindings that merge the two steps into one. More specifically, we give statements and proofs of so called \emph{relative induction principles} for a small dependent type theory $\MT_{\Pi,\mathbf{B}}$ with function space and booleans and for a minimal version of cubical type theory. We use these theories to present our constructions, but they do not rely on any specific feature of these theories. They could be generalized to arbitrary type theories (for some general definition of type theory, such as Uemura's definition~\cite{GeneralFrameworkSemanticsTT}). In the appendix we show how to generalize to a hierarchy of universes closed under function space and booleans. We leave the full formal generalization to future work.
In the general case, we consider a functor $F : \mathcal{C} \to \mathcal{S}$, where $\mathcal{S}$ is the syntax of our theory and $\mathcal{C}$ a category which should satisfy some universal property. We give induction principles which directly provide results that are stable over the morphisms of $\mathcal{C}$ (hence the name \emph{relative}). Under the hood, the relative induction principles use the universal properties of both $\mathcal{C}$ and $\mathcal{S}$. The input data for a relative induction principle consists of a \emph{displayed model without context extensions}, along with some additional data depending on the universal property of $\mathcal{C}$.
The following table lists example functors and the result that the induction principle relative to the given functor provides. \begin{alignat*}{4}
& \{\diamond\} && { }\to{ } && \mathbf{0}_{\MT_{\Pi,\mathbf{B}}} && \quad\text{Canonicity \cite{CoquandNormalization} (Section \ref{sec:applCanon})} \\
& \mathcal{Ren} && { }\to{ } && \mathbf{0}_{\MT_{\Pi,\mathbf{B}}} && \quad\text{Normalization \cite{altenkirch_et_al:LIPIcs:2016:5972,CoquandNormalization} (Section \ref{sec:applNorm})} \\
& \square && { }\to{ } && \mathbf{0}_{\mathsf{CTT}} && \quad\text{(Homotopy/Strict) canonicity for cubical type theory \cite{coquand2021canonicity} (Section \ref{sec:applCanCTT})} \\
& \mathcal{A}_{\square} && { }\to{ } && \mathbf{0}_{\mathsf{CTT}} && \quad\text{Normalization for cubical type theory \cite{NormalizationCTT} (Section \ref{sec:applNormCTT})} \\
& \mathbf{0}_{\mathsf{ITT}} && { }\to{ } && \mathbf{0}_{\mathsf{ETT}} && \quad\text{Conservativity of Extensional Type Theory over} \\
& && && && \quad\text{Intensional Type Theory \cite{hofmann95conservativity,Oury2005,10.1145/3293880.3294095} (Section \ref{sec:applConserv})} \\
& \mathbf{0}_{\mathsf{HoTT}} && { }\to{ } && \mathbf{0}_{\mathsf{2LTT}} && \quad\text{Conservativity of two-level type theory over HoTT \cite{CapriottiThesis}} \end{alignat*} There the initial model of a theory $\MT$ is denoted by $\mathbf{0}_{\MT}$, $\MT_{\Pi,\mathbf{B}}$ is the small type theory that we consider in this paper, $\{\diamond\}$ is the terminal category, $\mathcal{Ren}$ is the category of renamings of $\mathbf{0}_{\MT_{\Pi,\mathbf{B}}}$, $\square$ is the category of cubes, and $\mathcal{A}_{\square}$ is the category of cubical atomic contexts of~\cite{NormalizationCTT}.
We note some similarity with the \emph{worlds} of Twelf~\cite{Twelf} and with the \emph{context schemas} of Beluga~\cite{Beluga}. The worlds and context schemas can be seen as descriptions of full subcategories spanned by contexts that are generated by a class of context extensions. Our approach is more general, as we are not restricted to full subcategories.
In~\cite{SynCategoriesSketchingAndAdequacy}, an argument is made for the use of locally cartesian closed categories instead of Uemura's representable map categories in the semantics of type theories. Using locally cartesian closed categories means that contexts can be extended by arbitrary judgments. Indeed, the induction principle that we associate to $F : \mathcal{C} \to \mathcal{S}$ is left unchanged if $\mathcal{S}$ is faithfully embedded into a category with additional context extensions. However it depends on the context extensions of $\mathcal{C}$; for instance canonicity is provable using $\{\diamond\} \to \mathbf{0}_{\MT_{0}}$ only because $\{\diamond\}$ is not equipped with any way to extend contexts. Thus the general notion of context extension remains important.
\paragraph*{Higher-order abstract syntax}
Higher-order abstract syntax~\cite{HOAS} is an encoding of bindings that relies on the binding structure of an ambient language. It is closely related to Logical Frameworks~\cite{LF}. As shown in~\cite{SemAnalysisHOAS} for the untyped lambda calculus, higher-order abstract syntax can be given semantics using presheaf categories.
The equivalence of a higher-order presentation of syntax with another presentation is usually called \emph{adequacy}. Hofmann identified the crucial property justifying the adequacy of the higher-order presentation of untyped or simply-typed syntax: given a representable presheaf $\yo_{A}$ of a category $\mathcal{C}$ with products, the presheaf exponential $(\yo_{A} \Rightarrow B)$ can be computed as $\abs{\yo_{A} \Rightarrow B}_{\Gamma} \triangleq \abs{B}_{\Gamma \times A}$. This can be generalized to dependently typed syntax by considering locally representable presheaves.
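A short computation (a sketch, using only the Yoneda lemma, the exponential adjunction and the fact that $\yo$ preserves finite products) recovers this formula:
\[ \abs{\yo_{A} \Rightarrow B}_{\Gamma} \cong \mathbf{Psh}^{\mathcal{C}}(\yo_{\Gamma} \to (\yo_{A} \Rightarrow B)) \cong \mathbf{Psh}^{\mathcal{C}}((\yo_{\Gamma} \times \yo_{A}) \to B) \cong \mathbf{Psh}^{\mathcal{C}}(\yo_{\Gamma \times A} \to B) \cong \abs{B}_{\Gamma \times A}. \]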
The internal language of presheaf categories yields a definition of Categories with Families equipped with type-theoretic operations that are automatically stable under substitutions. This gives a nice setting to work with a single model of type theory as used in e.g.\ \cite{paoloHOAS,awodey_2018}. However, it does not immediately give a way to describe the general semantics of a type theory, since different models may live over different presheaf categories. We solve this problem by using Multimodal Type Theory.
\paragraph*{Multimodal Type Theory}
The action on types of a morphism $F : \mathcal{C} \to \mathcal{D}$ of models can be seen as a natural transformation $F^{\mathsf{Ty}} : \mathsf{Ty}^{\mathcal{C}} \to F^{\ast}\ \mathsf{Ty}^{\mathcal{D}}$, where $F^{\ast} : \mathbf{Psh}^{\mathcal{D}} \to \mathbf{Psh}^{\mathcal{C}}$ is precomposition by $F$. The action on terms is harder to describe. As terms are dependent over types, we essentially need to extend $F^{\ast}$ to dependent presheaves. The correct way to do this is to see $F^{\ast}$ as a dependent right adjoint~\cite{DRAs}; it satisfies a universal property that can be axiomatized and yields a modal extension of the internal languages of $\mathbf{Psh}^{\mathcal{C}}$ and $\mathbf{Psh}^{\mathcal{D}}$.
The actions of $F$ on types and terms are described in this extended language by: \begin{alignat*}{3}
& F^{\mathsf{Ty}} && :{ } && \forall (A : \mathsf{Ty}^{\mathcal{C}})\ \mmod{F^{\ast}} \to \mathsf{Ty}^{\mathcal{D}}, \\
& F^{\mathsf{Tm}} && :{ } && \forall (A : \mathsf{Ty}^{\mathcal{C}})\ (a : \mathsf{Tm}^{\mathcal{C}}\ A)\ \mmod{F^{\ast}} \to \mathsf{Tm}^{\mathcal{D}}\ (F^{\mathsf{Ty}}\ A\ \mmod{F^{\ast}}). \end{alignat*} where $\mmod{F^{\ast}}$ is an element of the syntax of dependent right adjoints that transitions between the presheaf models $\mathbf{Psh}^{\mathcal{C}}$ and $\mathbf{Psh}^{\mathcal{D}}$. Multimodal Type Theory~\cite{MTT} is a further extension of this language that can deal with multiple dependent right adjoints at the same time.
Our strategy is to axiomatize just the structure and properties of the models, categories and functors that we need, so as to be able to perform most constructions internally to Multimodal Type Theory. We describe our variant of the syntax of a dependent right adjoint in \cref{sec:dras}, and of the syntax of Multimodal Type Theory in \cref{sec:mtt}.
Other kinds of modalities have been used for similar purposes in related work. In~\cite{SemAnalysisContextualTypes}, the flat modality of crisp type theory is used to characterize the closed terms internally to a presheaf category. In~\cite{NormalizationCTT}, a pair of open and closed modalities corresponds respectively to the syntactic and semantic components of constructions performed internally to a glued topos.
One of the advantages of Multimodal Type Theory over other approaches is that additional modes and modalities can be added without requiring modification to constructions that rely on a fixed set of modes and modalities.
\paragraph*{Categorical gluing}
Some of the previous work on the metatheory of type theory has focused on the relation between logical relations and categorical gluing. Some general gluing constructions have been given~\cite{GluingTT, GluingFlatFunctors}. The input of these general gluing constructions is a suitable functor $F : \mathcal{C} \to \mathcal{D}$, where $\mathcal{C}$ is a syntactic model of type theory, and $\mathcal{D}$ is a semantic category (for instance a topos, or a model of another type theory with enough structure). Gluing then provides a new glued model $\mathcal{P}$ of the type theory, that combines the syntax of $\mathcal{C}$ with semantic information from $\mathcal{D}$. Canonicity for instance can be proven by gluing along the global section functor $\mathbf{0}_{\MT} \to \mathbf{Set}$. However the known proofs of normalization~\cite{NormalizationByGluingFreeLT,altenkirch_et_al:LIPIcs:2016:5972,CoquandNormalization} that rely on categorical gluing are not immediate consequences of these general constructions.
In the present work, we see the constructions of the glued category $\mathcal{P}$ and of its type-theoretic structures as fundamentally different constructions. We rely on the same base category $\mathcal{P}$; but we equip it with type-theoretic structure using a different construction, that does not necessarily involve logical relations.
As mentioned earlier, the input data for the relative induction principles are displayed models without context extensions. One of the central results of our work is that any displayed model without context extensions can be replaced by a displayed model with context extensions over a different base category. In our proof this different base category is the glued category $\mathcal{P}$. However the concrete definition of $\mathcal{P}$ does not matter in applications: the only thing that matters is that $\mathcal{P}$ is equipped with a suitable replacement of the input displayed model without context extensions.
\paragraph*{Contributions}
Our main contribution is the statement and proofs of relative induction principles over the syntax of dependent type theory (\cref{sec:disp_models_wo_exts}), which take into account the fact that the results of induction should hold over a category $\mathcal{C}$ with a functor into the syntactic category of the theory. These induction principles are described using the new semantic notions of displayed models without context extensions and relative sections.
We show that the interpretation of dependent right adjoints and Multimodal Type Theory in diagrams of presheaf categories gives an internal language that is well-suited to the definitions of notions related to the semantics of syntax with bindings, including the notions of models, morphisms of models, the rest of the $2$-categorical structure of models, displayed models (both with and without context extensions), and sections of displayed models (Sections \ref{sec:int_lang_psh}--\ref{sec:dras}). We never have to prove explicitly that any construction is stable under substitutions.
We show the application of our relative induction principles through the following examples in Section \ref{sec:applications}. We explain in detail an abstract version of Coquand's canonicity proof \cite{CoquandNormalization}; normalization \cite{altenkirch_et_al:LIPIcs:2016:5972,CoquandNormalization}, which we detail in Appendix \ref{sec:normalization}; canonicity \cite{coquand2021canonicity} and normalization \cite{NormalizationCTT} for cubical type theory; syntactic parametricity \cite{bernardy12parametricity}; and conservativity of ETT over ITT \cite{hofmann95conservativity}.
\section{Internal language of presheaf categories and models of type theory}\label{sec:int_lang_psh}
\begin{itemize}
\item We work in a constructive metatheory, with a cumulative hierarchy $(\mathrm{Set}_{i})$ of universes.
\item If $\mathcal{C}$ is a small category, we write $\abs{\mathcal{C}}$ for its set of objects and $\mathcal{C}(x \to y)$ for the set of morphisms from $x$ to $y$.
We may write $(x : \mathcal{C})$ (or $(x : \mathcal{C}^{\mathsf{op}})$) instead of $(x : \abs{\mathcal{C}})$ to indicate that the dependence on $x$ is covariant (or contravariant).
We write $(f \cdot g)$ or $(g\circ f)$ for the composition of $f : \mathcal{C}(x \to y)$ and $g : \mathcal{C}(y \to z)$.
\item We rely on the internal language of presheaf categories. Given a small category $\mathcal{C}$, the presheaf category $\mathbf{Psh}^{\mathcal{C}}$ is a model of extensional type theory, with a cumulative hierarchy of universes $\mathrm{Psh}^{\mathcal{C}}_{0} \subset \mathrm{Psh}^{\mathcal{C}}_{1} \subset \cdots \subset \mathrm{Psh}^{\mathcal{C}}_{i} \subset \cdots$, dependent functions, dependent sums, quotient inductive-inductive types, extensional equality types, \etc.
For each of our definitions, propositions, theorems, \etc, we specify whether it should be interpreted externally or internally to some presheaf category.
\item The Yoneda embedding is written $\yo : \mathcal{C} \to \mathbf{Psh}^{\mathcal{C}}$.
We denote the restriction of an element $x : \abs{X}_{\Gamma}$ of a presheaf $X$ along a morphism $\rho : \mathcal{C}(\Delta \to \Gamma)$ by $x[\rho]_X : \abs{X}_\Delta$. \end{itemize}
\subsection{Locally representable presheaves}
The notion of locally representable presheaf is the semantic counterpart of the notion of context extension.
\begin{definition} \label{locally-representable}
Let $X$ be a presheaf over a category $\mathcal{C}$ and $Y$ be a dependent presheaf over $X$.
We say that $Y$ is \defemph{locally representable} if, for every $\Gamma : \abs{\mathcal{C}}$ and $x : \abs{X}_{\Gamma}$, the presheaf
\begin{alignat*}{3}
& Y_{\mid x} && :{ } && \forall (\Delta : \mathcal{C}^{\mathsf{op}}) (\rho : \mathcal{C}(\Delta \to \Gamma)) \to \mathbf{Set} \\
& \abs{Y_{\mid x}}\ \rho && \triangleq{ } && \abs{Y}_{\Delta}\ (x[\rho])
\end{alignat*}
over the slice category $(\mathcal{C} / \Gamma)$ is representable.
In that case, we have, for every $\Gamma$ and $x$, an \defemph{extended context} $(\Gamma \rhd Y_{\mid x})$, a \defemph{projection map} $\bm{p}^{Y}_{x} : (\Gamma \rhd Y_{\mid x}) \to \Gamma$ and a \defemph{generic element} $\bm{q}^{Y}_{x} : \abs{Y_{\mid x}}\ \bm{p}_{x}^{Y}$ such that for every $\sigma : \Delta \to \Gamma$ and $y : \abs{Y_{\mid x}}\ \sigma$, there is a unique \defemph{extended morphism} $\angles{\sigma,y} : \Delta \to (\Gamma \rhd Y_{\mid x})$ such that $\angles{\sigma,y} \cdot \bm{p}^{Y}_{x} = \sigma$ and $\bm{q}^{Y}_{x}[\angles{\sigma,y}] = y$.
\lipicsEnd \end{definition}
Up to the correspondence between dependent presheaves and their total maps, locally representable dependent presheaves are also known as \emph{representable natural transformations} \cite{awodey_2018}. We read this definition in a structured manner, with a local representability structure consisting of a choice of representing objects in the above definition. The notion of local representability is \emph{local}: the restriction map from local representability structures for a dependent presheaf $Y$ over $X$ to coherent families of local representability structures of $Y_{\mid x}$ over $\yo_{\Gamma}$ for $x : \abs{X}_{\Gamma}$ is invertible. \footnote{Locality also holds if we consider local representability as a property.}
Assume that $\mathcal{C}$ is an $i$-small category. Internally to $\mathbf{Psh}^{\mathcal{C}}$ there is then, for every universe level $j$, a family $\mathrm{isRep} : \mathrm{Psh}^{\mathcal{C}}_{j} \to \mathrm{Psh}^{\mathcal{C}}_{\max(i, j)}$ of local representability structures over $j$-small presheaf families. Due to the above locality property, we have for a dependent presheaf $Y$ over $X$ that elements of $X \vdash \mathrm{isRep}(Y)$ correspond to witnesses that $Y$ is locally representable over $X$. This leads to universes $\mathrm{RepPsh}^{\mathcal{C}}_{j} \triangleq (A : \mathrm{Psh}^{\mathcal{C}}_{j}) \times \mathrm{isRep}\ A$ of $j$-small locally representable presheaf families. As an internal category, it is equivalent to the $j$-small one that at $\Gamma : \abs{\mathcal{C}}$ consists of an element of the slice of $\mathcal{C}$ over $\Gamma$ together with a choice of base changes along any map $\mathcal{C}(\Delta \to \Gamma)$.
An alternative semantics for the presheaf $(y : Y) \to Z\ y$ of dependent natural transformations from $Y$ to $Z$ can be given when $Y$ is locally representable over $X : \mathbf{Psh}^{\mathcal{C}}$. We could define $\abs{(y : Y) \to Z\ y}_{\Gamma}\ x \triangleq \abs{Z}_{\Gamma \rhd Y_{\mid x}}\ (x[\bm{p}^{Y}_{x}], \bm{q}^{Y}_{x})$. This definition satisfies the universal property of the presheaf of dependent natural transformations from $Y$ to $Z$, and is therefore isomorphic to its usual definition. The alternative definition admits a generalized algebraic presentation, which is important to justify the existence of initial models.
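This isomorphism is a direct consequence of local representability: a family of elements $z\ \rho\ y : \abs{Z}_{\Delta}\ (x[\rho], y)$, natural in $\rho : \mathcal{C}(\Delta \to \Gamma)$ and $y : \abs{Y}_{\Delta}\ (x[\rho])$, is the same as a natural transformation out of $Y_{\mid x}$ over the slice $(\mathcal{C} / \Gamma)$, and by the Yoneda lemma applied to the representing object $(\Gamma \rhd Y_{\mid x})$ it is determined by its value at the generic element, that is by an element of $\abs{Z}_{\Gamma \rhd Y_{\mid x}}\ (x[\bm{p}^{Y}_{x}], \bm{q}^{Y}_{x})$.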
\subsection{Internal definition of models}
Our main running example is the theory $\MT_{\Pi,\mathbf{B}}$ of a family equipped with $\Pi$-types and a boolean type. An internal model of $\MT_{\Pi,\mathbf{B}}$ in a presheaf category $\mathbf{Psh}^{\mathcal{C}}$ consists of the following elements. \begin{alignat*}{3}
& \mathsf{Ty} && :{ } && \mathrm{Psh}^{\mathcal{C}} \\
& \mathsf{Tm} && :{ } && \mathsf{Ty} \to \mathrm{RepPsh}^{\mathcal{C}} \\
& \Pi && :{ } && \forall (A : \mathsf{Ty}) (B : \mathsf{Tm}\ A \to \mathsf{Ty}) \to \mathsf{Ty} \\
& \mathsf{app} && :{ } && \forall A\ B \to \mathsf{Tm}\ (\Pi\ A\ B) \simeq ((a : \mathsf{Tm}\ A) \to \mathsf{Tm}\ (B\ a)) \end{alignat*} \begin{alignat*}{3}
& \mathbf{B} && :{ } && \mathsf{Ty} \\
& \mathsf{true},\mathsf{false} && :{ } && \mathsf{Tm}\ \mathbf{B} \\
& \mathsf{elim}_{\BoolTy} && :{ } && \forall (P : \mathsf{Tm}\ \mathbf{B} \to \mathsf{Ty})\ (t : \mathsf{Tm}\ (P\ \mathsf{true}))\ (f : \mathsf{Tm}\ (P\ \mathsf{false}))\ (b : \mathsf{Tm}\ \mathbf{B}) \to \mathsf{Tm}\ (P\ b) \\
& - && :{ } && \mathsf{elim}_{\BoolTy}\ P\ t\ f\ \mathsf{true} = t \\
& - && :{ } && \mathsf{elim}_{\BoolTy}\ P\ t\ f\ \mathsf{false} = f \end{alignat*} The inverse of $\mathsf{app}$ is written $\mathsf{lam} : \forall A\ B \to ((a : \mathsf{Tm}\ A) \to \mathsf{Tm}\ (B\ a)) \to \mathsf{Tm}\ (\Pi\ A\ B)$.
A model of $\MT_{\Pi,\mathbf{B}}$ is a category $\mathcal{C}$ equipped with a terminal object and with a global internal model of $\MT_{\Pi,\mathbf{B}}$ in $\mathbf{Psh}^{\mathcal{C}}$.
\begin{remark}\label{rmk:QIIT}
If we unfold the above internal definitions in presheaves, we see that a model of $\MT_{\Pi,\mathbf{B}}$ is the same externally as an algebra for the signature of a quotient inductive-inductive type (QIIT)~\cite{QIITs} describing $\MT_{\Pi,\mathbf{B}}$.
That QIIT is significantly more verbose because it has sorts of contexts and substitutions and, for every component of the model, separately states the action at each context and the coherent action of, or coherence under, substitution.
The notion of morphism of models we will define in \cref{subsec:morphisms} unfolds externally to the verbose notion of algebra morphism for this QIIT, except that we do not require context extension to be preserved strictly.
The same remark holds for the notion of displayed model to be defined in \cref{sec:disp_models_wo_exts}. \end{remark}
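To give an impression of this verbosity, the external QIIT signature begins with sorts indexed by contexts and substitutions, together with substitution laws for every operation; schematically (a sketch, where $\sigma^{+}$ denotes the lifting of a substitution $\sigma$ to an extended context):
\begin{alignat*}{1}
& \mathsf{Con} : \mathrm{Set}, \qquad \mathsf{Sub} : \mathsf{Con} \to \mathsf{Con} \to \mathrm{Set}, \qquad \mathsf{Ty} : \mathsf{Con} \to \mathrm{Set}, \qquad \mathsf{Tm} : (\Gamma : \mathsf{Con}) \to \mathsf{Ty}\ \Gamma \to \mathrm{Set}, \\
& A[\mathsf{id}] = A, \qquad A[\sigma \circ \tau] = A[\sigma][\tau], \qquad \Pi : (A : \mathsf{Ty}\ \Gamma) \to \mathsf{Ty}\ (\Gamma \rhd A) \to \mathsf{Ty}\ \Gamma, \qquad (\Pi\ A\ B)[\sigma] = \Pi\ (A[\sigma])\ (B[\sigma^{+}]),
\end{alignat*}
and similarly for $\mathsf{Tm}$, $\mathsf{app}$, $\mathbf{B}$, $\mathsf{true}$, $\mathsf{false}$ and $\mathsf{elim}_{\BoolTy}$.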
We have a $(2,1)$-category $\mathbf{Mod}_{\MT_{\Pi,\mathbf{B}}}$ of models. The morphisms are functors equipped with actions on types and terms that preserve the terminal object and the context extensions up to isomorphisms and the operations $\Pi$, $\mathsf{app}$, $\mathbf{B}$, $\mathsf{true}$, $\mathsf{false}$ and $\mathsf{elim}_{\BoolTy}$ strictly. The $2$-cells are the natural isomorphisms between the underlying functors.
We have just given an internal definition of the objects of $\mathbf{Mod}_{\MT_{\Pi,\mathbf{B}}}$ in the language of presheaf categories; we will give internal definitions of the other components using dependent right adjoints.
\subsection{Sorts and derived sorts}
A base sort of a CwF $\mathcal{C}$ is a (code for a) presheaf (in $\mathbf{Psh}^{\mathcal{C}}$) of the form $\mathsf{Ty}$ or $\mathsf{Tm}(-)$. The derived sorts are obtained by closing the base sorts under dependent products with arities in $\mathsf{Tm}(-)$. A derived sort is either a base sort, or a presheaf of the form $(a : \mathsf{Tm}(-)) \to X(a)$ where $X(a)$ is a derived sort. A derived sort can be written in the form $[X]Y$ where $X$ is a telescope of types and $Y$ is a base sort that depends on $X$.
The type of an argument of a type-theoretic operation or equation is always a derived sort. We often omit dependencies when writing derived sorts; \eg{} we write $[\mathsf{Tm}]\mathsf{Ty}$ for the derived sort of the second argument of $\Pi$.
\section{Dependent Right Adjoints and morphisms of models}\label{sec:dras}
In this section, we review the syntax and semantics of dependent right adjoints (DRAs)~\cite{DRAs}, and use the syntax of the dependent right adjoint $(F_{!} \dashv \modcolor{F^{\ast}})$ to give an internal encoding of the notion of morphism of models of $\MT_{\Pi,\mathbf{B}}$. Multimodal Type Theory is only needed for some of the proofs and constructions performed in the appendix.
\subsection{Dependent Right Adjoints}
Fix a functor $F : \mathcal{C} \to \mathcal{D}$. The precomposition functor $F^{\ast} : \mathbf{Psh}^{\mathcal{D}} \to \mathbf{Psh}^{\mathcal{C}}$ has both a left adjoint $F_{!} : \mathbf{Psh}^{\mathcal{C}} \to \mathbf{Psh}^{\mathcal{D}}$ and a right adjoint $F_{\ast} : \mathbf{Psh}^{\mathcal{C}} \to \mathbf{Psh}^{\mathcal{D}}$. The functors $F^{\ast}$ and $F_{\ast}$ are not only right adjoints of $F_{!}$ and $F^{\ast}$, they are dependent right adjoints, which means that they admit actions on the types and terms of the presheaf models $\mathbf{Psh}^{\mathcal{C}}$ and $\mathbf{Psh}^{\mathcal{D}}$ that interact with the left adjoints. We distinguish the functor $F^{\ast}$ from the dependent right adjoint $\modcolor{F^{\ast}}$ by using different colors. The dependent adjunction $(F^{\ast} \dashv \modcolor{F_{\ast}})$ is constructed in~\cite[Lemma 8.2]{MTT}, whereas $(F_{!} \dashv \modcolor{F^{\ast}})$ is constructed in~\cite[Lemma 2.1.4]{MTTtechreport}. We recall their constructions in \cref{sec:dra_constructions}.
We focus on the description of the dependent right adjoint $\modcolor{F^{\ast}}$ as a syntactic and type-theoretic operation. For every presheaf $X : \mathbf{Psh}^{\mathcal{C}}$ and dependent presheaf $A$ over $F_{!}\ X$, we have a dependent presheaf $\modcolor{F^{\ast}}\ A$ over $X$, such that elements of $A$ over $F_{!}\ X$ are in natural bijection with elements of $\modcolor{F^{\ast}}\ A$ over $X$.
This is analogous to the definition of $\Pi$-types: given a presheaf $X : \mathbf{Psh}^{\mathcal{C}}$, a dependent presheaf $Y(x)$ over the $(x : X)$ and a dependent presheaf $Z(x, y)$ over $(x : X, y : Y(x))$, the $\Pi$-type $(y : Y(x)) \to Z(x, y)$ over $(x : X)$ is characterized by the fact that its elements are in natural bijection with the elements of $Z(x,y)$ over $(x : X, y : Y(x))$.
Following this intuition, we use a similar syntax for $\Pi$-types and modalities. We view the left adjoint $F_{!}$ as an operation on the contexts of the presheaf model $\mathbf{Psh}^{\mathcal{C}}$. If $(x : X)$ is a context of this presheaf model, we write $(x : X, \mmod{F^{\ast}})$ instead of $F_{!}\ X$. Given a dependent presheaf $Y(x, \mmod{F^{\ast}})$\footnote{Here the notation $Y(x, \mmod{F^{\ast}})$ is an informal way to keep track of the fact that $Y$ is dependent over the context $(x : X, \mmod{F^{\ast}})$.} over $(x : X, \mmod{F^{\ast}})$, we write $(\mmod{F^{\ast}} \to Y(x, \mmod{F^{\ast}}))$ instead of $\modcolor{F^{\ast}}\ Y$.
We write the components of the bijection between elements of $Y(x, \mmod{F^{\ast}})$ over $(x : X, \mmod{F^{\ast}})$ and elements of $(\mmod{F^{\ast}} \to Y(x, \mmod{F^{\ast}}))$ over $(x : X)$ similarly to applications and $\lambda$-abstractions. If $y(x, \mmod{F^{\ast}})$ is an element of $Y(x, \mmod{F^{\ast}})$ over $(x : X, \mmod{F^{\ast}})$, we write $(\lambda\ \mmod{F^{\ast}} \mapsto y(x, \mmod{F^{\ast}}))$ for the corresponding element of $(\mmod{F^{\ast}} \to Y(x, \mmod{F^{\ast}}))$. Conversely, given an element $f(x)$ of $(\mmod{F^{\ast}} \to Y(x, \mmod{F^{\ast}}))$ over $(x : X)$, we write $f(x)\ \mmod{F^{\ast}}$ for the corresponding element of $Y(x, \mmod{F^{\ast}})$. There is a $\beta$-rule $(\lambda\ \mmod{F^{\ast}} \mapsto y(x, \mmod{F^{\ast}}))\ \mmod{F^{\ast}} = y(x, \mmod{F^{\ast}})$ and an $\eta$-rule $(\lambda\ \mmod{F^{\ast}} \mapsto f(x)\ \mmod{F^{\ast}}) = f(x)$.
We may define elements of modal types by pattern matching. For instance, we may write $f(x)\ \mmod{F^{\ast}} \triangleq y(x,\mmod{F^{\ast}})$ to define $f(x)$ as the unique element satisfying the equation $f(x)\ \mmod{F^{\ast}} = y(x,\mmod{F^{\ast}})$, that is $f(x) \triangleq \lambda\ \mmod{F^{\ast}} \mapsto y(x,\mmod{F^{\ast}})$.
The operation $(\mmod{F^{\ast}} \to -)$ is a modality that enables interactions between the two presheaf models $\mathbf{Psh}^{\mathcal{C}}$ and $\mathbf{Psh}^{\mathcal{D}}$. The symbols $\mmod{F^{\ast}}$ and $\mmod{F^{\ast}}$ and their places in the terms have been chosen to make keeping track of the modes of subterms as easy as possible. For both symbols $\mmod{F^{\ast}}$ and $\mmod{F^{\ast}}$, the part of the term that is left of the symbol is at mode $\mathbf{Psh}^{\mathcal{C}}$, while the part that is right of the symbol is at mode $\mathbf{Psh}^{\mathcal{D}}$. The type formers $(\mmod{F^{\ast}} \to -)$ and the term former $(\lambda\ \mmod{F^{\ast}} \mapsto -)$ go from the mode $\mathbf{Psh}^{\mathcal{D}}$ to $\mathbf{Psh}^{\mathcal{C}}$, whereas the term former $(-\ \mmod{F^{\ast}})$ goes from the mode $\mathbf{Psh}^{\mathcal{C}}$ to the mode $\mathbf{Psh}^{\mathcal{D}}$.
\subsection{Modalities are applicative functors}
As a first demonstration of the syntax of modalities, we equip the modality $(\mmod{F^{\ast}} \to -)$ with the structure of an \emph{applicative functor}~\cite{ApplicativeFunctors}, defined analogously to the \emph{reader monad} $(A \to -)$. This structure is given by an operation \begin{alignat*}{3}
& (\_{} \circledast \_{}) && :{ } && \forall A\ B\ (f : \mmod{F^{\ast}} \to (a : A\ \mmod{F^{\ast}}) \to B\ \mmod{F^{\ast}}\ a) (a : \mmod{F^{\ast}} \to A\ \mmod{F^{\ast}}) \\
&&&&& \to (\mmod{F^{\ast}} \to B\ \mmod{F^{\ast}}\ (a\ \mmod{F^{\ast}})) \\
& f \circledast a && \triangleq{ } && \lambda\ \mmod{F^{\ast}} \mapsto (f\ \mmod{F^{\ast}})\ (a\ \mmod{F^{\ast}}) \end{alignat*}
This provides a concise notation to apply functions under the modality. If $f$ is an $n$-ary function under the modality, and $a_{1},\dotsc,a_{n}$ are arguments under the modality, we can write the application $f \circledast a_{1} \circledast \cdots \circledast a_{n}$ instead of $(\lambda\ \mmod{F^{\ast}} \mapsto (f\ \mmod{F^{\ast}})\ (a_{1}\ \mmod{F^{\ast}})\ \cdots\ (a_{n}\ \mmod{F^{\ast}}))$.
When $f$ is a global function of the presheaf model $\mathbf{Psh}^{\mathcal{D}}$, we write $f \circleddollar a_{1} \circledast \cdots \circledast a_{n}$ instead of $(\lambda\ \mmod{F^{\ast}} \mapsto f) \circledast a_{1} \circledast \dotsc \circledast a_{n}$.
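The analogy with the reader applicative can be made concrete in ordinary functional programming; the following Haskell sketch (an analogy for the non-dependent fragment only, not an implementation of the modality) mirrors $\circledast$ and $\circleddollar$ for the function space $(r \to -)$:
\begin{verbatim}
-- The reader applicative ((->) r) as an analogue of the modality.
type Reader r a = r -> a

-- 'apR' mirrors the circled-asterisk: (f (*) a) = \mu -> (f mu) (a mu).
apR :: Reader r (a -> b) -> Reader r a -> Reader r b
apR f a = \r -> f r (a r)

-- 'apGlobal' mirrors the circled-dollar: a "global" function f is first
-- weakened with 'const' and then applied under the modality.
apGlobal :: (a -> b) -> Reader r a -> Reader r b
apGlobal f = apR (const f)

main :: IO ()
main = print (apR (apGlobal (+) (const 2)) (const 3) ())   -- prints 5
\end{verbatim}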
\subsection{Preservation of context extensions}
The last component that we need for an internal definition of morphism of models of $\MT_{\Pi,\mathbf{B}}$ is an internal way to describe preservation of extended contexts of locally representable presheaves. The preservation of context extensions can be expressed without assuming that the extended contexts actually exist, \ie{} without assuming that the presheaves are locally representable; in that case we talk about preservation of virtual context extensions.
\begin{definition}[Internally to $\mathbf{Psh}^{\mathcal{C}}$]\label{def:preserves_ext}
Let $A^{\mathcal{C}} : \mathrm{Psh}^{\mathcal{C}}$ and $A^{\mathcal{D}} : \mmod{F^{\ast}} \to \mathrm{Psh}^{\mathcal{D}}$ be presheaves over $\mathcal{C}$ and $\mathcal{D}$, and $F^{A} : \forall (a : A^{\mathcal{C}}) \mmod{F^{\ast}} \to A^{\mathcal{D}}\ \mmod{F^{\ast}}$ be an action of $F$ on the elements of $A^{\mathcal{C}}$.
We say that $F^{A}$ \defemph{preserves virtual context extensions} if for every dependent presheaf $P : \forall \mmod{F^{\ast}} (a : A^{\mathcal{D}}\ \mmod{F^{\ast}}) \to \mathrm{Psh}^{\mathcal{D}}$, the canonical comparison map
\begin{alignat*}{3}
& \tau && :{ } && (\forall \mmod{F^{\ast}} (a : A^{\mathcal{D}}\ \mmod{F^{\ast}}) \to P\ \mmod{F^{\ast}}\ a) \to (\forall (a : A^{\mathcal{C}}) \mmod{F^{\ast}} \to P\ \mmod{F^{\ast}}\ (F^{A}\ a\ \ \mmod{F^{\ast}})) \\
& \tau(p) && \triangleq{ } && \lambda a \mmod{F^{\ast}} \mapsto p\ \mmod{F^{\ast}}\ (F^{A}\ a\ \mmod{F^{\ast}})
\end{alignat*}
is an isomorphism.
In other words, $F^{A}$ preserves virtual context extensions when the modality $(\mmod{F^{\ast}} \to -)$ commutes with quantification over $A^{\mathcal{C}}$ and $A^{\mathcal{D}}$.
This provides a notation to define an element $p$ of $(\forall \mmod{F^{\ast}}(a : A^{\mathcal{D}}\ \mmod{F^{\ast}}) \to P\ \mmod{F^{\ast}}\ a)$ using pattern matching: we write
\[ p\ \mmod{F^{\ast}}\ (F^{A}\ a\ \mmod{F^{\ast}}) \triangleq q\ a\ \mmod{F^{\ast}} \]
to define $p$ as the unique solution of that equation ($p = \tau^{-1}(\lambda a \mmod{F^{\ast}} \mapsto q\ a\ \mmod{F^{\ast}})$).
\lipicsEnd \end{definition}
In \cref{sec:preserv} we show that the internal description of preservation of context extensions coincides with the external notion of preservation up to isomorphism.
\subsection{Morphisms of models} \label{subsec:morphisms}
Let $F : \mathcal{C} \to \mathcal{D}$ be a morphism of models of $\MT_{\Pi,\mathbf{B}}$. We now show that its structure can fully be described in the internal language of $\mathbf{Psh}^{\mathcal{C}}$.
Its actions on types and terms can equivalently be given by the following global elements. \begin{alignat*}{3}
& F^{\mathsf{Ty}} && :{ } && (A : \mathsf{Ty}^{\mathcal{C}}) \to (\mmod{F^{\ast}} \to \mathsf{Ty}^{\mathcal{D}}) \\
& F^{\mathsf{Tm}} && :{ } && \forall A\ (a : \mathsf{Tm}^{\mathcal{C}}\ A) \to (\mmod{F^{\ast}} \to \mathsf{Tm}^{\mathcal{D}}\ (F^{\mathsf{Ty}}\ A\ \mmod{F^{\ast}})) \end{alignat*} The preservation of context extensions by $F$ is equivalent to the fact that $F^{\mathsf{Tm}}$ preserves virtual context extensions in the sense of \cref{def:preserves_ext}. We can use that fact to obtain the following actions on derived sorts. \begin{alignat*}{3}
& F^{[X]\mathsf{Ty}} && :{ } && (A : X^{\mathcal{C}} \to \mathsf{Ty}^{\mathcal{C}}) \to (\forall \mmod{F^{\ast}}\ (x : X^{\mathcal{D}}) \to \mathsf{Ty}^{\mathcal{D}}) \\
& F^{[X]\mathsf{Tm}} && :{ } && \forall A\ (a : (x : X^{\mathcal{C}}) \to \mathsf{Tm}^{\mathcal{C}}\ (A\ x)) \to (\forall \mmod{F^{\ast}}\ (x : X^{\mathcal{D}}) \to \mathsf{Tm}^{\mathcal{D}}(F^{[X]\mathsf{Ty}}\ A\ \mmod{F^{\ast}}(x))) \end{alignat*} They are defined as follows, using the pattern matching notation of \cref{def:preserves_ext}. \begin{alignat*}{3}
& F^{[X]\mathsf{Ty}}\ A\ \mmod{F^{\ast}}\ (F^{X}\ x\ \mmod{F^{\ast}}) && \triangleq{ } && F^{\mathsf{Ty}}\ (A\ x)\ \mmod{F^{\ast}} \\
& F^{[X]\mathsf{Tm}}\ a\ \mmod{F^{\ast}}\ (F^{X}\ x\ \mmod{F^{\ast}}) && \triangleq{ } && F^{\mathsf{Tm}}\ (a\ x)\ \mmod{F^{\ast}} \end{alignat*} Finally, the preservation of the operations can simply be described by the following equations. \begin{alignat*}{1}
& F^{\mathsf{Ty}}\ (\Pi^{\mathcal{C}}\ A\ B)\ \mmod{F^{\ast}} = \Pi^{\mathcal{D}}\ (F^{\mathsf{Ty}}\ A\ \mmod{F^{\ast}})\ (F^{[\mathsf{Tm}]\mathsf{Ty}}\ B\ \mmod{F^{\ast}}) \\
& F^{\mathsf{Tm}}\ (\mathsf{app}^{\mathcal{C}}\ f\ a)\ \mmod{F^{\ast}} = \mathsf{app}^{\mathcal{D}}\ (F^{\mathsf{Tm}}\ f\ \mmod{F^{\ast}})\ (F^{\mathsf{Tm}}\ a\ \mmod{F^{\ast}}) \\
& F^{\mathsf{Ty}}\ \mathbf{B}^{\mathcal{C}}\ \ \mmod{F^{\ast}} = \mathbf{B}^{\mathcal{D}} \\
& F^{\mathsf{Tm}}\ \mathsf{true}^{\mathcal{C}}\ \ \mmod{F^{\ast}} = \mathsf{true}^{\mathcal{D}} \\
& F^{\mathsf{Tm}}\ \mathsf{false}^{\mathcal{C}}\ \ \mmod{F^{\ast}} = \mathsf{false}^{\mathcal{D}} \\
& F^{\mathsf{Tm}}\ (\mathsf{elim}_{\BoolTy}^{\mathcal{C}}\ P\ t\ f\ b)\ \mmod{F^{\ast}} = \mathsf{elim}_{\BoolTy}^{\mathcal{D}}\ (F^{[\mathsf{Tm}]\mathsf{Ty}}\ P\ \mmod{F^{\ast}})\ (F^{\mathsf{Tm}}\ t\ \mmod{F^{\ast}})\ (F^{\mathsf{Tm}}\ f\ \mmod{F^{\ast}})\ (F^{\mathsf{Tm}}\ b\ \mmod{F^{\ast}}) \end{alignat*} We can then derive analogous equations for $F^{[X]\mathsf{Ty}}$ and $F^{[X]\mathsf{Tm}}$. For instance, \begin{alignat*}{1}
& F^{[X]\mathsf{Ty}}\ (\lambda x \mapsto \Pi^{\mathcal{C}}\ (A\ x)\ (B\ x))\ \mmod{F^{\ast}}\ x \\
& \quad = \Pi^{\mathcal{D}}\ (F^{[X]\mathsf{Ty}}\ A\ \mmod{F^{\ast}}\ x)\ (\lambda a \mapsto F^{[X,\mathsf{Tm}]\mathsf{Ty}}\ B\ \mmod{F^{\ast}}\ (x,a)). \end{alignat*} Indeed, by \cref{def:preserves_ext}, it suffices to show that equation when $x = F^{X}\ x'\ \mmod{F^{\ast}}$. It then follows from the base equation for $F^{\mathsf{Ty}}\ (\Pi^{\mathcal{C}}\ (A\ x')\ (B\ x'))$.
We can also derive strengthening equations. For example, when $A$ does not depend on $X$, we have $F^{[X]\mathsf{Ty}}\ (\lambda x \mapsto A)\ \mmod{F^{\ast}} = \lambda x \mapsto F^{\mathsf{Ty}}\ A\ \mmod{F^{\ast}}$.
\begin{remark}
The notion of morphism of models unfolds externally to the verbose notion of algebra morphism for the QIIT signature of \cref{rmk:QIIT}, except that we do not require context extension to be preserved strictly.
A standard argument shows that initial algebras for the QIIT are biinitial in our sense.
A similar remark holds for the notion of displayed model (and their sections) that will be defined in \cref{sec:disp_models_wo_exts}. \end{remark}
\section{Relative induction principles}\label{sec:disp_models_wo_exts}
In this section we state our relative induction principles using the notion of displayed model without context extensions. The full proofs of these relative induction principles are given in the appendix.
We fix a base model $\mathcal{S}$ of $\MT_{\Pi,\mathbf{B}}$ and a functor $F : \mathcal{C} \to \mathcal{S}$.
\begin{definition}
A \defemph{displayed model without context extensions} over $F : \mathcal{C} \to \mathcal{S}$ consists of the following components, specified internally to $\mathbf{Psh}^{\mathcal{C}}$:
\begin{itemize}
\item Presheaves of displayed types and terms.
\begin{alignat*}{3}
& \mathsf{Ty}^{\bullet} && :{ } && (A : \mmod{F^{\ast}} \to \mathsf{Ty}^{\mathcal{S}}) \to \mathrm{Psh}^{\mathcal{C}} \\
& \mathsf{Tm}^{\bullet} && :{ } && \forall A\ (A^{\bullet} : \mathsf{Ty}^{\bullet}\ A) (a : \mmod{F^{\ast}} \to \mathsf{Tm}^{\mathcal{S}}\ (A\ \mmod{F^{\ast}})) \to \mathrm{Psh}^{\mathcal{C}}
\end{alignat*}
They correspond to the \emph{motives} of an induction principle.
\item Displayed variants of the type-theoretic operations of $\MT_{\Pi,\mathbf{B}}$.
They are the \emph{methods} of the induction principle.
\begin{alignat*}{3}
& \Pi^{\bullet} && :{ } && \forall A\ B\ (A^{\bullet} : \mathsf{Ty}^{\bullet}\ A) (B^{\bullet} : \{a\} (a^{\bullet} : \mathsf{Tm}^{\bullet}\ A^{\bullet}\ a) \to \mathsf{Ty}^{\bullet}\ (B \circledast a)) \\
&&&&& \to \mathsf{Ty}^{\bullet}\ (\Pi^{\mathcal{S}} \circleddollar A \circledast B) \\
& \mathsf{app}^{\bullet} && :{ } && \forall A\ B\ f\ (A^{\bullet} : \mathsf{Ty}^{\bullet}\ A) (B^{\bullet} : \{a\} (a^{\bullet} : \mathsf{Tm}^{\bullet}\ A^{\bullet}\ a) \to \mathsf{Ty}^{\bullet}\ (B \circledast a)) \\
&&&&& \to (\mathsf{Tm}^{\bullet}\ (\Pi^{\bullet}\ A^{\bullet}\ B^{\bullet})\ f) \simeq (\{a\} (a^{\bullet} : \mathsf{Tm}^{\bullet}\ A^{\bullet}\ a) \to \mathsf{Tm}^{\bullet}\ (B^{\bullet}\ a^{\bullet})\ (\mathsf{app}^{\mathcal{S}} \circleddollar f \circledast a))
\end{alignat*}
\begin{alignat*}{3}
& \mathbf{B}^{\bullet} && :{ } && \mathsf{Ty}^{\bullet}\ (\lambda \mmod{F^{\ast}} \mapsto \mathbf{B}) \\
& \mathsf{true}^{\bullet} && :{ } && \mathsf{Tm}^{\bullet}\ \mathbf{B}^{\bullet}\ (\lambda \mmod{F^{\ast}} \mapsto \mathsf{true}) \\
& \mathsf{false}^{\bullet} && :{ } && \mathsf{Tm}^{\bullet}\ \mathbf{B}^{\bullet}\ (\lambda \mmod{F^{\ast}} \mapsto \mathsf{false}) \\
& \mathsf{elim}_{\BoolTy}^{\bullet} && :{ } && \forall P\ t\ f\ b\ (P^{\bullet} : \forall x\ (x^{\bullet} : \mathsf{Tm}^{\bullet}\ \mathbf{B}^{\bullet}\ x) \to \mathsf{Ty}^{\bullet}\ (P \circledast x)) \\
&&&&& \phantom{\forall} (t^{\bullet} : \mathsf{Tm}^{\bullet}\ (P^{\bullet}\ \mathsf{true}^{\bullet})\ t) (f^{\bullet} : \mathsf{Tm}^{\bullet}\ (P^{\bullet}\ \mathsf{false}^{\bullet})\ f) \\
&&&&& \to (b^{\bullet} : \mathsf{Tm}^{\bullet}\ \mathbf{B}^{\bullet}\ b) \to \mathsf{Tm}^{\bullet}\ (P^{\bullet}\ b^{\bullet})\ (\mathsf{elim}_{\BoolTy}^{\mathcal{S}} \circleddollar P \circledast t \circledast f \circledast b)
\end{alignat*}
\item Satisfying displayed variants of the type-theoretic equations\footnote{Note that these equations are well-typed because of the corresponding equations in $\mathcal{S}$. As presheaves support equality reflection, we don't have to write transports.} of $\MT_{\Pi,\mathbf{B}}$.
\begin{alignat*}{3}
& \mathsf{elim}_{\BoolTy}^{\bullet}\ P^{\bullet}\ t^{\bullet}\ f^{\bullet}\ \mathsf{true}^{\bullet} && ={ } && t^{\bullet} \\
& \mathsf{elim}_{\BoolTy}^{\bullet}\ P^{\bullet}\ t^{\bullet}\ f^{\bullet}\ \mathsf{false}^{\bullet} && ={ } && f^{\bullet}
\tag*{\lipicsEnd}
\end{alignat*}
\end{itemize} \end{definition}
A displayed model without context extensions has context extensions when for any $A$ and $A^{\bullet}$, the first projection map \[ (a : \mmod{F^{\ast}} \to \mathsf{Tm}^{\mathcal{S}}\ (A\ \mmod{F^{\ast}})) \times (a^{\bullet} : \mathsf{Tm}^{\bullet}\ A^{\bullet}\ a) \xrightarrow{\lambda (a,a^{\bullet}) \mapsto a} (\mmod{F^{\ast}} \to \mathsf{Tm}^{\mathcal{S}}\ (A\ \mmod{F^{\ast}})) \] has a locally representable domain and preserves context extensions.
In \cref{sec:sections} we give an internal definition of section of displayed models with context extensions. It is similar to the definition of morphism of models. The induction principle of the biinitial model $\mathbf{0}_{\MT_{\Pi,\mathbf{B}}}$ is the statement that any displayed model with context extensions over $\mathbf{0}_{\MT_{\Pi,\mathbf{B}}}$ admits a section.
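To give a first example, consider the functor $\{\diamond\} \to \mathbf{0}_{\MT_{\Pi,\mathbf{B}}}$ selecting the empty context, which is used for canonicity in \cref{sec:applCanon}. Roughly, a displayed model without context extensions over this functor packages proof-relevant predicates on closed types and terms; writing $\mathcal{S} \triangleq \mathbf{0}_{\MT_{\Pi,\mathbf{B}}}$, a sketch of some of its components (cf.\ \cref{sec:applCanon}) is
\begin{alignat*}{3}
& \mathsf{Ty}^{\bullet}\ A && \triangleq{ } && (a : \mmod{F^{\ast}} \to \mathsf{Tm}^{\mathcal{S}}\ (A\ \mmod{F^{\ast}})) \to \mathrm{Psh}^{\{\diamond\}} \\
& \mathsf{Tm}^{\bullet}\ A^{\bullet}\ a && \triangleq{ } && A^{\bullet}\ a \\
& \mathbf{B}^{\bullet}\ b && \triangleq{ } && (b = (\lambda\ \mmod{F^{\ast}} \mapsto \mathsf{true})) + (b = (\lambda\ \mmod{F^{\ast}} \mapsto \mathsf{false})) \\
& \Pi^{\bullet}\ A^{\bullet}\ B^{\bullet}\ f && \triangleq{ } && \forall a\ (a^{\bullet} : A^{\bullet}\ a) \to B^{\bullet}\ a^{\bullet}\ (\mathsf{app}^{\mathcal{S}} \circleddollar f \circledast a)
\end{alignat*}
for which $\mathsf{app}^{\bullet}$ is the identity equivalence and the remaining methods follow the usual canonicity argument. Note that no local representability is required of these motives, which is exactly what the qualifier \emph{without context extensions} allows.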
While (displayed) models without context extensions are not well-behaved, we show that they can be replaced by (displayed) models with context extensions. \begin{restatable}{definition}{restateDefFactorization}\label{def:disp_model_factorization}
A \defemph{factorization} $(\mathcal{C} \xrightarrow{Y} \mathcal{P} \xrightarrow{G} \mathcal{S}, \mathcal{S}^{\dagger})$ of a global displayed model without context extensions $\mathcal{S}^{\bullet}$ over $F : \mathcal{C} \to \mathcal{S}$ consists of a factorization $\mathcal{C} \xrightarrow{Y} \mathcal{P} \xrightarrow{G} \mathcal{S}$ of $F$ and a displayed model with context extensions $\mathcal{S}^{\dagger}$ over $G : \mathcal{P} \to \mathcal{S}$, such that $Y : \mathcal{C} \to \mathcal{P}$ is fully faithful and equipped with bijective actions on displayed types and terms.
\lipicsEnd{} \end{restatable}
\begin{restatable}{construction}{restateDispReplace}\label{con:disp_replace_0}
We construct a factorization $(\mathcal{C} \xrightarrow{Y} \mathcal{P} \xrightarrow{G} \mathcal{S}, \mathcal{S}^{\dagger})$ of any displayed model without context extensions $\mathcal{S}^{\bullet}$ over $F : \mathcal{C} \to \mathcal{S}$. \end{restatable} \begin{proof}[Construction sketch]
We give the full construction in the appendix.
We see $\mathcal{P}$ as analogous to the presheaf category over $\mathcal{C}$, but in the slice $2$-category $(\mathbf{Cat} / \mathcal{S})$.
Indeed, a generalization of the Yoneda lemma holds for $Y : \mathcal{C} \to \mathcal{P}$.
In particular $Y : \mathcal{C} \to \mathcal{P}$ is fully faithful.
Equivalently, it could be defined as the pullback along $\yo : \mathcal{S} \to \widehat{\mathcal{S}}$ of the Artin gluing $\mathcal{G} \to \widehat{\mathcal{S}}$ of $F_{\ast} : \widehat{\mathcal{S}} \to \widehat{\mathcal{C}}$.
It is well-known that given a base model $\mathcal{C}$ of type theory, that model can be extended to the presheaf category $\widehat{\mathcal{C}}$ in such a way that the Yoneda embedding $\yo : \mathcal{C} \to \widehat{\mathcal{C}}$ is a morphism of models with bijective actions on types and terms.
This is indeed the justification for one of the intended models of two-level type theory~\cite{TwoLevelTypeTheoryAndApplications}.
This construction does not actually depend on the context extensions in the base model $\mathcal{C}$.
The construction of the displayed model $\mathcal{S}^{\dagger}$ over $G : \mathcal{P} \to \mathcal{S}$ is a generalization of this construction to displayed models. \end{proof}
We now assume that we have a section $S_{0}$ of the displayed model with context extensions $\mathcal{S}^{\dagger}$ constructed in \cref{con:disp_replace_0}.
In general, we want more than just the section $S_{0}$. Indeed, if we take a type $A$ of $\mathcal{S}$ over a context $F\ \Gamma$ for some $\Gamma : \abs{\mathcal{C}}$, we can apply the action of $S_{0}$ on types to obtain a displayed type $S_{0}^{\mathsf{Ty}}\ A$ of $\mathcal{S}^{\dagger}$ over $S_{0}\ (F\ \Gamma)$. We would rather have a displayed type of $\mathcal{S}^{\bullet}$ over $\Gamma$. It suffices to have a morphism $\alpha_{\Gamma} : Y\ \Gamma \to S_{0}\ (F\ \Gamma)$. We can then transport $S_{0}^{\mathsf{Ty}}\ A$ to a displayed type $(S_{0}^{\mathsf{Ty}}\ A)[\alpha_{\Gamma}]$ of $\mathcal{S}^{\dagger}$ over $Y\ \Gamma$. Since $Y$ is equipped with a bijective action $Y^{\mathsf{Ty}}$ on displayed types, this provides a displayed type $Y^{\mathsf{Ty},-1}\ (S_{0}^{\mathsf{Ty}}\ A)[\alpha_{\Gamma}]$ of $\mathcal{S}^{\bullet}$ over $\Gamma$, as desired. In general, we want to have a full natural transformation $\alpha : Y \Rightarrow (F \cdot S_{0})$.
It is useful to consider the universal setting under which such a natural transformation is available. \begin{definition}
The \defemph{displayed inserter} $\mathcal{I}(\mathcal{S}^{\bullet})$ is a category equipped with a functor $I : \mathcal{I}(\mathcal{S}^{\bullet}) \to \mathcal{C}$ and with a natural transformation $\iota : (I \cdot Y) \Rightarrow (I \cdot F \cdot S_{0})$ such that $(\iota \cdot G) = 1_{(I \cdot F)}$.
It is the final object among such categories: given any other category $\mathcal{J}$ with $J : \mathcal{J} \to \mathcal{C}$ and $\beta : (J \cdot Y) \Rightarrow (J \cdot F \cdot S_{0})$ such that $(\beta \cdot G) = 1_{(J \cdot F)}$, there exists a unique functor $X : \mathcal{J} \to \mathcal{I}(\mathcal{S}^{\bullet})$ such that $J = (X \cdot I)$ and $\beta = (X \cdot \iota)$.
\lipicsEnd{} \end{definition}
Internally to $\mathbf{Psh}^{\mathcal{I}(\mathcal{S}^{\bullet})}$, we then have the following operations: \begin{alignat*}{3}
& S_{\iota}^{[X]\mathsf{Ty}} && :{ } && \forall \mmod{I^{\ast}} (A : \mmod{F^{\ast}} \to X \to \mathsf{Ty})\ x\ (x^{\bullet} : X^{\bullet}\ x) \to \mathsf{Ty}^{\bullet}\ (A \circledast x) \\
& S_{\iota}^{[X]\mathsf{Tm}} && :{ } && \forall \mmod{I^{\ast}}\ A\ (a : \forall \mmod{F^{\ast}}\ x \to \mathsf{Tm}\ (A\ \mmod{F^{\ast}}\ x))\ x\ (x^{\bullet} : X^{\bullet}\ x) \\
&&&&& \to \mathsf{Tm}^{\bullet}\ (S_{\iota}^{[X]\mathsf{Ty}}\mmod{I^{\ast}}\ A\ x\ x^{\bullet})\ (a \circledast x), \end{alignat*} where $X^{\bullet}$ is defined by induction on the telescope $X$. They preserve all type-theoretic operations: \begin{alignat*}{1}
& S_{\iota}^{[X]\mathsf{Ty}}\ \mmod{I^{\ast}}\ (\lambda \mmod{F^{\ast}}\ x \mapsto \Pi\ (A\ \mmod{F^{\ast}}\ x)\ (B\ \mmod{F^{\ast}}\ x))\ x^{\bullet} \\
& \quad = \Pi^{\bullet}\ (S_{\iota}^{[X]\mathsf{Ty}}\ \mmod{I^{\ast}}\ A\ x^{\bullet})\ (\lambda a^{\bullet} \mapsto S_{\iota}^{[X,A]\mathsf{Ty}}\ \mmod{I^{\ast}}\ B\ (x^{\bullet},a^{\bullet})) \\
& S_{\iota}^{[X]\mathsf{Tm}}\ \mmod{I^{\ast}}\ (\lambda \mmod{F^{\ast}}\ x \mapsto \mathsf{app}\ (f\ \mmod{F^{\ast}}\ x)\ (a\ \mmod{F^{\ast}}\ x))\ x^{\bullet} \\
& \quad = \mathsf{app}^{\bullet}\ (S_{\iota}^{[X]\mathsf{Tm}}\ \mmod{I^{\ast}}\ f\ x^{\bullet})\ (S_{\iota}^{[X]\mathsf{Tm}}\ \mmod{I^{\ast}}\ a\ x^{\bullet}) \\
& S_{\iota}^{[X]\mathsf{Ty}}\ \mmod{I^{\ast}}\ (\lambda \mmod{F^{\ast}}\ x \mapsto \mathbf{B})\ x^{\bullet} = \mathbf{B}^{\bullet} \\
& S_{\iota}^{[X]\mathsf{Tm}}\ \mmod{I^{\ast}}\ (\lambda \mmod{F^{\ast}}\ x \mapsto \mathsf{true})\ x^{\bullet} = \mathsf{true}^{\bullet} \\
& S_{\iota}^{[X]\mathsf{Tm}}\ \mmod{I^{\ast}}\ (\lambda \mmod{F^{\ast}}\ x \mapsto \mathsf{false})\ x^{\bullet} = \mathsf{false}^{\bullet} \\
& S_{\iota}^{[X]\mathsf{Tm}}\ \mmod{I^{\ast}}\ (\lambda \mmod{F^{\ast}}\ x \mapsto \mathsf{elim}_{\BoolTy}\ (P\ \mmod{F^{\ast}}\ x)\ (t\ \mmod{F^{\ast}}\ x)\ (f\ \mmod{F^{\ast}}\ x)\ (b\ \mmod{F^{\ast}}\ x))\ x^{\bullet} \\
& \quad = \mathsf{elim}_{\BoolTy}^{\bullet}\ (\lambda b^{\bullet} \mapsto S_{\iota}^{[X,\mathsf{Tm}]\mathsf{Ty}}\ \mmod{I^{\ast}}\ P\ (x^{\bullet},b^{\bullet})) \\
& \phantom{{ }\quad = \mathsf{elim}_{\BoolTy}^{\bullet}\ }(S_{\iota}^{[X]\mathsf{Tm}}\ \mmod{I^{\ast}}\ t\ x^{\bullet})\ (S_{\iota}^{[X]\mathsf{Tm}}\ \mmod{I^{\ast}}\ f\ x^{\bullet})\ (S_{\iota}^{[X]\mathsf{Tm}}\ \mmod{I^{\ast}}\ b\ x^{\bullet}) \end{alignat*}
\begin{restatable}{definition}{restateRelSection}\label{def:rel_section}
A \defemph{relative section} $S_{\alpha}$ of a factorization $(\mathcal{C} \xrightarrow{Y} \mathcal{P} \xrightarrow{G} \mathcal{S}, \mathcal{S}^{\dagger})$ of a displayed model without context extensions $\mathcal{S}^{\bullet}$ over $F : \mathcal{C} \to \mathcal{S}$ consists of a section $S_{0}$ of the displayed model with context extensions $\mathcal{S}^{\dagger}$ along with a natural transformation $\alpha : Y \Rightarrow (F \cdot S_{0})$ such that $(\alpha \cdot G) = 1_{F}$, or equivalently with a section $\angles{\alpha} : \mathcal{C} \to \mathcal{I}(\mathcal{S}^{\bullet})$ of $I : \mathcal{I}(\mathcal{S}^{\bullet}) \to \mathcal{C}$.
\lipicsEnd{} \end{restatable}
A relative section $S_{\alpha}$ has actions on types and terms, obtained by pulling $S_{\iota}^{[X]\mathsf{Ty}}$ and $S_{\iota}^{[X]\mathsf{Tm}}$ along $\angles{\alpha}$. \begin{alignat*}{3}
& S_{\alpha}^{[X]\mathsf{Ty}} && :{ } && \forall (A : \mmod{F^{\ast}} \to X \to \mathsf{Ty})\ x\ (x^{\bullet} : X^{\bullet}\ x) \to \mathsf{Ty}^{\bullet}\ (A \circledast x) \\
& S_{\alpha}^{[X]\mathsf{Tm}} && :{ } && \forall A\ (a : \forall \mmod{F^{\ast}}\ x \to \mathsf{Tm}\ (A\ \mmod{F^{\ast}}\ x))\ x\ (x^{\bullet} : X^{\bullet}\ x) \\
&&&&& \to \mathsf{Tm}^{\bullet}\ (S_{\alpha}^{[X]\mathsf{Ty}}\ A\ x\ x^{\bullet})\ (a \circledast x), \end{alignat*}
A displayed model without context extensions over the biinitial model does not necessarily admit a relative section; this depends on the functor $F : \mathcal{C} \to \mathbf{0}_{\MT}$. Depending on the universal property of $\mathcal{C}$, we need to provide some additional data in order to construct $\angles{\alpha} : \mathcal{C} \to \mathcal{I}(\mathbf{0}_{\MT}^{\bullet})$. Thus, we get a different induction principle for every functor $F : \mathcal{C} \to \mathbf{0}_{\MT}$ into $\mathbf{0}_{\MT}$, which we call the induction principle relative to $F$. We now state several of these relative induction principles, for our example type theory $\MT_{\Pi,\mathbf{B}}$ and for cubical type theory $\mathsf{CTT}$.
\begin{restatable}[Induction principle relative to $\{\diamond\} \to \mathbf{0}_{\MT_{\Pi,\mathbf{B}}}$]{lemma}{restateIndTerminal}\label{lem:ind_terminal}
Denote by $\{\diamond\}$ the terminal category (which should rather be seen here as the initial category equipped with a terminal object), and consider the functor $F : \{\diamond\} \to \mathbf{0}_{\MT_{\Pi,\mathbf{B}}}$ that selects the empty context of $\mathbf{0}_{\MT_{\Pi,\mathbf{B}}}$.
Any global displayed model without context extensions over $F$ admits a relative section.
\qed{} \end{restatable}
\begin{definition}
A \defemph{renaming algebra} over a model $\mathcal{S}$ of $\MT_{\Pi,\mathbf{B}}$ is a category $\mathcal{R}$ with a terminal object, along with a functor $F : \mathcal{R} \to \mathcal{S}$ preserving the terminal object, a locally representable dependent presheaf of variables
\[ \mathsf{Var}^{\mathcal{R}} : (A : \mmod{F^{\ast}} \to \mathsf{Ty}^{\mathcal{S}}) \to \mathrm{RepPsh}^{\mathcal{R}} \]
and an action on variables $\mathsf{var} : \forall A\ (a : \mathsf{Var}\ A)\ \mmod{F^{\ast}} \to \mathsf{Tm}^{\mathcal{S}}\ (A\ \mmod{F^{\ast}})$ that preserves context extensions.
The category of renamings $\mathcal{Ren}_{\mathcal{S}}$ over a model $\mathcal{S}$ is defined as the biinitial renaming algebra over $\mathcal{S}$.
We denote the category of renamings of the biinitial model $\mathbf{0}_{\MT_{\Pi,\mathbf{B}}}$ by $\mathcal{Ren}$. \end{definition}
\begin{restatable}[Induction principle relative to $\mathcal{Ren} \to \mathbf{0}_{\MT_{\Pi,\mathbf{B}}}$]{lemma}{restateIndRenamings}\label{lem:ind_renamings}
Let $\mathbf{0}_{\MT_{\Pi,\mathbf{B}}}^{\bullet}$ be a global displayed model without context extensions over $F : \mathcal{Ren} \to \mathbf{0}_{\MT_{\Pi,\mathbf{B}}}$, along with, internally to $\mathbf{Psh}^{\mathcal{I}(\mathbf{0}_{\MT_{\Pi,\mathbf{B}}}^{\bullet})}$, a global map
\[ \mathsf{var}^{\bullet} : \forall \mmod{I^{\ast}} (A : \mmod{F^{\ast}} \to \mathsf{Ty}) (a : \mathsf{Var}\ A) \to \mathsf{Tm}^{\bullet}\ (S_{\iota}^{\mathsf{Ty}}\ \mmod{I^{\ast}}\ A)\ (\mathsf{var}\ a). \]
Then there exists a relative section $S_{\alpha}$ of $\mathbf{0}_{\MT_{\Pi,\mathbf{B}}}^{\bullet}$.
\qed{} \end{restatable} The relative section also satisfies a computation rule that relates $S_{\alpha}^{\mathsf{Tm}}\ (\mathsf{var}_{A}\ a)$ and $\mathsf{var}^{\bullet}$.
We also state relative induction principles that can be used to prove canonicity and normalization of cubical type theory. \begin{definition}
A cubical CwF is a CwF $\mathcal{C}$ equipped with a locally representable interval presheaf with two endpoints
\begin{alignat*}{3}
& \mathbb{I}^{\mathcal{C}} && :{ } \mathrm{RepPsh}^{\mathcal{C}}, \\
& 0^{\mathcal{C}}, 1^{\mathcal{C}} && :{ } \mathbb{I}^{\mathcal{C}}.
\tag*{\lipicsEnd}
\end{alignat*} \end{definition}
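For instance, for any small category $\mathcal{B}$, the presheaf CwF over $\mathcal{B}$ becomes a cubical CwF after choosing an object $I$ of $\mathbf{Psh}^{\mathcal{B}}$ with two points $0, 1 : \mathbf{1} \to I$: the presheaf $\Gamma \mapsto \mathbf{Psh}^{\mathcal{B}}(\Gamma \to I)$ over the category of contexts is then locally representable, with context extension $\Gamma \rhd \mathbb{I} \triangleq \Gamma \times I$.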
A model of cubical type theory ($\mathsf{CTT}$) is a cubical CwF equipped with some choice of type-theoretic structures, such as $\Pi$-types, path types, glue types, \etc.
\begin{definition}
A (cartesian) \defemph{cubical algebra} over a model $\mathcal{S}$ of $\mathsf{CTT}$ is a category $\mathcal{C}$ with a terminal object, along with a functor $F : \mathcal{C} \to \mathcal{S}$ preserving the terminal object, a locally representable interval presheaf $\mathbb{I}^{\mathcal{C}} : \mathrm{RepPsh}^{\mathcal{C}}$ with two endpoints $0^{\mathcal{C}}, 1^{\mathcal{C}} : \mathbb{I}^{\mathcal{C}}$ and an action $\mathsf{int} : \mathbb{I}^{\mathcal{C}} \to \mmod{F^{\ast}} \to \mathbb{I}^{\mathcal{S}}$ that preserves context extensions and the endpoints.
The category of cubes $\square_{\mathcal{S}}$ over a model $\mathcal{S}$ is defined as the biinitial cubical algebra over $\mathcal{S}$.
We denote by $\square$ the category of cubes of the biinitial model $\mathbf{0}_{\mathsf{CTT}}$ of cubical type theory.
\lipicsEnd{} \end{definition}
\begin{restatable}[Induction principle relative to $\square \to \mathbf{0}_{\mathsf{CTT}}$]{lemma}{restateIndCube}\label{lem:ind_cubes}
Let $\mathbf{0}_{\mathsf{CTT}}^{\bullet}$ be a global displayed model without context extensions over $F : \square \to \mathbf{0}_{\mathsf{CTT}}$, along with a map
\[ \mathsf{int}^{\bullet} : (i : \mathbb{I}^{\square}) \to \mathbb{I}^{\bullet}\ (\mathsf{int}\ i) \]
such that $\mathsf{int}^{\bullet}\ 0^{\square} = 0^{\bullet}$ and $\mathsf{int}^{\bullet}\ 1^{\square} = 1^{\bullet}$.
Then there exists a relative section $S_{\alpha}$ of $\mathbf{0}_{\mathsf{CTT}}^{\bullet}$.
\qed{} \end{restatable}
\begin{definition}
A (cartesian) \defemph{cubical atomic algebra} over a model $\mathcal{S}$ of $\mathsf{CTT}$ is a category $\mathcal{C}$ with a terminal object, along with a functor $F : \mathcal{C} \to \mathcal{S}$ preserving the terminal object and with the structures of a cubical algebra ($\mathbb{I}^{\mathcal{C}}, 0^{\mathcal{C}}, 1^{\mathcal{C}}, \mathsf{int}$) and of a renaming algebra $(\mathsf{Var}^{\mathcal{C}}, \mathsf{var})$.
The category of cubical atomic contexts $\mathcal{A}_{\square}$ is the biinitial cubical atomic algebra over the biinitial model $\mathbf{0}_{\mathsf{CTT}}$ of cubical type theory.
\lipicsEnd{} \end{definition}
\begin{restatable}[Induction principle relative to $\mathcal{A}_{\square} \to \mathbf{0}_{\mathsf{CTT}}$]{lemma}{restateIndAtomicCube}\label{lem:ind_renaming_cubes}
Let $\mathbf{0}_{\mathsf{CTT}}^{\bullet}$ be a global displayed model without context extensions over $F : \mathcal{A}_{\square} \to \mathbf{0}_{\mathsf{CTT}}$, along with, internally to $\mathbf{Psh}^{\mathcal{I}(\mathbf{0}_{\mathsf{CTT}}^{\bullet})}$, global maps
\begin{alignat*}{3}
& \mathsf{var}^{\bullet} && :{ } && \forall \mmod{I^{\ast}}\ (A : \mmod{F^{\ast}} \to \mathsf{Ty}) (a : \mathsf{Var}\ A) \to \mathsf{Tm}^{\bullet}\ (S_{\iota}^{\mathsf{Ty}}\ \mmod{I^{\ast}}\ A)\ (\mathsf{var}\ a), \\
& \mathsf{int}^{\bullet} && :{ } && \forall \mmod{I^{\ast}}\ (i : \mathbb{I}^{\mathcal{A}_{\square}}) \to \mathbb{I}^{\bullet}\ (\mathsf{int}\ i),
\end{alignat*}
such that $\mathsf{int}^{\bullet}\ \mmod{I^{\ast}}\ 0^{\mathcal{A}_{\square}} = 0^{\bullet}$ and $\mathsf{int}^{\bullet}\ \mmod{I^{\ast}}\ 1^{\mathcal{A}_{\square}} = 1^{\bullet}$.
Then there exists a relative section $S_{\alpha}$ of $\mathbf{0}_{\mathsf{CTT}}^{\bullet}$.
\qed{} \end{restatable}
\section{Applications}\label{sec:applications}
We give a few applications of our relative induction principles. Only the canonicity proof is detailed here; for most of the other proofs we only give the definition of the displayed types. A more detailed normalization proof is given in~\cref{sec:normalization}.
\subsection{Canonicity}\label{sec:applCanon} In order to prove canonicity for $\mathbf{0}_{\MT_{\Pi,\mathbf{B}}}$, we construct a displayed model without context extensions $\mathbf{0}_{\MT_{\Pi,\mathbf{B}}}^{\bullet}$ over $F : \{\diamond\} \to \mathbf{0}_{\MT_{\Pi,\mathbf{B}}}$, so as to apply \cref{lem:ind_terminal} to it. It is defined in the internal language of $\mathbf{Psh}^{\{\diamond\}}$ ($= \mathbf{Set}$).
A type of $\mathbf{0}_{\MT_{\Pi,\mathbf{B}}}^{\bullet}$ displayed over a type $A : \mmod{F^{\ast}} \to \mathsf{Ty}$ is a set-valued proof-relevant logical predicate over the terms of type $A$. A term of $\mathbf{0}_{\MT_{\Pi,\mathbf{B}}}^{\bullet}$ of type $A^{\bullet}$ displayed over a term $a : \mmod{F^{\ast}} \to \mathsf{Tm}\ (A\ \mmod{F^{\ast}})$ is a witness of the fact that $a$ satisfies the predicate $A^{\bullet}$. \begin{alignat*}{3}
& \mathsf{Ty}^{\bullet}\ A && \triangleq{ } && (a : \mmod{F^{\ast}} \to \mathsf{Tm}\ (A\ \mmod{F^{\ast}})) \to \mathrm{Set} \\
& \mathsf{Tm}^{\bullet}\ A^{\bullet}\ a && \triangleq{ } && A^{\bullet}\ a \end{alignat*}
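For instance, a displayed type over the boolean type is simply a set-valued predicate on closed boolean terms, since
\[ \mathsf{Ty}^{\bullet}\ (\lambda\ \mmod{F^{\ast}} \mapsto \mathbf{B}) = (b : \mmod{F^{\ast}} \to \mathsf{Tm}\ \mathbf{B}) \to \mathrm{Set}. \]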
The logical predicate over $\Pi\ A\ B$ characterizes the functions that send terms satisfying the logical predicate of $A$ to terms satisfying the logical predicate of $B$. \begin{alignat*}{3}
& \Pi^{\bullet}\ A^{\bullet}\ B^{\bullet} && \triangleq{ } && \lambda\ f \mapsto (\forall a\ (a^{\bullet} : A^{\bullet}\ a) \to B^{\bullet}\ a^{\bullet}\ (\mathsf{app} \circleddollar f \circledast a)) \\
& \mathsf{app}^{\bullet}\ f^{\bullet}\ a^{\bullet} && \triangleq{ } && f^{\bullet}\ a^{\bullet} \end{alignat*}
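For instance, when $A$ is the boolean type $\mathbf{B}$ and the family $B$ is constantly $\mathbf{B}$, the predicate over $\Pi\ \mathbf{B}\ \mathbf{B}$ applied to $f$ unfolds to the set
\[ \forall b\ (b^{\bullet} : \mathbf{B}^{\bullet}\ b) \to \mathbf{B}^{\bullet}\ (\mathsf{app} \circleddollar f \circledast b), \]
\ie{} the set of witnesses that $f$ sends canonical booleans to canonical booleans.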
Finally, $\mathbf{B}^{\bullet} : (\mmod{F^{\ast}} \to \mathsf{Tm}\ \mathbf{B}) \to \mathrm{Set}$ characterizes canonical booleans, and is defined as the inductive family generated by $\mathsf{true}^{\bullet} : \mathbf{B}^{\bullet}\ (\lambda\ \mmod{F^{\ast}} \mapsto \mathsf{true})$ and $\mathsf{false}^{\bullet} : \mathbf{B}^{\bullet}\ (\lambda\ \mmod{F^{\ast}} \mapsto \mathsf{false})$. The displayed boolean eliminator $\mathsf{elim}_{\BoolTy}^{\bullet}$ is then obtained from the elimination principle of $\mathbf{B}^{\bullet}$.
\Cref{lem:ind_terminal} now provides a relative section $S_{\alpha}$ of $\mathbf{0}_{\MT_{\Pi,\mathbf{B}}}^{\bullet}$.
Internally to $\mathbf{Psh}^{\{\diamond\}}$, take any global boolean term $(b : \mmod{F^{\ast}} \to \mathsf{Tm}\ \mathbf{B})$. Note that since $F : \{\diamond\} \to \mathbf{0}_{\MT_{\Pi,\mathbf{B}}}$ selects the empty context, the dependent right adjoint $\modcolor{F^{\ast}}$ restricts presheaves over $\mathbf{0}_{\MT_{\Pi,\mathbf{B}}}$ to the empty context. Thus $b$ is indeed a closed boolean term.
We can apply the action of the relative section $S_{\alpha}$ to $b$. We obtain an element $S_{\alpha}^{\mathsf{Tm}}\ b$ of $\mathsf{Tm}^{\bullet}\ (S_{\alpha}^{\mathsf{Ty}}\ (\lambda\ \mmod{F^{\ast}} \mapsto \mathbf{B}))\ b$. We compute $S_{\alpha}^{\mathsf{Ty}}\ (\lambda\ \mmod{F^{\ast}} \mapsto \mathbf{B}) = \mathbf{B}^{\bullet}$. Therefore we have an element of $\mathbf{B}^{\bullet}\ b$. Since $\mathbf{B}^{\bullet}$ is the inductive family generated by $\mathsf{true}^{\bullet}$ and $\mathsf{false}^{\bullet}$, this element forces $b$ to be $(\lambda\ \mmod{F^{\ast}} \mapsto \mathsf{true})$ or $(\lambda\ \mmod{F^{\ast}} \mapsto \mathsf{false})$. This proves that $b$ is canonical.
\subsection{Normalization}\label{sec:applNorm}
The normalization proof of~\cite{CoquandNormalization} can be expressed using the induction principle relative to $F : \mathcal{Ren} \to \mathbf{0}_{\MT_{\Pi,\mathbf{B}}}$, namely~\cref{lem:ind_renamings}.
Internally to $\mathbf{Psh}^{\mathcal{Ren}}$, we have inductively defined families $\mathsf{NfTy} : (\mmod{F^{\ast}} \to \mathsf{Ty}) \to \mathrm{Psh}^{\mathcal{Ren}}$, $\mathsf{Nf} : \forall A \to (\mmod{F^{\ast}} \to \mathsf{Tm}\ (A\ \mmod{F^{\ast}})) \to \mathrm{Psh}^{\mathcal{Ren}}$ and $\mathsf{Ne} : \forall A \to (\mmod{F^{\ast}} \to \mathsf{Tm}\ (A\ \mmod{F^{\ast}})) \to \mathrm{Psh}^{\mathcal{Ren}}$, describing the normal forms of types, the normal forms of terms, and the neutral terms, respectively.
A displayed type $A^{\bullet} : \mathsf{Ty}^{\bullet}\ A$ is a tuple $(A^{\bullet}_{0},A^{\bullet}_{p},A^{\bullet}_{\alpha},A^{\bullet}_{\beta})$ where: \begin{itemize}
\item $A^{\bullet}_{0} : \mathsf{NfTy}\ A$ is a witness that the type $A$ admits a normal form;
\item $A^{\bullet}_{p} : (\mmod{F^{\ast}} \to \mathsf{Tm}\ (A\ \mmod{F^{\ast}})) \to \mathrm{Psh}^{\mathcal{Ren}}$ is a proof-relevant logical predicate over terms of type $A$, valued in presheaves over $\mathcal{Ren}$;
\item $A^{\bullet}_{\alpha} : \forall a \to \mathsf{Ne}\ a \to A^{\bullet}_{p}\ a$ is a natural transformation, usually called \emph{reflect} or \emph{unquote}, witnessing the fact that all neutral terms satisfy the logical predicate $A^{\bullet}_{p}$;
\item $A^{\bullet}_{\beta} : \forall a \to A^{\bullet}_{p}\ a \to \mathsf{Nf}\ a$ is a natural transformation, usually called \emph{reify} or \emph{quote}, witnessing the fact that terms satisfying the logical predicate $A^{\bullet}_{p}$ admit normal forms (see below). \end{itemize}
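In particular, composing these two natural transformations yields, for every $a$, a map
\[ A^{\bullet}_{\beta}\ a \circ A^{\bullet}_{\alpha}\ a : \mathsf{Ne}\ a \to \mathsf{Nf}\ a, \]
sending every neutral term to a normal form, as in standard normalization-by-evaluation arguments.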
\subsection{Canonicity for cubical type theory}\label{sec:applCanCTT}
The proof of canonicity for cubical type theory from~\cite{HoCanonicityCTT} can be reformulated using the induction principle relative to $F : \square \to \mathbf{0}_{\mathsf{CTT}}$, \ie{} \cref{lem:ind_cubes}. Internally to $\mathbf{Psh}^{\square}$, we have a universe $\mathcal{U}^{\mathsf{fib}}$ of fibrant cubical sets. A displayed type $A^{\bullet} : \mathsf{Ty}^{\bullet}\ A$ is a logical predicate valued in fibrant cubical sets: \[ A^{\bullet} : (\mmod{F^{\ast}} \to \mathsf{Tm}\ (A\ \mmod{F^{\ast}})) \to \mathcal{U}^{\mathsf{fib}}. \]
\subsection{Normalization for cubical type theory}\label{sec:applNormCTT}
The proof of normalization for cubical type theory from~\cite{NormalizationCTT} can be reformulated using the induction principle relative to $F : \mathcal{A}_{\square} \to \mathbf{0}_{\mathsf{CTT}}$, that is \cref{lem:ind_renaming_cubes}.
\subsection{Syntactic parametricity}\label{sec:applPar}
Syntactic parametricity can be described by a displayed model without context extensions over $\mathsf{id} : \mathbf{0}_{\MT} \to \mathbf{0}_{\MT}$. A displayed type $A^{\bullet} : \mathsf{Ty}^{\bullet}\ A$ is a type-valued logical predicate $A^{\bullet} : \mathsf{Tm}\ A \to \mathsf{Ty}$.
However, we do not have a relative section of this displayed model. We have the displayed inserter category $\mathcal{I}(\mathbf{0}_{\MT}^{\bullet})$; but the map $I : \mathcal{I}(\mathbf{0}_{\MT}^{\bullet}) \to \mathbf{0}_{\MT}$ does not admit a section. Various applications of syntactic parametricity can use various functors into $\mathcal{I}(\mathbf{0}_{\MT}^{\bullet})$. For instance, if we only care about closed terms, we can consider the functor $\{\diamond\} \to \mathcal{I}(\mathbf{0}_{\MT}^{\bullet})$. This is sufficient to prove that any closed term $f : \mathsf{Tm}\ ((A : \mathcal{U}) \to A \to A)$ is homotopic to the polymorphic identity function.
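For instance, unfolding the displayed $\Pi$-types and universe in the expected way (we do not spell out this routine computation), the logical predicate at the type $(A : \mathcal{U}) \to A \to A$ sends a closed term $f$ to the type of functions providing, for every type $A$, every predicate $A^{\bullet} : \mathsf{Tm}\ A \to \mathsf{Ty}$, and every term $a : \mathsf{Tm}\ A$ together with a witness of $A^{\bullet}\ a$, a witness of $A^{\bullet}\ (f\ A\ a)$; the identity-function result then follows by a standard argument.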
\subsection{Conservativity}\label{sec:applConserv}
The conservativity of extensional type theory (ETT) over intensional type theory (ITT) can be obtained using an induction principle relative to $F : \mathcal{Ren}_{\mathsf{ITT}} \to \mathbf{0}_{\mathsf{ETT}}$.
The proof involves some congruence $(\sim)$ over $\mathbf{0}_{\mathsf{ITT}}$; this consists of equivalence relations on types and terms preserving all type-theoretic operations. A displayed type $A^{\bullet} : \mathsf{Ty}^{\bullet}\ A$ is an element of the quotient of $(A_{0} : \mathsf{Ty}_{\mathsf{ITT}}) \times (F^{\mathsf{Ty}}\ A_{0} = A)$ by this relation. Displayed terms are defined similarly. Hofmann's proof involves the quotient model $(\mathbf{0}_{\mathsf{ITT}}/\sim)$, but by working internally to $\mathbf{Psh}^{\mathcal{Ren}_{\mathsf{ITT}}}$ we can avoid the (easy but tedious) construction of that model.
\section{Future work}
While we have focused on the type theory $\MT_{\Pi,\mathbf{B}}$ in this document, we hope that it is clear that these constructions generalize to other type theories. Nevertheless, it would be good to actually prove that all of these constructions can be done for arbitrary type theories. In~\cite{QIITs}, a syntactic definition of quotient inductive-inductive type signature is given, along with semantics. It should be possible to extend this approach and give general definitions of models, morphisms, displayed models (without context extensions), \etc, for arbitrary type theory signatures following \cite{paoloHOAS}. Other definitions of the general notion of type theory have been proposed recently~\cite{GeneralDefinitionDTT,GeneralFrameworkSemanticsTT}. One advantage of the approach of~\cite{QIITs} is that its semantics are given by induction on the syntax of signatures; and thus the definitions of models, morphisms, \etc, for a given type theory signature can be computed.
Current proof assistants based on dependent type theory natively support many variants of inductive types. We believe that the ideas presented in this paper could help towards the implementation of proof assistants that natively support syntax with bindings.
We have used our framework to give short proofs of canonicity, normalization, parametricity and conservativity results for dependent type theory. We see them as non-trivial yet well-understood results, which therefore serve as good benchmarks for our induction principles. We hope to apply this framework to the proof of novel results in the future.
We would also like to extend this work to other kinds of context extensions and binding structures, such as affine binding structures. An affine variable cannot be duplicated (in the absence of additional structure) and can therefore be used at most once. This should give a description of the category of weakenings as the initial object of some category. The category of weakenings is similar to the category of renamings, but without the ability to duplicate variables. Using the category of weakenings in a normalization proof allows for non-linear equations in the type theory, such as the group equation $x \cdot x^{-1} = 1$. The internal language of presheaves over the category of weakenings is also the right setting for proving the decidability of equality.
\appendix
\section{The dependent right adjoints \texorpdfstring{$F^{\ast}$}{F\textasciicircum *} and \texorpdfstring{$F_{\ast}$}{F\_{}*}}\label{sec:dra_constructions}
In this section we give explicit definitions of the adjunctions $F_{!} \dashv F^{\ast}$ and $F^{\ast} \dashv F_{\ast}$ and their dependent versions $F_{!} \dashv \modcolor{F^{\ast}}$ and $F^{\ast} \dashv \modcolor{F_{\ast}}$, given a functor $F : \mathcal{C} \to \mathcal{D}$. These definitions are standard category theory; we record them only for the benefit of the reader.
The precomposition functor: \begin{alignat*}{3}
& F^{\ast} && :{ } && \mathbf{Psh}^{\mathcal{D}} \to \mathbf{Psh}^{\mathcal{C}} \\
& \abs{F^{\ast}\ X'}_\Gamma && \triangleq{ } && \abs{X'}_{F\ \Gamma} \\
& x[\rho]_{F^{\ast}\ X'} && \triangleq{ } && x[F\ \rho]_{X'} \\
& \abs{F^{\ast}\ f'}_\Gamma\ x && \triangleq{ } && \abs{f'}_{F\ \Gamma}\ x \end{alignat*} Its left adjoint: \begin{alignat*}{3}
& F_{!} && :{ } && \mathbf{Psh}^{\mathcal{C}} \to \mathbf{Psh}^{\mathcal{D}} \\
& \abs{F_{!}\ X}_{\Gamma'} && \triangleq{ } &&
\big((\Gamma:\abs{\mathcal{C}})\times\mathcal{D}(\Gamma'\to F\ \Gamma)\times\abs{X}_{\Gamma}\big) /{\sim} \text{ where } (\Gamma,\delta',x[\rho]_X) \sim (\Delta,F\ \rho\circ\delta',x) \\
& (\Gamma,\delta',x)[\rho']_{F_{!}\ X} && \triangleq{ } && (\Gamma,\delta'\circ\rho',x) \\
& \abs{F_{!}\ f}_{\Gamma'}\ (\Gamma,\delta',x) && \triangleq{ } && (\Gamma,\delta',\abs{f}_\Gamma\ x)
\end{alignat*} The unit of the adjunction $F_{!} \dashv F^{\ast}$ is given by \begin{alignat*}{3}
& \eta_X && :{ } && X \to (F^{\ast}\ (F_{!}\ X)) \\
& \abs{\eta_X}_\Gamma\ x && \triangleq{ } && (\Gamma,\mathsf{id}_{F\ \Gamma},x) \end{alignat*} while the hom-set definition of the adjunction is given by an isomorphism \[
\phi : (F_{!}\ X\to X') \cong (X\to F^{\ast}\ X') : \phi^{-1} \] natural in $X$ and $X'$, where $\phi\ f' \triangleq F^{\ast}\ f'\circ\eta_X$ \ie{} $\abs{\phi\ f'}_\Gamma\ x = \abs{f'}_{F\ \Gamma}\ (\abs{\eta_X}_\Gamma\ x)$ and $\abs{\phi^{-1}\ f}_{\Gamma'}\ (\Gamma,\delta',x) \triangleq (\abs{f}_\Gamma\ x)[\delta']_{X'}$. The dependent right adjoint of $F_{!}$: \begin{alignat*}{3}
& \modcolor{F^{\ast}} && :{ } && \mathbf{DepPsh}^{\mathcal{D}}\ (F_{!}\ X) \to \mathbf{DepPsh}^{\mathcal{C}}\ X \\
& \abs{\modcolor{F^{\ast}}\ A'}_\Gamma\ x && \triangleq{ } && \abs{A'}_{F\ \Gamma}\ (\abs{\eta_X}_\Gamma\ x) \\
& a'[\rho]_{\modcolor{F^{\ast}}\ A'} && \triangleq{ } && a'[F\ \rho]_{A'} \end{alignat*} We have $\modcolor{F^{\ast}}\ A' \circ f = \modcolor{F^{\ast}}\ (A'\circ F_{!}\ f)$. The dependent adjunction $F_{!} \dashv \modcolor{F^{\ast}}$ is an isomorphism \[ \psi : \mathbf{Psh}^{\mathcal{D}}\big((x':F_{!}\ X) \to A'(x')\big) \cong \mathbf{Psh}^{\mathcal{C}}\big((x:X)\to (\modcolor{F^{\ast}}\ A')(x)\big) : \psi^{-1} \] natural in $X$, where $\abs{\psi\ f'}_\Gamma\ x \triangleq \abs{f'}_{F\ \Gamma}\ (\abs{\eta_X}_\Gamma\ x)$ and $\abs{\psi^{-1}\ f}_{\Gamma'}\ (\Gamma,\delta',x) \triangleq (\abs{f}_\Gamma x)[\delta']_{A'}$.
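As an illustration, consider the case where $\mathcal{C} = \{\diamond\}$ is the terminal category and $F$ selects an object $d : \abs{\mathcal{D}}$ (as in the canonicity proof of \cref{sec:applCanon}, where $F$ selects the empty context). A presheaf over $\{\diamond\}$ is just a set, and the formulas above reduce to
\begin{alignat*}{3}
& \abs{F^{\ast}\ X'}_{\diamond} && ={ } && \abs{X'}_{d} \\
& \abs{F_{!}\ X}_{\Gamma'} && \simeq{ } && \mathcal{D}(\Gamma' \to d) \times \abs{X}_{\diamond} \end{alignat*}
the quotient in the definition of $F_{!}$ being trivial, since the only morphism of $\{\diamond\}$ is the identity.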
The right adjoint of $F^{\ast}$: \begin{alignat*}{3}
& F_{\ast} && :{ } && \mathbf{Psh}^{\mathcal{C}} \to \mathbf{Psh}^{\mathcal{D}} \\
& \abs{F_{\ast}\ X}_{\Gamma'} && \triangleq{ } && \big\{ \alpha:(\Gamma:\abs{\mathcal{C}})(\delta':\mathcal{D}(F\ \Gamma\to\Gamma'))\to\abs{X}_{\Gamma} \mid \alpha\ \Gamma\ (\delta'\circ F\ \sigma) = (\alpha\ \Delta\ \delta')[\sigma]_X \big\} \\
& \alpha[\rho']_{F_{\ast}\ X} && \triangleq{ } && \lambda \Gamma\ \delta' \mapsto \alpha\ \Gamma\ (\rho'\circ\delta') \\
& \abs{F_{\ast}\ f}_{\Gamma'}\ \alpha && \triangleq{ } && \lambda \Gamma\ \delta' \mapsto \abs{f}_\Gamma\ (\alpha\ \Gamma\ \delta') \end{alignat*} The adjunction is an isomorphism $\phi : (F^{\ast}\ X'\to X) \cong (X'\to F_{\ast}\ X) : \phi^{-1}$ natural in $X$ and $X'$ where $\abs{\phi\ f}_{\Gamma'}\ x' \triangleq \lambda \Gamma\ \delta' \mapsto \abs{f}_\Gamma\ (x'[\delta']_{X'})$ and $\abs{\phi^{-1}\ f'}_{\Gamma}\ x' \triangleq \abs{f'}_{F\ \Gamma}\ x'\ \Gamma\ \mathsf{id}_{F\ \Gamma}$. The dependent right adjoint of $F^{\ast}$: \begin{alignat*}{5}
& \modcolor{F_{\ast}} && :{ } && \mathbf{DepPsh}^{\mathcal{C}}\ (F^{\ast}\ X') \to \mathbf{DepPsh}^{\mathcal{D}}\ X' \\
& \abs{\modcolor{F_{\ast}}\ A}_{\Gamma'}\ x' && \triangleq{ } && \big\{ \alpha:(\Gamma:\abs{\mathcal{C}})(\delta':\mathcal{D}(F\ \Gamma\to\Gamma'))\to\abs{A}_{\Gamma}\ (x'[\delta']_{X'}) \mid \\
& && && \hphantom{\big\{{}} \alpha\ \Gamma\ (\delta'\circ F\ \sigma) = (\alpha\ \Delta\ \delta')[\sigma]_A \big\} \\
& \alpha [\rho']_{\modcolor{F_{\ast}}\ A} && \triangleq{ } && \lambda \Gamma\ \delta' \mapsto \alpha\ \Gamma\ (\rho'\circ\delta') \end{alignat*} We have $\modcolor{F_{\ast}}\ A \circ f' = \modcolor{F_{\ast}}\ (A\circ F^{\ast}\ f')$. The dependent adjunction $F^{\ast} \dashv \modcolor{F_{\ast}}$ is an isomorphism \[ \psi : \mathbf{Psh}^{\mathcal{C}}\big((x:F^{\ast}\ X') \to A(x)\big) \cong \mathbf{Psh}^{\mathcal{D}}\big((x':X')\to (\modcolor{F_{\ast}}\ A)(x')\big) : \psi^{-1} \] natural in $X'$ where $\abs{\psi\ f}_{\Gamma'}\ x' \triangleq \lambda\Gamma\ \delta'\mapsto \abs{f}_\Gamma\ (x'[\delta']_{X'})$ and $\abs{\psi^{-1}\ f'}_\Gamma\ x' \triangleq \abs{f'}_{F\ \Gamma}\ x'\ \Gamma\ \mathsf{id}_{F\ \Gamma}$.
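Continuing the illustration with $\mathcal{C} = \{\diamond\}$ and $F$ selecting $d : \abs{\mathcal{D}}$, the naturality condition in the definition of $F_{\ast}$ becomes vacuous, and we obtain
\[ \abs{F_{\ast}\ X}_{\Gamma'} \simeq \mathcal{D}(d \to \Gamma') \to \abs{X}_{\diamond}. \]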
\section{Multimodal Type Theory}\label{sec:mtt}
The proofs and constructions performed in the appendix involve more than two presheaf categories and more than a single dependent right adjoint. We rely on Multimodal Type Theory~\cite{MTT} to provide a single language that embeds the internal languages of all of those presheaf categories and the dependent right adjoints between them.
Our variant of Multimodal Type Theory differs from the one presented in~\cite{MTT} in a couple of ways. We keep the same syntax for dependent right adjoints as in \cref{sec:dras}, whereas~\cite{MTT} uses weak dependent right adjoints, which come with a positive elimination rule instead of the operation $(-\ \mmod{\mu})$. So as to remove some of the ambiguities of the informal syntax and improve readability in the presence of multiple modalities, we annotate locks with \emph{tick variables}. The extension of the syntax of Multimodal Type Theory by ticks was used by~\cite{Transpension} for the same purpose. Ticks were originally introduced in~\cite{ClocksAreTicking}.
\subsection{Multiple modalities}
Multiple modalities are given semantically by multiple dependent right adjoints. Given a functor $F : \mathcal{C} \to \mathcal{D}$, we already have two dependent right adjoints $\modcolor{F^{\ast}}$ and $\modcolor{F_{\ast}}$, which give modalities $(\mmod{F^{\ast}} \to -)$ and $(\mmod{F_{\ast}} \to -)$. Dependent right adjoints can be composed, and we also have modalities $(\mmod{F_{\ast}F^{\ast}} \to -)$, $(\mmod{F^{\ast}F_{\ast}} \to -)$, \etc, where $(\mmod{F_{\ast}F^{\ast}} \to -) = (\mmod{F_{\ast}} \mmod{F^{\ast}} \to -)$.
\subsubsection{Ticks}
In the presence of multiple modalities, or of non-trivial relations between the modalities, the notation $(-\ \mmod{\mu})$ becomes ambiguous. Suppose for instance that $\modcolor{\mu}$ is an idempotent dependent right adjoint ($\modcolor{\mu \mu} = \modcolor{\mu}$). Then for any context $\Gamma$, we have $\Gamma, \mmod{\mu}, \mmod{\mu} = \Gamma, \mmod{\mu}$. If we write $(a\ \mmod{\mu})$ in the ambient context $(\Gamma, \mmod{\mu})$, it is unclear whether the subterm $a$ should live in the context $\Gamma$ or $\Gamma, \mmod{\mu}$.
To avoid this kind of ambiguity, we will annotate locks with \emph{ticks}. In the above example, we would have $\Gamma, \emod{\mathfrak{m}}{\mu}, \emod{\mathfrak{n}}{\mu} = \Gamma, \emod{\mathfrak{m}\mathfrak{n}}{\mu}$; and we would write either $(a\ \smod{\mathfrak{n}})$ if $a$ lives over $\Gamma,\emod{\mathfrak{m}}{\mu}$ or $(a\ \smod{\mathfrak{m} \mathfrak{n}})$ if $a$ lives over $\Gamma$.
We use $\tickcolor{\mathfrak{m}}$, $\tickcolor{\mathfrak{n}}$, $\tickcolor{\mathfrak{o}}$, \etc for tick variables. The tick variables refer to the locks of the ambient context. A tick is a formal composition of tick variables, corresponding to the composition of some adjacent locks in the ambient context. We write $\tickcolor{\bullet}$ for the empty tick, corresponding to the empty composition. We write $\tickcolor{\overline{\mathfrak{m}}}$, $\tickcolor{\overline{\mathfrak{n}}}$, $\tickcolor{\overline{\mathfrak{o}}}$, \etc to refer to an arbitrary tick.
If $\Gamma$ is a context, then the subterms of $(\emod{\mathfrak{m}}{\mu} \to -)$ and $(\lambda\ \emod{\mathfrak{m}}{\mu} \mapsto -)$ live over the context $\Gamma, \emod{\mathfrak{m}}{\mu}$.
The operation $(-\ \smod{\overline{\mathfrak{m}}})$ now unbinds the last tick variable of the context; or more generally some suffix of the tick variables. The ordinary variables that occur after these tick variables are implicitly dropped from the current context.
We omit ticks when no ambiguity can arise. In fact, we do not need ticks outside of this section.
\subsubsection{Morphisms between modalities}
Finally, we have morphisms between modalities. If $\modcolor{\mu}$ and $\modcolor{\nu}$ are two parallel dependent right adjoints, whose left adjoints are respectively $L_{\mu}$ and $L_{\nu}$, a morphism $\natcolor{\alpha} : \modcolor{\mu} \Rightarrow \modcolor{\nu}$ is a natural transformation $\alpha : L_{\mu} \Rightarrow L_{\nu}$. For example, given $F : \mathcal{C} \to \mathcal{D}$, we have a counit $\natcolor{\varepsilon^{F}} : \modcolor{F_{\ast}} \modcolor{F^{\ast}} \Rightarrow \modcolor{1}$, and a unit $\natcolor{\eta^{F}} : \modcolor{1} \Rightarrow \modcolor{F^{\ast}} \modcolor{F_{\ast}} $, induced by the adjunction $(F_{!} \dashv F^{\ast})$.
Given $\natcolor{\alpha} : \modcolor{\mu} \Rightarrow \modcolor{\nu}$, we obtain a coercion operation $-\ekey{\alpha}{\overline{\mathfrak{m}}}{\overline{\mathfrak{n}}}$ that sends types and terms from the context $\Gamma, \emod{\overline{\mathfrak{n}}}{\nu}$ to the context $\Gamma, \emod{\overline{\mathfrak{m}}}{\mu}$. Semantically, this operation is the presheaf restriction operation of types and terms along the morphism $\abs{\alpha}_{\Gamma} : (\Gamma, \emod{\overline{\mathfrak{m}}}{\mu}) \to (\Gamma, \emod{\overline{\mathfrak{n}}}{\nu})$.
This induces a map \begin{alignat*}{3}
& \mathsf{coe}_{\alpha} && :{ } && \forall (A : \emod{\mathfrak{n}}{\nu} \to \mathrm{Psh}) \to (\emod{\mathfrak{n}}{\nu} \to A\ \smod{\mathfrak{n}}) \to (\emod{\mathfrak{m}}{\mu} \to (A\ \smod{\mathfrak{n}})\ekey{\alpha}{\mathfrak{m}}{\mathfrak{n}}) \\
& \mathsf{coe}_{\alpha}\ a && \triangleq{ } && \lambda\ \emod{\mathfrak{m}}{\mu} \mapsto (a\ \smod{\mathfrak{n}})\ekey{\alpha}{\mathfrak{m}}{\mathfrak{n}} \end{alignat*}
For another example, consider composable functors $F : \mathcal{C} \to \mathcal{D}$ and $G : \mathcal{D} \to \mathcal{E}$. We have a natural isomorphism $\alpha : (F G)_{!} \simeq F_{!} G_{!}$. This induces isomorphisms $(\emod{\mathfrak{m}}{{(FG)}^{\ast}} \to A) \simeq (\emod{\mathfrak{f}\mathfrak{g}}{F^{\ast}G^{\ast}} \to A)$, whose components are \[ \lambda a\ \emod{\mathfrak{f}\mathfrak{g}}{F^{\ast}G^{\ast}} \mapsto (a\ \smod{\mathfrak{m}})\ekey{\alpha}{\mathfrak{f} \mathfrak{g}}{\mathfrak{m}} \] and \[ \lambda a\ \emod{\mathfrak{m}}{(FG)^{\ast}} \mapsto (a\ \smod{\mathfrak{f}\mathfrak{g}})\ekey{\alpha^{-1}}{\mathfrak{m}}{\mathfrak{f} \mathfrak{g}}. \]
We omit the natural transformation when it can be inferred. For instance we could have written $\skey{\mathfrak{f}\mathfrak{g}}{\mathfrak{m}}$ and $\skey{\mathfrak{m}}{\mathfrak{f}\mathfrak{g}}$ above.
More generally, the operation $\ekey{\alpha}{\overline{\mathfrak{m}}}{\overline{\mathfrak{n}}}$ can be applied to any type or term over a context of the form $(\Gamma, \emod{\overline{\mathfrak{n}}}{\nu}, \Delta)$ to send it to the context $(\Gamma, \emod{\overline{\mathfrak{m}}}{\mu}, \Delta\ekey{\alpha}{\overline{\mathfrak{m}}}{\overline{\mathfrak{n}}})$, where $\Delta$ is an extension of the context $(\Gamma, \emod{\overline{\mathfrak{n}}}{\nu})$ by variable bindings and locks, and $\Delta\ekey{\alpha}{\overline{\mathfrak{m}}}{\overline{\mathfrak{n}}}$ applies the operation $\ekey{\alpha}{\overline{\mathfrak{m}}}{\overline{\mathfrak{n}}}$ to every type in $\Delta$. In that case it is interpreted semantically by restriction along the weakening $(\Gamma, \emod{\overline{\mathfrak{m}}}{\mu}, \Delta\ekey{\alpha}{\overline{\mathfrak{m}}}{\overline{\mathfrak{n}}}) \to (\Gamma, \emod{\overline{\mathfrak{m}}}{\mu}, \Delta)$ of $\abs{\alpha}_{\Gamma} : (\Gamma, \emod{\overline{\mathfrak{m}}}{\mu}) \to (\Gamma, \emod{\overline{\mathfrak{n}}}{\nu})$.
The operation $\ekey{\alpha}{\overline{\mathfrak{m}}}{\overline{\mathfrak{n}}}$ commutes with all natural type-theoretic operations. For example, $(A \times B)\ekey{\alpha}{\overline{\mathfrak{m}}}{\overline{\mathfrak{n}}} = (A\ekey{\alpha}{\overline{\mathfrak{m}}}{\overline{\mathfrak{n}}} \times B\ekey{\alpha}{\overline{\mathfrak{m}}}{\overline{\mathfrak{n}}})$.
It commutes with binders: \[ ((a : A) \to B\ a)\ekey{\alpha}{\overline{\mathfrak{m}}}{\overline{\mathfrak{n}}} = (a : A\ekey{\alpha}{\overline{\mathfrak{m}}}{\overline{\mathfrak{n}}}) \to B\ekey{\alpha}{\overline{\mathfrak{m}}}{\overline{\mathfrak{n}}}\ a. \] Note that $\ekey{\alpha}{\overline{\mathfrak{m}}}{\overline{\mathfrak{n}}}$ is not applied to the bound variable $a$, as it is already applied to the type $A$ of $a$.
It also commutes with $(\mmod{\mu} \to -)$ and $(\lambda\ \mmod{\mu} \mapsto -)$ for a dependent right adjoint $\mu$: \[ (\emod{\mathfrak{o}}{\mu} \to A)\ekey{\alpha}{\overline{\mathfrak{m}}}{\overline{\mathfrak{n}}} = (\emod{\mathfrak{o}}{\mu} \to A\ekey{\alpha}{\overline{\mathfrak{m}}}{\overline{\mathfrak{n}}}) \] \[ (\lambda\ \emod{\mathfrak{o}}{\mu} \mapsto a)\ekey{\alpha}{\overline{\mathfrak{m}}}{\overline{\mathfrak{n}}} = (\lambda\ \emod{\mathfrak{o}}{\mu} \mapsto a\ekey{\alpha}{\overline{\mathfrak{m}}}{\overline{\mathfrak{n}}}) \]
It commutes with $(-\ \smod{\mathfrak{o}})$ when $\tickcolor{\mathfrak{o}}$ is a tick variable that is bound in $\Delta$.
The operation $\ekey{\alpha}{\overline{\mathfrak{m}}}{\overline{\mathfrak{n}}\mathfrak{q}}$ (where $\tickcolor{\overline{\mathfrak{n}}\mathfrak{q}}$ is a non-empty composition of tick variables ending in $\tickcolor{\mathfrak{q}}$) can only get stuck on $(-\ \smod{\mathfrak{q}})$ (or more generally on $(-\ \smod{\overline{\mathfrak{o}}\mathfrak{q}})$). The operation $\ekey{\alpha}{\overline{\mathfrak{m}}}{\bullet}$ (where $\tickcolor{\bullet}$ is the empty composition of ticks) can only be stuck on a variable.
Finally, these operations satisfy some $2$-naturality conditions. Given two vertically composable morphisms $\natcolor{\alpha} : \modcolor{\mu} \Rightarrow \modcolor{\nu}$ and $\natcolor{\beta} : \modcolor{\nu} \Rightarrow \modcolor{\xi}$, we have \[ (-)\ekey{\beta}{\mathfrak{n}}{\mathfrak{x}}\ekey{\alpha}{\mathfrak{m}}{\mathfrak{n}} = (-)\ekey{\alpha\beta}{\mathfrak{m}}{\mathfrak{x}}. \] Given $\natcolor{\alpha} : \modcolor{\mu} \Rightarrow \modcolor{\nu}$ and a dependent right adjoint $\modcolor{\xi}$ such that the whiskering $\natcolor{\alpha}\modcolor{\xi}$ can be formed, we have \[ (-)\ekey{\alpha\modcolor{\xi}}{\mathfrak{m}\mathfrak{x}}{\mathfrak{n}\mathfrak{x}} = (-)\ekey{\alpha}{\mathfrak{m}}{\mathfrak{n}}. \] Similarly, when we can form the whiskering $\modcolor{\xi}\natcolor{\alpha}$, we have \[ (-)\ekey{\modcolor{\xi}\alpha}{\mathfrak{x}\mathfrak{m}}{\mathfrak{x}\mathfrak{n}} = (-)\ekey{\alpha}{\mathfrak{m}}{\mathfrak{n}}. \]
\section{Constructions and Proofs}\label{sec:proofs}
\subsection{Preservation of context extensions}\label{sec:preserv}
In this subsection, we show that preservation of context extensions as defined in \cref{def:preserves_ext} corresponds externally to preservation of context extensions as usually defined. Let $\int$ denote the category of elements functor.
\begin{lemma} \label{lem:loc-rep-via-left-adj} A dependent presheaf $Y : \mathbf{DepPsh}^\mathcal{C}\ X$ is locally representable exactly if the projection functor $\int (X, Y) \to \int X$ is a left adjoint. \end{lemma}
\begin{proof} This is an easy computation. In the notation of \cref{locally-representable}, at object $(\Gamma, x)$, the right adjoint is $(\Gamma \rhd Y_{\mid x}, x[\bm{p}^{Y}_{x}], \bm{q}^{Y}_{x})$ and the counit is $\bm{p}^{Y}_{x} : (\Gamma \rhd Y_{\mid x}, x[\bm{p}^{Y}_{x}]) \to (\Gamma, x)$. \end{proof}
The setting of the following statement is a functor $F : \mathcal{C} \to \mathcal{D}$. We had omitted universe indices in \cref{def:preserves_ext} for readability of the main body. Elaborating, we say $F^A$ \emph{preserves $i$-small virtual context extension} if the stated condition is satisfied for $P$ an $i$-small dependent presheaf. We say that $F^A$ \emph{preserves virtual context extension} if it preserves $i$-small virtual context extension for all $i$.
\begin{proposition} \label{preservation-of-extension}
Let $X$ be an $i$-small presheaf over $\mathcal{C}$, $A^{\mathcal{C}}$ an $i$-small dependent presheaf over $X$, $A^{\mathcal{D}}$ an $i$-small dependent presheaf over $F_{!}\ X$, and $F^{A}$ a dependent natural transformation from $A^{\mathcal{C}}$ to $\modcolor{F^{\ast}}\ A^{\mathcal{D}}$ over $X$.
This data corresponds to the premises of \cref{def:preserves_ext}, interpreted over the context $X$ of the presheaf model $\mathbf{Psh}^{\mathcal{C}}$.
The following conditions are equivalent:
\begin{enumerate}[label=(\roman*)]
\item \label{preservation-of-extension:virtual}
$F^{A}$ preserves $i$-small virtual context extension,
\item \label{preservation-of-extension:virtual-any}
$F^{A}$ preserves $j$-small virtual context extension for a fixed $j \geq i$,
\item \label{preservation-of-extension:virtual-all}
$F^{A}$ preserves virtual context extension,
\end{enumerate}
and assuming that $A^{\mathcal{C}}$ and $A^{\mathcal{D}}$ are locally representable:
\begin{enumerate}[label=(\roman*),resume]
\item \label{preservation-of-extension:external}
for $\Gamma : \abs{\mathcal{C}}$ and $x : \abs{X}_{\Gamma}$, given a representation
\[
\bm{q}^{A^\mathcal{C}}_x : \abs{A^{\mathcal{C}}_{\mid x}}\ (\Gamma \rhd A^\mathcal{C}_{\mid x}, \bm{p}^{A^\mathcal{C}}_x)
,\]
then
\[
\abs{F^{A}}_{\Gamma \rhd A^\mathcal{C}_{\mid x}}\ x[\bm{p}^{A^\mathcal{C}}_x]\ \bm{q}^{A^\mathcal{C}}_x : \abs{A^{\mathcal{D}}_{\mid \left(\abs{\eta^F_X}_\Gamma x\right)}}\ (F\ (\Gamma \rhd A^\mathcal{C}_{\mid x}), F\ \bm{p}^{A^\mathcal{C}}_x)
\]
is a representation.
\item \label{preservation-of-extension:ugly}
for $\Gamma$ and $x$ as above, the comparison morphism
\[ \angles{F\ \bm{p}^{A^{\mathcal{C}}}_{x}, \abs{F^{A}}_{\Gamma \rhd A^{\mathcal{C}}_{\mid x}}\ \left(x[\bm{p}^{A^{\mathcal{C}}}_{x}]\right)\ \bm{q}^{A^{\mathcal{C}}}_{x}} : F\ (\Gamma \rhd A^{\mathcal{C}}_{\mid x}) \to (F\ \Gamma) \rhd A^{\mathcal{D}}_{\mid\left(\abs{\eta^{F}_{X}}_{\Gamma}\ x\right)} \]
is invertible.
\end{enumerate}
Here, $\eta^{F}_{X} : X \to F^{\ast}\ (F_{!}\ X)$ is the component at $X$ of the unit of the adjunction $(F_{!} \dashv F^{\ast})$. \end{proposition}
\begin{proof} We have a strictly commuting square \begin{equation} \label{preservation-of-extension:categories} \begin{tikzcd}
\int (X, A^\mathcal{C})
\ar[r, "v"]
\ar[d, "p_\mathcal{C}"] &
\int (F_! X, A^\mathcal{D})
\ar[d, "p_\mathcal{D}"] \\
\int X
\ar[r, "u"] &
\int F_! X \end{tikzcd} \end{equation} of categories where $p_\mathcal{C}$ and $p_\mathcal{D}$ are projections, $u$ is given by the unit of the adjunction $F_! \dashv F^*$, and $v$ is induced by $F^A$.
Restriction along the maps in~\eqref{preservation-of-extension:categories} induces a strictly commuting square \begin{equation} \label{preservation-of-extension:restriction} \begin{tikzcd}
\mathbf{DepPsh}^{\mathcal{C}}\ (X, A^\mathcal{C}) &
\mathbf{DepPsh}^{\mathcal{D}}\ (F_! X, A^\mathcal{D})
\ar[l, "v^*"] \\
\mathbf{DepPsh}^{\mathcal{C}}\ X
\ar[u, "(p_\mathcal{C})^*"] &
\mathbf{DepPsh}^{\mathcal{D}}\ F_! X
\ar[u, "(p_\mathcal{D})^*"]
\ar[l, "u^*"] \rlap{.} \end{tikzcd} \end{equation} We have adjunctions $(p_\mathcal{C})^* \dashv (p_\mathcal{C})_*$ and $(p_\mathcal{D})^* \dashv (p_\mathcal{D})_*$ where $(p_\mathcal{C})_*$ takes $\Pi$-types along $A^\mathcal{C}$ and $(p_\mathcal{D})_*$ takes $\Pi$-types along $A^\mathcal{D}$. Regarding the identity as a natural transformation in~\eqref{preservation-of-extension:restriction} from the right-top to the bottom-left composite and taking its mate with respect to these adjunctions, we obtain a natural transformation \begin{equation} \label{preservation-of-extension:mate-restriction} \begin{tikzcd}
\mathbf{DepPsh}^{\mathcal{C}}\ (X, A^\mathcal{C})
\ar[d, "(p_\mathcal{C})_*"] &
\mathbf{DepPsh}^{\mathcal{D}}\ (F_! X, A^\mathcal{D})
\ar[l, "v^*"]
\ar[d, "(p_\mathcal{D})_*"] \\
\mathbf{DepPsh}^{\mathcal{C}}\ X &
\mathbf{DepPsh}^{\mathcal{D}}\ F_! X
\ar[l, "u^*"]
\ar[ul,shorten <>=20pt,Rightarrow,"\tau"] \rlap{.} \end{tikzcd} \end{equation} This is the comparison map $\tau$ from \cref{def:preserves_ext}. Conditions~\ref{preservation-of-extension:virtual}, \ref{preservation-of-extension:virtual-any}, \ref{preservation-of-extension:virtual-all} state that $\tau$ is invertible at dependent presheaves over $(F_! X, A^\mathcal{D})$ that are $i$-small, $j$-small for a fixed $j \geq i$, and $j$-small for any $j$, respectively.
The above categories may be seen as presheaf categories. Switching to that presentation and transposing the preceding square to left adjoints, we obtain the natural transformation \begin{equation} \label{preservation-of-extension:mate-left-kan-extension} \begin{tikzcd}
\mathbf{Psh}^{\int (X, A^\mathcal{C})}
\ar[r, "v_!"]
\ar[dr,shorten <>=20pt,Rightarrow,"\overline{\tau}"] &
\mathbf{Psh}^{\int (F_! X, A^\mathcal{D})} \\
\mathbf{Psh}^{\int X}
\ar[r, "u_!"]
\ar[u, "(p_\mathcal{C})^*"] &
\mathbf{Psh}^{\int F_! X}
\ar[u, "(p_\mathcal{D})^*"] \rlap{.} \end{tikzcd} \end{equation} We have that $\tau$ is invertible exactly if $\overline{\tau}$ is invertible. By Yoneda, $\tau$ is invertible already if $\overline{\tau}$ is invertible at representables. Conversely, for $\overline{\tau}$ to be invertible at representables, it suffices for $\tau$ to be invertible at applications of $v_! (p_\mathcal{C})^*$ to representables. By our smallness assumptions on $X$, $A^\mathcal{C}$, $A^\mathcal{D}$, these applications are $i$-small. This shows that conditions~\ref{preservation-of-extension:virtual}, \ref{preservation-of-extension:virtual-any}, \ref{preservation-of-extension:virtual-all} are equivalent and hold exactly if $\overline{\tau}$ is invertible at representables (which holds exactly if $\tau$ and $\overline{\tau}$ are invertible).
Assume now that $A^\mathcal{C}$ and $A^\mathcal{D}$ are locally representable. By \cref{lem:loc-rep-via-left-adj}, the functors $p_\mathcal{C}$ and $p_\mathcal{D}$ have right adjoints $q_\mathcal{C}$ and $q_\mathcal{D}$, respectively. Regarding the identity as a natural transformation from the left-bottom to the top-right composite in~\eqref{preservation-of-extension:categories}, we take its mate with respect to these adjunctions, obtaining the natural transformation \begin{equation*} \begin{tikzcd}
\int (X, A^\mathcal{C})
\ar[r, "v"]
\ar[dr,shorten <>=10pt,Rightarrow,"\theta"] &
\int (F_! X, A^\mathcal{D}) \\
\int X
\ar[r, "u"]
\ar[u, "q_\mathcal{C}"] &
\int F_! X
\ar[u, "q_\mathcal{D}"] \rlap{.} \end{tikzcd} \end{equation*} Condition~\ref{preservation-of-extension:ugly} states that $\theta$ is invertible when using the explicit description of $q_\mathcal{C}$ and $q_\mathcal{D}$ given in the proof of \cref{lem:loc-rep-via-left-adj}. Condition~\ref{preservation-of-extension:external} states the same thing without referring to particular choices of context extension: given $M : \abs{\int X}$, if $(N, f)$ is terminal in $(p_\mathcal{C} \downarrow M)$, then $(v\ N, u\ f)$ is terminal in $(p_\mathcal{D} \downarrow u\ M)$. To see that these conditions are equivalent, recall that terminal objects are unique up to (unique) isomorphisms.
The process of taking mates commutes with presheaf category formation. Thus, the natural transformation $\tau$ in~\eqref{preservation-of-extension:mate-restriction} is equivalent to the restriction action of $\theta$. It follows that $\overline{\tau}$ in~\eqref{preservation-of-extension:mate-left-kan-extension} is equivalent to the left Kan extension action of $\theta$. Recall that left Kan extension of representables is given by the original functor. Thus, $\overline{\tau}$ is invertible at representables exactly if $\theta$ is invertible. This shows that~\ref{preservation-of-extension:virtual} and~\ref{preservation-of-extension:external} are equivalent. \end{proof}
\subsection{Displayed categories}
Displayed categories were introduced in~\cite{DispCats}. The data of a displayed category over a base $\mathcal{D}$ is equivalent to the data of a category $\mathcal{C}$ equipped with a functor into $\mathcal{D}$. Many structures on functors that may seem non-categorical because they involve equalities of objects, such as fibration structures, are actually well-behaved when seen as structures over displayed categories. Some of the constructions that follow are more intuitive when thinking about displayed categories instead of functors. Because Multimodal Type Theory does not have dependent modes, we have to see displayed categories as functors when working internally.
We write $F : \mathcal{C} \rightarrowtriangle \mathcal{D}$ if $F$ is a functor that exhibits $\mathcal{C}$ as a displayed category over $\mathcal{D}$. Given an object $x$ of $\mathcal{D}$, we may write $\abs{\mathcal{C}}(x)$ (or $\mathcal{C}(x)$ or $\mathcal{C}^{\mathsf{op}}(x)$) for the set of objects of $\mathcal{C}$ displayed over $x$, that is the set containing the objects $x' : \abs{\mathcal{C}}$ such that $F\ x' = x$. Given objects $x$ and $y$ of $\mathcal{C}$ and a morphism $f : \mathcal{D}(F\ x \to F\ y)$, we write $\mathcal{C}(x \to_{f} y)$ for the set of morphisms of $\mathcal{C}$ from $x$ to $y$ that are displayed over $f$. In other words, $\mathcal{C}(x \to_{f} y)$ is the set containing the morphisms $f' : \mathcal{C}(x \to y)$ such that $F\ f' = f$.
\subsection{Sections of displayed models with context extensions}\label{sec:sections}
\begin{definition}
A \defemph{section} of a displayed model with context extensions $\mathcal{S}^{\bullet}$ over a functor $F : \mathcal{C} \rightarrowtriangle \mathcal{S}$ consists of a section $S : \mathcal{S} \to \mathcal{C}$ of $F$ (up to a natural isomorphism $(S \cdot F) \simeq 1_{\mathcal{S}}$) along with (internally to $\mathbf{Psh}^{\mathcal{S}}$):
\begin{itemize}
\item Actions on types and terms.
\begin{alignat*}{3}
& S^{\mathsf{Ty}} && :{ } && \forall (A : \mathsf{Ty}^{\mathcal{S}}) \mmod{S^{\ast}} \to \mathsf{Ty}^{\bullet}\ (\lambda \mmod{F^{\ast}} \to A\smkey{S^{\ast}F^{\ast}}{\bullet}) \\
& S^{\mathsf{Tm}} && :{ } && \forall A\ (a : \mathsf{Tm}^{\mathcal{S}}\ A)\ \mmod{S^{\ast}} \to \mathsf{Tm}^{\bullet}\ (S^{\mathsf{Ty}}\ A\ \mmod{S^{\ast}})\ (\lambda \mmod{F^{\ast}} \to a\smkey{S^{\ast}F^{\ast}}{\bullet})
\end{alignat*}
where $\smkey{S^{\ast}F^{\ast}}{\bullet}$ is the coercion along the natural isomorphism $(S \cdot F) \simeq 1_{\mathcal{S}}$.
\item Such that for every $A : \mathsf{Ty}^{\mathcal{S}}$, the total action
\[ (a : \mathsf{Tm}^{\mathcal{S}}\ A) \to (\mmod{S^{\ast}} \to (a : \mmod{F^{\ast}} \to \mathsf{Tm}^{\mathcal{S}}\ A\smkey{S^{\ast}F^{\ast}}{\bullet}) \times (\mathsf{Tm}^{\bullet}\ (S^{\mathsf{Ty}}\ A\ \mmod{S^{\ast}})\ a)) \]
preserves context extensions.
As in the definition of morphisms of models, we can then derive actions on derived sorts.
Given any telescope $X$ of $\mathcal{S}$, we can define, by induction on $X$, a family
\[ X^{\bullet} : \mmod{S^{\ast}} \to (\mmod{F^{\ast}} \to X^{\mathcal{S}}\smkey{S^{\ast}F^{\ast}}{\bullet}) \to \mathrm{Psh}^{\mathcal{C}}, \]
along with an action
\[ S^{X} : (x : X^{\mathcal{S}})\ \mmod{S^{\ast}} \to X^{\bullet}\ (\lambda \mmod{F^{\ast}} \to x\smkey{\mathfrak{s}\mathfrak{f}}{\bullet}) \]
such that the induced total action
\[ (x : X^{\mathcal{S}}) \to (\mmod{S^{\ast}} \to (x : \mmod{F^{\ast}} \to X^{\mathcal{S}}\smkey{\mathfrak{s}\mathfrak{f}}{\bullet}) \times (x^{\bullet} : X^{\bullet}\ \mmod{S^{\ast}}\ x)) \]
has a locally representable codomain and preserves context extensions.
We can then define, using the pattern matching notation of \cref{def:preserves_ext},
\begin{alignat*}{3}
& S^{[X]\mathsf{Ty}} && :{ } && (A : X^{\mathcal{S}} \to \mathsf{Ty}^{\mathcal{S}})\ \mmod{S^{\ast}}\ x\ (x^{\bullet} : X^{\bullet}\ \mmod{S^{\ast}}\ x) \\
&&&&& \to \mathsf{Ty}^{\bullet}\ (\lambda \mmod{F^{\ast}} \to A\smkey{S^{\ast}F^{\ast}}{\bullet}\ (x\ \mmod{F^{\ast}})) \\
& S^{[X]\mathsf{Ty}} && \triangleq{ } && \lambda A\ \mmod{S^{\ast}}\ (S^{X}\ x\ \mmod{S^{\ast}}) \mapsto S^{\mathsf{Ty}}\ (A\ x)\ \mmod{S^{\ast}} \\
& S^{[X]\mathsf{Tm}} && :{ } && \forall A\ (a : (x : X^{\mathcal{S}}) \to \mathsf{Tm}^{\mathcal{S}}\ (A\ x))\ \mmod{S^{\ast}}\ x\ (x^{\bullet} : X^{\bullet}\ \mmod{S^{\ast}}\ x) \\
&&&&& \to \mathsf{Tm}^{\bullet}\ (S^{[X]\mathsf{Ty}}\ A\ \mmod{S^{\ast}}\ x\ x^{\bullet})\ (\lambda \mmod{F^{\ast}} \to a\smkey{S^{\ast}F^{\ast}}{\bullet}\ (x\ \mmod{F^{\ast}})) \\
& S^{[X]\mathsf{Tm}} && \triangleq{ } && \lambda a\ \mmod{S^{\ast}}\ (S^{X}\ x\ \mmod{S^{\ast}}) \mapsto S^{\mathsf{Tm}}\ (a\ x)\ \mmod{S^{\ast}}
\end{alignat*}
\item And such that all type-theoretic operations are preserved. For example,
\begin{alignat*}{3}
& S^{\mathsf{Ty}}\ \mathbf{B}^{\mathcal{S}} && ={ } && \lambda \mmod{S^{\ast}} \mapsto \mathbf{B}^{\bullet} \\
& S^{\mathsf{Tm}}\ (\mathsf{elim}_{\BoolTy}^{\mathcal{S}}\ P\ t\ f\ b) && ={ } && \mathsf{elim}_{\BoolTy}^{\bullet} \circleddollar S^{[\mathsf{Tm}]\mathsf{Ty}}\ P \circledast S^{\mathsf{Tm}}\ t \circledast S^{\mathsf{Tm}}\ f \circledast S^{\mathsf{Tm}}\ b \\
& S^{\mathsf{Ty}}\ (\Pi^{\mathcal{S}}\ A\ B) && ={ } && \Pi^{\bullet} \circleddollar S^{\mathsf{Ty}}\ A \circledast S^{[\mathsf{Tm}]\mathsf{Ty}}\ B
\end{alignat*}
As in the definition of morphisms of models, we can derive computation rules for $S^{[X]\mathsf{Ty}}$ and $S^{[X]\mathsf{Tm}}$.
\end{itemize} \end{definition}
\subsection{Displayed presheaf category}
In what follows, we need to consider categories of presheaves over large categories, and in particular categories of presheaves over categories of presheaves. We have to be a bit careful about sizes. If $\mathcal{C}$ is a category, we write $\widehat{\mathcal{C}}$ for the category of $\omega$-small presheaves (functors into $\mathbf{Set}_{\omega}$) over $\mathcal{C}$, and $\mathbf{Psh}^{\mathcal{C}}$ for the category of large presheaves (functors into $\mathbf{Set}_{\omega+1}$) over $\mathcal{C}$. We only use the internal language of $\mathbf{Psh}^{\mathcal{C}}$.
The goal of this subsection is to construct the factorization of \cref{con:disp_replace_0}. \restateDefFactorization* \restateDispReplace*
We fix a model $\mathcal{S}$ of $\MT_{\Pi,\mathbf{B}}$ and a functor $F : \mathcal{C} \rightarrowtriangle \mathcal{S}$ for the whole of this subsection.
\begin{definition}
We define the \defemph{displayed presheaf category} $\mathcal{P}$ along with a projection functor $G : \mathcal{P} \rightarrowtriangle \mathcal{S}$ (which exhibits $\mathcal{P}$ as a displayed category over $\mathcal{S}$) and the \defemph{displayed Yoneda embedding} $Y : \mathcal{C} \to \mathcal{P}$.
The displayed Yoneda embedding is a displayed functor over $\mathcal{S}$: it satisfies $(Y \cdot G) = F$.
They are analogous to the usual category of presheaves and Yoneda embedding, but in the slice $2$-category $(\mathbf{Cat} / \mathcal{S})$, or equivalently in the $2$-category of displayed categories over $\mathcal{S}$.
\begin{mathpar}
\begin{tikzcd}
\mathcal{C} \ar[rd, "F"', -{Triangle[open]}] \ar[rr, "Y"] && \mathcal{P} \ar[ld, "G", -{Triangle[open]}] \\
& \mathcal{S} &
\end{tikzcd}
\end{mathpar}
An object $\Gamma^{\dagger}$ of $\mathcal{P}$ displayed over an object $\Gamma$ of $\mathcal{S}$ is a dependent presheaf
\[ \Gamma^{\dagger} : \forall (\Theta : \mathcal{C}^{\mathsf{op}}) (\gamma : \mathcal{S}(F\ \Theta \to \Gamma)) \to \mathbf{Set}_{\omega}. \]
A morphism $f^{\dagger} : \mathcal{P}(\Gamma^{\dagger} \to_{f} \Delta^{\dagger})$ displayed over a morphism $f : \mathcal{S}(\Gamma \to \Delta)$ is a dependent natural transformation
\[ f^{\dagger} : \forall (\Theta : \mathcal{C}^{\mathsf{op}}) (\gamma : \mathcal{S}(F\ \Theta \to \Gamma)) \to \Gamma^{\dagger}\ \gamma \to \Delta^{\dagger}\ (\gamma \cdot f). \]
Given an object $\Gamma : \abs{\mathcal{C}}$, we define an object $\abs{Y}_{\Gamma}$ of $\mathcal{P}$ displayed over $F\ \Gamma$ by
\[ \abs{Y}_{\Gamma}\ \Theta\ \gamma \triangleq \mathcal{C}(\Theta \to_{\gamma} \Gamma) \]
As this is contravariant in $\Theta$ and covariant in $\Gamma$, it extends to a displayed functor $Y : \mathcal{C} \to \mathcal{P}$. \end{definition}
\begin{proposition}\label{prop:disp_as_gluing}
The category $\mathcal{P}$ is equivalent to the comma category $(\widehat{\mathcal{C}} \downarrow N_{F})$, where $N_{F} : \mathcal{S} \to \widehat{\mathcal{C}}$ is the composition of the Yoneda embedding $\yo^{\mathcal{S}} : \mathcal{S} \to \widehat{\mathcal{S}}$ with $F^{\ast} : \widehat{\mathcal{S}} \to \widehat{\mathcal{C}}$.
\qed \end{proposition}
We prove several core properties of $G : \mathcal{P} \rightarrowtriangle \mathcal{S}$ and $Y : \mathcal{C} \to \mathcal{P}$.
The first of these properties is the generalization of the Yoneda lemma to $\mathcal{P}$. \begin{lemma}\label{lem:disp_yoneda_lemma}
There is a natural family of isomorphisms
\[ r : \forall (\Gamma^{\dagger} : \mathcal{P}) (\Delta : \mathcal{C}^{\mathsf{op}}) (\gamma : \mathcal{S}(F\ \Delta \to \Gamma)) \to \Gamma^{\dagger}\ \gamma \simeq \mathcal{P}(\abs{Y}_{\Delta} \to_{\gamma} \Gamma^{\dagger}), \]
whose components are given by
\begin{alignat*}{3}
& r\ \Gamma^{\dagger}\ \Delta\ \gamma\ \gamma^{\dagger} && \triangleq{ } && \lambda \Theta\ \delta\ (\delta' : \mathcal{C}(\Theta \to_{\delta} \Delta)) \mapsto \gamma^{\dagger}[\delta'] \\
& r^{-1}\ \Gamma^{\dagger}\ \Delta\ \gamma\ \gamma' && \triangleq{ } && \gamma'\ \Delta\ \mathsf{id}_{F\ \Delta}\ \mathsf{id}_{\Delta} \tag*{\qed}
\end{alignat*} \end{lemma}
\begin{proposition}\label{prop:disp_yoneda_ff}
The functor $Y : \mathcal{C} \to \mathcal{P}$ is fully faithful. \end{proposition} \begin{proof}
We prove that the actions of $Y$ on displayed morphisms are bijective; this implies that its total actions are also bijective.
Take two objects $\Gamma$ and $\Delta$ of $\mathcal{C}$ and a base morphism $f : \mathcal{S}(F\ \Delta \to F\ \Gamma)$.
By \cref{lem:disp_yoneda_lemma}, we have $\abs{Y}_{\Gamma}\ f \simeq \mathcal{P}(\abs{Y}_{\Delta} \to_{f} \abs{Y}_{\Gamma})$; and $\abs{Y}_{\Gamma}\ f = \mathcal{C}(\Delta \to_{f} \Gamma)$ by definition.
This determines a function $\mathcal{C}(\Delta \to_{f} \Gamma) \to \mathcal{P}(\abs{Y}_{\Delta} \to_{f} \abs{Y}_{\Gamma})$ that coincides with the action of $Y$ on displayed morphisms, which is therefore bijective. \end{proof}
\begin{proposition}\label{prop:disp_yoneda_unit}
The unit $\eta^{Y} : 1_{\mathbf{Psh}^{\mathcal{C}}} \Rightarrow (Y_{!} \cdot Y^{\ast})$ is an isomorphism. \end{proposition} \begin{proof}
This follows from $Y$ being fully faithful (see~\cite[Prop 4.23]{Kelly05}). \end{proof} As both $Y_{!}$ and $Y^{\ast}$ admit dependent right adjoints, \cref{prop:disp_yoneda_unit} induces coercion operations internally to $\mathbf{Psh}^{\mathcal{C}}$ and $\mathbf{Psh}^{\mathcal{P}}$.
\begin{lemma} \label{lem:fib-pullback-left-adj} Let $p : \mathcal{E} \to \mathcal{C}$ be a Grothendieck fibration. Then pullback along $p$ preserves left adjoints. \end{lemma}
\begin{proof} A standard fact. \end{proof}
In the following, we switch freely between the point of view of a dependent presheaf over $X$ and a map into $X$. In particular, a dependent presheaf over $X$ is locally representable exactly if the corresponding map into $X$ is a representable morphism.
\begin{corollary} \label{cor:loc-rep-pullback-fib} Let $p : \mathcal{E} \to \mathcal{C}$ be a Grothendieck fibration. Let $f \in \mathbf{Psh}^{\mathcal{C}}(Y \to X)$ be locally representable. Then $p^{\ast} f \in \mathbf{Psh}^{\mathcal{E}}(p^{\ast} Y \to p^{\ast} X)$ is locally representable. \end{corollary}
\begin{proof}
This is the combination of \cref{lem:loc-rep-via-left-adj,lem:fib-pullback-left-adj}.
For this, note that $\int p^{\ast} f$ is the pullback of $\int f$ along $p$. \end{proof}
\begin{proposition}[Internally to $\mathbf{Psh}^{\mathcal{P}}$]\label{prop:disp_yoneda_rep_1}
For every locally representable presheaf $A : \mmod{G^{\ast}} \to \mathrm{RepPsh}^{\mathcal{S}}$, the presheaf $(\mmod{G^{\ast}} \to A\ \mmod{G^{\ast}})$ is locally representable. \end{proposition}
\begin{proof} We have to show the judgment $ A : \mmod{G^{\ast}} \to \mathrm{RepPsh}^{\mathcal{S}} \vdash \mathrm{isRep}(\mmod{G^{\ast}} \to A\ \mmod{G^{\ast}}) .$ Inhabitants of this type correspond to local representability structures of the dependent presheaf $ A : \mmod{G^{\ast}} \to \mathrm{RepPsh}^{\mathcal{S}} \vdash \mmod{G^{\ast}} \to A\ \mmod{G^{\ast}} .$ This is the image of the universal locally representable dependent presheaf $ A : \mathrm{RepPsh}^{\mathcal{S}} \vdash A $ under $G^{\ast}$. From \cref{prop:disp_as_gluing}, we see that $G^{\ast}$ is a Grothendieck fibration. We conclude by \cref{cor:loc-rep-pullback-fib}. \end{proof}
\begin{proposition}[Internally to $\mathbf{Psh}^{\mathcal{P}}$]\label{prop:disp_yoneda_rep_2}
Given an $\omega$-small presheaf $A : \mmod{Y_{\ast}} \to \mathrm{Psh}_{\omega}^{\mathcal{C}}$, the presheaf $(\mmod{Y_{\ast}} \to A\ \mmod{Y_{\ast}})$ is locally representable. \end{proposition}
\begin{proof} We have to show the judgment $ A : \mmod{Y_{\ast}} \to \mathrm{Psh}_{\omega}^{\mathcal{C}} \vdash \mathrm{isRep}(\mmod{Y_{\ast}} \to A\ \mmod{Y_{\ast}}) .$ Inhabitants of this type correspond to local representability structures of the dependent presheaf $ A : \mmod{Y_{\ast}} \to \mathrm{Psh}_{\omega}^{\mathcal{C}} \vdash \mmod{Y_{\ast}} \to A\ \mmod{Y_{\ast}} .$ This is the image of the universal dependent presheaf $A : \mathrm{Psh}_{\omega}^{\mathcal{C}} \vdash A$ under $Y_*$. So it suffices to show the following: given an $\omega$-small dependent presheaf $N$ over $M$ in $\mathrm{Psh}^{\mathcal{C}}$, the dependent presheaf $Y_* N$ over $Y_* M$ in $\mathrm{Psh}^{\mathcal{P}}$ is locally representable.
Let us inspect the action of the functor $Y_*$ on $M : \mathrm{Psh}^{\mathcal{C}}$. From \cref{sec:dra_constructions}, we have for $\Gamma : \mathcal{S}$ and $\Gamma^\dagger : \mathcal{P}(\Gamma)$ that $|Y_* M|_{(\Gamma, \Gamma^\dagger)}$ consists of a dependent natural transformation \[
(\Delta : \mathcal{C}^{\mathsf{op}}) (g : \mathcal{P}(Y\ \Delta \to (\Gamma, \Gamma^\dagger))) \to |M|_\Delta. \] Regarding $\mathcal{C}$ as displayed over $\mathcal{S}$, this can be rewritten as \[
(\Delta_\mathcal{S} : \mathcal{S}^{\mathsf{op}}) (\Delta_\mathcal{C} : \mathcal{C}^{\mathsf{op}}(\Delta_\mathcal{S})) (u : \mathcal{S}(\Delta_\mathcal{S} \to \Gamma)) (u^\dagger : \mathcal{P}(Y\ \Delta_\mathcal{C} \to_u \Gamma^\dagger)) \to |M|_{\Delta_\mathcal{C}}. \] By \cref{lem:disp_yoneda_lemma} (displayed Yoneda), this is naturally isomorphic to the type of dependent natural transformations \[
(\Delta_\mathcal{S} : \mathcal{S}^{\mathsf{op}}) (\Delta_\mathcal{C} : \mathcal{C}^{\mathsf{op}}(\Delta_\mathcal{S})) (u : \mathcal{S}(\Delta_\mathcal{S} \to \Gamma)) (u^\dagger : \Gamma^\dagger\ u) \to |M|_{\Delta_\mathcal{C}}. \]
Similarly, for a dependent presheaf $N$ over $M$ in $\mathrm{Psh}^{\mathcal{C}}$, we can describe the dependent presheaf $Y_* N$ over $Y_* M$ in $\mathrm{Psh}^{\mathcal{P}}$ as follows. Given $\Gamma : \mathcal{S}$ and $\Gamma^\dagger : \mathcal{P}(\Gamma)$ and $\alpha : |Y_* M|_{(\Gamma, \Gamma^\dagger)}$, $|Y_* N|_{(\Gamma,\Gamma^{\dagger})}\ \alpha$ is the type of dependent natural transformations \[
(\Delta_\mathcal{S} : \mathcal{S}^{\mathsf{op}}) (\Delta_\mathcal{C} : \mathcal{C}^{\mathsf{op}}(\Delta_\mathcal{S})) (u : \mathcal{S}(\Delta_\mathcal{S} \to \Gamma)) (u^\dagger : \Gamma^\dagger\ u) \to |N|\ (\alpha\ \Delta_\mathcal{S}\ \Delta_\mathcal{C}\ u\ u^\dagger). \]
We now show that $Y_{\ast} N$ is locally representable. Let $(\Gamma,\Gamma^{\dagger})$ be an object of $\mathcal{P}$ and $\alpha : \abs{Y_{\ast} M}_{(\Gamma,\Gamma^{\dagger})}$. We have to define a representing object for the presheaf (over $(\mathcal{P}/(\Gamma,\Gamma^{\dagger}))$) \begin{alignat*}{3}
& (Y_{\ast} N)_{\mid \alpha} && :{ } && ((\Omega,\Omega^{\dagger}) : \mathcal{P}^{\mathsf{op}}) (\rho : \mathcal{S}(\Omega \to \Gamma)) (\rho^{\dagger} : \mathcal{P}(\Omega^{\dagger} \to_{\rho} \Gamma^{\dagger})) \to \mathrm{Set} \\
& \abs{(Y_{\ast}N)_{\mid \alpha}}\ (\Omega,\Omega^{\dagger})\ \rho\ \rho^{\dagger} && \triangleq{ } && \abs{Y_{\ast} N}_{(\Omega,\Omega^{\dagger})}\ (\lambda \Delta_{\mathcal{S}}\ \Delta_{\mathcal{C}}\ u\ u^{\dagger} \mapsto \alpha\ \Delta_{\mathcal{S}}\ \Delta_{\mathcal{C}}\ (u \cdot \rho)\ (\rho^{\dagger}\ u\ u^{\dagger})) \end{alignat*}
The extended context is $(\Gamma, \Gamma^{\rhd})$ where \begin{alignat*}{3}
& \Gamma^{\rhd}\ \Delta_{\mathcal{S}}\ \Delta_{\mathcal{C}}\ (u : \mathcal{S}(\Delta_{\mathcal{S}} \to \Gamma)) && \triangleq{ } &&
(u^{\dagger} : \Gamma^{\dagger}\ \Delta_{\mathcal{S}}\ \Delta_{\mathcal{C}}\ u) \times (n : \abs{N}\ \Delta_{\mathcal{S}}\ \Delta_{\mathcal{C}}\ (\alpha\ \Delta_{\mathcal{S}}\ \Delta_{\mathcal{C}}\ u\ u^{\dagger})) \end{alignat*} Note that this is only well-defined because $N$ is $\omega$-small.
The projection morphism $\bm{p} : \mathcal{P}(\Gamma^{\rhd} \to_{\mathsf{id}} \Gamma^{\dagger})$ forgets the component $n$. The generic element $\bm{q} : \abs{(Y_{\ast} N)_{\mid \alpha}}\ (\Gamma,\Gamma^{\rhd})\ \mathsf{id}\ \bm{p}$ is given by $\bm{q}\ \Delta_{\mathcal{S}}\ \Delta_{\mathcal{C}}\ u\ (u^{\dagger}, n) \triangleq n$.
Finally, we have to check the universal property of the extended context. Given $(\Omega,\Omega^{\dagger}) : \mathcal{P}$, a morphism from $(\Omega, \Omega^{\dagger})$ to $(\Gamma,\Gamma^{\rhd})$ consists of a morphism $\rho : \mathcal{S}(\Omega \to \Gamma)$ and a dependent natural transformation \[ \rho^{\rhd} : \forall \Delta_{\mathcal{S}}\ \Delta_{\mathcal{C}}\ (u : \mathcal{S}(\Delta_{\mathcal{S}} \to \Omega)) \to \Omega^{\dagger}\ u \to \Gamma^{\rhd}\ (u \cdot \rho). \] By definition of $\Gamma^{\rhd}$, this is equivalently given by a pair of dependent natural transformations \begin{alignat*}{3}
& \rho^{\dagger} && :{ } && \forall \Delta_{\mathcal{S}}\ \Delta_{\mathcal{C}}\ (u : \mathcal{S}(\Delta_{\mathcal{S}} \to \Omega)) \to \Omega^{\dagger}\ u \to \Gamma^{\dagger}\ (u \cdot \rho) \\
& \rho^{n} && :{ } && \forall \Delta_{\mathcal{S}}\ \Delta_{\mathcal{C}}\ (u : \mathcal{S}(\Delta_{\mathcal{S}} \to \Omega))\ (u^{\dagger} : \Omega^{\dagger}\ u) \to \abs{N}\ \Delta_{\mathcal{S}}\ \Delta_{\mathcal{C}}\ (\alpha\ \Delta_{\mathcal{S}}\ \Delta_{\mathcal{C}}\ (u \cdot \rho)\ (\rho^{\dagger}\ u\ u^{\dagger})), \end{alignat*} \ie. by $\rho^{\dagger} : \mathcal{P}(\Omega^{\dagger} \to_{\rho} \Gamma^{\dagger})$ and $\rho^{n} : \abs{(Y_{\ast} N)_{\mid \alpha}}\ (\Omega,\Omega^{\dagger})\ \rho\ \rho^{\dagger}$. This shows that $(\Gamma, \Gamma^{\rhd})$ satisfies the universal property of a representing object of $(Y_{\ast} N)_{\mid \alpha}$. \end{proof}
\begin{proposition}[Internally to $\mathbf{Psh}^{\mathcal{P}}$]\label{prop:disp_yoneda_rep_3}
For every $\omega$-small presheaf $A : \mmod{Y_{\ast}} \to \mathrm{Psh}_{\omega}^{\mathcal{C}}$, the unique map
\[ (\mmod{Y_{\ast}} \to A\ \mmod{Y_{\ast}}) \to (\mmod{G^{\ast}} \to \mathbf{1}) \]
preserves context extensions.
Equivalently, the constant map
\[ (\mmod{G^{\ast}} \to B\ \mmod{G^{\ast}}) \to ((\mmod{Y_{\ast}} \to A\ \mmod{Y_{\ast}}) \to (\mmod{G^{\ast}} \to B\ \mmod{G^{\ast}})) \]
is an isomorphism for every $B : \mmod{G^{\ast}} \to \mathrm{Psh}^{\mathcal{S}}$. \end{proposition} \begin{proof}
This follows from the fact that the projection map $\bm{p} : \mathcal{P}((\Gamma,\Gamma^{\rhd}) \to (\Gamma,\Gamma^{\dagger}))$ constructed in the proof of \cref{prop:disp_yoneda_rep_2} is sent by $G$ to the identity morphism $\mathsf{id} : \mathcal{S}(\Gamma \to \Gamma)$. \end{proof}
We can now forget the definitions of $\mathcal{P}$, $G$ and $Y$, as we will only rely on these properties.
We will work internally to $\mathbf{Psh}^{\mathcal{P}}$, $\mathbf{Psh}^{\mathcal{C}}$ and $\mathbf{Psh}^{\mathcal{S}}$ and use the dependent right adjoints $\modcolor{F^{\ast}}$, $\modcolor{G^{\ast}}$, $\modcolor{Y^{\ast}}$, $\modcolor{Y_{\ast}}$ and their compositions. There is actually, up to isomorphism, only a single new composite dependent right adjoint $\modcolor{Y_{\ast}Y^{\ast}}$.
\begin{construction}\label{con:disp_replace_model}
We construct a displayed model with context extensions $\mathcal{S}^{\dagger}$ over $G : \mathcal{P} \rightarrowtriangle \mathcal{S}$.
Furthermore, we equip $Y : \mathcal{C} \to \mathcal{P}$ with actions on displayed types and terms that preserve all displayed type-theoretic operations. \end{construction} \begin{proof}[Construction]
We pose, internally to $\mathbf{Psh}^{\mathcal{P}}$,
\begin{alignat*}{3}
& \mathsf{Ty}^{\dagger} && :{ } && (A : \mmod{G^{\ast}} \to \mathsf{Ty}^{\mathcal{S}}) \to \mathrm{Psh}^{\mathcal{P}} \\
& \mathsf{Ty}^{\dagger}\ A && \triangleq{ } &&
\mmod{Y_{\ast}} \to \mathsf{Ty}^{\bullet}\ (\lambda \mmod{F^{\ast}} \mapsto (A\ \mmod{G^{\ast}})\smkey{Y_{\ast}F^{\ast}}{G^{\ast}}) \\
& \mathsf{Tm}^{\dagger} && :{ } && \forall A\ (A^{\dagger} : \mathsf{Ty}^{\dagger}\ A) (a : \mmod{G^{\ast}} \to \mathsf{Tm}^{\mathcal{S}}\ (A\ \mmod{G^{\ast}})) \to \mathrm{Psh}^{\mathcal{P}} \\
& \mathsf{Tm}^{\dagger}\ A^{\dagger}\ a && \triangleq{ } &&
\mmod{Y_{\ast}} \to \mathsf{Tm}^{\bullet}\ (A^{\dagger}\ \mmod{Y_{\ast}})\ (\lambda \mmod{F^{\ast}} \mapsto (a\ \mmod{G^{\ast}})\smkey{Y_{\ast}F^{\ast}}{G^{\ast}})
\end{alignat*}
By \cref{prop:disp_yoneda_rep_1}, the family $(\mmod{G^{\ast}} \to \mathsf{Tm}^{\mathcal{S}}\ (A\ \mmod{G^{\ast}}))$ is locally representable.
By \cref{prop:disp_yoneda_rep_2}, the family $\mathsf{Tm}^{\dagger}\ A^{\dagger}\ a$ is also locally representable.
Thus the dependent sum
\[ (a : \mmod{G^{\ast}} \to \mathsf{Tm}^{\mathcal{S}}\ (A\ \mmod{G^{\ast}})) \times (a^{\dagger} : \mathsf{Tm}^{\dagger}\ A^{\dagger}\ a) \]
is a locally representable family of presheaves.
The fact that the first projection map
\[ (a : \mmod{G^{\ast}} \to \mathsf{Tm}^{\mathcal{S}}\ (A\ \mmod{G^{\ast}})) \times (a^{\dagger} : \mathsf{Tm}^{\dagger}\ A^{\dagger}\ a) \xrightarrow{\lambda (a,a^{\dagger}) \mapsto a} (\mmod{G^{\ast}} \to \mathsf{Tm}^{\mathcal{S}}\ (A\ \mmod{G^{\ast}})) \]
preserves context extensions follows from \cref{prop:disp_yoneda_rep_3}.
We have, internally to $\mathbf{Psh}^{\mathcal{C}}$, the following isomorphisms
\begin{alignat*}{3}
& Y^{\mathsf{Ty}} && :{ } &&
(A : \mmod{F^{\ast}} \to \mathsf{Ty}^{\mathcal{S}}) \\
&&&&& \to \mathsf{Ty}^{\bullet}\ A \simeq (\mmod{Y^{\ast}} \to \mathsf{Ty}^{\dagger}\ (\lambda \mmod{G^{\ast}} \mapsto (A\ \mmod{F^{\ast}})\smkey{Y^{\ast}G^{\ast}}{F^{\ast}})) \\
& Y^{\mathsf{Ty}} && \triangleq{ } &&
\lambda A\ A^{\bullet}\ \mmod{Y^{\ast}Y_{\ast}} \mapsto A^{\bullet}\smkey{Y^{\ast}Y_{\ast}}{\bullet} \\
& Y^{\mathsf{Ty},-1} && \triangleq{ } &&
\lambda A\ A^{\dagger}\ \mapsto (A^{\dagger}\ \mmod{Y^{\ast}Y_{\ast}})\smkey{\bullet}{Y^{\ast}Y_{\ast}} \\
& Y^{\mathsf{Tm}} && :{ } &&
\forall (A : \mmod{F^{\ast}} \to \mathsf{Ty}^{\mathcal{S}})\ (A^{\bullet} : \mathsf{Ty}^{\bullet}\ A)\ (a : \mmod{F^{\ast}} \to \mathsf{Tm}^{\mathcal{S}}\ (A\ \mmod{F^{\ast}})) \\
&&&&& \to \mathsf{Tm}^{\bullet}\ A^{\bullet}\ a \simeq (\mmod{Y^{\ast}} \to \mathsf{Tm}^{\dagger}\ (Y^{\mathsf{Ty}}\ A^{\bullet}\ \mmod{Y^{\ast}})\ (\lambda \mmod{G^{\ast}} \mapsto (a\ \mmod{F^{\ast}})\smkey{Y^{\ast}G^{\ast}}{F^{\ast}})) \\
& Y^{\mathsf{Tm}} && \triangleq{ } &&
\lambda A^{\bullet}\ a\ a^{\bullet}\ \mmod{Y^{\ast}Y_{\ast}} \mapsto a^{\bullet}\smkey{Y^{\ast}Y_{\ast}}{\bullet} \\
& Y^{\mathsf{Tm},-1} && \triangleq{ } &&
\lambda A^{\bullet}\ a\ a^{\dagger}\ \mapsto (a^{\dagger}\ \mmod{Y^{\ast}Y_{\ast}})\smkey{\bullet}{Y^{\ast}Y_{\ast}}
\end{alignat*}
More generally, for every (global) telescope $X$ of $\mathcal{S}$, we have the following isomorphisms
\begin{alignat*}{3}
& Y^{[X]\mathsf{Ty}} && :{ } &&
(A : \mmod{F^{\ast}} \to X \to \mathsf{Ty}^{\mathcal{S}}) \\
&&&&& \to (\forall x\ (x^{\bullet} : X^{\bullet}\ x) \to \mathsf{Ty}^{\bullet}\ (A \circledast x)) \\
&&&&& \simeq (\forall \mmod{Y^{\ast}}\ x\ (x^{\dagger} : X^{\dagger}\ x) \to \mathsf{Ty}^{\dagger}\ (\lambda \mmod{G^{\ast}} \mapsto (A\ \mmod{F^{\ast}})\smkey{Y^{\ast}G^{\ast}}{F^{\ast}})\ (x\ \mmod{G^{\ast}})) \\
& Y^{[X]\mathsf{Ty}} && \triangleq{ } &&
\lambda A\ A^{\bullet}\ \mmod{Y^{\ast}}\ x\ x^{\dagger}\ \mmod{Y_{\ast}} \mapsto A^{\bullet}\smkey{Y^{\ast}Y_{\ast}}{\bullet}\ (Y^{X,-1}\ (\lambda \mmod{Y^{\ast}} \mapsto x^{\dagger}\smkey{Y_{\ast}Y^{\ast}}{\bullet})) \\
& Y^{[X]\mathsf{Ty},-1} && \triangleq{ } &&
\lambda A\ A^{\dagger}\ x^{\bullet} \mapsto (A^{\dagger}\ \mmod{Y^{\ast}}\ (Y^{X}\ x^{\bullet}\ \mmod{Y^{\ast}})\ \mmod{Y_{\ast}})\smkey{\bullet}{Y^{\ast}Y_{\ast}} \\
& Y^{[X]\mathsf{Tm}} && :{ } &&
\forall (A : \mmod{F^{\ast}} \to X \to \mathsf{Ty}^{\mathcal{S}})\ (A^{\bullet} : \forall x\ (x^{\bullet} : X^{\bullet}\ x) \to \mathsf{Ty}^{\bullet}\ (A \circledast x)) \\
&&&&& \phantom{\forall{ }} (a : \mmod{F^{\ast}} \to (x : X) \to \mathsf{Tm}^{\mathcal{S}}\ (A\ \mmod{F^{\ast}}\ x)) \\
&&&&& \to (\forall x\ (x^{\bullet} : X^{\bullet}\ x) \to \mathsf{Tm}^{\bullet}\ (A^{\bullet}\ x^{\bullet})\ (a \circledast x)) \\
&&&&& \simeq (\forall \mmod{Y^{\ast}}\ x\ (x^{\dagger} : X^{\dagger}\ x) \to \\
&&&&& \hphantom{\simeq (} \mathsf{Tm}^{\dagger}\ (Y^{[X]\mathsf{Ty}}\ A^{\bullet}\ \mmod{Y^{\ast}}\ x^{\dagger})\ (\lambda \mmod{G^{\ast}} \mapsto (a\ \mmod{F^{\ast}})\smkey{Y^{\ast}G^{\ast}}{F^{\ast}})\ (x\ \mmod{G^{\ast}})) \\
& Y^{[X]\mathsf{Tm}} && \triangleq{ } &&
\lambda a\ a^{\bullet}\ \mmod{Y^{\ast}}\ x\ x^{\dagger}\ \mmod{Y_{\ast}} \mapsto a^{\bullet}\smkey{Y^{\ast}Y_{\ast}}{\bullet}\ (Y^{X,-1}\ (\lambda \mmod{Y^{\ast}} \mapsto x^{\dagger}\smkey{Y_{\ast}Y^{\ast}}{\bullet})) \\
& Y^{[X]\mathsf{Tm},-1} && \triangleq{ } &&
\lambda a\ a^{\dagger}\ x^{\bullet} \mapsto (a^{\dagger}\ \mmod{Y^{\ast}}\ (Y^{X}\ x^{\bullet}\ \mmod{Y^{\ast}})\ \mmod{Y_{\ast}})\smkey{\bullet}{Y^{\ast}Y_{\ast}}
\end{alignat*}
where $X^{\bullet}$, $X^{\dagger}$ and $Y^{X}$ can be defined by induction on $X$.
In particular we obtain bijective actions of $Y$ on every derived sort of the theory.
It remains to define the type-theoretic operations of $\mathcal{S}^{\dagger}$.
Each operation of $\mathcal{S}^{\dagger}$ should be derived from the corresponding operation of $\mathcal{S}^{\bullet}$.
We want all of the displayed operations to be preserved by $Y^{\mathsf{Ty}}$ and $Y^{\mathsf{Tm}}$.
In the case of the $\Pi$ type former, this translates to the following equation, internally to $\mathbf{Psh}^{\mathcal{C}}$.
\begin{alignat*}{3}
& Y^{\mathsf{Ty}}\ (\Pi^{\bullet}\ A^{\bullet}\ B^{\bullet})\ \mmod{Y^{\ast}} && ={ } && \Pi^{\dagger}\ (Y^{\mathsf{Ty}}\ A^{\bullet}\ \mmod{Y^{\ast}})\ (Y^{[\mathsf{Tm}]\mathsf{Ty}}\ B^{\bullet}\ \mmod{Y^{\ast}})
\end{alignat*}
To define $\Pi^{\dagger}$, we essentially have to solve this equation.
We look for a candidate with the following shape (where $\bm{A^{\ddagger}}$, $\bm{B^{\ddagger}}$, \etc, are still to be determined).
\begin{alignat*}{3}
& \Pi^{\dagger} && ={ } && \lambda A^{\dagger}\ B^{\dagger}\ \mmod{Y_{\ast}} \mapsto \Pi^{\bullet}\ \bm{A^{\ddagger}}(A^{\dagger}, \mmod{Y_{\ast}})\ \bm{B^{\ddagger}}(B^{\dagger}, \mmod{Y_{\ast}})
\end{alignat*}
The following equation should then hold internally to $\mathbf{Psh}^{\mathcal{P}}$.
\begin{alignat*}{3}
& \lambda \mmod{Y_{\ast}} \mapsto \Pi^{\bullet}\ A^{\bullet}\ B^{\bullet} && ={ } && \lambda \mmod{Y_{\ast}} \mapsto \Pi^{\bullet}\ \bm{A^{\ddagger}}(Y^{\mathsf{Ty}}\ A^{\bullet}\ \mmod{Y^{\ast}}, \mmod{Y_{\ast}})\ \bm{B^{\ddagger}}(Y^{[\mathsf{Tm}]\mathsf{Ty}}\ B^{\bullet}\ \mmod{Y^{\ast}}, \mmod{Y_{\ast}})
\end{alignat*}
We use the following definitions for $\bm{A^{\ddagger}}$ and $\bm{B^{\ddagger}}$.
\begin{alignat*}{3}
& \bm{A^{\ddagger}}(A^{\dagger}, \mmod{Y_{\ast}}) && \triangleq{ } && A^{\dagger}\ \mmod{Y_{\ast}} \\
& \bm{B^{\ddagger}}(B^{\dagger}, \mmod{Y_{\ast}}) && \triangleq{ } &&
\lambda a^{\bullet} \mapsto (B^{\dagger}\smkey{Y_{\ast}Y^{\ast}}{\bullet}\ (Y^{\mathsf{Tm}}\ a^{\bullet}\ \mmod{Y^{\ast}})\ \mmod{Y_{\ast}})\smkey{\bullet}{Y^{\ast}Y_{\ast}}
\end{alignat*}
Thus we obtain
\begin{alignat*}{3}
& \Pi^{\dagger} && ={ } && \lambda A^{\dagger}\ B^{\dagger}\ \mmod{Y_{\ast}} \mapsto \Pi^{\bullet}\ (A^{\dagger}\ \mmod{Y_{\ast}})\ (\lambda a^{\bullet} \mapsto (B^{\dagger}\smkey{Y_{\ast}Y^{\ast}}{\bullet}\ (Y^{\mathsf{Tm}}\ a^{\bullet}\ \mmod{Y^{\ast}})\ \mmod{Y_{\ast}})\smkey{\bullet}{Y^{\ast}Y_{\ast}})
\end{alignat*}
In general, we have to solve equations of the form
\[ \bm{y^{\ddagger}}(Y^{[X]\mathsf{Ty}}\ y^{\bullet}\ \mmod{Y^{\ast}}, \mmod{Y_{\ast}}) = y^{\bullet} \]
or
\[ \bm{y^{\ddagger}}(Y^{[X]\mathsf{Tm}}\ y^{\bullet}\ \mmod{Y^{\ast}}, \mmod{Y_{\ast}}) = y^{\bullet}. \]
They admit solutions with the following shape:
\[ \bm{y^{\ddagger}}(y^{\dagger}, \mmod{Y_{\ast}}) \triangleq \lambda x^{\bullet} \mapsto (y^{\dagger}\smkey{Y_{\ast}Y^{\ast}}{\bullet}\ (Y^{X}\ x^{\bullet}\ \mmod{Y^{\ast}})\ \mmod{Y_{\ast}})\smkey{\bullet}{Y^{\ast}Y_{\ast}}. \]
This provides a definition of all displayed type-theoretic operations of $\mathcal{S}^{\dagger}$.
\begin{alignat*}{3}
& \Pi^{\dagger}\ A^{\dagger}\ B^{\dagger} && ={ } && \lambda \mmod{Y_{\ast}} \mapsto \Pi^{\bullet}\ (A^{\dagger}\ \mmod{Y_{\ast}})\ (\lambda a^{\bullet} \mapsto (B^{\dagger}\smkey{Y_{\ast}Y^{\ast}}{\bullet}\ (Y^{\mathsf{Tm}}\ a^{\bullet}\ \mmod{Y^{\ast}})\ \mmod{Y_{\ast}})\smkey{\bullet}{Y^{\ast}Y_{\ast}}) \\
& \mathsf{app}^{\dagger}\ f^{\dagger}\ a^{\dagger} && ={ } && \lambda \mmod{Y_{\ast}} \mapsto \mathsf{app}^{\bullet}\ (f^{\dagger}\ \mmod{Y_{\ast}})\ (a^{\dagger}\ \mmod{Y_{\ast}}) \\
& \mathsf{lam}^{\dagger}\ b^{\dagger} && ={ } && \lambda \mmod{Y_{\ast}} \mapsto \mathsf{lam}^{\bullet}\ (\lambda a^{\bullet} \mapsto (b^{\dagger}\smkey{Y_{\ast}Y^{\ast}}{\bullet}\ (Y^{\mathsf{Tm}}\ a^{\bullet}\ \mmod{Y^{\ast}})\ \mmod{Y_{\ast}})\smkey{\bullet}{Y^{\ast}Y_{\ast}}) \\
&&&&& \dots
\end{alignat*}
These operations satisfy the equations that hold in $\mathcal{S}^{\bullet}$. \end{proof}
\subsection{Displayed inserters}
We fix two parallel displayed functors $K, L : \mathcal{C} \to \mathcal{D}$ over a base category $\mathcal{S}$.
\begin{mathpar}
\begin{tikzcd}
\mathcal{C} \ar[rd, "F"', -{Triangle[open]}] \ar[rr, "K", shift left] \ar[rr, "L"', shift right] && \mathcal{D} \ar[dl, "G", -{Triangle[open]}] \\
&\mathcal{S}&
\end{tikzcd} \end{mathpar}
\begin{definition}
The \defemph{displayed inserter} of $K$ and $L$ is the displayed category $I : \mathcal{I} \rightarrowtriangle \mathcal{C}$ over $\mathcal{C}$ defined as follows.
\begin{itemize}
\item An object of $\mathcal{I}$ displayed over an object $x$ of $\mathcal{C}$ is a displayed morphism
\[ \iota^{x} : \mathcal{D}(K\ x \to_{\mathsf{id}_{F\ x}} L\ x). \]
\item A morphism of $\mathcal{I}$ from $\iota^{x}$ to $\iota^{y}$ displayed over $f : \mathcal{C}(x \to y)$ is a proof of the commutation of the square
\begin{mathpar}
\begin{tikzcd}
K\ x \ar[r, "\iota^{x}"] \ar[d, "K\ f"'] & L\ x \ar[d, "L\ f"] \\
K\ y \ar[r, "\iota^{y}"] & L\ y
\end{tikzcd}
\end{mathpar}
\end{itemize}
There is a natural transformation $\iota : (I \cdot K) \Rightarrow (I \cdot L)$ formed by the morphisms $\iota^{x}$.
The category $\mathcal{I}$ satisfies the following universal property: for every category $\mathcal{A}$ along with a functor $A : \mathcal{A} \to \mathcal{C}$ and a natural transformation $\gamma : (A \cdot K) \Rightarrow (A \cdot L)$, there exists a unique functor $B : \mathcal{A} \to \mathcal{I}$ such that $A = (B \cdot I)$ and $\gamma = (B \cdot \iota)$.
\lipicsEnd{} \end{definition}
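For intuition, the non-displayed special case of this definition can be transcribed as the following minimal Lean sketch, with a hand-rolled notion of category and functor (identity and associativity laws omitted) and with the displayed structure over the base $\mathcal{S}$ ignored; all names in the sketch are illustrative.
\begin{verbatim}
structure Cat where
  Obj  : Type
  Hom  : Obj -> Obj -> Type
  id   : (x : Obj) -> Hom x x
  comp : {x y z : Obj} -> Hom x y -> Hom y z -> Hom x z  -- diagrammatic order

structure Func (C D : Cat) where
  obj : C.Obj -> D.Obj
  map : {x y : C.Obj} -> C.Hom x y -> D.Hom (obj x) (obj y)

-- An object of the inserter of K and L: an object x of C together with a
-- morphism iota : K x -> L x.
structure InserterObj {C D : Cat} (K L : Func C D) where
  x    : C.Obj
  iota : D.Hom (K.obj x) (L.obj x)

-- A morphism of the inserter over f : C(a.x -> b.x): a proof that the square
-- K f ; iota_b = iota_a ; L f commutes.
structure InserterHom {C D : Cat} (K L : Func C D) (a b : InserterObj K L) where
  f    : C.Hom a.x b.x
  comm : D.comp (K.map f) b.iota = D.comp a.iota (L.map f)
\end{verbatim}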
\begin{proposition}[Internally to $\mathbf{Psh}^{\mathcal{I}}$]\label{prop:inserter_rep}
Assume given locally representable presheaves
\begin{alignat*}{3}
& A^{\mathcal{S}} && :{ } && \mmod{I^{\ast}F^{\ast}} \to \mathrm{RepPsh}^{\mathcal{S}} \\
& A^{\mathcal{C}} && :{ } && \mmod{I^{\ast}} \to \mathrm{RepPsh}^{\mathcal{C}} \\
& A^{\mathcal{D}} && :{ } && \mmod{I^{\ast}L^{\ast}} \to \mathrm{RepPsh}^{\mathcal{D}}
\end{alignat*}
along with actions of $K$, $L$ and $G$ on these presheaves
\begin{alignat*}{3}
& K^{A} && :{ } && \mmod{I^{\ast}} \to A^{\mathcal{C}}\ \mmod{I^{\ast}} \to \mmod{K^{\ast}} \to (A^{\mathcal{D}}\ \mmod{I^{\ast}L^{\ast}})\emkey{\iota}{I^{\ast}K^{\ast}}{I^{\ast}L^{\ast}} \\
& L^{A} && :{ } && \mmod{I^{\ast}} \to A^{\mathcal{C}}\ \mmod{I^{\ast}} \to \mmod{L^{\ast}} \to A^{\mathcal{D}}\ \mmod{I^{\ast}L^{\ast}} \\
& G^{A} && :{ } && \mmod{I^{\ast}L^{\ast}} \to A^{\mathcal{D}}\ \mmod{I^{\ast}L^{\ast}} \to \mmod{G^{\ast}} \to (A^{\mathcal{S}}\ \mmod{I^{\ast}F^{\ast}})\smkey{L^{\ast}G^{\ast}}{F^{\ast}}
\end{alignat*}
such that $L^{A}$ and $G^{A}$ preserve context extensions and the two composed actions of $F$ on $A$ coincide, \ie the equality
\begin{alignat*}{1}
& (G^{A}\ \mmod{I^{\ast}L^{\ast}}\ (L^{A}\ \mmod{I^{\ast}}\ a\ \mmod{L^{\ast}})\ \mmod{G^{\ast}})\smkey{F^{\ast}}{L^{\ast}G^{\ast}} \\
& \quad { }= ((G^{A}\ \mmod{I^{\ast}L^{\ast}})\emkey{\iota}{I^{\ast}K^{\ast}}{I^{\ast}L^{\ast}}\ (K^{A}\ \mmod{I^{\ast}}\ a\ \mmod{K^{\ast}})\ \mmod{G^{\ast}})\smkey{F^{\ast}}{K^{\ast}G^{\ast}}
\end{alignat*}
holds over the context $(\mmod{I^{\ast}}, a : A^{\mathcal{C}}\ \mmod{I^{\ast}}, \mmod{F^{\ast}})$.
Then the presheaf
\[ A^{\mathcal{I}} \triangleq \mmod{I^{\ast}} \to \{ a : A^{\mathcal{C}}\ \mmod{I^{\ast}} \mid \mmod{K^{\ast}} \to (K^{A}\ \mmod{I^{\ast}}\ a\ \mmod{K^{\ast}}) = (L^{A}\ \mmod{I^{\ast}}\ a\ \mmod{L^{\ast}})\emkey{\iota}{I^{\ast}K^{\ast}}{I^{\ast}L^{\ast}} \}\]
is locally representable and the first projection map
\[ I^{A} : A^{\mathcal{I}} \to \mmod{I^{\ast}} \to A^{\mathcal{C}}\ \mmod{I^{\ast}} \]
preserves context extensions. \end{proposition} \begin{proof}
We translate the statement externally.
Fix an object $(x,\iota^{x})$ of $\mathcal{I}$.
We have locally representable dependent presheaves
\begin{alignat*}{3}
& A^{\mathcal{S}} && :{ } && \forall (y : \mathcal{S}^{\mathsf{op}}) (\rho : \mathcal{S}(y \to F\ x)) \to \mathbf{Set} \\
& A^{\mathcal{C}} && :{ } && \forall (y : \mathcal{C}^{\mathsf{op}}) (\rho : \mathcal{C}(y \to x)) \to \mathbf{Set} \\
& A^{\mathcal{D}} && :{ } && \forall (y : \mathcal{D}^{\mathsf{op}}) (\rho : \mathcal{D}(y \to L\ x)) \to \mathbf{Set}
\end{alignat*}
and dependent natural transformations
\begin{alignat*}{3}
& K^{A} && :{ } && \forall (y : \mathcal{C}^{\mathsf{op}}) (\rho : \mathcal{C}(y \to x)) \to A^{\mathcal{C}}\ \rho \to A^{\mathcal{D}}\ (K\ \rho \cdot \iota^{x}) \\
& L^{A} && :{ } && \forall (y : \mathcal{C}^{\mathsf{op}}) (\rho : \mathcal{C}(y \to x)) \to A^{\mathcal{C}}\ \rho \to A^{\mathcal{D}}\ (L\ \rho) \\
& G^{A} && :{ } && \forall (y : \mathcal{D}^{\mathsf{op}}) (\rho : \mathcal{D}(y \to L\ x)) \to A^{\mathcal{D}}\ \rho \to A^{\mathcal{S}}\ (G\ \rho)
\end{alignat*}
such that $L^{A}$ and $G^{A}$ preserve context extensions and such that for every $\rho : \mathcal{C}(y \to x)$ and $a : A^{\mathcal{C}}\ \rho$,
we have $G^{A}\ (L\ \rho)\ (L^{A}\ \rho\ a) = G^{A}\ (K\ \rho \cdot \iota^{x})\ (K^{A}\ \rho\ a)$.
We have to show that the dependent presheaf
\begin{alignat*}{3}
& A^{\mathcal{I}} && :{ } && \forall ((y,\iota^{y}) : \mathcal{I}^{\mathsf{op}}) (\rho : \mathcal{I}((y,\iota^{y}) \to (x,\iota^{x}))) \to \mathbf{Set} \\
& A^{\mathcal{I}}\ (y,\iota^{y})\ \rho && \triangleq{ } && \{ a : A^{\mathcal{C}}\ \rho \mid K^{A}\ \rho\ a = (L^{A}\ \rho\ a)[\iota^{y}] \}
\end{alignat*}
is locally representable.
Fix some object $(y,\iota^{y})$ of $\mathcal{I}$ along with $\rho : \mathcal{I}((y,\iota^{y}) \to (x,\iota^{x}))$.
Recall that $\rho$ is a morphism $\rho : \mathcal{C}(y \to x)$ such that $K\ \rho \cdot \iota^{x} = \iota^{y} \cdot L\ \rho$.
We have to show that the presheaf
\begin{alignat*}{3}
& A^{\mathcal{I}}_{\mid (y,\iota^{y})} && :{ } && \forall (z : \mathcal{I}^{\mathsf{op}}) (\sigma : \mathcal{I}(z \to (y,\iota^{y}))) \to \mathbf{Set} \\
& A^{\mathcal{I}}_{\mid (y,\iota^{y})}\ \sigma && \triangleq{ } && A^{\mathcal{I}}\ (\sigma \cdot \rho)
\end{alignat*}
is representable.
We know that $A^{\mathcal{C}}_{\mid y}$ and $A^{\mathcal{D}}_{\mid L\ y}$ are representable and that $L^{A}$ preserves context extensions.
Thus we have some representing object $\bm{p} : \mathcal{C}(y^{\rhd} \to y)$ of $A^{\mathcal{C}}_{\mid y}$, and we know that $L\ \bm{p} : \mathcal{D}(L\ y^{\rhd} \to L\ y)$ is a representing object of $A^{\mathcal{D}}_{\mid L\ y}$.
We have a generic element $\bm{q} : A^{\mathcal{C}}\ (\bm{p} \cdot \rho)$ for $A^{\mathcal{C}}_{\mid y}$, and $L^{A}\ (\bm{p} \cdot \rho)\ \bm{q} : A^{\mathcal{D}}\ (L\ (\bm{p} \cdot \rho))$ is a generic element for $A^{\mathcal{D}}_{\mid L\ y}$.
We construct a morphism $\iota^{y^{\rhd}} : \mathcal{D}(K\ y^{\rhd} \to L\ y^{\rhd})$ such that the square
\begin{mathpar}
\begin{tikzcd}
K\ y^{\rhd} \ar[d, "K\ \bm{p}"'] \ar[r, "\iota^{y^{\rhd}}"] & L\ y^{\rhd} \ar[d, "L\ \bm{p}"] \\
K\ y \ar[r, "\iota^{y}"] & L\ y
\end{tikzcd}
\end{mathpar}
commutes and such that $G\ \iota^{y^{\rhd}} = \mathsf{id}_{F\ y^{\rhd}}$.
By the universal property of $L\ y^{\rhd}$, we define $\iota^{y^{\rhd}}$ as the extension of $K\ \bm{p} \cdot \iota^{y}$ by the element $K^{A}\ (\bm{p} \cdot \rho)\ \bm{q} : A^{\mathcal{D}}\ (K\ \bm{p} \cdot \iota^{y} \cdot L\ \rho)$.
\[ \iota^{y^{\rhd}} \triangleq \angles{K\ \bm{p} \cdot \iota^{y}, K^{A}\ (\bm{p} \cdot \rho)\ \bm{q}}. \]
We can then compute
\begin{alignat*}{3}
& G\ \iota^{y^{\rhd}}
&& ={ } && G\ \angles{K\ \bm{p} \cdot \iota^{y}, K^{A}\ (\bm{p} \cdot \rho)\ \bm{q}} \\
&&& ={ } && \angles{G\ (K\ \bm{p}) \cdot G\ \iota^{y}, G^{A}\ (K\ (\bm{p} \cdot \rho) \cdot \iota^{x})\ (K^{A}\ (\bm{p} \cdot \rho)\ \bm{q})} \\
&&& ={ } && \angles{G\ (L\ \bm{p}), G^{A}\ (L\ (\bm{p} \cdot \rho))\ (L^{A}\ (\bm{p} \cdot \rho)\ \bm{q})} \\
&&& ={ } && G\ \angles{L\ \bm{p}, L^{A}\ (\bm{p} \cdot \rho)\ \bm{q}} \\
&&& ={ } && G\ (L\ \angles{\bm{p},\bm{q}}) \\
&&& ={ } && \mathsf{id}
\end{alignat*}
This defines an object $(y^{\rhd}, \iota^{y^{\rhd}})$ of $\mathcal{I}$, equipped with a projection $\bm{p}$ into $(y,\iota^{y})$.
It remains to show that this object represents $A^{\mathcal{I}}_{\mid (y,\iota^{y})}$.
Fix an object $(z,\iota^{z})$ of $\mathcal{I}$ along with a morphism $\sigma : \mathcal{I}((z,\iota^{z}) \to (y,\iota^{y}))$.
We know that factorizations of $\sigma : \mathcal{C}(z \to y)$ through $\bm{p} : \mathcal{C}(y^{\rhd} \to y)$ are in natural bijection with elements of $A^{\mathcal{C}}\ (\sigma \cdot \rho)$.
We extend this bijection to $\mathcal{I}$.
Because a displayed morphism of $\mathcal{I}$ over a morphism of $\mathcal{C}$ only consists of propositional data, we only need to construct a logical equivalence at the level of $A^{\mathcal{I}}$.
Take an element $a : A^{\mathcal{I}}\ (\sigma \cdot \rho)$.
By the universal property of $y^{\rhd}$, we have an extended morphism $\angles{\sigma, a} : \mathcal{C}(z \to y^{\rhd})$.
Let us show that the square
\begin{mathpar}
\begin{tikzcd}
K\ z \ar[r, "\iota^{z}"] \ar[d, "K\ \angles{\sigma,a}"'] & L\ z \ar[d, "L\ \angles{\sigma,a}"] \\
K\ y^{\rhd} \ar[r, "\iota^{y^{\rhd}}"] & L\ y^{\rhd}
\end{tikzcd}
\end{mathpar}
commutes.
By the universal property of $L\ y^{\rhd}$, both $K\ \angles{\sigma,a} \cdot \iota^{y^{\rhd}}$ and $\iota^{z} \cdot L\ \angles{\sigma,a}$ can be written as the extension of some morphism in $\mathcal{D}(K\ z \to L\ y)$ by some element of $A^{\mathcal{D}}$.
We compute
\begin{alignat*}{3}
& K\ \angles{\sigma,a} \cdot \iota^{y^{\rhd}} && ={ } && K\ \angles{\sigma,a} \cdot \angles{K\ \bm{p} \cdot \iota^{y}, K^{A}\ (\bm{p} \cdot \rho)\ \bm{q}} \\
&&& ={ } && \angles{K\ (\angles{\sigma,a} \cdot \bm{p}) \cdot \iota^{y}, (K^{A}\ (\bm{p} \cdot \rho)\ \bm{q})[K\ \angles{\sigma,a}]} \\
&&& ={ } && \angles{K\ \sigma \cdot \iota^{y}, (K^{A}\ (\bm{p} \cdot \rho)\ \bm{q})[K\ \angles{\sigma,a}]} \\
&&& ={ } && \angles{K\ \sigma \cdot \iota^{y}, K^{A}\ (\sigma \cdot \rho)\ a}
\end{alignat*}
and
\begin{alignat*}{3}
& \iota^{z} \cdot L\ \angles{\sigma,a} && ={ } && \iota^{z} \cdot \angles{L\ \sigma, L^{A}\ (\sigma \cdot \rho)\ a} \\
&&& ={ } && \angles{\iota^{z} \cdot L\ \sigma, (L^{A}\ (\sigma \cdot \rho)\ a)[\iota^{z}]}
\end{alignat*}
Now $K\ \sigma \cdot \iota^{y} = \iota^{z} \cdot L\ \sigma$ because $\sigma$ is a morphism of $\mathcal{I}$, and $K^{A}\ (\sigma \cdot \rho)\ a = (L^{A}\ (\sigma \cdot \rho)\ a)[\iota^{z}]$ because $a$ is an element of $A^{\mathcal{I}}$.
Thus we have $K\ \angles{\sigma,a} \cdot \iota^{y^{\rhd}} = \iota^{z} \cdot L\ \angles{\sigma,a}$, as needed.
This concludes the proof that $A^{\mathcal{I}}_{\mid (y,\iota^{y})}$ is representable.
The dependent presheaf $A^{\mathcal{I}}$ is thus locally representable.
Finally, we also have to check that the representing objects that we have constructed are natural in $(x,\iota^{x})$.
This follows from the fact that the representing objects of $A^{\mathcal{C}}$ are natural in $x$. \end{proof}
We now return to the setting of our induction principles. We fix a base model $\mathcal{S}$ of $\MT_{\Pi,\mathbf{B}}$, a functor $F : \mathcal{C} \to \mathcal{S}$ equipped with a displayed model without context extensions $\mathcal{S}^{\bullet}$, a factorization $(\mathcal{C} \xrightarrow{Y} \mathcal{P} \xrightarrow{G} \mathcal{S}, \mathcal{S}^{\dagger})$ and a section $S_{0}$ of $\mathcal{S}^{\dagger}$. We let $\mathcal{I}(\mathcal{S}^{\bullet})$ be the displayed inserter of $Y$ and $F \cdot S_{0}$.
The following diagram describes the categories and functors in play. \begin{mathpar}
\begin{tikzcd}
\mathcal{I}(\mathcal{S}^{\bullet}) \ar[d, "I", -{Triangle[open]}] && \\
\mathcal{C} \ar[rr, "Y"] \ar[dr, "F", -{Triangle[open]}] &&
\mathcal{P} \ar[dl, "G"', -{Triangle[open]}] \\
& \mathcal{S} \ar[ur, "S_{0}"', bend right] &
\end{tikzcd} \end{mathpar}
\begin{proposition}\label{prop:inserter_terminal}
If $\mathcal{C}$ has a terminal object that is preserved by $F : \mathcal{C} \to \mathcal{S}$, then $\mathcal{I}(\mathcal{S}^{\bullet})$ has a terminal object that is preserved by $I : \mathcal{I}(\mathcal{S}^{\bullet}) \to \mathcal{C}$. \end{proposition} \begin{proof}
The terminal object of $\mathcal{I}(\mathcal{S}^{\bullet})$ is $(\diamond,\iota^{\diamond})$, where $\diamond$ is the terminal object of $\mathcal{C}$ and $\iota^{\diamond} : \mathcal{P}(Y\ \diamond \to_{\mathsf{id}} S_{0}\ (F\ \diamond))$.
Since both $S_{0}$ and $F$ preserve terminal objects, there is a unique such $\iota^{\diamond}$.
This also implies that $(\diamond,\iota^{\diamond})$ is terminal. \end{proof}
We can also specialize \cref{prop:inserter_rep} to this situation. \begin{proposition}[Internally to $\mathbf{Psh}^{\mathcal{I}(S^{\bullet})}$]\label{prop:inserter_rep_cor}
Assume given the following data:
\begin{itemize}
\item A locally representable presheaf $A^{\mathcal{S}} : \mmod{I^{\ast}F^{\ast}} \to \mathrm{RepPsh}^{\mathcal{S}}$.
\item A dependent presheaf $A^{\bullet} : \mmod{I^{\ast}} \to (\mmod{F^{\ast}} \to A^{\mathcal{S}}\ \mmod{I^{\ast}F^{\ast}}) \to \mathrm{Psh}^{\mathcal{C}}$.
\item A dependent presheaf $A^{\dagger} : \mmod{I^{\ast}F^{\ast}S_{0}^{\ast}} \to (\mmod{G^{\ast}} \to (A^{\mathcal{S}}\ \mmod{I^{\ast}F^{\ast}})\smkey{S_{0}^{\ast}G^{\ast}}{\bullet}) \to \mathrm{Psh}^{\mathcal{P}}$
such that the first projection map
\begin{alignat*}{1}
& (a : \mmod{G^{\ast}} \to (A^{\mathcal{S}}\ \mmod{I^{\ast}F^{\ast}})\smkey{S_{0}^{\ast}G^{\ast}}{\bullet}) \times
(a^{\dagger} : A^{\dagger}\ \mmod{I^{\ast}F^{\ast}S_{0}^{\ast}}\ a) \\
& \quad { }\xrightarrow{\lambda (a,a^{\dagger}) \mapsto a}
(\mmod{G^{\ast}} \to (A^{\mathcal{S}}\ \mmod{I^{\ast}F^{\ast}})\smkey{S_{0}^{\ast}G^{\ast}}{\bullet})
\end{alignat*}
has a locally representable domain and preserves context extensions.
\item A bijective action
\begin{alignat*}{3}
& Y^{A} && :{ } && \mmod{I^{\ast}}\ \{a : \mmod{F^{\ast}} \to A^{\mathcal{S}}\ \mmod{I^{\ast}F^{\ast}}\} \to \\
&&&&& A^{\bullet}\ \mmod{I^{\ast}}\ a \simeq (\mmod{Y^{\ast}} \to (A^{\dagger}\ \mmod{I^{\ast}F^{\ast}S_{0}^{\ast}})\emkey{\iota}{I^{\ast}Y^{\ast}}{I^{\ast}F^{\ast}S_{0}^{\ast}}\ (\lambda \mmod{G^{\ast}} \mapsto (a\ \mmod{F^{\ast}})\smkey{F^{\ast}}{Y^{\ast}G^{\ast}})).
\end{alignat*}
\item An action
\[ S^{A} : \mmod{I^{\ast}F^{\ast}}\ (a : A^{\mathcal{S}}\ \mmod{I^{\ast}F^{\ast}})\ \mmod{S_{0}^{\ast}} \to A^{\dagger}\ \mmod{I^{\ast}F^{\ast}S_{0}^{\ast}}\ (\lambda \mmod{G^{\ast}} \mapsto a\smkey{S_{0}^{\ast}G^{\ast}}{\bullet}) \]
whose induced total action preserves context extensions.
\item A locally representable presheaf $A^{\mathcal{C}} : \mmod{I^{\ast}} \to \mathrm{RepPsh}^{\mathcal{C}}$.
\item An action $F^{A} : \mmod{I^{\ast}} \to A^{\mathcal{C}}\ \mmod{I^{\ast}} \to \mmod{F^{\ast}} \to A^{\mathcal{S}}\ \mmod{I^{\ast}F^{\ast}}$ that preserves context extensions.
\item A map $f : \mmod{I^{\ast}} \to (a : A^{\mathcal{C}}\ \mmod{I^{\ast}}) \to A^{\bullet}\ \mmod{I^{\ast}}\ (F^{A}\ \mmod{I^{\ast}}\ a)$.
\end{itemize}
We pose
\begin{alignat*}{3}
& S_{\iota}^{A} && :{ } && \mmod{I^{\ast}}\ (a : \mmod{F^{\ast}} \to A^{\mathcal{S}}\ \mmod{I^{\ast}F^{\ast}}) \to A^{\bullet}\ \mmod{I^{\ast}}\ a \\
& S_{\iota}^{A}\ \mmod{I^{\ast}}\ a && \triangleq{ } && Y^{A,-1}\ (\lambda \mmod{Y^{\ast}} \mapsto (S^{A}\ (a\ \mmod{F^{\ast}})\ \mmod{S_{0}^{\ast}})\emkey{\iota}{I^{\ast}Y^{\ast}}{I^{\ast}F^{\ast}S_{0}^{\ast}})
\end{alignat*}
Then the presheaf
\begin{alignat*}{3}
& A^{\mathcal{I}(S^{\bullet})} && \triangleq{ } && \mmod{I^{\ast}} \to \{ a : A^{\mathcal{C}}\ \mmod{I^{\ast}} \\
&&&&& \phantom{\mmod{I^{\ast}} \to { }} \mid S_{\iota}^{A}\ \mmod{I^{\ast}}\ (\lambda \mmod{F^{\ast}} \mapsto F^{A}\ \mmod{I^{\ast}}\ a\ \mmod{F^{\ast}}) = f\ \mmod{I^{\ast}}\ a \}
\end{alignat*}
is locally representable and the first projection map
\[ A^{\mathcal{I}(S^{\bullet})} \to \mmod{I^{\ast}} \to A^{\mathcal{C}}\ \mmod{I^{\ast}} \]
preserves context extensions. \end{proposition} \begin{proof}
This follows from \cref{prop:inserter_rep}, applied to the following presheaves
\begin{alignat*}{3}
& B^{\mathcal{S}}\ \mmod{I^{\ast}F^{\ast}}
&& \triangleq{ } && A^{\mathcal{S}}\ \mmod{I^{\ast}F^{\ast}} \\
& B^{\mathcal{C}}\ \mmod{I^{\ast}}
&& \triangleq{ } && (a : \mmod{F^{\ast}} \to A^{\mathcal{S}}\ \mmod{I^{\ast}F^{\ast}}) \times (A^{\bullet}\ \mmod{I^{\ast}}\ a) \\
& B^{\mathcal{P}}\ \mmod{I^{\ast}F^{\ast}S_{0}^{\ast}}
&& \triangleq{ } && (a : \mmod{G^{\ast}} \to (A^{\mathcal{S}}\ \mmod{I^{\ast}F^{\ast}})\smkey{S_{0}^{\ast}G^{\ast}}{\bullet}) \times (A^{\dagger}\ \mmod{I^{\ast}F^{\ast}S_{0}^{\ast}}\ a)
\end{alignat*}
and to the following actions
\begin{alignat*}{1}
& Y^{A}\ \mmod{I^{\ast}}\ a\ \mmod{Y^{\ast}} \\
& \quad \triangleq (\lambda \mmod{G^{\ast}} \mapsto (F^{A}\ \mmod{I^{\ast}}\ a\ \mmod{F^{\ast}})\smkey{F^{\ast}}{Y^{\ast}G^{\ast}}, Y^{A}\ \mmod{I^{\ast}}\ (f\ \mmod{I^{\ast}}\ a)\ \mmod{Y^{\ast}}) \\
& {(F \cdot S)}^{A}\ \mmod{I^{\ast}}\ a\ \mmod{F^{\ast}S_{0}^{\ast}} \\
& \quad \triangleq (\lambda \mmod{G^{\ast}} \mapsto (F^{A}\ \mmod{I^{\ast}}\ a\ \mmod{F^{\ast}})\smkey{S_{0}^{\ast}G^{\ast}}{\bullet}, S^{A}\ \mmod{I^{\ast}F^{\ast}}\ (F^{A}\ \mmod{I^{\ast}}\ a\ \mmod{F^{\ast}})\ \mmod{S_{0}^{\ast}}) \\
& G^{A}\ \mmod{I^{\ast}F^{\ast}S_{0}^{\ast}}\ (a, -)\ \mmod{G^{\ast}} \triangleq a\ \mmod{G^{\ast}}
\tag*{\qedhere}
\end{alignat*} \end{proof}
\subsection{Relative sections}
From the point of view of $\mathcal{I}$, the relative section of $\mathcal{S}^{\bullet}$ already exists. That is, we have, for every telescope $X$, operations \begin{alignat*}{3}
& S_{\iota}^{[X]\mathsf{Ty}} && :{ } && \forall \mmod{I^{\ast}} (A : \mmod{F^{\ast}} \to X \to \mathsf{Ty})\ x\ (x^{\bullet} : X^{\bullet}\ x) \to \mathsf{Ty}^{\bullet}\ (A \circledast x) \\
& S_{\iota}^{[X]\mathsf{Ty}} && \triangleq{ } && \lambda \mmod{I^{\ast}}\ A\ x^{\bullet} \mapsto Y^{[X]\mathsf{Ty},-1}\ (\lambda \mmod{Y^{\ast}} \mapsto (S^{[X]\mathsf{Ty}}\ (A\ \mmod{F^{\ast}})\ \mmod{S_{0}^{\ast}})\emkey{\iota}{I^{\ast}Y^{\ast}}{I^{\ast}F^{\ast}S_{0}^{\ast}})\ x^{\bullet} \\
& S_{\iota}^{[X]\mathsf{Tm}} && :{ } && \forall \mmod{I^{\ast}}\ A\ (a : \forall \mmod{F^{\ast}}\ x \to \mathsf{Tm}\ (A\ \mmod{F^{\ast}}\ x))\ x\ (x^{\bullet} : X^{\bullet}\ x) \\
&&&&& \to \mathsf{Tm}^{\bullet}\ (S_{\iota}^{[X]\mathsf{Ty}}\ \mmod{I^{\ast}}\ A\ x\ x^{\bullet})\ (a \circledast x) \\
& S_{\iota}^{[X]\mathsf{Tm}} && \triangleq{ } && \lambda \mmod{I^{\ast}}\ a\ x^{\bullet} \mapsto Y^{[X]\mathsf{Tm},-1}\ (\lambda \mmod{Y^{\ast}} \mapsto (S^{[X]\mathsf{Tm}}\ (a\ \mmod{F^{\ast}})\ \mmod{S_{0}^{\ast}})\emkey{\iota}{I^{\ast}Y^{\ast}}{I^{\ast}F^{\ast}S_{0}^{\ast}})\ x^{\bullet}, \end{alignat*} where $X^{\bullet} : (\mmod{F^{\ast}} \to X\ \mmod{F^{\ast}}) \to \mathrm{Psh}^{\mathcal{C}}$ is defined by induction on the telescope $X$.
Given any section $\angles{\alpha}$ of $I : \mathcal{I} \to \mathcal{C}$, we can then define \begin{alignat*}{3}
& S_{\alpha}^{[X]\mathsf{Ty}} && :{ } && \forall (A : \mmod{F^{\ast}} \to X \to \mathsf{Ty})\ x\ (x^{\bullet} : X^{\bullet}\ x) \to \mathsf{Ty}^{\bullet}\ (A \circledast x) \\
& S_{\alpha}^{[X]\mathsf{Ty}} && \triangleq{ } && \lambda A\ x^{\bullet} \mapsto (S^{[X]\mathsf{Ty}}_{\iota}\mmod{I^{\ast}})\smkey{\bullet}{\angles{\alpha}^{\ast}I^{\ast}}\ A\ x^{\bullet} \\
& S_{\alpha}^{[X]\mathsf{Tm}} && :{ } && \forall A\ (a : \forall \mmod{F^{\ast}}\ x \to \mathsf{Tm}\ (A\ \mmod{F^{\ast}}\ x))\ x\ (x^{\bullet} : X^{\bullet}\ x) \\
&&&&& \to \mathsf{Tm}^{\bullet}\ (S_{\alpha}^{[X]\mathsf{Ty}}\ A\ x\ x^{\bullet})\ (a \circledast x) \\
& S_{\alpha}^{[X]\mathsf{Tm}} && \triangleq{ } && \lambda a\ x^{\bullet} \mapsto (S^{[X]\mathsf{Tm}}_{\iota}\mmod{I^{\ast}})\smkey{\bullet}{\angles{\alpha}^{\ast}I^{\ast}}\ a\ x^{\bullet} \end{alignat*}
Note that we have \begin{alignat*}{3}
& S_{\alpha}^{[X]\mathsf{Ty}} && ={ } && \lambda A\ x^{\bullet} \mapsto Y^{[X]\mathsf{Ty},-1}\ (\lambda \mmod{Y^{\ast}} \mapsto (S^{[X]\mathsf{Ty}}\ (A\ \mmod{F^{\ast}})\ \mmod{S_{0}^{\ast}})\emkey{\alpha}{Y^{\ast}}{F^{\ast}S_{0}^{\ast}})\ x^{\bullet} \\
& S_{\alpha}^{[X]\mathsf{Tm}} && ={ } && \lambda a\ x^{\bullet} \mapsto Y^{[X]\mathsf{Tm},-1}\ (\lambda \mmod{Y^{\ast}} \mapsto (S^{[X]\mathsf{Tm}}\ (a\ \mmod{F^{\ast}})\ \mmod{S_{0}^{\ast}})\emkey{\alpha}{Y^{\ast}}{F^{\ast}S_{0}^{\ast}})\ x^{\bullet} \end{alignat*}
These actions preserve all type-theoretic operations. This follows from the fact that the actions of $S_{0}$ and $Y$ on types and terms preserve the type-theoretic operations. We give the detailed proof for $\mathbf{B}$ and $\Pi$. \begin{alignat*}{3}
& S_{\alpha}^{[X]\mathsf{Ty}}\ (\lambda \mmod{F^{\ast}}\ x \mapsto \mathbf{B})
&& ={ } && Y^{[X]\mathsf{Ty},-1}\ (\lambda \mmod{Y^{\ast}}\ x^{\dagger} \mapsto (S^{[X]\mathsf{Ty}}\ \mathbf{B}\ \mmod{S_{0}^{\ast}}\ (x^{\dagger}\ \mmod{S_{0}^{\ast}}))\emkey{\alpha}{Y^{\ast}}{F^{\ast}S_{0}^{\ast}})
\tag*{(definition of $S_{\alpha}^{[X]\mathsf{Ty}}$)} \\
&&& ={ } && Y^{[X]\mathsf{Ty},-1}\ (\lambda \mmod{Y^{\ast}}\ x^{\dagger} \mapsto \mathbf{B}^{\dagger}\emkey{\alpha}{Y^{\ast}}{F^{\ast}S_{0}^{\ast}})
\tag*{($S^{[X]\mathsf{Ty}}$ preserves $\mathbf{B}$)} \\
&&& ={ } && Y^{[X]\mathsf{Ty},-1}\ (\lambda \mmod{Y^{\ast}}\ x^{\dagger} \mapsto \mathbf{B}^{\dagger})
\tag*{(commutation with $-\emkey{\alpha}{Y^{\ast}}{F^{\ast}S_{0}^{\ast}}$)} \\
&&& ={ } && \mathbf{B}^{\bullet}
\tag*{($Y^{[X]\mathsf{Ty}}$ preserves $\mathbf{B}$)} \end{alignat*}
\begin{alignat*}{1}
& S_{\alpha}^{[X]\mathsf{Ty}}\ (\lambda \mmod{F^{\ast}}\ x \mapsto \Pi\ (A\ \mmod{F^{\ast}}\ x)\ (B\ \mmod{F^{\ast}}\ x)) \\
& \quad ={ } Y^{[X]\mathsf{Ty},-1}\ (\lambda \mmod{Y^{\ast}}\ x^{\dagger} \mapsto (S^{[X]\mathsf{Ty}}\ (\lambda x \mapsto \Pi\ (A\ \mmod{F^{\ast}}\ x)\ (B\ \mmod{F^{\ast}}\ x))\ \mmod{S_{0}^{\ast}}\ (x^{\dagger}\ \mmod{S_{0}^{\ast}}))\emkey{\alpha}{Y^{\ast}}{F^{\ast}S_{0}^{\ast}})
\tag*{(definition of $S_{\alpha}^{[X]\mathsf{Ty}}$)} \\
& \quad ={ } Y^{[X]\mathsf{Ty},-1}\ (\lambda \mmod{Y^{\ast}}\ x^{\dagger} \mapsto (\Pi^{\dagger}\ (S^{[X]\mathsf{Ty}}\ A\ \mmod{S_{0}^{\ast}}\ (x^{\dagger}\ \mmod{S_{0}^{\ast}}))\ (S^{[X,\mathsf{Tm}]\mathsf{Ty}}\ B\ \mmod{S_{0}^{\ast}}\ (x^{\dagger}\ \mmod{S_{0}^{\ast}})))\emkey{\alpha}{Y^{\ast}}{F^{\ast}S_{0}^{\ast}})
\tag*{($S^{[X]\mathsf{Ty}}$ preserves $\Pi$)} \\
& {\quad ={ } Y^{[X]\mathsf{Ty},-1}\ (\lambda \mmod{Y^{\ast}}\ x^{\dagger} \mapsto \Pi^{\dagger}\ }
(S^{[X]\mathsf{Ty}}\ A\ \mmod{S_{0}^{\ast}}\ (x^{\dagger}\ \mmod{S_{0}^{\ast}}))\emkey{\alpha}{Y^{\ast}}{F^{\ast}S_{0}^{\ast}} \\
& \hphantom{\quad ={ } Y^{[X]\mathsf{Ty},-1}\ (\lambda \mmod{Y^{\ast}}\ x^{\dagger} \mapsto \Pi^{\dagger}\ }
(S^{[X,\mathsf{Tm}]\mathsf{Ty}}\ B\ \mmod{S_{0}^{\ast}}\ (x^{\dagger}\ \mmod{S_{0}^{\ast}}))\emkey{\alpha}{Y^{\ast}}{F^{\ast}S_{0}^{\ast}})
\tag*{(commutation with $-\emkey{\alpha}{Y^{\ast}}{F^{\ast}S_{0}^{\ast}}$)} \\
& {\quad ={ } \Pi^{\bullet}\ }
(Y^{[X]\mathsf{Ty},-1}\ (\lambda \mmod{Y^{\ast}}\ x^{\dagger} \mapsto (S^{[X]\mathsf{Ty}}\ A\ \mmod{S_{0}^{\ast}}\ (x^{\dagger}\ \mmod{S_{0}^{\ast}}))\emkey{\alpha}{Y^{\ast}}{F^{\ast}S_{0}^{\ast}})) \\
& \hphantom{\quad ={ } \Pi^{\bullet}\ }
(Y^{[X,\mathsf{Tm}]\mathsf{Ty},-1}\ (\lambda \mmod{Y^{\ast}}\ x^{\dagger} \mapsto (S^{[X,\mathsf{Tm}]\mathsf{Ty}}\ B\ \mmod{S_{0}^{\ast}}\ (x^{\dagger}\ \mmod{S_{0}^{\ast}}))\emkey{\alpha}{Y^{\ast}}{F^{\ast}S_{0}^{\ast}})) \\
\tag*{($Y^{[X]\mathsf{Ty}}$ preserves $\Pi$)} \\
& \quad ={ } \Pi^{\bullet}\ (S^{[X]\mathsf{Ty}}_{\alpha}\ A)\ (S^{[X,\mathsf{Tm}]\mathsf{Ty}}_{\alpha}\ B)
\tag*{(definitions of $S^{[X]\mathsf{Ty}}_{\alpha}$ and $S^{[X,\mathsf{Tm}]\mathsf{Ty}}_{\alpha}$)} \\ \end{alignat*}
\subsection{Induction principles}\label{sec:indPrinc}
\restateIndTerminal* \begin{proof}
By biinitiality of $\mathbf{0}_{\MT_{\Pi,\mathbf{B}}}$, we have a section $S_{0}$ of the displayed model $\mathbf{0}_{\MT_{\Pi,\mathbf{B}}}^{\dagger}$.
By \cref{prop:inserter_terminal}, $\mathcal{I}(\mathbf{0}_{\MT_{\Pi,\mathbf{B}}}^{\bullet})$ has a terminal object.
This determines a section $\angles{\alpha}$ of $I : \mathcal{I}(\mathbf{0}_{\MT_{\Pi,\mathbf{B}}}^{\bullet}) \to \{\diamond\}$.
We thus have a relative section $S_{\alpha}$ of $\mathbf{0}_{\MT_{\Pi,\mathbf{B}}}^{\bullet}$. \end{proof}
\restateIndRenamings* \begin{proof}
By biinitiality of $\mathbf{0}_{\MT_{\Pi,\mathbf{B}}}$, we have a section $S_{0}$ of the displayed model $\mathbf{0}_{\MT_{\Pi,\mathbf{B}}}^{\dagger}$.
We now equip $\mathcal{I}(\mathbf{0}_{\MT_{\Pi,\mathbf{B}}}^{\bullet})$ with the structure of a renaming algebra.
By \cref{prop:inserter_terminal}, $\mathcal{I}(\mathbf{0}_{\MT_{\Pi,\mathbf{B}}}^{\bullet})$ has a terminal object.
We pose
\[ \mathsf{Var}^{\mathcal{I}}\ A \triangleq \mmod{I^{\ast}} \to \{ a : \mathsf{Var}^{\mathcal{Ren}}\ (A\ \mmod{I^{\ast}}) \mid S_{\iota}^{\mathsf{Tm}}\ (\mathsf{var}\ a) = \mathsf{var}^{\bullet}\ \mmod{I^{\ast}}\ a \}. \]
By \cref{prop:inserter_rep}, $\mathsf{Var}^{\mathcal{I}}$ is a family of locally representable presheaves.
Thus $\mathcal{I}(\mathbf{0}_{\MT_{\Pi,\mathbf{B}}}^{\bullet})$ has the structure of a renaming algebra.
By biinitiality of $\mathcal{Ren}$, we obtain a section $\angles{\alpha}$ of $I : \mathcal{I}(\mathbf{0}_{\MT_{\Pi,\mathbf{B}}}^{\bullet}) \to \mathcal{Ren}$, and thus a relative section $S_{\alpha}$ of $\mathbf{0}_{\MT_{\Pi,\mathbf{B}}}^{\bullet}$.
The morphism $\angles{\alpha}$ of renaming algebras has an action on variables.
\begin{alignat*}{1}
& \angles{\alpha}^{\mathsf{Var}} : \forall (a : \mathsf{Var}^{\mathcal{Ren}}\ A)\ \mmod{\angles{\alpha}^{\ast}I^{\ast}} \to S_{\iota}^{\mathsf{Tm}}\ (\mathsf{var}\ a\smkey{\bullet}{\angles{\alpha}^{\ast}I^{\ast}}) = \mathsf{var}^{\bullet}\ \mmod{I^{\ast}}\ a\smkey{\bullet}{\angles{\alpha}^{\ast}I^{\ast}}
\end{alignat*}
Thus given any variable $a : \mathsf{Var}^{\mathcal{Ren}}\ A$, we know that $S_{\alpha}^{\mathsf{Tm}}\ (\mathsf{var}\ a) = (\mathsf{var}^{\bullet}\ \mmod{I^{\ast}})\smkey{\bullet}{\angles{\alpha}^{\ast}I^{\ast}}\ a$. \end{proof}
\restateIndCube* \begin{proof}
By biinitiality of $\mathbf{0}_{\mathsf{CTT}}$, we have a section $S_{0}$ of the displayed model $\mathbf{0}_{\mathsf{CTT}}^{\dagger}$.
We now equip $\mathcal{I}(\mathbf{0}_{\mathsf{CTT}}^{\bullet})$ with the structure of a cubical algebra.
By \cref{prop:inserter_terminal}, $\mathcal{I}(\mathbf{0}_{\mathsf{CTT}}^{\bullet})$ has a terminal object.
We pose
\[ \mathbb{I}^{\mathcal{I}} \triangleq \mmod{I^{\ast}} \to \{ i : \mathbb{I}^{\square} \mid S_{\iota}^{\mathbb{I}}\ (\mathsf{int}\ i) = \mathsf{int}^{\bullet}\ \mmod{I^{\ast}}\ i \}. \]
Since we have $S^{\mathbb{I}}_{\iota}\ 0^{\square} = 0^{\bullet}$ and $S^{\mathbb{I}}_{\iota}\ 1^{\square} = 1^{\bullet}$, we can lift $0^{\square}$ and $1^{\square}$ to elements $0^{\mathcal{I}},1^{\mathcal{I}} : \mathbb{I}^{\mathcal{I}}$.
By \cref{prop:inserter_rep}, $\mathbb{I}^{\mathcal{I}}$ is a locally representable presheaf.
Thus $\mathcal{I}(\mathbf{0}_{\mathsf{CTT}}^{\bullet})$ has the structure of a cubical algebra.
Furthermore $I : \mathcal{I}(\mathbf{0}_{\mathsf{CTT}}^{\bullet}) \to \square$ is a morphism of cubical algebras.
By biinitiality of $\square$, we obtain a section $\angles{\alpha}$ of $I : \mathcal{I}(\mathbf{0}_{\mathsf{CTT}}^{\bullet}) \to \square$, and thus a relative section $S_{\alpha}$ of $\mathbf{0}_{\mathsf{CTT}}^{\bullet}$.
The morphism $\angles{\alpha}$ of cubical algebras has an action on the interval.
\begin{alignat*}{1}
& \angles{\alpha}^{\mathbb{I}} : \forall (i : \mathbb{I}^{\square})\ \mmod{\angles{\alpha}^{\ast}I^{\ast}} \to S_{\iota}^{\mathbb{I}}\ (\mathsf{int}\ i\smkey{\bullet}{\angles{\alpha}^{\ast}I^{\ast}}) = \mathsf{int}^{\bullet}\ \mmod{I^{\ast}}\ i\smkey{\bullet}{\angles{\alpha}^{\ast}I^{\ast}}
\end{alignat*}
Thus given any interval element $i : \mathbb{I}^{\square}$, we know that $S_{\alpha}^{\mathbb{I}}\ (\mathsf{int}\ i) = (\mathsf{int}^{\bullet}\ \mmod{I^{\ast}})\smkey{\bullet}{\angles{\alpha}^{\ast}I^{\ast}}\ i$. \end{proof}
\restateIndAtomicCube* \begin{proof}
Similar to the proofs of \cref{lem:ind_renamings} and \cref{lem:ind_cubes}. \end{proof}
\section{Normalization}\label{sec:normalization}
In this section, we describe a normalization proof for a type theory $\MT_{\mathcal{U}}$ with a hierarchy of universes indexed by natural numbers, closed under $\Pi$-types and boolean types. This proof relies on an induction principle relative to $\mathcal{Ren} \to \mathbf{0}_{\MT_{\mathcal{U}}}$.
The proof follows the same structure as Coquand's normalization proof from~\cite{CoquandNormalization}; it is an algebraic presentation of Normalization by Evaluation (NbE). There is one important difference between the type theory $\MT_{\mathcal{U}}$ and the type theory considered in~\cite{CoquandNormalization}. Our type theory is not cumulative; type-formers at different universe levels are distinct. We believe that proving normalization for a cumulative type theory should be possible in our framework, but getting the details right is tricky and the definitions become very verbose. These details, such as the precise definition of the normal forms, were omitted from Coquand's proof. We prove slightly more; in addition to the existence of normal forms, we also prove uniqueness. Proving uniqueness relies on the computation rules of relative sections.
\subsection{The type theory \texorpdfstring{$\MT_{\mathcal{U}}$}{T\_{}U}} We describe the structure of a model of $\MT_{\mathcal{U}}$ over a category $\mathcal{C}$, internally to $\mathbf{Psh}^{\mathcal{C}}$.
Types and terms are now indexed by universe levels. \begin{alignat*}{3}
& \mathsf{Ty} && :{ } && \forall (i : \mathbb{N}) \to \mathrm{Psh}^{\mathcal{C}} \\
& \mathsf{Tm} && :{ } && \forall (i : \mathbb{N})\ (A : \mathsf{Ty}_{i}) \to \mathrm{RepPsh}^{\mathcal{C}} \end{alignat*}
We have lifting functions that can be used to move between different universe levels. \begin{alignat*}{3}
& \mathsf{Lift} && :{ } && \forall (i : \mathbb{N}) \to \mathsf{Ty}_{i} \to \mathsf{Ty}_{i+1} \\
& \mathsf{lift} && :{ } && \forall (i : \mathbb{N})\ (A : \mathsf{Ty}_{i}) \to \mathsf{Tm}_{i}\ A \simeq \mathsf{Tm}_{i+1}\ (\mathsf{Lift}_{i}\ A) \end{alignat*}
We have a hierarchy of universes, indexed by universe levels. The terms of the $i$-th universe are in bijection with the types of level $i$. \begin{alignat*}{3}
& \mathcal{U} && :{ } && \forall (i : \mathbb{N}) \to \mathsf{Ty}_{i+1} \\
& \mathsf{El} && :{ } && \forall (i : \mathbb{N}) \to \mathsf{Tm}_{i+1}\ \mathcal{U}_{i} \simeq \mathsf{Ty}_{i} \end{alignat*}
Finally, we have $\Pi$-types and boolean types at every universe level. The motive of the eliminator for booleans can be at any universe level. \begin{alignat*}{3}
& \Pi && :{ } && \forall i\ (A : \mathsf{Ty}_{i})\ (B : \mathsf{Tm}\ A \to \mathsf{Ty}_{i}) \to \mathsf{Ty}_{i} \\
& \mathsf{app} && :{ } && \forall i\ A\ B \to \mathsf{Tm}_{i}\ (\Pi\ A\ B) \simeq ((a : \mathsf{Tm}\ A) \to \mathsf{Tm}\ (B\ a)) \end{alignat*} \begin{alignat*}{3}
& \mathbf{B} && :{ } && \forall i \to \mathsf{Ty}_{i} \\
& \mathsf{true},\mathsf{false} && :{ } && \forall i \to \mathsf{Tm}_{i}\ \mathbf{B} \\
& \mathsf{elim}_{\BoolTy} && :{ } && \forall i\ j\ (P : \mathsf{Tm}\ \mathbf{B} \to \mathsf{Ty}_{j})\ (t : \mathsf{Tm}\ (P\ \mathsf{true}))\ (f : \mathsf{Tm}\ (P\ \mathsf{false}))\ (b : \mathsf{Tm}\ \mathbf{B}_{i}) \to \mathsf{Tm}\ (P\ b) \\
& - && :{ } && \mathsf{elim}_{\BoolTy}\ P\ t\ f\ \mathsf{true} = t \\
& - && :{ } && \mathsf{elim}_{\BoolTy}\ P\ t\ f\ \mathsf{false} = f \end{alignat*}
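To fix ideas, the data above can be transcribed very roughly as the following Lean structure, a hedged sketch in which plain Lean types stand in for the (locally representable) presheaves, the equivalences are packaged as a hypothetical \texttt{Iso} record, and all naturality, representability and equational data (including the computation rules above) are omitted.
\begin{verbatim}
structure Iso (a b : Type) where
  fwd : a -> b
  bwd : b -> a

structure ModelTU where
  Ty     : Nat -> Type
  Tm     : (i : Nat) -> Ty i -> Type
  LiftT  : (i : Nat) -> Ty i -> Ty (i + 1)
  liftE  : (i : Nat) -> (A : Ty i) -> Iso (Tm i A) (Tm (i + 1) (LiftT i A))
  Univ   : (i : Nat) -> Ty (i + 1)
  El     : (i : Nat) -> Iso (Tm (i + 1) (Univ i)) (Ty i)
  Pi     : (i : Nat) -> (A : Ty i) -> (Tm i A -> Ty i) -> Ty i
  app    : (i : Nat) -> (A : Ty i) -> (B : Tm i A -> Ty i) ->
           Iso (Tm i (Pi i A B)) ((a : Tm i A) -> Tm i (B a))
  BoolT  : (i : Nat) -> Ty i
  btrue  : (i : Nat) -> Tm i (BoolT i)
  bfalse : (i : Nat) -> Tm i (BoolT i)
  elimB  : (i j : Nat) -> (P : Tm i (BoolT i) -> Ty j) ->
           Tm j (P (btrue i)) -> Tm j (P (bfalse i)) ->
           (b : Tm i (BoolT i)) -> Tm j (P b)
\end{verbatim}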
We have, as for $\MT_{\Pi,\mathbf{B}}$, notions of displayed models without context extensions and of relative sections for $\MT_{\mathcal{U}}$. The following variant of \cref{lem:ind_renamings} can easily be proven for $\MT_{\mathcal{U}}$.
\begin{definition}
A \defemph{renaming algebra} over a model $\mathcal{S}$ of $\MT_{\mathcal{U}}$ is a category $\mathcal{R}$ with a terminal object, along with a functor $F : \mathcal{R} \to \mathcal{S}$ preserving the terminal object, a locally representable dependent presheaf of variables
\[ \mathsf{Var}^{\mathcal{R}} : \forall i\ (A : \mmod{F^{\ast}} \to \mathsf{Ty}_{i}^{\mathcal{S}}) \to \mathrm{RepPsh}^{\mathcal{R}} \]
and an action on variables $\mathsf{var} : \forall i\ A\ (a : \mathsf{Var}_{i}\ A)\ \mmod{F^{\ast}} \to \mathsf{Tm}_{i}^{\mathcal{S}}\ (A\ \mmod{F^{\ast}})$ that preserves context extensions.
The category of renamings $\mathcal{Ren}_{\mathcal{S}}$ over a model $\mathcal{S}$ is defined as the biinitial renaming algebra over $\mathcal{S}$.
We denote the category of renamings of the biinitial model $\mathbf{0}_{\MT_{\mathcal{U}}}$ by $\mathcal{Ren}$.
\lipicsEnd{} \end{definition}
\begin{lemma}[Induction principle relative to $\mathcal{Ren} \to \mathbf{0}_{\MT_{\mathcal{U}}}$]\label{lem:ind_renamings_univ}
Let $\mathbf{0}_{\MT_{\mathcal{U}}}^{\bullet}$ be a global displayed model without context extensions over $F : \mathcal{Ren} \to \mathbf{0}_{\MT_{\mathcal{U}}}$, along with, internally to $\mathbf{Psh}^{\mathcal{I}(\mathbf{0}_{\MT_{\mathcal{U}}}^{\bullet})}$, a global map
\[ \mathsf{var}^{\bullet} : \forall \mmod{I^{\ast}}\ i\ (A : \mmod{F^{\ast}} \to \mathsf{Ty}_{i}) (a : \mathsf{Var}_{i}\ A) \to \mathsf{Tm}^{\bullet}\ (S_{\iota}^{\mathsf{Ty}}\ \mmod{I^{\ast}}\ A)\ (\mathsf{var}\ a). \]
Then there exists a relative section $S_{\alpha}$ of $\mathbf{0}_{\MT_{\mathcal{U}}}^{\bullet}$.
It satisfies the additional computation rule
\[ S_{\alpha}^{\mathsf{Tm}}\ (\mathsf{var}\ a) = (\mathsf{var}^{\bullet}\ \mmod{I^{\ast}})\smkey{\bullet}{\angles{\alpha}^{\ast}I^{\ast}}\ a. \]
\qed{} \end{lemma}
\subsection{Normal forms}
The goal of normalization is to prove that every term admits a unique normal form. We first need to define normal types, normal forms and neutral terms (which correspond to stuck computations). They are defined, internally to $\mathbf{Psh}^{\mathcal{Ren}}$, as inductive families indexed by the terms of $\mathbf{0}_{\MT_{\mathcal{U}}}$. \begin{alignat*}{3}
& \mathsf{Ne} && :{ } && \forall i\ (A : \mmod{F^{\ast}} \to \mathsf{Ty}_{i}) \to (\mmod{F^{\ast}} \to \mathsf{Tm}_{i}\ (A\ \mmod{F^{\ast}})) \to \mathrm{Psh}^{\mathcal{Ren}} \\
& \mathsf{Nf} && :{ } && \forall i\ (A : \mmod{F^{\ast}} \to \mathsf{Ty}_{i}) \to (\mmod{F^{\ast}} \to \mathsf{Tm}_{i}\ (A\ \mmod{F^{\ast}})) \to \mathrm{Psh}^{\mathcal{Ren}} \\
& \mathsf{NfTy} && :{ } && \forall i \to (\mmod{F^{\ast}} \to \mathsf{Ty}_{i}) \to \mathrm{Psh}^{\mathcal{Ren}} \end{alignat*} An element of $\mathsf{Ne}\ a$ (\resp{} $\mathsf{Nf}\ a$) is a witness of the fact that the term $a$ is a neutral term (\resp{} admits a normal form). An element of $\mathsf{NfTy}\ A$ is a witness that the type $A$ admits a normal form.
We list below the constructors of these inductive families. \begin{alignat*}{3}
& \mathsf{var}^{\mathsf{ne}} && :{ } && \forall i\ A\ (a : \mathsf{Var}_{i}\ A) \to \mathsf{Ne}_{i}\ A\ (\mathsf{var}\ a) \\
& \mathsf{lift}^{-1,\mathsf{ne}} && :{ } && \forall i\ A \to \mathsf{Ne}_{i+1}\ (\mathsf{Lift}_{i}\ A)\ a \to \mathsf{Ne}_{i}\ (\mathsf{lift}^{-1}\ a) \\
& \mathsf{app}^{\mathsf{ne}} && :{ } && \forall i\ A\ B\ f\ a \to \mathsf{Ne}_{i}\ f \to \mathsf{Nf}_{i}\ a \\
&&&&& \to \mathsf{Ne}_{i}\ (\mathsf{app} \circleddollar A \circledast B \circledast f \circledast a) \\
& \mathsf{elim}_{\BoolTy}^{\mathsf{ne}} && :{ } && \forall i\ j\ P\ t\ f\ b \to ((m : \mathsf{Var}\ (\lambda \mmod{F^{\ast}} \mapsto \mathbf{B}_{i})) \to \mathsf{NfTy}_{j}\ (P \circledast \mathsf{var}\ m)) \\
&&&&& \to \mathsf{Nf}\ t \to \mathsf{Nf}\ f \to \mathsf{Ne}_{i}\ b \to \mathsf{Ne}_{j}\ (\mathsf{elim}_{\BoolTy} \circleddollar P \circledast t \circledast f \circledast b) \\ \end{alignat*} \begin{alignat*}{3}
& \mathsf{nfty}^{\mathsf{nf}} && :{ } && \forall i \to \mathsf{NfTy}_{i}\ A \to \mathsf{Nf}_{i+1}\ (\lambda \mmod{F^{\ast}} \mapsto \mathcal{U}_{i})\ A \\
& \mathsf{ne}^{\mathsf{nf}}_\mathbf{B} && :{ } && \forall i\ a \to \mathsf{Ne}_{i}\ (\lambda \mmod{F^{\ast}} \mapsto \mathbf{B}_{i})\ a \to \mathsf{Nf}_{i}\ a \\
& \mathsf{ne}^{\mathsf{nf}}_{\mathsf{El}} && :{ } && \forall i\ A \to \mathsf{Ne}_{i+1}\ (\lambda \mmod{F^{\ast}} \mapsto \mathcal{U}_{i})\ A \\
&&&&& \to \mathsf{Ne}_{i}\ (\mathsf{El} \circleddollar A)\ a \to \mathsf{Nf}_{i}\ a \\
& \mathsf{lift}^{\mathsf{nf}} && :{ } && \forall i\ A \to \mathsf{Nf}_{i}\ A\ a \to \mathsf{Nf}_{i+1}\ (\mathsf{Lift}\ A)\ (\mathsf{lift}\ a) \\
& \mathsf{true}^{\mathsf{nf}} && :{ } && \forall i \to \mathsf{Nf}_{i}\ \mathsf{true} \\
& \mathsf{false}^{\mathsf{nf}} && :{ } && \forall i \to \mathsf{Nf}_{i}\ \mathsf{false} \\
& \mathsf{lam}^{\mathsf{nf}} && :{ } && \forall i\ A\ B\ b \to \mathsf{NfTy}_{i}\ A \to ((a : \mathsf{Var}\ A) \to \mathsf{NfTy}_{i}\ (B \circledast \mathsf{var}\ a)) \\
&&&&& \to ((a : \mathsf{Var}\ A) \to \mathsf{Nf}_{i}\ (b \circledast \mathsf{var}\ a)) \\
&&&&& \to \mathsf{Nf}_{i}\ (\mathsf{lam} \circleddollar A \circledast B \circledast b) \\ \end{alignat*} \begin{alignat*}{3}
& \mathsf{ne}^{\mathsf{nfty}}_{\mathcal{U}} && :{ } && \forall i\ A \to \mathsf{Ne}_{i+1}\ (\lambda \mmod{F^{\ast}} \mapsto \mathcal{U}_{i})\ A \to \mathsf{NfTy}_{i}\ (\mathsf{El} \circleddollar A) \\
& \mathcal{U}^{\mathsf{nfty}} && :{ } && \forall i \to \mathsf{NfTy}_{i+1}\ (\lambda \mmod{F^{\ast}} \mapsto \mathcal{U}_{i}) \\
& \mathsf{Lift}^{\mathsf{nfty}} && :{ } && \forall i \to \mathsf{NfTy}_{i}\ A \to \mathsf{NfTy}_{i+1}\ (\mathsf{Lift}\ A) \\
& \mathbf{B}^{\mathsf{nfty}} && :{ } && \forall i \to \mathsf{NfTy}_{i}\ (\lambda \mmod{F^{\ast}} \mapsto \mathbf{B}_{i}) \\
& \Pi^{\mathsf{nfty}} && :{ } && \forall i\ A\ B \to \mathsf{NfTy}_{i}\ A \to ((a : \mathsf{Var}\ A) \to \mathsf{NfTy}_{i}\ (B \circledast \mathsf{var}\ a)) \\
&&&&& \to \mathsf{NfTy}_{i}\ (\Pi \circleddollar A \circledast B) \end{alignat*}
The construction of our normalization function will work for any algebra ($\mathsf{Ne}$, $\mathsf{Nf}$, $\mathsf{NfTy}$, $\mathsf{var}^{\mathsf{ne}}$, $\mathsf{app}^{\mathsf{ne}}$, $\dotsc$) with the above signature. The choice of the initial algebra is only needed to show uniqueness of normal forms in~\cref{lem:stab_norm}.
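For orientation, the shape of these inductive families can be rendered as the following mutual Lean inductive, a hedged sketch in which variables are de Bruijn indices and all type, universe-level and presheaf indexing is erased; binders (in the motive of the eliminator, in the codomain of $\Pi$, and in the body of $\lambda$) are likewise flattened, so the correspondence with the constructors above is only schematic.
\begin{verbatim}
namespace NfSketch

mutual
  inductive Neu : Type where
    | var     : Nat -> Neu                      -- var^ne
    | liftInv : Neu -> Neu                      -- lift^{-1,ne}
    | app     : Neu -> Nf -> Neu                -- app^ne
    | elimB   : NfTy -> Nf -> Nf -> Neu -> Neu  -- elimB^ne: motive, branches, scrutinee

  inductive Nf : Type where
    | ofTy : NfTy -> Nf                         -- nfty^nf
    | neB  : Neu -> Nf                          -- ne^nf_B
    | neEl : Neu -> Neu -> Nf                   -- ne^nf_El: neutral code, neutral term
    | lift : Nf -> Nf                           -- lift^nf
    | tt   : Nf                                 -- true^nf
    | ff   : Nf                                 -- false^nf
    | lam  : NfTy -> NfTy -> Nf -> Nf           -- lam^nf: domain, codomain, body

  inductive NfTy : Type where
    | neU  : Neu -> NfTy                        -- ne^nfty_U
    | univ : NfTy                               -- U^nfty
    | lift : NfTy -> NfTy                       -- Lift^nfty
    | bool : NfTy                               -- B^nfty
    | pi   : NfTy -> NfTy -> NfTy               -- Pi^nfty: domain, codomain
end

end NfSketch
\end{verbatim}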
\subsection{The normalization displayed model}
We now construct a displayed model without context extensions $\mathbf{0}_{\MT_{\mathcal{U}}}^{\bullet}$ over $F : \mathcal{Ren} \to \mathbf{0}_{\MT_{\mathcal{U}}}$, internally to $\mathbf{Psh}^{\mathcal{Ren}}$.
A displayed type $A^{\bullet} : \mathsf{Ty}^{\bullet}\ A$ of $\mathbf{0}_{\MT_{\mathcal{U}}}^{\bullet}$ over a type $A : \mmod{F^{\ast}} \to \mathsf{Ty}_{i}$ consists of four components $(A^{\bullet}_{\mathsf{nfty}}, A^{\bullet}_{p}, A^{\bullet}_{\mathsf{ne}}, A^{\bullet}_{\mathsf{nf}})$. \begin{itemize}
\item $A^{\bullet}_{\mathsf{nfty}} : \mathsf{NfTy}_{i}\ A$ is a witness of the fact that the type $A$ admits a normal form.
\item $A^{\bullet}_{p} : (\mmod{F^{\ast}} \to \mathsf{Tm}\ (A\ \mmod{F^{\ast}})) \to \mathrm{Psh}^{\mathcal{Ren}}_{i}$ is a proof-relevant logical predicate over the terms of type $A$, valued in $i$-small presheaves.
\item $A^{\bullet}_{\mathsf{ne}} : \forall a \to \mathsf{Ne}_{i}\ a \to A^{\bullet}_{p}\ a$ shows that neutral terms satisfy the logical predicate $A^{\bullet}_{p}$.
The function $A^{\bullet}_{\mathsf{ne}}$ is often called \emph{unquote} or \emph{reflect}.
\item $A^{\bullet}_{\mathsf{nf}} : \forall a \to A^{\bullet}_{p}\ a \to \mathsf{Nf}_{i}\ a$ shows that the terms that satisfy the logical predicate $A^{\bullet}_{p}$ admit normal forms.
The function $A^{\bullet}_{\mathsf{nf}}$ is often called \emph{quote} or \emph{reify}. \end{itemize}
A displayed term $a^{\bullet} : \mathsf{Tm}^{\bullet}\ A^{\bullet}\ a$ of type $A^{\bullet}$ over a term $(a : \mmod{F^{\ast}} \to \mathsf{Tm}\ (A\ \mmod{F^{\ast}}))$ is an inhabitant $a^{\bullet}$ of $A^{\bullet}_{p}\ a$, \ie{} a witness of the fact that $a$ satisfies the logical predicate $A^{\bullet}_{p}$.
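In Lean notation, and with ordinary types standing in for presheaves, the data of a displayed type and of a displayed term can be summarised by the following hedged sketch, where \texttt{Tm}, \texttt{Ne}, \texttt{Nf} and \texttt{NfTy} are abstract parameters rather than the actual indexed presheaves, and all naturality conditions are omitted.
\begin{verbatim}
-- Ne a / Nf a: witnesses that the term a is neutral / admits a normal form;
-- NfTy: normal forms of the fixed type A.
structure DispTy (Tm : Type) (Ne Nf : Tm -> Type) (NfTy : Type) where
  nfty    : NfTy                        -- the type admits a normal form
  pred    : Tm -> Type                  -- proof-relevant logical predicate
  reflect : (a : Tm) -> Ne a -> pred a  -- unquote: neutral terms satisfy pred
  reify   : (a : Tm) -> pred a -> Nf a  -- quote: pred gives a normal form

-- A displayed term over a is an element of the predicate at a.
def DispTm {Tm : Type} {Ne Nf : Tm -> Type} {NfTy : Type}
    (A : DispTy Tm Ne Nf NfTy) (a : Tm) : Type :=
  A.pred a
\end{verbatim}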
The displayed lifted types are defined as follows. Because the structures of $\MT_{\mathcal{U}}$ are not strictly preserved by these lifting operations, $\mathsf{Lift}_{i}\ A$ can be seen as a record type with a projection $\mathsf{lift}^{-1}$ and a constructor $\mathsf{lift}$. This definition would directly extend to $\Sigma$-types or to other record types. \begin{alignat*}{3}
& \mathsf{Lift}^{\bullet} && :{ } && \forall i\ A \to \mathsf{Ty}^{\bullet}_{i}\ A \to \mathsf{Ty}^{\bullet}_{i+1}\ (\mathsf{Lift}_{i} \circleddollar A) \\
& {(\mathsf{Lift}^{\bullet}_{i}\ A^{\bullet})}_{\mathsf{nfty}} && \triangleq{ } && \mathsf{Lift}^{\mathsf{nfty}}\ A^{\bullet}_{\mathsf{nfty}} \\
& {(\mathsf{Lift}^{\bullet}_{i}\ A^{\bullet})}_{p} && \triangleq{ } && \lambda a \mapsto A^{\bullet}_{p}\ (\mathsf{lift}^{-1} \circleddollar a) \\
& {(\mathsf{Lift}^{\bullet}_{i}\ A^{\bullet})}_{\mathsf{ne}} && \triangleq{ } && \lambda a_{\mathsf{ne}} \mapsto A^{\bullet}_{\mathsf{ne}}\ (\mathsf{lift}^{-1,\mathsf{ne}}\ a_{\mathsf{ne}}) \\
& {(\mathsf{Lift}^{\bullet}_{i}\ A^{\bullet})}_{\mathsf{nf}} && \triangleq{ } && \lambda a^{\bullet} \mapsto \mathsf{lift}^{\mathsf{nf}}\ (A^{\bullet}_{\mathsf{nf}}\ a^{\bullet}) \end{alignat*}
The definition of the displayed universes of the normalization displayed model is below. \begin{alignat*}{3}
& \mathcal{U}^{\bullet}_{i} && :{ } && \mathsf{Ty}_{i+1}^{\bullet}\ (\lambda\ \mmod{F^{\ast}} \mapsto \mathcal{U}_{i}) \\
& \mathcal{U}^{\bullet}_{i,\mathsf{nfty}} && \triangleq{ } && \mathcal{U}^{\mathsf{nfty}}_{i} \\
& \mathcal{U}^{\bullet}_{i,p} && \triangleq{ } && \lambda A \mapsto \mathsf{Ty}^{\bullet}_{i}\ (\mathsf{El} \circleddollar A) \\
& {(\mathcal{U}^{\bullet}_{i,\mathsf{ne}}\ A\ A_{\mathsf{ne}})}_{\mathsf{nfty}} && \triangleq{ } && \mathsf{ne}^{\mathsf{nfty}}_{\mathcal{U}}\ A_{\mathsf{ne}} \\
& {(\mathcal{U}^{\bullet}_{i,\mathsf{ne}}\ A\ A_{\mathsf{ne}})}_{p} && \triangleq{ } && \lambda a \mapsto \mathsf{Ne}\ a \\
& {(\mathcal{U}^{\bullet}_{i,\mathsf{ne}}\ A\ A_{\mathsf{ne}})}_{\mathsf{ne}} && \triangleq{ } && \lambda a_{\mathsf{ne}} \mapsto a_{\mathsf{ne}} \\
& {(\mathcal{U}^{\bullet}_{i,\mathsf{ne}}\ A\ A_{\mathsf{ne}})}_{\mathsf{nf}} && \triangleq{ } && \lambda a_{\mathsf{ne}} \mapsto \mathsf{ne}^{\mathsf{nf}}_{\mathsf{El}}\ a_{\mathsf{ne}} \\
& \mathcal{U}^{\bullet}_{i,\mathsf{nf}}\ A\ A^{\bullet} && \triangleq{ } && A^{\bullet}_{\mathsf{nfty}} \end{alignat*} The most interesting part is the component $\mathcal{U}^{\bullet}_{i,\mathsf{ne}}$ that constructs a displayed type over any neutral element of the universe; any element of a neutral type is itself neutral.
For $\Pi$-types, the logical predicates are defined in the same way as for canonicity. \begin{alignat*}{3}
& {(\Pi^{\bullet}\ A^{\bullet}\ B^{\bullet})}_{\mathsf{nfty}} && \triangleq{ } &&
\Pi^{\mathsf{nfty}}\ A^{\bullet}_{\mathsf{nfty}}\ (\lambda a_{\mathsf{var}} \mapsto \mathsf{let}\ a^{\bullet} = A^{\bullet}_{\mathsf{ne}}\ (\mathsf{var}^{\mathsf{ne}}\ a_{\mathsf{var}})\ \mathsf{in}\ {(B^{\bullet}\ a^{\bullet})}_{\mathsf{nfty}}) \\
& {(\Pi^{\bullet}\ A^{\bullet}\ B^{\bullet})}_{p}\ f && \triangleq{ } &&
\forall a\ (a^{\bullet} : A^{\bullet}_{p}\ a) \to {(B^{\bullet}\ a^{\bullet})}_{p}\ (\mathsf{app} \circleddollar f \circledast a) \\
& {(\Pi^{\bullet}\ A^{\bullet}\ B^{\bullet})}_{\mathsf{ne}}\ f_{\mathsf{ne}} && \triangleq{ } &&
\lambda a^{\bullet} \mapsto {(B^{\bullet}\ a^{\bullet})}_{\mathsf{ne}}\ (\mathsf{app}^{\mathsf{ne}}\ f_{\mathsf{ne}}\ (A^{\bullet}_{\mathsf{nf}}\ a^{\bullet})) \\
& {(\Pi^{\bullet}\ A^{\bullet}\ B^{\bullet})}_{\mathsf{nf}}\ f^{\bullet} && \triangleq{ } && \mathsf{lam}^{\mathsf{nf}}\ (\lambda a_{\mathsf{var}} \mapsto \mathsf{let}\ a^{\bullet} = A^{\bullet}_{\mathsf{ne}}\ (\mathsf{var}^{\mathsf{ne}}\ a_{\mathsf{var}})\ \mathsf{in}\ {(B^{\bullet}\ a^{\bullet})}_{\mathsf{nf}}\ (f^{\bullet}\ a^{\bullet})) \end{alignat*}
For booleans, we define an inductive family $\mathbf{B}^{\bullet}_{p} : (\mmod{F^{\ast}} \to \mathsf{Tm}\ \mathbf{B}) \to \mathrm{Psh}^{\mathcal{Ren}}$ generated by \begin{alignat*}{3}
& \mathsf{true}^{\bullet} && :{ } && \mathbf{B}^{\bullet}_{p}\ (\lambda\ \mmod{F^{\ast}} \mapsto \mathsf{true}) \\
& \mathsf{false}^{\bullet} && :{ } && \mathbf{B}^{\bullet}_{p}\ (\lambda\ \mmod{F^{\ast}} \mapsto \mathsf{false}) \\
& \mathsf{ne}^{\mathbf{B}^{\bullet}_{p}} && :{ } && \forall b \to \mathsf{Ne}\ b \to \mathbf{B}^{\bullet}_{p}\ b \end{alignat*} This family witnesses the fact that a boolean term is an element of the free bipointed presheaf generated by the neutral boolean terms. This extends to the following definition of the displayed boolean type in the normalization model. \begin{alignat*}{3}
& \mathbf{B}^{\bullet}_{\mathsf{nfty}} && \triangleq{ } && \mathbf{B}^{\mathsf{nfty}} \\
& \mathbf{B}^{\bullet}_{\mathsf{ne}}\ b_{\mathsf{ne}} && \triangleq{ } && \mathsf{ne}^{\mathbf{B}^{\bullet}_{p}}\ b_{\mathsf{ne}} \\
& \mathbf{B}^{\bullet}_{\mathsf{nf}}\ \mathsf{true}^{\bullet} && \triangleq{ } && \mathsf{true}^{\mathsf{nf}} \\
& \mathbf{B}^{\bullet}_{\mathsf{nf}}\ \mathsf{false}^{\bullet} && \triangleq{ } && \mathsf{false}^{\mathsf{nf}} \\
& \mathbf{B}^{\bullet}_{\mathsf{nf}}\ (\mathsf{ne}^{\mathbf{B}^{\bullet}_{p}}\ b_{\mathsf{ne}}) && \triangleq{ } && \mathsf{ne}^{\mathsf{nf}}_{\mathbf{B}}\ b_{\mathsf{ne}} \end{alignat*}
The displayed boolean eliminator $\mathsf{elim}_{\BoolTy}^{\bullet}$ is defined using the induction principle of $\mathbf{B}^{\bullet}_{p}$.
\subsection{Normalization}
Given any displayed type $A^{\bullet}$, every variable of type $A$ satisfies the logical predicate $A^{\bullet}_{p}$; we can define \begin{alignat*}{3}
& \mathsf{var}^{\bullet} && :{ } && \forall \mmod{I^{\ast}}\ i\ (A : \mmod{F^{\ast}} \to \mathsf{Ty}_{i}) (a : \mathsf{Var}\ A) \to \mathsf{Tm}^{\bullet}\ (S^{\mathsf{Ty}}_{\iota}\ \mmod{I^{\ast}}\ A)\ (\mathsf{var}\ a) \\
& \mathsf{var}^{\bullet} && \triangleq{ } && \lambda \mmod{I^{\ast}}\ A\ a \mapsto {(S^{\mathsf{Ty}}_{\iota}\ \mmod{I^{\ast}}\ A)}_{\mathsf{ne}}\ (\mathsf{var}^{\mathsf{ne}}\ a) \end{alignat*}
We can now apply \cref{lem:ind_renamings_univ} to $\mathbf{0}_{\MT_{\mathcal{U}}}^{\bullet}$. We obtain a relative section $S_{\alpha}$ of $\mathbf{0}_{\MT_{\mathcal{U}}}^{\bullet}$.
This proves the existence of normal forms, as witnessed by the following normalization function, internally to $\mathbf{Psh}^{\mathcal{Ren}}$. \begin{alignat*}{3}
& \mathsf{norm} && :{ } && \forall i\ (A : \mmod{F^{\ast}} \to \mathsf{Ty}_{i})\ (a : \mmod{F^{\ast}} \to \mathsf{Tm}\ (A\ \mmod{F^{\ast}})) \to \mathsf{Nf}_{i}\ a \\
& \mathsf{norm}\ A\ a && \triangleq{ } && {(S_{\alpha}^{\mathsf{Ty}}\ A)}_{\mathsf{nf}}\ (S_{\alpha}^{\mathsf{Tm}}\ a) \end{alignat*}
\subsection{Stability of normalization}
It remains to show the uniqueness of normal forms. This follows from the stability of normalization: \begin{lemma}[Internally to $\mathbf{Psh}^{\mathcal{Ren}}$]\label{lem:stab_norm}
For every $a_{\mathsf{ne}} : \mathsf{Ne}_{i}\ A\ a$, we have $S_{\alpha}^{\mathsf{Tm}}\ a = {(S_{\alpha}^{\mathsf{Ty}}\ A)}_{\mathsf{ne}}\ a_{\mathsf{ne}}$, and for every $a_{\mathsf{nf}} : \mathsf{Nf}_{i}\ A\ a$, we have ${(S_{\alpha}^{\mathsf{Ty}}\ A)}_{\mathsf{nf}}\ (S_{\alpha}^{\mathsf{Tm}}\ a) = a_{\mathsf{nf}}$.
Furthermore for every $A_{\mathsf{nfty}} : \mathsf{NfTy}_{i}\ A$, we have ${(S_{\alpha}^{\mathsf{Ty}}\ A)}_{\mathsf{nfty}} = A_{\mathsf{nfty}}$. \end{lemma} \begin{proof}
This lemma is proven by induction on $\mathsf{Ne}$, $\mathsf{Nf}$, $\mathsf{NfTy}$.
We only list some of the cases.
\begin{description}
\item[$\mathsf{var}^{\mathsf{ne}}\ a : \mathsf{Ne}\ (\mathsf{var}\ A\ a)$]
\begin{alignat*}{1}
& S_{\alpha}^{\mathsf{Tm}}\ (\mathsf{var}\ A\ a) \\
& \quad { }= {(S_{\alpha}^{\mathsf{Ty}}\ A)}_{\mathsf{ne}}\ (\mathsf{var}^{\mathsf{ne}}\ a)
\tag*{(computation rule of $S_{\alpha}$)}
\end{alignat*}
\item[$\mathsf{app}^{\mathsf{ne}}\ f_{\mathsf{ne}}\ a_{\mathsf{nf}} : \mathsf{Ne}\ (\mathsf{app} \circleddollar A \circledast B \circledast f \circledast a)$]
\begin{alignat*}{1}
& S_{\alpha}^{\mathsf{Tm}}\ (\mathsf{app} \circleddollar f \circledast a) \\
& \quad { }= \mathsf{app}^{\bullet}\ (S_{\alpha}^{\mathsf{Tm}}\ f)\ (S_{\alpha}^{\mathsf{Tm}}\ a)
\tag*{(computation rule of $S_{\alpha}$)} \\
& \quad { }= (S_{\alpha}^{\mathsf{Tm}}\ f)\ (S_{\alpha}^{\mathsf{Tm}}\ a)
\tag*{(definition of $\mathsf{app}^{\bullet}$)} \\
& \quad { }= ({(S_{\alpha}^{\mathsf{Ty}}\ (\Pi\ A\ B))}_{\mathsf{ne}}\ f_{\mathsf{ne}})\ (S_{\alpha}^{\mathsf{Tm}}\ a)
\tag*{(induction hypothesis for $f_{\mathsf{ne}}$)} \\
& \quad { }= {(\Pi^{\bullet}\ (S_{\alpha}^{\mathsf{Ty}}\ A)\ (S_{\alpha}^{[\mathsf{Tm}]\mathsf{Ty}}\ B))}_{\mathsf{ne}}\ f_{\mathsf{ne}}\ (S_{\alpha}^{\mathsf{Tm}}\ a)
\tag*{(computation rule of $S_{\alpha}$)} \\
& \quad { }= {((S_{\alpha}^{[\mathsf{Tm}]\mathsf{Ty}}\ B)\ (S_{\alpha}^{\mathsf{Tm}}\ a))}_{\mathsf{ne}}\ {(\mathsf{app}^{\mathsf{ne}}\ f_{\mathsf{ne}}\ ({(S_{\alpha}^{\mathsf{Ty}}\ A)}_{\mathsf{nf}}\ (S_{\alpha}^{\mathsf{Tm}}\ a)))}
\tag*{(definition of $\Pi^{\bullet}$)} \\
& \quad { }= {(S_{\alpha}^{\mathsf{Ty}}\ (B \circledast a))}_{\mathsf{ne}}\ (\mathsf{app}^{\mathsf{ne}}\ f_{\mathsf{ne}}\ a_{\mathsf{nf}})
\tag*{(induction hypothesis for $a_{\mathsf{nf}}$)}
\end{alignat*}
\item[$\mathsf{lam}^{\mathsf{nf}}\ b_{\mathsf{nf}} : \mathsf{Nf}\ (\mathsf{lam} \circleddollar \{A\} \circledast \{B\} \circledast b)$]
\begin{alignat*}{1}
& {(S_{\alpha}^{\mathsf{Ty}}\ (\Pi\ A\ B))}_{\mathsf{nf}}\ (S_{\alpha}^{\mathsf{Tm}}\ (\mathsf{lam} \circleddollar b)) \\
& \quad { }= {(\Pi^{\bullet}\ (S_{\alpha}^{\mathsf{Ty}}\ A)\ (S_{\alpha}^{[\mathsf{Tm}]\mathsf{Ty}}\ B))}_{\mathsf{nf}}\ (\mathsf{lam}^{\bullet}\ (S_{\alpha}^{[\mathsf{Tm}]\mathsf{Tm}}\ b))
\tag*{(computation rule of $S_{\alpha}$)} \\
& \quad { }= {(\Pi^{\bullet}\ (S_{\alpha}^{\mathsf{Ty}}\ A)\ (S_{\alpha}^{[\mathsf{Tm}]\mathsf{Ty}}\ B))}_{\mathsf{nf}}\ (\lambda\ a^{\bullet} \mapsto (S_{\alpha}^{[\mathsf{Tm}]\mathsf{Tm}}\ b)\ a^{\bullet})
\tag*{(definition of $\mathsf{lam}^{\bullet}$)} \\
& \quad { }= \mathsf{lam}^{\mathsf{nf}}\ (\lambda\ a_{\mathsf{var}} \mapsto \mathsf{let}\ a^{\bullet} = {(S_{\alpha}^{\mathsf{Ty}}\ A)}_{\mathsf{ne}}\ (\mathsf{var}^{\mathsf{ne}}\ a_{\mathsf{var}})\ \mathsf{in}\ {(S_{\alpha}^{[\mathsf{Tm}]\mathsf{Ty}}\ B\ a^{\bullet})}_{\mathsf{nf}}\ (S_{\alpha}^{[\mathsf{Tm}]\mathsf{Tm}}\ b\ a^{\bullet}))
\tag*{(definition of $\Pi^{\bullet}$)} \\
& \quad { }= \mathsf{lam}^{\mathsf{nf}}\ (\lambda\ a_{\mathsf{var}} \mapsto \mathsf{let}\ a^{\bullet} = S_{\alpha}^{\mathsf{Tm}}\ (\mathsf{var}\ a_{\mathsf{var}})\ \mathsf{in}\ {(S_{\alpha}^{[\mathsf{Tm}]\mathsf{Ty}}\ B\ a^{\bullet})}_{\mathsf{nf}}\ (S_{\alpha}^{[\mathsf{Tm}]\mathsf{Tm}}\ b\ a^{\bullet}))
\tag*{(computation rule of $S_{\alpha}$)} \\
& \quad { }= \mathsf{lam}^{\mathsf{nf}}\ (\lambda\ a_{\mathsf{var}} \mapsto {(S_{\alpha}^{\mathsf{Ty}}\ (B\ \circledast \mathsf{var}\ a_{\mathsf{var}}))}_{\mathsf{nf}}\ (S_{\alpha}^{\mathsf{Tm}}\ (b\ \circledast \mathsf{var}\ a_{\mathsf{var}})))
\tag*{(computation rule of $S_{\alpha}$)} \\
& \quad { }= \mathsf{lam}^{\mathsf{nf}}\ (\lambda\ a_{\mathsf{var}} \mapsto b_{\mathsf{nf}}\ a_{\mathsf{var}})
\tag*{(induction hypothesis for $b_{\mathsf{nf}}$)} \\
& \quad { }= \mathsf{lam}^{\mathsf{nf}}\ b_{\mathsf{nf}}
\end{alignat*}
\item[$\Pi^{\mathsf{nfty}}\ A_{\mathsf{nfty}}\ B_{\mathsf{nfty}} : \mathsf{NfTy}\ (\Pi \circleddollar A \circledast B)$]
\begin{alignat*}{1}
& {(S_{\alpha}^{\mathsf{Ty}}\ (\Pi \circleddollar A \circledast B))}_{\mathsf{nfty}} \\
& \quad { }= {(\Pi^{\bullet}\ (S_{\alpha}^{\mathsf{Ty}}\ A)\ (S_{\alpha}^{[\mathsf{Tm}]\mathsf{Ty}}\ B))}_{\mathsf{nfty}}
\tag*{(computation rule of $S_{\alpha}$)} \\
& \quad { }= \Pi^{\mathsf{nfty}}\ {(S_{\alpha}^{\mathsf{Ty}}\ A)}_{\mathsf{nfty}}\ (\lambda a_{\mathsf{var}} \mapsto \mathsf{let}\ a^{\bullet} = {(S_{\alpha}^{\mathsf{Ty}}\ A)}_{\mathsf{ne}}\ (\mathsf{var}^{\mathsf{ne}}\ a_{\mathsf{var}})\ \mathsf{in}\ {(S_{\alpha}^{[\mathsf{Tm}]\mathsf{Ty}}\ B\ a^{\bullet})}_{\mathsf{nfty}})
\tag*{(definition of $\Pi^{\bullet}$)} \\
& \quad { }= \Pi^{\mathsf{nfty}}\ {(S_{\alpha}^{\mathsf{Ty}}\ A)}_{\mathsf{nfty}}\ (\lambda a_{\mathsf{var}} \mapsto \mathsf{let}\ a^{\bullet} = S_{\alpha}^{\mathsf{Tm}}\ (\mathsf{var}\ a_{\mathsf{var}})\ \mathsf{in}\ {(S_{\alpha}^{[\mathsf{Tm}]\mathsf{Ty}}\ B\ a^{\bullet})}_{\mathsf{nfty}})
\tag*{(computation rule of $S_{\alpha}$)} \\
& \quad { }= \Pi^{\mathsf{nfty}}\ {(S_{\alpha}^{\mathsf{Ty}}\ A)}_{\mathsf{nfty}}\ (\lambda a_{\mathsf{var}} \mapsto {(S_{\alpha}^{\mathsf{Ty}}\ (B\ \circledast \mathsf{var}\ a_{\mathsf{var}}))}_{\mathsf{nfty}})
\tag*{(computation rule of $S_{\alpha}$)} \\
& \quad { }= \Pi^{\mathsf{nfty}}\ {(S_{\alpha}^{\mathsf{Ty}}\ A)}_{\mathsf{nfty}}\ (\lambda a_{\mathsf{var}} \mapsto B_{\mathsf{nfty}}\ a_{\mathsf{var}})
\tag*{(induction hypothesis for $B_{\mathsf{nfty}}$)} \\
& \quad { }= \Pi^{\mathsf{nfty}}\ A_{\mathsf{nfty}}\ B_{\mathsf{nfty}}
\tag*{(induction hypothesis for $A_{\mathsf{nfty}}$)} \\
\end{alignat*}
\end{description} \end{proof}
\end{document}
We have positive integers $a,$ $b,$ and $c$ such that $a > b > c.$ When $a,$ $b,$ and $c$ are divided by $19$, the remainders are $4,$ $2,$ and $18,$ respectively.
When the number $2a + b - c$ is divided by $19$, what is the remainder?
First of all, since we know that $a > c,$ we do not have to worry about $2a + b - c$ being negative. In any case, we have: \begin{align*}
a &\equiv 4\pmod{19}, \\
b &\equiv 2\pmod{19}, \\
c &\equiv 18\pmod{19}.
\end{align*}Adding as needed, we have $2a + b - c = a + a + b - c \equiv 4 + 4 + 2 - 18 \equiv -8 \equiv 11 \pmod{19}.$ Therefore, our answer is $\boxed{11}.$
Learning Epidemic Models for COVID-19
UCLA Statistical Machine Learning Lab • April 6, 2020 • Paper Link
We propose an improved SEIR model for predicting the dynamics of the cumulative confirmed cases and deaths of COVID-19. Before introducing this model, we start with some basic epidemic models.
The SIR Model
SIR1 is an epidemic model that describes how the number of infections changes over time. As illustrated in Figure 1, the SIR model characterizes the dynamic interplay among the susceptible individuals (S), the infectious individuals (I) and the recovered/deceased individuals (R) in a given place. In the SIR model, susceptible individuals may become infectious over time, at a speed that depends on the spread rate of the virus, often called the contact rate. Recovered individuals are assumed to be immune to the virus and thus cannot become susceptible again.
Figure 1. Illustration of the SIR model.
To characterize this dynamics, let's define parameters at time $t$ as follows:
$S_t$: the number of susceptible individuals
$I_t$: the number of infectious individuals
$R_t$: the number of recovered/deceased/immune individuals
To simplify the analysis, we assume for now that the total population in the area is fixed at $N$. The evolution of these quantities over time is governed by the following equations: \begin{align} \frac{d S_t}{d t}&= -\frac{\beta I_t S_t}{N}\\ \frac{d I_t}{d t}&= \frac{\beta I_t S_t}{N} - \gamma I_t\\ \frac{d R_t}{d t}&=\gamma I_t \end{align} where $\beta$ is the contact rate between the Susceptible and Infectious groups, and $\gamma$ is the transition rate between the Infectious and Recovered groups. These ordinary differential equations indicate that in every time unit the number of susceptible individuals decreases by the quantity $\beta I_t S_t/N$, and these individuals transition into the infectious group. Apart from this inflow from the susceptible group, the size of the infectious group also decreases at rate $\gamma$. In the COVID-19 case, the contact rate $\beta$ could be scaled by $1/S_t$, since the population is not fully mixed and people are quarantined at home.
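To make the dynamics concrete, here is a minimal simulation sketch of the SIR equations above. This snippet is illustrative only: the function names, population size, rates, and initial values are placeholders rather than fitted quantities.

```python
import numpy as np
from scipy.integrate import odeint

def sir_rhs(y, t, beta, gamma, N):
    """Right-hand side of the SIR ODEs: returns (dS/dt, dI/dt, dR/dt)."""
    S, I, R = y
    dS = -beta * I * S / N
    dI = beta * I * S / N - gamma * I
    dR = gamma * I
    return [dS, dI, dR]

# Placeholder values: N, beta, gamma and the initial state are illustrative only.
N, beta, gamma = 1_000_000, 0.3, 0.1
y0 = [N - 10, 10, 0]               # S_0, I_0, R_0
t = np.linspace(0, 180, 181)       # days
S, I, R = odeint(sir_rhs, y0, t, args=(beta, gamma, N)).T
print(f"peak infections: {I.max():.0f} on day {t[I.argmax()]:.0f}")
```

Any standard ODE integrator (or even explicit Euler stepping) works here; the point is simply that the whole trajectory $(S_t, I_t, R_t)$ is determined by $\beta$, $\gamma$, and the initial state.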
Incubation Period: The SEIR Model
For many diseases there is an incubation period during which individuals who have been exposed to the virus may not be as contagious as the infectious individuals. Therefore, it is important to model these cases separately as the "Exposed" group. As shown in the following figure, this model is usually referred to as SEIR2.
Figure 2. Illustration of the SEIR model.
The dynamics of SEIR introduce a new compartment $E_t$, which models the number of individuals that have been exposed to the coronavirus but have not developed obvious symptoms. Among all the exposed cases, only a fraction $\sigma$ develop observable symptoms in a time unit.
\begin{align} \frac{d S_t}{d t}&= -\frac{\beta I_t S_t}{N}\\ \frac{d E_t}{d t}&= \frac{\beta I_t S_t}{N} -\sigma E_t\\ \frac{d I_t}{d t}&= \sigma E_t - \gamma I_t\\ \frac{d R_t}{d t}&=\gamma I_t \end{align}
Compared with the SIR model, SEIR has more elaborate model parameters. The parameters $\sigma,\beta$ and $\gamma$ can be learned from the reported data.
Unreported Recovery: The SuEIR Model
It is observed that COVID-19 has an incubation period ranging from 2 to 14 days3. However, during this period, individuals who have been exposed to the virus can also infect the susceptible group. In practice, the reported numbers of cases (both confirmed and recovered) are not equal to the real numbers, since many infectious cases are never tested and therefore never pass to the next compartment. Therefore, following the idea of SEIR, we propose a new epidemic model that takes the untested/unreported cases into consideration, as illustrated in Figure 3.
Figure 3. Illustration of the SuEIR model.
In particular, the Exposed compartment in our model contains the cases that have already been infected but have not been tested. Therefore, they also have the capability to infect the susceptible individuals. Moreover, some of these cases will receive a test and pass to the Infectious compartment (and be reported to the public), while the rest of them will recover/die without appearing in the publicly reported cases. Therefore, we introduce a new parameter $\mu<1$ in the evolution dynamics of $I_t$ to control the fraction of the exposed cases that are confirmed and reported to the public.
\begin{align*} \frac{d S_t}{d t}&= -\frac{\beta (I_t+E_t) S_t}{N}\\ \frac{d E_t}{d t}&= \frac{\beta (I_t+E_t) S_t}{N} -\sigma E_t\\ \frac{d I_t}{d t}&= \mu \sigma E_t - \gamma I_t\\ \frac{d R_t}{d t}&=\gamma I_t \end{align*}
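As a minimal sketch (not from the original paper), the SuEIR right-hand side differs from SEIR in exactly the two places discussed above: the force of infection also includes $E_t$, and only a fraction $\mu$ of $\sigma E_t$ enters the reported compartment $I_t$. The function name and coding style are our own:

```python
def sueir_rhs(y, t, beta, sigma, gamma, mu, N):
    """SuEIR ODEs: exposed individuals are also infectious, and only a
    fraction mu of them is ever tested and reported."""
    S, E, I, R = y
    force = beta * (I + E) * S / N   # E_t also infects the susceptible group
    dS = -force
    dE = force - sigma * E
    dI = mu * sigma * E - gamma * I  # only the reported fraction enters I_t
    dR = gamma * I
    return [dS, dE, dI, dR]
```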
Training the SuEIR Model Using Machine Learning Methods
In order to find the optimal parameters of the SuEIR model, defined by $\boldsymbol{\theta} = (\beta, \sigma, \gamma, \mu)$, we apply gradient-based optimizers to minimize the following loss function \begin{align*} L(\boldsymbol{\theta}; \mathbf {I }, \mathbf {R}) = \frac{1}{T}\sum_{t=1}^T (\hat I_t - I_t)^2 + \lambda (\hat R_t - R_t)^2, \end{align*} where $\hat I_t$ and $\hat R_t$ denote the reported numbers of confirmed and recovered cases at time $t$ (or date $t$), $I_t$ and $R_t$ denote the numbers of confirmed and recovered cases computed by our model, and $\lambda>0$ is a tuning parameter that balances the prediction error on the confirmed cases against that on the recovered cases. In particular, since the model is defined by ordinary differential equations (ODEs), in our experiment we apply numerical ODE solvers to compute $I_t$ and $R_t$ given the model parameters $\boldsymbol{\theta}$ and the initial quantities $S_0$, $E_0$, $I_0$, and $R_0$.
In terms of initialization, we can directly set $I_0 = \hat I_0$ and $R_0 = \hat R_0$. Additionally, one would typically set $S_0+E_0+I_0+R_0 = N$, where $N$ is the total population of the region (which can be either a country or a state/province). However, since most of the states in the US have already issued safer-at-home orders, the effective total population involved in the SuEIR dynamics, denoted by $N_0$, should be less than $N$. Moreover, the initialization of $E$, i.e., $E_0$, is a bit tricky, since we do not know the number of infected cases before they are tested. However, it is also not reasonable to set $E_0=0$, since a large number of infected cases typically already exist by the time governments begin testing and reporting. In our experiments, we train multiple models with different choices of $N_0$ and $E_0$ and select the ones with reasonable training loss.
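The training procedure described above can be sketched as follows, reusing the `sueir_rhs` function from the previous snippet. This is only an illustration of the loss-minimization idea: the optimizer, the bounds, the initial guess, and the synthetic data standing in for the reported counts are all placeholder choices, not the authors' actual setup.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import minimize

def sueir_loss(theta, y0, t, I_hat, R_hat, N, lam=1.0):
    """Squared error between model trajectories and reported counts."""
    beta, sigma, gamma, mu = theta
    traj = odeint(sueir_rhs, y0, t, args=(beta, sigma, gamma, mu, N))
    I_pred, R_pred = traj[:, 2], traj[:, 3]
    return np.mean((I_pred - I_hat) ** 2 + lam * (R_pred - R_hat) ** 2)

# Synthetic placeholder data standing in for the reported counts.
N0, E0 = 100_000, 200.0
t = np.arange(60.0)
true_theta = (0.4, 0.25, 0.08, 0.6)
y0 = [N0 - E0 - 55.0, E0, 50.0, 5.0]            # S0, E0, I0, R0
I_hat, R_hat = odeint(sueir_rhs, y0, t, args=(*true_theta, N0))[:, 2:].T

res = minimize(sueir_loss, x0=[0.3, 0.2, 0.1, 0.5],
               args=(y0, t, I_hat, R_hat, N0),
               bounds=[(0, 2), (0, 1), (0, 1), (0, 1)], method="L-BFGS-B")
print("fitted (beta, sigma, gamma, mu):", res.x)
```

In practice one would additionally grid over $N_0$ and $E_0$ as described above and keep the fits with the lowest training loss.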
Prediction of Confirmed Cases in California
Figure 4 shows the prediction of our SuEIR model for the confirmed cases in California. The baseline models are SIR, ARIMA4, and the Gaussian error fit5 (i.e., a fit by the Gaussian error function). All models are trained on the actual numbers up to 04/03/2020. The results show that the increase of confirmed cases will slow down around mid-May, and the projected number of confirmed cases in California is around 51,000.
Figure 4. Prediction of cumulative confirmed cases by SuEIR model.
Similarly, we plot the predicted cumulative death cases in California in Figure 5. The results show that the increase of deaths will slow down around the beginning of June, and the projected number of deaths in California is around 2,100.
Figure 5. Prediction of cumulative death cases by SuEIR model.
Wikipedia contributors. Compartmental models in epidemiology: The SIR model. Wikipedia, The Free Encyclopedia, 11 Apr. 2020. Wikipedia https://en.wikipedia.org/wiki/Compartmental_models_in_epidemiology#The_SIR_model
Wikipedia contributors. Compartmental models in epidemiology: The SEIR model. Wikipedia, The Free Encyclopedia, 11 Apr. 2020. Wikipedia https://en.wikipedia.org/wiki/Compartmental_models_in_epidemiology#The_SEIR_model
Stephen A. Lauer et al. "The Incubation Period of Coronavirus Disease 2019 (COVID-19) From Publicly Reported Confirmed Cases: Estimation and Application". Ann Intern Med. 2020. Annals of Internal Medicine DOI: 10.7326/M20-0504
Wikipedia contributors. Autoregressive integrated moving average. Wikipedia, The Free Encyclopedia, 11 Apr. 2020. Wikipedia https://en.wikipedia.org/wiki/Autoregressive_integrated_moving_average
IHME COVID-19 health service utilization forecasting team, Christopher JL Murray. "Forecasting COVID-19 impact on hospital bed-days, ICU-days, ventilator-days and deaths by US state in the next 4 months". Preprint. medRxiv DOI: 10.1101/2020.03.27.20043752
Presented by the Statistical Machine Learning Lab at UCLA.
You are given $n$ cubes in a certain order, and your task is to build towers using them. Whenever two cubes are one on top of the other, the upper cube must be smaller than the lower cube.
You must process the cubes in the given order. You can always either place the cube on top of an existing tower, or begin a new tower. What is the minimum possible number of towers?
The first input line contains an integer $n$: the number of cubes.
The next line contains $n$ integers $k_1,k_2,\ldots,k_n$: the sizes of the cubes.
Print one integer: the minimum number of towers.
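A standard approach (a solution sketch, not part of the original statement): process the cubes in the given order and place each one on the tower whose current top is the smallest cube strictly larger than it; if no such tower exists, start a new one. Keeping the tower tops in a sorted sequence gives about O(n log n) comparisons (a balanced multiset would make the updates logarithmic as well):

```python
import bisect
import sys

def min_towers(sizes):
    tops = []  # current top cube of each tower, kept in sorted order
    for k in sizes:
        i = bisect.bisect_right(tops, k)   # first top strictly larger than k
        if i == len(tops):
            bisect.insort(tops, k)         # no suitable tower: start a new one
        else:
            tops[i] = k                    # place k there; the list stays sorted
    return len(tops)

def main():
    data = sys.stdin.buffer.read().split()
    n = int(data[0])
    sizes = list(map(int, data[1:1 + n]))
    print(min_towers(sizes))

if __name__ == "__main__":
    main()
```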
Uriel Rothblum
Uriel George "Uri" Rothblum (Tel Aviv, March 16, 1947 – Haifa, March 26, 2012) was an Israeli mathematician and operations researcher. From 1984 until 2012 he held the Alexander Goldberg Chair in Management Science at the Technion – Israel Institute of Technology in Haifa, Israel.[1][2]
Uriel Rothblum
Born: March 16, 1947
Tel Aviv, Israel
Died: March 26, 2012 (aged 65)
Haifa, Israel
Citizenship: Israel, United States
Alma mater
• Tel Aviv University (B.S. and M.S.)
• Stanford University (Ph.D.)
Scientific career
Fields
• Mathematics
• Operations research
• Systems analysis
Institutions
• Technion
• New York University
• Yale
Rothblum was born in Tel Aviv to a family of Jewish immigrants from Austria.[3] He went to Tel Aviv University, where Robert Aumann became his mentor; he earned a bachelor's degree there in 1969 and a master's in 1971. He completed his doctorate in 1974 from Stanford University, in operations research, under the supervision of Arthur F. Veinott. After postdoctoral research at New York University, he joined the Yale University faculty in 1975, and moved to the Technion in 1984.[2]
Rothblum became president of the Israeli Operational Research Society (ORSIS) for 2006–2008, and editor-in-chief of Mathematics of Operations Research from 2010 until his death.[2] He was elected to the 2003 class of Fellows of the Institute for Operations Research and the Management Sciences.[4]
References
1. Loewy, Raphael (2012), "Uriel G. Rothblum (1947–2012)", Linear Algebra and Its Applications, 437 (12): 2997–3009, doi:10.1016/j.laa.2012.07.010, MR 2966614.
2. Golany, Boaz (2012), "Uriel G. Rothblum, March 16, 1947 – March 26, 2012", OR/MS Today.
3. "Uriel Rothblum - Biography".
4. Fellows: Alphabetical List, Institute for Operations Research and the Management Sciences, retrieved 2019-10-09
\begin{document}
\newtheorem{theorem}{Theorem} \newtheorem{acknowledgement}[theorem]{Acknowledgement} \newtheorem{algorithm}[theorem]{Algorithm} \newtheorem{axiom}[theorem]{Axiom} \newtheorem{case}[theorem]{Case} \newtheorem{claim}[theorem]{Claim} \newtheorem{conclusion}[theorem]{Conclusion} \newtheorem{condition}[theorem]{Condition} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{criterion}[theorem]{Criterion} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \newtheorem{exercise}[theorem]{Exercise} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{notation}[theorem]{Notation} \newtheorem{problem}[theorem]{Problem} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{remark}[theorem]{Remark} \newtheorem{solution}[theorem]{Solution} \newtheorem{summary}[theorem]{Summary}
\newcommand\R{\mathbb{R}} \newcommand\g{\frak{g}} \newcommand\K{\mathbb{K}} \newcommand\C{\mathbb{C}} \newcommand\Z{\mathbb{Z}} \newcommand\z{ \mathbb{Z}_2^2} \newcommand\p{\mathcal{P}} \newcommand\f{\frak{t}} \newcommand\h{\frak{h}} \newcommand\m{\frak{m}} \setlength{\oddsidemargin}{0in} \setlength{\evensidemargin}{.25in} \setlength{\textwidth}{6.25in}
\pagestyle{myheadings}
\begin{abstract} Flag manifolds are in general not symmetric spaces. But they are provided with a structure of $\mathbb{Z}_2^k$-symmetric space. We describe the Riemannian metrics adapted to this structure and some properties of reducibility. We detail for the flag manifold $SO(5)/SO(2)\times SO(2) \times SO(1)$ what are the conditions for a metric adapted to the $\mathbb{Z}_2^2$-symmetric structure to be naturally reductive. \end{abstract}
\maketitle
\section{Introduction}
In this work we call flag manifold any homogeneous space $$G/H=\frac{SO(n)}{SO(p_1) \times \cdots \times SO(p_k) }$$ with $p_1 + \cdots +p_k=n.$ We suppose also that $p_1\geq p_2 \geq \cdots\geq p_k.$ When $k=2$ and $p_2=1,$ this space is isomorphic to the sphere $S^{n-1}$ and it is a symmetric space. When $k=2$ and $p_2 \neq 1$, the homogeneous space $G/H$ is isomorphic to the Grassmannian manifold and it is also a symmetric space. But if $k>2$ the homogeneous space $G/H$ is reductive but not symmetric. In \cite{Ba-Go} and \cite{Go-Re-Santiago} we have shown that we can define on $G/H$ a structure of $\mathbb{Z}_2^k$-symmetric space, that is, the Lie algebra $\frak{g}$ of $G$ admits a $\mathbb{Z}_2^k$-grading. A Riemannian metric on $G/H$ is called associated with the $\mathbb{Z}_2^k$-symmetric structure if the natural symmetries defining the $\mathbb{Z}_2^k$-symmetric structure are isometries. The Riemannian geometry is more complicated than the Riemannian geometry of symmetric spaces (or $\mathbb{Z}_2$-symmetric spaces); in particular, it is not always naturally reductive. In this paper we investigate $\mathbb{Z}_2^k$-symmetric Riemannian metrics on flag manifolds and develop all the computations when $G/H=SO(5)/SO(2)\times SO(2) \times SO(1).$
\section{Riemannian $\mathbb{Z}_2^k$-symmetric spaces} This notion was introduced in \cite{Lutz}. Let $G/H$ be a homogeneous space with a connected Lie group $G$. We denote by $\frak{g}$ and $\frak{h}$ respectively the Lie algebras of $G$ and $H$.
\begin{definition} The homogeneous space $G/H$ is $\mathbb{Z}_2^k$-symmetric if the Lie algebra $\frak{g}$ admits a $\mathbb{Z}_2^k$-grading, that is, $$\frak{g}=\oplus_{\gamma \in \mathbb{Z}_2^k}\frak{g}_{\gamma}, \quad [\frak{g}_{\gamma_1},\frak{g}_{\gamma_2}]\subset \frak{g}_{\gamma_1\gamma_2},$$ with $\frak{g}_{e}=\frak{h}$ where $e$ is the identity of $\mathbb{Z}_2^k$. \end{definition}
If $\frak{m}=\oplus_{\gamma \in \mathbb{Z}_2^k, \gamma \neq e} \frak{g}_{\gamma}$ then $\frak{g}=\frak{g}_{e} \oplus \frak{m}$ and the $\mathbb{Z}_2^k$-grading implies $[\frak{g}_e, \frak{m}]\subset \frak{m}.$ But if $k \geq 2$, we do not have $[\frak{m},\frak{m}]\subset \frak{g}_e$ in general. Thus the decomposition $\frak{g}= \frak{g}_e \oplus \frak{m}$ is reductive but not symmetric when $k\geq 2$.
\noindent {\bf Consequence.} A $\mathbb{Z}_2^k$-symmetric homogeneous space $G/H$ is reductive.
\begin{example} From \cite{Ba-Go}, \cite{Go-Re-Santiago} and \cite{Kollross}, it is possible to give a classification of the $\mathbb{Z}_2^2$-gradings of complex simple Lie algebras. In the following list we give the pairs $(\frak{g}, \frak{g}_e=\frak{h})$ which give the (local) classification of $\mathbb{Z}_2^2$-symmetric structures when $G$ is a simple complex Lie group or a simple compact real Lie group. $$
\begin{array}{l|l} \frak{g} & \frak{g}_e=\frak{h} \\ \hline \\ so(k_1+k_2+k_3+k_4), & \oplus_{i=1}^4 so(k_i) \\ k_1 \geq k_2 \geq k_3 \geq k_4, k_3 \neq 0 \\ sp(k_1+k_2+k_3+k_4), & \oplus_{i=1}^4 sp(k_i) \\ k_1 \geq k_2 \geq k_3 \geq k_4, k_3 \neq 0 \\ so(2m) & gl(m) \\ so(2k_1+2k_2) & gl(k_1)\oplus gl(k_2) \\ sp(2m) & gl(m) \\ sp(2k_1+2k_2) & gl(k_1)\oplus gl(k_2) \\ so(2m) & so(m)\\ so(4m) & sp(2m) \\ sp(4m) & sp(2m) \\ sp(2m) & so(m) \\ sl(2n) & sl(n) \\ sl(k_1+k_2) & \oplus_{i=1}^2 sl(k_i) \oplus \mathbb{C} \\ sl(k_1+k_2+k_3) & \oplus_{i=1}^3 sl(k_i)\oplus \mathbb{C}^2 \\ sl(k_1+k_2+k_3+k_4) & \oplus_{i=1}^4 sl(k_i) \oplus \mathbb{C}^3 \\ so(8) & gl(3) \oplus gl(1) \end{array} \quad
\begin{array}{l|l} \frak{g} & \frak{g}_e=\frak{h} \\ \hline \\ E_6 & so(6)\oplus \mathbb{C}\\
& sp(2)\oplus sp(2) \\
& sp(3)\oplus sp(1) \\
& su(3)\oplus su(3)\oplus \mathbb{C}^2 \\
& su(4)\oplus sp(1)\oplus sp(1)\oplus \mathbb{C} \\
& su(5) \oplus \mathbb{C}^2 \\
& so(8) \oplus \mathbb{C}^2 \\
& so(9) \\
\hline \\ E_7 & so(8) \\
& su(4)\oplus su(4)\oplus \mathbb{C} \\
& sp(4) \\
& su(6)\oplus sp(1)\oplus \mathbb{C} \\
& so(8)\oplus so(4)\oplus sp(1) \\
& u(6) \oplus \mathbb{C} \\
& so(10) \oplus \mathbb{C}^2 \\
& F_4
\end{array}
$$
$$
\begin{array}{l|l} \frak{g} & \frak{g}_e=\frak{h} \\ \hline \\ E_8 & so(8) \oplus so(8) \\
& su(8)\oplus \mathbb{C} \\
& so(12)\oplus sp(1)\oplus sp(1) \\
& E_6 \oplus \mathbb{C}^2
\end{array}
\quad
\begin{array}{l|l} \frak{g} & \frak{g}_e=\frak{h} \\ \hline \\ F_4 & u(3) \oplus \mathbb{C} \\
& sp(2)\oplus sp(1)\oplus sp(1) \\
& so(8) \\ G_2 & \mathbb{C}^2 \end{array} $$ \end{example}
If $M=G/H$ is a $\mathbb{Z}_2^k$-symmetric space, the grading $$\frak{m}=\oplus_{\gamma \in \mathbb{Z}_2^k, \gamma \neq e} \frak{g}_{\gamma}$$ is associated with a spectral decomposition of $\mathfrak{g}$ defined by a family $\left\{ \sigma_\gamma, \gamma \in \mathbb{Z}_2^k \right\}$ of automorphisms of $\frak{g}$ satisfying $$\left\{ \begin{array}{l} \sigma_{\gamma}^2=Id, \\ \sigma_{\gamma_1} \circ \sigma_{\gamma_2}=\sigma_{\gamma_2} \circ \sigma_{\gamma_1}, \end{array} \right. $$ for any $\gamma,\gamma_1,\gamma_2 \in \mathbb{Z}_2^k.$ Any $\sigma_{\gamma} \in Aut(\frak{g})$ defines an automorphism $s_\gamma$ of $G$, and $H$, if it is connected, corresponds to the identity component of the group $\left\{A \in G \,/\, \forall \gamma \in \mathbb{Z}_2^k, s_\gamma(A)=A \right\}.$ The family $\left\{ s_\gamma , \gamma \in \mathbb{Z}_2^k \right\}$ is a subgroup of $Aut(G)$
isomorphic to $\mathbb{Z}_2^k.$ For any $x \in G/H$, it determines a subgroup $\left\{ s_{\gamma,x} , \gamma \in \mathbb{Z}_2^k \right\}$ of $Diff(M)$ isomorphic to $\mathbb{Z}_2^k.$ The diffeomorphisms $s_{\gamma,x}$ are called the symmetries of the $\mathbb{Z}_2^k$-symmetric space $G/H.$ By extension, we will also call symmetries, the automorphisms $s_\gamma$ of $G.$
\noindent{\bf Remark.} A $\Gamma$-symmetric space, when $\Gamma$ is a cyclic group, is usually called a generalized symmetric space. In this case the grading of the Lie algebra is defined over the complex field and corresponds to the roots of unity. For a general presentation, see \cite{Kowalski}. \begin{definition} A Riemannian metric $g$ on the (reductive) $\mathbb{Z}_2^k$-symmetric space $G/H$ is called adapted to the $\mathbb{Z}_2^k$-symmetric structure if the symmetries $s_{\gamma,x}$ are isometries for any $\gamma$ in $ \mathbb{Z}_2^k$ and $x$ in $G/H.$
In this case we will say that $G/H$ is a Riemannian $\mathbb{Z}_2^k$-symmetric space. \end{definition}
As $G/H$ is a reductive homogeneous space, we consider only $G$-invariant Riemannian metrics on $G/H$; they correspond to positive-definite $ad(H)$-invariant symmetric bilinear forms $B$ on $\frak{m}$, via the correspondence $$B(X,Y)=g(X,Y)_0$$ for $X,Y \in \frak{m}$. Since in this paper we consider $H$ connected, the invariance of $B$ reads $$B([Z,X],Y)+B(X,[Z,Y])=0,$$ for $X,Y$ in $\frak{m}$ and $Z \in \frak{h}.$
\begin{proposition} Let $G/H$ be a $\mathbb{Z}_2^k$-symmetric structure with $G$ and $H$ connected. Any Riemannian metric $g$ adapted to the $\mathbb{Z}_2^k$-symmetric structure is in one-to-one correspondence with the $ad(H)$-invariant positive-definite bilinear form $B$ on $\frak{m}$ such that $B(\frak{g}_\gamma, \frak{g}_{\gamma '})=0$ for $\gamma \neq \gamma '$ in $\mathbb{Z}_2^k$ where $\frak{g}=\oplus_{\gamma \in \mathbb{Z}_2^k}\frak{g}_{\gamma}$ is the $\mathbb{Z}_2^k$-grading corresponding to the $\mathbb{Z}_2^k$-symmetric structure of $G/H$. Let $U(X,Y)$ be the symmetric bilinear mapping from $\frak{m} \times \frak{m}$ to $\frak{m}$ defined by $$2B(U(X,Y),Z)=B(X,[Z,Y]_\frak{m})+B([Z,X]_\frak{m},Y),$$ for all $X,Y,Z \in \frak{m},$ where $[\ , ]_\frak{m}$ denotes the projection on $\frak{m}$ of the bracket of $\frak{g}.$ \end{proposition} Thus the Riemannian connection for $g$ is given by $$\nabla_X Y=U(X,Y)+\frac{1}{2}[X,Y]_\frak{m}$$ and the curvature tensor satisfies $$R(X,Y)(Z)=\nabla_X\nabla_Y Z-\nabla _Y \nabla_X Z- \nabla_{[X,Y]_{\m}}Z-[[X,Y]_\frak{\h},Z],$$ for $X,Y,Z \in \m$ and where the term $\left[\left[X,Y\right]_\frak{\h},Z\right]$ corresponds to the linear isotropy representation of $H$ into $G/H.$ \section{The Riemannian $\Z^2_2$-symmetric space \\ $SO(5)/SO(2)\times SO(2) \times SO(1)$} Any Riemannian symmetric space is naturally reductive. This is not usually the case for Riemannian $\mathbb{Z}_2^k$-symmetric spaces as soon as $k \geq 2.$ In this section we describe the Riemannian and Ricci tensors for any $\Z^2_2$-symmetric metric on the flag manifold $SO(5)/SO(2)\times SO(2) \times SO(1)$. In particular we study some properties of these metrics when they are not naturally reductive. A $\Z^2_2$-grading of the Lie algebra $so(5)$ is given by the decomposition $$\g=\g_e \oplus \g_a \oplus \g_b \oplus \g_c,$$ corresponding to the multiplication of $\Z^2_2$ : $a^2=b^2= c^2=e, \ ab=c, ac=b, bc=a$, $e$ being the identity. To describe the components of the grading, we consider $$so(5)=\left\{ \left( \begin{array}{ccccc} 0 & x_1 & a_1 & a_2 & b_1 \\ -x_1 & 0 & a_3 & a_4 & b_2 \\ -a_1 & -a_3 & 0 & x_2 & c_1 \\ -a_2 & -a_4 & -x_2 & 0 & c_2 \\ -b_1 & -b_2 & -c_1 & -c_2 & 0 \end{array} \right),x_i,a_i,b_i,c_i \in \R\right\}. $$ Thus $$\begin{array}{l} \g_e=\left\{ X \in so(5)\, / \, a_i=b_i=c_i=0 \right\}, \\ \g_a=\left\{ X \in so(5)\, / \, x_i=b_i=c_i=0 \right\}, \\ \g_b=\left\{ X \in so(5)\, / \, x_i=a_i=c_i=0 \right\}, \\ \g_c=\left\{ X \in so(5)\, / \, x_i=a_i=b_i=0 \right\}. \end{array}$$ We have $\g_e=so(2) \oplus so(2)\oplus so(1)$ and this grading induces the $\z$-symmetric structure on $SO(5)/SO(2)\times SO(2) \times SO(1)$. Moreover, from \cite{Ba-Go}, this grading is unique up to an equivalence of $\z$-gradings. We denote by $\left\{ \left\{ X_1,X_2 \right\}, \left\{ A_1,A_2,A_3,A_4 \right\},\right.$ $\left. \left\{ B_1,B_2 \right\}, \left\{ C_1,C_2 \right\} \right\}$ the basis of $so(5)$ where each capital letter corresponds to the matrix of $so(5)$ with the corresponding lowercase coefficient equal to $1$ and all other coefficients equal to zero. This basis is adapted to the grading. Let us denote by $\left\{ \omega_1, \omega_2, \alpha_1, \alpha_2, \alpha_3, \alpha_4, \beta_1, \beta_2, \gamma_1, \gamma_2 \right\}$ the dual basis.
\begin{theorem} Any $\Z^2_2$-symmetric Riemannian metric $g$ on $SO(5)/SO(2)\times SO(2) \times SO(1)$ is given by an $ad(H)$-invariant symmetric bilinear form $B$ on $\m=\g_a \oplus \g_b \oplus \g_c$ $$B=t^2(\alpha_1^2+ \alpha_2^2+\alpha_3^2+\alpha_4^2)+u(\alpha_1 \alpha_4-\alpha_2 \alpha_3 )+ v^2(\beta_1^2 +\beta_2^2)+w^2( \gamma_1^2+ \gamma_2^2)$$ with $tvw \ne 0$ and $u \in \left]-4t^2, 4t^2\right[$. \end{theorem} \noindent {\it Proof. } It is explained in detail in \cite{Go-Re-Santiago}.
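Although the complete proof is given in the reference above, let us sketch the part of the argument which forces $B$ to vanish on pairs of distinct components of the grading. Each symmetry $\sigma_\gamma$ acts on a component $\frak{g}_{\gamma'}$ by a sign $\varepsilon_{\gamma'}(\gamma)=\pm 1$, and, the grading being a spectral decomposition, two distinct components are separated by at least one symmetry, that is, there exists $\gamma$ with $\varepsilon_{\gamma_1}(\gamma)\neq \varepsilon_{\gamma_2}(\gamma)$ whenever $\gamma_1 \neq \gamma_2$. If the metric is adapted, then $B(\sigma_\gamma X,\sigma_\gamma Y)=B(X,Y)$ for every $\gamma$, so for $X \in \frak{g}_{\gamma_1}$ and $Y \in \frak{g}_{\gamma_2}$ with $\gamma_1 \neq \gamma_2$ we get $$B(X,Y)=B(\sigma_\gamma X,\sigma_\gamma Y)=\varepsilon_{\gamma_1}(\gamma)\,\varepsilon_{\gamma_2}(\gamma)\,B(X,Y)=-B(X,Y),$$ hence $B(X,Y)=0$. The $ad(\frak{h})$-invariance then determines the form of $B$ on each component $\g_a$, $\g_b$, $\g_c$ and yields the coefficients $t,u,v,w$ above.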
The Riemannian connection is given by $$\bigtriangledown _X (Y)= \frac{1}{2}[X,Y]_{\m} +U(X,Y),$$ for any $X,Y \in \m,$ where $U$ is the symmetric bilinear mapping from $\m \times \m$ into $\m$ defined by $$2B(U(X,Y),Z)=B(X,[Z,Y]_{\m})+B([Z,X]_{\m},Y),$$ for any $X,Y,Z \in \m.$ In order to compute $U$, we first rewrite the bilinear form $B$ as $$B=\sum_{i=1}^4 \widetilde{\alpha}_i^2+\sum_{i=1}^2 \widetilde{\beta}_i^2+\sum_{i=1}^2 \widetilde{\gamma}_i^2$$ with $$ \begin{array}{llll} \widetilde{\alpha}_1=t\alpha_1+\frac{u}{2t}\alpha_4, & \widetilde{\alpha}_2=t\alpha_2-\frac{u}{2t}\alpha_3,& \widetilde{\alpha}_3=K\alpha_3, & \widetilde{\alpha}_4=K\alpha_4, \\ \widetilde{\beta}_1=v\beta_1, &\widetilde{\beta}_2=v\beta_2,&\widetilde{\gamma}_1=w\gamma_1,&\widetilde{\gamma}_2=w\gamma_2. \end{array} $$ where $K=\sqrt{t^2-\frac{u^2}{4t^2}}.$ If $\{\widetilde{A_j},\widetilde{B_i},\widetilde{C_i} \}$ is the dual basis, this basis is orthonormal and the brackets $[ \ , \ ]_{\m} $ are given by $$
\begin{array}{c|c|c|c|c|c|c|c|c}
& \widetilde{A_1} & \widetilde{A_2} & \widetilde{A_3} &\widetilde{A_4} & \widetilde{B_1} & \widetilde{B_2} & \widetilde{C_1} & \widetilde{C_2} \\ \hline
& & & & & & & & \\ \widetilde{A_1} & 0 & 0 &0 & 0 &-\frac{w}{tv}\widetilde{C_1} & 0 & \frac{v}{tw}\widetilde{B_1} & 0 \\
& & & & & & & & \\ \widetilde{A_2} & & 0 &0 & 0&-\frac{w}{tv}\widetilde{C_2} & 0 & 0 & \frac{v}{tw}\widetilde{B_1} \\
& & & & & & & & \\ \widetilde{A_3} & & & 0 & 0& -\frac{uw}{2t^2vK}\widetilde{C_2} &-\frac{w}{Kv}\widetilde{C_1} & \frac{v}{Kw}\widetilde{B_2} & \frac{uv}{2t^2wK}\widetilde{B_1} \\
& & & & & & & & \\ \widetilde{A_4} & & & & 0 & \frac{uw}{2t^2vK}\widetilde{C_1} &-\frac{w}{Kv}\widetilde{C_2}& -\frac{uv}{2t^2wK}\widetilde{B_1} & \frac{v}{Kw}\widetilde{B_2} \\
& & & & & & & & \\ \widetilde{B_1} & & & & & 0 &0 &-\frac{t}{vw}\widetilde{A_1} &-\frac{t}{vw}\widetilde{A_2} \\
& & & & & & & & \\
\widetilde{B_2} & & & & & & 0 &\frac{u}{2vwt}\widetilde{A_2}- &-\frac{u}{2vwt}\widetilde{A_1}- \\
& & & & & & &\frac{K}{vw}\widetilde{A_3} & \frac{K}{vw}\widetilde{A_4}\\
& & & & & & & & \\ \widetilde{C_1} & & & & & & & 0 &0 \\
& & & & & & & & \\ \widetilde{C_2} & & & & & & & & 0 \\ \end{array} $$ Recall that the Riemannian connection $\nabla$ is given by $$\nabla_X(Y)=U(X,Y)+\frac{1}{2}[X,Y]_{\m},$$ and the Riemannian curvature $R(X,Y)$ is the matrix given by $$R(X,Y)(Z)=\nabla_X\nabla_Y Z-\nabla _Y \nabla_X Z- \nabla_{[X,Y]_{\m}}Z-[[X,Y]_\frak{\h},Z],$$ for $X,Y,Z \in \m$ and where the term $\left[\left[X,Y\right]_\frak{\h},Z\right]$ corresponds to the linear isotropy representation of $H$ into $G/H.$
The symmetric mapping $U$ is given by:
$$\begin{array}{c|c|c|c|c|c|c|c|c}
U(X,Y) & \widetilde{A_1} & \widetilde{A_2} & \widetilde{A_3} &\widetilde{A_4} & \widetilde{B_1} & \widetilde{B_2} & \widetilde{C_1} & \widetilde{C_2} \\ \hline
& & & & & & & & \\
\widetilde{A_1} & 0 & 0 &0 & 0 &\frac{t^2-v^2}{2tvw}\widetilde{C_1} & \frac{u}{4vwt}\widetilde{C_2} & \frac{-t^2+w^2}{2tvw}\widetilde{B_1} & -\frac{u}{4vwt}\widetilde{B_2} \\
& & & & & & & & \\ \widetilde{A_2} & & 0 &0 & 0& \frac{t^2-v^2}{2tvw}\widetilde{C_2} & -\frac{u}{4vwt}\widetilde{C_1} & \frac{u}{4vwt}\widetilde{B_2} & \frac{-t^2+w^2}{2tvw}\widetilde{B_1} \\
& & & & & & & & \\ \widetilde{A_3} & & & 0 & 0& \frac{-uv}{4t^2wK}\widetilde{C_2} &\frac{K^2-v^2}{2Kvw}\widetilde{C_1} & \frac{-K^2+w^2}{2Kvw}\widetilde{B_2} & \frac{uw}{4t^2vK}\widetilde{B_1} \\
& & & & & & & & \\ \widetilde{A_4} & & & & 0 & \frac{uv}{4t^2wK}\widetilde{C_1} &\frac{K^2-v^2}{2Kvw}\widetilde{C_2}& -\frac{uw}{4t^2vK}\widetilde{B_1} & \frac{-K^2+w^2}{2Kvw}\widetilde{B_2} \\
& & & & & & & & \\ \widetilde{B_1} & & & & & 0 &0 &\frac{v^2-w^2}{2vwt}\widetilde{A_1}+ &\frac{v^2-w^2}{2vwt}\widetilde{A_2} -\\
& & & & & & &\frac{u(v^2-w^2)}{4t^2vwK}\widetilde{A_4} & \frac{u(v^2-w^2)}{4t^2vwK}\widetilde{A_3} \\
& & & & & & & & \\
\widetilde{B_2} & & & & & & 0 &\frac{v^2-w^2}{2vwK}\widetilde{A_3} &\frac{v^2-w^2}{2vwK}\widetilde{A_4} \\
& & & & & & & & \\ \widetilde{C_1} & & & & & & & 0 &0 \\
& & & & & & & & \\ \widetilde{C_2} & & & & & & & & 0 \\ \end{array}$$
Infinitesimal isometries are given by the vectors $X \in \m$ satisfying $$B(A_XY,Z)=-B(Y,A_XZ),$$ for any $Y, Z \in\m,$ where $A_XY=[X,Y]-\nabla_XY$. Since $\nabla_XY=U(X,Y)+\frac{1}{2}[X,Y]$, we have $A_XY=-U(X,Y)+\frac{1}{2}[X,Y]$. Thus $X \in\m$ is an infinitesimal isometry if $$B([X,Y],Z)+B(Y,[X,Z])=0,$$ for any $Y,Z \in \m.$ \begin{proposition} If $X \in \m$ is an infinitesimal isometry, then \begin{enumerate} \item If $u=0$ and \begin{enumerate} \item If $t^2=v^2$, $X=c_1\widetilde{C_1}+c_2\widetilde{C_2},$ \item If $t^2=w^2$, $X=c_1\widetilde{B_1}+c_2\widetilde{B_2},$ \item If $v^2=w^2$, $X=a_1\widetilde{A_1}+a_2\widetilde{A_2}+a_3\widetilde{A_3}+a_4\widetilde{A_4}.$ \end{enumerate} \item If $u \neq 0$ and \begin{enumerate} \item If $v^2=w^2$, $X=a_1\widetilde{A_1}+a_2\widetilde{A_2}+a_3\widetilde{A_3}+a_4\widetilde{A_4},$ \item If $v^2 \neq w^2$, $X=0$. \end{enumerate} \end{enumerate} \end{proposition} In particular, if $u=0$ and $t^2=v^2=w^2$, the bilinear form $B$ is $ad(\m)$-invariant. \section{On the (non-)natural reductivity} We consider on the $\Z^2_2$-symmetric space $\ SO(5)/SO(2)\times SO(2) \times SO(1)$ the $\Z^2_2$-symmetric Riemannian metric associated with the bilinear form $$B=t^2(\alpha_1^2+ \alpha_2^2+\alpha_3^2+\alpha_4^2)+u(\alpha_1 \alpha_4-\alpha_2 \alpha_3 )+ v^2(\beta_1^2 +\beta_2^2)+w^2( \gamma_1^2+ \gamma_2^2)$$ with $tvw \ne 0$ and $u \in \left]-4t^2, 4t^2\right[$. \begin{proposition} The $\Z^2_2$-symmetric space $ \ SO(5)/SO(2)\times SO(2) \times SO(1)$ is naturally reductive if and only if $$u=0 \ {\rm and} \ t^2=v^2=w^2. $$ \end{proposition} \noindent {\it Proof. } Indeed, natural reductivity means that $$B(X,[Z,Y]_{\m})+B([Z,X]_{\m},Y)=0,$$ that is, $B(U(X,Y),Z)=0$ for any $X,Y,Z \in \m$. From the expression of $U$ we deduce $u=0$ and $t^2=v^2=w^2=K^2.$ But $u=0$ implies $K^2=t^2.$ $\clubsuit$
Now assume that the metric is not naturally reductive. In this case, we can study whether such a Riemannian space is a d'Atri space, that is, whether the geodesic symmetries preserve the volume. Let us recall that naturally reductive homogeneous spaces are d'Atri spaces, but these two notions are not equivalent: examples of d'Atri spaces which are not naturally reductive are known. The condition of being a d'Atri space is often difficult to check because it requires the equations of the geodesics. But the d'Atri property is equivalent to the (infinite) Ledger system, whose first equation reads $$L(X,Y,Z)= (\nabla_{X}\rho)(Y, Z) + (\nabla_{Y}\rho)(Z,X) + (\nabla_{Z}\rho)(X,Y)= 0,$$ for any $X,Y,Z \in \m$,
where $\rho$ is the Ricci tensor, that is, the trace of the linear operator $V \rightarrow R(V,X)Y$. In the orthonormal basis previously defined the Ricci tensor is the symmetric matrix $$\left(
\begin{array}{cccccccc}
\rho_{11} & 0 & 0 & \rho_{14} & 0 & 0 & 0 & 0 \\
0 & \rho_{22} & \rho_{23} & 0 & 0 & 0 & 0 & 0 \\
0 & \rho_{23} & \rho_{33} & 0 & 0 & 0 & 0 & 0\\
\rho_{14} & 0 & 0 & \rho_{44} & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & \rho_{55} & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & \rho_{66} & 0 & 0 \\
0 & 0& 0 &0 & 0 & 0 & \rho_{77} & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & \rho_{88} \\
\end{array}
\right) $$ with $$\begin{array}{l}
\rho_{11}=\rho_{22} =\frac{4t^4+u^2-4(v^4-6v^2w^2 + w^4)}{8t^2v^2w^2}, \\
\rho_{14}=- \rho_{23}=\frac{u(K^2t^2 + v^4 - 6v^2w^2 + w^4)}{4 Kt^3v^2w^2},\\
\rho_{33}= \rho_{44}=\frac{(4t^4-u^2)^2-4(v^4-6v^2w^2+w^4)(u^2+4t^4)}{8t^2(4t^4-u^2)v^2w^2},\\
\rho_{55}= \rho_{66}=\frac{-4t^6+12t^4w^2+t^2(u^2+4v^4-4w^4)-3u^2w^2}{(4t^4-u^2)v^2w^2}, \\
\rho_{77}= \rho_{88}=\frac{-4t^6+12t^4v^2+t^2(u^2-4v^4+4w^4)-3u^2v^2}{(4t^4-u^2)v^2w^2}.
\end{array}
$$ The non trivial Ledger equations (for the first condition) are $$ \left\{ \begin{array}{l} L(\widetilde{A_1},\widetilde{B_1},\widetilde{C_1})=L(\widetilde{A_2},\widetilde{B_1},\widetilde{C_2}),\\ L(\widetilde{A_1},\widetilde{B_2},\widetilde{C_2})=L(\widetilde{A_2},\widetilde{B_2},\widetilde{C_1}),\\ L(\widetilde{A_3},\widetilde{B_1},\widetilde{C_2})=L(\widetilde{A_4},\widetilde{B_1},\widetilde{C_1}),\\ L(\widetilde{A_3},\widetilde{B_2},\widetilde{C_1})=L(\widetilde{A_4},\widetilde{B_2},\widetilde{C_2}). \end{array}
\right. $$ Thus we obtain $$(*) \ \ \left\{ \begin{array}{l}
(v^2-w^2) \rho_{11}+ (w^2-t^2) \rho_{55}+(t^2-v^2) \rho_{77}+\frac{u(w^2-v^2)}{2tK} \rho_{14}=0,\\
-\frac{u}{2t} \rho_{55}+\frac{u}{2t} \rho_{77}+\frac{v^2-w^2}{K} \rho_{14}=0,\\
\frac{u(v^2-w^2)}{2tvwK} \rho_{33}+\frac{uw}{2tvK} \rho_{55}-\frac{v^2-w^2}{vw} \rho_{14}-\frac{ u v}{2 t w K} \rho_{77}=0,\\
(v^2-w^2) \rho_{33}+ (w^2-K^2) \rho_{55}+(K^2-v^2) \rho_{77}=0. \end{array}
\right. $$ \begin{itemize} \item If $u=0$, thus $ \rho_{11}=\rho_{33}$, $\rho_{14}=0$ and $(*)$ is equivalent to $$(v^2-w^2) \rho_{33}+ (w^2-t^2) \rho_{55}+(t^2-v^2) \rho_{77}=0,$$ that is, $$ \begin{array}{l} (v^2-w^2)(9t^4+v^4+10v^2w^2+w^4-10t^2v^2-10t^2w^2)=0 \end{array} $$ We obtain $$\left\{ \begin{array}{l} v^2=w^2 \\
{\rm or} \ 9t^4+v^4+10v^2w^2+w^4-10t^2v^2-10t^2w^2=0. \end{array} \right.$$ Let us study the second equation. For this, since $t \neq 0$, we can consider the change of variables $$V=\frac{v^2}{t^2}, \ \ W=\frac{w^2}{t^2}.$$ The equation becomes $$9 - 10(V+W) + (V+W)^2 +8VW= 0.$$ Now we put $S=V+W,$ $P=VW$ and we obtain $$P=\frac{-S^2+10S-9}{8}.$$ Since $V$ and $W$ are strictly positive, $P > 0$, which implies that $-S^2+10S-9 >0$, that is, $S \in \left]1,9\right[.$ With these conditions, $V$ and $W$ are the roots of $X^2-SX+P=0$. In fact they always exist since $X^2-SX+P=X^2-SX+\frac{-S^2+10S-9}{8}$ has as discriminant $$\Delta = S^2-4P=S^2-\frac{-S^2+10S-9}{2}=\frac{3S^2-10S+9}{2}$$ which is always positive. The roots of $X^2-SX+P$ are $$\left\{ \begin{array}{l} X_1=\displaystyle\frac{S-\sqrt{\frac{3S^2-10S+9}{2}}}{2},\\
X_2= \displaystyle\frac{S+\sqrt{\frac{3S^2-10S+9}{2}}}{2}.\\ \end{array} \right. $$ Since $S \in \left]1,9\right[$, $P >0$ and $X_1 >0, X_2 >0$. We obtain $$\left(\frac{v^2}{t^2}, \frac{w^2}{t^2} \right)=(X_1,X_2) \ \ {\rm{or}} \ (X_2,X_1).$$ \begin{proposition} Assume that the Riemannian $\Z^2_2$-symmetric metric on $SO(5)/SO(2) \times SO(2) \times SO(1)$ associated with a bilinear form $B$ with $u=0$ satisfies the first Ledger condition. Then
$B$ is one of the following bilinear form
$$
\begin{array}{lll}
B_1&=&t^2(\alpha_1^2+ \alpha_2^2+\alpha_3^2+\alpha_4^2)+v^2(\beta_1^2 +\beta_2^2+ \gamma_1^2+ \gamma_2^2),\\ B_2(S)&=&t^2(\alpha_1^2+ \alpha_2^2+\alpha_3^2+\alpha_4^2+\displaystyle\frac{S-\sqrt{\frac{3S^2-10S+9}{2}}}{2}(\beta_1^2 +\beta_2^2)\\ && +\displaystyle\frac{S+\sqrt{\frac{3S^2-10S+9}{2}}}{2}( \gamma_1^2+ \gamma_2^2)),\\ B_3(S)&=&t^2(\alpha_1^2+ \alpha_2^2+\alpha_3^2+\alpha_4^2+\displaystyle\frac{S+\sqrt{\frac{3S^2-10S+9}{2}}}{2}(\beta_1^2 +\beta_2^2)\\ && +\displaystyle\frac{S-\sqrt{\frac{3S^2-10S+9}{2}}}{2}( \gamma_1^2+ \gamma_2^2)),
\end{array}
$$
with $S \in \left]1,9\right[.$ The metrics associated with $B_2(S)$ and $B_3(S)$ are not naturally reductive and the metric associated with $B_1$ is not naturally reductive as soon as $t^2 \neq v^2$.
\end{proposition}
\item Assume that $u \neq 0.$ The first Ledger condition writes
$$ \left\{ \begin{array}{l} -t^2(v^2 - w^2)(36t^6 - 40t^4(v^2 + w^2) +6u^2(v^2 + w^2) \\
\ \ \ \ +t^2(-9 u^2 + 4(v^4 + 10 v^2 w^2 + w^4)))=0, \\
t^2u(v^2 - w^2)(28 t^4 - 7u^2 - 16t^2(v^2 + w^2) +
4(v^4 - 6 v^2w^2 + w^4))=0, \\
(v^2 - w^2)(144 t^8 - 160 t^6(v^2 + w^2) +
40t^2u^2(v^2 + w^2) -\\
\ \ \ \ 16t^4(4 u^2 - v^4 - 10 v^2w^2 - w^4) +
u^2(7u^2 - 4(v^4 - 6 v^2w^2 + w^4)))=0, \\
t^2u(v^2 - w^2)
(4 t^6 - 12 t^4(v^2 + w^2) + 3 u^2(v^2 + w^2)-
t^2(u^2 - 32 v^2w^2))=0.
\end{array} \right.$$ We obtain $v^2=w^2$ or $$ \left\{ \begin{array}{l} 36t^6 - 40t^4(v^2 + w^2) + 6u^2(v^2 + w^2) +
t^2(-9 u^2 + 4(v^4 + 10 v^2 w^2 + w^4))=0, \\
28 t^4 - 7u^2 - 16t^2(v^2 + w^2) +
4(v^4 - 6 v^2w^2 + w^4)=0, \\
144 t^8 - 160 t^6(v^2 + w^2) +
40t^2u^2(v^2 + w^2) -\\
\ \ \ \ 16t^4(4 u^2 - v^4 - 10 v^2w^2 - w^4) +
u^2(7u^2 - 4(v^4 - 6 v^2w^2 + w^4))=0, \\
4 t^6 - 12 t^4(v^2 + w^2) + 3 u^2(v^2 + w^2)-
t^2(u^2 - 32 v^2w^2)=0.
\end{array} \right.$$ \end{itemize}
The change of variables: $U=\frac{u}{t^2}, \ V=\frac{v^2}{t^2}, \ W=\frac{w^2}{t^2},\ S=V+W$ and $P=VW$ shows that the first three equations of the previous system are equivalent. So the system reduces to the two equations \begin{equation} 64P-24PS+4S-13S^2+3S^3=0, \label{Eq1} \end{equation} \begin{equation} 7U^2=28-16S+4(S^2-8P)\label{Eq2}. \end{equation}
Equation \ref{Eq1} gives $$P=\displaystyle\frac{-4S+13S^2-3S^3}{8(8-3S)}=\frac{-S(S-4)(3S-1)}{8(8-3S)},$$ because $S=\frac{8}{3}$ is not a solution. Thus $V$ and $W$ are roots of $X^2-SX+P=0$ if there are two real solutions of this equation. But $$X^2-SX+P=X^2-SX+\frac{-S(S-4)(3S-1)}{8(8-3S)}$$ and its discriminant is $$\Delta= S^2-4P= S^2+\frac{S(S-4)(3S-1)}{2(8-3S)}=\frac{S(-3S^2+3S+4)}{2(8-3S)}.$$ We obtain a condition on $S$ to the existence of $V$ and $W$ which is $$S \in \left]0, \frac{3+\sqrt{57}}{6}\right[\ \bigcup \ \left]\frac{8}{3},+\infty\right[.$$ Moreover we want $V$ and $W$ to be positive solutions so $P>0$ and $S>0$; we have to take $S \in \left]\frac{1}{3}, \frac{8}{3}\right[\ \bigcup \ \left]4, +\infty\right[.$ Finally $$S \in \left]\frac{1}{3}, \frac{3+\sqrt{57}}{6}\right[ \ \bigcup \ \left]4,+\infty\right[.$$
Then Equation \ref{Eq2} gives conditions on $S$ for the existence of $U$. In fact Equation \ref{Eq2} is equivalent to $$U^2=4\frac{8-7S+S^2}{8-3S}.$$ Then $S \in \left]0,\frac{7-\sqrt{17}}{2}\right[ \bigcup \left] \frac{8}{3},\frac{7+\sqrt{17}}{2} \right[$. But we also have $u \in \left]-4t^2 , 4t^2 \right[$, so $U^2<16$, which reduces to $S^2 +5S-24<0$ and $S\in \left]0,\frac{7-\sqrt{17}}{2}\right[ \bigcup \left] \frac{8}{3},3 \right[$.
Combining with the conditions coming from \ref{Eq1}, we finally have that $S \in \left]\frac{1}{3},\frac{7-\sqrt{17}}{2}\right[.$
\begin{proposition} Assume that the $\Z^2_2$-symmetric metric on $SO(5)/SO(2) \times SO(2) \times SO(1)$ associated with a bilinear form $B$ with $u\neq 0$ satisfies the first Ledger condition. Then $S$ belongs to $\left]\frac{1}{3},\frac{7-\sqrt{17}}{2}\right[$ and
$B=t^2(\alpha_1^2+ \alpha_2^2+\alpha_3^2+\alpha_4^2)+u(\alpha_1\alpha_4 -\alpha_2\alpha_3)+v^2(\beta_1^2 +\beta_2^2)+w^2( \gamma_1^2+ \gamma_2^2),$ with
\begin{enumerate}
\item $$v^2=\frac{-16 S + 6 S^2 -\sqrt{2S(32 + 12 S - 33 S^2 + 9S^3)}}{-32 + 12 S}t^2$$ and $$ w^2=\frac{-3S(S-4)(S-\frac{1}{3})}{8(8-3S)V}t^2$$
\item $$w^2=\frac{-16 S + 6 S^2 -\sqrt{2S(32 + 12 S - 33 S^2 + 9S^3)}}{-32 + 12 S}t^2 $$ and $$ v^2=\frac{-3S(S-4)(S-\frac{1}{3})}{8(8-3S)V}t^2$$
\end{enumerate}
and in each case $u^2=4\frac{8-7S+S^2}{8-3S}t^4.$
None of these metrics is naturally reductive. \end{proposition}
\end{document}
Nieuw Archief voor Wiskunde
The Nieuw Archief voor Wiskunde (English translated title: New Archive for Mathematics) is a quarterly Dutch peer-reviewed scientific journal of mathematics published by the Koninklijk Wiskundig Genootschap (Royal Mathematical Society) since 1875. The new version, the fifth series, started in 2000. The journal covers developments in mathematics in general and in Dutch mathematics in particular.[1][2] It is abstracted and indexed in zbMATH Open.[3]
Nieuw Archief voor Wiskunde
Discipline: Mathematics
Language: Dutch
Publication details
History: 1875-present
Publisher
Centrum Wiskunde & Informatica (The Netherlands)
Frequency: Quarterly
Standard abbreviations
ISO 4: Nieuw Arch. Wiskd.
Indexing
MIAR · NLM · Scopus
ISSN: 0028-9825
LCCN: 88647611
OCLC no.: 1313690209
Links
• Journal homepage
• Online archive
History
The previous version, with the full title Nieuw Archief voor Wiskunde uitgegeven door het Wiskundig Genootschap, had four series between 1875 and 1999: Eerste reeks (First series, 1875-1893, 20 volumes),[4] Tweede reeks (Second series, 1895-1949, 23 volumes),[5][6] Derde reeks (Third series, 1953-1982, 30 volumes),[7] and Vierde reeks (Fourth series, 1983-1999, 17 volumes).[8]
Predecessor journal
In the years 1856-1874, the Wiskundig Genootschap (Mathematical Society) published three volumes of Archief, uitgegeven door het Wiskundig Genootschap with Weytingh & Brave in Amsterdam.
References
1. "Nieuw Archief voor Wiskunde". nieuwarchief.nl (in Dutch). Koninklijk Wiskundig Genootschap (official website). Retrieved 9 October 2022.
2. "Nieuw Archief voor Wiskunde. Vijfde Serie". zbmath.org. zbMATH Open. 2022. Retrieved 9 October 2022.
3. "Serials Database". ZbMATH Open. Springer Science+Business Media. Retrieved 2022-10-09.
4. "Nieuw Archief voor Wiskunde". zbmath.org. zbMATH Open. 2022. Retrieved 9 October 2022.
5. "Nieuw Archief voor Wiskunde. Tweede Serie". zbmath.org. zbMATH Open. 2022. Retrieved 9 October 2022.
6. "MathDoc Gallica-Math: Répertoire Bibliographique des Sciences Mathématiques (1894-1912) Nieuw Archief voor wiskunde uitgegeven door bet Wiskundig Genootschap. Amsterdam". sites.mathdoc.fr. Mathdoc. Unité d'Appui et de Recherche 5638 (CNRS/UGA), Grenoble. Retrieved 9 October 2022.
7. "Nieuw Archief voor Wiskunde. Derde Serie". zbmath.org. zbMATH Open. 2022. Retrieved 9 October 2022.
8. "Nieuw Archief voor Wiskunde. Vierde Serie". zbmath.org. zbMA. 2022. Retrieved 9 October 2022.
9. "Delpher". delpher.nl. Retrieved 9 October 2022.. Search for Nieuw Archief voor Wiskunde.
2020, Volume 40, Issue 6: 3909-3955. Doi: 10.3934/dcds.2020036
Variational and operator methods for Maxwell-Stokes system
Xing-Bin Pan,
School of Mathematical Sciences, East China Normal University and NYU-ECNU Institute of Mathematical Sciences at NYU Shanghai, Shanghai 200062, China
* Corresponding author: Xing-Bin Pan
Dedicated to Professor Wei-Ming Ni on the occasion of his 70th birthday
Received: January 31, 2019
Published: March 2020
This work was partially supported by the National Natural Science Foundation of China grants no. 11671143 and 11431005
In this paper we revisit the nonlinear Maxwell system and the Maxwell-Stokes system. One of the main features of these systems is that the existence of solutions depends not only on the nature of the nonlinearity of the equations, but also on the type of the boundary conditions and the topology of the domain. We review and improve our recent results on the existence of solutions by using variational methods together with modified de Rham lemmas, as well as operator methods. Regularity results obtained by the reduction method are also discussed and improved.
Curl equation,
Maxwell equations,
Maxwell-Stokes system,
magneto-static problem,
modified de Rham lemma,
variational method,
reduction method,
domain topology.
\begin{document}
\markboth{P.-O. Parisé and D. Rochon}{A Study of Dynamics of the Tricomplex Polynomial $\eta^p+c$}
\title{A Study of Dynamics of the Tricomplex Polynomial $\eta^p+c$ } \author{Pierre-Olivier Parisé\thanks{E-mail: \texttt{[email protected]}} \and Dominic Rochon\thanks{E-mail: \texttt{[email protected]}}}
\date{Département de mathématiques et d'informatique \\ Université du Québec à Trois-Rivières \\ C.P. 500 Trois-Rivières, Québec \\ Canada, G9A 5H7}
\maketitle
\noindent\textbf{AMS subject classification:} 37F50, 32A30, 30G35, 00A69\\ \noindent\textbf{Keywords: }Tricomplex dynamics, Generalized Mandelbrot sets, Multicomplex numbers, Hyperbolic numbers, 3D fractals\\
\begin{abstract} In this article, we give the exact interval of the cross section of the so-called \textit{Mandelbric} set generated by the polynomial $z^3+c$ where $z$ and $c$ are complex numbers. Following that result, we show that the \textit{Mandelbric} defined on the hyperbolic numbers $\ensuremath{\mathbb{D}}$ is a square centered at the origin. Moreover, we define the \textit{Multibrot} sets generated by a polynomial of the form $Q_{p,c}(\eta )=\eta^p+c$ ($p \in \ensuremath{\mathbb{N}}$ and $p \geq 2$) for tricomplex numbers. More precisely, we prove that the tricomplex \textit{Mandelbric} has four principal slices instead of the eight principal 3D slices that arise in the case of the tricomplex Mandelbrot set. Finally, we prove that one of these four slices is an octahedron. \end{abstract}
\section{Introduction} In 1982, A. Douady and J. H. Hubbard \cite{Dou} studied dynamical systems generated by iterations of the quadratic polynomial $z^2+c$. One of the main results of their work was the proof that the well-known Mandelbrot set for complex numbers is a connected set. It is also well known that the Mandelbrot set is contained in the closed disc of radius 2 and crosses the real axis on the interval $\left[ -2 , \frac{1}{4} \right]$. Considering functions of the form $z^p+c$ where $z,c$ are complex numbers and $p$ may be an integer, a rational or a real number greater than 2, T. V. Papathomas, B. Julesz, U. G. Gujar and V. G. Bhavsar \cite{Papathomas, Gujar} explored the sets generated by these functions, called \textit{Multibrot} sets. The last two authors remarked that these \textit{polynomials} generate rich fractal structures. This was the starting point for other researchers such as D. Schleicher (\cite{Di,Lau}), E. Lau \cite{Lau}, X. Sheng and M. J. Spurr \cite{Sheng}, X.-D. Liu et al. \cite{chine6} and many others to study symmetries of \textit{Multibrot} sets, their connectivity and the discs that bound them.
In 1990, P. Senn \cite{Senn} suggested defining the Mandelbrot set over another set of numbers: the hyperbolic numbers (also called duplex numbers). He remarked that the Mandelbrot set for this number structure seemed to be a square. Four years later, a proof of this statement was given by W. Metzler \cite{MET}.
While these works were carried out in the complex plane, hence in 2D, several mathematicians wondered about a generalization of the Mandelbrot set in three dimensions. In 1982, A. Norton \cite{NORTON} gave a first method to visualize fractals in 3D using the quaternions. In 2000, D. Rochon \cite{Rochon1} used the set of bicomplex numbers $\ensuremath{\mathbb{M}} (2)$ to give a 4D definition of the so-called Mandelbrot set and made 3D slices to get the \textit{Tetrabrot}. Later, X.-y. Wang and W.-j. Song \cite{Chine1} followed D. Rochon's work to establish the \textit{Multibrot} sets for bicomplex numbers. Several years before, D. Rochon and V. Garant-Pelletier (\cite{GarantRochon}, \cite{GarantPelletier}) gave a definition of the Mandelbrot set for multicomplex numbers, denoted by $\ensuremath{\mathcal{M}}_n$, and restricted their explorations to the tricomplex case. In particular, they found eight principal slices of the tricomplex Mandelbrot set and proved that one of these 3D slices, typically named the \textit{Perplexbrot}, is an octahedron with edges of length $\frac{9\sqrt{2}}{8}$.
In this article, we investigate the \textit{Multibrot} sets for complex, hyperbolic and tricomplex numbers, respectively denoted by $\ensuremath{\mathcal{M}}^p$, $\ensuremath{\mathcal{H}}^p$ and $\ensuremath{\mathcal{M}}_3^p$, when $p$ is an integer greater than one. We focus on the sets $\ensuremath{\mathcal{M}}^3$, $\ensuremath{\mathcal{H}}^3$ and $\ensuremath{\mathcal{M}}_3^3$, respectively called the \textit{Mandelbric}, the \textit{Hyperbric} and the tricomplex \textit{Mandelbric}. Precisely, in the second section, we recall some definitions and properties of the tricomplex numbers, denoted by $\ensuremath{\mathbb{M}} (3)$. We remark that complex and hyperbolic numbers are embedded in $\ensuremath{\mathbb{M}} (3)$ and also that there are other interesting subsets of $\ensuremath{\mathbb{M}} (3)$. In the third section, we review the definition and properties of \textit{Multibrot} sets. In particular, we prove that the set $\ensuremath{\mathcal{M}}^3$ crosses the real axis on $\left[- \frac{2}{3\sqrt{3}}, \frac{2}{3\sqrt{3}} \right]$. In section four, we define \textit{Multibrot} sets for hyperbolic numbers. In particular, based on Metzler's article (see \cite{MET}), we prove that $\ensuremath{\mathcal{H}}^3$ is a square centered at the origin. Finally, in the fifth section, we define the tricomplex \textit{Multibrot} sets corresponding to the polynomial $\eta^p+c$ where $\eta$ and $c$ are tricomplex numbers and $p\geq 2$ is an integer. We obtain, for the case $p=3$, that there are four principal 3D slices of $\ensuremath{\mathcal{M}}_3^3$ instead of the eight shown in \cite{GarantRochon} for $\ensuremath{\mathcal{M}}_3^2$. Moreover, we prove that one of these four slices, typically named the \textit{Perplexbric}, is an octahedron with edges of length $\frac{2\sqrt{2}}{3\sqrt{3}}$.
\section{Tricomplex numbers} In this section, we begin with a short introduction to the space of tricomplex numbers $\ensuremath{\mathbb{M}} (3)$. One may refer to \cite{Baley}, \cite{GarantPelletier} and \cite{Vajiac} for more details on the following properties.
A tricomplex number $\eta$ is composed of two coupled bicomplex numbers $\zeta_1$, $\zeta_2$ and an imaginary unit $\bb$ such that \begin{equation} \eta=\zeta_1 + \zeta_2 \bb \label{eq2.1} \end{equation} where $\bbs=-1$. The set of such tricomplex numbers is denoted by $\ensuremath{\mathbb{M}} (3)$. Since $\zeta_1,\zeta_2 \in \ensuremath{\mathbb{M}} (2)$, we can write them as $\zeta_1=z_1+ z_2\bt$ and $\zeta_2=z_3+ z_4\bt$ where $z_1,z_2,z_3,z_4 \in \ensuremath{\mathbb{M}} (1)\simeq \ensuremath{\mathbb{C}}$. In that way, \eqref{eq2.1} can be rewritten as \begin{equation} \eta=z_1+ z_2 \bt+ z_3 \bb+ z_4 \bjt\label{eq2.2} \end{equation} where $\ensuremath{{\bf i_2^{\text 2}}}=-1$, $\bt \bb = \bb \bt = \bjt$ and $\bjts=1$. Moreover, as $z_1$, $z_2$, $z_3$ and $z_4$ are complex numbers, we can write the number $\eta$ in a third form as \begin{align} \eta&=a+ b\bo + (c+ d\bo)\bt + (e + f\bo)\bb + (g + h\bo)\bjt\notag\\ &=a+ b\bo + c\bt + d\bjp + e\bb + f\bjd + g \bjt + h\bq\label{eq2.3} \end{align} where $\ensuremath{{\bf i_1^{\text 2}}}=\bqs=-1$, $\bq =\bo \bjt = \bo \bt \bb$, $\bjd = \bo \bb = \bb \bo$, $\bjds=1$, $\bjp=\bo \bt = \bt \bo$ and $\bjps=1$. After ordering each term of \eqref{eq2.3}, we get the following representations of the set of tricomplex numbers: \begin{align}
\ensuremath{\mathbb{M}} (3) &:= \oa \eta = \zeta_1 + \zeta_2\bb \, |\, \zeta_1, \zeta_2 \in \ensuremath{\mathbb{M}} (2) \fa \notag\\
&=\oa z_1+ z_2 \bt+ z_3 \bb+ z_4 \bjt \, |\, z_1,z_2,z_3,z_4 \in \ensuremath{\mathbb{M}} (1) \fa \notag\\
&=\oa x_0+ x_1\bo + x_2\bt + x_3\bb + x_4\bq + x_5\bjp + x_6 \bjd+ x_7\bjt \, |\, x_i \in \ensuremath{\mathbb{M}} (0)=\ensuremath{\mathbb{R}} \right. \\ & \left. \qquad \qquad \text{ for } i=0,1,2, \ldots , 7 \fa \text{.} \label{EqRep} \end{align} Let $\eta_1=\zeta_1+\zeta_2\bb$ and $\eta_2=\zeta_3 + \zeta_4\bb$ be two tricomplex numbers with $\zeta_1,\zeta_2,\zeta_3,\zeta_4 \in \ensuremath{\mathbb{M}} (2)$. We define the equality, the addition and the multiplication of two tricomplex numbers as \begin{align} \eta_1&=\eta_2 \text{ iff } \zeta_1=\zeta_3 \text{ and } \zeta_2=\zeta_4 \label{eq2.4}\\ \eta_1 + \eta_2 &:= (\zeta_1 + \zeta_3) + (\zeta_2+\zeta_4)\bb \label{eq2.5}\\ \eta_1 \cdot \eta_2&:= (\zeta_1\zeta_3-\zeta_2\zeta_4)+(\zeta_1\zeta_4 + \zeta_2\zeta_3)\bb \label{eq2.6}\text{.} \end{align} Table \ref{tabC1} shows the products of the tricomplex imaginary units taken two at a time. \begin{table} \centering
\begin{tabular}{c|*{9}{c}} $\cdot$ & 1 & $\mathbf{i_1}$ & $\mathbf{i_2}$ & $\mathbf{i_3}$ & $\mathbf{i_4}$ & $\mathbf{j_1}$ & $\mathbf{j_2}$ & $\mathbf{j_3}$\\\hline 1 & 1 & $\mathbf{i_1}$ & $\mathbf{i_2}$ & $\mathbf{i_3}$ & $\mathbf{i_4}$ & $\mathbf{j_1}$ & $\mathbf{j_2}$ & $\mathbf{j_3}$\\ $\mathbf{i_1}$ & $\mathbf{i_1}$ & $-\mathbf{1}$ & $\mathbf{j_1}$ & $\mathbf{j_2}$ & $-\mathbf{j_3}$ & $-\mathbf{i_2}$ & $-\mathbf{i_3}$ & $\mathbf{i_4}$\\ $\mathbf{i_2}$ & $\mathbf{i_2}$ & $\mathbf{j_1}$ & $-\mathbf{1}$ & $\mathbf{j_3}$ & $-\mathbf{j_2}$ & $-\mathbf{i_1}$ & $\mathbf{i_4}$ & $-\mathbf{i_3}$\\ $\mathbf{i_3}$ & $\mathbf{i_3}$ & $\mathbf{j_2}$ & $\mathbf{j_3}$ & $-\mathbf{1}$ & $-\mathbf{j_1}$ & $\mathbf{i_4}$ & $-\mathbf{i_1}$ & $-\mathbf{i_2}$\\ $\mathbf{i_4}$ & $\mathbf{i_4}$ & $-\mathbf{j_3}$ & $-\mathbf{j_2}$ & $-\mathbf{j_1}$ & $-\mathbf{1}$ & $\mathbf{i_3}$ & $\mathbf{i_2}$ & $\mathbf{i_1}$\\ $\mathbf{j_1}$ & $\mathbf{j_1}$ & $-\mathbf{i_2}$ & $-\mathbf{i_1}$ & $\mathbf{i_4}$ & $\mathbf{i_3}$ & $\mathbf{1}$ & $-\mathbf{j_3}$ & $-\mathbf{j_2}$\\ $\mathbf{j_2}$ & $\mathbf{j_2}$ & $-\mathbf{i_3}$ & $\mathbf{i_4}$ & $-\mathbf{i_1}$ & $\mathbf{i_2}$ & $-\mathbf{j_3}$ & $\mathbf{1}$ & $-\mathbf{j_1}$\\ $\mathbf{j_3}$ & $\mathbf{j_3}$ & $\mathbf{i_4}$ &$-\mathbf{i_3}$ & $-\mathbf{i_2}$ & $\mathbf{i_1}$ & $-\mathbf{j_2}$ & $-\mathbf{j_1}$ & $\mathbf{1}$ \\ \end{tabular} \caption{Products of tricomplex imaginary units}\label{tabC1} \end{table} The set of tricomplex numbers with addition $+$ and multiplication $\cdot$ forms a commutative ring with zero divisors.
A tricomplex number has a useful representation using the idempotent elements $\ett =\frac{1+\bjt}{2}$ and $\etc =\frac{1-\bjt}{2}$. Recalling that $\eta = \zeta_1 + \zeta_2\bb$ with $\zeta_1, \zeta_2 \in \ensuremath{\mathbb{M}} (2)$, the idempotent representation of $\eta$ is \begin{equation} \eta = (\zeta_1- \zeta_2\bt)\ett + (\zeta_1+ \zeta_2\bt)\etc \label{eq2.7}\text{.} \end{equation} The representation \eqref{eq2.7} of a tricomplex number is useful for adding and multiplying tricomplex numbers because it allows these operations to be carried out term by term. In fact, we have the following theorem (see \cite{Baley}): \begin{theorem}\label{theo2.2} Let $\eta_1=\zeta_1 + \zeta_2\bb$ and $\eta_2=\zeta_3 + \zeta_4\bb$ be two tricomplex numbers. Let $\eta_1=u_1\ett + u_2 \etc$ and $\eta_2=u_3\ett + u_4\etc$ be the idempotent representations \eqref{eq2.7} of $\eta_1$ and $\eta_2$. Then, \begin{enumerate} \item $\eta_1+\eta_2=(u_1+u_3)\ett + (u_2+u_4)\etc$; \item $\eta_1 \cdot \eta_2 = (u_1 \cdot u_3)\ett + (u_2 \cdot u_4)\etc$; \item $\eta_1^m=u_1^m \ett + u_2^m \etc$ $\forall m \in \ensuremath{\mathbb{N}}$. \end{enumerate} \end{theorem} Moreover, we define an $\ensuremath{\mathbb{M}} (3)$-\textit{cartesian} set $X$ of two subsets $X_1,X_2\subseteq\ensuremath{\mathbb{M}} (2)$ as follows: \begin{align}
X=X_1\times_{\ett}X_2:=\oa \eta =\zeta_1+ \zeta_2\bb \in \ensuremath{\mathbb{M}} (3) \, | \, \eta = u_1 \ett + u_2 \etc , u_1 \in X_1 \text{ and } u_2 \in X_2 \fa \text{.}\label{eq2.14} \end{align}
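The idempotent calculus of Theorem \ref{theo2.2} is easy to experiment with numerically. The following short Python sketch (our own illustrative code, not taken from the references) represents a bicomplex number as a pair of ordinary complex numbers and a tricomplex number as a pair of bicomplex numbers, applies the product rule \eqref{eq2.6} at each level, and checks on a sample that the idempotent components of a product are the products of the idempotent components.
\begin{verbatim}
# Illustrative sketch only: nested-pair model of M(2) and M(3).
class MC:
    """eta = z1 + z2 * i_n, stored as the pair (z1, z2)."""
    def __init__(self, z1, z2):
        self.z1, self.z2 = z1, z2
    def __add__(self, other):
        return MC(self.z1 + other.z1, self.z2 + other.z2)
    def __sub__(self, other):
        return MC(self.z1 - other.z1, self.z2 - other.z2)
    def __mul__(self, other):
        # product rule (2.6): (a + b i_n)(c + d i_n) = (ac - bd) + (ad + bc) i_n
        return MC(self.z1 * other.z1 - self.z2 * other.z2,
                  self.z1 * other.z2 + self.z2 * other.z1)

def times_i2(zeta):
    # multiply a bicomplex number z1 + z2*i2 by i2:  -z2 + z1*i2
    return MC(-zeta.z2, zeta.z1)

def idempotent_components(eta):
    # eta = zeta1 + zeta2*i3  ->  (zeta1 - zeta2*i2, zeta1 + zeta2*i2), as in (2.7)
    return eta.z1 - times_i2(eta.z2), eta.z1 + times_i2(eta.z2)

def close(a, b, eps=1e-12):
    return abs(a.z1 - b.z1) + abs(a.z2 - b.z2) < eps

# two sample tricomplex numbers (bicomplex entries are pairs of complex numbers)
eta1 = MC(MC(1 + 2j, 0.5j), MC(-1j, 0.25))
eta2 = MC(MC(0.3, 1j), MC(2, -0.5j))
u1, u2 = idempotent_components(eta1)
v1, v2 = idempotent_components(eta2)
w1, w2 = idempotent_components(eta1 * eta2)
print(close(w1, u1 * v1) and close(w2, u2 * v2))   # expected: True
\end{verbatim}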
We define the norm $\Vert \cdot \Vert_3 :\, \ensuremath{\mathbb{M}} (3) \rightarrow \ensuremath{\mathbb{R}}$ of a tricomplex number $\eta=\zeta_1 + \zeta_2\bb$ as \begin{align}
\Vert \eta \Vert_3 & := \sqrt{\Vert\zeta_1\Vert_2^2+\Vert\zeta_2\Vert_2^2}=\sqrt{\sum_{i=1}^2|z_i|^2+\sum_{i=3}^4|z_i|^2}\label{eq2.15}\\ &=\sqrt{\sum_{i=0}^7x_i^2}.\notag \end{align} According to the norm \eqref{eq2.15}, we say that a sequence $\oa s_m \fa_{m=1}^{\infty} $ of tricomplex numbers is bounded if and only if there exists a real number $M$ such that $\Vert s_m \Vert_3 \leq M$ for all $m \in \ensuremath{\mathbb{N}}$. Now, according to \eqref{eq2.14}, we define two kinds of tricomplex discus: \begin{definition}\label{def2.1} Let $\alpha = \alpha_1+\alpha_2 \bb \in \ensuremath{\mathbb{M}} (3)$ and set $\bf{r_2}\geq \bf{r_1} >0$. \begin{enumerate} \item The open discus is the set \begin{align}
\bf{D_3}(\alpha ; \bf{r_1},\bf{r_2})&:= \oa \eta \in \ensuremath{\mathbb{M}} (3) \, | \, \eta =\zeta_1 \ett + \zeta_2\etc , \, \Vert \zeta_1-(\alpha_1- \alpha_2 \bt)\Vert_2 <\bf{r_1} \text{ and }\right.\notag \\ &\left. \qquad \qquad \qquad \Vert \zeta_2 - (\alpha_1+\alpha_2 \bt) \Vert_2 <\bf{r_2} \fa \text{.}\label{eq2.141} \end{align} \item The closed discus is the set \begin{align}
\overline{\bf{D_3}}(\alpha ; \bf{r_1},\bf{r_2})&:= \oa \eta \in \ensuremath{\mathbb{M}} (3) \, | \, \eta =\zeta_1 \ett + \zeta_2\etc , \, \Vert \zeta_1-(\alpha_1- \alpha_2\bt)\Vert_2 \leq \bf{r_1} \text{ and }\right.\notag \\ &\left. \qquad \qquad \qquad \Vert \zeta_2 - (\alpha_1+ \alpha_2 \bt) \Vert_2 \leq \bf{r_2} \fa \text{.}\label{eq2.142} \end{align} \end{enumerate} \end{definition}
We end this section with several remarks about subsets of $\ensuremath{\mathbb{M}} (3)$. Consider the set $\ensuremath{\mathbb{C}} (\bk ):=\oa \eta = x_0 + x_1 \bk \, | \, x_0, x_1 \in \ensuremath{\mathbb{R}} \fa, \bk \in \oa \bo , \bt , \bb , \bq \fa $. Then $\ensuremath{\mathbb{C}} (\bk )$ is a subset of $\ensuremath{\mathbb{M}} (3)$ for $k=1,2,3,4$ and we also remark that they are all isomorphic to $\ensuremath{\mathbb{C}}$. Furthermore, the set
$$\ensuremath{\mathbb{D}} (\bjk ):=\oa x_0 + x_1 \bjk \, | \, x_0, x_1 \in \ensuremath{\mathbb{R}}\fa$$ where $\bjk \in \oa \bjp , \bjd, \bjt \fa$ is a subset of $\ensuremath{\mathbb{M}} (3)$ and is isomorphic to the set of hyperbolic numbers $\ensuremath{\mathbb{D}}$ for $k\in\{1,2,3\}$ (see \cite{RochonShapiro, vajiac2} and \cite{Sobczyk} for further details about the set $\ensuremath{\mathbb{D}}$ of hyperbolic numbers). Moreover, we define three particular subsets of $\ensuremath{\mathbb{M}} (3)$ (see \cite{GarantRochon} and \cite{Parise}). \begin{definition} \label{Mikil} Let $\bk , \bil \in \oa 1 , \bo , \bt , \bb , \bq , \bjp , \bjd , \bjt \fa$ where $\bk \neq \bil$. The first set is \begin{equation}
\ensuremath{\mathbb{M}} (\bk , \bil ):= \oa x_1 + x_2\bk + x_3\bil + x_4\bk\bil \, | \, x_i \in \ensuremath{\mathbb{R}}, i=1, \ldots, 4 \fa \text{.} \end{equation} \end{definition} It is easy to see that $\ensuremath{\mathbb{M}} (\bk , \bil )$ is closed under addition and multiplication of tricomplex numbers and that $\ensuremath{\mathbb{M}} (\bk , \bil ) \simeq \ensuremath{\mathbb{M}} (2)$ except for the \textit{biduplex} sets $\ensuremath{\mathbb{M}} (\bjp,\bjd )$, $\ensuremath{\mathbb{M}} (\bjp , \bjt )$ and $\ensuremath{\mathbb{M}} (\bjd , \bjt )$ (see \cite{GarantRochon}). \begin{definition}\label{Mikilim} Let $\bk , \bil , \bim \in \oa \bo , \bt , \bb , \bq , \bjp , \bjd , \bjt \fa$ with $\bk \neq \bil$, $\bk \neq \bim$ and $\bil \neq \bim$. The second subset is \begin{equation}
\ensuremath{\mathbb{M}} (\bk , \bil , \bim ):= \oa x_1\bk + x_2\bil + x_3\bim + x_4\bk\bil\bim \, | \, x_i \in \ensuremath{\mathbb{R}}, i=1, \ldots, 4 \fa \text{.} \end{equation} \end{definition} Using Table \ref{tabC1}, we can easily verify that for any tricomplex number $\zeta \in \ensuremath{\mathbb{M}} (\bk , \bil , \bim )$, $\zeta^3 \in \ensuremath{\mathbb{M}} (\bk , \bil , \bim )$. In section \ref{sec5}, this fact will be useful to characterize some principal 3D slices of the tricomplex \textit{Mandelbric}. \begin{definition}\label{Tikilim} Let $\bk , \bil , \bim \in \oa 1, \bo , \bt , \bb , \bq , \bjp , \bjd , \bjt \fa$ with $\bk \neq \bil$, $\bk \neq \bim$ and $\bil \neq \bim$. The third subset is \begin{equation}
\ensuremath{\mathbb{T}} (\bim , \bk , \bil ):= \oa x_1\bk + x_2\bil + x_3\bim \, | \, x_1,x_2,x_3 \in \ensuremath{\mathbb{R}} \fa \text{.} \end{equation} \end{definition} This last set is not necessarily closed under multiplication; whether it is depends on the number of times $k$ a tricomplex number of this set is multiplied by itself. For $k$ even, two cases may occur depending on the choice of the tricomplex imaginary units: either $\ensuremath{\mathbb{T}} (\bim , \bk , \bil ) \subseteq \ensuremath{\mathbb{M}} (\bk , \bil )$, or the result of multiplying tricomplex numbers in $\ensuremath{\mathbb{T}} (\bim , \bk , \bil )$ is not closed in the set $\ensuremath{\mathbb{M}} (\bk , \bil )$. The first case arises if one of the imaginary units $\bk , \bil , \bim$ is 1 or if the product $\bk\bil = \pm \bim$. Whenever neither of these conditions is fulfilled, the result is not closed in the set $\ensuremath{\mathbb{M}} (\bk , \bil )$. On the other hand, if $k$ is odd, the first case stays the same but the second is always closed in the set $\ensuremath{\mathbb{M}} (\bk , \bil , \bim )$. These facts are direct consequences of the definition of the tricomplex imaginary units.
\section{Generalized Mandelbrot sets for complex numbers} In this section, we investigate \textit{Multibrot} sets and recall some of their properties that come from \cite{Gujar,chine6,2Noah,Parise,Chine1}. We also obtain some specific results for the \textit{Mandelbric} set $\ensuremath{\mathcal{M}}^3$ generated by the complex polynomial $Q_{3,c}(z)=z^3+c$.
\subsection{\textit{Multibrot} sets} We define the \textit{Multibrot} sets as follows: \begin{definition}\label{d2.2.1} Let $Q_{p,c}(z)=z^p+c$ be a polynomial of degree $p\in \ensuremath{\mathbb{N}}\setminus \left\lbrace 0,1\right\rbrace$. A \textit{Multibrot} set is the set of complex numbers $c$ for which the sequence $\ensuremath{\left\lbrace Q_{p,c}^m(0) \right\rbrace_{m=1}^{\infty}}$ is bounded, \textit{i.e.} \begin{equation}
\ensuremath{\mathcal{M}^p} = \oa c \in \ensuremath{\mathbb{C}} \, | \ensuremath{\left\lbrace Q_{p,c}^m(0) \right\rbrace_{m=1}^{\infty}} \text{ is bounded } \fa. \end{equation} \end{definition} If we set $p=2$, we find the well-known Mandelbrot set. The next two theorems provide a method to visualize the $\ensuremath{\mathcal{M}^p}$ sets (see \cite{chine6} and \cite{Parise}). \begin{theorem}\label{t2.2.1}
For all complex number $c$ in $\ensuremath{\mathcal{M}^p}$, we have $|c| \leq 2^{1/(p-1)}$. \end{theorem} To show Theorem \ref{t2.2.1}, we need the following lemma. \begin{lemma}\label{l2.2.0.1}
Let $|c|^{p-1}>2$ with $p\geq 2$ an integer. Then, $|Q_{p,c}^m(0)|\geq |c|(|c|^{p-1}-1)^{m-1}$ for all natural number $m\geq 1$. \end{lemma} \begin{demo}
The proof is done by induction. For $m=1$, we have that $|Q_{p,c}(0)| = |c| = |c|(|c|^{p-1}-1)^{1-1}$. Suppose that the property is true for an integer $k\geq 1$. Then, for $k+1$, we obtain that \begin{align*}
|Q_{p,c}^{k+1}(0)| &= |(Q_{p,c}^k(0))^p + c| \geq |Q_{p,c}^k(0)|^p-|c|\text{.} \end{align*}
By the induction hypothesis and since $|c|^{p-1}>2$, we get the following inequalities \begin{align*}
|Q_{p,c}^{k+1}(0)| &\geq |c|^p(|c|^{p-1}-1)^{p(k-1)} - |c|\\
&\geq |c|^p(|c|^{p-1}-1)^{k-1} - |c|(|c|^{p-1}-1)^{k-1}\\
&\geq |c|(|c|^{p-1}-1)^k\text{.} \end{align*} Hence, the property holds for $k+1$ and then it holds for all natural number $m\geq 1$.$\square$ \end{demo} \begin{demo}[of Theorem \ref{t2.2.1}]
Suppose, by contradiction, there exists a complex number $c\in \ensuremath{\mathcal{M}^p}$ such that $|c|>2^{1/(p-1)}$. So, we have $|c|^{p-1}>2$. It follows from Lemma \ref{l2.2.0.1} that $|Q_{p,c}^m(0)|\geq |c|(|c|^{p-1}-1)^{m-1}$ for all $m\geq 1$. Then, as $m\rightarrow \infty$, $|Q_{p,c}^m(0)|$ tends to infinity since $|c|^{p-1}-1>1$. Thus, the sequence $\ensuremath{\left\lbrace Q_{p,c}^m(0) \right\rbrace_{m=1}^{\infty}}$ is unbounded, so $c\not \in \ensuremath{\mathcal{M}^p}$ by the definition of a \textit{Multibrot} set, which contradicts our assumption. \end{demo} \begin{theorem}\label{t2.2.2}
A complex number $c$ is in $\ensuremath{\mathcal{M}^p}$ iff $|Q_{p,c}^m(0)|\leq 2^{1/(p-1)}$ $\forall m \in \ensuremath{\mathbb{N}}$. \end{theorem} We need another lemma to prove Theorem \ref{t2.2.2}. \begin{lemma}\label{l2.2.0.2}
Let $|c|\leq 2^{1/(p-1)}$ with $p\geq 2$ an integer. Suppose there exists an integer $n\geq 1$ such that $|Q_{p,c}^n(0)| = 2^{1/(p-1)}+\delta$ with $\delta > 0$. Then, we have the following inequality: $|Q_{p,c}^{n+m}(0)| \geq 2^{1/(p-1)}+(2p)^m\delta$, $\forall m\geq 1$. \end{lemma} \begin{demo}
The proof is done by induction. For $m=1$, we have that $|Q_{p,c}^{n+1}(0)| \geq |Q_{p,c}^n(0)|^p - |c|$. By the hypothesis, we get $|Q_{p,c}^{n+1}(0)|\geq (2^{1/(p-1)}+\delta)^p - 2^{1/(p-1)}$. But, $(2^{1/(p-1)}+\delta)^p = \sum_{i=0}^p\binom{p}{i}2^{\frac{p-i}{p-1}}\delta^i$. Then, since $2^{p/(p-1)}=2\cdot 2^{1/(p-1)}$, \begin{align*} (2^{1/(p-1)}+\delta)^p - 2^{1/(p-1)} &\geq 2^{p/(p-1)} + 2p\delta - 2^{1/(p-1)}\\ &=2^{1/(p-1)}(2-1) + 2p\delta\\ &= 2^{1/(p-1)}+2p\delta\text{.} \end{align*}
Thus, $|Q_{p,c}^{n+1}(0)| \geq 2^{1/(p-1)}+2p\delta$ and the property holds for $m=1$. Now, suppose that the property is true for an integer $k\geq 1$. Then, for $k+1$, we have that $|Q_{p,c}^{n+k+1}(0)| \geq |Q_{p,c}^{n+k}(0)|^p - |c|$. Then, by the induction hypothesis, we get the following chain of inequalities \begin{align*}
|Q_{p,c}^{n+k}(0)|^p - |c| &\geq (2^{1/(p-1)}+(2p)^k\delta)^p - |c|\\ &\geq 2^{p/(p-1)} + 2p(2p)^k\delta - 2^{1/(p-1)}\\ &\geq 2^{1/(p-1)} + (2p)^{k+1}\delta\text{.} \end{align*} Consequently, the property holds for $k+1$ and thus it holds for any natural number $m\geq 1$. \end{demo} \begin{demo}\textbf{(of Theorem \ref{t2.2.2})}
\begin{enumerate}
\item[$\Rightarrow$)] Let $c \in \ensuremath{\mathcal{M}^p}$. By Theorem \ref{t2.2.1}, we know that $|c|\leq 2^{1/(p-1)}$. Suppose there exists an $n\geq 1$ such that $|Q_{p,c}^n(0)| > 2^{1/(p-1)}$, that is $|Q_{p,c}^n(0)| = 2^{1/(p-1)}+\delta$ with $\delta > 0$. Then, by Lemma \ref{l2.2.0.2}, we obtain that $|Q_{p,c}^{n+m}(0)| \geq 2^{1/(p-1)}+(2p)^m\delta$ for all $m\geq 1$. Then, $|Q_{p,c}^{n+m}(0)| \to \infty$ as $m$ tends to infinity since $2p>1$. Thus, $c\not\in \ensuremath{\mathcal{M}^p}$. This is a contradiction. \item[$\Leftarrow$)] This is a direct consequence of the definition of \textit{Multibrot} sets since the sequence $\ensuremath{\left\lbrace Q_{p,c}^m(0) \right\rbrace_{m=1}^{\infty}}$ is bounded by $2^{1/(p-1)}$ for all $m\geq 1$. \end{enumerate} \end{demo} Theorem \ref{t2.2.2} provides a criterion to decide whether a complex number $c$ belongs to the set $\ensuremath{\mathcal{M}^p}$. The algorithm used to generate the figures is described in \cite{Gujar}; a short computational sketch of this escape-time procedure is given after Figure \ref{fig2.2}. We use the preset limit $L=2^{1/(p-1)}$ and a maximum number of iterations $M=1000$. The images are generated in a square of $1000 \times 1000$ pixels. Figures \ref{fig2.1}, \ref{fig2.2} illustrate, respectively, the sets $\ensuremath{\mathcal{M}^p}$ for the values $p=3$ and $p=4$. \begin{figure}\label{fig2.1}
\label{fig2.2}
\end{figure}
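For completeness, here is a minimal Python sketch of the escape-time procedure mentioned above (an illustration only, not the implementation used for the figures): a parameter $c$ is kept when the modulus of the iterates stays below the preset limit $L=2^{1/(p-1)}$ for $M=1000$ iterations, in accordance with Theorem \ref{t2.2.2}. The grid size and bounds below are illustrative choices.
\begin{verbatim}
# Minimal escape-time sketch (illustration only, not the authors' code).
def in_multibrot(c, p=3, max_iter=1000):
    L = 2.0 ** (1.0 / (p - 1))   # preset limit from the boundedness criterion
    z = 0j
    for _ in range(max_iter):
        z = z ** p + c
        if abs(z) > L:
            return False         # the orbit escaped: c is not in M^p
    return True

def multibrot_grid(p=3, n=500, bound=1.2):
    """Boolean n x n grid approximating M^p on the square [-bound, bound]^2."""
    step = 2.0 * bound / (n - 1)
    return [[in_multibrot(complex(-bound + j * step, -bound + i * step), p)
             for j in range(n)]
            for i in range(n)]
\end{verbatim}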
Now, let $\mathfrak{M}$ denote the family of all generalized Mandelbrot sets $\ensuremath{\mathcal{M}^p}$, \textit{i.e.} $\mathfrak{M}:=\oa \ensuremath{\mathcal{M}^p} \, | \, p\geq 2\fa$. The family $\mathfrak{M}$ has the following property (see \cite{Chine1}). \begin{theorem}\label{t2.2.3} Every member of the family $\mathfrak{M}$ is a connected set. \end{theorem}
We also have some other properties related to the polynomial $Q_{p,c}(z)$ when we iterate it from $z_0=0$. The proofs can be found in \cite{Parise}. \begin{lemma}\label{l2.2.1} Set $c>0$ where $c$ is a real number. Then, the sequence $\ensuremath{\left\lbrace Q_{p,c}^m(0) \right\rbrace_{m=1}^{\infty}}$ is strictly increasing. Furthermore, if the sequence $\ensuremath{\left\lbrace Q_{p,c}^m(0) \right\rbrace_{m=1}^{\infty}}$ is bounded, then it converges to $c_0>0$. \end{lemma} \begin{lemma}\label{l2.2.2} Set $c<0$ where $c$ is a real number. Then, the sequence $\ensuremath{\left\lbrace Q_{p,c}^m(0) \right\rbrace_{m=1}^{\infty}}$ is strictly decreasing when $p$ is odd. Furthermore, if the sequence $\ensuremath{\left\lbrace Q_{p,c}^m(0) \right\rbrace_{m=1}^{\infty}}$ is bounded, then it converges to $c_0<0$. \end{lemma}
These properties will be useful to prove our next result for the intersection of $\ensuremath{\mathcal{M}^3}$ with the real axis.
\subsection{Roots of a third-degree polynomial} Let $P(x)=x^3+bx^2+cx+d$ denote a monic cubic polynomial with real coefficients. The change of variable $y=x+\frac{b}{3}$ reduces the polynomial $P(x)$ to the depressed cubic $Q(y)=y^3+py+q$ where $p=c-\frac{b^2}{3}$ and $q=\frac{2b^3}{27}-\frac{cb}{3}+d$. In that case, searching for the roots of $P(x)$ is equivalent to searching for the roots of $Q(y)$.
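As a quick illustration of this reduction, consider the cubic $x^{3}-x+c$ that will appear in the proof of Theorem \ref{t2.3.1}: its coefficient of $x^{2}$ is already zero, so the change of variable is the identity and the formulas above give
\begin{equation*}
p=-1, \qquad q=c, \qquad 27q^{2}+4p^{3}=27c^{2}-4 .
\end{equation*}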
A complex number $z$ is a root of $Q(y)$ iff there exist two complex numbers $y_1$ and $y_2$ such that $z=y_1+y_2$ and \begin{align} y_1^3+y_2^3+q=0\\ y_1y_2=-\dfrac{p}{3} \label{syst1} \end{align} (see \cite{Parise}). The last equations can be rewritten as the following equivalent system \begin{align} y_1^3+y_2^3+q=0\notag \\ y_1^3y_2^3=-\dfrac{p^3}{27} \label{syst2}\text{.} \end{align} With respect to \eqref{syst2}, we remark that $y_1^3$, $y_2^3$ are roots of the polynomial $T(t)=t^2-(y_1^3+y_2^3)t+y_1^3y_2^3=t^2+qt-\frac{p^3}{27}$, whose discriminant $\Delta$ is equal to $q^2+\frac{4p^3}{27}$. To obtain information about the roots of $Q(y)$, and hence about the roots of $P(x)$, we are interested in the sign of $D=27\Delta =27q^2+4p^3$. In fact, one can prove the following result (see \cite{HBook}, \cite{EQU34} and \cite{Parise}). \begin{theorem}\label{t2.1.1} Let $P(x)=x^3+bx^2+cx+d$ where $b,c,d \in \mathbb{R}$, and set $D=4c^3+27d^2+4db^3-b^2c^2-18bcd$. Then: \begin{itemize} \item[i)] if $D>0$, then $P(x)$ has one real root and two non-real complex roots; \item[ii)] if $D=0$, then $P(x)$ has three real roots, one of which is of multiplicity 2; \item[iii)] if $D<0$, then $P(x)$ has three distinct real roots. \end{itemize} \end{theorem} \begin{demo} Developing the expression of $D = 27\Delta$, where $\Delta = q^2 + \frac{4p^3}{27}$ is the discriminant of $T(t)$, gives the expression of $D$ in the statement.
Now, we study the relation between the roots of $Q(y)$ and the sign of $D$. \begin{enumerate} \item[i)] If $D > 0$, then $T(t)$ has two real roots $t_1$ and $t_2$. According to \eqref{syst2}, $t_1 = y_1^3$ and $t_2 = y_2^3$. So, from the remark above, the roots of $Q(y)$ are \begin{align*} z_1& = y_1 + y_2 = \sqrt[3]{t_1} + \sqrt[3]{t_2}\\ z_2& = \omega y_1 + \overline{\omega}y_2 = \omega \sqrt[3]{t_1} + \overline{\omega}\sqrt[3]{t_2}\\ z_3& = \overline{\omega}y_1 + \omega y_2 = \overline{\omega}\sqrt[3]{t_1} + \omega \sqrt[3]{t_2} \end{align*} where $\omega:=\frac{-1+\bi \sqrt{3}}{2}$ is a cube root of unity. This gives one real root and two non-real complex roots. \item[ii)] If $D = 0$, then $T(t)$ has a root $t = -\frac{q}{2}$ of multiplicity $2$. According to \eqref{syst2}, $y^3 = t$ where $y^3:=y_1^3 = y_2^3$. Then, the roots of $Q(y)$ are \begin{align*} z_1 &= 2y = 2\sqrt[3]{t}\\ z_2 &= z_3 = \omega y + \overline{\omega} y = -\sqrt[3]{t}\text{.} \end{align*} These last roots are all real, and one of them (the root $z_2$) is of multiplicity 2. \item[iii)] If $D < 0$, then the roots $t$ and $\overline{t}$ are complex roots of $T(t)$. Let $t = re^{\bi\theta }$ and set $y:= t^{1/3}$, a cube root of $t$. Then, the roots of $Q(y)$ are \begin{align*} z_1& = y + \overline{y} = t^{1/3} + \overline{t}^{1/3}\\ z_2& = \omega y + \overline{\omega}\, \overline{y} = \omega t^{1/3} + \overline{\omega}\,\overline{t}^{1/3}\\ z_3&= \overline{\omega} y + \omega\overline{y} = \overline{\omega}t^{1/3} + \omega \overline{t}^{1/3}\text{.} \end{align*} Since each of the three roots is the sum of a complex number and its conjugate, $z_1$, $z_2$ and $z_3$ are distinct real roots. This completes the proof. \end{enumerate}
\end{demo} \subsection{$\ensuremath{\mathcal{M}^3}$ set: the \textit{Mandelbric}} In this subsection, we prove that the \textit{Mandelbric} set crosses the real axis on the interval $\left[-\frac{2}{3\sqrt{3}}, \frac{2}{3\sqrt{3}} \right]$ (see Theorem \ref{t2.3.1}). Before we go through the proof of Theorem \ref{t2.3.1}, we first establish some symmetries (see \cite{Lau} and \cite{Sheng}) in the \textit{Mandelbric}. \begin{lemma}\label{l2.3.1} Let $c \in \ensuremath{\mathcal{M}^3}$. Then $\ensuremath{\overline{c}} \in \ensuremath{\mathcal{M}^3}$. \end{lemma} \begin{demo}
Suppose $c \in \ensuremath{\mathcal{M}^3}$. Then, by Theorem \ref{t2.2.2}, $|Q_{3,c}^m(0)| \leq \sqrt{2}$ $\forall m \in \ensuremath{\mathbb{N}}$. Since $Q_{3,\ensuremath{\overline{c}}}^m(0)=\overline{Q_{3,c}^m(0)}$, $|Q_{3,\ensuremath{\overline{c}}}^m(0)|=|Q_{3,c}^m(0)|$ for all $m \in \ensuremath{\mathbb{N}}$. Thus, $\ensuremath{\overline{c}} \in \ensuremath{\mathcal{M}^3}$. \end{demo} Lemma \ref{l2.3.1} provides that $\ensuremath{\mathcal{M}^3}$ is symmetrical about the real axis. The next lemma is a step forward to show that $\ensuremath{\mathcal{M}^3}$ is symmetrical about the imaginary axis. \begin{lemma}\label{l2.3.2} Let $c=x+ y\bi$ where $x,y \in \ensuremath{\mathbb{R}}$. If $c \in \ensuremath{\mathcal{M}^3}$, then $-c \in \ensuremath{\mathcal{M}^3}$. \end{lemma} \begin{demo}
Let $c=x+ y\bi$ where $x,y \in \ensuremath{\mathbb{R}}$. If $c \in \ensuremath{\mathcal{M}^3}$, then by Theorem \ref{t2.2.2}, $|Q_{3,c}^m(0)|\leq \sqrt{2}$ $\forall m \in \ensuremath{\mathbb{N}}$. By induction, we remark that $\forall m \in \ensuremath{\mathbb{N}}$, $Q_{3,-c}^m(0)=-Q_{3,c}^m(0)$, and so $|Q_{3,-c}^m(0)|=|Q_{3,c}^m(0)|\leq \sqrt{2}$ $\forall m \in \ensuremath{\mathbb{N}}$. Thus, $-c \in \ensuremath{\mathcal{M}^3}$. \end{demo} \begin{corollary}\label{c2.3.1} Let $c=x+ y\bi$ where $x,y \in \ensuremath{\mathbb{R}}$. If $c \in \ensuremath{\mathcal{M}^3}$, then $c'=-x+y\bi $ is in $\ensuremath{\mathcal{M}^3}$. \end{corollary} \begin{demo} Let $c=x+ y\bi$ and $c \in \ensuremath{\mathcal{M}^3}$. We want to prove that $c'=-x+ y\bi\in\ensuremath{\mathcal{M}^3}$. By hypothesis and Lemma \ref{l2.3.1}, $\ensuremath{\overline{c}} \in \ensuremath{\mathcal{M}^3}$. Therefore, by Lemma \ref{l2.3.2}, $-\ensuremath{\overline{c}} \in \ensuremath{\mathcal{M}^3}$. Since $-\ensuremath{\overline{c}}=-x+ y\bi$, we have that $c'\in \ensuremath{\mathcal{M}^3}$. \end{demo} This last result will simplify the next proof. \begin{theorem}\label{t2.3.1} The Mandelbric crosses the real axis on the interval $\left[ \frac{-2}{3\sqrt{3}},\, \frac{2}{3\sqrt{3}} \right]$. \end{theorem} \begin{demo} By Corollary \ref{c2.3.1}, we can restrict our proof to the interval $\left[ 0,\, \frac{2}{3\sqrt{3}} \right]$. Let $R_{3,c}(x)=x^3-x+c$ where $c \in \ensuremath{\mathbb{R}}$ and $D=-4+27c^2$. We start by showing that no point $c>\frac{2}{3\sqrt{3}}$ lies in $\ensuremath{\mathcal{M}^3}$. In this case, $D>0$ and $R_{3,c}$ has a real root (see Theorem \ref{t2.1.1}), and this root is given by \begin{equation} x_0=\sqrt[3]{ -\dfrac{c}{2} + \dfrac{\sqrt{c^2-\frac{4}{27}}}{2}} + \sqrt[3]{ -\dfrac{c}{2} - \dfrac{\sqrt{c^2-\frac{4}{27}}}{2} }. \end{equation} Suppose that $c \in \ensuremath{\mathcal{M}^3}$, \textit{i.e.} $\ensuremath{\left\lbrace Q_{3,c}^m(0) \right\rbrace_{m=1}^{\infty}}$ is bounded. Then, Lemma \ref{l2.2.1} implies that $\ensuremath{\left\lbrace Q_{3,c}^m(0) \right\rbrace_{m=1}^{\infty}}$ is strictly increasing and converges to $c_0>0$. Since $Q_{3,c}^m(0)$ is a polynomial function for all $m\in \ensuremath{\mathbb{N}}$, we have that \begin{align} c_0&=\lim_{m \rightarrow \infty} Q_{3,c}^{m+1}(0)\\ &= Q_{3,c}( \lim_{m \rightarrow \infty} Q_{3,c}^{m}(0) )\\ &= Q_{3,c}(c_0)\text{.} \end{align} Thus, $c_0$ is a real root of $R_{3,c}$ and $c_0=x_0$. However, since $0\leq \sqrt{c^2-\frac{4}{27}}<c$, both terms under the cube roots are negative, so that \begin{equation} x_0= \sqrt[3]{ -\dfrac{c}{2} + \dfrac{\sqrt{c^2-\frac{4}{27}}}{2} } + \sqrt[3]{ -\dfrac{c}{2} - \dfrac{\sqrt{c^2-\frac{4}{27}}}{2} }<0. \end{equation} This contradicts $x_0=c_0>0$. Thus, $c \not \in \ensuremath{\mathcal{M}^3}$.
Next, we show that for $0 \leq c \leq \frac{2}{3\sqrt{3}}$, $c$ lies in $\ensuremath{\mathcal{M}^3}$. Obviously, $c=0$ is in $\ensuremath{\mathcal{M}^3}$. Suppose that $0 < c \leq \frac{2}{3\sqrt{3}}$. In this case, $D \leq 0$ and by Theorem \ref{t2.1.1}, $R_{3,c}(x)$ has three real roots, each of the form (see \cite{Parise}) \begin{equation}\label{e2.3.1} {\left( -\dfrac{c}{2} + \bi\dfrac{\sqrt{-c^2+\frac{4}{27}}}{2} \right)}^{1/3} + {\left( -\dfrac{c}{2} -\bi \dfrac{\sqrt{-c^2+\frac{4}{27}}}{2} \right)}^{1/3}. \end{equation} Following De Moivre's formula, one of these roots can be expressed as follows: \begin{equation} a=\dfrac{2}{\sqrt{3}}\cos \left( \frac{\theta}{3}\right) \end{equation}
for $c\in \left( 0, \, \frac{2}{3\sqrt{3}} \right]$ and $\theta=\arctan \left(\frac{\sqrt{-D}}{-3c\sqrt{3}}\right)+\pi$ where $\frac{\pi}{2} < \theta \leq \pi$. We prove by induction that $|Q_{3,c}^m(0)|<a$ $\forall m \in \ensuremath{\mathbb{N}}$. For $m=1$, we have that $|Q_{3,c}(0)|=|c|<a$ because $|c| < \frac{1}{\sqrt{3}} \leq a$. Indeed, since $\frac{\pi}{2} < \theta \leq \pi$, we obtain $\frac{1}{\sqrt{3}} \leq a < 1$. Now, suppose that $|Q_{3,c}^k(0)| < a$ for some $k \in \ensuremath{\mathbb{N}}$. Then, since $R_{3,c}(a)=a^3-a+c=0$ and $c>0$, \begin{equation*}
|Q_{3,c}^{k+1}(0)|=|(Q_{3,c}^k(0))^3+c|\leq |(Q_{3,c}^k(0))|^3+|c| < a^3+c=a. \end{equation*}
Thus, the proposition is true for $k+1$ and $|Q_{3,c}^m(0)|<a$ $\forall m \in \ensuremath{\mathbb{N}}$. Since $a \leq \sqrt{2}$, by Theorem \ref{t2.2.2} we have $c \in \ensuremath{\mathcal{M}^3}$.
In conclusion, $\ensuremath{\mathcal{M}^3} \cap \ensuremath{\mathbb{R}}_+ = \left[ 0, \, \frac{2}{3\sqrt{3}} \right]$. \end{demo}
\section{\textit{Multibrot} sets for hyperbolic numbers} Previously, we treated the \textit{Multibrot} sets for complex numbers. In this section, we propose an extension of the Mandelbrot set for hyperbolic numbers called the \textit{Hyperbrots} and we prove that $\ensuremath{\mathcal{M}^3}$ for hyperbolic numbers denoted by $\ensuremath{\mathcal{H}^3}$ is a square of side length $\frac{2}{3\sqrt{3}}\sqrt{2}$.
\subsection{Definition of the sets $\mathcal{H}^p$} Based on the works of Metzler and Senn (see \cite{MET} and \cite{Senn} respectively) on the hyperbolic Mandelbrot set, we define the Hyperbrots as follows: \begin{definition}\label{d3.1} Let $Q_{p,c}(z)=z^p+c$ where $z,c \in \ensuremath{\mathbb{D}}$ and $p\geq 2$ an integer. The Hyperbrots are defined as the sets \begin{equation}
\ensuremath{\mathcal{H}}^p:= \oa c \in \ensuremath{\mathbb{D}} \, | \, \ensuremath{\left\lbrace Q_{p,c}^m(0) \right\rbrace_{m=1}^{\infty}} \text{ is bounded } \fa \text{.} \end{equation} \end{definition} Metzler proved that $\ensuremath{\mathcal{H}^2}$ is a square with diagonal length $\frac{9}{4}$ and side length $\frac{9}{8} \sqrt{2}$. We use the same approach to prove that $\ensuremath{\mathcal{H}^3}$ is a square with diagonal length $\frac{4}{3\sqrt{3}}$ and side length $\frac{2}{3\sqrt{3}}\sqrt{2}$. In what follows, we write a hyperbolic number $z$ as $(u,v)^{\top}$ and the fixed number $c$ as $(a,b)^{\top}$, where $\top$ denotes the transpose of a column vector in $\ensuremath{\mathbb{R}}^2$.
\subsection{Special case: $\mathcal{H}^3$} First, we recall some of the basic tools introduced by Metzler. \begin{definition}\label{d3.2} Let $(u,v)^{\top}, (x,y)^{\top} \in \ensuremath{\mathbb{R}}^2$. We define two multiplication operations $\diamond$ and $\ast$ on $\ensuremath{\mathbb{R}}^2$ as \begin{equation}\label{e3.1} \begin{pmatrix} u\\v \end{pmatrix} \diamond \begin{pmatrix} x\\y \end{pmatrix}:= \begin{pmatrix} ux+vy\\vx+uy \end{pmatrix} \end{equation} \begin{equation}\label{e3.2} \begin{pmatrix} u\\v \end{pmatrix} \ast \begin{pmatrix} x\\y \end{pmatrix}:= \begin{pmatrix} ux\\vy \end{pmatrix}. \end{equation} \end{definition} \begin{remark}\label{r3.1} The operation $\diamond$ corresponds to the multiplication $\cdot$ of two hyperbolic numbers as we adopted the two-dimensional vector notation. We use the symbols $\diamond_{\circ n}$ and $\ast_{\circ n}$ to denote the $n$ consecutive multiplications $\diamond \circ \diamond \circ \ldots \circ \diamond$ and $\ast \circ \ast \circ \ast \ldots \circ \ast$ respectively. \end{remark} With the usual addition operation $+$ on $\ensuremath{\mathbb{R}}^2$, $(\ensuremath{\mathbb{R}}^2,+,\diamond )$ and $(\ensuremath{\mathbb{R}}^2,+,\ast )$ are commutative rings with unity.
Let $T:\, \ensuremath{\mathbb{R}}^2 \rightarrow \ensuremath{\mathbb{R}}^2$ be the following matrix \begin{equation}\label{e3.3} T:= \begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix}\text{.} \end{equation} Then, $T$ is an isomorphism between $(\ensuremath{\mathbb{R}}^2,+,\diamond )$ and $(\ensuremath{\mathbb{R}}^2,+,\ast )$. Now, we define \begin{equation}\label{e3.4} \ensuremath{\mathbf{H}_{p,c}} \vecxy := \vecxy \diamond_{\circ p} \vecxy + \vecab \end{equation} and we generalize a result that is included in the proof of Metzler for the case $p=2$. \begin{lemma}\label{l3.1} For all $m \in \ensuremath{\mathbb{N}}$, we have that \begin{equation}\label{e3.5} T\ensuremath{\mathbf{H}_{p,c}}^m \vecxy = \begin{pmatrix} Q_{p,a-b}^m(x-y) \\ Q_{p,a+b}^m(x+y) \end{pmatrix} \end{equation} where $T$ is the matrix of equation \eqref{e3.3} and $Q_{p,c}(z)=z^p+c$ with $z,c\in \ensuremath{\mathbb{R}}$. \end{lemma} The proof can be found in \cite{Parise}. It is similar to the one Metzler gave in his article (see \cite{MET}). We just replace $\diamond$ by $\diamond_{\circ p}$, $\ast$ by $\ast_{\circ p}$ and the degree $2$ of $P_{(a,b)}$ by the integer $p \geq 2$. Hence, Lemma \ref{l3.1} allows us to separate the dynamics of $\ensuremath{\mathbf{H}_{p,c}} \vecxy$ in terms of the dynamics of two real polynomials $Q_{p,a-b}(x-y)$ and $Q_{p,a+b}(x+y)$. We now use Lemma \ref{l3.1} and Theorem \ref{t2.3.1} to prove the next result. \begin{theorem}\label{t3.2}
$\ensuremath{\mathcal{H}^3} = \oa c=(a,b)^{\top} \in \ensuremath{\mathbb{R}}^2 \, | \, |a|+|b| \leq \frac{2}{3\sqrt{3}} \fa$. \end{theorem} \begin{demo} By Lemma \ref{l3.1} and the remark right after, $\oa \ensuremath{\mathbf{H}_{p,c}}^m (\mathbf{0}) \fa _{m=1}^{\infty}$ is bounded iff the real sequences $\oa Q_{3,a-b}^m(0) \fa _{m=1}^{\infty}$ and $\oa Q_{3,a+b}^m(0) \fa _{m=1}^{\infty}$ are bounded . But, according to Theorem \ref{t2.3.1}, these sequences are bounded iff \begin{align}
|a-b| \leq \frac{2}{3\sqrt{3}} \mbox{ and } |a+b|\leq \frac{2}{3\sqrt{3}} \label{e3.7}\text{.} \end{align}
Then, by a simple computation we obtain $|a|+|b| \leq \frac{2}{3\sqrt{3}}$. Conversely, if $|a|+|b| \leq \frac{2}{3\sqrt{3}}$ holds, then by the properties of the absolute value, we obtain the inequalities in \eqref{e3.7}. Thus, we obtain the following characterization of $\ensuremath{\mathcal{H}^3}$: $\ensuremath{\mathcal{H}^3} = \oa c=(a,b)^{\top} \in \ensuremath{\mathbb{R}}^2 \, | \, |a|+|b| \leq \frac{2}{3\sqrt{3}} \fa \text{.}$ \end{demo} Figure \ref{fig3.1} represents an image of the set $\ensuremath{\mathcal{H}^3}$. \begin{figure}\label{fig3.1}
\end{figure}
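As a small numerical illustration of Lemma \ref{l3.1} and Theorem \ref{t3.2} (an informal sketch, not part of the proof), membership in $\ensuremath{\mathcal{H}^3}$ can be tested by running the two decoupled real iterations $Q_{3,a-b}$ and $Q_{3,a+b}$:
\begin{verbatim}
# Informal sketch: H^3 membership via the two decoupled real iterations.
import math

def bounded_real_orbit(c, max_iter=1000, L=math.sqrt(2.0)):
    # real Mandelbric test: |Q_{3,c}^m(0)| must stay below 2^{1/(3-1)} = sqrt(2)
    x = 0.0
    for _ in range(max_iter):
        x = x ** 3 + c
        if abs(x) > L:
            return False
    return True

def in_hyperbric(a, b):
    # the hyperbolic orbit of (a, b) is bounded iff both real orbits are bounded
    return bounded_real_orbit(a - b) and bounded_real_orbit(a + b)

r = 2.0 / (3.0 * math.sqrt(3.0))          # half-diagonal of the square
print(in_hyperbric(0.9 * r, 0.0))         # inside the square: expected True
print(in_hyperbric(0.7 * r, 0.7 * r))     # |a| + |b| = 1.4 r: expected False
\end{verbatim}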
\section{Generalized Mandelbrot sets for tricomplex numbers}\label{sec5} In this section, we use the set of tricomplex numbers to generalize the \textit{Multibrot} sets. Particularly, we give some basic properties of the tricomplex \textit{Multibrot} and we continue the exploration of the \textit{Multibrot} started in \cite{GarantRochon}. We will concentrate our exploration on the case $\ensuremath{\mathcal{M}}_3^3$ corresponding to the polynomial $Q_{3,c}(\eta)=\eta^3+c$ with $\eta,c \in \ensuremath{\mathbb{M}} (3)$.
\subsection{Definition and properties of $\mathcal{M}_3^p$} The authors of \cite{Chine1} defined the bicomplex \textit{Multibrot} sets as follows: \begin{definition}\label{def5.1} Let $Q_{p,c}(\zeta)=\zeta^p+c$ where $\zeta , c \in \ensuremath{\mathbb{M}} (2)$ and $p \geq 2$ is an integer. The bicomplex \textit{Multibrot} set is defined as the set \begin{equation*}
\ensuremath{\mathcal{M}}_2^p:= \oa c\in \ensuremath{\mathbb{M}} (2) \, | \, \ensuremath{\left\lbrace Q_{p,c}^m(0) \right\rbrace_{m=1}^{\infty}} \text{ is bounded } \fa \text{.} \end{equation*} \end{definition} In \cite{Chine1}, they proved that $\ensuremath{\mathcal{M}}_2^p$ can be expressed as an $\ensuremath{\mathbb{M}} (2)$-\textit{cartesian} set and is connected. In the next theorem, we improve their result on the discus bounding $\ensuremath{\mathcal{M}}_2^p$, in conformity with our Theorem \ref{t2.2.1}. \begin{theorem}\label{t5.1} Let $\ensuremath{\mathcal{M}}_2^p$ denote the bicomplex \textit{Multibrot} set for an integer $p\geq 2$. Then, the following inclusions hold: \begin{equation} \ensuremath{\mathcal{M}}_2^p \subset \overline{\mathbf{D_2}(0,2^{\frac{1}{p-1}},2^{\frac{1}{p-1}})} \subset \overline{\mathbf{B_2^1}(0,2^{\frac{1}{p-1}})}\text{.} \end{equation} \end{theorem} \begin{demo} We know from \cite{Chine1} that $\ensuremath{\mathcal{M}}_2^p=\ensuremath{\mathcal{M}}_1^p \times_{\eb}\ensuremath{\mathcal{M}}_1^p$. Moreover, by Theorem \ref{t2.2.1}, we know that $\ensuremath{\mathcal{M}}_1^p \subset \overline{\mathbf{B_1^1}(0,2^{\frac{1}{p-1}})}$. So, combining the last two statements, we obtain the left inclusion. For the right inclusion, we use the inclusion $\overline{\mathbf{D_2}(a,\bf{r_1},\bf{r_2})} \subset \overline{\mathbf{B_2^1}(a,\sqrt{\frac{\bf{r_1}^2+\bf{r_2}^2}{2}})}$ (see \cite{Baley}) with $a=0$ and $\bf{r_1}=\bf{r_2} = 2^{\frac{1}{p-1}}$.
Now, the tricomplex \textit{Multibrot} sets are defined analogously to the bicomplex ones: \begin{definition}\label{d5.2} Let $Q_{p,c}(\eta)=\eta^p+c$ where $\eta , c \in \ensuremath{\mathbb{M}} (3)$ and $p\geq 2$ is an integer. The tricomplex \textit{Multibrot} set is defined as the set \begin{equation}
\ensuremath{\mathcal{M}}_3^p:=\oa c \in \ensuremath{\mathbb{M}} (3) \, | \, \ensuremath{\left\lbrace Q_{p,c}^m(0) \right\rbrace_{m=1}^{\infty}} \text{ is bounded } \fa \text{.} \end{equation} \end{definition} We have the following theorem that characterizes $\ensuremath{\mathcal{M}}_3^p$ set as a $\ensuremath{\mathbb{M}} (3)$-\textit{cartesian} product of $\ensuremath{\mathcal{M}}_2^p$. \begin{theorem}\label{t5.2} $\ensuremath{\mathcal{M}}_3^p=\ensuremath{\mathcal{M}}_2^p \times_{\ett} \ensuremath{\mathcal{M}}_2^p$. \end{theorem} \begin{demo} Let $c=c_1 + c_2\bb=(c_1- c_2\bt)\ett + (c_1+ c_2\bt)\etc$ as a tricomplex numbers. So, by Definition \ref{d5.2}, $c \in \ensuremath{\mathcal{M}}_3^p$ iff $\oa Q_{p,c}^m(0) \fa$ is bounded. However, from Theorem \ref{theo2.2}, $Q_{p,c}^m(0)$ can be expressed with the idempotent representation as follows: \begin{equation} Q_{p,c}^m(0)=Q_{p,c_1- c_2\bt}^m(0)\ett + Q_{p,c_1+ c_2\bt}^m(0)\etc \end{equation} for all $m \in \ensuremath{\mathbb{N}}$. Moreover, in \cite{Baley}, it is proved for the general case of multicomplex numbers that: \begin{equation} \Vert \zeta \Vert_n=\sqrt{\dfrac{\Vert \zeta_1 - \zeta_2 \bnm \Vert_{n-1}^2+ \Vert \zeta_1 + \zeta_2 \bnm \Vert_{n-1}^2}{2}}\label{e5.1} \end{equation} where $\zeta=\zeta_1+ \zeta_2 \bn\in \ensuremath{\mathbb{M}} (n)$. So, $\oa Q_{p,c}^m(0) \fa_{m=1}^{\infty}$ is bounded iff $\oa Q_{p,c_1- c_2\bt}^m(0) \fa_{m=1}^{\infty}$ and $\oa Q_{p,c_1+ c_2\bt}^m(0)\fa_{m=1}^{\infty}$ are bounded. By Definition \ref{def5.1}, we obtain that $c_1- c_2\bt, c_1+ c_2\bt \in \ensuremath{\mathcal{M}}_2^p$. Thus, $c=(c_1- c_2\bt)\ett + (c_1+ c_2\bt)\etc \in \ensuremath{\mathcal{M}}_2^p \times_{\ett} \ensuremath{\mathcal{M}}^p_2$. \end{demo} If we combine Theorem \ref{t5.2} with Theorem \ref{t2.2.1}, we get the following statement. \begin{theorem}\label{t5.3} Let $\ensuremath{\mathcal{M}}_3^p$ be the tricomplex \textit{Multibrot} set for $p\in\ensuremath{\mathbb{N}}\backslash\{0,1\}$. Then the following inclusion holds: \begin{equation} \ensuremath{\mathcal{M}}_3^p \subset \overline{\bf{D_3}}(0,2^{\frac{1}{p-1}},2^{\frac{1}{p-1}})\text{.} \end{equation} \end{theorem}
Finally, in \cite{Chine1}, it is proved that the set $\ensuremath{\mathcal{M}}_2^p$ is connected $\forall p\in\ensuremath{\mathbb{N}}\backslash\{0,1\}$. We obtain the same property for $\ensuremath{\mathcal{M}}_3^p$. \begin{theorem}\label{t5.4} $\ensuremath{\mathcal{M}}_3^p$ is a connected set. \end{theorem} \begin{demo} Define the function $\Gamma_2:\, X_1 \times X_2 \rightarrow X_1 \times_{\ett}X_2$ with $X_1,X_2 \subset \ensuremath{\mathbb{M}} (2)$ and $X=X_1 \times_{\ett} X_2\subset \ensuremath{\mathbb{M}} (3)$ by $\Gamma_2(u_1,u_2)=u_1\ett + u_2\etc$. Obviously, $\Gamma_2$ is a homeomorphism. So, if $X_1,X_2$ are connected sets, then $X$ is also a connected set. Thus, by Theorem \ref{t5.2}, $\ensuremath{\mathcal{M}}_3^p=\ensuremath{\mathcal{M}}_2^p \times_{\ett}\ensuremath{\mathcal{M}}_2^p$ and since $\ensuremath{\mathcal{M}}_2^p$ is connected (see \cite{Chine1}), $\ensuremath{\mathcal{M}}_3^p$ is also a connected set $\forall p\in\ensuremath{\mathbb{N}}\backslash\{0,1\}$. \end{demo}
Theorem \ref{t5.3} is useful to generate the divergence layers of the tricomplex \textit{Multibrot} sets. We use this information to draw the images of the next part. Moreover, we conjecture that the Fatou-Julia Theorem is true for tricomplex \textit{Multibrot} sets and use it to give more information about the topology of the sets. For a statement of the generalized Fatou-Julia Theorem, we refer the reader to \cite{GarantRochon}.
\subsection{Principal 3D slices of the set $\mathcal{M}_3^3$} We now want to visualize the tricomplex \textit{Multibrot} sets. Since they live in eight dimensions, we take the same approach as in \cite{GarantRochon} to accomplish this goal. In that way, we denote the principal 3D slice for a specific tricomplex \textit{Multibrot} set by $\ensuremath{\mathcal{T}}^p$ and define it as the set \begin{equation}\label{eq5.2.1}
\ensuremath{\mathcal{T}}^p:= \ensuremath{\mathcal{T}}^p(\bim , \bk , \bil ) =\oa c \in \ensuremath{\mathbb{T}} (\bim , \bk , \bil ) \, | \, \oa Q_{p,c}^m(0) \fa_{m=1}^{\infty} \text{ is bounded } \fa \text{.} \end{equation} So the number $c$ has at most three nonzero components, namely those along $\bim$, $\bk$ and $\bil$. In total, there are 56 possible combinations of principal 3D slices. To attempt a classification of these slices, we introduce a relation $\sim$ (see \cite{GarantRochon}). \begin{definition}\label{def5.2.1} Let $\ensuremath{\mathcal{T}}_1^p( \bim , \bk , \bil )$ and $\ensuremath{\mathcal{T}}_2^p( \bn , \ensuremath{\bf i_q} , \ensuremath{\bf i_s} )$ be two 3D slices of a tricomplex \textit{Multibrot} set $\mathcal{M}_3^p$ that correspond, respectively, to $Q_{p,c_1}$ and $Q_{p,c_2}$. Then, $\ensuremath{\mathcal{T}}_1^p \sim \ensuremath{\mathcal{T}}_2^p$ if there exists a bijective linear function $\varphi : \mathrm{span}_{\ensuremath{\mathbb{R}}}\oa 1,\bim , \bk , \bil \fa \rightarrow \mathrm{span}_{\ensuremath{\mathbb{R}}} \oa 1, \bn , \ensuremath{\bf i_q} , \ensuremath{\bf i_s} \fa$ such that $(\varphi \circ Q_{p,c_1} \circ \varphi^{-1})(\eta )=Q_{p,c_2}(\eta )$ $\forall \eta \in \mathrm{span}_{\ensuremath{\mathbb{R}}}\oa 1,\bn , \ensuremath{\bf i_q} , \ensuremath{\bf i_s} \fa$. In that case, we say that $\ensuremath{\mathcal{T}}_1^p$ and $\ensuremath{\mathcal{T}}_2^p$ have the same dynamics. \end{definition} If two 3D slices are related in terms of $\sim$, then we also say that they are symmetrical. This comes from the fact that their computer visualizations give the same images. In \cite{Parise}, it is shown that $\sim$ is an equivalence relation on the set of 3D slices of $\ensuremath{\mathcal{M}}_3^p$. For the rest of this article, we focus on the principal slices of the $\ensuremath{\mathcal{M}}_3^3$ set, also called the tricomplex \textit{Mandelbric} set.
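The count of 56 principal 3D slices simply comes from choosing which three of the eight tricomplex units $\oa 1, \bo , \bt , \bb , \bq , \bjp , \bjd , \bjt \fa$ carry a nonzero component of $c$. A purely illustrative Python check of this count is given below.
\begin{verbatim}
from itertools import combinations

# The eight tricomplex units that can carry a nonzero component of c.
units = ["1", "i1", "i2", "i3", "i4", "j1", "j2", "j3"]

# A principal 3D slice T^p(im, ik, il) keeps exactly three of them.
slices = list(combinations(units, 3))
print(len(slices))    # 56, the number quoted in the text
print(slices[:3])     # e.g. ('1', 'i1', 'i2'), ('1', 'i1', 'i3'), ...
\end{verbatim}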
Garant-Pelletier and Rochon \cite{GarantRochon} showed that $\ensuremath{\mathcal{M}}_3^2$ has eight principal 3D slices. So, according to \eqref{eq5.2.1} and the eight principal slices defined in \cite{GarantRochon}, we have the next lemma that corresponds to the first case discussed in section 2, \textit{i.e.} the iterates $Q_{p,c}^m(0)$ for $m\in \ensuremath{\mathbb{N}}$ are closed in the set $\ensuremath{\mathbb{M}} (\bk , \bil )$. \begin{lemma}[Parisé \cite{Parise}]\label{SymG1} We have the following symmetries in terms of $\sim$: \begin{enumerate} \item $\mathcal{T}^3(1,\mathbf{i_1},\mathbf{i_2}) \sim \ensuremath{\mathcal{T}}^3(1,\bk , \bil )$ $\forall \bk , \bil \in \oa \bo , \bt , \bb , \bq \fa$; \item $\ensuremath{\mathcal{T}}^3(1, \bjp , \bjd ) \sim \ensuremath{\mathcal{T}}^3(1, \bjp , \bjt ) \sim \ensuremath{\mathcal{T}}^3(1, \bjd , \bjt )$; \item $\ensuremath{\mathcal{T}}^3(\bo , \bt , \bjp ) \sim \ensuremath{\mathcal{T}}^3(\bk , \bil , \bk\bil )$ for all $\bk , \bil \in \oa \bo ,\bt ,\bb ,\bq \fa$ and \item $\ensuremath{\mathcal{T}}^3(1, \bo , \bjp ) \sim \ensuremath{\mathcal{T}}^3 (1, \bk , \ensuremath{\bf j_l} )$ for $\bk \in \oa \bo ,\bt ,\bb , \bq \fa$ and $\ensuremath{\bf j_l} \in \oa \bjp , \bjd , \bjt \fa$. \end{enumerate} \end{lemma} Figures \ref{fig5.1}, \ref{fig5.2}, \ref{fig5.3} and \ref{fig5.4} illustrate one slice from each of the four classes of 3D slices of Lemma \ref{SymG1}. \begin{figure}
\caption{Four 3D slices of the \textit{Mandelbric}}
\label{fig5.1}
\label{fig5.2}
\label{fig5.3}
\label{fig5.4}
\label{figAllG1}
\end{figure} Figures \ref{fig5.1} and \ref{fig5.3}, which correspond to the slices $\ensuremath{\mathcal{T}}^3(1,\bo , \bt )$ and $\ensuremath{\mathcal{T}}^3(\bo ,\bt , \bjp )$, seem to look the same. Indeed, we have the next lemma that confirms this remark. \begin{lemma}\label{SymG2} Slices $\ensuremath{\mathcal{T}}^3(1 , \bo , \bt )$ and $\ensuremath{\mathcal{T}}^3(\bo ,\bt ,\bjp )$ have the same dynamics in the sense of the relation $\sim$. \end{lemma} \begin{demo} Set the numbers $c$ and $c'$ and also the function $\varphi : \ensuremath{\mathbb{M}} (\bo , \bt ) \rightarrow \ensuremath{\mathbb{M}} (\bo , \bt )$ as \begin{align*} c=c_1 + c_2\bo + c_3\bt \text{,} & \quad c'=c_2\bo + c_3\bt + c_1\bjp \end{align*} and \begin{equation*} \eta =\varphi (x_1 + x_2\bo + x_3 \bt + x_4 \bjp )= x_4 + x_2 \bo + x_3\bt + x_1\bjp\text{.} \end{equation*} So, \begin{align*} (\varphi \circ Q_{3,c} \circ \varphi^{-1} ) (\eta )&=\varphi \left( (x_1^3-3x_1x_2^2-3x_1x_3^2+3x_1x_4^2 + 6x_2x_3x_4+c_1)\right.\\ & +(-x_2^3 +3x_1^2x_2 - 3x_2x_3^2 +3x_2x_4^2 - 6x_1x_3x_4 +c_2) \bo \\ & + (-x_3^3 +3x_1^2x_3 - 3x_2^2x_3 +3x_3x_4^2 - 6x_1x_2x_4 + c_3) \bt \\ &\left. + (x_4^3 + 3x_1^2x_4 - 3x_2^2x_4 - 3x_3^2x_4 + 6x_1x_2x_3)\bjp \right) \\ &=(x_4^3 + 3x_1^2x_4 - 3x_2^2x_4 - 3x_3^2x_4 + 6x_1x_2x_3)\\ &+ (-x_2^3 +3x_1^2x_2 - 3x_2x_3^2 +3x_2x_4^2 - 6x_1x_3x_4 +c_2)\bo \\ &+ (-x_3^3 +3x_1^2x_3 - 3x_2^2x_3 +3x_3x_4^2 - 6x_1x_2x_4 + c_3) \bt\\ &+(x_1^3-3x_1x_2^2-3x_1x_3^2+3x_1x_4^2 + 6x_2x_3x_4+c_1)\bjp \\ &=Q_{3,c'} \left( \eta \right)\text{.} \end{align*} Thus, by Definition \ref{def5.2.1}, we have the result. \end{demo} Because $\sim$ is an equivalence relation, by Lemmas \ref{SymG1} and \ref{SymG2}, we have found the first principal slice of $\ensuremath{\mathcal{M}}_3^3$; we call this slice the \textit{Tetrabric}. Now, for slices that correspond to the second case (where the iterates of $Q_{3,c}^m(0)$ are not closed in $\ensuremath{\mathbb{M}} (\bk , \bil )$) we have a lemma similar to Lemma \ref{SymG1}. However, when $p=3$, the iterates of $Q_{3,c}^m(0)$ are closed in $\ensuremath{\mathbb{M}} (\bk ,\bil , \bim )$ (see section 2). \begin{lemma}\label{SymG3} We have the following symmetries: \begin{enumerate} \item $\ensuremath{\mathcal{T}}^3(\bo ,\bt ,\bb ) \sim \ensuremath{\mathcal{T}}^3(\bk , \bil ,\bim )$ for $\bk , \bil , \bim \in \oa \bo ,\bt , \bb , \bq \fa$; \item All slices of the form $\ensuremath{\mathcal{T}}^3(\bk , \bil , \ensuremath{\bf j_m} )$, where $\bk\bil \neq \ensuremath{\bf j_m}$, $\bk , \bil \in \oa \bo , \bt , \bb , \bq \fa$, $\bk \neq \bil$ and $\ensuremath{\bf j_m} \in \oa \bjp , \bjd , \bjt \fa$, are equivalent. 
Precisely, \begin{align*} \mathcal{T}^3(\mathbf{i_1},\mathbf{i_2},\mathbf{j_2})& \sim \mathcal{T}^3(\mathbf{i_1},\mathbf{i_2},\mathbf{j_3}) \sim \mathcal{T}^3(\mathbf{i_1},\mathbf{i_3},\mathbf{j_1}) \sim \mathcal{T}^3(\mathbf{i_1},\mathbf{i_3},\mathbf{j_3}) \sim \mathcal{T}^3(\mathbf{i_1},\mathbf{i_4},\mathbf{j_1})\\ & \sim \mathcal{T}^3(\mathbf{i_1},\mathbf{i_4},\mathbf{j_2}) \sim \mathcal{T}^3(\mathbf{i_2},\mathbf{i_3},\mathbf{j_1}) \sim \mathcal{T}^3(\mathbf{i_2},\mathbf{i_3},\mathbf{j_2}) \sim \mathcal{T}^3(\mathbf{i_2},\mathbf{i_4},\mathbf{j_1})\\ & \sim \mathcal{T}^3(\mathbf{i_2},\mathbf{i_4},\mathbf{j_3}) \sim \mathcal{T}^3(\mathbf{i_3},\mathbf{i_4},\mathbf{j_2}) \sim \mathcal{T}^3(\mathbf{i_3},\mathbf{i_4},\mathbf{j_3})\text{;} \end{align*} \item All slices of the form $\ensuremath{\mathcal{T}}^3(\bk , \ensuremath{\bf j_l} , \ensuremath{\bf j_m} )$, where $\bk \in \oa \bo , \bt , \bb , \bq \fa$, $\ensuremath{\bf j_l}, \ensuremath{\bf j_m} \in \oa \bjp , \bjd , \bjt \fa$ and $\ensuremath{\bf j_l} \neq \ensuremath{\bf j_m}$, are equivalent. Precisely, \begin{align*} \mathcal{T}^3(\mathbf{i_1},\mathbf{j_1},\mathbf{j_2}) & \sim \mathcal{T}^3(\mathbf{i_1},\mathbf{j_1},\mathbf{j_3}) \sim \mathcal{T}^3(\mathbf{i_1},\mathbf{j_2},\mathbf{j_3}) \sim \mathcal{T}^3(\mathbf{i_2},\mathbf{j_1},\mathbf{j_2}) \sim \mathcal{T}^3(\mathbf{i_2},\mathbf{j_1},\mathbf{j_3}) \\ & \sim \mathcal{T}^3(\mathbf{i_2},\mathbf{j_2},\mathbf{j_3}) \sim \mathcal{T}^3(\mathbf{i_3},\mathbf{j_1},\mathbf{j_2}) \sim \mathcal{T}^3(\mathbf{i_3},\mathbf{j_1},\mathbf{j_3}) \sim \mathcal{T}^3(\mathbf{i_3},\mathbf{j_2},\mathbf{j_3}) \\ & \sim \mathcal{T}^3(\mathbf{i_4},\mathbf{j_1},\mathbf{j_2}) \sim \mathcal{T}^3(\mathbf{i_4},\mathbf{j_1},\mathbf{j_3}) \sim \mathcal{T}^3(\mathbf{i_4},\mathbf{j_2},\mathbf{j_3}) \text{ and} \end{align*} \item $\ensuremath{\mathcal{T}}^3(\bjp , \bjd , \bjt )$ is equivalent only to itself. \end{enumerate} \end{lemma} A proof of Lemma \ref{SymG3} can be found in \cite{Parise}. The same ideas as in the proof of Lemma \ref{SymG1} are used, but instead of the set $\ensuremath{\mathbb{M}} (\bk , \bil )$ we use the set $\ensuremath{\mathbb{M}} (\bk ,\bil , \bim )$ to define the function $\varphi$. Figure \ref{figAllG2} illustrates one slice from each of the four classes of 3D slices of Lemma \ref{SymG3}. From these images, we remark that the classes of $\ensuremath{\mathcal{T}}^3(1,\bo , \bt )$ and $\ensuremath{\mathcal{T}}^3(\bo , \bt , \bjd )$ generate the same images. We notice the same phenomenon for the slices $\ensuremath{\mathcal{T}}^3(1, \bo , \bjp)$ and $\ensuremath{\mathcal{T}}^3(\bo , \bjp , \bjd )$ and also $\ensuremath{\mathcal{T}}^3(1, \bjp , \bjd )$ and $\ensuremath{\mathcal{T}}^3(\bjp , \bjd , \bjt )$. Indeed, we have the next lemma. \begin{figure}
\caption{Four 3D slices of the \textit{Mandelbric}}
\label{fig5.5}
\label{fig5.6}
\label{fig5.7}
\label{fig5.8}
\label{figAllG2}
\end{figure}\begin{lemma} \label{SymG4} We have that \begin{enumerate} \item $\ensuremath{\mathcal{T}}^3 (1 , \bo , \bt ) \sim \ensuremath{\mathcal{T}}^3(\bo ,\bt ,\bjd )$; \item $\ensuremath{\mathcal{T}}^3(1 , \bo , \bjp ) \sim \ensuremath{\mathcal{T}}^3(\bo , \bjp ,\bjd )$ and; \item $\ensuremath{\mathcal{T}}^3(1, \bjp ,\bjd ) \sim \ensuremath{\mathcal{T}}^3(\bjp , \bjd , \bjt )$. \end{enumerate} \end{lemma} \begin{demo} We prove point \textit{1)} of this lemma. Set the numbers $c$ and $c'$ as \begin{align*} c=c_1 + c_2\bo + c_3\bt \text{,} & \quad c'=c_2\bo + c_3\bt + c_1 \bjd \text{.} \end{align*} Now, let define $\varphi : \ensuremath{\mathbb{M}} (\bo , \bt ) \rightarrow \ensuremath{\mathbb{M}} (\bo , \bt , \bjd )$ as \begin{equation*} \eta = \varphi (x_1 +x_2\bo + x_3\bt + x_4\bjp ) = x_2 \bo + x_3\bt + x_1\bjd - x_4\bjt \text{.} \end{equation*} We obtain \begin{align*} Q_{3,c}(\varphi^{-1}(\eta ) ) &= (x_1^3-3x_1x_2^2-3x_1x_3^2+3x_1x_4^2 + 6x_2x_3x_4 + c_1)\\ & +(-x_2^3 +3x_1^2x_2 - 3x_2x_3^2 +3x_2x_4^2 - 6x_1x_3x_4 +c_2) \bo \\ & + (-x_3^3 +3x_1^2x_3 - 3x_2^2x_3 +3x_3x_4^2 - 6x_1x_2x_4 + c_3) \bt \\ & + (x_4^3 + 3x_1^2x_4 - 3x_2^2x_4 - 3x_3^2x_4 + 6x_1x_2x_3)\bjp \end{align*} and \begin{align*} Q_{3,c'}(\eta )&= (-x_2^3 +3x_1^2x_2 - 3x_2x_3^2 +3x_2x_4^2 - 6x_1x_3x_4 +c_2) \bo \\ &+ (-x_3^3 +3x_1^2x_3 - 3x_2^2x_3 +3x_3x_4^2 - 6x_1x_2x_4 + c_3) \bt \\ &+ (x_1^3-3x_1x_2^2-3x_1x_3^2+3x_1x_4^2 + 6x_2x_3x_4 + c_1) \bjd \\ &- (x_4^3 + 3x_1^2x_4 - 3x_2^2x_4 - 3x_3^2x_4 + 6x_1x_2x_3) \bjt \text{.} \end{align*} Hence, by applying $\varphi$ on the expression of $Q_{3,c}$, we have that $(\varphi \circ Q_{3,c} \circ \varphi^{-1}) (\eta ) = Q_{3,c'}(\eta )$ for every $\eta \in \ensuremath{\mathbb{M}} (\bo , \bt , \bjd )$. Thus, $\ensuremath{\mathcal{T}}^3(1 , \bo , \bt ) \sim \ensuremath{\mathcal{T}}^3(\bo ,\bt ,\bjd )$. For the second part, set the numbers $c$ and $c'$, and also the function $\varphi : \ensuremath{\mathbb{M}} (\bo , \bjp ) \rightarrow \ensuremath{\mathbb{M}} (\bo ,\bjp , \bjd )$ as \begin{align*} c=c_1+c_2\bo +c_3\bjp \text{,} & \quad c'=c_2\bo + c_3\bjp + c_1\bjd \end{align*} and \begin{equation*} \eta = \varphi (x_1 + x_2\bo - x_3 \bt + x_4 \bjp ) = x_2\bo + x_4\bjp + x_1\bjd - x_3 \bq \text{.} \end{equation*} One can verify that $(\varphi \circ Q_{3,c} \circ \varphi^{-1})(\eta ) = Q_{3,c'}(\eta )$ for all $\eta \in \ensuremath{\mathbb{M}} (\bo , \bjp , \bjd )$. Finally, for the last part, set the numbers $c$, $c'$ and the function $\varphi : \ensuremath{\mathbb{M}} (\bjp , \bjd ) \rightarrow \ensuremath{\mathbb{M}} (\bjp , \bjd , \bjt )$ as \begin{align*} c=c_1 + c_2\bjp + c_3 \bjd \text{,} & \quad c'=c_1\bjp + c_2\bjd + c_3\bjt \end{align*} and \begin{equation*} \eta = \varphi (x_1 + x_2 \bjp + x_3 \bjd - x_4 \bjt ) =-x_4 + x_1\bjp + x_2 \bjd + x_3\bjt \text{.} \end{equation*} Thus, $(\varphi \circ Q_{3,c} \circ \varphi^{-1})(\eta ) = Q_{3,c'}(\eta )$ for every $\eta \in \ensuremath{\mathbb{M}} (\bjp , \bjd , \bjt )$. \end{demo} From the previous lemmas, we obtain the following corollary. \begin{corollary}\label{NbPSl} There are four principal 3D slices of the tricomplex \textit{Mandelbric}: \begin{enumerate} \item $\ensuremath{\mathcal{T}}^3(1, \bo , \bt )$ called \textit{Tetrabric}; \item $\ensuremath{\mathcal{T}}^3(1 , \bjp , \bjd )$ called \textit{Perplexbric}; \item $\ensuremath{\mathcal{T}}^3(1 , \bo , \bjp )$ called \textit{Hourglassbric}; \item $\ensuremath{\mathcal{T}}^3(\bo , \bt , \bb )$ called \textit{Metabric}. 
\end{enumerate} \end{corollary} We now treat the second case of Corollary \ref{NbPSl} and show that the \textit{Perplexbric} is an octahedron with edges of length $\frac{2\sqrt{2}}{3\sqrt{3}}$.
\subsection{Special case: slice $\mathcal{T}^3(1,\bjp , \bjd )$} We proved in section 4 that the hyperbolic \textit{Mandelbric} (called the \textit{Hyperbric}) is a square with edges of length $\frac{2\sqrt{2}}{3\sqrt{3}}$ (see Theorem \ref{t3.2}). Our interest is now to generalize the \textit{Hyperbric} to three dimensions. Let us adopt the same notation as in \cite{GarantRochon} for the \textit{Perplexbric} \begin{equation} \mathcal{P}^3:=\mathcal{T}^3(1,\mathbf{j_1},\mathbf{j_2})
=\left\lbrace c=c_1+c_4\mathbf{j_1}+c_6\mathbf{j_2} \, | \, c_i \in \mathbb{R} \text{ and } \left\lbrace Q_{3,c}^m(0)\right\rbrace_{m=1}^{\infty} \text{ is bounded} \right\rbrace \text{.}\label{eq3.3.1} \end{equation} Before proving this result, we need the following lemma.
\begin{lemma}\label{lem3.3.1} We have the following characterization of the \textit{Perplexbric}: \begin{equation*} \mathcal{P}^3=\bigcup_{y\in \left[-\frac{2}{3\sqrt{3}},\frac{2}{3\sqrt{3}}\right]} \left\lbrace \left[ (\mathcal{H}^3-y\mathbf{j_1})\cap (\mathcal{H}^3+y\mathbf{j_1})\right] + y\mathbf{j_2}\right\rbrace \end{equation*} where $\mathcal{H}^3$ is the \textit{Hyperbric} (see section 4). \end{lemma}
\begin{demo} By the definition of $\mathcal{P}^3$ and the idempotent representation, we have that \begin{equation}
\mathcal{P}^3=\left\lbrace c=\left( d-c_6\mathbf{j_1}\right) \gamma_2 + \left( d + c_6\mathbf{j_1}\right) \overline{\gamma}_2 \, | \, \left\lbrace Q_{3,c}^m(0)\right\rbrace_{m=1}^{\infty} \text{ is bounded} \right\rbrace \label{PDefRemod} \end{equation} where $d=c_1+c_4\bjp \in \ensuremath{\mathbb{D}}$. Furthermore, the sequence $\left\lbrace Q_{3,c}^m(0)\right\rbrace_{m=1}^{\infty}$ is bounded iff the two sequences $\left\lbrace Q_{3,d-c_6\mathbf{j_1}}^m(0)\right\rbrace_{m=1}^{\infty}$ and $\left\lbrace Q_{3,d+c_6\mathbf{j_1}}^m(0)\right\rbrace_{m=1}^{\infty}$ are bounded. To continue, we make the following remark about hyperbolic dynamics: $\forall z \in \mathbb{D}$ \begin{equation}
\mathcal{H}^3-z:=\left\lbrace c \in \mathbb{D} \, | \, \left\lbrace Q_{3,c+z}^m(0)\right\rbrace_{m=1}^{\infty} \text{ is bounded } \right\rbrace \text{.}\label{eq3.3.16} \end{equation} By Definition \ref{d3.1}, $\left\lbrace Q_{3,d-c_6\mathbf{j_1}}^m(0)\right\rbrace_{m=1}^{\infty}$ and $\left\lbrace Q_{3,d+c_6\mathbf{j_1}}^m(0)\right\rbrace_{m=1}^{\infty}$ are bounded iff $d-c_6\mathbf{j_1}, d+c_6\mathbf{j_1} \in \mathcal{H}^3$. Therefore, by \eqref{eq3.3.16}, we also have that $d-c_6\mathbf{j_1}, d+c_6\mathbf{j_1} \in \mathcal{H}^3$ iff $d \in (\mathcal{H}^3-c_6\mathbf{j_1}) \cap (\mathcal{H}^3+c_6\mathbf{j_1})$. Hence, \begin{align*}
\mathcal{P}^3&=\left\lbrace c=c_1+c_4\mathbf{j_1}+c_6\mathbf{j_2} \, | \, c_1+c_4\mathbf{j_1} \in (\mathcal{H}^3-c_6\mathbf{j_1}) \cap (\mathcal{H}^3+c_6\mathbf{j_1}) \right\rbrace \\ &=\bigcup_{y\in \mathbb{R}} \left\lbrace \left[(\mathcal{H}^3-y\mathbf{j_1})\cap (\mathcal{H}^3+y\mathbf{j_1})\right] + y\mathbf{j_2}\right\rbrace\text{.} \end{align*} In fact, by Theorem \ref{t3.2}, \begin{equation} (\mathcal{H}^3-y\mathbf{j_1})\cap (\mathcal{H}^3+y\mathbf{j_1})=\emptyset \end{equation} whenever $y\in \left[ -\frac{2}{3\sqrt{3}},\frac{2}{3\sqrt{3}}\right]^{c}$. This leads us to the desired characterization of the \textit{Perplexbric}: \begin{equation*} \mathcal{P}^3=\bigcup_{y\in \left[-\frac{2}{3\sqrt{3}},\frac{2}{3\sqrt{3}}\right]} \left\lbrace \left[(\mathcal{H}^3-y\mathbf{j_1})\cap (\mathcal{H}^3+y\mathbf{j_1})\right] + y\mathbf{j_2}\right\rbrace \text{.} \end{equation*} \end{demo} As a consequence of Lemma \ref{lem3.3.1} and Theorem \ref{t3.2}, we have the following result:
\begin{theorem}\label{theo3.2.5} $\mathcal{P}^3$ is an octahedron with edges of length $\frac{2\sqrt{2}}{3\sqrt{3}}$. \end{theorem}
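The characterization of Lemma \ref{lem3.3.1} also lends itself to a quick numerical sanity check. The short Python sketch below compares, on random points of $\mathrm{span}_{\ensuremath{\mathbb{R}}}\oa 1, \bjp , \bjd \fa$, an escape-time test with the octahedron inequality $|c_1|+|c_4|+|c_6|\leq \frac{2}{3\sqrt{3}}$ implied by Theorem \ref{theo3.2.5}. It is only an illustration and it assumes that, on this subspace, the idempotent representation splits the iteration into the four real orbits driven by $c_1 \pm c_4 \pm c_6$.
\begin{verbatim}
import math, random

R = 2.0 / (3.0 * math.sqrt(3.0))      # half-diagonal of the octahedron

def bounded_real_orbit(c, max_iter=500, bound=math.sqrt(2.0)):
    """Real cubic orbit x -> x**3 + c started at 0 (escape radius 2**(1/2))."""
    x = 0.0
    for _ in range(max_iter):
        x = x ** 3 + c
        if abs(x) > bound:
            return False
    return True

def in_P3_dynamics(c1, c4, c6):
    # assumed splitting of span{1, j1, j2} into the four real orbits below
    return all(bounded_real_orbit(c1 + s4 * c4 + s6 * c6)
               for s4 in (-1, 1) for s6 in (-1, 1))

def in_P3_octahedron(c1, c4, c6):
    return abs(c1) + abs(c4) + abs(c6) <= R

random.seed(0)
disagreements = sum(
    in_P3_dynamics(*c) != in_P3_octahedron(*c)
    for c in ([random.uniform(-0.6, 0.6) for _ in range(3)]
              for _ in range(2000))
)
print(disagreements)  # typically 0, up to points very close to the boundary
\end{verbatim}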
\section{Conclusion} In this article, we have treated \textit{Multibrot} sets for complex, hyperbolic and tricomplex numbers. Many results presented in this article can be generalized to an arbitrary integer degree $p\geq 2$.
For the case of complex \textit{Multibrot} sets, it would be desirable to extend the proof of Theorem \ref{t2.3.1} to all \textit{Multibrot} sets. However, the proof becomes increasingly technical as the degree of the polynomial $Q_{p,c}$ increases. So, we must find a different approach to prove the following conjecture. \begin{conjecture}\label{cj1} Let $\ensuremath{\mathcal{M}^p}$ be the generalized Mandelbrot set for the polynomial $Q_{p,c}(z)=z^p+c$ where $z,c \in \ensuremath{\mathbb{C}}$ and $p\geq 2$ is an integer. Then, we have two cases for the intersection $\ensuremath{\mathcal{M}^p} \cap \ensuremath{\mathbb{R}}$: \begin{itemize} \item[i.] If $p$ is even, then $\ensuremath{\mathcal{M}^p} \cap \ensuremath{\mathbb{R}} = \left[-2^{\frac{1}{p-1}},(p-1)p^{\frac{-p}{p-1}} \right]$; \item[ii.] If $p$ is odd, then $\ensuremath{\mathcal{M}^p} \cap \ensuremath{\mathbb{R}}=\left[ -(p-1)p^{\frac{-p}{p-1}} , \, (p-1)p^{\frac{-p}{p-1}} \right]$. \end{itemize} \end{conjecture} A small numerical check of this conjecture is sketched at the end of this section. This would lead us to another conjecture about the \textit{Hyperbrots}. \begin{conjecture}\label{cj2} The Hyperbrots are squares and the following characterization of \textit{Hyperbrots} holds: \begin{enumerate}
\item If $p$ is even, then $\ensuremath{\mathcal{H}}^p = \oa c=a+b\ensuremath{{\bf j}} \, | \, -2^{\frac{1}{p-1}}\leq a-b, a+b \leq (p-1)p^{\frac{-p}{p-1}} \fa$;
\item If $p$ is odd, then $\ensuremath{\mathcal{H}}^p = \oa c=a+b\ensuremath{{\bf j}} \, | \, |a|+|b| \leq (p-1)p^{\frac{-p}{p-1}} \fa $. \end{enumerate} \end{conjecture} Further explorations of 3D slices of the tricomplex \textit{Multibrot} sets are also planned. In particular, we are interested in the case where the degree of the tricomplex polynomial is an integer $p>3$.
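As announced after Conjecture \ref{cj1}, here is a quick numerical plausibility check (not a proof): for a few small degrees $p$, the right endpoint of $\ensuremath{\mathcal{M}^p} \cap \ensuremath{\mathbb{R}}$ is located by bisection on an escape-time test and compared with the conjectured value $(p-1)p^{\frac{-p}{p-1}}$. The Python sketch below is illustrative only; the escape criterion $|x|>\max(2^{\frac{1}{p-1}},|c|)$ is used as a standard sufficient condition for divergence.
\begin{verbatim}
import math

def bounded_real(c, p, max_iter=3000):
    """Escape-time test on the real line for x -> x**p + c started at 0."""
    x, bound = 0.0, max(2.0 ** (1.0 / (p - 1)), abs(c))
    for _ in range(max_iter):
        x = x ** p + c
        if abs(x) > bound:
            return False
    return True

def right_endpoint(p, lo=0.0, hi=2.0, tol=1e-6):
    """Bisection for sup{c >= 0 : orbit of 0 under x -> x**p + c is bounded}."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if bounded_real(mid, p) else (lo, mid)
    return 0.5 * (lo + hi)

for p in (2, 3, 4, 5):
    conjectured = (p - 1) * p ** (-p / (p - 1))
    print(p, round(right_endpoint(p), 4), round(conjectured, 4))
\end{verbatim}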
\section*{Acknowledgments} DR is grateful to the Natural Sciences and Engineering Research Council of Canada (NSERC) for financial support. POP would also like to thank the NSERC for the award of a Summer undergraduate Research grant.
\end{document}
\begin{document}
\title{\large{A GENERAL FRAMEWORK FOR FREQUENTIST MODEL AVERAGING }\if11{\footnote{ \baselineskip=12pt Corresponding Author: {Min-ge Xie ([email protected])}.
This article is based in part on the thesis of the first author. The research was supported in part by US NSF grants DMS-1513483 (MX), DMS-1418042 (HL), and by Award No. 11529101 (HL) from the National Natural Science Foundation of China. }} \fi } \if11 \author{Priyam Mitra$^*$, Heng Lian$^\dagger$, Ritwik Mitra$^{\dagger\dagger}$, Hua Liang$^\ddagger$ and Min-ge Xie$^*$ \\ Rutgers University$^*$, City University of Hong Kong$^\dagger$, \\ Princeton University$^{\dagger\dagger}$ \& George Washington University$^\ddagger$ } \fi
\date{} \maketitle
\begin{center} \textbf{Abstract} \end{center}
{ \singlespace Model selection strategies have been routinely employed to determine a model for data analysis in statistics, and further study and inference then often proceed as though the selected model were the true model known a priori.
This practice does not account for the uncertainty introduced by the selection process and the fact that the selected model can possibly be a wrong one. Model averaging approaches try to remedy this issue by combining estimators for a set of candidate models. Specifically, instead of deciding which model is the `right' one, a model averaging approach suggests fitting a set of candidate models and averaging over the estimators using certain data-adaptive weights. In this paper we establish a general frequentist model averaging framework that does not set any restrictions on the set of candidate models. This greatly broadens the scope of existing frequentist model averaging methodologies. Assuming the data are generated from an unknown model, we derive the model averaging estimator and study its limiting distributions and related predictions while taking possible modeling biases into account. We propose a set of optimal weights to combine the individual estimators so that the expected mean squared error of the averaged estimator is minimized. Simulation studies are conducted to compare the performance of the estimator with that of existing methods. The results show the benefits of the proposed approach over traditional model selection approaches as well as existing model averaging methods.
} \vspace*{.1in}
\noindent\textsc{Keywords}: {Asymptotic distribution; Bias variance trade-off; Local mis-specification; model averaging estimators; Optimal weight selection. }
\section{Introduction}
When there are several plausible models to choose from but no definite scientific rationale to dictate which one should be used, a model selection method has been used traditionally to determine a `correct' model for data analysis. Commonly used model selection methods, such as Akaike information criterion (AIC), Bayesian information criterion (BIC), stepwise regression, best subset selection, penalised regression, etc., are data driven and different methods may use different criteria; cf., e.g., \cite{Hastie2009} and reference therein. Once a model is chosen, further analysis proceeds as if the model selected is the true one. This practice does not account for the uncertainty introduced in the process due to model selection, and can often lead to faulty inference as discussed in \cite{Madigan1994, Draper1995, Buckland1997}, among others. To provide a solution to the problem, model averaging methods have been introduced to incorporate model uncertainty during analysis; cf., e.g., \cite{Claeskens2008}. Instead of deciding which model is the `correct' one, a model averaging method uses a set of plausible candidate models and final measures of inference are derived from a combination of all models. The candidate models are combined using some data-dependent weights to reflect the degree to which each candidate model is trusted.
Our research on model averaging is motivated in part by a real-life example from a prostate cancer study in which the relationship between the level of prostate-specific antigen and a number of clinical measures in men who were about to receive a radical prostatectomy was investigated. The variables included in the study are log cancer volume, log prostate weight, age, log of the amount of benign prostatic hyperplasia, seminal vesicle invasion, log of capsular penetration, Gleason score, and percent of Gleason scores 4 or 5. In analysis of such data, a common theme is that different model selection methods may choose different models as the `true' one. For example, AIC and BIC, two commonly used model selection criteria, may pick two different models, as the criteria for selection are different. Such situations would certainly raise many questions in practice. For instance, if the estimator is selected by using a model selection criterion, how would we address the possibility that the selected model is a wrong one? Also, if different model selection methods give us different results, we might wonder how trustworthy the model selection procedures are. Instead of choosing one model using a model selection scheme, we can use an average of estimators from different models. The model averaging estimator can then provide us with an estimate of any parameter involved in the study and can be used for providing confidence bounds. The model averaging estimator can be used for prediction purposes as well.
\cite{Hjort2003} provided a formal theoretical treatment of frequentist model averaging approaches, which offered an in-depth understanding of model averaging and has been well cited. However, the development relied on the assumption that any extra parameters not included in the narrowest model shrink to zero at a ${\cal O}({1}/{\sqrt n})$ rate.
It essentially requires that all candidate models be within a ${\cal O}({1}/{\sqrt n})$ neighborhood of the true model. Although this assumption avoids the technical difficulty of handling biased estimators, in reality we do not know the true model, and thus excluding from consideration those models that are beyond a ${\cal O}({1}/{\sqrt n})$ neighborhood of the true model appears to be very restrictive in practice.
In this paper, we remove this restrictive assumption in \cite{Hjort2003} and develop frequentist model averaging approaches under a much more general framework. Our model averaging scheme allows us to use all the potential candidate models available, even the ones that produce biased estimates.
The development is motivated by the familiar bias-variance trade-off. If we use an overly simple model, the parameter estimates will often be biased, but they can also have smaller variance, because there are fewer parameters to estimate. Similarly, if a bigger model is used, the parameter estimates often have low or no bias but increased variance. It is possible that biased estimators end up having lower mean squared error (MSE) than those from the bigger model or even the true model, and vice versa. In our development, we study the delicate balance between bias and variance in all possible models and utilize this knowledge to develop new frequentist model averaging approaches.
A key element of a model averaging method is the selection of weights that help us build a combined model averaging estimator. The weights proposed in our development are based on the aforementioned bias-variance trade-off, anchoring on the MSE of the overall model averaging estimator. The weighting scheme is similar to but not the same as that discussed in \cite{Liang2011}, in which the authors focused only on Gaussian linear regression models. Specifically, a consistent estimate of the mean squared error of the model averaging estimator is proposed, and the weights are chosen such that the MSE estimate is minimized. Using these weights, we show that model averaging performs better than or no worse than several existing and commonly used model selection or model averaging methods. In particular, the chosen weights often display good optimality properties; for example, the parameter estimates converge to the true parameter values as the sample size $n$ goes to infinity. Thus it can be shown that in most cases the weights chosen to combine the candidate models highlight the contribution of the true model. However, for a finite sample size $n$, biased estimators may end up having lower mean squared error than that from the true model, and the model averaging estimator may then be based on biased candidate models.
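To make the weighting idea concrete, the sketch below shows one generic way to pick weights by minimizing an estimated MSE over the probability simplex, for a scalar focus parameter. It is only an illustration of the bias--variance trade-off described above, not the exact estimator developed later in the paper: the inputs \texttt{bias\_hat} and \texttt{cov\_hat} stand for estimated biases and an estimated covariance matrix of the candidate estimators, whose construction is deferred to Section \ref{sec:3}.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def mse_weights(bias_hat, cov_hat):
    """Minimize w' (cov_hat + bias_hat bias_hat') w over the simplex.
    bias_hat: length-K estimated biases of the candidate estimators of mu;
    cov_hat:  K x K estimated covariance matrix of those estimators."""
    K = len(bias_hat)
    A = cov_hat + np.outer(bias_hat, bias_hat)   # estimated MSE matrix
    obj = lambda w: float(w @ A @ w)
    cons = [{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}]
    res = minimize(obj, np.full(K, 1.0 / K), method="SLSQP",
                   bounds=[(0.0, 1.0)] * K, constraints=cons)
    return res.x

# Toy illustration: a biased but low-variance model vs. an unbiased,
# noisier one; the optimal weights balance the two sources of error.
w = mse_weights(np.array([0.3, 0.0]), np.diag([0.01, 0.25]))
print(w.round(3))
\end{verbatim}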
A model averaging estimator incorporates model uncertainty into the analysis by combining a set of competing candidate models rather than choosing just one. It also provides insurance against selecting a poor model, thus reducing the estimation risk.
In \cite{Hjort2006} and \cite{Claeskens2008}, variable selection methods for the Cox proportional hazards regression model were discussed along with the choice of weights. In \cite{Hansen2007} a new set of weights was derived using Mallows' criterion. In \cite{Liang2011}, the authors proposed an unbiased estimator of the risk and a set of optimal weights was chosen by minimizing the trace of the unbiased estimator. Further details about model selection and averaging can also be found in \cite{Lien2005,Karagrigoriou2009,wan.zhang.ea:2010,zhang.wan.zhou:2012, Wei2012}. The model averaging method has also been used in many areas of application, e.g.,
\cite{Danilov2004a, Danilov2004} for forecasting stock market data, \cite{Pesaran2009} for risk of using false models in portfolio management, \cite{magnus.wan.ea:2011} for analysis of the Hong Kong housing market, and \cite{Posada2004} for a study of phylogenetics in biology. Our development in this article extends the existing theoretical frequentist development to a general framework so it can incorporate biased models under a general setting. Model averaging has been also discussed in the Bayesian framework; see, e.g. \cite{Raftery1998} and \cite{ Hoeting1999a}. In a Bayesian approach, a weighted average of the posterior distributions under every available candidate model was used for estimation and prediction purposes. The weights were determined by posterior model probabilities. Model averaging in a frequentist setup, as in \cite{Hjort2003} and also ours, precludes the need to specify any prior distributions, thus removing any possible oversight due to faulty choice of priors. The question in a frequentist setting is how to obtain the weights by a data-driven approach.
The rest of the article is organized as follows. In section \ref{sec:2}, we propose a general framework that covers the framework of \cite{Hjort2003} as a special case and study asymptotic properties of model averaging estimators. We also derive a consistent estimator for the mean squared error of the model averaging estimator and use it to facilitate our choice of data-driven weights in section \ref{sec:3}. The development is illustrated in generalized linear models and particularly in linear and logistic model setups. In section \ref{sec:4}, simulation studies are carried out to examine the performance of the proposed estimator and to compare its performance with existing methods.
\section{General Framework}\label{sec:2} \subsection{Notations and Set up}\label{subsec:notation}
Consider $n$ independent data points $\mathbold{y}}\def\yhat{\widehat{y}}\def\ytil{\widetilde{y}}\def\ybar{{\overline y}}\def\vby{{\bf y}=(y_{1}, \cdots, y_{n})$ sampled from a distribution having density of the form $f(y)\equiv f(y,\mathbold{\beta}}\def\betahat{\widehat{\beta}}\def\hbeta{\widehat{\beta})$, where $\mathbold{\beta}}\def\betahat{\widehat{\beta}}\def\hbeta{\widehat{\beta}$ is the unknown parameter of interest. Here the parameter $\mathbold{\beta}}\def\betahat{\widehat{\beta}}\def\hbeta{\widehat{\beta}$ can be written as $\mathbold{\beta}}\def\betahat{\widehat{\beta}}\def\hbeta{\widehat{\beta}=(\mathbold{\theta}}\def\thetahat{\widehat{\theta}}\def\htheta{\widehat{\theta},\mathbold{\gamma}}\def\gammahat{\widehat{\gamma}}\def\widehat{\gamma}{\widehat{\gamma})$, where $\mathbold{\theta}}\def\thetahat{\widehat{\theta}}\def\htheta{\widehat{\theta} \in \Theta \subset {\mathbb{R}}^{p}$, $p \ge 0$, are the parameters that are certainly included in every candidate model and $\mathbold{\gamma}}\def\gammahat{\widehat{\gamma}}\def\widehat{\gamma}{\widehat{\gamma} \in {\mathbb{R}}^{q}$ is the remaining set of parameters that may or may not be included in the candidate models. We assume that $p$ and $q$ are given. As a model averaging method, instead of choosing one particular candidate model as the ``correct'' model, we consider a set of candidate models, say ${\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}$, in which each candidate model contains the common parameters $\mathbold{\theta}}\def\thetahat{\widehat{\theta}}\def\htheta{\widehat{\theta}$ and a unique $\mathbold{\gamma}}\def\gammahat{\widehat{\gamma}}\def\widehat{\gamma}{\widehat{\gamma}'$ that includes $m$ of $q$ components of the parameter $\mathbold{\gamma}}\def\gammahat{\widehat{\gamma}}\def\widehat{\gamma}{\widehat{\gamma}$, $0 \leq m \leq q$.
The choice of ${\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}$ can vary depending on the problem that one is trying to solve. For example, the candidate model set ${\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}$ can contain all possible $2^{q}$ combinations of $\mathbold{\gamma}}\def\gammahat{\widehat{\gamma}}\def\widehat{\gamma}{\widehat{\gamma}$. Or, one can choose a subset of the $2^{q}$ possible models as ${\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}$. In \cite{Hansen2007}, a set of nested models has been used as candidate models, with $|{\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}|=q+1$. In \cite{Hjort2003} ${\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}$ includes candidate models that are within a $ {\cal O}}\def\scrO{{\mathscr O}}\def\Ohat{\widehat{O}}\def\Otil{{\widetilde O}}\def\Obar{{\overline O}({1}/{\sqrt n}) $ neighborhood of the true model. Our development encompasses both setups as there are no restrictions on ${\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}$, and ${\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}$ can include any number of candidate models between $1$ and $2^q$. Similar setup was used in \cite{Liang2011} where ${\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}$ is also unrestricted, but the development there was done in the standard linear regression framework.
Let the parameters in the true model be given by $\bbeta_{\mbox{\rm\tiny true}} = (\mathbold{\theta}}\def\thetahat{\widehat{\theta}}\def\htheta{\widehat{\theta}_{\mbox{\rm\tiny true}}, \mathbold{\gamma}}\def\gammahat{\widehat{\gamma}}\def\widehat{\gamma}{\widehat{\gamma}_{\mbox{\rm\tiny true}})$. Let $m^{\mbox{\rm\tiny true}}$ be the number of components of $\mathbold{\gamma}}\def\gammahat{\widehat{\gamma}}\def\widehat{\gamma}{\widehat{\gamma}$ that are present in the true model. Define ${\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}_{\in} $ as the collection of the candidate models that contain the true model, thus every model in ${\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}_{\in}$ contain each and every one of the $m^{\mbox{\rm\tiny true}}$ components of $\mathbold{\gamma}}\def\gammahat{\widehat{\gamma}}\def\widehat{\gamma}{\widehat{\gamma}$. Define $\mathcal{M}_{\notin}= {\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}- {\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}_{\in}\subset {\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}$, so $\mathcal{M}_{\notin}$ contains candidate models for which at least one of those $m^{\mbox{\rm\tiny true}}$ components are not present. Clearly $\mathcal{M} = \mathcal{M}_{\in} \cup \mathcal{M}_{\notin} $.
In \cite{Hjort2003},
a common parameter is also present in all the candidate models that is similar to ours. But the treatment of $\mathbold{\gamma}}\def\gammahat{\widehat{\gamma}}\def\widehat{\gamma}{\widehat{\gamma}$ is different. In particular, the model containing just $\mathbold{\theta}}\def\thetahat{\widehat{\theta}}\def\htheta{\widehat{\theta}$ is called a narrow model and the true model is chosen of the form $f(\mathbold{y}}\def\yhat{\widehat{y}}\def\ytil{\widetilde{y}}\def\ybar{{\overline y}}\def\vby{{\bf y})=f(\mathbold{y}}\def\yhat{\widehat{y}}\def\ytil{\widetilde{y}}\def\ybar{{\overline y}}\def\vby{{\bf y},\mathbold{\theta}}\def\thetahat{\widehat{\theta}}\def\htheta{\widehat{\theta} ,\mathbold{\gamma}}\def\gammahat{\widehat{\gamma}}\def\widehat{\gamma}{\widehat{\gamma}_0 + \delta/\sqrt n)$. Here, parameter $\delta$ determines how far a candidate model can vary from the narrow model and $\mathbold{\gamma}}\def\gammahat{\widehat{\gamma}}\def\widehat{\gamma}{\widehat{\gamma}_0 $ is a given value of $\mathbold{\gamma}}\def\gammahat{\widehat{\gamma}}\def\widehat{\gamma}{\widehat{\gamma}$ for which any extended model reduces down to the narrow model. Thus, this choice of true model essentially requires that the all candidate models are within a $ {\cal O}}\def\scrO{{\mathscr O}}\def\Ohat{\widehat{O}}\def\Otil{{\widetilde O}}\def\Obar{{\overline O}({1}/{\sqrt n}) $ neighborhood of the true model. Any model that is beyond $ {\cal O}}\def\scrO{{\mathscr O}}\def\Ohat{\widehat{O}}\def\Otil{{\widetilde O}}\def\Obar{{\overline O}({1}/{\sqrt n}) $ neighborhood of the true model is excluded from the analysis. In this paper, we remove this rather restrictive constraint. Indeed, we assume the parameter for the true model is $\bbeta_{\mbox{\rm\tiny true}} = (\mathbold{\theta}}\def\thetahat{\widehat{\theta}}\def\htheta{\widehat{\theta}_{\mbox{\rm\tiny true}}, \mathbold{\gamma}}\def\gammahat{\widehat{\gamma}}\def\widehat{\gamma}{\widehat{\gamma}_{\mbox{\rm\tiny true}})$, where $\mathbold{\gamma}}\def\gammahat{\widehat{\gamma}}\def\widehat{\gamma}{\widehat{\gamma}_{\mbox{\rm\tiny true}}$ may or may not have any of the $q$ components, and the candidate model set ${\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}$ can be a subset or contain all possible $2^{q}$ combinations of $\gamma$. Thus in our model setup there are no restrictions on the choice of true model or on the set of candidate models as in \cite{Hjort2003}. Furthermore, we can treat the setup considered in \cite{Hjort2003} as a special case of ours by restricting $\mathbold{\gamma}}\def\gammahat{\widehat{\gamma}}\def\widehat{\gamma}{\widehat{\gamma}_{\mbox{\rm\tiny true}}$ so that all the candidate models will have a bias of order ${\cal O}}\def\scrO{{\mathscr O}}\def\Ohat{\widehat{O}}\def\Otil{{\widetilde O}}\def\Obar{{\overline O}({1}/{\sqrt n})$ or less.
Note that, every candidate model includes a unique $\mathbold{\gamma}}\def\gammahat{\widehat{\gamma}}\def\widehat{\gamma}{\widehat{\gamma}$ that may or may not include all $q$ components. Thus the numbers of parameters from different candidate models may be different. For ease of presentation and following \cite{Hansen2007}, we introduce an augmentation scheme to bring all of them to the same length. We first illustrate the idea using the regression example considered by \cite{Hansen2007}: $\mathbold{y}}\def\yhat{\widehat{y}}\def\ytil{\widetilde{y}}\def\ybar{{\overline y}}\def\vby{{\bf y}$ is the vector of responses, $\mathbold{X}}\def\hbX{{\widehat{\bX}}}\def\tbX{{\widetilde{\bX}}}\def\bXbar{{\overline \bX}$ is the design matrix with full column rank $p+q$ and the candidate models are nested models. We further assume the first $p$ columns of $\mathbold{X}}\def\hbX{{\widehat{\bX}}}\def\tbX{{\widetilde{\bX}}}\def\bXbar{{\overline \bX}$ are always included in the candidate models; the special case with $p =0$ goes back to the setup of \cite{Hansen2007}. It follows that the $k^{\rm th}$ candidate model includes the first $p+k$ columns of $\mathbold{X}}\def\hbX{{\widehat{\bX}}}\def\tbX{{\widetilde{\bX}}}\def\bXbar{{\overline \bX}$, $k=0,\cdots,q$. Denote by ${\widehat{\bbeta}}}\def\bbeta_{k}{{\widetilde{\bbeta}}_{k}$ the estimated regression parameters corresponding to the $k^{\rm th}$ candidate model. Then the $(p+k) \times 1$ vector ${\widehat{\bbeta}}}\def\bbeta_{k}{{\widetilde{\bbeta}}_{k}$ can be augmented to a $(p+q) \times 1$ vector $({\widehat{\bbeta}}}\def\bbeta_{k}{{\widetilde{\bbeta}}_{k}^{\top}, {\bf 0}^{\top})^{\top}$, by adding $(q - k)$ $0$'s. The augmented estimator for the $k^{\rm th}$ candidate model is given by \begin{equation} \bbeta_{k}_{k}= ({\widehat{\bbeta}}}\def\bbeta_{k}{{\widetilde{\bbeta}}_{k}^{\top}, {\bf 0}^{\top})^{\top} =\begin{bmatrix} (\mathbold{X}}\def\hbX{{\widehat{\bX}}}\def\tbX{{\widetilde{\bX}}}\def\bXbar{{\overline \bX}_{k}^{\top}\mathbold{X}}\def\hbX{{\widehat{\bX}}}\def\tbX{{\widetilde{\bX}}}\def\bXbar{{\overline \bX}_{k})^{-1} \mathbold{X}}\def\hbX{{\widehat{\bX}}}\def\tbX{{\widetilde{\bX}}}\def\bXbar{{\overline \bX}_{k}^{\top}\mathbold{y}}\def\yhat{\widehat{y}}\def\ytil{\widetilde{y}}\def\ybar{{\overline y}}\def\vby{{\bf y}\\ 0 \end{bmatrix}; \label{eq:augmented} \end{equation} cf., e.g., \cite{Hansen2007} which adopted this augmentation on a set of nested candidate models.
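As a concrete illustration of the augmentation in \eqref{eq:augmented}, the following minimal Python sketch computes the zero-augmented least squares estimators for a sequence of nested candidate models. The data generated here are purely hypothetical and serve only to show the shapes involved; the sketch is not the implementation used in the simulation studies.
\begin{verbatim}
import numpy as np

def augmented_nested_ols(X, y, p):
    """Zero-augmented OLS estimators for nested candidate models as in
    equation (eq:augmented): model k uses the first p + k columns of X."""
    n, d = X.shape          # d = p + q
    q = d - p
    estimators = []
    for k in range(q + 1):
        Xk = X[:, : p + k]
        beta_k, *_ = np.linalg.lstsq(Xk, y, rcond=None)
        estimators.append(np.concatenate([beta_k, np.zeros(d - (p + k))]))
    return np.column_stack(estimators)   # (p+q) x (q+1) augmented estimators

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
y = X[:, :3] @ np.array([1.0, -0.5, 0.25]) + rng.normal(scale=0.5, size=100)
B = augmented_nested_ols(X, y, p=2)
print(B.shape)   # (5, 4): four nested candidate models, zero-padded
\end{verbatim}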
More generally, let $\bbeta_{k}$ be the parameter for the $k^{{\rm th}}$ model in ${\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}$. Assume the length of $\bbeta_{k}$ is $p+m_k$, where $m_k$ depends on $k$. Define the log-likelihood for the $i^{{\rm th}}$ observation in the $k^{{\rm th}}$ model as $\ell_{k;i}(\mathbold{\beta}}\def\betahat{\widehat{\beta}}\def\hbeta{\widehat{\beta}_{k}) = \log f(y_{i}, \mathbold{\beta}}\def\betahat{\widehat{\beta}}\def\hbeta{\widehat{\beta}_{k})$. The maximum likelihood estimate (MLE) of $\bbeta_{k}$ using the $k^{{\rm th}}$ model is ${\widehat{\bbeta}}}\def\bbeta_{k}{{\widetilde{\bbeta}}_{k} = \mathop{\rm arg\, max}_{\beta_k} \ell_{k}(\bbeta_{k})$, where $\ell_{k}(\bbeta_{k}) = \sum^{n}_{i=1}\ell_{k;i}(\mathbold{\beta}}\def\betahat{\widehat{\beta}}\def\hbeta{\widehat{\beta}_{k})$. Write the score function of the $k^{{\rm th}}$ model as $S_{k}(\mathbold{\beta}}\def\betahat{\widehat{\beta}}\def\hbeta{\widehat{\beta})$.
As in the example above, the vector $\mathbold{\beta}}\def\betahat{\widehat{\beta}}\def\hbeta{\widehat{\beta}_{k}$ for the $k^{{\rm th}}$ model can be augmented to a $(p+q) \times 1$ vector $(\mathbold{\beta}}\def\betahat{\widehat{\beta}}\def\hbeta{\widehat{\beta}_{k}^{\top}, \mathbold{c}}\def\chat{\widehat{c}}\def\ctil{\widetilde{c}_k^{\top})^{\top}$, where $\mathbold{c}}\def\chat{\widehat{c}}\def\ctil{\widetilde{c}_k$ is a fixed value used for augmentation to hold spaces.
The augmented maximum likelihood estimator is given by $\bbeta_{k}_{k} = ({\widehat{\bbeta}}}\def\bbeta_{k}{{\widetilde{\bbeta}}_{k}^{\top}, \mathbold{c}}\def\chat{\widehat{c}}\def\ctil{\widetilde{c}_k^{\top})^{\top}$.
The fixed value augmentation does not affect the parameter, and only appends the length of the parameter. In the linear model example above the values $\mathbold{c}}\def\chat{\widehat{c}}\def\ctil{\widetilde{c}_k=0$. Some examples of $\mathbold{c}}\def\chat{\widehat{c}}\def\ctil{\widetilde{c}_k \not =0$ can be found in \cite{MitraThesis}.
Similarly, $\bbeta_{\mbox{\rm\tiny true}}$ can be augmented to a $(p+q) \times 1$ vector $(\bbeta_{\mbox{\rm\tiny true}}^{\top}, \mathbold{c}}\def\chat{\widehat{c}}\def\ctil{\widetilde{c}^{\top})^{\top}$ for a certain fixed set of $\mathbold{c}}\def\chat{\widehat{c}}\def\ctil{\widetilde{c}$ without altering the true model. Thus, without loss of generality and from now on, we assume $\bbeta_{\mbox{\rm\tiny true}}$ is a $(p+q) \times 1$ vector in the sense that some of the elements may be the augmented to fill the space.
For the model $k \in {\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}$, let us define $\bbeta_{k}^{*} \in {\mathbb{R}}^{p+m_k}$ as the solution of the equation $\mathbb{E}}\def\bbEn{\mathbb{E}_{n} S_{k}(\mathbold{\beta}}\def\betahat{\widehat{\beta}}\def\hbeta{\widehat{\beta})=0$, where $S_{k}(\mathbold{\beta}}\def\betahat{\widehat{\beta}}\def\hbeta{\widehat{\beta})$ is the score function of the $k^{{\rm th}}$ model having $p+m_k$ parameters. Define, as before, $\widetilde{\bbeta}^{*}_{k} \in {\mathbb{R}}^{p+q}$ as the $\mathbold{c}}\def\chat{\widehat{c}}\def\ctil{\widetilde{c}-$augmented version of $\bbeta_{k}^{*}$. Since the score function is Fisher consistent, $\widetilde{\bbeta}_{k} \rightarrow \widetilde{\bbeta}^{*}_{k}$ under usual regularity conditions. But this
$\widetilde{\bbeta}^{*}_{k}$ may not be close~to~$\bbeta_{\mbox{\rm\tiny true}}$.
Let $\mathbold{\mu}: {\mathbb{R}}^{p+q}\rightarrow {\mathbb{R}}^{\ell}$ be a general function that is $1^{{\rm st}}$ order partially differentiable and $\mathbold{\mu} = \mathbold{\mu} (\bbeta_{\mbox{\rm\tiny true}})$ is the parameter of interest. Then, the model averaging estimator of $\mathbold{\mu}$ is defined~as \begin{align}\label{fmae} \widehat{\mathbold{\mu}}_{ave} = \sum_{k \in \mathcal{M}} \ w_k \mathbold{\mu} ( \widetilde{\bbeta}_{k} ), \end{align} where the weights satisfy $0\leq w_{k}\leq 1$ for all $k$ and $\sum_{k \in {\cal M}}{w_{k}}=1$. In the remainder of this section, we derive the asymptotic properties of the model averaging estimator (\ref{fmae}) for any given set of weights~$w_{k}$.
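For a scalar focus parameter, computing \eqref{fmae} is a one-line combination once the augmented estimators and the weights are available. The Python fragment below is a small, self-contained illustration with placeholder numbers rather than output of any real analysis; the focus parameter chosen here, the mean response at a hypothetical design point, is just one example of a first-order differentiable $\mathbold{\mu}$.
\begin{verbatim}
import numpy as np

def model_average(mu, tilde_betas, weights):
    """Model averaging estimator (fmae): sum_k w_k * mu(tilde_beta_k).
    tilde_betas has one augmented estimator per column; weights sum to one."""
    return sum(w * mu(tilde_betas[:, k]) for k, w in enumerate(weights))

# Hypothetical focus parameter: the mean response at a new design point x0.
x0 = np.array([1.0, 0.5, -1.0, 0.0, 2.0])
mu = lambda beta: float(x0 @ beta)

# Placeholder augmented estimators for four candidate models (p + q = 5).
tilde_betas = np.column_stack([
    [1.0, 0.4, 0.0, 0.0, 0.0],
    [1.0, 0.5, 0.2, 0.0, 0.0],
    [1.0, 0.5, 0.25, 0.1, 0.0],
    [1.0, 0.5, 0.25, 0.1, 0.05],
])
print(model_average(mu, tilde_betas, np.full(4, 0.25)))
\end{verbatim}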
\subsection{Main Results}
We assume the usual regularity conditions under which the familiar likelihood asymptotic arguments apply; cf., the conditions listed in the Appendix. See also \cite{Lehmann1998, Lehmann1999, van00} for more details.
Let $\nabla\mathbold{\mu}}\def\muhat{\widehat{\mu}}\def\widehat{\mu}{\widehat{\mu} \in {\mathbb{R}}^{\ell\times (p+q)}$ be the first order derivative of the ${\mathbb{R}}^{p+q}\rightarrow {\mathbb{R}}^{\ell}$ function $\mathbold{\mu}}\def\muhat{\widehat{\mu}}\def\widehat{\mu}{\widehat{\mu}$. Define $\mathbf{H}_{k}$= $\lim_{n\rightarrow \infty} \dfrac{1}{n} \mathbb{E}}\def\bbEn{\mathbb{E}_{n}\left[\ell_{k}''(\bbeta_{k}^{*})\right]$ and assume it is invertible. We also assume \begin{align*}\label{eq:lindcond1} {\rm (\textbf{A}1)} \,\, \lim_{n} \dfrac{1}{n}\sum^{n}_{i=1} \mathbb{E}}\def\bbEn{\mathbb{E}_{n} \left[\max_{k \in {\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}} \norm{\nabla\mathbold{\mu}}\def\muhat{\widehat{\mu}}\def\widehat{\mu}{\widehat{\mu}(\widetilde{\bbeta}^{*}_{k}){\rm \textbf{H}}_{k}^{-1}\ell_{k;i}'(\bbeta_{k}^{*})}{}\ \bbI \ \left\{\max_{k \in {\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}} \norm{\nabla\mathbold{\mu}}\def\muhat{\widehat{\mu}}\def\widehat{\mu}{\widehat{\mu}(\widetilde{\bbeta}^{*}_{k}){\rm \textbf{H}}_{k}^{-1}\ell_{k;i}'(\bbeta_{k}^{*})}{}>\sqrt{n}\varepsilon\right\}\right] = 0,
\end{align*} for any $\varepsilon>0$, where $\bbI \{\cdot\}$ is the indicator function. We have the following theorem. Its proof is given in the Appendix.
\begin{theorem} \label{th:main2} Let $\bbeta_{k}_{k}$ be the $\mathbold{c}}\def\chat{\widehat{c}}\def\ctil{\widetilde{c}$-augmented MLE as defined in \eqref{eq:augmented} for the $k^{\rm th}$ model in ${\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}$. Let $0\leq w_{k}\leq 1$ for $k \in {\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}$ be model weights so that $\sum_{k}w_{k}=1$. Assume condition $(\textbf{A}1)$ holds. The asymptotic distribution of the model averaging estimator for $\mathbf{\mathbold{\mu}}\def\muhat{\widehat{\mu}}\def\widehat{\mu}{\widehat{\mu}}(\bbeta_{\mbox{\rm\tiny true}})$ is given as, \begin{equation} \label{eq:mu} \sqrt{n}\sum_{k \in {\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}} w_{k}\{\mathbold{\mu}}\def\muhat{\widehat{\mu}}\def\widehat{\mu}{\widehat{\mu}(\widetilde{\bbeta}_{k}) - \mathbold{\mu}}\def\muhat{\widehat{\mu}}\def\widehat{\mu}{\widehat{\mu}(\bbeta_{\mbox{\rm\tiny true}})\} - \sqrt{n}\sum_{k \in {\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}} w_{k}(\mathbold{\mu}}\def\muhat{\widehat{\mu}}\def\widehat{\mu}{\widehat{\mu}(\widetilde{\bbeta}^{*}_{k})-\mathbold{\mu}}\def\muhat{\widehat{\mu}}\def\widehat{\mu}{\widehat{\mu}(\mathbold{\beta}}\def\betahat{\widehat{\beta}}\def\hbeta{\widehat{\beta}_{\mbox{\rm\tiny true}})) \convd {\cal N}}\def\scrN{{\mathscr N}}\def\Nhat{\widehat{N}}\def\Ntil{{\widetilde N}}\def\Nbar{{\overline N}\left(0, \ \mathbold{\Sigma}}\def\hbSigma{{\widehat{\bSigma}}_{w}\right), \end{equation} where the variance $\mathbold{\Sigma}}\def\hbSigma{{\widehat{\bSigma}}_{w}$ is given by \begin{align} \label{eq:varmu} \mathbold{\Sigma}}\def\hbSigma{{\widehat{\bSigma}}_{w} = \lim_{n\rightarrow \infty} \dfrac{1}{n}\sum^{n}_{i =1} \mathbb{E}}\def\bbEn{\mathbb{E}_{n} \left[ \big(\sum_{k}w_{k}\nabla\mathbold{\mu}}\def\muhat{\widehat{\mu}}\def\widehat{\mu}{\widehat{\mu}(\widetilde{\bbeta}^{*}_{k})^{\top}{\rm \textbf{H}}^{-1}_{k}\ell_{k;i}'\big)^{\otimes 2}\right] \end{align} \end{theorem}
The condition (\textbf{A}1) implies that the contribution of $\nabla\mathbold{\mu}}\def\muhat{\widehat{\mu}}\def\widehat{\mu}{\widehat{\mu}(\widetilde{\bbeta}^{*}_{k}){\rm \textbf{H}}_{k}^{-1}\ell_{k;i}'(\bbeta_{k}^{*})$ to the total variance, for each model $k$ in the set ${\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}$ and for each $1 \leq i \leq n$ is asymptotically negligible, and it is satisfied in a wide array of cases. We provide a set of sufficient conditions under which it is satisfied, and we also provide such examples in the cases of linear and generalized linear models in Section \ref{sec:3}.
See further discussions of the condition in Section \ref{sec:3}.
In our general framework, there is no guarantee that $\widetilde{\bbeta}^{*}_{k} = \bbeta_{\mbox{\rm\tiny true}}$, neither does $||\widetilde{\bbeta}^{*}_{k} - \bbeta_{\mbox{\rm\tiny true}}|| \to 0$ asymptotically, particularly when $k \in \mathcal{M}_{\notin}$. So $\mathbold{\mu}}\def\muhat{\widehat{\mu}}\def\widehat{\mu}{\widehat{\mu}(\widetilde{\bbeta}^{*}_{k}) - \mathbold{\mu}}\def\muhat{\widehat{\mu}}\def\widehat{\mu}{\widehat{\mu}(\bbeta_{\mbox{\rm\tiny true}})$ is not necessarily $0$, even asymptotically. But we can view it as a measurement of the bias by the $k^{\rm th}$ model. Thus, with the second term on the left hand side of (\ref{eq:mu}) serving as a bias correction term, Theorem~\ref{th:main2} states that the model averaging estimator still retains the usual form of asymptotic normality after the bias correction. In the theorem, the weights are fixed. In practice, we often estimate the weights using data. In this case, we need that the estimated weight $w^{(n)}_k(\mathbold{y}}\def\yhat{\widehat{y}}\def\ytil{\widetilde{y}}\def\ybar{{\overline y}}\def\vby{{\bf y})$ for the $k$th model converges to $w_k$ as $n$ goes to infinity. By Slutsky's lemma, the result in Theorem \ref{th:main2} still holds. A further study of data dependent weights is in Section~\ref{sec:3}.
All the candidate models have $\mathbold{\theta}}\def\thetahat{\widehat{\theta}}\def\htheta{\widehat{\theta}$ in common. We can use Theorem \ref{th:main2} to construct asymptotic convergence results for the common parameter $\mathbold{\theta}}\def\thetahat{\widehat{\theta}}\def\htheta{\widehat{\theta}$. If we consider a function from $(\mathbold{\theta}}\def\thetahat{\widehat{\theta}}\def\htheta{\widehat{\theta},\mathbold{\gamma}}\def\gammahat{\widehat{\gamma}}\def\widehat{\gamma}{\widehat{\gamma}) \mapsto \mathbold{\theta}}\def\thetahat{\widehat{\theta}}\def\htheta{\widehat{\theta}$ to extract the $\mathbold{\theta}}\def\thetahat{\widehat{\theta}}\def\htheta{\widehat{\theta}$ parameter, then by a direct application of Theorem \ref{th:main2} we can derive the asymptotic distribution of $\mathbold{\theta}}\def\thetahat{\widehat{\theta}}\def\htheta{\widehat{\theta}$ as given below in Corollary \ref{cor:theta}.
\begin{corollary} \label{cor:theta} Let $\mathbold{\theta}}\def\thetahat{\widehat{\theta}}\def\htheta{\widehat{\theta}$ be the common parameter for all candidate models in ${\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}$. Let $\bbeta_{\mbox{\rm\tiny true}}=(\btheta_{\mbox{\rm\tiny true}},\mathbold{\gamma}}\def\gammahat{\widehat{\gamma}}\def\widehat{\gamma}{\widehat{\gamma}_{\mbox{\rm\tiny true}})$, $\bbeta_{k}^{*}=(\btheta^{*}_{k},\mathbold{\gamma}}\def\gammahat{\widehat{\gamma}}\def\widehat{\gamma}{\widehat{\gamma}^{*}_{k})$, ${\widehat{\bbeta}}}\def\bbeta_{k}{{\widetilde{\bbeta}}_{k}=({\widehat{\btheta}}}\def\btheta_{k}{{\widetilde{\btheta}}_{k},{\widehat{\bgamma}}}\def\tbgamma{{\widetilde{\bgamma}}_{k})$. Then under the same setup as in Theorem \ref{th:main2} \begin{align} \sqrt{n}\sum_{k \in {\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}} w_{k}(\widehat{\boldsymbol{\theta}}_{k} - \btheta_{\mbox{\rm\tiny true}}) - \sqrt{n} \sum_{k \in {\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}} w_{k}(\btheta^{*}_{k} - \btheta_{\mbox{\rm\tiny true}}) \convd {\cal N}}\def\scrN{{\mathscr N}}\def\Nhat{\widehat{N}}\def\Ntil{{\widetilde N}}\def\Nbar{{\overline N}\left(0, \ \mathbold{\Sigma}}\def\hbSigma{{\widehat{\bSigma}}_{w}\right), \end{align} where the variance is given by $\mathbold{\Sigma}}\def\hbSigma{{\widehat{\bSigma}}_{w} = \lim_{n\rightarrow \infty} \dfrac{1}{n}\sum^{n}_{i =1} \mathbb{E}}\def\bbEn{\mathbb{E}_{n} \left[ \big(\sum_{k}w_{k} [{\rm \textbf{I}}} \def\bbI{\mathbb{I}_p, \mathbold{0}}\def\bone{\mathbold{1}]{\rm \textbf{H}}^{-1}_{k}\ell_{k;i}'\big)^{\otimes 2}\right]$. \end{corollary}
\subsection{Connection to \cite{Hjort2003}'s development}\label{sec:3Hjort} The development of \cite{Hjort2003} required that all candidate models are within a ${\cal O}}\def\scrO{{\mathscr O}}\def\Ohat{\widehat{O}}\def\Otil{{\widetilde O}}\def\Obar{{\overline O}({1}/{\sqrt n}) $ neighborhood of the true model. We broaden this framework in our development. In particular, we show in this subsection that the results described in \cite{Hjort2003} can be obtained as a special case of our result.
We start with a description of the misspecified model setup used in \cite{Hjort2003}. Let $ Y_1, \cdots, Y_n $ be a independent and identically distributed (i.i.d.) sample from density $f$ of maximum $p+q$ parameters. The parameter of interest is $ \mu = \mu(f)$, where $\mu :{\mathbb{R}}^{p+q} \rightarrow {\mathbb{R}}$ . The model that includes just $p$ parameters, say $\mathbold{\theta}}\def\thetahat{\widehat{\theta}}\def\htheta{\widehat{\theta}$, is defined as the narrow model, while any extended model $f(\mathbold{y}}\def\yhat{\widehat{y}}\def\ytil{\widetilde{y}}\def\ybar{{\overline y}}\def\vby{{\bf y},\mathbold{\theta}}\def\thetahat{\widehat{\theta}}\def\htheta{\widehat{\theta},\mathbold{\gamma}}\def\gammahat{\widehat{\gamma}}\def\widehat{\gamma}{\widehat{\gamma})$ reduces to the narrow model for $\mathbold{\gamma}}\def\gammahat{\widehat{\gamma}}\def\widehat{\gamma}{\widehat{\gamma}=\mathbold{\gamma}}\def\gammahat{\widehat{\gamma}}\def\widehat{\gamma}{\widehat{\gamma}_0$; here the vector $\mathbold{\gamma}}\def\gammahat{\widehat{\gamma}}\def\widehat{\gamma}{\widehat{\gamma}_0$ is fixed and known. For the $k^{{\rm th}}$ model with unknown parameters $(\mathbold{\theta}}\def\thetahat{\widehat{\theta}}\def\htheta{\widehat{\theta},\mathbold{\gamma}}\def\gammahat{\widehat{\gamma}}\def\widehat{\gamma}{\widehat{\gamma}_k)$, the MLE of $\mu$ is written as ${\widehat{\mu}_k} = \mu({{\widehat{\btheta}}}\def\btheta_{k}{{\widetilde{\btheta}}_k},{{\widehat{\bgamma}}}\def\tbgamma{{\widetilde{\bgamma}}_k},\mathbold{\gamma}}\def\gammahat{\widehat{\gamma}}\def\widehat{\gamma}{\widehat{\gamma}_{0,k^c})$, where $k^c$ refers to the elements that are not contained in ${\mathbold{\gamma}}\def\gammahat{\widehat{\gamma}}\def\widehat{\gamma}{\widehat{\gamma}_k}$.
Thus, in this setup, if a parameter $\gamma_j$ is not included in the candidate model, we set $\gamma_j=\gamma_{j,0}$, the $j^{\rm th}$ element of $\mathbold{\gamma}_0$. The true model is assumed to be \begin{align}\label{eq:mispec} f_{\mbox{\rm\tiny true}}(y)=f(y,\mathbold{\theta}_{0},\mathbold{\gamma}_0+\mathbold{\delta}/\sqrt{n}), \end{align} where $\mathbold{\delta}$ signifies the deviation of the model in directions $1, \ldots, q$. So $\bbeta_{\mbox{\rm\tiny true}}= (\mathbold{\theta}_{0},\mathbold{\gamma}_0+\mathbold{\delta}/\sqrt{n})$. Let us write $\mathbold{\beta}_{0} = (\mathbold{\theta}_{0},\mathbold{\gamma}_0)$. We will also write $\mu_{\mbox{\rm\tiny true}} = \mu(\bbeta_{\mbox{\rm\tiny true}})$, which is the estimand of interest. Under this model setup, \cite{Hjort2003} derived an asymptotic normality result for the model averaging estimator $\sum_{k}w_{k}\widehat{\mu}_{k}$. To describe their result, let us first define \begin{align*} S(y) = \begin{bmatrix} U(y)\\ V(y) \end{bmatrix} = \begin{bmatrix} {\partial \over \partial\mathbold{\theta}} \log f(y,\mathbold{\theta},\mathbold{\gamma})\\ {\partial \over \partial\mathbold{\gamma}} \log f(y,\mathbold{\theta},\mathbold{\gamma})
\end{bmatrix} \bigg|_{\mathbold{\theta} =\mathbold{\theta}_0,\mathbold{\gamma} =\mathbold{\gamma}_0} \text{ and }{\rm var}\{S(Y)\} = \begin{bmatrix} {\rm \textbf{J}}_{00} & {\rm \textbf{J}}_{01}\\ {\rm \textbf{J}}_{10} & {\rm \textbf{J}}_{11} \end{bmatrix} = {\rm \textbf{J}}_{full}, \text{ say}. \end{align*} Let $\Ubar_{n}= n^{-1}\sum_{i}U(Y_{i})$ and $\Vbar_{n} = n^{-1}\sum_{i}V(Y_{i})$. Denote by $V_{k}(Y)$ and $\Vbar_{n;k}$ the appropriately subsetted vectors obtained from $V(Y)$ and $\Vbar_{n}$, with the subset indices corresponding to those of $\widehat{\mathbold{\gamma}}_{k}$ in model $k \in {\cal M}$, respectively. Also, define ${\rm \textbf{J}}_{k} = \mbox{\rm var}\{U(Y),V_{k}(Y)\}$ for all $k \in {\cal M}$. \cite{Hjort2003} showed that \begin{align}\label{eq:hjortasymp}\sqrt{n}\Big(\sum_{k}w_{k}\widehat{\mu}_{k} - \mu_{\mbox{\rm\tiny true}}\Big) \convd \sum_{k}w_{k}\Lambda_{k}, \end{align}
where \begin{align*} \Lambda_{k} = \begin{pmatrix} \partial\mu(\mathbold{\beta}}\def\betahat{\widehat{\beta}}\def\hbeta{\widehat{\beta}_{0})/\partial \mathbold{\theta}}\def\thetahat{\widehat{\theta}}\def\htheta{\widehat{\theta} \\ \partial\mu(\mathbold{\beta}}\def\betahat{\widehat{\beta}}\def\hbeta{\widehat{\beta}_{0})/\partial \mathbold{\gamma}}\def\gammahat{\widehat{\gamma}}\def\widehat{\gamma}{\widehat{\gamma}_{k} \end{pmatrix}^{\top} \left\{{\rm \textbf{J}}_{k}^{-1}\begin{pmatrix} {\rm \textbf{J}}_{01}\mathbold{\delta}}\def\deltahat{\widehat{\delta}}\def\hdelta{\widehat{\delta} \\ \pi_{k} {\rm \textbf{J}}_{11}\delta \end{pmatrix} + {\rm \textbf{J}}_{k}^{-1}\begin{pmatrix} \sqrt{n}(\Ubar_{n}- \mathbb{E}}\def\bbEn{\mathbb{E}_{n} U_{k}(Y_{1}))\\ \sqrt{n}(\Vbar_{n,k} - \mathbb{E}}\def\bbEn{\mathbb{E}_{n} V_{k}(Y_{1})) \end{pmatrix}\right\} - \bigg(\dfrac{\partial\mu(\mathbold{\beta}}\def\betahat{\widehat{\beta}}\def\hbeta{\widehat{\beta}_{0})}{\partial \mathbold{\gamma}}\def\gammahat{\widehat{\gamma}}\def\widehat{\gamma}{\widehat{\gamma}}\bigg)^{\top}\mathbold{\delta}}\def\deltahat{\widehat{\delta}}\def\hdelta{\widehat{\delta}. \end{align*}
Here, $\pi_{k}\in {\mathbb{R}}^{|M_{k}|\times q}$ is the projection matrix that projects any vector $\mathbold{u}}\def\uhat{\widehat{u}}\def\util{\widetilde{u}}\def\ubar{{\overline u} \in {\mathbb{R}}^{q}$ to $\mathbold{u}}\def\uhat{\widehat{u}}\def\util{\widetilde{u}}\def\ubar{{\overline u}_{k}\in {\mathbb{R}}^{|M_{k}|}$ with indices as given by $M_{k} \in {\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}$.
The following corollary states that the result in (\ref{eq:hjortasymp}) can be directly obtained from Theorem \ref{th:main2} and thus Theorem \ref{th:main2} covers the special setting (\ref{eq:mispec}) of \cite{Hjort2003}. A proof of the corollary can be found in the Appendix.
\begin{corollary}\label{cor:match} Under the misspecification model (\ref{eq:mispec}), the asymptotic bias and variance in (\ref{eq:hjortasymp}) match those in Theorem \ref{th:main2}. \end{corollary}
\subsection{Selection of Weights in Frequentist Model Averaging}\label{sec:3}
Model averaging acknowledges the uncertainty caused by model selection and tackles the problem by weighting all models under consideration. To be effective, the weights should reflect the relative strength of each candidate model: if model $k'$ is more plausible than model $k$, its associated weight $w_{k'}$ should be no smaller than $w_{k}$. In our development, we measure the strength of a model by its mean squared error, and we obtain a set of data-adaptive weights by minimizing the estimated mean squared error of the combined model averaging estimator. A similar scheme was developed in \cite{Liang2011}, where the authors minimized an unbiased estimator of the mean squared error to obtain their optimal weights; however, their work was restricted to linear models. As in \cite{Liang2011}, we assume that the true model is included in the set of candidate models in the development of our weighting scheme.
Recalling Theorem \ref{th:main2}, the asymptotic mean squared error (AMSE) of the model averaging estimator $\sum_{k \in {\cal M}}w_{k}\mathbold{\mu}(\widetilde{\bbeta}_{k})$ is, for any given set of weights $\mathbold{w}$, \begin{equation}\label{eq:AMSE} Q(\mathbold{w}) = {\rm trace}\left(\Big(\sum_{k \in {\cal M}} w_{k}\{\mathbold{\mu}(\widetilde{\bbeta}^{*}_{k}) - \mathbold{\mu}(\bbeta_{\mbox{\rm\tiny true}})\}\Big)^{\otimes 2}+ \dfrac{1}{n}\mathbold{\Sigma}_{w}\right). \end{equation}
However, this quantity depends on the unknown parameter $\bbeta_{\mbox{\rm\tiny true}}$, so we instead consider an estimate $\widehat{Q}_n(\mathbold{w})$. Suppose that $\bbeta_{\mbox{\rm\tiny true}}$ is estimated consistently by, say, ${\widehat{\bbeta}}_{cons}$. Then $Q(\mathbold{w})$ in (\ref{eq:AMSE}) can be consistently estimated by $\widehat{Q}_n(\mathbold{w}) = Q(\mathbold{w}) \big|_{\bbeta_{\mbox{\rm\tiny true}}= {\widehat{\bbeta}}_{cons}}$. We propose to obtain a set of data-adaptive weights $\mathbold{w}_{n}^{*}$ by minimizing $\widehat{Q}_{n}(\mathbold{w})$: $$ \mathbold{w}_{n}^{*} = \mathop{\rm arg\, min}_{\mathbold{w}} \, \widehat{Q}_{n}(\mathbold{w}). $$ The numerical performance of the proposed averaging estimators will be evaluated in Section 4. In the next section, we illustrate the procedure in detail for linear and logistic regression models.
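In practice, the minimization of $\widehat{Q}_{n}(\mathbold{w})$ is carried out over the probability simplex $\{\mathbold{w}: w_{k}\geq 0,\ \sum_{k}w_{k}=1\}$. The following sketch illustrates one way this step could be implemented; it assumes only that the estimated AMSE can be evaluated as a quadratic form $\mathbold{w}^{\top}\widehat{{\rm \textbf{Q}}}_{n}\mathbold{w}$ for a matrix $\widehat{{\rm \textbf{Q}}}_{n}$ built from the candidate fits. The function name and its input are ours and are not part of the methodology above.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def optimal_weights(Q_hat):
    """Minimize w' Q_hat w over the probability simplex.

    Q_hat: (K, K) symmetric matrix so that w' Q_hat w evaluates the
    estimated AMSE of the model averaging estimator (hypothetical input).
    """
    K = Q_hat.shape[0]
    w0 = np.full(K, 1.0 / K)                    # start from equal weights
    cons = [{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}]
    bnds = [(0.0, 1.0)] * K                     # enforce w_k >= 0
    res = minimize(lambda w: w @ Q_hat @ w, w0,
                   method="SLSQP", bounds=bnds, constraints=cons)
    return res.x

# Toy example: the second candidate has smaller estimated MSE,
# so it receives the larger weight.
Q_demo = np.array([[0.4, 0.1],
                   [0.1, 0.2]])
print(optimal_weights(Q_demo))
\end{verbatim}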
\section{Model Averaging and Weight Selection in Regression Models} We now discuss the model averaging estimator described in Section \ref{sec:2} for generalized linear models (GLMs). Specifically, let $ \mathbb{E} y_{i}=g(\mathbold{x}^{\top}_{i}\mathbold{\beta})$, where $g$ is a given (inverse) link function connecting the mean to the linear predictor $\eta_{i}=\mathbold{x}^{\top}_{i}\mathbold{\beta}$. We consider a set ${\cal M}$ of $2^{q}$ candidate models. Suppose we want to estimate a function $\mathbold{\mu}(\mathbold{\beta})$; as defined in (\ref{fmae}), the final model averaging estimator is $\widehat{\mathbold{\mu}}_{ave} = \sum_{k \in {\cal M}} w_{k}\mathbold{\mu}(\widetilde{\bbeta}_{k})$. Since the setup of Theorem \ref{th:main2} covers general parametric models, the same asymptotic convergence results hold for GLMs. In particular, we verify condition (\textbf{A}1) and discuss the data-driven weight choices below in two special cases: linear regression and logistic regression.
\subsection{Prediction in Linear Regression Models}
We first derive the model averaging estimator in the linear regression framework: \[
\mathbold{y}}\def\yhat{\widehat{y}}\def\ytil{\widetilde{y}}\def\ybar{{\overline y}}\def\vby{{\bf y}={\rm \textbf{X}} \mathbold{\beta}}\def\betahat{\widehat{\beta}}\def\hbeta{\widehat{\beta} +\mathbold{\varepsilon}\def\epsa{\epsilon}\def\eps{\epsilon}}\def\ephat{\widehat{\varepsilon}\def\epsa{\epsilon}\def\eps{\epsilon}}\def\hep{\widehat{\varepsilon}\def\epsa{\epsilon}\def\eps{\epsilon}, \] where ${\rm \textbf{X}}\in {\mathbb{R}}^{n\times (p+1)}$ is a non-random design matrix of full column rank; i.e., $\hbox{rank}({\rm \textbf{X}}) = p+1$, and $\mathbold{\varepsilon}\def\epsa{\epsilon}\def\eps{\epsilon}}\def\ephat{\widehat{\varepsilon}\def\epsa{\epsilon}\def\eps{\epsilon}}\def\hep{\widehat{\varepsilon}\def\epsa{\epsilon}\def\eps{\epsilon} \sim {\cal N}}\def\scrN{{\mathscr N}}\def\Nhat{\widehat{N}}\def\Ntil{{\widetilde N}}\def\Nbar{{\overline N}(\mathbold{0}}\def\bone{\mathbold{1},\sigma^2 {\rm \textbf{I}}} \def\bbI{\mathbb{I}_{n})$.
Let ${\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}= \{M_{k}\}^{|{\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}|}_{k=1}$ be the set of candidate models. Here $M_{k}$ denotes a particular set of features having cardinality $|M_{k}|$. Define ${\rm \textbf{X}}_{k}\in {\mathbb{R}}^{n \times |M_{k}|}, 1\leq k \leq |{\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}|$ as the design matrix of the $k^{\rm th}$ candidate model with the features in $M_{k}$. We consider zero-augmentation of the parameter set $\bbeta_{k}$ for all $k$. Let $\widetilde{{\rm \textbf{X}}}_{k}\in {\mathbb{R}}^{n \times (p+1)}$ be the augmented version of ${\rm \textbf{X}}_{k}$ with the missing columns replaced by the $\mathbold{0}}\def\bone{\mathbold{1}$ vector. In our analysis, all the candidate models contain the intercept term corresponding to $\beta_{0}$. With the rest of the $p$ components, we can construct $2^{p}$ candidate models, all of which are included in our analysis.
Let us fix a ${\bx^{*}}\in {\mathbb{R}}^{p+1}$. Define $\mathbold{x}^{*}_{k}\in {\mathbb{R}}^{|M_{k}|}$ so that ${\bx_{k}^{*}}$ consists of those components of ${\bx^{*}}$ indexed by $M_{k}\in {\cal M}$. Consider the particular choice of the function $\mu: {\mathbb{R}}^{p+1} \rightarrow {\mathbb{R}}$ given by $\mu(\mathbold{b})={\bx^{*}}^{\top}\mathbold{b}$ for $\mathbold{b}\in {\mathbb{R}}^{p+1}$. Clearly, $\nabla\mu(\mathbold{\beta})= {\bx^{*}}$. In the following discussion, we are interested in the model averaging estimator of $\mu(\bbeta_{\mbox{\rm\tiny true}})$ = ${\bx^{*}}^{\top}\bbeta_{\mbox{\rm\tiny true}}$, which is given by $\widehat{\mu}_{ave} = \sum_{k}w_{k}{\bx_{k}^{*}}^{\top}{\widehat{\bbeta}}_{k}$ with $w_{k}\geq 0$ and $\sum_{k}w_{k}=1$. In the simulations, we will generate ${\bx^{*}}$ from the known covariate distribution, while for the real data we split the whole data set into a training set and a test set, and ${\bx^{*}}$ will be set to the covariate vector of a test-set observation.
For the $k^{\rm th}$ candidate model with $\bbeta_{k}\in {\mathbb{R}}^{|M_{k}|}$, the score function is given by $\mathbold{\ell}'_{k}(\bbeta_{k}) = {\rm \textbf{X}}_{k}^{\top}(\mathbold{y}-{\rm \textbf{X}}_{k}\bbeta_{k})$, and ${\rm \textbf{H}}_{k}$ is given by ${\rm \textbf{H}}_{k}= -(1/n){\rm \textbf{X}}^{\top}_{k}{\rm \textbf{X}}_{k}$; this follows from the definition immediately preceding condition (\textbf{A}1). Thus the Hessian matrix satisfies the condition, as it does not depend on $\mathbold{y}$. Turning to condition (\textbf{A}1), we note that \begin{align}
|\nabla \mu(\widetilde{\bbeta}^{*}_{k}){\rm \textbf{H}}_{k}^{-1}\ell'_{k;i}(\bbeta_{k}^{*})|
= \left|(y_{i}-[{\rm \textbf{X}}_{k}]^{\top}_{i,\Cdot}\bbeta_{k}^{*}) \ {\bx_{k}^{*}}^{\top}({\rm \textbf{X}}^{\top}_{k}{\rm \textbf{X}}_{k}/n)^{-1}[{\rm \textbf{X}}_{k}]_{i,\Cdot}\right|\\
= |c_{ik}(\varepsilon_{i} + A_{ik})|, \nonumber \end{align} where $c_{ik} = {\bx_{k}^{*}}^{\top} ({\rm \textbf{X}}^{\top}_{k}{\rm \textbf{X}}_{k}/n)^{-1}[{\rm \textbf{X}}_{k}]_{i,\Cdot}$ and $A_{ik} = \mathbold{x}^{\top}_{i}\bbeta_{\mbox{\rm\tiny true}}-[{\rm \textbf{X}}_{k}]^{\top}_{i,\Cdot}\bbeta_{k}^{*}$ are fixed constants, and $[{\rm \textbf{X}}_{k}]_{i,\Cdot}$ denotes the $i$th row of the matrix ${\rm \textbf{X}}_{k}$, written as a column vector.
Note that $\varepsilon_{i}\sim {\cal N}(0,\sigma^{2})$. Condition (\textbf{A}1) is satisfied if, for any $\varepsilon>0$,
\[\lim _{n \rightarrow \infty}\dfrac{1}{n}\max_{1\leq i \leq n} \mathbb{E}}\def\bbEn{\mathbb{E}_{n} \left\{\max_{k\in {\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}} |c_{ik}(\varepsilon_{i} + A_{ik})|\right\}^{2} \bbI\left\{ \max_{k\in {\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}}|c_{ik}(\varepsilon_{i} + A_{ik})|> \sqrt{n}\varepsilon\right\} = 0.\]
Moreover, if $|c_{ik}|\leq C$ for some fixed constant $C>0$, the condition is further reduced to,
\[\lim _{n \rightarrow \infty}\dfrac{1}{n}\max_{1\leq i \leq n} \mathbb{E} \left\{\max_{k\in {\cal M}} |\varepsilon_{i} + A_{ik}|\right\}^{2} \bbI\left\{ \max_{k\in {\cal M}}|\varepsilon_{i} + A_{ik}|> \sqrt{n}\varepsilon\right\} = 0.\] Note that $c_{ik}$ can be bounded as \begin{align*}
\max_{k}|c_{ik}| = \max_{k} |{\bx_{k}^{*}}^{\top}({\rm \textbf{X}}^{\top}_{k}{\rm \textbf{X}}_{k}/n)^{-1}[{\rm \textbf{X}}_{k}]_{i,\Cdot}|
\leq \norm{{\bx^{*}}}{}\norm{\mathbold{x}}\def\xhat{\widehat{x}}\def\xtil{\widetilde{x}}\def\xbar{{\overline x}}\def\vbx{{\bf x}_{i}}{}\max_{k} \dfrac{1}{\lambda_{min}({\rm \textbf{X}}^{\top}_{k}{\rm \textbf{X}}_{k}/n)}. \end{align*} Here $\lambda_{min}({\rm \textbf{B}})$ denotes the smallest singular value of matrix ${\rm \textbf{B}}$. Now by an application of Cauchy-Schwarz inequality, \begin{align}
& \dfrac{1}{n}\mathbb{E}}\def\bbEn{\mathbb{E}_{n} \left\{\max_{k\in {\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}} |\varepsilon_{i} + A_{ik}|\right\}^{2} \bbI\left\{ \max_{k\in {\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}}|\varepsilon_{i} + A_{ik}|> \sqrt{n}\varepsilon\right\}\notag\\
& \qquad \leq \dfrac{1}{n} \left\{\mathbb{E}}\def\bbEn{\mathbb{E}_{n}\max_{k\in {\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}} |\varepsilon_{i} + A_{ik}|^{4} \right\}^{1/2} \left\{\ \mathbb{P} (\max_{k\in {\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}} |\varepsilon_{i} + A_{ik}| > \sqrt{n}\varepsilon)\right\}^{1/2}\notag\\
& \qquad \leq \dfrac{1}{n}\left\{\sum_{k \in {\cal M}} \mathbb{E} (\varepsilon_{i} + A_{ik})^{4}\right\}^{1/2} \left\{\sum_{k \in {\cal M}} \mathbb{P}(|\varepsilon_{i} + A_{ik}|> \sqrt{n}\varepsilon)\right\}^{1/2}\notag\\
& \qquad \leq \left\{\sum_{k \in {\cal M}}\left(\dfrac{A^{4}_{ik}}{n^{2}} + \dfrac{6A^{2}_{ik}\sigma^{2}}{n^{2}} + \dfrac{3\sigma^{4}}{n^{2}}\right) \right\}^{1/2} \left\{\sum_{k \in {\cal M}}\mathbb{P}(|\varepsilon_{i}|> \sqrt{n}\varepsilon - |A_{ik}|)\right\}^{1/2}\label{eq:fincond-lin}. \end{align}
Thus, for $|{\cal M}|$ finite, the right-hand side of (\ref{eq:fincond-lin}) tends to zero as $n \rightarrow \infty$, and Condition (\textbf{A}1) is satisfied.
The MLE of $\bbeta_{k}$ in the $k^{\rm th}$ model is given by ${\widehat{\bbeta}}_{k}= ({\rm \textbf{X}}^{\top}_{k}{\rm \textbf{X}}_{k})^{-1}{\rm \textbf{X}}^{\top}_{k}\mathbold{y}$. Let $\bbeta_{k}^{*}$ be the solution of $\mathbb{E}\, \mathbold{\ell}_{k}'(\bbeta_{k}) = \mathbold{0}$, where $\mathbold{\ell}_{k}'$ is the score function of the $k^{\rm th}$ model; solving this equation gives \begin{align}\label{eq:linreg-bbetakstar} \bbeta_{k}^{*} = ({\rm \textbf{X}}^{\top}_{k}{\rm \textbf{X}}_{k})^{-1}{\rm \textbf{X}}^{\top}_{k}{\rm \textbf{X}}\bbeta_{\mbox{\rm\tiny true}}. \end{align}
\noindent As discussed in Section \ref{subsec:notation}, the set of candidate models can be divided into two categories: the first contains the models that are biased and is denoted by ${\cal M}_{\notin}$, and the second contains the models that are not and is denoted by ${\cal M}_{\in}$. So, for $k \in {\cal M}_{\in}$ we have $\bbeta_{k}^{*}= \bbeta_{\mbox{\rm\tiny true}}$, whereas for $k \in {\cal M}_{\notin}$ we have $\bbeta_{k}^{*} \neq \bbeta_{\mbox{\rm\tiny true}}$. Therefore the bias term of the model averaging estimator $\widehat{\mu}_{ave}$ can be written as \[
\sum_{k\in {\cal M}_{\notin}} w_k ({\bx_{k}^{*}}^{\top}\bbeta_{k}^{*}- {\bx^{*}}^{\top}\bbeta_{\mbox{\rm\tiny true}}) = \sum_{k\in {\cal M}_{\notin}} w_k\left\{{\bx_{k}^{*}}^{\top}({\rm \textbf{X}}_k^{\top} {\rm \textbf{X}}_k )^{-1} {\rm \textbf{X}}_k^{\top} {\rm \textbf{X}}\bbeta_{\mbox{\rm\tiny true}} - {\bx^{*}}^{\top}\bbeta_{\mbox{\rm\tiny true}}\right\}. \] Since the weights assigned to the models are unknown, we propose an estimate of the mean squared error (MSE) and minimize it to obtain the weights assigned to the candidate models. From Theorem \ref{th:main2}, the asymptotic mean squared error of $\widehat{\mu}_{ave}$ is given by \begin{align}
Q(\mathbold{w}}\def\what{\widehat{w}}\def\wtil{\widetilde{w}}\def\wbar{{\overline w})
& = \left[\left\{\sum_{k\in {\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}_{\notin}}w_k({\bx_{k}^{*}}^{\top}\bbeta_{k}^{*} - {\bx^{*}}^{\top}\bbeta_{\mbox{\rm\tiny true}})\right\}^{2} \right.\notag \\
& \qquad \qquad + \left. \dfrac{1}{n^{2}}\sum_{k \in \mathcal{M}} \sum_{k' \in \mathcal{M}} \ w_k w_{k'}
{\bx_{k}^{*}}^{\top}\mathbf{H}_{k}^{-1}\ \mathbb{E}}\def\bbEn{\mathbb{E}_{n} \mathbold{\ell}}\def\ellhat{\widehat{\ell}}\def\elltil{\widetilde{\ell}}\def\ellbar{{\overline \ell}'_{k}(\bbeta_{\mbox{\rm\tiny true}})\mathbold{\ell}}\def\ellhat{\widehat{\ell}}\def\elltil{\widetilde{\ell}}\def\ellbar{{\overline \ell}'_{k'}(\bbeta_{\mbox{\rm\tiny true}})^{\top} \ {\mathbf{H}_{k'}^{-1}}^{\top} {\bx^{*}}_{k'}\right].\notag
\label{eq:tracelinear} \end{align}
Since $\mathbf{H}_{k}$ does not depend on $\mathbold{y}}\def\yhat{\widehat{y}}\def\ytil{\widetilde{y}}\def\ybar{{\overline y}}\def\vby{{\bf y}$, we focus on estimating $\mathbb{E}}\def\bbEn{\mathbb{E}_{n} \mathbold{\ell}}\def\ellhat{\widehat{\ell}}\def\elltil{\widetilde{\ell}}\def\ellbar{{\overline \ell}'_{k}(\bbeta_{\mbox{\rm\tiny true}})\mathbold{\ell}}\def\ellhat{\widehat{\ell}}\def\elltil{\widetilde{\ell}}\def\ellbar{{\overline \ell}'_{k'}(\bbeta_{\mbox{\rm\tiny true}})^{\top} $, which equals $
{\rm \textbf{X}}_{k}^{\top} \mathbb{E}}\def\bbEn{\mathbb{E}_{n}(\mathbold{y}}\def\yhat{\widehat{y}}\def\ytil{\widetilde{y}}\def\ybar{{\overline y}}\def\vby{{\bf y}-{\rm \textbf{X}}\mathbold{\beta}}\def\betahat{\widehat{\beta}}\def\hbeta{\widehat{\beta}_{\mbox{\rm\tiny true}})(\mathbold{y}}\def\yhat{\widehat{y}}\def\ytil{\widetilde{y}}\def\ybar{{\overline y}}\def\vby{{\bf y}-{\rm \textbf{X}} \mathbold{\beta}}\def\betahat{\widehat{\beta}}\def\hbeta{\widehat{\beta}_{\mbox{\rm\tiny true}})^{\top} {\rm \textbf{X}}_{k'} =\sigma^{2} {\rm \textbf{X}}^{\top}_{k}{\rm \textbf{X}}_{k'}.$ It follows that \begin{align*} Q(\mathbold{w}}\def\what{\widehat{w}}\def\wtil{\widetilde{w}}\def\wbar{{\overline w}) & = \left\{ \sum_{k \in {\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}_{\notin}}\sum_{k' \in {\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}_{\notin}} w_{k}w_{k'}({\bx_{k}^{*}}^{\top}\bbeta_{k}^{*} - {\bx^{*}}^{\top}\bbeta_{\mbox{\rm\tiny true}})({\bx^{*}}_{k'}^{\top}\mathbold{\beta}}\def\betahat{\widehat{\beta}}\def\hbeta{\widehat{\beta}^{*}_{k'} - {\bx^{*}}^{\top}\bbeta_{\mbox{\rm\tiny true}})\right.\notag \\
& \qquad \qquad + \left. \sigma^{2}\sum_{k \in \mathcal{M}} \sum_{k' \in \mathcal{M}} \ w_k w_{k'}
{\bx_{k}^{*}}^{\top}({\rm \textbf{X}}_{k}^{\top}{\rm \textbf{X}}_{k})^{-1}\ {\rm \textbf{X}}_{k}^{\top}{\rm \textbf{X}}_{k'} \ {({\rm \textbf{X}}_{k'}^{\top}{\rm \textbf{X}}_{k'})^{-1}} {\bx^{*}}_{k'} \right\}.\notag \end{align*} Define the estimates of $\mathbold{\beta}}\def\betahat{\widehat{\beta}}\def\hbeta{\widehat{\beta}$ and $\sigma$ as ${\widehat{\bbeta}}}\def\bbeta_{k}{{\widetilde{\bbeta}}_{full} = ({\rm \textbf{X}}^{\top}{\rm \textbf{X}})^{-1}{\rm \textbf{X}}^{\top}\mathbold{y}}\def\yhat{\widehat{y}}\def\ytil{\widetilde{y}}\def\ybar{{\overline y}}\def\vby{{\bf y}$ and $\hsigma_{full}^{2} = \norm{\mathbold{y}}\def\yhat{\widehat{y}}\def\ytil{\widetilde{y}}\def\ybar{{\overline y}}\def\vby{{\bf y}-{\rm \textbf{X}}{\widehat{\bbeta}}}\def\bbeta_{k}{{\widetilde{\bbeta}}_{full}}{}^{2}/n$, respectively. Then $({\widehat{\bbeta}}}\def\bbeta_{k}{{\widetilde{\bbeta}}_{full}, \hsigma_{full})$ are consistent estimates of $(\mathbold{\beta}}\def\betahat{\widehat{\beta}}\def\hbeta{\widehat{\beta}_{\mbox{\rm\tiny true}}, \sigma)$ under mild conditions. We therefore propose to estimate $Q(\mathbold{w}}\def\what{\widehat{w}}\def\wtil{\widetilde{w}}\def\wbar{{\overline w})$ by \begin{eqnarray} \widehat{Q}(\mathbold{w}}\def\what{\widehat{w}}\def\wtil{\widetilde{w}}\def\wbar{{\overline w}) & = & \sum_{k \in {\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}}\sum_{k' \in {\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}} w_{k}w_{k'}({\bx_{k}^{*}}^{\top}{\widehat{\bbeta}}}\def\bbeta_{k}{{\widetilde{\bbeta}}_{k} - {\bx^{*}}^{\top}{\widehat{\bbeta}}}\def\bbeta_{k}{{\widetilde{\bbeta}}_{full})({\bx^{*}}_{k'}^{\top}{\widehat{\bbeta}}}\def\bbeta_{k}{{\widetilde{\bbeta}}_{k'} - {\bx^{*}}^{\top}{\widehat{\bbeta}}}\def\bbeta_{k}{{\widetilde{\bbeta}}_{full}) \nonumber \\
&& \quad \quad + \, \hsigma_{full}^{2}\sum_{k \in \mathcal{M}} \sum_{k' \in \mathcal{M}} \ w_k w_{k'}
{\bx_{k}^{*}}^{\top}({\rm \textbf{X}}_{k}^{\top}{\rm \textbf{X}}_{k})^{-1}\ {\rm \textbf{X}}_{k}^{\top}{\rm \textbf{X}}_{k'} \ {({\rm \textbf{X}}_{k'}^{\top}{\rm \textbf{X}}_{k'})^{-1}} {\bx_{k'}^{*}}.
\label{eq:1tracelin} \end{eqnarray}
We obtain the weights of the model averaging estimator, $\mathbold{w}= (w_1,\cdots,w_{|{\cal M}|})$, by minimizing $\widehat{Q}(\mathbold{w})$ in (\ref{eq:1tracelin}) subject to $w_{k}\geq 0$ and $\sum_{k \in {\cal M}}w_{k}=1$.
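To make the weight selection step concrete, the sketch below assembles the $|{\cal M}|\times|{\cal M}|$ matrix whose quadratic form equals $\widehat{Q}(\mathbold{w})$ in (\ref{eq:1tracelin}); the resulting matrix can then be passed to a simplex-constrained minimizer such as the SLSQP sketch given earlier. The helper name and its arguments are ours and serve only as an illustration of the formulas above.
\begin{verbatim}
import numpy as np

def qhat_matrix_linear(X_list, xstar_list, X_full, x_star, y):
    """Matrix Q such that w' Q w equals Q_hat(w) for linear regression.

    X_list     : list of candidate design matrices X_k (n x |M_k|)
    xstar_list : list of subvectors x*_k matching each candidate
    X_full, x_star : full design matrix and full covariate vector
    y          : response vector
    """
    n = len(y)
    beta_full = np.linalg.lstsq(X_full, y, rcond=None)[0]
    sigma2 = np.sum((y - X_full @ beta_full) ** 2) / n
    K = len(X_list)
    bias = np.empty(K)      # x*_k' beta_k_hat - x*' beta_full_hat
    v = []                  # (X_k' X_k)^{-1} x*_k, used in the variance part
    for k, (Xk, xk) in enumerate(zip(X_list, xstar_list)):
        beta_k = np.linalg.lstsq(Xk, y, rcond=None)[0]
        bias[k] = xk @ beta_k - x_star @ beta_full
        v.append(np.linalg.solve(Xk.T @ Xk, xk))
    Q = np.empty((K, K))
    for k in range(K):
        for kp in range(K):
            cov = v[k] @ (X_list[k].T @ X_list[kp]) @ v[kp]
            Q[k, kp] = bias[k] * bias[kp] + sigma2 * cov
    return Q
\end{verbatim}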
\subsection{Estimation in Logistic Regression Framework}\label{subsec:logistic}
In this section we study the proposed model averaging estimation method under logistic regression models.
Let $\mathbold{y}}\def\yhat{\widehat{y}}\def\ytil{\widetilde{y}}\def\ybar{{\overline y}}\def\vby{{\bf y} \in {\mathbb{R}}^{n}$ be $n$ independent copies of a dichotomous response variable $Y$ taking values 0/1. Let ${\rm \textbf{X}} = (\mathbold{x}}\def\xhat{\widehat{x}}\def\xtil{\widetilde{x}}\def\xbar{{\overline x}}\def\vbx{{\bf x}_{1}, \cdots, \mathbold{x}}\def\xhat{\widehat{x}}\def\xtil{\widetilde{x}}\def\xbar{{\overline x}}\def\vbx{{\bf x}_{n})^{\top}\in {\mathbb{R}}^{n \times (p+1)}$ be a set of features. The logit model is given by,
\[p_{i}= P(y_{i}=1|{\rm \textbf{X}})=\dfrac{\exp(\mathbold{x}^{\top}_{i} \mathbold{\beta})}{1+\exp(\mathbold{x}^{\top}_{i} \mathbold{\beta})}, \quad \forall i=1,\cdots, n, \] where $\mathbold{\beta} \in {\mathbb{R}}^{p+1}$ is the vector of unknown parameters of interest.
The log-likelihood for the logistic regression can be written as, \begin{align}
\ell_{k}(\mathbold{\beta}}\def\betahat{\widehat{\beta}}\def\hbeta{\widehat{\beta}|\mathbold{y}}\def\yhat{\widehat{y}}\def\ytil{\widetilde{y}}\def\ybar{{\overline y}}\def\vby{{\bf y},{\rm \textbf{X}})
= & \log \prod_{i=1}^n \dfrac{\exp(y_i \mathbold{x}}\def\xhat{\widehat{x}}\def\xtil{\widetilde{x}}\def\xbar{{\overline x}}\def\vbx{{\bf x}^{\top}_i \mathbold{\beta}}\def\betahat{\widehat{\beta}}\def\hbeta{\widehat{\beta})}{1+\exp(\mathbold{x}}\def\xhat{\widehat{x}}\def\xtil{\widetilde{x}}\def\xbar{{\overline x}}\def\vbx{{\bf x}^{\top}_i \mathbold{\beta}}\def\betahat{\widehat{\beta}}\def\hbeta{\widehat{\beta})} = \sum_{i=1}^n y_i\mathbold{x}}\def\xhat{\widehat{x}}\def\xtil{\widetilde{x}}\def\xbar{{\overline x}}\def\vbx{{\bf x}^{\top}_i \mathbold{\beta}}\def\betahat{\widehat{\beta}}\def\hbeta{\widehat{\beta} - \sum_{i=1}^n \log(1+\exp(\mathbold{x}}\def\xhat{\widehat{x}}\def\xtil{\widetilde{x}}\def\xbar{{\overline x}}\def\vbx{{\bf x}^{\top}_i \mathbold{\beta}}\def\betahat{\widehat{\beta}}\def\hbeta{\widehat{\beta}))\nonumber. \end{align}
As before, let ${\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}= \{M_{k}\}^{|{\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}|}_{k=1}$ be the set of candidate models. Here $M_{k}$ denotes a particular set of features having cardinality $|M_{k}|$. Define ${\rm \textbf{X}}_{k}
\in {\mathbb{R}}^{n \times |M_{k}|}, 1\leq k \leq |{\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}|$ as the design matrix of the $k^{\rm th}$ candidate model with the features in $M_{k}$. Denote by $[{\rm \textbf{X}}_{k}]_{i,\Cdot}$
the $i$th row of the matrix ${\rm \textbf{X}}_{k}$, written as a column vector, so that $[{\rm \textbf{X}}_{k}]_{i,\Cdot}\in {\mathbb{R}}^{|M_{k}|}$. Let $\bbeta_{k} \in {\mathbb{R}}^{|M_{k}|}$ be the parameter vector with components corresponding to the index set $M_{k}$. We consider zero-augmentation of the parameter set $\bbeta_{k}$ for all $k$, as was done for the linear regression models.
Again, we consider estimation of a function of the form $p: {\mathbb{R}}^{p+1} \rightarrow {\mathbb{R}}$ given by \begin{align} \label{eq:logit} p(\mathbold{\beta}}\def\betahat{\widehat{\beta}}\def\hbeta{\widehat{\beta}) = \dfrac{\exp ({\bx^{*}}^{\top}{\mathbold{\beta}}\def\betahat{\widehat{\beta}}\def\hbeta{\widehat{\beta}})}{1 + \exp({\bx^{*}}^{\top}\mathbold{\beta}}\def\betahat{\widehat{\beta}}\def\hbeta{\widehat{\beta})}. \end{align} Let the unknown true parameter in our model be $\bbeta_{\mbox{\rm\tiny true}}\in {\mathbb{R}}^{p+1}$. Then $\mathbold{p}}\def\phat{\widehat{p}}\def\ptil{\widetilde{p}}\def\pbar{{\overline p}_{\mbox{\rm\tiny true}}=\mathbold{p}}\def\phat{\widehat{p}}\def\ptil{\widetilde{p}}\def\pbar{{\overline p}(\bbeta_{\mbox{\rm\tiny true}}):= {\exp({\rm \textbf{X}} \mathbold{\beta}}\def\betahat{\widehat{\beta}}\def\hbeta{\widehat{\beta}_{\mbox{\rm\tiny true}})}/\{1+\exp({\rm \textbf{X}} \mathbold{\beta}}\def\betahat{\widehat{\beta}}\def\hbeta{\widehat{\beta}_{\mbox{\rm\tiny true}})\}\in {\mathbb{R}}^{n}$ calculated component wise. To estimate the parameter $p_{\mbox{\rm\tiny true}}=p(\bbeta_{\mbox{\rm\tiny true}})$, we consider the model averaging estimator given by \begin{align*} \widehat p_{ave} = \sum_{k\in {\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}} w_{k}p(\widetilde{\bbeta}_{k}), \end{align*} where $\widetilde{\bbeta}_{k}$ is the 0-augmented version of the MLE ${\widehat{\bbeta}}}\def\bbeta_{k}{{\widetilde{\bbeta}}_{k}$ of $\bbeta_{k}$ for the $k^{\rm th}$ model. The score function for the $k^{\rm th}$ model is given by \[
\mathbold{\ell}}\def\ellhat{\widehat{\ell}}\def\elltil{\widetilde{\ell}}\def\ellbar{{\overline \ell}^{'}_{k}(\mathbold{\beta}}\def\betahat{\widehat{\beta}}\def\hbeta{\widehat{\beta}_{k})=\sum_{i} y_i [{\rm \textbf{X}}_{k}]_{i,\Cdot} - \sum_i \dfrac{\exp([{\rm \textbf{X}}_{k}]_{i,\Cdot}^{\top} \bbeta_{k})}{1+\exp([{\rm \textbf{X}}_{k}]_{i,\Cdot}^{\top}\bbeta_{k})}[{\rm \textbf{X}}_{k}]_{i,\Cdot}= {\rm \textbf{X}}_{k}^{\top} (\mathbold{y}}\def\yhat{\widehat{y}}\def\ytil{\widetilde{y}}\def\ybar{{\overline y}}\def\vby{{\bf y}-\mathbold{p}}\def\phat{\widehat{p}}\def\ptil{\widetilde{p}}\def\pbar{{\overline p}_{k})\quad \forall\ 1\leq k \leq |{\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}|, \] where $\mathbold{p}}\def\phat{\widehat{p}}\def\ptil{\widetilde{p}}\def\pbar{{\overline p}_{k}= {\exp({\rm \textbf{X}}_{k}\bbeta_{k})}/\{1+\exp({\rm \textbf{X}}_{k} \bbeta_{k})\} \in {\mathbb{R}}^{n}$. The second derivative of the log-likelihood is given by \[
\mathbold{\ell}^{''}_{k}(\mathbold{\beta}_{k}) = -\sum_{i=1}^n \dfrac{\exp([{\rm \textbf{X}}_{k}]_{i,\Cdot}^{\top} \bbeta_{k})}{\{1+\exp([{\rm \textbf{X}}_{k}]_{i,\Cdot}^{\top} \bbeta_{k})\}^2}[{\rm \textbf{X}}_{k}]_{i,\Cdot}[{\rm \textbf{X}}_{k}]_{i,\Cdot}^{\top}= -{\rm \textbf{X}}_{k}^{\top} {\rm \textbf{W}}_{k}({\rm \textbf{I}}_{n}-{\rm \textbf{W}}_{k}) {\rm \textbf{X}}_{k}\quad \forall\ 1\leq k \leq |{\cal M}| , \]
where the weight matrix ${\rm \textbf{W}}_{k} \in {\mathbb{R}}^{n \times n}$ is a diagonal matrix defined as ${\rm \textbf{W}}_{k} = \hbox{diag} \big(p_{k;1}, \cdots, p_{k;n}\big)$ with $p_{k;i} = {\exp([{\rm \textbf{X}}_{k}]_{i,\Cdot}^{\top} \bbeta_{k})}\big/{\{1+\exp([{\rm \textbf{X}}_{k}]_{i,\Cdot}^{\top} \bbeta_{k})\}}$, for $i = 1, \ldots, n$, so that ${\rm \textbf{W}}_{k}({\rm \textbf{I}}_{n}-{\rm \textbf{W}}_{k}) = \hbox{diag}\big(p_{k;1}(1-p_{k;1}), \cdots, p_{k;n}(1-p_{k;n})\big)$. Since $\mathbold{\ell}^{''}_{k}(\mathbold{\beta}_{k})$ does not depend on $\mathbold{y}$, we have ${\rm \textbf{H}}_{k} = (1/n)\mathbold{\ell}^{''}_{k}(\mathbold{\beta}_{k})$, for $1\leq k \leq |{\cal M}|$. By simple algebra, it can be verified that Condition (\textbf{A}1) is satisfied for the logistic regression model as well.
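For completeness, the following sketch evaluates the score $\mathbold{\ell}'_{k}(\bbeta_{k})$ and the matrix ${\rm \textbf{X}}_{k}^{\top}{\rm \textbf{W}}_{k}({\rm \textbf{I}}_{n}-{\rm \textbf{W}}_{k}){\rm \textbf{X}}_{k}$ (that is, $-\mathbold{\ell}''_{k}$) for a candidate model. It is only an illustration of the formulas above; the function name is ours.
\begin{verbatim}
import numpy as np

def logistic_score_and_information(Xk, y, beta_k):
    """Score X_k'(y - p_k) and information X_k' W_k (I - W_k) X_k
    for candidate model k (a sketch of the logistic formulas above)."""
    p_k = 1.0 / (1.0 + np.exp(-(Xk @ beta_k)))   # exp(eta)/(1+exp(eta))
    score = Xk.T @ (y - p_k)
    w = p_k * (1.0 - p_k)                        # diagonal of W_k(I - W_k)
    info = Xk.T @ (Xk * w[:, None])              # X_k' W_k (I - W_k) X_k
    return score, info
\end{verbatim}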
To estimate the bias of the model averaging estimator, we define $\bbeta_{k}^{*}$ as the solution of the equation $\mathbb{E}}\def\bbEn{\mathbb{E}_{n}[\mathbold{\ell}}\def\ellhat{\widehat{\ell}}\def\elltil{\widetilde{\ell}}\def\ellbar{{\overline \ell}'_{k}(\bbeta_{k})]= \mathbb{E}}\def\bbEn{\mathbb{E}_{n}\{{\rm \textbf{X}}_k^{\top}(\mathbold{y}}\def\yhat{\widehat{y}}\def\ytil{\widetilde{y}}\def\ybar{{\overline y}}\def\vby{{\bf y}-\mathbold{p}}\def\phat{\widehat{p}}\def\ptil{\widetilde{p}}\def\pbar{{\overline p}_{k})\}= \mathbold{0}}\def\bone{\mathbold{1}$. That is, $\bbeta_{k}^{*}$ is a solution of \begin{align}\label{eq:logitscore-k} {\rm \textbf{X}}_k^{\top} (\mathbold{p}}\def\phat{\widehat{p}}\def\ptil{\widetilde{p}}\def\pbar{{\overline p}_{\mbox{\rm\tiny true}}-\mathbold{p}}\def\phat{\widehat{p}}\def\ptil{\widetilde{p}}\def\pbar{{\overline p}_{k})= 0. \end{align} Denote by $\mathbold{p}}\def\phat{\widehat{p}}\def\ptil{\widetilde{p}}\def\pbar{{\overline p}^{*}_{k}= {\exp({\rm \textbf{X}}_{k}\bbeta_{k}^{*})}/\{1+\exp({\rm \textbf{X}}_{k} \bbeta_{k}^{*})\} \in {\mathbb{R}}^{n}$ calculated component wise. We have ${\rm \textbf{X}}_k^{\top} (\mathbold{p}}\def\phat{\widehat{p}}\def\ptil{\widetilde{p}}\def\pbar{{\overline p}_{\mbox{\rm\tiny true}}-\mathbold{p}}\def\phat{\widehat{p}}\def\ptil{\widetilde{p}}\def\pbar{{\overline p}_{k}^{*})= 0$, and it follows that \begin{align*} \mathbb{E}}\def\bbEn{\mathbb{E}_{n} \mathbold{\ell}}\def\ellhat{\widehat{\ell}}\def\elltil{\widetilde{\ell}}\def\ellbar{{\overline \ell}'_{k}(\bbeta_{k}^{*})\mathbold{\ell}}\def\ellhat{\widehat{\ell}}\def\elltil{\widetilde{\ell}}\def\ellbar{{\overline \ell}'_{k'}(\bbeta_{k}^{*})^{\top} & = {\rm \textbf{X}}_{k}^{\top}\mathbb{E}}\def\bbEn{\mathbb{E}_{n}(\mathbold{y}}\def\yhat{\widehat{y}}\def\ytil{\widetilde{y}}\def\ybar{{\overline y}}\def\vby{{\bf y}-\mathbold{p}}\def\phat{\widehat{p}}\def\ptil{\widetilde{p}}\def\pbar{{\overline p}^{*}_{k})(\mathbold{y}}\def\yhat{\widehat{y}}\def\ytil{\widetilde{y}}\def\ybar{{\overline y}}\def\vby{{\bf y}-\mathbold{p}}\def\phat{\widehat{p}}\def\ptil{\widetilde{p}}\def\pbar{{\overline p}^{*}_{k'})^{\top} {\rm \textbf{X}}_{k'}\\ & = {\rm \textbf{X}}_{k}^{\top} \mathbb{E}}\def\bbEn{\mathbb{E}_{n}\{(\mathbold{y}}\def\yhat{\widehat{y}}\def\ytil{\widetilde{y}}\def\ybar{{\overline y}}\def\vby{{\bf y}-\mathbold{p}}\def\phat{\widehat{p}}\def\ptil{\widetilde{p}}\def\pbar{{\overline p}_{\mbox{\rm\tiny true}})- (\mathbold{p}}\def\phat{\widehat{p}}\def\ptil{\widetilde{p}}\def\pbar{{\overline p}^{*}_{k}- \mathbold{p}}\def\phat{\widehat{p}}\def\ptil{\widetilde{p}}\def\pbar{{\overline p}_{\mbox{\rm\tiny true}})\}\{(\mathbold{y}}\def\yhat{\widehat{y}}\def\ytil{\widetilde{y}}\def\ybar{{\overline y}}\def\vby{{\bf y}-\mathbold{p}}\def\phat{\widehat{p}}\def\ptil{\widetilde{p}}\def\pbar{{\overline p}_{\mbox{\rm\tiny true}}) - (\mathbold{p}}\def\phat{\widehat{p}}\def\ptil{\widetilde{p}}\def\pbar{{\overline p}^{*}_{k'}- \mathbold{p}}\def\phat{\widehat{p}}\def\ptil{\widetilde{p}}\def\pbar{{\overline p}_{\mbox{\rm\tiny true}})\}^{\top} {\rm \textbf{X}}_{k'} \\ & = {\rm \textbf{X}}_{k}^{\top} \mathbb{E}}\def\bbEn{\mathbb{E}_{n} (\mathbold{y}}\def\yhat{\widehat{y}}\def\ytil{\widetilde{y}}\def\ybar{{\overline y}}\def\vby{{\bf y}-\mathbold{p}}\def\phat{\widehat{p}}\def\ptil{\widetilde{p}}\def\pbar{{\overline p}_{\mbox{\rm\tiny true}})(\mathbold{y}}\def\yhat{\widehat{y}}\def\ytil{\widetilde{y}}\def\ybar{{\overline y}}\def\vby{{\bf y}-\mathbold{p}}\def\phat{\widehat{p}}\def\ptil{\widetilde{p}}\def\pbar{{\overline 
p}_{\mbox{\rm\tiny true}})^{\top} {\rm \textbf{X}}_{k'} = {\rm \textbf{X}}_{k}^{\top} {\rm \textbf{W}}^{\mbox{\rm\tiny true}} {\rm \textbf{X}}_{k'} \end{align*} where ${\rm \textbf{W}}^{\mbox{\rm\tiny true}}=var( \mathbold{y}}\def\yhat{\widehat{y}}\def\ytil{\widetilde{y}}\def\ybar{{\overline y}}\def\vby{{\bf y}-\mathbold{p}}\def\phat{\widehat{p}}\def\ptil{\widetilde{p}}\def\pbar{{\overline p}_{\mbox{\rm\tiny true}}) = \mathbb{E}}\def\bbEn{\mathbb{E}_{n} (\mathbold{y}}\def\yhat{\widehat{y}}\def\ytil{\widetilde{y}}\def\ybar{{\overline y}}\def\vby{{\bf y}-\mathbold{p}}\def\phat{\widehat{p}}\def\ptil{\widetilde{p}}\def\pbar{{\overline p}_{\mbox{\rm\tiny true}})(\mathbold{y}}\def\yhat{\widehat{y}}\def\ytil{\widetilde{y}}\def\ybar{{\overline y}}\def\vby{{\bf y}-\mathbold{p}}\def\phat{\widehat{p}}\def\ptil{\widetilde{p}}\def\pbar{{\overline p}_{\mbox{\rm\tiny true}})^{\top}$. In addition, write ${\rm \textbf{W}}^{*}_{k} = \hbox{diag}(\mathbold{p}}\def\phat{\widehat{p}}\def\ptil{\widetilde{p}}\def\pbar{{\overline p}^{*}_{k}) \in {\mathbb{R}}^{n\times n}$. The gradient $\nabla p$ is given by $
\nabla p(\bbeta_{k}^{*}) = p_k^*(1-p_k^*){\bx_{k}^{*}},$ $1\leq k\leq |{\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}|.$
Thus, the asymptotic MSE is \begin{align*}
Q(\mathbold{w}}\def\what{\widehat{w}}\def\wtil{\widetilde{w}}\def\wbar{{\overline w}) & = \sum_{k \in {\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}}\sum_{k' \in {\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}} w_{k}w_{k'}(p^{*}_{k}-p_{\mbox{\rm\tiny true}})(p^{*}_{k'}-p_{\mbox{\rm\tiny true}})\notag \\
& \qquad + \sum_{k \in \mathcal{M}} \sum_{k' \in \mathcal{M}} \ w_{k} w_{k'} p_k^*(1-p_k^*){\bx_{k}^{*}}^{\top} \{{\rm \textbf{X}}^{\top}_{k}{\rm \textbf{W}}^{*}_{k}({\rm \textbf{I}}} \def\bbI{\mathbb{I}_{n}-{\rm \textbf{W}}^{*}_{k}){\rm \textbf{X}}_{k}\}^{-1}\\
& \qquad \qquad \times\ {\rm \textbf{X}}_{k}^{\top} {\rm \textbf{W}}^{\mbox{\rm\tiny true}} {\rm \textbf{X}}_{k'}\{{\rm \textbf{X}}^{\top}_{k'}{\rm \textbf{W}}^{*}_{k'}({\rm \textbf{I}}_{n}-{\rm \textbf{W}}^{*}_{k'}){\rm \textbf{X}}_{k'}\}^{-1} {\bx_{k'}^{*}}\, p_{k'}^*(1-p_{k'}^*)
.\notag \end{align*}
However, $Q(\mathbold{w})$ involves the unknown $\bbeta_{\mbox{\rm\tiny true}}$ and $\mathbold{\beta}_k^{*}$. As in the linear regression case, we use the full model to estimate $\mathbold{\beta}_{\mbox{\rm\tiny true}}$ and denote the estimator by ${\widehat{\bbeta}}_{full}$. We then compute $\widehat{\mathbold{p}}_{full} = \mathbold{p}({\widehat{\bbeta}}_{full})$ and $ \widehat p_{full} = p({\widehat{\bbeta}}_{full})$. The estimators $\widehat{\mathbold{p}}_k^{*}={\exp({\rm \textbf{X}}_{k}{\widehat{\bbeta}}_{k})}/\{1+\exp({\rm \textbf{X}}_{k} {\widehat{\bbeta}}_{k})\} $ and $\widehat p_k^{*}={\exp({\bx_{k}^{*}}^{\top}{\widehat{\bbeta}}_{k})}/\{1+\exp({\bx_{k}^{*}}^{\top} {\widehat{\bbeta}}_{k})\} $ are obtained by solving the equation \begin{align}\label{eq:logitscore-k2} {\rm \textbf{X}}_k^{\top} (\widehat{\mathbold{p}}_{full}-\mathbold{p}_{k})= 0, \end{align} using the iteratively re-weighted least squares (IRLS) method; cf., e.g., \cite{Holland2007}. Specifically, let ${\bbeta_{k}}^{(s)}$ be the value of $\bbeta_{k}$ at the $s^{\rm th}$ stage of the IRLS algorithm. The coefficients for the $(s+1)^{\rm th}$ stage are then given by \begin{align*}
{\bbeta_{k}}^{(s+1)} & = \left.{\mathbold{\beta}}\def\betahat{\widehat{\beta}}\def\hbeta{\widehat{\beta}}_k^{(s)} + \{{\rm \textbf{X}}_k^{\top} {\rm \textbf{W}}_{k}({\rm \textbf{I}}} \def\bbI{\mathbb{I}_{n}-{\rm \textbf{W}}_{k}) {\rm \textbf{X}}_k\}^{-1}{\rm \textbf{X}}_k^{\top} \left\{ \dfrac{\exp({\rm \textbf{X}} {\widehat{\bbeta}}}\def\bbeta_{k}{{\widetilde{\bbeta}}_{full})}{1+\exp({\rm \textbf{X}} {\widehat{\bbeta}}}\def\bbeta_{k}{{\widetilde{\bbeta}}_{full})}-\dfrac{\exp({\rm \textbf{X}}_k {\bbeta_{k}}^{})}{1+\exp({\rm \textbf{X}}_k {\bbeta_{k}}^{})} \right\}\nonumber\right|_{\bbeta_{k}={\bbeta_{k}}^{(s)}}, \end{align*} for $s = 0, 1, 2, ...$ When the algorithm converges, we obtain the estimate ${\widehat{\bbeta}}}\def\bbeta_{k}{{\widetilde{\bbeta}}_{k}$. Putting together, we estimate $Q(\mathbold{w}}\def\what{\widehat{w}}\def\wtil{\widetilde{w}}\def\wbar{{\overline w})$ by \begin{align}\label{eq:logis_Q} \hat Q(\mathbold{w}}\def\what{\widehat{w}}\def\wtil{\widetilde{w}}\def\wbar{{\overline w}) & =\sum_{k \in {\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}}\sum_{k' \in {\cal M}}\def\scrM{{\mathscr M}}\def\Mhat{\widehat{M}}\def\Mtil{{\widetilde M}}\def\Mbar{{\overline M}} w_{k}w_{k'}(\widehat p^{*}_{k}-\widehat p_{full})(\widehat p^{*}_{k'}-\widehat p_{full})\notag \\
& \qquad + \sum_{k \in \mathcal{M}} \sum_{k' \in \mathcal{M}} \ w_{k} w_{k'} \widehat p_k^*(1-\widehat p_k^*){\bx_{k}^{*}}^{\top} \{{\rm \textbf{X}}^{\top}_{k}{\rm \textbf{W}}^{*}_{k}({\rm \textbf{I}}} \def\bbI{\mathbb{I}_{n}-{\rm \textbf{W}}^{*}_{k}){\rm \textbf{X}}_{k}\}^{-1}\notag\\
& \qquad \times\ {\rm \textbf{X}}_{k}^{\top} {\rm \textbf{W}}^{\mbox{\rm\tiny true}} {\rm \textbf{X}}_{k'}\{{\rm \textbf{X}}^{\top}_{k'}{\rm \textbf{W}}^{*}_{k'}({\rm \textbf{I}}_{n}-{\rm \textbf{W}}^{*}_{k'}){\rm \textbf{X}}_{k'}\}^{-1}{\bx_{k'}^{*}}\, \widehat p_{k'}^*(1-\widehat p_{k'}^*) \bigg |_{\mathbold{p}_k^*= \widehat{\mathbold{p}}_k^*;\; \mathbold{p}_{k'}^*= \widehat{\mathbold{p}}_{k'}^*;\; \mathbold{p}_{\mbox{\rm\tiny true}} = \widehat{\mathbold{p}}_{full}}.
\end{align}
We can obtain $w_1,\cdots,w_{|{\cal M}|}$ by minimizing the estimated MSE $\hat Q(\mathbold{w})$ subject to $w_{k}\geq 0$ and $\sum_{k}w_{k}=1$, similar to the development in the linear regression setup. These weights are then assigned to the individual models to form the model averaging estimator.
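The weight selection above requires the candidate fits ${\widehat{\bbeta}}_{k}$ solving (\ref{eq:logitscore-k2}). A minimal sketch of the Newton/IRLS iteration displayed above is given below; the function name, arguments, and stopping rule are ours and serve only as an illustration.
\begin{verbatim}
import numpy as np

def irls_candidate_fit(Xk, p_full_hat, beta_init, max_iter=100, tol=1e-8):
    """Solve X_k'(p_full_hat - p_k(beta)) = 0 by the IRLS/Newton update
    of the logistic-regression subsection (a sketch; names are ours)."""
    beta = np.asarray(beta_init, dtype=float).copy()
    for _ in range(max_iter):
        p_k = 1.0 / (1.0 + np.exp(-(Xk @ beta)))
        grad = Xk.T @ (p_full_hat - p_k)            # quantity driven to zero
        w = p_k * (1.0 - p_k)
        info = Xk.T @ (Xk * w[:, None])             # X_k' W_k (I - W_k) X_k
        step = np.linalg.solve(info, grad)
        beta += step
        if np.max(np.abs(step)) < tol:
            break
    return beta
\end{verbatim}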
\section{Simulation Study \& Real Data Analysis}\label{sec:4} \label{sec:simudata}
\subsection{Simulation Study I: bias and variance tradeoff} \label{sec:simu1}
We study both finite and large sample behavior of the model averaging estimator under a regression setup: $\mathbold{y}}\def\yhat{\widehat{y}}\def\ytil{\widetilde{y}}\def\ybar{{\overline y}}\def\vby{{\bf y} = {\rm \textbf{X}}\mathbold{\beta}}\def\betahat{\widehat{\beta}}\def\hbeta{\widehat{\beta} + \mathbold{\varepsilon}\def\epsa{\epsilon}\def\eps{\epsilon}}\def\ephat{\widehat{\varepsilon}\def\epsa{\epsilon}\def\eps{\epsilon}}\def\hep{\widehat{\varepsilon}\def\epsa{\epsilon}\def\eps{\epsilon}$ where $\mathbold{y}}\def\yhat{\widehat{y}}\def\ytil{\widetilde{y}}\def\ybar{{\overline y}}\def\vby{{\bf y}, \mathbold{\varepsilon}\def\epsa{\epsilon}\def\eps{\epsilon}}\def\ephat{\widehat{\varepsilon}\def\epsa{\epsilon}\def\eps{\epsilon}}\def\hep{\widehat{\varepsilon}\def\epsa{\epsilon}\def\eps{\epsilon} \in {\mathbb{R}}^{n}$ and $\mathbold{\beta}}\def\betahat{\widehat{\beta}}\def\hbeta{\widehat{\beta}\in {\mathbb{R}}^{p + 1}$. In the study, $p = 9$ and $\mathbold{\beta}}\def\betahat{\widehat{\beta}}\def\hbeta{\widehat{\beta} = (\beta_{0}, \beta_{1}, \cdots, \beta_{9})^{\mathsf{T}}$ where $\beta_{0}$ is the intercept coefficient. We assume that 5 parameters $(\beta_{0}, \cdots, \beta_{4})^{\mathsf{T}}$ are always included in all candidate models and the remaining parameters $(\beta_{5}, \cdots, \beta_{9})^{\mathsf{T}}$ may or may not be in a candidate model. For simulation of $\mathbold{y}}\def\yhat{\widehat{y}}\def\ytil{\widetilde{y}}\def\ybar{{\overline y}}\def\vby{{\bf y}$, first we set the true parameter (henceforth, referred to as $\mathbold{\beta}}\def\betahat{\widehat{\beta}}\def\hbeta{\widehat{\beta}^{*}$) as follows:
\[\mathbold{\beta}^{*} = \big(\underbrace{0.3,\ 0.3,\ 0.5,\ 0.1,\ 0.5}_{\text{Always Included}},\ \underbrace{0.0,\ 0.6,\ 0.0,\ 0.1,\ 0.0}_{\text{Candidate Parameters}}\big)^{\mathsf{T}}.\]
For the design matrix, the first column of ${\rm \textbf{X}}$ is set to 1 (for the intercept) and the remaining entries are simulated independently from the $\mathsf{N}(0,1)$ distribution. The final response $\mathbold{y}$ is obtained by adding an independent Gaussian error $\varepsilon_{i} \sim \mathsf{N}(0,1)$ to each row. We also simulate ${\bx^{*}} = (1, x^{*}_{1}, \cdots, x^{*}_{9})^{\mathsf{T}}$ so that each element $x^{*}_{j}$ is drawn from $\mathsf{N}(0, 1)$, and define our parameter of interest $\mu^{*} = {{\bx^{*}}}^{\mathsf{T}}\mathbold{\beta}^{*}$.
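For reference, the data-generating step just described can be sketched as follows; the helper name, the seed, and the sample size used in the call are ours and are not part of the study design.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)   # arbitrary seed, for reproducibility only

def simulate_study_one(n, beta_star):
    """Generate (X, y, x_star, mu_star) following the design above."""
    p = len(beta_star) - 1
    X = np.column_stack([np.ones(n), rng.standard_normal((n, p))])
    y = X @ beta_star + rng.standard_normal(n)            # N(0,1) errors
    x_star = np.concatenate(([1.0], rng.standard_normal(p)))
    mu_star = x_star @ beta_star
    return X, y, x_star, mu_star

beta_star = np.array([0.3, 0.3, 0.5, 0.1, 0.5,   # always-included block
                      0.0, 0.6, 0.0, 0.1, 0.0])  # candidate block
X, y, x_star, mu_star = simulate_study_one(200, beta_star)  # n = 200 example
\end{verbatim}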
\begin{figure}
\caption{Bias and variance movement for the proposed model averaging and the oracle estimator of $\mu^{*}$. The true model is a sub-model of (nested within) some of the candidate models, but not included in the candidate model set in Case B.}
\label{fig:pic1}
\end{figure} Based on all possible choices of the last 5 parameters, there are a total of $2^{5} = 32$ candidate models. For ease of computation, we consider the following 6 nested candidate models together with the true/oracle model (represented pictorially): \begin{align} \label{eq:candiatemodels} \kbordermatrix{
& \boldsymbol{\beta_{5}} & \boldsymbol{\beta_{6}} & \boldsymbol{\beta_{7}} & \boldsymbol{\beta_{8}} & \boldsymbol{\beta_{9}} \\
\text{\textbf{Candidate} 1} & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark \\[-0.5em]
\text{\textbf{Candidate} 2} & \text{\ding{55}} & \checkmark & \checkmark & \checkmark & \checkmark \\[-0.5em]
\text{\textbf{Candidate} 3} & \text{\ding{55}} & \text{\ding{55}} & \checkmark & \checkmark & \checkmark \\[-0.5em]
\text{\textbf{Candidate} 4} & \text{\ding{55}} & \text{\ding{55}} & \text{\ding{55}} & \checkmark & \checkmark \\[-0.5em]
\text{\textbf{Candidate} 5} & \text{\ding{55}} & \text{\ding{55}} & \text{\ding{55}} & \text{\ding{55}} & \checkmark
\\[-0.5em]
\text{\textbf{Candidate} 6} & \text{\ding{55}} & \text{\ding{55}} & \text{\ding{55}} & \text{\ding{55}} & \text{\ding{55}}
\\[-0.5em]
\text{\textbf{Oracle } } & \text{\ding{55}} & \checkmark & \text{\ding{55}} & \checkmark & \text{\ding{55}}
}. \end{align} Note that the true model is a sub-model of candidate models 1 and 2, while candidate model 6 contains only the first 5 fixed parameters and none of the candidate parameters. We consider two cases: in \textbf{Case A} we use all 7 models in (\ref{eq:candiatemodels}), comprising the 6 nested models and the true model; in \textbf{Case B} we use only the first 6 nested models. We compare our results with the \textit{oracle estimate}, for which we know beforehand which parameters are non-zero and use least squares to estimate $\mathbold{\beta}$ and consequently $\mu^{*}$. We vary the sample size $n$ from 100 to 1000 and compare the bias and variance of the proposed and the oracle methods.
In Figure \ref{fig:pic1}, we consider two cases: \textbf{Case A}, where the true (or oracle)-model is one of the candidate models and \textbf{Case B}, the true model is not one of the candidate models. In Figure \ref{fig:pic1} we compare the squared bias, variance and mean squared error movements as sample size is increased. In the top panel for \textbf{Case A}, the model-average estimator has less variance than the oracle estimate even for very small sample sizes which is to be expected; the reason being that the candidate model set contains the oracle as one of its candidates and further averaging reduces variances. In the bottom panel for \textbf{Case B}, with the increase in sample size, the variance of the proposed estimator decreases but is slightly higher compared to oracle. In both cases, the bias matches the oracle very closely as the sample size increases. It is suggestive from the plots in Figure \ref{fig:pic1} that in the linear regression setup, even when the candidate models do not include the true set of parameters, model averaging approaches the performance of oracle estimator in terms of bias and variance. We want to stress that this close performance of the model averaging estimator as compared to oracle is specific to this simple linear regression setup where the true model is a sub-model of some of the candidate models. In general, the question of \textit{whether the performance of model averaging estimator is close to the oracle}, would require separate investigation specific to the model and data at hand.
\subsection{Simulation Study II: Comparison with existing model averaging methods}
In this subsection we carry out simulation studies under both linear and logistic regression models to compare the frequentist model averaging estimator with the proposed weights against two existing model averaging methods, namely those of \cite{Hjort2003} and \citet{Liang2011}. The method of \cite{Hjort2003} (which we refer to as the FMA method) and the method of \citet{Liang2011} (which we refer to as the OPT method) are two well-studied approaches, and both are closely related to ours. The FMA method combines estimators from different models under a local misspecification framework, so each candidate model is required to have a bias of order ${\cal O}({1}/{\sqrt n})$ or smaller; our proposed method does not have this restriction. The OPT method proposes an unbiased estimate of the MSE of the model averaging estimator, and the model averaging weights are obtained by minimizing the trace of this MSE estimate. The weight selection for OPT has been shown to exhibit optimality properties in terms of minimizing the MSE; however, its development is limited to the linear regression setting.
\vskip5pt\noindent\textbf{Linear Regression:} In the linear regression setup, we work with a design similar to the one described in Subsection \ref{sec:simu1}. In particular, in the setup $\mathbold{y} = {\rm \textbf{X}}\mathbold{\beta} + \mathbold{\varepsilon}$ with $\mathbold{y}, \mathbold{\varepsilon} \in {\mathbb{R}}^{n}$ and $\mathbold{\beta}\in {\mathbb{R}}^{p}$, we take $p = 4$ and $n = 100$; we denote $\mathbold{\beta} = (\beta_{0}, \beta_{1}, \beta_{2}, \beta_{3})$ with $\beta_{0}$ being the intercept coefficient. In this setup the fixed parameter is $\beta_{0}$ (i.e., $k$ = 1) and the rest may or may not appear in a candidate model (i.e., $m$ = 3). As before, we use $\mathbold{\beta}^{*}$ to denote the true parameter. The elements of the design matrix ${\rm \textbf{X}}$ are simulated independently from a $\mathsf{N}(0,1)$ distribution, and the elements of the error vector $\mathbold{\varepsilon}$ are simulated independently from $\mathsf{N}(0, 1)$. \begin{table}[t]
\centering
\scalebox{0.8}{\begin{tabular}{cccclcclcclcc}
\toprule
\multicolumn{13}{c}{\textbf{Case A : True model among candidates}}\\[0.5em]
\toprule
\multirow{3}{*}{$\beta^{*}_{3}$} & \multirow{3}{*}{$\mu^{*}$} &\multicolumn{2}{c}{\textbf{(a) Proposed}} & &\multicolumn{2}{c}{\textbf{(b) OPT}} & & \multicolumn{2}{c}{\textbf{(c) FMA}} & &\multicolumn{2}{c}{\textbf{(d) Oracle}}\\
\cline{3-4}\cline{6-7}\cline{9-10}\cline{12-13}\\[-0.7em]
& &\textbf{Estimate} & \textbf{\textit{Error}} & &\textbf{Estimate} & \textbf{\textit{Error}} & &\textbf{Estimate} & \textbf{\textit{Error}} & &\textbf{Estimate} & \textbf{\textit{Error}}\\
\midrule
0.001 & -0.192 & -0.059 & \textit{0.249} & & -0.028 & \textit{0.231} & & 0.186 & \textit{0.400} & & -0.14 & \textit{0.221}\\
0.005 & -0.196 & -0.062 & \textit{0.250} & & -0.032 & \textit{0.231} & & 0.184 & \textit{0.402} & & -0.145 & \textit{0.221}\\
0.01 & -0.202 & -0.064 & \textit{0.252} & & -0.037 & \textit{0.232} & & 0.182 & \textit{0.404} & & -0.15 & \textit{0.221}\\
0.05 & -0.243 & -0.103 & \textit{0.261} & & -0.075 & \textit{0.238} & & 0.165 & \textit{0.421} & & -0.192 & \textit{0.221}\\
0.1 & -0.296 & -0.149 & \textit{0.268} & & -0.119 & \textit{0.248} & & 0.148 & \textit{0.445} & & -0.244 & \textit{0.221}\\
0.5 & -0.714 & -0.599 & \textit{0.248} & & 0.104 & \textit{0.832} & & 0.129 & \textit{0.849} & & -0.662 & \textit{0.221}\\
\toprule
\multicolumn{13}{c}{\textbf{Case B: True model not among candidates}}\\[0.5em]
\toprule
\multirow{3}{*}{$\beta^{*}_{3}$} & \multirow{3}{*}{$\mu^{*}$} &\multicolumn{2}{c}{\textbf{(a) Proposed}} & &\multicolumn{2}{c}{\textbf{(b) OPT}} & & \multicolumn{2}{c}{\textbf{(c) FMA}} & &\multicolumn{2}{c}{\textbf{(d) Oracle}}\\
\cline{3-4}\cline{6-7}\cline{9-10}\cline{12-13}\\[-0.7em]
& &\textbf{Estimate} & \textbf{\textit{Error}} & &\textbf{Estimate} & \textbf{\textit{Error}} & &\textbf{Estimate} & \textbf{\textit{Error}} & &\textbf{Estimate} & \textbf{\textit{Error}}\\
\midrule
0.001 & -0.192 & -0.058 & \textit{0.246} & & 0.063 & \textit{0.373} & & 0.217 & \textit{0.430} & & -0.14 & \textit{0.221}\\
0.005 & -0.196 & -0.06 & \textit{0.248} & & 0.063 & \textit{0.375} & & 0.217 & \textit{0.433} & & -0.145 & \textit{0.221}\\
0.01 & -0.202 & -0.061 & \textit{0.249} & & 0.063 & \textit{0.379} & & 0.216 & \textit{0.438} & & -0.15 & \textit{0.221}\\
0.05 & -0.243 & -0.091 & \textit{0.259} & & 0.064 & \textit{0.406} & & 0.21 & \textit{0.472} & & -0.192 & \textit{0.221}\\
0.1 & -0.296 & -0.112 & \textit{0.276} & & 0.066 & \textit{0.444} & & 0.202 & \textit{0.515} & & -0.244 & \textit{0.221}\\
0.5 & -0.714 & -0.073 & \textit{0.663} & & 0.077 & \textit{0.817} & & 0.129 & \textit{0.849} & & -0.662 & \textit{0.221}\\
\hline
\bottomrule
\end{tabular}}
\caption{(\textbf{Linear Regression}) Estimation of $\mu^{*}$ for the (a) model averaging estimator with the proposed weights, (b) model averaging estimator with Liang's (\cite{Liang2011}) weights, (c) Hjort's (\cite{Hjort2003}) model averaging estimator with AIC-based weights, and (d) the oracle estimator. Here, in the top table, the candidate models include the true set of parameters (\textbf{Case A}), while in the bottom table the true set of parameters is not included (\textbf{Case B}), as described in (\ref{eq:candimods}). } \label{tab:mse_oracInNotIn} \end{table}
In this simulation setup, the estimand of interest is the following: \begin{align*} \mu^{*} = {\bx^{*}}^{\mathsf{T}}\mathbold{\beta}^{*}, \text{ where } {\bx^{*}} \sim \mathsf{N}_{p}(\mathbold{0}, {\rm \textbf{I}}_{4}). \end{align*} For our specific example, we have ${\bx^{*}} = (1, -1.855445, -1.018565, -1.045111)$ and the true parameter $\mathbold{\beta}^{*} = (0.3, 0.1, 0.3, \beta^{*}_{3})$. In the following we will vary the value of $\beta^{*}_{3}$ in the set $\{0.001, 0.005, 0.01, 0.05, 0.1, 0.5\}$ and compare the performances of different methods. As before, we will consider two different sets of candidate models: \begin{align}\label{eq:candimods} \begin{array}{ccl} \textbf{Case A } & : &\{\beta_{0}\},\ \{\beta_{0}, \beta_{1}\},\ \{\beta_{0}, \beta_{1}, \beta_{2}\},\ \{\beta_{0}, \beta_{1}, \beta_{2}, \beta_{3}\}\\ \textbf{Case B } & : &\{\beta_{0}\},\ \{\beta_{0}, \beta_{1}\},\ \{\beta_{0}, \beta_{1}, \beta_{2}\}. \end{array} \end{align} Note that in \textbf{Case A}, the true parameter set is included among the candidate models, while in \textbf{Case B}, the true parameter set is not included. In fact, \textbf{Case B} represents a typical scenario where researchers are not even aware of the existence of the feature corresponding to $\beta_{3}$ and hence are working under a mis-specified model.
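For concreteness, the data-generating step just described can be sketched in a few lines of Python. The snippet below is only an illustrative reconstruction of the setup; the function name \texttt{simulate\_linear\_setup} and the explicit intercept column are choices made for this illustration and do not correspond to any released code.
\begin{verbatim}
import numpy as np

def simulate_linear_setup(beta3, n=100, seed=0):
    # True parameter: intercept and three slopes, as in the text.
    rng = np.random.default_rng(seed)
    beta_star = np.array([0.3, 0.1, 0.3, beta3])
    # Design: an explicit intercept column plus three N(0,1) covariates.
    X = np.column_stack([np.ones(n), rng.standard_normal((n, 3))])
    y = X @ beta_star + rng.standard_normal(n)
    # Fixed covariate value used for the estimand mu* = x*^T beta*.
    x_star = np.array([1.0, -1.855445, -1.018565, -1.045111])
    mu_star = x_star @ beta_star
    return X, y, x_star, mu_star
\end{verbatim}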
In Table \ref{tab:mse_oracInNotIn} the performances of different methods are compared for \textbf{Case A} (at the top) and \textbf{Case B} (at the bottom). For each choice of $\beta^{*}_{3}$, we performed 10 simulations and reported their averages in Table \ref{tab:mse_oracInNotIn}, along with the root mean square error (in italics). Specifically, the error for this simulation setup was defined as, \[
\text{Error} \ = \ \sqrt{({1}/{10})\sum_{k = 1}^{10}|\widehat{\mu}_{k}-\mu^{*}|^{2}}, \] where $\widehat{\mu}_{k}$ is the estimate corresponding to a specific method at the $k^{\rm th}$ simulation. In Table \ref{tab:mse_oracInNotIn} \textbf{Case A} we compare the methods when $\beta_{3}$ is included in the largest candidate model, while in \textbf{Case B}, $\beta_{3}$ is not considered in any of the candidate models. From Table \ref{tab:mse_oracInNotIn} \textbf{Case A}, it can be seen that in the finite sample framework ($n = 100$), the performances of the proposed model-averaging estimator and OPT are similar and both outperform FMA. Moreover, when the magnitude of $\beta^{*}_{3}$ increases to 0.5, the proposed model averaging method outperforms both FMA and OPT. On the other hand, the setup in Table \ref{tab:mse_oracInNotIn} \textbf{Case B}
shows that with the increase in $\beta^{*}_{3}$, the estimation error increases consistently for all three methods. Nevertheless, our proposed method clearly outperforms the competing methods in this scenario for all $\beta^{*}_{3}$ values. We also remark that the proposed method performs well up to $\beta^{*}_{3} = 0.1$, but the error jumps for the larger signal with $\beta^{*}_{3} = 0.5$. This is expected, since $\beta_{3}$ is not considered in any of the candidate models and the extent of model mis-specification is large at $\beta^{*}_{3} = 0.5$.
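The \textit{Error} column of Table \ref{tab:mse_oracInNotIn} can then be reproduced, for any estimator of $\mu^{*}$, by a short Monte Carlo loop of the following form. This is again only a sketch: it reuses the helper \texttt{simulate\_linear\_setup} from the previous snippet and, purely for illustration, plugs in an ordinary least-squares fit of the largest candidate model in place of the model averaging estimators actually compared in the table.
\begin{verbatim}
import numpy as np

def error_over_replications(beta3, n_rep=10, seed=1):
    # Placeholder estimator: OLS on the largest candidate model.
    estimates = []
    for s in range(seed, seed + n_rep):
        X, y, x_star, mu_star = simulate_linear_setup(beta3, seed=s)
        beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
        estimates.append(x_star @ beta_hat)
    estimates = np.asarray(estimates)
    # mu_star does not depend on the seed, so the last value can be reused.
    avg = estimates.mean()
    err = np.sqrt(np.mean((estimates - mu_star) ** 2))
    return avg, err
\end{verbatim}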
\vskip5pt\noindent\textbf{Logistic Regression:} \begin{table}[t]
\centering
\scalebox{0.8}{\begin{tabular}{cccclcclcc}
\toprule
\multicolumn{10}{c}{\textbf{Case A: True model among candidates}}\\[0.5em]
\toprule
\multirow{3}{*}{$\beta^{*}_{3}$} & \multirow{3}{*}{$p^{*}$} &\multicolumn{2}{c}{\textbf{(a) Proposed}} & &\multicolumn{2}{c}{\textbf{(b) FMA}} & & \multicolumn{2}{c}{\textbf{(c) Oracle}}\\
\cline{3-4}\cline{6-7}\cline{9-10}\\[-0.7em]
& &\textbf{Estimate} & \textbf{\textit{Error}} & &\textbf{Estimate} & \textbf{\textit{Error}} & &\textbf{Estimate} & \textbf{\textit{Error}} \\
\midrule
0.001 & 0.452 & 0.457 & \textit{0.102} & & 0.515 & \textit{0.115} & & 0.418 & \textit{0.114}\\
0.005 & 0.451 & 0.457 & \textit{0.102} & & 0.515 & \textit{0.116} & & 0.418 & \textit{0.114}\\
0.01 & 0.45 & 0.46 & \textit{0.101} & & 0.518 & \textit{0.116} & & 0.419 & \textit{0.110}\\
0.05 & 0.439 & 0.46 & \textit{0.126} & & 0.529 & \textit{0.126} & & 0.428 & \textit{0.139}\\
0.1 & 0.427 & 0.44 & \textit{0.147} & & 0.534 & \textit{0.135} & & 0.398 & \textit{0.145}\\
0.5 & 0.329 & 0.386 & \textit{0.173} & & 0.547 & \textit{0.230} & & 0.357 & \textit{0.166}\\
\midrule
\multicolumn{10}{c}{\textbf{Case B: True model not among candidates}}\\[0.5em]
\toprule
\multirow{3}{*}{$\beta^{*}_{3}$} & \multirow{3}{*}{$p^{*}$} &\multicolumn{2}{c}{\textbf{(a) Proposed}} & &\multicolumn{2}{c}{\textbf{(b) FMA}} & & \multicolumn{2}{c}{\textbf{(c) Oracle}}\\
\cline{3-4}\cline{6-7}\cline{9-10}\\[-0.7em]
& &\textbf{Estimate} & \textbf{\textit{Error}} & &\textbf{Estimate} & \textbf{\textit{Error}} & &\textbf{Estimate} & \textbf{\textit{Error}} \\
\midrule
0.001 & 0.452 & 0.475 & \textit{0.093} & & 0.543 & \textit{0.123} & & 0.418 & \textit{0.114}\\
0.005 & 0.451 & 0.475 & \textit{0.093} & & 0.543 & \textit{0.124} & & 0.418 & \textit{0.114}\\
0.01 & 0.45 & 0.478 & \textit{0.092} & & 0.546 & \textit{0.125} & & 0.419 & \textit{0.110}\\
0.05 & 0.439 & 0.473 & \textit{0.118} & & 0.554 & \textit{0.134} & & 0.428 & \textit{0.139}\\
0.1 & 0.427 & 0.456 & \textit{0.135} & & 0.561 & \textit{0.152} & & 0.398 & \textit{0.145}\\
0.5 & 0.329 & 0.478 & \textit{0.187} & & 0.56 & \textit{0.239} & & 0.357 & \textit{0.166}\\
\hline
\bottomrule
\end{tabular}}
\caption{(\textbf{Logistic Regression}) Estimation of $p^{*}$ for the (a) model averaging estimator with the proposed weights, (b) Hjort's (\cite{Hjort2003}) model averaging estimator with AIC-based weights, and (c) the oracle estimator. Here, in the top table, the candidate models include the true set of parameters (\textbf{Case A}), while in the bottom table the true set of parameters is not included (\textbf{Case B}), as described in (\ref{eq:candimods}).} \label{tab:err_oracInNotIn} \end{table} We now describe the efficacy of the proposed methodology for the logistic regression setup and compare its performance with Hjort's FMA method (\cite{Hjort2003}). The logit model is given by, \begin{align}
p_{i}= P(y_{i}=1|{\rm \textbf{X}})=\dfrac{\exp(\mathbold{x}^{\top}_{i} \mathbold{\beta})}{1+\exp(\mathbold{x}^{\top}_{i} \mathbold{\beta})}, \quad \forall i=1,\cdots, n, \end{align} where ${\rm \textbf{X}} = [\mathbold{x}_{1}, \cdots,\mathbold{x}_{i}, \cdots, \mathbold{x}_{n}]^{\mathsf{T}} \in {\mathbb{R}}^{n\times p}$ with $\mathbold{x}_{i}\in {\mathbb{R}}^{p}$ and $\mathbold{\beta}\in {\mathbb{R}}^{p}$. We take $n=100$ and $p = 4$, where the intercept is always included ($k=1$) and the rest of the parameters can be varied in forming candidate models ($m = 3$). As in the linear regression simulation setup, the elements of ${\rm \textbf{X}}$ are simulated independently from a $\mathsf{N}(0, 1)$ distribution. In this setup, the true value of the parameter $\mathbold{\beta}$ is set as $\mathbold{\beta}^{*} = (0.3, 0.1, 0.3, \beta_{3}^*)$, where we vary the value of $\beta^{*}_{3}$ (as before) in the set $\{0.001, 0.005, 0.01, 0.05, 0.1, 0.5\}$. For this logistic regression setup, our estimand of interest is as follows: \begin{align}\label{eq:estimand_logistic} p^{*} = \exp(\eta^{*})/(1+\exp(\eta^{*})) \text{ where } \eta^{*} = {\bx^{*}}^{\mathsf{T}}\mathbold{\beta}^{*} \text{ and } {\bx^{*}} \sim \mathsf{N}_{4}(\mathbold{0}, {\rm \textbf{I}}_{4}). \end{align}
As in the regression setup, we set ${\bx^{*}} = [1.0, -1.86, -1.019, -1.045]$. Note that the specifics of the model averaging estimator for the estimand in (\ref{eq:estimand_logistic}) have been described in detail in Section \ref{subsec:logistic}. Specifically, (\ref{eq:logis_Q}) describes the MSE function to be minimized for optimal weights. We compare our proposed method with Hjort's FMA method (\cite{Hjort2003}) and the oracle estimate. As in the linear regression setup, we consider two cases, namely \textbf{Case A} and \textbf{Case B}; see (\ref{eq:candimods}) for more details. The results for both \textbf{Case A} and \textbf{Case B} are summarized in Table \ref{tab:err_oracInNotIn}. We define the error metric as, \[
\text{Error} \ = \ \sqrt{({1}/{10})\sum_{k = 1}^{10}|\phat_{k}-p^{*}|^{2}}, \] where $\phat_{k}$ is the estimate corresponding to a specific method at the $k^{\rm th}$ simulation. As in the linear regression setup, for the logistic regression as well, we see that the proposed method performs better than Hjort's method using AIC-based weights in both cases across all $\beta^{*}_{3}$ values. For \textbf{Case A}, the performance of our proposed method matches that of the oracle and the differences are within the margin of error. For \textbf{Case B}, the performance of our proposed method tracks well with the oracle until the signal strength of $\beta^{*}_{3}$ is increased to 0.5, in which case the estimation error increases.
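A sketch of the corresponding data-generating step for the logistic setup is given below. As before, the code is only illustrative (the function name is chosen for this illustration only), and the model averaging weights themselves are not reproduced here.
\begin{verbatim}
import numpy as np

def simulate_logistic_setup(beta3, n=100, seed=0):
    rng = np.random.default_rng(seed)
    beta_star = np.array([0.3, 0.1, 0.3, beta3])
    # Intercept column plus three N(0,1) covariates.
    X = np.column_stack([np.ones(n), rng.standard_normal((n, 3))])
    eta = X @ beta_star
    p = 1.0 / (1.0 + np.exp(-eta))          # logit model for P(y_i = 1 | X)
    y = rng.binomial(1, p)
    # Estimand p* = exp(eta*)/(1 + exp(eta*)) at the fixed covariate x*.
    x_star = np.array([1.0, -1.86, -1.019, -1.045])
    p_star = 1.0 / (1.0 + np.exp(-(x_star @ beta_star)))
    return X, y, p_star
\end{verbatim}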
\subsection{Analysis of Prostate Cancer Data.}
The data for this example come from a study by \cite{Stamey1989}. They examined the relationship between the level of prostate-specific antigen and a number of clinical measures in men who were about to receive a radical prostatectomy. As a regression problem, the response variable is \textsf{lpsa}, the level of prostate-specific antigen, with values ranging from -0.43 to 5.58. The predictor variables (clinical measures) are log cancer volume (\textsf{lcavol}), log prostate weight (\textsf{lweight}), \textsf{age}, log of the amount of benign prostatic hyperplasia (\textsf{lbph}), seminal vesicle invasion (\textsf{svi}), log of capsular penetration (\textsf{lcp}), Gleason score (\textsf{gleason}), and percent of Gleason scores 4 or 5 (\textsf{pgg45}). Here \textsf{svi} is a binary variable, and \textsf{gleason} is an ordered categorical variable.
We considered a best-subset model selection approach using an all-subsets search. In this model selection approach, the estimated prediction error is obtained using a crude cross-validation method: the dataset is divided randomly into a training set of size 67 and a test set of size 30. The training set is used to select a model and then the test set is used to compute the prediction error, averaging over all 30 points. We repeat the process five times and average over the five prediction errors.
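The crude cross-validation scheme described above can be summarized by the following sketch. Here \texttt{fit\_and\_predict} is a placeholder for whichever procedure is being evaluated (best-subset selection, one of the model averaging estimators, or the full model); the sketch is not meant to reproduce our exact implementation.
\begin{verbatim}
import numpy as np

def crude_cv_error(X, y, fit_and_predict, n_train=67, n_test=30,
                   n_rep=5, seed=0):
    # Randomly split into a training set of size 67 and a test set of size 30,
    # average the squared prediction error over the 30 test points, and then
    # average over the 5 random splits.
    rng = np.random.default_rng(seed)
    errs = []
    for _ in range(n_rep):
        idx = rng.permutation(len(y))
        tr, te = idx[:n_train], idx[n_train:n_train + n_test]
        pred = fit_and_predict(X[tr], y[tr], X[te])
        errs.append(np.mean((pred - y[te]) ** 2))
    return float(np.mean(errs))
\end{verbatim}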
\begin{table*}[h]
\centering
\begin{tabular}{lc}
\toprule \textbf{Method Used} & \textbf{Test Error}\\ \midrule
Model Selection (Best Subset Regression) & 0.487 \\
Model Averaging (Proposed Weights) & 0.453 \\
Model Averaging (AIC Weights) & 0.987 \\
Full Model & 1.272 \\
\bottomrule
\end{tabular} \caption{Prediction Error for different methods for prostate cancer data.}
\label{tab:PredictionErrorForDifferentMethods} \end{table*}
We also considered the model averaging method using two different sets of weights: the proposed weights and also AIC-based weights.
With the proposed weights, our approach assigned the most weight to the model with features \textsf{lcavol}, \textsf{lweight}, \textsf{svi}, \textsf{pgg45}, \textsf{lcp}, \textsf{gleason} and \textsf{lbph}, and to the model with \textsf{lcavol}, \textsf{lweight}, \textsf{svi}, \textsf{pgg45}, \textsf{lcp}, \textsf{gleason}, \textsf{lbph} and \textsf{age}. The procedure with AIC-based weights gives more weight to a smaller model containing \textsf{lcavol} and \textsf{lweight}. We used the same crude cross-validation method as above, with a training set of size 67 and a test set of size 30. The training set is used to obtain the model averaging estimates and then the test set is used to compute the prediction error, averaging over all 30 points. We repeat the process five times and average over the five prediction errors.
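For reference, AIC-based model averaging weights of the kind referred to above are typically computed with the familiar exponential formula $w_{k}\propto \exp(-{\rm AIC}_{k}/2)$. The sketch below follows this common convention and is not taken from our implementation.
\begin{verbatim}
import numpy as np

def aic_weights(aic_values):
    # Common convention: w_k proportional to exp(-AIC_k / 2),
    # shifted by the minimum AIC for numerical stability.
    aic = np.asarray(aic_values, dtype=float)
    delta = aic - aic.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Example: three candidate models with AIC 210.3, 208.1 and 215.0 receive
# weights summing to one, with most of the weight on the second model.
weights = aic_weights([210.3, 208.1, 215.0])
\end{verbatim}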
Finally, as an illustration, we also plotted in Figure~\ref{fig:pic3} a set of $90\%$ prediction intervals of antigen levels for one test dataset in one of our simulation runs. The x-axis is the index of the 30 observations in the test dataset. In order to get the prediction intervals, we kept the test dataset fixed while, in 50 different replications, we selected a random subset of 50 observations from the training data (of original size 67), applied the model averaging method to analyze the training data of size $50$, and used the result to predict the \textsf{lpsa} values for the test dataset. In order to construct the prediction interval, we added to each predicted mean a Gaussian noise with mean 0 and standard error equal to the estimated standard error from the full model, denoted as $\hsigma_{full}$; see (\ref{eq:1tracelin}). In this case the estimate was $\hsigma_{full} = 0.599$. The upper and lower limits of the prediction band were calculated based on quantiles. As is clear from the plots, most of the observations fall within the 90\% prediction interval.
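The construction of the prediction band can be summarized by the following sketch, where \texttt{fit\_and\_predict} again stands for the model averaging procedure and $0.599$ is the full-model standard error reported above; the code is illustrative only and does not reproduce our exact implementation.
\begin{verbatim}
import numpy as np

def prediction_band(X_train, y_train, X_test, fit_and_predict,
                    sigma_full=0.599, n_rep=50, subset_size=50,
                    level=0.90, seed=0):
    # Re-fit on random subsets of the training data, add Gaussian noise with
    # the full-model residual standard error, and take empirical quantiles.
    rng = np.random.default_rng(seed)
    draws = []
    for _ in range(n_rep):
        idx = rng.choice(len(y_train), size=subset_size, replace=False)
        mean_pred = fit_and_predict(X_train[idx], y_train[idx], X_test)
        draws.append(mean_pred + rng.normal(0.0, sigma_full,
                                            size=len(mean_pred)))
    draws = np.asarray(draws)                  # shape (n_rep, n_test)
    alpha = (1.0 - level) / 2.0
    lower = np.quantile(draws, alpha, axis=0)
    upper = np.quantile(draws, 1.0 - alpha, axis=0)
    return lower, upper
\end{verbatim}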
\begin{figure}
\caption{Actual and predicted level of \textsf{lpsa} (level of prostate-specific antigen) based on the prostate cancer data from \cite{Stamey1989}. The x-axis shows the indices of the 30 test observations and the y-axis shows the \textsf{lpsa} values. The red points indicate actual (observed) values while the blue points indicate predicted values based on the average of 50 replications. The gray band denotes the 90\% prediction interval.}
\label{fig:pic3}
\end{figure}
\section{Discussion}
In this paper, we propose a more general framework where the choice of the true model is not fixed. The truth can be any one of the candidate models or a mixture of them. Models that have large biases are not excluded from our analysis. We study the behavior of the frequentist model averaging estimator with an optimal weighting scheme that combines all the individual candidate models. As an illustration, we derive the model averaging estimator in the linear and logistic regression frameworks.
We also implement the weighting scheme proposed by \citet{Liang2011} and compare its performance with that of AIC-based weights. The simulation results indicate that under certain model specifications, the proposed estimator works better than the estimator of \cite{Hjort2003}.
There are many ways a regression model can be misspecified. Misspecification is most often interpreted as the omission of relevant variables or an incorrectly specified functional form of the model. In these instances, the normality assumption on the random errors is violated. This results in the estimates being biased, as discussed in \cite{Giles1992}. Such estimates can harm the decision making process, so one should be very attentive while fitting and choosing models in the presence of misspecification. Many methods have been used to measure and limit misspecification in model fitting. The Ramsey regression equation specification error test, discussed in \cite{jerry1977}, provides a useful diagnostic in the linear regression setup.
In model averaging, if the true model is not included in the set of candidate models, we end up using an estimate that is biased. If all the models are misspecified, the weights derived by AIC or by using a consistent or unbiased estimator of the mean squared error are not optimal and should be used with care. For the situation where the true model is not included in the analysis, so that all the candidate models are wrong, there have been developments in model selection that take care of the bias resulting from selection; see \cite{Hurvich1989, Hurvich1991}. Penalized versions of AIC and BIC have been derived that perform better than other selection criteria. One can follow a similar path and derive the model averaging weights based on slightly modified criteria.
Another problem with model averaging is that the number of optional parameters in the analysis could be very high. For example, if there are $30$ parameters, we could end up using as many as $2^{30}$ candidate models. This may be time-consuming and not ideal in certain fields of study. However, as suggested in this paper, a statistician can choose to use all or only a few candidate models as per the scope of the study. This could be explored in further developments.
\renewcommand{\theequation}{A.\arabic{equation}} \renewcommand{\thesection}{\it \Alph{section}} \renewcommand{\thesubsection}{\it A.\arabic{subsection}} \setcounter{equation}{0} \setcounter{section}{0} \setcounter{subsection}{0}
\section{Appendices}\label{sec:7} \subsection*{A.1. Regularity Conditions and Assumptions}
In this section we state the regularity conditions that were used throughout the paper. We assume that the density function satisfies the following conditions. \begin{enumerate}[(a)] \item $\Theta$ is an open subset of ${\mathbb{R}}^{p}$, and the support of the density $f(y,\mathbold{\beta})$ is independent of $\mathbold{\beta}$. \item The true parameter value is an interior point of the parameter space. \item $\ell_{k;i}'$ and $\ell_{k;i}''(\bbeta_{k}^{*})$ exist, and $\ell_{k;i}'$ is a continuous function of $\mathbold{\beta}$. \item $\mathbb{E}[\ell_{k;i}']=0$ and $\mathbb{E}[\ell_{k;i}' \ell_{k;i}'^{\top}] = -\mathbb{E}[\ell_{k;i}''(\bbeta_{k}^{*})]$. These are standard conditions for the asymptotic normality of maximum likelihood estimators. \item $ \lim_{n\rightarrow \infty} \dfrac{1}{n} \ell_{k}''(\bbeta_{k}^{*}) = \mathbf{H}_{k}$ and $\mathbf{H}_{k}$ is positive definite.
\item For some $\epsilon > 0 $, $\sum_i \mathbb{E}|\lambda'\ell_{k;i}'(\bbeta_{\mbox{\rm\tiny true}})|^{2+\epsilon}/n^{(2+\epsilon)/2} \rightarrow 0$ for all $\lambda \in {\mathbb{R}}^{p}$.
\item There exist $\epsilon > 0 $ and random variables $B_i(y_i)$ such that $\sup \left\{ |\ell_{k;i}''(t)|:\ \| t-\bbeta_{\mbox{\rm\tiny true}} \| \leq \epsilon \right\} \leq B_i(y_i)$ and $\mathbb{E}|B_i(y_i)|^{1+\delta} \leq K$, where $\delta$ and $K$ are positive constants. \end{enumerate} We also assume that the variance matrix of the score statistic is finite and positive definite.
Consider a functional $\mu: {\mathbb{R}}^{p + q} \rightarrow {\mathbb{R}}$. Define $\mu^{(\mbox{\rm\tiny drop})}: {\mathbb{R}}^{p+m} \rightarrow {\mathbb{R}}$ as the same function as $\mu$ with only the $(q-m)$ corresponding arguments dropped. For any $\mathbold{b} = (b_{1}, \cdots, b_{p}, b_{p+1}, \cdots, b_{p +m})$ with $1\leq m \leq q$ define the \emph{$\mathbold{c}$-augmented} version of $\mathbold{b}$ as $\widetilde{\mathbold{b}} = \{\mathbold{b}, \mathbold{c}\}\in {\mathbb{R}}^{p+q}$ with some fixed $\mathbold{c}\in \bar{{\mathbb{R}}}^{q-m}$ inserted at the place of the missing components. Let the indices of the missing components be $\{p +i_{1}, \cdots, p +i_{q-m}\}$. We define $\widetilde{\mu}:{\mathbb{R}}^{p+m} \rightarrow {\mathbb{R}}$ as the restriction of $\mu : {\mathbb{R}}^{p+q} \rightarrow {\mathbb{R}}$ subject to $b_{p+i_{1}} =c_{1}, \cdots, b_{p+i_{q-m}} = c_{q-m}$. Clearly then $\mu(\widetilde{\mathbold{b}}) = \widetilde{\mu}(\mathbold{b})$.
Given a function $\mu$, the fixed value $\mathbold{c}$ is chosen in such a way that $\mu(\widetilde{\mathbold{b}}) = \mu^{(\mbox{\rm\tiny drop})}(\mathbold{b}).$ We assume that $\mathbold{\mu}: {\mathbb{R}}^{p+q}\rightarrow {\mathbb{R}}^{\ell}$ is a function that is $1^{{\rm st}}$ order partially differentiable at $\bbeta_{\mbox{\rm\tiny true}}$. Note that by the definition of $\mathbold{c}$-augmentation, $\mu(\widetilde{\bbeta}_{k}) = \mu^{(\mbox{\rm\tiny drop})}(\widehat{\bbeta}_{k})$. For ease of reading, in the subsequent proof, we omit the superscript `$(\mbox{\rm\tiny drop})$'.
\subsection*{A.2. Proof of Theorem \ref{th:main2}}
From usual regularity conditions on the log-likelihood, it can be shown that $\sqrt{n}\left(\widehat{\bbeta}_k-\bbeta_{k}^{*}\right) = - {\rm \textbf{H}}^{-1}_{k}\left\{\dfrac{1}{\sqrt{n}}\sum^{n}_{i=1}\ell_{k;i}'(\bbeta_{k}^{*})\right\} + o_{\mathbb{P}}(1)$. For more detail and exact conditions see \cite[Chapter~5]{van00}.
Now by an application of the Taylor expansion, $\mu(\widehat{\bbeta}_{k}) - \mu(\bbeta_{k}^{*}) = \nabla\mu(\bbeta_{k}^{*})^{\top}(\widehat{\bbeta}_{k}-\bbeta_{k}^{*}) + o_{\mathbb{P}}(\norm{\widehat{\bbeta}_{k}-\bbeta_{k}^{*}}{})$, so that \[ \sqrt{n}(\mu(\widehat{\bbeta}_{k}) - \mu(\bbeta_{k}^{*})) = -\nabla\mu(\bbeta_{k}^{*})^{\top}\left[{\rm \textbf{H}}^{-1}_{k} \left\{\frac{1}{\sqrt{n}}\sum^{n}_{i=1}\ell_{k;i}'(\bbeta_{k}^{*})\right\} + o_{\mathbb{P}}(1)\right] + o_{\mathbb{P}}(\sqrt{n}\norm{\widehat{\bbeta}_{k}-\bbeta_{k}^{*}}{}). \] Thus it follows that for $0\leq w_{k}\leq 1$ with $\sum_{k\in {\cal M}}w_{k}=1$, \begin{align*} & \sqrt{n}\sum_{k \in {\cal M}} w_{k}\{\mu(\widehat{\bbeta}_{k}) - \mu(\bbeta_{\mbox{\rm\tiny true}})\} \\ & =\sqrt{n} \sum_{k \in {\cal M}} w_{k}\{\mu(\bbeta_{k}^{*}) - \mu(\bbeta_{\mbox{\rm\tiny true}})\}+ \sqrt{n}\sum_{k \in {\cal M}} w_{k}\{\mu(\widehat{\bbeta}_{k}) - \mu(\bbeta_{k}^{*})\}\\ & = \sqrt{n} \sum_{k \in {\cal M}} w_{k}\{\mu(\bbeta_{k}^{*}) - \mu(\bbeta_{\mbox{\rm\tiny true}})\} - \sum_{k \in {\cal M}} w_{k}\nabla\mu(\bbeta_{k}^{*})^{\top}{\rm \textbf{H}}^{-1}_{k} \left\{\frac{1}{\sqrt{n}}\sum^{n}_{i=1}\ell_{k;i}'(\bbeta_{k}^{*})\right\} \\ & \hspace{2cm} + o_{\mathbb{P}}\left(\sum_{k \in {\cal M}}\sqrt{n}\norm{\widehat{\bbeta}_{k}-\bbeta_{k}^{*}}{}\right)\\ & = \sqrt{n} \sum_{k \in {\cal M}} w_{k}\{\mu(\bbeta_{k}^{*}) - \mu(\bbeta_{\mbox{\rm\tiny true}})\}+ \dfrac{1}{\sqrt{n}} \sum^{n}_{i=1} \left\{-\sum_{k\in {\cal M}}w_{k}\nabla\mu(\bbeta_{k}^{*})^{\top}{\rm \textbf{H}}_{k}^{-1}\ell_{k;i}'(\bbeta_{k}^{*})\right\} \\ & \hspace{2cm}+ o_{\mathbb{P}}\left(\sum_{k \in {\cal M}}\sqrt{n}\norm{\widehat{\bbeta}_{k}-\bbeta_{k}^{*}}{}\right)\\ & = \sqrt{n} \sum_{k \in {\cal M}} w_{k}\{\mu(\bbeta_{k}^{*}) - \mu(\bbeta_{\mbox{\rm\tiny true}})\}+ \dfrac{1}{\sqrt{n}} \sum^{n}_{i=1} Z_{i} + o_{\mathbb{P}}\left(\sum_{k \in {\cal M}}\sqrt{n}\norm{\widehat{\bbeta}_{k}-\bbeta_{k}^{*}}{}\right), \end{align*}
where we have used the definition $Z_{i} = -\sum_{k\in {\cal M}}w_{k}\nabla\mu(\bbeta_{k}^{*})^{\top}{\rm \textbf{H}}_{k}^{-1}\ell_{k;i}'(\bbeta_{k}^{*})$. First note that $\sqrt{n}\norm{\widehat{\bbeta}_{k}-\bbeta_{k}^{*}}{} =O_{\mathbb{P}}(1)$ by the asymptotic normality of the maximum likelihood estimator, so that the remainder term above is $o_{\mathbb{P}}(1)$. Note that the $Z_{i}$'s are independent and $\mathbb{E} Z_{i} = 0$. Now fix $\varepsilon>0$. In order to prove the asymptotic normality of the quantity $(1/\sqrt{n})\sum_{i}Z_{i}$ we invoke the Lindeberg-Feller central limit theorem (see \cite{billing08}). This requires verification of the so-called Lindeberg condition, namely that $(1/n)\sum^{n}_{i=1}\mathbb{E} Z^{2}_{i}\bbI\left\{|Z_{i}|>\sqrt{n}\varepsilon\right\}\rightarrow 0$. Let us denote $Y_{ki}=\nabla\mu(\bbeta_{k}^{*})^{\top}{\rm \textbf{H}}_{k}^{-1}\ell_{k;i}'(\bbeta_{k}^{*})$. Now, \begin{align*}
\dfrac{1}{n}\sum^{n}_{i=1}\mathbb{E} Z^{2}_{i}\bbI\left\{|Z_{i}|>\sqrt{n}\varepsilon\right\} & = \dfrac{1}{n}\sum^{n}_{i=1}\mathbb{E}\underbrace{\left(\sum_{k\in {\cal M}}w_{k}Y_{ki}\right)^{2}}_{= A, \text{ say}}\ \underbrace{\bbI\left\{|\sum_{k\in {\cal M}}w_{k}Y_{ki}|>\sqrt{n}\varepsilon\right\}}_{=B, \text{ say}}\\
& \leq \dfrac{1}{n}\sum^{n}_{i=1}\mathbb{E} \left[\sum_{k \in {\cal M}}w_{k}Y_{ki}^{2}\ \bbI\left\{\max_{k \in {\cal M}} |Y_{ki}|>\sqrt{n}\varepsilon\right\}\right]\\
& \leq \dfrac{1}{n}\sum^{n}_{i=1} \mathbb{E} \left[\max_{k \in {\cal M}} |Y_{ki}|^{2}\ \bbI\left\{\max_{k \in {\cal M}} |Y_{ki}|>\sqrt{n}\varepsilon\right\}\right]. \end{align*}
Here the inequality in the second line is derived by first noting that if $0\le A\le C$ and $0\le B\le D$, then $AB\le CD$. Secondly, note that $A= (\sum_{k \in {\cal M}}w_{k}Y_{ki})^{2} \leq \sum_{k \in {\cal M}}w_{k}Y^{2}_{ki}$ by Jensen's inequality. Also, on the event $\{|\sum_{k\in {\cal M}}w_{k}Y_{ki}|>\sqrt{n}\varepsilon\}$ we have $\sqrt{n}\varepsilon < |\sum_{k \in {\cal M}}w_{k}Y_{ki}| \leq \max_{k \in {\cal M}}|Y_{ki}|\sum_{k\in{\cal M}}|w_{k}| = \max_{k \in {\cal M}}|Y_{ki}|$, so it follows that $\bbI\left\{|\sum_{k\in {\cal M}}w_{k}Y_{ki}|>\sqrt{n}\varepsilon\right\}\leq \bbI\left\{\max_{k \in {\cal M}} |Y_{ki}|>\sqrt{n}\varepsilon\right\}$. Now take $C=\sum_{k \in {\cal M}}w_{k}Y^{2}_{ki}$ and $D = \bbI\left\{\max_{k \in {\cal M}} |Y_{ki}|>\sqrt{n}\varepsilon\right\}$.
Now by condition (\textbf{A}1), the Lindeberg-Feller condition is satisfied for the $(1/\sqrt{n})Z_{i}$'s, whence it follows that $(1/\sqrt{n})\sum^{n}_{i=1}Z_{i}$ converges in distribution to ${\cal N}(0, \sigma_{w}^{2})$, where $\sigma^{2}_{w}$ is given by \[ \sigma^{2}_{w} = \lim_{n\rightarrow \infty} \dfrac{1}{n}\sum^{n}_{i =1} \mathbb{E} \left\{\sum_{k}w_{k}\nabla\mu(\bbeta_{k}^{*})^{\top}{\rm \textbf{H}}^{-1}_{k}\ell_{k;i}'(\bbeta_{k}^{*})\right\}^{2}. \] The theorem follows.
\subsection*{A.3. Proof of Corollary \ref{cor:match}}
As defined before, for the $k^{{\rm th}}$ candidate model, let
$\bbeta_{k}^{*} \in {\mathbb{R}}^{p+|M_{k}|}$ be the solution of the equation $\mathbb{E} S_{k}(\mathbold{\beta})=0$, where $S_{k}(\mathbold{\beta})$ is the score function for the $k^{{\rm th}}$ model. Let $\mathbold{\beta}_{0,k}= (\mathbold{\theta}_{0}, \pi_{k}\mathbold{\gamma}_{0})^{\top}\in {\mathbb{R}}^{p+|M_{k}|} $. Therefore, $\mathbb{E}\{\ell_{k}'(\bbeta_{k}^{*})\}=\mathbold{0}$. Then, by Taylor's theorem and appropriate regularity conditions on the density function, it follows that asymptotically, $\bbeta_{k}^{*} - \mathbold{\beta}_{0,k} \approx {\rm \textbf{J}}_{k}^{-1}\mathbb{E}\{\ell_{k}'(\mathbold{\beta}_{0})\}$. Now note that following \cite[Page~37]{Hjort2003}, \[ \mathbb{E}\{\ell_{k}'(\mathbold{\beta}_{0})\} = \begin{pmatrix} {\rm \textbf{J}}_{01}\mathbold{\delta}/\sqrt{n} + o(1/\sqrt{n}) \\ \pi_{k}{\rm \textbf{J}}_{11}\mathbold{\delta}/\sqrt{n} + o(1/\sqrt{n}) \end{pmatrix}, \] so that, \begin{align}\label{eq:imp1} \bbeta_{k}^{*} - \mathbold{\beta}_{0,k} \approx {\rm \textbf{J}}_{k}^{-1} \begin{pmatrix} {\rm \textbf{J}}_{01}\mathbold{\delta}/\sqrt{n} \\ \pi_{k}{\rm \textbf{J}}_{11}\mathbold{\delta}/\sqrt{n} \end{pmatrix}. \end{align} In order to prove the corollary, we first match the bias terms. Note that in Theorem \ref{th:main2}, the bias term is given by \[ \sqrt{n} \sum_{k \in {\cal M}} w_{k}\{\mu(\bbeta_{k}^{*}, \gamma_{0,k^{c}}) - \mu(\bbeta_{\mbox{\rm\tiny true}})\}. \] Thus, considering term by term, the bias of the $k^{\rm th}$ component is given by
\begin{align*} \sqrt{n}\{\mu(\bbeta_{k}^{*}, \mathbold{\gamma}_{0,k^{c}}) - \mu(\bbeta_{\mbox{\rm\tiny true}})\}& = \sqrt{n}\{\mu(\bbeta_{k}^{*}, \mathbold{\gamma}_{0,k^{c}}) - \mu(\mathbold{\beta}_{0})\}- \sqrt{n}\{\mu(\bbeta_{\mbox{\rm\tiny true}})- \mu(\mathbold{\beta}_{0})\}\\ & \approx \sqrt{n}(\bbeta_{k}^{*}-\mathbold{\beta}_{0,k})^{\top}\begin{pmatrix} \partial\mu(\mathbold{\beta}_{0})/\partial \mathbold{\theta} \\ \partial\mu(\mathbold{\beta}_{0})/\partial \mathbold{\gamma}_{k} \end{pmatrix} - \bigg(\dfrac{\partial\mu(\mathbold{\beta}_{0})}{\partial \mathbold{\gamma}}\bigg)^{\top}\mathbold{\delta}\\ & = \begin{pmatrix} \partial\mu(\mathbold{\beta}_{0})/\partial \mathbold{\theta} \\ \partial\mu(\mathbold{\beta}_{0})/\partial \mathbold{\gamma}_{k} \end{pmatrix}^{\top} {\rm \textbf{J}}_{k}^{-1} \begin{pmatrix} {\rm \textbf{J}}_{01}\mathbold{\delta} \\ \pi_{k}{\rm \textbf{J}}_{11}\mathbold{\delta} \end{pmatrix} - \bigg(\dfrac{\partial\mu(\mathbold{\beta}_{0})}{\partial \mathbold{\gamma}}\bigg)^{\top}\mathbold{\delta}, \end{align*} where the last term follows from (\ref{eq:imp1}). This matches the bias term in (\ref{eq:hjortasymp}). Looking at the variance term, note that from (\ref{eq:varmu}), the variance of the $k^{\rm th}$ term is given by,
\[ \mbox{\rm var}\left[\{\nabla\mu(\bbeta_{k}^{*}, \mathbold{\gamma}_{0,k^{c}})\}^{\top}{\rm \textbf{H}}^{-1}_{k}\Big(\sum^{n}_{i=1}\ell_{k;i}'(\bbeta_{k}^{*})/\sqrt{n}\Big)\right]. \] From (\ref{eq:imp1}), via Taylor's theorem it follows that $\nabla\mu(\bbeta_{k}^{*}, \mathbold{\gamma}_{0,k^{c}}) \approx \nabla\mu(\mathbold{\beta}_{0})$. Also note that from the standard theory of maximum likelihood estimation,
\begin{align*} {\rm \textbf{H}}^{-1}_{k}(\bbeta_{k}^{*})\Big(\sum^{n}_{i=1}\ell_{k;i}'(\bbeta_{k}^{*})/\sqrt{n}\Big) & \approx \sqrt{n} (\widehat{\bbeta}_{k}-\bbeta_{k}^{*})\\ & = \sqrt{n} (\widehat{\bbeta}_{k}-\mathbold{\beta}_{0,k}) - \sqrt{n} (\bbeta_{k}^{*} - \mathbold{\beta}_{0,k})\\ & = {\rm \textbf{J}}_{k}^{-1} \begin{pmatrix} \sqrt{n}\Ubar_{n}\\ \sqrt{n}\Vbar_{n,k} \end{pmatrix} - {\rm \textbf{J}}_{k}^{-1} \begin{pmatrix} {\rm \textbf{J}}_{01}\mathbold{\delta} \\ \pi_{k}{\rm \textbf{J}}_{11}\mathbold{\delta} \end{pmatrix}\\ & = {\rm \textbf{J}}_{k}^{-1}\begin{pmatrix} \sqrt{n}\{\Ubar_{n}- \mathbb{E} U_{k}(Y_{1})\}\\ \sqrt{n}\{\Vbar_{n,k} - \mathbb{E} V_{k}(Y_{1})\} \end{pmatrix}. \end{align*} Here the last equality follows from Lemma 3.1 in \cite{Hjort2003}. Hence it follows that asymptotically both the bias and variance terms are equal.
\end{document}
Figure 1: Shape collections typically come with inconsistent orientations (a). PCA-based alignment (b), or aligning to an arbitrarily chosen base model (c), is prone to error. The problem with pairwise alignments is attributed to several minima in alignment distances (Epair), arising due to near-symmetries of shapes. We introduce an autocorrelation-guided algorithm to efficiently sample the minima (red boxes) and jointly co-align the input models (d).
Co-aligning a collection of shapes to a consistent pose is a common problem in shape analysis with applications in shape matching, retrieval, and visualization. We observe that resolving among some orientations is easier than others, for example, a common mistake for bicycles is to align front-to-back, while even the simplest algorithm would not erroneously pick orthogonal alignment. The key idea of our work is to analyze rotational autocorrelations of shapes to facilitate shape co-alignment. In particular, we use such an autocorrelation measure of individual shapes to decide which shape pairs might have well-matching orientations; and, if so, which configurations are likely to produce better alignments. This significantly prunes the number of alignments to be examined, and leads to an efficient, scalable algorithm that performs comparably to state-of-the-art techniques on benchmark datasets, but requires significantly fewer computations, resulting in 2-16$\times$ speed improvement in our tests.
Figure 5: Starting from a set of shapes, we normalize and compute their autocorrelation descriptors to cluster the shapes. We then align the shapes first within and then across the clusters using a graph-based discrete formulation wherein we intelligently sample candidate alignments for each shape guided by their autocorrelation descriptors.
Figure 12: Randomly selected shapes from all our datasets, indicating their pose before (odd rows - in gray) and after (even rows - in green) alignment.
We thank the reviewers for their comments and suggestions for improving the paper. This work was supported in part by ERC Starting Grant SmartGeometry (StG-2013-335373) and gifts from Adobe. Melinos Averkiou is grateful to the Rabin Ezra Scholarship Trust for the award of a bursary.
\begin{document}
\keywords{elastoplasticity, energetic solution, gradient polyconvexity, rate-independent model}
\title[Elastoplasticity of gradient-polyconvex materials]{Elastoplasticity of gradient-polyconvex materials}
\author{Martin Kru\v z\'ik} \address[Martin Kru\v z\'ik]{Czech Academy of Sciences, Institute of Information Theory and Automation, Pod vod\'arenskou ve\v z\'i 4, 182 08 Prague, Czechia and Faculty of Civil Engineering, Czech Technical University, Th\'{a}kurova 7, 166 29 Prague, Czechia} \email{[email protected]} \urladdr{http://staff.utia.cas.cz/kruzik}
\author{Ji\v{r}\'{\i} Zeman} \address[Ji\v{r}\'{\i} Zeman]{Institut f\"{u}r Mathematik, Universit\"{a}t Augsburg, D-86135 Augsburg, Germany } \email{[email protected]} \urladdr{https://www.uni-augsburg.de/de/fakultaet/mntf/math/prof/ana/arbeitsgruppe/jiri-zeman}
\begin{abstract} We propose a model for rate-independent evolution in elastoplastic materials under external loading, which allows large strains. In the setting of strain-gradient plasticity with multiplicative decomposition of the deformation gradient, we prove the existence of the so-called energetic solution. The stored energy density function is assumed to depend on gradients of minors of the deformation gradient which makes our results applicable to shape-memory materials, for instance. \end{abstract}
\maketitle
\section{Introduction and notation} Elastoplasticity at large strains is an area of ongoing research, bringing together contributions from modelling, analysis and numerical simulations. For the mathematical analysis of elastoplastic models, it is often convenient to use powerful tools from the calculus of variations, which are now able to treat quasistatic evolutionary problems as well (see e.g. \cite{ortiz-repetto}, \cite{mielke1} as pioneering works). In principle, the existence of solutions could be ensured by assuming some generalized convexity of the strain energy, but general material behavior may contradict this assumption. For example, this is manifested in shape-memory alloys (SMA) \cite{wlsc}, magnetostrictive \cite{desimone} and ferroelectric materials \cite{shu}, even if convexity is understood in a generalized sense, such as \textit{polyconvexity} (first defined in \cite{ball77}).
As a remedy, one can then resort to higher-gradient regularizations, where the stored energy density $W$ also depends e.g. on the second gradient of the deformation. From a mathematical point of view, this adds compactness to the model, which is instrumental in proving the existence of solutions by the direct method \cite{ball81}. Materials with such constitutive equations are referred to as non-simple and were introduced by Toupin \cite{toupin, toupin2}. Since then, the concept has been elaborated by many authors so that its thermodynamical side is also better understood \cite{greenRivlin, capriz, podio, hooke2, forest, mierouNum, forest2}.
\textit{Gradient-polyconvex} (GPC) materials form a special class of non-simple solids and appeared in \cite{bbmkas} for the first time. Their stored energy density is not a function of the full second gradient $\nabla^2 y$ of the deformation $y\colon\Omega\to\mathbb{R}^3$, which maps the reference configuration $\Omega$ to $\mathbb{R}^3$, but it only depends, in a convex way, on the weak gradients of ${\rm cof}\nabla y$ (and on that of ${\rm det}\nabla y$, if desirable). (See Section \ref{sec:main} for the definition of a cofactor matrix.) To interpret the condition physically, note that since ${\rm det}\nabla y$ measures the local change in volume between the reference and current configuration of the material and ${\rm cof}\nabla y$ describes the transformation of area \cite[p.~78]{gurtinBook}, a stored energy $W$ depending on their gradients offers a control of how abruptly these changes vary in space. Gradient polyconvexity has since been applied to the evolution of SMA \cite{mkppas}, and a numerical implementation of GPC material models is also available \cite{mhmk}. GPC allows us to consider stored energy densities without assuming any notion of convexity in the deformation gradient variable. This makes it suitable for the modelling of shape-memory alloys, for instance, because the resulting functional is lower semicontinuous in the underlying weak topology. An alternative approach to energy functionals that are not lower semicontinuous is relaxation \cite{dacorogna}; however, the available results prevent us from considering energies tending to infinity under extreme compression. In this article, we study an elastoplastic model using gradient polyconvexity.
The idea of strain-gradient plasticity is similar to that of non-simple materials, as it also uses higher-order terms to prevent physical quantities from unrealistic fine-scale oscillations. Incorporating gradients of plastic variables in the constitutive equations is common in the engineering literature \cite[p.~250]{RIS}, and we refer the interested reader to \cite{dk, gurtin, Gurtin-Anand, mktr-book, zpbmj, muaif, tsa-aif} or \cite{ebobisse} and the references therein. Gradient terms account for non-local interactions of dislocations and we include them as well, as they offer a suitable regularization of our model.
For our problem, we formulate the so-called \textit{energetic solution} (let us name \cite{AMFTVL} as an early reference; other related sources are cited in \cite{RIS}). One advantage of the energetic formulation is that it avoids derivatives of constitutive equations and time derivatives of the solution itself \cite{mielke1}. The variational nature of this solution concept also combines well with homogenization and relaxation. Two conditions lie at the core of the energetic formulation: a \textit{stability inequality}, which couples minimization of the elastic energy with a principle of maximum dissipation, and an \textit{energy balance}. The two requirements together imply that a usual \textit{plastic flow rule} is satisfied for sufficiently smooth solutions.
The plan of the paper is as follows: in Section 2 we review some basic facts from the modelling of \textit{materials with internal variables}, loosely following \cite{fm} and \cite{han}. For the sake of completeness, we also motivate the definition of an energetic solution, although this has been done more thoroughly in previous works of Mielke {\it et al.} The main part of this paper is Section 3, where we study the rate-independent behavior of elastoplastic GPC materials under external loads and prove the existence result. The section is concluded with an example from crystal plasticity, which illustrates the usability of our findings.
Our approach draws inspiration from \cite{mami2}, where the existence of energetic solutions in large-strain elastoplasticity is proved in the presence of plastic strain gradients and a polyconvex $W$. However, as our stored energy is gradient polyconvex, our findings apply to the setting of multiwell energies, as encountered e.g. in shape-memory alloys, cf. \cite{bhatta}, \cite{mierou} or \cite{krzi} for instance. We remark that energetic formulations in elastoplasticity have also been propounded: in the linear framework \cite{gl}, without strain gradients \cite{cckham}, using a finite quasiconvex energy \cite{jkmk}, involving a plastic Cauchy-Green tensor \cite{grandi1, grandi2}, for numerical computations \cite{mierouNum}, and elsewhere. The paper \cite{critplast} discusses different assumptions in quasistatic large-strain elastoplastic evolutions. Lastly, in \cite{mkdmus}, rate-independent dislocation-free plasticity is treated.
In what follows, $L^\beta(\Omega;\mathbb{R}^n)$, $1\le\beta<+\infty$ denotes the usual Lebesgue space of mappings $\Omega\to\mathbb{R}^n$ whose modulus is integrable with the power $\beta$ and $L^\infty(\Omega;\mathbb{R}^n)$ is the space of measurable and essentially bounded mappings $\Omega\to\mathbb{R}^n$.
Further, $W^{1,\beta}(\Omega;\mathbb{R}^n)$ standardly represents the space of mappings which live in $L^\beta(\Omega;\mathbb{R}^n)$ and their gradients belong to $L^\beta(\Omega;\mathbb{R}^{n\times n})$. Finally, $W^{1,\beta}_0(\Omega;\mathbb{R}^n)$ is a subspace of $W^{1,\beta}(\Omega;\mathbb{R}^n)$ of maps with a zero trace on $\partial\Omega$.
The weak convergence in $L^\beta(\Omega;\mathbb{R}^n)$ is defined as follows: $y_k\to y$ weakly in $L^\beta(\Omega;\mathbb{R}^n)$ (weakly star for $\beta=+\infty$) if $\int_\Omega y_k(x)\cdot\varphi(x)\,\mathrm{d} x\to\int_\Omega y(x)\cdot \varphi(x)\,\mathrm{d} x$ for all $\varphi\in L^{\beta'}(\Omega;\mathbb{R}^n)$ where $\beta'=\beta/(\beta-1)$ if $1<\beta<+\infty$, $\beta'=1$
if $\beta=+\infty$ and $\beta'=+\infty$ for $\beta=1$. Weak convergence of mappings and their gradients in $L^\beta$ then defines the weak convergence in $W^{1,\beta}(\Omega;\mathbb{R}^n)$. We also write \textit{w}-$\lim_{k\to+\infty}y_k=y$ or $y_k{\rightharpoonup} y$ to denote weak convergence. Finally, $C(\Omega)$ or $C(\mathbb{R}^{n\times n})$ stand for function spaces of functions continuous on $\Omega$ or $\mathbb{R}^{n\times n}$, respectively.
If $f\colon\mathbb{R}^n\to\mathbb{R}$ is convex but possibly nonsmooth we define its subdifferential at a point $x_0\in\mathbb{R}^n$ as the set of all $v\in\mathbb{R}^n$ such that $f(x)\ge f(x_0)+v\cdot(x-x_0)$ for all $x\in\mathbb{R}^n$. The subdifferential of $f$ will be denoted $\partial^{\rm sub}f$ and its elements will be called subgradients of $f$ at $x_0$.
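As a standard one-dimensional example of this notion (recorded here only for the reader's convenience), take $n=1$ and $f(x)=|x|$. Directly from the definition one obtains $\partial^{\rm sub}f(x_0)=\{1\}$ for $x_0>0$, $\partial^{\rm sub}f(x_0)=\{-1\}$ for $x_0<0$, and $\partial^{\rm sub}f(0)=[-1,1]$, so the subdifferential is single-valued wherever $f$ is differentiable and becomes set-valued at the kink.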
\section{Motivation: modelling inelastic processes with internal variables}
Consider $\Omega\subset\mathbb{R}^n$, a bounded Lipschitz domain representing the so-called reference configuration of a solid body, and a mapping $y\colon\Omega\to\mathbb{R}^n$, the deformation which the body is subjected to. We explain the idea of elastoplasticity and the concept of energetic solutions following freely the exposition in \cite{fm}, restricting ourselves to simple materials.
According to Han and Reddy \cite[p.~34]{han}, plastic deformation `is most conveniently described in the framework of materials with internal variables'. Those material models are not only governed by external (controllable) variables, such as temperature or strain, but also incorporate a vector $z\in Z\subset\mathbb{R}^m$ of internal variables, which describe e.g. an ongoing chemical reaction, elastoplastic behavior or material damage. (Details can be found in \cite{HNguyen} or in more recent works on the subject listed in \cite[p.~39]{han}.) The \textit{hyperelastic} stored energy density $\mathcal{W}$ then has the form $\mathcal{W}=\mathcal{W}(F,z)$, if we consider a {\it simple material}, i.e. a material with constitutive equations involving only the first gradient $F=\nabla y$ of the deformation $y$.
The usual thermodynamic conception is that differentiating $\mathcal{W}$ with respect to $F$ gives the mechanical stress, whereas the derivative $-\partial_z \mathcal{W}$ gives another stress-like variable, the so-called thermodynamic force \begin{eqnarray} Q:=-\frac{\partial}{\partial z}\mathcal{W}(F,z)\end{eqnarray} associated with the internal variable $z$. We can imagine that $Q$ tries to restructure the material irreversibly, which would lead to changing the value of $z$. The development of convex analysis \cite{moreau} allowed quite a general formulation of an evolution rule for the internal variable $z$. Assuming the existence of a nonnegative convex potential of dissipative forces $\delta=\delta(\dot z)$, where $\dot z$ denotes the time derivative of $z$, we write the \textit{flow law} as \begin{eqnarray}\label{fr} Q(t)\in{\partial^{\rm sub}}\delta(\dot z(t))\end{eqnarray} everywhere in $\Omega$. A common simplifying assumption is \textit{rate-independent} behavior. It is suitable for some particular materials or for the modelling of processes with low rates of external loading. In simple terms, rate-independence means that rescaling the loading in time only results in a corresponding time-rescaling of the deformation, and no additional viscous, inertial or thermal effects arise. Rate-independence translates into positive one-homogeneity of $\delta$, i.e. $\delta(\alpha\dot z)=\alpha\delta(\dot z)$ for all $\alpha>0$.
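As a simple illustration of these notions (a standard example, included only for orientation and not tied to any particular material treated later), take $m=1$ and $\delta(\dot z)=\sigma_{\rm y}|\dot z|$ with a threshold $\sigma_{\rm y}>0$. This potential is convex, nonnegative and positively one-homogeneous; its subdifferential at zero is the interval $\partial^{\rm sub}\delta(0)=[-\sigma_{\rm y},\sigma_{\rm y}]$, which plays the role of the elastic domain, and the flow law (\ref{fr}) then requires $|Q(t)|\le\sigma_{\rm y}$ whenever $\dot z(t)=0$, while $Q(t)=\sigma_{\rm y}\,{\rm sign}\,\dot z(t)$ whenever $\dot z(t)\neq 0$.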
\begin{remark} Since the subdifferential is \textit{monotone} (see \cite{rockafellar}), by (\ref{fr}) we have \begin{eqnarray} (Q(t)-\theta)\cdot(\dot z(t)-\xi)\ge 0 \end{eqnarray} for all $\xi$ and all $\theta\in{\partial^{\rm sub}}\delta(\xi)$. Choosing $\xi=0$ and observing that the one-homogeneity of $\delta$ implies $\delta(\dot z)=\omega\cdot\dot z$ for all $\omega\in{\partial^{\rm sub}}\delta(\dot z)$, we get \begin{eqnarray}\label{mdp} \delta(\dot z(t))=Q(t)\cdot \dot z(t)\ge \theta\cdot\dot z(t) \end{eqnarray} for all $\theta\in{\partial^{\rm sub}}\delta(0)$. Hence we have derived the so-called {\it maximum dissipation principle} (see e.g.~\cite{hill} or \cite{simo, simo2}), which states that the plastic dissipation that takes place in reality is not less than any possible dissipation due to thermodynamic forces ``available'' in the so-called {\it elastic domain} ${\partial^{\rm sub}}\delta(0)$. \end{remark}
Hereafter, $\nu$ is the outer unit normal to $\partial\Omega$, and $\Gamma_0,\Gamma_1\subset\partial\Omega$ are disjoint. Let $f(t)\colon\Omega\to\mathbb{R}^n$ be the (volume) density of external \textit{body} forces and $g(t)\colon\Gamma_1\subset\partial\Omega\to\mathbb{R}^n$ be the (surface) density of surface forces. The conservation of momentum yields the \textit{equilibrium equations}
\begin{eqnarray}\label{rovnovaha} -{\rm div}\left(\frac{\partial}{\partial F}\mathcal{W}(\nabla y(t),z(t))\right) = f(t) \mbox{ in $\Omega$}, \end{eqnarray}
\begin{eqnarray}\label{Bcond} y(t,x)=y_0(x) \mbox{ on $\Gamma_0$},\end{eqnarray}
\begin{eqnarray}\label{neumann}\frac{\partial}{\partial F}\mathcal{W}(\nabla y(t),z(t)) \nu(x) =g(t,x) \mbox{ on $\Gamma_1$}.\end{eqnarray}
If $z_0\in Z$ is an initial condition for the internal variable, the system of equations (\ref{rovnovaha})-(\ref{neumann}) together with (\ref{fr1}):
\begin{eqnarray}\label{fr1}-\frac{\partial}{\partial z }\mathcal{W}(\nabla y(t),z(t))\in\partial\delta(\dot z(t)),\ z(0)=z_0,\end{eqnarray} governs the mechanical behavior characterized by the unknowns $y(t)$, $z(t)$.
Unfortunately, the system (\ref{rovnovaha})-(\ref{neumann}) is ill-posed in many situations. See the works of Suquet \cite{suquet} and Temam \cite{temam} for an early analysis of this problem.
To get existence results, it seems necessary to include the gradient of the internal variable, i.e. to use the strain energy density
$$\tilde{\mathcal{W}}(\nabla y, z,\nabla z):=\mathcal{W}(\nabla y, z)+\epsilon|\nabla z|^\alpha$$ for $\alpha\ge 1$ and $\epsilon>0$. Let us briefly recall how an energy balance, which appears in the \textit{energetic} formulation of this regularized problem, is obtained under sufficient smoothness and integrability assumptions on all the mappings involved. For details, see \cite{fm,mielke1}.
The functional
\begin{eqnarray} \mathcal{I}(t,y(t),z(t)):=\int_\Omega \mathcal{W}(\nabla y(t),z(t))\,\mathrm{d} x+\epsilon\int_\Omega |\nabla z(t)|^\alpha\,\mathrm{d} x-L(t,y(t)),\end{eqnarray} expresses the potential energy in our system, where the work done by external forces is \begin{eqnarray}\label{loading1} L(t,y(t)):=\int_\Omega f(t)\cdot y(t)\,\mathrm{d} x +\int_{\Gamma_1} g(t)\cdot y(t)\,\mathrm{d} S.\ \end{eqnarray}
We also introduce the total dissipation along $z$, $$ {\rm Diss}(z;[0,t]):=\int_0^t\int_\Omega\delta(\dot z(s))\,\mathrm{d} x\mathrm{d} s.$$
Calculating the thermodynamic force $\partial_z \tilde{\mathcal{W}}$, using the relation $\delta(\dot z)=\omega\cdot\dot z$, $\omega\in{\partial^{\rm sub}}\delta(\dot z)$, mentioned above (\ref{mdp}), we can deduce from (\ref{rovnovaha})-(\ref{neumann}) the following energy balance: $$ \mathcal{I}(t,y(t),z(t))+{\rm Diss}(z;[0,t])=\mathcal{I}(0,y(0),z(0))+\int_0^t\dot L(s,y(s))\,\mathrm{d} s.$$
Due to low regularity in time, a more general expression for ${\rm Diss}(z;[0,t])$ must be used in practice, though – see (\ref{e-bal}) below.
To this end, we define a dissipation distance between two values $z_0,z_1\in Z$ of the internal variable: \begin{eqnarray}\label{nodist} D(x,z_0,z_1):=\inf_z\left\{\int_0^1 \delta(x,z(s),\dot z(s))\,\mathrm{d} s;\ z(0)=z_0,\ z(1)=z_1\right\},\end{eqnarray} where the infimum is taken over $z\in C^1([0,1];Z)$, and set \begin{eqnarray}\label{dist} \mathcal{D}(z_1,z_2)=\int_\Omega D(x,z_1(x),z_2(x))\,\mathrm{d} x \end{eqnarray} for $z_1,z_2\in \mathbb{Z}:=\{z\colon\Omega\to\mathbb{R}^m;\ z(x)\in Z\mbox{ a.e. in $\Omega$}\}$, as in \cite{mielke1}.
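To illustrate (\ref{nodist}) and (\ref{dist}) in the simplest situation (again a standard example, included only for orientation), let $Z=\mathbb{R}$ and $\delta(x,z,\dot z)=\sigma_{\rm y}|\dot z|$ as above. Since $|z_1-z_0|=|\int_0^1\dot z(s)\,\mathrm{d} s|\le\int_0^1|\dot z(s)|\,\mathrm{d} s$ for every admissible path, and the straight path attains this bound, we obtain $D(x,z_0,z_1)=\sigma_{\rm y}|z_1-z_0|$ and consequently $\mathcal{D}(z_1,z_2)=\sigma_{\rm y}\int_\Omega|z_1(x)-z_2(x)|\,\mathrm{d} x$. In this particular case $D$ is symmetric, although in general the dissipation distance need not be.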
In order to find a quasistatic evolution of the system, Mielke, Theil, and Levitas \cite{AMFTVL} came up with the following definition of the energetic solution, which conveniently overcomes the nonsmoothness mentioned above. Moreover, it fully exploits the possible variational structure of the problem and allows for very general energy and dissipation functionals. This concept has versatile applications to many problems in continuum mechanics of solids. Additionally, working with ${\mathcal{I}}$ and $\mathcal{D}$ directly allows us to also include higher derivatives of $y$ in the model or to require integrability of some functions of $\nabla y$, as is also done in this contribution.
\subsection{Energetic solution} Suppose that the evolution of $y(t)\in {\mathbb{Y}}$ and $z(t)\in {\mathbb{Z}}$ is studied during a time interval $[0,T]$. The following two properties characterize the energetic solution due to Mielke {\it et al.} \cite{AMFTVL}.
\noindent (i) Stability inequality:\\ $\forall t\in[0,T],\, \tilde z\in {\mathbb{Z}},\, \tilde{y}\in {\mathbb{Y}}$: \begin{eqnarray}\label{stblt}\mathcal{I}(t,y(t),z(t))\le \mathcal{I}(t,\tilde y,\tilde z)+\mathcal{D}(z(t),\tilde z)\end{eqnarray}
\noindent (ii) Energy balance: $\forall\ 0\le t\le T$
\begin{eqnarray}\label{e-bal}\mathcal{I}(t,y(t),z(t))+{\rm Var}(\mathcal{D},z;[0,t]) =\mathcal{I}(0,y(0),z(0)) +\int_0^t \dot L(\xi,y(\xi))\,\mathrm{d} \xi, \end{eqnarray}
$$\text{where } {\rm Var}(\mathcal{D},z;[s,t]):=\sup\left\{\sum_{i=1}^N \mathcal{D}(z(t_i),z(t_{i-1}));\ \{t_i\} \mbox{ partition of } [s,t]\right\}.$$
\begin{definition}\label{en-so} The mapping $t\mapsto(y(t),z(t))\in {\mathbb{Y}}\times {\mathbb{Z}} $ is an energetic solution to the problem $(\mathcal{I},\delta, L)$ if the stability inequality and energy balance are satisfied for all $t\in [0,T]$. \end{definition}
\begin{remark} The mechanical idea behind the stability inequality (i) is the following: imagine first that $\tilde{z} := z(t)$; then $\mathcal{D}(z(t),\tilde{z})$ vanishes, since no change in the internal variables implies no dissipation. Consequently, (i) simplifies to $\mathcal{I}(t,y(t),z(t))\le \mathcal{I}(t,\tilde y,\tilde z)$ for all $\tilde{y}\in {\mathbb{Y}}$, and $y(t)$ is a global minimizer of $\mathcal{I}(t,\cdot, z(t))$ over ${\mathbb{Y}}$. So we see that in this case, (i) has the meaning of an elastic equilibrium. If $\tilde{z}\neq z(t)$, then the amount of energy dissipated between the states $\tilde{z}$ and $z(t)$ must, by (i), at least compensate for, if not outweigh, the associated loss in the total energy, which is a version of the principle of maximum dissipation; see \cite{mielke1}. \end{remark}
We will write $\mathbb{Q}:=\mathbb{Y}\times \mathbb{Z}$ and set $q:=(y,z)$. Next let us define the set of \textit{stable states} at time $t$ as
\begin{eqnarray} \mathcal{S}(t):=\{q\in\mathbb{Q};\ \forall \tilde q\in\mathbb{Q}:\ \mathcal{I}(t,q)\le \mathcal{I}(t,\tilde q)+\mathcal{D}(q,\tilde q)\} \end{eqnarray} and \begin{eqnarray}\mathcal{S}_{[0,T]}:=\bigcup_{t\in[0,T]}\{t\}\times\mathcal{S}(t).\end{eqnarray}
Moreover, a sequence $\{(t_k,q_k)\}_{k\in{\mathbb N}}$ is called {\it stable} if $q_k\in \mathcal{S}(t_k)$.
\section{Applications to elastoplasticity}\label{sec:main} This section shows how the energetic approach can be applied to an elastoplastic problem of gradient-polyconvex materials.
\subsection{Gradient polyconvexity} Gradient polyconvexity was first defined in \cite{bbmkas} in analogy with classical polyconvexity \cite{ball77}. The difference is that here we assume that the stored energy density is convex in gradients of minors of the deformation gradient but not in minors alone. More precisely, the following definition is taken from \cite{bbmkas}. We recall that for an invertible $F\in\mathbb{R}^{n\times n}$ we define the cofactor of $F$, ${\rm cof} F=({\rm det} F) F^{-\top}\in\mathbb{R}^{n\times n}$. \color{black} As is shown in \cite{bbmkas} there are maps $y\in W^{1,1}(\Omega;\mathbb{R}^3)$ such that ${\rm det}\nabla y$ and ${\rm cof}\nabla y$ are Lipschitz continuous but $y\not\in W^{2,1}(\Omega;\mathbb{R}^3)$. The same conclusion can be reached for every $n\ge 3$. On the other hand, if $n=2$ then $\nabla{\rm cof}\nabla y$ has the same entries (up to the minus sign) as $\nabla^2 y$. \EEE
\begin{definition} \label{def-gpc} Let $\Omega\subset\mathbb{R}^n$ be a bounded open domain. Let $\mathcal{W}_1\colon\mathbb{R}^{n\times n}\times\mathbb{R}^{n\times n\times n}\to\mathbb{R}\cup\{+\infty\}$ be a lower semicontinuous function. The functional \begin{align}\label{full-I} J(y) = \int_\Omega \mathcal{W}_1(\nabla y(x), \nabla[ {\rm cof} \nabla y(x)]) \, \mathrm{d} x, \end{align} defined for any measurable function \color{black} $y\colon \Omega \to \mathbb{R}^n$ for which the weak derivatives $\nabla y$, $\nabla[ {\rm cof} \nabla y]$ exist and are integrable is called {\em gradient polyconvex} if the function $\mathcal{W}_1(F,\cdot)$ is convex for every $F\in\mathbb{R}^{n\times n}$. \EEE \end{definition}
We assume that for some $c>0$, and numbers \color{black} $\alpha> n-1$\EEE, $s>0$ it holds that for every $F\in\mathbb{R}^{n\times n}$ and every $H\in\mathbb{R}^{n\times n\times n}$ we have that \begin{align}\label{growth-graddet1} \mathcal{W}_1(F,H)\ge\begin{cases}
c\big(|F|^\alpha + ({\rm det} F)^{-s}+|H|^{\alpha/(n-1)} \big) &\text{ if }{\rm det} F>0,\\ +\infty&\text{ otherwise,} \end{cases} \end{align}
where $|\cdot|$ denotes the Euclidean norm. These growth assumptions can surely be weakened, but we stick to them in order to simplify the presentation.
The idea behind condition \eqref{growth-graddet1} is that the energy blows up if the deformation does not preserve orientation or if the measures of strain on the right-hand side grow to extreme values. Coercivity conditions involving $|F|$ and ${\rm det} F$ are commonly used in nonlinear elasticity (see e.g. \cite{ciarlet}) and reflect variations in volume or changes of the distances of points caused by the deformation. The term $H$, which is a placeholder for $\nabla{\rm cof}\nabla y$, penalizes spatial changes of ${\rm cof}\nabla y$ and, consequently, aims at suppressing abrupt areal changes in the deformed configuration.
An important feature of gradient polyconvexity is that {\it no} convexity assumptions on $\mathcal{W}_1$ are needed in the $F$-variable, so that very general material laws can be considered, including multiwell energy functions \cite{bhatta} or the St.~Venant–Kirchhoff energy density \cite{ciarlet}. We also call a material gradient-polyconvex if its stored energy $J$ is of the form \eqref{full-I}.
\subsection{Assumptions on problem data}\label{assump}
As in \cite{gurtin} we will consider so-called {\it separable materials}, i.e. materials where the elastoplastic energy density has the form \begin{eqnarray}\label{separable} \mathcal{W}(F_{\rm e}, H, F_{\rm p},\nabla F_{\rm p}, p,\nabla p):=\mathcal{W}_1(F_{\rm e}, H)+\mathcal{W}_2(F_{\rm p},\nabla F_{\rm p}, p,\nabla p). \end{eqnarray}
We will assume that $\mathcal{W}_1$ is continuous and satisfies \eqref{growth-graddet1}, while for $\mathcal{W}_2$ we require:\\ \noindent (i) The plastic part $\mathcal{W}_2$ is continuous in all its arguments.
\noindent (ii) There are constants $C,c>0$ such that the following growth condition holds with some $c_1>0$, $\beta>n$, and $\omega>n$:
\begin{align}\label{growth}
C(1+|F_{\rm p}|^{\beta}+|G|^{\beta} +|p|^{\omega}+|\pi|^{\omega})&\ge\mathcal{W}_2(F_{\rm p}, G, p,\pi)\nonumber\\
&\ge c(|F_{\rm p}|^{\beta}+|G|^{\beta} +|p|^{\omega}+|\pi|^{\omega})-c_1. \end{align}
\noindent (iii) There is $c_2>0$, $v^*\in\mathbb{R}^m$ and a modulus of continuity \color{black} $\hat{\omega}$ such that for all $\hat{\alpha}>0$ \EEE, $F_{\rm p}\in\mathbb{R}^{n\times n}$, $G\in\mathbb{R}^{n\times n\times n}$, $p\in\mathbb{R}^m$ and $\pi\in\mathbb{R}^{m\times n}$: \begin{eqnarray}\label{W2cont}
|\mathcal{W}_2(F_{\rm p},G,p+\hat{\alpha} v^*,\pi)-\mathcal{W}_2(F_{\rm p},G,p,\pi)|\le \hat{\omega}(\hat{\alpha})(\mathcal{W}_2(F_{\rm p},G,p,\pi)+c_2). \end{eqnarray}
Furthermore, let us suppose that for every $F_{\rm e}$, $F_{\rm p}\in\mathbb{R}^{n\times n}$ and $p\in \mathbb{R}^m$, the functions $\mathcal{W}_1(F_{\rm e},\cdot)$ and $\mathcal{W}_2(F_{\rm p},\cdot,p,\cdot)$ are convex.
The dissipation distance $\mathcal{D}\colon{\mathbb{Z}}\times{\mathbb{Z}}\to[0,+\infty]$ takes the form (\ref{dist}) \color{black} for a function $D\colon \Omega\times ({\rm SL}(n)\times\mathbb{R}^m)^2\to[0,+\infty]$ \EEE and we only change the definition of ${\mathbb{Z}}$ to (\ref{Z}) below. We make the following assumptions on $\mathcal{D}$:
\noindent (i) Lower semicontinuity: \begin{eqnarray}\label{assonD1} \mathcal{D}(z,\tilde z)\le\liminf_{k\to\infty}\mathcal{D}(z_k,\tilde z_k),\end{eqnarray} whenever $z_k{\rightharpoonup} z$ and $\tilde z_k {\rightharpoonup}\tilde z$.
\noindent (ii) Positivity: \begin{eqnarray}\label{assonD2} \text{If }\{z_k\}\subset Z\text{ is bounded and }\min\{\mathcal{D}(z_k,z),\mathcal{D}(z,z_k)\}\to 0\text{ then }z_k{\rightharpoonup} z. \end{eqnarray}
\noindent (iii) For all $z_1$, $z_2\in\mathbb{Z}$: $\mathcal{D}(z_1, z_2)=0$ if and only if $z_1=z_2$.
\noindent (iv) Triangle inequality: $\mathcal{D}(z_1, z_3)\le\mathcal{D}(z_1, z_2)+\mathcal{D}(z_2,z_3)$ for all $z_1$, $z_2$, $z_3\in\mathbb{Z}$.
We refer the reader to \cite{mami2} to see that (ii) follows from (i) and (iii). \color{black} After stating Proposition \ref{limstab}, we specify further assumptions on $D$ ((\ref{nodi1}) or (3.A)--(3.C)). Besides, it is naturally required that $D$ be such that (i)--(iv) hold.\EEE
In order to prove the existence of a solution to (\ref{timestep}) we must impose some data qualifications. In what follows, we assume that \begin{eqnarray}\label{ass-f}f \in C^1\left([0, T]; L^{\tilde d} \left(\Omega; \mathbb{R}^n \right) \right),\end{eqnarray}
\begin{eqnarray}\label{ass-g} g \in C^1\left([0, T]; L^{\hat d} \left(\Gamma_1; \mathbb{R}^n \right) \right),\end{eqnarray} where \color{black} $\tilde d\ge [nd/(n-d)]'=nd/(nd-n+d)$ \EEE if $1\le d<n$ or $\tilde d> 1$ otherwise. Similarly, we suppose that \color{black} $\hat d\ge [(nd-d)/(n-d)]'=(nd-d)/(nd-n)$ \EEE if $d<n$ or $\hat d> 1$ otherwise.
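For instance, for $n=3$ and $d=2$ these conditions read $\tilde d\ge 6/5$ and $\hat d\ge 4/3$, which are exactly the exponents dual to the Sobolev embedding $W^{1,2}(\Omega)\hookrightarrow L^{6}(\Omega)$ and the trace embedding $W^{1,2}(\Omega)\hookrightarrow L^{4}(\Gamma_1)$, so that $L(t,\cdot)$ from (\ref{loading1}) is well defined and weakly continuous on $\mathbb{Y}$.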
\subsection{Formulation of the problem} From now on, $y\colon\Omega\to\mathbb{R}^n$ will represent the deformation of a material body, whose reference configuration is a bounded Lipschitz domain $\Omega\subset\mathbb{R}^n$. Since $y$ models both \textit{elastic} and \textit{plastic} behaviour, we split the deformation gradient $F=\nabla y$ as $F=F_{\rm e} F_{\rm p}$, where $F_{\rm e}$ stands for an elastic part and $F_{\rm p}\in {\rm SL}(n):=\{A\in\mathbb{R}^{n\times n};\ {\rm det}\ A=1\}$ is a plastic part, which irreversibly transforms the material. To capture e.g. \textit{back stresses}, we use the vector $p\in\mathbb{R}^m$ of hardening internal variables. Written together, $z(x)=(F_{\rm p}(x),p(x))$ is a plastic variable, lying in ${\rm SL}(n)\times\mathbb{R}^m$ for almost all $x\in\Omega$.
The energy functional ${\mathcal{I}}$ is given by \begin{eqnarray} {\mathcal{I}}(t,y(t),z(t)):=\int_\Omega \mathcal{W}(\nabla y F_{\rm p}^{-1}, \nabla[({\rm cof}\nabla y)F_{\rm p}^\top], F_{\rm p}, \nabla F_{\rm p}, p,\nabla p)\,\mathrm{d} x-L(t,y(t)), \end{eqnarray} with $L$ defined in (\ref{loading1}).
Our stored energy density $\mathcal{W}$ does not explicitly depend on the spatial variable $x$, but treating the inhomogeneous case would not need many modifications.
Let us remark that $({\rm cof} \nabla y)F_{\rm p}^\top$ is the cofactor of the elastic part $F_{\rm e}$, since by the product rule for cofactor matrices \cite[p.~4]{ciarlet} and by $F_{\rm p}\in{\rm SL}(n)$, we have $${\rm cof} F_{\rm e} = {\rm cof}(FF_{\rm p}^{-1})=({\rm cof} F){\rm cof}(F_{\rm p}^{-1})=({\rm cof} F){\rm det}(F_{\rm p}^{-1})(F_{\rm p}^{-1})^{-\top}=({\rm cof} F)F_{\rm p}^\top.$$
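The following short script is a purely illustrative numerical sanity check of this identity (it is not used anywhere in the analysis; the random matrices, the seed and the normalization are arbitrary choices).
\begin{verbatim}
import numpy as np

def cof(A):                          # cof A = (det A) A^{-T}
    return np.linalg.det(A) * np.linalg.inv(A).T

rng = np.random.default_rng(0)
n = 3
F = rng.standard_normal((n, n))
if np.linalg.det(F) < 0:             # enforce det F > 0
    F[0] *= -1.0
Fp = rng.standard_normal((n, n))
if np.linalg.det(Fp) < 0:
    Fp[0] *= -1.0
Fp /= np.linalg.det(Fp) ** (1.0 / n) # now det Fp = 1, i.e. Fp is in SL(n)
Fe = F @ np.linalg.inv(Fp)
assert np.allclose(cof(Fe), cof(F) @ Fp.T)
\end{verbatim}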
The admissible deformations $y$ lie in
$$\mathbb{Y}:=\{y\in W^{1,d}(\Omega;\mathbb{R}^n);\ y=y_0 \mbox{ on }\Gamma_0 \}, $$
where $\Gamma_0\subset\partial\Omega$ has positive surface measure and $y_0\in W^{1-1/d,d}(\Gamma_0;\mathbb{R}^n)$ is given. Assuming that $\Gamma_1\subset\partial\Omega$ as in Section 2, we suppose $\Gamma_0\cap\Gamma_1=\emptyset$. For the internal states $z$ let us define the set \begin{eqnarray}\label{Z} \mathbb{Z}:=\{(F_{\rm p},p)\in W^{1,\beta}(\Omega;\mathbb{R}^{n\times n})\times W^{1,\omega}(\Omega;\mathbb{R}^m):\ F_{\rm p}(x)\in {\rm SL}(n) \mbox{ for a.e.~$x\in\Omega$}\}. \end{eqnarray}
For ease of notation, we write $q=(y,z)\color{black} \in \mathbb{Q}=\mathbb{Y}\times\mathbb{Z}\EEE$ and understand \color{black} $\mathcal{I}(t,\cdot)$, $L(t,\cdot)$ and $\mathcal{D}$ \EEE as functions of $q$, that is: \begin{gather*} {\mathcal{I}}(t,q(t))=\int_\Omega \mathcal{W}(\nabla y F_{\rm p}^{-1}, \nabla[({\rm cof}\nabla y)F_{\rm p}^{\top}], F_{\rm p}, \nabla F_{\rm p}, p,\nabla p)\,\mathrm{d} x-L(t,q(t)),\\ L(t,q(t)):=L(t,y(t)),\\ \mathcal{D}(q_1,q_2):=\mathcal{D} (z_1, z_2) \end{gather*} if $q_1=(y_1,z_1)$ and $q_2=(y_2,z_2)$.
In order to prove the existence of an energetic solution to our problem we will need the following results of technical nature.
\subsection{Auxiliary results}
We start this section by the following reverse Young inequality.
\begin{lemma}\label{Yconseq} Suppose that $a>0$, $b>0$, $\delta>0$, $r>1$. Then \begin{equation*} \frac{a}{b}\geq r\delta^\frac{r}{r-1}a^\frac{1}{r}-(r-1)\delta^\frac{r^2}{(r-1)^2}b^\frac{1}{r-1}. \end{equation*} \end{lemma} \begin{proof} Young's inequality states that given a pair of positive numbers $\alpha, \beta$ and $1<p,q<+\infty$, $\frac{1}{p}+\frac{1}{q}=1$, then \begin{equation}\label{eq:Young} \alpha\beta\leq\frac{\alpha^p}{p}+\frac{\beta^q}{q}. \end{equation} Set $p=r$, $\alpha=a^\frac{1}{r}$, $\beta=\delta^\frac{r}{r-1}b$ in \eqref{eq:Young}. Then $q=\frac{r}{r-1}$ and Young's inequality yields $$\frac{r-1}{r}\delta^\frac{r^2}{(r-1)^2}b^\frac{r}{r-1}+\frac{1}{r}a\geq \delta^\frac{r}{r-1}a^\frac{1}{r}b,$$ which after multiplying by $\frac{r}{b}$ implies the desired result. \end{proof}
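As a quick sanity check, for $r=2$ and $\delta=1$ the lemma states that $\frac{a}{b}\geq 2\sqrt{a}-b$, which after multiplication by $b>0$ is nothing but $(\sqrt{a}-b)^2\geq 0$.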
It will also be useful to give a name to a kind of convergence which makes ${\mathcal{I}}(t,\cdot)$ lower semicontinuous.
\begin{definition} We say that the sequence \color{black} $\{q_k\}_{k\in{\mathbb N}}\subset\mathbb{Q}$, \EEE $q_k=(y_k,z_k)=(y_k,F_{{\rm p}k},p_k)$ \emph{gpc-converges} to $q_*=(y_*,z_*)=(y_*,F_{{\rm p}*},p_*)$ if $z_k{\rightharpoonup} z_*$ in $\mathbb{Z}$, $\nabla{\rm cof}(\nabla y_k F_{{\rm p}k}^{-1}){\rightharpoonup} \nabla{\rm cof}(\nabla y_* F_{{\rm p}*}^{-1})$ in $L^{\alpha/(n-1)}(\Omega;\mathbb{R}^{n\times n\times n})$ and $\nabla y_k\to \nabla y_*$ in measure. We write $q_k\gto q_*$ for short. \end{definition}
\begin{lemma} Let $t_k\to t_*$ with $t_k$, $t_*\in [0,T]$, $k\in{\mathbb N}$, and $q_k\gto q_*$, $\{q_k\}_{k\in{\mathbb N}}\subset\mathbb{Q}$. Then ${\mathcal{I}}(t_*,q_*)\le\liminf_{k\to\infty}{\mathcal{I}}(t_k,q_k)$. \end{lemma} \begin{proof} This is an immediate consequence of \cite[Corollary 7.9]{FL}. \color{black} Note that we can construct a subsequence such that $\nabla y_{k_j}F_{{\rm p}k_j}^{-1}\to\nabla y_* F_{{\rm p}*}^{-1}$ almost everywhere.\EEE \end{proof} Even though we do not have the weak lower semicontinuity of ${\mathcal{I}}$ in general, we can get it for a subsequence provided the respective values of ${\mathcal{I}}$ are bounded. \begin{lemma}\label{gtosubseq} \color{black} Provided that $\alpha^{-1}+\beta^{-1}\le d^{-1}< (n-1)^{-1}$ and $d>\frac{\beta(n-1)}{\beta-1}$\EEE, let $t_k\in [0,T]$, $k\in{\mathbb N}$, and $q_k{\rightharpoonup} q_*$ in $\mathbb{Q}$. Suppose there is $C_I>0$ such that for all $k\in{\mathbb N}$ the bound ${\mathcal{I}}(t_k,q_k)\le C_I$ holds true. Then there exists a subsequence $\{q_{k_j}\}_{j\in{\mathbb N}}$ of $\{q_k\}_{k\in{\mathbb N}}$ that gpc-converges to the same limit $q_*$. \end{lemma} \begin{proof} The proof goes the same way as in Proposition \ref{timestep-ex}. To keep the flow of ideas uninterrupted there, we postpone the presentation to that section. \end{proof}
\begin{proposition}\label{limstab} Let ${\mathcal{I}}$ be lower semicontinuous with respect to gpc-convergence and let (\ref{ass-f}) and (\ref{ass-g}) hold. Assume that for all $(t_*,q_*)\in[0,T]\times\mathbb{Q}$, all stable sequences $\{(t_k,q_k)\}_{k\in{\mathbb N}}$ such that w-$\lim_{k\to\infty} (t_k,q_k)=(t_*,q_*)$, and all $\tilde q\in\mathbb{Q}$ there is $\{\tilde q_k\}\subset \mathbb{Q}$ such that \begin{align}\label{limsup} \limsup_{k\to\infty}({\mathcal{I}}(t_k,\tilde q_k)+\mathcal{D}(q_k,\tilde q_k)) \le {\mathcal{I}}(t_*,\tilde q)+\mathcal{D}(q_*,\tilde q). \end{align} Then for any stable sequence $\{(t_k,q_k)\}_{k\in{\mathbb N}}$ such that w-$\lim_{k\to\infty} (t_k,q_k)=(t_*,q_*)$ and ${\mathcal{I}}(t_k,q_k)\le C$ for some $C>0$, we have $\lim_{k\to\infty}{\mathcal{I}}(t_k,q_k)={\mathcal{I}}(t_*,q_*)$ and $q_*\in\mathcal{S}(t_*)$. \end{proposition}
{\it Proof.} We follow the proof of Prop.~4.3 in \cite{mami2}. Take $\tilde q:=q_*$ in (\ref{limsup}), which yields a sequence $\{\tilde q_k\}_{k\in{\mathbb N}}$. Then we get, by the stability of $q_k$, \begin{eqnarray}\label{Iusc} \limsup_{k\to\infty}{\mathcal{I}}(t_k, q_k)\le \limsup_{k\to\infty}({\mathcal{I}}(t_k,\tilde q_k)+\mathcal{D}(q_k,\tilde q_k))\le {\mathcal{I}}(t_*,\tilde q)+\mathcal{D}(q_*,\tilde q)= {\mathcal{I}}(t_*,q_*). \end{eqnarray}
The assumptions (\ref{ass-f}) and (\ref{ass-g}) on $f$ and $g$ further give \begin{eqnarray}\label{tStartk}
\lim_{k\to\infty}|{\mathcal{I}}(t_k,q_k)-{\mathcal{I}}(t_*,q_k)|=\lim_{k\to\infty}|L(t_k,q_k)-L(t_*,q_k)|=0. \end{eqnarray}
Since ${\mathcal{I}}$ is lower semicontinuous with respect to gpc-convergence (and by Lemma \ref{gtosubseq} we can pass to a gpc-convergent subsequence, without relabeling it), we deduce by equation (\ref{tStartk}) that $$\liminf_{k\to\infty} {\mathcal{I}}(t_k,q_k) =\lim_{k\to\infty}({\mathcal{I}}(t_k,q_k)-{\mathcal{I}}(t_*,q_k))+\liminf_{k\to\infty}{\mathcal{I}}(t_*,q_k)\ge{\mathcal{I}}(t_*,q_*).$$ This combined with (\ref{Iusc}) establishes the weak continuity along a stable sequence: ${\mathcal{I}}(t_k,q_k)\to{\mathcal{I}}(t_*,q_*)$. In the end, pick a $\tilde q\in\mathbb{Q}$ and apply (\ref{limsup}) to it: $$ {\mathcal{I}}(t_*,q_*)=\lim_{k\to\infty}{\mathcal{I}}(t_k,q_k)\le\liminf_{k\to\infty}({\mathcal{I}}(t_k,\tilde q_k)+\mathcal{D}(q_k,\tilde q_k))\le {\mathcal{I}}(t_*,\tilde q)+\mathcal{D}(q_*,\tilde q);$$ hence, the stability of $q_*$ is proved.
$\Box$
A natural question is how to ensure the validity of (\ref{limsup}). If $\mathcal{D}\colon\mathbb{Q}\times\mathbb{Q}\to [0,+\infty)$, i.e. no irreversibility constraint is imposed on plastic processes, then it is sufficient if $D$ from (\ref{dist}) satisfies \color{black} \begin{eqnarray}\label{nodi1}\begin{aligned} &D\text{ is a Carathéodory mapping and}\\
&D(x,z_1,z_2)\le c(x)+ C(|F_{\p1}|^{\beta^*-\epsilon} + |F_{\p2}|^{\beta^*-\epsilon}+|p_1|^{\omega^*-\epsilon}+|p_2|^{\omega^*-\epsilon}), \end{aligned}\end{eqnarray}\EEE where $\epsilon>0$ is small enough and $\beta^*:=n\beta/(n-\beta)$ if $n>\beta$ and $\beta^*>1$ if $\beta\ge n$. Similarly, $\omega^*:=n\omega/(n-\omega)$ if $n>\omega$ and $\omega^*>1$ if $\omega\ge n$. Then the compact embedding provides the continuity of $\mathcal{D}$. This shows that (\ref{limsup}) is valid with a constant sequence $\tilde{q}_k=\tilde{q}$.
If $\mathcal{D}\colon\mathbb{Q}\times\mathbb{Q}\to [0,+\infty]$, the assumptions are more elaborate. Following \cite{mami2} we impose the following {\it sufficient} conditions on $D$ from (\ref{nodist}):\\ \noindent \color{black} (3.A) \EEE $D(x,\cdot,\cdot)\colon\mathbb{D}(x)\to[0,+\infty)$ is continuous, where
$\mathbb{D}(x):=\{(z_1,z_2);\ D(x,z_1,z_2)<+\infty\}$,\\ \noindent (3.B) For every $R>0$ there is $K>0$ such that for almost all $x\in\Omega$:
$D(x,z_1,z_2)<K$ if $z_1,z_2\in \mathbb{D}(x)$ and $|z_1|,|z_2|<R$, and \\ (3.C) The direction $v^*\in\mathbb{R}^m$ from (\ref{W2cont}) has the property that for all $\alpha,R>0$ there is $\rho>0$ such that for almost every $x\in\Omega$ and every $z,z_0,z_1$:
$$ |z-z_0|<\rho \mbox{, }(z_0,z_1)\in \mathbb{D}(x)\mbox{ and }|z_0|,|z_1|<R\mbox{ implies } (z,z_1+(0,\alpha v^*))\in \mathbb{D}(x).$$
\begin{proposition} Let $\beta,\omega>n$. Let $D$ satisfy (3.A)--(3.C). Then (\ref{limsup}) holds. \end{proposition}
{\it Proof.} The reasoning follows the lines of \cite{mami2}. If $\mathcal{D}(q_*,\tilde q)=+\infty$ in (\ref{limsup}), the proof is finished. So, we assume that $$\mathcal{D}(q_*,\tilde q)\in\mathbb{R}. $$ If $q_k{\rightharpoonup} q_*$, we observe that \begin{eqnarray}\label{compPlast}
\rho_k:=\|F_{{\rm p} k}-F_{{\rm p}*}\|_{C(\bar\Omega;\mathbb{R}^{n\times n})}+\|p_k- p_*\|_{C(\bar\Omega;\mathbb{R}^m)}\to 0
\end{eqnarray} by the compact embedding. Then $|z_k|+|z_*| +|\tilde z|<R$ for some $R>0$ if $k$ is large enough. Define $\tilde z_k:= (\tilde F_{\rm p},\tilde p+\alpha_k v^*)$ where $\alpha_k\to 0$ and relates to $\rho_k$
as in (3.C) (we may need to redefine the $\rho_k$ from (\ref{compPlast}) by passing to a subsequence, which is without loss of generality). Thus, $(z_k,\tilde z_k)\in \mathbb{D}(x)$ a.e. in $\Omega$ and we have $|z_k|$, $|\tilde z_k| <R$. The continuity of $D$ gives the pointwise convergence $D(x,z_k,\tilde z_k)\to D(x,z_*,\tilde z)$, so that $\mathcal{D}(q_k,\tilde q_k)\to\mathcal{D}(q_*,\tilde q)$ by condition (3.B) and the dominated convergence theorem. Furthermore, properties of $\mathcal{W}_2$ and $L$ (assumptions (\ref{ass-f}), (\ref{ass-g})) imply that ${\mathcal{I}}(t_k,\tilde{q}_k)\to {\mathcal{I}}(t_*,\tilde{q})$. Summing up, we deduce that (\ref{limsup}) is satisfied with equality.
$\Box$
\subsection{Incremental problems}
Next, we define the following sequence of incremental problems. We consider a \color{black} {\it stable} \EEE initial condition $q^0_\tau:=q^0\in\mathbb{Q}$.
Let us take a time step $\tau>0$ chosen so that $N=T/\tau\in{\mathbb N}$. For $1\le k\le N$ and $t_k:=k\tau$, find $q_\tau^k\in\mathbb{Q}$ such that
$q^k_\tau$ solves \begin{eqnarray}\label{timestep}\left.\begin{array}{ll} \mbox{minimize } & {\mathcal{I}}(t_k,q)+\mathcal{D}(q_\tau^{k-1},q)\\ \mbox{subject to } & q\in\mathbb{Q}.
\end{array}\right\}\ \end{eqnarray}
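In a drastically simplified scalar setting (a toy quadratic energy and a toy $1$-homogeneous dissipation which are {\it not} the functionals ${\mathcal{I}}$ and $\mathcal{D}$ studied here), each incremental problem of the type (\ref{timestep}) has a closed-form minimizer; the following sketch only illustrates the structure of the time-discrete scheme.
\begin{verbatim}
# Toy incremental scheme: I(t,q) = 0.5*(q - ell(t))**2, D(q0,q1) = kappa*|q1 - q0|.
def increment(q_prev, ell, kappa):
    """Minimizer of q -> 0.5*(q - ell)**2 + kappa*abs(q - q_prev)."""
    if ell - q_prev > kappa:
        return ell - kappa      # plastic loading
    if ell - q_prev < -kappa:
        return ell + kappa      # plastic unloading
    return q_prev               # elastic (sticking) regime, no dissipation

def evolve(loads, q0, kappa):
    states = [q0]
    for ell in loads[1:]:
        states.append(increment(states[-1], ell, kappa))
    return states

if __name__ == "__main__":
    N, kappa = 20, 0.3
    loads = [2.0 * k / N for k in range(N + 1)]   # monotone loading ell(t_k)
    print(evolve(loads, q0=0.0, kappa=kappa))
\end{verbatim}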
\begin{proposition}\label{timestep-ex} Let $\alpha^{-1}+\beta^{-1}\le d^{-1}< \color{black} (n-1)^{-1}\EEE$, $\color{black} d>\frac{\beta(n-1)}{\beta-1}\EEE$. Let the assumptions on $\mathcal{W}$ and $\mathcal{D}$ be satisfied. Let further (\ref{ass-f}) and (\ref{ass-g}) be satisfied. Then the problem (\ref{timestep}) has a solution for all $k=1,\ldots, T/\tau$. In addition, for the solution $q_\tau^k=(y_\tau^k,z_\tau^k)$ we get ${\rm det}\nabla y_\tau^k>0$ a.e. in $\Omega$. \end{proposition}
{\it Proof.} Given $q_\tau^{k-1}\in \mathbb{Q}$ from the previous time step, suppose that $\{q_j\}\subset \mathbb{Q}$ is a minimizing sequence for $q\mapsto{\mathcal{I}}(t_k,q)+\mathcal{D}(q_\tau^{k-1},q)$. The assumption (\ref{growth}) implies that $\{z_j\}$ is uniformly bounded in $W^{1,\beta}(\Omega;\mathbb{R}^{n\times n})\times W^{1,\omega}(\Omega;\mathbb{R}^m)$. Hence, as $\beta,\omega>1$ we can extract a weakly converging subsequence (not relabeled) $z_j{\rightharpoonup} z$ in $W^{1,\beta}(\Omega;\mathbb{R}^{n\times n})\times W^{1,\omega}(\Omega;\mathbb{R}^m)$. The strong convergence of $z_j\to z:=(F_{\rm p},p)$ in $L^\beta(\Omega;\mathbb{R}^{n\times n})\times L^\omega(\Omega;\mathbb{R}^m)$ ensures that $F_{\rm p}(x)\in {\rm SL}(n)$ almost everywhere. \color{black} Write $z_j=(F_{\rm p}^j,p^j)$, $q_j=(y^j,z_j)$. \EEE Exploiting the submultiplicativity of the Euclidean norm, we estimate using Lemma~\ref{Yconseq} \begin{align}
\int_\Omega |\nabla y^j(x)(F_{\rm p}^j)^{-1}|^\alpha\,\mathrm{d} x&\ge\frac{ \|\nabla y^j\|_{L^d(\Omega;\mathbb{R}^{n\times n})}^\alpha}{\|F^j_{\rm p}\|_{L^\beta(\Omega;\mathbb{R}^{n\times n})}^\alpha}\nonumber\\
&\ge \color{black}\frac{\alpha}{d}\delta^{\alpha/(\alpha-d)}\|\nabla y^j\|_{L^d(\Omega;\mathbb{R}^{n\times n})}^d \EEE-\frac{\alpha-d}{d}\delta^{\alpha^2/(\alpha-d)^2}\|F^j_{\rm p}\|_{L^\beta(\Omega;\mathbb{R}^{n\times n})}^\beta. \end{align}
The $L^d$-term on the right hand side is bounded due to (\ref{growth}) and the boundedness of $\{y^j\}$ in $W^{1,d}(\Omega;\mathbb{R}^n)$ follows by the Poincar\'{e} inequality if $\delta>0$ is taken small. Hence $y^j{\rightharpoonup} y$ in $W^{1,d}(\Omega;\mathbb{R}^n)$ (up to a subsequence). This then also implies that ${\rm cof}\nabla y^j{\rightharpoonup}{\rm cof}\nabla y$ in $L^{d/(n-1)}(\Omega;\mathbb{R}^{n\times n})$; cf. \cite{ciarlet}. Due to reflexivity of $W^{1,\alpha/(n-1)}(\Omega;\mathbb{R}^{n\times n})$ we get for a non-relabelled subsequence that $({\rm cof}\nabla y^j )F^{j\top}_{\rm p}{\rightharpoonup} \color{black} \,\Xi\EEE$ in $W^{1,\alpha/(n-1)}(\Omega;\mathbb{R}^{n\times n})$ \color{black} for some $\Xi$ \EEE. However, the weak convergence of $\{{\rm cof}\nabla y^j\}_j$ and the strong convergence of $\{F_{\rm p}^j\}$ allow us to identify $\color{black} \Xi \EEE=({\rm cof}\nabla y)F^\top_{\rm p}$. Moreover, the growth condition \eqref{growth-graddet1} implies that ${\rm det}\nabla y>0$ a.e. in $\Omega$ (see \cite{bbmkas}). Cramer's rule together with ${\rm det} F_{\rm p}^j=1$ and ${\rm det}^{n-1}(\nabla y^j(F^j_{\rm p})^{-1}) ={\rm det}({\rm cof}\color{black}[\nabla y^j(F^j_{\rm p})^{-1}]\EEE)= {\rm det}({\rm cof}\nabla y^j)$ \color{black} give \EEE $$ \frac{{\rm cof}[\nabla y^j (F_{\rm p}^j)^{-1}]}{{\rm det}\nabla y^j}=(\nabla y^j (F_{\rm p}^j)^{-1})^{-\top}$$ and the transpose of the left-hand side converges pointwise (for a subsequence again) to $(\nabla y (F_{\rm p})^{-1})^{-1}$ a.e. in $\Omega$. This, together with the fact that ${\rm det}\nabla y>0$ a.e., implies that $\nabla y^j (F^j_{\rm p})^{-1}\to \nabla y (F_{\rm p})^{-1}$ a.e. in $\Omega$. Then \color{black} by\cite[Corollary 7.9]{FL} we see \EEE that $\mathcal{I}$ is weakly lower semicontinuous.
The assumptions on $\mathcal{D}$ ensure that it is sequentially weakly lower semicontinuous on $\mathbb{Q}$, too. By the direct method of the calculus of variations, we conclude that a minimizer $q_\tau^k$ exists.
$\Box$ \begin{remark} Note that in fact, we proved that a minimizing sequence has a gpc-convergent subsequence. \end{remark}
\subsection{Interpolation in time}
We denote by $q_\tau$ a piecewise constant interpolation of $q_\tau^k=:(y_\tau^k,z^k_\tau)$, i.e. $q_\tau(t)=q_\tau^k$ if $t\in [k\tau,(k+1)\tau)$ and $k=0,\ldots, T/\tau-1$. Finally, $q_\tau(T):=q_\tau^N$. Analogously,
$L_\tau(t,q_\tau(k\tau)):=L(k\tau,q_\tau(k\tau))$ is a piecewise constant interpolation of $L$ and ${\mathcal{I}}_\tau(t,q_\tau(k\tau)):=
{\mathcal{I}}(k\tau,q_\tau(k\tau))$ is a piecewise constant interpolation of ${\mathcal{I}}$.
\begin{proposition}
\label{prop:3}
Under the assumptions of Proposition \ref{timestep-ex},
problem~\eqref{timestep} has a solution $q_\tau(t)$ which is stable,
i.e., for all $t \in [0,T]$ and for every $q \in \mathbb{Q}$,
\begin{equation}
\label{eq:stab}
\mathcal{I}_\tau (t, q_\tau(t)) \le
\mathcal{I}_\tau(t, q) + \mathcal{D} \left(q_\tau(t), q \right).
\end{equation}
Moreover, for all $t_I \le t_{I\!I}$ from the set $\{k \tau\}_{k =
0}^N$, the following discrete energy inequalities hold if one
extends the definition of $q_\tau(t)$ by setting $q_\tau(t) :=
q^0$ if $t < 0$:
\begin{multline}
\label{eq:energy}
- \int_{t_I}^{t_{I\!I}} \dot L \left(t, q_\tau(t-\tau) \right) \, \mathrm{d} t \le
\mathcal{I} \left(t_{I\!I}, q_\tau(t_{I\!I}) \right)
+ {\rm Var}\left(\mathcal{D},q_\tau; \left[t_I, t_{I\!I} \right] \right)
- \mathcal{I} \left(t_I, q_\tau(t_I) \right) \\
\le -\int_{t_I}^{t_{I\!I}} \dot L \left(t, q_\tau (t) \right) \, \mathrm{d}
t.
\end{multline} \end{proposition}
{\it Proof.} In Proposition~\ref{timestep-ex} we proved the existence of a solution to~\eqref{timestep}.
To show the stability estimate~\eqref{eq:stab}, we use the minimizing property of $q_\tau^k$ and a triangle inequality for $\mathcal{D}$. Indeed, since $q_\tau^k$ is a minimizer, \begin{equation}
\label{eq:17}
\mathcal{I} \left(k \tau, q^k_\tau \right) + \mathcal{D} \left(q^{k-1}_\tau, q^k_\tau
\right)
\le \mathcal{I} \left(k \tau, q \right) + \mathcal{D} \left(q^{k-1}_\tau,
q \right), \end{equation} from which we infer that \begin{equation*}
\mathcal{I} \left(k \tau, q^k_\tau \right)\le \mathcal{I} \left(k \tau, q \right) +
\mathcal{D} \left(q^{k-1}_\tau, q \right)-\mathcal{D} \left(q^{k-1}_\tau, q^k_\tau
\right). \end{equation*} However, the triangle inequality for $\mathcal{D}$ implies that \begin{equation*}
\mathcal{D} \left(q^{k-1}_\tau, q \right) - \mathcal{D} \left(q^{k-1}_\tau,
q^k_\tau \right) \le \mathcal{D} \left(q^{k}_\tau, q \right), \end{equation*} from which~\eqref{eq:stab} follows.
The next step is to verify energy inequality~\eqref{eq:energy}, following the thoughts of \cite{AMFTVL}. Testing the stability of $q^{k-1}_\tau$ with $q := q^{k}_\tau$, we get \begin{align}\label{inequ}
\mathcal{I} \left((k-1) \tau, q^{k-1}_\tau \right) & \le
\mathcal{I} \left((k-1) \tau, q^{k}_\tau \right) + \mathcal{D} \left( q^{k-1}_\tau, q^{k}_\tau
\right) \\
&= \mathcal{I} \left(k \tau, q^{k}_\tau \right) + L \left(k \tau, q^{k}_\tau
\right) - L \left((k-1) \tau, q^{k}_\tau \right)+ \mathcal{D}
\left( q^{k-1}_\tau, q^{k}_\tau \right).\nonumber \end{align} For nonnegative integers $k_1$, $k_2$ with $k_1 \le k_2 \le N$, let $t_I = k_1 \tau$ and $t_{I\!I} = k_2 \tau$. Summing ~\eqref{inequ} over $k = k_1 + 1,\dots, k_2$ gives \begin{eqnarray}
\label{eq:energy0}
\sum_{k = k_1 + 1}^{k_2}
\left[ L \left((k-1) \tau, q^{k}_\tau \right)
- L \left(k \tau, q^{k}_\tau \right) \right]& \le&
\mathcal{I} \left(k_2 \tau, q^{k_2}_\tau \right)
- \mathcal{I} \left(k_1 \tau, q^{k_1}_\tau \right)\\ & +&
\sum_{k = k_1 + 1}^{k_2} \mathcal{D} \left(q^{k-1}_\tau, q^{k}_\tau \right).\nonumber \end{eqnarray} Replacing $q_\tau^{k_1}$, $q_\tau^{k_2}$ with $q_\tau$ evaluated at suitable time points, we discover the first inequality in~\eqref{eq:energy}, \begin{align*}
- \int_{t_I}^{t_{I\!I}} \dot L \left(t, q_\tau(t-\tau) \right) \, \mathrm{d} t
& \le
\mathcal{I} \left(k_2 \tau, q^{k_2}_\tau \right)
- \mathcal{I} \left(k_1 \tau, q^{k_1}_\tau \right)
+ \sum_{k = k_1 + 1}^{k_2} \mathcal{D} \left(q^{k-1}_\tau, q^{k}_\tau \right) \\
& = \mathcal{I} \left(t_{I\!I}, q^{k_2}_\tau \right)
- \mathcal{I} \left(t_I, q^{k_1}_\tau \right)
+ {\rm Var} \left(\mathcal{D},q_\tau; \left[t_I, t_{I\!I} \right] \right) \end{align*} (for a step function, calculating ${\rm Var} \left(\mathcal{D},q_\tau; \left[t_I, t_{I\!I} \right]\right)$ is easy). The proof of the second inequality in~\eqref{eq:energy} is similar. Starting from the minimality of $q^{k}_\tau$, when compared with $q^{k-1}_\tau$ in~\eqref{eq:17}, yields \begin{multline*}
\mathcal{I} \left(k \tau, q^{k}_\tau \right)
+ \mathcal{D} \left(q^{k-1}_\tau, q^{k}_\tau \right) \le
\mathcal{I} \left(k \tau, q^{k-1}_\tau \right) \\
= \mathcal{I} \left((k-1) \tau, q^{k-1}_\tau \right)
+ L \left((k-1) \tau, q^{k-1}_\tau \right)
- L \left(k \tau, q^{k-1}_\tau \right). \end{multline*} We sum again over $k = k_1 + 1,\dots, k_2$ to obtain \begin{eqnarray*}
\mathcal{I} \left(k_2 \tau, q^{k_2}_\tau \right)
& -& \mathcal{I} \left(k_1 \tau, q^{k_1}_\tau \right)
+ \sum_{k = k_1 + 1}^{k_2} \mathcal{D} \left(q^{k-1}_\tau, q^{k}_\tau \right)\\
& \le& \sum_{k = k_1 + 1}^{k_2} \left[L \left((k-1) \tau, q^{k - 1}_\tau
\right)
- L \left(k\tau, q^{k - 1}_\tau \right) \right], \end{eqnarray*} so we find \begin{equation*}
\mathcal{I} \left(k_2 \tau, q^{k_2}_\tau \right)
- \mathcal{I} \left(k_1 \tau, q^{k_1}_\tau \right)
+ {\rm Var} \left(\mathcal{D},q_\tau; \left[t_I, t_{I\!I} \right] \right)
\le - \int_{t_I}^{t_{I\!I}} \dot L \left(t, q_\tau(t) \right) \, \mathrm{d} t \end{equation*} and the second inequality in~\eqref{eq:energy} is shown.
$\Box$
We would like to pass with the step size $\tau$ to zero and for this we need certain \emph{a priori} bounds. \begin{proposition}
\label{prop:4}
Let (\ref{ass-f}) and (\ref{ass-g}) be satisfied. Then there is $\kappa \in \mathbb{R}$ such that for any $\tau>0$: \begin{equation}
\label{first}
\norm{y_\tau}_{L^\infty \left(0,T; W^{1,d}(\Omega;\mathbb{R}^n)
\right)} < \kappa , \end{equation} \begin{equation}
\label{second}
{\rm Var}(\mathcal{D},q_\tau;[0,T]) < \kappa, \end{equation}
\begin{equation} \label{fourth} \norm{z_\tau}_{L^\infty \left(0,T; W^{1,\beta}(\Omega;\mathbb{R}^{n\times n})\times W^{1,\omega}(\Omega;\mathbb{R}^{m})
\right)} < \kappa. \end{equation} \end{proposition}
{\it Proof.} For $q=(y, F_{\rm p},p)\in \mathbb{Q}$, set $$ V(q)= \int_\Omega \color{black}\mathcal{W}(\nabla y(x)F_{\rm p}^{-1},\nabla[({\rm cof}\nabla y(x))F_{\rm p}^\top(x)],F_{\rm p}(x),\nabla F_{\rm p},p(x),\nabla p(x))\,\mathrm{d} x\EEE. $$ Like in the proof of Proposition \ref{timestep-ex}, we can show by the growth conditions on $\mathcal{W}$ that for any $q$ with $V(q)<+\infty$ and some constants $C_0$, $C>0$,
\begin{eqnarray}\label{lowerbound} -C_0+C\left(\|y\|^d_{W^{1,d}(\Omega;\mathbb{R}^n)}+ \|F_{\rm p}\|^\beta_{W^{1,\beta}(\Omega;\mathbb{R}^{n\times n})}+\|p\|^\omega_{W^{1,\omega}(\Omega;\mathbb{R}^m)}\right)\le V(q).\end{eqnarray} Using a lower energy estimate from the proof of (\ref{eq:energy}) for $k_1=0$ we get \begin{equation*} V(q^{k_2}_\tau)-L(k_2\tau,q^{k_2}_\tau)-V(q^0_\tau)+L(0,q^{0}_\tau)\le \sum_{k=1}^{k_2} \left[L \left((k-1) \tau, q^{k - 1}_\tau
\right) -L \left(k\tau, q^{k - 1}_\tau \right) \right]. \end{equation*} Hence for a constant $C$ which only depends on the data and the initial condition, \begin{eqnarray}\label{basic} V(q^{k_2}_\tau)\le\sum_{k=1}^{k_2} \left[L \left((k-1) \tau, q^{k - 1}_\tau
\right) -L \left(k\tau, q^{k - 1}_\tau \right) \right] +L(k_2\tau,q^{k_2}_\tau)+C.
\end{eqnarray} So, denoting $Y_\tau^d:=\max_{1\le\ell\le T/\tau} \|y^{\ell}_\tau\|^d_{W^{1,d}(\Omega;\mathbb{R}^n)}$ and taking into account (\ref{lowerbound}) with $q:=q^{k_2}_\tau$ we have \begin{eqnarray} Y_\tau^d\le C\sum_{k=1}^{k_2} \left[L \left((k-1) \tau, q^{k - 1}_\tau
\right) -L \left(k\tau, q^{k - 1}_\tau \right) \right] +C\,L\left(k_2\tau, q^{k_2}_\tau\right)+C.\end{eqnarray} This gives the bound (\ref{first}), because $Y_\tau$ is raised to the power $d>1$ on the left hand side, whereas the right hand side is merely linear in $y_\tau^\ell$. We can proceed similarly for (\ref{fourth}). The upper bound on the total dissipation can be derived by algebraic manipulations as in \cite{fm}.
$\Box$
\subsection{Limit passage}
The following lemma is proved in \cite{mami1}.
\begin{lemma}\label{helly} Let $\mathcal{D}\colon\mathbb{Z}\times\mathbb{Z}\to[0,+\infty]$ \color{black} satisfy (\ref{assonD1}) and (\ref{assonD2})\EEE. Let $\mathcal{K}$ be a weakly sequentially compact subset of $\mathbb{Z}$. Then for every sequence $\{z_k\}_{k\in{\mathbb N}}$, $z_k\colon[0,T]\to \mathcal{K}$ for which $\sup_{k\in{\mathbb N}}\,{\rm Var}(\mathcal{D},z_k;[0,T])<C$, $C>0$, there exists a subsequence (not relabelled), a function $z\colon[0,T]\to\mathcal{K}$, and a function $\Delta\colon[0,T]\to [0,C]$ such that:\\ \noindent (i) ${\rm Var}(\mathcal{D},z_k;[0,t])\to \Delta(t)$ for all $t\in[0,T]$,\\ \noindent (ii) $z_k(t){\rightharpoonup} z(t)$ in $\mathbb{Z}$ for all $t\in[0,T]$, and\\ \noindent
(iii) ${\rm Var}(\mathcal{D},z;[t_0,t_1])\le\Delta(t_1)-\Delta(t_0)$ for all $0\le t_0<t_1\le T$. \end{lemma}
Let us denote $\mathbb{X}:= L^{\beta}(\Omega;\mathbb{R}^{n\times n})\times L^\omega(\Omega;\mathbb{R}^m)$. Finally, we prove the existence of an energetic solution.
\begin{theorem}\label{existence} Let $\alpha^{-1}+\beta^{-1}\le d^{-1}<\color{black} (n-1)^{-1}\EEE$, $\color{black} d>\frac{\beta(n-1)}{\beta-1}\EEE$. Let $q^0\in\mathbb{Q}$ be a stable initial condition. Let the assumptions on $\mathcal{W}$, $\mathcal{D}$, $f$ and $g$ from Section~\ref{assump} hold. Let further (\ref{nodi1}) or (3.A), (3.B), (3.C) hold. Then there is a process $q\colon[0,T]\to \mathbb{Q}$ with $q(t)=(y(t),z(t))$ such that $q$ is an energetic solution according to Definition~\ref{en-so}. The following convergence statements also hold:\\
\noindent
(i) for a $t$-dependent (not relabelled) subsequence w-$\lim_{\tau\to 0}y_{\tau}(t)=y(t)$ in $W^{1,d}(\Omega;\mathbb{R}^n)$ for all $t\in[0,T]$, \\
\noindent (ii) for a (not relabelled) subsequence $\lim_{\tau\to 0} z_{\tau}(t)=z(t)$ in $\mathbb{X}$ for all $t\in[0,T]$, \\ \noindent
(iii) for a (not relabelled) subsequence
$\lim_{\tau\to 0}{\mathcal{I}}_{\tau}(t,q_{\tau}(t))={\mathcal{I}}(t,q(t))$ for all $t\in[0,T]$, and\\ \noindent (iv) for a (not relabelled) subsequence $\lim_{\tau\to 0} {\rm Var}(\mathcal{D},q_\tau;[0,t])= {\rm Var}(\mathcal{D},q;[0,t])$ for all $t\in[0,T]$.
\end{theorem}
\proof We have adapted the proof \color{black} from \cite{fm, mielke2} \EEE and divided it into \color{black} three \EEE steps.
\noindent {\it Step 1}: \color{black} Assertion (i) follows from the \textit{a priori} estimate in Proposition~\ref{prop:4} and (ii) results from Lemma~\ref{helly}. Hence \EEE we have a limit $q(t)=(y(t),z(t))$. We easily get that $q(t)\in\mathbb{Q}$ for all $t\in[0,T]$.
Put $S(t,\tau):=\max_{k\in{\mathbb N}\cup\{0\}}\{k\tau;\ k\tau\le t\}$. Then $\lim_{\tau\to 0}S(t,\tau)=t$ and by definition $q_\tau(t):=q_\tau(S(t,\tau))\in\mathcal{S}(S(t,\tau))$. As we have seen, $\mathcal{D}$ can be assumed such that (\ref{limsup}) holds. Hence by Proposition~\ref{limstab}, the limit is stable, i.e. $q(t)\in \mathcal{S}(t)$ (thanks to our \textit{a priori} estimates, we can pass to a gpc-convergent subsequence to get lower semicontinuity). \\
\noindent {\it Step 2}: \color{black} Notice that $\theta_{\tau}(t):= \frac{\partial L}{\partial t}(t,q_{\tau}(t))$ is bounded in $L^\infty(0,T)$ by (\ref{ass-f}), (\ref{ass-g}) and (\ref{first}), so we can extract a weak*-convergent subsequence, which we do not relabel, with the limit $\theta$. Following \cite{mielke2}, we also set $\theta_{\rm i}(t):=\liminf_{\tau\to 0}\theta_{\tau}(t)$; by Fatou's lemma $\theta_{\rm i}\le\theta$ a.e. in $(0,T)$. \EEE
Our interpolation satisfies $q_\tau(t)=q_\tau(k\tau)$ for $0\le t-k\tau< \tau$. Successively using (\ref{eq:energy}), (\ref{ass-f}), (\ref{ass-g}) and (\ref{first}), we get for some $C$, $C_1>0$ \begin{eqnarray*} {\mathcal{I}}(t,q_\tau(t))+{\rm Var}(\mathcal{D},q_\tau;[0,t])&\le& {\mathcal{I}}(k\tau,q_\tau(k\tau))+{\rm Var}(\mathcal{D}, q_\tau;[0,k\tau]) +C\tau\\ &\le& {\mathcal{I}}(0,q_\tau(0))-\int_0^{k\tau}\dot L(s,q_\tau(s))\,\mathrm{d} s +C\tau \\ &\le& {\mathcal{I}}(0,q_\tau(0))-\int_0^t \dot L(s,q_\tau(s))\,\mathrm{d} s+C_1\tau. \end{eqnarray*}
On account of Lemma~\ref{helly} (i) and Proposition \ref{limstab} we find that for $\tau\to 0$ \begin{eqnarray*} {\mathcal{I}}(t,q(t))+ \Delta(t) &\le& {\mathcal{I}}(0,q(0))-\int_0^t \theta(s)\,\mathrm{d} s.\end{eqnarray*}
Since $\Delta(t)\ge{\rm Var}(\mathcal{D},q;[0,t])$ and $\int_0^t\theta(s)\,\mathrm{d} s\ge\int_0^t\theta_{\rm i}(s)\,\mathrm{d} s$, for a.a. $t\in[0,T]$ we get \begin{eqnarray*}{\mathcal{I}}(t,q(t)) +{\rm Var}(\mathcal{D},q;[0,t]) &\le& {\mathcal{I}}(0,q(0))-\int_0^t \theta_{\rm i}(s)\,\mathrm{d} s.\end{eqnarray*}
The convergence $\theta_\tau(s)=\dot L(s,q_\tau(s))\to\dot L(s,q(s))$ and extracting an $s$-dependent subsequence of $\{\theta_\tau(s)\}$ converging to $\theta_{\rm i}(s)$ show that $\theta_{\rm i}(s)=\dot L(s,q(s))$. We finally get the upper energy estimate \begin{eqnarray} \label{upperest} {\mathcal{I}}(t,q(t)) +{\rm Var}(\mathcal{D},q;[0,t]) &\le& {\mathcal{I}}(0,q(0))-\int_0^t \dot L(s,q(s))\,\mathrm{d} s. \end{eqnarray}
\color{black}\textit{Step 3:} \EEE The fact that $q(t)$ is stable for all $t\in[0,T]$ will be useful in proving the lower estimate. Consider a partition of a time interval $[t_I,t_{I\!I}]\subset [0,T]$ such that $t_I=\vartheta_0^M<\vartheta_1^M<\vartheta_2^M<\cdots<\vartheta_{K_M}^M=t_{I\!I}$ and $\max_{i} (\vartheta_i^M-\vartheta_{i-1}^M)=:\vartheta^M\to 0$ as $M\to\infty$. Let us test the stability of $q(\vartheta_{k-1}^M)$ with $q(\vartheta_k^M)$, $k=1,\ldots, K_M$. Similarly to (\ref{eq:energy0}) we get \begin{eqnarray}
\label{eq:energy00}
\sum_{k = 1}^{K_M}
\left[L\(\vartheta_{k-1}^M, q(\vartheta_k^M)\)-L\(\vartheta_k^M,q(\vartheta_k^M)\)\right]&\le&
\mathcal{I} \(t_{I\!I}, q(t_{I\!I})\) - \mathcal{I} \(t_I, q(t_I)\)\\ & +& \sum_{k = 1}^{K_M} \mathcal{D} \(q(\vartheta_{k-1}^M), q(\vartheta_k^M)\).\nonumber \end{eqnarray}
Thus, \begin{eqnarray}
\label{eq:energy000}
\sum_{k = 1}^{K_M}
-\int_{\vartheta_{k-1}^M}^{\vartheta_k^M}\dot L(s, q(\vartheta_k^M))\,\mathrm{d} s&\le&
\mathcal{I} \(t_{I\!I}, q(t_{I\!I})\) - \mathcal{I} \(t_I, q(t_I)\)\nonumber\\ & +& {\rm Var} (\mathcal{D},q;[t_I,t_{I\!I}]). \end{eqnarray}
We finally \color{black} rearrange the terms as \EEE \begin{eqnarray}\label{eq:forces}\sum_{k =1}^{K_M} \int_{\vartheta_{k-1}^M}^{\vartheta_k^M}\dot L(s, q(\vartheta_k^M))\,\mathrm{d} s&=& \sum_{k = 1}^{K_M} \dot L(\vartheta_k^M,q(\vartheta_k^M))(\vartheta_k^M-\vartheta_{k-1}^M)\nonumber\\ & +& \sum_{k = 1}^{K_M}\int_{\vartheta_{k-1}^M}^{\vartheta_k^M}(\dot L(s, q(\vartheta_k^M))-\dot L(\vartheta_k^M,q(\vartheta_k^M)))\,\mathrm{d} s. \end{eqnarray}
The last sum on the right-hand side of (\ref{eq:forces}) goes to zero with $\vartheta^M\to 0$ because the time derivative of the external loading is uniformly continuous in time by (\ref{ass-f}) and (\ref{ass-g}). The first sum on the right-hand side converges to $\int_{t_I}^{t_{I\!I}}\dot L(s,q(s))\,\mathrm{d} s$ by \cite[Lemma~4.12]{dmgfrt}.
Altogether, the upper and lower estimates give us the energy balance
\begin{eqnarray} \label{e-balance} {\mathcal{I}}(t,q(t)) +{\rm Var}(\mathcal{D},q;[0,t]) &=& {\mathcal{I}}(0,q(0))-\int_0^t \dot L(s,q(s))\,\mathrm{d} s. \end{eqnarray}
Moreover, \begin{eqnarray}\label{improvedconv} & &{\mathcal{I}}(0,q(0)) -\int_0^t\theta_{\rm i}(s)\,\mathrm{d} s\le {\mathcal{I}}(t,q(t)) +{\rm Var}(\mathcal{D},q;[0,t])\le {\mathcal{I}}(t,q(t)) +\Delta(t)\nonumber\\ &\le& {\mathcal{I}}(0,q(0)) -\int_0^t\theta(s)\,\mathrm{d} s\le {\mathcal{I}}(0,q(0)) -\int_0^t\theta_{\rm i}(s)\,\mathrm{d} s. \end{eqnarray} Thus in fact, all inequalities in (\ref{improvedconv}) are equalities and we get (iv). \color{black} Proposition~\ref{limstab} also implies (iii). \EEE
\qed
Finally, we include a simple example from single-crystal plasticity, covered by our approach, in the spirit of \cite{gurtin} and \cite{amLieGroups}.
\begin{example}[single slip without hardening] Consider two orthonormal vectors $a$, $b\in\mathbb{R}^3$, describing the motion of edge dislocations in such a way that $a$ is the glide direction and $b$ is the slip-plane normal. Further we focus on a particular case of the so-called {\it separable} material where
$$\mathcal{W}(F_{\rm e}, H, z)=\mathcal{W}_1(F_{\rm e})+c|H|^2+c{\rm det}(F_{\rm e})^{-s}+\varepsilon|F_{\rm p}|^6+\varepsilon|\nabla F_{\rm p}|^6,\;c,s,\varepsilon>0,$$
with $F_{\rm p}(x,t)=\mathbb{I}+\gamma(x,t)a\otimes b$ where $\gamma$ is the plastic slip and we can choose $\mathcal{W}_1$ to be e.g. the stored energy density of the Saint Venant–Kirchhoff material, i.e. $\mathcal{W}_1(F_{\rm e})=\frac{1}{8}(F_{\rm e}^\top F_{\rm e}-\mathbb{I}):\mathbb{C}:(F_{\rm e}^\top F_{\rm e}-\mathbb{I})$ so that $\mathcal{W}_1(F_{\rm e})\ge c(|F_{\rm e}|^4-1)$. The vectors $a$, $b$ are not fixed in the reference configuration in general (more precisely, they lie in an intermediate lattice space, \color{black} see \EEE \cite{gurtinBook}). The slip-plane normal $\tilde b$ in the reference configuration is given by $\tilde b= (F_{\rm p})^\top b$. However, in the special case of a single slip we obtain $\tilde b=b$ and the slip-plane normal remains unchanged during plastic transformations.
As $F_{\rm p}$ is completely described by $\gamma$, we identify $z:=\gamma$ and use the dissipation potential $$
\delta(\dot\gamma)=\kappa|\dot\gamma|, $$ where $\kappa>0$ represents the resistance to the slip.
This corresponds to the dissipation distance $$
\mathcal{D}(\gamma_1,\gamma_2)=\int_\Omega \kappa|\gamma_1(x)-\gamma_2(x)|\mathrm{d} x. $$ An extension to multi-slip, described by several glide directions $\{a_i\}$ and slip-plane normals $\{b_i\}$, $1\le i\le N$, would be feasible, cf. \cite{amLieGroups}. \end{example} As a last remark, note that our stored energy $\mathcal{W}$ could also depend on \color{black} $F_{\rm e}^{-1}$, \EEE cf. \cite{bbmkas}.
{\bf Acknowledgements.} \color{black} This work was supported by the GA\v{C}R project 18-03834S (MK \& JZ) and by the DFG Priority Programme (SPP) 2256 (JZ).
\end{document} | arXiv |
Homework Set 7
For several of these problems you'll need to look up some standard moments of inertia.
Problem 11.11
Calculate the total moment of inertia before and after. Part A relies on conservation of angular momentum. In Parts B and C you find out whether rotational kinetic energy is conserved. If it isn't, where does the work come from?
The model used for this problem is slightly different from the one we used in class: here, when the arms are at the side of the person, we treat them as vertical rods on the side of the cylinder, whereas in class we just added them to the mass of the cylinder. Other than that (and the lack of dumbbells) the problem is very similar.
Use the cross-product $\vec{r}\times\vec{F}$ to find the torque.
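A quick numerical sketch of the cross product (the vectors below are made-up example values, not the ones from the problem):

  import numpy as np
  r = np.array([0.5, 0.0, 0.0])     # lever arm from the axis to the point where the force acts, in m
  F = np.array([0.0, 10.0, 0.0])    # applied force, in N
  tau = np.cross(r, F)              # torque vector; here (0, 0, 5) N*m, pointing along z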
You can use conservation of angular momentum to solve this problem. Just before the collision the angular momentum is all in the bullet; just afterward some of it is transferred to the rod. You can treat the bullet as if it were moving in a circle around the axis at the moment of the collision, since it is moving tangentially to the direction of rotation. This is only true at this instant; at any significantly earlier or later time it would be inaccurate. This is the same idea we used to justify neglecting the force due to gravity when we look at collisions of freely moving objects.
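A minimal numerical sketch of the bookkeeping (this assumes the bullet embeds in a uniform rod pivoted at one end, and all numbers are made up — adapt to the actual setup):

  m, v, r = 0.01, 300.0, 0.8          # bullet mass (kg), speed (m/s), distance from the pivot (m)
  M, L_rod = 2.0, 1.0                 # rod mass (kg) and length (m)
  L_before = m * v * r                # angular momentum of the bullet about the pivot
  I_after = (1.0/3.0) * M * L_rod**2 + m * r**2   # rod about its end plus the embedded bullet
  omega = L_before / I_after          # angular speed just after the collision
  print(omega)                        # roughly 3.6 rad/s with these numbers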
This problem is essentially similar to that of the precession of a gyroscope. Here the force is not gravity but a force given to you in the question. You will also need to calculate the angular momentum of the asteroid by first finding its moment of inertia and angular velocity. Once you have the velocity of precession you can find the time taken for the rotation axis to move by 15$^{o}$.
This gyroscope precesses under its own weight. Find the torque due to that weight and the angular momentum of the gyroscope; then you can find the precession velocity, and from this the period of precession.
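A schematic calculation of the precession rate (this assumes a fast spin and a horizontal spin axis; the numbers are made-up placeholders):

  import math
  M, g, r = 0.5, 9.8, 0.05          # gyroscope mass (kg), g (m/s^2), pivot-to-center-of-mass distance (m)
  I, omega_spin = 2e-4, 200.0       # spin moment of inertia (kg m^2) and spin angular velocity (rad/s)
  torque = M * g * r                # torque of the weight about the pivot
  Omega_p = torque / (I * omega_spin)   # precession angular velocity (rad/s)
  T_p = 2.0 * math.pi / Omega_p         # period of precession, about 1 s with these numbers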
This problem is similar to the crane boom example we did in class. Here, however, we do include the weight of the arm as a force acting at its center of mass.
You need to use geometry to find the force the two wires exert in the horizontal direction. Then use the balance of the torques around the base of the net to find a simple relation between the tension in the net and this force.
Redraw the problem with the weight of the door expressed as a force acting through its center of gravity. A smart choice of axis to calculate the torques around can make this easier (try to choose an axis where all the points at which forces are exerted lie on a coordinate axis).
In a ladder problem like this one the wall exerts a normal force, which is horizontal, and the weight of the ladder, which acts through its center of mass, points vertically down. The force at the base has a normal component, which balances the weight, and a frictional component, which balances the normal force from the wall. As the frictional force cannot exceed $\mu N$, use this as the condition for when the ladder begins to slip.
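For the standard textbook setup (a uniform ladder against a frictionless wall with nobody standing on it — assumptions that may differ from the actual problem), the torque balance about the base gives a quick check of the slipping condition:

  import math
  theta = math.radians(60.0)        # angle between the ladder and the ground (made-up value)
  # torques about the base: N_wall * L*sin(theta) = W * (L/2)*cos(theta), so N_wall = W/(2*tan(theta));
  # slipping starts when the required friction N_wall exceeds mu*N_floor = mu*W
  mu_min = 1.0 / (2.0 * math.tan(theta))
  print(mu_min)                     # about 0.29 for 60 degrees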
| CommonCrawl |
\begin{definition}[Definition:Euclid's Definitions - Book X (III)/1 - First Apotome]
{{EuclidSaid}}
:''Given a rational straight line and an apotome, if the square on the whole be greater than the square on the annex by the square on a straight line commensurable in length with the whole, and the whole be commensurable in length with the rational straight line set out, let the apotome be called a '''first apotome'''.''
{{EuclidDefRef|X (III)|1|First Apotome}}
\end{definition} | ProofWiki |
What is $3254_6$ when expressed in base 10?
$3254_6=3\cdot6^3+2\cdot6^2+5\cdot6^1+4\cdot6^0=648+72+30+4=\boxed{754}$. | Math Dataset |
\begin{document}
\begin{center} {\LARGE\bf Just another solution to the Basel Problem\\ \vskip 1cm}
\large Alois Schiessl\\
\vskip 0.25cm \tt{[email protected]}
\end{center}
\begin{abstract} The Basel problem consists in finding the sum of the reciprocals of the squares of the positive integers. It was finally solved in 1735 by Leonhard Euler. He showed that \[ \sum_{k=1}^\infty \frac{1}{k^2}=\frac{\pi^2}{6}. \] In this paper, we propose a simple proof based on the Weierstrass Sine product formula and L'Hôpital's rule. \end{abstract} \centerline{\it In celebration of Pi Day 2023} $ \\ $ $ \\ $
The Basel Problem was a very famous problem in the middle of the seventeenth century. It was first posed by Pietro Mengoli in 1650 and many prominent mathematicians of the time tried to solve it without success. It took almost a hundred years before Euler \cite{E1}, \cite{E2} succeeded in 1734 in proving the above closed-form solution. $ \\ $ In this paper, we aim to give another proof of the Basel problem. The proof is short, simple and uses only classical analysis. $ \\ \\ $ We recall the Weierstrass factorisation theorem \cite{RC} for $\sin \left(\pi\,x\right)$ : \begin{align} \sin \left( {\pi x} \right) = \pi x \cdot \prod\limits_{k = 1}^\infty {\left( {1 - \frac{{{x^2}}}{{{k^2}}}} \right)} \end{align} \\ The infinite product is analytic, so we can take the natural logarithm on both sides: \begin{align} \ln \left( {\sin \left( {\pi x} \right)} \right) = \ln \left( {\pi x} \right) + \sum\limits_{k = 1}^\infty {\ln \left( {1 - \frac{{{x^2}}}{{{k^2}}}} \right)} \end{align} Next differentiating with respect to $x$ and assuming $k^2-x^2\neq 0$ gives us \begin{align} \frac{{\pi\cdot \cos \left( {\pi x} \right)}}{{\sin \left( {\pi x} \right)}} = \frac{1}{x} + \sum\limits_{k = 1}^\infty {\left( {\frac{1}{{1 - \frac{{{x^2}}}{{{k^2}}}}} \cdot \frac{{ - 2 x}}{{{k^2}}}} \right)} = \frac{1}{x} - 2 x\sum\limits_{k = 1}^\infty {\left( {\frac{1}{{{k^2} - {x^2}}}} \right)} \end{align} Uniform convergence allows the interchange of the derivative and the infinite series.
$ \\ $ Using elementary algebra, equation (3) can be rearranged into the equivalent form \begin{align} &\frac{{\pi\cdot \cos \left( {\pi x} \right)}}{{\sin \left( {\pi x} \right)}}-\frac{1}{x}=- 2 x\sum\limits_{k = 1}^\infty {\left( {\frac{1}{{{k^2} - {x^2}}}} \right)}\\ &\left(\frac{{\pi\cdot \cos \left( {\pi x} \right)}}{{\sin \left( {\pi x} \right)}} -\frac{1}{x}\right)\cdot\frac{-1}{2\,x} =\sum\limits_{k = 1}^\infty {\left( {\frac{1}{{{k^2} - {x^2}}}} \right)}\\ &{\frac {\sin \left( \pi \,x \right) -\pi \,x\cos \left( \pi \,x
\right) }{2\,{x}^{2}\sin \left( \pi \,x \right) }} =\sum _{k=1}^{\infty }\frac {1}{ \left( k^2-x^2 \right)} \,;\;\;k^2-x^2\neq 0 \end{align} The study of equation (6) is central in the paper. $ \\ $ $ \\ $ Note that if we take $x=0$, the right-hand side becomes the sum \begin{equation} \sum _{k=1}^ {\infty }\frac {1}{ k^2}=1+\frac {1}{2^2}+\frac {1}{3^2}+\frac {1}{4^2}+\ldots \end{equation} It is easy to verify that the series is convergent (Integral test for convergence). $ \\ $ $ \\ $ We know that we must obtain something nice on the left-hand side because the right-hand side is nice. Plugging in $x=0$, the left-hand side unfortunately leads to an indeterminate form \begin{align} \lim_{x \to 0} \;\;{\frac {\sin \left( \pi \,x \right) -\pi \,x\cos \left( \pi \,x
\right) }{2\,{x}^{2}\sin \left( \pi \,x \right) }} \rightarrow \frac{0}{0} \end{align} \\From the convergence of the right hand side, we conclude that \begin{align} \lim_{x \to 0} \;\;{\frac {\sin \left( \pi \,x \right) -\pi \,x\cos \left( \pi \,x
\right) }{2\,{x}^{2}\sin \left( \pi \,x \right) }} \end{align} exists. We use L'Hôpital's rule to evaluate the limit. Applying L'Hôpital's rule once still gives an indeterminate form. In this case, the limit can be evaluated by applying the rule twice more. $ \\ $ $ \\ $ $1.$ Application of L'Hôpital's rule: \begin{align} &\lim_{x \to 0} \;\;{\frac {\sin \left( \pi \,x \right) -\pi \,x\cos \left( \pi \,x
\right) }{2\,{x}^{2}\sin \left( \pi \,x \right) }}\\ =&\lim_{x \to 0} \;{\frac {{\pi }^{2} x\sin \left( \pi \,x \right)}{4\,x\sin \left( \pi \,x \right) +2\,\pi \,{x}^{2}\cos \left( \pi \,x
\right)}}\rightarrow\;\frac{0}{0} \end{align} $2.$ Application of L'Hôpital's rule: \begin{align} &\lim_{x \to 0} \;\;{\frac {\sin \left( \pi \,x \right) -\pi \,x\cos \left( \pi \,x
\right) }{2\,{x}^{2}\sin \left( \pi \,x \right) }}\\ =&\lim_{x \to 0} \frac {{\pi }^{3}\cos \left( \pi \,x \right) x+{\pi }^{2}\sin \left( \pi \,x
\right)}{4\,\sin \left( \pi \,x \right) +8\,\pi \,\cos \left( \pi \,x \right) x -2\,{\pi }^{2}{x}^{2}\sin \left( \pi \,x \right)}\rightarrow\;\frac{0}{0} \end{align} $3.$ Application of L'Hôpital's rule: \begin{align} &\lim_{x \to 0} \;\;{\frac {\sin \left( \pi \,x \right) -\pi \,x\cos \left( \pi \,x
\right) }{2\,{x}^{2}\sin \left( \pi \,x \right) }}\\ =\;&\lim_{x \to 0} \;{\frac {2\,{\pi }^{3}\cos \left( \pi \,x \right)-{\pi }^{4}\sin \left( \pi \,x \right) x }{12\,\pi \,\cos \left( \pi \,x \right) -12\,{\pi }^{2}\sin \left( \pi \,x \right) x-2\,{\pi }^{3}{x}^{2}\cos \left( \pi \,x \right) }}\\ =\;&\frac {2\,{\pi }^{3}\cos \left( \pi \,\cdot 0\right)-{\pi }^{4}\sin \left( \pi \,\cdot 0 \right) \cdot 0 }{12\,\pi \,\cos \left( \pi \,\cdot 0 \right) -12\,{\pi }^{2}\sin \left( \pi \,\cdot 0 \right) \cdot 0-2\,{\pi }^{3}\cdot {0}^{2}\cdot\cos \left( \pi \,\cdot 0 \right) }\\ =\;&\frac {2\,\pi^3}{12\,\pi}=\frac {\pi^2}{6} \end{align} \\Putting all the pieces together, we immediately obtain the famous Euler identity \[ \sum _{k=1}^ {\infty }\frac { 1}{ k^2}\,=\,\frac {\pi^2}{6}. \]
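As a purely numerical illustration (not part of the proof; the cut-off $10^5$ and the sample point $x=10^{-3}$ are arbitrary choices), one can check both the partial sums of the series and the left-hand side of (6) for small $x$:
\begin{verbatim}
import math

partial = sum(1.0 / k**2 for k in range(1, 100001))
print(partial, math.pi**2 / 6)      # both print as 1.6449...

def lhs(x):
    s = math.sin(math.pi * x)
    return (s - math.pi * x * math.cos(math.pi * x)) / (2.0 * x * x * s)

print(lhs(1e-3))                    # also close to pi^2/6
\end{verbatim}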
\end{document} | arXiv |
QuarkNet Workshop for High School Teachers
9 am, Building 438
Hosted by: Ketevi Assamagan
3 pm, Small Seminar Room, Bldg. 510
Hosted by: Alessandro Tricoli
In this seminar, I will present an overview of up-to-date results on searches for Dark Matter signals with the ATLAS detector at the LHC using Run 2 data. Comparisons with non-accelerator DM results, as well as interpretations within some theoretical models, will be discussed. In addition, expectations from the high-luminosity LHC (HL-LHC) DM search program will be briefly presented. Finally, I will talk about the new ATLAS High Granularity Timing Detector (HGTD), planned for the Phase 2 upgrade program for the HL-LHC run, and its expected performance for physics studies.
10 am, Small Seminar Room, Bldg. 510
Hosted by: Lijuan Ruan
Heavy quark transport offers unique insight into the microscopic picture of the sQGP created in heavy-ion collisions. One central focus of the heavy quark program is to determine the heavy quark spatial diffusion coefficient and its momentum and temperature dependence. This requires precise measurements of heavy flavor hadron production and their collective flow over a broad momentum region. At the same time, heavy quark hadrochemistry, the abundance of various heavy flavor hadrons, provides special sensitivity to QCD hadronization and also plays an important role in the interpretation of heavy flavor hadron data in order to constrain the heavy quark spatial diffusion coefficient of the sQGP. In this seminar, I will focus on the recent STAR results on charm hadron (D0, D+/-, D*, Ds, Lambda_c) production and on D0 radial and elliptic flow in heavy-ion collisions, utilizing the state-of-the-art silicon pixel detector, the Heavy Flavor Tracker. These data will be compared to measurements from other experiments at RHIC and the LHC as well as to various model calculations. I will then discuss how these data will help us better understand the sQGP properties and its hadronization. Finally, I will present a personal view of future heavy quark measurements at RHIC.
Hosted by: Jin Huang
High-pt theory and data are traditionally used to explore high-pt parton interactions with the QGP, while QGP bulk properties are explored through low-pt data and corresponding models. However, with a proper description of high-pt medium interactions, high-pt probes also become a powerful tool for inferring bulk QGP properties, as they are sensitive to global QGP parameters. With the goal of developing a multipurpose QGP tomography tool, over the past several years we developed the dynamical energy loss formalism and the corresponding fully optimized DREENA numerical framework. As first steps towards QGP tomography, we will use the DREENA framework to address how we can, directly from experimental data, i) differentiate between different energy loss mechanisms, and ii) infer the shape of the QGP droplet. The research presented in this talk will therefore demonstrate how high-pt theory and data can be used to infer both the nature of high-pt parton-medium interactions and important bulk QGP medium properties.
Physics Department Summer Lecture Series
12:30 pm, Small Seminar Room, Bldg. 510
I will review how the state-of-the-art sensors developed for astronomical applications can precisely measure the positions and shapes of billions of galaxies. The talk will focus on the camera and sensors for the Large Synoptic Survey Telescope (LSST) and will discuss limitations on the achievable precision coming from the instrumentation. I will also discuss light sensitive sensors which can be used for fast imaging of single photons in QIS and other applications.
Gertrude S. Goldhaber Award Ceremony
12 pm, Large Seminar Room, Bldg. 510
Brookhaven Women in Science is pleased to present this year's Gertrude Scharff Goldhaber Award to Brooke Russell. The Gertrude S. Goldhaber Prize was established to honor Gertrude Scharff-Goldhaber for her outstanding contributions in the field of nuclear physics and her support of women in science, and to recognize substantial promise and accomplishment by a woman graduate student in physics. This event is open to the public. Refreshments will be provided.
Lab Holiday: Independence Day
EBNN Directorate Visitor Seminar
3 pm, Large Conference Room, Bldg. 535
Hosted by: Martin Schoonen
Hosted by: Rongrong Ma
The in-medium color potential is a fundamental quantity for understanding the properties of the strongly coupled quark-gluon plasma (sQGP). Open and hidden heavy-flavor (HF) production in ultrarelativistic heavy-ion collisions (URHICs) has been found to be a sensitive probe of this potential. Here we utilize a previously developed quarkonium transport approach in combination with insights from open HF diffusion to extract the color-singlet potential from experimental results on Υ production in URHICs. Starting from a parameterized trial potential, we evaluate the Υ transport parameters and conduct systematic fits to available data for the centrality dependence of ground and excited states at RHIC and the LHC. The best fits and their statistical significance are converted into a temperature dependent potential. Including nonperturbative effects in the dissociation rate guided from open HF phenomenology, we extract a rather strongly coupled potential with substantial remnants of the long-range confining force in the QGP.
Silicon technology is approximately 70 years old, but thousands of person-years of R&D by a multitude of researchers have gone into it; the well-established microelectronics industry is based on it. Because silicon is sensitive to photons (from infrared to X-rays, passing through visible light and ultraviolet) and to charged particles, we can leverage microelectronic technology to make sensors out of silicon. Silicon sensors are used in a variety of applications including scientific experiments (High Energy Physics, Astrophysics, Photon Science, etc.) as well as industrial and commercial use (cameras, etc.). The basic structure is the p-n junction across which a voltage is applied. When an ionizing event occurs (a photon or a charged particle interacting with silicon), a short current pulse (~few ns) is generated and detected by the read-out electronics. There are many kinds of silicon sensors and each one must be tailored to the specific application. We'll give an overview of the state of silicon technology and its different applications.
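For a rough sense of the signal sizes involved, here is a back-of-the-envelope sketch in Python of the charge a minimum-ionizing particle leaves in a silicon sensor; the 300 um thickness and the textbook constants used (about 3.6 eV per electron-hole pair, roughly 390 eV/um mean energy loss) are illustrative assumptions, not numbers taken from the talk.

    # Rough estimate of the signal from a minimum-ionizing particle (MIP)
    # crossing a silicon sensor. Constants are typical textbook values and
    # the sensor thickness is an assumed, illustrative choice.
    E_PAIR_EV = 3.6          # mean energy to create one electron-hole pair in Si [eV]
    DEDX_EV_PER_UM = 390.0   # approximate mean energy loss of a MIP in Si [eV/um]
    THICKNESS_UM = 300.0     # assumed sensor thickness [um]

    deposited_ev = DEDX_EV_PER_UM * THICKNESS_UM   # total energy deposited
    n_pairs = deposited_ev / E_PAIR_EV             # electron-hole pairs created
    charge_fc = n_pairs * 1.602e-19 * 1e15         # collected charge in femtocoulombs

    print(f"~{n_pairs:.0f} e-h pairs, i.e. ~{charge_fc:.1f} fC per MIP")
    # ~32500 pairs (~5 fC): the read-out electronics must resolve pulses of this size.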
Hosted by: Chao Zhang
The next generation long-baseline neutrino oscillation experiments such as DUNE (Deep Underground Neutrino Experiment) aim to solve the remaining questions in neutrino oscillation physics, including neutrinos' mass ordering and CP violation. The near detector(s) will provide crucial constraints on the systematic uncertainties to the oscillation measurements. Flux uncertainty is one of the dominant contributions to the systematic uncertainties. In this talk I present a novel approach of precisely determining the neutrino flux in the near detector(s) of a long-baseline neutrino experiment such as DUNE, by using neutrino/antineutrino-hydrogen interactions with low visible hadronic energy (Low-nu). The application of this method in the proposed KLOE-STT detector is discussed, which could serve as part of the near detector complex of DUNE.
I will give a general introduction to the modern theory of "strong" interactions, which involve quarks and gluons. At about a trillion degrees, these form a Quark-Gluon Plasma, which we believe is created in the collisions of heavy ions at very high energies, such as at the Relativistic Heavy Ion Collider here at Brookhaven. I also make extensive comments about the sociology of the field, especially the phenomenon of "As everyone who is anyone knows..."
11 am, Room 300, 3rd Floor, Chemistry Building 555
Hosted by: Michael White
Size-selected nanoparticles (atomic clusters), deposited onto supports from the beam in the absence of solvents, represent a new class of model systems for catalysis research and possibly small-scale manufacturing of selective catalysts. To translate these novel and well-controlled systems into practical use, two major challenges have to be addressed. (1) Very rarely have the actual structures of clusters been obtained from direct experimental measurements, so the metrology of these new material systems has to improve. The availability of aberration-corrected HAADF STEM is transforming our approach to this structure challenge [1,2]. I will address the atomic structures of size-selected Au clusters, deposited onto standard carbon TEM supports from a mass-selected cluster beam source. Specific examples considered are the "magic number clusters" Au20, Au55, Au309, Au561, and Au923. The results expose, for example, the metastability of frequently observed structures, the nature of equilibrium amongst competing isomers, and the cluster surface and core melting points as a function of size. The cluster beam approach is applicable to more complex nanoparticles too, such as oxides and sulphides [3]. (2) A second major challenge is scale-up, needed to enable the beautiful physics and chemistry of clusters to be exploited in applications, notably catalysis [4]. Compared with the (powerful) colloidal route, the nanocluster beam approach [5] involves no solvents and no ligands, while particles can be size selected by a mass filter, and alloys with challenging combinations of metals can readily be produced. However, the cluster approach has been held back by extremely low rates of particle production, only 1 microgram per hour, sufficient for surface science studies but well below what is desirable even for research-level realistic reaction studies. In an effort to address this scale-up challenge, I will discuss the development of a new kind of nanop
QCD, our nearly perfect theory of the strong interaction, is also deeply profound because all phenomena are emergent features of the many-body dynamics of the quark and gluon fields and the vacuum of the theory. This talk on many-body QCD is organized as a play in four acts: i) Origins, mysteries, symmetries ii) The power and the glory of QCD iii) Surprises from boiling the QCD vacuum in heavy-ion collisions: a) why the world's hottest fluid, albeit also being its most viscous, flows with almost no resistance b) a possible unexpected universality between the hottest and coldest fluids on earth c) What magnetar strength magnetic fields created in heavy-ion collisions may reveal about the topology of the QCD vacuum iv) Looking ahead to the Electron-Ion Collider: what the ultimate IMAX experience may reveal of QCD's mysteries
Physics Department Summer Lectures
In this lecture, I will introduce some basic statistical concepts commonly used in the data analysis of high-energy physics experiments. I will review the basic procedure in setting confidence intervals. Some advanced topics in data unfolding, selection of test statistics, and usage of linear algebra in reducing computation will be touched upon.
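As a minimal concrete illustration of the interval-setting procedure mentioned above, here is a toy Gaussian example in Python; the sample, its size, and the assumed known spread are invented for the sketch and are not taken from the lecture.

    # Toy frequentist confidence interval: a central 68% interval for the mean
    # of Gaussian measurements with known spread. All numbers are illustrative.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(1)
    sigma_true = 2.0
    sample = rng.normal(loc=10.0, scale=sigma_true, size=50)

    mean = sample.mean()
    z = norm.ppf(0.84)                         # ~1.0; 0.84 = 1 - 0.32/2 for 68% coverage
    half_width = z * sigma_true / np.sqrt(len(sample))

    print(f"68% CL interval for the mean: [{mean - half_width:.2f}, {mean + half_width:.2f}]")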
11 am, ISB Bldg. 734 Conf. Rm. 201 (upstairs)
While beam damage is often considered detrimental to our quantitative imaging capabilities, the energy and charge injected into the sample as a result of inelastic scattering can be exploited beneficially. This is especially true in radiation-chemistry-type experimental setups in the electron microscope, where the beam promotes desired local chemical reactions. We have observed that exposing a layer of small volatile organic molecules, condensed over a cold substrate, to the electron beam results in the formation of a solid product. Evidence suggests that the exposure mechanism driving the formation of a solid product is partial dehydrogenation of the molecules, removal of H2, and a progressive increase of the average molecular weight. Contrary to focused electron beam induced deposition, which relies on surface adsorption followed by aggregation of mobile species, at cryogenic temperature organic ice molecules are largely immobilized and act as targets for the incoming electrons throughout the entire thickness of the layer. Therefore, the exposure occurs throughout the volume of the frozen precursor, and the features are essentially determined by the electron distribution, with diffusion/transport parameters bearing little or no relevance.
Lattice-QCD predicts the occurrence of a phase transition above a critical temperature from ordinary nuclear matter to a new state of matter, usually referred to as the quark-gluon plasma (QGP), in which partons are the relevant degrees of freedom. One primary goal of heavy-ion physics is to create and study the properties of the QGP created in these collisions. The last couple of decades have seen tremendous progress in understanding the QGP, thanks to the successful operation of dedicated experiments at RHIC and the LHC. In this lecture, I will discuss the detectors designed for heavy-ion physics, and how an experimentalist turns electronic signals into physics results. Future directions of heavy-ion experiments will also be discussed.
Sambamurti Lecture
3:30 pm, Large Seminar Room, Bldg. 510
Hosted by: John Haggerty
Neutrinos have been the most consistently surprising particle of the last few decades. The onset of high-precision experiments has led to the discovery of neutrino oscillations, possible evidence for beyond the Standard Model sterile neutrinos, and the beginnings of neutrino-based geophysics. Recent measurements of antineutrinos from nuclear reactors have observed flux and spectral discrepancies compared to leading theoretical models. Experiments like Daya Bay and PROSPECT are able to observe the small differences in neutrino emission from different mixtures of nuclear fuel, which may illuminate the origin of this disagreement. These neutrino finger-prints can also be used to investigate the mixture of fuel inside an operating reactor, rekindling interest in neutrino-based reactor monitoring. I will present recent advances which have demonstrated how small-scale experiments utilizing new technologies can advance both fundamental and applied science.
The role of computing in particle and nuclear physics.
1:30 pm, ISB Bldg. 734 Conf. Rm. 201 (upstairs)
Angle-resolved photoemission, in addition to tunneling, has provided key information on the cuprate pairing on the microscopic scale. In particular, in the underdoped regime, the angular dependence of the gap function Δ(θ) deviates from a pure d-wave form such that the antinodal gap value ΔAN and the nodal gap value ΔN completely diverge. On another front, ARPES has firmly established that the enigmatic Fermi arcs, i.e. normal electron excitations around the nodes, exist even below Tc. In this work, we will interpret these experiments based on the 'pairon' model [1] in which the fundamental object is a hole pair bound by its local antiferromagnetic environment on the scale of the coherence length ξAF. The pairon model agrees quantitatively with both the gap function Δ(θ) and the Fermi arcs seen at finite temperature.
"Introduction to Statistics in High-Energy Physics"
Presented by Xin Qian
"Electron beam effects on organic ices"
Presented by Marco Beleggia, Technical University of Denmark, Denmark
"Searching for and understanding the quark-gluon plasma in heavy-ion"
Presented by Rongrong Ma, BNL
"Finger-printing a nuclear reactor with neutrinos"
Presented by Thomas Langford, Yale University
"From Raw Data to Physics Results"
Presented by Paul Laycock
"Fermi arcs, nodal and antinodal gaps in cuprates : the 'pairon' model to the rescue"
Presented by William Sacks, Sorbonne University, France
Presented by Babak Salehi Kasmaei, Kent State University
2 pm, Building 510, CFNS Seminar room 2-38
NT/RIKEN
Presented by Jean-Francois Paquet
Friday, August 16, 2019, 2:00 pm
Presented by Daniel Harlow, MIT
Presented by Shailesh Chandrasekharan, Duke University
Friday, September 6, 2019, 2:00 pm
Presented by Gerald Dunne, University of Connecticut
Friday, September 13, 2019, 2:00 pm
NT RIKEN Seminar
Presented by Derek Teaney, Stony Brook
Presented by Adrien Florio, École polytechnique fédérale de Lausanne
12 pm, Building 510, Room 2-160
Thursday, October 3, 2019, 12:00 pm
Hosted by: Yuta Kikuchi
Presented by Sophia Han, Ohio University
Friday, October 25, 2019, 2:00 pm
Presented by Jennifer Cano, SUNY-Stony Brook
1:30 pm, ISB Bldg. 734, Conf. Rm. 201 (upstairs)
Thursday, December 12, 2019, 1:30 pm
Past Year's Events
"Precision Jet Substructure with the ATLAS Detector"
Presented by Jennifer Roloff, BNL
Thursday, July 18, 2019, 3 pm
"Quantum Chromodynamics (QCD) as a many-body theory: An existential tale in four acts"
Presented by Raju Venugopalan, BNL
"Nanoparticle Beam Deposition: A Novel Route to the Solvent-Free"
Presented by Richard E. Palmer, Nanomaterials Lab, Swansea University, UK, United Kingdom
Tuesday, July 16, 2019, 11 am
Room 300, 3rd Floor, Chemistry Building 555
"Topological Superconducting Qubits"
Presented by Javad Shabani, Center for Quantum Phenomena NYU
Friday, July 12, 2019, 2 pm
Building 510, CFNS Room 2-38
"A golden age in physics, an overview of what the...is going on in the RHIC tunnel"
Presented by Rob Pisarski, BNL
"Low-nu Flux Measurement Using Neutrino/Antineutrino-Hydrogen Interactions for Long-baseline Neutrino Oscillation Experiments"
Presented by Hongyue Duyang, University of South Carolina
"Silicon Detectors for Particle and Nuclear Physics"
Presented by Gabriele Giacomini, BNL
"Extracting the Heavy-Quark Potential from Bottomonium Observables in Heavy-Ion Collisions"
Presented by Xiaojian Du, Texas A&M University
Tuesday, July 9, 2019, 11 am
"Defense Nuclear Nonproliferation's Mission"
Presented by Dr. Brent K. Park, NNSA - Deputy Administrator for Defense Nuclear Nonproliferation
Monday, July 8, 2019, 3 pm
Wednesday, July 3, 2019, 9 am
"Astronomical CCDs and light-sensitive sensors for fast imaging"
Presented by Andrei Nomerotski, BNL
"DREENA framework as a multipurpose QGP tomography tool"
Presented by Magdalena Djordjevic, Institute of Physics Belgrade
"Charm hadron collective flow and charm hadrochemistry in heavy-ion collisions"
Presented by Xin Dong, Lawrence Berkeley National Laboratory
Tuesday, July 2, 2019, 9 am
"Searches of Dark Matter signals with the ATLAS detector at the LHC: Present and future"
Presented by Dr. Rachid Mazini, Academia Sinica, Taiwan
Monday, July 1, 2019, 9 am
"Using Gravitational Lensing to measure Dark Matter and Dark Energy in the Universe"
Presented by Erin Sheldon, BNL
Gravitational lensing is the bending of the path of light near massive bodies. Mass produces a curvature of space time, and light follows a curved path that is calculable using the General Theory of Relativity. I will discuss how the lensing effect is used to measure the amount of Dark Matter in galaxies and in the universe as a whole. I will also discuss how we use lensing to measure the properties of the mysterious Dark Energy that is driving the accelerated expansion of our universe.
"First measurement of the neutron-argon cross section between 100 and 800 MeV"
Presented by Prof. Christopher Mauger, University of Pennsylvania
Thursday, June 27, 2019, 3 pm
The DUNE experiment directs a neutrino beam from Fermilab towards a 40 kiloton liquid argon time-projection chamber (TPC) 1300 km away in the Sanford Underground Research Facility in South Dakota. By measuring electron neutrino and anti-neutrino appearance from the predominantly muon neutrino and anti-neutrino beams, DUNE will determine the neutrino mass ordering and explore leptonic CP violation. The neutrino oscillation phenomena explored by DUNE require robust determinations of the (anti-)neutrino energies by reconstructing the particles produced in charged current reactions. Among the particles emerging from the interaction which carry significant energy, neutrons are the most challenging to reconstruct. The CAPTAIN collaboration has made the first measurement of the neutron-argon cross section between 100 and 800 MeV of neutron kinetic energy - an energy regime crucial for neutrino energy reconstruction at DUNE. We made the measurement in a liquid argon TPC with 400 kg of instrumented mass. I describe the measurement and discuss future plans.
"Excitonic condensation of strongly correlated electrons"
Presented by Professor Jan Kunes, Vienna University of Technology, Austria
Bldg. 734, ISB Conference Room 201 (upstairs)
Hosted by: Keith Gilmore
Spontaneous symmetry breaking is a prominent demonstration of the collective behavior of strongly correlated systems. Besides ordering of charge or of spin dipoles, more exotic types of long-range order are possible, which do not couple to conventional probes and are therefore sometimes called the hidden order. Excitonic magnets, or excitonic condensates, are examples of such systems. I will introduce the concept of excitonic condensate from the strong coupling perspective and discuss the rich variety of excitonic phases arising from the internal (spin, orbital) degrees of freedom of the excitons. I will present some numerical results obtained with dynamical mean-field theory for models as well as for specific materials, which we suspect to be excitonic magnets. The presentation will include the recently obtained results for dynamical susceptibilities in phases with long-range order and some proposals on how to detect excitonic condensates with today's experimental techniques.
"The Really Big Picture: Cosmology in the 21st Century"
"Search for a new particle in the decay of the Higgs boson"
In this talk, I will discuss the search strategies that led to the discovery of the Higgs boson. Then, I will focus on the usage of the Higgs boson as a portal to "new physics". I will conclude with dark sector states as a possibility for physics beyond the Standard Model of particle physics.
Physics Colloquium - CANCELLED
"3D imaging of nuclei: status and towards an EIC"
Presented by Kawtar Hafidi, ANL
Tuesday, June 18, 2019, 3:30 pm
Hosted by: Thomas Ullrich
"Basics of Neutrino Interactions in Matter"
Presented by Milind Diwan, BNL
I will review the basics of neutrino interactions in matter with emphasis on calculations of cross sections and rates. The lecture will provide an introduction to the physics of weak interactions.
"First observation of the directed flow of D0 and anti-D0 in Au+Au collisions at sqrt(sNN) = 200 GeV"
Presented by Subhash Singha, KSU
Tuesday, June 18, 2019, 11 am
Hosted by: Isaac Upsal
In this talk, we will present the first measurement of rapidity-odd directed flow (v1) for D0 and anti-D0 mesons at mid-rapidity (|y| < 0.8) in Au+Au collisions at sqrt(sNN) = 200 GeV using the STAR detector at the Relativistic Heavy Ion Collider. In 10-80% Au+Au collisions, the slope of the v1 rapidity dependence (dv1/dy), averaged over D0 and anti-D0 mesons, is -0.080 +/- 0.017 (stat.) +/- 0.016 (syst.) for transverse momentum (pT) above 1.5 GeV/c. The absolute value of D0-meson dv1/dy is about 25 times larger than that for charged kaons, with 3.4sigma significance. These data not only give unique insight into the initial tilt of the produced matter, they are expected to provide improved constraints for the geometric and transport parameters of the hot QCD medium created in relativistic heavy-ion collisions.
"Tracking phase textures in complex oxides using coherent x-rays"
Presented by Xiaoqian Chen, Lawrence Berkeley Laboratory
Monday, June 17, 2019, 11 am
ISB Bldg. 734, Conf. Rm. 201 (upstairs)
Hosted by: Ian Robinson/Mark Dean
In complex oxides, coupled interactions result in unpredictable and novel emerging orders that are yet to be understood. With the recent advancement in x-ray and laser sources, exploration of equilibrium fluctuations and nonlinear dynamics has become an effective approach to understanding these intertwined orders. In particular, coherent x-rays are a simultaneous probe of the order parameter, phase texture, and dynamics. In the first part of my talk, I will use the underdoped cuprate La2-xBaxCuO4 as an example to show how x-ray speckle correlation can be a test for (lattice degree of freedom and charge) order coupling and dynamics. However, can we image domain dynamics in real time? In the second part of my talk, I will use an antiferromagnetically ordered artificial lattice to demonstrate that phase retrieval lensless imaging can be used to image charge and magnetic orders. Using Bragg coherent diffraction imaging, we revealed single domain wall motion with 100 ms time resolution. References [1] X. M. Chen et al. Phys. Rev. Lett. 117, 167001 (2016) [2] V. Thampy et al. Phys. Rev. B 95, 241111 (2017) [3] X. M. Chen et al. Nat Commun. 10 1435 (2019) [4] X. M. Chen et al. under review, arXiv:1809.05656 [cond-mat.mes-hall]
"D meson mixing via dispersion relation"
Presented by Hsiang-nan Li, National Center for Theoretical Sciences, Physics Division, Taiwan
Friday, June 14, 2019, 2 pm
In this talk I will explain how to calculate the D meson mixing parameters x and y in the Standard Model. Charm physics is notoriously difficult, because most effective theories and perturbation theories do not apply well. I propose to study the D meson mixing via a dispersion relation, which relates low-mass dynamics to high-mass dynamics. Taking heavy quark results as inputs in the high-mass region, we obtain x and y consistent with experimental data, at least in order of magnitude.
"The Little Neutral One"
Presented by Mary Bishai, BNL
In the past 50 years, the study of neutrinos, the lightest yet most abundant of the known elementary particles, has revealed cracks in the Standard Model of Particle Physics. Could neutrinos explain the matter-antimatter asymmetry in our Universe? To answer these questions we need to better understand the properties of these elusive polymorphs. I will present a brief history of the neutrino, what we have learnt so far about it, and what we hope to learn in the next couple of decades from some of the most ambitious experiments in particle physics.
NSLS-II Friday Lunchtime Seminar
"High resolution strain measurements and phase discrimination in solid solutions using X-Ray Diffuse Multiple Scattering (DMS)"
Presented by Gareth Nisbet, Diamond Light Source, United Kingdom
NSLS-II Bldg. 743 Room 156
Hosted by: Ignace Jarrige
DMS is a new high resolution scattering technique which manifests as diffraction lines impinging on the detector plane, similar to Kikuchi lines or Kossel lines. I will explain how multiple intersections from coplanar and non-coplanar reflections can be used for phase discrimination in multi-phasic systems by following a simple reductive procedure. The methods will be demonstrated using data from complex PMN-PT and PIN-PMN-PT ferroelectric solid solutions. I will also show how convolutional neural networks are being applied to DMS data for phase discrimination.
"Argon Capture Experiment at DANCE (ACED)"
Presented by Jingbo Wang, UC Davis
Liquid argon is becoming a popular medium for particle detection, with applications ranging from low-background dark matter searches to high-energy neutrino detection. Because neutrons may represent both an important source of background and a product of signal events, a good understanding of their interactions in argon is a requirement for precision physics measurements. Despite being one of the most basic quantities needed to describe low energy neutron transport, the thermal neutron capture cross section on argon remains poorly understood, with the existing activation measurements showing significant disagreements. To resolve these disagreements, the Argon Capture Experiment at DANCE (ACED) collaboration has performed a differential measurement of the 40Ar(n, gamma)41Ar cross section using a time-of-flight neutron beam and the Detector for Advanced Neutron Capture Experiments (DANCE), a ~4π gamma spectrometer at Los Alamos National Laboratory. A fit to the differential cross section from 0.015–0.15 eV, assuming a 1/v energy dependence, yields sigma(2200) = 673 ± 26 (stat.) ± 59 (sys.) mb. During this talk, I will introduce the DUNE experiment before focusing on the importance of neutrons in liquid argon detectors. I will then present the ACED experiment and the use of neutrons as a detector calibration method.
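To make the quoted 1/v fit concrete, here is a toy sketch in Python; the data points are synthetic, the helper one_over_v and the 0.0253 eV thermal reference energy are our own illustrative choices, and none of this reproduces the actual ACED analysis.

    # Toy 1/v fit to a synthetic capture cross section, sketching the idea of
    # extracting sigma at the thermal point from data in the 0.015-0.15 eV range.
    # All numbers below are made up for illustration.
    import numpy as np
    from scipy.optimize import curve_fit

    E_THERMAL = 0.0253  # eV, conventional thermal-neutron reference energy (2200 m/s)

    def one_over_v(E, sigma_thermal):
        # For 1/v behaviour, sigma(E) = sigma_thermal * sqrt(E_thermal / E),
        # since the neutron speed scales as sqrt(E).
        return sigma_thermal * np.sqrt(E_THERMAL / E)

    rng = np.random.default_rng(0)
    energies = np.linspace(0.015, 0.15, 30)                  # eV
    truth = one_over_v(energies, 673.0)                      # mb, assumed true value
    data = truth + rng.normal(0.0, 20.0, size=truth.shape)   # add fake scatter

    popt, pcov = curve_fit(one_over_v, energies, data, p0=[600.0])
    print(f"fitted sigma(thermal) = {popt[0]:.0f} +/- {np.sqrt(pcov[0, 0]):.0f} mb")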
"High energy atmospheric neutrinos: connections between laboratory experiments and cosmic rays"
Presented by Mary Hall Reno, University of Iowa
Hosted by: Milind Diwan
The IceCube Neutrino Observatory's measurement of a diffuse flux of neutrinos from astrophysical sources has opened a new era in high energy astroparticle physics. Neutrinos produced by cosmic ray interactions in the atmosphere are the main background to the astrophysical neutrino flux. At these high energies, data from the Large Hadron Collider experiments on heavy flavor production can be used to narrow the uncertainties in the background predictions at the highest energies. Our evaluation of the atmospheric neutrino flux from charm will be used to illustrate how collider physics results are connected to cosmic ray physics in this context.
"The Anomalous Magnetic Moment of the Muon and the Standard Model of Particle Physics"
Presented by William Morse, BNL
At the end of this lecture, you will know: What is anomalous about the magnetic moment of the muon. What is the magnetic moment of the muon. What is the muon. Why Bohr said "Anyone who thinks they understand Quantum Mechanics, and is not deeply disturbed by it, doesn't understand Quantum Mechanics." What the Standard Model Theorists have to fear from the anomalous magnetic moment of the muon.
"Visible and Invisible Clues for New Physics"
Friday, June 7, 2019, 12:30 pm
In this presentation, we will briefly describe the main elements of the Standard Model of particle physics. This theory, together with General Relativity, provides a precise description of a vast array of experimental and observational data, from microscopic to astronomical scales. However, solid empirical evidence and conceptual clues lead us to expect that this "standard" picture is incomplete. We will discuss some of the key reasons for this expectation.
"Status and perspective of high-energy automotive batteries"
Presented by Richard Schmuch, University of Munster, Germany
This presentation gives an overview of the materials, performance requirements and cost of current automotive traction batteries based on Li-ion technology. It also includes important aspects related to electromobility, such as its sustainability and energy efficiency. As current Li-Ion batteries with intercalation-type active materials are approaching their physicochemical energy density limit of roughly 300 Wh/kg or 800 Wh/L, alternative technologies such as lithium-metal based all-solid-state batteries (ASSBs) currently intensively studied, which promise an energy density of up to 1000 Wh/L. The potential and challenges of this and other post Li-ion batteries (e.g. Dual-Ion, Mg-Ion, Li-Sulphur) are discussed and also compared by systematic bottom-up energy density calculations. Through a step-by-step analysis from theoretical energy content at the material level to practical energies at the cell level, the individual advantages and shortcomings of the studied battery types are elucidated. Literature: (1) Schmuch, R.; Wagner, R.; Hörpel, G.; Placke, T.; Winter, M. Performance and Cost of Materials for Lithium-Based Rechargeable Automotive Batteries. Nat. Energy 2018, 3 (4), 267–278. (2) Betz, J.; Bieker, G.; Meister, P.; Placke, T.; Winter, M.; Schmuch, R. Theoretical versus Practical Energy: A Plea for More Transparency in the Energy Calculation of Different Rechargeable Battery Systems. Adv. Energy Mater. 2018, 1803170, 1803170. (3) Placke, T.; Kloepsch, R.; Dühnen, S.; Winter, M. Lithium Ion, Lithium Metal, and Alternative Rechargeable Battery Technologies: The Odyssey for High Energy Density. J. Solid State Electrochem. 2017, 1–26. (4) Meister, P.; Jia, H.; Li, J.; Kloepsch, R.; Winter, M.; Placke, T. Best Practice: Performance and Cost Evaluation of Lithium Ion Battery Active Materials with Special Emphasis on Energy Efficiency. Chem. Mater. 2016, 28 (20), 7203-7217
"Probing quantum materials with multiple spectroscopic techniques"
Presented by Eduardo H. da Silva Neto, University of California, Davis
ISB - Bldg. 734
Hosted by: Robert Konik
Resonant X-ray Scattering (RXS), Scanning Tunneling Spectroscopy (STS) and Angle-Resolved Photo-Emission Spectroscopy (ARPES) measurements have been at the forefront of several advances in the studies of quantum materials. Our group specializes in these techniques, looking to leverage their combination to the study of quantum materials. I will discuss two projects where we have used these state-of-the-art techniques to study high-temperature superconductors and topological materials. Charge order has now been ubiquitously observed in cuprate high-temperature superconductors. However, it remains unclear if the charge order is purely static or whether it also features dynamic correlations. I will discuss a polarization-resolved soft x-ray inelastic RXS experiment with unprecedented resolution that demonstrates the existence of a coupling between dynamic magnetic and charge-order correlations in the electron-doped cuprate Nd2−xCexCuO4 [1-3]. I will also discuss a combined ARPES-STS study of the topological material Hf2Te2P. Similar to the reports by H. Ji, et al. on Zr2Te2P [4], band structure calculations and ARPES by Hosen et al. [5] also suggest multiple topological surface states in Hf2Te2P. However, some topological surface states still lacked direct spectroscopic evidence due the inability of ARPES experiments to probe the unoccupied band structure. Using the combination of STS and ARPES with surface K-doping, we probe the unoccupied band structure of Hf2Te2P and demonstrate the presence of multiple surface states with a linear Dirac-like dispersion, consistent with the predictions from previously reported band structure calculations [6]. [1] E. H. da Silva Neto, et al. Science 347, 282 (2015). [2] E. H. da Silva Neto, et al. Science Advances 2 (8), e1600782 (2016). [3] E. H. da Silva Neto, et al. Physical Review B, Rapid Communication 98, 161114(R) (2018). [4] H. Ji, et al. Physical
"Mega-electron-volt ultrafast electron diffraction at SLAC National Accelerator Laboratory"
Presented by Xiaozhe Shen, SLAC National Accelerator Laboratory
Monday, June 3, 2019, 2 pm
Bldg. 480, Conference Room
Hosted by: Jing Tao
Ultrafast electron diffraction (UED) is a transformative tool for probing atomic structural dynamics in ultrafast science to understand the correlation between materials' structure and their functionalities, with the ultimate goal of controlling energy and matter. The advent of high-brightness relativistic electron beams from photocathode radio frequency (RF) gun provides a great opportunity to push the resolving power of UED onto atomic length and time scales. With the expertise in electron beam physics and ultrafast laser technology, SLAC has dedicated enormous efforts to develop a world-leading UED using mega-electron-volt (MeV) electron beams since 2014. Over the years, SLAC MeV UED has achieved great instrument performance and delivered numerous scientific outcomes for ultrafast science. In 2019, SLAC MeV UED has officially transformed into a user facility. In this talk, performance of SLAC MeV UED will be reviewed, including characterization of the instrument resolution and machine stability. The unique capabilities of SLAC MeV UED to accommodate various sample environments for a broad range of scientific interests, including condense matter physics and chemical science, will be presented, with highlighted scientific results. Research and development efforts to improve the performance of SLAC MeV UED will be discussed.
"Dark matter in the cosmos-The Hunt to Find it in the Laboratory"
Presented by Ioannis (J.D.) Vergados
There is plenty of evidence at all scales (galaxies, clusters of galaxies, cosmological distances) that most of the energy content of the universe is of unknown nature, i.e., 70% is dark energy and 25% dark matter. Only 5% is made up of matter of known nature (in atoms, in stars, in planets, etc.), constituents predicted by the standard model. Thus unraveling the nature of the dominant components and, in particular, of dark matter is one of the most important open problems in science. This nature can only be understood by the direct detection of its constituents in the laboratory. This can be achieved if there exists a weak interaction, much stronger than gravity, between the dark matter and ordinary matter. The constituents are supposed to have a mass and are called WIMPs (weakly interacting massive particles). We have no idea what this mass is, but from the rotational curves we know that the constituents must be non-relativistic, regardless of the size of their mass. The experimental techniques for the direct detection crucially depend on the assumed WIMP mass. Historically the first searches assumed WIMP masses of many GeV and, therefore, heavy nuclear targets were favored. Thus the hunt for DM began and evolved into a multi-pronged and interdisciplinary enterprise, combining cosmology and astrophysics, particle and nuclear physics, as well as detector technology, which will be reviewed. Since the WIMP energy is in the keV region, the nucleus cannot be excited and only the nuclear recoil can be measured. As a result, unfortunately, the signal cannot be easily distinguished from backgrounds. After thirty years of intensive work against formidable backgrounds by many large experimental teams, no dark matter has been found. Impressive limits on the nucleon cross section have, however, been obtained. Extension of these searches to GeV or sub-GeV WIMPs is also being considered using light nuclear targets. It may very w
"Latest oscillation results from the NOvA experiment"
Presented by Diana Patricia Mendez, University of Sussex
Thursday, May 30, 2019, 3 pm
Hosted by: Elizabeth Worcester
NOvA is a long-baseline neutrino oscillation experiment measuring $\nu_{\mu}$ disappearance and $\nu_e$ appearance within the NuMI beam from Fermilab. The experiment uses a Near and a Far Detector placed 810 km away from each other and 14 milliradians off the beam axis, resulting in an observed energy spectrum that peaks at 2 GeV, close to the oscillation maximum. A combined $\bar{\nu}_{\mu}$ + $\nu_{\mu}$ disappearance and $\bar{\nu}_{e}$ + $\nu_{e}$ appearance result will be presented, including NOvA's first collected anti-neutrino data, for a total exposure of $16\times10^{20}$ protons-on-target. In addition to an increased exposure, an upgraded analysis has enabled the experiment to set new limits on the allowed regions for $\Delta m^2_{32}$ and sin$^2\theta_{23}$ and to make a measurement of $\Delta m^2_{32}$ that is among the world's best.
"Applications of machine learning to computational physics"
Presented by Dr Akio Tomiya, RBRC
Thursday, May 30, 2019, 12 pm
In this talk, I would like to present my work with machine learning. I plan to introduce my work related to lattice QCD research: detection of phase transitions in classical spin systems [arXiv 1609.09087, 1812.01522] and configuration generation [arXiv 1712.03893, among others].
"The Influence of Aerosol Chemical Composition, Morphology, and Phase State on Water and Ice Cloud Particle Formation"
Presented by Yue Zhang, North Carolina State, MIT, and Aerodyne
Thursday, May 30, 2019, 11 am
Hosted by: Ernie Lewis
Aerosols and clouds affect Earth's radiative balance, and aerosol-cloud interactions are major sources of uncertainty in predicting future climate. The climate effects of water and ice cloud particles formed from atmospheric particulate matter are not well understood due to the complex physical and chemical properties of these aerosols. Measurements from fixed sites and field campaigns have shown that organic aerosols (OA) dominate the non-refractory aerosols in the free troposphere where clouds typically form, and cloud water and ice cloud residue both show the presence of organic materials. Despite the abundance of OA, their effects on both cloud condensation nuclei (CCN) and ice nucleation (IN) are not fully understood and remain controversial. To probe into these issues, the CCN and IN properties of complex inorganic-organic aerosol mixtures that simulate ambient conditions were measured with a cloud condensation nuclei counter (CCNC, DMT, Inc.) and a spectrometer for ice nucleation (SPIN, DMT, Inc.) under a variety of laboratory conditions. Our studies suggest that the composition of the organic-containing aerosols, as well as their morphology and phase state, jointly impact their cloud-forming potential. The results highlight the importance of combining aerosol physical and chemical properties to accurately understand cloud particle formation processes and their implications for the climate.
"Examining hydrodynamical modelling of the QGP through dilepton radiation"
Presented by Gojko Vujanovic, Wayne State University
Recent viscous hydrodynamical studies at the Relativistic Heavy-Ion Collider (RHIC) and the Large Hadron Collider (LHC), show that bulk viscosity plays an important role in their phenomenological description. A temperature-dependent bulk viscosity in the hydrodynamical evolution of the medium can modify the development of the hydrodynamic momentum anisotropy differently in the high- and low-temperature regions. Thus, anisotropic flow coefficients of various particle species are affected differently depending where their surface of last scattering lies. For the case of hadronic observables, they are predominantly sensitive to low temperature regions, while electromagnetic radiation is emitted at all temperatures. Therefore, bulk viscosity should affect electromagnetic radiation differently than hadron emission. The effects of bulk viscosity on one of the electromagnetic probes, namely photons, has already been investigated. The same statement holds true for hadrons. The goal of this presentation is to study how dilepton production, the other source of electromagnetic radiation, gets modified owing to the presence of bulk viscosity at RHIC and LHC energies. With calculations at different collision energies, comparisons in the dilepton signal can be made and more robust conclusions regarding the role of bulk viscosity in high energy heavy-ion collisions can be drawn. Dilepton radiation from the dilute hadronic sector of the medium, which are radiated in addition to dileptons emitted during the hydrodynamical evolution, will also be included to ascertain whether interesting dynamics induced by bulk viscosity may have observable consequences. To complete that investigation, particular attention will be given to how the $\rho(770)$ meson, and its subsequent dilepton decay, is calculated at the end of the hydrodynamical simulation.
"Pieces of the Puzzle: Reaching QCD on Quantum Computers"
Presented by Henry Lamm, UMD
Friday, May 24, 2019, 2 pm
Building 510, CFNS Seminar Room 2-38
The advent of quantum computing for scientific research presents the possibility of calculating time-dependent observables like viscosity and parton distributions from QCD. In order to utilize this new tool, a number of theoretical and practical issues must be addressed, related to efficiently digitizing, initializing, propagating, and evaluating quantum field theory. In this talk, I will discuss a number of projects being undertaken by the NuQS collaboration to realize calculations on NISQ-era quantum computers and beyond.
CFNS Seminar
"Effect of non-eikonal corrections on two particle correlations"
Presented by Tolga Altinoluk, National Centre for Nuclear Research, Warsaw, Poland
Hosted by: Andrey Tarasov
We will discuss the non-eikonal effects on gluon production in pA collisions that originate from the finite longitudinal width of the target. We will then consider the dilute target limit, and discuss the single and double inclusive gluon production cross section in pp collisions. We will show that non-eikonal corrections break the accidental symmetry of the CGC and give rise to non-vanishing odd azimuthal harmonics.
"In situ imaging of gold nanocrystals during the CO oxidation reaction studied by Bragg Coherent Diffraction Imaging"
Presented by Ana Flavia Suzana, Brazilian Association of Synchrotron Light Technology-ABTLUS, Brazil
Thursday, May 23, 2019, 1:30 pm
Hosted by: Ian Robinson
The fundamental aim of heterogeneous catalysis research is to understand mechanisms at the nanoparticle level, and then to design and synthesize catalysts with desired active sites. In this regard, the in situ/operando characterization of defects is crucial as they are preferential catalytic sites for the reaction occurrence. In this seminar I will talk about the main part of the work developed during my PhD: the investigation of the morphology and structure evolution of gold nano-catalysts supported on titanium dioxide. Those catalytic materials were evaluated for the model CO oxidation reaction, chosen for its environmental relevance and "simplicity" to be reproducible within our X-ray imaging study. We used the Bragg Coherent Diffraction Imaging technique to follow in situ the 3D morphology changes under catalytic reaction conditions. We correlated the 3D displacement field and strain distribution of the gold nanoparticles to the catalytic properties of the material. In particular, for a 120 nm gold nanoparticle, we quantified under working conditions the adsorbate-induced surface stress on the gold nanocrystal, which leads to restructuration and defects identified as a nanotwin network.
"Complex saddle points of path integrals"
Presented by Semeon Valgushev, BNL
In this talk, we discuss the physical role of complex saddle points of path integrals. In the first case study, we analyze the saddle point structure of two-dimensional lattice gauge theory represented as the Gross-Witten-Wadia unitary matrix model. We find that non-perturbative physics in the strong coupling phase can be understood in terms of a new family of complex saddle points whose properties are connected to the resurgent structure of the 1/N expansion. In the second case study, we discuss the sign problem in fermionic systems at finite density and the possibility to alleviate it with the help of deformations of the integration contour into complex space, using the two-dimensional Hubbard model as an example.
"Designing Dopants to Shield Anion Electrostatics in Doped Conjugated Polymers to Obtain Highly Mobile and Delocalized Carriers"
Presented by Taylor Aubry, UCLA
Room 300, 3rd Floor - Chemistry Bldg. 555
Hosted by: Matthew Bird
Doping conjugated polymers is an effective way to tune their electronic properties for thin-film electronics applications. Chemical doping of semiconducting polymers involves the introduction of a strong electron acceptor or donor molecule that can undergo charge transfer (CT) with the polymer. The CT reaction creates electrical carriers on the polymer chain (usually positive polarons a.k.a. holes) while the dopant molecules remain in the film as counterions. Undesirably, strong electrostatic attraction from the anions of most dopants will localize the polarons and reduce their mobility. We employ a new strategy utilizing substituted icosahedral dodecaborane (DDB) clusters as molecular dopants for conjugated polymers. DDBs provide a unique system in which the redox potential of the dopant can be rationally tuned via modification of the substituents without significant change to the size or shape of the dopant molecule. These clusters allow us to disentangle the effects of energetic offset on the production of free and trapped carriers in DDB-doped poly-3-hexylthiophene (P3HT) films. We find that by designing our cluster to have a high redox potential and steric protection of the core-localized electron density, highly delocalized polarons with mobilities equivalent to films doped with no anions present are obtained.1 P3HT films doped with these boron clusters have conductivities and polaron mobilities roughly an order of magnitude higher than films doped with conventional small-molecule dopants such as 2,3,5,6-tetrafluoro-7,7,8,8- tetracyanoquinodimethane (F4TCNQ). The spectral shape of the IR-region absorption for our DDB-doped polymer film closely matches the calculated theoretical spectrum for the anion at infinite distance from the polaron.2 We therefore conclude that these DDB clusters are able to effectively spatially separate the counterion. Moreover, nearly all DDB-produced carriers are free, while it has been shown that small m
C-AD Accelerator Physics Seminar
"High-Level Software Development for the CLARA FEL Test Facility"
Presented by Dr. James Jones, Daresbury Laboratory
Bldg. 911B, Second Floor, Large Conf. Rm., Rm. A2
Hosted by: Steve Peggs
CLARA is a low-energy test facility for advanced FEL physics and beam-driven novel acceleration techniques. As part of the facility we have planned for an advanced integrated system for high-level software development and online-model based on C++ and python interfaces. An overview of the CLARA facility and recent experimental results will be presented along with a description and current status of the HLS middle-layer and online-model. Future plans for CLARA will also be presented.
"Future opportunities for a small-system scan at RHIC"
Presented by Jiangyong Jia
The observation of multi-particle azimuthal correlations in high-energy small-system collisions has led to intense debate on their physical origin between two competing theoretical scenarios: one based on initial-state intrinsic momentum anisotropy (ISM), the other based on final-state collective response to the collision geometry (FSM). To complement the previous scan of asymmetric collision systems (p+Au, d+Au and He+Au), we propose a scan of small symmetric collision systems at RHIC, such as C+C, O+O, Al+Al and Ar+Ar at sqrt{s_NN} = 0.2 TeV, to further disentangle contributions from these two scenarios. These symmetric small systems have the advantage of providing access to geometries driven by the average shape of the nuclear overlap, compared to fluctuation-dominant geometries in asymmetric systems. A transport model is employed to investigate the expected geometry response in the FSM scenario. Different trends of elliptic flow with increasing charged-particle multiplicity are observed between symmetric and asymmetric systems, while triangular flow appears to show a similar behavior. Furthermore, a comparison of O+O collisions at sqrt{s_NN} = 0.2 TeV and at sqrt{s_NN} = 2.76–7 TeV, as proposed at the LHC, provides a unique opportunity to disentangle the collision geometry effects at the nucleon level from those arising from subnucleon fluctuations.
"MAXPD: Multi-Anvil X-ray Powder Diffraction — COMPRES Partner User Program for High Pressure Studies at 28-ID-2-D"
Presented by Matthew L. Whitaker, Stony Brook University
MAXPD is the downstream endstation of XPD, an insertion device beamline at Sector 28 (28-ID-2-D) of NSLS-II. The MAXPD endstation and General User Program are sponsored by the COnsortium for Materials Properties Research in Earth Sciences (COMPRES). MAXPD has an 1100-ton hydraulic press installed, which is equipped with a unique DT-25 pressure module that can be swapped out for a more standard D-DIA module as desired. MAXPD makes use of the world-class monochromatic beam available at XPD (usually ~67 keV), with which we collect both angular dispersive X-ray diffraction data and X-radiographic imaging. The first General User experiments took place in March 2018. Final Science Commissioning beamtime took place in August of last year, and the full General User program for MAXPD began in the 2018-3 cycle. In this seminar, I will give an overview of the science drivers behind the development of the endstation, some of its unique capabilities, some representative results from recent experiments conducted over the last two cycles at MAXPD, and where we are looking to go as we look to the future.
"The non-equilibrium attractor: Beyond hydrodynamics"
Presented by Michael Strickland, Kent State University
"Electric dipole moments in the era of the LHC"
Presented by Jordy de Vries, University of Massachusetts Amherst, Riken BNL
Thursday, May 9, 2019, 12 pm
The search for an understanding of fundamental particle physics that goes beyond the Standard Model (SM) has grown into a worldwide titanic effort. Low-energy precision experiments are complementary to collider searches and, in certain cases, can even probe higher energy scales directly. However, the interpretation of a potential signal, or lack thereof, is complicated because of the non-perturbative nature of low-energy QCD. I will use the search for electric dipole moments (EDMs), which aims to discover beyond-the-SM CP violation, as an example to illustrate these difficulties and how they can be overcome by combining (chiral) effective field theory and lattice QCD. I discuss how EDM experiments involving complex systems like nucleons, nuclei, atoms, and molecules constrain possible CP-violating interactions involving the Higgs boson, how these constraints match up to direct LHC searches, and the relevance of and strategies for the improvement of the hadronic and nuclear theory.
"Mu*STAR Accelerator-Driven Subcritical Molten-Salt All-Purpose non-Nuclear Reactor"
Presented by Rolland Johnson, Muons Inc.
Tuesday, May 7, 2019, 3:30 pm
Hosted by: George Redlinger
The Mu*STAR BHAG[1] is: To make superconducting RF accelerators so powerful and efficient that they make enough neutrons to produce nuclear energy for electricity or for process heat at less cost than from wind, solar, or natural gas, without weapons proliferation legacies of enrichment and chemical reprocessing, by burning unwanted nuclear materials. The arguments are presented to support that such a goal is possible in the near future. [1] BHAG: Big Hairy Audacious Goal, from "Built to Last: Successful Habits of Visionary Companies" by Jim Collins and Jerry Porras (2004)
"Relativistic Hydrodynamic Fluctuations"
Presented by Gokce Basar, UiC
Friday, May 3, 2019, 2 pm
We present a general systematic formalism for describing the dynamics of fluctuations in an arbitrary relativistic hydrodynamic flow, including their feedback (known as long-time hydrodynamic tails) in a deterministic way. The fluctuations are described by two-point equal-time correlation functions. We introduce a definition of equal time in a situation where the local rest frame is determined by the local flow velocity, and a method of taking derivatives and Wigner transforms of such equal-time correlation functions, which we call confluent. The Wigner functions satisfy evolution equations that describe the relaxation of the out-of-equilibrium modes. We find that the equations for confluent Wigner functions nontrivially match the kinetic equation for phonons propagating on an arbitrary background, including relativistic inertial and Coriolis forces due to acceleration and vorticity of the flow. We also describe the procedure of renormalization of short-distance singularities which eliminates cutoff dependence, allowing efficient numerical implementation of these equations.
Special Particle Physics Seminar
"Observation of CP violation in charm decays"
Presented by Angelo Di Canto, BNL
Friday, May 3, 2019, 11 am
The existence of CP violation in the decays of strange and beauty mesons is well established experimentally by numerous measurements. By contrast, CP violation in the decays of charm particles has so far escaped observation. This seminar reports on the first observation of CP violation in charm decays through the measurement of the difference between the time-integrated CP asymmetries in D0 -> K- K+ and D0 -> pi- pi+ decays. The measurement has been performed using the full data set of proton-proton collisions collected by LHCb in 2011-2018, which corresponds to an integrated luminosity of 9 fb-1. In addition, a brief overview of recent measurements of mixing and mixing-induced CP violation in charm mesons at LHCb is also presented.
"Dark Matter Searches with the ATLAS Detector at the LHC"
Presented by Arely Cortes Gonzalez, CERN
Thursday, May 2, 2019, 3 pm
Hosted by: Michael Begel
The presence of a non-baryonic dark matter component in the Universe is inferred from the observation of its gravitational interaction. If dark matter interacts weakly with the Standard Model, it would be produced at the LHC, escaping the detector and leaving large missing transverse momentum as its signature. The ATLAS detector has developed a broad and systematic search program for dark matter production in LHC proton-proton collisions. The results of these searches on the 13 TeV data, their interpretation, and the possible evolution of the search program will be presented.
"The Chiral Qubit: quantum computing with chiral anomaly"
Presented by Dmitri Kharzeev, Stony Brook University and BNL
The quantum chiral anomaly enables a nearly dissipationless current in the presence of chirality imbalance and magnetic field – this is the Chiral Magnetic Effect (CME), observed recently in Dirac and Weyl semimetals. We propose to utilize the CME for the design of qubits potentially capable of operating at THz frequency, room temperature, and the coherence time to gate time ratio of about 10^4 . The proposed "Chiral Qubit" is a micron-scale ring made of a Weyl or Dirac semimetal, with the |0> and |1> quantum states corresponding to the symmetric and antisymmetric superpositions of quantum states describing chiral fermions circulating along the ring clockwise and counter-clockwise. A fractional magnetic flux through the ring induces a quantum superposition of the |0> and |1> quantum states. The entanglement of qubits can be implemented through the near-field THz frequency electromagnetic fields.
"Reviewing the AGS Heavy Ion Program and Looking Forward to the Fixed-Target Program at STAR"
Presented by Prof. Daniel Cebra, UC Davis
Tuesday, April 30, 2019, 11 am
In the 1980's and 90's, the AGS initiated a heavy-ion beam program with both silicon and gold beams. A suite of dedicated experiments established the systematics for production of light charged particles, strangeness, light nuclei, and anti-particles, as well as systematics for flow and femtoscopy. Those experiments established the design of the RHIC detectors and trained the personnel who would become leaders in the RHIC program. Recently, there has been renewed interest in the energy region covered by the AGS heavy-ion program. New facilities are being built in Germany and Russia and proposed in Japan and China. And a conclusion of the first beam energy scan at RHIC was that it would be necessary to revisit the AGS energy range by installing a fixed target within the STAR experiment. This talk will review key results from the AGS heavy-ion program and the results from the STAR fixed-target test runs, and outline the proposed physics program.
""Measure what is measurable and make measurable what is not so - Uncover new physics with bosons at the LHC and upgrades of the CMS detector to maximize the discovery potential""
Presented by Mia Liu, FNAL
Monday, April 29, 2019, 11 am
The Standard Model describes the building blocks of matter and their interactions. It has been tested extensively with experimental data and found to be incredibly successful in describing nature. Discovering the Higgs boson in 2012 at the LHC completed the picture of the SM. The LHC is at the forefront of directly searching for new physics which is Beyond-Standard-Model (BSM), and I will discuss searches for supersymmetric partners of the electroweak bosons, as well as the measurement of the extremely rare WWW triboson process, as stringent tests of the SM. I will also discuss the instrumentation which enables such studies. The discussion includes the recently completed CMS Phase-1 pixel upgrade, as well as the R&D studies towards solving the future trigger and computing challenges using innovative machine learning approaches in future high energy experiments.
"Searching for Higgs Pair Production at the LHC"
Presented by Elizabeth Brost, Northern Illinois University
Thursday, April 25, 2019, 3 pm
Since the discovery of the Higgs boson in 2012, the particle physics community at the Large Hadron Collider (LHC) has been hard at work studying its properties, and comparing them to the predictions of the Standard Model (SM), including the couplings of the Higgs boson to itself and to other particles. The Higgs self-coupling can be measured directly in the Higgs pair production process, and will provide insight into the nature of electroweak symmetry breaking. In the SM, the di-Higgs cross section in proton-proton collisions is very small. However, a wide range of beyond-the-SM models predict enhancements to the di-Higgs production rate, which motivates searching for di-Higgs production even now, when the SM cross section is too small to measure in the current LHC dataset. Looking forward, the LHC Run 3 and HL-LHC will bring a new set of challenges, including more proton-proton collisions per bunch crossing. Extracting rare physics signatures from this busier environment will be difficult for the current ATLAS trigger system. In this talk, I will present current and future ATLAS searches for hh production using a variety of final states, and discuss the use of future track triggers in upgrades to the ATLAS trigger system.
"Partons from the Path-Integral Formalism of the Hadronic Tensor"
Presented by Keh-Fei Liu, University of Kentucky
Thursday, April 25, 2019, 12 pm
"Parton distributions in Euclidean space"
Presented by Anatoly Radyushkin, ODU/JLab
To extract parton distributions from lattice simulations, one needs to consider matrix elements M(z,p) of bilocal correlators of parton fields [generically written as φ(0)φ(z)] at spacelike separations z=(0,0,0,z_3). A transition to PDFs may then proceed by taking a Fourier transform either with respect to z_3 for fixed p_3 (which gives X. Ji's quasi-PDFs), or with respect to the Lorentz-invariant variable ν=-(zp) for fixed values of another Lorentz invariant, z^2 [which results in pseudo-PDFs]. These functions are interesting in their own right, and I will discuss, in the continuum case, their general properties, the connection between the two types of functions, and their relation with the usual light-cone PDFs. I will outline the algorithm for extracting the PDFs through the use of the so-called "reduced Ioffe-time distributions", and illustrate this pseudo-PDF-oriented approach on the example of exploratory lattice simulations performed by Orginos et al.
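Schematically, and up to normalization and sign conventions that vary in the literature (this is a paraphrase of the two transforms described above, not a formula quoted from the talk):

```latex
% quasi-PDF: Fourier transform in z_3 at fixed p_3
Q(x, p_3) \propto \int dz_3 \, e^{\,i x p_3 z_3} \, M(z_3, p_3)
% pseudo-PDF: Fourier transform in the Ioffe time \nu = -(zp) at fixed z^2
\mathcal{P}(x, z^2) \propto \int d\nu \, e^{\,-i x \nu} \, \mathcal{M}(\nu, z^2)
```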
"Dissecting the Higgs boson with ATLAS and leptons"
Presented by Quentin Buat, CERN
Using data taken during the first years of the LHC Run 2, ATLAS has firmly established the coupling of the Higgs boson to the tau lepton, thus directly confirming the existence of leptonic Yukawa interactions. In this talk, I will present the cross-section measurements performed by ATLAS in the di-tau final state with a partial Run 2 dataset and discuss the prospects of the analysis with the full Run 2 dataset and beyond. I will also discuss the status of the search for the muonic Yukawa interaction.
""Superconductivity and magnetism at ferroelectric critical point""
Presented by Alexander Balatsky, UConn / Nordita
ISB Bldg. 734 Conf. Rm. 201 (upstairs)
Hosted by: Ilya Drozdov
It is well established that multiple entangled orders emerge in quantum materials at criticality: e.g., superconducting states develop in the vicinity of magnetic phases. I will make the case that similar phenomena occur in quantum paraelectrics. Recent observations on strain and O-18 isotope substitution in doped STO support the view that critical ferroelectric fluctuations play a key role in producing superconductivity. Looking beyond superconductivity, I will illustrate how quantum ferroelectric fluctuations can induce magnetic fluctuations due to the recently proposed phenomenon of dynamic multiferroicity.
"Using High-Resolution Observations to Improve a Low-Resolution Global Climate Model"
Presented by Greg Elsaesser, NASA GISS
Hosted by: Mike Jensen
This talk will begin with an overview of recent developments in the representation of deep convection in the NASA Goddard Institute for Space Studies (GISS) General Circulation Model (GCM). Global satellite remote sensing products are important references for continual GCM development and evaluation, but such products often provide data at coarse temporal and/or spatial resolutions, thus making it difficult to conceptualize and evaluate "process representations" in a GCM. I will discuss the various approaches I am taking to average global satellite retrievals in new ways, coincident with efforts to use new DOE/ARM observations, to derive composite high-resolution evolutions of deep convection and the nearby environment. These depictions will not only inform future development, but are also crucial for ensuring that recently improved mean-state representations are not the result of errors cancelling at the process level.
"Recent Results from COMPASS"
Presented by Ana-Sofia Nunes, BNL
Hosted by: Oleg Eyser
COMPASS is a fixed-target experiment at the CERN SPS that has been collecting data since 2002 and has already been approved to run in 2021. It uses unique beams of naturally polarized muons and unpolarized hadrons of 160, 190 or 200 GeV impinging on polarized and unpolarized proton, isoscalar or heavy targets to study fundamental aspects of QCD, such as the structure of nucleons, hadron spectroscopy and the pion polarizability. The collected data allow measurements of the spin structure of nucleons, not only in the collinear approximation but also via nucleon tomography, either through deeply virtual Compton scattering (DVCS) and deeply virtual meson production (DVMP), which give access to generalized parton distributions (GPDs), or through semi-inclusive deep inelastic scattering (SIDIS) and polarized Drell-Yan (DY), which give access to transverse-momentum dependent parton distribution functions (TMDs). Moreover, hadron multiplicities extracted from semi-inclusive deep inelastic scattering data can be used as input for the computation of fragmentation functions (FFs) in QCD fits. A selection of the latest published and preliminary results of COMPASS on the structure of nucleons and the hadronization of quarks will be presented.
Condensed Matter Physics and Materials Science - The Myron Strongin Seminar
"Disappearance of Superconductivity Due to Vanishing Coupling in the Overdoped High-Temperature Cuprate Superconductors"
Presented by Tonica Valla, BNL
Monday, April 15, 2019, 1:30 pm
Hosted by: Weiguo Yin and Jing Tao
In high-temperature cuprate superconductors, superconductivity is accompanied by a "plethora of orders" and phenomena that may compete or cooperate with superconductivity, but which certainly complicate our understanding of the origins of superconductivity in these materials. While prominent in the underdoped regime, these orders are known to significantly weaken or completely vanish with overdoping. Here, we approach the superconducting phase from the more conventional, highly overdoped side. We present angle-resolved photoemission spectroscopy (ARPES) studies of Bi2Sr2CaCu2O8+d (Bi2212) single crystals cleaved and annealed in ozone to increase the doping all the way to the metallic, non-superconducting phase. We show that the mass renormalization in the antinodal region of the Fermi surface, associated with the structure in the quasiparticle self-energy that possibly reflects the pairing interaction, monotonically weakens with increasing doping and completely disappears precisely where superconductivity disappears. This is direct evidence that in the overdoped regime, superconductivity is determined by the coupling strength. A strong doping dependence and an abrupt disappearance above the transition temperature (Tc) eliminate the conventional phononic mechanism of the observed mass renormalization and identify the onset of spin fluctuations as its likely origin.
"Future Circular Collider"
Presented by Michael Benedikt, CERN
Friday, April 12, 2019, 3:30 pm
Hosted by: George Redlinger & Maria Chamizo Llatas
The global Future Circular Collider Study, launched in 2014 by CERN as host institute, published its conceptual design report at the end of 2018, as input to the update of the European Strategy for Particle Physics. Today, a staged Future Circular Collider (FCC), consisting of a luminosity-frontier highest-energy electron-positron collider (FCC-ee) followed by an energy-frontier hadron collider (FCC-hh), promises the most far-reaching physics program for the post-LHC era. FCC-ee is a precision instrument to study the Z, W, Higgs and top particles, and offers unprecedented sensitivity to signs of new physics. Most of the FCC-ee infrastructure can later be reused for the subsequent hadron collider, FCC-hh. The FCC-hh provides proton-proton collisions at a centre-of-mass energy of 100 TeV and can directly produce new particles with masses of up to several tens of TeV. This collider will also measure the Higgs self-coupling and explore the dynamics of electroweak symmetry breaking. Heavy-ion collisions and ep collisions (FCC-eh) further contribute to the breadth of the overall FCC program. The integrated FCC infrastructure will serve the particle physics community through the end of the 21st century. This presentation will summarize the conceptual designs of FCC-ee and FCC-hh, covering the machine concepts, the R&D for key technologies, infrastructure planning, initial considerations for the experiments, and a possible implementation schedule.
NT / RIKEN Seminar
"A Complex Path Around the Sign Problem"
Presented by Paolo Bedaque, U Maryland
The famous "sign problem" is the main roadblock in the path to a Monte Carlo solution of QCD at finite densities and the study of real time dynamics. We review a recent developed approach to this problem based on deforming the domain of integration of the oath integral into complex field space. After discussing the math involved in the complex analysis of multidimensional spaces we will talk about the advantages/disadvantages of using Lefschetz thimbles, "learnifolds" and "optimized manifolds" as the alternative integration manifold as well as the algorithms that go with them. Several examples of lower dimensional field theories will be presented.
"Neutrino Interaction Modeling and Tuning"
Presented by Libo Jiang, University of Pittsburgh
GENIE is a well-known event generator that provides the simulation of neutrino interactions and performs a highly developed global analysis of neutrino scattering. It handles all neutrinos and targets, and all processes relevant from MeV to PeV energy scales. I will present the modelling of neutrino interactions and results of tuning against experimental data.
"Tailoring electronic and thermal properties of bulk Cu26T2(Ge,Sn)6S32 colusite through defects engineering and functionalization of the conductive network"
Presented by Emmanuel Guilmeau, CRISMAT Laboratory, Caen, France
Hosted by: Qiang Li
A complete study of the structure and thermoelectric properties of colusite Cu26T2(Ge,Sn)6S32 (T = V, Cr, Mo, W) is presented. A brief introduction provides a state-of-the-art survey of thermoelectric sulfides, with a special focus on the relationship between structural features and transport properties in Cu-based sulfides. In the first part of this presentation, we highlight the key role of the densification process on the formation of short-to-medium range structural defects in Cu26V2Sn6S32 [1]. A simple and powerful way to adjust the carrier concentration, combined with enhanced phonon scattering through point defects and disordered regions, is described. By combining experiments with band structure and phonon calculations, we elucidate, for the first time, the underlying mechanisms at the origin of the intrinsically low thermal conductivity in colusite samples, as well as the effect of S vacancies and antisite defects on the carrier concentration. In the second part, we demonstrate the spectacular role of the substitution of V5+ by hexavalent T6+ cations (Cr, Mo and W) on the electronic properties, leading to high power factors [2]. In particular, Cu26Cr2Ge6S32 shows a value of 1.53 mW m-1 K-2 at RT that reaches a maximum value of 1.94 mW m-1 K-2 at 700 K. The rationale is based on the concept of a conductive "Cu-S" network, which in colusites corresponds to the more symmetric parent sphalerite structure. The interactions within the mixed octahedral-tetrahedral [TS4]Cu6 complexes are shown to be responsible for the outstanding electronic transport properties. [1] C. Bourgès et al., J. Amer. Chem. Soc. 140 (2018) 2186 [2] V. Pavan Kumar et al., Adv. Energy Mater. 9 (2019) 1803249
"Simulating Mixed-Phase Clouds at High Latitudes: Model Evaluation, Improvement, and Interactions with Aerosol"
Presented by Xiahong Liu, Univ. Wyoming
Hosted by: Damao Zhang
Mixed-phase clouds are frequently observed in the Arctic and Antarctic and over the Southern Ocean, and have important impacts on the surface energy budget and regional climate. Global climate models (GCMs), an important tool for studying climate change, still have large biases in simulating mixed-phase cloud properties, including the supercooled liquid amount and the liquid-ice phase partitioning. In this talk, I will present our recent work on mixed-phase clouds: (1) improving the representations of subgrid mixing and partitioning between cloud liquid and ice in mixed-phase clouds in the DOE's Energy Exascale Earth System Model (E3SM); model simulations are evaluated against observational data obtained in the DOE Atmospheric Radiation Measurement (ARM) Program's field campaigns and long-term ground-based multi-sensor measurements; and (2) investigating the effects of aerosols, including dust and sea spray aerosol, on mixed-phase clouds. We found that dust, acting as ice nucleating particles (INPs), induces a global net warming via its indirect effect on mixed-phase clouds, with a predominant warming in the NH midlatitudes and a cooling in the Arctic. INP sources from sea spray aerosol vary with time and geographic location, with the maximum contribution in the marine boundary layer over the Southern Ocean, where dust has a limited influence. Modeled INP concentrations are compared with observations from different campaigns (e.g., MARCUS, SOCRATES, CAPRICORN).
"Importance of electron interactions in understanding the photo-electron spectroscopy and the Weyl character of MoTe2"
Presented by Niraj Aryal, Florida State University
ISB Bldg. 734 Conference Room 201 (upstairs)
Hosted by: Weiguo Yin
Weyl semimetals are crystalline materials that host pairs of chiral Weyl Fermions (WFs) as low energy excitations. Such WFs act as sources and sinks of Berry curvature and can contribute to many exotic transport properties. Recently, inversion symmetry broken transition metal dichalcogenide materials like MoTe2 and WTe2 have been predicted to host type-II WFs by DFT calculations and ARPES experiments. However, quantum oscillation experiments (QOE) disagree with the DFT calculations, thus raising doubt about the existence of Weyl physics in these materials [1]. In order to address this discrepancy, we studied the role of electron interactions in Td-MoTe2 by employing DFT with the onsite Coulomb repulsion (Hubbard U) for the Mo 4d states included within the DFT+U scheme. We found that, in addition to explaining the QOE, the inclusion of electron interactions is needed to explain the light-polarization dependence measured by ARPES [2]. We also found that while the number of Weyl points (WPs) and their position in the Brillouin Zone change as a function of U, a pair of such WPs very close to the Fermi level survives the inclusion of these important corrections. Our calculations suggest that the Fermi surface of Td-MoTe2 is in the vicinity of a correlations-induced Lifshitz transition which can be probed experimentally. If time allows, I will also briefly present our study of the interface between topological insulators and non-topological materials, which is important for band engineering and studying emergent fundamental phenomena. References [1] D. Rhodes, R. Schonemann, N. Aryal, Q.R. Zhou et al., Bulk Fermi surface of the Weyl type-II semimetallic candidate γ-MoTe2, Phys. Rev. B 96, 165134 (2017). [2] N. Aryal and E. Manousakis, Importance of electron correlations in understanding the photo-electron spectroscopy and the Weyl character of MoTe2, Phys. Rev. B 99, 035123 (2019).
"The Search for the dark vector boson"
Presented by Diallo Boye, BNL
Hosted by: Alessandro Tricolli
Hidden sector or dark sector states appear in many extensions of the Standard Model, to provide a particle candidate for dark matter in the universe or to explain astrophysical observations such as the positron excess observed in the cosmic radiation flux. A hidden or dark sector can be introduced with an additional U(1)d dark gauge symmetry. The discovery of the Higgs boson during Run 1 of the Large Hadron Collider opens a new and rich experimental program based on the Higgs portal. This discovery route uses couplings to the dark sector at the Higgs level, which were not experimentally accessible before. These searches use the possible exotic decays H -> Z Zd -> 4l and H -> Zd Zd -> 4l, where Zd is a dark vector boson. We have experience of this search from the Run 1 period of the LHC using the ATLAS detector at CERN. Those results showed (tantalizingly) two signal events where none were expected, although by the strict criteria of high energy physics the result was not yet statistically significant. The Run 1 analysis at 8 TeV collision energy is further developed in Run 2 at 13 TeV collision energy, to expand the search area and take advantage of higher statistics, a higher Higgs production cross section, and the substantially better performance of the ATLAS detector. The analysis is extended to search for heavier scalars decaying to dark vector bosons.
CFN Special Colloquium
"Discovering novel materials, and novel physics, with first-principles"
Presented by Nicola Marzari, École Polytechnique Fédérale de Lausanne (EPFL)
CFN, Bldg 735, 2nd Floor Seminar Room
First-principles simulations are one of the greatest accelerators in the world of science and technology. To provide some context, one could mention that 30,000 papers on density-functional theory are published every year; that 12 of these are in the top-100 most-cited papers in the entire history of science, engineering, and medicine; or that the doubling in capacity every 14 months has been the underwriter of computational science for the past 30 years. I'll highlight some of my own scientific, structural, and policy perspectives on this, taking as a case study the discovery of novel two-dimensional materials and of their properties and applications. I'll then argue how the need to calculate materials properties often forces a critical evaluation of some stalwarts of condensed-matter physics: in this case, learning that phonons are just a high-temperature approximation for heat carriers, or discovering that the Boltzmann transport equation can be generalized to describe simultaneously the propagation and interference of phonon wavepackets, thus unifying the description of thermal transport in crystals and glasses. Bio: Nicola Marzari holds the chair of Theory and Simulation of Materials at the École Polytechnique Fédérale de Lausanne, where he is also the director of the Swiss National Centre for Competence in Research MARVEL, on Computational Design and Discovery of Novel Materials (2014-26). Previous tenured appointments include the Toyota Chair for Materials Engineering at the Massachusetts Institute of Technology and the first Statutory (University) Chair of Materials Modelling at the University of Oxford, where he was also the director of the Materials Modelling Laboratory. He is the current chairman of the Psi-k Charity and Board of Trustees, and holder of an Excellence Chair at the University of Bremen.
BWIS Sponsored Event
"How Beauty Leads Physics Astray"
Presented by Sabine Hossenfelder, Frankfurt Institute for Advanced Studies, Germany
Tuesday, April 9, 2019, 5 pm
Hosted by: Vivian Stojanoff
To develop fundamentally new laws of nature, theoretical physicists often rely on arguments from beauty. Simplicity and naturalness in particular have been strongly influential guides in the foundations of physics ever since the development of the standard model of particle physics. In this lecture I argue that arguments from beauty have led the field into a dead end and discuss what can be done about it.
"Do Women Get Fewer Citations Than Men?"
Hosted by: Berndt Mueller
I will talk about the results of a citation analysis on publication data from arXiv and INSPIRE in which we explored gender differences. I will further explain how we can use bibliometric analysis to improve the efficiency of knowledge discovery.
""Charge and lattice entanglement in quantum materials observed by TEM: Tb2Cu0.83Pd0.17O4 and Cu2S""
Presented by Wei Wang, Institute of Physics, Chinese Academy of Sciences, Beijing
Plenty of physical properties in strongly electron correlated systems are thought to arise from the intricate interplay among charge, spin, orbital and lattice degrees of freedom. Understanding the structural origin of these functionalities, such as superconductivity, multiferroics, etc., has attracted tremendous attention for decades. Using electron diffraction techniques in TEM, we recently studied the modulated structure in the Tb2Cu0.83Pd0.17O4 compound and the phase transition in Cu2S. After a brief introduction to the TEM techniques that I have employed for the study, I will report observations of electron-beam-induced smectic-nematic phase transitions in Tb2Cu0.83Pd0.17O4. Electron diffraction and HAADF-STEM images indicate a superlattice structure with Cu/Pd displacements perpendicular to the Cu-O plane on Cu sites. In addition, the superlattice modulation undergoes a reversible smectic-nematic phase transition under electron beam illumination. Our in situ TEM results imply that the modulated structure is rooted in a charge ordering at Cu sites. Then I will switch to an ongoing study of Cu2S at high temperature. Previous reports show that the crystal structure of Cu2S can be manipulated by electron beam illumination, suggesting a strong coupling between charge and lattice. To explore the structural phase transition and to better understand the superionic behavior in this material, we focus on diffuse scattering in the electron diffraction patterns obtained at high temperatures, which results from short range ordering of Cu atoms. Electron diffraction tomography data were collected in order to reconstruct the real-space structure of the Cu atoms. Preliminary results will be shown, followed by a discussion of early-stage interpretations.
High Energy Theory Seminar
"Dark Matter — or What?"
Tuesday, April 9, 2019, 11 am
In this talk I will explain (a) what observations speak for the hypothesis of dark matter, (b) what observations speak for the hypothesis of modified gravity, and (c) why it is a mistake to insist that either hypothesis on its own must explain all the available data. The right explanation, I will argue, is instead a suitable combination of dark matter and modified gravity, which can be realized by the idea that dark matter has a superfluid phase.
"The Color Glass Condensate density matrix: Lindblad evolution, entanglement entropy and Wigner functional"
Presented by Alex Kovner, U Connecticut
We introduce the notion of the Color Glass Condensate (CGC) density matrix ρ̂ . This generalizes the concept of probability density for the distribution of the color charges in the hadronic wave function and is consistent with understanding the CGC as an effective theory after integration of part of the hadronic degrees of freedom. We derive the evolution equations for the density matrix and show that it has the celebrated Kossakowsky-Lindblad form describing the non-unitary evolution of the density matrix of an open system. Additionally, we consider the dilute limit and demonstrate that, at large rapidity, the entanglement entropy of the density matrix grows linearly with rapidity according to dSe/dy=γ, where γ is the leading BFKL eigenvalue. We also discuss the evolution of ρ̂ in the saturated regime and relate it to the Levin-Tuchin law and find that the entropy again grows linearly with rapidity, but at a slower rate. Finally we introduce the Wigner functional derived from this density matrix and discuss how it can be used to determine the distribution of color currents, which may be instrumental in understanding dynamical features of QCD at high energy.
"Views and news on chiral transport"
Presented by Karl Landsteiner
Thursday, April 4, 2019, 4 pm
I present an effective action approach to chiral transport. The Chiral Magnetic and Chiral Vortical Effects are treated in exact parallel and result in the known dependence on chemical potential and temperature. The approach sheds light on some of the more obscure features of chiral transport, such as covariant and consistent anomalies and a seeming mismatch of the derivative expansion. As a related application, I will comment on the thermal Hall effect in 2D topological insulators. Then I discuss a new example of chiral transport: anomalous Hall viscosity at the quantum critical point of the Weyl-semimetal/insulator transition. Results from a holographic model will be compared to a weak coupling quantum field theory analysis.
"Flavour physics with dynamical chiral fermions"
Presented by Peter Boyle, University of Edinburgh
I discuss recent simulations with dynamical chiral fermions. In particular I focus on neutral kaon mixing amplitudes in and beyond the standard model, calculated for the first time with physical quark masses. A puzzle in the non-perturbative renormalisation of the BSM operators is resolved. The prospects for extension of this calculation to B-meson mixing amplitudes are discussed, and initial results for the standard model B mixing amplitudes are presented. Prospects for future calculations over the next five years are considered.
"Prospects on nucleon tomography"
Presented by Herve Moutard, Université Paris-Saclay
Hosted by: Salvatore Fazio
Much attention has been devoted in recent years to the three-dimensional quark and gluon structure of the nucleon. In particular the concept of Generalized Parton Distributions promises an understanding of the generation of the charge, spin, and energy-momentum structure of the nucleon by its fundamental constituents. Forthcoming measurements with unprecedented accuracy at Jefferson Lab and at a future electron-ion collider will presumably challenge our quantitative description of the three-dimensional structure of hadrons. To fully exploit these future experimental data, new tools and models are currently being developed. After a brief reminder of what makes Generalized Parton Distributions a unique tool to understand the nucleon structure, we will discuss the constraints provided by the existing measurements and review recent theoretical developments. We will explain why these developments naturally fit in a versatile software framework, named PARTONS, dedicated to the phenomenology and theory of GPDs.
"Topological semimetals predicted from first-principles and theoretical approaches"
Presented by Jiawei Ruan, School of Physics, Nanjing University, China
Monday, April 1, 2019, 11 am
Building 734, Seminar Room 201
Weyl semimetals are new states of matter which feature novel Fermi arcs and exotic transport phenomena. Based on first-principles calculations, we report that the HgTe-class materials [1] as well as four chalcopyrites [2] are ideal Weyl semimetals, having largely separated Weyl points and uncovered Fermi arcs that are amenable to experimental detections. We also construct a minimal effective model to capture the low-energy physics of this class of Weyl semimetals. Our discovery is a major step toward a perfect playground of intriguing Weyl semimetals and potential applications for low-power and high-speed electronics. Besides the ideal Weyl semimetals, I will talk about Non-Hermitian nodal-line semimetals with an anomalous bulk-boundary correspondence [3]. I will also present recent results of saddle surface in topological materials and a new method to construct a simplified tight-binding model based on group theory analysis. [1] JR et al., Nature communications 7, 11136 (2016). [2] JR et al., PRL 116, 226801 (2016). [3] H. Wang, JR, and H. Zhang, PRB 99, 075130 (2019).
"Toward a unified description of both low and high ptparticle production in high energy collisions"
Presented by Jamal Jalilian-Marian, Baruch College, City University of New York
Friday, March 29, 2019, 2 pm
Inclusive particle production at high p_t is successfully described by perturbative QCD using collinear factorization formalism with DGLAP evolution of the parton distribution functions. This formalism breaks down at small Bjorken x (high energy) due to high gluon density (gluon saturation) effects. The Color Glass Condensate (CGC) formalism is an effective action approach to particle production at small Bjorken x (low p_t) which includes gluon saturation. The CGC formalism nevertheless breaks down at intermediate/large Bjorken x, corresponding to the high p_t kinematic region in high energy collisions. Here we describe the first steps taken towards the derivation of a new formalism, with the ultimate goal of having a unified formalism for particle production at both low and high p_t in high energy hadronic/heavy ion collisions.
High Energy / Nuclear Theory / RIKEN Seminars
"Lattice Workshop for US -Japan Intensity Frontier Incubation (1/1)"
"Quantum Information Science Landscape, Vision, and NIST"
Presented by Carl Williams, NIST
Tuesday, March 26, 2019, 3:30 pm
Hosted by: Andrei Nomerotski
The first part of the colloquium will provide an overview of the United States government's interest in quantum information science, from the National Strategic Overview for Quantum Information Science, which established the policy objectives for this administration, to the National Quantum Initiative Act, which formalizes parts of this strategy for key civilian science agencies. This portion of the talk will conclude by placing the United States strategy in the global context and describing how the United States plans to establish the foundation for the quantum 2.0 economy. The second part of the colloquium will begin with a high-level overview of NIST and of NIST's interest in Quantum Information Science, before talking briefly about some interesting highlights from NIST laboratories. Moving on from the highlights, the talk will explore ongoing and future metrological applications, followed by some hypothetical conjectures about future technological applications, with a focus on how quantum information science and its technology may impact fundamental physics, from exploring potential time variation of fundamental constants to future probes of dark matter and gravitational waves.
Tuesday, March 26, 2019, 9 am
"Deep learning at the edge of discovery at the LHC"
Presented by Javier Duarte, FNAL
Monday, March 25, 2019, 2:30 pm
The discovery of the Higgs boson at the Large Hadron Collider in 2012 opened a new sector for exploration in the standard model of particle physics. Recent developments, including the use of deep learning to identify a complex but common decay of the Higgs boson to bottom quarks, have expanded our ability to study the production of Higgs bosons with very large momenta. By studying these Higgs bosons and measuring their momentum spectrum, we may be able to discover new physics at very high energy scales inaccessible directly at the LHC. I will explain these searches and the direction that deep learning is taking in particle physics, especially how it's changing the way we think about the trigger, event reconstruction, and our computing paradigm.
Monday, March 25, 2019, 9 am
CANCELED - NT/RIKEN Seminar
Presented by Alex Kovner, University of Connecticut
"Installation and preliminary results from ProtoDUNE Single Phase experiment at CERN"
Presented by Maura Spanu, BNL
Thursday, March 21, 2019, 3 pm
DUNE is a leading-edge, international experiment for neutrino science and proton decay searches. Its ambitious physics program requires careful prototyping of the engineering solutions envisaged for the scale-up of the LArTPC technology, as well as careful control of the systematics through a deep knowledge of the detector response and performance. ProtoDUNE is an extensive prototyping program under development at the CERN Neutrino Platform facility, with the aim of answering the open questions about the DUNE design. The Single Phase prototype (ProtoDUNE-SP) was assembled in the EHN1 extension at CERN between 2016 and 2018 and successfully took its first beam data from a dedicated SPS tertiary line from September to November 2018.
"Development of LArTPC for Neutrino Physics"
Presented by Xin Qian, BNL
Liquid Argon Time Projection Chamber (LArTPC), with its mm-scale position resolution and the full-active-volume imaging-aided calorimetry, is an excellent device to detect accelerator neutrinos at GeV energy range. This technology may hold the key to search for new CP violation in the lepton sector, to determine the neutrino mass hierarchy, to search for baryon number violation, and to search for sterile neutrino(s). In this talk, I will review the existing achievements and current status of the detector development.
"Modified Structure of Protons and Neutrons in Correlated Pairs"
Presented by Dr. Barak Schmookler, Center for Frontiers in Nuclear Science, Stony Brook University
Tuesday, March 19, 2019, 11 am
It has been known for several decades that the inelastic structure of the nucleon is modified by the presence of the nuclear medium. This modification is called the EMC effect. However, there is still no consensus as to the underlying QCD-based quark-gluon dynamics driving the effect. One approach to describe the EMC effect is to slightly modify the structure of all the nucleons in the nucleus. Recent evidence, however, suggests that the EMC effect may arise due to two-nucleon Short Range Correlations (SRC), which are pairs of nucleons close together in the nucleus. If this is true, it implies that nucleons are largely unmodified most of the time, but have their structure significantly modified when they temporarily fluctuate into SRC pairs. In this presentation, I will discuss the experimental evidence linking the EMC effect to two-nucleon SRCs. I will then describe a new data-driven phenomenological model of the EMC effect based on neutron-proton SRC pairs, and I will show that this model can consistently describe the effect across nuclei.
"Baryons as Quantum Hall Droplets"
Presented by Zohar Komargodski, Simons Center, Stony Brook
We revisit the problem of baryons in the large N limit of Quantum Chromodynamics. A special case in which the theory of Skyrmions is inapplicable is one-flavor QCD, where there are no light pions to construct the baryon from. More generally, the description of baryons made out of predominantly one flavor within the Skyrmion model is unsatisfactory. We propose a model for such baryons, where the baryons are interpreted as quantum Hall droplets. An important element in our construction is an extended, 2+1 dimensional, meta-stable configuration of the η′ particle. Baryon number is identified with a magnetic symmetry on the 2+1 dimensional sheet. If the sheet has a boundary, there are finite energy chiral excitations which carry baryon number. These chiral excitations are analogous to the electron in the fractional quantum Hall effect. Studying the chiral vertex operators we are able to determine the spin, isospin, and certain excitations of the droplet. In addition, balancing the tension of the droplet against the energy stored at the boundary we estimate the size and mass of the baryons. The mass, size, spin, isospin, and excitations that we find agree with phenomenological expectations.
Joint Nuclear/High Energy Physics Seminar
"Precision measurements of fundamental interactions with (anti)neutrinos"
Presented by Professor Roberto Petti, University of South Carolina
"Neutron scattering study of strongly correlated systems"
Presented by Yao Shen, Fudan University, China
In strongly correlated systems, interactions between various microscopic degrees of freedom with similar energy scales can induce strong competition and frustration, leading to exotic phenomena. Here we use the neutron scattering technique to study several strongly correlated systems and show how the competition and interplay between these degrees of freedom can induce different phases and properties. 1) In the pressure-induced superconductor CrAs, the competition between various magnetic interactions leads to a noncollinear helimagnetic order. In addition, CrAs exhibits a spin reorientation at a critical pressure (Pc ~ 0.6 GPa), which is accompanied by a lattice anomaly and coincides with the emergence of bulk superconductivity, indicating the strong interplay between magnetic, structural and electronic degrees of freedom. 2) FeSe, the structurally simplest iron-based superconductor, shows nematic order at 90 K but no magnetic order in the parent phase. Our neutron scattering experiments reveal both stripe and Neel spin fluctuations that are coupled to the nematicity. The competition between these two phases suppresses the magnetic order and drives the system into a nematic quantum-disordered paramagnet. A similar phenomenon is observed in YFe2Ge2, in which the magnetic order is suppressed by the competition between a stripe-type AFM phase and an in-plane FM phase. 3) In the heavily electron-doped FeSe-based superconductor Li0.8Fe0.2ODFeSe (Tc = 41 K), a twisted dispersion of spin excitations is observed, which may be caused by the competition between itinerant and local electrons, analogous to the hole-doped cuprates which host remarkably high Tc as well. 4) In the two-dimensional triangular lattice antiferromagnet YbMgGaO4, due to the strong spin-orbit coupling and crystalline electric field (CEF), the low-lying crystal field ground state is a Kramers doublet. The geometric frustration is enhanced by the anisotropic interactions, and a quantum spin liquid ground state has been proposed.
PubSci
"PubSci: Big Bang Physics and the Building Blocks of Matter"
The Snapper Inn 500 Shore Dr, Oakdale, NY 11769
"The PROSPECT Antineutrino Detector and Early Physics Results"
Presented by Xianyi Zhang, Illinois Institute of Technology
Thursday, March 7, 2019, 1:30 pm
PROSPECT, the Precision Reactor Oscillation and SPECTrum experiment, is a short-baseline reactor antineutrino experiment. The PROSPECT antineutrino detector is an optically segmented liquid scintillator detector deployed ~7 m from a highly enriched U-235 reactor. The detector was designed to investigate discrepancies in the reactor antineutrino flux and spectrum by model-independently probing eV-scale sterile neutrino oscillations from a nuclear fission reactor, as well as by precisely measuring the U-235 antineutrino spectrum. Multi-segment particle scattering in the PROSPECT detector poses a particular challenge in characterizing the scintillator nonlinearity. This talk details the energy scale study for PROSPECT, with data-MC comparisons of detector calibrations. The detector construction, commissioning, and early physics measurements are also presented.
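For background only, a common parameterization of scintillator light-yield nonlinearity is Birks' law; the PROSPECT energy model is not necessarily limited to this form, and the symbols below are generic:

```latex
% Birks' law: light yield per unit path length saturates at large dE/dx
% S: scintillation efficiency, k_B: Birks' constant
\frac{dL}{dx} = \frac{S \, \frac{dE}{dx}}{1 + k_B \, \frac{dE}{dx}}
```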
"Neutrino cross sections"
Presented by Callum Wilkinson, University of Bern
Tuesday, March 5, 2019, 2 pm
Current and planned neutrino oscillation experiments operate in the 0.1-10 GeV energy regime and use a variety of nuclear targets. At these energies, the neutrino cross section is not well understood: a variety of interaction processes are possible and nuclear effects play a significant role. This talk will give an overview of the state of neutrino cross sections, and explore their relationship with neutrino oscillation experiments.
"Measurement of LAr purity using Cosmic Muons"
Presented by Monica Nunes, IFGW/UNICAMP, Brazil
Monday, March 4, 2019, 3 pm
3-209B, Bldg. 510
LArIAT is an experiment based on a LArTPC aiming to study relevant cross sections of charged particles on argon, as well as to develop the related instrumentation. Charged particles crossing the detector excite and ionize the argon atoms, and the electrons generated by the ionization drift in an electric field towards the anodic wire planes of the TPC. With electronegative impurities in the liquid argon, the amount of charge collected by the wires is reduced, affecting the quality of the results obtained in the experiment. Cosmic muons that cross the TPC between beam spills are a valuable tool for measuring the liquid argon purity inside the detector. With the electron lifetime obtained from the cosmics-based purity analysis, it is possible to correct the data used for all other studies based on the charge collected by the LArTPC. In this seminar, I'll present the method and the results obtained in the LArIAT experiment. I'll also present other tasks that I performed on LArIAT during my PhD research.
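For context, the generic attenuation relation behind such a purity correction (not necessarily LArIAT's exact implementation) is:

```latex
% charge surviving a drift time t_drift for electron lifetime \tau_e,
% and the corresponding correction back to the deposited charge Q_0
Q_{\mathrm{meas}} = Q_0 \, e^{-t_{\mathrm{drift}}/\tau_e}
\quad\Longrightarrow\quad
Q_0 = Q_{\mathrm{meas}} \, e^{+t_{\mathrm{drift}}/\tau_e}
```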
Joint NT/RIKEN/CFNS Seminar
"Measuring color memory in a color glass condensate"
Presented by Ana-Maria Raclariu, Harvard University
Thursday, February 28, 2019, 4 pm
Building 510, Room 2-38 CFNS Seminar Room
"Status and physics potential of the JUNO experiment"
Presented by Zeyuan Yu, Institute of High Energy Physics, China
The Jiangmen Underground Neutrino Observatory (JUNO) is a 20 kton multi-purpose liquid scintillator detector with an unprecedented energy resolution of 3% at 1 MeV, being built in a dedicated underground laboratory in China and expected to start data taking in 2021. The main physics goal of the experiment is the determination of the neutrino mass ordering with a significance of 3-4 sigma within six years of running, using electron anti-neutrinos coming from two nuclear power plants at a baseline of about 53 km. Beyond this fundamental question, JUNO will also have a very rich physics program, including the precise measurement at the sub-percent level of the solar neutrino oscillation parameters, the detection of low-energy neutrinos coming from galactic core-collapse supernovae, the diffuse supernova neutrino background, the Sun, and the Earth (geo-neutrinos), as well as proton decay searches. This talk will give an overview of the JUNO physics potential and the current status of the project.
"Precision measurement of neutrinos at Hyper-Kamiokande"
Presented by Akira Konaka, TRIUMF
Tuesday, February 26, 2019, 3:30 pm
Hyper-Kamiokande (HyperK) is a water Cherenkov neutrino detector whose construction in Japan was recently approved. The fiducial mass is 187 kton, eight times larger than that of the Super-Kamiokande detector. The upgraded J-PARC accelerator located 295 km away will provide high-intensity neutrino and anti-neutrino beams tuned at the oscillation maximum. In this talk, I will describe the challenges of the systematic uncertainties in future neutrino oscillation experiments and how HyperK plans to address them. In addition to the observation of CP violation, Hyper-Kamiokande will explore directions that may become the main research topics in the future if something new is discovered: precision neutrino oscillations to test the unitarity of the lepton flavour mixing, neutrino astronomy, such as supernova neutrinos and searches for astrophysical point sources of neutrinos, and searches for phenomena beyond the standard model, such as dark matter and nucleon decays.
"Unconventional superconductivity and complex tensor order in half-Heusler superconductors"
Presented by Igor Boettcher, University of Maryland
Hosted by: Laura Classen
A revolutionary new direction in the field of superconductivity emerged recently with the synthesis of superconductors with strong inherent spin-orbit coupling such as the half-Heusler alloys. Due to band inversion, the low-energy degrees of freedom are electrons at a three-dimensional quadratic band touching point with an effective spin 3/2, which allows for higher-spin Cooper pairing and potentially topological superconductivity. I will illuminate some possibilities for unconventional superconductivity in this system, in particular a novel superconducting quantum critical point and the transition into a phase with complex tensor order, which is a superconducting state captured by a complex second-rank tensor valued order parameter describing Cooper pairs having spin 2. Here the interplay of both tensorial and complex nature results in a rich and intriguing phenomenology. I will highlight how optical response measurements can shed light on the phase structure of individual compounds.
"Quantum Chaos, Wormholes and the Sachdev-Ye-Kitaev Model"
Friday, February 22, 2019, 2 pm
2-38 CFNS Seminar Room
The Sachdev-Ye-Kitaev (SYK) model has a long history in nuclear physics, where its precursor was introduced as a model for the two-body nuclear interaction to describe the spectra of complex nuclei. Most notably, its level density is given by the Bethe formula and its level correlations are consistent with chaotic motion of the nucleons. Recently, this model received a great deal of attention as a solvable model for the quantum states of a black hole, exactly because of these properties. In this lecture we introduce the SYK model from a nuclear physics perspective and discuss its chaotic nature and its relation with black hole physics. We end with a summary of recent work on two SYK models coupled by a spin-spin interaction as a model for wormholes.
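For reference, the Bethe formula mentioned above gives a level density that grows exponentially with the square root of the excitation energy (a is the level-density parameter; prefactors are omitted here):

```latex
\rho(E) \propto \exp\!\bigl(2\sqrt{aE}\bigr)
```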
"Probing New Physics and the Nature of the Higgs Boson at ATLAS"
Presented by Lailin Xu, University of Michigan
The long-sought Higgs boson discovered at the LHC completes the Standard Model of particle physics. During the last six years, substantial achievements have been made in probing the nature of the Higgs boson. Particle physics is, however, at an impasse: deep mysteries of electroweak symmetry breaking remain unanswered, and long-awaited new physics phenomena beyond the SM have not shown up yet. In this talk, I start with a brief overview of the current profile of measurements of the Higgs boson properties and couplings. I then present Higgs measurements in the four-lepton channel, and how we use the Higgs boson as a portal in the quest for new physics. In the end, I discuss the prospects of Higgs measurements, including the Higgs self-coupling, at future colliders.
"Physics education research in higher education: What can we learn from the top cited papers in the Physical Review?"
Presented by Charles Henderson, Western Michigan University
The journal Physical Review Physics Education Research was started in 2005 as the archival research journal for the field of Physics Education Research (PER). In this talk I will identify some important findings from the field of PER based on highly cited articles from the journal. For example, there is strong evidence that in typical physics courses many students do not learn the core concepts of the discipline; student beliefs about physics become less expert-like; and there is a significant gender gap, with men outperforming women. Many PER-based instructional strategies can improve student knowledge and some instructional strategies can improve student beliefs. However, implementation of these strategies is low because the field often uses ineffective dissemination strategies.
": Electronic structure of d-metal systems as revealed by ab initio modeling of resonant inelastic X-ray scattering"
Presented by Lei Xu, Leibniz Institute for Solid State and Materials Research Dresden, Germany
Friday, February 15, 2019, 11 am
I will present our work on the theoretical investigation of the electronic structure, magnetic interactions and resonant inelastic X-ray scattering (RIXS) in 3d or 4d-5d transition metal (TM) compounds by using wave-function-based many-body quantum chemistry (QC) methods. My presentation contains two parts. In the first part, I will discuss the magnetic properties of 4d and 5d TM ions with a formally degenerate t2g^1 electron configuration in the double-perovskite (DP) materials Ba2YMoO6, Ba2LiOsO6 and Ba2NaOsO6. Our analysis indicates that the sizable magnetic moments and g-factors found experimentally are due to both strong TM d – ligand p hybridization and dynamic Jahn-Teller effects. Our results also point out that cation charge imbalance in the DP structure allows a fine tuning of the gap between the t2g and eg levels. In another example of a t2g^1 electron configuration, the spin-Peierls (SP) compound TiPO4, we assign excitation peaks of the experimental RIXS spectra and find that the d^1 ground state is composed of an admixture of dz2 and dxz orbital character. In the second part, I will discuss a computational scheme for computing intensities as measured in X-ray absorption and RIXS experiments. We take into account the readjustment of the charge distribution in the 'vicinity' of an excited electron for the modeling of RIXS. The computed L3-edge RIXS spectra for Cu2+ 3d^9 ions in KCuF3 and for Ni2+ 3d^8 ions in La2NiO4 reproduce trends found experimentally for the incoming-photon incident-angle and polarization dependence.
"Measuring CCQE-like Cross Sections in MINERvA: when statistics meet precision"
Presented by Mateus F. Carneiro, Oregon State University
MINERvA is a detector built to measure neutrino-nucleus cross sections. As we move towards more precise measurements, cross sections are of extreme importance to the future of neutrino physics. This talk will walk through all the steps necessary to simulate, select signal for, and measure a CCQE cross section, while testing different nuclear models. New preliminary MINERvA results using the new configuration of the NuMI beam will be presented.
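As a reminder of the ingredients such a measurement involves, a generic flux-integrated cross-section expression (not MINERvA's exact analysis formula) reads:

```latex
% N_sel: selected events, N_bkg: estimated background, \epsilon: selection efficiency,
% \Phi: integrated neutrino flux, T: number of target nucleons
% (unfolding to true kinematics is applied in practice)
\sigma = \frac{N_{\mathrm{sel}} - N_{\mathrm{bkg}}}{\epsilon \, \Phi \, T}
```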
"Chiral Photocurrents and Terahertz Emission in Dirac and Weyl Materials"
Presented by Mr. Sahal Kaushik, Stony Brook University
Thursday, February 14, 2019, 12 pm
Recently, chiral photocurrents have been observed in Weyl materials. We propose a new mechanism for photocurrents in Dirac materials in the presence of magnetic fields that does not depend on any asymmetries of the crystal. This Chiral Magnetic Photocurrent would be an independent probe of the chiral anomaly. We also discuss an observation of terahertz emission with tunable ellipticity in the Weyl material TaAs, due to chiral photocurrents induced by an ultrafast near-infrared laser.
"Resonant inelastic X-ray scattering (RIXS) as a probe of exciton-phonon coupling"
Presented by Andrey Geondzhian, European Synchrotron Radiation Facility (ESRF), France (UTC+1)
Thursday, February 14, 2019, 9:30 am
Phonons contribute to resonant inelastic X-ray scattering (RIXS) as a consequence of the coupling between electronic and lattice degrees of freedom. Unlike other techniques that are sensitive to electron-phonon interactions, RIXS can give access to momentum-dependent coupling constants. This information is highly desirable in the context of understanding anisotropic conventional and unconventional superconductivity. In my talk, I will consider the phonon contribution to RIXS from the theoretical point of view. In contrast to previous studies, we emphasize the role of the core-hole lattice coupling. Our model, with parameters obtained from first principles, shows that even in the case of a deep core-hole, RIXS probes exciton-phonon coupling rather than a direct electron-phonon coupling. Further, to address the needs of a predictive approach and overcome the limitations of model studies, we developed a Green's function formalism to capture electron-phonon contributions to RIXS and other core-level spectroscopies (X-ray photoemission spectroscopy (XPS), X-ray absorption spectroscopy (XAS)). Our approach is based on the cumulant expansion of the Green's function combined with vibrational coupling constants calculated from many-body theory. In the case of XAS and RIXS, we use a two-particle exciton Green's function, which accounts implicitly for particle-hole interference effects that have previously proved difficult. Finally, to demonstrate the methodology, we successfully applied our formalism to small molecules, for which unambiguous experimental data exist.
"Phase transition in functional materials and structural dynamics as studied by UTEM"
Presented by Ming Zhang, Institute of Physics, Chinese Academy of Sciences
My presentation contains three parts. First, I will briefly introduce the pump-probe technique and Ultrafast Transmission Electron Microscopy (UTEM). Then I will describe our UTEM development project, including the modifications of the configuration, the establishment of the optical system, and the generation of photoelectrons; specific cases are discussed to show the capability of our UTEM. Finally, I will highlight the application of UTEM via two examples: (1) the photoinduced martensitic (MT) transition and reverse transition in the shape memory alloy Mn50Ni40Sn10 have been examined by UTEM, and imaging and diffraction observations clearly show a variety of structural dynamic features at picosecond time scales; (2) Lorentz UTEM is used for direct imaging of photoinduced ultrafast magnetization dynamics, revealing remarkable features of magnetic transient states after a femtosecond pulsed laser excitation, with three successive dynamical processes involving four distinct magnetic states clearly observed in MnNiGa crystals.
"Realizing relativistic dynamics with slow light polaritons at room temperature"
Presented by Eden Figueroa, Stony Brook University
CFNS Seminar Room
Experimental verification of relativistic field theory models requires accelerator experiments. A possible pathway that could help in understanding the dynamics of such models for bosons or fermions is the use of quantum technology in the form of quantum analog simulators. In this talk we will explore the possibility of generating nonlinear Dirac-type Hamiltonians using coherent superpositions of photons and spin wave excitations of atoms. Our realization uses a driven slow-light setup, where photons mimic the Dirac fields and different dynamics can be implemented and tuned by adjusting optical parameters. We will show our progress in building a quantum simulator of the Jackiw-Rebbi model using highly interacting photons strongly coupled to a room temperature atomic ensemble. We have identified suitable conditions in which the dispersion relations of the input photons can be tuned to a spinor-of-light configuration, mimicking the Dirac regime and providing a framework to create tunable interactions and varying mass terms. Lastly, we will show our vision to scale these ideas to multiple interacting fermions.
"Modification of the nucleon-nucleon potential and nuclear correlations due to the QCD critical point"
Presented by Juan M. Torres-Rincon, Stony Brook University
Thursday, February 7, 2019, 12 pm
Hosted by: Enrico Rinaldi
The scalar-isoscalar mode of QCD becomes lighter (nearly massless) close to the chiral transition/second-order critical point. This mode is mainly responsible for the attractive part of the nucleon-nucleon potential at distances of 1-2 fm. Therefore, a long-range strong attraction among nucleons is predicted to develop close to the QCD critical point. Using the Walecka-Serot model for the NN potential, we study the effects of the critical mode in a system of nucleons and mesons using Molecular Dynamics + Langevin equations for the freeze-out conditions of heavy-ion collisions. Beyond mean field, we observe strong nucleon correlations leading to baryon clustering. We propose that light-nuclei formation, together with an enhancement of cumulants of the proton distribution, can signal the presence of the QCD critical point.
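The mechanism can be illustrated with the scalar (sigma) exchange piece of a Walecka-type potential, written schematically below (illustrative only, not the speaker's exact parameterization):

```latex
% attractive Yukawa term from scalar-isoscalar exchange; as m_sigma -> 0
% near the critical point, the range 1/m_sigma of the attraction grows
V_\sigma(r) = -\frac{g_\sigma^2}{4\pi} \, \frac{e^{-m_\sigma r}}{r}
```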
"Stimulation of quantum phases by time-dependent perturbations"
Presented by Victor Galitski, University of Maryland
Thursday, February 7, 2019, 11 am
I will review our theory work on dynamic stimulation of various quantum phases. A key idea here is that the equilibrium distribution is rarely optimal for the occurrence of a given quantum state, and dynamic perturbations can be used to "deform" an electron population in a favorable way in order to enhance quantum coherence. To illustrate this idea, I will show how both Cooper pairing and phase coherence can be dynamically enhanced in conventional superconductors and bosonic superfluids. Then, I will discuss dynamic enhancement of high-temperature superconductivity in the cuprates, as reported in experiments by the Andrea Cavalleri group in Hamburg. It will be shown that an optical pump can suppress charge order and simultaneously enhance superconductivity, due to the inherent competition between the two. In the second part of my talk, I will generalize these ideas to quantum cavities, where the light-matter coupling can be strongly enhanced. In particular, I will discuss the hybridization of cavity photons with collective modes in interacting two-dimensional materials, including the formation of Higgs polaritons and the closest analogue to excitons in a superconductor - Bardasis-Schrieffer modes - hybridized with light.
"Novel Electrochemistry for Fuel Cell Reactions: Efficient Synthesis and New Characterization Methods"
Presented by Zhixiu Liang
The ever-increasing consumption of fossil fuels for transportation drives climate change and raises growing concern about their future availability and further adverse environmental effects. To address this issue, the concept of a CO2-neutral fuel-based energy cycle was put forward. The key reactions in that concept are the electrochemical methanol oxidation reaction (MOR), ethanol oxidation reaction (EOR), and CO2 reduction reaction (CO2RR). All of these remain electrocatalysis research challenges: they are slow even on the best catalysts, which hampers the application of fuel cells and their environmental benefits. My research improved catalysts for these key reactions. In situ electrochemical infrared reflection-absorption spectroscopy (EC-IRRAS) reveals that at lower temperature the reaction is incomplete and generates more formate, while at elevated temperature it proceeds completely to carbonate. Ethanol is one of the ideal fuels for fuel cells, but requires much improved catalysts. An Au@PtIr/C catalyst was synthesized with a surfactant-free wet-chemistry approach. Transmission electron microscopy (TEM) characterization confirms the monolayer/sub-monolayer Pt-Ir shell, gold core structure. The catalyst has a very high mass activity of 58 A/mg at peak current. In situ EC-IRRAS reveals that the C-C bond is cleaved upon contact with the catalyst surface, leading to complete oxidation of ethanol to CO2. Related methodological work, including in situ TEM to guide catalyst improvement, provides morphological, structural and spectroscopic information over a wide range of length scales, from hundreds of microns to sub-nanometer, coupled with various detectors. Microelectromechanical systems (MEMS) based chip technology enables TEM observation in operando, with liquid-flow-cell chips and electrochemistry chips designed and fabricated. Ag@Au hollow cube synthesis via galvanic replacement of Au on Ag cubes was investigated with in situ TEM. The results demonstrate abnormal react
"Probing the Sea Quark Polarization at RHIC/STAR"
Presented by Jinlong Zhang, Stony Brook University
Tuesday, February 5, 2019, 11 am
Polarized proton-proton collision experiments at RHIC have provided unique opportunities to study the spin structure of the nucleon. One of the primary motivations of the RHIC spin program is to probe the sea quark spin-flavor structure via W-boson production in proton-proton collisions at a center-of-mass energy of 500 GeV. Measurements of the longitudinal single-spin asymmetry, A_L, of W bosons with the STAR detector have provided significant constraints on the polarized parton distribution functions and, in particular, the first experimental indication of a flavor asymmetry of the polarized sea. In this seminar, I will present the analyses and latest results from STAR, as well as their impact on our knowledge of the sea quark helicity distributions.
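For context, the observable quoted here has the standard definition (up to sign conventions, and not specific to this analysis):
$A_L = \frac{\sigma_+ - \sigma_-}{\sigma_+ + \sigma_-}$,
where $\sigma_\pm$ are the W production cross sections for positive/negative helicity of the polarized proton beam; the flavor sensitivity arises because, at leading order, W+ and W- production select different combinations of quark and antiquark helicity distributions.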
"Sorting out jet quenching in heavy-ion collisions"
Presented by Jasmine Brewer, Massachusetts Institute of Technology
Thursday, January 31, 2019, 12 pm
We introduce a new "quantile" analysis strategy to study the modification of jets as they traverse a droplet of quark-gluon plasma. To date, most jet modification studies have been based on comparing the jet properties measured in heavy-ion collisions to a proton-proton baseline at the same reconstructed jet transverse momentum pT. It is well known, however, that the quenching of jets from their interaction with the medium leads to a migration of jets from higher to lower pT, making it challenging to directly infer the degree and mechanism of jet energy loss. Our proposed quantile matching procedure is inspired by (but not reliant on) the approximate monotonicity of energy loss in the jet pT. In this strategy, jets in heavy-ion collisions ordered by pT are viewed as modified versions of the same number of highest-energy jets in proton-proton collisions. Despite non-monotonic fluctuations in the energy loss, we use an event generator to validate the strong correlation between the pT of the parton that initiates a heavy-ion jet and the pT of the vacuum jet which corresponds to it via the quantile procedure. We demonstrate that this strategy both provides a complementary way to study jet modification and mitigates the effect of pT migration in heavy-ion collisions.
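A minimal sketch of the quantile matching idea described above, assuming equal effective luminosity (same number of hard-scattering events) in the two samples; the function and variable names are illustrative only and are not taken from the authors' code:

import numpy as np

def quantile_partner(pt_pp, pt_aa, pt_probe):
    """Map a heavy-ion (AA) jet pT to the pp jet pT at the same quantile,
    i.e. the pp pT above which the jet rate equals the AA rate above pt_probe.
    Assumes both samples correspond to the same number of events."""
    pt_pp = np.sort(np.asarray(pt_pp))[::-1]   # pp jet pTs, hardest first
    pt_aa = np.sort(np.asarray(pt_aa))[::-1]   # AA jet pTs, hardest first
    rank = np.searchsorted(-pt_aa, -pt_probe)  # number of AA jets harder than the probe
    rank = min(int(rank), len(pt_pp) - 1)      # clamp to the available pp jets
    return pt_pp[rank]                         # pp jet at the same rank/quantile

The ratio pt_probe / quantile_partner(...) then serves as a proxy for the average fractional energy loss at that quantile, which is the comparison the abstract advocates instead of comparing at fixed reconstructed pT.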
"What can we learn from cloudy convection in a box? Laboratory meets LES with cloud microphysics"
Presented by Raymond Shaw, MTU
Conference Room Bldg 815E
Hosted by: Fan Yang
Inspired by early convection-tank experiments (e.g., Deardorff and Willis) and diffusion-chamber experiments, we have developed a cloud chamber that operates on the principle of isobaric mixing within turbulent Rayleigh-Bénard convection. The "Pi cloud chamber" has a height of 1 m and diameter of 2 m. An attractive aspect of this approach is the ability to make direct comparison to large eddy simulation with detailed cloud microphysics, with well characterized boundary conditions, and statistical stationarity of both turbulence and cloud properties. Highlights of what we have learned are: cloud microphysical and optical properties are representative of those observed in stratocumulus; aerosol number concentration plays a critical role in cloud droplet size dispersion, i.e., the dispersion indirect effect; aerosol-cloud interactions can lead to a condition conducive to accelerated cloud collapse; realistic and persistent mixed-phase cloud conditions can be sustained; LES is able to capture the essential features of the turbulent convection and warm-phase cloud microphysical conditions. It is worth considering what more could be learned with a larger-scale cloudy-convection chamber. Turbulence Reynolds numbers and Lagrangian-correlation times would be scaled up, thus allowing an enhanced role of fluctuations in the condensation-growth process. Larger vertical extent (of order 10 m) would approach typical collision mean free paths, thereby allowing for direct observation of the transition from condensation- to coalescence-growth. In combination with cloudy LES, this would be an opportunity for microphysical model validation, and for synergistic learning from model-measurement comparison under controlled experimental conditions.
"Strongly-correlated systems: Controllable field-theoretical approach"
Presented by Igor Tupitsyn, University of Massachusetts Amherst
Tuesday, January 29, 2019, 1:30 pm
Hosted by: Alexei Tsvelik
Accurate accounting for interactions in theoretical models of strongly correlated many-body systems is the key to understanding real materials and one of the major technical challenges of modern physics. To meet this challenge, new and more effective methods, capable of dealing with interacting systems/models in an approximation-free manner, are required. One such method is the field-theoretical Diagrammatic Monte Carlo technique (DiagMC). While conventional Quantum Monte Carlo samples the configuration space of a given model Hamiltonian, DiagMC samples the configuration space of the model-specific Feynman diagrams and obtains final results with controlled accuracy by accounting for all the relevant diagrammatic orders. In contrast to conventional QMC, it does not suffer from the fermionic sign problem and can be applied to any system with arbitrary dispersion relation and shape of the interaction potential (both doped and undoped). In the first part of my talk I will introduce the technique, based on its bold-line (skeleton) implementation, and benchmark it against known results for the problem of the semimetal-insulator transition in suspended graphene. In the second part I will briefly demonstrate its applications to various strongly-correlated systems/problems (stability of the 2d Dirac liquid state against strong long-range Coulomb interaction; interacting Chern insulators; phonons in metals; 1d chain of hydrogen atoms; uniform electron gas (jellium model); optical conductivity; etc.).
"Measurements and Calculations of $\hat{q}L$ via transverse momentum broadening in RHIC collisions using di-hadron correlations"
Presented by Michael Tannenbaum, BNL
Renewed interest in analyzing RHIC data on di-hadron correlations as probes of final-state transverse momentum broadening, shown at Quark Matter 2018 by Miklos Gyulassy, who cited theoretical calculations compared to experimental measurements that did not look right on his figure, led me to take a closer look at this issue using published PHENIX data. The measured values of $\hat{q}L$ show the interesting effect of being consistent with zero for values of the associated particle transverse momentum pTa > 3 GeV/c. This is shown to be related to the well-known effect of the variable IAA, the ratio of the Au+Au to p+p pTa distributions for a given trigger pTt.
Office of Educational Programs Event
"High School Science Bowl"
Saturday, January 26, 2019, 8 am
Hosted by: Amanda Horn
Nuclear Theory / RIKEN Seminar
"Effective field theory of hydrodynamics"
Presented by Paolo Glorioso, Kadanoff Center for Theoretical Physics and Enrico Fermi Institute, University of Chicago
Friday, January 25, 2019, 2 pm
CFNS Seminar Room 2-38
I will give an overview of our work on developing an effective field theory of dissipative hydrodynamics. The formulation is based on the Schwinger-Keldysh formalism, which provides a functional approach that naturally includes dissipation and fluctuations. Hydrodynamics is implemented by introducing suitable degrees of freedom and symmetries. I will then discuss two important by-products. First, the second law of thermodynamics, which in the traditional approach is imposed at phenomenological level, is here obtained from a basic symmetry principle together with constraints from unitarity. Second, I will show consistency with unitarity and causality of the hydrodynamic path-integral at all loops, which leads to the first systematic framework to compute hydrodynamic fluctuations.
"Quarkonium production in heavy ion collisions: open quantum system, effective field theory and transport equations"
Presented by Xiaojun Yao, Duke University
In this talk, I will present a connection between two approaches to studying quarkonium dynamics inside the quark-gluon plasma: the open quantum system formalism and the transport equation. I will discuss insights from the perspective of quantum information. I will show that under the weak coupling and Markovian approximations, the Lindblad equation turns into a Boltzmann transport equation after a Wigner transform is applied to the system density matrix. I will demonstrate how the separation of physical scales justifies the approximations, by using effective field theory of QCD. Finally, I will show some phenomenological results based on the derived transport equation.
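For context, the Lindblad equation referred to above has the generic open-quantum-system form (standard notation, not the specific operators derived in the talk):
$\frac{d\rho}{dt} = -i[H,\rho] + \sum_k \left( L_k \rho L_k^\dagger - \frac{1}{2}\{L_k^\dagger L_k, \rho\} \right)$,
where $\rho$ is the system (heavy quark pair) density matrix and the $L_k$ encode the coupling to the medium; the claim of the talk is that a Wigner transform of $\rho$, together with the weak-coupling and Markovian approximations, reduces this to a Boltzmann equation for the quarkonium phase-space distribution.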
"A Roadmap for the Best PMTs and SiPM in Physics Research"
Presented by Razmik Mirzoyan, Max Planck Institute for Physics, Germany
Photomultiplier tubes (PMTs) are the most widespread detectors for measuring fast and faint light signals. In cooperation with the companies Hamamatsu Photonics K.K. (Japan) and Electron Tubes Enterprises Ltd. (England) we pursued an improvement program for the PMTs of the Cherenkov Telescope Array (CTA) project. CTA is the next major Imaging Atmospheric Cherenkov Telescope (IACT) array for ground-based very high energy gamma-ray astrophysics. A total of ∼100 telescopes with diameters of 23 m, 12 m and 4 m will be built in the northern and southern hemispheres. The manufacturers succeeded in producing 1.5-inch PMTs with an enhanced peak quantum efficiency of ∼38-42% and afterpulsing below 0.02% (threshold ≥ 4 photoelectrons). These novel 1.5-inch PMTs have the best parameters worldwide. It is interesting to compare the performance of PMTs with the current generation of SiPMs. In the imaging camera of the MAGIC IACT, consisting of 1039 PMTs, we have for many months been operating composite clusters of SiPMs from three well-known manufacturers. A critical comparison of these two types of sensors will be presented. Prospects for further significant improvements of PMTs and SiPMs will be discussed, also in the framework of the EU-supported SENSE Roadmap for the best fast light sensors.
"Recent Progress in Non-perturbative methods for QFTs"
Presented by Lorenzo Vitale, Boston University
Quantum field theories (QFT) are notoriously hard to solve in the strongly coupled regime, and few tools are available in space dimension larger than one. In this talk I discuss recent progress and ideas in characterizing certain QFTs in dimension d >= 1, based on the Hamiltonian Truncation and S-matrix bootstrap techniques. Some of the applications I will mention are Landau-Ginzburg theories and the Chern-Simons-matter theories.
"Exploring the HEP frontier with the Cosmic Microwave Background and 21cm cosmology"
Presented by Laura Newburgh, Yale University
Hosted by: Anze Slosar
Current cosmological measurements have left us with deep questions about our Universe: What caused the expansion of the Universe at the earliest times? How many standard model particles are there? What is the underlying nature of Dark Energy and dark matter? New experiments like CMB-StageIV, Simons Observatory, and CHIME are poised to address these questions through measurements of the polarized Cosmic Microwave Background and 3-dimensional maps of structure. In this talk, I will describe efforts in the community to deploy enormous experiments that are capable of turning CMB measurements into probes of high energy particle physics. I will also discuss how we can broaden the potential science returns by including 21 cm measurements of large scale structure as a new means to probe Dark Energy with experiments like CHIME and HIRAX.
"Chiral Vortical Effect For An Arbitrary Spin"
Presented by Andrey Sadofyev, Los Alamos National Lab
Chiral effects have attracted significant attention in the literature. Recently, a generalization of the chiral vortical effect (CVE) to systems of photons was suggested. In this talk I will discuss the relation of this new transport phenomenon to the topological phase of photons and show that, in general, the CVE can take place in rotating systems of massless particles with any spin.
"Timing circuits for high-energy physics applications"
Presented by Jeffrey Prinzie, KU Leuven University, Belgium
In the era of complex systems-on-chip (SoCs), clock and timing generation is required in nearly every application. These timing generators supply clock signals to digital modules, act as heartbeats for serial communication links, or provide picosecond-accurate reference information to time-interval sensors. Phase-locked loops are the main building blocks that provide clock signals. However, in the high-energy physics community, ionizing radiation effects degrade these circuits significantly and produce soft errors which can disturb an entire system. In this seminar, the application of these timing blocks in high-energy physics is discussed, together with mitigation techniques against ionizing radiation effects.
"Proton decay matrix elements on lattice"
Presented by Mr. Jun-sik Yoo, Stony Brook University
Proton decay is one of the possible signatures of baryon number violation, which has to exist to explain the baryon asymmetry and the existence of nuclear matter. Proton decay is a natural implication of Grand Unified Theories. After integrating out the high energy degrees of freedom, the baryon-number-violating operator that mediates proton decay can be written as a composite operator of Standard Model fields. We discuss the hadronic matrix elements of this operator made of three quarks and a lepton. We will start from the current experimental bound on the proton lifetime. We present preliminary results of the matrix element calculation done with 2+1 dynamical flavors of domain wall fermions at the physical point. We will discuss the proton decay channels for which no matrix element has yet been calculated on the lattice.
"Effect of ion irradiation on the mechanical behavior and microstructural evolution of nanoscale metallic alloys"
Presented by Gowtham Sriram Jawaharram, University of Illinois at Urbana - Champaign
Wednesday, January 16, 2019, 11 am
Nanostructured alloys are considered potential candidates for next generation (Generation IV) nuclear reactors because of the high densities of interfacial defect sinks present in these materials. The effect of irradiation on the mechanical behavior of such alloys has received limited attention, likely resulting from the experimental challenges associated with performing such experiments. The first part of the talk will report on our recent efforts to perform high temperature irradiation induced creep (IIC) measurements in focused ion beam fabricated FCC alloys (single crystalline Ag nanopillars and nanocrystalline high entropy alloy (HEA) microbeams) by combining in-situ TEM based small-scale mechanical testing with ion irradiation and in-situ laser heating using the in-situ ion irradiation transmission electron microscope (I3TEM) at Sandia National Laboratories. The effect of pillar size, grain size, and temperature on the observed creep mechanism will be discussed. The second part of the talk will focus on the microstructural evolution of model highly immiscible CuW alloys during thermal annealing and high temperature irradiation, characterized using high angle annular dark field (HAADF) imaging. The results will be discussed in the context of the evolution and spatial distribution of W precipitates and their effect on hardness as a function of irradiation dose and temperature.
"Liquid Argon Detectors and Readout Electronics: From R&D to Physics Discovery"
Presented by Hucheng Chen, BNL
BNL has a long history of R&D in noble-liquid-based detectors, from the invention of the first liquid argon (LAr) calorimeter in 1974 to the construction of large liquid argon time projection chambers (LAr TPCs) leading to the Deep Underground Neutrino Experiment (DUNE) by 2026, a time span of over half a century. Readout electronics has always been an integral part of the detector, both in the ATLAS LAr Calorimeter, where high precision played an essential role in the 2012 Higgs discovery, and in LAr TPC based neutrino detectors, where cryogenic electronics proved to be an enabling technology. The development of noble-liquid-based detectors and readout electronics systems at BNL will be presented, focused on integrated detector-readout design, as motivated by our physics interests and experiment requirements, in energy frontier LHC experiments and in intensity frontier short-baseline and long-baseline neutrino experiments. As both frontiers present challenges in data acquisition, the FELIX based DAQ system for high-bandwidth detector readout developed at BNL, which is also being adopted in various particle physics experiments worldwide, will be discussed as well.
"Cross section measurements and new physics searches with WZ vector boson scattering events at CMS"
Presented by Kenneth Long, University of Wisconsin - Madison
Thursday, January 10, 2019, 3 pm
As the standard model (SM) Higgs boson looks increasingly like its SM expectation, expanded tests of the electroweak (EW) sector of the SM are a focal point of the long-term LHC program. Production of massive vector bosons via vector boson scattering provides a direct probe of the self-interactions of the massive vector bosons, which are intimately connected to the Higgs-Englert-Brout mechanism of EW symmetry breaking. A search for vector boson scattering of W and Z bosons has recently been performed by the CMS experiment using data collected in 2016. I will present this search as well as WZ cross section measurements, which are less dependent on theoretical inputs. This process is also sensitive to New Physics in the EW sector. I will present interpretations of these results in terms of explicit models predicting additional charged Higgs bosons and in the generalized framework of dimension-8 effective field theory.
"A novel background subtraction method for jet studies in heavy ion collisions"
Presented by Alba Soto Ontoso, BNL
"Exact Solution and Semiclassical Analysis of BCS-BEC Crossover in One Dimension"
Presented by Tianhao Ren, Columbia University
Monday, January 7, 2019, 1:30 pm
In this talk, I will introduce a new type of model for two-component systems in one dimension subject to exact solutions by Bethe ansatz. It describes the BCS-BEC crossover in one dimension and its integrability is obtained by fine-tuning the model parameters. The new model has rich many-body physics, where the Fermi momentum for the ground state distribution is constrained to be smaller than a certain value and the zero temperature phase diagram with an external field has a critical field strength for polarization. Also the low energy excitation spectra of the new model present robust features that can be related to solitons at BCS-BEC crossover in one dimension, as shown by the semiclassical analysis.
"DM-Ice17 and COSINE-100 NaI(Tl) Dark Mater Experiment: Testing DAMA's Claim for a Dark Matter Discovery"
Presented by Jay Hyun Jo, Yale University
Thursday, January 3, 2019, 3 pm
Astrophysical observations give overwhelming evidence for the existence of dark matter, yet we do not know what it is. For over 20 years, the DAMA collaboration has asserted that they observe a dark matter-induced annual modulation signal, but their observation has yet to be confirmed by an independent measurement. DM-Ice17 is a prototype experiment consisting of 17 kg of NaI(Tl) detectors to test DAMA's claimed detection of the dark matter annual modulation; it has been operating continuously at the South Pole since 2011. COSINE-100 is a joint experiment between the DM-Ice and KIMS collaborations, situated at the Yangyang Underground Laboratory in South Korea. COSINE-100 consists of eight low background NaI(Tl) crystals with a total mass of 106 kg and 2000 liters of liquid scintillator as an active veto; the physics run of the experiment began in September 2016. The recent results from DM-Ice17 and COSINE-100, including the status of the field, will be presented.
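For context, the modulation signal at issue is conventionally fit with the form (standard in the field, not specific to these detectors):
$R(t) = R_0 + S_m \cos\!\left(\frac{2\pi (t - t_0)}{T}\right)$,
with period $T \approx 1$ year and phase $t_0$ near early June, when the Earth's velocity through the Galactic dark-matter halo is maximal; DAMA reports a nonzero modulation amplitude $S_m$, and DM-Ice17/COSINE-100 test for the same $S_m$ in the same target material, NaI(Tl).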
"Data Acquisition Systems for High-Speed and High-Dynamic Range Pixel Array Detectors"
Presented by Prafull Purohit, Cornell University
Pixel array detectors (PADs) have seen a significant increase in performance and operational complexity in recent years due to advances in microelectronics technology and photon science needs. These advances in detector technology place equally demanding complexity and performance requirements on the data acquisition and control systems for successful operation. A Field Programmable Gate Array (FPGA), with its high-speed processing and reconfiguration capabilities, can be utilized to meet current requirements and future needs. In this talk, I will present some of the recent work done at Cornell University on FPGA-based data acquisition systems for high-speed, high dynamic range detectors as well as detectors for time-resolved experiments.
Particle Physics Seminar (Leona Woods Distinguished Postdoctoral Lectureship Award)
"Measurement of top quark pair production in association with a Higgs or gauge boson at the LHC with the ATLAS detector"
Presented by María Moreno Llácer, CERN
Thursday, December 20, 2018, 3 pm
The top quark is unique among the known quarks since it decays before it has an opportunity to form hadronic bound states. This makes measurements of its properties particularly interesting, as one can directly access the properties of a bare quark. Given its large mass (the heaviest fundamental particle), the top quark may play a special role in the electroweak symmetry breaking mechanism, and therefore new physics related to this might be found first in top quark precision measurements. Possible new physics signals would cause deviations of the top quark couplings from the Standard Model (SM) prediction. It couples to the SM fields through its gauge and Yukawa interactions. The high-statistics top quark sample at the LHC has allowed access to the associated production of a top quark pair with a boson: tt+photon, tt+W, tt+Z and tt+H. The latest measurements of these physics processes carried out with the ATLAS detector will be presented, highlighting the main challenges.
Physics Colloquium (Leona Woods Distinguished Postdoctoral Lectureship Award)
"On top of the top: challenging the Standard Model with precise measurements of top quark properties"
Tuesday, December 18, 2018, 3:30 pm
The understanding of the Electro-Weak Symmetry Breaking mechanism and the origin of the mass of fundamental particles is one of the most important questions in particle physics today. The top quark is unique among the known quarks since it is the heaviest fundamental particle in the Standard Model. Its large mass makes the top quark very different from all other particles, with a Yukawa coupling to the Higgs boson close to unity. For these reasons, the top quark and the Higgs boson play very special roles in the SM and in many extensions thereof. An accurate knowledge of their properties can bring key information on fundamental interactions at the electroweak breaking scale and beyond. The Large Hadron Collider is providing an enormous dataset of proton-proton collisions at the highest energies ever achieved in a laboratory. With the unprecedentedly large sample of top quarks, a new frontier has opened, the flavour physics of the top quark, allowing us to study whether the Higgs field is the unique source of the top quark's mass and whether there are unexpected interactions between the top quark and the Higgs boson. The answers to these questions will shed light on what may lie beyond the Standard Model and can even have cosmological implications.
"Uncovering the interactions behind quantum phenomena"
Presented by Keith Taddei, Oak Ridge National Laboratory
Tuesday, December 18, 2018, 11 am
Quantum computing, spintronics and plasmonics are nascent fields with the potential to radically change our technological landscape. Fundamental to advancing these technologies is a mastery of quantum materials such as superconductors, quantum spin liquids and multiferroics. Ideally, we would know exactly what interactions give rise to these phenomena and design materials suitable for applications; however, such an understanding as yet eludes us. Instead we are stuck digging around in the phase space of known quantum materials, slowly uncovering details pertinent to their design and filling in pieces of our incomplete picture. In this presentation, I will discuss recent findings from my use of neutron scattering to study quantum materials. Starting with a novel family of quasi-one-dimensional (Q1D) superconductors (A1,2TM3As3 with A = alkali metal and TM = Cr, Mo), I will present findings of short-range structural order and a proximate magnetic instability which, due to the radically different structure, allow new insights into the pertinence of such orders to superconductivity. Importantly, in these materials the two orders break different symmetries, and so their interactions with the superconducting order can be studied independently. Next, I will discuss an interesting yet neglected family of frustrated magnetic materials, the rare-earth pyrogermanates (REPG). We find the Er2Ge2O7 REPG to exhibit 'local-Ising' type magnetism in direct analogy to the spin-ice pyrochlores, suggesting effects of local anisotropies and dipole interactions. Finally, I will present ongoing work investigating spin-driven polarization effects in the magnetically and structurally straightforward multiferroic BiCoO3. These results demonstrate the essential role of neutron and x-ray scattering techniques in studying these complex materials and the fruitful opportunities these systems present to advance our understanding of quantum materials.
"The Science of the Sudbury Neutrino Observatory (SNO) and SNOLAB"
Presented by Art McDonald, Queens University
Hosted by: David Asner
A description of the science associated with the Sudbury Neutrino Observatory, performed with substantial contributions from BNL scientists, and its relation to other neutrino measurements will be given, along with a discussion of the new set of experiments that are at various stages of development or operation at SNOLAB. These experiments will perform measurements of neutrino properties and seek direct detection of Weakly-Interacting Massive Particles (WIMPS) as Dark Matter candidates. Specific examples will include SNO+ (with BNL participation), in which the central element of the SNO detector will now be liquid scintillator with Te dissolved for neutrino-less double beta decay and DEAP-3600 using liquid argon for single phase direct Dark Matter detection. Future directions for Dark Matter detection with liquid argon will also be discussed.
"Lattice QCD Input for Fundamental Symmetry Tests"
Presented by Michael Wagman, MIT
Friday, December 14, 2018, 2 pm
Experimental detection of fundamental symmetry violation would provide a clear signal for new physics, but theoretical predictions that can be compared with data are needed in order to interpret experimental results as measurements or constraints of beyond the Standard Model physics parameters. For low-energy experiments involving protons, neutrons, and nuclei, reliable theoretical predictions must include the strong interactions of QCD that confine quarks and gluons. I will discuss experimental searches for neutron-antineutron oscillations that test beyond the Standard Model theories of matter-antimatter asymmetry with low-scale baryon-number violation. Lattice QCD can be used to calculate the neutron-antineutron transition rate using a complete basis of six-quark operators describing neutron-antineutron oscillations in effective field theory, and I will present the first lattice QCD results for neutron-antineutron oscillations using physical quark mass simulations and fully quantified uncertainties. Other experiments searching for neutrinoless double-beta decay and dark matter direct detection use large nuclear targets that are more difficult to simulate in lattice QCD because of an exponentially difficult sign(al-to-noise) problem. I will briefly describe the state-of-the-art for lattice QCD calculations of axial, scalar, and tensor matrix elements relevant to new physics searches with nuclei and outline my ongoing efforts to improve signal-to-noise problems using phase unwrapping.
"Discussion of opportunities related to Quantum Information initiative"
Presented by Alexei Tsvelik, BNL
Friday, December 7, 2018, 3 pm
Alexei Tsvelik will share his thoughts on how we can respond to the DOE initiative on Quantum Information.
"Magnetic skyrmions at room temperature - statics, dynamics, and high resolution imaging"
Presented by Dr. Felix Buttner, Dept of Mat Sci & Eng, MIT
NSLS-II Bldg 743 Room 156
Magnetic skyrmions are the smallest non-trivial entities in magnetism with great potential for data storage applications. These chiral and topological quasi-particles furthermore exhibit fascinating static and dynamical properties that render them the ideal candidates to study new physics in high spin-orbit coupling materials. In this talk, I will first give a general introduction to the field of skyrmionics and the fundamental properties of skyrmions that derive from their energetics. I will then discuss various ways of creating and stabilizing room-temperature skyrmions experimentally, as well as how we can move them and observe their topological dynamics via high resolution time-resolved x-ray imaging. I will conclude with perspectives of future research in this field and related areas.
"The Global Electroweak Fit in the light of the new results from the LHC"
Presented by Matthias Schott, University of Mainz
Thursday, December 6, 2018, 3 pm
With the high integrated luminosities recorded at the LHC and the very good understanding of the LHC detectors, it is possible to measure electroweak observables to the highest precision. In this talk, I review the measurement of the W boson mass as well as the measurement of the electroweak mixing angle with the ATLAS detector, both achieving the highest precision after several years of intense effort. Special focus is placed on a discussion of the modeling uncertainties as well as the physics potential of the latest low-mu runs, recorded in 2017 and 2018. The results will be interpreted in terms of the overall consistency of the Standard Model by the global electroweak fit, performed by the Gfitter Collaboration.
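For context, the two observables highlighted here are tied together in the on-shell scheme by the tree-level relation (standard, quoted only to illustrate why the global fit is so constraining):
$\sin^2\theta_W = 1 - \frac{m_W^2}{m_Z^2}$,
so precise measurements of $m_W$ and of the effective mixing angle, combined with $m_Z$, the top quark mass and the Higgs boson mass, over-constrain the Standard Model and expose possible inconsistencies.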
"On QCD and its Phase Diagram from a Functional RG Perspective"
Presented by Mario Mitter, BNL
Thursday, December 6, 2018, 12 pm
"Laser induces dynamics in complex oxides with visible/NIR and X-ray probe (Note: This will be a skype presentation)"
Presented by Sergii Parchenko, Swiss Light Source, Paul Scherrer Institute, Switzerland
Note: This will be a Skype presentation. Recent achievements in the generation of ultrashort and intense light pulses allow observation of physical processes in the ultrafast regime. Exploring fundamental physical processes on the time scales of the interactions responsible for them is the key to a future understanding of the underlying physical principles and to their implementation in technological applications. In this talk, I will present studies of laser-induced dynamics in complex oxides with a focus on several physical phenomena: magnetic exchange interaction, the insulator-to-metal transition, and magneto-electric coupling. I will discuss how studying laser-induced changes with different probing methods can help to understand the microscopic mechanisms of physical processes on ultrafast time scales.
"First-principles description of correlated materials with strong spin-orbit coupling: the analytic continuation and branching ratio calculation"
Presented by Jae-Hoon Sim, Department of Physics, KAIST, Korea, Republic of (South)
Monday, December 3, 2018, 1:30 pm
Hosted by: Sangkook Choi
DFT+DMFT combined with a continuous-time quantum Monte Carlo (CT-QMC) impurity solver is one of the successful approaches to describing correlated electron materials. However, analytic continuation of the QMC data from imaginary frequency to the real axis is a difficult numerical problem, mainly due to the ill-conditioned kernel matrix. While the maximum entropy method is one of the most suitable choices for extracting information from the noisy input data, its application to materials with strong spin-orbit coupling is limited by the non-negativity condition on the output spectral function. In the first part of this talk, I will discuss a newly developed method for the analytic continuation problem, the so-called maximum quantum entropy method (MQEM) [1]. It is an extension of the conventional method, introducing the quantum relative entropy as a regularization function. The application of the MQEM to a prototype j_eff=1/2 Mott insulator, Sr2IrO4, shows that it provides a reasonable band structure without introducing a material-specific basis set. I will also introduce the application of machine learning techniques to the same problem [2]. In the second part, a simple technique to obtain the branching ratio from first-principles calculations will be discussed [3]. The calculated ⟨L·S⟩ and branching ratios of different 5d iridates, namely Sr2IrO4, Sr2MgIrO6, Sr2ScIrO6, and Sr2TiIrO6, are in good agreement with recent experimental data. Their reliability and applicability are also carefully examined in a recent study. [1] J.-H. Sim and M. J. Han, Phys. Rev. B 98, 205102 (2018). [2] H. Yoon, J.-H. Sim, and M. J. Han, Phys. Rev. B (in press). [3] J.-H. Sim, H. Yoon, S. H. Park, and M. J. Han, Phys. Rev. B 94, 115149 (2016).
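For context, the analytic continuation problem mentioned here is the inversion of the standard spectral representation (generic scalar form, not the matrix-valued kernel treated in the MQEM paper):
$G(i\omega_n) = \int d\omega\, \frac{A(\omega)}{i\omega_n - \omega}$,
where $G(i\omega_n)$ is the noisy CT-QMC Green's function on Matsubara frequencies and $A(\omega)$ is the real-frequency spectral function; the discretized kernel is severely ill-conditioned, which is why entropy-based regularization, and for matrix-valued $A$ with spin-orbit coupling a quantum relative entropy, is needed.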
"Localized-to-itinerant crossovers in Kondo materials"
Presented by Daniel Mazzone, Brookhaven National Laboratory, NSLS-II
Monday, December 3, 2018, 11 am
ISB Bldg. Conf. Room 201 (upstairs)
While charge carriers in crystalline structures can be located close to the nuclei or establish a delocalized character, they often exhibit strong fluctuations in intermediate regimes where emergent quantum phases show an intricate coupling among various degrees of freedom. Kondo materials are particularly interesting model systems for investigating strongly correlated phenomena, because they often possess small energy scales that are highly susceptible to macroscopic constraints. I will present recent neutron and X-ray scattering results on the series Nd1-xCexCoIn5 and Sm1-xYxS, where the ground state properties were tuned either via chemical substitution or magnetic field. We find that Nd substitution in CeCoIn5 affects the magnetic coupling parameters, triggering a change in the magnetic symmetry that is offset from the emergence of coherent heavy bands and unconventional superconductivity. Intriguingly, another magneto-superconducting phase with altered coupling is observed in Nd0.05Ce0.95CoIn5 at large magnetic fields. Sm1-xYxS features a transition towards an intermediate valence state under yttrium doping. Our results unravel a Kondo-triggered Lifshitz transition in the mixed-valence state, which drives an unusually strong charge localization at low temperatures.
"Novel probes of small-x QCD"
Presented by Juan Rojo, VU University
Friday, November 30, 2018, 2 pm
The small Bjorken-x regime of QCD is of great interest since a variety of different phenomena are known or expected to emerge there, from BFKL small-x effects and non-linear saturation dynamics to shadowing corrections in heavy nuclei. In this talk we present recent developments in our understanding of perturbative and non-perturbative QCD at small x: the evidence for BFKL dynamics in the HERA structure function data, the precision determination of collinear PDFs from charm production at LHCb, and the first results on neural-network-based fits of nuclear PDFs. We also highlight the remarkable connection between small-x QCD and high-energy astrophysics, in particular for the theoretical predictions of signal and background event rates at neutrino telescopes such as IceCube and KM3NET.
"The structure of the proton in the LHC precision era"
Presented by Juan Rojo, Vrije Universiteit Amsterdam and Nikhef
Thursday, November 29, 2018, 3 pm
The determination of the partonic structure of the proton is a central component of the precision phenomenology program at the Large Hadron Collider (LHC). This internal structure of nucleons is quantified in the collinear QCD factorization framework by the Parton Distribution Functions (PDFs), which encode the probability of finding quarks and gluons inside the proton carrying a given amount of its momentum. PDFs cannot currently be computed from first principles, and therefore they need to be determined from experimental data for a variety of hard-scattering cross sections in lepton-proton and proton-proton collisions. This program, known as the global QCD analysis, involves combining the most PDF-sensitive data and the highest-precision QCD and electroweak calculations available within a statistically robust fitting methodology. In this talk I review our current understanding of the quark and gluon structure of the proton, with emphasis on the implications for precision LHC phenomenology and searches for new physics, but also exploring other aspects of nucleon structure such as its impact on high-energy neutrino telescopes, the connection with lattice QCD calculations, and the onset of novel small-x dynamics beyond the collinear framework. Finally, I highlight the prospects for improving our understanding of the quark/gluon structure of the nucleon in the high-luminosity LHC era.
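For context, the collinear factorization framework referred to here writes a hadronic cross section schematically as (standard form):
$\sigma_{pp\to X} = \sum_{a,b} \int dx_1\, dx_2\, f_a(x_1,\mu^2)\, f_b(x_2,\mu^2)\, \hat{\sigma}_{ab\to X}(x_1, x_2, \mu^2)$,
where the $f_{a,b}$ are the PDFs being fitted and $\hat{\sigma}$ is the perturbatively calculable partonic cross section; the global QCD analysis inverts many such convolutions simultaneously across different processes and kinematic regions.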
"Studying Quantum Matter on Near-Term Quantum Computers"
Presented by Brian Swingle, University of Maryland and Institute of Advanced Study
Tuesday, November 27, 2018, 3:30 pm
Hosted by: Rob Pisarski
From the point of view of fundamental physics, one of the greatest promises of quantum information science is a new set of quantum computational tools for addressing previously intractable problems. However, at present we find ourselves in an age of embodied quantum information, where the substrate carrying the information cannot yet be abstracted away and effects of noise cannot be neglected. Nevertheless, I will argue that such noisy, intermediate size quantum devices may be useful for addressing open problems in quantum many-body physics, and potentially quantum field theory. Using two case studies, I will show how quantum information is informing our understanding of quantum matter and how near-term quantum computers might realistically help.
Nuclear Theory / RIKEN
"Casimir effect in Yang-Mills theory"
Presented by Dimitra Karabali, Lehman College CUNY
We consider the Casimir effect in a gauge-invariant Hamiltonian formulation of nonabelian gauge theories in $(2+1)$ dimensions. We compare our analytical results with recent lattice simulations.
"Exclusive $\rho$ meson production in $eA$ collisions: collinear factorization and the CGC"
Presented by Renaud Boussarie, BNL
Thursday, November 15, 2018, 12 pm
We will focus on the theoretical description of exclusive ρ meson production in eA collisions, using a hybrid factorization scheme which involves Balitsky's shockwave description of the Color Glass Condensate in the t channel and Distribution Amplitudes (DAs) in the s channel. We will first give a quick introduction to the shockwave framework and to collinear factorization up to twist 3 for DAs; then we will apply these frameworks to the production of a longitudinal meson at NLO accuracy, and to the production of a transverse meson at twist-3 accuracy. We will emphasize the experimental applications, as well as several theoretical questions raised by our results: the dilute BFKL limit at NLO for diffraction, and collinear factorization breaking at twist 3.
"Global models for atmospheric new particle formation from the CERN CLOUD experiment"
Presented by Hamish Gordon, Leeds
Thursday, November 15, 2018, 11 am
Hosted by: Laura Fierce
In this seminar I will introduce the CERN CLOUD chamber experiment studying atmospheric new particle formation. I will then focus on work we have done to parameterize new particle formation and growth rates for atmospheric models. I will discuss the implementation of the parameterizations into the models, and the implications of the results from these models for estimated cloud condensation nuclei concentrations and indirect aerosol radiative forcing. The uncertainties in modelling new particle formation remain large, and I will outline how we are moving forward to try to reduce them.
"From nuts to soup: Recent advances in QCD in the Regge limit and the approach to thermalization in heavy-ion collisions"
Twenty-five years ago to date, Larry McLerran and the speaker proposed that the Regge limit of QCD could be described by a many-body classical effective field theory now known as the Color Glass Condensate (CGC). Our radical conjecture was prompted by the phenomenon of gluon saturation, whereby many-body gluodynamics leads to the emergence of a semi-hard scale that screens color in the infrared. In the first part of this talk, we will review developments in the CGC effective theory since then, and emphasize a paradigm shift in what constitutes the fundamental degrees of freedom in the Regge limit. We shall also outline a color memory effect in the CGC which bears an exact analogy to the gravitational memory effect that could be discovered by LIGO in the near future. This correspondence in turn prompts one to speculate that asymptotic, so-called BMS-like, symmetries of gravity may also apply in QCD's Regge limit, leading to novel insight into how pions form "soft hair" on glue. In the second part of the talk, we discuss how the CGC provides an ab initio picture of thermalization and hydrodynamics in ultrarelativistic heavy-ion collisions. We focus on the discovery of a pre-thermal turbulent attractor, its topological properties, and a remarkable universality between this attractor and cold atomic gases prepared with the same boundary conditions.
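For context, the semi-hard scale mentioned here is the saturation scale, whose growth is often parametrized as (schematic, GBW-type fits; the exponent range is quoted for orientation only and is an assumption, not a number from this talk):
$Q_s^2(x, A) \sim Q_0^2\, A^{1/3}\, x^{-\lambda}, \quad \lambda \approx 0.2\text{-}0.3$,
so at small enough x, gluon occupation numbers of order $1/\alpha_s$ below $Q_s$ justify the classical effective field theory description of the CGC.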
"Towards laboratory detection of superfluid phases of QCD"
Presented by Ajit Srivastava, Institute of Physics, Bhubaneswar
Friday, November 9, 2018, 2 pm
Exotic phases of QCD exhibiting strong correlations exist at very high baryon density and relatively low temperatures. Examples of such phases range from nucleon superfluid phases expected to occur in the interior of neutron stars, to possible color superconducting phases, which may occur in the core of neutron stars. Some of these phases may also occur in relativistic heavy ion collisions in the high baryon density regime, e.g. at RHIC (BES), FAIR, and NICA. We discuss the possibilities of detecting them in heavy ion collisions, focusing on the universal aspects of the associated phase transitions.
"Managing scientific data at the exascale with Rucio"
Presented by Martin Barisits, CERN
Thursday, November 8, 2018, 3 pm
Hosted by: Paul Laycock, Eric Lancon
Rucio is an open source software framework that provides scientific collaborations with the functionality to organise, manage, and access their volumes of data. The data can be spread across heterogeneous data centres at widely distributed locations. Rucio was originally developed to meet the requirements of the high-energy physics experiment ATLAS, and is continuously extended to support the LHC upgrades and more diverse scientific communities. Besides ATLAS, the Xenon1t dark matter search and the AMS cosmic ray experiment are also using Rucio in production. The CMS experiment will deploy Rucio by 2019 and operate at a similar scale as ATLAS. Additionally, several other experiments such as Belle-2 (B mesons), SKA (radio astronomy), LIGO (gravitational waves), DUNE and IceCube (both neutrino) are currently evaluating Rucio for adoption. This talk will discuss the exascale challenges these scientific experiments face and how Rucio will help to address them. Specifically, the possibilities for uncomplicated deployment, easy integration into existing data workflows, and the benefits of using the automated services provided by Rucio will be shown. The transition of Rucio from a single-experiment system to an open community project, developed by scientists from multiple experiments, will be presented as well.
"Dirac fermions and critical phenomena: exponents and emergent symmetries"
Presented by Michael Scherer, University of Cologne, Germany
Thursday, November 8, 2018, 1:30 pm
Dirac fermions appear as quasi-particle excitations in various condensed-matter systems for example in graphene or as surface states of topological insulators. Close to a quantum phase transition they exhibit a series of exotic properties, e.g., emergent symmetries, fluctuation-induced critical points, the appearance of two length scales and a hierarchy of mass gaps. I discuss mechanisms that are behind these phenomena from a quantum field-theoretical point of view. Further, I present a four-loop renormalization group study for the determination of the Dirac fermions' critical behavior and compare to the predictions of complementary approaches such as quantum Monte Carlo and the conformal bootstrap. Finally, I will also comment on the possibility to test duality conjectures with these calculations.
"Diffractive Electron-Nucleus Scattering and Ancestry in Branching Random Walks"
Presented by Alfred Mueller, Columbia
"Materials Tribology: An Application-Driven Field with Rich Opportunities for Fundamental Studies of Surface Chemistry, Physics, Structure"
Presented by Brandon A. Krick, Department of Mechanical Engineering and Mechanics, Lehigh University
NSLS-II Bldg. 743 Rm 156
The significant economic (~3-6% of developed countries' GDP) and environmental (several percent of our annual energy consumption) impacts of friction and wear make tribology an important, application-driven field. However, there is an opportunity and need for inherently fundamental studies of surface chemistry, physics and structure to elucidate the fundamental mechanisms of friction and wear. The non-equilibrium and transient nature of shear-induced changes caused by contacting surfaces in relative motion requires both in situ and ex situ advanced characterization techniques; many of these are only available at the light source at Brookhaven. A brief overview of shear-induced (sliding friction/wear) alterations of surfaces will be presented for material systems including:
- environmental and tribochemistry of molybdenum disulphide based coatings for space applications
- shear-induced band bending in GaN
- mechanochemistry of polymer nanocomposites
"Search of the rare decay of KL→π0νν at J-PARC"
Presented by Yu-Chen Tung
J-PARC KOTO is a dedicated experiment to search for the rare KL→π0νν decay. This decay is special not only because it is a directly CP-violating process, but also because of its theoretical cleanness. In the Standard Model, the branching ratio of KL→π0νν is calculated to be 3×10^-11 with only a few percent uncertainty, which provides a clean basis for exploring new physics by looking for deviations from the Standard Model. In the recently released results from data collected in 2015, the sensitivity of the search was improved by an order of magnitude over the previous result, and no event was observed in the signal region, with a prediction of 0.4 background events. In this talk, I will report the analysis and DAQ plans toward a sensitivity of O(10^-11).
"DIS on "Nuclei" using holography"
Presented by Kiminad Mamo, Stony Brook University
Thursday, November 1, 2018, 12 pm
"Cosmic Chandlery with Thermonuclear Supernovae"
Presented by Alan Calder, Stony Brook University
Tuesday, October 30, 2018, 3:30 pm
Thermonuclear (Type Ia) supernovae are bright stellar explosions distinguished by light curves that can be calibrated to allow for their use as "standard candles" for measuring cosmological distances. Our research investigates how properties of the host galaxy such as composition and age influence properties of the progenitor system, which in turn influence the thermonuclear burning during an event and thus its brightness. I will present the results from ensembles of simulations addressing the influence of age and composition on the brightness of an event. These results show that the outcome depends sensitively on the nuclear burning, particularly weak interactions. Thus precise measurement of the largest possible scales of the Universe requires accurately capturing physics at some of the smallest scales.
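For context, the "standard candle" use of these events rests on the distance modulus relation (standard, not specific to this work):
$\mu = m - M = 5 \log_{10}\!\left(\frac{d_L}{10\,\mathrm{pc}}\right)$,
so once the light-curve calibration fixes the absolute magnitude $M$, the observed magnitude $m$ yields the luminosity distance $d_L$; systematic shifts in intrinsic brightness with progenitor age or composition therefore propagate directly into the inferred cosmological distances.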
"Preparing Physics Software for the Future - the HEP Software Community"
Presented by Benedikt Hegner, BNL
Thursday, October 25, 2018, 3 pm
Particle physics has an ambitious and broad experimental programme for the coming decades. This programme requires large investments in detector hardware, either to build new facilities and experiments, or to upgrade existing ones. Similarly, it requires commensurate investment in the R&D of software to acquire, manage, process, and analyse the sheer amounts of data to be recorded. In planning for the High Luminosity LHC in particular, it is critical that all of the collaborating stakeholders agree on the software goals and priorities, and that the efforts complement each other. In this spirit, the High Energy Physics community created a white paper (arXiv:1712.06982) to describe and define the R&D activities required to prepare for this software upgrade. This presentation describes the expected software and computing challenges and the steps already taken to tackle them.
"HL-LHC Crab Cavities and Recent Beam Tests in the SPS Machine"
Presented by Dr. Rama Calaga, CERN
The design, development and challenges of the fabrication of the DQW crab cavity cryomodule are outlined. The successful installation and beam tests with protons in the SPS machine are presented, along with lessons learned and future plans for the HL-LHC series manufacturing.
"Quest for quark-gluon plasma"
Presented by Edward Shuryak, SBU
"Studying out-of-equilibrium Quark-Gluon Plasma with QCD kinetic"
Presented by Aleksas Mazeliauskas, University of Heidelberg
Friday, October 19, 2018, 2 pm
In relativistic heavy nucleus collisions an ultra-dense, high-temperature state of nuclear matter is created with de-confined quarks and gluons. Understanding how the non-equilibrium Quark-Gluon Plasma thermalizes is important in connecting the initial state physics with the emergent hydrodynamic behavior of the QGP at later times. In this talk, I will use weakly coupled QCD kinetic theory with quark and gluon degrees of freedom to study the QGP evolution in the far-from-equilibrium regime, where it exhibits universal scaling, and its approach to thermal and chemical equilibrium.
"Stronger together: combining searches for new heavy resonances"
Presented by Viviana Cavaliere, Brookhaven National Lab
Many theories beyond the Standard Model predict new s-channel resonances decaying into two bosons (WW, ZZ, WZ, WH, ZH) and possibly leptons (ll, lv). This talk will summarize the recent ATLAS combination of heavy resonance searches, which places stringent constraints on the couplings to bosons, quarks and leptons by taking advantage of the statistical combination of searches in different channels. Prospects for future resonance searches at the HL-LHC and HE-LHC will be discussed as well.
"Valence parton distribution function of pion using lattice"
Presented by Nikhil Karthik, BNL
Thursday, October 18, 2018, 12 pm
Hosted by: Yuya Tanizaki
"Probing nanostructured materials atom by atom: An ultra-high resolution aberration-corrected electron microscopy study"
Presented by Dr. Nasim Alem, Penn State University
Wednesday, October 17, 2018, 10 am
CFN, Bldg 735, Conference Room B, 2nd Floor
Defects can have a profound effect on the macroscale physical, chemical, and electronic properties of nanostructures. They can lead to structural distortions, introduce extra states in the band gap, and give rise to excess potential locally at buried interfaces. While defects and interfaces have been a well-studied subject for decades, little is known about their local atomic and chemical structure, sub-Angstrom structural distortions within their vicinity, and their stability and transition dynamics under extreme conditions. Using ultra-high-resolution aberration-corrected S/TEM imaging and spectroscopy, this talk will discuss our recent efforts on the determination of the defect chemistry and sub-Angstrom relaxation effects in nanostructures around dopants, grain boundaries, domain walls, and interfaces in the family of 2D crystals, complex oxides, and diamond carbon nanothreads. In the family of 2D crystal transition metal dichalcogenide (TMD) alloys, we show how the formation of chemically ordered states and vacancy/dopant coupling leads to unusual relaxation effects around dopant-vacancy complexes. In addition, we explore the stability and transition dynamics of defects leading to grain boundary migration in monolayer TMDs under electron beam irradiation. This talk also presents how ferroelectric polarization emerges at the atomic level across the domain walls in single-phase and hybrid complex oxide systems and the impact of this emergence on the macroscale properties. Finally, we uncover the atomic and chemical structure of carbon nanothreads using low-dose high-resolution electron microscopy. Bio: Nasim Alem is an assistant professor in the Materials Science and Engineering department at Penn State University. Nasim received her B.S. degree in Metallurgical Engineering from Sharif University of Technology, Tehran, Iran and her M.S. degree in Materials Science and Engineering from Worcester Polytechnic Institute. She received her PhD from the Ma
"Energy dependence of jet quenching signatures in heavy-ion collisions"
Presented by James Brandenburg
Tuesday, October 16, 2018, 11 am
High-pT partons traveling through the quark-gluon plasma (QGP) lose energy due to strong interactions. This effect, called jet-quenching, is attributed to collisional and radiative energy losses as high-pT partons travel through and interact with the dense medium. Over the years jet-quenching has become well established in high-energy heavy-ion collisions. In high-energy A+A collisions, the observation of jet-quenching is considered to be clear evidence of QGP formation. The Beam Energy Scan program at RHIC provided a unique opportunity to study the QCD phase diagram and to search for the turn off of key QGP signatures, such as jet-quenching, at sufficiently low collision energies. The collision energy dependence of jet-quenching effects, quantified through the nuclear modification factor (Rcp) of charged and identified hadrons will be discussed. The limitations of Rcp as an observable will be discussed and compared with a more differential technique for quantifying jet-quenching. Finally, the outlook for improved jet-quenching measurements in the second phase of the RHIC Beam Energy Scan will be presented.
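For context, the nuclear modification factor used here has the standard central-to-peripheral definition (binning choices are analysis-specific):
$R_{CP}(p_T) = \frac{\left[\, dN/dp_T \,/\, \langle N_{\mathrm{coll}} \rangle \,\right]_{\mathrm{central}}}{\left[\, dN/dp_T \,/\, \langle N_{\mathrm{coll}} \rangle \,\right]_{\mathrm{peripheral}}}$,
so $R_{CP} < 1$ at high $p_T$ is the suppression attributed to jet quenching, while $R_{CP} \approx 1$ would indicate its absence or the dominance of other effects.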
"In-situ Investigation of Crystallization of a Metallic Glass by Bragg Coherent X-ray Diffraction"
Presented by Bo Chen, Tongji University, China
Monday, October 15, 2018, 11 am
The crystallization behaviour of metallic glass (MG) has long been investigated ever since the discovery of these important functional materials [1]. Compared with crystalline and amorphous extremes, materials containing crystalline precipitates within an otherwise amorphous MG or partially crystallized materials have distinct properties that could be a way of tuning the materials' characteristics. Several methods including powder X-ray diffraction (XRD), transmission electron microscope (TEM) and selected area electron diffraction (SAED) are usually combined to characterize the degree of crystalline structure in amorphous materials. Until now, these methods, however, have failed to show the crystallization of individual crystal grains in three dimensions. In this work, the in-situ Bragg coherent X-ray diffraction imaging (BCDI) [2, 3] reveals the grain growth and the strain variation of individual crystals up to the sizes of a few hundred nanometers from the pure Fe-based MG powder during heating. We have found that there is preferential growth along one direction during the crystal formation; there is fractal structure around the developing crystal surface; there is also strain relaxation within the growing crystals while cooling. The work supports a two-step crystallization model for the Fe-based MG during heating. This could help to pave the way for designing partially crystalline materials with their attendant soft magnetic, anti-corrosive and mechanical properties. References [1] D. H. Kim, W. T. Kim, E. S. Park, N. Mattern, and J. Eckert, Prog. Mater. Sci. 2013, 58, 1103. [2] M. A. Pfeifer, G. J. Williams, I. A. Vartanyants, R. Harder and I. K. Robinson, Nature 2006, 442, 63. [3] I. K. Robinson and R. Harder, Nat. Mater. 2009, 8, 291.
Nuclear Theory/RBRC Seminar
"Anyonic particle-vortex statistics and the nature of dense quark matte"
Presented by Srimoyee Sen, University of Arizona
We show that Z_3-valued particle-vortex braiding phases are present in high density quark matter. Certain mesonic and baryonic excitations, in the presence of a superfluid vortex, have orbital angular momentum quantized in units of 1/3. Such non-local topological features can distinguish phases whose realizations of global symmetries, as probed by local order parameters, are identical. If Z_3 braiding phases and angular momentum fractionalization are absent in lower density hadronic matter, as is widely expected, then the quark matter and hadronic matter regimes of dense QCD must be separated by at least one phase transition.
NSLS-II Colloquium Series
"Biophysical Studies of an RNA Virus particle and its Maturation: Insights into an Elegantly Programmed Nano-machine"
Presented by John E. (Jack) Johnson, Department of Integrative Structural and Computational Biology, The Scripps Research Institute
Nudaurelia Capensis ω Virus (NωV) is a eukaryotic, quasi-equivalent, RNA virus, with a T=4 surface lattice, where maturation is dramatic (a change in particle size of 100Å) and is novel in that it can be investigated in vitro. Here we use X-ray crystallography, biochemistry, Small Angle X-ray Scattering, and electron cryo-microscopy and image reconstruction (CryoEM), to characterize maturation intermediates, an associated auto-catalytic cleavage, the kinetics of morphological change and to demonstrate that regions of NωV subunit folding are maturation-dependent and occur at rates determined by their quasi-equivalent position in the capsid. Matsui, T., Lander, G. C., Khayat, R., and Johnson, J. E. 2010. Subunits fold at position-dependent rates during maturation of a eukaryotic RNA virus. Proc Natl Acad Sci U S A 107:14111-5. Veesler, D., and Johnson, J.E. 2012. Virus Maturation. Annual review of biophysics 41:473-496. Doerschuk, P. C., Gong, Y., Xu, N., Domitrovic, T., and Johnson, J. E. 2016. Virus particle dynamics derived from CryoEM studies. Curr Opin Virol 18:57-63.
"Higgs to beauty quarks"
Presented by Caterina Vernieri, SLAC
The Higgs boson discovery at the LHC marked a historic milestone in the study of fundamental particles and their interactions. Over the last six years, we have begun measuring its properties, which are essential to build a deep understanding of the Higgs sector of the Standard Model and to potentially uncover new phenomena. The Higgs' favored decay mode to beauty (b) quarks (~60%) had so far remained elusive because of the overwhelming background of b-quark production due to strong interactions. Observing the Higgs decay to b-quarks was one of the critical missing pieces of our knowledge of the Higgs sector. Measuring this decay is a fundamental step to confirm the mass generation for fermions and may also provide hints of physics beyond the Standard Model. The CMS observation of the decay of the SM Higgs boson into a pair of b-quarks exploiting an exclusive production mode (VH) is yet another major milestone. This experimental achievement at the LHC, considered nearly impossible in the past, makes use of several advanced machine learning techniques to identify the b-quark distinctive signature, improve the Higgs boson mass resolution, and discriminate the Higgs boson signal from background processes.
NSLS-II Friday Seminar
"Highly Active and Stable Carbon Nanosheets Supported Iron Oxide for Fischer-Tropsch to Olefins Synthesis"
Presented by Congjun Wang, National Energy Technology Laboratory, Pittsburgh, PA
Light olefins production utilizes the energy intensive process of steam cracking. Fischer-Tropsch to olefins (FTO) synthesis potentially offers a more sustainable alternative. Here we show a promising FTO catalyst comprised of iron oxide nanoparticles supported on carbon nanosheets (CNS) fabricated from the carbonization of potassium citrate, which incorporates well dispersed K-promoter throughout the CNS support. This catalyst exhibits, to the best of our knowledge, the highest iron time yield of 1790–1990 μmolCO/gFe•s reported in the literature, 41% light olefins selectivity, and over 100 hours stable activity, making it one of the best performing FTO catalysts. Detailed characterization, including synchrotron X-ray spectroscopy, illustrates that the CNS support facilitates iron oxide reduction to metallic iron, leading to efficient transformation to the active iron carbide phase during FTO reaction. Since K is a commonly used promoter, our K-promoted CNS support potentially has broad utility beyond the FTO reactions demonstrated in the current study.
"SB/BNL Joint Cosmo seminar: Weighing Galaxy Clusters with Weak Lensing in Hyper Suprime-Cam Survey"
Presented by Dr. Elinor Medezinski, Princeton University
Thursday, October 4, 2018, 3 pm
Hosted by: Chi-Ting Chang
The most fundamental question in observational cosmology today is what is the nature of dark energy and dark matter. Clusters of galaxies serve as beacons to the growth of structure over cosmic scales, making them a sensitive cosmological tool. However, accurately measuring their masses has been notoriously difficult. Weak lensing provides the best direct probe of the cluster mass, both the baryonic and dark components, but it requires high-quality wide-field imaging and careful control of systematics. With its unprecedentedly deep and exquisite seeing, the Subaru Hyper Suprime-Cam (HSC) survey is an ongoing campaign to observe 1,400 square degrees to r~26, providing the closest precursor to LSST. In this talk, I will present our new field-leading results from the first HSC data release of ~150 square degrees that encompass thousands of clusters. Harnessing our new HSC survey, I measure benchmark weak lensing cluster masses with improved methodology, and reconcile previous tension on cosmological parameters between the SZ and CMB within the Planck survey. In the next decade, LSST and WFIRST will discover hundreds of thousands of galaxy clusters, peering deep to the epoch of formation. I will describe these surveys and the multifold breakthrough science we will achieve in the new era of astronomy.
"Latest XENON1T results"
Presented by Qing Lin, Columbia University
Understanding the properties of the dark matter particle is a fundamental problem in particle physics and cosmology. The search for dark matter particles scattering off nuclear targets using ultra-low-background detectors is one of the most promising approaches to deciphering the nature of dark matter. The XENON1T experiment, a dual-phase detector with ~2.0 tons of xenon running at the Gran Sasso Laboratory in Italy, is designed to lead the field of dark matter direct detection. Since November 2016, the XENON1T detector has been continuously taking data, with a background rate more than one order of magnitude lower than that of any current-generation dark matter search experiment. In this talk, I will present the latest results from XENON1T. Details about the XENON1T detector as well as the data analysis techniques will also be covered.
"Exploring the Phase Diagram with Succeptibility Scaling Functions: Epic Voyage or Just Another Bad Trip"
Presented by Roy A Lacey, Stony Brook University
Tuesday, October 2, 2018, 11 am
Hosted by: Jiangyong Jia
A major goal of the ongoing experimental programs at RHIC is to chart the QCD phase diagram. Pinpointing the location of the first-order phase boundary, which terminates at a critical end point (CEP), in the temperature versus baryon chemical potential (T,µB) plane of this phase diagram is key to this mapping. Finite-Size-Finite-Time susceptibility scaling functions can give crucial insight into these essential landmarks of the phase diagram. I will discuss recent attempts to extract and use such scaling functions to pin down the location of the CEP, as well as the associated critical exponents required to identify its universality class.
"(Somewhat) New ideas in ultralight DM searches:"
Presented by Babette Döbrich, CERN
Monday, October 1, 2018, 3 pm
The search for ultra-light Dark Matter particles complements the search for more "classical" Dark Matter candidates at the GeV scale in an important fashion. I will review the motivation of the QCD axion, highlighting ongoing established techniques and set-ups, and explain in detail a novel technique based on microwave filters (RADES), ongoing at CERN. Finally I will present results of one of the possibly cheapest experiments for ultralight Dark Photon Dark Matter (FUNK), achieving competitive results.
"Neutrinoless double beta decay in effective field theory"
Presented by Jordy De Vries, UMass Amherst
Friday, September 28, 2018, 2 pm
Hosted by: Chun Shen
Next-generation neutrinoless double-beta decay experiments aim to discover lepton-number violation in order to shed light on the nature of neutrino masses. A non-zero signal would have profound implications by demonstrating the existence of elementary Majorana particles and possibly pointing towards a solution of matter-antimatter asymmetry in the universe. However, the interpretation of the experimental signal (or lack thereof) requires care. First of all, a single nonzero measurement would indicate lepton-number violation but will not identify the underlying source. Second, complicated hadronic and nuclear input is required to connect the experimental data to a fundamental description of lepton-number violation. In this talk, I will use effective field theories to connect neutrinoless double-beta decay measurements to the fundamental lepton-number-violating source.
"Measurements of the branching fraction and time-dependent CP asymmetries for B0 -> J/psi pi0 decays"
Presented by Bilas Pal, BNL
Thursday, September 27, 2018, 3 pm
Measurements of the time-dependent CP asymmetries and branching fraction of B0 -> J/psi pi0 will be discussed. The CP asymmetry parameters for the decay B0 -> J/psi pi0 have previously been measured by BaBar and Belle experiments, but the results of mixing induced CP asymmetry (S) were not in good agreement with each other. Furthermore, the BaBar result lies outside the physically allowed region. Previous Belle measurements were based on 535M BB-bar pairs. We updated the measurements using the final Belle data set of 772M BB-bar pairs.
"Universality and quantum criticality of the one-dimensional spinor Bose gas"
Thursday, September 27, 2018, 1:30 pm
ISB Bldg. 734 Conf. Rm. 201
Hosted by: Igor Zaliznyak
We investigate the universal thermodynamics of the two-component one-dimensional Bose gas with contact interactions in the vicinity of the quantum critical point separating the vacuum and the ferromagnetic liquid regime. We find that the quantum critical region belongs to the universality class of the spin-degenerate impenetrable particle gas which, surprisingly, is very different from the single-component case and identify its boundaries with the peaks of the specific heat. In addition, we show that the compressibility Wilson ratio, which quantifies the relative strength of thermal and quantum fluctuations, serves as a good discriminator of the quantum regimes near the quantum critical point. Remarkably, in the Tonks-Girardeau regime the universal contact develops a pronounced minimum, reflected in a counterintuitive narrowing of the momentum distribution as we increase the temperature. This momentum reconstruction, also present at low and intermediate momenta, signals the transition from the ferromagnetic to the spin-incoherent Luttinger liquid phase and can be detected in current experiments with ultracold atomic gases in optical lattices.
Sustainable Energy Technologies Department
"Advances in Ultra-High Energy Resolution STEM-EELS"
Presented by Tracy C. Lovejoy, Nion R&D
Bldg. 734, Room 201
Hosted by: Feng Wang
The capabilities of scanning transmission electron microscopes (STEMs) have advanced very significantly in the last two decades. The first major advance was the successful implementation of electron-optical aberration correction, which allowed the STEMs to reach direct sub-angstrom resolution in 2002 [1]. This improvement made the imaging and spectroscopy of single atoms straightforward. A very recent major development has been the improvement of energy resolution of EELS due to the introduction of a new generation of monochromators and ultra-stable electron spectrometers. The Ultra-High Energy Resolution Monochromated EELS-STEM (U-HERMES™) system developed by Nion combines a dispersing-undispersing ground-potential monochromator [2], a bright cold-field-emission gun, an advanced aberration corrector, and a new EEL spectrometer. The latest version of the system allows 5 meV energy resolution EELS and has achieved 1.07 Å spatial resolution at the sample at 30kV when monochromating, and it greatly extends the capabilities of vibrational spectroscopy in the EM, introduced 4 years ago [3]. U-HERMES™ has so far been used for: damage-free identification of different bonds including hydrogen bonds in guanine [4]; probing atomic vibrations at surfaces and edges of nano-objects with nm-level spatial resolution [5]; achieving sub-nm spatial resolution in images obtained with dark-field EELS vibrational signals [6]; nanoscale mapping of phonon dispersion curves [7]; nanoscale temperature determination by electron energy gain spectroscopy [8]; identification of different isotopes by vibrational spectroscopy in the EM [9]; vibrational spectroscopy of ice; and vibrational fingerprinting of biological molecules.
"Unraveling the nucleon's mass and spin structure at an Electron-Ion Collider"
Presented by Yoshitaka Hatta, BNL and Kyoto University
The US-based Electron-Ion Collider (EIC) is a future high-luminosity, polarized collider dedicated to the physics of the nucleon/nucleus structure. Among the many physics problems that can be addressed at the EIC, I will focus on the origin of the mass and spin of the nucleon, namely, how they can be understood in terms of quarks' and gluons' degrees of freedom. I will give a review of the mass and spin decompositions in QCD and discuss possible experimental observables.
"Searches for decays of a Higgs boson into pairs of light (pseudo)scalars with the ATLAS detector"
Presented by Ljiljana Morvaj
The branching ratio of the Standard Model (SM) Higgs boson to non-SM or "exotic" states is currently constrained to be less than 34% at 95% confidence level. This opens the possibility of searching for new particles in decays of the Higgs boson. Such searches could provide unique access to hidden-sector states that are singlets under the SM gauge transformations. A search for decays of the Higgs boson to a pair of new spin–0 particles, H → aa, where the a–bosons decay to a b-quark pair and a muon pair, is presented in this seminar. The analysis uses 36.1 fb−1 of proton-proton collision data with √s = 13 TeV recorded by the ATLAS experiment at the LHC in 2015 and 2016. No deviation from the Standard Model prediction is observed and limits on Br(H → aa → bbμμ) are set in the a–boson mass range of 20–60 GeV. Searches in other final states, such as four b-quarks (H → aa → 4b) and two jets and two photons (H → aa → ggγγ), are also discussed.
"Status of Pythia 8 for an Electron-Ion Collider"
Presented by Ilkka Helenius, University of Tubingen
Pythia 8 is a general-purpose Monte-Carlo event generator widely used to simulate high-energy proton-proton collisions at the LHC. Recently it has been extended to handle also other collision systems involving lepton and heavy-ion beams. In this seminar I will review the current Pythia 8 capabilities in processes relevant to an Electron-Ion Collider (EIC) and discuss the projected future improvements. The relevant processes can be divided into two regions based on the virtuality of the intermediate photon: deeply inelastic scattering (DIS) at high virtualities and photoproduction at low virtualities. I will begin with an introduction of the event generation steps in Pythia 8 and then briefly discuss how the DIS processes can be simulated. Then I present our photoproduction framework and compare the results to the HERA data for charged-hadron and dijet production in lepton-proton collisions. In particular, I discuss the role of multiparton interactions in photon-proton interactions with resolved photons and how these can be constrained with the existing HERA data. Then I discuss how the same framework can be applied to ultra-peripheral heavy-ion collisions at the LHC, where one can study high-energy photon-nucleus interactions in a kinematic region comparable to the EIC. Finally I will show our first predictions for dijet production in these events and quantify the contribution of diffractive events according to the hard diffraction model that has been recently implemented into Pythia 8.
"The Belle II Experiment"
Presented by Bryan Fulsom, PNNL
The first generation of B-Factories, BaBar and Belle, operated over the previous decade and produced many world-leading measurements related to flavor physics. Their discoveries contributed to the awarding of the 2008 Nobel Prize in Physics. The Belle II experiment, now underway at the KEK laboratory in Japan, is a substantial upgrade of both the Belle detector and the KEKB accelerator. It aims to collect 50 times more data than existing B-Factory samples. This will provide unprecedented sensitivity to new physics signatures in the flavor sector. This talk will present the upgrade efforts of the Belle II experiment, results from its recent first e+e- collisions, and the future physics opportunities the experiment will provide.
"Multiloop functional renormalization group: Exact flow equations from the self-consistent parquet relations"
Presented by Fabian Kugler, Ludwig-Maximilians-Universität München, Germany
ISB 734 Conference Room 201
Hosted by: Andreas Weichselbaum
The functional renormalization group (fRG) is a versatile, quantum-field-theoretical formulation of the powerful RG idea and has seen a large number of successful applications. The main limitation of this framework is the truncation of the hierarchy of flow equations, where typically effective three-particle interactions are neglected altogether. From another perspective, the parquet formalism consists of self-consistent many-body relations on the one- and two-particle level and allows for the most elaborate diagrammatic resummations. Here, we unify these approaches by deriving multiloop fRG flow equations from the self-consistent parquet relations [1]. On the one hand, this circumvents the reliance on higher-point vertices within fRG and equips the method with quantitative predictive power [2]. On the other hand, it enables solutions of the parquet equations in previously inaccessible regimes. Using the X-ray-edge singularity as an example, we introduce the formalism and illustrate our findings with numerical results [3]. Finally, we discuss applications to the 2D Hubbard model [4] and the combination of multiloop fRG with the dynamical mean-field theory. [1] F. B. Kugler and J. von Delft, arXiv:1807.02898 (2018) [2] F. B. Kugler and J. von Delft, PRB 97, 035162 (2018) [3] F. B. Kugler and J. von Delft, PRL 120, 057403 (2018) [4] A. Tagliavini, C. Hille, F. B. Kugler, S. Andergassen, A. Toschi, and C. Honerkamp, arXiv:1807.02697 (2018)
"Potential and Issues for Future Accelerators and Ultimate"
Presented by Stephen Brooks, BNL
Particle colliders have been remarkably successful tools in particle and nuclear physics. What are the future trends and limitations of accelerators as they currently exist, and are there possible alternative approaches? What would the ultimate collider look like? This talk examines some challenges and possible solutions. Accelerating a single particle rather than a thermal distribution may allow exploration of more controlled interactions without background. Also, cost drivers are possibly the most important limiting factor for large accelerators in the foreseeable future so emerging technologies to reduce cost are highlighted.
"Flavor physics and CP violation - Recent results from combined BaBar+Belle measurements, ongoing work at LHCb, and prospects for the new physics searches at Belle II"
Presented by Markus Roehrken, CERN
Monday, September 17, 2018, 1 pm
During the 2000s, the BaBar experiment at SLAC (Stanford/USA) and the Belle experiment at KEK (Tsukuba/Japan) carried out a very successful flavor physics program. BaBar and Belle discovered CP violation in the B meson system and put tight experimental constraints on the quark-flavor sector of the Standard Model. The excellent experimental confirmation of the theory predictions by BaBar and Belle led to the Nobel Prize in physics for Makoto Kobayashi and Toshihide Maskawa in 2008. Continuing on these efforts, the new high-luminosity accelerator SuperKEKB and the next-generation B factory experiment Belle II recently started operation at KEK in Japan. SuperKEKB is designed to operate at an instantaneous luminosity of 8x10^35/cm^2/s, which is a factor 40 higher than the world record achieved by its predecessor KEKB. The upgraded Belle II detector will collect data samples about two orders of magnitude larger than those of the BaBar and Belle experiments. In this talk, we give an introduction to flavor physics and CP violation. We will present recent results of a combined analysis campaign, which for the first time makes simultaneous use of the large final data samples of BaBar and Belle in single physics analyses. The approach provides access to an integrated luminosity of about 1.1 inverse attobarn, and thus allows early Belle II-like measurements to be performed. In addition, ongoing work of the speaker at LHCb is reported. At the end of the talk, the prospects for new physics searches in heavy flavor decays governed by quantum-loop effects at Belle II are briefly discussed.
"Probing the Higgs Yukawa couplings at the LHC"
Presented by Konstantinos Nikolopoulos
The Higgs boson observation by ATLAS and CMS Collaborations at the CERN Large Hadron Collider has completed the Standard Model; the culmination of a century of discoveries. Despite the overwhelming success of the Standard Model to-date, the origin of fermion masses through Yukawa couplings to the Higgs doublet is loosely constrained experimentally and offers opportunities for BSM physics contributions. The status of the determination of the Higgs boson Yukawa couplings at the LHC will be presented and the prospects for the future will be discussed.
"Giant photocurrent in asymmetric Weyl semimetals from the helical magnetic effect"
Presented by Yuta Kikuchi, RBRC
"Temperature, Mass, Flavor: Emerging Phase Structure of SU(3) Gauge Theories with Fundamental Quarks"
Presented by Ivan Horvath, University of Kentucky
Friday, September 7, 2018, 2 pm
Recently we have outlined the general phase structure of the above relevant theory set, inferred from probing the glue by external Dirac probes. Its novelty and simplicity stems from the conclusion that changes in temperature, mass and number of flavors lead to analogous dynamical effects. In this talk I will review that picture, and introduce an important refinement of it that is currently emerging in the thermal corner of the phase diagram.
"Pair-breaking quantum phase transition in superconducting nanowires"
Presented by Andrey Rogachev, University of Utah
Friday, September 7, 2018, 11 am
Quantum phase transitions (QPT) between distinct ground states of matter are widespread phenomena, yet there are only a few experimentally accessible systems where the microscopic mechanism of the transition can be tested and understood. In this talk we will report on the discovery that a magnetic-field-driven quantum phase transition in MoGe superconducting nanowires can be fully explained by the critical theory of pair-breaking transitions characterized by a correlation length exponent ν ≈ 1 and dynamic critical exponent z ≈ 2. We find that in the quantum critical regime, the electrical conductivity is in agreement with a theoretically predicted scaling function and, moreover, that the theory quantitatively describes the dependence of conductivity on the critical temperature, field magnitude and orientation, nanowire cross-sectional area, and microscopic parameters of the nanowire material. At the critical field, the conductivity follows a T^((d–2)/z) dependence predicted by phenomenological scaling theories and more recently obtained within a holographic framework. Our work uncovers the microscopic processes governing the transition: the pair-breaking effect of the magnetic field on interacting Cooper pairs overdamped by their coupling to electronic degrees of freedom. It also reveals the universal character of continuous quantum phase transitions. In the talk we will also briefly comment on the reliability of the finite-size scaling analysis, the origin of the zero-bias anomaly in wires, and the implications of our findings for QPT in superconducting films.
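As a purely arithmetic illustration of the quoted scaling form (the effective dimensionality $d$ relevant to the nanowire analysis is not stated here, so the value below is only an assumed example):

$$\sigma(T)\Big|_{B=B_c}\propto T^{(d-2)/z}\;\longrightarrow\;\sigma\propto T^{1/2}\quad\text{for } d=3,\ z=2.$$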
RIKEN/NT & Quantum Computing Seminar
"Quantum Uncertainty and Quantum Computation"
Thursday, September 6, 2018, 12:30 pm
I will discuss the uncertainty in quantum mechanics as a property reflecting the "quantity" (measure) on the set of possible probing outcomes. This is in contrast to the commonly used "spectral distance" (metric). An unexpected insight into the nature of quantum uncertainty (and that of measure) is obtained as a result. One of the motivations for considering measure uncertainty is that it is directly relevant for assessing the efficiency of quantum computation.
Special Particle Physics Seminar
"LHCb"
Presented by Angelo Di Canto, CERN
"Neutron production and capture in stellar nucleosynthesis:^{22}Ne(\Alpha,n)^{25}Mg reaction and radiative neutron captures of radioactive nuclei"
Presented by Shuya Ota, Texas A&M University
Most of the elements heavier than Fe in the Universe are produced by a series of neutron capture reactions and β-decays in stars. The s-process, which occurs under moderate neutron environments (~10^7-10^10 neutrons/cm^3) such as in He burning of massive stars, is responsible for producing almost half of the heavy elements. Neutrons for the s-process environment are believed to be supplied by two dominant reactions, one of which is the 22Ne(α,n)25Mg reaction. This reaction in massive stars is dominated by a few resonance reactions. Nevertheless, there remain large uncertainties about the contribution of the reaction to s-process nucleosynthesis because the reaction cross sections are too small for direct measurements due to the Coulomb barrier (Eα = 400-900 keV in the lab system). In the first half of this seminar, I will present our experiment to determine these resonance strengths with a cyclotron accelerator at Texas A&M University. The experiment was performed by an indirect approach using the 6Li(22Ne,25Mg+n)d α-transfer reaction, in which resonance properties such as neutron decay branching ratios of the produced 26Mg were studied by measuring deuterons, γ-rays, and 26Mg in coincidence using large arrays of Si and Ge detectors, and a magnetic spectrometer. Our results showed neutron production from the 22Ne(α,n)25Mg reaction can be about 10 times lower than past measurements. The effect of our measurements on s-process nucleosynthesis will be discussed. In the second half of this seminar, I will present our experiments to determine neutron capture cross sections of radioactive nuclei using the Surrogate Reaction method [1]. Neutron capture reactions for the s-process involve relatively long-lived nuclei neighboring stability in the nuclear chart. Therefore, the Surrogate Reaction, which creates the same compound nuclei as the neutron capture reaction using a stable beam and target, can be a useful approach. On the other hand, the r-process, which produces the other half
Special Nuclear Theory/RIKEN Lunch Seminar
"Signal-to-noise issues in non-relativistic quantum matter: from entanglement to thermodynamics"
Presented by Joaquin Drut, University of North Carolina
Non-relativistic quantum matter, as realized in ultracold atomic gases, continues to be a remarkably versatile playground for many-body physics. Experimentalists have exquisite control over temperature, density, coupling, and shape of the trapping potential. Additionally, a wide range of properties can be measured: from simple ones like equations of state to more involved ones like the bulk viscosity and entanglement. The latter has received much attention due to its connection to quantum phase transitions, but it has proven extremely difficult to compute: stochastic methods display exponential signal-to-noise issues of a very similar nature as those due to the infamous sign problem affecting finite-density QCD. In this talk, I will present an algorithm that solves the signal-to-noise issue for entanglement, and I will show results for strongly interacting systems in three spatial dimensions that are the first of their kind. I will also present a few recent explorations of the thermodynamics of polarized matter and other cases that usually have a sign problem, using complexified stochastic quantization.
"Probe strong magnetic field in QGP with dielectrons from photon-photon interactions"
Presented by Zhangbu Xu, BNL
Tuesday, August 28, 2018, 11 am
We present the first measurements of $e^+e^-$ pair production from light-light scattering in non-central heavy-ion collisions. The excess yields peak distinctly at low transverse momentum, with $\sqrt{\langle p_T^2\rangle}$ between 40 and 60 MeV/c. The excess yields can be explained only when photon-photon interactions are included in model calculations. However, the measured $p_T^2$ distributions are significantly broader than the model calculations and differ between Au+Au and U+U. Our measurements provide possible experimental evidence for the existence of a strong electromagnetic field, and I will discuss its possible impact on emerging phenomena in hadronic heavy-ion collisions, such as the Chiral Magnetic Effect.
"Spinon Confinement and a Longitudinal Mode in One Dimensional Yb2Pt2Pb"
Presented by Bill Gannon, Department of Physics and Astronomy, Texas A&M University
Thursday, August 23, 2018, 1:30 pm
Abstract: The Yb3+ magnetic moments in Yb2Pt2Pb are seemingly classical, since the large spin-orbit coupling of the 4f-electrons and the crystal electric field dictate a J = +/-7/2 Yb ground state doublet. Surprisingly, the fundamental low energy magnetic excitations in Yb2Pt2Pb are spinons on one dimensional chains, shown to be in good agreement with the behavior expected with the XXZ Hamiltonian for nearly isotropic, S = +/-1/2 magnetic moments. We have performed new high resolution neutron scattering measurements to examine the properties of these excitations in a magnetic field. In fields larger than 0.5 T, the chemical potential closes the gap to the spinon dispersion, modifying the quantum continuum through the formation of a spinon Fermi surface. This leads to the formation of spinon bound states along the chains, coupled to a longitudinally polarized interchain mode at energies below the quantum continuum. The ground state doublet nature of the Yb ions ensures that at all fields, transverse excitations are virtually nonexistent, allowing direct measurement of the mode dispersion.
"Non-abelian symmetries and applications in tensor networks"
Presented by Andreas Weichselbaum, BNL
"Computation of the shear viscosity in QCD at (almost) next to leading order"
Presented by Derek Teaney, Stony Brook University
Friday, August 17, 2018, 2 pm
"Universality in Classical and Quantum Chaos"
Presented by Masanori Hanada, YITP
We study the chaotic nature of classical and quantum systems. In particular, we will study the details of the Lyapunov growth. We will show evidence that the spectrum of Lyapunov exponents admits a universal description by Random Matrix Theory, and that systems dual to black holes exhibit 'strong' universality.
"Quantum computing for deuteron"
Presented by Thomas Papenbrock, University of Tennessee
"Electric conductivity of hot and dense quark matter in a magnetic field"
Presented by Yoshimasa Hidaka, RIKEN
"Advances in high energy electron holography"
Presented by Dr. Toshiaki Tanigaki, Hitachi, Japan
Conference room in building 480
Hosted by: MG Han
Advances in High-Voltage Electron Holography. T. Tanigaki, Research & Development Group, Hitachi, Ltd. Email: [email protected]. Electron holography can observe the electromagnetic field inside materials and devices at high resolution, approaching the atomic scale. The high penetration power of a high-energy electron wave is crucial for observing magnetic structures, which exist only in thick samples. It is particularly crucial in three-dimensional (3D) observations, which require a series of observations with the sample increasingly tilted, so that the projected sample thickness increases with the tilt angle. As an example of this, magnetic vortex cores confined in stacked ferromagnetic (Fe) discs were observed three-dimensionally by using vector-field electron tomography with a 1.0 MV holography electron microscope [1]. To invent new functional materials and devices for establishing a sustainable society, methods for controlling atomic arrangements in small areas such as interfaces have become important [2,3]. Electron holography is a powerful tool for analyzing the origins of these functions by observing electromagnetic fields and strains at high resolution. The advantages of high-voltage electron holography are the high resolution and penetration power afforded by high-energy electron waves. The quest for the ultimate resolution through continuous improvements of holography electron microscopes led to the development of an aberration-corrected 1.2 MV holography electron microscope [4,5] (Figure 1). We describe recent results obtained using high-voltage electron holography. The spatial resolution of the 1.2 MV holography electron microscope reached 0.043 nm when the sample was placed in the high magnetic field of the objective lens [4]. Under observation conditions in which the sample was placed in a field-free position for observing a magnetic field, the spatial res
"Deep Neural Network Techniques R&D for Data Reconstruction of Liquid Argon TPC Detectors"
Presented by Kazuhiro Terao, SLAC National Accelerator Laboratory
Tuesday, August 7, 2018, 10 am
Liquid Argon Time Projection Chambers (LArTPCs) are capable of recording images of charged particle tracks with breathtaking resolution. Such detailed information will allow LArTPCs to perform accurate particle identification and calorimetry, making them the detectors of choice for many current and future neutrino experiments. However, analyzing such images can be challenging, requiring the development of many algorithms to identify and assemble features of the events in order to reconstruct neutrino interactions. In recent years, we have been investigating a new approach using deep neural networks (DNNs), a modern solution to pattern recognition for image-like data in the field of Computer Vision. A modern DNN can be applied to various types of data reconstruction tasks, including interaction vertex finding, pixel clustering, and particle/topology type identification. We have developed a small inter-experiment collaboration to share generic software tools and algorithm development effort that can also be applied to non-LArTPC imaging detectors. In this talk I will discuss the challenges of LArTPC data reconstruction, recent work, and future plans for developing a full LArTPC data reconstruction chain using DNNs.
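To make the image-classification use case concrete, here is a minimal toy sketch of a convolutional network of the kind used for particle/topology identification on 2D LArTPC-style charge images. It is not the speaker's actual reconstruction chain; the architecture, input size, and the five assumed classes are illustrative only.

```python
# Toy sketch only: a tiny CNN classifier for 2D "charge image" inputs.
# Architecture, input size, and class count are illustrative assumptions,
# not the reconstruction chain described in the talk.
import torch
import torch.nn as nn

class ToyLArTPCClassifier(nn.Module):
    def __init__(self, n_classes: int = 5):  # e.g. e, mu, pi, proton, gamma (assumed)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes),
        )

    def forward(self, x):
        # x: (batch, 1, H, W) single-channel charge image
        return self.head(self.features(x))

# Smoke test on random 256x256 "event images"
model = ToyLArTPCClassifier()
logits = model(torch.randn(2, 1, 256, 256))
print(logits.shape)  # torch.Size([2, 5])
```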
Summer Sundays
"Atom Smashing Fun with the Relativistic Heavy Ion Collider"
Sunday, August 5, 2018, 10 am
Berkner Hall, Room B
"Significant Excess of Electron-Like Events in the MiniBooNE Short-Baseline Neutrino Experiment"
Presented by William Louis, Los Alamos National Lab
Thursday, August 2, 2018, 3 pm
The MiniBooNE experiment at Fermilab observes a total electron-neutrino event excess in both neutrino and antineutrino running modes of 460.5 +- 95.8 events (4.8 sigma) in the energy range from 200-1250 MeV. The MiniBooNE L/E distribution and the allowed region from a two-neutrino oscillation fit to the data are consistent with the L/E distribution and allowed region reported by the LSND experiment. All of the major backgrounds are constrained by in-situ event measurements, so non-oscillation explanations would need to invoke new anomalous background processes. Although the data are fit with a two-neutrino oscillation model, other models may provide better fits to the data. The MiniBooNE event excess will be further studied by the Fermilab short-baseline neutrino (SBN) program.
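As a rough arithmetic cross-check of the quoted numbers (treating the excess divided by its total uncertainty as the significance, which is an approximation rather than the collaboration's full statistical treatment):

$$\frac{460.5}{95.8}\approx 4.8,$$

consistent with the stated 4.8 sigma.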
"Nucleon isovector axial charge in 2+1-flavor domain-wall QCD with physical mass"
Presented by Shigemi Ohta, IPNS, KEK
"Imaging Non-equilibrium Dynamics in Two-Dimensional Materials"
Presented by Kenneth Beyerlein, Max Planck Institute for the Structure and Dynamics of Matter, Germany
Wednesday, August 1, 2018, 11 am
The interfaces in thin film heterostructures dictate the performance of an electronic device. Understanding their behavior upon exposure to light is important for advancing photovoltaics and spintronics. However, producing an atomic image of these dynamics is an under-determined problem without a unique solution. In this talk, I will show how a set of ultrafast soft X-ray diffraction rocking curves can be spliced together to add constraints to the phase retrieval problem. In doing so, the anti-ferromagnetic order through a NdNiO3 film after illumination of the substrate with a mid-Infrared laser pulse will be imaged. Notably, a disordered phase front initiated at the substrate interface is shown to evolve at twice the speed of sound. This time-spliced imaging technique opens a new window into the correlated dynamics of two-dimensional materials.
"Dark Matter Annual Modulation with SABRE"
Presented by Lindsey Bignell, Australian National University
Tuesday, July 31, 2018, 1:30 pm
SABRE is a dark matter direct detection experiment with a target of ultra-pure NaI(Tl). Our experiment is motivated by the DAMA result; a long-standing and highly statistically significant modulation of the count rate in their NaI(Tl) detector that is consistent with that expected from the dark matter halo. However, a number of other direct detection experiments, using different target materials, exclude the dark matter parameter space implied by DAMA for the simplest WIMP-nucleus interaction models. SABRE hopes to carry out the first model-independent test of the DAMA claim, with sufficient sensitivity to confirm or refute their result. SABRE will also operate identical detectors in the northern and southern hemisphere, to rule out seasonally-modulated backgrounds. The southern detector will be housed at the first deep underground laboratory in the Southern Hemisphere: the Stawell Underground Physics Laboratory. This talk will give an overview of the SABRE design and its predicted sensitivity, as well as an update on the proof-of-principle detector which will be operating this summer.
"Capturing the Inner Beauty of the Quark Gluon Plasma"
Presented by Jin Huang, Brookhaven National Laboratory
The Quark Gluon Plasma (QGP) filled the universe in the first microsecond after the Big Bang and today is routinely recreated in high energy nuclear collisions at RHIC and the LHC. While experiments have revealed an array of surprising QGP properties, such as perfect fluidity, extreme vorticity, and near total opaqueness to hard scattered quarks and gluons, the detailed physics that gives rise to these properties remains a focus of forefront research. Heavy quarks, particularly the very heavy beauty quark, provide the means to clarify the connection between the microscopic physics of the QGP and its larger scale properties. Beauty quarks – or b-quarks – are produced rarely in collisions at RHIC, and the planned sPHENIX experiment will be equipped with a high rate vertex tracker enabling precision measurements of b-quark observables, such as B-meson suppression and flow, the differential suppression of Upsilon states and a number of observables related to b-quark jets. These b-quark probes will provide crucial information about the microscopic description of QGP at RHIC energies. In this talk, I will explain b-quark physics in the QGP, current measurements, the program at the planned sPHENIX experiment and its relevance to the future Electron Ion Collider.
"Jets as a probe of transverse spin physics"
Presented by Zhongbo Kang, UCLA
Jets are collimated sprays of hadrons that are naturally produced at high-energy colliders. They are powerful probes of many different aspects of QCD dynamics. In this talk, we will demonstrate how to use jets to explore transverse momentum dependent (TMD) physics. A novel TMD framework to deal with back-to-back two-particle correlations is presented, with which we could study the Sivers asymmetry for photon+jet or dijet production in transversely polarized proton-proton collisions. At the end of the talk, we also show how jet substructure could be used to explore the TMD fragmentation functions. We expect these studies to have important applications at RHIC in the future.
Nuclear Theory Seminar
"Medium Modification of Jet Substructure in the Opacity Expansion"
Presented by Matthew Sievert, Los Alamos
Hosted by: Yacine Mehtar-Tani
The modification of jets and their substructure in the presence of quark-gluon matter, beyond solely the quenching of their production, is a cornerstone of jet tomography. Although the nature of different nuclear environments can vary widely, the manner in which an external potential leads to a modification of jets and their substructure is universal and applies to both hot and cold nuclear matter. An order-by-order calculation of the medium modifications is possible on the basis of the opacity expansion, a series which can be truncated at finite order if the average number of scatterings in the quark-gluon matter is not too large. Other methods exist which can resum the full opacity series into a path-integral formalism that remains applicable at very high opacities. In this talk, I will present a new calculation in the opacity expansion approach which computes the gluon substructure of a quark jet with exact kinematics at second order in opacity. I will also derive a set of recursion relations which can be used to construct higher-order terms in the opacity expansion to any finite order. Finally, I will compare this approach to the resummed path-integral formalism, discussing the strengths and weaknesses of both methods and opportunities to study their overlap.
"Atomic level structural characterization of materials by electron microscopy"
Presented by Shize Yang, Center for Functional Nanomaterials
Electron microscopy techniques have advanced rapidly in recent years. Important progress has been made in in-situ electron microscopy, cryogenic electron microscopy, electron tomography, advanced electron energy loss spectroscopy, etc. In this talk I will briefly introduce these techniques and show how they play a vital role in the structural characterization of 2D materials, catalysts, and battery materials.
Physics Summer School
"Working with High-Performance Astronomical CCD"
Hosted by: Mary Bishai and Anze Slosar
"Jet fragmentation in a QCD plasma: Universal quark/gluon ratio and wave turbulence"
Presented by Soeren Schlichting, University of Washington
Thursday, July 26, 2018, 11 am
We investigate the radiative break-up of a highly energetic quark or gluon in a high-temperature QCD plasma. Within an inertial range of momenta T
"Tale of coherent photon products: from UPC to HHIC"
Presented by Wangmei Zha, University of Science and Technology of China
"Silicon detectors"
Presented by Gabriele Giacomini
RIKEN Lunch Seminar/Special Nuclear Theory Seminar
"Neutrino Scattering on Quantum Computers"
Presented by Alessandro Roggero, University of Washington
"Invariant and insensitive: climate model microphysics as a scaling problem"
Presented by Mikael Witte, National Center for Atmospheric Research
Hosted by: Yangang Liu
Clouds are inherently multiscale phenomena: the particles that make up clouds are typically microns to millimeters, while the large-scale circulations that drive cloud systems can be hundreds of kilometers across. Limited computational power and the need to accurately represent the large-scale circulations in numerical simulations of the atmosphere make explicit inclusion of cloud microphysics a practical impossibility. In the last 20 years there has been a shift toward representing microphysics as scale-aware processes. Despite this shift, many unanswered questions remain regarding the scaling characteristics of microphysical fields and how best to incorporate that information into parameterizations. In this talk, I will present results from analysis of high frequency in situ aircraft measurements of marine stratocumulus taken over the southeastern Pacific Ocean aboard the NCAR/NSF C-130 during VOCALS-REx. First, I will show that cloud and rain water have distinct scaling properties, indicating that there is a statistically and potentially physically significant difference in the spatial structure of the two fields. Covariance of cloud and rain is a strong function of length/grid scale and this information can easily be incorporated in large-scale model parameterizations. Next I will show results from multifractal analysis of cloud and rain water to understand the spatial structure of these fields, the results of which provide a framework for development of a scale-insensitive microphysics parameterization. Finally, I compare observed microphysical scaling properties with those inferred from large eddy simulations of drizzling stratocumulus, applying the same analyses as applied to the aircraft observations. We find that simulated cloud water agrees well with the observations but the drizzle field is substantially smoother than observed, which has implications for the ability of limited-area models to adequately reproduce the spatial structure o
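To make the scaling analysis concrete, here is a minimal sketch of the kind of structure-function exponent estimate used to characterize the spatial structure of a 1D transect. The synthetic Brownian-like input and the choice of lags are assumptions for illustration; this is not the VOCALS-REx analysis itself.

```python
# Minimal sketch: estimate the power-law exponent of the second-order
# structure function S_2(r) = <|x(i+r) - x(i)|^2> for a 1D series.
# The synthetic input below is illustrative only.
import numpy as np

def structure_function(x, lags, order=2):
    return np.array([np.mean(np.abs(x[r:] - x[:-r]) ** order) for r in lags])

rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(2**14))            # Brownian-like toy "transect"
lags = np.unique(np.logspace(0, 3, 20).astype(int))  # lags from 1 to ~1000 points
S2 = structure_function(x, lags)

# Log-log slope gives the scaling exponent zeta(2); ~1 for Brownian motion
zeta2 = np.polyfit(np.log(lags), np.log(S2), 1)[0]
print(round(zeta2, 2))
```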
Peter Hilton
Peter John Hilton (7 April 1923[1] – 6 November 2010[2]) was a British mathematician, noted for his contributions to homotopy theory and for code-breaking during World War II.[3]
[Photo: Hilton in 1993]
Born: Peter John Hilton, 7 April 1923, London, England
Died: 6 November 2010 (aged 87), Binghamton, New York, United States
Alma mater: The Queen's College, Oxford
Known for: Eckmann–Hilton argument; Eckmann–Hilton duality; Hilton's theorem
Spouse: Margaret Mostyn (m. 1949–2010)
Children: 2
Fields: Mathematics
Institutions: University of Birmingham; Cornell University; Case Western Reserve University; Binghamton University; University of Central Florida
Thesis: Calculation of the homotopy groups of $A_{n}^{2}$-polyhedra (1949)
Doctoral advisor: J. H. C. Whitehead
Doctoral students: Paul Kainen
Early life
He was born in Brondesbury, London, the son of Mortimer Jacob Hilton, a Jewish physician in general practice in Peckham, and his wife Elizabeth Amelia Freedman, and was brought up in Kilburn.[4][5] The physiologist Sidney Montague Hilton (1921–2011) of the University of Birmingham Medical School was his elder brother.[6]
Hilton was educated at St Paul's School, London.[7][8][4] He went to The Queen's College, Oxford in 1940 to read mathematics, on an open scholarship, where the mathematics tutor was Ughtred Haslam-Jones.[7][4][9]
Bletchley Park
An undergraduate in wartime Oxford, on a shortened course, Hilton was obliged to train with the Royal Artillery and faced scheduled conscription in summer 1942.[10] After four terms, he took the advice of his tutor and followed up a civil service recruitment contact.[4] He attended an interview for mathematicians with knowledge of German, and was offered a position in the Foreign Office without being told the nature of the work. The team was, in fact, recruiting on behalf of the Government Code and Cypher School. Aged 18, he arrived at the codebreaking station Bletchley Park on 12 January 1942.[11]
Hilton worked with several of the Bletchley Park deciphering groups. He was initially assigned to Naval Enigma in Hut 8. Hilton commented on his experience working with Alan Turing, whom he knew well for the last 12 years of his life, in his "Reminiscences of Bletchley Park" from A Century of Mathematics in America:[12]
It is a rare experience to meet an authentic genius. Those of us privileged to inhabit the world of scholarship are familiar with the intellectual stimulation furnished by talented colleagues. We can admire the ideas they share with us and are usually able to understand their source; we may even often believe that we ourselves could have created such concepts and originated such thoughts. However, the experience of sharing the intellectual life of a genius is entirely different; one realizes that one is in the presence of an intelligence, a sensibility of such profundity and originality that one is filled with wonder and excitement.
Hilton echoed similar thoughts in the Nova PBS documentary Decoding Nazi Secrets (UK Station X, Channel 4, 1999).[13]
In late 1942, Hilton transferred to work on German teleprinter ciphers.[10] A special section known as the "Testery" had been formed in July 1942 to work on one such cipher, codenamed "Tunny", and Hilton was one of the early members of the group.[14] His role was to devise ways to deal with changes in Tunny, and to liaise with another section working on Tunny, the "Newmanry", which complemented the hand-methods of the Testery with specialised codebreaking machinery.[14] Hilton has been counted as a member of the Newmanry, possibly on a part-time basis.[15]
Recreational
A convivial pub drinker at Bletchley Park, Hilton also spent time with Turing working on chess problems and palindromes.[16] There he constructed a 51-letter palindrome:[17]
"Doc note, I dissent. A fast never prevents a fatness. I diet on cod."
He did not use paper or pencil while composing it, but lay on his bed, eyes closed, and assembled it mentally over one night. It took him five hours.[18]
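A quick script confirms the two properties claimed above, the letter count and the palindromic symmetry (an illustrative check, not taken from any cited source):

```python
import re

# Hilton's palindrome, quoted above
TEXT = "Doc note, I dissent. A fast never prevents a fatness. I diet on cod."

# Keep letters only; ignore case, spaces and punctuation
letters = re.sub(r"[^a-z]", "", TEXT.lower())

print(len(letters))              # 51
print(letters == letters[::-1])  # True: it reads the same in both directions
```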
Mathematics
Hilton obtained his DPhil in 1949 from Oxford University under the supervision of John Henry Whitehead. His dissertation was "Calculation of the homotopy groups of $A_{n}^{2}$-polyhedra".[19][20] His principal research interests were in algebraic topology, homological algebra, categorical algebra and mathematics education. He published 15 books and over 600 articles in these areas, some jointly with colleagues. Hilton's theorem (1955) is on the homotopy groups of a wedge of spheres. It addresses an issue that comes up in the theory of "homotopy operations".[21]
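For orientation, one common formulation of Hilton's theorem (stated here from standard references rather than from the sources cited above) is: for spheres $S^{k_1},\dots,S^{k_r}$ with each $k_i \ge 2$,

$$\pi_n\!\left(S^{k_1}\vee\cdots\vee S^{k_r}\right)\;\cong\;\bigoplus_{w}\pi_n\!\left(S^{h(w)}\right),$$

where $w$ runs over the basic products (a Hall basis of the free Lie algebra on $r$ generators), $h(w)$ is the dimension of the sphere attached to $w$, and the summands are embedded via iterated Whitehead products.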
In 1948 Turing, then at the Victoria University of Manchester, invited Hilton to see the Manchester Mark 1 machine. Around 1950, Hilton took a position in the university's mathematics department. He was there in 1949, when Turing engaged in a discussion that introduced him to the word problem for groups.[22] Hilton worked with Walter Lederman.[23] Another colleague there was Hugh Dowker, who in 1951 drew his attention to the Serre spectral sequence.[24]
In 1952, Hilton moved to DPMMS in Cambridge, England, where he ran a topology seminar attended by John Frank Adams, Michael Atiyah, David B. A. Epstein, Terry Wall and Christopher Zeeman.[25] Via Hilton, Atiyah became aware of Jean-Pierre Serre's coherent sheaf proof of the Riemann–Roch theorem for curves, and found his first research direction in sheaf methods for ruled surfaces.[26]
In 1955, Hilton started work with Beno Eckmann on what became known as Eckmann-Hilton duality for the homotopy category.[27] Through Eckmann, he became editor of the Ergebnisse der Mathematik und ihrer Grenzgebiete, a position he held from 1964 to 1983.[28]
Hilton returned to Manchester as Professor, in 1956.[29] In 1958, he became the Mason Professor of Pure Mathematics at the University of Birmingham.[7] He moved to the United States in 1962 to be Professor of Mathematics at Cornell University, a post he held until 1971.[1] From 1971 to 1973, he held a joint appointment as Fellow of the Battelle Seattle Research Center and Professor of Mathematics at the University of Washington. On 1 September 1972, he was appointed Louis D. Beaumont University Professor at Case Western Reserve University; on 1 September 1973, he took up the appointment. In 1982, he was appointed Distinguished Professor of Mathematics at Binghamton University, becoming Emeritus in 2003. Latterly, he spent each spring semester as Distinguished Professor of Mathematics at the University of Central Florida.
Hilton is featured in the book Mathematical People.[30]
Death and family
Peter Hilton died on 6 November 2010 in Binghamton, New York, at age 87. He left behind his wife, Margaret Mostyn (born 1925), whom he married in 1949, and their two sons, who were adopted.[31] Margaret, a schoolteacher, had an acting career as Margaret Hilton in the US, in summer stock theatre.[4] She also played television roles.[32] She died in Seattle in 2020.[33]
In popular culture
Hilton is portrayed by actor Matthew Beard in the 2014 film The Imitation Game, which tells the tale of Alan Turing and the cracking of Nazi Germany's Enigma code.
Academic positions
• Lecturer at University of Cambridge, 1952–55
• Senior Lecturer at University of Manchester, England, 1956–58
• Mason Professor of Pure Mathematics, University of Birmingham, England, 1958–62
• Visiting Professor at the Eidgenössische Technische Hochschule (ETH) Zürich, 1966–67, 1981–82, 1988–89
• Visiting Professor at the Courant Institute of Mathematical Sciences, New York University, 1967–68
• Visiting Professor at the Universitat Autònoma de Barcelona (Autonomous University of Barcelona), 1989
• Professeur invité, University of Lausanne, in 1996
Honours
• Silver Medal, University of Helsinki, 1975
• Doctor of Humanities (hon. causa), N. University of Michigan, 1977
• Corresponding Member, Brazilian Academy of Sciences, 1979
• Doctor of Science (hon. causa), Memorial University of Newfoundland, 1983
• Doctor of Science (hon. causa), Autonomous University of Barcelona, 1989
• In August 1983, an international conference on algebraic topology was held, under the auspices of the Canadian Mathematical Society, to mark Hilton's 60th Birthday. Professor Hilton was presented with a Festschrift of papers dedicated to him (London Mathematical Society Lecture Notes, Volume 86, 1983). The American Mathematical Society has published the proceedings under the title ‘Conference on Algebraic Topology in Honor of Peter Hilton’[34]
• Hilton was selected in October 1992, to deliver the invited lecture at the ‘Georges de Rham’ day at the University of Lausanne.
• An International Conference was held in Montreal in May 1993, to mark the 70th birthday of Hilton. The proceedings were published as The Hilton Symposium, CRM Proceedings and Lecture Notes, Volume 6, American Mathematical Society (1994), edited by Guido Mislin.
• In 1994, Hilton was the Mahler Lecturer of the Australian Mathematical Society.
• In the summers of 2001 and 2002, Hilton was Visiting Erskine Fellow at the University of Canterbury, Christchurch, New Zealand.[35]
• In winter term of 2005 Hilton received an appointment as Courtesy Faculty in the College of Arts and Sciences at University of South Florida.
Hilton's former PhD students
According to the Mathematics Genealogy Project site, Hilton supervised at least 27 doctoral students, including Paul Kainen at Cornell University.[19]
Bibliography
• Peter J. Hilton, An introduction to homotopy theory, Cambridge Tracts in Mathematics and Mathematical Physics, no. 43, Cambridge University Press, 1953.[36] ISBN 0-521-05265-3 MR0056289
• Peter J. Hilton, Shaun Wylie, Homology theory: An introduction to algebraic topology, Cambridge University Press, New York, 1960.[37] ISBN 0-521-09422-4 MR0115161
• Peter Hilton, Homotopy theory and duality, Gordon and Breach, New York-London-Paris, 1965 ISBN 0-677-00295-5 MR0198466
• H.B. Griffiths and P.J. Hilton, "A Comprehensive Textbook of Classical Mathematics", Van Nostrand Reinhold, London, 1970, ISBN 978-0442028640
• Peter J. Hilton, Guido Mislin, Joe Roitberg, Localization of nilpotent groups and spaces, North-Holland Publishing Co., Amsterdam-Oxford, 1975. ISBN 0-444-10776-2 MR0478146
• Peter Hilton, Jean Pedersen, Build your own polyhedra. Second edition, Dale Seymour Publications, Palo Alto, 1994. ISBN 0-201-49096-X
• Peter Hilton, Derek Holton, Jean Pedersen, Mathematical reflections: In a room with many mirrors. Corrected edition, Undergraduate Texts in Mathematics, Springer-Verlag, New York, 1996. ISBN 0-387-94770-1
• Peter J. Hilton, Urs Stammbach, A course in homological algebra. Second edition, Graduate Texts in Mathematics, vol 4, Springer-Verlag, New York, 1997. ISBN 0-387-94823-6 MR1438546
• Hans Walser, 99 Points of Intersection, translated by Peter Hilton and Jean Pedersen, MAA Spectrum, Mathematical Association of America, 2006. ISBN 978-0-88385-553-9
• Peter Hilton, Derek Holton, Jean Pedersen, Mathematical vistas: From a room with many windows, Undergraduate Texts in Mathematics, Springer-Verlag, New York, 2010. ISBN 1-4419-2867-7
• Peter Hilton, Jean Pedersen, A mathematical tapestry: Demonstrating the beautiful unity of mathematics, Cambridge University Press, Cambridge, 2010. ISBN 0-521-12821-8
References
1. Peter Hilton, "On all Sorts of Automorphisms", The American Mathematical Monthly, 92(9), November 1985, p. 650
2. Obituaries: Peter Hilton, 8 November 2010, retrieved 9 November 2010
3. Pedersen, Jean (2011). "Peter Hilton: Code Breaker and Mathematician (1923–2010)" (PDF). Notices of the American Mathematical Society. 58 (11): 1538–1540. MR 2896083.
4. James, I. M. "Hilton, Peter John (1923–2010)". Oxford Dictionary of National Biography (online ed.). Oxford University Press. doi:10.1093/ref:odnb/102834. (Subscription or UK public library membership required.)
5. "Jewish Personnel at Bletchley Park in World War II". www.jewishvirtuallibrary.org.
6. "Sidney Montague Hilton 17 March 1921–28 January 2011" (PDF). static.physoc.org. pp. 62–63.
7. "About the speaker", announcement Archived 22 February 2007 at the Wayback Machine of a lecture given by Peter Hilton at Bletchley Park on 12 July 2006. Retrieved 18 January 2007.
8. Stewart, Ian (2 December 2010). "Peter Hilton obituary". The Guardian.
9. Titchmarsh, E. C. (1963). "Ughtred Shuttleworth Haslam-Jones". Journal of the London Mathematical Society. s1-38 (1).
10. Peter Hilton, "Living with Fish: Breaking Tunny in the Newmanry and the Testery", p. 190 from pp. 189–203 in Jack Copeland ed, Colossus: The Secrets of Bletchley Park's Codebreaking Computers, Oxford University Press, 2006.
11. Hilton, "Living with Fish", p. 189
12. Hilton, Peter. "A Century of Mathematics in America, Part 1, Reminiscences of Bletchley Park" (PDF).
13. "NOVA, Transcripts, Decoding Nazi Secrets, PBS". PBS. Retrieved 29 August 2019.
14. Jerry Roberts, "Major Tester's Section", p. 250 of pp. 249–259 in Jack Copeland ed, Colossus: The Secrets of Bletchley Park's Codebreaking Computers, Oxford University Press, 2006.
15. Reeds, James A.; Diffie, Whitfield; Field, J. V. (7 July 2015). Breaking Teleprinter Ciphers at Bletchley Park: An edition of I.J. Good, D. Michie and G. Timms: General Report on Tunny with Emphasis on Statistical Methods (1945). John Wiley & Sons. p. 553. ISBN 978-0-470-46589-9.
16. Sugarman, Martin (2011). "A supplement to 'Breaking the codes: Jewish personnel at Bletchley Park'". Jewish Historical Studies. 43: 219. ISSN 0962-9696. JSTOR 29780153.
17. Jack Good, "Enigma and Fish", p. 160 from pp. 149–166 in F. H. Hinsley and Alan Strip, editors, Codebreakers: The Inside Story of Bletchley Park, 1993.
18. Thinkmap, Inc. "The Palindrome Game of the Enigma Codebreakers". www.vocabulary.com. Retrieved 13 February 2021.
19. Peter Hilton at the Mathematics Genealogy Project
20. David Joyner and David Kahn, editors, "Edited Transcript of Interview with Peter Hilton for Secrets of War", in Cryptologia 30(3), July–September 2006, pp. 236–250.
21. Whitehead, George W. (6 December 2012). Elements of Homotopy Theory. Springer Science & Business Media. p. xiii. ISBN 978-1-4612-6318-0.
22. Hodges, Andrew (30 November 2012). Alan Turing: The Enigma. Random House. p. 412. ISBN 978-1-4481-3781-7.
23. Ledermann, Walter (1 February 2010). Encounters of a Mathematician. Lulu.com. p. 99. ISBN 978-1-4092-8267-9.
24. Dowker, Hugh (31 January 1985). Aspects of Topology: In Memory of Hugh Dowker 1912-1982. Cambridge University Press. p. 281. ISBN 978-0-521-27815-7.
25. James, I. M. (24 August 1999). History of Topology. Elsevier. p. 654. ISBN 978-0-08-053407-7.
26. Atiyah, Michael (28 April 1988). Collected Works: Michael Atiyah Collected Works: Volume 1: Early Papers; General Papers. Clarendon Press. p. 2. ISBN 978-0-19-853275-0.
27. Canadian Mathematical Society (1967). Canadian Mathematical Bulletin. Canadian Mathematical Society. p. 764.
28. Götze, Heinz (10 December 2008). Springer-Verlag: History of a Scientific Publishing House: Part 2: 1945 - 1992. Rebuilding - Opening Frontiers - Securing the Future. Springer Science & Business Media. p. 60. ISBN 978-3-540-92888-1.
29. Otte, M. (8 March 2013). Mathematiker über die Mathematik (in German). Springer-Verlag. p. 426. ISBN 978-3-642-80866-1.
30. D. Albers and G.L. Alexanderson, Mathematical People, Birkhauser, Boston, 1995. ISBN 0-8176-3191-7
31. "Professor Peter Hilton – Telegraph obituary". mathshistory.st-andrews.ac.uk. 10 November 2010.
32. "Margaret Hilton". IMDb.
33. "Margaret Hilton Obituary - Seattle, WA". Dignity Memorial.
34. Contemporary Mathematics 37, AMS, 1985
35. "Philosophy". The University of Canterbury.
36. Curtis, Morton L. (1954). "Review: An introduction to homotopy theory, by P. J. Hilton". Bulletin of the American Mathematical Society. 60 (2): 182–185. doi:10.1090/s0002-9904-1954-09797-5.
37. Massey, William S. (1964). "Review: An introduction to algebraic topology, by P. J. Hilton and S. Wylie". Bulletin of the American Mathematical Society. 70 (3): 333–335. doi:10.1090/s0002-9904-1964-11085-5.
External links
• O'Connor, John J.; Robertson, Edmund F., "Peter Hilton", MacTutor History of Mathematics Archive, University of St Andrews
• "Home page". Binghamton University.
• "The World Celebrates Professor's Birthday". University of Central Florida. Archived from the original on 19 May 2005.
# Understanding multithreading
Multithreading is a programming concept that allows multiple threads of execution to run concurrently within a single program. Each thread represents a separate flow of control, and they can perform tasks simultaneously, greatly improving the efficiency of certain applications.
Threads and OS
In a multithreaded program, the operating system manages the execution of threads. The OS assigns CPU time to each thread, allowing them to run in parallel. This means that multiple threads can execute different parts of the program at the same time, leading to faster execution.
Multithreading Models
There are several models for dividing and managing work in multithreaded applications:
1. Boss/Worker Model: In this model, a main thread assigns tasks to other worker threads. The main thread manages the requests and distributes the work among the workers (see the sketch after this list).
2. Peer Model: In the peer model, threads run in parallel without a specified manager. Each thread performs its own tasks independently.
3. Pipeline Model: The pipeline model involves processing data through a sequence of operations. Each thread works on a different part of the data stream in parallel.
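The following sketch illustrates the boss/worker model mentioned above. It is a minimal, hypothetical example using C++11 `std::thread` and `std::mutex` (facilities not otherwise used in this text); the task values and the number of workers are placeholders. The boss thread fills a shared queue with work items and the worker threads drain it.
```cpp
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

std::queue<int> tasks;      // work items produced by the boss
std::mutex queue_mutex;     // protects access to the shared queue

// Worker: repeatedly take a task from the queue and process it
void worker(int id) {
    while (true) {
        int task;
        {
            std::lock_guard<std::mutex> lock(queue_mutex);
            if (tasks.empty()) return;  // no more work
            task = tasks.front();
            tasks.pop();
        }
        // Output from different workers may interleave
        std::cout << "Worker " << id << " processing task " << task << std::endl;
    }
}

int main() {
    // Boss: create the work items
    for (int i = 0; i < 10; i++) tasks.push(i);

    // Boss: start the workers and wait for them to finish
    std::vector<std::thread> workers;
    for (int i = 0; i < 4; i++) workers.emplace_back(worker, i);
    for (auto& w : workers) w.join();
    return 0;
}
```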
Synchronization
Synchronization is crucial in multithreaded programming to ensure that threads access shared resources in a controlled manner. Without proper synchronization, threads can interfere with each other and cause unpredictable behavior.
Synchronization Mechanisms
There are various synchronization mechanisms available in multithreaded programming:
1. Mutexes/Locks: Mutexes provide exclusive access to critical sections of code. Threads can lock a mutex to acquire exclusive access and unlock it when they are done. This ensures that only one thread can access the critical section at a time.
2. Condition Variables: Condition variables allow threads to synchronize their execution based on the value of a shared variable. A sleeping thread can be awakened by another thread signaling it (a sketch follows the mutex example below).
Here's an example of using a mutex to provide exclusive access to a critical section:
```c++
#include <pthread.h>
pthread_mutex_t mutex;
void* thread_function(void* arg) {
// Lock the mutex
pthread_mutex_lock(&mutex);
// Critical section
// Perform operations that require exclusive access
// Unlock the mutex
pthread_mutex_unlock(&mutex);
return NULL;
}
```
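The condition variable mechanism described above can be sketched in the same POSIX style. This is a minimal, hypothetical example: the shared `data_ready` flag, the producer/consumer roles, and the printed message are invented for illustration.
```c++
#include <iostream>
#include <pthread.h>

pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
bool data_ready = false;  // shared flag guarded by the mutex

// Consumer: sleep until the producer signals that the data is ready
void* consumer(void*) {
    pthread_mutex_lock(&mutex);
    while (!data_ready) {               // re-check the condition after waking
        pthread_cond_wait(&cond, &mutex);
    }
    std::cout << "Consumer: data is ready" << std::endl;
    pthread_mutex_unlock(&mutex);
    return NULL;
}

// Producer: set the flag and wake the sleeping consumer
void* producer(void*) {
    pthread_mutex_lock(&mutex);
    data_ready = true;
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&mutex);
    return NULL;
}

int main() {
    pthread_t c, p;
    pthread_create(&c, NULL, consumer, NULL);
    pthread_create(&p, NULL, producer, NULL);
    pthread_join(c, NULL);
    pthread_join(p, NULL);
    return 0;
}
```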
## Exercise
Why is synchronization important in multithreaded programming?
### Solution
Synchronization is important in multithreaded programming to prevent race conditions and ensure that threads access shared resources in a controlled manner. Without synchronization, threads can interfere with each other and lead to unpredictable results.
# Overview of the OpenMP library
OpenMP (Open Multi-Processing) is an API (Application Programming Interface) that supports multi-platform shared memory multiprocessing programming in C, C++, and Fortran. It provides a simple and flexible way to parallelize code and take advantage of multiple processors or cores.
OpenMP uses a shared memory model, where multiple threads can access and modify shared data. It allows developers to specify parallel regions in their code, where multiple threads can execute simultaneously. OpenMP takes care of thread creation, synchronization, and load balancing.
OpenMP is widely used in scientific and engineering applications, where performance is critical. It is supported by most compilers and operating systems, making it highly portable.
Here's an example of using OpenMP to parallelize a simple "Hello, World!" program in C++:
```cpp
#include <iostream>
#include <omp.h>
int main() {
#pragma omp parallel
{
int thread_id = omp_get_thread_num();
std::cout << "Hello, World! I'm thread " << thread_id << std::endl;
}
return 0;
}
```
In this example, the `#pragma omp parallel` directive creates a team of threads, and each thread prints its own ID.
## Exercise
What is the purpose of the OpenMP library?
### Solution
The purpose of the OpenMP library is to support multi-platform shared memory multiprocessing programming in C, C++, and Fortran. It provides a simple and flexible way to parallelize code and take advantage of multiple processors or cores.
# Using OpenMP for parallel computing
The `#pragma omp parallel` directive is used to create a team of threads. The code block following this directive will be executed by each thread in the team. Each thread will have its own copy of variables, and they will execute the code in parallel.
Here's an example of using the `#pragma omp parallel` directive:
```cpp
#include <iostream>
#include <omp.h>
int main() {
#pragma omp parallel
{
int thread_id = omp_get_thread_num();
std::cout << "Hello, World! I'm thread " << thread_id << std::endl;
}
return 0;
}
```
In this example, the `#pragma omp parallel` directive creates a team of threads, and each thread prints its own ID.
## Exercise
Modify the code from the previous example to print the total number of threads in the team.
### Solution
```cpp
#include <iostream>
#include <omp.h>
int main() {
#pragma omp parallel
{
int thread_id = omp_get_thread_num();
int num_threads = omp_get_num_threads();
std::cout << "Hello, World! I'm thread " << thread_id << " out of " << num_threads << " threads." << std::endl;
}
return 0;
}
```
# Basics of physical simulations
Time integration is a fundamental concept in physical simulations. It involves updating the state of a system over time based on its current state and the forces acting on it. The state of a system is typically represented by a set of variables, such as position and velocity.
Numerical methods are used to approximate the continuous behavior of a system in discrete time steps. These methods involve solving differential equations that describe the motion of the system. One commonly used numerical method is the Euler method, which approximates the change in position and velocity over a small time step.
Let's consider a simple example of a ball falling under the influence of gravity. We can model the motion of the ball using the equations of motion:
$$\frac{d^2x}{dt^2} = -g$$
where $x$ is the position of the ball and $g$ is the acceleration due to gravity. To solve this equation numerically, we can use the Euler method:
$$x_{n+1} = x_n + v_n \Delta t$$
$$v_{n+1} = v_n + a_n \Delta t$$
where $x_n$ and $v_n$ are the position and velocity at time step $n$, $a_n$ is the acceleration at time step $n$, and $\Delta t$ is the time step size.
## Exercise
Implement the Euler method to simulate the motion of a ball falling under the influence of gravity. Assume an initial position of 0 and an initial velocity of 0. Use a time step size of 0.1 and simulate the motion for 10 seconds. Print the position of the ball at each time step.
### Solution
```cpp
#include <iostream>
int main() {
double x = 0; // initial position
double v = 0; // initial velocity
double dt = 0.1; // time step size
double g = 9.8; // acceleration due to gravity
for (double t = 0; t <= 10; t += dt) {
x = x + v * dt;
v = v - g * dt;
std::cout << "Position at time " << t << ": " << x << std::endl;
}
return 0;
}
```
# Implementing physical simulations in C++
A simulation program typically consists of the following components:
1. Initialization: This is where you set up the initial state of the system, such as the position and velocity of the objects.
2. Time integration: This is where you update the state of the system over time using numerical methods, as we discussed in the previous section.
3. Forces and interactions: This is where you calculate the forces acting on the objects in the system and update their positions and velocities accordingly.
4. Rendering: This is where you visualize the simulation, such as by drawing the objects on the screen.
Let's walk through an example of simulating the motion of a ball under the influence of gravity. We will assume that the ball is subject to the equations of motion we discussed earlier.
First, we need to initialize the state of the ball. We can define variables for the position, velocity, and acceleration of the ball:
```cpp
double x = 0; // initial position
double v = 0; // initial velocity
double a = -9.8; // acceleration due to gravity
```
Next, we need to update the state of the ball over time. We can do this using the Euler method:
```cpp
double dt = 0.1; // time step size
for (double t = 0; t <= 10; t += dt) {
x = x + v * dt;
v = v + a * dt;
}
```
Finally, inside the loop, we can print the position of the ball at each time step:
```cpp
std::cout << "Position at time " << t << ": " << x << std::endl;
```
## Exercise
Implement a simulation program that simulates the motion of a ball under the influence of gravity. Assume an initial position of 0 and an initial velocity of 0. Use a time step size of 0.1 and simulate the motion for 10 seconds. Print the position of the ball at each time step.
### Solution
```cpp
#include <iostream>
int main() {
double x = 0; // initial position
double v = 0; // initial velocity
double a = -9.8; // acceleration due to gravity
double dt = 0.1; // time step size
for (double t = 0; t <= 10; t += dt) {
x = x + v * dt;
v = v + a * dt;
std::cout << "Position at time " << t << ": " << x << std::endl;
}
return 0;
}
```
# Optimizing performance with OpenMP
To use OpenMP in our simulation program, we need to identify the parts of our code that can be parallelized. This typically includes the time integration step, where we update the state of the system over time.
To parallelize the time integration step, we can use the `#pragma omp parallel for` directive. This directive tells OpenMP to distribute the iterations of a loop across multiple threads. Each thread will execute a subset of the loop iterations in parallel.
Let's modify our simulation program to parallelize the time integration step using OpenMP. We will use the `#pragma omp parallel for` directive to parallelize the loop that updates the state of the ball over time.
```cpp
#include <iostream>
#include <omp.h>
int main() {
double x = 0; // initial position
double v = 0; // initial velocity
double a = -9.8; // acceleration due to gravity
double dt = 0.1; // time step size
const int numSteps = 100; // 10 seconds with a 0.1 time step
// OpenMP requires an integer loop variable, so we index the time steps
// Note: x and v are shared here; synchronization is discussed in a later section
#pragma omp parallel for
for (int step = 0; step <= numSteps; step++) {
double t = step * dt;
x = x + v * dt;
v = v + a * dt;
std::cout << "Position at time " << t << ": " << x << std::endl;
}
return 0;
}
```
## Exercise
Modify the simulation program to parallelize the time integration step using OpenMP. Use the `#pragma omp parallel for` directive to parallelize the loop that updates the state of the ball over time. Assume an initial position of 0 and an initial velocity of 0. Use a time step size of 0.1 and simulate the motion for 10 seconds. Print the position of the ball at each time step.
### Solution
```cpp
#include <iostream>
#include <omp.h>
int main() {
double x = 0; // initial position
double v = 0; // initial velocity
double a = -9.8; // acceleration due to gravity
double dt = 0.1; // time step size
const int numSteps = 100; // 10 seconds with a 0.1 time step
// OpenMP requires an integer loop variable, so we index the time steps
// Note: x and v are shared here; synchronization is discussed in a later section
#pragma omp parallel for
for (int step = 0; step <= numSteps; step++) {
double t = step * dt;
x = x + v * dt;
v = v + a * dt;
std::cout << "Position at time " << t << ": " << x << std::endl;
}
return 0;
}
```
# Advanced concepts in multithreading
Thread synchronization is important when multiple threads access shared resources or modify shared data. Without proper synchronization, race conditions can occur, leading to incorrect results or program crashes. OpenMP provides synchronization primitives, such as mutexes and condition variables, to ensure proper synchronization between threads.
Load balancing is the process of distributing the workload evenly among threads to maximize the utilization of resources. In physical simulations, load balancing can be challenging because the workload is often dynamic and depends on the state of the system. OpenMP provides load balancing mechanisms, such as dynamic scheduling, to automatically distribute the workload among threads.
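Here's a small, hedged illustration of dynamic load balancing (it is not taken from any particular simulation): OpenMP's `schedule(dynamic)` clause hands out loop iterations in small chunks as threads become free, which helps when iterations have very different costs. The chunk size of 4 and the artificial workload are placeholders.
```cpp
#include <cmath>
#include <iostream>
#include <omp.h>

int main() {
    const int n = 1000;
    double total = 0.0;

    // Iterations have very different costs, so hand them out in small
    // chunks as threads become free instead of splitting them statically.
    #pragma omp parallel for schedule(dynamic, 4) reduction(+:total)
    for (int i = 0; i < n; i++) {
        double work = 0.0;
        for (int j = 0; j < i; j++) {       // cost grows with i
            work += std::sin(j * 0.001);
        }
        total += work;
    }

    std::cout << "Total: " << total << std::endl;
    return 0;
}
```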
Thread affinity is the concept of binding threads to specific processors or cores. This can improve cache utilization and reduce cache coherence traffic, leading to better performance. OpenMP provides thread affinity control, allowing us to specify the affinity of threads to processors or cores.
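The sketch below shows one way to experiment with thread affinity. It assumes an OpenMP 4.5 (or later) runtime, since it relies on the `proc_bind` clause and the `omp_get_place_num` query, and it is typically run with the `OMP_PLACES` environment variable set (for example, `OMP_PLACES=cores`).
```cpp
#include <iostream>
#include <omp.h>

int main() {
    // Bind the threads of this team close to the master thread's place
    #pragma omp parallel proc_bind(close)
    {
        int tid = omp_get_thread_num();
        int place = omp_get_place_num();  // the place this thread executes in
        #pragma omp critical
        std::cout << "Thread " << tid << " runs in place " << place << std::endl;
    }
    return 0;
}
```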
Let's explore an example of thread synchronization in the context of a physical simulation. Suppose we have multiple threads that update the position and velocity of objects in the system. To ensure that the updates are performed correctly, we need to synchronize access to the shared variables.
```cpp
#include <iostream>
#include <omp.h>
int main() {
double x = 0; // initial position
double v = 0; // initial velocity
double a = -9.8; // acceleration due to gravity
double dt = 0.1; // time step size
#pragma omp parallel
{
for (double t = 0; t <= 10; t += dt) {
// Synchronize access to shared variables
#pragma omp critical
{
x = x + v * dt;
v = v + a * dt;
}
// Print the position of the ball
#pragma omp critical
{
std::cout << "Position at time " << t << ": " << x << std::endl;
}
}
}
return 0;
}
```
## Exercise
Modify the simulation program to synchronize access to the shared variables using OpenMP's critical directive. Assume an initial position of 0 and an initial velocity of 0. Use a time step size of 0.1 and simulate the motion for 10 seconds. Print the position of the ball at each time step.
### Solution
```cpp
#include <iostream>
#include <omp.h>
int main() {
double x = 0; // initial position
double v = 0; // initial velocity
double a = -9.8; // acceleration due to gravity
double dt = 0.1; // time step size
#pragma omp parallel
{
for (double t = 0; t <= 10; t += dt) {
// Synchronize access to shared variables
#pragma omp critical
{
x = x + v * dt;
v = v + a * dt;
}
// Print the position of the ball
#pragma omp critical
{
std::cout << "Position at time " << t << ": " << x << std::endl;
}
}
}
return 0;
}
```
# Advanced features of the OpenMP library
Task parallelism is a programming model where tasks are created and executed in parallel. This allows for more fine-grained parallelism and can improve load balancing. OpenMP provides tasking constructs, such as `#pragma omp task` and `#pragma omp taskwait`, to implement task parallelism.
Nested parallelism is the ability to have parallel regions within parallel regions. This can be useful when dealing with nested loops or recursive algorithms. OpenMP provides nested parallelism support, allowing us to control the level of parallelism in our code.
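Here is a minimal sketch of nested parallelism; the thread counts are arbitrary and the runtime must allow nesting. Each thread of the outer team opens its own inner team.
```cpp
#include <iostream>
#include <omp.h>

int main() {
    // Allow up to two levels of active parallel regions
    omp_set_max_active_levels(2);

    #pragma omp parallel num_threads(2)
    {
        int outer = omp_get_thread_num();

        // Each outer thread opens its own inner team
        #pragma omp parallel num_threads(2)
        {
            int inner = omp_get_thread_num();
            #pragma omp critical
            std::cout << "Outer thread " << outer
                      << ", inner thread " << inner << std::endl;
        }
    }
    return 0;
}
```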
Work sharing constructs are used to distribute work among threads in a parallel region. OpenMP provides work sharing constructs, such as `#pragma omp for` and `#pragma omp sections`, to distribute loop iterations or sections of code among threads.
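The `#pragma omp for` construct appears throughout this text, so here is a minimal sketch of the less common `#pragma omp sections` construct; the three pieces of work are placeholder messages, each executed by one thread of the team.
```cpp
#include <iostream>
#include <omp.h>

int main() {
    #pragma omp parallel
    {
        // Each section is executed by exactly one thread of the team
        #pragma omp sections
        {
            #pragma omp section
            std::cout << "Updating positions" << std::endl;

            #pragma omp section
            std::cout << "Updating velocities" << std::endl;

            #pragma omp section
            std::cout << "Checking collisions" << std::endl;
        }
    }
    return 0;
}
```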
Let's explore an example of task parallelism in the context of a physical simulation. Suppose we have a large number of objects in the system, and we want to update their positions and velocities in parallel. We can use task parallelism to create tasks for each object and execute them in parallel.
```cpp
#include <iostream>
#include <omp.h>
void updateObject(double& x, double& v, double a, double dt) {
x = x + v * dt;
v = v + a * dt;
}
int main() {
const int numObjects = 1000;
double x[numObjects] = {0}; // initial positions (all zero)
double v[numObjects] = {0}; // initial velocities (all zero)
double a = -9.8; // acceleration due to gravity
double dt = 0.1; // time step size
#pragma omp parallel
{
#pragma omp for
for (int i = 0; i < numObjects; i++) {
// Create tasks for updating objects
#pragma omp task
{
updateObject(x[i], v[i], a, dt);
}
}
// Wait for all tasks to complete
#pragma omp taskwait
// Print the positions of the objects
#pragma omp for
for (int i = 0; i < numObjects; i++) {
std::cout << "Position of object " << i << ": " << x[i] << std::endl;
}
}
return 0;
}
```
## Exercise
Modify the simulation program to use task parallelism to update the positions and velocities of the objects in the system. Assume a system with 1000 objects, each with an initial position of 0 and an initial velocity of 0. Use a time step size of 0.1 and simulate the motion for 10 seconds. Print the positions of the objects at each time step.
### Solution
```cpp
#include <iostream>
#include <omp.h>
void updateObject(double& x, double& v, double a, double dt) {
x = x + v * dt;
v = v + a * dt;
}
int main() {
const int numObjects = 1000;
double x[numObjects] = {0}; // initial positions (all zero)
double v[numObjects] = {0}; // initial velocities (all zero)
double a = -9.8; // acceleration due to gravity
double dt = 0.1; // time step size
#pragma omp parallel
{
#pragma omp for
for (int i = 0; i < numObjects; i++) {
// Create tasks for updating objects
#pragma omp task
{
updateObject(x[i], v[i], a, dt);
}
}
// Wait for all tasks to complete
#pragma omp taskwait
// Print the positions of the objects
#pragma omp for
for (int i = 0; i < numObjects; i++) {
std::cout << "Position of object " << i << ": " << x[i] << std::endl;
}
}
return 0;
}
```
# Building more complex physical simulations
Collision detection and resolution is an important aspect of many physical simulations. It involves detecting when objects collide and updating their positions and velocities accordingly. There are various algorithms and techniques for collision detection and resolution, such as bounding volume hierarchies and impulse-based methods.
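As a hedged illustration of the basic idea only (real collision systems are considerably more involved), the sketch below detects overlap between two balls in 2D and resolves it with an equal-mass, frictionless elastic impulse along the contact normal. All positions, velocities, and radii are invented for the example.
```cpp
#include <cmath>
#include <iostream>

struct Ball {
    double x, y;    // position
    double vx, vy;  // velocity
    double r;       // radius
};

// Detect overlap and, if the balls approach each other, exchange the
// velocity components along the contact normal (equal masses, elastic).
void resolveCollision(Ball& a, Ball& b) {
    double dx = b.x - a.x;
    double dy = b.y - a.y;
    double dist = std::sqrt(dx * dx + dy * dy);
    if (dist >= a.r + b.r || dist == 0.0) return;  // no contact

    double nx = dx / dist;                  // contact normal
    double ny = dy / dist;
    double rel = (b.vx - a.vx) * nx + (b.vy - a.vy) * ny;
    if (rel > 0.0) return;                  // already separating

    a.vx += rel * nx;  a.vy += rel * ny;    // swap normal components
    b.vx -= rel * nx;  b.vy -= rel * ny;
}

int main() {
    Ball a{0.0, 0.0, 1.0, 0.0, 0.6};
    Ball b{1.0, 0.0, -1.0, 0.0, 0.6};
    resolveCollision(a, b);
    std::cout << "a.vx = " << a.vx << ", b.vx = " << b.vx << std::endl;
    return 0;
}
```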
Constraints are used to enforce relationships between objects in a physical simulation. For example, in a cloth simulation, constraints can be used to maintain the shape and structure of the cloth. There are different types of constraints, such as distance constraints, angle constraints, and collision constraints.
Fluid simulation is a complex topic that involves simulating the behavior of fluids, such as water or smoke. Fluid simulations can be used in various applications, such as computer graphics and scientific simulations. There are different techniques for fluid simulation, such as particle-based methods and grid-based methods.
Let's explore an example of building a more complex physical simulation: a simple cloth simulation. In this simulation, we will simulate the behavior of a cloth as it interacts with external forces, such as gravity and wind. We will use constraints to maintain the shape and structure of the cloth.
```cpp
#include <iostream>
#include <vector>
#include <cmath>
#include <omp.h>
struct Particle {
double x; // position
double y;
double vx; // velocity
double vy;
};
void updateParticle(Particle& particle, double dt) {
// Update position based on velocity
particle.x += particle.vx * dt;
particle.y += particle.vy * dt;
// Apply external forces, such as gravity and wind
particle.vx += 0.1; // example: wind force
particle.vy += -9.8 * dt; // example: gravity force
}
void applyConstraints(std::vector<Particle>& particles) {
// Apply constraints to maintain cloth structure
// Example: distance constraint between adjacent particles
for (int i = 0; i < particles.size() - 1; i++) {
Particle& p1 = particles[i];
Particle& p2 = particles[i + 1];
double dx = p2.x - p1.x;
double dy = p2.y - p1.y;
double distance = std::sqrt(dx * dx + dy * dy);
double targetDistance = 0.1; // example: target distance between particles
double correction = (distance - targetDistance) / distance;
p1.x += dx * correction * 0.5;
p1.y += dy * correction * 0.5;
p2.x -= dx * correction * 0.5;
p2.y -= dy * correction * 0.5;
}
}
int main() {
const int numParticles = 10;
std::vector<Particle> particles(numParticles);
for (int i = 0; i < numParticles; i++) {
particles[i].x = i * 0.1; // space particles at the target distance to avoid a zero-length constraint
}
double dt = 0.1; // time step size
#pragma omp parallel
{
for (double t = 0; t <= 10; t += dt) {
#pragma omp for
for (int i = 0; i < numParticles; i++) {
updateParticle(particles[i], dt);
}
#pragma omp single
{
applyConstraints(particles);
}
}
}
return 0;
}
```
## Exercise
Implement a more complex physical simulation of your choice, combining the concepts and techniques we have covered so far. You can choose to simulate a different type of physical system, such as a rigid body or a fluid. Use OpenMP to parallelize the simulation and optimize its performance.
### Solution
```cpp
// Example answer:
#include <iostream>
#include <vector>
#include <omp.h>
struct RigidBody {
double x; // position
double y;
double vx; // velocity
double vy;
double mass;
};
void updateRigidBody(RigidBody& body, double dt) {
// Update position based on velocity
body.x += body.vx * dt;
body.y += body.vy * dt;
// Apply external forces, such as gravity and wind
body.vx += 0.1; // example: wind force
body.vy += -9.8 * dt; // example: gravity force
}
int main() {
const int numBodies = 10;
std::vector<RigidBody> bodies(numBodies);
double dt = 0.1; // time step size
#pragma omp parallel
{
for (double t = 0; t <= 10; t += dt) {
#pragma omp for
for (int i = 0; i < numBodies; i++) {
updateRigidBody(bodies[i], dt);
}
}
}
return 0;
}
```
# Troubleshooting common issues
One common issue is race conditions, where multiple threads access shared data simultaneously and produce incorrect results. To avoid race conditions, you can use synchronization primitives, such as mutexes or critical sections, to ensure exclusive access to shared data.
Another common issue is load imbalance, where the workload is not evenly distributed among threads, leading to poor performance. To address load imbalance, you can use load balancing techniques, such as dynamic scheduling or work stealing, to distribute the workload evenly among threads.
A third common issue is false sharing, where multiple threads access different variables that happen to be located on the same cache line. This can result in cache thrashing and degrade performance. To mitigate false sharing, you can use padding or align variables to avoid them sharing the same cache line.
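To make the padding idea concrete, here is a minimal sketch: each per-thread counter is placed on its own cache line with `alignas`, so threads incrementing different counters do not repeatedly invalidate one another's cache lines. The 64-byte line size and the upper bound on the number of threads are assumptions about a typical machine, not measured values.
```cpp
#include <iostream>
#include <omp.h>

// Pad each counter to a full cache line (64 bytes is a common line size)
struct alignas(64) PaddedCounter {
    long value = 0;
};

int main() {
    const int kMaxThreads = 256;          // assumed upper bound for this sketch
    PaddedCounter counters[kMaxThreads];  // one padded counter per thread

    #pragma omp parallel
    {
        int tid = omp_get_thread_num();
        for (long i = 0; i < 1000000; i++) {
            counters[tid].value++;        // each thread touches only its own line
        }
    }

    long total = 0;
    for (int i = 0; i < kMaxThreads; i++) total += counters[i].value;
    std::cout << "Total: " << total << std::endl;
    return 0;
}
```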
Let's explore an example of a common issue: race conditions. Suppose we have multiple threads that update the position and velocity of objects in a physical simulation. If these updates are not properly synchronized, race conditions can occur and produce incorrect results.
```cpp
#include <iostream>
#include <omp.h>
struct Object {
double x; // position
double y;
double vx; // velocity
double vy;
};
void updateObject(Object& object, double dt) {
// Update position based on velocity
object.x += object.vx * dt;
object.y += object.vy * dt;
// Apply external forces, such as gravity and wind
object.vx += 0.1; // example: wind force
object.vy += -9.8 * dt; // example: gravity force
}
int main() {
const int numObjects = 10;
Object objects[numObjects] = {}; // zero-initialize positions and velocities
double dt = 0.1; // time step size
#pragma omp parallel
{
for (double t = 0; t <= 10; t += dt) {
#pragma omp for
for (int i = 0; i < numObjects; i++) {
updateObject(objects[i], dt);
}
}
}
return 0;
}
```
## Exercise
Identify and fix the race condition in the simulation program. Assume a system with 10 objects, each with an initial position of 0 and an initial velocity of 0. Use a time step size of 0.1 and simulate the motion for 10 seconds. Print the positions of the objects at each time step.
### Solution
```cpp
#include <iostream>
#include <omp.h>
struct Object {
double x; // position
double y;
double vx; // velocity
double vy;
};
void updateObject(Object& object, double dt) {
// Update position based on velocity
object.x += object.vx * dt;
object.y += object.vy * dt;
// Apply external forces, such as gravity and wind
object.vx += 0.1; // example: wind force
object.vy += -9.8 * dt; // example: gravity force
}
int main() {
const int numObjects = 10;
Object objects[numObjects] = {}; // zero-initialize positions and velocities
double dt = 0.1; // time step size
#pragma omp parallel
{
for (double t = 0; t <= 10; t += dt) {
#pragma omp for
for (int i = 0; i < numObjects; i++) {
#pragma omp critical
{
updateObject(objects[i], dt);
}
}
}
}
return 0;
}
```
# Real-world applications of OpenMP in physical simulations
One example of a real-world application is computational fluid dynamics (CFD), which involves simulating the behavior of fluids, such as air or water, in various scenarios. CFD simulations are used in aerospace engineering, automotive design, and weather prediction, among others.
Another example is molecular dynamics (MD), which involves simulating the behavior of molecules and atoms in a system. MD simulations are used in drug discovery, material science, and biochemistry, among others. OpenMP can be used to parallelize the computation of forces between particles in MD simulations.
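As a hedged sketch of how the pairwise force computation in a molecular dynamics step might be parallelized (the one-dimensional positions and the unit Lennard-Jones parameters are placeholders, and real codes use neighbor lists and cutoffs), each thread accumulates the total force on the particles assigned to it, and each pair is visited twice so that no thread writes to another thread's force entry.
```cpp
#include <cmath>
#include <iostream>
#include <vector>
#include <omp.h>

int main() {
    const int n = 500;
    std::vector<double> x(n), fx(n, 0.0);
    for (int i = 0; i < n; i++) x[i] = i * 1.1;   // placeholder 1D positions

    // Each pair (i, j) is evaluated once for i and once for j, so each
    // thread only writes to the force entries of the particles it owns.
    #pragma omp parallel for schedule(dynamic)
    for (int i = 0; i < n; i++) {
        double f = 0.0;
        for (int j = 0; j < n; j++) {
            if (j == i) continue;
            double r = x[i] - x[j];                       // signed separation
            double inv6 = 1.0 / std::pow(r * r, 3);       // 1 / r^6
            f += 24.0 * (2.0 * inv6 * inv6 - inv6) / r;   // LJ force, eps = sigma = 1
        }
        fx[i] = f;
    }

    std::cout << "Force on particle 0: " << fx[0] << std::endl;
    return 0;
}
```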
OpenMP is also used in computer graphics applications, such as physics-based animation and rendering. Physics-based animation involves simulating the behavior of objects, such as cloth or hair, to create realistic animations. OpenMP can be used to parallelize the simulation of these objects, improving performance and enabling real-time interaction.
Let's explore an example of a real-world application: computational fluid dynamics (CFD). CFD simulations are used to study the behavior of fluids, such as air or water, in various scenarios. OpenMP can be used to parallelize the computation of fluid flow and improve the performance of CFD simulations.
```cpp
#include <iostream>
#include <vector>
#include <omp.h>
struct FluidCell {
double density;
double velocityX;
double velocityY;
};
void updateFluidCell(FluidCell& cell, double dt) {
// Update fluid cell based on fluid flow equations
// Example: advection, diffusion, and pressure forces
// ...
// Apply external forces, such as gravity or wind
cell.velocityX += 0.1; // example: wind force
cell.velocityY += -9.8 * dt; // example: gravity force
}
int main() {
const int numCells = 1000;
std::vector<FluidCell> fluidCells(numCells);
double dt = 0.1; // time step size
#pragma omp parallel
{
for (double t = 0; t <= 10; t += dt) {
#pragma omp for
for (int i = 0; i < numCells; i++) {
updateFluidCell(fluidCells[i], dt);
}
}
}
return 0;
}
```
## Exercise
Implement a real-world application of your choice using OpenMP in a physical simulation. Choose an application from scientific simulations, computer graphics, or engineering, and use OpenMP to parallelize the simulation and improve its performance.
### Solution
```cpp
// Example answer:
#include <iostream>
#include <vector>
#include <omp.h>
struct RigidObject {
double x; // position
double y;
double z;
double vx; // velocity
double vy;
double vz;
double mass;
};
void updateRigidObject(RigidObject& object, double dt) {
// Update position based on velocity
object.x += object.vx * dt;
object.y += object.vy * dt;
object.z += object.vz * dt;
// Apply external forces, such as gravity and wind
object.vx += 0.1; // example: wind force
object.vy += -9.8 * dt; // example: gravity force
}
int main() {
const int numObjects = 10;
std::vector<RigidObject> objects(numObjects);
double dt = 0.1; // time step size
#pragma omp parallel
{
for (double t = 0; t <= 10; t += dt) {
#pragma omp for
for (int i = 0; i < numObjects; i++) {
updateRigidObject(objects[i], dt);
}
}
}
return 0;
}
``` | Textbooks |
Nguyen Minh Chuong
Research interests: Singular integral operators and wavelets on the real and p-adic fields
Office: Room 306, Building A5
Tel: 04 37563474 Ext. 306
Email: nmchuong AT math.ac.vn
1 Dung Kieu Huu, Duong Dao Van, Nguyen Minh Chuong, Rough Hausdorff operator and its commutators on the Heisenberg group, Advances in Operator Theory, 6 (2021).
2 Nguyen Minh Chuong, Dao Van Duong, Nguyen Duc Duyet, Weighted estimates for commutators of Hausdorff operators on the Heisenberg Group, Russian Mathematics, 64, No. 2 (2020), 35-55.
3 Nguyen Minh Chuong, Dao Van Duong, Kieu Huu Dung, Maximal operators and singular integrals on the weighted Lorentz and Morrey spaces, Journal of Pseudo-Differential Operators and Applications, 11 (2020), 201-228. (SCI-E, Scopus)
4 Nguyen Minh Chuong, Dao Van Duong, Kieu Huu Dung, Multilinear Hausdorff operator on variable exponent Morrey-Herz type spaces, Integral Transforms and Special Functions, 31 (2020), 62-86. (SCI-E, Scopus).
5 Nguyen Minh Chuong, Dao Van Duong, Nguyen Duc Duyet, Weighted Mory-Herz Spaces Estimates for Rough Hausdorff Operator and its commutators, Journal of Pseudo-Differential Operators and Applications, 11, No. 2 (2020), 753-787. (SCI-E, Scopus).
6 Dao Van Duong, Kieu Huu Dung, Nguyen Minh Chuong, Weighted estimates for commutators of multilinear Hausdorff operators on variable exponent Morrey Herz type spaces, Czechoslovak Mathematical Journal, 70 (2020), 833-865. (SCI-E, Scopus).
7 Nguyen Minh Chuong, N. T. Hong, H. D. Hung, Multilinear Hardy–Cesàro operator and commutator on the product of Morrey–Herz spaces, Analysis Mathematica, 43 (2017) pp 547–565, SCI(-E); Scopus.
8 Nguyen Minh Chuong, D. V. Duong, The p-adic weighted Hardy-Cesàro operators on weighted Morrey-Herz space, P-adic numbers, ultrametric analysis and applications, 8 (2016), 204-216, SCI(-E); Scopus.
9 Nguyen Minh Chuong, D. V. Duong, H. D. Hung, Bounds for the weighted Hardy-Cesàro operator and its commutator on weighted Morrey-Herz type spaces, Zeitschrift für Analysis und ihre Anwendungen/Journal of Analysis and its Applications 35 (2016) 489-504, SCI(-E).
10 Nguyen Minh Chuong, Ha Duy Hung, Nguyen Thi Hong, Bounds of p-adic weighted Hardy-Cesàro operators and their commutators on p-adic weighted spaces of Morrey types, P-Adic Numbers, Ultrametric Analysis, and Applications, 8 (2016), 30-43,SCI(-E); Scopus.
11 Nguyen Minh Chuong, Ha Duy Hung, Bounds of weighted Hardy-Cesàro operators on weighted Lebesgue and BMO spaces, Integral Transforms and Special Functions, 25 (2014), 697 -- 710, SCI(-E), Scopus.
12 Nguyen Minh Chuong, Dao Van Duong, Wavelet bases in the Lebesgue spaces on the field of p-adic numbers, p-Adic Numbers, Ultrametric Analysis and Applications, 5 (2013), 106-121, SCI(-E); Scopus.
13 Nguyen Minh Chuong, Dao Van Duong, Weighted Hardy-Littlewood operators and commutators on p-adic functional spaces, p-Adic Numbers, Ultrametric Analysis and Applications, 5 (2013), 65 - 82, SCI(-E); Scopus.
14 Nguyen Minh Chuong, D. V. Duong, Boundedness of the wavelet integral operator on weighted function spaces, Russian Journal of Mathematical Physics 20 (2013), 268-275, SCI(-E); Scopus.
15 Nguyen Minh Chuong, Tran Dinh Ke, Generalized Cauchy problems involving nonlocal and impulsive conditions. Journal of Evolution Equations, 12 (2012), 367–392, SCI(-E); Scopus.
16 Cung The Anh, Nguyen Minh Chuong, Tran Dinh Ke, Global attractor for the m-semiflow generated by a quasilinear degenerate parabolic equation, Journal of Mathematical Analysis and Applications, 363 (2010), 444–453, SCI(-E); Scopus.
17 Nguyen Minh Chuong, Ha Duy Hung, A Muckenhoupt's weight problem and vector valued maximal inequalities over local fields, P-Adic Numbers, Ultrametric Analysis, and Applications 2 (2010), 305–321, SCI(-E); Scopus.
18 Yu. V. Egorov, Nguyen Minh Chuong, Dang Anh Tuan, A semilinear elliptic boundary value problem for degenerate pseudodifferential equations. (Russian) Doklady Akademii Nauk, 427 (2009), 10--13; translation in Doklady Mathematics, 80 (2009), 456–459.
19 Yu. V. Egorova, Nguyen Minh Chuong, Dang Anh Tuan, On a semilinear boundary value problem for degenerate parabolic pseudodifferential equations. (Russian) Dokl. Akad. Nauk 427 (2009), no. 2, 155--159; translation in Dokl. Math. 80 (2009), no. 1, 482–486.
20 Yu. V. Egorov, Nguyen Minh Chuong, Dang Anh Tuan, Semilinear boundary value problems for degenerate pseudodifferential operators in spaces of Sobolev type. Russian Journal of Mathematical Physics, 15 (2008), 222 - 237.
21 Nguyen Minh Chuong, Nguyen Van Co, $p$-adic pseudodifferential operators and wavelets. In: Frames and operator theory in analysis and signal processing, 33 - 45, Contemporary Mathematics, 451, Amer. Math. Soc., Providence, RI, 2008.
22 Nguyen Minh Chuong, C. C. Kiet, A nonclassical boundary value problem for a pseudodifferential equation of variable order. Differentsialnye Uravneniya 44 (2008), N0 8, 1142 - 143; English transl.: Differential Equations 44 (2008), 1183 - 1185.
23 Nguyen Minh Chuong, Dang Anh Tuan, A semilinear nonclassical pseudodifferential boundary value problem in Sobolev spaces $H_{l,p}, 1<p<\infty$. In: Advances in deterministic and stochastic analysis, 15 - 32, World Sci. Publ., Hackensack, NJ, 2007.
24 Nguyen Minh Chuong, . D. Thinh, Sobolev spaces with weight on Riemannian manifolds. In: Advances in deterministic and stochastic analysis, 269 - 278, World Sci. Publ., Hackensack, NJ, 2007.
25 Nguyen Minh Chuong, N. V. Co; L. Q. Thuan, Harmonic analysis over $p$-adic field. I. Some equations and singular integral operators. In: Harmonic, wavelet and $p$-adic analysis, 271--290, World Sci. Publ., Hackensack, NJ, 2007.
26 Nguyen Minh Chuong, Nguyen Xuan Thuan, Random nonlinear variational inequalities for mappings of monotone type in Banach spaces, Stochastic Analysis and Applications, 24 (2006), 489 - 499.
27 Nguyen Minh Chuong, Tran Tri Kiet, On a nonclassical boundary value problem for a parabolic pseudo-differential equation, Differentsialnye Uravneniya 42 (2006), N0 5, 707 - 709. (in Russian)
28 Nguyen Minh Chuong, Yu. V. Egorov, Dang Anh Tuan, On a nonclassical semilinear boundary value problem for parabolic pseudodifferential equations in Sobolev spaces, Dokl. Akad. Nauk 411 (2006), 732 - 735. (in Russian)
29 Nguyen Minh Chuong, Yu. V. Egorov, D. A. Tuan and T. T. Kiet, Non-classical pseudo-differential boundary value problems in Sobolev spaces $H_{1,p}, 1<p<\infty$, World Sci. Publ., River Edge, NJ, 2004.
30 Nguyen Minh Chuong, Bui Kien Cuong, Convergence estimates of Galerkin-wavelet solutions to a Cauchy problem for a class of periodic pseudodifferential equations. Proceedings of the American Mathematical Society, 132 (2004), 3589 - 3597.
31 Nguyen Minh Chuong, Tran Dinh Ke, Existence of solutions for a nonlinear degenerate elliptic system. Electron. Journal of Differential Equations, 2004, 93, 15p. (electronic).
32 Youri V. Egorov, Nguyen Minh Chuong, Dang Anh Tuan, A semilinear non-classical pseudo-differential boundary value problem in the Sobolev spaces. Comptes Rendus de l'Académie des Sciences, 337 (2003), 451 - 456.
33 Nguyen Minh Chuong, Bui Kien Cuong, The convergence estimates for Galerkin-wavelet solution of periodic pseudodifferential initial value problems. International Journal of Mathematics and Mathematical Sciences (2003), 857 - 867.
34 Nguyen Quynh Nga, Nguyen Minh Chuong, Some fixed point theorems for noncompact and weakly asymptotically regular set-valued mappings, Numer. Funct. Anal. Optim. 24 (2003), 895 - 905.
35 Nguyen Minh Chuong, Nguyen Xuan Thuan, Random equations for weakly semimonotone operators of type (S) and semi-J-monotone operators of type (J-S), Random Oper. Stochastic Equations 10 (2002), 123 - 132.
36 Nguyen Minh Chuong, Ta Ngoc Tri, The integral wavelet transform in weighted Sobolev spaces, Abstr. Appl. Anal. 7 (2002), 135 - 142.
37 Nguyen Minh Chuong, Nguyen Xuan Thuan, The surjectivity of semiregular maximal monotone random mappings, Random Oper. Stochastic Equations 10 (2002), 47 - 58.
38 Nguyen Minh Chuong, Nguyen Xuan Thuan, Nonlinear variational inequalities for random weakly semimonotone operators. Random Operators and Stochastic Equations, 9 (2001), 319 - 328.
39 Nguyen Minh Chuong, T. Q. Binh, Approximation of nonlinear operator equations. Numerical Functional Analysis and Optimization, 22 (2001), 831 - 844.
40 Nguyen Minh Chuong, N. V. Khai, K. V. Ninh, N. V. Tuan and N. Tuong, Numerical analysis (Vietnamese) - Giải tích số. NXB Giáo dục, Hanoi, 2001, 460 pages.
41 Nguyen Minh Chuong, N. X. Thuan, Random fixed point theorems for multivalued nonlinear mappings. Random Operators and Stochastic Equations, 9 (2001), 235 - 244.
42 Nguyen Minh Chuong, Bui Kien Cuong, Galerkin-wavelet approximation for a class of partial integro-differential equations. Fract. Calc. Journal of Applied Analysis,4 (2001), 143 - 152.
43 Nguyen Quynh Nga, Nguyen Minh Chuong, On a multivalued nonlinear variational inequality. (Russian) Differ. Uravn. 37 (2001), N01, 128 - 129, 143; English transl.: Differ. Equ. 37 (2001), N01, 144 - 145.
44 Nguyen Minh Chuong, Nguyen Van Co, An iteration scheme for non-expansive mappings in metric spaces of hyperbolic type. Vietnam Journal of Mathematics, 28 (2000), 257 - 262.
45 Nguyen Minh Chuong, Nguyen Minh Tri, The integral wavelet transform in $L^p(\mathbb{R}^n)$, Fract. Calc. Journal of Applied Analysis, 3 (2000), 133 - 140.
46 Nguyen Minh Chuong, Ha Tien Ngoan, Nguyen Minh Tri, L. Q. Trung, Partial differential equations (in Vietnamese) – Phương trình đạo hàm riêng. NXB Giáo dục, Hà Nội, 2000, 331 pages.
47 Tran Quoc Binh, Nguyen Minh Chuong, On a fixed point theorem for nonexpansive nonlinear operator. Acta Mathematica Vietnamica, 24 (1999), 1 - 8.
48 Nguyen Minh Chuong, Nguyen Van Co, Multidimensional p -adic Green function. Proc. Amer. Math. Soc. 127 (1999), 685 - 694.
49 Nguyen Minh Chuong, Yu. V. Egorov, Some semilinear boundary value problems for singular integro-differential equations. Uspekhi Matematicheskikh Nauk, 53 (1998), 249 - 250.
50 Nguyen Minh Chuong, N. V. Tuan, Spline collocation methods for Fredholm-Volterra integro-differential equations of high order. Vietnam Journal of Mathematics 29 (1997), 15 - 24.
51 Nguyen Minh Chuong, N. V. Tuan, Spline collocation methods for a system of nonlinear Fredholm-Volterra integral equations. Acta Mathematica Vietnamica, 21 (1996), 155 - 169.
52 Nguyen Minh Chuong, Tran Quoc Binh, On a fixed point theorem. Functional Analysis and Its Applications, 30 (1996), 220 - 221.
53 Nguyen Minh Chuong, N. V. Tuan, Spline collocation methods for Fredholm integro-differential equations of second order. Acta Mathematica Vietnamica, 20 (1995), 85 - 98.
54 Nguyen Minh Chuong, Nguyen Minh Tri, Le Quang Trung, Theory of partial differential equations (in Vietnamese) – Lý thuyết các phương trình đạo hàm riêng. NXB Khoa học Kỹ thuật, Hà Nội, 1995, 288 pages.
55 Nguyen Minh Chuong, Ya. D. Mamedov and K. V. Ninh, Approximate solutions of operator equations. Science and Technology Publishing, Hanoi 1992, 244 pages.
56 Nguyen Minh Chuong, Nguyen Van Khai, On multistep Newton-Seidel methods for quasilinear operator equations. Acta Mathematica Vietnamica, 17 (1992), 103 - 114.
57 Nguyen Minh Chuong, Some approximative problems for nonlinear inequalities. Uspekhi Matematicheskikh Nauk, 46 (1991) (in Russian).
58 Nguyen Minh Chuong, N. V. Kinh, Regularization of variational inequalities with perturbed non-monotone and discontinuous operators. Differentsialnye Uravneniya, 27 (1991), 2171 - 2172 (in Russian).
59 Nguyen Minh Chuong, K. V. Ninh, On approximative normal values of multivalued operators in vector topological spaces. J. Isv. Vuzov SSSR (1991), 89, VINITI 29-04-91, No1 774-B-91 (in Russian).
60 Nguyen Minh Chuong, Le Quang Trung, Khuat Van Ninh, Boundary value problem for nonlinear parabolic equations of infinite order in Sobolev-Orlicz spaces. Matematicheskie Zametki 48 (1990), 78 - 85 (in Russian).
61 Nguyen Minh Chuong, On the parabolic pseudodifferential operators of variable order in Sobolev spaces with weighted norms. Acta Mathematica Vietnamica, 13 (1988), 5 - 14.
62 Nguyen Minh Chuong, Le Quang Trung, On a nonelliptic problem for pseudodifferential operators of variable order. Tap chi Toan hoc 16 (1988), 1 - 5 (in Vietnamese).
63 Nguyen Minh Chuong, Le Quang Trung, Limit equations for degenerate nonlinear elliptic equations in weighted Sobolev-Orlicz spaces. Uspekhi Matematicheskikh Nauk, 43 (1988), 181 - 182 (in Russian).
64 Nguyen Minh Chuong, Le Quang Trung, Degenerate elliptic nonlinear differential equations of infinite order in weighted Sobolev - Orlicz spaces. Differentsialnye Uravneniya, 24 (1988), No 3, 535 - 537 (in Russian).
65 Nguyen Minh Chuong, On the theory of parabolic pseudodifferential operators of variable order. Differentialnye Uravneniya 21 (1985), 686 - 694 (in Russian).
66 Nguyen Minh Chuong, Yu. V. Egorov, A problem with a directional derivative in S. L. Sobolev spaces of variable order. Differentialnye Uravneniya 20 (1984), 2163 - 2164 (in Russian).
67 Nguyen Minh Chuong, Parabolic pseudodifferential operators of variable order. Matematicheskie Zametki, 35 (1984), 21 - 229 (in Russian).
68 Nguyen Minh Chuong, Parabolic pseudodifferential operators of variable order. Dr. Sci. Dissertation, Moscow Univ. (1983).
69 Nguyen Minh Chuong, Isomorphism of S. L. Sobolev spaces of variable order. Matematicheskii Sbornik, (NS) 121 (1983), 3 - 17 (in Russian).
70 Nguyen Minh Chuong, Degenerate parabolic pseudodifferential operators of variable order. Doklady Akademii Nauk, 268 (1983), 1055 - 1058 (in Russian).
71 Nguyen Minh Chuong, Sobolev spaces of variable order. Uspekhi Matematicheskikh Nauk, 37 (1982), (226), 117 (in Russian).
72 Nguyen Minh Chuong, A boundary value problem with a discontinuous boundary condition. Uspekhi Matematicheskikh Nauk, 37 (1982), (227), 191 - 192 (in Russian).
73 Nguyen Minh Chuong, Parabolic systems of pseudodifferential equations of varable order. Doklady Akademii Nauk SSSR, 264 (1982), 299 - 302 (in Russian).
74 Nguyen Minh Chuong, Parabolic pseudodifferential operators of variable order in S. L. Sobolev spaces with weighted norms. Doklady Akademii Nauk SSSR, 262 (1982), 804 - 807 (in Russian).
75 Nguyen Minh Chuong, On a class of pseudodifferential operators of variable order. Tap chi Toan hoc 9 (1981), No3, 1 - 6 (in Vietnamese). Doklady Akademii Nauk SSSR (1981), 1308 - 1312 (in Russian).
76 Nguyen Minh Chuong, Functional spaces with norms depending on parameters. Tap chi Toan hoc 7 (1979), 1 - 6 (in Vietnamese).
77 Nguyen Minh Chuong, On a class of pseudodifferential operators with parameters. Tap chi Toan hoc 7 (1979), 6 - 10 (in Vietnamese).
78 Nguyen Minh Chuong, D. Ngoc, On non-elliptic boundary value problem. Tap chi Toan hoc 5 (1977), 24 - 27 (in Vietnamese).
79 Nguyen Minh Chuong, Generalized Sobolev spaces and their applications in partial differential equations. Tap san Toan Ly 9 (1971).
80 Nguyen Minh Chuong, Yu. V. Egorov, The problem with an oblique derivative for a second order parabolic equation. Uspekhi Matematicheskikh Nauk, 24 (1969), (148), 197 - 198.
81 Nguyen Minh Chuong, On oblique derivative problem for parabolic differential equations of second order. Ph. D. Dissertation, Moscow Univ. (1968).
82 Nguyen Minh Chuong, On Menelaus and Ceva theorems in $n$-dimensional hyperbolic spaces. Tập san Toán lý, (1963), 55 - 56 (in Vietnamese).
83 Nguyen Minh Chuong, L. D. Phi and N. C. Qui, Elementary geometry (in Vietnamese) - Hình học sơ cấp. NXB Giáo dục, Hà Nội (1963), 280 pages.
\begin{document}
\subjclass[]{} \keywords{} \begin{abstract}
In this paper, we demonstrate conditions under which a Lindel\"{o}f dynamical system exhibits $\omega$-chaos. In particular, if a system exhibits a generalized version of the specification property and has at least three points with mutually separated orbit closures, then the system exhibits dense $\omega$-chaos. \end{abstract} \title{Specification and $\omega$-chaos in non-compact systems} \section{Introduction}
Chaotic behavior in dynamical systems is an area of considerable mathematical interest. Fittingly, there are a large number of non-equivalent notions of chaos including Li-Yorke chaos \cite{Li-Yorke-Chaos}, distributional chaos \cite{distributional-chaos}, Devaney chaos \cite{devaney-chaos}, and $\omega$-chaos \cite{IntroductionOfOmegaChaos}.
In this paper, we focus on $\omega$-chaos, which was introduced by Li in 1993 and shown to be equivalent to positive topological entropy in the context of continuous interval maps \cite{IntroductionOfOmegaChaos}. Briefly, a system exhibits $\omega$-chaos provided there exists an uncountable $\omega$-scrambled set, i.e., an uncountable set of points, each of which has $\omega$-limit set containing non-periodic points, and each pair of which has $\omega$-limit sets with nonempty intersection but uncountable relative complements.
In 2009, Lampart and Oprocha demonstrated that a large class of shift spaces exhibit $\omega$-chaos \cite{Lampart_And_Oprohca_ask_a_question}. In particular, they demonstrate that (weak) specification and the existence of a non-transitive orbit are enough to guarantee that a non-degenerate shift space exhibits $\omega$-chaos. Briefly, a system $(X,f)$ exhibits \emph{(weak) specification} provided that for a desired tolerance $\delta$ there exists a ``relaxation time'' $N_\delta$ such that for $a_1\leq b_1<b_1+N_\delta\leq a_2\leq b_2<\cdots<b_{n-1}+N_\delta\leq a_n\leq b_n$ ($n=2$ for weak specification) and $x_1,\ldots x_n\in X$, there exists a point $z\in X$ such that, for each $i$, the orbit of $z$ stays within $\delta$ of the orbit of $x_i$ for iterates of $f$ between $a_i$ and $b_i$.
The specification property was first studied by Bowen \cite{Intro_of_spec} and has implications in a wide variety of settings, including the existence of certain invariant measures \cite{Sigmund}. In addition, the relationship between the specification property and chaos has been well-studied, see e.g., \cite{spec_and_dense_distro_chaos,spec_and_distro_chaos, spec_and_distro_chaos_on_cmpct_metric_spaces, wang_and_wang}.
In their 2009 paper, Lampart and Oprocha asked whether every system with the specification property exhibits $\omega$-chaos. Meddaugh and Raines demonstrated that a weak form of specification was sufficient to ensure $\omega$-chaos for shift spaces \cite{WeakSpecAndBaire}. Hunter and Raines tendered a partial answer to this question for more general systems when they showed that compact metric systems with the specification property and uniform expansion near a fixed point exhibit $\omega$-chaos \cite{REEVE}.
The main result of this paper is a natural extension of this result to Lindel\"of systems.
\begin{maintheorem*}
Let $(X,f)$ be an expansive dynamical system such that $X$ is Lindel{\"o}f, every open set of $X$ is uncountable, $(X,f)$ has ISP, and there exist $t_0$, $t_1, s \in X$ such that $\overline{Orb(t_0)}, \overline{Orb(t_1)}, \overline{Orb(s)}$ are each a non-zero distance away from each other. Then $(X,f)$ exhibits dense $\omega$-chaos. \end{maintheorem*}
\section{Preliminaries}\label{defs} For the purposes of this paper, we adopt the following notational conventions. We use $\omega$ to denote $\mathbb{N}\cup\{0\}$. Given a metric space $X$ with metric $d$ and $A,B\subseteq X$, we take $d(A,B)=\inf\{d(a,b):a\in A, b\in B\}$. A space $X$ is Lindel\"of provided that every open cover of $X$ has a countable subcover.
A dynamical system is a pair $(X,f)$ consisting of a metric space $X$ and a continuous surjection $f:X\to X$. For a given system $(X,f)$, $f^n$ denotes the $n$-fold composition of $f$ and the orbit of a point $x\in X$ is the set $Orb_f(x)=\{f^n(x):n\in\omega\}$. A point $x$ is periodic with prime period $p$ provided that $f^p(x)=x$ and $p$ is the minimal iterate for which that holds. The $\omega$-limit set of a point $x$ in the system $(X,f)$ is the set of accumulation points of the orbit of $x$, i.e., $\omega_f(x)=\bigcap_{n\in\omega}\overline{\{f^i(x):i\geq n\}}$. When the function $f$ is clear from context, the subscripts on $\omega_f$ and $Orb_f$ will be suppressed.
The primary object of investigation in this paper is the notion of $\omega$-chaos, as defined by Li \cite{IntroductionOfOmegaChaos}.
\begin{dfn}
Let $(X,f)$ be a dynamical system.
A subset $A$ of $X$ is \textbf{$\omega$-scrambled} provided that, for $x\neq y\in A$, the following hold
\begin{enumerate}
\item $\omega(x)\cap \omega(y)\neq\varnothing$,
\item $\omega(x)\setminus\omega(y)$ is uncountable, and
\item $\omega(x)$ contains non-periodic points
\end{enumerate}
The system exhibits \textbf{$\omega$-chaos} if it contains an uncountable $\omega$-scrambled set. \end{dfn}
As mentioned in the introduction, the specification property, as defined by Bowen, will be important in what follows.
\begin{dfn}
We say a dynamical system $(X,f)$ has the \textbf{specification property} (SP) if for every $\delta>0$ there exists some $N_\delta \in \mathbb{N}$ such that for any finite collection of points $x_1, x_2, x_3\dots x_n\in X$ and any sequence
$0\leq a_1\leq b_1 < a_2\leq b_2<\dots<a_n\leq b_n$ with $a_{i+1}-b_i \geq N_\delta$ there is a point $p \in X$ such that $d(f^j(p), f^j(x_i)) < \delta$ for $a_i\leq j\leq b_i$.
\end{dfn}
In the context of non-compact dynamical systems, we will make use of a generalized notion of the specification property. It is not surprising that such a generalization is appropriate---indeed similar generalizations naturally appear in the study of $\omega$-limits in non-compact settings \cite{OmegaBaire}.
\begin{dfn}
We say a dynamical system $(X,f)$ has the \textbf{infinite specification property} (ISP) if for every $\delta>0$ there exists some $N_\delta \in \mathbb{N}$ such that for any countable collection of points $x_1, x_2, x_3, \ldots \in X$ and any sequence
$0\leq a_1\leq b_1 < a_2\leq b_2<\dots$ with $a_{i+1}-b_i \geq N_\delta$ there is a point $p \in X$ such that $d(f^j(p), f^j(x_i)) < \delta$ for $a_i\leq j\leq b_i$. \end{dfn}
This definition may seem too specific to be applicable. But as we will see, there are many systems with ISP. It is worth noting that, in compact systems, the two forms of specification are equivalent.
\begin{lemma}
If $(X,f)$ has ISP then it has SP. If $X$ is compact, then the converse holds. \end{lemma} \begin{proof}
Suppose $(X,f)$ is a system with ISP. Then for any finite collection of points $x_1, x_2,\ldots, x_n$ there is a countable collection of points $y_1, y_2,\ldots$ such that $y_i = x_i$ for all $i\leq n$. There exists a point $p\in X$ that witnesses ISP with respect to $y_1, y_2,\ldots$ and thus witnesses SP with respect to $x_1, x_2,\ldots, x_n$.
Suppose now that $(X,f)$ is a compact dynamical system with SP. Fix $\delta >0$ and let $N_{\delta/2}$ be given by SP. Let $x_1,x_2,\dots$ be a countable collection of points in $X$, and let $a_1\leq b_1 < a_2\leq b_2<\dots$ with $a_{i+1}-b_i \geq N_{\delta/2}$ be a sequence of natural numbers.
For each $n\in \mathbb{N}$ we can find $p_n$ so that $d(f^j(p_n), f^j(x_i)) < \delta/2$ for $a_i\leq j\leq b_i, i\leq n$. The set $\{p_n\}_{n\in\mathbb{N}}$ is an infinite subset of a compact space and thus has an accumulation point, $q$.
Passing to a subsequence $\{p_{n_k}\}_{k\in\mathbb{N}}$ which converges to $q$, the continuity of $f^j$ gives $d(f^j(p_{n_k}), f^j(q))\to 0$, and so $d(f^j(q), f^j(x_i)) \leq \delta/2 < \delta$ for $a_i\leq j\leq b_i$. \end{proof}
For the purposes of this paper, we will make use of the fact that for a countable (finite or infinite) alphabet $\Sigma$, the shift space on $\Sigma$ exhibits ISP.
\begin{dfn}
Let $\Sigma$ be a nonempty collection of symbols. The space $\Sigma^\omega$ is the space of all sequences $(x_0, x_1, x_2,\ldots)$ where $x_i\in \Sigma$ for all $i\in\omega$.
The space is equipped with a map $\sigma$ called \textbf{the shift map} where $\sigma(x_0, x_1, x_2,\ldots) = (x_1, x_2, x_3,\ldots)$.
In the case that $\Sigma$ is a collection of numbers, $\Sigma^\omega$ is equipped with a metric. $$d((x_0, x_1,\ldots), (y_0, y_1,\ldots)) = \sum_{i=0}^\infty\frac{\min\{|x_i-y_i|, 1\}}{2^i}$$ \end{dfn}
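For later use we record a simple consequence of this metric: if two points $x,y\in\Sigma^\omega$ agree in their first $N+1$ coordinates, then $$d(x,y)\leq\sum_{i=N+1}^{\infty}\frac{1}{2^{i}}=\frac{1}{2^{N}},$$ since each term of the defining sum is at most $2^{-i}$. In particular, points which agree on a long initial segment are close, and this estimate is what drives the specification arguments below.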
Of particular import going forward is the shift map with $\Sigma=2=\{0,1\}$, i.e. $2^\omega = \{(x_0, x_1,\ldots) : x_i\in \{0,1\}\text{ for all } i\in \omega\}$ due to the fact that Lampart and Oprocha demonstrated that it exhibits $\omega$-chaos \cite{Lampart_And_Oprohca_ask_a_question}.
\begin{lemma}
The shift space $\Sigma ^\omega$ with the shift map has ISP if $|\Sigma|\geq 2$. \end{lemma} \begin{proof}
Let $\delta > 0$ and let $N$ be such that $2^{-N}<\delta/2$; we show that $N_\delta = N+1$ witnesses ISP. Let $x_1, x_2, \dots$ be a countable collection of points in $\Sigma^\omega$, and let $a_1\leq b_1 <a_2\leq b_2 < a_3\leq b_3<\dots$ be a sequence of natural numbers such that $a_i - b_{i-1} > N$.
Finally, let $y$ be a point in $\Sigma^\omega$ such that $y_{[a_i, b_i+N]} = (x_i)_{[a_i, b_i+N]}$ for all $i$. Such points exist since $a_i - b_{i-1} > N$, so the blocks $[a_i, b_i+N]$ are pairwise disjoint. Then $\sigma^j(y)$ and $\sigma^{j}(x_i)$ agree in at least their first $N+1$ symbols when $a_i\leq j\leq b_i$, so $d(\sigma^j(y), \sigma^{j}(x_i)) \leq 2^{-N} < \delta$ when $a_i\leq j\leq b_i$, as required for ISP. \end{proof}
In order to make discussing the patterns of the orbit of a point given by specification easier, we introduce the following lemma. \begin{lemma} \label{spec pattern}
Let $(X,f)$ be a dynamical system with infinite specification and fix $\delta >0$. Let $N_\delta>0$ witness specification. For $z_0, z_1,\ldots \in X$, $c_0,c_1,\ldots \in \omega$, and $M\geq N_\delta$, there exists a $p\in X$ such that $f^i(p)$ is less than $\delta$ away from the $i$-th term in the following sequence whenever that term is not $*$. (We use the convention that $(*)^M$ denotes $M$-many consecutive entries which are $*$.)
\begin{align*}z_0, f(z_0), \dots, f^{c_0}(z_0), &(*)^{M}, z_1, f(z_1), \dots, f^{c_1}(z_1), (*)^{M},\ldots\\
\dots, &(*)^{M},z_n, f(z_n), \dots, f^{c_n}(z_n), (*)^{M}, \ldots\end{align*} \end{lemma}
\begin{proof}
Fix $z_0, z_1,\ldots \in X$, $c_0,c_1,\ldots \in \omega$, and $M\geq N_\delta$. We will apply the infinite specification property to the sequence $x_0,x_1,\ldots$ and $0\leq a_0\leq b_0<a_1\leq b_1\dots$ defined as follows. Fix $a_0=0$ and $b_0=c_0$. If $b_i$ is defined, choose $a_{i+1}=b_i+M\geq b_i+N_\delta$. If $a_i$ is defined, choose $b_i=a_i+c_i$. By surjectivity, choose $x_i\in f^{-a_i}(z_i)$. By the infinite specification property, choose a point $p\in X$ such that $d(f^j(p),f^j(x_i))<\delta$ when $a_i\leq j\leq b_i$. It is immediately clear that $p$ satisfies the conclusion of the lemma. \end{proof} If $(X,f)$ has only the standard specification property, a similar result holds for any finite lists $z_0,\ldots, z_n$ and $c_0,\ldots, c_n$.
Our main result requires a form of local expansion.
\begin{dfn}
We say $f$ is \textbf{expansive} if there is some $\eta > 0, \lambda>1$ such that if $0< d(x, y) < \eta$ then $d(f(x), f(y))> \lambda d(x,y)$ for all $x, y \in X$.
\end{dfn}
The following lemma will be used frequently in the following section.
\begin{lemma}\label{Expansivity_Implies_Closeness}
Let $(X,f)$ be expansive and let $\eta > 0, \lambda >1$ witness expansivity.
If $p, q\in X, j\in \mathbb{N}$ are such that $d(f^i(p), f^i(q)) < \eta$ for all $i\leq j$, then $d(p ,q) < \eta\lambda^{-j}$.
Moreover, if $p \in X$ and $\{q_n\}_{n\in \mathbb{N}}$ is a sequence of points such that $d(f^i(p), f^i(q_n)) < \eta$ for all $i\leq j_n$, where $j_n\to \infty$, then $q_n \to p$.
\end{lemma} \begin{proof}
Since $d(f^i(p), f^i(q)) < \eta$ for all $i\leq j$ we have $\eta > d(f^j(p), f^j(q)) > \lambda d(f^{j-1}(p), f^{j-1}(q)) > \lambda^2 d(f^{j-2}(p), f^{j-2}(q)) > \dots > \lambda^j d(p, q)$, so $\eta > \lambda^j d(p,q)$. Dividing by $\lambda^j$ yields the first claim.
For the second claim, since $d(f^{i}(p), f^{i}(q_n)) < \eta$ for all $i\leq j_n$, the first claim gives $d(p, q_n) < \eta\lambda^{-j_n}$. Since $j_n\to \infty$ we have $q_n\to p$. \end{proof}
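For instance, if $\lambda=2$ and $\eta=1$, and $d(f^i(p),f^i(q))<1$ for all $i\leq 20$, then $d(p,q)<2^{-20}$: closeness along a long orbit segment forces extreme closeness of the initial points.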
It is worth noting that $\omega$-chaos is a rather local property. There are cases where a dynamical system $(X, f)$ exhibits $\omega$-chaos even though the only ``chaotic" part of $X$ is a measure theoretically small portion of the space.
Consider a finite-measure system $(X,f)$ with $\omega$-chaos, affix to it a much larger space $X'$, and extend $f$ to $X'$ in such a way that $f|X'$ is the identity. The new system $(X\cup X', f)$ still has $\omega$-chaos, but the chaos is relegated to the comparatively small portion of the system, $X$. To consider a more global form of $\omega$-chaos we introduce the following.
\begin{dfn}
We say a system exhibits \textbf{dense $\omega$-chaos} if any open set contains an uncountable $\omega$-scrambled set. \end{dfn}
Systems with dense $\omega$-chaos do exist. In fact, $2^\omega$ has dense $\omega$-chaos.
To see this, let $\alpha\in 2^\omega$. Note that prepending any finite word $w$ to an infinite word $\alpha$ does not alter the $\omega$-limit set of $\alpha$.
That is to say $\omega(w\alpha) = \omega(\alpha)$.
Note, also, that basic open sets in $2^\omega$ take the form of cylinder sets. So pick any finite word $w$. Prepending $w$ to every point of $2^\omega$ gives the cylinder set $[w]\subseteq 2^\omega$. Since prepending $w$ does not change $\omega$-limit sets, the image of an uncountable $\omega$-scrambled set under this operation is an uncountable $\omega$-scrambled subset of $[w]$, so this open set exhibits $\omega$-chaos.
\section{Existence of $\omega$-chaos in certain Lindel{\"o}f systems} In this section, we will prove the following theorem. \begin{maintheorem*}\label{MainThm} Let $(X,f)$ be an expansive dynamical system such that $X$ is Lindel{\"o}f, every open set of $X$ is uncountable, $(X,f)$ has ISP, and there exist $t_0$, $t_1, s \in X$ such that $\overline{Orb(t_0)}, \overline{Orb(t_1)}, \overline{Orb(s)}$ are each a non-zero distance away from each other. Then $(X,f)$ exhibits dense $\omega$-chaos. \end{maintheorem*}
We begin with the following constructions. Let $(X,f)$ be given as in the main theorem. Let $\eta, \lambda$ witness the expansivity of $f$. Let $t_0, t_1, s$ be three points such that the orbit closures of these points have non-zero distance between each other. Fix $\xi$ in $X$ and $U$ a neighborhood thereof. Let $D>0$ be small enough so that the ball of radius $D$ around $\xi$ is contained entirely within $U$. Fix $\epsilon > 0$ such that $$\min\{d(\overline{Orb(t_0)}, \overline{Orb(t_1)}), d(\overline{Orb(t_0)}, \overline{Orb(s)}), d(\overline{Orb(t_1)}, \overline{Orb(s)}), D, \eta\}> 2\epsilon.$$ Lastly, let $N$ be given by ISP with respect to $\epsilon / 2$ and let $P\in \mathbb{N}$ such that $P> N$.
We construct two sequences of numbers as follows: $a_0 = 0$, if $a_i$ is defined, let $b_i = a_i + P$, and if $b_i$ is defined, let $a_{i+1} = b_i+N$.
With that construction done, we are now ready to leverage ISP. \begin{dfn}
For $\beta \in 2^\omega$, let $E_\beta = \{x\in X : d(f^j(x), f^{j-a_i}(t_{\beta_i}))\leq \frac{\epsilon}{2}\text{ for all }i\in\omega\text{ and all }a_i\leq j\leq b_i\}$. \end{dfn}
Note that $E_\beta \neq \varnothing$ due to ISP.
\begin{lemma}
$E_\beta$ is closed, and $E_\beta\cap E_\gamma \neq \varnothing$ if and only if $\beta = \gamma$. \end{lemma} \begin{proof}
Let $\{p_k\}_{k\in\omega}$ be a sequence of points in $E_\beta$ which limits to $p$. Choose $i\in \omega$ and choose some $j\in \{a_i,\dots, b_i\}$.
Since $f^j$ is a continuous function, $f^j(p_k) \to f^j(p)$ and thus $d(f^j(p), f^{j-a_i}(t_{\beta_i}))\leq \epsilon/2$. So $p\in E_\beta$.
To verify the other claim, let $p\in E_\beta \cap E_\gamma$. Then for any $j\in\{a_i,\dots,b_i\}$ we have that $d(f^j(p), f^{j-a_i}(t_{\beta_i}))\leq \epsilon/2$ and $d(f^j(p), f^{j-a_i}(t_{\gamma_i}))\leq \epsilon/2$. But, since $d(Orb(t_0), Orb(t_1)) > 2\epsilon$, we must have $t_{\beta_i}=t_{\gamma_i}$ for all $i$ and thus $\beta=\gamma$. \end{proof}
\begin{lemma}\label{E_beta_not_periodic}
If $\beta\in 2^\omega$ is not periodic, then $E_\beta$ does not contain periodic points. \end{lemma} \begin{proof}
Suppose $q\in E_\beta$ is periodic with period $K$. Then $d(f^j(q), f^{j-a_i}(t_{\beta_i}))\leq\epsilon/2$ for all $i\in \omega, a_i\leq j\leq b_i$.
Note that $a_i = i(N+P)$
so $a_i+ a_h = i(N+P) + h(N+P) = (i+h)(N+P) = a_{i+h}$.
Let $L = lcm(K, N+P)$ and choose $m$ such that $L=m(N+P)$ and note that $L=a_m$.
Moreover, $f^{L+j}(q) = f^j(q)$, and thus we have the following.
\begin{align*}
d(t_{\beta_{m+i}}, t_{\beta_i}) &\leq d(t_{\beta_{m+i}}, f^{a_i}(q)) + d(f^{a_i}(q), t_{\beta_i})\\
&=d(t_{\beta_{m+i}}, f^{L+a_i}(q)) + d(f^{a_{i}}(q), t_{\beta_i}) \\
&=d(t_{\beta_{m+i}}, f^{a_m+a_i}(q)) + d(f^{a_{i}}(q), t_{\beta_i}) \\
&=d(t_{\beta_{m+i}}, f^{a_{m+i}}(q)) + d(f^{a_{i}}(q), t_{\beta_i}) \\
&\leq \epsilon
\nonumber
\end{align*}
Since $\overline{Orb(t_{0})}$ and $\overline{Orb(t_{1})}$ are more than $2\epsilon$ apart, this means that $\beta_{i+m} = \beta_i$ for all $i\in\omega$, i.e. $\beta$ is periodic. \end{proof}
\begin{lemma}\label{we_can_spec_images_of_beta}
Let $\beta\in 2^\omega$ and let $q\in E_\beta$. Then $f^{a_k}(q) \in E_{\sigma^k(\beta)}$. \end{lemma} \begin{proof}
Fix $k\in \omega$.
By definition of $E_\beta$, $d(f^j(q), f^{j-a_i}(t_{\beta_i}))\leq \frac{\epsilon}{2}$ for all $i\in\omega$ and $a_i\leq j\leq b_i$. Now let $i\in\omega$ and $a_i\leq j\leq b_i$. Since $a_i+a_k=a_{i+k}$ and $b_i+a_k=b_{i+k}$, we have $a_{i+k}\leq j+a_k\leq b_{i+k}$, and so $d(f^j(f^{a_k}(q)), f^{j-a_i}(t_{\sigma^k(\beta)_i})) = d(f^{j+a_k}(q), f^{(j+a_k)-a_{i+k}}(t_{\beta_{i+k}}))\leq \epsilon/2$. Thus $f^{a_k}(q)\in E_{\sigma^k(\beta)}$. \end{proof} Going forward, the following sets will be useful: \[G_\beta = {\bigcup_{i\in \omega} f^i(E_\beta)}, \quad H_\beta' = {\bigcup_{\alpha\in \omega_\sigma(\beta)}G_\alpha}, \quad H_\beta = \overline{\bigcup_{\alpha\in \omega_\sigma(\beta)}G_\alpha}\]
The definition of $H_\beta$ leads to the following useful lemmas which will be used to establish that certain $\omega$-limit sets have uncountable set differences. The first lemma is immediate from the definitions.
\begin{lemma}\label{H_beta_sets_are_uncountable}
If $\beta\in 2^\omega$ is such that $\omega(\beta)$ is uncountable, then $H_\beta'$ and $H_\beta$ are uncountable. \end{lemma}
Less immediately, we have the following. \begin{lemma}\label{b_not_in_orb_closure_implies_Eb_not_meet_H}
If $\beta,\chi\in 2^\omega$ are such that $\beta\notin \overline{Orb(\chi)}$, then $E_\beta\cap H_\chi = \varnothing$. \end{lemma} \begin{proof}
Let $\beta, \chi\in 2^\omega$ such that $\beta\notin \overline{Orb(\chi)}$. Let $r$ be such that $\beta_{[0, r]}$ does not occur in $\chi$.
Suppose, by way of contradiction, that $\{z_k\}_{k\in \omega}$ is a sequence of points in $H_\chi'$ which limits to $z\in E_\beta$. Then for all but finitely many $z_k$ we have, for $j\leq b_r$,
\begin{equation}\label{zk_and_z_are_close}
d(f^j(z_k), f^{j}(z)) \leq \epsilon/2
\end{equation}
Since each $z_k$ belongs to $H_\chi'$, for each $k$ there exists $v_k\in \omega$ minimal such that $z_k\in f^{v_k}(E_{\alpha_k})$ for some $\alpha_k$. As such, by Lemma \ref{we_can_spec_images_of_beta}, for each $i\in \omega$ with $a_i > v_k$ we have $f^{a_i-v_k}(z_k)\in E_{\sigma^i(\alpha_k)}$.
For each $k$, let $i_k$ be minimal so that $a_{i_k}\geq v_k$ and let $q_k = a_{i_k}-v_k.$
Then $f^{q_k}(z_k)\in E_{\theta^k}$ for some $\theta^k\in \omega(\chi)$.
Notice that by construction of the $E_\alpha$ sets, any point in $E_\alpha$ is exactly $P+N$ many iterates away from being in $E_{\sigma(\alpha)}.$ So any point in $H_\chi'$ is fewer than $P+N$ iterates away from being in some $E_\alpha$ (if it takes $N+P$ many iterates, then we must have started out in some $E_{\sigma^{-1}(\alpha)}$ set and thus needed 0 iterates to be in an $E$ set).
Moreover, for any point in $E_\alpha$, the first $P$ many iterates are controlled by specification, and iterates number $P+1, P+2, \dots, P+N-1$ are all not controlled by specification.
So, if $q_k < N$, then $z_k$ is in a part of the orbit of $E_{\alpha^k}$ that is not controlled by specification.
Similarly, if $q_k \geq N$, then $z_k$ is in a part of the orbit of $E_{\alpha^k}$ that \emph{is} controlled by specification.
As such we have two cases.
If $q_k < N$, let $y_k = q_k$ and let $\delta^k = \theta^k$. So $f^{y_k}(z_k)\in E_{\delta^k}$.
If, on the other hand, $q_k \geq N$, then since $z_k$ is in a part of the orbit of $E_{\alpha^k}$ that is controlled by specification, we let $y_k = 0$ and let $\delta^k = \alpha^k$.
Thus, in either case we have $f^{a_i}(f^{y_k}(z_k)) = f^{a_i+y_k}(z_k)\in B_{\epsilon/2}(Orb(t_{\delta^k_i}))$ for all $i\in \omega$.
But, $\beta_{[0, r]}$ does not occur in $\chi$, so for each $k$ there is some index $i_k \leq r$ such that $\beta_{i_k}\neq \delta^k_{i_k}$ and so $f^{a_{i_k}}(f^{y_k}(z_k)) \in B_{\epsilon/2}(t_{1-\beta_{i_k}})$.
Since $z\in E_\beta$, we have $f^{a_{i_k}}(z) \in B_{\epsilon/2}(t_{\beta_{i_k}})$ for each $k\in \omega$, but more importantly $f^{a_{i_k}+y_k}(z)\in B_{\epsilon/2}(f^{y_k}(t_{\beta_{i_k}}))$ since $y_k \leq N$ and so $a_{i_k}+y_k<b_{i_k}.$
But this, combined with (\ref{zk_and_z_are_close}), means $f^{a_{i_k}+y_k}(z)\in B_{\epsilon}(Orb(t_{\beta_{i_k}})) \cap B_{\epsilon}(Orb(t_{1-\beta_{i_k}}))$, which is empty, a contradiction. \end{proof}
We now demonstrate how, given $\beta$ and $\chi$ in $2^\omega$, to construct a pair of points whose $\omega$-limit sets meet and will each contain one of $H_\beta$ or $H_\chi$ and miss the other.
For $n\in \omega, z\in H_\beta$ let $U_{z,n} = \{ y: d(f^i(y), f^i(z)) <\epsilon/4\text{ for }i\leq b_n\}$. Then $\mathcal{U}_n = \{U_{z,n}:z\in H_\beta\}$ is an open cover of $H_\beta$. $H_\beta$ is a closed subset of the Lindel{\"o}f space $X$ and is therefore Lindel{\"o}f \cite{Munkres}. Thus, we can choose a countable set $\{U_{z_1, n}, U_{z_2,n},\dots\}$ which is an open cover of $H_\beta$. Finally, let $H_{\beta, n} = \{z_1, z_2, z_3,\dots\}$. Since each $H_{\beta, n}$ is countable, we can enumerate the sets: \begin{align*} H_{\beta, 1} &= \{\gamma_{1, 0}, \gamma_{1, 1}, \gamma_{1,2},\dots\}\\ H_{\beta, 2} &= \{\gamma_{2, 0}, \gamma_{2, 1}, \gamma_{2,2},\dots\}\\ H_{\beta, 3} &= \{\gamma_{3, 0}, \gamma_{3, 1}, \gamma_{3,2},\dots\} \end{align*} We can list the elements of the union of the $H_{\beta, i}$ sets via the following diagonalization: $$(\gamma_{1,0}, \gamma_{1,1}, \gamma_{2,0}, \gamma_{1,2}, \gamma_{2,1}, \gamma_{3,0}, \dots)$$ Since the sets $H_{\beta ,n}$ are not necessarily disjoint, the previous list may have repetition, but that does not affect the proof.
Let $M$ be given by ISP with respect to $\epsilon/8$. Now, apply Lemma \ref{spec pattern} to find a point $p_\beta$ such that $f^i(p_\beta)$ is less than $\epsilon/8$ away from the $i$-th term of the sequence \begin{align*}
\xi, (*)^M, & \gamma_{1,0},\dots, f^{b_1}(\gamma_{1,0}), (*)^M, s, (*)^M, \\
& \gamma_{1,1},\dots, f^{b_1}(\gamma_{1,1}), (*)^M, s, f(s), (*)^M,\\
& \gamma_{2,0},\dots, f^{b_2}(\gamma_{2,0}), (*)^M, s, f(s), f^2(s), (*)^M,\\ & \dots. \end{align*} whenever the $i$-th term is not a $*$. Note that the sequence follows the orbit of the point $\gamma_{j, w}$ for $b_j$-many iterates, and we follow the orbit of $s$ for $(n-1)$-many iterates the $n$-th time we arrive at $s$ in the sequence.
We now turn our attention to the $\omega$-limit set of $p_\beta$.
\begin{lemma}
$H_\beta\cup \{s\} \subseteq \omega(p_\beta)$. \end{lemma} \begin{proof}
To see that $s\in \omega(p_\beta)$, use the specification pattern of $p_\beta$ to define a sequence of integers $i_k$ such that $f^{i_k}(p_\beta) \in B_{\epsilon/8}(s)$ and $f^{i_k+j}(p_\beta)\in B_{\epsilon/8}(f^j(s))$ for all $j\leq k, k\in \omega$. Lemma \ref{Expansivity_Implies_Closeness} then gives us that $\lim_{k\to \infty}f^{i_k}(p_\beta) = s$.
Now to see that $H_\beta$ is contained in $\omega(p_\beta)$, let $\upsilon \in H_\beta$. For each $n\in \mathbb{N}$ there exists an $l_n$ such that $\upsilon \in U_{\gamma_{n, l_n}, n}$.
Notice that since $\upsilon\in U_{\gamma_{n, l_n}, n}$ we have that $d(f^i(\upsilon), f^i(\gamma_{n,l_n}))<\epsilon/4$ for all $i\leq b_n$. Thus, by the expansivity of $f$,
together with Lemma \ref{Expansivity_Implies_Closeness}
we have that $\lim_{n\to \infty}\gamma_{n,l_n} = \upsilon$.
Now define a sequence $i_n$ such that $f^{i_n}(p_\beta)\in B_{\epsilon/8}(\gamma_{n, l_n})$ and for all $j\in \{0, 1, \dots, b_{n}\}$ we have $f^{i_n+j}(p_\beta)\in B_{\epsilon/8}(f^{j}(\gamma_{n, l_n}))$. Such a sequence exists by the construction of $p_\beta$ via ISP.
Then,
\[
d(f^{i_n}(p_\beta), \upsilon) \leq d(f^{i_n}(p_\beta), \gamma_{n, l_n}) + d(\gamma_{n, l_n} , \upsilon).
\]
But, as we have already seen $d(\gamma_{n, l_n} , \upsilon)\to 0$ as $n\to \infty$. Moreover, by Lemma \ref{Expansivity_Implies_Closeness}, since $$d(f^{i_n+j}(p_\beta), f^{j}(\gamma_{n, l_n})) <\eta$$ for all $j<b_n$, we have $d(f^{i_n}(p_\beta), \gamma_{n, l_n}) < \eta\lambda^{-b_n}$. So as $n\to\infty, d(f^{i_n}(p_\beta), \gamma_{n, l_n}) \to 0$. \end{proof}
\begin{lemma}\label{omega_p_gamma_is_disjoint_from_H_beta}
If $\beta, \gamma\in2^\omega$ are such that $\overline{Orb(\beta)}\cap \overline{Orb(\gamma)} = \varnothing$, then $\omega(p_\gamma) \cap H_\beta' = \varnothing$. \end{lemma} \begin{proof}
Let $q\in H_\beta'$. Then $q\in\bigcup _{\alpha \in \omega(\beta)} G_\alpha$.
As such, by Lemma \ref{we_can_spec_images_of_beta} there is some $m\in\omega$ such that $f^m(q)\in E_{\alpha}$ for some $\alpha\in \omega(\beta)$.
By definition of $E_\alpha$ we have $d(f^{j}(f^m(q)), f^{j-a_i}(t_{\alpha_i}))\leq \epsilon/2$ for all $j\in\{a_i,\dots, b_i\}, i\in\omega$.
Now suppose, by way of contradiction, $q\in \omega(p_\gamma)$. This implies that $f^m(q)\in \omega(p_\gamma)$ as well since $\omega$-limit sets are forward invariant.
Let $\{k_n\}_{n\in\omega}$ be a sequence of natural numbers such that $\lim_{n\to\infty}f^{k_n}(p_\gamma) = f^m(q)$.
By the construction of $p_\gamma$, for every $k_n$, $f^{k_n}(p_\gamma)$ is either
\begin{enumerate}
\item in a part of $Orb(p_\gamma)$ that is controlled by specification and within $\epsilon/8$ of $\xi$ (but this only occurs once at the start of the spec pattern of $p_\gamma$),
\item in a part of $Orb(p_\gamma)$ that is controlled by specification and within $\epsilon/8$ of the orbit of $s$,
\item in a part of $Orb(p_\gamma)$ that is controlled by specification and within $\epsilon/8$ of the orbit of some $\gamma_{i, j}$, or
\item not in a part of $Orb(p_\gamma)$ that is controlled by specification, in which case $f^{k_n}(p_\gamma)$ is fewer than $M+1$ iterates away from being within $\epsilon/8$ of some $\gamma_{i,j}$ or $s$.
\end{enumerate}
For the sake of conciseness, let $p_n = f^{k_n}(p_\gamma)$. Observe that at least one of conditions (1)-(4) must be met infinitely often, and since condition (1) is only satisfied once, at least one of conditions (2)-(4) must be met infinitely often.
Suppose condition (2) is satisfied by infinitely many $p_n$. Then $f^m(q) = \lim_{n\to\infty} p_n$ would be within $\epsilon/4$ of $Orb(s)$. But this cannot be since, by choice of $\epsilon$, we have $d(\overline{Orb(s)}, \overline{Orb(t_{\alpha_0})}) > 2\epsilon$, and $f^m(q)$ is within $\epsilon/2$ of $t_{\alpha_0}$ by virtue of being in $E_\alpha$.
Suppose condition (3) is satisfied by infinitely many $p_n$. Then for infinitely many $n$, $p_n$ is within $\epsilon/8$ of some point $\gamma'_{i_n, j_n}$ in the orbit of some $\gamma_{i_n, j_n}$. Let $L_n$ be minimal so that $L_n +k_n = b_i +1$ for some $i\in \omega$.
Then, $p_n$ is guaranteed to be within $\epsilon/8$ of the orbit of $\gamma_{i_n, j_n}$ for at least $L_n$ many iterates.
We have two possibilities. Either $\{L_n\}_{n\in \omega}$ is unbounded or not. If it is unbounded, then we can pass to $\{p_{n_r}\}_{r\in\omega}$, a subsequence of $p_n$ on which $L_{n_r}$ is monotone increasing.
Then by Lemma \ref{Expansivity_Implies_Closeness}, we have that $d(p_{n_r}, \gamma'_{i_{n_r}, j_{n_r}})\to 0$. Moreover, since $p_{n_r}\to f^m(q)$, we have $d(p_{n_r}, f^m(q)) \to 0$.
So, by the triangle inequality
\[
d(f^m(q), \gamma'_{i_{n_r}, j_{n_r}}) \leq d(f^m(q), p_{n_r}) + d(p_{n_r}, \gamma'_{i_{n_r}, j_{n_r}})
\]
The latter two terms go to 0 as $n_r\to \infty$ and so $d(f^m(q), \gamma'_{i_{n_r}, j_{n_r}})\to 0$. This means that $\lim_{r\to\infty} \gamma'_{i_{n_r}, j_{n_r}} = f^m(q)$.
But each $\gamma'_{i_{n_r}, j_{n_r}} \in H_\gamma$ since each $\gamma'_{i_{n_r}, j_{n_r}}$ is in the orbit of some $\gamma_{i,j}\in H_\gamma$ and $H_\gamma$ is forward invariant.
Moreover, $H_\gamma$ is closed, which implies that $f^m(q)\in H_\gamma$.
But, $E_\alpha \cap H_\gamma = \varnothing$ by Lemma \ref{b_not_in_orb_closure_implies_Eb_not_meet_H}, a contradiction.
If, on the other hand, $\{L_n\}_{n\in\omega}$ is bounded, then we can pass to $\{p_{n_r}\}_{r\in\omega}$, a subsequence of $p_n$, on which $L_{n_r}$ is constant.
Let $L$ be this constant. Then $f^{M+L}(p_{n_r})$ is within $\epsilon/8$ of $s$ for all $r$.
Additionally, every time an iterate of $p_\gamma$ in the specification pattern is within $\epsilon/8$ of $s$, it will stay within $\epsilon/8$ of $Orb(s)$ for increasingly more iterates.
Hence as $n_r\to \infty$ increasingly more iterates of $f^{M+L}(p_{n_r})$ will stay within $\epsilon/8$ of $Orb(s)$.
This fact, in conjunction with Lemma \ref{Expansivity_Implies_Closeness}, implies that the sequence $\{f^{M+L}(p_{n_r})\}_{r\in\omega}$ converges to $s$.
But, the forward orbit of $f^m(q)$ will be within $\epsilon/2$ of $\{t_0, t_1\}$ infinitely often.
So, since $d(\overline{Orb(s)}, \overline{Orb(t_{i})}) > 2\epsilon$ for $i\in \{0,1\}$, we cannot have $\{f^{M+L}(p_{n_r})\}_{r\in\omega}$ converging both to $s$ and to $f^{M+L}(f^m(q))$.
Lastly, suppose that for infinitely many $p_n$ condition (4) is satisfied. If infinitely many $p_n$ are not in a part of $Orb(p_\gamma)$ controlled by specification and are fewer than $M+1$ iterates away from being within $\epsilon/8$ of $s$, then there is some $w\leq M+1$ such that infinitely many $p_n$ are exactly $w$ iterates away from being within $\epsilon/8$ of $s$.
By construction of $p_\gamma$, as $n\to \infty$, increasingly more iterates of $f^w(p_n)$ will be within $\epsilon/8$ of $s$ which implies $f^{w}(p_n)\to s$ by Lemma \ref{Expansivity_Implies_Closeness}.
Since $p_n\to f^m(q)$ and $f^w$ is continuous, this would mean $f^w(f^m(q)) = s$, so the forward orbit of $f^m(q)$ eventually coincides with $Orb(s)$, contradicting the fact that the forward orbit of $f^m(q)$ comes within $\epsilon/2$ of $\{t_0, t_1\}$ infinitely often while $d(\overline{Orb(s)}, \overline{Orb(t_{i})}) > 2\epsilon$ for $i\in\{0,1\}$.
On the other hand, if infinitely many $p_n$ are fewer than $M+1$ iterates away from being within $\epsilon/8$ of some $\gamma_{i_n,j_n}$, then there is some $w\leq M+1$ such that infinitely many $p_n$ are exactly $w$ iterates away from being within $\epsilon/8$ of $\gamma_{i_n,j_n}$. By construction of the orbit of $p_\gamma$, this means that $d(f^j(f^w(p_n)), f^j(\gamma_{i_n, j_n})) < \epsilon/8$ for $j\leq b_{i_n}$.
Again, we have two possibilities. Either the collection $\{i_n\}_{n\in\omega}$ is unbounded or bounded. If the collection is unbounded, then we can pass to $\{p_{n_r}\}_{r\in\omega}$, a subsequence of $\{p_n\}_{n\in\omega}$ on which $\{i_{n_r}\}_{r\in\omega}$ is monotone increasing and $f^w(p_{n_r})$ is within $\epsilon/8$ of $\gamma_{i_{n_r}, j_{n_r}}$.
Since $i_{n_r}$ is monotone increasing, $f^w(p_{n_r})$ will be within $\epsilon/8$ of $Orb(\gamma_{i_{n_r}, j_{n_r}})$ for increasingly many iterates, which by Lemma \ref{Expansivity_Implies_Closeness} means $d(f^w(p_{n_r}), \gamma_{i_{n_r}, j_{n_r}})\to 0$. As before, this means $d(f^w(f^m(q)), \gamma_{i_{n_r}, j_{n_r}}) \to 0$.
Now, let $g\in \omega$ be such that $f^{g}(f^w(f^m(q)))\in E_\delta$ for some $\delta\in \omega(\beta)$. Such a $g$ exists by Lemma \ref{we_can_spec_images_of_beta}.
Since $f^g$ is continuous, we have $\lim_{r\to\infty}f^g(\gamma_{i_{n_r}, j_{n_r}}) = f^g(f^w(f^m(q)))$.
But $H_\gamma$ is forward invariant, so $f^g(\gamma_{i_{n_r}, j_{n_r}})\in H_\gamma$, and $H_\gamma$ is closed, so $f^g(f^w(f^m(q)))\in H_\gamma$.
But, $\delta\notin \overline{Orb(\gamma)}$ so $E_\delta\cap H_\gamma = \varnothing$, a contradiction.
If, on the other hand $\{i_n\}_{n\in \omega}$ is bounded, we can pass to $\{p_{n_r}\}_{r\in\omega}$, a subsequence of $\{p_n\}_{n\in\omega}$, such that $\{i_{n_r}\}_{r\in\omega}$ is constant.
Let $L$ be this constant. Then $f^{M+b_L+w}(p_{n_r})$ is within $\epsilon/8$ of $s$. And again, since we stay within $\epsilon/8$ of the orbit of $s$ for increasingly many iterates each time we visit in the specification pattern of $p_\gamma$, this means $f^{M+b_L+w}(p_{n_r}) \to s$. But as before, since infinitely many iterates of $f^m(q)$ are within $\epsilon/2$ of $\{t_0, t_1\}$ and $d(\overline{Orb(s)}, \overline{Orb(t_{i})}) > 2\epsilon$ for $i\in \{0,1\}$, we cannot have $f^{M+b_L+w}(p_{n_r})$ converging both to $s$ and to $f^{M+b_L+w}(f^m(q))$.
Thus, $f^m(q)\notin \omega(p_\gamma)$, implying $q\notin \omega(p_\gamma)$ which is the desired contradiction. \end{proof}
We are now ready to prove our main theorem.
\begin{maintheorem*}
Let $(X,f)$ be an expansive dynamical system such that $X$ is Lindel{\"o}f, every open set of $X$ is uncountable, $(X,f)$ has ISP, and there exist $t_0$, $t_1, s \in X$ such that $\overline{Orb(t_0)}, \overline{Orb(t_1)}, \overline{Orb(s)}$ are each a non-zero distance away from each other. Then $(X,f)$ exhibits dense $\omega$-chaos. \end{maintheorem*}
\begin{proof}
Fix $\xi$ in $X$ and $U$ a neighborhood thereof. Let $W\subset 2^\omega$ be an uncountable set of non-periodic points with pairwise disjoint orbits and uncountable $\omega$-limit sets. Such a set does exist \cite{uncountable_minimal_sets_in_2w}.
Let $\beta, \gamma \in W$. Find $p_\beta$ and $p_\gamma$ in $X$ following the method outlined above. By Lemma \ref{E_beta_not_periodic}, since $\beta, \gamma$ are not periodic, $E_\beta$ and $E_\gamma$ do not contain periodic points implying that $H_\beta$ and $H_\gamma$ do not contain \emph{only} periodic points.
Moreover, $\omega(p_\beta)\cap \omega(p_\gamma)\ni s$, and $\omega(p_\beta)\setminus \omega(p_\gamma) \supset H_\beta'$ by Lemma \ref{omega_p_gamma_is_disjoint_from_H_beta} which is uncountable as shown in Lemma \ref{H_beta_sets_are_uncountable}.
Similarly, $\omega(p_\gamma)\setminus \omega(p_\beta) \supset H_\gamma'$ which is also uncountable.
There are uncountably many such $\beta$ and $\gamma$ in $W$. From this, it follows that $(X,f)$ exhibits $\omega$-chaos.
Moreover, $p_\beta, p_\gamma \in B_{\epsilon/8}(\xi) \subset U$. Thus $(X,f)$ exhibits dense $\omega$-chaos. \end{proof} Note that if $X$ contains some open sets which are not uncountable, then $X$ will still exhibit $\omega$-chaos, though not densely.
\end{document} | arXiv |
The cohomology groups of the total space of the pencil.
A description of the Gauss-Manin local system on $\mathbb P^1-\Delta$ (e.g. in terms of monodromy action of the generators for $\pi_1$).
The Hodge numbers of a hypersurface $X\subset \mathbb P^n$ can be computed using Lefschetz hyperplane away from the middle row, and then Hirzebruch's generating function for the primitive middle Hodge numbers (see http://www.math.purdue.edu/~dvb/preprints/book-chap17.pdf for details). In your example, the Hodge diamond has middle row $(35,232,35)$. The total space of a Lefschetz pencil is the blow up of $X$ at the base locus of the pencil (8 points in your example), so it has $b_2 = 310$.
The local system description is more difficult. Let $C_p$ be a smooth octic curve over a general point $p\in \mathbb P^1$. The local system has rank $h^1(C_p)=2g(C_p)= 42$. There are $392$ nodal curves in the pencil. Since $\pi_1(\mathbb P^1 - \Delta,p)$, with $\Delta$ the set of $392$ critical values, is free on 391 generators, you need to find the monodromy for each generator. By the Picard-Lefschetz formula, the monodromy around a given nodal fiber is given by $$\tau(x) = x- \langle x,e \rangle e$$ where $e\in H^1(C_p,\mathbb Z)$ is the vanishing cycle, represented by an embedded $S^1$ which gets contracted to the node.
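As a quick consistency check on these numbers, using standard formulas for a smooth degree-$d$ surface in $\mathbb P^3$ with $d=8$: $h^{2,0}=\binom{d-1}{3}=\binom{7}{3}=35$, matching the middle row above; a smooth hyperplane section is a plane octic curve, so $g(C_p)=\binom{d-1}{2}=21$ and $h^1(C_p)=2g=42$; and the number of nodal fibres equals the degree of the dual surface, $d(d-1)^2=8\cdot 7^2=392$.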
Optomechanical dissipative solitons
Jing Zhang, Bo Peng, Seunghwi Kim, Faraz Monifi, Xuefeng Jiang, Yihang Li, Peng Yu, Lianqing Liu, Yu-xi Liu, Andrea Alù & Lan Yang
Nature volume 600, pages 75–80 (2021)
Nonlinear wave–matter interactions may give rise to solitons, phenomena that feature inherent stability in wave propagation and unusual spectral characteristics. Solitons have been created in a variety of physical systems and have had important roles in a broad range of applications, including communications, spectroscopy and metrology1,2,3,4. In recent years, the realization of dissipative Kerr optical solitons in microcavities has led to the generation of frequency combs in a chip-scale platform5,6,7,8,9,10. Within a cavity, photons can interact with mechanical modes. Cavity optomechanics has found applications for frequency conversion, such as microwave-to-optical or radio-frequency-to-optical11,12,13, of interest for communications and interfacing quantum systems operating at different frequencies. Here we report the observation of mechanical micro-solitons excited by optical fields in an optomechanical microresonator, expanding soliton generation in optical resonators to a different spectral window. The optical field circulating along the circumference of a whispering gallery mode resonator triggers a mechanical nonlinearity through optomechanical coupling, which in turn induces a time-varying periodic modulation on the propagating mechanical mode, leading to a tailored modal dispersion. Stable localized mechanical wave packets—mechanical solitons—can be realized when the mechanical loss is compensated by phonon gain and the optomechanical nonlinearity is balanced by the tailored modal dispersion. The realization of mechanical micro-solitons driven by light opens up new avenues for optomechanical technologies14 and may find applications in acoustic sensing, information processing, energy storage, communications and surface acoustic wave technology.
Cavity optomechanics has attracted extensive attention in recent years, creating opportunities for high-precision sensing, communications and quantum information processing14, as well as for fundamental science, for example, macroscopic quantum effects in mechanical systems15 and gravitational wave detection16. Although most of the studies about cavity optomechanics focus on the 'cooling' regime—where the cavity is driven by a laser red-detuned from the cavity resonance and thus absorbs phonons from mechanical modes—extensive research has been recently devoted to the 'heating' regime of cavity optomechanics. In this regime, a photon blue-detuned from the cavity resonance will emit a phonon into the mechanical modes when entering the cavity. Various phenomena have been observed in this regime, such as phonon lasing, in which coherent mechanical vibrations are excited through optical pumping17,18,19. For strong pumps, it is also possible to observe nonlinear optical effects, such as chaos20,21, optical solitons22,23,24, and surface acoustic wave frequency combs25,26,27,28.
Recently, optical solitons have been demonstrated and exploited in Kerr optical frequency combs, which provide robust, equally spaced spectral lines, ideally suited for timing and metrology1,2,3,4. These developments have enabled the realization of chip-scale frequency combs through Kerr nonlinearity or electro-optical interactions29,30,31,32 obtained by balancing nonlinearity and dispersion in Kerr microresonators5,33. The phononic counterparts of the optical frequency comb, that is, mechanical frequency combs, have been theoretically proposed using Fermi–Pasta–Ulam–Tsingou chains, and later demonstrated in micromechanical resonators using nonlinear three-wave mixing25,34,35. Their repetition rates are from a few Hz up to kHz, ten orders of magnitude smaller than the typical rates of optical combs, implying much finer frequency resolution.
Here we investigate nonlinear mechanical phenomena in optomechanical resonators and report observation of mechanical micro-solitons in an optical whispering gallery mode (WGM) toroidal microresonator. Although optical solitons have been observed in various WGM microresonators6,7,8,9,10, here we experimentally demonstrate optomechanical solitons—that is, localized acoustic waves stimulated by an optical pumping field—in WGM optomechanical resonators. In certain parameter regimes, we also observe mechanical frequency combs. The observed localized acoustic waves through optomechanical interactions are distinct from optical solitons—that is, localized photonic wavepackets, theoretically predicted in optomechanical arrays22, in which both Kuznetsov–Ma solitons23 and Akhmediev breathers24 have been studied. In addition—unlike previous studies on frequency combs in optomechanical resonators25,26,27,28 formed by cascaded four-wave mixing of different mechanical modes via nonlinear optomechanical coupling—the frequency combs observed in our experiments stem from the cnoidal wave mechanical motion associated with soliton formation. As the pump grows, this periodic-pulse-type motion becomes localized and turns into a single mechanical wavepacket, supporting a mechanical micro-soliton.
Figure 1 shows a toroidal microdisk resonator supporting both high-quality factor (Q) optical and mechanical modes. We focus on radial mechanical modes supported by the thin disk (Fig. 1a–c). When a blue-detuned optical pump is coupled into the resonator, radial mechanical waves are excited. At the rim of the microresonator, the back-action of the optical mode leads to the indirect coupling between the counterpropagating mechanical modes. A high optical pump power triggers the nonlinearity in the mechanical wave propagating from the rim of the resonator towards the central pedestal (Fig. 1a). At the silicon pedestal, the mechanical travelling wave experiences a π phase shift upon reflection and travels back towards the perimeter of the toroid (Fig. 1b). Once the mechanical travelling wave meets the ring of the toroid where the optical mode resides, the optomechanical coupling enables optically induced modulation of the mechanical wave, which travels back towards the central pedestal again.
Fig. 1: Mechanism of acoustic-wave propagation in an optomechanical resonator.
a, An inward mechanical travelling wave is excited and periodically modulated by the optical field at the rim of the microresonator. b, An outward mechanical travelling wave reflected by the silicon pedestal. c, The multiple reflections, properly tailored by nonlinearities and modal dispersion, can form an optomechanical soliton with dimension much larger than the physical size of the resonator. Snapshots of the propagating mechanical travelling wave are shown as a function of time. d, An equivalent optomechanical lattice, in which the optical field modulates the mechanical travelling wave periodically. e, Analogy of the optomechanical micro-soliton to a conventional mechanical soliton. The blue and pink curves denote inward and reflected outward waves, respectively.
This multiple scattering process of the mechanical wave can be modelled as the propagation through an effective mechanical lattice36 (Fig. 1d). Inward propagation towards the pedestal (illustrated by the blue travelling waves in Fig. 1c) experiences effective mechanical properties (blue block in Fig. 1d) that are different to those of outward travelling waves (red in Fig. 1c, d), owing to the different roles played by the nonlinear optomechanical interactions. Hence, the overall mechanical response of the toroid can be described by the effective mechanical lattice sketched in Fig. 1d, with an added π phase shift at the end of each red block (red arrows in Fig. 1d). This acoustic-wave lattice induces an unusual dispersion of the mechanical travelling wave. A self-reinforcing wavepacket (that is, a micro-soliton), following the same dynamics of a shallow-water wave with a weak nonlinear restoring force37 (longwave scenario), can, therefore, arise in such a mechanical effective lattice. Figure 1e illustrates how the mechanical travelling waves collectively behave as a single wavepacket under the nonlinear coupling via optomechanical interactions with the blue-detuned pumping. The envelope of the mechanical wavepacket (see the green envelope in Fig. 1c and the wavepacket in Fig. 1e) is much larger than the unit cell of the mechanical lattice, and so the micro-soliton has a time-varying wave amplitude rather than a visualized packet in a single cell, as shown in Fig. 1c, d.
In the parameter space, the mechanical dynamics can experience different regimes. By tuning the pump power or the frequency detuning between the pump field and the cavity mode, transitions from sinusoidal to cnoidal and solitary regimes can be observed for the mechanical motion. When the optical pump power is low, the optomechanically induced mechanical nonlinearity can be neglected. A simple continuous sinusoidal wave is excited in the resonator. As the pump power increases, the system becomes unstable: the optomechanical interaction introduces a third-order mechanical nonlinear term36. Meanwhile, the optomechanical interaction modifies the dispersion relation as ωμ = Ωm + D1μ + D3μ³, where μ, D1 and D3 are the relative mode number and the first- and third-order dispersion coefficients of the mechanical travelling wave36, respectively. The mechanical dispersion broadens the travelling wave, and the mechanical nonlinearity sharpens the wavepacket.
When nonlinearity and dispersion balance each other, a stable and localized mechanical wave packet, that is, an optomechanical soliton, is achieved. In the soliton regime, the shape of the soliton pulse can largely vary with pump power as well as with frequency detuning, which affects the strength of nonlinearity and the mode dispersion. This mechanism is quite different from the formation of optical solitons in microresonators described by the Lugiato–Lefever equation5. The dynamics of our soliton system can be characterized by the following modified Korteweg–de Vries equation,
$$\frac{\partial u}{\partial t}=[(G-{\Gamma }_{{\rm{m}}})-\xi {u}^{2}]u-v\frac{\partial u}{\partial z}-\sigma u\frac{\partial u}{\partial z}-{d}_{{\rm{KdV}}}\frac{{\partial }^{3}u}{\partial {z}^{3}}+\zeta (t),$$
where u(z, t) is the amplitude of the mechanical travelling wave, z is the coordinate along the radius of the microtoroid, v is the effective propagating speed of the soliton, σ is the aforementioned optomechanically induced third-order mechanical nonlinear coefficient, dKdV = (8π³/λ³)/D3 is the normalized third-order dispersion coefficient of the mechanical travelling wave, λ is the wavelength of the mechanical mode, G designates the mechanical gain induced by the phonon lasing effects, Γm characterizes the damping rate of the mechanical wave, ξ is the strength of the third-order mechanical nonlinear term induced by optomechanics, ζ(t) denotes white noise satisfying E(ζ(t)) = 0, E(ζ(t)ζ(t′)) = Dδ(t − t′) where δ(·) is the Dirac delta function, which is 0 everywhere except at the origin, and its integral from -∞ to +∞ is equal to 1, and D is the strength of the noise. Note that the Korteweg–de Vries equation has been used to study mechanical solitons in shallow-water waves37 as well as acoustic solitons in solid-state structures36. Unlike cavity optical solitons described by the Lugiato–Lefever equation, in which loss is compensated by a coherent input field38,39,40,41,42, the mechanical loss in our system is compensated by mechanical gain and nonlinear saturation induced by phonon lasing. This is analogous to dissipative solitons observed in laser systems, in which loss is balanced by laser gain43.
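To make the roles of the individual terms concrete, the following is a minimal numerical sketch (not the authors' code) that integrates the deterministic part of the modified Korteweg–de Vries equation above, with the noise term ζ(t) omitted, on a periodic domain using FFT-based spatial derivatives and fourth-order Runge–Kutta time stepping. All parameter values (G, Γm, ξ, v, σ, dKdV, grid size and time step) are illustrative assumptions rather than the experimental ones.

```python
# Minimal sketch: deterministic part of the modified Korteweg-de Vries equation
#   u_t = [(G - Gamma_m) - xi*u**2]*u - v*u_z - sigma*u*u_z - d_kdv*u_zzz
# integrated pseudo-spectrally (FFT derivatives, periodic domain) with RK4.
# Parameter values are illustrative assumptions, not the experimental ones.
import numpy as np

N = 256                                        # spatial grid points
L = 2.0 * np.pi                                # periodic domain length (arbitrary units)
z = np.linspace(0.0, L, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)   # spectral wavenumbers
dealias = np.abs(k) <= (2.0 / 3.0) * np.abs(k).max()  # 2/3-rule filter

G, Gamma_m = 1.0, 0.5    # phonon gain and mechanical damping (assumed)
xi = 0.5                 # gain-saturation coefficient (assumed)
v = 0.5                  # drift speed (assumed)
sigma = 1.0              # optomechanically induced nonlinearity (assumed)
d_kdv = 5e-3             # third-order dispersion (assumed)

def deriv(u, order):
    """Spatial derivative of the given order via FFT (periodic boundary)."""
    return np.real(np.fft.ifft(((1j * k) ** order) * np.fft.fft(u)))

def rhs(u):
    """Right-hand side of the modified KdV equation, noise omitted."""
    uz = deriv(u, 1)
    return ((G - Gamma_m) - xi * u**2) * u - v * uz - sigma * u * uz - d_kdv * deriv(u, 3)

dt = 2e-4
u = 0.1 * np.cos(z)                            # small seed wave
for _ in range(50000):                         # integrate to t = 10
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    u = u + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    u = np.real(np.fft.ifft(np.fft.fft(u) * dealias))   # suppress aliasing

half = 0.5 * u.max()
print(f"peak amplitude ~ {u.max():.3f}, points above half-maximum: {int((u > half).sum())}")
```

In such a sketch the gain term pumps energy into the wave until the cubic saturation balances it, while the relative sizes of σ and dKdV set how sharp the resulting nonlinear wave can become, mirroring the balance between nonlinearity and dispersion described above; reproducing the measured pulse shapes would require the actual device parameters.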
We experimentally observe these phenomena in an optomechanical microresonator with Ωm = 27.3 MHz and a quality factor of 8 × 10⁶ coupled to a tapered fibre. Several phase transitions of the mechanical mode can be observed by gradually increasing the blue detuning of the driving field from the resonant frequency (Fig. 2a). On the blue-detuned side, the energy is fed into the mechanical mode by the energy transfer between the optical pump and the cavity mode, as shown in the energy diagram in Fig. 2a, inset. When the detuning is much smaller than the mechanical frequency, a sinusoidal waveform is excited owing to a weak modulation of the mechanical mode by the optomechanical effects (Fig. 2b, e, which show the temporal and frequency responses, respectively). A further increase of the detuning induces cascaded sidebands around the main peak of the mechanical mode in the frequency domain (Fig. 2f) and a periodic localized wavepacket in the time domain (Fig. 2c). The localized wavepacket observed here is induced by a particular mechanical motion of the microtoroid, cnoidal wave motion44,45,46, which leads to a localized wavepacket in the output field. Further blue detuning towards the mechanical frequency leads to an increase in the period of the localized pulse, until the system enters a regime in which the recurrence of the localized pulse stops being periodic, and several localized pulses may appear at the same time. This corresponds to the multi-soliton regime, in which the mechanical nonlinearity is so strong that the localized mechanical wavepackets may not be stable, but they interact, merge and collide with each other. Finally, the single-soliton wavepacket regime is reached, when the blue detuning is close to the mechanical frequency. Under this condition, the mechanical mode is resonant with the radiation force; thus, energy can be efficiently transferred from the optical mode to the mechanical mode (Fig. 2a, inset), maximizing the optomechanically induced mechanical nonlinearity. In this regime, we can obtain maximum localization of the mechanical wavepacket and observe a single pulse in the time domain (Fig. 2d), corresponding to a broadband frequency spectrum (Fig. 2g). Figure 2h, i shows the dispersion curves of the mechanical travelling wave for different frequency detuning levels and different pump powers, respectively.
Fig. 2: Generation of an optomechanical soliton.
a, Different observed regimes as a function of frequency detuning between pump and cavity mode. Inset, the energy transfer mechanism from optical to mechanical mode. More energy can be efficiently fed into the mechanical mode from the optical mode under the matching condition Δ = ωp − ω0 = Ωm, leading to the formation of mechanical solitons. ωp, ω0 and Δ are the frequencies of the pump field, cavity mode and frequency detuning, respectively, and Ωm is the mechanical mode frequency. b–d, Time-domain spectra of the output field in periodic (b), cnoidal wave (c) and soliton (d) regimes. The frequency detuning of the input field increases from left to right, and we can observe phonon localization for increased frequency detuning. e–g, Output spectra in the frequency domain of the pump field in periodic (e; single peak), cnoidal wave (f; frequency-comb-type spectrum) and soliton (g; broadband peak) regimes. h, Dispersion spectra for different normalized frequency detunings. i, Dispersion spectra for different pump powers, P. dBm, decibel milliwatts; FFT, fast Fourier transform; RF, radio frequency; WGM, whispering gallery mode.
To provide further insight into the phenomena observed in Fig. 2, we have explored the effect of radiation force, which serves as the energy source to the mechanical mode. The strength of the radiation force driving the mechanical mode can be enhanced by increasing either the detuning of the pump frequency or the pump power. In Fig. 3a–c, we see that the localized pulse in the cnoidal wave regime is a slowly varying envelope of the oscillating optical signal induced by the periodic mechanical motion. By adjusting the frequency detuning and the pump power separately, we observe changes in the spectral features of the pulses, as shown in Fig. 3d–f and Fig. 3g–i, respectively. With an increase in the frequency detuning and pump power, the width of the pulse in the output field that is induced by the localized mechanical wavepacket decreases (Fig. 3d, g), while the period of the pulse increases (Fig. 3e, h). The peak value of the wavepacket is proportional to the input pump power (Fig. 3i) and also increases with the frequency detuning (Fig. 3f). When we increase the pump power or change the detuning Δ between the pump field and the cavity mode to Ωm, more energy is transferred from the optical mode to the mechanical mode47, leading to an enhancement of the optomechanically induced mechanical nonlinearity. In turn, the localized acoustic wave becomes narrower with a higher amplitude and a larger period. The cnoidal wave motion provides a more controllable way to generate mechanical frequency combs, in comparison to traditional approaches using wave-mixing processes25,26. Additionally, the localized wavepackets observed in our experiments are induced by the localized mechanical motion of the microtoroid, rather than an optomechanically induced localized photonic wavepacket (see our analysis in Supplementary Information section V).
Fig. 3: Localized periodic phonon pulses in the cnoidal wave regime and soliton regime.
a–c, Slowly varying envelope of the amplitude of the periodic mechanical motion in the cnoidal wave regime with increasing magnification. d–f, Width (d), period (e) and amplitude (f) of the periodic pulses in the output field versus different frequency detunings of the input field in the cnoidal wave regime. g–i, Width (g), period (h) and amplitude (i) of the periodic pulses in the output field versus different input pump power in the cnoidal wave regime. The pulse width decreases, and period and amplitude increase with increased frequency detuning and pump power in the cnoidal wave regime. j–l, With increased detuning frequency, an eight-soliton pulse (j), a four-soliton pulse (k) and a single-soliton pulse (l) are generated. a.u., arbitrary units.
The distinctive feature of our optomechanical solitons spanning a broad spectral window (as shown in Fig. 2g) indicates exciting opportunities for sensing applications. To show this, we exploit the optomechanical soliton to detect weak low-frequency vibrations of a cantilever tip actively excited by external electric pumping circuits (Fig. 4a). The eigenfrequency of the mechanical cantilever tip is Ωtip = 384.24 kHz, which is far smaller than the eigenfrequency of the mechanical mode of the microtoroid. When the optomechanical resonator is in the periodic regime, it does not resonate with the low-frequency vibration of the tip. Consequently, we cannot observe the response of the optomechanical resonator to the vibrating cantilever tip (Fig. 4c). However, when the optomechanical resonator is in the soliton regime, with a wider power spectrum, more mechanical modes with different frequencies contribute to the response to the vibration of the cantilever tip. Although each mode gives a small contribution to the response to the tip vibration, which is far off resonance of the resonator, the collective contributions of all optomechanical modes are large enough to detect the low-frequency vibration of the cantilever tip (Fig. 4d). Figure 4b shows that the detection efficiency increases with the width of the optomechanical soliton in the frequency domain. When the optomechanical soliton is more strongly localized in the time domain, its width in the frequency domain increases. In this case, more modes of the optomechanical resonator will be involved, and therefore the enhancement will increase (Fig. 4b, inset).
Fig. 4: Detection of a low-frequency vibration of a cantilever tip by an optomechanical soliton.
a, Schematic diagram of the experimental set-up. Inset, top view of the microtoroid and cantilever tip. b, Peak power of the spectrum for detecting the motion of the cantilever tip versus the width of the mechanical soliton in the frequency domain. Inset, the enhancement factor η versus the width of the mechanical soliton in the frequency domain. With the increase of the width of the mechanical soliton in the frequency domain, the cavity becomes more sensitive to the motion of the cantilever tip, owing to the increase of the enhancement factor ηenh. c, Power spectrum for detecting the mechanical motion of the cantilever tip when the mechanical mode of the microtoroid is in the periodic regime, in which the mechanical motion of the cantilever tip with eigenfrequency Ωtip = 384.24 kHz (far off resonance from the eigenfrequency Ωm (in MHz) of the mechanical mode of the microtoroid) cannot be detected. Inset, power spectrum of the mechanical mode of the microtoroid in the periodic regime. d, Power spectrum for detecting the mechanical motion of the cantilever tip when the mechanical mode of the microtoroid is in the soliton regime, in which the mechanical motion of the cantilever tip is observed. Inset, power spectrum of the microtoroid in the mechanical soliton regime. a.u., arbitrary units.
In summary, we have reported the formation of a mechanical micro-soliton excited by light in an optomechanical microresonator. The stable mechanical soliton pulses result from two competing effects: (i) the dispersion of phonons induced by periodic modulation of the mechanical travelling wave owing to optomechanical interactions at the edge of the microtoroid; and (ii) optomechanically induced mechanical nonlinearity. We have also demonstrated that the shape of the mechanical soliton pulses can be controlled by tuning various system parameters, such as the pump power and the frequency detuning between the pump field and the cavity mode. The ability to localize phonons by optomechanically induced mechanical nonlinearity in nanostructures opens up new avenues for optomechanical applications. Cnoidal wave motion of the microtoroid resonator can be seen as a peculiar radio-frequency comb, and thus it may be useful for radio-frequency standards, clocks, astrocombs, and so on. The stable localized soliton pulse may be of great interest for sensing and transferring information in the radio-frequency regime using nanophotonic structures.
Our system includes a high-Q whispering gallery mode optomechanical microtoroid resonator coupled to a tapered optical-fibre waveguide19,48. In our experiments, a tunable external-cavity laser diode in the 1,550-nm band was amplified by an erbium-doped fibre amplifier before being coupled into a fibre connected to the tapered fibre waveguide. By changing the gap between the resonator and the tapered fibre, we could adjust the portion of the pump power coupled into the resonator. The output of the fibre-coupled resonator was fed into a photodetector that was connected to an oscilloscope, in order to monitor the time-domain behaviour of the transmission spectra, and also to an electrical spectrum analyser to obtain the power spectra.
The datasets generated during and/or analysed in this study are available from the corresponding author upon reasonable request.
Holzwarth, R. et al. Optical frequency synthesizer for precision spectroscopy. Phys. Rev. Lett. 85, 2264–2267 (2000).
Udem, T., Holzwarth, R. & Hänsch, T. W. Optical frequency metrology. Nature 416, 233–237 (2002).
Haus, H. A. & Wong, W. S. Solitons in optical communications. Rev. Mod. Phys. 68, 423–444 (1996).
Kippenberg, T. J., Holzwarth, R. & Diddams, S. A. Microresonator-based optical frequency combs. Science 332, 555–559 (2011).
Herr, T. et al. Temporal solitons in optical microresonators. Nat. Photon. 8, 145–152 (2014).
Kippenberg, T. J., Gaeta, A. L., Lipson, M. & Gorodetsky, M. L. Dissipative Kerr solitons in optical microresonators. Science 361, eaan8083 (2018).
Brasch, V. et al. Photonic chip-based optical frequency comb using soliton Cherenkov radiation. Science 351, 357–360 (2016).
Stern, B., Ji, X., Okawachi, Y., Gaeta, A. L. & Lipson, M. Battery-operated integrated frequency comb generator. Nature 562, 401–405 (2018).
Suh, M.-G., Yang, Q.-F., Yang, K. Y., Yi, X. & Vahala, K. J. Microresonator soliton dual-comb spectroscopy. Science 354, 600–603 (2016).
Yi, X., Yang, Q.-F., Yang, K. Y., Suh, M.-G. & Vahala, K. J. Soliton frequency comb at microwave rates in a high-Q silica microresonator. Optica 2, 1078–1085 (2015).
Shao, L. et al. Microwave-to-optical conversion using lithium niobate thin-film acoustic resonators. Optica 6, 1498–1505 (2019).
Forsch, M. et al. Microwave-to-optics conversion using a mechanical oscillator in its quantum ground state. Nat. Phys. 16, 69–74 (2020).
Yamazaki, R. et al. Radio-frequency-to-optical conversion using acoustic and optical whispering-gallery modes. Phys. Rev. A 101, 053839 (2020).
Aspelmeyer, M., Kippenberg, T. J. & Marquardt, F. Cavity optomechanics. Rev. Mod. Phys. 86, 1391–1452 (2014).
Chan, J. et al. Laser cooling of a nanomechanical oscillator into its quantum ground state. Nature 478, 89–92 (2011).
LIGO Scientific Collaboration and Virgo Collaboration. Observation of gravitational waves from a binary black hole merger. Phys. Rev. Lett. 116, 061102 (2016).
Grudinin, I. S., Lee, H., Painter, O. & Vahala, K. J. Phonon laser action in a tunable two-level system. Phys. Rev. Lett. 104, 083901 (2010).
Jing, H. et al. PT-symmetric phonon laser. Phys. Rev. Lett. 113, 053604 (2014).
Zhang, J. et al. A phonon laser operating at an exceptional point. Nat. Photon. 12, 479–484 (2018).
Carmon, T., Cross, M. C. & Vahala, K. J. Chaotic quivering of micron-scaled onchip resonators excited by centrifugal optical pressure. Phys. Rev. Lett. 98, 167203 (2007).
Monifi, F. et al. Optomechanically induced stochastic resonance and chaos transfer between optical fields. Nat. Photon. 10, 399–405 (2016).
Gan, J.-H., Xiong, H., Si, L.-G., Lü, X.-Y. & Wu, Y. Solitons in optomechanical arrays. Opt. Lett. 41, 2676–2679 (2016).
Xiong, H., Gan, J. H. & Wu, Y. Kuznetsov–Ma soliton dynamics based on the mechanical effect of light. Phys. Rev. Lett. 119, 153901 (2017).
Xiong, H. & Wu, Y. Optomechanical Akhmediev breathers. Laser Photon. Rev. 12, 1700305 (2018).
Ganesan, A., Do, C. & Seshia, A. Phononic frequency comb via intrinsic three-wave mixing. Phys. Rev. Lett. 118, 033903 (2017).
Butsch, A., Koehler, J. R., Noskov, R. E. & Russell, P. St. J. CW-pumped single-pass frequency comb generation by resonant optomechanical nonlinearity in dual-nanoweb fiber. Optica 1, 158–163 (2014).
Savchenkov, A. A., Matsko, A. B., Ilchenko, V. S., Seidel, D. & Maleki, L. Surface acoustic wave opto-mechanical oscillator and frequency comb generator. Opt. Lett. 36, 3338–3340 (2011).
Miri, M.-A., D'Aguanno, G. & Alù, A. Optomechanical frequency combs. New J. Phys. 20, 043013 (2018).
Del'Haye, P. et al. Optical frequency comb generation from a monolithic microresonator. Nature 450, 1214–1217 (2007).
Savchenkov, A. A. et al. Tunable optical frequency comb with a crystalline whispering gallery mode resonator. Phys. Rev. Lett. 101, 093902 (2008).
Rueda, A., Sedlmeir, F., Kumari, M., Leuchs, G. & Schwefel, H. G. L. Resonant electro-optic frequency comb. Nature 568, 378–381 (2019); correction 569, E11 (2019).
Zhang, M. et al. Broadband electro-optic frequency comb generation in a lithium niobate microring resonator. Nature 568, 373–377 (2019).
Li, Q. et al. Stably accessing octave-spanning microresonator frequency combs in the soliton regime. Optica 4, 193–203 (2017).
CAS PubMed PubMed Central ADS Google Scholar
Cao, L. S., Qi, D. X., Peng, R. W., Wang, M. & Schmelcher, P. Phononic frequency combs through nonlinear resonances. Phys. Rev. Lett. 112, 075505 (2014).
Czaplewski, D. A. et al. Bifurcation generated mechanical frequency comb. Phys. Rev. Lett. 121, 244302 (2018).
Hao, H. Y. & Maris, H. J. Experiments with acoustic solitons in crystalline solids. Phys. Rev. B 64, 064302 (2001).
Hereman, W. Shallow water waves and solitary waves. In Encyclopedia of Complexity and Systems Science (ed. Meyers, R. A.) 480 (Springer, 2009); https://doi.org/10.1007/978-0-387-30440-3_480.
Barland, S. et al. Temporal localized structures in optical resonators. Adv. Phys. X 2, 496–517 (2017).
Lugiato, L., Prati, F. & Brambilla, M. Nonlinear Optical Systems (Cambridge Univ. Press, 2015).
Jang, J. K., Erkintalo, M., Murdoch, S. G. & Coen, S. Ultraweak long-range interactions of solitons observed over astronomical distances. Nat. Photon. 7, 657–663 (2013).
Barland, S. et al. Cavity solitons as pixels in semiconductor microcavities. Nature 419, 699–702 (2002).
Leo, F. et al. Temporal cavity solitons in one-dimensional Kerr media as bits in an all-optical buffer. Nat. Photon. 4, 471–476 (2010).
Grelu, P. & Akhmediev, N. Dissipative solitons for mode-locked lasers. Nat. Photon. 6, 84–92 (2012).
Korteweg, D. J. & de Vries, G. On the change of form of long waves advancing in a rectangular canal, and on a new type of long stationary waves. Philos. Mag. 39, 422–443 (1895).
MathSciNet MATH Google Scholar
Boyd, J. P. The double cnoidal wave of the Korteweg–de Vries equation: an overview. J. Math. Phys. 25, 3390–3401 (1984).
MathSciNet MATH ADS Google Scholar
Nayanov, V. I. Surface acoustic cnoidal waves and solitons in a LiNbO3-(SiO film) structure. JETP Lett. 44, 314–317 (1986); translated from Pis'ma Zh. Eksp. Teor. Fiz. 44, 245–247 (1986).
Fiore, V. et al. Storing optical information as a mechanical excitation in a silica optomechanical resonator. Phys. Rev. Lett. 107, 133601 (2011).
Carmon, T., Rokhsari, H., Yang, L., Kippenberg, T. J. & Vahala, K. J. Temporal behavior of radiation-pressure-induced vibrations of an optical microcavity phonon mode. Phys. Rev. Lett. 94, 223902 (2005).
The project is supported by the NSF grant number EFMA1641109 and ARO grant numbers W911NF1710189 and W911NF1210026. J.Z. is supported by the NSFC under grant numbers 61622306 and 11674194. Y.-x.L. is supported by the NSFC under grant number 61025022. Y.-x.L. and J.Z. are supported by the National Basic Research Program of China (973 Program) under grant number 2014CB921401, the Tsinghua University Initiative Scientific Research Program, and the Tsinghua National Laboratory for Information Science and Technology (TNList) Cross-discipline Foundation. L.L. is supported by the NSFC under grant number 61925307. S.K. and A.A. are supported by the Office of Naval Research and the Air Force Office of Scientific Research.
Department of Electrical and Systems Engineering, Washington University, St. Louis, MO, USA
Jing Zhang, Bo Peng, Faraz Monifi, Xuefeng Jiang, Yihang Li & Lan Yang
Department of Automation, Tsinghua University, Beijing, P. R. China
Jing Zhang
Photonics Initiative, Advanced Science Research Center, City University of New York, New York, NY, USA
Seunghwi Kim & Andrea Alù
State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, P. R. China
Peng Yu & Lianqing Liu
Institute of Microelectronics, Tsinghua University, Beijing, P. R. China
Yu-xi Liu
Physics Program, Graduate Center, City University of New York, New York, NY, USA
Andrea Alù
J.Z., B.P., F.M. and L.Y. conceived the idea. L.Y. designed the experiments. J.Z. performed the experiments and processed the data with the help of X.J., Y.L., P.Y. and L.L. J.Z. and S.K. provided theoretical analysis under the guidance of Y.-x.L. and A.A. J.Z. and L.Y. wrote the manuscript with contributions from all authors. L.Y. supervised the project.
Correspondence to Lan Yang.
Peer review information Nature thanks the anonymous reviewers for their contribution to the peer review of this work. Peer reviewer reports are available.
This file contains supplementary text, supplementary equations, supplementary sections S1–S8 and supplementary references.
Peer Review File
Zhang, J., Peng, B., Kim, S. et al. Optomechanical dissipative solitons. Nature 600, 75–80 (2021). https://doi.org/10.1038/s41586-021-04012-1
Issue Date: 02 December 2021
| CommonCrawl |
Ole Peder Arvesen
Ole Peder Arvesen (27 March 1895 – 23 January 1991) was a Norwegian engineer and mathematician.
Ole Peder Arvesen, c. 1935
Born: 27 March 1895, Fredrikstad, Norway
Died: 23 January 1991 (aged 95)
Education: Engineering and mathematics
Occupation: Professor of descriptive geometry
Employer: Norwegian Institute of Technology
Organizations: Student Society in Trondheim; Royal Norwegian Society of Sciences and Letters
Awards: Order of St. Olav (1965)
Arvesen was born in Fredrikstad. He was professor of descriptive geometry at the Norwegian Institute of Technology from 1938 to 1965. He served as secretary general of the Royal Norwegian Society of Sciences and Letters from 1950 to 1966, having been a fellow since 1934, and was also a fellow of the Norwegian Academy of Technological Sciences. Among his publications are Under Duskens billedbok (Under Dusken's picture book) from 1928, the textbook Innføring i nomografi (Introduction to nomography) from 1932, Mennesker og matematikere (People and mathematicians) from 1940, Glimt av den store karikatur (Glimpse of the great caricature) from 1941, and the memoir Men bare om løst og fast (But just about this and that) from 1976. He was decorated Knight, First Class of the Order of St. Olav in 1965.
A portrait of Arvesen, painted by Agnes Hiorth, is located at the Student Society in Trondheim, where he was an active participant over many years.[1][2]
References
1. Johnson, Dag. "Ole Peder Arvesen". In Helle, Knut (ed.). Norsk biografisk leksikon (in Norwegian). Oslo: Kunnskapsforlaget. Retrieved 26 April 2014.
2. Godal, Anne Marit (ed.). "Ole Peder Arvesen". Store norske leksikon (in Norwegian). Oslo: Norsk nettleksikon. Retrieved 26 April 2014.
| Wikipedia |
Post-hoc tests after Kruskal-Wallis: Dunn's test or Bonferroni corrected Mann-Whitney tests?
I have some non-Gaussian distributed variable and I need to check if there are significant differences between the values of this variable in 5 different groups.
I have performed Kruskal-Wallis one-way analysis of variance (which came up significant) and after that I had to check which groups were significantly different. Since the groups are kind of sorted (the values of the variable in the first group are supposed to be lower than the values of the variable in the second group which are supposed to be lower than the values of the variable in the third group and so on) I only performed 4 tests:
Group 1 vs Group 2
Group 2 vs Group 3
Group 3 vs Group 4
Group 4 vs Group 5
I have performed this analysis with two different methods. I started by using Dunn's Multiple Comparison Test but nothing came up significant. On the other hand if I use Mann-Whitney test and correct for the number of tests (4) using Bonferroni, 3 tests come up significant.
What does it mean? Which results should I trust?
hypothesis-testing post-hoc wilcoxon-mann-whitney-test kruskal-wallis-test dunn-test
RossellaRossella
$\begingroup$ Note that if you a priori expect Group 1 values to be the lowest and Group 5 the highest, then comparing Group 1 with Group 5 will have the highest power to detect a difference. $\endgroup$
– amoeba
$\begingroup$ On the Bonferroni correction, you must divide the p value by the number of groups, not the number of tests you performed. $\endgroup$
– Caramba
$\begingroup$ If your alternative is ordered, it would seem better to use a test designed for that situation. $\endgroup$
– Glen_b
$\begingroup$ @Caramba that is incorrect. If you have four groups and do all paired comparisons, you adjust for all six comparisons. $\endgroup$
You should use a proper post hoc pairwise test like Dunn's test.*
If one proceeds by moving from a rejection of Kruskal-Wallis to performing ordinary pair-wise rank sum tests (with or without multiple comparison adjustments), one runs into two problems:
the ranks that the pair-wise rank sum tests use are not the ranks used by the Kruskal-Wallis test (i.e. you are, in effect, pretending to perform post hoc tests, but are actually using different data than was used in the Kruskal-Wallis test to do so); and
ordinary pairwise rank sum tests do not preserve the pooled variance implied by the Kruskal-Wallis null hypothesis, whereas Dunn's test does.
Of course, as with any omnibus test (e.g., ANOVA, Cochran's $Q$, etc.), post hoc tests following rejection of a Kruskal-Wallis test which have been adjusted for multiple comparisons may fail to reject all pairwise tests for a given family-wise error rate or given false discovery rate corresponding to a given $\alpha$ for the omnibus test.
* Dunn's test is implemented in Stata in the dunntest package (within Stata type net describe dunntest, from(https://alexisdinno.com/stata)), and in R in the dunn.test package. Caveat: there are a few less well-known post hoc pair-wise tests to follow a rejected Kruskal-Wallis, including Conover-Iman (like Dunn, but based on the t distribution, rather than the z distribution, and strictly more powerful as a post hoc test) which is implemented for Stata in the conovertest package (within Stata type net describe conovertest, from(https://alexisdinno.com/stata)), and for R in the conover.test package, and the Dwass-Steel-Critchlow-Fligner tests.
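As a concrete illustration of the two approaches being compared in this question, here is a minimal Python sketch (the five groups are synthetic placeholders): it runs the Kruskal-Wallis omnibus test with SciPy and then the four adjacent pairwise Mann-Whitney tests with a hand-applied Bonferroni correction. Dunn's test itself is not in SciPy; it would come from the R dunn.test package mentioned above or a third-party Python port such as scikit-posthocs, if available.

```python
# Minimal sketch: Kruskal-Wallis omnibus test followed by
# Bonferroni-corrected pairwise Mann-Whitney tests on five ordered groups.
# The data below are synthetic placeholders; replace them with your groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
groups = {f"G{i}": rng.normal(loc=0.5 * i, scale=1.0, size=30) for i in range(1, 6)}

# Omnibus Kruskal-Wallis test across all five groups
H, p_kw = stats.kruskal(*groups.values())
print(f"Kruskal-Wallis: H = {H:.2f}, p = {p_kw:.4f}")

# Pairwise Mann-Whitney tests between adjacent (ordered) groups,
# with a Bonferroni correction for the number of tests actually performed
pairs = [("G1", "G2"), ("G2", "G3"), ("G3", "G4"), ("G4", "G5")]
alpha = 0.05
for a, b in pairs:
    U, p = stats.mannwhitneyu(groups[a], groups[b], alternative="two-sided")
    p_adj = min(p * len(pairs), 1.0)   # Bonferroni adjustment
    print(f"{a} vs {b}: U = {U:.1f}, adjusted p = {p_adj:.4f}, "
          f"significant = {p_adj < alpha}")
```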
AlexisAlexis
The classification of multiplicity-free plethysms of Schur functions
by Christine Bessenrodt, Chris Bowman and Rowena Paget PDF
We classify and construct all multiplicity-free plethystic products of Schur functions. We also compute many new (infinite) families of plethysm coefficients, with particular emphasis on those near maximal in the dominance ordering and those of small Durfee size.
Christine Bessenrodt
Affiliation: Institut für Algebra, Zahlentheorie und Diskrete Mathematik, Leibniz Universität Hannover, 30167 Hannover, Germany
MR Author ID: 36045
Email: [email protected]
Chris Bowman
Affiliation: Department of Mathematics, University of York, Heslington, YO10 5DD, United Kingdom
MR Author ID: 922280
Email: [email protected]
Rowena Paget
Affiliation: School of Mathematics, Statistics and Actuarial Science, University of Kent, Canterbury CT2 7NF, United Kingdom
Email: [email protected]
Received by editor(s): April 18, 2020
Received by editor(s) in revised form: January 4, 2022
Published electronically: May 4, 2022
Additional Notes: The second author would like to thank the Alexander von Humboldt Foundation and EPSRC fellowship grant EP/V00090X/1 for financial support and the Leibniz Universität Hannover for their ongoing hospitality
MSC (2020): Primary 05E05, 20C30, 20C15
DOI: https://doi.org/10.1090/tran/8642 | CommonCrawl |
\begin{definition}[Definition:Inner Product Norm]
Let $\Bbb F$ be a subfield of $\C$.
Let $\struct {V, \innerprod \cdot \cdot}$ be an inner product space over $\Bbb F$.
Then the '''inner product norm''' on $V$ is the mapping $\norm \cdot : V \to \R_{\ge 0}$ given by:
:$\norm x = \sqrt {\innerprod x x}$
for each $x \in V$.
\end{definition} | ProofWiki |
Spatiotemporal characteristics of GNSS-derived precipitable water vapor during heavy rainfall events in Guilin, China
Liangke Huang1,2,
Zhixiang Mo ORCID: orcid.org/0000-0002-9400-24071,2,
Shaofeng Xie1,2,
Lilong Liu1,2,
Jun Chen3,
Chuanli Kang1,2 &
Shitai Wang1,2
Satellite Navigation volume 2, Article number: 13 (2021)
Precipitable Water Vapor (PWV), as an important indicator of atmospheric water vapor, can be derived from Global Navigation Satellite System (GNSS) observations with the advantages of high precision and all-weather capacity. GNSS-derived PWV with a high spatiotemporal resolution has become an important source of observations in meteorology, particularly for severe weather conditions, for water vapor is not well sampled in the current meteorological observing systems. In this study, an empirical atmospheric weighted mean temperature (Tm) model for Guilin is established using the radiosonde data from 2012 to 2017. Then, the observations at 11 GNSS stations in Guilin are used to investigate the spatiotemporal features of GNSS-derived PWV under the heavy rainfalls from June to July 2017. The results show that the new Tm model in Guilin has better performance with the mean bias and Root Mean Square (RMS) of − 0.51 and 2.12 K, respectively, compared with other widely used models. Moreover, the GNSS PWV estimates are validated with the data at Guilin radiosonde station. Good agreements are found between GNSS-derived PWV and radiosonde-derived PWV with the mean bias and RMS of − 0.9 and 3.53 mm, respectively. Finally, an investigation on the spatiotemporal characteristics of GNSS PWV during heavy rainfalls in Guilin is performed. It is shown that variations of PWV retrieved from GNSS have a direct relationship with the in situ rainfall measurements, and the PWV increases sharply before the arrival of a heavy rainfall and decreases to a stable state after the cease of the rainfall. It also reveals the moisture variation in several regions of Guilin during a heavy rainfall, which is significant for the monitoring of rainfalls and weather forecast.
Atmospheric water vapor, as a key factor in regulating Earth's climate, plays an important role in global atmospheric radiation, water cycling, and energy balance, and its variation is the main driving force of climate change (Wang et al., 2007). Precipitable Water Vapor (PWV) refers to the total water vapor content of a vertical column per unit area in atmosphere (King et al., 1992). Understanding the spatiotemporal variation of water vapor is of vital significance for the evolution of weather and climate prediction. Various techniques have been developed to acquire the PWV content over the past few decades. Traditional methods mainly include radiosondes, microwave radiometers, and satellite remote sensing. Among them, radiosonde is the primary in situ measurement to obtain PWV because of its high accuracy. However, it is hard to meet the demands of monitoring and forecasting extreme weather at the mesoscale and microscale due to its high cost and low spatiotemporal resolution. With the development of the Global Positioning System (GPS), GLObal NAvigation Satellite System (GLONASS), Galileo navigation satellite system (Galileo), BeiDou Navigation Satellite System (BDS), and other satellite navigation systems, a reliable technique to retrieve PWV with the Global Navigation Satellite System (GNSS), called ground-based GNSS, has been proposed and is fully operational (Bevis et al., 1992; Hein, 2020; Li et al., 2015a; Yang et al., 2020). Ground-based GNSS can detect satellite signal propagation delays in atmosphere and further retrieve the PWV, which has attracted a widespread attention due to its advantages of high temporal resolution, high precision, low-cost, and resistance to all weather conditions (Elgered et al., 1997; Emardson et al., 1998; Vaquero-Martínez et al., 2017).
In the ground-based GNSS technique, the retrieval of PWV from GNSS observations entails the acquisition of the Zenith Wet Delay (ZWD) and the conversion coefficient (Π). GNSS signals are affected by troposphere refraction when propagating through the neutral atmosphere, thus causing a tropospheric delay, which is expressed as the Zenith Total Delay (ZTD). The ZTD consists of two components: the Zenith Hydrostatic Delay (ZHD), which is caused by the dry gases of the troposphere, and the ZWD, which stems from the water vapor. The ZTD can be accurately calculated from GNSS observations using high-precision GNSS processing software. The ZHD can be estimated with tropospheric empirical models. Then, the ZWD can be obtained by subtracting ZHD from the ZTD. To convert the ZWD to the PWV, Bevis et al. (1992) proposed a formula to calculate the atmospheric weighted mean temperature (Tm). Tm, as one parameter to compute Π, can be exactly calculated from the vertical profiles of atmospheric temperature and humidity based on numerical integration or estimated from an empirical Tm model with surface temperature (Ts) (Bevis et al., 1994; Davis et al., 1985). Many studies have mentioned that the relative error of Π basically comes from Tm, and the accuracy of Tm is regarded as one of the largest error sources in PWV calculation (Huang et al., 2019a; Liu et al., 2012; Wang et al., 2005). Therefore, precisely estimating Tm is of great significance to improve the accuracy of the GNSS-based PWV retrieval. Generally, it is not easy to access the Tm in some regions because of the lack of atmospheric profiles. Thus, an accurate empirical Tm model is needed to satisfy such demands and provide convenience for users. Bevis et al. (1994) found that Tm and Ts have a strong linear correlation and proposed an empirical formula (Tm = 70.2 + 0.72Ts) using the 8718 radiosonde profiles in North America that is commonly used to estimate Tm. Some regional empirical Tm models with higher accuracy than the Bevis formula have been formulated. Li et al. (1999) constructed a linear regression equation of Tm and Ts in eastern China (Tm = 44.05 + 0.81Ts) by using the radiosonde data of eastern China, which has a good performance in that region. When obtaining an accurate Tm parameter, the error of GNSS-derived PWV can be reduced. In some previous studies, the relative accuracy of the GNSS-derived PWV has been proven to be reliable with the root-mean-square values of 1–3 mm (Gui et al., 2017; Zhao et al., 2019a). Therefore, the GNSS-based PWV has been extensively used in meteorology applications, especially in monitoring and forecasting extreme weather conditions (Calori et al., 2016; Li et al., 2015b; Liu et al., 2019b; Wang et al., 2013; Zhao et al., 2018).
In recent years, a great number of studies have been conducted in the applications of the GNSS retrieving water vapor in the analysis of extreme rainfall conditions. Zhang et al. (2015) investigated the signature of GPS-derived PWV with two severe weather case studies. Shi et al. (2015) analyzed a series of rainfall events in Wuhan, China using real-time GPS Precise Point Positioning (PPP) based PWV estimation. Chen et al. (2017) used the Hong Kong GPS network to study the water vapor variability during three heavy rainfall events in Hong Kong. Yao et al. (2017) showed that GNSS-derived PWV has a correlation with rainfall. They also investigated three different rainfall events in Wuhan and Zhejiang regions. Manandhar et al. (2018) studied the trend of PWV values derived at a GPS station in Singapore with diurnal and seasonal variations by separating several rainy and clear days. Barindelli et al. (2018) analyzed the temporal variations in the PWV derived from a regional GNSS network during two heavy rainfall events. Moreover, they found the characteristics corresponding to the intense precipitation events. Chen et al. (2018) constructed a PWV map with the data at Hunan GNSS stations and synoptic observations to reveal the water vapor advection, transportation, and convergence during a heavy rainfall. These studies indicate the feasibility and practicality of GNSS-derived PWV to monitor rainfall events. Although the analysis of PWV characteristics during the rainfall is performed, most studies are limited in providing the detailed spatiotemporal evolution with GNSS-derived PWV during heavy rainfall events.
Guilin, which has numerous mountains and rivers, is a well-developed tourist city and the only city in China designated an international tourism city. It lies in a rain-prone, low-latitude area with an average annual precipitation of more than 1500 mm. In June and July 2017, several heavy rainfalls occurred in Guilin, inducing continuous floods, landslides, and other disasters. Moreover, Guilin has only one radiosonde station, so the weather changes over the whole area are difficult to monitor. Using the GNSS technique to capture the signature of PWV with high spatiotemporal resolution is therefore of great significance: the extreme weather in Guilin can be evaluated and forecast by studying the spatial and temporal distribution and evolution of water vapor. In this work, we establish an empirical Tm model using radiosonde records for GNSS PWV calculation. The empirical Tm model is then compared with three other commonly used Tm models to verify its reliability and availability. Next, the hourly GNSS PWV is obtained using the observations at 11 GNSS stations, and its accuracy is evaluated by comparison with radiosonde-derived PWV. Finally, the relationship between GNSS PWV and rainfall is analyzed, and the spatiotemporal characteristics of GNSS PWV under heavy rainfalls in Guilin are investigated.
Study area and data source
Guilin is situated in southern China with a territory of approximately 27,800 km2. It has a subtropical humid monsoon climate with four distinctive seasons. Moreover, Guilin has plenty of rainfall. The average annual rainfall varies from 1900 to 2000 mm, which is mainly concentrated from April to July, and the longest continuous precipitation period is 30 days. Heavy showers frequently occur in summer, thus leading to serious threats to the safety of people and property. Using GNSS to monitor the highly variable characteristics of water vapor has a great potential in improving the ability of extreme weather forecasting for Guilin. In this study, the data from the Guilin radiosonde station, 12 ground meteorological stations, and 11 GNSS stations and the ERA5 reanalysis dataset in 2017 are used to analyze the spatiotemporal characteristics of GNSS-derived water vapor during heavy rainfall events in Guilin. The corresponding stations are presented in Fig. 1. The details of GNSS and radiosonde stations, including the station name, geodetic coordinates, elevations, and distance between each GNSS site and the radiosonde station are shown in Table 1.
Geographic distribution of the 11 GNSS sites (red triangles), 1 radiosonde station (yellow pentagram), and 12 ground meteorological (MET) stations (blue circles) in Guilin, China
Table 1 Details of the information on GNSS and radiosonde stations in Guilin
Radiosonde profiles
The radiosonde is one of the most commonly used instruments for measuring high-quality meteorological parameters. In this study, radiosonde observations are used to establish the empirical Tm model for Guilin and serve as the reference values for evaluating the GNSS PWV results. Radiosonde profiles contain the atmospheric parameters in the vertical direction collected with a sounding balloon at 00:00 and 12:00 Coordinated Universal Time (UTC) every day, including surface parameters, such as surface temperature Ts and pressure (Ps), and pressure level parameters, such as absolute temperature (T), relative humidity (RH) and geopotential height (H) at every pressure level. From the meteorological parameters at different levels, Tm can be calculated using the numerical integration of radiosonde measurements, which can be expressed as follows (Bolton, 1980; Wang et al., 2016):
$$T_{{\text{m}}} = \frac{{\mathop \sum \nolimits_{1}^{n} \frac{{e_{i} }}{{T_{i} }}{\Delta }h_{i} }}{{\mathop \sum \nolimits_{1}^{n} \frac{{e_{i} }}{{T_{i}^{2} }}{\Delta }h_{i} }}$$
$$e = \frac{{{\text{RH}} \cdot e_{{\text{s}}} }}{100}$$
$$e_{{\text{s}}} = 6.112 \times 10^{{\left( {\frac{{7.5 \times T_{{\text{d}}} }}{{T_{{\text{d}}} + 237.3}}} \right)}}$$
where Δhi represents the height difference of ith layer (unit: m), n represents the number of layers, ei and Ti are the average water vapor pressure (unit: hPa) and temperature (unit: K) at the ith layer of the atmosphere, respectively, es denotes the saturated vapor pressure (unit: hPa) and Td the atmospheric temperature in Celsius (T = Td + 273.15).
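The numerical integration of Eqs. (1)–(3) can be sketched in a few lines of Python; the function name and the short profile below are illustrative placeholders rather than a real sounding.

```python
import numpy as np

def weighted_mean_temperature(heights_m, temps_K, rel_humidity):
    """Numerically integrate Tm (Eq. 1) from a radiosonde profile.

    heights_m    : geopotential heights of the pressure levels (m)
    temps_K      : absolute temperature at each level (K)
    rel_humidity : relative humidity at each level (%)
    """
    t_celsius = temps_K - 273.15
    e_s = 6.112 * 10.0 ** (7.5 * t_celsius / (t_celsius + 237.3))  # Eq. (3)
    e = rel_humidity * e_s / 100.0                                 # Eq. (2)

    # Layer means and thicknesses between adjacent levels
    e_mean = 0.5 * (e[:-1] + e[1:])
    t_mean = 0.5 * (temps_K[:-1] + temps_K[1:])
    dh = np.diff(heights_m)

    return np.sum(e_mean / t_mean * dh) / np.sum(e_mean / t_mean ** 2 * dh)

# Placeholder profile (a real sounding has many more levels)
h = np.array([160.0, 1500.0, 3100.0, 5800.0, 9600.0])
T = np.array([300.0, 291.0, 281.0, 265.0, 237.0])
RH = np.array([85.0, 70.0, 55.0, 40.0, 20.0])
print(f"Tm = {weighted_mean_temperature(h, T, RH):.2f} K")
```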
ERA5 reanalysis dataset
ERA5 is a newly released fifth-generation European Centre for Medium-Range Weather Forecasts (ECMWF) global climate reanalysis dataset. The ERA5 dataset is produced by using the 4D-Var data assimilation scheme in the CY41R2 model of the ECMWF's Integrated Forecast System (IFS), which is expected to gradually replace the prior ERA-Interim dataset. ERA5 provides hourly atmospheric reanalysis data at 37 pressure levels from 1000 to 1 hPa, with a horizontal spatial resolution of 0.25° × 0.25°. Surface parameter products are also available (such as Ts, Ps and PWV). In addition, ERA5 has been demonstrated to have a good performance in PWV estimates (Wang et al., 2020; Zhang et al., 2019b). In this study, the gridded Tm series derived from ERA5 profiles (37 levels) are used for the evaluation of Tm models over Guilin, and the PWV series derived from ERA5 surface products are used to compare with the spatial evolution of GNSS PWV.
Surface meteorological observations
The hourly rainfall data, surface pressure, and surface temperature observations are provided by the China Meteorological Administration (CMA) at 12 MET stations from June to July 2017 over Guilin. The meteorological sensors at these stations are well maintained and regularly adjusted every year by the CMA. As the GNSS and meteorological stations are not collocated (Fig. 1) and most of the GNSS stations are not equipped with meteorological sensors, the Ts and Ps at a GNSS site are calculated with their corresponding values at the nearest MET station for GNSS PWV retrieval using the equations below (Suparta & Rahman, 2016):
$$P_{{{\text{MSL}}}} = \frac{{P_{{{\text{MET}}}} }}{{(1 - 0.0000226 \, \times \, h_{{{\text{MET}}}} )^{5.225} }}$$
$$T_{{{\text{MSL}}}} = T_{{{\text{MET}}}} + 273.16 + (0.0065 \, \times \, h_{{{\text{MET}}}} )$$
$$P_{{{\text{GNSS}}}} = P_{{{\text{MSL}}}} \, \times \, (1 - 0.0000226 \, \times \, h_{{{\text{GNSS}}}} )^{5.225}$$
$$T_{{{\text{GNSS}}}} = T_{{{\text{MSL}}}} - 273.16 - (0.0065 \, \times \, h_{{{\text{GNSS}}}} )$$
where TMSL (unit: K) and PMSL (unit: hPa) indicate the mean sea level temperature and mean sea level pressure, respectively, TMET and PMET are the temperature (unit: °C) and surface pressure (unit: hPa) at the MET station, TGNSS and PGNSS are the corresponding values at the GNSS station, and hMET and hGNSS represent the heights of the MET and GNSS stations, respectively (unit: m).
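A minimal sketch of Eqs. (4)–(7) is given below, reducing the MET-station observations to mean sea level and then lifting them to the GNSS antenna height; the station heights are in metres and all numbers are placeholders.

```python
def met_to_gnss(p_met_hPa, t_met_C, h_met_m, h_gnss_m):
    """Transfer surface pressure and temperature from a MET station
    to a nearby GNSS station via mean sea level (Eqs. 4-7)."""
    # Reduce to mean sea level
    p_msl = p_met_hPa / (1.0 - 0.0000226 * h_met_m) ** 5.225
    t_msl = t_met_C + 273.16 + 0.0065 * h_met_m
    # Lift back to the GNSS station height
    p_gnss = p_msl * (1.0 - 0.0000226 * h_gnss_m) ** 5.225
    t_gnss = t_msl - 273.16 - 0.0065 * h_gnss_m
    return p_gnss, t_gnss

# Placeholder heights: MET station at 170 m, GNSS station at 250 m
print(met_to_gnss(995.0, 28.5, 170.0, 250.0))
```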
GNSS observations
The GNSS observations are collected from the Guilin GNSS network from June to July 2017 (Fig. 1). Currently, the International GNSS Service (IGS) center provides the ZTD product with a precision of better than 5 mm (Byun & Bar-Sever, 2009; Huang et al., 2019b), which can be used as the reference values for assessing other ZTD results. In this study, the correlation between the tropospheric parameters across the Guilin GNSS network is reduced by introducing six IGS stations (BJFS, SHAO, CHAN, LHAZ, TWTF, and URUM), and the GNSS data are processed with the GAMIT/GLOBK software (a high-precision GNSS data processing software). The ZTD is estimated with a half-hour interval. To confirm the reliability of ZTDs, the ZTDs obtained from the IGS center (IGS-ZTD) are used to evaluate the ZTDs derived from the GNSS observations (GNSS-ZTD) at stations BJFS and TWTF from June to July 2017, which is from Day Of Year (DOY) 152–213. The ZTD differences with their deviations from the mean value larger than three times the STandard Deviation (STD) are removed as gross errors. The results are shown in Fig. 2.
ZTDs from GNSS observations and IGS center at stations BJFS and TWTF from June to July 2017. a ZTD time series for BJFS and b for TWTF, c correlation plot between IGS-ZTD and GNSS-ZTD for BJFS and d for TWTF
From Fig. 2, one can see that the IGS-ZTD and the GNSS-ZTD show good consistency at stations BJFS and TWTF. Compared with the IGS-ZTD, the correlation coefficient (R) and the Root Mean Square (RMS) error of the GNSS-ZTD are 0.998 and 5.3 mm at station BJFS and 0.990 and 5.6 mm at station TWTF, respectively. The results indicate that the ZTDs derived from the GNSS observations are reliable and accurate for GNSS PWV retrieval.
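The screening and validation statistics described above can be reproduced with a short script such as the following; the two ZTD series are synthetic placeholders standing in for the half-hourly GNSS-ZTD and IGS-ZTD values.

```python
import numpy as np

def compare_ztd(gnss_ztd_mm, igs_ztd_mm):
    """Screen gross errors (3-sigma rule on the differences) and
    return bias, RMS and correlation coefficient."""
    diff = gnss_ztd_mm - igs_ztd_mm
    keep = np.abs(diff - diff.mean()) <= 3.0 * diff.std()
    d = diff[keep]
    bias = d.mean()
    rms = np.sqrt(np.mean(d ** 2))
    r = np.corrcoef(gnss_ztd_mm[keep], igs_ztd_mm[keep])[0, 1]
    return bias, rms, r

# Synthetic placeholder series (mm); real input is the half-hourly ZTD estimates
rng = np.random.default_rng(1)
igs = 2450.0 + 30.0 * np.sin(np.linspace(0.0, 6.0, 500)) + rng.normal(0.0, 3.0, 500)
gnss = igs + rng.normal(0.0, 5.0, 500)
print(compare_ztd(gnss, igs))
```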
Retrieval of PWV from GNSS observations
The basic formula for retrieving PWV from GNSS observations is as follows (Askne & Nordius, 1987):
$${\text{PWV}} = \varPi \cdot {\text{ZWD}}$$
In this study, the ZTD is derived from the Guilin GNSS observations processed with the GAMIT/GLOBK software. After retrieving the ZTD, the ZWD is calculated by subtracting the ZHD from the ZTD, where the ZHD can be estimated with the Saastamoinen model (Saastamoinen, 1972):
$${\text{ZHD}} = \frac{{2.2767 \times P_{{\text{s}}} }}{{1 - 0.00266\cos 2\varphi - 0.00028 \times h_{{\text{o}}} }}$$
where φ is the latitude of station (unit: radians), ho is the height of the station above sea level (unit: km), and Ps is the surface pressure of the station (unit: hPa), interpolated from the nearby MET stations. With Tm, Π can be calculated as follows:
$$\varPi = \frac{{10^{6} }}{{\rho_{w} R_{v} [(k_{3} /T_{{\text{m}}} + k_{2}^{^{\prime}} )]}}$$
where ρw is the density of liquid water (1 × 10³ kg/m³), Rv is the specific gas constant for water vapor (461.495 J kg⁻¹ K⁻¹), and k'2 (22.13 ± 2.2 K/hPa) and k3 ((3.739 ± 0.012) × 10⁵ K²/hPa) are the atmospheric physical constants. Given that the humidity and temperature profiles are usually hard to obtain, a regression analysis method, based on the linear correlation between Tm and Ts, is often adopted to determine the Tm:
$$T_{{\text{m}}} = a + b \cdot T_{{\text{s}}}$$
where a and b indicate the coefficients of regression equation. These coefficients are estimated using the least squares adjustment with the years of radiosonde data, and the formulas are as follows:
$$b = \frac{{\sum\nolimits_{i = 1}^{n} {(T_{{{\text{m}}_{i} }} - \overline{{T_{{\text{m}}} }} )(T_{{{\text{s}}_{i} }} - \overline{{T_{{\text{s}}} }} )} }}{{\sum\nolimits_{i = 1}^{n} {(T_{{{\text{s}}_{i} }} - \overline{{T_{{\text{s}}} }} )^{2} } }}$$
$$a = \frac{{\sum\nolimits_{i = 1}^{n} {T_{{{\text{m}}_{i} }} } - b\sum\nolimits_{i = 1}^{n} {T_{{{\text{s}}_{i} }} } }}{n} = \overline{{T_{{\text{m}}} }} - b\overline{{T_{{\text{s}}} }}$$
where \(\overline{{T_{{\text{m}}} }} = \frac{1}{n}\sum\nolimits_{i = 1}^{n} {T_{{{\text{m}}_{i} }} }\), \(\overline{{T_{{\text{s}}} }} = \frac{1}{n}\sum\nolimits_{i = 1}^{n} {T_{{{\text{s}}_{i} }} }\).
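Because Eqs. (11)–(13) are simply an ordinary least-squares line fit, the coefficients can be obtained directly with NumPy, as in the sketch below; the Ts/Tm samples are synthetic placeholders for the radiosonde-derived values.

```python
import numpy as np

def fit_tm_model(ts_K, tm_K):
    """Least-squares estimate of a and b in Tm = a + b * Ts (Eqs. 11-13)."""
    b, a = np.polyfit(ts_K, tm_K, 1)   # np.polyfit returns slope, intercept
    return a, b

# Placeholder samples; in the paper these come from the 2012-2017 radiosonde data
rng = np.random.default_rng(2)
ts = rng.uniform(275.0, 308.0, 2000)
tm = 101.0 + 0.62 * ts + rng.normal(0.0, 2.0, 2000)
a, b = fit_tm_model(ts, tm)
print(f"Tm = {a:.2f} + {b:.2f} * Ts")
```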
In addition, a widely used empirical tropospheric delay model, the Global Pressure and Temperature 2 wet (GPT2w) model, can also provide the Tm product (Böhm et al., 2015). The GPT2w model has two horizontal resolutions of 5° × 5° (GPT2w-5) and 1° × 1° (GPT2w-1), and has excellent performance in Tm estimation against other models (Huang et al., 2019b).
The summary of GNSS PWV processing is shown in Fig. 3. First, we use the radiosonde profiles to establish a regional empirical Tm model. Then, the hourly Ps and Ts observed at the ground meteorological stations are interpolated to the nearby GNSS sites. The interpolated Ps is used to calculate the ZHD by Eq. (9), and the interpolated Ts is used to obtain the Tm with the regional empirical Tm model. Subsequently, the ZWD is calculated by subtracting the ZHD from the ZTD, which is obtained from the GNSS observations. The conversion coefficient Π is calculated by Eq. (10). Finally, the hourly GNSS PWV is obtained by multiplying the ZWD by Π.
Process for GNSS PWV retrieval
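The workflow of Fig. 3 can be condensed into a short script for a single epoch, combining Eq. (9) for the ZHD, the Guilin Tm model of Eq. (14), Eq. (10) for Π and Eq. (8) for the PWV. Note that k'2 and k3 are converted to per-pascal units inside the conversion function so that the 10^6 scaling of Eq. (10) yields the familiar dimensionless Π of roughly 0.16; all epoch values are placeholders.

```python
import numpy as np

RHO_W = 1.0e3     # density of liquid water (kg/m^3)
R_V = 461.495     # specific gas constant for water vapor (J kg^-1 K^-1)
K2P = 22.13       # k2' as quoted in the text (K/hPa)
K3 = 3.739e5      # k3 as quoted in the text (K^2/hPa)

def saastamoinen_zhd_mm(p_hPa, lat_rad, h_km):
    """Zenith hydrostatic delay in mm (Eq. 9)."""
    return 2.2767 * p_hPa / (1.0 - 0.00266 * np.cos(2.0 * lat_rad) - 0.00028 * h_km)

def conversion_factor(tm_K):
    """Dimensionless factor Pi converting ZWD to PWV (Eq. 10).
    k2' and k3 are converted from per-hPa to per-Pa units so that the
    10^6 scaling gives the familiar Pi of about 0.16."""
    k2p = K2P / 100.0   # K/Pa
    k3 = K3 / 100.0     # K^2/Pa
    return 1.0e6 / (RHO_W * R_V * (k3 / tm_K + k2p))

def gnss_pwv_mm(ztd_mm, p_hPa, ts_K, lat_rad, h_km):
    """Hourly PWV (mm) for one epoch, following the workflow of Fig. 3."""
    zhd = saastamoinen_zhd_mm(p_hPa, lat_rad, h_km)
    zwd = ztd_mm - zhd
    tm = 101.01 + 0.62 * ts_K   # Guilin Tm model (Eq. 14)
    return conversion_factor(tm) * zwd

# Placeholder epoch: ZTD 2600 mm, pressure 995 hPa, Ts 301 K,
# latitude 25.3 deg, station height 0.17 km
print(f"PWV = {gnss_pwv_mm(2600.0, 995.0, 301.0, np.radians(25.3), 0.17):.1f} mm")
```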
Empirical Tm model in Guilin
Establishment of Tm model for Guilin
Because global Tm models carry systematic errors and cannot provide optimal performance over a specific region, a regionally optimized Tm model needs to be developed (Huang et al., 2019a). Thus, to reduce the Tm calculation error and improve the reliability of GNSS PWV, radiosonde profiles are used to establish a local empirical Tm model for Guilin. Before establishing the Tm model, the time series of Tm and Ts and the relationship between them, obtained from the Guilin radiosonde profiles from 2012 to 2017, are presented in Fig. 4.
Relationship between Tm and Ts from the radiosonde data from 2012 to 2017. a Tm and Ts time series, b correlation plot between Tm and Ts
As seen in Fig. 4a, Tm and Ts have a similar trend over time. Ts varies between 270 and 310 K, while the range of Tm is smaller than that of Ts, varying from 260 to 295 K. Figure 4b shows that the R between Tm and Ts is 0.91, which indicates that Tm and Ts have a strong linear correlation in Guilin. Thus, the regression coefficients a and b are calculated by Eqs. (12) and (13) with the radiosonde profiles from 2012 to 2017. Then, the regional empirical Tm model for Guilin (Guilin model) is established as follows:
$$T_{{\text{m}}} = 101.01 + 0.62T_{{\text{s}}}$$
Evaluation of Tm model
To evaluate the performance of the Guilin Tm model, the gridded Tm series derived from the ERA5 dataset in 2017, which are different from that used in building Guilin model, are used as references. The widely used Bevis model (Bevis et al., 1994), GPT2w-1 model and the Li model (Li et al., 1999) are used to compare with the Guilin model. The Ts from the ERA5 data are used as the Tm model input. Then, the mean bias, RMS, STD and R are calculated. The results are listed in Table 2.
Table 2 Statistical results of the mean bias, RMS and STD for the four models over Guilin using ERA5 data from 2017
According to Table 2, the Li model has the largest mean bias and RMS, whereas the Guilin model has the smallest RMS and STD values of 2.12 and 2.04 K, respectively, and a mean bias of −0.51 K. The Guilin model improves the accuracy (in RMS) of Tm estimation by 35%, 40% and 22% with respect to the Bevis, Li, and GPT2w-1 models, respectively. Thus, the accuracy of the Guilin model is evidently higher than that of the other three models over Guilin.
Several previous studies (He et al., 2017; Huang et al., 2019a, 2019b; Wang et al., 2005, 2016) have analyzed the impact of model-derived Tm values on the resultant GNSS PWV through theoretical error propagation. In this study, a similar method is utilized to analyze the impact of the Tm errors of the four models on GNSS PWV. The relationship between the RMS of Tm and the RMS of PWV can be calculated by the following formula:
$$\frac{{{\text{RMS}}_{{{\text{pwv}}}} }}{{{\text{PWV}}}} = \frac{{{\text{RMS}}_{\varPi } }}{\varPi } = \frac{{k_{3} \cdot {\text{RMS}}_{{T_{{\text{m}}} }} }}{{\left( {\frac{{k_{3} }}{{T_{{\text{m}}} }} + k_{2}^{^{\prime}} } \right)T_{{\text{m}}}^{2} }} = \frac{{k_{3} }}{{\left( {\frac{{k_{3} }}{{T_{{\text{m}}} }} + k_{2}^{^{\prime}} } \right)T_{{\text{m}}} }} \cdot \frac{{{\text{RMS}}_{{T_{{\text{m}}} }} }}{{T_{{\text{m}}} }}$$
where RMSpwv is the RMS of PWV, \({\text{RMS}}_{{T_{{\text{m}}} }}\) is the RMS of Tm, and PWV and Tm are given by the radiosonde profiles and by the Tm models, respectively. RMSpwv/PWV is defined as the relative error of PWV. RMSpwv and RMSpwv/PWV are used to assess the impact of the errors in Tm on the resultant GNSS PWV. The theoretical results of RMSpwv and RMSpwv/PWV for the different Tm models are shown in Table 3.
Table 3 Statistical results of theoretical RMS and relative errors in PWV resulting from the four models validated by using radiosonde profiles from 2017
Table 3 shows that the mean RMSpwv and RMSpwv/PWV values of the Guilin model are the smallest among the four models. In terms of RMSpwv, the RMSpwv values of the Guilin model are all less than 0.60 mm, and the mean RMSpwv is 0.29 mm. In terms of RMSpwv/PWV, the mean RMSpwv/PWV of the Guilin model is 0.74%, ranging from 0.71 to 0.77%, which is more stable than other models. From the results, the impact of the Tm values calculated from the Guilin model on the GNSS PWV is smaller than those in the Bevis, Li and GPT2w-1 models.
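As a quick numerical check of Eq. (15), the fragment below propagates an assumed Tm RMS of about 2.1 K at Tm ≈ 285 K into a relative PWV error, which lands near the value reported for the Guilin model in Table 3; the input values are placeholders.

```python
K2P = 22.13    # k2' (K/hPa)
K3 = 3.739e5   # k3 (K^2/hPa)

def relative_pwv_error(tm_K, rms_tm_K):
    """Relative PWV error caused by an RMS error in Tm (Eq. 15)."""
    factor = K3 / ((K3 / tm_K + K2P) * tm_K)
    return factor * rms_tm_K / tm_K

# Placeholder values: Tm about 285 K, Tm model RMS about 2.1 K
print(f"relative PWV error = {100 * relative_pwv_error(285.0, 2.1):.2f} %")
```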
The Bevis model and the Li model were established using radiosonde stations in the mainland United States (27–65° N) and eastern China (20–50° N, 100–130° E), respectively, and the empirical GPT2w-1 model was developed from the global ERA-Interim dataset without any input of measured meteorological parameters. As a result, systematic errors arise when these three models are used in Guilin. Moreover, GPT2w-1 provides only one Tm value per day, which cannot meet the demand for hourly Tm values. Thus, the Tm values derived from the other three models are not optimal. Compared with them, a model established with local data has smaller errors and greater applicability. Hence, the Guilin model is preferred and can be applied to Tm estimation for hourly GNSS PWV retrieval in Guilin.
Comparison of GNSS-derived PWV with radiosonde-derived PWV
After retrieving the PWV, the accuracy of GNSS-derived PWV (GNSS-PWV) needs to be evaluated. In this study, the PWV retrieved from the data at the Guilin radiosonde station at 00:00 and 12:00 UTC per day from June to July 2017 is compared with the GNSS-PWV. The GNSS station JZ87 was selected because it is nearest to the Guilin radiosonde station. The GNSS-PWV values at the corresponding time were extracted for comparison.
The RadioSonde-derived PWV (RS-PWV) is based on the geopotential height system, while the GNSS-PWV adopts the geodetic height system. Thus, it is necessary to unify the different height systems when the GNSS-PWV is compared with the RS-PWV. In this study, we follow the method described in Wang et al. (2016) to unify the different height systems. In addition, PWV varies strongly with height, so the PWVs at different heights must be reduced to a common height to limit the impact of height differences on the PWV comparisons. The following empirical correction model of PWV is used (Zhang et al., 2019a; Zhao et al., 2019b):
$${\text{PWV}}_{{h_{1} }} = {\text{PWV}}_{{h_{2} }} \times \exp \left( { - \left( {h_{1} - h_{2} } \right)/2000} \right)$$
where \({\text{PWV}}_{{h_{1} }}\) and \({\text{PWV}}_{{h_{2} }}\) denote the PWVs corresponding to the heights of h1 and h2, respectively. After the GNSS-PWV and RS-PWV are referred to the same height, they can be compared. The PWV comparison from June to July 2017 is shown in Fig. 5.
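Eq. (16) reduces to a one-line helper, sketched below with placeholder heights; the 2000 m scale height follows the empirical correction model cited above.

```python
import math

def pwv_at_height(pwv_h2_mm, h2_m, h1_m):
    """Reduce a PWV value observed at height h2 to a target height h1 (Eq. 16)."""
    return pwv_h2_mm * math.exp(-(h1_m - h2_m) / 2000.0)

# Placeholder heights: GNSS antenna at 170 m, radiosonde site at 164 m
print(f"{pwv_at_height(58.0, 170.0, 164.0):.2f} mm")
```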
Comparison of GNSS-PWV with RS-PWV at station JZ87 from June to July 2017
As presented in Fig. 5, the GNSS-PWV agrees well with the RS-PWV except for some small differences. Moreover, the RS-PWV is generally greater than the GNSS-PWV. The R of the two kinds of PWVs is 0.79, which shows a strong correlation. The mean bias of the PWV differences is −0.90 mm, and the RMS and STD are 3.53 and 3.43 mm, respectively. Notably, the accuracy of GNSS-PWV meets the requirements of atmospheric research (Wang et al., 2013). The GNSS-PWV and the RS-PWV are in good agreement, and the PWV derived from the GNSS observations is reliable and can be used to monitor water vapor variation and analyze the PWV spatiotemporal characteristics.
PWV time characteristics
Rainfall intensity is the average amount of rainfall over a certain period and is an important indicator for describing the characteristics of a heavy rainfall: the larger the intensity, the heavier the rainfall. China's meteorological department generally adopts a standard classification of rainfall intensity, as shown in Table 4. To describe rainfall consistently, this standard is used in the following discussion.
Table 4 Classification of rainfall intensity in China's meteorological department
From June to July 2017, several continuous rainfalls occurred in many areas of Guilin. The daily precipitation observed by 12 MET stations in Guilin is shown in Fig. 6.
Daily precipitation observed at 12 MET stations in Guilin from June to July 2017 (red dashed lines represent the classification of rainfall)
As shown in Fig. 6, rain was recorded on many days from June to July 2017 in Guilin. In particular, around 1 July (DOY 182), the precipitation observed at almost all of the 12 MET stations in Guilin exceeded 50 mm, reaching the level of heavy rain. One of these rainfalls produced more than 250 mm of precipitation in Yongfu County, reaching the highest level of rainfall intensity. On the basis of the weather conditions and the available data, the observations at seven GNSS stations in Guilin from 27 June to 2 July 2017 and the related rainfall data from the MET stations near the GNSS sites are selected as examples to analyze the time-varying characteristics of PWV and the relationship between GNSS-PWV and the actual precipitation during the rainfall events, as shown in Fig. 7.
Hourly PWV retrieved from seven GNSS sites and actual precipitation from 27 June to 2 July 2017 (the red circle represents PWV variation before rainfall events, and the black circle represents PWV variation after rainfall events)
Figure 7 shows that the PWVs retrieved from seven GNSS stations (JZ84, JZ85, JZ87, JZ89, JZ92, JZ94, and JZ95) over the six days are evidently correlated with the actual precipitation and have a similar trend. The PWV changes greatly, especially in the case of heavy rain, and most of the precipitation occurs near the peak values of PWV. The PWV usually increases significantly before a heavy rainfall and fluctuates slightly but remains stable during the heavy rainfall. After the heavy rainfall, the PWV drops sharply and reaches a trough. The rainfall that occurred on 1 July 2017 (DOY 182) at station JZ84 is used as an example to analyze the PWV variations in a rainfall process (see Fig. 7a). Before the heavy rainfall, the PWV values fluctuated around 55–60 mm until 28 June (DOY 179). On 28 June, the PWV fell to 50 mm. It then began to rise until 1 July (DOY 182), when it reached a peak value of 65.6 mm. When the heavy rainfall stopped, the PWV decreased rapidly, down to a minimum of 44.2 mm on 2 July (DOY 183). Similar variations of PWV and precipitation can also be found at the other stations during the heavy rainfall.
To further investigate the relationship between the time-varying PWV and rainfall, seven rainfall events at different GNSS stations are selected for analysis and grouped into the periods before (red circles in Fig. 7) and after (black circles in Fig. 7) the rainfall events. The PWV variation and the rate of change of PWV are defined as follows (Yao et al., 2017): ΔPWV (PWV variation) = Max PWV (the maximum PWV before a rainfall) − Min PWV (the minimum PWV before/after the adjacent maximum PWV); the interval time refers to the period between the Max PWV and the Min PWV; and the rate of change of PWV = ΔPWV/interval time. The results are presented in Tables 5 and 6, respectively.
Table 5 Statistical PWV variation before several rainfall events (they are also shown by red circles in Fig. 7)
Table 6 Statistical PWV variation after several rainfall events (they are also shown by black circles in Fig. 7)
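The ΔPWV, interval time and rate of change defined above can be extracted from an hourly PWV series with a few lines of code; the series below is synthetic and only illustrates the bookkeeping, not the actual station data behind Tables 5 and 6.

```python
import numpy as np

def pwv_variation(pwv_mm, peak_idx, window=12):
    """Return (max PWV, min PWV, dPWV, interval in h, rate in mm/h)
    around a given peak index of an hourly PWV series."""
    start = max(0, peak_idx - window)
    segment = pwv_mm[start:peak_idx + 1]
    i_max = start + int(np.argmax(segment))
    i_min = start + int(np.argmin(segment))
    max_pwv = pwv_mm[i_max]
    min_pwv = pwv_mm[i_min]
    d_pwv = max_pwv - min_pwv
    interval = abs(i_max - i_min)            # hours, for hourly sampling
    rate = d_pwv / interval if interval else float("nan")
    return max_pwv, min_pwv, d_pwv, interval, rate

# Synthetic hourly PWV rising toward a rainfall around hour 30
t = np.arange(48)
pwv = 52.0 + 10.0 * np.exp(-0.5 * ((t - 30) / 6.0) ** 2)
print(pwv_variation(pwv, peak_idx=30))
```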
In Table 5, before rainfall the maximum PWV values reach high levels that are all more than 60 mm, with one value even reaching 64.4 mm. The PWV time variation is also large and can reach 4.3 mm within 1 h before the rainfall. In addition, most of the rates of PWV time variation are greater than 1 mm/h. The results indicate that, as one of the main preconditions for rainfall, GNSS-PWV quickly increases in a few hours before rainfall.
In Table 6, after rainfall the minimum PWV values reach low levels that are all near 50 mm. Moreover, the rate of PWV descent can reach 6.5 mm/h after the rainfall. The results also prove that GNSS-PWV rapidly decreases for a few hours after a rainfall.
To further analyze the time varying characteristics of GNSS-PWV during the rainfall, the hourly PWV at 11 GNSS stations from 27 June to 6 July 2017 in Guilin are selected, as shown in Fig. 8.
Temporal evolution of PWV retrieved at 11 GNSS stations from 27 June to 6 July 2017
Figure 8 shows the GNSS-PWV time variation in the selected period. The PWVs retrieved at the 11 GNSS stations generally follow the same change process, showing that GNSS can effectively capture the significant variation of PWV over time in Guilin. Before the heavy rainfall, the PWV fluctuated slightly from 27 to 30 June (DOY 178–181) and remained stable at approximately 60 mm, a state of high water vapor content. The PWV then began to show a significant upward trend after 30 June and reached its peak value on 1 July, when the heavy rainfall occurred. After the heavy rainfall, the PWV dropped sharply to its minimum value on 2 July. It then gradually increased in the following days and finally stabilized at approximately 60 mm. The significant variation of PWV is mainly concentrated in the three days from 30 June to 2 July (DOY 181–183), which is the period of torrential rain. In addition, the PWV derived from station JZ88 is significantly lower than that of the other GNSS stations in Fig. 8. Among the 11 GNSS stations, the elevation of station JZ88 is much higher than that of the other stations (see Table 1). This finding is consistent with previous studies showing that GNSS-PWV tends to decrease with increasing elevation (Liu et al., 2019a; Onn & Zebker, 2006). From the above analysis, there is a strong correlation between GNSS-derived PWV and rainfall. To some extent, the rapid rise of PWV may be a warning of a coming rainfall event, since the occurrence of rain requires a sufficient water vapor supply. Although not all high PWV values indicate the occurrence of rainfall (Yao et al., 2017), the increase of water vapor is one of the necessary preconditions for rain. When the rainfall begins to weaken, the PWV drops sharply, and the rain finally ceases. Afterward, the consumed water vapor is replenished, and the PWV rises gradually until it stabilizes at a certain value.
PWV space characteristics
According to the meteorological observations, the heavy rainfall over most areas of Guilin was mainly concentrated on 1 July 2017; it was a long-lasting process with a wide extent and great intensity. Hence, in this study, we selected the hourly GNSS-PWVs at 10 GNSS stations during the three days from 30 June to 2 July 2017 (the observations of station JZ86 were unavailable in this period). Spatial interpolation is carried out to construct the distribution maps of PWV, and the corresponding actual precipitation at different times during the three-day rainfall events is retrieved. Figure 9 exhibits the geographic distribution of the daily accumulated precipitation over Guilin from 30 June to 2 July 2017; the precipitation data are obtained from the 12 MET stations. Figure 10 shows the spatial evolution of PWV derived from the observations at the 10 GNSS stations during the same period, and the spatial evolution of ERA5 PWV is also presented in Fig. 11 for comparison. The PWV distribution maps are constructed hourly, but we present them with a time interval of 6 h because of space constraints. In addition, UTC is adopted in the precipitation maps and PWV maps.
Evolution of actual precipitation in Guilin from June 30 to July 2, 2017
Spatial evolution of GNSS-PWV for Guilin every 6 h from June 30 to July 2, 2017
Spatial evolution of ERA5 PWV for Guilin every 6 h from June 30 to July 2, 2017
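A minimal sketch of the spatial interpolation used to build maps like Fig. 10 is given below with SciPy's griddata; the station coordinates and PWV values are placeholders, and a real map would use the hourly PWV at the 10 GNSS sites.

```python
import numpy as np
from scipy.interpolate import griddata

# Placeholder station longitudes/latitudes (deg) and PWV values (mm)
lon = np.array([110.1, 110.3, 110.5, 110.8, 111.0, 110.6])
lat = np.array([24.8, 25.1, 25.4, 25.0, 25.6, 25.9])
pwv = np.array([62.0, 63.5, 61.0, 64.2, 59.8, 58.5])

# Regular grid covering the region of interest
grid_lon, grid_lat = np.meshgrid(np.linspace(110.0, 111.1, 56),
                                 np.linspace(24.7, 26.0, 66))

# Linear interpolation inside the station hull; NaN outside
pwv_map = griddata((lon, lat), pwv, (grid_lon, grid_lat), method="linear")
print(pwv_map.shape, float(np.nanmean(pwv_map)))
```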
As shown in Fig. 9, several rainfall events were observed over most parts of Guilin. On 30 June, the precipitation increased from about 5 mm in the southeast to 65 mm in the northwest (see Fig. 9a). The rainfall then intensified and developed into torrential rain on 1 July over most parts of Guilin, particularly in the west, where the precipitation reached its maximum of 250 mm (see Fig. 9b). The accumulated precipitation kept increasing from 30 June to 1 July. On 2 July, the precipitation weakened and even ceased over most of the area, except for several rainfall events in southern Guilin (see Fig. 9c).
On 30 June, the GNSS-PWV increased significantly from southwestern Guilin toward the central part between 00:00 and 24:00 UTC, and the increase spread from the southwest to the north until it covered the whole of Guilin (see Fig. 10a–e), indicating that a large amount of water vapor flowed into Guilin from the southwest. The GNSS-PWV overall increased from 60 to 65 mm. This is consistent with the precipitation pattern displayed in Fig. 9a, wherein western Guilin has more precipitation than the east. The spatial variations of the GNSS-PWV agree well with the ERA5 PWV displayed in Fig. 11a–e in that the PWV gradually increased from the southwest to the central part of Guilin. The increased water vapor transport from the southwest offered a favorable condition for the heavy rainfall, and the high moisture content portended that the heavy rainfall was coming soon. In addition, according to the report from the Guilin Meteorological Bureau, the southwest jet stream provided sufficient moisture and dynamic conditions for the heavy rainfalls in Guilin. Consistently, the PWV in the southwest increased earlier than in the other parts of Guilin, as presented in Figs. 10 and 11.
On 1 July, Fig. 10e–f shows that Guilin had high PWV values from 0:00 to 6:00 UTC, with an overall GNSS-PWV of approximately 65 mm. The water vapor around Guilin gradually gathered and then covered the entire region. In this circumstance, torrential rains occurred in most parts of Guilin, and the precipitation reached its maximum on this day (see Fig. 9b). The GNSS-PWV values fluctuated slightly but did not change much during this period. From 6:00 to 24:00 UTC, as shown in Fig. 10f–i, the GNSS-PWV in southern and northern Guilin gradually showed a trend of polarization; that is, significant GNSS-PWV decreases were observed in the north, while the south experienced a slight increase in GNSS-PWV. Finally, four gradient bands of GNSS-PWV values emerged in Guilin at 24:00 UTC, approximately 35–45 mm, 45–55 mm, 55–65 mm, and 65–70 mm from northwest to southeast, respectively. This pattern is consistent with the precipitation distribution displayed in Fig. 9b, wherein the rainfall gradually decreased from west to east. Different rainfall intensities in different regions caused the multilevel differentiation of PWV values over Guilin. Meanwhile, the GNSS-PWV maps shown in Fig. 10e–i also agree well with the ERA5 PWV variations, although the GNSS-PWV appears slightly larger than the ERA5 PWV because of the lower spatial resolution and the height differences between the two PWV datasets.
On 2 July, as shown in Fig. 10i–l, the GNSS-PWV differentiation gradually weakened until the overall PWV values stabilized at approximately 55 mm. The rainfalls gradually weakened and ceased in most parts of Guilin, whereas the south still experienced several rainfall events (see Fig. 9c). Unlike the GNSS-PWV maps, the ERA5 PWV maps show that the PWV in southern Guilin was significantly greater than that in the north from 12:00 to 18:00 UTC (Fig. 11k–l), which is more consistent with the actual precipitation shown in Fig. 9c. This benefit arises because ERA5 has a higher spatial resolution than the GNSS network, which reduces the error caused by spatial interpolation.
As stated above, a clear correspondence can be found between the accumulation and release of PWV and the actual precipitation process. In general, moisture accumulated before the torrential rain and the PWV increased markedly until it reached its peak; shortly after the PWV peaked, the rainfall arrived. As shown in Figs. 10 and 11, the high level of PWV was sustained for nearly a day, after which Guilin experienced its heaviest rainfall of the three-day rainfall events. Therefore, the longer the high PWV level is sustained, the heavier the rainfall tends to be. During the torrential rain, the PWV values fluctuated slightly. The torrential rain lasted for a while, and the continuous heavy rainfall caused a large loss of moisture; accordingly, the PWV decreased rapidly and dropped to its minimum after a period of heavy rainfall. Finally, the PWV increased slightly and gradually stabilized at a fixed value.
Moreover, the PWV varied from place to place. Guilin has a subtropical monsoon climate. Owing to the difference in land and sea thermal properties, the prevailing warm moist air mostly comes from the southwest and southeast in summer, and the precipitation decreases from the southeast to the northwest. The PWV in northwest Guilin is approximately 5 mm smaller than that in southeast Guilin because the northwest lies farther inland. The increase of PWV mainly starts from southern Guilin, which may be influenced by the subtropical monsoon climate and geographical location. In addition, as mentioned in the previous section, GNSS stations at higher elevations tend to have smaller GNSS-PWV. The GNSS-PWV maps also show that the PWV near station JZ88 is usually lower than that in the surrounding areas, as shown in Fig. 10. The spatial evolutions of GNSS-PWV and ERA5 PWV are generally consistent except for several small discrepancies, indicating that ERA5 PWV can also reveal the moisture variations during the rainfall events in Guilin and can be combined with GNSS-PWV in future work to analyze moisture variations with higher spatial resolution.
Obtaining the distribution of moisture and understanding the impacts of its spatiotemporal characteristics on the evolution of severe weather and global climate change are essential. Monitoring of climate change in Guilin has been limited by insufficient observations. With the rapid development of ground-based GNSS over the past few decades, such measurements have become crucial for observing highly dynamic water vapor. In this research, GNSS-derived PWV from the observations at 11 GNSS stations in Guilin is used to analyze the spatial and temporal characteristics of water vapor during the heavy rainfalls from June to July 2017 in Guilin, with PWV serving as the indicator for monitoring heavy rainfall events.
To improve the accuracy of the GNSS PWV retrieval, a new Tm model is established for Guilin using the radiosonde profiles from 2012 to 2017. The results show that the new Tm model performs better over Guilin than the widely used Bevis, Li, and GPT2w-1 Tm models, with a mean bias and RMS of −0.51 K and 2.12 K, respectively. Additionally, the impact of different Tm models on the resultant GNSS PWV is investigated, showing that the mean RMSpwv and RMSpwv/PWV are 0.29 mm and 0.74%, respectively.
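For reference, the conversion in which Tm enters the PWV retrieval is the standard relation between PWV and the zenith wet delay (ZWD) through the dimensionless factor Π (Askne & Nordius, 1987; Bevis et al., 1994). A minimal sketch is given below; the physical constants are typical literature values and may differ slightly from those adopted in this study.

def pwv_from_zwd(zwd_m, tm_k):
    # PWV (mm) from zenith wet delay (m) and weighted mean temperature Tm (K),
    # using Pi = 10^6 / (rho_w * Rv * (k3/Tm + k2')).
    rho_w = 1000.0   # density of liquid water, kg/m^3
    rv = 461.5       # specific gas constant of water vapor, J/(kg K)
    k2p = 0.221      # K/Pa  (22.1 K/hPa)
    k3 = 3.739e3     # K^2/Pa (3.739e5 K^2/hPa)
    pi_factor = 1.0e6 / (rho_w * rv * (k3 / tm_k + k2p))
    return pi_factor * zwd_m * 1000.0

# Example: ZWD = 0.35 m and Tm = 283 K give a PWV of roughly 56 mm.
print(pwv_from_zwd(0.35, 283.0))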
To assess the reliability of GNSS-PWV, the GNSS-PWV retrieved from GNSS station JZ87 is compared with the RS-PWV, and good agreement between them is obtained. The correlation coefficient between the two kinds of PWV is as high as 0.79, and the mean bias and RMS of GNSS-PWV are −0.9 mm and 3.53 mm, respectively. According to the analysis of the spatial and temporal characteristics of water vapor during the heavy rainfall events that occurred in Guilin from June 27 to July 2, 2017, the variation of GNSS-derived PWV has a direct relationship with the rainfall events. PWV increases before the arrival of a heavy rainfall and decreases to a stable value after it ceases. The GNSS-PWV can effectively reveal the significant variation of water vapor and the actual rainfall in several regions of Guilin. Furthermore, given the subtropical monsoon climate and geographical location, the PWV in Guilin mainly starts to increase from the south before a summer rainfall.
This research demonstrates the possibility of revealing moisture transport and variation during heavy rainfalls using GNSS-derived PWV, which is of great significance for the prediction of rainfall and severe weather. In future work, we will mainly focus on the following four issues: (1) considering additional factors that affect moisture, such as geography, pressure, temperature, and humidity; (2) using the GNSS tomographic technique to investigate the relationship between three-dimensional PWV and rainfall; (3) combining multisource data in the construction of PWV maps to improve their reliability; and (4) retrieving PWV with higher spatial and temporal resolutions.
In this experiment, the radiosonde data came from the University of Wyoming (http://www.weather.uwyo.edu/upperair/sounding.html). The ERA5 data came from the ECMWF (https://cds.climate.copernicus.eu/#!/home). The ground meteorological stations data came from the China Meteorological Administration (http://data.cma.cn). The IGS ZTD products came from IGS associate analysis centers (https://cddis.nasa.gov/archive/gnss/products/troposphere/zpd/), and the Guilin GNSS data came from the Guangxi Bureau of Surveying, Mapping, and Geoinformation.
Askne, J., & Nordius, H. (1987). Estimation of tropospheric delay for microwaves from surface weather data. Radio Science, 22, 379–386.
Barindelli, S., Realini, E., Venuti, G., Fermi, A., & Gatti, A. (2018). Detection of water vapor time variations associated with heavy rain in northern Italy by geodetic and low-cost GNSS receivers. Earth, Planets and Space, 70, 1.
Bevis, M., Businger, S., Chiswell, S., Herring, T. A., Anthes, R. A., Rocken, C., & Ware, R. H. (1994). GPS meteorology: Mapping zenith wet delays onto precipitable water. Journal of Applied Meteorology, 33, 379–386.
Bevis, M., Businger, S., Chiswell, S., Herring, T. A., Rocken, C., Anthes, R. A., & Ware, R. H. (1992). GPS meteorology: Remote sensing of atmospheric water vapor using the global positioning system. Journal of Geophysics Research, 97, 15787–15801.
Böhm, J., Moller, G., Schindelegger, M., Pain, G., & Weber, R. (2015). Development of an improved empirical model for slant delays in the troposphere (GPT2w). GPS Solution, 19, 433–441.
Bolton, D. (1980). The computation of equivalent potential temperature. Monthly Weather Review, 108, 1046–1053.
Byun, S., & Bar-Sever, Y. (2009). A new type of troposphere zenith path delay product of the international GNSS service. Journal of Geodesy, 83, 367–373.
Calori, A., Santos, J. R., Blanco, M., Pessano, H., Llamedo, P., Alexander, P., & de la Torre, A. (2016). Ground-based GNSS network and integrated water vapor mapping during the development of severe storms at the Cuyo region (Argentina). Atmospheric Research, 176–177, 267–275.
Chen, B., Dai, W., Liu, Z., Wu, L., Kuang, C., & Ao, M. (2018). Constructing a precipitable water vapor map from regional GNSS network observations without collocated meteorological data for weather forecasting. Atmospheric Measurement Techniques, 11, 5153–5166.
Chen, B., Liu, Z., Wong, W. K., & Woo, W. C. (2017). Detecting water vapor variability during heavy precipitation events in Hong Kong using the GPS tomographic technique. Journal of Atmospheric and Oceanic Technology, 34, 1001–1019.
Davis, J. L., Herring, T. A., Shapiro, I. I., Rogers, A. E. E., & Elgered, G. (1985). Geodesy by radio interferometry: Effects of atmospheric modeling errors on estimates of baseline length. Radio Science, 20, 1593–1607.
Elgered, G., Johansson, J. M., Rönnäng, B. O., & Davis, J. L. (1997). Measuring regional atmospheric water vapor using the Swedish permanent GPS network. Geophysical Research Letters, 24, 2663–2666.
Emardson, T. R., Elgered, G., & Johannson, J. M. (1998). Three months of continuous monitoring of atmospheric water vapor with a network of global positioning system receivers. Journal of Geophysical Research, 103(D2), 1807–1820.
Gui, K., Che, H., Chen, Q., Zeng, Z., Liu, H., Wang, Y., Zheng, Y., Sun, T., Liao, T., Wang, H., & Zhang, X. (2017). Evaluation of radiosonde, MODIS-NIR-Clear, and AERONET precipitable water vapor using IGS ground-based GPS measurements over China. Atmospheric Research, 197, 461–473.
He, C., Wu, S., Wang, X., Hu, A., Wang, Q., & Zhang, K. (2017). A new voxel-based model for the determination of atmospheric weighted mean temperature in GPS atmospheric sounding. Atmospheric Measurement Techniques, 10, 2045–2060.
Hein, G. W. (2020). Status, perspectives and trends of satellite navigation. Satellite Navigation, 1, 22.
Huang, L., Jiang, W., Liu, L., Hua, C., & Ye, S. (2019a). A new global grid model for the determination of atmospheric weighted mean temperature in GPS precipitable water vapor. Journal of Geodesy, 93, 159–176.
Huang, L., Liu, L., Chen, H., & Jiang, W. (2019b). An improved atmospheric weighted mean temperature model and its impact on GNSS precipitable water vapor estimates for China. GPS Solution, 23, 2.
King, M. D., Kaufman, Y. J., Menzel, W. P., & Tanré, D. (1992). Remote sensing of cloud, aerosol, and water vapor properties from the moderate resolution imaging spectrometer (MODIS). IEEE Transactions on Geoscience and Remote Sensing, 30(1), 2–27.
Li, J., Mao, J., & Li, C. (1999). The approach to remote sensing of water vapor based on GPS and linear regression Tm in eastern region of China. Acta Meteorologica Sinica, 57, 283–292. (in Chinese).
Li, X., Dick, G., Lu, C., Ge, M., Nilsson, T., Ning, T., Wickert, J., & Schuh, H. (2015a). Multi-GNSS meteorology: Real-time retrieving of atmospheric water vapor from BeiDou, Galileo, GLONASS, and GPS observations. IEEE Transactions on Geoscience and Remote Sensing, 53(12), 6385–6393.
Li, X., Zus, F., Lu, C., Dick, G., Ning, T., Ge, M., Wickert, J., & Schuh, H. (2015b). Retrieving of atmospheric parameters from multi-GNSS in real time: Validation with water vapor radiometer and numerical weather model. Journal of Geophysical Research: Atmospheres, 120, 7189–7204.
Liu, C., Zheng, N., Zhang, K., & Liu, J. (2019a). A new method for refining the GNSS-derived precipitable water vapor map. Sensors, 19, 3.
Liu, L., Yao, C., & Wen, H. (2012). Empirical Tm modeling in the region of Guangxi. Geodesy and Geodynamics, 3, 47–52.
Liu, Y., Zhao, Q., Yao, W., Ma, X., Yao, Y., & Liu, L. (2019b). Short-term rainfall forecast model based on the improved BP-NN algorithm. Scientific Reports, 9, 19751.
Manandhar, S., Lee, Y. H., Meng, Y. S., Yuan, F., & Ong, J. T. (2018). GPS-derived PWV for rainfall nowcasting in tropical region. IEEE Transactions on Geoscience and Remote Sensing, 56, 4835–4844.
Onn, F., & Zebker, H. A. (2006). Correction for interferometric synthetic aperture radar atmospheric phase artifacts using time series of zenith wet delay observations from a GPS network. Journal of Geophysical Research-Solid Earth, 111, B09102.
Saastamoinen, J. (1972). Atmospheric correction for the troposphere and stratosphere in radio ranging satellites. Geophysical Monograph Series, 15, 247–251.
Shi, J., Gao, Y., & Guo, J. (2015). Real-time GPS precise point positioning-based precipitable water vapor estimation for rainfall monitoring and forecasting. IEEE Transactions on Geoscience and Remote Sensing, 53, 3452–3459.
Suparta, W., & Rahman, R. (2016). Spatial interpolation of GPS PWV and meteorological variables over the west coast of Peninsular Malaysia during 2013 Klang Valley Flash Flood. Atmospheric Research, 168, 205–219.
Vaquero-Martínez, J., Antón, M., de Galisteo, J. P. O., Cachorro, V. E., Wang, H., Abad, G. G., Román, R., & Costa, M. J. (2017). Validation of integrated water vapor from OMI satellite instrument against reference GPS data at the Iberian Peninsula. Science of the Total Environment, 580, 857–864.
Wang, H., Wei, M., Li, G., Zhou, S., & Zeng, Q. (2013). Analysis of precipitable water vapor from GPS measurements in Chengdu region: Distribution and evolution characteristics in autumn. Advances in Space Research, 52, 656–667.
Wang, J., Zhang, L., & Dai, A. (2005). Global estimates of water-vapor-weighted mean temperature of the atmosphere for GPS applications. Journal of Geophysical Research, 110, D21101.
Wang, J., Zhang, L., Dai, A. G., Van Hove, T., & Van Baelen, J. (2007). A near-global, 2-hourly data set of atmospheric precipitable water from ground-based GPS measurements. Journal of Geophysical Research, 112, D11107.
Wang, S., Xu, T., Nie, W., Jiang, C., Yang, Y., Fang, Z., Li, M., & Zhang, Z. (2020). Evaluation of precipitable water vapor from five reanalysis products with ground-based GNSS observations. Remote Sensing, 12, 1817.
Wang, X., Zhang, K., Wu, S., Fan, S., & Cheng, Y. (2016). Water vapor-weighted mean temperature and its impact on the determination of precipitable water vapor and its linear trend. Journal of Geophysical Research: Atmospheres, 121, 833–852.
Yang, Y., Mao, Y., & Sun, B. (2020). Basic performance and future developments of BeiDou global navigation satellite system. Satellite Navigation, 1, 1.
Yao, Y., Shan, L., & Zhao, Q. (2017). Establishing a method of short-term rainfall forecasting based on GNSS-derived PWV and its application. Scientific Reports, 7, 12465.
Zhang, H., Yuan, Y., Li, W., & Zhang, B. (2019a). A real-time precipitable water vapor monitoring system using the national GNSS network of China: Method and preliminary results. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 12(5), 1587–1598.
Zhang, K., Manning, T., Wu, S., Rohm, W., Silcock, D., & Choy, S. (2015). Capturing the signature of severe weather events in Australia using GPS measurements. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 8, 1839–1847.
Zhang, Y., Cai, C., Chen, B., & Dai, W. (2019b). Consistency evaluation of precipitable water vapor derived from ERA5, GNSS, and radiosondes over China. Radio Science, 54, 561–571.
Zhao, Q., Ma, X., Yao, W., & Yao, Y. (2019a). A new typhoon-monitoring method using precipitation water vapor. Remote Sensing, 11, 23.
Zhao, Q., Yang, P., Yao, W., & Yao, Y. (2019b). Hourly PWV dataset derived from GNSS observations in China. Sensors, 20, 1.
Zhao, Q., Yao, Y., & Yao, W. (2018). GPS-based PWV for precipitation forecasting and its application to a typhoon event. Journal of Atmospheric and Solar-Terrestrial Physics, 167, 124–133.
The authors would like to thank the University of Wyoming for providing the radiosonde profiles, the ECMWF for providing the ERA5 data, the China Meteorological Administration for providing the ground meteorological data, the IGS for providing the ZTD products, and the Guangxi Bureau of Surveying, Mapping, and Geoinformation for providing the GNSS data.
This research was funded by the National Natural Science Foundation of China (41704027; 41664002; 41864002), the Guangxi Natural Science Foundation of China (2017GXNSFBA198139; 2017GXNSFDA198016; 2018GXNSFAA281182; 2018GXNSFAA281279), the "Ba Gui Scholars" program of the provincial government of Guangxi, and the Open Fund of Hunan Natural Resources Investigation and Monitoring Engineering Technology Research Center (No: 2020–9).
College of Geomatics and Geoinformation, Guilin University of Technology, Guilin, 541004, China
Liangke Huang, Zhixiang Mo, Shaofeng Xie, Lilong Liu, Chuanli Kang & Shitai Wang
Guangxi Key Laboratory of Spatial Information and Geomatics, Guilin, 541004, China
School of Geodesy and Geomatics, Wuhan University, Wuhan, 430079, China
Jun Chen
Liangke Huang
Zhixiang Mo
Shaofeng Xie
Lilong Liu
Chuanli Kang
Shitai Wang
Conceptualization, L.H. and Z.M.; methodology, L.H. and Z.M.; software, S.X.; validation, L.H. and Z.M.; formal analysis, L.H.; investigation, S.X. and C.K.; resources, S.X., L.L. and S.W.; data curation, L.H.; writing—original draft preparation, Z.M.; writing—review and editing, L.H. and J.C.; funding acquisition, L.H.; S.X. and L.L. All authors read and approved the final manuscript.
Correspondence to Zhixiang Mo or Shaofeng Xie.
Huang, L., Mo, Z., Xie, S. et al. Spatiotemporal characteristics of GNSS-derived precipitable water vapor during heavy rainfall events in Guilin, China. Satell Navig 2, 13 (2021). https://doi.org/10.1186/s43020-021-00046-y
Precipitable water vapor
Spatiotemporal characteristic
Atmospheric weighted mean temperature | CommonCrawl |
\begin{document}
\newcommand{A new series solution method for the transmission problem}{A new series solution method for the transmission problem} \newcommand{Y. Jung and M. Lim}{Y. Jung and M. Lim}
\title{{A new series solution method for the transmission problem}\thanks{{This work is supported by the Korean Ministry of Science, ICT and Future Planning through NRF grant No. 2016R1A2B4014530 (to Y.J. and M.L.).}}} \author{ YoungHoon Jung\thanks{\footnotesize Department of Mathematical Sciences, Korea Advanced Institute of Science and Technology, Daejeon 305-701, Korea ({[email protected]}, {[email protected]}).} \and Mikyoung Lim\footnotemark[2]\ \thanks{\footnotesize Cooresponding author}}
\date{\today} \maketitle \begin{abstract} In this paper, we propose a novel framework for the conductivity transmission problem in two dimensions with a simply connected inclusion of arbitrary shape. We construct a collection of harmonic basis functions, associated with the inclusion, based on complex geometric function theory. It is well known that the solvability of the transmission problem can be established via the boundary integral formulation involving the Neumann-Poincar\'{e} (NP) operator. The constructed basis leads to explicit series expansions for the related boundary integral operators. In particular, the NP operator becomes a doubly infinite, self-adjoint matrix operator, whose entries are given by the Grunsky coefficients corresponding to the inclusion shape. This matrix formulation provides us with a simple numerical scheme to compute the solution of the transmission problem and also the spectrum of the NP operator for a smooth domain by use of the finite section method. The proposed geometric series solution method requires us to know the exterior conformal mapping associated with the inclusion. We derive an explicit boundary integral formula with which the exterior conformal mapping can be numerically computed, so that the method can be applied to an inclusion of arbitrary shape. We provide numerical examples to demonstrate the effectiveness of the proposed method. \\
Dans cet article, nous proposons un nouveau cadre pour le probl\`{e}me de conductivit\'{e} en deux dimensions avec une inclusion de conductivit\'{e} simplement connect\'{e}e et de forme arbitraire. Bas\'{e}e sur la th\'{e}orie de la fonction g\'{e}om\'{e}trique complexe, nous construisons une collection des fonctions harmoniques li\'{e}e \`{a} l'inclusion. Le fait que la solvabilit\'{e} du probl\`{e}me de conductivit\'{e} avec une inclusion peut \^{e}tre \'{e}tablie par l'op\'{e}rateur Neumann-Poincar\'{e} est bien connu. Avec les fonctions de base construites, l'op\'{e}rateur Neumann-Poincar\'{e} devient une matrice auto-adjointe en dimension infinie, dont l'entr\'{e}e est d\'{e}fini par les coefficients de Grunsky associ\'{e}s \`{a} la g\'{e}om\'{e}trie de l'inclusion. Sur la base de cette formulation matricielle, nous d\'{e}rivons un sch\'{e}ma num\'{e}rique simple pour calculer la solution du probl\`{e}me de transmission et, \'{e}galement, le spectre de l'op\'{e}rateur Neumann-Poincar\'{e} sur les domaines planaires de forme arbitraire. Nous d\'{e}rivons aussi une formule de l'int\'{e}grale au bord explicitement pour la transformation conforme de l'ext\'{e}rieur. Avec cette formule nous pouvons calculer la transformation conforme de l'ext\'{e}rieur des domaines planaires de forme arbitraire. Nous effectuons des exp\'{e}riences num\'{e}riques afin de d\'{e}montrer l'efficacit\'{e} de la m\'{e}thode.
\end{abstract}
\noindent {\footnotesize {\bf AMS subject classifications.} { 35J05; 30C35;45P05} }
\noindent {\footnotesize {\bf Keywords.} {Interface problem; Transmission problem; Neumann-Poincar\'{e} operator; Geometric function theory; Plasmonic resonance; Finite section method; Conformal mapping} }
\section{Introduction}
The aim of this paper is to provide an analytical framework for the transmission problem in two dimensions that is applicable to an object of arbitrary shape. We let $\Omega$ be a simply connected bounded domain in $\mathbb{R}^2$ with a piecewise smooth boundary, possibly with corners, where the background is homogeneous with dielectric constant $\epsilon_m$ and $\Omega$ is occupied by a material of dielectric constant $\epsilon_c$. We consider the interface problem \begin{equation}\label{cond_eqn0} \begin{cases} \displaystyle\nabla\cdot\sigma\nabla u=0\quad&\mbox{in }\mathbb{R}^2, \\
\displaystyle u(x) - H(x) =O({|x|^{-1}})\quad&\mbox{as } |x| \to \infty \end{cases} \end{equation} with $ \sigma=\epsilon_c\chi_{\Omega}+\epsilon_m\chi_{\mathbb{R}^2\setminus \overline{\Omega}}$ for a given background field $H$. The symbol $\chi$ indicates the characteristic function. The solution $u$ should satisfy the transmission condition
$$u\big|^+=u\big|^-\quad\mbox{and }\quad \epsilon_m\pd{u}{\nu}\Big|^+=\epsilon_c\pd{u}{\nu}\Big|^-\qquad\mbox{a.e. on }\partial\Omega.$$ Here, $\nu$ is the outward unit normal vector on $\partial\Omega$ and the symbols $+$ and $-$ indicate the limit from the exterior and interior of $\Omega$, respectively. We may interpret the conductivity problem as the quasi-static formulation of electric fields or anti-plane elasticity. In recent years there has been increased interest in the analysis of the transmission problem in relation to applications in various areas such as inverse problems, invisibility cloaking, and nano-photonics \cite{Ammari:2004:RSI:book, Ciraci:2012:PUL,Milton:2006:CEA,Pendry:2006:CEF}.
A classical way to solve \eqnref{cond_eqn0} is to use the layer potential ansatz \begin{equation}\label{eqn:layerpotentialansatz} u(x) = H(x)+\mathcal{S}_{\partial\Omega}[\varphi](x), \end{equation} where $\mathcal{S}_{\partial\Omega}$ indicates the single layer potential associated with the fundamental solution to the Laplacian and $\varphi$ involves the inversion of $\lambda I - \mathcal{K}_{\partial\Omega}^{*}$, where $\lambda =\frac{\epsilon_c+\epsilon_m}{2(\epsilon_c-\epsilon_m)}$ and $\mathcal{K}_{\partial\Omega}^{*}$ is the Neumann--Poincar\'{e} (NP) operator (see \cite{Escauriaza:1992:RTW, Escauriaza:1993:RPS,Kellogg:1929:FPT,Kenig:1994:HAT}). We reserve the mathematical details for the next section. The boundary integral equation can be numerically solved with high precision even for domains with corners \cite{Helsing:2013:SIE}.
In the present paper, we provide a new series solution method to the transmission problem for a domain of arbitrary shape. When $\Omega$ has a simple shape such as a disk or an ellipse, there are globally defined orthogonal coordinates, namely the polar coordinates or the elliptic coordinates. In these coordinate systems the transmission problem \eqnref{cond_eqn0} can be solved by analytic series expansion; see for example \cite{Ammari:2019:SRN, Ando:2016:APR, Milton:2006:CEA}. In \cite{Ammari:2019:SRN}, an algebraic domain, which is the image of the unit disk under the complex mapping $w+\frac{a}{w^m}$ for some $m\in\mathbb{N},a\in\mathbb{R}$, was considered. For a domain of arbitrary shape, unlike in the case of a disk or an ellipse, an orthogonal coordinate system can be defined only locally and there is none known that is defined on the whole space $\mathbb{R}^2.$ This is the main obstacle when we seek to find the explicit series solution to \eqnref{cond_eqn0}.
As far as we know, there has been no previous work that provides a series solution to the transmission problem for a domain of arbitrary shape. The key idea of our work is to define a curvilinear orthogonal coordinate system only on the exterior region $\mathbb{R}^2\setminus{\overline{\Omega}}$ by using the exterior conformal mapping associated with $\Omega$. We construct harmonic basis functions in the exterior region that decay at infinity by using the coordinates. We then adopt the Faber polynomials, first introduced by G. Faber in \cite{Faber:1903:PE}, as basis functions on the interior of $\Omega.$ It is worth mentioning that the Faber polynomials have been widely adopted in classical subjects of analysis such as univalent function theory \cite{Duren:1983:UF}, analytic function approximation \cite{Smirnov:1968:FCV}, and orthogonal polynomial theory \cite{Suetin:1974:POR}. The Grunsky inequalities, which concern the coefficients of the Faber polynomials, are known to be related to the Fredholm eigenvalues \cite{Schiffer:1981:FEG}. Recently, the Faber polynomials were applied to compute the conformal mapping \cite{Wala:2018:CMD}.
Our results explicitly reveal the relationship among the layer potential operators, the Faber polynomials and the Grunsky coefficients. For a set of density basis functions on $\partial\Omega$, namely $\{\zeta_m\}$, which we define in terms of the curvilinear orthogonal coordinates, we derive explicit series expressions for the single layer potential and the NP operator. Similar considerations hold for the double layer potential. The results are summarized in Theorem \ref{thm:series}. One of the remarkable consequences of our approach is that the NP operator with respect to the basis $\{\zeta_m\}$ has a doubly infinite, self-adjoint matrix representation \begin{equation}\label{eqn:matrixKstar} \displaystyle[\mathcal{K}_{\partial\Omega}^{*}]=\frac{1}{2}\begin{bmatrix} \displaystyle& & \vdots & & & &\vdots & &\\[2mm] \displaystyle& 0 & 0 & 0& 0&{\mu}_{3,1}&{\mu}_{3,2}&{\mu}_{3,3}&\\[2mm] \displaystyle\cdots& 0 & 0 & 0& 0&{\mu}_{2,1}&{\mu}_{2,2}&{\mu}_{2,3}&\cdots\\[2mm] \displaystyle& 0 & 0 & 0& 0&{\mu}_{1,1}&{\mu}_{1,2}&{\mu}_{1,3}&\\[2mm] \displaystyle& 0 & 0 & 0& 1&0&0&0&\\[2mm] \displaystyle&{\overline{{\mu}_{1,3}}}& {\overline{{\mu}_{1,2}}}\displaystyle&{\overline{{\mu}_{1,1}}}&0&0&0&0&\\[2mm] \displaystyle\cdots&{\overline{{\mu}_{2,3}}}&{\overline{{\mu}_{2,2}}}\displaystyle&{\overline{{\mu}_{2,1}}}&0&0&0&0&\cdots\\[2mm] \displaystyle&\overline{{\mu}_{3,3}}& {\overline{{\mu}_{3,2}}}\displaystyle&{\overline{{\mu}_{3,1}}}&0&0&0&0&\\[2mm] \displaystyle& &\vdots & & & &\vdots& & \end{bmatrix}. \end{equation} It is worth emphasizing that $\mathcal{K}_{\partial\Omega}$ and $\mathcal{K}^*_{\partial\Omega}$ are represented by the same matrix operator with respect to two different sets of boundary basis functions; see the discussion below Theorem \ref{thm:series} for more details. The matrix formulation of the NP operators provides us with a simple numerical scheme to compute the solution of the transmission problem and also to approximate the spectrum of $\mathcal{K}^*_{\partial\Omega}$ for a smooth domain, by use of the finite section method.
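As a minimal illustration of this scheme (a sketch under stated assumptions rather than an optimized implementation), the truncation of the matrix in \eqnref{eqn:matrixKstar} to the indices $|m|\leq N$ can be assembled and diagonalized directly once the entries ${\mu}_{k,m}=\sqrt{m/k}\,c_{k,m}/\gamma^{m+k}$ are available. In the Python sketch below, the array \texttt{mu} is assumed to store ${\mu}_{k,m}$ for $1\leq k,m\leq N$; the ellipse $\Psi(w)=w+c/w$, whose only nonzero Grunsky coefficients are $c_{m,m}=c^m$, serves as a test case, and the function and variable names are purely illustrative.
\begin{verbatim}
# Sketch of the finite section method for the matrix [K*] (indices |m| <= N).
# Assumes mu[k-1, m-1] = mu_{k,m} = sqrt(m/k) c_{k,m} / gamma^{m+k}.
import numpy as np

def np_spectrum_finite_section(mu, N):
    A = np.zeros((2 * N + 1, 2 * N + 1), dtype=complex)
    A[N, N] = 1.0                                        # zeroth mode: eigenvalue 1/2
    for k in range(1, N + 1):
        for m in range(1, N + 1):
            A[N - k, N + m] = mu[k - 1, m - 1]           # upper-right block
            A[N + m, N - k] = np.conj(mu[k - 1, m - 1])  # lower-left block
    return np.linalg.eigvalsh(0.5 * A)                   # Hermitian since mu_{k,m} = mu_{m,k}

# Test case: ellipse Psi(w) = w + c/w with gamma = 1, c = 0.5; the computed
# eigenvalues should match {1/2} and +/- (1/2)(c/gamma^2)^m, m = 1, 2, ...
N, gamma, c = 20, 1.0, 0.5
mu = np.diag([(c / gamma**2) ** m for m in range(1, N + 1)]).astype(complex)
print(np.sort(np_spectrum_finite_section(mu, N)))
\end{verbatim}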
The proposed method requires the coefficients of the exterior conformal mapping associated with the inclusion, and we derive an explicit integral formula for these coefficients.
To state the result more simply, we identify $z=x_1+ix_2$ in $\mathbb{C}$ with $x=(x_1,x_2)$ in $\mathbb{R}^2$. We assume that $\Psi$ maps $\{w\in\mathbb{C}:|w|>\gamma\}$ conformally onto $\mathbb{C}\setminus\overline{\Omega}$ with $\gamma>0$ and that it has Laurent series expansion \begin{equation}\label{conformal:Psi} \Psi(w)=w+a_0+\frac{a_1}{w}+\frac{a_2}{w^2}+\cdots. \end{equation} From the Riemann mapping theorem there exist unique $\gamma$ and $\Psi$ satisfying such properties; see for example \cite[Chapter 1.2]{Pommerenke:1992:BBC}. It turns out (see Theorem \ref{thm:a_k} and its proof in section \ref{sec:transmission_solution}) that the coefficients satisfy \begin{align*} \displaystyle&\gamma^2 = \frac{1}{2\pi}\int_{\partial\Omega}z\overline{\varphi(z)}\,d\sigma(z),\\
\displaystyle&a_m = \frac{\gamma^{m-1}}{2\pi}\int_{\partial\Omega}z|\varphi(z)|^{-m+1}(\varphi(z))^m\,d\sigma(z),\quad m=0,1,\dots, \end{align*} where $$\varphi(z)=(I-2\mathcal{K}^*_{\partial\Omega})^{-1}(\nu_1+i\nu_2).$$ This formula leads to a numerical method to compute the exterior conformal mapping coefficients for a given domain of arbitrary shape (and the interior conformal mapping by reflecting the domain across a circle). It is worth mentioning that a numerical scheme for computation of the conformal mapping in terms of the double layer potential was observed in \cite{Wala:2018:CMD}.
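As a quick consistency check of these formulas, consider the disk $\Omega=\{|z|<r\}$ centered at the origin. In this case $\mathcal{K}^*_{\partial\Omega}$ maps every density to a constant multiple of its mean, so it vanishes on mean-zero densities and $\varphi(z)=\nu_1+i\nu_2=e^{i\theta}$ on $|z|=r$. The formulas then give $\gamma^2=\frac{1}{2\pi}\int_0^{2\pi}re^{i\theta}\,\overline{e^{i\theta}}\,r\,d\theta=r^2$ and $a_m=\frac{r^{m-1}}{2\pi}\int_0^{2\pi}re^{i\theta}e^{im\theta}\,r\,d\theta=0$ for all $m\geq0$, in agreement with $\Psi(w)=w$ and $\gamma=r$.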
The rest of the paper is organized as follows. In section 2 we formulate the transmission problem \eqnref{cond_eqn0} using boundary integrals. Section 3 constructs harmonic basis functions by using the Faber polynomials. We then define two separable Hilbert spaces on $\partial \Omega$ and obtain their properties in section 4. Section 5 is devoted to deriving series expansions of the single and double layer potentials and the NP operators. We finally investigate the properties of the NP operators in the defined Hilbert spaces and provide the numerical scheme to compute the solution to the transmission problem and the spectrum of $\mathcal{K}^*_{\partial\Omega}$ in section 6, and we conclude with some discussion.
\section{Boundary integral formulation}\label{sec:integralformulation}
For $\varphi\in L^2(\partial\Omega)$, we define \begin{align*}
\mathcal{S}_{\partial\Omega}[\varphi](x)&=\int_{\partial\Omega}\Gamma(x-y)\varphi(y)\,d\sigma(y),~~~~x\in\mathbb{R}^2,\\[1.5mm]
\mathcal{D}_{\partial\Omega}[\varphi](x)&=\int_{\partial\Omega}\frac{\partial}{\partial\nu_y}\Gamma(x-y)\varphi(y)\,d\sigma(y),~~~~x\in\mathbb{R}^2\setminus\partial\Omega, \end{align*}
where $\Gamma$ is the fundamental solution to the Laplacian, {\it i.e.}, $$\Gamma(x)=\frac{1}{2\pi}\ln|x|$$ and $\nu_y$ denotes the outward unit normal vector on $\partial\Omega$. We call $\mathcal{S}_{\partial\Omega}[\varphi]$ a single layer potential and $\mathcal{D}_{\partial\Omega}[\varphi]$ a double layer potential associated with the domain $\Omega$. The Neumann-Poincar\'{e} (NP) operators $\mathcal{K}_{\partial\Omega}$ and $\mathcal{K}_{\partial\Omega}^*$ are defined as \begin{align*}
\displaystyle\mathcal{K}_{\partial\Omega}^{*}[\varphi](x)=p.v.\frac{1}{2\pi}\int_{\partial\Omega}\frac{\left<x-y,\nu_x\right>}{|x-y|^2}\varphi(y)\,d\sigma(y),\\[1.5mm]
\displaystyle \mathcal{K}_{\partial\Omega}[\varphi](x)=p.v.\frac{1}{2\pi}\int_{\partial\Omega}\frac{\left<y-x,\nu_y\right>}{|x-y|^2}\varphi(y)\,d\sigma(y). \end{align*} Here $p.v.$ denotes the Cauchy principal value. One can easily see that $\mathcal{K}_{\partial\Omega}^{*}$ is the $L^2$ adjoint of $\mathcal{K}_{\partial\Omega}$.
The single and double layer potentials satisfy the following jump relations on the interface, as shown in \cite{Verchota:1984:LPR}: \begin{align}
\displaystyle \mathcal{S}_{\partial\Omega}[\varphi]\Big|^{+}(x)&=\mathcal{S}_{\partial\Omega}[\varphi]\Big|^{-}(x)~~~~~~~~\text{a.e. }x\in\partial\Omega,\notag\\[1.5mm]
\displaystyle\frac{\partial}{\partial\nu}\mathcal{S}_{\partial\Omega}[\varphi]\Big|^{\pm}(x)&=\left(\pm\frac{1}{2}I+\mathcal{K}_{\partial\Omega}^{*}\right)[\varphi](x)~~~~~~~~\text{a.e. }x\in\partial\Omega\label{eqn:Kstarjump},\\[1.5mm]
\displaystyle \mathcal{D}_{\partial\Omega}[\varphi]\Big|^{\pm}(x)&=\left(\mp\frac{1}{2}I+\mathcal{K}_{\partial\Omega}\right)[\varphi](x)~~~~~~~~\text{a.e. }x\in\partial\Omega\notag,\\[1.5mm]
\displaystyle \frac{\partial}{\partial\nu}\mathcal{D}_{\partial\Omega}[\varphi]\Big|^{+}(x)&=\frac{\partial}{\partial\nu}\mathcal{D}_{\partial\Omega}[\varphi]\Big|^{-}(x)~~~~~~~~\text{a.e. }x\in\partial\Omega.\notag \end{align} Due to the jump formula \eqnref{eqn:Kstarjump}, the solution to \eqnref{cond_eqn0} can be expressed as \begin{equation}\label{umh} u(x)=H(x)+\mathcal{S}_{\partial \Omega}[\varphi](x),\quad x\in\mathbb{R}^2, \end{equation} where $\varphi$ satisfies \begin{equation}\label{eqn:boundaryintegral_varphi} \varphi=(\lambda I-\mathcal{K}_{\partial \Omega}^*)^{-1}\left[\nu\cdot\nabla H\right]\quad\mbox{on }\partial \Omega \end{equation} with $\lambda =\frac{\epsilon_c+\epsilon_m}{2(\epsilon_c-\epsilon_m)}$.
For $|\lambda|\geq 1/2$, the operator $\lambda I -\mathcal{K}_{\partial\Omega}^*$ is invertible on $L^2_0(\partial\Omega)$ \cite{Escauriaza:1992:RTW,Kellogg:1929:FPT}; see \cite{Ammari:2013:MSM:book, Ammari:2004:RSI:book} for more details and references.
Note that $\lambda$ in \eqnref{eqn:boundaryintegral_varphi} belongs to the resolvent of $\mathcal{K}^*_{\partial\Omega}$ for any $0<\epsilon_c/\epsilon_m\neq 1<\infty$.
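As a simple illustration of \eqnref{umh} and \eqnref{eqn:boundaryintegral_varphi} (a standard special case, included here only as a reference point), let $\Omega$ be the disk of radius $r$ centered at the origin and $H(x)=x_1$. Then $\nu\cdot\nabla H=\cos\theta$ on $\partial\Omega$ and $\mathcal{K}^*_{\partial\Omega}[\cos\theta]=0$, so that $\varphi=\lambda^{-1}\cos\theta$ and
$$u(x)=x_1-\frac{\epsilon_c-\epsilon_m}{\epsilon_c+\epsilon_m}\,\frac{r^2x_1}{|x|^2}\quad\mbox{for }|x|>r,\qquad u(x)=\frac{2\epsilon_m}{\epsilon_c+\epsilon_m}\,x_1\quad\mbox{for }|x|\leq r,$$
which is the classical solution for a circular inclusion.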
Let us review some properties of the NP operators. The operator $\mathcal{K}_{\partial\Omega}$ is symmetric in $L^2(\partial \Omega)$ only for a disk or a ball \cite{Lim:2001:SBI}. However, $\mathcal{K}_{\partial\Omega}$ and $\mathcal{K}^*_{\partial\Omega}$ can be symmetrized using Plemelj's symmetrization principle (see \cite{Khavinson:2007:PVP}) \begin{equation}\label{eqn:symmetrization} \mathcal{S}_{\partial\Omega}\mathcal{K}_{\partial\Omega}^*=\mathcal{K}_{\partial\Omega}\mathcal{S}_{\partial\Omega}. \end{equation} We denote by $H^{-1/2}_0(\partial\Omega)$ the space of functions $u$ contained in $H^{-1/2}(\partial\Omega)$ such that $\langle u, 1\rangle_{-1/2,1/2}=0$, where $\langle\cdot,\cdot\rangle_{-1/2,1/2}$ is the duality pairing between the Sobolev spaces $H^{-1/2}(\partial\Omega)$ and $H^{1/2}(\partial\Omega)$. The operator $\mathcal{K}_{\partial\Omega}^*$ is self-adjoint in $\mathcal{H}^*$ which is the space $H^{-1/2}_0(\partial\Omega)$ equipped with the new inner product \begin{equation} \langle \varphi,\psi\rangle_{\mathcal{H}^*}:=-\langle \varphi,\mathcal{S}_{\partial\Omega}[\psi]\rangle_{-1/2,1/2}. \end{equation}
The spectrum of $\mathcal{K}_{\partial\Omega}^*$ on $\mathcal{H}^*$ lies in $(- 1/2 , 1/2)$ \cite{Escauriaza:2004:TPS, Fabes:1992:SRC, Kellogg:1929:FPT}; see also \cite{Kang:2018:SPS,Krein:1998:CLO} for the permanence of the spectrum of the NP operator with respect to different norms. If $\partial\Omega$ is $C^{1,\alpha}$ with some $\alpha>0$, then $\mathcal{K}^*_{\partial\Omega}$ is compact as well as self-adjoint on $\mathcal{H}^*$. Hence, $\mathcal{K}_{\partial\Omega}^*$ has discrete real eigenvalues contained in $(-1/2,1/2)$ that accumulate at $0$.
Plasmonic materials, whose permittivity has a negative real part and a small loss parameter, admit the so-called plasmonic resonance when the corresponding $\lambda$ is very close to the spectrum of $\mathcal{K}_{\partial \Omega}^*$.
We refer the reader to \cite{Ando:2016:APR, Bonnetier:2018:PRB:preprint, Helsing:2017:CSN,Helsing:2013:PCC,Kang:2017:SRN, Perfekt:2017:ESN} and references therein for recent results on the spectral properties of the NP operators and plasmonic resonance.
\section{Geometric harmonic basis} In this section, we construct a set of harmonic basis functions based on the exterior conformal mapping and the Faber polynomials. We introduce only the key properties of Faber polynomials in this section; however, we offer more information in the appendix in order to make this paper self-contained. \subsection{Faber Polynomials}\label{sec:Faber}
Let $z=\Psi(w)$ be the exterior conformal mapping associated with $\Omega$ given by \eqnref{conformal:Psi}. The mapping $\Psi$ uniquely defines a sequence of $m$-th order monic polynomials $\{F_m(z)\}_{m=0}^\infty$, called the Faber polynomials, via the generating function relation \begin{equation}\label{eqn:Fabergenerating}
\frac{\Psi'(w)}{\Psi(w)-z}=\sum_{m=0}^\infty \frac{F_m(z)}{w^{m+1}},\quad z\in\overline{\Omega},\ |w|>\gamma.
\end{equation} In what follows, $\partial\Omega_r$ denotes the image of $|w|=r~(r\geq\gamma)$ under the mapping $\Psi(w)$ and $\Omega_r$ is the region enclosed by $\partial\Omega_r.$ For every fixed $z\in\overline{\Omega_r}$ with $r\geq\gamma$,
the series \eqref{eqn:Fabergenerating} converges in the domain $|w|>r$ and, furthermore, it uniformly converges
in the closed domain $|w|\geq r$ if $z\in\Omega_r$ (see \cite{Smirnov:1968:FCV} for more details). Let us state the properties of Faber polynomials that are the key technical tools of this paper (see appendix \ref{section:appdixFaber} for the derivation).
\begin{itemize} \item Decomposition of the fundamental solution to the Laplacian:
\begin{equation}\label{eqn:log_decomp}
\log(\Psi({w})-{z})=\log w -\sum_{m=1}^\infty \frac{1}{m}F_m({z})w^{-m},\quad |w|>r,\ z\in\Omega_r. \end{equation} One can derive this equation by integrating \eqnref{eqn:Fabergenerating} with respect to $w.$
\item Series expansion in the region $\mathbb{C}\setminus\overline{\Omega}$:
\begin{equation}\label{eqn:Faberdefinition}
F_m(\Psi(w))
=w^m+\sum_{k=1}^{\infty}c_{m,k}{w^{-k}},\quad m=1,2,\dots. \end{equation} The coefficients $c_{m,k}$ are called the Grunsky coefficients.
\item Grunsky identity: \begin{equation}\label{eqn:Grunskyidentity}
k c_{m,k}=m c_{k,m}\quad \text{for any } m,k\geq1. \end{equation}
\item Bounds on the Grunsky coefficients: \begin{equation}\label{eqn:GrunskyBounds}
\sum_{k=1}^\infty \left| \sqrt{\frac{k}{m}}\frac{c_{m,k}}{\gamma^{m+k}} \right|^2\leq 1\quad \text{for any } m\geq 1. \end{equation}
\end{itemize}
\begin{remark}
Once the coefficients of the exterior conformal mapping $\gamma, a_0, a_1, a_2\dots$ are known, the Faber polynomials and the Grunsky coefficients can be easily computed via the recursion formulas
\eqnref{eqn:Faberrecursion} and \eqnref{eqn:cnkrecursion} in the appendix. \end{remark}
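For illustration, consider the ellipse obtained as the image of $\{|w|>\gamma\}$ under $\Psi(w)=w+c/w$ with $0<c<\gamma^2$. A direct computation from \eqnref{eqn:Fabergenerating} gives $F_1(z)=z$, $F_2(z)=z^2-2c$, $F_3(z)=z^3-3cz$, and, in general, $F_m(\Psi(w))=w^m+c^mw^{-m}$; hence the only nonvanishing Grunsky coefficients are $c_{m,m}=c^m$, and the bound \eqnref{eqn:GrunskyBounds} reduces to $(c/\gamma^2)^{2m}\leq1$ in this case.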
\subsection{Geometric harmonic basis functions}\label{sec:geometric_basis}
While it is straightforward to find appropriate harmonic basis functions for a circle or an ellipse, it becomes complicated for a domain with arbitrary geometry. Since the solution to \eqnref{cond_eqn0} is harmonic in each of the two regions $\Omega$ and $\mathbb{C}\setminus\overline{\Omega}$, we require two sets of basis functions for the interior and exterior of $\Omega$, separately. We construct two systems satisfying the following: \begin{itemize} \item[(a)] {\it Interior harmonic basis}, consisting of polynomial functions, such that any harmonic function in a domain containing $\overline{\Omega}$ can be expressed as a series of these basis functions.
\item[(b)] {\it Exterior harmonic basis}, consisting of harmonic functions in $\mathbb{C}\setminus\overline{\Omega}$, such that any harmonic function in $\mathbb{C}\setminus\overline{\Omega}$ which decays like $O(|x|^{-1})$ as $|x|\rightarrow\infty$ can be expressed as a series of these basis functions. \end{itemize} In addition, we require the two sets of basis functions to have explicit relations on $\partial \Omega$ such that the interior and exterior boundary values of the solution can be matched.
We remind the reader that the Faber polynomials are monic and admit the series expansion in the exterior region $\mathbb{C}\setminus\overline{\Omega}$ in terms of $w^{\pm k}$, $k\in\mathbb{N}$, as shown in \eqnref{eqn:Faberdefinition}. Each $w^{-k}$ decays to zero as $|z|\rightarrow\infty$ since $ w=\Psi^{-1}(z)=z+O(1) $
and is harmonic (see \eqref{eqn:laplacian}). Because of these reasons we set
\begin{itemize} \item[(a)] {\it Interior harmonic basis}: $\left\{F_m(z)\right\}_{m\in\mathbb{N}},$ \item[(b)] {\it Exterior harmonic basis}: $\left\{w^{m}(z)\right\}_{m\in\mathbb{Z}}$. \end{itemize} Here, $w^m(z)$ means the function $\left(\Psi^{-1}(z)\right)^m$.
\section{Two Hilbert spaces on $\partial\Omega$: definition and duality}\label{sec:Kpm:def} In the previous section we defined the harmonic basis functions in the interior and exterior of the domain $\Omega$. We remind the reader that the boundary integral formulation \eqnref{umh} involves the density function $\varphi$ on $\partial\Omega$ satisfying \eqnref{eqn:boundaryintegral_varphi}. In this section we construct a basis for density functions on $\partial\Omega$ with which one can reformulate \eqnref{umh} and \eqnref{eqn:boundaryintegral_varphi} in series form.
First, we introduce the curvilinear coordinate system in the exterior region $\mathbb{C}\setminus{\overline{\Omega}}$ associated with the exterior conformal mapping $\Psi$. Then, we construct basis functions on $\partial\Omega$ with the coordinates and define two Hilbert spaces $K^{\pm 1/2}(\partial\Omega)$ that are dual to each other and can be identified with $l^2(\mathbb{C})$ via the Fourier series expansions.
\subsection{Orthogonal coordinates in $\mathbb{C}\setminus \Omega$}\label{sec:coordinate}
In view of the domain representation using its exterior conformal mapping $\Psi$, it is natural to adopt the curvilinear coordinate system generated by $z=\Psi(w)$. To deal with the transmission condition on $\partial\Omega$ in terms of $\Psi$, the regularity of $\Psi$ up to the boundary $\partial\Omega$ should be assumed. The continuous extension of the conformal mapping to the boundary is well known (Carath\'{e}odory theorem \cite{Caratheodory:1913:GBR}). When the boundary $\partial\Omega$ is $C^{1,\alpha}$, the exterior conformal mapping $\Psi$ allows the $C^{1,\alpha}$ extension to $\partial\Omega$ by Kellogg-Warschawski theorem \cite[Theorem 3.6]{Pommerenke:1992:BBC}. If $\Omega$ has a corner point, then the regularity of $\Psi$ depends on the angle of $\partial\Omega$ at the corner point. The regularity of the interior conformal mapping for a domain with corners is well established (see for example \cite{Pommerenke:1992:BBC}), and the results can be equivalently translated into the exterior case. We provide the regularity result for the exterior conformal mapping $\Psi$ in terms of the exterior angles of corner points in appendix \ref{appen:boundarybehavior}. Figure \ref{fig:angle} illustrates the exterior angle at a corner point. \begin{figure}
\caption{A domain with one corner point whose exterior angle is $\alpha \pi$.}
\label{fig:angle}
\end{figure}
Hereafter, we assume that the boundary $\partial\Omega$ is a piecewise $C^{1,\alpha}$ Jordan curve, possibly with a finite number of corner points without inward or outward cusps. We define the coordinate system which associates each $z\in \mathbb{C}\setminus\Omega$ with the modified polar coordinate $(\rho,\theta)\in[\rho_0,\infty)\times[0,2\pi)$, where $\rho_0=\ln\gamma$, via the relation $$z=\Psi(e^{\rho+i\theta}).$$ We let $\Psi(\rho,\theta)$ indicate $\Psi(e^{\rho+i\theta})$ for convenience.
Denote the scale factors as $h_\rho=|\frac{\partial \Psi}{\partial\rho}|$ and $h_\theta=|\frac{\partial \Psi}{\partial\theta}|$. The partial derivatives satisfy $i\frac{\partial \Psi}{\partial \rho}=\frac{\partial \Psi}{\partial \theta}$ so that $\{\frac{\partial \Psi}{\partial\rho},\frac{\partial \Psi}{\partial\theta}\}$ are orthogonal vectors in $\mathbb{C}$ and the scale factors coincide.
We set $$h(\rho,\theta):=h_\rho=h_\theta.$$ We remark that the scale factor $h(\rho,\theta)$ is integrable on the boundary $\partial\Omega$ (for the proof see Lemma \ref{lemma:regularity} in the appendix). One can easily show, for a function $u$ defined in the exterior of $\Omega$, that \begin{equation}
\Delta u=\frac{1}{h^2(\rho,\theta)}\left(\frac{\partial^2 u}{\partial \rho^2}+\frac{\partial^2 u}{\partial \theta^2}\right).\label{eqn:laplacian} \end{equation}
On $\partial\Omega=\{\Psi(\rho_0,\theta):\theta\in[0,2\pi)\}$, the length element is $d\sigma(z)=h(\rho_0,\theta)d\theta$ for $z=\Psi(\rho_0,\theta)$. The exterior normal derivative of $u(z)=(u\circ\Psi)(\rho,\theta)$ is
\begin{equation}
\frac{\partial u}{\partial \nu}\Big|_{\partial\Omega}^{+}(z)=\frac{1}{h}\frac{\partial }{\partial \rho}u(\Psi(e^{\rho+i\theta}))\Big|_{\rho\rightarrow\rho_0^+}.\label{eqn:normalderiv}
\end{equation}
A great advantage of using the coordinate system $(\rho,\theta)$ is that, thanks to $h_\rho=h_\theta$, the integration of the normal derivative for $u$ is simply
\begin{equation}\label{eqn:boundaryintegral}
\int_{\partial\Omega}\frac{\partial u}{\partial\nu}\Big|^+_{\partial\Omega}(z)\, d\sigma(z)=\int_{0}^{2\pi}\frac{\partial u}{\partial \rho}(\rho_0,\theta)\Big|_{\rho\rightarrow\rho_0^+}\, d\theta.
\end{equation}
\subsection{Geometric density basis functions} In this subsection, we set up two systems of density basis functions on $\partial\Omega$ whose usage will be clear in the subsequent sections.
Define for each $m\in\mathbb{Z}$ the density functions
\begin{equation}
\begin{cases}
\displaystyle\widetilde{\eta}_m(z)=\displaystyle\widetilde{\eta}_m(\Psi(e^{\rho_0+i\theta}))=e^{im\theta},\\
\displaystyle\widetilde{\zeta}_m(z)=\displaystyle\widetilde{\zeta}_m(\Psi(e^{\rho_0+i\theta}))=\frac{e^{im\theta}}{h(\rho_0,\theta)}.
\end{cases}
\end{equation}
We then normalize them (with respect to the norms that will be defined later) as
\begin{equation}
\begin{cases}
\displaystyle \eta_m(z)=\displaystyle|m|^{-\frac{1}{2}}\widetilde{\eta}_m(z),\\[2mm]
\displaystyle\zeta_m (z)= \displaystyle|m|^{\frac{1}{2}}\;\widetilde{\zeta}_m(z),\quad m\neq 0.
\end{cases}
\end{equation}
For $m=0$, we set $\zeta_0=\widetilde{\zeta}_0$ and $\eta_0=\widetilde{\eta}_0=1$.
Due to Lemma \ref{lemma:regularity} in the appendix, we have $h(\rho_0,\theta),\frac{1}{h(\rho_0,\theta)}\in L^1([0,2\pi])$, and, hence,
\begin{equation}\label{zetaL2}\widetilde{\zeta}_m(z),\ \widetilde{\eta}_m(z),\ \zeta_m(z),\ \eta_m(z)\in L^2(\partial\Omega).\end{equation}
In Figure \ref{fig:kite}, two geometric boundary basis functions $\widetilde{\eta}_1$ and $\widetilde{\zeta}_1$ are drawn for a domain enclosed by a parametrized curve, where the corresponding conformal mapping is computed by using
Theorem \ref{thm:a_k}.
Before defining new spaces on $\partial\Omega$, let us consider the Sobolev spaces $H^{\pm 1/2}$ on the $1$-dimensional torus $\mathbb{T}^1=\mathbb{R}^1/2\pi\mathbb{Z}^1$.
We denote by $L^2(\mathbb{T}^1)$ the space consisting of periodic functions $f$ on $\mathbb{T}^1$ such that $$\|f\|^2_{L^2(\mathbb{T}^1)}=\frac{1}{2\pi}\int_0^{2\pi}|f(\theta)|^2 d\theta<\infty.$$
The space $L^2(\mathbb{T}^1)$ can be identified with $l^2(\mathbb{Z})$ via the Fourier basis. Similarly, the Sobolev space $H^{1/2}(\mathbb{T}^1)$ admits the Fourier series characterization as follows:
$$H^{1/2}(\mathbb{T}^1)=\left\{\varphi =\sum_{m=-\infty}^\infty a_m e^{im\theta}\;\bigg|\;\|\varphi\|_{H^{1/2}(\mathbb{T}^1)}^2=|a_0|^2+\sum_{m=-\infty,m\neq0}^\infty |m||a_m|^2<\infty\right\}.$$
For each $l\in H^{-1/2}(\mathbb{T}^1)=\left(H^{1/2}(\mathbb{T}^1)\right)^*$, it satisfies
$$\|l\|_{H^{-1/2}(\mathbb{T}^1)}^2=|b_0|^2+\sum_{m=-\infty, m\neq0}^\infty |m|^{-1}|b_m|^2<\infty,\quad\mbox{where } b_m = l(e^{im\theta}).$$
Conversely, for each sequence $(b_m)_{m\in\mathbb{Z}}$ satisfying $|b_0|^2+\sum_{m=-\infty, m\neq 0}^\infty |m|^{-1}|b_m|^2<\infty$, there exists $l\in H^{-1/2}(\mathbb{T}^1)$ such that $l(e^{im\theta})=b_m$ for each $m$.
We will define two separable Hilbert spaces on $\partial\Omega$ in a similar manner in the following subsection.
\vskip .5cm
\begin{figure}
\caption{Kite-shaped domain $\Omega$}
\caption{Level curves of the coordinates $(\rho,\theta)$}
\caption{$\widetilde{\eta}_1(\theta)=e^{i\theta}$}
\caption{$\widetilde{\zeta}_1(\theta)=e^{i\theta}/h(\theta)$}
\caption{A general shaped domain $\Omega$ and its geometric boundary basis functions. The domain $\Omega$ is given by the parametrization $x(t) = \cos(t) + 0.65\cos(2t) - 0.65,\ y(t) = 1.5\sin(t)$, $t\in[0,2\pi]$. (a) illustrates $\partial\Omega$.
(b) shows several level curves of curvilinear coordinates $(\rho,\theta)$ made by the exterior conformal mapping associated with $\Omega$, where the coefficients of $\Psi$ are numerically computed by using Theorem \ref{thm:a_k}.
(c) and (d) show the real (dashed) and imaginary (solid) parts of the basis functions.
}
\label{fig:kite}
\end{figure}
\subsection{Definition of the spaces $K^{1/2}(\partial\Omega)$ and $K^{-1/2}(\partial\Omega)$} For the sake of simplicity, we write $f(\theta)=(f\circ\Psi)(\rho_0,\theta)$ for a function $f$ defined on $\partial\Omega$.
Consider the vector space of functions \begin{align}\label{def:Kzeta}
K^{-1/2}(\partial\Omega)&:=\left\{\varphi:\partial\Omega\rightarrow\mathbb{C}\;\Big| \;\ \sum_{m\in\mathbb{Z}}|a_m|^2<\infty,~a_m=\frac{1}{2\pi}\int_{\partial\Omega}\varphi\overline{\eta_{m}} \,d\sigma\right\}. \end{align} We shall consider two functions $\varphi_1,\varphi_2\in K^{-1/2}(\partial\Omega)$ equivalent if $$\frac{1}{2\pi}\int_{\partial\Omega}\varphi_1\overline{\eta_m }\, d\sigma=\frac{1}{2\pi}\int_{\partial\Omega}\varphi_2\overline{\eta_m }\, d\sigma\quad \text{for all } m\in\mathbb{Z}.$$ We do not distinguish between equivalent functions in $K^{-1/2}(\partial\Omega)$. Among all functions in the equivalence class, denoted by $[\varphi]$, containing a given element $\varphi\in K^{-1/2}(\partial\Omega)$, we take the series expansion with respect to the basis $\{\zeta_m\}$ as the representative of the class $[\varphi]$. In other words, we write $$\varphi=\sum_{m\in\mathbb{Z}}a_m\zeta_m \quad\mbox{with} \quad a_m=\frac{1}{2\pi}\int_{\partial\Omega}\varphi\overline{\eta_{m}}\, d\sigma.$$ Then one can define the inner product and the associated norm in $K^{-1/2}(\partial\Omega)$ in terms of the Fourier coefficients with respect to the basis $\{\zeta_m\}.$ In the same way we define $K^{1/2}(\partial\Omega)$, by exchanging the role of $\{\zeta_m\}$ and $\{\eta_m\}$, as \begin{align}\label{def:Keta}
K^{1/2}(\partial\Omega)&:=\left\{\psi:\partial\Omega\rightarrow\mathbb{C}\;\Big| \;\ \sum_{m\in\mathbb{Z}}|b_m|^2<\infty,~b_m=\frac{1}{2\pi}\int_{\partial\Omega}\psi\overline{\zeta_{m}} \,d\sigma\right\}. \end{align} For any $\psi\in K^{1/2}(\partial\Omega)$, we can write $$\psi= \sum_{m\in\mathbb{Z}}b_m \eta_m\quad \mbox{with}\quad b_m = \frac{1}{2\pi}\int_{\partial\Omega}\psi\overline{\zeta_{m}}\, d\sigma.$$ We identify the two spaces $K^{-1/2}(\partial\Omega)$ and $K^{1/2}(\partial\Omega)$ with $l^2(\mathbb{C})$ and define the inner products via the boundary bases $\{\zeta_m\}$ and $\{\eta_m\}$, respectively. The discussion can be summarized as follows.
\begin{definition}\label{definition:twospaces} We define two Hilbert spaces $K^{-1/2}(\partial\Omega)$ and $K^{1/2}(\partial\Omega)$ by \eqnref{def:Kzeta} and \eqnref{def:Keta} (quotiented by the equivalence class of the zero function) such that they are isomorphic to $l^2(\mathbb{C})$ via the boundary bases $\{\zeta_m\}$ and $\{\eta_m\}$, respectively. In other words, they are \begin{align*}
K^{-1/2}(\partial\Omega)&=\left\{\varphi=\sum_{m\in\mathbb{Z}} a_m \zeta_m\;\Big|\;\ \sum_{m\in\mathbb{Z}} |a_m|^2<\infty\right\},\\
K^{1/2}(\partial\Omega)&=\left\{\psi=\sum_{m\in\mathbb{Z}}b_m \eta_m\;\Big| \;\ \sum_{m\in\mathbb{Z}} |b_m|^2<\infty\right\} \end{align*} equipped with the inner products \begin{align} &\Big(\sum c_m\zeta_m,\ \sum d_m \zeta_m\Big)_{-1/2}=\sum c_m \overline{d_m},\\\label{innerproduct_zeta} &\Big(\sum c_m\eta_m,\ \sum d_m \eta_m\Big)_{1/2}=\sum c_m \overline{d_m}. \end{align} For the sake of notational convenience we may simply write $K^{-1/2}$ and $K^{1/2}$ for the two spaces. \end{definition}
Let us consider the operator $$I(\varphi,\psi)=\frac{1}{2\pi}\int_{\partial\Omega} \varphi(z)\overline{\psi(z)}\,d\sigma(z)\quad \mbox{for }\varphi\in K^{-1/2},\ \psi\in K^{1/2}.$$ For any finite combinations of basis functions it holds that
$I\left(\sum_{|m|\leq N}a_m\zeta_m, \sum_{|m|\leq N}b_m\eta_m\right)=\sum_{|m|\leq N} a_m \overline{b_m},$
and hence we have $|I(\varphi,\psi)|\leq \|\varphi\|_{K^{-1/2}}\|\psi\|_{K^{1/2}}<\infty$. We define a duality pairing between $K^{-1/2}(\partial\Omega)$ and $K^{1/2}(\partial \Omega)$, which is clearly the extension of the $L^2$ pairing: $$(\varphi,\psi)_{-1/2,1/2}=\sum_{m=-\infty}^\infty a_m \overline{b_m}$$ for $\varphi=\sum a_m\zeta_m\in K^{-1/2}(\partial\Omega)$ and $\psi=\sum b_m\eta_m\in K^{1/2}(\partial\Omega)$. Clearly, the pair of indexed families of functions $\{\zeta_m\}$ and $\{\eta_m\}$ is a complete biorthogonal system for $K^{1/2}(\partial\Omega)$ and $K^{-1/2}(\partial\Omega)$.
If the boundary $\partial\Omega$ is smooth enough, the space $K^{\pm1/2}(\partial\Omega)$ coincides with the classical trace spaces $H^{\pm1/2}(\partial\Omega)$. \begin{lemma}\label{smooth:equi} Let $\Omega$ be a simply connected bounded domain with $C^{1,\alpha}$ boundary with some $\alpha>0$. Then the following relations hold: \begin{align*} K^{1/2}(\partial\Omega)&=H^{1/2}(\partial\Omega),\\ K^{-1/2}(\partial\Omega)&=H^{-1/2}(\partial\Omega). \end{align*}
The norm $\|\cdot\|_{K^{1/2}(\partial\Omega)}$ is equivalent to $\|\cdot\|_{H^{1/2}(\partial\Omega)}$ and the norm $\|\cdot\|_{K^{-1/2}(\partial\Omega)}$
to $\|\cdot\|_{H^{-1/2}(\partial\Omega)}$. Moreover, the two duality pairings $(\cdot,\cdot)_{-1/2,1/2}$ and $\langle\cdot,\cdot\rangle_{-1/2,1/2}$ coincide. \end{lemma} \begin{proof} For a general domain $D$, the space $H^{1/2}(\partial D)$ can be characterized as the Hilbert space of functions $u:\partial D\rightarrow\mathbb{C}$ equipped with the fractional Sobolev-Slobodeckij norm
$$\|u\|_{H^{1/2}(\partial D)}^2= \|u\|_{L^2(\partial D)}^2+\int_{\partial D}\int_{\partial D}\frac{|u(z)-u(\tilde{z})|^2}{|z-\tilde{z}|^2 }\, d\sigma(z)d\sigma(\tilde{z})<\infty.$$ Since $h$ and $1/h$ are non-vanishing and continuous on $\partial\Omega$, we deduce that $u\in H^{1/2}(\partial\Omega)$ if and only if $(u\circ \Psi)(\rho_0,\cdot)\in H^{\frac{1}{2}}(\mathbb{T}^1)$ and
$\|u\|_{H^{1/2}(\partial\Omega)}\sim \|(u\circ\Psi)(\rho_0,\cdot)\|_{H^{1/2}(\mathbb{T}^1)}.$ Therefore, the lemma follows from the Fourier coefficient characterizations of $H^{\pm1/2}(\mathbb{T}^1)$. \end{proof}
\section{Boundary integral operators in terms of geometric basis} In this section we derive the series expansions in terms of harmonic basis functions for the boundary integral operators related to the integral formulation for the transmission problem. We then apply the results to obtain an explicit formula for the exterior conformal mapping coefficients.
\subsection{Main results}
We set $\mathcal{S}_{\partial\Omega}[\varphi](z)=\mathcal{S}_{\partial\Omega}[\varphi](x)$ for $x=(x_1,x_2)$ and $z=x_1+ix_2$, and other integral operators are defined in the same way. Here we present our main results. The proof is at the end of this subsection.
\begin{theorem}[Series expansion for the boundary integral operators]\label{thm:series} Assume that $\Omega$ is a simply connected bounded domain in $\mathbb{R}^2$ enclosed by a piecewise $C^{1,\alpha}$ Jordan curve, possibly with a finite number of corner points without inward or outward cusps. Let $F_m $ be the $m$-th Faber polynomial of $\Omega$, $c_{i,j}$ be the Grunsky coefficients and $z=\Psi(w)=\Psi(e^{\rho+i\theta})$ for $\rho>\rho_0=\ln \gamma$. \begin{itemize} \item[\rm(a)] We have (for $m=0$) \begin{equation}\label{Scal_zeta0} \mathcal{S}_{\partial\Omega}[\widetilde{\zeta}_0](z)= \begin{cases} \ln \gamma \quad &\mbox{if }z\in\overline{\Omega},\\
\ln|w|\quad&\mbox{if }z\in\mathbb{C}\setminus\overline{\Omega}. \end{cases} \end{equation} For $m=1,2,\dots$, we have
\begin{align}\label{eqn:seriesSLpositive}
\mathcal{S}_{\partial\Omega}[\widetilde{\zeta}_m](z)&=
\begin{cases}
\displaystyle-\frac{1}{2m\gamma^m}F_m(z)\quad&\text{for }z\in\overline{\Omega},\\[2mm]
\displaystyle-\frac{1}{2m\gamma^m}\left(\sum_{k=1}^{\infty}c_{m,k}e^{-k(\rho+i\theta)}+\gamma^{2m}e^{m(-\rho+i\theta)}\right)\quad &\text{for } z\in\mathbb{C}\setminus\overline{\Omega},
\end{cases}\\[3mm]
\label{eqn:seriesSLnegative}
\mathcal{S}_{\partial\Omega}[\widetilde{\zeta}_{-m}](z)&=
\begin{cases}
\displaystyle -\frac{1}{2m\gamma^{m}}\overline{F_{m}(z)}\quad&\text{for }z\in\overline{\Omega},\\[2mm]
\displaystyle-\frac{1}{2m\gamma^m}\left(\sum_{k=1}^{\infty}\overline{c_{m,k}}e^{-k(\rho-i\theta)}+\gamma^{2m}e^{m(-\rho-i\theta)}\right)\quad &\text{for } z\in\mathbb{C}\setminus\overline{\Omega}.
\end{cases}
\end{align}
The series converges uniformly for all $(\rho,\theta)$ such that $\rho\geq\rho_1>\rho_0$.
\item[\rm(b)] We have (for $m=0$) \begin{align}\label{eqn:seriesDLzero}
\displaystyle \mathcal{D}_{\partial\Omega} [1](z)&=
\begin{cases}
\displaystyle 1&\text{for }z\in\Omega,\\
\displaystyle 0&\text{for } z\in\mathbb{C}\setminus\overline{\Omega}.
\end{cases}
\end{align} For $m=1,2,\dots$, we have \begin{align}\label{eqn:seriesDLpositive}
\displaystyle \mathcal{D}_{\partial\Omega} [\widetilde{\eta}_m](z)
&=
\begin{cases}
\displaystyle \frac{1}{2\gamma^m}F_m(z)\quad&\text{for }z\in{\Omega},\\[2mm]
\displaystyle \frac{1}{2\gamma^m}\left(\sum_{k=1}^{\infty}c_{m,k}e^{-k(\rho+i\theta)}-\gamma^{2m}e^{m(-\rho+i\theta)}\right) \quad&\text{for } z\in\mathbb{C}\setminus\overline{\Omega},
\end{cases}\\[3mm] \label{eqn:seriesDLnegative}
\displaystyle \mathcal{D}_{\partial\Omega} [\widetilde{\eta}_{-m}](z)&=
\begin{cases}
\displaystyle \frac{1}{2\gamma^m}\overline{F_m(z)}\quad&\text{for }z\in{\Omega},\\[2mm]
\displaystyle \frac{1}{2\gamma^m}\left(\sum_{k=1}^{\infty}\overline{c_{m,k}}e^{-k(\rho-i\theta)}-\gamma^{2m}e^{m(-\rho-i\theta)}\right) \quad&\text{for } z\in\mathbb{C}\setminus\overline{\Omega}.
\end{cases}
\end{align}
The series converges uniformly for all $(\rho,\theta)$ such that $\rho\geq\rho_1>\rho_0$.
\item[\rm(c)]
We have (for $m=0$)
\begin{equation}\label{eqn:Kcal:zeta0}
\mathcal{K}^*_{\partial\Omega}[\zeta_0]=\frac{1}{2}\zeta_0,
\quad
\mathcal{K}_{\partial\Omega} [1]=\frac{1}{2}.
\end{equation} For $m =1,2,\cdots$
\begin{align}\label{NP_series1}
&\mathcal{K}_{\partial\Omega}^{*}[{\zeta_m}](\theta)=\frac{1}{2}\sum_{k=1}^{\infty}\frac{\sqrt{m}}{\sqrt{k}}\frac{c_{k,m}}{\gamma^{m+k}}\, {\zeta}_{-k}(\theta),\quad
\mathcal{K}_{\partial\Omega}^{*}[{\zeta}_{-m}](\theta)=\frac{1}{2}\sum_{k=1}^{\infty}\frac{\sqrt{m}}{\sqrt{k}}\frac{\overline{c_{k,m}}}{\gamma^{m+k}}\, \zeta_{k}(\theta),\\ \label{NP_series2}
&\mathcal{K}_{\partial\Omega} [\eta_m](\theta)=\frac{1}{2}\sum_{k=1}^{\infty}\frac{\sqrt{k}}{\sqrt{m}}\frac{c_{m,k}}{\gamma^{m+k}}\eta_{-k}(\theta),
\quad
\mathcal{K}_{\partial\Omega} [\eta_{-m}](\theta)=\frac{1}{2}\sum_{k=1}^{\infty}\frac{\sqrt{k}}{\sqrt{m}}\frac{\overline{c_{m,k}}}{\gamma^{m+k}}\eta_{k}(\theta).
\end{align}
The infinite series in \eqnref{NP_series1} and \eqnref{NP_series2} converge in $K^{-1/2}(\partial\Omega)$ and $K^{1/2}(\partial\Omega)$, respectively.
\end{itemize} \end{theorem}
The coefficients in the equations \eqnref{NP_series1} and \eqnref{NP_series2} are symmetric due to the Grunsky identity \eqnref{eqn:Grunskyidentity}. In other words, the double indexed coefficient \begin{equation}\label{def:mu}
{\mu}_{k,m}=\sqrt{\frac{m}{k}} \frac{c_{k,m}}{\gamma^{m+k}},\quad k,m\geq1, \end{equation} satisfies \begin{equation}\label{eqn:mu:symm} {\mu}_{k,m}={\mu}_{m,k}\quad\mbox{for all }m,k\geq1. \end{equation} The bound \eqnref{eqn:GrunskyBounds} implies that \begin{equation} \label{seq:inequality}
\sum_{k=1}^\infty| {\mu}_{m,k}|^2=\sum_{k=1}^\infty| {\mu}_{k,m}|^2\leq1\quad\mbox{for all }m\geq1. \end{equation} Using the modified Grunsky coefficients \eqnref{def:mu}, the formulas \eqnref{NP_series1} and \eqnref{NP_series2} become simpler: for $m =1,2,\cdots$,
\begin{align}\label{NP_series3}
&\mathcal{K}_{\partial\Omega}^{*}[{\zeta_m}](\theta)=\frac{1}{2}\sum_{k=1}^{\infty}\mu_{k,m}\, {\zeta}_{-k}(\theta),\quad
\mathcal{K}_{\partial\Omega}^{*}[{\zeta}_{-m}](\theta)=\frac{1}{2}\sum_{k=1}^{\infty}\overline{\mu_{k,m}}\, \zeta_{k}(\theta),\\ \label{NP_series4}
&\mathcal{K}_{\partial\Omega} [\eta_m](\theta)=\frac{1}{2}\sum_{k=1}^{\infty}\mu_{m,k}\eta_{-k}(\theta),
\quad
\mathcal{K}_{\partial\Omega} [\eta_{-m}](\theta)=\frac{1}{2}\sum_{k=1}^{\infty}\overline{\mu_{m,k}}\eta_{k}(\theta).
\end{align} It follows directly that $\mathcal{K}_{\partial\Omega}^{*}$ is self-adjoint on $K^{-1/2}(\partial\Omega)$ (and $\mathcal{K}_{\partial\Omega}$ on $K^{1/2}(\partial\Omega)$) thanks to \eqnref{eqn:mu:symm}.
We may identify each $\varphi=\sum_{m\in\mathbb{Z}} a_m\zeta_m\in K^{-1/2}(\partial\Omega)$ with $(a_m)\in l^2(\mathbb{Z})$ and the operator $\mathcal{K}_{\partial\Omega}^{*}:K^{-1/2}(\partial\Omega)\rightarrow K^{-1/2}(\partial\Omega)$ with the bounded linear operator $\left[\mathcal{K}^*_{\partial\Omega}\right]:l^2(\mathbb{Z})\rightarrow l^2(\mathbb{Z})$. Using \eqnref{seq:inequality}, it is easy to see that \begin{equation}\label{K_Kmat}
\big\|\mathcal{K}^*_{\partial\Omega}\big\|_{K^{-1/2}\rightarrow K^{-1/2}}=\big\|[\mathcal{K}_{\partial\Omega}^{*}]\big\|_{l^2\rightarrow l^2}\leq\frac{1}{2}. \end{equation} The matrix corresponding to $\mathcal{K}_{\partial\Omega}^{*}:K^{-1/2}(\partial\Omega)\rightarrow K^{-1/2}(\partial\Omega)$ via the basis set $\{\zeta_m\}_{m\in\mathbb{Z}}$ (or equivalently the matrix of $\left[\mathcal{K}^*_{\partial\Omega}\right]:l^2(\mathbb{Z})\rightarrow l^2(\mathbb{Z})$) is a self-adjoint, doubly infinite matrix given by \eqnref{eqn:matrixKstar}. In the same way, we can identify $\mathcal{K}_{\partial\Omega}:K^{1/2}(\partial\Omega)\rightarrow K^{1/2}(\partial\Omega)$ with the operator $[\mathcal{K}_{\partial\Omega}]:l^2(\mathbb{C})\rightarrow l^2(\mathbb{C})$. Hence we have the following: \begin{equation}\label{eqn:KstarKmatequality} [\mathcal{K}_{\partial\Omega}]=[\mathcal{K}_{\partial\Omega}^*]. \end{equation}
Since $\mathcal{K}^*_{\partial\Omega}$ is self-adjoint on $K^{-1/2}(\partial\Omega)$, the spectrum of $\mathcal{K}_{\partial\Omega}^*$ on $K^{-1/2}(\partial\Omega)$ lies in $[-1/2,1/2]$ from \eqnref{K_Kmat}. For a $C^{1,\alpha}$ domain, it holds that $H^{-1/2}(\partial\Omega)=K^{-1/2}(\partial\Omega)$ and, hence, the spectra of $\mathcal{K}^*_{\partial\Omega}$ on $\mathcal{H}^*$ and on $K^{-1/2}(\partial\Omega)$ coincide.
Therefore, the result is in accordance with the fact that the spectrum of $\mathcal{K}_{\partial\Omega}^*$ on $\mathcal{H}^*$ lies in $(- 1/2 , 1/2)$ \cite{Kellogg:1929:FPT}.
\noindent{\textbf{Proof of Theorem \ref{thm:series}.}} First, we compute $\mathcal{S}_{\partial\Omega}[\widetilde{\zeta}_0]$. We set $z=\Psi(w)\in\mathbb{C}\setminus\overline{\Omega}$ and use \eqnref{eqn:Faberdefinition} and \eqnref{eqn:log_decomp} to derive
\begin{align*}
\mathcal{S}_{\partial \Omega}[\widetilde{\zeta}_0](z)&=\frac{1}{2\pi}\int_{\partial \Omega}\ln|z-\tilde{z}| \frac{1}{h(\rho,\tilde{\theta})}d\sigma(\tilde{z})\\ &=\mbox{Re}\left\{\frac{1}{2\pi}\int_0^{2\pi}\log(\Psi(w)-\Psi(\gamma e^{i\tilde{\theta}}))\right\}d\tilde{\theta}\\ &=\lim_{r\rightarrow\gamma^+}\mbox{Re}\left\{\frac{1}{2\pi}\int_0^{2\pi}\log(\Psi(w)-\Psi(r e^{i\tilde{\theta}}))d\tilde{\theta}\right\}\\
&=\lim_{r\rightarrow\gamma^+}\mbox{Re}\{\log w\}=\ln |w|. \end{align*} Indeed, we have $\frac{1}{2\pi}\int_0^{2\pi}F_n(\Psi(re^{i\tilde{\theta}}))d\tilde{\theta}=0$ for $r>\gamma$, $n\in\mathbb{N}$, because the series in \eqnref{eqn:Faberdefinition} has a zero constant term. From the continuity of the single layer potential, \eqnref{Scal_zeta0} follows. By applying the jump relations \eqref{eqn:Kstarjump} to \eqnref{Scal_zeta0} we obtain \begin{equation}\label{Kcalzero}\mathcal{K}^*_{\partial\Omega}[\zeta_0]=\frac{1}{2}\zeta_0. \end{equation}
Second, we expand the single layer potential on $\partial\Omega$ by the geometric basis $\{\zeta_{\pm m}\}_{m\in\mathbb{N}}$. We use the fact that for $\varphi\in H^{-1/2}_0(\partial\Omega)$, the function $u:=\mathcal{S}_{\partial\Omega}[\varphi]$ is the unique solution to the transmission problem \begin{equation}\label{eqn:SLtransmission}
\displaystyle\begin{cases}
\displaystyle\Delta u=0\quad&\text{in } \mathbb{R}^2\setminus\partial\Omega,\\
\displaystyle u\big|^+-u\big|^-=0\quad&\text{a.e. on }\partial \Omega,\\[1mm]
\displaystyle \frac{\partial u}{\partial \nu}\Bigr|^{+}-\frac{\partial u}{\partial \nu}\Bigr|^-=\varphi\quad&\text{a.e. on }\partial\Omega,\\
\displaystyle u(x)=O({|x|^{-1}})\quad &\text{as } |x|\rightarrow\infty.
\end{cases} \end{equation} If we set $u$ as $$u(z)= \begin{cases}
\displaystyle F_m(z)\quad&\text{for }z\in\Omega,\\[2mm]
\displaystyle F_m(z)-w^m+\gamma^{2m}\overline{w^{-m}}\quad &\text{for } z\in\mathbb{C}\setminus\overline{\Omega},
\end{cases} $$ then it satisfies \eqnref{eqn:SLtransmission} with $$
\varphi(z)=\frac{\partial u}{\partial \nu}\Bigr|_{+}-\frac{\partial u}{\partial \nu}\Bigr|_-
=\frac{\partial}{\partial\nu}\left(-w^m+\gamma^{2m}\overline{w^{-m}}\right)\Big|_+=-2m\gamma^m \widetilde{\zeta}_m(\theta)\quad\mbox{a.e. on }\partial\Omega.$$ Indeed, the above equation holds for $z\in\partial\Omega$ which is not a corner point (see Lemma \ref{lemma:regularity} for differentiability). Therefore, for each $m=1,2,\dots$ it holds that
\begin{equation}\label{SL:simple}
\mathcal{S}_{\partial\Omega}[\widetilde{\zeta}_m](z)=
\begin{cases}
\displaystyle-\frac{1}{2m\gamma^m}F_m(z)\quad&\text{for }z\in\Omega,\\[3mm]
\displaystyle-\frac{1}{2m\gamma^m}\left(F_m(z)-w^m+\gamma^{2m}\overline{w^{-m}}\right)\quad &\text{for } z\in\mathbb{C}\setminus\overline{\Omega}.
\end{cases}
\end{equation} We remind the reader that for each $m\in\mathbb{N}$, the Faber polynomial satisfies \begin{equation}\label{Fm_exterior} F_m(z)=w^m+c_{m,1}w^{-1}+c_{m,2}w^{-2}+\cdots\quad \mbox{with }z=\Psi(w)\in\mathbb{C}\setminus\overline{\Omega}. \end{equation} Equation \eqnref{eqn:seriesSLpositive} follows from \eqnref{Fm_exterior}. In view of the conjugate property $$\mathcal{S}_{\partial\Omega}[\widetilde{\zeta}_{-m}](z)=\mathcal{S}_{\partial\Omega}[\overline{\widetilde{\zeta}_m}](z)=\overline{\mathcal{S}_{\partial\Omega}[\widetilde{\zeta}_{m}](z)},$$ we complete the proof of (a).
Now, we consider the double layer potential on $\partial\Omega$. One can easily show that for any $\psi\in L^2(\partial\Omega)$, the function $v:=\mathcal{D}_{\partial\Omega}\psi$ is the unique solution to the following problem: \begin{equation}\label{eqn:DLtransmission}
\displaystyle\begin{cases}
\displaystyle\Delta v=0\quad&\text{in } \mathbb{R}^2\setminus\partial\Omega,\\
\displaystyle v\big|^+-v\big|^-=\psi\quad&\text{a.e. on }\partial \Omega,\\[1mm]
\displaystyle \frac{\partial v}{\partial \nu}\Bigr|^{+}-\frac{\partial v}{\partial \nu}\Bigr|^-=0\quad&\text{a.e. on }\partial\Omega,\\
\displaystyle v(x)=O({|x|^{-1}})\quad &\text{as } |x|\rightarrow\infty.
\end{cases} \end{equation} It is straightforward to see \eqnref{eqn:seriesDLzero}. For $m=1,2,\dots$, one can observe from \eqnref{Fm_exterior} that
\begin{align}\label{DL:simple}
\displaystyle \mathcal{D}_{\partial\Omega} [\widetilde{\eta}_m](z)&=
\begin{cases}
\displaystyle\frac{1}{2\gamma^m}F_m(z)\quad&\text{for }z\in\Omega,\\[3mm]
\displaystyle\frac{1}{2\gamma^m}\left(F_m(z)-w^m-\gamma^{2m}\overline{w^{-m}}\right)\quad &\text{for } z\in\mathbb{C}\setminus\overline{\Omega}.
\end{cases}
\end{align} From the conjugate relation $$\mathcal{D}_{\partial\Omega} [\widetilde{\eta}_{-m}](z)=\overline{\mathcal{D}_{\partial\Omega} [\widetilde{\eta}_{m}](z)},$$
we complete the proof of (b).
To prove (c), we use the jump relation \begin{align}\notag \mathcal{K}_{\partial\Omega}^{*}[\zeta_m]&=
\frac{\partial}{\partial\nu}\mathcal{S}_{\partial\Omega}[\zeta_m]\Big|^{-}+\frac{1}{2}\zeta_m\\
&=-\frac{1}{2\sqrt{m}\gamma^m}\frac{\partial F_m}{\partial\nu}\Big|^- +\frac{1}{2}\zeta_m
=-\frac{1}{2\sqrt{m}\gamma^m}\frac{\partial F_m}{\partial\nu} +\frac{1}{2}\zeta_m.\label{eqn:proofc_0} \end{align} For any non-corner point, say $x=\Psi(\rho_0,\widetilde{\theta})$, $h(\rho_0,\theta)$ is bounded away from zero and infinity in a neighborhood of $(\rho_0,\widetilde{\theta})$ and, hence, it holds for a sufficiently smooth function $u$ that \begin{align}\label{eqn:eqlim}
\frac{\partial u}{\partial \nu}\Big|^+(x)
=\frac{1}{h(\rho_0,\widetilde{\theta})}\lim_{\rho\rightarrow \rho_0}
\frac{\partial (u\circ\Psi)}{\partial\rho}(e^{\rho+i\widetilde{\theta}}). \end{align}
First, we show that $\mathcal{K}_{\partial\Omega}^{*}[\zeta_m]\in K^{-1/2}(\partial\Omega)$.
Since $F_m(z)$ is a polynomial and $h(\rho,\theta)=|\frac{\partial \Psi}{\partial\rho}|$, for any $\rho_1>\rho_0$ there is a constant $M>0$ such that \begin{equation}\label{eqn:uniform:bdd}
\left| \frac{\partial (F_m\circ\Psi)}{\partial\rho}(e^{\rho+i\theta})\right| \leq Mh(\rho,\theta)\quad\text{for } \rho_0\leq\rho\leq\rho_1. \end{equation}
From \eqnref{eqn:proofc_0} we have \begin{align}
\frac{1}{2\pi}\int_{\partial\Omega}
\mathcal{K}_{\partial\Omega}^{*}[\zeta_m] \eta_{-n} d\sigma
&=-\frac{1}{2\sqrt{m}\gamma^m}
\frac{1}{2\pi}\int_{\partial\Omega}\frac{\partial F_m}{\partial\nu} \eta_{-n} d\sigma
+\frac{1}{2}\delta_{m,-n}.\label{eqn:proofc} \end{align} Fix $m,n\geq 1.$ Applying \eqref{eqn:eqlim} to $F_m$, it follows that \begin{align*}
\frac{1}{2\pi}\int_{\partial\Omega}\frac{\partial F_m }{\partial\nu}\eta_{-n}d\sigma
&=\frac{1}{2\pi}\int_{0}^{2\pi}\lim_{\rho\rightarrow\rho_0}\frac{\partial (F_m\circ\Psi)}{\partial\rho}(e^{\rho+i\theta})\frac{e^{in\theta}}{\sqrt{n}}\,d\theta. \end{align*} From Lemma \ref{lemma:regularity}, $h(\rho_0,\theta)$ is integrable and $\int_{0}^{2\pi}h(\rho,\theta)d\theta$ converges to $\int_{0}^{2\pi}h(\rho_0,\theta)d\theta$ as $\rho\rightarrow\rho_0$. In view of \eqref{eqn:uniform:bdd}, we can exchange the order of the limit and the integration in the above equation by the dominated convergence theorem. We obtain \begin{align*}
\frac{1}{2\pi}\int_{\partial\Omega}
\frac{\partial F_m }{\partial\nu}\eta_{-n}
d\sigma
&=\lim_{\rho\rightarrow\rho_0}\frac{1}{2\pi}\int_{0}^{2\pi}
\left[me^{m(\rho+i\theta)}-\sum_{k=1}^\infty kc_{m,k}e^{-k(\rho+i\theta)}\right]
\frac{e^{in\theta}}{\sqrt{n}}d\theta\\
&={\sqrt{m}\gamma^m}\delta_{m,-n}-\frac{\sqrt{n}c_{m,n}}{\gamma^n}. \end{align*} From \eqnref{eqn:proofc}, we deduce \begin{align*} \frac{1}{2\pi}\int_{\partial\Omega} \mathcal{K}^*_{\partial\Omega}[\zeta_m]\eta_{-n} d\sigma &=\frac{1}{2}\frac{\sqrt{n}}{\sqrt{m}}\frac{c_{m,n}}{\gamma^{m+n}} \end{align*} and $\frac{1}{2\pi}\int_{\partial\Omega} \mathcal{K}^*_{\partial\Omega}[\zeta_m]\eta_{n} d\sigma=0$. We conclude $\mathcal{K}_{\partial\Omega}^{*}[\zeta_m]\in K^{-1/2}(\partial\Omega)$ by the bound \eqref{eqn:GrunskyBounds}. By taking the complex conjugate, we can prove the formula for the negative indices. We can similarly prove (c) for $\mathcal{K}_{\partial\Omega}$ by using (b).
$\Box$
\subsection{An ellipse case} Let us derive the series expansions for the boundary integrals for a simple example. Consider the conformal mapping $$\Psi(w)=w+\frac{a}{w}.$$ Then for each $\rho>\rho_0$, $\Psi(e^{\rho+i\theta})$ is a parametric representation of an ellipse. Substituting $\Psi(w)$ into \eqref{eqn:Fabergenerating} gives \begin{align*}
\frac{w\Psi'(w)}{\Psi(w)-z}&=1+\frac{zw-2a}{w^2-zw+a}\\&=1+\frac{(w_1+w_2)w-2w_1w_2}{(w-w_1)(w-w_2)}=1+\left(\frac{w_1}{w-w_1}+\frac{w_2}{w-w_2}\right)\\
&=1+\sum_{n=1}^{\infty}(w_1^n+w_2^n)w^{-n} \end{align*} where \begin{equation*}
w_1=\frac{z+\sqrt{z^2-4a}}{2}~\text{ and }~w_2=\frac{z-\sqrt{z^2-4a}}{2}. \end{equation*} Comparing with the right hand side of equation \eqref{eqn:Fabergenerating}, the Faber polynomials associated with the ellipse are \begin{align*}
F_0(z)&=1\\
F_m(z)&=\frac{1}{2^m}\left[\left(z+\sqrt{z^2-4a}\right)^m+\left(z-\sqrt{z^2-4a}\right)^m\right],\quad m=1,2,\cdots. \end{align*}
For each $m\in\mathbb{N}$, $F_m(\Psi(w))=w^m+\frac{a^m}{w^m}$ so that the Grunsky coefficients are \begin{equation*}
c_{m,k}=\begin{cases}
a^k&\text{if }k=m,\\
0&\text{otherwise}.
\end{cases} \end{equation*} From Theorem \ref{thm:series} (c) it follows that \begin{align}
\displaystyle& \mathcal{K}_{\partial\Omega}^{*}[\widetilde{\zeta}_m](z)=\frac{1}{2}\frac{a^m}{\gamma^{2m}}\widetilde{\zeta}_{-m}(z),\\
\displaystyle &\mathcal{K}_{\partial\Omega}^{*}[\widetilde{\zeta}_{-m}](z)=\frac{1}{2}\frac{\bar{a}^m}{\gamma^{2m}}\widetilde{\zeta}_{m}(z). \end{align} Hence, $\mathcal{K}_{\partial\Omega}^{*}$ corresponds to the $2\times 2$ matrix \begin{equation*} \frac{1}{2\gamma^{2m}}
\begin{bmatrix}
0 & {a^m}\\
{\bar{a}^m} & 0
\end{bmatrix} \end{equation*} in the space spanned by $\widetilde{\zeta}_{-m}$ and $\widetilde{\zeta}_{m}$. In particular, $\mathcal{K}_{\partial\Omega}^{*}$ has the eigenvalues and the corresponding eigenfunctions
$$\pm \frac{1}{2}\frac{|a|^m}{\gamma^{2m}},\quad \pm\left(\frac{{a}}{|a|}\right)^m\widetilde{\zeta}_{-m}+\widetilde{\zeta}_{m},\quad m=1,2,\dots.$$
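In particular, for $a=0$ the mapping reduces to $\Psi(w)=w$, so that $\Omega$ is the disk of radius $\gamma$, and all of the eigenvalues above vanish; this is consistent with the classical fact that the NP operator of a disk is zero on densities with zero mean.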
\subsection{Integral formula for the conformal mapping coefficients }\label{sec:transmission_solution}
\begin{theorem}\label{thm:a_k} We assume the same regularity for $\Omega$ as in Theorem \ref{thm:series}. Then, the coefficients of the exterior conformal mapping $\Psi(w)$ satisfy \begin{align} \displaystyle&\gamma^2 = \frac{1}{2\pi}\int_{\partial\Omega}z\overline{\varphi(z)}\, d\sigma(z),\\
\displaystyle&a_m = \frac{\gamma^{m-1}}{2\pi}\int_{\partial\Omega}z|\varphi(z)|^{-m+1}(\varphi(z))^m\, d\sigma(z),\quad m=0,1,\dots, \end{align} where $$\varphi(z)=(I-2\mathcal{K}^*_{\partial\Omega})^{-1}(\nu_1+i\nu_2)$$ and $\nu=(\nu_1,\nu_2)$ is the outward unit normal vector of $\partial \Omega$. \end{theorem}
\begin{proof} As before, we let $z=\Psi(w)$ be the exterior conformal mapping given as \eqnref{conformal:Psi}. Since $d\sigma(z)=h(\rho,\theta)d\theta$, we have \begin{align}\notag \int_{\partial \Omega}z\widetilde{\zeta}_m(z)\, d\sigma(z)&=\int_0^{2\pi}\left(e^{\rho_0+i\theta}+a_0+a_1e^{-\rho_0-i\theta}+\cdots\right)\frac{e^{im\theta}}{h(\rho_0,\theta)}h(\rho_0,\theta)\, d\theta\\ &=\frac{2\pi}{\gamma^m} a_m, \qquad m=-1,0,1,\dots.\label{coeff_expression} \end{align}
We remind the reader that, by taking the interior normal derivative of the single layer potential, $\widetilde{\zeta}$ satisfies \begin{equation}
(-\frac{1}{2}I+\mathcal{K}^*_{\partial\Omega})\widetilde{\zeta}_m=-\frac{1}{2m\gamma^m}\pd{F_m}{\nu}\Big|_{\partial\Omega}. \end{equation} Note that \begin{equation} \widetilde{\zeta}_m = \widetilde{\zeta}_0^{-m+1}\widetilde{\zeta}_1^m,\quad \widetilde{\zeta}_{-m}(z)=\overline{\widetilde{\zeta}_m(z)} \end{equation} and \begin{equation}
|\widetilde{\zeta}_1(\theta)|=\frac{1}{h(\rho,\theta)}=\widetilde{\zeta}_0(\theta). \end{equation} Applying these relations to \eqnref{coeff_expression}, it follows that $$\gamma=\frac{1}{2\pi}\int_{\partial\Omega}z\widetilde{\zeta}_{-1}(z)\, d\sigma(z)=\frac{1}{2\pi}\int_{\partial\Omega}z\frac{1}{2\gamma}\, \overline{(\frac{1}{2}I-\mathcal{K}^*_{\partial\Omega})^{-1}\pd{F_1}{\nu}}\, d\sigma(z).$$ For $m=0,1,2,\dots$, we have \begin{align*}
a_m
&=\frac{\gamma^m}{2\pi} \int_{\partial\Omega} z\widetilde{\zeta}_0^{-m+1}\widetilde{\zeta}_1^m\, d\sigma(z)\\
&= \frac{\gamma^m}{2\pi} \int_{\partial\Omega} z\big|\widetilde{\zeta}_1\big|^{-m+1}\widetilde{\zeta}_1^{\,m}\, d\sigma(z).
\end{align*} Owing to the fact that $F_1(z)=z-a_0$, we deduce
$$\widetilde{\zeta}_1(z)=\frac{1}{2\gamma}(\frac{1}{2}I-\mathcal{K}^*_{\partial\Omega})^{-1}(\nu_1+i\nu_2)\Big|_{\partial\Omega}.$$ Therefore we complete the proof. \end{proof}
\section{Numerical computation} We provide the numerical scheme and examples for the transmission problem based on the series expansions of the boundary integral operators. First, we explain how to obtain the exterior conformal mapping for a given simply connected domain in section \ref{conformal:numerical}. We then provide the numerical computation based on the finite section method in section \ref{sec:finitesection}. \subsection{Computation of the conformal mapping for a given curve}\label{conformal:numerical} From Theorem \ref{thm:a_k}, one can numerically compute the exterior conformal mapping for a given curve by solving \begin{equation}\label{eqn:eqnforconformal}
\left(\frac{1}{2}I-\mathcal{K}^*_{\partial\Omega}\right)[\varphi](z)=\nu_1+i\nu_2. \end{equation} It is well known that one can solve such a boundary integral equation by applying the Nystr\"{o}m discretization for $\mathcal{K}_{\partial\Omega}^{*}$ on $\partial\Omega$. There, we first parametrize $\partial\Omega$, say $z(t)$, and discretize it, say $\{z_p\}_{p=1}^P$ . We then approximate the boundary integral operator $\mathcal{K}_{\partial\Omega}^{*}[\varphi]$ as $$\mathcal{K}_{\partial\Omega}^{*}[\varphi](x) \approx \sum_{p=1}^P\frac{\partial}{\partial\nu_x}\Gamma(x-z_p)\varphi(z_p) w_p,\quad x\in\partial\Omega.$$ The weights $\{w_p\}_{p=1}^P$ are chosen by numerical integration methods. To obtain the accurate solution to \eqnref{eqn:eqnforconformal}, we apply the RCIP method \cite{Helsing:2013:SIE} (see also the references therein for further details).
Figure \ref{fig:rectangle} shows the exterior conformal mapping for the rectangular domain with height $1$ and width $6.$ To ensure accuracy, we plot the difference $|\gamma^{(k)}-\gamma^{(k-1)}|$, where $\gamma^{(k)}$ is the logarithmic capacity of the domain $\Omega$ computed with $k$ subdivisions in the RCIP method. \begin{figure}
\caption{Rectangular domain $\Omega$}
\caption{Level coordinate curves of $\Psi(\rho,\theta)$}
\label{fig:rectangle}
\end{figure}
\subsection{Numerical scheme for the transmission problem solution and the spectrum of the NP operator based on the finite section method}\label{sec:finitesection} We now consider the numerical approximation of the solution to the boundary integral equation $(\lambda I -\mathcal{K}^*_{\partial\Omega})x=y$ and the spectrum of $\mathcal{K}_{\partial\Omega}^{*}$. Once we have the infinite matrix expression for an operator, it is natural to consider its finite dimensional projection. More precisely speaking, we apply the finite section method to $(\lambda I -\mathcal{K}^*_{\partial\Omega})x=y$ on $K^{-1/2}(\partial\Omega)$. Recall that $K^{-1/2}(\partial\Omega)$ is a separable Hilbert space. We set $H=K^{-1/2}$ and $$H_n=\mbox{span}\left\{\zeta_{-n},\zeta_{-n+1},\cdots,\zeta_{-1},\zeta_1,\cdots,\zeta_{n-1},\zeta_n\right\} \quad\mbox{for each }n\in\mathbb{N}.$$ Then, $H_n$ is an increasing sequence of finite-dimensional subspaces of $H$ such that the union of $H_n$ is dense in $H$. We may identify the orthogonal projection operator to $H_n$, say $P_n$, as the operator on $l^2_0(\mathbb{C})$ given by \begin{equation*}
P_n(a)=(\dots,0,0,a_{-n},\dots,a_{-2},a_{-1},a_{1},a_2,\dots,a_n,0,0,\dots)\quad\mbox{for } a\in l^2_0(\mathbb{C}). \end{equation*} Clearly, we have $
\|P_n a-a\|_{l^2}^2=\sum_{|m|> n}|a_m|^2\rightarrow 0\ \mbox{ as }n\rightarrow\infty, $ so that $P_n a\rightarrow a$ as $n\rightarrow\infty$. We denote $[\mathcal{K}^*_{\partial\Omega}]_n$ the $n$-th section of $[\mathcal{K}_{\partial\Omega}^{*}]$, that is $$[{\mathcal{K}_{\partial\Omega}^{*}}]_n=P_n[\mathcal{K}_{\partial\Omega}^{*}]P_n=P_n \mathcal{K}_{\partial\Omega}^{*} P_n.$$ We identify the range of $P_n$ with $\mathbb{C}^{2n}$ and $[{\mathcal{K}_{\partial\Omega}^{*}}]_n$ with a $2n\times 2n$-matrix, respectively.
Using the finite section of $\mathcal{K}^*_{\partial\Omega}$, we can approximate the solution to the boundary integral equation and the spectrum of $\mathcal{K}^*_{\partial\Omega}$ as follows: \begin{itemize} \item [(a)] [Computation of solution to the boundary integral equation]\\
Let $|\lambda|>\frac{1}{2}$. Then, we have $\|I-(I-\frac{1}{\lambda}\mathcal{K}^*_{\partial\Omega})\|=\frac{1}{|\lambda|}\|\mathcal{K}^*_{\partial\Omega}\|<1$. From Corollary \ref{cor:finitesection} in the appendix, the projection method for $(\lambda I-\mathcal{K}^*_{\partial\Omega})$ converges, {\it i.e.}, there exists an integer $N$ such that for each $y\in K^{-1/2}(\partial\Omega)$ and $n\geq N$, there exists a solution $x_n$ in $H_n$ to the equation $P_n (\lambda I-\mathcal{K}^*_{\partial\Omega})P_n x_n = P_n y$, which is unique in $H_n$, and the sequence $\{x_n\}$ converges to $(\lambda I-\mathcal{K}_{\partial\Omega}^{*})^{-1}y$. \item[(b)][Computation of the spectrum of the NP operator]\\ For self-adjoint operators on a separable complex Hilbert space, the spectrum outside the convex hull of the essential spectrum can be approximated by eigenvalues of truncation matrices. Since $\mathcal{K}^*_{\partial\Omega}$ is self-adjoint in $K^{-1/2}(\partial\Omega)$, the eigenvalues of the finite section operator $[{\mathcal{K}_{\partial\Omega}^{*}}]_n$ converge to those of $\mathcal{K}^*_{\partial\Omega}$ as shown in \cite[Theorem 3.1]{Bottcher:2001:AAN}. Since $[{\mathcal{K}_{\partial\Omega}^{*}}]_n$ is a finite-dimensional matrix, one can easily compute its eigenvalues. \end{itemize}
\subsection{Numerical examples for the transmission problem} We provide examples of numerical computations for the transmission problem \eqnref{cond_eqn0}. More detailed numerical results will be reported in a separate paper. \begin{example}
We take $\Omega$ to be the kite-shaped domain whose boundary is parametrized by
\begin{align*}
\partial\Omega = \{(x(t),y(t)):=(\cos t+0.65\cos(2t)-0.65,\ 1.5\sin t): t\in[0,2\pi] \}.
\end{align*}
We set $\epsilon_c = 10000, ~\epsilon_m = 1$ and choose the entire harmonic field $H(x,y)=x.$ We computed the conformal mapping coefficients $\gamma$ and $a_k$ up to $k=50$ terms to approximate $\partial\Omega$ and truncated the matrix $[\mathcal{K}_{\partial\Omega}^{*}]_n$ with $n=120$.
The result is demonstrated in Figure \ref{fig:transmissionsolution}. \end{example}
\begin{example} We take $\Omega$ to be the boat-shaped domain whose exterior conformal mapping is given by \begin{align*}
{\Psi}(w):=w+\frac{0.3}{w}+\frac{0.08}{w^4}\quad\mbox{for } |w|\geq 1.
\end{align*}
We set $\epsilon_c = 10, ~\epsilon_m = 1$ and choose the harmonic field $H(x,y)=y$. We used the matrix $[\mathcal{K}_{\partial\Omega}^{*}]_{n}$ with $n=120.$ The result is demonstrated in Figure \ref{fig:transmissionsolution}. \end{example} \begin{figure}
\caption{Level curves of the solution to the problem \eqnref{cond_eqn0} with the conditions in Example 1 (left) and Example 2 (right).
}
\label{fig:transmissionsolution}
\end{figure}
\subsection{Numerical examples of the NP operator spectrum computation}
We give numerical examples for the NP operators of a smooth domain.
\noindent{\textbf{Example 1.}} Figure \ref{fig:example1} shows the eigenvalues of $\mathcal{K}_{\partial\Omega}^*$ of a smooth domain $\Omega$. The eigenvalues are computed by projecting the operator to $H_{100}$ space. The eigenvalues calculated using $[\mathcal{K}^*_{\partial\Omega}]_N$ with various $N$ are given in Table \ref{table:example1}. \begin{figure}
\caption{Eigenvalues of a smooth domain $\Omega$. The domain is given by the conformal mapping $\Psi(z)=z+\frac{1.5}{z}-\frac{1.5 i}{z^2}-\frac{1.5}{z^3}+\frac{1.5 i}{z^4}+\frac{1.5}{z^5}-\frac{1.5i}{z^6}$ with $\gamma=2$. The left figure shows the geometry of $\Omega$, and the right figure is the graph of the eigenvalues $\lambda_n^+$ against $n$. The values are computed using $[\mathcal{K}_{\partial\Omega}^{*}]_{100}$.
}
\label{fig:example1}
\end{figure} \begin{table}[!htbp] \begin{center}
\begin{tabular}{| c | c | c | c |}
\hline
& $[\mathcal{K}_{\partial\Omega}^{*}]_{10}$ & $[\mathcal{K}_{\partial\Omega}^{*}]_{25}$ & $[\mathcal{K}_{\partial\Omega}^{*}]_{100}$ \\
\hline\hline
$\lambda_1^+$& 0.273558129190339 & 0.273558172995812 & 0.273558172996823\\
\hline
$\lambda_2^+$& 0.156056092355289 & 0.156056664309330 & 0.156056664318575 \\
\hline
$\lambda_3^+$& 0.0859247988176952 & 0.0859480356185409 & 0.0859480356609182\\
\hline
$\lambda_4^+$& 0.0494465179065547 & 0.0496590062381220 & 0.0496590063829059 \\
\hline
$\lambda_5^+$& 0.0309734543945057 & 0.0319544507137494 & 0.0319544514943216\\
\hline
$\lambda_6^+$& 0.0131127593266966 & 0.0193300891854449 & 0.0193300897106055\\
\hline
$\lambda_7^+$& 0.00194937208308878 & 0.00776576627656122 & 0.00776578032772436 \\
\hline
$\lambda_8^+$& 0.000974556412340587 & 0.00585359278858573 & 0.00585361352314610\\
\hline
$\lambda_9^+$& 0.000178694695157472 & 0.00262352270034986 & 0.00262389455625349\\
\hline
$\lambda_{10}^+$& 0.000118463567811146 &0.00194010061907249 & 0.00194031822065751\\
\hline \end{tabular}
\caption{Eigenvalues of $[\mathcal{K}_{\partial\Omega}^*]_N$, $N=10,25,100$, for $\Omega$ given in Figure \ref{fig:example1}.}\label{table:example1} \end{center} \end{table}
\section{Conclusion}
We defined the density basis functions whose layer potentials have exact representation in terms of the Faber polynomials and the Grunsky coefficients. These density basis functions give rise to the two Hilbert spaces $K^{\pm1/2}$ which are equivalent to the trace spaces $H^{\pm1/2}$ when the boundary is smooth. On these spaces the Neumann-Poincar\'{e} operators are identical to doubly infinite, self-adjoint matrix operators. Our result provides a new symmetrization scheme for the Neumann-Poincar\'{e} operators different from Plemelj's symmetrization principle. We emphasize that $\mathcal{K}_{\partial\Omega}$ and $\mathcal{K}_{\partial\Omega}^*$ are actually identical to the same matrix and, furthermore, the matrix formulation gives us a simple method of eigenvalue computation. Since our approach requires the exterior conformal mapping coefficients to be known, we derived a simple integral expression for the exterior conformal mapping coefficients. Numerical results show successful computation of conformal mapping and eigenvalues of the Neumann-Poincar\'{e} operators. The present work provides a novel framework for the conductivity transmission problem.
\begin{appendices} \section{Boundary behavior of conformal maps}\label{appen:boundarybehavior} In this section we review regularity results on the interior conformal mappings that are provided in \cite{Pommerenke:1992:BBC}. We then derive the regularity for the exterior conformal mapping; see Lemma \ref{lemma:regularity}.
We say that a Jordan curve $C$ is of class $\mathcal{C}^m$ if it has a parametrization $w(t),0\leq t\leq 2\pi,$ that is $m$-times continuously differentiable and satisfies $w'(t)\neq0$ for all $t$. It is of class $\mathcal{C}^{m,\alpha}\;(0<\alpha<1) $ if it furthermore satisfies \begin{equation*}
\left|w^{(m)}(t_1)-w^{(m)}(t_2)\right|\leq M|t_1-t_2|^{\alpha}\quad\mbox{for }t_1,t_2\in[0,2\pi]. \end{equation*} \begin{theorem}{\rm(Kellogg-Warschawski theorem \cite[Theorem 3.6]{Pommerenke:1992:BBC})}
Let $f$ map $\mathbb{D}$ conformally onto the inner domain of the Jordan curve $C$ of class $\mathcal{C}^{m,\alpha}$ where $m\in\mathbb{N}$ and $0<\alpha<1$. Then $f^{(m)}$ has a continuous extension to $\overline{\mathbb{D}}$ and the extension satisfies \begin{equation*}
\left|f^{(m)}(z_1)-f^{(m)}(z_2)\right|\leq M|z_1-z_2|^\alpha~\text{ for } z_1,z_2\in\overline{\mathbb{D}}.
\end{equation*} \end{theorem}
To state the regularity results on the conformal mapping associated with a domain with corners, we need the concept of {\it Dini-continuity}: \begin{itemize} \item For a function $\phi:[0,2\pi]\rightarrow\mathbb{C}$ we define the modulus of continuity as
\begin{equation*}
\omega(\delta)=\sup\left\{\left|\phi(z_1)-\phi(z_2)\right|~:~ |z_1-z_2|\leq \delta,\ z_1,z_2\in [0,2\pi]\right\},\quad\delta> 0.
\end{equation*} The function $\phi$ is called {\it Dini-continuous} if $
\int_{0}^{\pi}\frac{\omega(t)}{t}dt<\infty $. The end-point of the integration interval $\pi$ could be replaced by any positive number. \item We say that a Jordan curve $C$ is {\it Dini-smooth} if it has a parametrization $w(t),0\leq t\leq 2\pi,$ such that $w'(t)$ is Dini-continuous and $w'(t)\neq 0$ for all $t$. Every $C^{1,\alpha}$ Jordan curve is Dini-smooth. \end{itemize}
Let $\Omega$ be a simply connected domain whose boundary is a Jordan curve. We also let a complex function $S$ map $\mathbb{D}$ conformally onto $\Omega$. We allow $\Omega$ to have a corner on its boundary. We say that $\partial \Omega$ has a corner of opening $\pi\beta\;(0\leq\beta\leq 2)$ at $S(\zeta)\in\partial\Omega,\;\zeta=e^{i\theta},$ if \begin{equation*}
\arg\left(S(e^{i t})-S(e^{i \theta})\right)\rightarrow
\begin{cases}
\displaystyle \eta & \text{ as } t\rightarrow \theta+,\\
\displaystyle \eta+\pi\beta & \text{ as } t\rightarrow \theta-.
\end{cases} \end{equation*} If $\beta=1$, then at $S(\zeta)$ the curve has a tangent vector with direction angle $\eta$. If $\beta$ is $0$ or $2$, then we have an outward-pointing cusp or an inward-pointing cusp, respectively.
We say that $\partial\Omega$ has a {\it Dini-smooth corner} at $S(\zeta)\in\partial\Omega$ if there are two closed arcs $A^{\pm}\subset \partial \mathbb{D}$ ending at $\zeta\in\partial\mathbb{D}$ and lying on opposite sides of $\zeta$ that are mapped onto Dini-smooth Jordan curves $C^+$ and $C^-$ forming the angle $\pi\beta$ at $S(\zeta)$.
\begin{theorem}{\rm(\cite[Theorem 3.9]{Pommerenke:1992:BBC})}\label{thm:regularityatboundary}
If $\partial \Omega$ has a Dini-smooth corner of opening $\pi\beta\;(0<\beta\leq 2)$ at $S(\zeta)\neq \infty$, then the functions
\begin{equation*}
\frac{S(z)-S(\zeta)}{(z-\zeta)^\beta}\text{ and } \frac{S'(z)}{(z-\zeta)^{\beta-1}}
\end{equation*}
are continuous and $\neq 0,\infty$ in $\mathbb{\overline{D}}\cap D(\zeta,\rho)$ for some $\rho>0.$ \end{theorem}
\begin{lemma}[Boundary behavior of exterior conformal mapping]\label{lemma:regularity} If $\partial \Omega$ has a Dini-smooth corner at $\Psi(w_0)$ of exterior angle $\pi\alpha \,(0<\alpha<2)$, then the functions $$\frac{\Psi(w)-\Psi(w_0)}{(w-w_0)^\alpha}\quad\mbox{and}\quad\frac{\Psi'(w)}{(w-w_0)^{\alpha-1}}$$ are continuous and $\neq 0,\infty$ in $(\mathbb{C}\setminus\mathbb{D})\cap D(w_0;\delta)$ for some $\delta>0$, where $D(w_0;\delta)$ denotes the disk centered at $w_0$ with radius $\delta$. \end{lemma} \begin{proof} We apply Theorem \ref{thm:regularityatboundary} to see the boundary behavior of $\Psi$ at the corner points. Set $G$ to be the reflection of $\Omega$ with respect to a circle centered at some point $z_0\in\Omega$. Then for a conformal mapping $f$ from $\mathbb{D}$ onto $G$ we have $\Psi(w)=1/f(\frac{1}{w-z_0}+z_0)$. If $\partial \Omega$ has a corner at $\Psi(w_0)$ of exterior angle $\pi\alpha$, then $\partial G$ has a corner at the corresponding point $f(z_0)$ of opening $\pi \alpha$. Since $z\rightarrow\frac{1}{z}$ is a conformal mapping from $\mathbb{C}\setminus\{0\}$ onto $\mathbb{C}\setminus\{0\}$, $f$ and $\Psi$ have the same regularity behavior at the corresponding corner points. From Theorem \ref{thm:regularityatboundary}, we complete the proof. \end{proof}
\section{The Faber polynomials}\label{section:appdixFaber} Substituting equation \eqref{conformal:Psi} into equation \eqref{eqn:Fabergenerating}, we obtain the recursion relation \begin{equation}\label{eqn:Faberrecursion}
-na_n=F_{n+1}(z)+\sum_{s=0}^{n}a_{s}F_{n-s}(z)-zF_n(z),\quad n\geq 0. \end{equation} with the initial condition $F_0(z)=1.$ The first three polynomials are $$F_0(z)=1,\quad F_1(z)=z-a_0,\quad F_2(z)=z^2-2a_0 z+(a_0^2-2a_1).$$
Multiplying both sides of equation \eqref{eqn:Fabergenerating} by $w^{n}$ and integrating on the contour $|w|=R$, we have \begin{equation}\label{eqn:FabervalCauchyintegral}
F_n(z)=\frac{1}{2\pi i}\int_{|w|=R}\frac{w^n\Psi'(w)}{\Psi(w)-z} dw,\quad z\in\overline{\Omega_r},~\gamma\leq r<R<\infty. \end{equation}
Now let $z\in \mathbb{C}\setminus\Omega$ and consider the function $$W(z,w) = \frac{\Psi'(w)}{\Psi(w)-z}=\frac{\Psi'(w)}{\Psi(w)-\Psi(r)},\quad r=\Psi^{-1}(z).$$
$W(z,w)$ is defined for $|w|>\gamma$ and $|r|>\gamma$ and has its only singularity at $w=r$. After considering the residue at the simple pole at $w=r$ for fixed $r$ and the simple pole at $r=w$ for fixed $w$, we see that \begin{equation}\label{eqn:expressionPsiwr}
\frac{\Psi'(w)}{\Psi(w)-\Psi(r)}=\frac{1}{w-r}+M(w,r) \end{equation}
and $M(w,r)$ is analytic for $|w|>\gamma$ and $|r|>\gamma.$ Expanding $M(w,r)$ in double-power series and collecting the terms of the same degree with respect to $w$ we find \begin{align}\label{eqn:expressionMwr}
M(w,r) = a_0(r)+a_1(r)\frac{1}{w}+a_2(r)\frac{1}{w^{2}}+\cdots,\quad |w|>\gamma,~|r|>\gamma. \end{align} Letting $r\neq \infty$ and $w=\infty$ in the equations \eqnref{eqn:expressionPsiwr} and \eqnref{eqn:expressionMwr}, we observe that $a_0(r)=a_1(r)=0.$ Similarly, for $r=\infty$ and $w\neq \infty$ we see that $a_k(\infty)=0,~k=2,3,\dots$. Considering the Laurent expansions of $a_k(r),~k=2,3,\dots$, one can show that the series expansion \begin{equation}\label{eqn:Faber}
\frac{\Psi'(w)}{\Psi(w)-\Psi(r)}-\frac{1}{w-r}=\sum_{m=1}^\infty\sum_{k=1}^\infty c_{m,k}r^{-k}w^{-m-1} \end{equation}
holds for $|r|>\gamma$ and $|w|>\gamma$. From \eqref{eqn:FabervalCauchyintegral}, and \eqref{eqn:Faber} we immediately observe the following relation: \begin{equation}\label{eqn:Faberdef}
F_m(\Psi(r))-r^m
=\sum_{k=1}^{\infty}c_{m,k}{r^{-k}},\quad m=1,2,\dots. \end{equation}
The Faber polynomials associated with $\Omega$ form a basis for analytic functions in $\Omega$. From the Cauchy integral formula and \eqref{eqn:Fabergenerating}, one can easily derive the following:
any complex function $f(z)$ that is analytic in the bounded domain enclosed by the curve $\{\Psi(\zeta):|\zeta|=R\}$, $R>\gamma$, admits the series expansion \begin{equation}\label{faberseries}
f(z)=\sum_{m=0}^{\infty}\alpha_m F_m(z)\quad\mbox{in }\overline{\Omega} \end{equation} with \begin{equation}\notag
\alpha_m = \frac{1}{2\pi i}\int_{|w|=r}\frac{f(\Psi(w))}{w^{m+1}}\, dw,\quad \gamma<r<R. \end{equation}
\section{Properties of the Grunsky coefficients}\label{section:Grunsky} The Grunsky coefficients $c_{m,k}$'s can be directly computed from the coefficients of the exterior conformal mapping $\Psi$ via the recursion formula \begin{equation}\label{eqn:cnkrecursion}
c_{m,k+1}=c_{m+1,k}-a_{m+k}+\sum_{s=1}^{m-1}a_{m-s}c_{s,k}-\sum_{s=1}^{k-1}a_{k-s}c_{m,s},\quad m,k\geq 1, \end{equation}
with the initial condition $c_{n,1}=na_n$ for all $n\geq1$. We set $\sum_{s=1}^{0}=0$. Indeed, the relation \eqref{eqn:cnkrecursion} can be easily derived by substituting \eqref{eqn:Faberdef} into \eqref{eqn:Faberrecursion} and comparing terms $r^{-k}$ of the same order.
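For example, taking $m=k=1$ in \eqnref{eqn:cnkrecursion} gives $c_{1,2}=c_{2,1}-a_2=2a_2-a_2=a_2$, which agrees with the Grunsky identity $2c_{1,2}=c_{2,1}$; for the ellipse map $\Psi(w)=w+a/w$ considered above, $a_2=0$ and hence $c_{1,2}=0$, as expected.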
Applying the Cauchy integral formula on $\Omega_r$, $r>\gamma$ and \eqref{eqn:Faberdef} we derive \begin{align*}
0&=\frac{1}{2\pi i}\int_{\partial\Omega_r}F_n(z)F'_m(z)dz\\
&=\frac{1}{2\pi i}\int_{|w|=r}F_n(\Psi(w))F'_m(\Psi(w))\Psi'(w)dw=mc_{n,m}-nc_{m,n}.
\end{align*} This identity implies the Grunsky identity \begin{equation}
mc_{n,m}=nc_{m,n},\quad m,n=1,2,\cdots. \end{equation}
We now review the polynomial area theorem and the Grunsky inequalities. More details can be found in \cite{Duren:1983:UF}.
The polynomial area theorem states the following: Let $P(z)$ be an arbitrary non-constant polynomial of degree $N$ that admits the expansion
$$P(\Psi(w))=\sum_{k=-N}^{\infty}b_kw^{-k},\quad|w|>\gamma.$$ Then
\begin{equation}\label{area}
\sum_{k=1}^{\infty}k\left|{b_k}{\gamma^{-k}}\right|^2\leq \sum_{k=1}^{N}k\left|{b_{-k} \gamma^k} \right|^2
\end{equation}
with equality if and only if $\Omega$ has measure zero. In fact, this relation is a result of a complex form of Green's theorem: let $f(u+iv)$ be in $C^1(\overline{\Omega})$; then $$\iint_\Omega \pd{f}{\bar{z}}dudv=\frac{1}{2i}\int_{\partial \Omega}f(z)dz,\quad z=u+iv.$$ We can easily prove the inequality \eqnref{area} by setting $f(z)=\overline{P(z)}P'(z)$: \begin{align*} 0&\leq\int_\Omega \overline{P'(z)}P'(z)dudv\\ &=\int_\Omega\pd{}{\bar{z}}\left(\overline{P(z)}P'(z)\right)dudv =\frac{1}{2i}\int_{\partial\Omega}\overline{P(z)}P'(z)dz\\
&=\frac{1}{2i}\int_{|w|=\gamma}\overline{P(\Psi(w))}P'(\Psi(w))\Psi'(w)dw
=-\pi\sum_{k=-N}^\infty k|b_k|^2\gamma^{-2k}. \end{align*}
One can derive a system of inequalities that are known as the Grunsky inequalities by applying the polynomial area theorem to $P(z)=\sum_{n=1}^{N}\frac{\lambda_n}{\gamma^n}F_n(z)$ for some complex numbers $\lambda_1,\dots,\lambda_N$; see \cite{Grunsky:1939:KSA} and \cite{Duren:1983:UF}.
\begin{lemma}[Strong Grunsky inequalities]\label{lemma:strongGrunsky} Let $N$ be a positive integer and $\lambda_1,\lambda_2,\dots,\lambda_N$ be complex numbers that are not all zero. Then, we have
\begin{equation*}
\sum_{k=1}^{\infty}k\left|\sum_{n=1}^{N}\frac{c_{n,k}}{\gamma^{n+k}}\lambda_n\right|^2\leq\sum_{n=1}^{N}n|\lambda_n|^2.
\end{equation*}
Strict inequality holds unless $\Omega$ has measure zero.
\end{lemma}
From the Grunsky inequalities we can derive an important bound for the Grunsky coefficients as follows. Choose some $1\leq m\leq N$ and let $\lambda_k = \delta_{mk}/\sqrt{m},~1\leq k\leq N$ in the strong Grunsky inequality. Then it holds that \begin{equation}\label{muineq1}
\sum_{k=1}^\infty\left|\sqrt{\frac{k}{m}}\frac{c_{m,k}}{\gamma^{m+k}}\right|^2\leq 1. \end{equation}
\section{Convergence of the finite section method} We briefly introduce the convergence conditions for the finite section method. For more details we refer the reader to \cite{Gohberg:2003:BCL, Kress:2014:LIE}.
Let $H$ be a separable complex Hilbert space and $\mathcal{B}(H)$ be the linear space of bounded linear operators on $H$. We let $H_n$ be an increasing sequence of finite-dimensional subspaces of $H$ such that the union of $H_n$ is dense in $H$. We let $P_n$ be the orthogonal projection of $H$ onto $H_n$. Thus, $\|P_n\|=1$ for each $n$ and $P_n x\rightarrow x$ for every $x\in\mathcal{H}$.
For an operator $A\in \mathcal{B}(H)$ that is invertible, we say the projection method for $Ax=y$ {\it converges }if there exists an integer $N$ such that for each $y\in H$ and $n\geq N$, there exists a solution $x_n$ in $H_n$ to the equation $P_n AP_n x_n = P_n y$, which is unique in $H_n$, and the sequence $\{x_n\}$ converges to $A^{-1}y$.
One can easily derive the following proposition and the corollary. \begin{prop} Let $A\in\mathcal{B}(H)$ be invertible. Then the projection method for $Ax=y$ converges if and only if there is an integer $N$ such that for $n\geq N$, the restriction of the operator $P_nAP_n$ on $H_n$ has a bounded inverse, denoted by $(P_nAP_n)^{-1}$, and \begin{equation*}
\sup_{n\geq{N}}\|(P_nAP_n)^{-1}\|< \infty. \end{equation*} \end{prop} \begin{cor}\label{cor:finitesection}
If $A\in\mathcal{B}(\mathcal{H})$ is invertible and satisfies $\|I-A\|<1$, then the projection method for $A$ converges. \end{cor}
\end{appendices}
\end{document} | arXiv |
# Relationship between trigonometric functions and angles
Trigonometric functions are mathematical functions that describe the relationships between angles and their corresponding ratios in a right-angled triangle. These functions are crucial in various fields such as physics, engineering, and computer graphics.
The three main trigonometric functions are sine (sin), cosine (cos), and tangent (tan). Each function is defined as a ratio of the corresponding side to the hypotenuse of a right-angled triangle.
- Sine (sin) is the ratio of the opposite side to the hypotenuse.
- Cosine (cos) is the ratio of the adjacent side to the hypotenuse.
- Tangent (tan) is the ratio of the opposite side to the adjacent side.
Understanding the relationship between these functions and angles is crucial for solving problems and applying trigonometry in various fields.
Consider a right-angled triangle with angles α, β, and γ, where γ is the right angle. The sides of the triangle are a, b, and c, where a is opposite α, b is opposite β, and c is the hypotenuse.
The sine of angle α can be calculated as:
$$sin(\alpha) = \frac{a}{c}$$
The cosine of angle α can be calculated as:
$$cos(\alpha) = \frac{b}{c}$$
The tangent of angle α can be calculated as:
$$tan(\alpha) = \frac{a}{b}$$
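As a quick illustration (the 3-4-5 triangle and the use of Python's `math` module are our own choices, not part of the exercise below), these ratios can be computed directly from the side lengths:
```
import math

# Example right triangle with legs a = 3, b = 4 and hypotenuse c = 5
a, b, c = 3.0, 4.0, 5.0

sin_alpha = a / c   # 0.6
cos_alpha = b / c   # 0.8
tan_alpha = a / b   # 0.75

# The angle alpha itself can be recovered with an inverse trigonometric function
alpha_deg = math.degrees(math.asin(sin_alpha))  # roughly 36.87 degrees
print(sin_alpha, cos_alpha, tan_alpha, alpha_deg)
```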
## Exercise
Calculate the sine, cosine, and tangent of the angles in the following right-angled triangle:
```
a
/|
/ |
b/ |c
/ |
/ |
/ |
```
# Unit circle and its significance
The unit circle is a fundamental concept in trigonometry. It is a circle with a radius of 1 unit, centered at the origin (0, 0) of a coordinate plane.
The unit circle is used to represent the values of sine, cosine, and tangent functions for angles in the range [0, 360] degrees. Each angle corresponds to a point on the unit circle, and the coordinates of the point represent the values of sine and cosine functions for that angle.
The unit circle is significant because it allows trigonometric functions to be visualized and computed easily. It also simplifies the calculation of trigonometric functions for angles in the range [0, 360] degrees.
Consider the unit circle and an angle θ. The coordinates of the point on the unit circle corresponding to the angle θ are given by:
$$x = cos(\theta)$$
$$y = sin(\theta)$$
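A minimal sketch (the angle of 30 degrees is an arbitrary choice) showing how such a point is computed with Python's `math` module:
```
import math

theta = math.radians(30)          # convert degrees to radians

x = math.cos(theta)               # about 0.866
y = math.sin(theta)               # 0.5

print(f"({x:.3f}, {y:.3f})")      # (0.866, 0.500)
```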
## Exercise
Find the coordinates of the point on the unit circle corresponding to the angle 60°.
# Cosine, sine, and tangent as ratios
Cosine, sine, and tangent are defined as ratios in a right-angled triangle. These ratios are used to calculate the corresponding trigonometric functions for an angle.
- Sine (sin) is the ratio of the opposite side to the hypotenuse.
- Cosine (cos) is the ratio of the adjacent side to the hypotenuse.
- Tangent (tan) is the ratio of the opposite side to the adjacent side.
These ratios can be calculated using the Pythagorean theorem in a right-angled triangle.
In a right-angled triangle with legs a and b and hypotenuse c, the sine, cosine, and tangent ratios for the acute angle α opposite side a can be calculated as:
$$sin(\alpha) = \frac{a}{c}$$
$$cos(\alpha) = \frac{b}{c}$$
$$tan(\alpha) = \frac{a}{b}$$
## Exercise
Calculate the sine, cosine, and tangent ratios for the following right-angled triangle:
```
a
/|
/ |
b/ |c
/ |
/ |
/ |
```
# Applications of trigonometric functions in real-life scenarios
Trigonometric functions have numerous applications in real-life scenarios. Some of these applications include:
- In physics: Trigonometric functions are used to model the motion of objects in two dimensions, such as projectile motion and circular motion.
- In engineering: Trigonometric functions are used to design structures, such as bridges and buildings, and to analyze the behavior of mechanical systems.
- In computer graphics: Trigonometric functions are used to create animations, simulate physics, and develop mathematical models for various applications.
- In navigation: Trigonometric functions are used to calculate distances, bearings, and positions on Earth's surface.
- In music: Trigonometric functions are used to create sound waves and model the vibration of strings and other musical instruments.
Understanding the applications of trigonometric functions is essential for developing a deeper understanding of the subject and its practical implications.
Consider a projectile launched at an angle θ with an initial velocity v. The horizontal and vertical components of the velocity vector can be calculated using the sine and cosine functions as follows:
$$v_x = v \cdot cos(\theta)$$
$$v_y = v \cdot sin(\theta)$$
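A short sketch of this calculation (the speed of 20 m/s and the angle of 30 degrees are example values, not taken from the exercise below):
```
import math

v = 20.0                      # initial speed in m/s (example value)
theta = math.radians(30)      # launch angle (example value)

v_x = v * math.cos(theta)     # about 17.32 m/s
v_y = v * math.sin(theta)     # 10.0 m/s

print(v_x, v_y)
```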
## Exercise
Calculate the horizontal and vertical components of the velocity for a projectile launched at an angle 45° with an initial velocity of 100 m/s.
# Working with negative angles and quadrants
Trigonometric functions can be extended to negative angles and quadrants. The sine, cosine, and tangent functions have periodicity properties that allow them to be defined for all real numbers.
The sine function is an odd function, meaning that sin(-θ) = -sin(θ). The cosine function is an even function, meaning that cos(-θ) = cos(θ). The tangent function, as the ratio of an odd function to an even function, is also odd: tan(-θ) = -tan(θ).
Understanding the behavior of trigonometric functions for negative angles and quadrants is crucial for solving problems and applying trigonometry in various fields.
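These symmetry properties are easy to verify numerically; the test angle below is arbitrary:
```
import math

theta = math.radians(25)   # arbitrary test angle

print(math.isclose(math.sin(-theta), -math.sin(theta)))  # True: sine is odd
print(math.isclose(math.cos(-theta),  math.cos(theta)))  # True: cosine is even
print(math.isclose(math.tan(-theta), -math.tan(theta)))  # True: tangent is odd
```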
Consider the unit circle and an angle -θ. The coordinates of the point on the unit circle corresponding to the angle -θ are given by:
$$x = cos(\theta)$$
$$y = -sin(\theta)$$
## Exercise
Find the coordinates of the point on the unit circle corresponding to the angle -60°.
# Solving problems using trigonometric functions
Trigonometric functions are used to solve problems in various fields. Some common problem-solving techniques involve:
- Identifying the relevant trigonometric functions and their arguments.
- Applying the appropriate trigonometric identities and formulas.
- Using the properties of trigonometric functions, such as periodicity, odd/even, and inverse functions.
Problem-solving using trigonometric functions often involves manipulating equations and inequalities to find the desired solution.
Consider a right-angled triangle with legs a and b and hypotenuse c. Its area is half the product of the legs:
$$Area = \frac{1}{2} \cdot a \cdot b$$
More generally, for any triangle with sides a and b enclosing an angle γ, the area is $\frac{1}{2} \cdot a \cdot b \cdot sin(\gamma)$; for a right angle, $sin(\gamma) = 1$, which recovers the formula above.
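A short check (the leg lengths are example values) that the two expressions agree for a right angle:
```
import math

a, b = 3.0, 4.0                                # the two legs (example values)
gamma = math.radians(90)                       # the included right angle

area_legs = 0.5 * a * b                        # 6.0
area_general = 0.5 * a * b * math.sin(gamma)   # also 6.0

print(area_legs, area_general)
```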
## Exercise
Calculate the area of the following right-angled triangle:
```
a
/|
/ |
b/ |c
/ |
/ |
/ |
```
# Rotation and its relationship with trigonometry
Rotation is a fundamental concept in trigonometry. It is used to describe the movement of objects in two dimensions, such as points, vectors, and shapes.
Trigonometric functions play a crucial role in describing the rotation of objects. The rotation of a point in the plane can be described using the sine and cosine functions as follows:
$$x' = x \cdot cos(\theta) - y \cdot sin(\theta)$$
$$y' = x \cdot sin(\theta) + y \cdot cos(\theta)$$
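A small helper function that applies these formulas (the test point and angle below are arbitrary):
```
import math

def rotate(x, y, theta):
    """Rotate the point (x, y) counterclockwise by theta radians about the origin."""
    x_new = x * math.cos(theta) - y * math.sin(theta)
    y_new = x * math.sin(theta) + y * math.cos(theta)
    return x_new, y_new

print(rotate(1.0, 0.0, math.radians(90)))  # approximately (0.0, 1.0)
```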
Understanding the relationship between rotation and trigonometry is essential for solving problems and applying the subject in various fields.
Consider a point (x, y) in the plane. The coordinates of the point after a rotation of angle θ can be calculated using the sine and cosine functions as follows:
$$x' = x \cdot cos(\theta) - y \cdot sin(\theta)$$
$$y' = x \cdot sin(\theta) + y \cdot cos(\theta)$$
## Exercise
Calculate the coordinates of the point (3, 4) after a rotation of 60°.
# Applying trigonometric functions to 2D and 3D shapes
Trigonometric functions can be applied to describe the shape and properties of 2D and 3D shapes. Some common applications include:
- Describing the shape of circles, ellipses, and other conic sections.
- Modeling the projection of 3D shapes onto 2D planes.
- Analyzing the deformation of shapes under various transformations, such as scaling, rotation, and reflection.
Understanding the relationship between trigonometric functions and shapes is crucial for solving problems and applying the subject in various fields.
Consider a circle with radius r centered at the origin (0, 0) of a coordinate plane. The equation of the circle is given by:
$$x^2 + y^2 = r^2$$
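One common way to generate points on such a circle is the parametrization x = r·cos(t), y = r·sin(t); the radius and angle step below are arbitrary choices:
```
import math

r = 2.0
points = [(r * math.cos(math.radians(t)), r * math.sin(math.radians(t)))
          for t in range(0, 360, 45)]

# Every generated point satisfies x**2 + y**2 == r**2 up to floating-point rounding
for x, y in points:
    print(round(x, 3), round(y, 3), round(x**2 + y**2, 3))
```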
## Exercise
Calculate the coordinates of the points on the circle with radius 5 centered at the origin (0, 0).
# Visualization of trigonometric functions
Visualization is an essential tool for understanding and applying trigonometric functions. Some common visualization techniques include:
- Using graphs to represent the relationship between angles and trigonometric functions.
- Creating animations to demonstrate the behavior of objects under rotation and other transformations.
- Using interactive tools and software to explore the properties of trigonometric functions.
Understanding the visualization of trigonometric functions is crucial for developing an intuitive understanding of the subject and its practical applications.
Consider the unit circle and an angle θ. The coordinates of the point on the unit circle corresponding to the angle θ can be visualized using a graph or an interactive tool.
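A minimal plotting sketch, assuming NumPy and Matplotlib are available (the library choice is ours; the text does not prescribe a tool):
```
import numpy as np
import matplotlib.pyplot as plt

theta = np.linspace(0, 2 * np.pi, 200)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))

ax1.plot(np.cos(theta), np.sin(theta))       # the unit circle
ax1.set_aspect("equal")
ax1.set_title("Unit circle")

ax2.plot(np.degrees(theta), np.sin(theta))   # sine as a function of the angle
ax2.set_xlabel("angle (degrees)")
ax2.set_title("sine curve")

plt.show()
```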
## Exercise
Visualize the points on the unit circle corresponding to angles in the range [0, 360] degrees.
# Summary and review of key concepts
In this textbook, we have covered the key concepts of trigonometric functions and their applications in various fields. We have discussed:
- The relationship between trigonometric functions and angles.
- The unit circle and its significance.
- The cosine, sine, and tangent as ratios.
- Applications of trigonometric functions in real-life scenarios.
- Working with negative angles and quadrants.
- Solving problems using trigonometric functions.
- Rotation and its relationship with trigonometry.
- Applying trigonometric functions to 2D and 3D shapes.
- Visualization of trigonometric functions.
Understanding these concepts is essential for developing a deep understanding of trigonometry and its practical applications. | Textbooks |
Research protocol: Cervical Arthroplasty Cost Effectiveness Study (CACES): economic evaluation of anterior cervical discectomy with arthroplasty (ACDA) versus anterior cervical discectomy with fusion (ACDF) in the surgical treatment of cervical degenerative disc disease — a randomized controlled trial
Valérie N. E. Schuermans ORCID: orcid.org/0000-0002-1733-85641,2,3,
Anouk Y. J. M. Smeets1,2,3,
Toon F. M. Boselie1,2,
Math J. J. M. Candel4,
Inez Curfs5,
Silvia M. A. A. Evers6,7 &
Henk Van Santbrink1,2,3
To date, there is no consensus on which anterior surgical technique is more cost-effective in treating cervical degenerative disc disease (CDDD). The most commonly used surgical treatment for patients with single- or multi-level symptomatic CDDD is anterior cervical discectomy with fusion (ACDF). However, new complaints of radiculopathy and/or myelopathy commonly develop at adjacent levels, also known as clinical adjacent segment pathology (CASP). The extent to which kinematics, surgery-induced fusion, natural history, and progression of disease play a role in the development of CASP remains unclear. Anterior cervical discectomy with arthroplasty (ACDA) is another treatment option that is thought to reduce the incidence of CASP by preserving motion in the operated segment. While ACDA is often discouraged, as the implant costs are higher while the clinical outcomes are similar to ACDF, preventing CASP might be a reason for ACDA to be a more cost-effective technique in the long term.
In this randomized controlled trial, patients will be randomized to receive ACDF or ACDA in a 1:1 ratio. Adult patients with single- or multi-level CDDD and symptoms of radiculopathy and/or myelopathy will be included. The primary outcome is cost-effectiveness and cost-utility of both techniques from a healthcare and societal perspective. Secondary objectives are the differences in clinical and radiological outcomes between the two techniques, as well as the qualitative process surrounding anterior decompression surgery. All outcomes will be measured at baseline and every 6 months until 4 years post-surgery.
High-quality evidence regarding the cost-effectiveness of both ACDA and ACDF is lacking; to date, there are no prospective trials from a societal perspective. Considering the aging of the population and the rising healthcare costs, there is an urgent need for a solid clinical cost-effectiveness trial addressing this question.
ClinicalTrials.gov NCT04623593. Registered on 29 September 2020.
Strengths and limitations of this study
External validity and generalizability are limited, as costs are country-specific.
A broad economic analysis from a societal perspective in the setting of a prospective randomized controlled trial will investigate the cost-effectiveness of ACDA and ACDF for CDDD, resulting in level I evidence.
The sample size is based on estimated costs and average yearly incidence of CASP, which may differ from reality.
The burden for patients in this study is low, and potential risks in the intervention group (ACDA) are similar to those in the control group (ACDF with stand-alone cages), which is standard care in our center.
Since the gap in incidence of additional CASP-related surgery between ACDF and ACDA expands considerably with time, we expect that a difference in costs, if found within the 4-year follow-up period, will also increase over time.
Cervical degenerative disc disease (CDDD) is the degeneration of a cervical intervertebral disc and/or the adjoining vertebral bodies, resulting in clinical symptoms of cervical radiculopathy, myelopathy, myeloradiculopathy, and axial pain. The incidence of degenerative pathologies is significantly increasing as the proportion of elderly in the population is rising [1, 2]. Currently, generalized spinal disc degeneration occurs in more than 90% of adults over 50 years old [3]. This age group now represents 32.8% of the population in Europe and is projected to reach 40.6% by 2050 [2]. In the next 20 years, a significant increase in anterior cervical decompression surgeries is predicted in people aged 45–54, mainly affecting the working population [4,5,6]. Symptoms of radiculopathy and/or myelopathy lead to restrictions in daily life and loss of professional capability, resulting in absenteeism. Societal healthcare costs are therefore significantly affected by CDDD. Further costs of healthcare are accrued when patients require surgical treatment, combined with associated hospitalization and rehabilitation. Currently, there are several (surgical) treatments for CDDD available. To date, there is no consensus on which surgical technique is more cost-effective in treating CDDD with radiculopathy and/or myelopathy.
One of the most common procedures for treating patients with single- or multi-level CDDD is anterior cervical discectomy with fusion (ACDF) [7, 8], which results in fusion in 95–100% of the cases [9]. Axial pain alone is not considered an indication for surgical treatment in our country. The primary goal of ACD(F) is the relief of symptoms of radiculopathy and/or myelopathy through decompression of neural structures. Fusion in itself is not a requisite to reach this goal. In our center, ACDF with stand-alone cages is the standard procedure for CDDD. Plate constructs are only used in case there are signs of instability (e.g., spondylolisthesis) and additional stabilization is deemed necessary. A common concern regarding ACDF with stand-alone cages is the occurrence of cage subsidence. However, in our recent retrospective cohort of 673 patients, only 1 patient required additional surgery due to subsidence (0.15%) [10]. Good short-term clinical results are achieved for both radiculopathy and myelopathy [9, 11]. Clinical results are independent from the technique used and from the occurrence of fusion [9, 11,12,13]. However, patient-reported satisfaction gradually decreases in the years following surgery [14, 15]. This is thought to be the consequence of the development of new complaints due to degenerative changes at a segment adjacent to the site of the index surgery, also known as adjacent segment pathology (ASP) [16].
A recent consensus proposes a distinct definition that distinguishes between radiologic adjacent segment pathology (RASP) and clinical adjacent segment pathology (CASP) [16]. CASP occurs at an estimated cumulative rate of 1.6–4.2% per year after ACDF [16, 17]; however, the incidence reported in literature varies widely [18,19,20,21]. Nevertheless, 50–75% of the patients that develop CASP require additional adjacent segment surgery [17, 22,23,24,25,26,27]. In our retrospective cohort, we observed an average rate of 2.1% of CASP per year, with an additional adjacent segment surgery rate at an average of 1.5% per year [10]. The annual incidence was unevenly distributed as half of these additional surgeries for CASP occurred within 2.5 years, which suggests a peak incidence in the first years following the index surgery [10].
The underlying mechanism of ASP remains a matter of debate. Besides natural progression of degeneration, compensation for the loss of motion in the fused segment is thought to cause overstraining of the adjacent segments [28,29,30]. Altered cervical sagittal alignment is also thought to be important in the accelerated development of CASP. Indeed, higher rates of CASP are observed after ACD, concomitant with an increased segmental kyphosis at the index level [12, 31, 32]. Unlike ACD, ACDF with plate constructs restores cervical sagittal lordosis. However, a higher rate of ASP is observed in patients with plate constructs compared to ACDF with stand-alone cages [12, 33]. This finding might be explained by the plate causing strain on the adjacent segments, or the more extensive surgical preparation for installing the plate, which increases the chance of iatrogenic damage to the adjacent level. Another contributing factor might be the occurrence of subsidence of the plate construct into the adjacent segment. Disc height at the adjacent segments has been found to be significantly decreased in patients with plate constructs, which supports this theory [33]. Nevertheless, the extent to which altered cervical motion influences the development of ASP remains unknown [34, 35].
Anterior cervical discectomy with arthroplasty (ACDA) was developed as another treatment option for cervical radiculopathy and/or myelopathy to reduce the incidence of CASP by preserving motion in the operated segment. Previously conducted research in patients with radiculopathy and/or myelopathy has shown that clinical and radiological outcomes are similar between ACDA and ACD(F) [28, 36, 37]. A meta-analysis found better neurological outcomes in patients with myelopathy after ACDA, in contrast to the pre-existing notion that ACDA leads to less favorable outcomes in myelopathy due to micro-trauma caused by preserved mobility [38]. Moreover, additional adjacent segment surgery rates are significantly lower for ACDA, for both single- and multi-level surgeries [39, 40]. The difference in additional adjacent segment surgery rates between ACDA and ACDF increases exponentially with longer follow-up time (Table 1). Despite these long-term benefits of ACDA, it is often discouraged since the clinical outcomes are similar to ACDF, but the implant costs are higher. However, preventing new complaints and additional surgeries due to CASP might be a reason for ACDA to be a more cost-effective technique in the long term. A systematic review of economic evaluations in anterior cervical decompression surgery was conducted by our research group [41]. Indeed, the majority of studies reported ACDA to be the most cost-effective technique despite the higher implant costs. However, the literature was highly heterogeneous and of low quality. Hence, although there is increasing evidence suggesting that ACDA might be the more cost-effective technique in the long term because of the reduced risk of CASP and the associated additional surgeries compared to ACD(F), the quality of this evidence is lacking, especially in Europe. Therefore, the need for a solid clinical cost-effectiveness trial addressing this question is high. Currently, ACDA cannot be offered as a standard treatment in the Netherlands because it is not reimbursed, as the costs for ACDA are higher and short-term symptom relief appears equal between the techniques.
Table 1 Incidence rates of additional surgery for CASP
Equipoise remains concerning the cost-effectiveness of these techniques; the main aim of CACES is to determine the long-term cost-effectiveness of ACDA in comparison to ACDF. As the re-operation rate for CASP is expected to be lower after ACDA, we hypothesize that the initially higher costs of a disc prosthesis might be recouped in the long term. The study aims to investigate a population that reflects daily practice; therefore, both radiculopathy and myelopathy patients and single- and multi-level patients will be included.
The primary objective of the proposed study is to examine whether ACDA is preferred over ACDF in terms of cost-effectiveness from a healthcare and societal perspective, in patients with single- or multi-level symptomatic CDDD. Secondary objectives include differences in clinical and radiological outcomes between the two techniques, as well as the qualitative process surrounding anterior decompression surgery.
This is a prospective, single-blinded, randomized controlled trial. One study arm receives ACDA (intervention group), and the other receives ACDF (control group). Patients will be randomized in a computer-generated 1:1 ratio to ensure equal division among both groups. No block randomization or stratification will be used. Only the coordinating investigator will be able to perform the computer-generated randomization. Group allocation will be performed per patient, not per level, meaning that no hybrid constructs will be used in this study. CACES investigators and surgeons will not be blinded to treatment allocation. All other involved healthcare professionals are blinded to allocation. All visible documentation will refer to "ACD" without specifying whether it concerns fusion or a disc prosthesis. A follow-up period of 4 years was chosen to encompass the expected incidence of additional CASP-related surgeries of 1.6–4.2% per year, with a peak incidence in the first 2.5 years [16]. It is reasonable to expect that most CASP events will occur within this follow-up period and that we will be able to gather reliable cost data from patients and informal caregivers (ICGs).
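As an illustration of the allocation procedure described above, a minimal sketch of a computer-generated 1:1 allocation without blocks or stratification is given below. The protocol does not specify the randomization software, so the helper and its parameters are hypothetical and for illustration only.

```python
import random

def generate_allocation_list(n_per_group=99, seed=None):
    """Hypothetical illustration of a computer-generated 1:1 allocation:
    a single random permutation of an equal number of ACDA and ACDF
    assignments, with no blocks and no stratification. Not the trial's
    actual randomization software."""
    rng = random.Random(seed)
    allocations = ["ACDA"] * n_per_group + ["ACDF"] * n_per_group
    rng.shuffle(allocations)
    return allocations

# Example: the first five assignments of a 198-patient allocation list
print(generate_allocation_list(seed=2022)[:5])
```

A full random permutation of equal-sized groups guarantees the exact 1:1 division mentioned above, whereas per-patient coin flips would only achieve it in expectation.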
Participants and recruitment
Patients will be enrolled in the Department of Neurosurgery, Zuyderland Medical Center, Heerlen, the Netherlands. All adult patients indicated for single- or multi-level surgery for radiculopathy and/or myelopathy due to CDDD are eligible to be included in the study.
If patients choose not to participate in the study, the standard treatment, ACDF with stand-alone cage, will be offered. If patients agree to be included in the study, they will complete the informed consent form. ICGs will also be asked to participate, to enable a broad societal evaluation of costs. Patients can leave the study at any time for any reason without consequences. The investigator can decide to withdraw a patient from the study for urgent medical reasons. Since the sample size accounts for a 20% loss to follow-up, patients will not be replaced after withdrawal.
In order to be eligible to participate in this study, a patient must meet all of the following criteria:
Indication for anterior cervical decompression surgery
Single- or multi-level CDDD, located between C3 and C7, with a maximum of 4 consecutive levels
Symptoms of myelopathy, radiculopathy, or myeloradiculopathy
In the case of pure radiculopathy: refractory to at least 6 weeks of conservative therapy
In the case of myelopathy, it must be symptomatic
Patients ≥18 years of age
Patients who meet any of the following criteria will be excluded from participation in this study:
Indication for (additional) posterior surgical approach
Indication for additional stabilization of the pathological segment with a plate, i.e., in case there are signs of instability (e.g., spondylolisthesis) and additional stabilization is deemed necessary
Previous surgery of the cervical spine (anterior and posterior)
Traumatic origin of the compression
Previous radiotherapy of the cervical spine
Inflammatory spinal disease, e.g., Bechterew's disease, Forestier's disease
Infection of the cervical spine
Unable to fill out Dutch questionnaires
Informed consent not possible
Dropout criteria include technical reasons related to placing either of the implants during surgery. Dropout for other reasons is expected to be rare.
One of two surgical procedures will be performed on patients participating in this study: ACDA (intervention group) and ACDF (control group). In both techniques, the disc space is approached through a right- or left-sided anterior surgical approach. The disc content will be removed, after which the endplates are prepared with curettes. The posterior longitudinal ligament is opened, and the dura is visualized to ensure adequate decompression. In the ACDF group, a cage (CeSPACE® XP) is implanted; in the ACDA group, a disc prosthesis (ActivC®) is implanted after the discectomy. No bone substitutes will be used.
Implantation of the devices is done in accordance with the manufacturer's protocol for implantation and endplate preparation. In both groups, the wound is closed in layers, after a prevertebral wound drain is placed. Potential surgical risks are similar between the cage (ACDF) and the arthroplasty group (ACDA) [38, 61].
Post-surgical care is the same for both groups. Patients receive 24 h of antibiotic prophylaxis, and after 24 h, the prevertebral wound drain is removed. Patients can use analgesics post-operatively, and they are allowed to mobilize immediately. The day after surgery, a physical therapist provides patients with standardized exercises. Patients are not routinely referred to a physical therapist, and no neck collar is used.
Figure 1 depicts the flowchart of the study process. Patients will be selected by screening the appointments in the neurosurgical outpatient clinic and the waiting lists for surgery. When a patient is indicated to undergo anterior cervical decompression surgery, the treating neurosurgeon will inform the patient about the CACES trial. The patient is then contacted by the investigator to provide extensive information. Additionally, written information is sent to them by (e)mail. A week after the information is sent, the investigator contacts the patient again to answer any remaining questions. Patients are included in the study when informed consent is signed. Data is collected in the Research Manager tool, a database with a built-in auditing system. Questionnaires, including reminders, are sent automatically once the patient is registered in Research Manager. If patients or informal caregivers prefer to fill out questionnaires on paper, the answers will be registered in Research Manager by the investigator and the paper version will be stored. Data will be collected at baseline and at 6-month intervals until 4 years post-surgery. Patients will be blinded to the treatment they received for the first year after surgery to obtain more reliable patient-reported outcome measures (PROMs). After 1 year, patients will have an outpatient appointment with the coordinating investigator for unblinding (if desired) and additional imaging.
Study design. Timing of pre- and post-operative outcome measurements and follow-up evaluations
All outcomes will be measured at baseline and at 6-month intervals until 4 years post-surgery.
The primary outcome is cost-effectiveness and cost-utility of ACDA compared to ACDF from a societal perspective, in patients with single- or multi-level CDDD and symptoms of radiculopathy and/or myelopathy. The primary outcome will be evaluated at 4 years post-operative.
The economic evaluation will involve a combination of a cost-effectiveness analysis (CEA) and cost-utility analysis (CUA) from a societal perspective. In the CUA, the effects are presented as quality-adjusted life years (QALYs), based on the EuroQol 5 dimensions 5 levels (EQ-5D-5L) [62] utility scores [63]. The Net Monetary Benefit (NMB) will be determined, in which effect is expressed in QALYs. The incremental cost-effectiveness ratio (ICER) will be expressed as the incremental costs per QALY. This economic evaluation will be conducted according to the Dutch Guidelines of the National Health Care Institute [64,65,66]. The following validated cost questionnaires will be used: Productivity Cost Questionnaire (iPCQ) [67] and the Medical Consumption Questionnaire (iMCQ) [67]. These questionnaires allow a broad measurement of societal costs, including medical consumption, and both paid and unpaid loss of productivity for patients. ICGs will be asked to fill out the Limited Valuation of Informal Care Questionnaire (iVICQ) [68], which consists of the Care-related Quality of Life instrument (CarerQol-7D) and the Self-Rated Burden scale (SRB). This measures the (subjective) burden for ICG and the assessment of caregiving in terms of wellbeing.
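To make the relationship between QALYs, costs, the ICER, and the NMB concrete, the sketch below uses entirely hypothetical cost and QALY values (not trial data) and an illustrative WTP threshold.

```python
# Hypothetical illustration of the cost-utility quantities used in this
# protocol; all numbers are invented and are not trial results.
wtp = 50_000                                        # illustrative willingness-to-pay (euro per QALY)

mean_costs = {"ACDA": 14_000.0, "ACDF": 13_000.0}   # hypothetical mean societal costs (euro)
mean_qalys = {"ACDA": 3.10, "ACDF": 3.05}           # hypothetical mean QALYs over follow-up

delta_cost = mean_costs["ACDA"] - mean_costs["ACDF"]
delta_qaly = mean_qalys["ACDA"] - mean_qalys["ACDF"]

icer = delta_cost / delta_qaly                      # incremental cost per QALY gained
inmb = wtp * delta_qaly - delta_cost                # incremental net monetary benefit

print(f"ICER: {icer:,.0f} euro/QALY")               # 20,000 euro/QALY in this toy example
print(f"iNMB: {inmb:,.0f} euro")                    # 1,500 euro, i.e. ACDA favoured at this WTP
```

In the actual analysis, these quantities will be estimated from observed patient-level cost and utility data rather than from point estimates like these.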
Secondary outcomes will consist of both clinical and radiological outcome measurements.
Clinical outcomes will be assessed according to the rate of CASP and associated additional surgeries, combined with patient-reported outcome measures (PROMs). The included PROMs are the Neck Disability Index (NDI) [69], Visual Analogue Scale (VAS) [70] for neck and arm pain, Hospital Anxiety and Depression Scale (HADS) [71], and the modified Japanese Orthopedic Association score (mJOA) [72] for myelopathy or myeloradiculopathy patients. CASP is defined as the presence of newly developed symptomatic cervical radiculopathy and/or myelopathy at a level adjacent to the initial surgery, confirmed by corresponding findings on magnetic resonance imaging (MRI). It should be noted that CASP is differentiated from RASP by the presence of clinical symptoms attributed to the degenerative changes; patients will only be considered as having CASP when new clinical symptoms develop. Additional adjacent segment surgery for CASP is defined as surgery for radiculopathy and/or myelopathy at a segment adjacent to the level of the initial surgery. Notably, neck pain itself is not considered a surgical indication in our national guidelines [73]. Re-operations at the index level and at levels not adjacent to the initially operated level are not considered additional surgery for CASP.
Radiological outcomes will be assessed at three time points: pre-operative, directly post-operative, and 1 year post-operative. Pre-operative imaging will assess baseline degeneration according to the Kellgren-Lawrence Score (KS) [74], cervical sagittal alignment, and baseline disc height. Moreover, a full sagittal spine X-ray will be taken to assess pre-operative global balance according to the odontoid-hip axis (OD-HA) [75]. A standard cervical spine X-ray will be taken at the immediate post-operative period (before discharge) to assess the position of the implant, subsidence, and cervical sagittal alignment. Cervical spine X-rays will be taken again 1 year post-operative to assess fusion, cage subsidence, adjacent segment degeneration (KS), adjacent segment disc height, and alignment. A flexion and extension X-ray will be taken to assess movement. Radiological cage subsidence will be defined as > 3-mm decrease of interbody height compared to the direct post-operative X-ray [76, 77].
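Because radiological cage subsidence is defined by a simple threshold rule (a > 3-mm decrease of interbody height relative to the direct post-operative X-ray), it can be illustrated with a short, hypothetical helper; this is not part of the study software.

```python
def is_radiological_subsidence(height_postop_mm, height_followup_mm, threshold_mm=3.0):
    """Hypothetical helper: flag radiological cage subsidence when the
    interbody height has decreased by more than `threshold_mm` compared
    with the direct post-operative X-ray."""
    return (height_postop_mm - height_followup_mm) > threshold_mm

print(is_radiological_subsidence(7.5, 4.0))  # True: 3.5-mm loss of interbody height
print(is_radiological_subsidence(7.5, 5.0))  # False: 2.5-mm loss of interbody height
```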
Moreover, a process evaluation will be performed to determine the underlying values, needs, impacts, and preferences of people with CDDD. The focus will be on the experiences and opinions of patients, caregivers, and professionals concerning the process surrounding ACDA and ACDF. A process evaluation might also identify gaps or limitations in the published research with regard to important outcomes to those with lived experience. A qualitative analysis will be performed according to the framework provided by Saunders et al. [78].
Per-operative parameters such as intraoperative complications, operative time, size of the implant, and blood loss will be evaluated. Perioperative complications, re-operations at index- or non-adjacent levels, and their associated costs will be assessed.
Power considerations
The primary outcome of the proposed study is cost-effectiveness. No difference in QALYs is expected between the intervention and control groups based on previous literature [41].
The only expected difference in clinical effectiveness is the additional adjacent segment surgery rate, with an average of 1.0% versus 2.2% (Table 1). Although small, a difference in additional surgery rates is present in all trials, increasing exponentially over the years. We believe that the small inter-group difference may result in a cost-effective intervention, mainly because we expect most of the difference in costs to arise from episodes of secondary symptoms, which can lead to absenteeism, use of analgesic medication, use of (para)medical supplies, and additional adjacent segment surgery.
Reported additional adjacent segment surgery rates for ACDA vary from 0 to 2.8% per year, with an average of 1.0% based on the largest FDA/IDE trials [46, 79]. As for ACDF, yearly reported additional adjacent segment surgery rates range from 0 to 4.2%, with an average of 2.2% [80, 81].
The cost of a cervical disc prosthesis for ACDA is estimated to be around €2000, while the cage for an ACDF costs €500, resulting in a difference of €1500 per implant. Based on our data from the past years, on average, 1.2 implants are used per procedure.
Hospital admission costs and procedural costs are similar for both procedures and are estimated at €11,000 in our center. As reported by ArboNed for the year 2019, the average absenteeism for cervical disc pathology was 242 days, with a standard deviation of 225 days [personal communication] [82]. The average cost of 1 day of absenteeism varies between €200 and €400, depending on salary and company; for example, 1 day of absence from work is usually calculated to cost around €250 in the Netherlands [personal communication] [82].
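Combining the figures quoted above gives a rough sense of the cost components involved; the calculation below simply restates that arithmetic (the €250/day figure is the example value mentioned in the text, and actual values vary by salary and employer).

```python
# Back-of-the-envelope calculation based on the figures quoted above.
implant_cost_acda = 2000      # euro per disc prosthesis
implant_cost_acdf = 500       # euro per cage
implants_per_procedure = 1.2  # average number of implants per procedure in our own data

extra_implant_cost = (implant_cost_acda - implant_cost_acdf) * implants_per_procedure
print(f"Expected extra implant cost per ACDA procedure: {extra_implant_cost:.0f} euro")        # 1800 euro

mean_absenteeism_days = 242   # ArboNed, 2019 (SD 225 days)
cost_per_day = 250            # example value; the text quotes a range of 200-400 euro
print(f"Average absenteeism cost per episode: {mean_absenteeism_days * cost_per_day:,} euro")  # 60,500 euro
```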
In the Netherlands, the willingness to pay (WTP) threshold for spinal pathologies — classified as a moderate disease — is set at €50,000/QALY [65, 66]. However, in the sample size calculation, we will not fix WTP at one value. The sample size calculation is based on both QALYs and costs, using Net Monetary Benefit (NMB) as outcome. The NMB for a treatment is defined as [83]:
$$\mathrm{NMB} = \mathrm{WTP} \times \mathrm{Effect} - \mathrm{Costs}$$
The smallest relevant effect in terms of NMB will be quantified in terms of Cohen's d, which is defined as:
$$d = \frac{\text{average NMB (ACDF)} - \text{average NMB (ACDA)}}{\text{standard deviation of NMB}}$$
The standard classification of Cohen's d consists of a small effect (d = 0.2), a medium-sized effect (d = 0.5), and a large effect (d = 0.8) [84].
A two-sided test will be used with a 5% level of significance (α = 0.05). A Cohen's d of 0.4 is often referred to as the hinge point, meaning a greater than average effect [84]. We expect slightly less than a medium-sized effect and therefore choose a Cohen's d between 0.4 and 0.5, resulting in a Cohen's d of 0.45. Inclusion of 79 patients per treatment group will then yield a power of 0.80. Taking into account a 20% loss to follow-up in 4 years, the required sample size is 99 per group, with a total of 198 patients.
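As a reproducibility check on the numbers above, the same two-sample power calculation can be sketched with a standard statistics library (statsmodels is used here only for illustration; the protocol does not state which software was actually used).

```python
from math import ceil
from statsmodels.stats.power import TTestIndPower

# Two-sided two-sample comparison of mean NMB with Cohen's d = 0.45,
# alpha = 0.05 and power = 0.80.
n_per_group = TTestIndPower().solve_power(effect_size=0.45, alpha=0.05,
                                          power=0.80, alternative="two-sided")
n_per_group = ceil(n_per_group)        # approximately 79 patients per group

# Inflate for an anticipated 20% loss to follow-up over 4 years.
n_required = ceil(n_per_group / 0.80)  # approximately 99 per group, 198 in total
print(n_per_group, n_required, 2 * n_required)
```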
All statistical procedures will be conducted based on both the intention-to-treat principle and actual participation in treatment (i.e., per-protocol analyses) and will be performed using SPSS Statistics (IBM Corporation, Chicago, USA). Effect sizes and their 95% confidence intervals will be estimated. A two-sided significance level of 0.05 will be used as a threshold to determine whether differences are statistically significant. Trends in differences between ACDA and ACDF will be measured over time. If significant differences are found, post hoc analyses will further investigate these effects.
Costs will be calculated by multiplying volumes (resource use) with unit costs. For the unit costs, we will use the Dutch costing guidelines. Productivity costs will be calculated by means of the friction cost method, based on a mean added value of the Dutch working population. Cost prices (2022) will be expressed in euros. If necessary, existing cost prices will be updated using the consumer price index (2022).
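The costing approach (volumes multiplied by unit costs, with older reference prices indexed to 2022) can be sketched as follows; the unit costs, volumes, and CPI factor below are placeholders rather than the Dutch reference prices themselves.

```python
# Illustrative costing: multiply per-patient resource-use volumes by unit
# costs and index reference prices to 2022 with a consumer price index
# factor. All numbers are placeholders, not actual Dutch reference prices.
unit_costs_reference_year = {"GP visit": 34.0, "hospital day": 520.0, "physiotherapy session": 35.0}
cpi_factor_to_2022 = 1.07   # hypothetical cumulative price-index correction

volumes = {"GP visit": 3, "hospital day": 2, "physiotherapy session": 6}  # hypothetical resource use

total_cost_2022 = sum(volumes[item] * unit_costs_reference_year[item] * cpi_factor_to_2022
                      for item in volumes)
print(f"Total indexed cost per patient: {total_cost_2022:.2f} euro")  # 1446.64 euro in this example
```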
This study is supported by the national Dutch patient society of spine patients ('Nederlandse Vereniging van rugpatienten 'de Wervelkolom') as well as by the Dutch national society of neurosurgeons (NVVN).
To protect the privacy of all participants, all collected data will be encoded. Data collection will be carried out using the data management module of Research Manager at Zuyderland. This application for detailed custom-made electronic data collection meets all guidelines of Good Clinical Practice (GCP). Trial auditing is incorporated in this system. Handling of personal data will comply with the General Data Protection Regulation (GDPR) (in Dutch: Algemene Verordening Gegevensbescherming (AVG)). Informed consent forms and questionnaires filled out on paper will be stored for 15 years in a safe location. Only the coordinating investigators will have access to the encoding key. ICG information will automatically be linked to the specific patient they are taking care of. During the study, the encoded data will be accessible on request. After the study, the encoded data will be publicly accessible for verification or further research. Monitoring will be performed by trained and qualified monitors from an external party, remotely and on site (Clinical Trial Centre Maastricht). All (serious) adverse events will be recorded. All patients are insured for participation in this trial and its possible consequences.
Protocol amendments
All substantial amendments will be notified to the METC and to the competent authority. A "substantial amendment" is defined as an amendment that is likely to affect the safety or physical or mental integrity of the subjects of the trial, the scientific value of the trial, the conduct or management of the trial, or the quality or safety of any intervention used in the trial. Non-substantial amendments will not be notified to the accredited METC and the competent authority, but will be recorded and filed.
This research will be conducted according to principles enshrined in the Declaration of Helsinki (3rd edition 2013) and in accordance with the Dutch Medical Research Involving Human Subjects Act (WMO). The study protocol has been approved by the Medical Ethical Committee of the Zuyderland Medical Center, Heerlen (document nr: METCZ20200025, NL72534.096.20). Written informed consent will be obtained from all patients included in the study.
Patients are insured and covered for damage to research subjects through injury or death caused by the study. The insurance applies to the damage that becomes apparent during the study or within 4 years after the end of the study.
There is currently no consensus on which anterior surgical technique is most cost-effective for treating CDDD. The role of ACDA in the prevention of ASP remains a controversial topic, which is a consequence of the ambiguous accelerating factors of ASP. Current literature reports inconsistent incidence rates and controversial risk factors. The extent to which kinematics, surgery-induced fusion, and natural history of disease play a role in the development of ASP remains unclear. Therefore, this study aims to provide high-quality evidence concerning the cost-effectiveness of ACDA in comparison to ACDF, as well as to evaluate the proposed accelerating factors of ASP. This is of dual importance; on the one hand, it will provide optimal patient care, and on the other hand, it will limit the exponentially increasing healthcare costs.
Based on the available literature, a difference in clinical effectiveness between the two treatments in the short term (1 to 2 years after surgery) is not expected [35, 85,86,87,88,89], as the risk profiles for ACDF and ACDA surgeries are comparable [61]. Aside from the high implant costs, ACDA is sometimes discouraged because of the concern that it leads to less favorable outcomes in myelopathy [61, 90, 91]. However, current literature provides no evidence supporting this concern and even reports better neurological outcomes after ACDA [61, 90,91,92]. Moreover, revision rates at the index level appear to be similar for ACDA and ACDF. As such, in our opinion, ACDA poses no additional risk for patients [93].
A concern that may arise regarding this study protocol is the use of ACDF with stand-alone cages instead of plate constructs, which are the preferred choice in some parts of the world. The main concern of ACDF with stand-alone cages is the occurrence of subsidence; we have demonstrated this to be very low (0.15%) in our own population. Moreover, it has been proven that the different fusion techniques are equally safe. In fact, current literature challenges the use of plate constructs [94]. As described in the introduction and outlined in Table 1, most studies show lower rates of adjacent segment surgery in the ACDA groups. Hence, the goal for which the cervical disc prosthesis was designed appears to be (partly) reached. However, not all aspects have been sufficiently investigated. Only a few studies have investigated the cost-effectiveness of the two techniques, and none in such an extensive, prospective, randomized controlled trial [41, 95].
External validity and generalizability are the main constraints of this study, as costs and the preferred surgical technique are country-specific. For example, the costs of absenteeism depend on country, salary, and duration. The costs of surgery and hospital admission can also differ between countries, as can insurance reimbursement for ACDA. ACDA is not reimbursed in the Netherlands, while in other countries it is reimbursed and offered as standard care. Nevertheless, we expect that the proportion of costs and effectiveness can be extrapolated to other countries, even though exact costs cannot. Importantly, the methodology of this study is reproducible and may encourage researchers around the world to also investigate cost-effectiveness in other settings. The inclusion of patients with single- and multi-level pathology and radiculopathy and/or myelopathy will provide an adequate representation of daily practice, which improves generalizability. Another limitation of this study is that the sample size is based on estimated costs and incidences of CASP, which may differ from reality.
A primary outcome based on different parameters, costs, and effects/utility remains relatively uncommon despite the increasing number of economic evaluations. No similar studies are available to estimate effect sizes; therefore, the sample size in this study might be over- or underestimated. There are no previous prospective trials on cost-effectiveness for ACDA versus ACDF, which does not allow for comparison.
If our hypothesis is not confirmed and this study shows that ACDF is more cost-effective than ACDA, there will be no change in the current standard treatment for CDDD. This study will remain valuable, as it will provide new insights into the existing debate concerning anterior decompression techniques in the treatment of CDDD.
If our hypothesis is proven correct and the chance of developing CASP is lower in ACDA than ACDF, ACDA has potential to lower direct and indirect healthcare costs. The gap in incidence of additional CASP-related surgery between ACDF and ACDA expands considerably with time (Table 1). We therefore expect that if a difference in costs within 4 years is detected, this difference will increase over time.
The Dutch National Society of Neurosurgeons supports the intentions of the study. The first step is to revise the national guidelines if the proposed study shows greater cost-effectiveness of ACDA compared to ACDF with equal clinical effectiveness. The ACDA procedure can then be implemented, leading to standard reimbursement. The process evaluation in this study will offer valuable insight into the optimal process surrounding ACDA.
Dissemination policy
The findings of this study will be disseminated in conferences and seminars and will be published in an open-access international peer-reviewed journal.
Recruiting, started 01.01.2022. Anticipated end of study 01.06.2028.
ACD:
Anterior cervical discectomy
ACDA:
Anterior cervical discectomy with arthroplasty
ACDF:
Anterior cervical discectomy with fusion
ASP:
Adjacent segment pathology
AR:
Adverse reaction
CACES:
Cervical Arthroplasty Cost Effectiveness Study
CareQoL:
Care-related Quality of Life
CASP:
Clinical adjacent segment pathology
CDDD:
Cervical degenerative disc disease
CEA:
Cost-effectiveness analysis
CUA:
Cost-utility analysis
DCM:
Degenerative cervical myelopathy
GCP:
Good Clinical Practice
GDPR:
General Data Protection Regulation (in Dutch: Algemene Verordening Gegevensbescherming (AVG))
HADS:
Hospital Anxiety and Depression Scale
ICG:
Informal caregiver
ICER:
Incremental cost-effectiveness ratio
iMCQ:
Medical Consumption Questionnaire
IMP:
Investigational Medicinal Product
IMPD:
Investigational Medicinal Product Dossier
iPCQ:
Productivity Cost Questionnaire
iVICQ:
Limited Valuation Of Informal Care Questionnaire
METC:
Medical research ethics committee (MREC); in Dutch: medisch-ethische toetsingscommissie
MRI:
Magnetic resonance imaging
NDI:
Neck Disability Index
NMB:
Net monetary benefit
NSAID:
Non-steroid anti-inflammatory drug
PEEK:
Polyether ether ketone
PROMs:
Patient-reported outcome measures
QALY:
Quality-adjusted life year
RASP:
Radiological Adjacent Segment Pathology
SRB:
Self-Rated Burden Scale
VAS:
Visual Analogue Scale
WTP:
Willingness to pay
Abdulkarim JA, Dhingsa R, Finlay DBL. Magnetic resonance imaging of the cervical spine: frequency of degenerative changes in the intervertebral disc with relation to age. Clin Radiol. 2003;58(12):980–4.
European Commission - Eurostat. Ageing Europe - looking at the lives of older people in the EU. European Union. 2019.
Woods BI, Hilibrand AS. Cervical radiculopathy: epidemiology, etiology, diagnosis, and treatment. J Spinal Disord Techn. 2015;28(5):E251–9.
Neifert SN, Martini ML, Yuk F, McNeill IT, Caridi JM, Steinberger J, et al. Predicting trends in cervical spinal surgery in the United States from 2020 to 2040. WORLD Neurosurg. 2020;141:E175–81.
Hammer C, Heller J, Kepler C. Epidemiology and pathophysiology of cervical disc herniation. Semin Spine Surg. 2016.
Roughley P, Martens D, Rantakokko J, Alini M, Mwale F, Antoniou J. The involvement of aggrecan polymorphism in degeneration of human intervertebral disc and articular cartilage. Eur Cells Materials. 2006;11:1–7.
Jacobs WCH, Anderson PG, Limbeek J, Willems PC, Pavlov P. Single or double-level anterior interbody fusion techniques for cervical degenerative disc disease. Cochrane Database Syst Rev. 2004;(4):CD004958.
Korinth MC. Treatment of cervical degenerative disc disease - current status and trends. Zentralbl Neurochir. 2008;69(3):113–24.
Dowd GC, Wirth FP. Anterior cervical discectomy: is fusion necessary? J Neurosurg. 1999;90(1 Suppl):8–12.
Schuermans V, Smeets AYJM, Wijsen NPMH, Curfs I, Boselie TFM, van Santbrink H. Clinical adjacent segment pathology after anterior cervical decompression surgery for cervical degenerative disc disease: a single center retrospective cohort study with long-term follow-up. Brain Spine. 2022;2:100869.
Fehlings MG, Ibrahim A, Tetreault L, Albanese V, Alvarado M, Arnold P, et al. A global perspective on the outcomes of surgical decompression in patients with cervical spondylotic myelopathy: results from the prospective multicenter AOSpine international study on 479 patients. Spine (Phila Pa 1976). 2015;40(17):1322–8.
Xie J, Hurlbert RJ. Discectomy versus discectomy with fusion versus discectomy with fusion and instrumentation: a prospective randomized study. Neurosurgery. 2007;61(1):107–16.
Joo Y-H, Lee J-W, Kwon K-Y, Rhee J-J, Lee H-K. Comparison of fusion with cage alone and plate instrumentation in two-level cervical degenerative disease. J KOREAN Neurosurg Soc. 2010;48(4):342–6.
Donk RD, Verbeek ALM, Verhagen WIM, Groenewoud H, Hosman AJF, Bartels RHMA. What's the best surgical treatment for patients with cervical radiculopathy due to single-level degenerative disease? A randomized controlled trial. PLoS One. 2017;12(8):e0183603.
Nandoe Tewarie RDS, Bartels RHMA, Peul WC. Long-term outcome after anterior cervical discectomy without fusion. Eur spine J Off Publ Eur Spine Soc Eur Spinal Deform Soc Eur Sect Cerv Spine Res Soc. 2007;16(9):1411–6.
Riew KD, Norvell DC, Chapman JR, Skelly AC, Dettori JR. Introduction/summary statement: adjacent segment pathology. Spine. 2012;37(22 Suppl):S1–7.
Lawrence BD, Hilibrand AS, Brodt ED, Dettori JR, Brodke DS. Predicting the risk of adjacent segment pathology in the cervical spine: a systematic review. Spine. 2012;37(22 Suppl):S52–64.
Gornet MF, Lanman TH, Burkus JK, Dryer RF, McConnell JR, Hodges SD, et al. Two-level cervical disc arthroplasty versus anterior cervical discectomy and fusion: 10-year outcomes of a prospective, randomized investigational device exemption clinical trial. J Neurosurg Spine. 2019:1–11. [Internet]. Available from: https://www.cochranelibrary.com/central/doi/10.1002/central/CN-01960416/full.
Shriver MF, Lubelski D, Sharma AM, Steinmetz MP, Benzel EC, Mroz TE. Adjacent segment degeneration and disease following cervical arthroplasty: a systematic review and meta-analysis. Spine J. 2016;16(2):168–81.
Hilibrand AS, Carlson GD, Palumbo MA, Jones PK, Bohlman HH. Radiculopathy and myelopathy at segments adjacent to the site of a previous anterior cervical arthrodesis. J Bone Jt Surg - Ser A. 1999;81(4):519–28.
Robertson JT, Papadopoulos SM, Traynelis VC. Assessment of adjacent-segment disease in patients treated with cervical fusion or arthroplasty: a prospective 2-year study. J Neurosurg Spine. 2005;3(6):417–23.
Gore DR, Sepic SB. Anterior cervical fusion for degenerated or protruded discs. A review of one hundred forty-six patients. Spine (Phila Pa 1976). 1984;9(7):667–71.
Williams JL, Allen MBJ, Harkess JW. Late results of cervical discectomy and interbody fusion: some factors influencing the results. J Bone Joint Surg Am. 1968;50(2):277–86.
Bohlman HH, Emery SE, Goodfellow DB, Jones PK. Robinson anterior cervical discectomy and arthrodesis for cervical radiculopathy. Long-term follow-up of one hundred and twenty-two patients. J Bone Jt Surg Am. 1993;75(9):1298–307.
Hilibrand AS, Robbins M. Adjacent segment degeneration and adjacent segment disease: the consequences of spinal fusion? Spine J. 2004;4(6 Suppl):190S–4S.
Hilibrand AS, Carlson GD, Palumbo MA, Jones PK, Bohlman HH. Radiculopathy and myelopathy at segments adjacent to the site of a previous anterior cervical arthrodesis. J Bone Joint Surg Am. 1999;81(4):519–28.
Gore DR, Sepic SB. Anterior discectomy and fusion for painful cervical disc disease. A report of 50 patients with an average follow-up of 21 years. Spine (Phila Pa 1976). 1998;23(19):2047–51.
Helgeson MD, Bevevino AJ, Hilibrand AS. Update on the evidence for adjacent segment degeneration and disease. Spine J. 2013;13(3):342–51.
Seo M, Choi D. Adjacent segment disease after fusion for cervical spondylosis; myth or reality? Br J Neurosurg. 2008;22(2):195–9.
Eck JC, Humphreys SC, Lim T-H, Jeong ST, Kim JG, Hodges SD, et al. Biomechanical study on the effect of cervical spine fusion on adjacent-level intradiscal pressure and segmental motion. Spine (Phila Pa 1976). 2002;27(22):2431–4.
Martins AN. Anterior cervical discectomy with and without interbody bone graft. J Neurosurg. 1976;44(3):290–5.
Hauerberg J, Kosteljanetz M, Bøge-Rasmussen T, Dons K, Gideon P, Springborg JB, et al. Anterior cervical discectomy with or without fusion with ray titanium cage: a prospective randomized clinical study. Spine (Phila Pa 1976). 2008;33(5):458–64.
Ji GY, Oh CH, Shin DA, Ha Y, Kim KN, Yoon DH, et al. Stand-alone cervical cages versus anterior cervical plates in 2-level cervical anterior interbody fusion patients: analysis of adjacent segment degeneration. J Spinal Disord Tech. 2015;28(7):E433–8.
Boselie TFM, van Mameren H, de Bie RA, van Santbrink H. Cervical spine kinematics after anterior cervical discectomy with or without implantation of a mobile cervical disc prosthesis; an RCT. BMC Musculoskelet Disord. 2015;16:34.
Boselie TFM, Willems PC, van Mameren H, de Bie RA, Benzel EC, van Santbrink H. Arthroplasty versus fusion in single-level cervical degenerative disc disease: a Cochrane review. Spine (Phila Pa 1976). 2013;38(17):E1096–107.
Vleggeert-Lankamp CLA, Janssen TMH, van Zwet E, Goedmakers CMW, Bosscher L, Peul W, et al. The NECK trial: effectiveness of anterior cervical discectomy with or without interbody fusion and arthroplasty in the treatment of cervical disc herniation; a double-blinded randomized controlled trial. Spine J. 2019;19(6):965–75.
Zhang Y, Liang C, Tao Y, Zhou X, Li H, Li F, et al. Cervical total disc replacement is superior to anterior cervical decompression and fusion: a meta-analysis of prospective randomized controlled trials. PLoS One. 2015;10(3):e0117826.
Maharaj MM, Mobbs RJ, Hogan J, Zhao DF, Rao PJ, Phan K. Anterior cervical disc arthroplasty (ACDA) versus anterior cervical discectomy and fusion (ACDF): a systematic review and meta-analysis. J Spine Surg (Hong Kong). 2015;1(1):72–85.
Radcliff K, Coric D, Albert T. Five-year clinical results of cervical total disc replacement compared with anterior discectomy and fusion for treatment of 2-level symptomatic degenerative disc disease: a prospective, randomized, controlled, multicenter investigational device exemption. J Neurosurg Spine. 2016;25(2):213–24.
Kong L, Cao J, Wang L, Shen Y. Prevalence of adjacent segment disease following cervical spine surgery. Medicine (United States). 2016.
Schuermans V, Smeets AYJM, Boselie TFM, Zarrouk O, Hermans SMM, Droeghaag R, et al. Cost-effectiveness of anterior surgical decompression surgery for cervical degenerative disc disease: a systematic review of economic evaluations. Eur Spine J. 2022;31(5):1206–18.
Radcliff K, Davis RJ, Hisey MS, Nunley PD, Hoffman GA, Jackson RJ, et al. Long-term Evaluation of Cervical Disc Arthroplasty with the Mobi-C© Cervical Disc: A Randomized, Prospective, Multicenter Clinical Trial with Seven-Year Follow-up. Int J spine Surg. 2017;11(4):31.
Jackson RJ, Davis RJ, Bae HW, Hoffman GA, Hisey MS, Kim KD, et al. Subsequent surgery rates after treatment with TDR or ACDF at one or two levels: results from an FDA clinical trial at 7 years. Spine J [Internet]. 2016;16(10 CC-Back and Neck):S204-S205. Available from: https://www.cochranelibrary.com/central/doi/10.1002/central/CN-01427593/full.
Ramchandran S, Smith JS, Ailon T, Klineberg E, Shaffrey C, et al. Assessment of Impact of Long-Cassette Standing X-Rays on Surgical Planning for Cervical Pathology: An International Survey of Spine Surgeons. Neurosurgery. 2016;78(5):717–24.
Loumeau TP, Darden BV, Kesman TJ, Odum SM, Van Doren BA, Laxer EB, et al. A RCT comparing 7-year clinical outcomes of one level symptomatic cervical disc disease (SCDD) following ProDisc-C total disc arthroplasty (TDA) versus anterior cervical discectomy and fusion (ACDF). Eur spine J Off Publ Eur Spine Soc Eur Spinal Deform Soc Eur Sect Cerv Spine Res Soc. 2016;25(7):2263–70.
Janssen ME, Zigler JE, Spivak JM, Delamarter RB, Darden BV 2nd, Kopjar B. ProDisc-C Total Disc Replacement Versus Anterior Cervical Discectomy and Fusion for Single-Level Symptomatic Cervical Disc Disease: Seven-Year Follow-up of the Prospective Randomized U.S. Food and Drug Administration Investigational Device Exemption Study. J Bone Joint Surg Am. 2015;97(21):1738–47.
Zigler JE, Delamarter R, Murrey D, Spivak J, Janssen M. ProDisc-C and anterior cervical discectomy and fusion as surgical treatment for single-level cervical symptomatic degenerative disc disease: five-year results of a Food and Drug Administration study. Spine (Phila Pa 1976). 2013;38(3):203–9.
Delamarter RB, Zigler J. Five-year reoperation rates, cervical total disc replacement versus fusion, results of a prospective randomized clinical trial. Spine (Phila Pa 1976) [Internet]. 2013;38(9 CC-Back and Neck):711-7. Available from: https://www.cochranelibrary.com/central/doi/10.1002/central/CN-00906545/full.
Murrey D, Janssen M, Delamarter R, Goldstein J, Zigler J, Tay B, et al. Results of the prospective, randomized, controlled multicenter Food and Drug Administration investigational device exemption study of the ProDisc-C total disc replacement versus anterior discectomy and fusion for the treatment of 1-level symptomatic cervical disc disease. Spine J. 2009;9(4):275–86.
Gornet MF, Burkus JK, Shaffrey ME, Schranck FW, Copay AG. Cervical disc arthroplasty: 10-year outcomes of the Prestige LP cervical disc at a single level. J Neurosurg Spine. 2019;31(3):317–25.
Gornet MF, Lanman TH, Kenneth Burkus J, Hodges SD, McConnell JR, Dryer RF, et al. One-level versus 2-level treatment with cervical disc arthroplasty or fusion: Outcomes up to 7 years. Int J Spine Surg. 2019.
Lanman TH, Burkus JK, Dryer RG, Gornet MF, McConnell J, Hodges SD. Long-term clinical and radiographic outcomes of the Prestige LP artificial cervical disc replacement at 2 levels: results from a prospective randomized controlled clinical trial. J Neurosurg Spine. 2017;27(1):7–19.
Gornet MF, Burkus JK, Shaffrey ME, Argires PJ, Nian H, Harrell FEJ. Cervical disc arthroplasty with PRESTIGE LP disc versus anterior cervical discectomy and fusion: a prospective, multicenter investigational device exemption study. J Neurosurg Spine. 2015;23(5):558–73.
Burkus JK, Traynelis VC, Haid RWJ, Mummaneni P V. Clinical and radiographic analysis of an artificial cervical disc: 7-year follow-up from the Prestige prospective randomized controlled clinical trial: Clinical article. J Neurosurg Spine. 2014;21(4):516–28.
Mummaneni P V, Burkus JK, Haid RW, Traynelis VC, Zdeblick TA. Clinical and radiographic analysis of cervical disc arthroplasty compared with allograft fusion: a randomized controlled clinical trial. J Neurosurg Spine. 2007;6(3):198–209.
Loidolt T, Kurra S, Riew KD, Levi AD, Florman J, Lavelle WF. Comparison of adverse events between cervical disc arthroplasty and anterior cervical discectomy and fusion: a 10-year follow-up. Spine J. 2021.
Lavelle WF, Riew KD, Levi AD, Florman JE. Ten-year Outcomes of Cervical Disc Replacement With the BRYAN Cervical Disc: Results From a Prospective, Randomized, Controlled Clinical Trial. Spine (Phila Pa 1976). 2019;44(9):601–8.
Sasso RC, Anderson PA, Riew KD, Heller JG. Results of cervical arthroplasty compared with anterior discectomy and fusion: four-year clinical outcomes in a prospective, randomized controlled trial. Orthopedics. 2011;34(11):889.
Heller JG, Sasso RC, Papadopoulos SM, Anderson PA, Fessler RG, Hacker RJ, et al. Comparison of BRYAN cervical disc arthroplasty with anterior cervical decompression and fusion: clinical and radiographic results of a randomized, controlled, clinical trial. Spine (Phila Pa 1976). 2009;34(2):1740.
Yin S, Yu X, Zhou S, Yin Z, Qiu Y. Is cervical disc arthroplasty superior to fusion for treatment of symptomatic cervical disc disease? A meta-analysis. Clin Orthop Relat Res. 2013;471(6):1904–19.
Herdman M, Gudex C, Lloyd A, Janssen M, Kind P, Parkin D, et al. Development and preliminary testing of the new five-level version of EQ-5D (EQ-5D-5L). Qual life Res an Int J Qual life Asp Treat care Rehabil. 2011;20(10):1727–36.
Versteegh MM, Vermeulen KM, Evers SMAA, de Wit GA, Prenger R, Stolk EA. Dutch tariff for the five-level version of EQ-5D. Value Heal J Int Soc Pharmacoeconomics Outcomes Res. 2016;19(4):343–52.
Hakkaart-van Roijen L, van der Linden N, Bouwmans C, Kanters TA, Tan SS. Kostenhandleiding: Methodologie van kostenonderzoek en referentieprijzen voor economische evaluaties in de gezondheidszorg; 2016.
Zorginstituut Nederland. Richtlijn voor het uitvoeren van economische evaluaties in de gezondheidszorg. 2016.
Zorginstituut Nederland. Kosteneffectiviteit in de praktijk. 2015.
Bouwmans C, Krol M, Severens H, Koopmanschap M, Brouwer W, Hakkaart-van Roijen L. The iMTA Productivity Cost Questionnaire: A Standardized Instrument for Measuring and Valuing Health-Related Productivity Losses. Value Heal J Int Soc Pharmacoeconomics Outcomes Res. 2015;18(6):753–8.
Hoefman RJ, van Exel NJA, Foets M, Brouwer WBF. Sustained informal care: the feasibility, construct validity and test-retest reliability of the CarerQol-instrument to measure the impact of informal care in long-term care. Aging Ment Health. 2011;15(8):1018–27.
Vernon H. The Neck Disability Index: state-of-the-art, 1991-2008. J Manipulative Physiol Ther. 2008;31(7):491–502.
Hawker GA, Mian S, Kendzerska T, French M. Measures of adult pain: Visual Analog Scale for Pain (VAS Pain), Numeric Rating Scale for Pain (NRS Pain), McGill Pain Questionnaire (MPQ), Short-Form McGill Pain Questionnaire (SF-MPQ), Chronic Pain Grade Scale (CPGS), Short Form-36 Bodily Pain Scale (S. Arthritis Care Res (Hoboken). 2011;63 Suppl 1:S240-52.
Spinhoven P, Ormel J, Sloekers PP, Kempen GI, Speckens AE, Van Hemert AM. A validation study of the Hospital Anxiety and Depression Scale (HADS) in different groups of Dutch subjects. Psychol Med. 1997;27(2):363–70.
Tetreault L, Kopjar B, Nouri A, Arnold P, Barbagallo G, Bartels R, et al. The modified Japanese Orthopaedic Association scale: establishing criteria for mild, moderate and severe impairment in patients with degenerative cervical myelopathy. Eur spine J Off Publ Eur Spine Soc Eur Spinal Deform Soc Eur Sect Cerv Spine Res Soc. 2017;26(1):78–84.
Nederlandse Vereniging voor Neurochirurgie (NVvN). Richtlijn: Behandeling van cervicaal radiculair syndroom ten gevolge van een cervicale hernia nuclei pulposi; 2010.
Kellgren JH, Lawrence JS. Radiological assessment of osteo-arthrosis. Ann Rheum Dis. 1957;16(4):494–502.
Le Huec JC, Thompson W, Mohsinaly Y, Barrey C, Faundez A. Sagittal balance of the spine. Eur spine J Off Publ Eur Spine Soc Eur Spinal Deform Soc Eur Sect Cerv Spine Res Soc. 2019;28(9):1889–905.
Kaiser MG, Mummaneni P V, Matz PG, Anderson PA, Groff MW, Heary RF, et al. Radiographic assessment of cervical subaxial fusion. J Neurosurgery-Spine. 2009;11(2):221–7.
Gercek E, Arlet V, Delisle J, Marchesi D. Subsidence of stand-alone cervical cages in anterior interbody fusion: warning. Eur spine J Off Publ Eur Spine Soc Eur Spinal Deform Soc Eur Sect Cerv Spine Res Soc. 2003;12(5):513–6.
Saunders RP, Evans MH, Joshi P. Developing a process-evaluation plan for assessing health promotion program implementation: a how-to guide. Health Promot Pract. 2005;6(2):134–47.
Nunley PD, Jawahar A, Kerr EJ 3rd, Gordon CJ, Cavanaugh DA, Birdsong EM, et al. Factors affecting the incidence of symptomatic adjacent-level disease in cervical spine after total disc arthroplasty: 2- to 4-year follow-up of 3 prospective randomized trials. Spine (Phila Pa 1976). 2012;37(6):445–51.
Davis RJ, Kim KD, Hisey MS, Hoffman GA, Bae HW, Gaede SE, et al. Cervical total disc replacement with the Mobi-C cervical artificial disc compared with anterior discectomy and fusion for treatment of 2-level symptomatic degenerative disc disease: a prospective, randomized, controlled multicenter clinical trial: clinic. J Neurosurg Spine. 2013;19(5):532–45.
Ishihara H, Kanamori M, Kawaguchi Y, Nakamura H, Kimura T. Adjacent segment disease after anterior cervical interbody fusion. Spine J. 2004;4(6):624–8.
Arboned [Internet]. 2019. Available from: https://www.arboned.nl/.
Willan AR, Briggs AH. Statistical analysis of cost-effectiveness data, Vol. 37. Wiley; 2006.
Cohen J. Statistical power analysis for the behavioral sciences. Academic press; 2013.
Cepoiu-Martin M, Faris P, Lorenzetti D, Prefontaine E, Noseworthy T, Sutherland L. Artificial cervical disc arthroplasty: a systematic review. Spine (Phila Pa 1976). 2011;36(25):E1623-33.
McAfee PC, Reah C, Gilder K, Eisermann L, Cunningham B. A meta-analysis of comparative outcomes following cervical arthroplasty or anterior cervical fusion: results from 4 prospective multicenter randomized clinical trials and up to 1226 patients. Spine (Phila Pa 1976). 2012;37(11):943–52.
Upadhyaya CD, Wu J-C, Trost G, Haid RW, Traynelis VC, Tay B, et al. Analysis of the three United States Food and Drug Administration investigational device exemption cervical arthroplasty trials. J Neurosurg Spine. 2012;16(3):216–28.
Yu L, Song Y, Yang X, Lv C. Systematic review and meta-analysis of randomized controlled trials: comparison of total disk replacement with anterior cervical decompression and fusion. Orthopedics. 2011;34(10):e651-8.
Jiang H, Zhu Z, Qiu Y, Qian B, Qiu X, Ji M. Cervical disc arthroplasty versus fusion for single-level symptomatic cervical disc disease: a meta-analysis of randomized controlled trials. Arch Orthop Trauma Surg. 2012;132(2):141–51.
Gao Y, Liu M, Li T, Huang F, Tang T, Xiang Z. A meta-analysis comparing the results of cervical disc arthroplasty with anterior cervical discectomy and fusion (ACDF) for the treatment of symptomatic cervical disc disease. J Bone Joint Surg Am. 2013;95(6):555–61.
Gornet MF, McConnell JR, Riew KD, Lanman TH, Burkus JK, Hodges SD, et al. Treatment of Cervical Myelopathy: Long-term Outcomes of Arthroplasty for Myelopathy Versus Radiculopathy, And Arthroplasty Versus Arthrodesis for Myelopathy. Clin spine Surg. 2018;31(10):420–7.
Gornet MF, Riew KD, Lanman TH, Burkus JK, Hodges SD, McConnell JR, et al. Long-term outcomes of arthroplasty for cervical myelopathy versus radiculopathy, and arthroplasty versus arthrodesis for cervical myelopathy. Eur spine J [Internet]. 2017;Conference(2 Supplement 1 CC-Back and Neck):S330. Available from: https://www.cochranelibrary.com/central/doi/10.1002/central/CN-01446007/full.
Goffin J, Casey A, Kehr P, Liebig K, Lind B, Logroscino C, et al. Preliminary clinical experience with the Bryan Cervical Disc Prosthesis. Neurosurgery. 2002;51(3):840–7.
Jacobs W, Willems PC, van Limbeek J, Bartels R, Pavlov P, Anderson PG, et al. Single or double-level anterior interbody fusion techniques for cervical degenerative disc disease. Cochrane database Syst Rev. 2011;(1):CD004958.
Radcliff K, Guyer RD. Economics of cervical disc replacement. Int J Spine Surg. 2020;14:S67–72 Available from: https://pubmed.ncbi.nlm.nih.gov/32994308/.
Data sharing statement
The medical equipment in this study is partly funded by B. Braun Aesculap. In-kind support is provided in the form of reduced costs for the implants. The study is investigator-initiated, meaning that the study was first designed and planned and only afterwards financial support was sought. B. Braun Aesculap has no influence on any aspect of the conception/writing of the protocol, selection of subjects, data acquisition or data analysis, or any step in the writing or publication process of articles concerning this study.
Department of Neurosurgery, Maastricht University Medical Center+, P. Debyelaan 25, Maastricht, 6229 HX, The Netherlands
Valérie N. E. Schuermans, Anouk Y. J. M. Smeets, Toon F. M. Boselie & Henk Van Santbrink
Department of Neurosurgery, Zuyderland Medical Center, Henri Dunantstraat 5, Heerlen, 6419 PC, The Netherlands
CAPHRI School for Public Health and Primary Care, Maastricht University, Universiteitssingel 40, Maastricht, 6229 ER, The Netherlands
Valérie N. E. Schuermans, Anouk Y. J. M. Smeets & Henk Van Santbrink
Department of Methodology and Statistics, Care and Public Health Research Institute (CAPHRI), Maastricht University, Peter Debyeplein 1, Maastricht, 6229 HA, The Netherlands
Math J. J. M. Candel
Department of Orthopaedic Surgery and Traumatology, Zuyderland Medical Center, Henri Dunantstraat 5, Heerlen, 6419 PC, The Netherlands
Inez Curfs
Department of Public Health Technology Assessment, Maastricht University, Duboisdomein 30, Maastricht, 6229 GT, The Netherlands
Silvia M. A. A. Evers
Trimbos Institute, Netherlands Institute of Mental Health and Addiction, Centre of Economic Evaluation & Machine Learning, Utrecht, The Netherlands
Valérie N. E. Schuermans
Anouk Y. J. M. Smeets
Toon F. M. Boselie
Henk Van Santbrink
VS: design of the work and acquisition. AS: conception and design of the work. TB: conception and substantial revision. MC: analysis. IC: substantial revision and interpretation of the data. SE: analysis and interpretation of the data. HvS: conception and substantial revision. The author(s) read and approved the final manuscript.
Correspondence to Valérie N. E. Schuermans.
This research is conducted according to principles enshrined in the Declaration of Helsinki (3rd edition 2013) and in accordance with the Medical Research Involving Human Subjects Act (WMO). This study protocol has been approved by the Medical Ethical Committee of the Zuyderland Medisch Centrum, Heerlen (document nr: METCZ20200025, NL72534.096.20).
Written informed consent will be obtained from all patients included in the study.
SPIRIT figure CACES.
Schuermans, V.N.E., Smeets, A.Y.J.M., Boselie, T.F.M. et al. Research protocol: Cervical Arthroplasty Cost Effectiveness Study (CACES): economic evaluation of anterior cervical discectomy with arthroplasty (ACDA) versus anterior cervical discectomy with fusion (ACDF) in the surgical treatment of cervical degenerative disc disease — a randomized controlled trial. Trials 23, 715 (2022). https://doi.org/10.1186/s13063-022-06574-5
Institute for Theoretical and Mathematical Physics (ITMP)
Lomonosov Moscow State University
Research Seminar
If you would like to subscribe to the mailing list about ITMP seminars, please write to [email protected].
"Finite Features in Holography"
Dr. Dionysios Anninos, King's College London
December 14, 2022 (Wednesday)
Abstract: We consider theories of gravity where a more "finite" holographic theory is relevant. From a Lorentzian perspective we comment on properties of gravity in the presence of finite timelike boundaries, and holographic realisations. Time permitting, we also discuss Euclidean gravity on manifolds with no boundary, like Euclidean de Sitter.
ITMP seminar will be held on Wednesday, December 14 at 18:00. The seminar will be organized via zoom.
"Automorphic symmetry and AdS String novel integrable deformations"
Anton Pribytok, Trinity College, Dublin
December 7, 2022 (Wednesday)
Abstract: We address new structures arising in quantum and string integrable theories, and construct a method to find them. Initially we implement the automorphic symmetries on periodic lattice systems and exploit properties of an integrable hierarchy. This prescription is first applied to a potentially new sl_2 sector, Generalised Hubbard-type classes, and more. We then construct a boost recursion for systems with R-/S-matrices that exhibit arbitrary spectral dependence, which is also an apparent property of the scattering in AdS integrable backgrounds. A generalised bottom-up approach based on coupled differential systems is derived to solve for R-matrices exactly. In addition, one can isolate a special class of models (of non-difference form) that provides a new structure consistently arising in AdS_{3} and AdS_{2} string backgrounds. These classes can be proven to be representable as deformations of the AdS_{2,3} models, which satisfy the free fermion condition, braiding unitarity, and crossing, and exhibit a deformed algebraic structure that shares certain properties with the AdS_3 x S^3 x M^4 and AdS_2 x S^2 x T^6 models. A related discussion on further investigation of the AdS_3 free fermion property, TBA finite size corrections, and GSE will be provided.
ITMP seminar will be held on Wednesday, December 7 at 18:00. The seminar will be organized via zoom.
"Angular Momentum Loss Due to Tidal Effects in the Post-Minkowskian Expansion"
Dr. Carlo Heissenberg, Uppsala University and NORDITA
November 30, 2022 (Wednesday)
Abstract: The steadily increasing sensitivity of gravitational-wave measurements challenges the state of the art of precision calculations for gravitational collisions. In this context, scattering amplitudes have contributed to the advance of the precision frontier in the Post-Minkowskian (PM) expansion, based on successive approximations labeled by powers of Newton's constant G and valid for generic velocities. In this talk, based on arXiv:2210.15689, I will illustrate the calculation of tidal corrections to the loss of angular momentum in a two-body collision at leading Post-Minkowskian order from an amplitude-based approach. The eikonal operator allows us to efficiently combine elastic and inelastic amplitudes, and captures both the contributions due to genuine gravitational-wave emissions and those due to the static gravitational field. We calculate the former by harnessing powerful collider-physics techniques such as reverse unitarity, thereby reducing them to cut two-loop integrals. For the latter, we can employ the results of arXiv:2203.11915 where static-field effects were calculated for generic gravitational scattering events using the leading soft graviton theorem.
ITMP seminar will be held on Wednesday, November 30 at 18:00. The seminar will be organized via zoom.
"AdS/BCFT from Bootstrap: Construction of Gravity with particle & brane"
Dr. Yuya Kusuki, Caltech and Wako, RIKEN
Abstract: We consider gravity with branes and massive particles, which has many unclear aspects. For example, we do not know how to understand some problematic configurations, like brane self-intersections, negative tension branes, and spinning particles interacting with branes. We address these issues through AdS/BCFT. We solve a related conformal bootstrap and show that the self-intersection can be avoided by the black hole formation. We also reveal how to resolve the other problems in AdS/BCFT using the bootstrap.
In the later part, we give the resolution of the same problems from the gravity side. For this purpose, we develop a simple way to construct gravity with branes and particles by cutting and pasting. The solution from this construction tells us how to resolve the issues from the gravity side, which is completely consistent with the CFT result.
As a bonus, we find a refined formula for the holographic Rényi entropy, which appears to be crucial to correctly reproduce the boundary entropy term. We also find a holographic dual of boundary primaries.
"R-matrix formulation of affine Yangian of \hat{gl}(1|1) and integrable systems of N=2 superconformal field theory"
Dr. Alexey Litvinov, Landau ITP, Chernogolovka
Abstract: We study N=2 superconformal field theory and define the R-matrix which acts as an intertwining operator between different realisations of N=2 W-algebras of type A. Using the R-matrix we define the RLL algebra and relate it to the current realisation of the affine Yangian of \hat{gl}(1|1). We also derive Bethe ansatz equations for the spectrum of integrals of motion.
ITMP seminar will be held on Wednesday, November 9 at 18:00. The seminar will be organized in person in room г-725, ITMP with the possibility to join via zoom.
"Intersecting D-brane models and the anomalous magnetic moment of the muon"
Francois Rondeau, Cyprus University
November 9, 2022 (Wednesday)
Abstract: The anomalous magnetic moment of the muon ($(g-2)_{\mu}$) might be one of the most promising signals of new physics beyond the Standard Model (SM). The theoretical calculation predicts a value smaller than the experimental measurement from the Brookhaven National Laboratory and Fermilab experiments, a discrepancy of $4.7\sigma$. This talk will discuss recently proposed solutions to this discrepancy in the framework of low mass scale strings, large extra dimensions and intersecting D-brane physics.
After recalling some basic aspects of intersecting D-brane models, and in particular how the SM of particle physics can be obtained in this framework, we will present two proposals able to explain (part of) the observed discrepancy. They are based on the contributions to the $(g-2)_{\mu}$ coming from the Kaluza-Klein (KK) states of the lepton number gauge boson, corresponding to an open string tied to a leptonic $U(1)$ brane, as well as from light scalars corresponding to the first stringy excitations of an open string stretched between two D-branes intersecting at an ultra-small angle. In both cases, we show that there is a region of the parameter space compatible with the current experimental bounds, where these states can provide significant contributions to the $(g-2)_{\mu}$, and fully or partially bridge the gap in the observed discrepancy. We then build the minimal D-brane configurations able to realise these two proposals.
The talk is based on 2110.01247, 2112.07587 and 2209.11152.
ITMP seminar will be held on Wednesday, November 9 at 18:00. The seminar will be organized via zoom.
"Chiral Approach to Massive Higher Spins"
Dr. Alexander Ochirov, London Institute for Mathematical Sciences
Abstract: Quantum field theory of higher-spin particles is a formidable subject, where preserving the physical number of degrees of freedom in a Lorentz-invariant way requires a host of auxiliary fields. They can be chosen to have a rich gauge-symmetry structure, but introducing consistent interactions in such approaches is still a non-trivial task, with massive higher-spin Lagrangians specified only up to three points. In this talk, I will discuss a new, chiral description for massive higher-spin particles, which in four spacetime dimensions allows one to do away with the unphysical degrees of freedom. This greatly facilitates the introduction of consistent interactions. I will concentrate on three theories, in which higher-spin matter is coupled to electrodynamics, non-Abelian gauge theory or gravity. These theories are currently the only examples of consistently interacting field theories with massive higher-spin fields.
"Exact off-shell Sudakov form factor in N=4 SYM"
Dr. Leonid Bork, ITEP, Moscow
October 26, 2022 (Wednesday)
Abstract: We consider the Sudakov form factor in N=4 SYM in the off-shell kinematical regime, which can be achieved by considering the theory on its Coulomb branch. We demonstrate that up to three loops both the infrared-divergent and the finite terms exponentiate, with the coefficient accompanying the logarithms of the mass determined by the octagon anomalous dimension Γ_oct. This behaviour is in stark contrast to previous conjectural accounts in the literature. Together with the finite terms we observe that up to three loops the logarithm of the Sudakov form factor is identical to twice the logarithm of the null octagon O_0, which was recently introduced within the context of integrability-based approaches to four-point correlation functions with infinitely large R-charges. The null octagon O_0 is known in a closed form for all values of the 't Hooft coupling constant and kinematical parameters. We conjecture that the relation between O_0 and the off-shell Sudakov form factor will hold to all loop orders.
ITMP seminar will be held on Wednesday, October 26 at 18:00. The seminar will be organized in person in room г-725, ITMP with the possibility to join via zoom.
"Phase transitions in the dark Universe"
Dr. Sabir Ramazanov, CEICO, Prague
Abstract: Typically phase transitions are associated with a constant scale of symmetry breaking. In cosmology, however, a different situation is also natural: the expectation value of a scalar field responsible for symmetry breaking can be induced by its interactions with hot primordial plasma, in which case the expectation value decreases as the Universe cools down. This picture can be realized in a simple renormalizable and approximately scale-invariant scenario. Non-trivial time-dependence of the symmetry breaking scale enables new mechanisms of particle production at phase transitions. These particles can be considered for the role of dark matter. Another phenomenologically interesting outcome of phase transitions is formation of topological defects, which serve as a source of potentially observable gravitational waves. The latter can be used to probe the underlying model in a very weakly coupled regime inaccessible by other experiments.
ITMP seminar will be held on Wednesday, October 19 at 18:00. The seminar will be arranged via Zoom.
"An Effective Field Theory for Large Oscillons"
Vasily Maslov, ITMP
October 5, 2022 (Wednesday)
Abstract: Based on arXiv:2208.04334. We consider oscillons - localized, quasiperiodic, and extremely long-living classical solutions in models with real scalar fields. We develop their effective description in the limit of large size at finite field strength. Namely, we note that nonlinear long-range field configurations can be described by an effective complex field $\psi(t, \mathbf{x})$ which is related to the original fields by a canonical transformation. The action for $\psi$ has the form of a systematic gradient expansion. At every order of the expansion, such an effective theory has a global U(1) symmetry and hence a family of stationary nontopological solitons - oscillons. The decay of the latter objects is a nonperturbative process from the viewpoint of the effective theory. Our approach gives an intuitive understanding of oscillons in full nonlinearity and explains their longevity. Importantly, it also provides reliable selection criteria for models with long-lived oscillons. This technique is more precise in the nonrelativistic limit, in the notable cases of nonlinear, extremely long-lived, and large objects, and also in lower spatial dimensions. We test the effective theory by performing explicit numerical simulations of a (d+1)-dimensional scalar field with a plateau potential.
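For orientation, the canonical map mentioned in the abstract can be illustrated by the textbook nonrelativistic reduction of a real scalar field of mass $m$ (a leading-order sketch, not the paper's full gradient expansion):
$$\phi(t,\mathbf{x}) \simeq \frac{1}{\sqrt{2m}}\left(e^{-imt}\,\psi(t,\mathbf{x}) + e^{imt}\,\psi^{*}(t,\mathbf{x})\right), \qquad i\,\partial_{t}\psi \simeq -\frac{\nabla^{2}\psi}{2m} + \frac{\partial U_{\rm eff}}{\partial |\psi|^{2}}\,\psi .$$
At this order the effective action is invariant under $\psi \to e^{i\alpha}\psi$, with conserved charge $Q=\int d^{d}x\,|\psi|^{2}$, and the oscillons correspond to stationary configurations at fixed $Q$, exactly as described above.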
ITMP seminar will be held on Wednesday, October 5 at 18:00. The seminar will be arranged via Zoom.
"Recent developments in higher spin gravity"
Dr. Evgeny Skvortsov, University of Mons
September 28, 2022 (Wednesday)
Abstract: Firstly, I will review the main ideas behind higher spin gravity and well-known obstructions to constructing such theories in flat and anti-de Sitter spaces. I will give a short list of theories that avoid these no-go's. In the main part, I will present in detail Chiral higher spin gravity, which is the only consistent model with propagating massless fields at present: action, quantum checks, consistent truncations, applications to AdS/CFT and 3d bosonization duality. If time permits, I will discuss applications of higher spin symmetries to 3d bosonization.
ITMP seminar will be held on Wednesday, September 28 at 18:00. The seminar will be arranged via Zoom.
"Colorful particles and fields in low dimensions"
Dr. Euihun Joung, Kyung Hee University, Seoul
June 8, 2022 (Wednesday)
Abstract: I will present a few recent applications of colorful extensions of spacetime symmetries to theories of particles and fields in two and three dimensions. In particular, the two-dimensional BF theory is shown to reduce to a colorful extension of the Schwarzian theory.
ITMP seminar will be held on Wednesday, June 8 at 18:00. The seminar will be arranged via Zoom.
"Invariant traces of the flat space chiral higher-spin algebra as scattering amplitudes"
Dr. Dmitry Ponomarev, ITMP MSU
Abstract: We sum up two- and three-point amplitudes in the chiral higher-spin theory over helicities and find that these quite manifestly have the form of invariant traces of the flat space chiral higher-spin algebra. We consider invariant traces of products of higher numbers of on-shell higher-spin fields and interpret these as higher-point scattering amplitudes. This construction closely mimics its anti-de Sitter space counterpart, which was considered some time ago and was confirmed holographically.
"q-Virasoro constraints and the algebra of Wilson loops in 3d"
Dr. Luca Cassia, Uppsala University
May 25, 2022 (Wednesday)
Abstract: The BPS/CFT correspondence predicts that the partition function and other BPS observables in a 3d N=2 gauge theory can be mapped to states in a q-deformed CFT, i.e. a theory whose symmetry algebra is q-Virasoro. Following this idea, I will discuss how to derive q-Virasoro constraints for the generating function of 1/2-BPS Wilson loops for a family of 3d N=2 gauge theories. Under certain assumptions on the Chern--Simons level and the number of flavors the constraints can be solved exactly and the solution can be recast as a "superintegrability" formula for characters of the gauge group. Moreover, I will show that the same logic can be applied to refined Chern--Simons theories and refined ABJ theories to derive formulas for refined knot invariants of the unknot. If time permits I will also discuss the symmetries of the constraints and their physical interpretation as dualities of the gauge theory. The talk is based on https://arxiv.org/abs/2007.10354 and https://arxiv.org/abs/2107.07525.
ITMP seminar will be held on Wednesday, May 25 at 18:00. The seminar will be arranged via Zoom.
"Null Wilson loop with Lagrangian insertion in N=4 super Yang-Mills theory"
Dr. Dmitry Chicherin, Annecy LAPTH
Abstract: Null Wilson loops in N = 4 super Yang-Mills are dual to planar scattering amplitudes. The duality holds at the level of their finite four-dimensional loop integrands as well as for the integrated observables with properly regularized infrared and cusp divergences. We consider a closely related infrared-finite four-dimensional observable which interpolates between the integrand and the fully loop-integrated quantity. It is defined as the null polygonal Wilson loop with a Lagrangian insertion normalized by the Wilson loop without insertion. Unlike the ratio and remainder functions of N = 4 super Yang-Mills amplitudes, this observable is non-trivial already at four points and it is reminiscent of finite parts of QCD amplitudes. We discuss the general structure of this n-point observable at weak coupling. At n>4, the loop corrections have rich kinematics. Indeed, there are several leading singularities and the accompanying loop functions are multivariable transcendental functions. We discuss a Grassmannian representation for the leading singularities and reveal their conformal and dual conformal symmetries. We conjecture a duality relation between the observable and the highest transcendentality piece of the planar all-plus amplitude in pure Yang-Mills theory. We test the duality using the one-loop n-point and two-loop four-point perturbative data for the observable. Relying on the duality, we predict the highest weight piece of the three-loop five-particle all-plus amplitude.
"Quantum states and their back-reacted geometries"
Prof. Sergey Solodukhin, University of Tours
Abstract: A black hole quantum state (Hartle-Hawking or Boulware) is usually defined in a fixed background of a classical black hole. In my talk I will discuss the corresponding space-time geometry when the back-reaction is taken into account. The important questions include: does the back-reacted geometry always contain a horizon? How does it depend on the choice of the quantum state? And what is the right choice of quantum state for non-physical fields such as ghosts? I will answer these and other questions in the context of a two-dimensional dilaton gravity. The talk is based on a joint recent work with D. Sarkar and Y. Potaux, arXiv:2112.03855.
"How Pauli-Villars' regularization tells the Nambu-Goto and Polyakov strings apart"
Dr. Makeenko Yuri M, ITEP Moscow
April 27, 2022 (Wednesday)
Abstract: It has been known since the 1980s that both lattice regularizations and the KPZ-DDK technique work for bosonic strings only in target-space dimension d<2. I discuss first that the classical string ground state is unstable for d>2, where another quantum ground state has lower energy and is stable under fluctuations for d<26. Then I consider higher-derivative corrections to the Liouville action emerging from higher orders of the Seeley expansion and show at one loop that the KPZ-DDK technique tells the Nambu-Goto and Polyakov strings apart.
ITMP seminar will be held on Wednesday, April 27 at 18:00. The seminar will be arranged via Zoom.
"Recursion Relations for Five-Point Conformal Blocks and Beyond: A Practical Approach"
Dr. Valentina Prilepina, ITMP
Abstract: In this talk, I will consider five-point functions in conformal field theories (CFTs) in d > 2 spacetime dimensions. I will put forward a concrete and practical approach to computing global conformal blocks that appear in five-point functions of arbitrary scalar operators in general CFTs. By exploiting the weight-shifting operator formalism, I will construct a simple set of recursion relations for generating five-point blocks for arbitrary symmetric traceless exchange, which may be utilized to reduce such blocks to linear combinations of scalar exchange blocks with shifted external dimensions. Throughout, I will restrict attention to parity-even five-point functions in parity-preserving CFTs. Our results may be seen as a natural generalization of the work of Dolan and Osborn to the 5-point case. Moving beyond the external scalar case, I will consider how to promote one of the external scalars to a spin-1 or a spin-2 operator. I will show a way to derive additional recursion relations which encode these blocks in terms of combinations of weight shifting operators acting on the seed blocks for external scalars. I will comment on one possible application of these results. Lastly, I will reach beyond five-point functions to describe some ongoing work on generalizations of such techniques to six-point functions. I will also comment on our current attempts to bootstrap the 3D Ising model via five-point blocks.
"Exact quantization"
Barak Gabai, Harvard University
Abstract: 1d Hamiltonian problems are classically-integrable by definition. A manifestation of that is the fact that we can always use action-angle coordinates to parametrize the phase space. A generalization of the action variables to QM is not immediately obvious. In the presentation I will explain how to define rigorous quantities that can be thought of as the quantum action variables and how to write down an integral equation that determines their values, for a large class of 1d Schrodinger problems. Then, I will argue that these objects encode all of the dynamical information about the Quantum system, and explain how to extract the spectrum. The results are exact and rigorous.
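As a point of reference, the semiclassical (WKB) version of the quantum action variable that the exact construction refines is the standard Bohr-Sommerfeld rule (quoted here for context, not as the speaker's result):
$$I(E)=\frac{1}{2\pi}\oint p\,dq=\frac{1}{\pi}\int_{q_-}^{q_+}\!\sqrt{2m\,(E-V(q))}\,dq,\qquad I(E_{n})\simeq\hbar\left(n+\tfrac{1}{2}\right),\quad n=0,1,2,\dots$$
For the harmonic oscillator $V=\tfrac{1}{2}m\omega^{2}q^{2}$ one finds $I=E/\omega$, so this rule already reproduces the exact spectrum $E_{n}=\hbar\omega\,(n+\tfrac{1}{2})$; for generic potentials the exact quantization condition corrects this relation order by order in $\hbar$.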
"Anomaly-free scale symmetry and gravity"
Prof. Mikhail Shaposhnikov, EPFL Lausanne
April 6, 2022 (Wednesday)
Abstract: What is the global symmetry of Nature? In the absence of gravity, the most obvious answer to this question is given by special relativity and is associated with the Poincare transformations. As was noted a long time ago, the free Maxwell equations have a wider symmetry group - the 15-parameter conformal invariance, containing, in addition to the ten Poincare generators, four special conformal transformations and dilatations. Dilatations change the length of the rulers, while special conformal transformations can bend lines but do not alter the angles between them. Could it be that the symmetry of all interactions is conformal? We show that conformal symmetry can be made free from the quantum anomaly only in flat space. The presence of gravity would reduce the global symmetry group of the fundamental theory to scale invariance only. We discuss how the effective Lagrangian respecting the scale symmetry can be used for the description of particle phenomenology and cosmology.
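The counting quoted in the abstract is just the dimension of the four-dimensional conformal algebra (a standard check, added for the reader's convenience):
$$10\ (\text{Poincare}) + 4\ (\text{special conformal}) + 1\ (\text{dilatation}) = 15 = \dim\mathfrak{so}(4,2) = \frac{(d+1)(d+2)}{2}\bigg|_{d=4}.$$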
ITMP seminar will be held on Wednesday, April 6 at 18:00. The seminar will be arranged via Zoom.
"Graviton Scattering in AdS at Two Loops"
Dr. Ellis Ye Yuan, Zhejiang University
March 30, 2022 (Wednesday)
Abstract: I will present a preliminary result on the third-order correction in the 1/N expansion to the four-point correlator of the stress tensor multiplet in N=4 super Yang-Mills theory at large 't Hooft coupling, which corresponds to the two-loop scattering of four gravitons in the dual AdS5×S5 supergravity. This is obtained by bootstrapping an educated ansatz based on intuitions from a hidden 10-dimensional conformal symmetry, which I will describe in detail.
ITMP seminar will be held on Wednesday, March 30 at 18:00. The seminar will be arranged via Zoom.
"On moduli stabilisation, de Sitter spacetime and inflation from string compactifications"
Prof. Fernando Quevedo, Cambridge University
Abstract: An overview of the different attempts to stabilise the moduli fields, measuring the size and shape of the extra dimensions in string theory, will be given. The extension of these techniques towards obtaining de Sitter spacetime and a period of early universe inflation will be highlighted, emphasizing recent progress and open challenges.
"On AdS3 Holography"
Jan Troost, Ecole Normale Supérieure, Paris
Abstract: I will provide a conceptual tour of our recent work on holography in asymptotically AdS3 space-times. I will discuss a proposed conformal field theory dual of pure AdS3 gravity, how to implement topological twisting in quantum theories of supergravity and I will touch upon the determination of the energy of asymptotic winding strings in black holes in AdS3 space-time.
"Quantum Gravity on a Manifold with boundaries: Schrödinger Evolution and Constraints"
Dr. Jose Alejandro Rosabal, ITMP MSU
March 9, 2022 (Wednesday)
Abstract: In this work, we derive the boundary Schrödinger (functional) equation for the wave function of a quantum gravity system on a manifold with boundaries. From a detailed analysis of the gravity boundary condition on the spatial boundary, we find that while the lapse and the shift functions are independent Lagrange multipliers on the bulk, on the spatial boundary, these two are related; namely, they are not independent. In the Hamiltonian ADM formalism, a new Lagrange multiplier, solving the boundary conditions involving the lapse and the shift functions evaluated on the spatial boundary, is introduced. The classical equation of motion associated with this Lagrange multiplier turns out to be an identity when evaluated on a classical solution of Einstein's equations. On the other hand, the quantum counterpart is a constraint equation involving the gravitational degrees of freedom defined only on the boundary. This constraint has not been taken into account before when studying the quantum gravity Schrödinger evolution on manifolds with boundaries.
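For readers unfamiliar with the variables mentioned above, the lapse $N$ and shift $N^{i}$ are those of the standard ADM split of the metric (a textbook formula, not specific to this work):
$$ds^{2} = -N^{2}\,dt^{2} + h_{ij}\left(dx^{i}+N^{i}dt\right)\left(dx^{j}+N^{j}dt\right),$$
where $h_{ij}$ is the induced spatial metric; in the bulk Hamiltonian formulation $N$ and $N^{i}$ appear as Lagrange multipliers enforcing the Hamiltonian and momentum constraints, which is the structure modified on the boundary in the work described above.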
ITMP seminar will be held on Wednesday, March 9 at 18:00. The talk will take place in ITMP, room G-725 while it will also be possible to participate online via Zoom.
"Horava gravity as palladium of locality, unitarity and renormalizability: methods and results"
Dr. Andrei O. Barvinsky, Lebedev Institute, ITMP
Abstract: We discuss problems of covariant renormalization of Lorentz non-invariant Horava gravity models. Projectable version of these models in generic spacetime dimensions is shown to maintain UV renormalizability, locality and unitarity, which is based on the BRST structure of renormalization in a special class of regular background covariant gauges. Renormalization group flow with the asymptotically free UV fixed point is presented in (2+1)-dimensional theory. Beta functions of (3+1)-dimensional theory are obtained by the combination of dimensional reduction and the method of universal functional traces. They feature fixed points with several candidates for asymptotic freedom and also suggest an intriguing connection between the (3+1)-dimensional Horava gravity with detailed balance and gravitational Chern-Simons/topological massive gravity.
"An Overview of Quadratic Gravity"
Prof. John F. Donoghue, University of Massachusetts
February 16, 2022 (Wednesday)
Abstract: Quadratic Gravity is a renormalizable UV completion of quantum gravity which retains the metric as the fundamental field. However, it appears that it must violate at least one property which we normally expect of our quantum field theories. I will give an overview of work analyzing the theory, and of the remaining unknown aspects of its QFT treatment.
ITMP seminar will be held on Wednesday, February 16 at 18:00. The seminar will be arranged via Zoom.
"$J\bar T$ - deformed CFTs as non-local CFTs"
Dr. Monica Guica, Institute of Theoretical Physics, Saclay
Abstract: I will start with a review of TTbar and JTbar - deformed CFTs and their holographic interpretation. I will then show that both TTbar and JTbar - deformed CFTs possess Virasoro x Virasoro symmetry. For the case of JTbar, I will discuss the classical realization of these symmetries in terms of field-dependent coordinate transformations and show how the associated generators can be used to define an analogue of "primary" operators in this non-local theory, whose correlation functions are entirely fixed in terms of those of the undeformed CFT.
ITMP seminar will be held on Wednesday, January 26 at 18:00. The seminar will be arranged via Zoom.
"Classical solutions and semiclassical expansion of non-relativistic strings in AdS5xS5"
Dr. Andrea Fontanella, ITMP
Abstract: The S-matrix of a theory is a key object to compute, since it encodes many of its physical properties. In this talk we present some recent work which aims to pave the ground towards an S-matrix computation for non-relativistic strings in AdS5xS5. After introducing some key features of the non-relativistic theory, we shall first address the question of what classical string solutions look like in this theory. We shall point out that every solution has a common feature, due to solving the equations of motion for the Lagrange multiplier fields. Some string solutions with closed and twisted boundary conditions will be presented, which represent the non-relativistic (and twisted) analogue of the BMN and GKP solutions of the relativistic theory. The second part of the talk focusses on the perturbative expansion in large string tension of the action around the twisted BMN-like solution. To perform this expansion we shall fix light-cone gauge and also additionally expand in large AdS radius.
ITMP seminar will be held on Wednesday, December 15 at 17:00. The talk will take place in ITMP, room G-725 while it will also be possible to participate online via Zoom.
"A Puncture in the Euclidean Black Hole"
Prof. Nissan Itzhaki, Tel Aviv University
Abstract: We consider the backreaction of the winding zero mode on the cigar geometry. We focus on the case of the $SL(2,R)_k/U(1)$ cigar associated with e.g. the near-horizon limit of $k$ NS5 black-branes. We solve the equations of motion numerically in the large $k$ limit as a function of the amplitude of the winding mode at infinity. We find that there is a critical amplitude $C_c=\exp(-\gamma/2)$ that admits a critical solution. The exact CFT description of the $SL(2,R)_k/U(1)$ cigar, in particular the FZZ duality, fixes completely the winding amplitude. We find that in the large $k$ limit there is an exact agreement $C_c=C_{FZZ}$. The critical solution is a cigar with a puncture at its tip; consequently, the BH entropy is carried entirely by the winding condensate. We argue that, in the Lorentzian case, the information is ejected from the black hole through this puncture.
ITMP seminar will be held on Wednesday, December 8 at 18:00. The seminar will be arranged via Zoom.
"Disforming the Kerr metric"
Dr. Timothy Anson, ITMP
Abstract: Starting from a recently constructed stealth Kerr solution of higher order scalar tensor theory, I will present disformal versions of the Kerr spacetime with a constant disformal factor and a regular scalar field. While the disformed metric has only a ring singularity and asymptotically is quite similar to Kerr, it is neither Ricci-flat nor circular. Non-circularity has far reaching consequences on the structure of the solution. In particular, I will discuss the properties of important hypersurfaces in the disformed spacetime: ergosphere, stationary limit and event horizon, and highlight the differences with the Kerr metric. I will also mention experimental signatures of these spacetimes.
ITMP seminar will be held on Wednesday, December 1 at 18:00. The talk will take place in ITMP, room G-725 while it will also be possible to participate online via Zoom.
"Infinite symmetries and Ward identities in celestial holography"
Dr. Hongliang Jiang, Queen Mary University of London
Abstract: Celestial holography reformulates the scattering amplitude holographically in terms of a celestial conformal field theory living at the boundary null infinity, thus opening up an interesting and promising avenue towards flat holography. In this talk, I will discuss various aspects of symmetry and their implications in celestial holography. I will first discuss how to realize the global symmetry of spacetime in celestial CFT, ranging from Poincare to conformal symmetry, and further to the superconformal symmetry of N=4 SYM. Then I will study the asymptotic symmetries from the celestial conformal field theory point of view. More specifically, by focusing on the soft sector of celestial OPEs, I will derive an infinite-dimensional symmetry algebra, dubbed the holographic chiral algebra, in supersymmetric Einstein-Yang-Mills theory. In the case of pure Einstein gravity, the holographic chiral algebra turns out to be the w_{1+∞} algebra. These infinite symmetries give rise to infinite Ward identities in celestial CFT, which are equivalent to infinite soft theorems in scattering amplitudes. Finally, I will also derive general formulae for celestial OPEs and the corresponding Ward identities arising from arbitrary cubic interactions of three spinning massless particles.
ITMP seminar will be held on Wednesday, November 24 at 18:00. The seminar will be arranged via Zoom.
"Colour-kinematic duality, double copy and homotopy algebras"
Dr. Tommaso Macrelli, Surrey University
Abstract: While colour-kinematic duality and double copy are a well established paradigm at tree level, their loop level generalisation remained for a long time an unsolved problem. Lifting the on-shell, scattering amplitude-based description to an action-based approach, we show that a theory that exhibits tree level colour-kinematic duality can be reformulated in a way such that its loop integrands manifest colour-kinematic duality. After a review of Batalin-Vilkovisky formalism and homotopy algebras, we discuss how these structures emerge in quantum field theory and gravity. We focus then on the application of these sophisticated mathematical tools to colour-kinematic duality and double copy, introducing an adequate notion of colour-kinematic factorisation. This talk is based on arXiv:2007.13803 [hep-th], arXiv:2102.11390 [hep-th], arXiv:2108.03030 [hep-th].
"Surface operators in the 6d N=(2,0) theories"
Dr. Maxime Trepanier, ITMP
Abstract: One of the surprising predictions of string theory is the existence of QFTs in 6d with superconformal symmetry known as the 6d N=(2,0) theories. These theories contain surface operators which are analogous in many ways to Wilson loops in gauge theories. In this talk I will discuss some of the properties of these operators that make them ideal observables to learn about the 6d N=(2,0) theories. After reviewing the simplest example (the sphere) both in field theory and in holography, I will discuss how supersymmetry helps to find the holographic description for more complicated surface operators. In particular, in my recent work we found the holographic description for a class of tori, and their expectation value captures a 6d analog of the quark-antiquark potential.
ITMP seminar will be held on Wednesday, November 10 at 18:00. The talk will take place in ITMP, room G-725 while it will also be possible to participate online via Zoom.
"Love and Naturalness"
Dr. Mikhail Ivanov, Princeton IAS & INR RAS
Abstract: Tidal Love numbers of a compact body capture multipole moments induced by external gravitational fields. Also they appear as Wilson coefficients in the post-Newtonian effective field theory of extended objects. It has been known for a decade that Love numbers vanish identically for black holes in four dimensions, which posed a major naturalness problem in the EFT context. In my talk, I will present a new hidden ("Love") symmetry of black holes, which elegantly resolves this naturalness paradox.
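In one common convention (quoted for orientation; the normalization is an assumption, not taken from the talk) the tidal deformability $\lambda$ and the dimensionless Love number $k_{2}$ are defined through the linear response of the induced quadrupole to an external tidal field,
$$Q_{ij} = -\lambda\,\mathcal{E}_{ij}, \qquad \lambda = \frac{2}{3}\,\frac{k_{2}R^{5}}{G},$$
with $R$ the radius of the body; the statement in the abstract is that $k_{2}$ (and its higher-multipole analogues) vanishes identically for four-dimensional black holes.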
ITMP seminar will be held on Wednesday, November 3 at 18:00. The seminar will be arranged via Zoom.
"Stress tensor and conformal correlators"
Prof. Andrei Parnachev, Trinity College Dublin
Abstract: I will describe a calculation of the contributions of stress tensor composites to correlators in CFTs (in d>2) with a large number of degrees of freedom, including holographic CFTs. Implications for holography, thermalization and possible higher spin symmetries relevant for near lightcone correlators will be discussed.
ITMP seminar will be held on Wednesday, October 27 at 18:00. The talk will take place in ITMP, room G-725 while it will also be possible to participate online via Zoom.
"Sigma models as Gross-Neveu models"
Dr. Dmitry Bykov, Steklov Mathematical Institute, ITMP MSU
Abstract: I will show that there is a wide class of integrable sigma models that are exactly and explicitly equivalent to bosonic Gross-Neveu models. In full generality these are models with quiver variety phase spaces, but the familiar CP^n, Grassmannian or flag manifold sigma models belong to this class as well. This approach leads to a new take on topics such as RG (Ricci) flow, construction of integrable deformations and the inclusion of fermions. In particular, it provides a way of obtaining worldsheet SUSY theories from target space SUSY theories by means of a supersymplectic quotient.
"Unitarization in Kaluza Klein theory and the Geometric Bootstrap"
Dr. Kurt Hinterbichler, Case Western Reserve University
Abstract: The infinite towers of massive particles present in Kaluza Klein reductions of Einstein gravity conspire to soften the high-energy behavior of the scattering of massive spin-2 states. The mechanism by which this occurs leads to a bootstrap procedure, analogous to the conformal bootstrap, that yields new non-trivial constraints on the eigenvalue spectra of closed Einstein manifolds.
"Non-perturbative gravitational effects in cosmology"
Dr. Victor Gorbenko, Stanford University
Abstract: I will start by reviewing some recent progress in understanding black hole evaporation with the help of gravitational path integral methods. It turns out that for certain observables, for example the so-called Page curve for the BH entropy, higher-topology saddle points of the gravitational path integral produce important and calculable contributions. I will then discuss potential applications of similar ideas in cosmology and when such non-perturbative effects may be relevant. Then, I will demonstrate a concrete calculation in a simple toy model, where a bra-ket wormhole, that is, a connection between the bra and the ket of the wave function of the universe, arises.
ITMP seminar will be held on Wednesday, October 6 at 18:00. The seminar will be arranged in the hybrid format. The talk will take place in ITMP, room G-725 while it will also be possible to participate online via Zoom.
"Black holes in scalar-tensor theories"
Dr. Eugeny Babichev, The University Paris-Saclay
Abstract: I will review black hole solutions in Horndeski theory and its extensions. For shift symmetric theories of Horndeski and beyond Horndeski theory, black holes involve several classes of nontrivial solutions: those that include, at the level of the action, a linear coupling to the Gauss-Bonnet term and those that involve time dependence in the scalar field. I will describe black hole solutions of both classes in some detail. I will also review and discuss recent results on hairy black holes in particular subclasses of the theory.
"Black holes on a conifold with fluxes"
Prof. Alex Buchel, Perimeter Institute
Abstract: We present a comprehensive analysis of black holes on the warped deformed conifold with fluxes in Type IIB supergravity. These black holes realize the holographic dual to thermal states of the N = 1 supersymmetric SU(N) × SU(N + M) cascading gauge theory of Klebanov et al. on a round S^3. There are three distinct mass scales in the theory: the strong coupling scale Λ of the cascading gauge theory, the compactification scale µ = 1/L_3 (related to the S^3 radius L_3) and the temperature T of a thermal state. Depending on Λ, µ and T, there is an intricate pattern of confinement/deconfinement (Hawking-Page) and chiral symmetry breaking phase transitions.
"Comments on Large Charge and N=4 SYM"
Dr. Shota Komatsu, CERN and Princeton IAS
June 16, 2021 (Wednesday)
Abstract: I will discuss three topics on the large charge limit of N=4 super Yang-Mills and its holographic dual. First, I will discuss what we can learn from holography about general structures of the large-charge expansion of SCFTs with higher-rank gauge groups. Second, I will briefly discuss the double-scaling limit and its relation to the central extension of the symmetry. Third, I will explain how the ideas from the large-charge expansion help to resolve confusions in the literature on the holographic computation of correlation functions of heavy operators dual to D-branes in AdS.
ITMP seminar will be held on Wednesday, June 16 at 18:00. The seminar will be arranged via Zoom.
"Integrable Systems and Spacetime Dynamics"
Kristiansen Lara, University of Santiago, Chile
Abstract: It is shown that the Ablowitz-Kaup-Newell-Segur (AKNS) integrable hierarchy can be obtained as the dynamical equations of three-dimensional General Relativity with a negative cosmological constant. This geometrization of the AKNS system is possible through the construction of novel boundary conditions for the gravitational field. These are invariant under an asymptotic symmetry group characterized by an infinite set of AKNS commuting conserved charges. Gravitational configurations are studied by means of $SL(2,\mathbb{R})$ conjugacy classes. Conical singularities and black hole solutions are included in the boundary conditions. Based on https://arxiv.org/abs/2104.09676.
"Celestial CFT Correlators and Conformal Block Decomposition"
Dr. Angelos Fotopoulos, Northeastern University, Boston
Abstract: I will shortly review the connection of the Bondi-van der Burg-Metzner-Sachs (BMS) symmetry group of asymptotically flat four-dimensional spacetimes at null infinity to the S-matrix of elementary particles and gravitons.
Applying Mellin transformations to traditional, momentum space amplitudes we can compute celestial amplitudes. These are conjectured to correspond to correlators of conformal primary fields on a putative two-dimensional celestial sphere. I will elaborate on the proposal of flat holography in which four-dimensional physics is encoded in two-dimensional celestial conformal field theory (CCFT). The symmetry underlying CCFT is the extended BMS symmetry of (asymptotically) flat spacetime. I will show how to use soft and collinear theorems of Einstein-Yang-Mills theory to derive the OPEs of BMS field operators generating the symmetries of the BMS group, superrotations and supertranslations.
In the second part of my talk, time permitting, I will discuss a proposal for radial quantization of the CCFT and I will present an attempt to decompose gluon celestial amplitudes in conformal blocks. This will reveal some very interesting aspects of the Celestial CFT: the emergence of new operators in the theory and space-time scattering channels which lead to continuous complex spin primary fields. The space-time interpretation of these operators is still unclear. Moreover, CFT crossing symmetry and its relation to the four-dimensional spacetime crossing symmetry of the S-matrix are some interesting open problems.
"Inflation and reheating with Higgs inflation and related models"
Dr. Fedor Bezrukov, University of Manchester
Abstract: Using the Higgs field for inflation is an economical way to solve two problems with one scalar field: the Higgs field is required in the Standard Model, and a scalar field is the simplest way to drive inflation of the early Universe. However, a consistent attempt to do this takes us to the limits of validity of quantum field theory and leads to significant problems with reheating after inflation. I'll discuss ways to deal with this by combining the theory with R^2 gravitational terms, and describe the highly nontrivial dynamics of the system after inflation.
"Probing boundary states at finite coupling"
Dr. Edoardo Vescovi, KTH Royal Institute of Technology, Stockholm University
Abstract: In this talk, we calculate exactly the correlation function of a Wilson loop operator and a local non-BPS operator in planar N=4 SYM. First, we introduce an effective theory for this observable in the free theory and we reformulate the gauge-theory computation as an overlap between an energy eigenstate of a spin chain and a matrix product state. The form of the result supports the interpretation of the correlator as an overlap between an integrable boundary state, which we determine using symmetry and integrability, and the state corresponding to the single-trace operator. Second, we explain how to formulate a non-perturbative bootstrap program based on the results obtained in this framework.
We also discuss two applications of the effective theory to the four-point function of determinant operators in N=4 SYM, as well as to all-loop determinant correlators in the fishnet theory.
"Thermal correlators in CFT and black holes"
Prof. Jorge Guillermo Russo, Catalan Institution for Research and Advanced Studies and University of Barcelona, Barcelona
Abstract: We describe the computation of thermal 2-point correlation functions in the black brane AdS5 background dual to 4d CFTs at finite temperature for operators of large scaling dimension. This gives rise to a formula that matches the expected structure from the OPE. The thermal 2-point function has an exponentiation property, whose origin we explain. We also compute the first correction to the two-point function due to graviton emission, which encodes the time of travel to the black hole singularity.
We will also discuss some interesting features of higher-point functions.
"Exceptional world-volume currents and their algebras"
Dr. David Osten, ITMP
May 5, 2021 (Wednesday)
Abstract: After a short review of exceptional generalised geometry I will introduce a unified setup for a duality-covariant Hamiltonian formulation of world-volume theories of objects in string and M-theory. Based on https://arxiv.org/abs/2103.03267
ITMP seminar will be held on Wednesday, May 5 at 18:00. The seminar will be arranged via Zoom.
"Models for a vast energy range: particles meet gravity and cosmology"
Dr. Alberto Salvio, University of Rome Tor Vergata
Abstract: I will discuss models valid over a vast energy range by following two strategies. In one approach (bottom-up) I will illustrate the main phenomenological aspects of a scenario where one adds to the Standard Model three right-handed neutrinos and an axion sector. This can account for neutrino oscillations, dark matter, baryogenesis and inflation, and can also stabilize the electroweak vacuum. In another, more ambitious approach, I will talk about a scenario where all couplings flow to zero at infinite energy (total asymptotic freedom). The corresponding phenomenology of this top-down approach will also be treated.
"Losing the trace to discover dynamical Newton or Planck constants"
Dr. Alexander Vikman, Institute of Physics of the Czech Academy of Sciences, Prague
Abstract: I will discuss our recent work e-Print: 2011.07055. There we showed that promoting the trace part of the Einstein equations to a trivial identity results in the Newton constant being an integration constant. Thus, in this formulation the Newton constant is a global dynamical degree of freedom which is also subject to quantization, quantum fluctuations and the Heisenberg uncertainty relations. This is similar to what happens to the cosmological constant in unimodular gravity, where the trace part of the Einstein equations is lost in a different way. I will consider a constrained variational formulation of these modified Einstein equations. Then, drawing on analogies with the Henneaux-Teitelboim action for unimodular gravity, I will discuss different generally covariant actions resulting in these dynamics. In this approach, it turns out that the inverse of the dynamical Newton constant is canonically conjugate to the Ricci scalar integrated over spacetime. Surprisingly, instead of the dynamical Newton constant one can formulate an equivalent theory with a dynamical Planck constant. Finally, I will show that an axion-like field can play the role not only of the cosmological constant, as in e-Print: 2001.03169, but also of the Newton constant or even of the Planck constant.
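For comparison, the unimodular-gravity mechanism invoked above as an analogy works as follows (a standard derivation, not the new construction of the paper). Keeping only the trace-free part of the Einstein equations,
$$R_{\mu\nu}-\tfrac{1}{4}g_{\mu\nu}R = 8\pi G\left(T_{\mu\nu}-\tfrac{1}{4}g_{\mu\nu}T\right),$$
taking the divergence and using the Bianchi identity together with $\nabla^{\mu}T_{\mu\nu}=0$ gives $\partial_{\nu}\!\left(R+8\pi G\,T\right)=0$, hence $R+8\pi G\,T=4\Lambda$ for some integration constant $\Lambda$, and the full Einstein equations with a cosmological constant are recovered. The work described above performs an analogous step in which $G$ itself (or, alternatively, $\hbar$) emerges as such a global integration constant.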
"Supergravity excitations of stringy geometries"
Associate Prof. Oleg Lunin, SUNY-Albany
Abstract: Motivated by the desire to understand the dynamics of light modes on various gravitational backgrounds, this talk summarizes recent results concerning properties of scalar, vector, and tensor excitations of black holes and integrable stringy geometries. For rotating black holes and for certain Wess-Zumino-Witten models, full separability of all dynamical equations is demonstrated, and symmetries underlying this property are uncovered. For other classes of integrable backgrounds, the energy spectra of various fields are evaluated, and the algebraic constructions of the corresponding eigenfunctions are presented.
ITMP seminar will be held on Wednesday, April 7 at 18:00. The seminar will be arranged via Zoom.
"Integrated four-point correlators in N=4 SYM"
Dr. Congkao Wen, Queen Mary University of London
Abstract: In this talk, we will discuss integrated correlators of four superconformal primaries in N=4 super Yang-Mills, which are defined by integrating over the spacetime coordinates of the four-point correlator with certain integration measures. The integrated correlators can be computed using supersymmetric localisation and are expressed as N-dimensional matrix-model integrals. We will mostly focus on one of the integrated correlators. We find that this integrated correlator can be presented as a lattice sum, which makes manifest the SL(2, Z) modular invariance of N=4 SYM. Furthermore, the integrated correlator obeys a remarkable Laplace-difference equation, which relates the correlator of the SU(N) theory to those of the SU(N-1) and SU(N+1) theories. The expression allows us to obtain exact results for the integrated correlator in various limits. For instance, in perturbation theory the expression is checked to be consistent with known results in the literature; in the large-N limit, it is shown to match the expected results from string theory due to the AdS/CFT duality.
"Towards the Virasoro-Shapiro Amplitude in AdS5xS5"
Prof. Paul Heslop, Durham University
Abstract: We propose a systematic procedure for obtaining all single trace 1/2-BPS correlators in N=4 super Yang-Mills corresponding to the four-point tree-level amplitude for type IIB string theory in AdS5xS5. The underlying idea is to compute generalised contact Witten diagrams coming from a 10d effective field theory on AdS5xS5 whose coefficients are fixed by the flat space Virasoro-Shapiro amplitude up to ambiguities related to commutators of the 10d covariant derivatives which require additional information such as localisation. We illustrate this procedure by computing stringy corrections to the supergravity prediction for all single trace 1/2-BPS correlators up to O(α′^7), and spell out a general algorithm for extending this to any order in α′.
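The flat-space input referred to above is the closed-superstring four-point (Virasoro-Shapiro) factor, which multiplies the supergravity prefactor and reads, up to overall normalization (standard formula, quoted for context),
$$A(s,t,u)\ \propto\ \frac{\Gamma\!\left(-\tfrac{\alpha's}{4}\right)\Gamma\!\left(-\tfrac{\alpha't}{4}\right)\Gamma\!\left(-\tfrac{\alpha'u}{4}\right)}{\Gamma\!\left(1+\tfrac{\alpha's}{4}\right)\Gamma\!\left(1+\tfrac{\alpha't}{4}\right)\Gamma\!\left(1+\tfrac{\alpha'u}{4}\right)},\qquad s+t+u=0;$$
its low-energy expansion in α′ supplies the coefficients of the higher-derivative contact terms whose AdS counterparts are fixed by the procedure sketched in the abstract.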
"Chaotic scattering with a highly excited string"
Dr. Vladimir Rosenhaus, Institute for Advanced Study, Princeton
Abstract: Motivated by the desire to understand chaos in the S-matrix of string theory, we study tree level scattering amplitudes involving highly excited strings. While the amplitudes for scattering of light strings have been a hallmark of string theory since its early days, scattering of excited strings has been far less studied. Recent results on black hole chaos, combined with the correspondence principle between black holes and strings, suggest that the amplitudes have a rich structure. We review the procedure by which an excited string is formed by repeatedly scattering photons off of an initial tachyon. We compute the scattering amplitude of one arbitrary excited string decaying into two tachyons, and study its properties for a generic excited string. We find the amplitude is highly erratic as a function of both the precise excited string state and of the tachyon scattering angle.
"Spinning and spinless two-particle dynamics from gravitational scattering amplitudes"
Prof. Radu Roiban, Penn State University
Abstract: In the appropriate classical limit, scattering amplitude-based techniques can yield the classical interaction of massive bodies at fixed order in the post-Minkowskian expansion.
In this talk we review a framework for the construction of the two-body Hamiltonian for compact binary systems and illustrate it with state-of-the-art results for both spinning and spinless systems. Throughout we emphasize the structure of gravitational interactions and direct relations between scattering amplitudes and observables of two-particle dynamics.
"O(D,D) and string alpha'-corrections"
Linus Wulff, Masaryk University
Abstract: String theory on a d-dimensional torus features an O(d,d) symmetry. It has been suggested that the low-energy effective action can be formulated with a larger O(D,D) symmetry, even before putting the theory on a torus. This approach, which goes by the name of Double Field Theory (DFT), has proven very useful. I will address the problem of constructing higher derivative invariants in this formalism. In agreement with the literature we find that a quadratic Riemann invariant can be constructed, which can account for the first alpha'-correction to the bosonic and heterotic string. However, we find that no cubic or quartic Riemann invariants can be constructed. This suggests that the quartic Riemann terms arising at order alpha'^3 in string theory do not have a DFT embedding.
ITMP seminar will be held on Wednesday, March 3 at 18:00. The seminar will be arranged via Zoom.
"Gravity as a double copy of gauge theory"
Ricardo Monteiro, Queen Mary University of London
Abstract: Relations expressing gravity as a "double copy" of gauge theory appeared first in string theory, and have been used to compute scattering amplitudes in theories of gravity, with applications to both theory and phenomenology. I will discuss how the double copy extends to solutions to the equations of motion, including our best known black hole spacetimes, and how this story connects to the original story for scattering amplitudes.
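A concrete example of the solution-level statement above is the Kerr-Schild form of the classical double copy (shown schematically; signs and normalizations may differ between references): for a metric of the form
$$g_{\mu\nu} = \eta_{\mu\nu} + \varphi\,k_{\mu}k_{\nu},\qquad k^{\mu}k_{\mu}=0,\quad k^{\nu}\partial_{\nu}k^{\mu}=0,$$
the "single copy" gauge field $A_{\mu}=\varphi\,k_{\mu}$ solves the Maxwell equations on flat space. Schwarzschild, with $\varphi=2GM/r$ and $k_{\mu}dx^{\mu}=dt+dr$, maps in this way to the Coulomb field of a point charge, which is the prototype of the black-hole examples mentioned in the abstract.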
"Holographic correlators and emergent Parisi-Sourlas supersymmetry"
Xinan Zhou, Princeton University
Abstract: In this talk, I will first review the recent progress in computing holographic four-point correlators in maximally supersymmetric CFTs. I will introduce the so-called Maximally R-symmetry Violating (MRV) limit, in which the Mellin amplitudes drastically simplify and become easy to compute. From the MRV limit, I will show that the full amplitudes can be reconstructed by using symmetries. This gives a complete answer to all tree-level four-point functions in all three maximally supersymmetric backgrounds AdS4xS7, AdS5xS5, and AdS7xS4. In the second part of my talk, I will point out that these results have surprising properties. The Mellin amplitudes exhibit an emergent dimensional reduction structure, which allows them to be expressed in terms of only scalar exchange amplitudes from lower dimensional spacetimes. I will explain that this dimensional reduction structure is closely related to a holographic realization of the Parisi-Sourlas supersymmetry.
"Scattering from production in 2d"
Piotr Tourkine, Paris LPTHE, CERN
February 3, 2021 (Wednesday)
Abstract: In this seminar, I will talk about recent results on a numerical method to find unitary S-matrices. The method, based on works of Atkinson, was invented in the late 60s and was used as a proof of existence of functions that satisfy all of the S-matrix axioms in 4d. However, it was not put to practical use. Our recent results concern the implementation of those methods for S-matrices in two dimensions, using two different iterative schemes: a fixed-point iteration and Newton's method. Those schemes iterate the unitarity and dispersion relations, and converge to solutions of the S-matrix axioms. This numerical strategy provides a solution to the problem of reconstructing the scattering amplitude starting from a given particle production probability. After a general introduction, the talk will focus on how we implemented the algorithms, what they converge to, and the conditions under which they converge. We will see that the question of convergence naturally connects to the recent study of coupling maximization in the two-dimensional S-matrix bootstrap. If time allows, I'll also comment on a fractal structure which we observed to be related to the so-called CDD ambiguities. I'll conclude with possible future directions.
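The two iteration strategies named above can be illustrated on a generic nonlinear equation. The sketch below is only a schematic numerical toy (solving x = cos x rather than the actual unitarity-plus-dispersion system for S(s)), meant to show the qualitative difference between the schemes: fixed-point iteration converges linearly when the map is contracting, while Newton's method converges quadratically near a root.

```python
import numpy as np

def fixed_point(g, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{n+1} = g(x_n); converges when |g'| < 1 near the fixed point."""
    x = x0
    for n in range(1, max_iter + 1):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new, n
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

def newton(f, df, x0, tol=1e-12, max_iter=100):
    """Newton's method x_{n+1} = x_n - f(x_n)/f'(x_n); quadratic convergence."""
    x = x0
    for n in range(1, max_iter + 1):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x, n
    raise RuntimeError("Newton iteration did not converge")

# Toy problem: x = cos(x), i.e. f(x) = x - cos(x) = 0.
root_fp, n_fp = fixed_point(np.cos, 0.5)
root_nt, n_nt = newton(lambda x: x - np.cos(x), lambda x: 1.0 + np.sin(x), 0.5)
print(f"fixed point: root = {root_fp:.12f} after {n_fp} iterations")
print(f"Newton     : root = {root_nt:.12f} after {n_nt} iterations")
```

Roughly speaking, in the S-matrix setting the unknown is a whole function S(s) discretized on a grid, and each sweep imposes unitarity and the dispersion relation in turn; the convergence issues discussed in the talk are already visible in one-dimensional analogues of this kind.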
ITMP seminar will be held on Wednesday, February 3 at 18:00. The seminar will be arranged via Zoom.
"The O(N) vector model at large charge: EFT, large N and resurgence"
Dr. Domenico Orlando, INFN Turin, University of Bern
Abstract: I will discuss the IR fixed point of the O(N) vector model in 3 dimensions (Wilson-Fisher point) in the framework of the large charge expansion. First I will construct an EFT valid for any N, then verify the prediction of the model in the double scaling limit of large N, large charge and finally discuss the use of resurgence to extend the validity of the EFT to sectors of small charge.
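The characteristic output of such a large-charge EFT is the expansion of the lowest operator dimension at fixed global charge Q in three dimensions (generic form from the large-charge literature; the coefficients below are theory-dependent numbers, not values quoted from the talk):
$$\Delta_{Q} = c_{3/2}\,Q^{3/2} + c_{1/2}\,Q^{1/2} + \mathrm{const} + O\!\left(Q^{-1/2}\right),$$
where the Q-independent constant is universal, being fixed by the Casimir energy of the conformal Goldstone mode, while $c_{3/2}$ and $c_{1/2}$ are the Wilson coefficients that the large-N and resurgence analyses mentioned above allow one to test.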
"Loops in Holography"
Prof. Ivo Sachs, Ludwig Maximilian University of Munich
Abstract: A quantum field theory (QFT) in an (Anti-)de Sitter, (A-)dS space-time can be characterised by a conformal field theory (CFT) much like the S-matrix for Minkowski QFT's. I will review how quantum loops are reflected in CFT, both in AdS and dS. The latter has a remote connection to density perturbations in the Universe.
ITMP seminar will be held on Wednesday, December 16 at 18:00. The seminar will be arranged via Zoom.
"RG Limit Cycles and "Spooky" Fixed Points in Perturbative QFT"
Fedor Popov, Princeton University
Abstract: We study quantum field theories with sextic interactions in 3−ϵ dimensions, where the scalar fields ϕab form irreducible representations under the O(N)2 or O(N) global symmetry group. We calculate the beta functions up to four-loop order and find the Renormalization Group fixed points. In an example of large N equivalence, the parent O(N)2 theory and its anti-symmetric projection exhibit identical large N beta functions which possess real fixed points. However, for projection to the symmetric traceless representation of O(N), the large N equivalence is violated by the appearance of an additional double-trace operator not inherited from the parent theory. Among the large N fixed points of this daughter theory we find complex CFTs. The symmetric traceless O(N) model also exhibits very interesting phenomena when it is analytically continued to small non-integer values of N. Here we find unconventional fixed points, which we call "spooky." They are located at real values of the coupling constants gi, but two eigenvalues of the Jacobian matrix ∂βi/∂gj are complex. When these complex conjugate eigenvalues cross the imaginary axis, a Hopf bifurcation occurs, giving rise to RG limit cycles. This crossing occurs for Ncrit≈4.475, and for a small range of N above this value we find RG flows which lead to limit cycles.
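The Hopf-bifurcation mechanism described above can be seen in a two-line toy model: the normal form $\dot z=(\mu+i)z-|z|^{2}z$ has a stable fixed point at the origin for $\mu<0$ and a stable limit cycle of radius $\sqrt{\mu}$ for $\mu>0$. The sketch below integrates this toy flow numerically; it stands in for the actual four-loop beta functions of the sextic model, which are not reproduced here.

```python
import numpy as np

def hopf_rhs(z, mu):
    """Right-hand side of the Hopf normal form dz/dt = (mu + i) z - |z|^2 z."""
    return (mu + 1j) * z - abs(z) ** 2 * z

def late_time_radius(mu, z0=0.01 + 0j, dt=1e-3, steps=200_000):
    """Crude forward-Euler integration of the flow; returns |z| at late times."""
    z = z0
    for _ in range(steps):
        z += dt * hopf_rhs(z, mu)
    return abs(z)

for mu in (-0.10, 0.05, 0.20):
    r = late_time_radius(mu)
    target = np.sqrt(mu) if mu > 0 else 0.0
    print(f"mu = {mu:+.2f}: |z| -> {r:.4f}  (expected limit-cycle radius {target:.4f})")
```

In the RG language, $\mu$ plays the role of the distance from N_crit: as the pair of complex eigenvalues of ∂βi/∂gj crosses the imaginary axis, the couplings stop flowing to a fixed point and start circulating along a closed trajectory.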
"Large-N localization and quiver CFT"
Prof. Konstantin Zarembo, Niels Bohr Institute, Nordita
Abstract: The quiver CFT interpolates between N=4 SYM and N=2 super-QCD, and is dual to strings on the AdS5 x S5/Z2 orbifold. Localization of the path integral in the quiver CFT potentially gives a means to test this duality rigorously. The strong-coupling solution of the localization matrix model, which I will describe, is in remarkable qualitative agreement with the holographic dual, but is not devoid of puzzling features.
"Scattering in chiral strong backgrounds"
Dr. Tim Adamo, University of Edinburgh
Abstract: There are many reasons to be interested in quantum field theory in the presence of strong, non-perturbative background fields, but surprisingly little is known in these scenarios, especially when compared to trivial backgrounds. For instance, the full semi-classical S-matrices of gauge theory and gravity are known in a trivial background, but even for simple strong backgrounds the tree-level amplitudes of these theories have not been computed beyond four external particles. This raises the question: are all-multiplicity formulae (and other hallmarks of the study of scattering amplitudes in recent years) inextricably tied to trivial backgrounds? In this talk, I will demonstrate that for a broad class of chiral backgrounds in four dimensions, we can find all-multiplicity expressions for tree-level scattering amplitudes of gauge theory and gravity which are remarkably simple and clearly unrelated to standard background-coupled field theory on space-time.
"Closed strings and weak gravity condition from higher-spin causality"
Dr. Sandipan Kundu, Johns Hopkins University
Abstract: I will show that metastable higher spin particles, free or interacting, cannot couple to gravity while preserving causality unless there exist higher spin states in the gravitational sector much below the Planck scale. Causality imposes an upper bound on the mass of the lightest higher spin particle in the gravity sector in terms of quantities in the non-gravitational sector. I will argue that any weakly coupled UV completion of such a theory must have a gravity sector containing infinite towers of asymptotically parallel, equispaced, and linear Regge trajectories. This implies that the gravity sector has a stringy structure with an upper bound on the string scale. Another consequence of this bound is that all metastable higher spin particles in 4d with masses below the string scale must satisfy a weak gravity condition. Moreover, these bounds also have surprising implications for large N QCD coupled to gravity.
"Event shapes, the light-ray OPE and superconvergence"
Dr. Alexander Zhiboedov, CERN
Abstract: I will describe recent progress in understanding event shapes in conformal field theories. Familiar from the description of hadronic events at colliders, these observables are computed by matrix elements of the so-called light-ray operators. Characterizing the light-ray operators in a nonperturbative setting poses many theoretical challenges. I will describe some of the recently developed tools to perform computations of conformal event shapes and illustrate them using N=4 SYM. Finally, I will explain how via holography some of the basic properties of conformal event shapes lead to nontrivial sum rules obeyed by possible UV completions of general relativity.
"Modification of the radiation definition in odd dimensions"
Mikhail Khlopunov, ITMP
Abstract: In this seminar, I will discuss the modification of the radiation definition for the case of odd-dimensional space-time. The standard definition of the wave zone fails in odd dimensions due to the violation of the Huygens principle: the signal from an instantaneous flash of current reaches an observer after the interval of time required for propagation of the signal at the speed of light, but then a tail is observed endlessly, whereas in even-dimensional spaces an instant signal ends instantly at the observation point. The reason is that the retarded Green's functions of the wave equation in odd dimensions have support localised not only on the light cone, but also inside it. I will discuss the covariant retarded quantities technique and the Rohrlich-Teitelboim definition of the wave zone. Using this definition, the scalar radiation from a point-like charge moving along a circular trajectory in three- and five-dimensional space-times will be computed. The obtained results will be verified by calculations of the spectral distributions of the radiated energy, which are insensitive to the dimensionality of the space-time. The contributions of the tails to the radiation will also be discussed.
"On double elliptic integrable system: characteristic determinant and Manakov triple"
Dr. Andrei Zotov, Steklov Mathematical Institute RAS
Abstract: We discuss double elliptic integrable systems, where the dependence on the momenta of particles is elliptic. These models generalize the relativistic Ruijsenaars integrable systems, which contain trigonometric (exponential) dependence on momenta. A brief review on underlying dualities and interrelations between integrable systems will be given. We explain how to construct a kind of determinant representation for the known conservation laws. Finally, we will see that in the classical case the double elliptic models are naturally described by the Manakov L-A-B triple instead of the Lax pair.
"Production of massive particles from the decay of a free massless field"
Dr. Ariel Arza, ITMP
Abstract: In this talk we will show, in a simple toy model (H=g\phi\chi^2), how Bose enhancement allows the decay of a massless field into massive particles. This happens due to a parametric resonance caused by a large energy density of the decaying field. I will show the instability conditions as well as the energy density threshold for the decay to occur. I will also compute the equivalent of the spontaneous decay rate.
"Hamiltonian structures of the spin Ruijsenaars-Schneider models"
Prof. Gleb Arutyunov, University of Hamburg
Abstract: TBA
"Constrained modified gravities: generalized unimodular gravity and beyond"
Nikita Kolganov, ITMP
Abstract: In my talk, I will give a brief introduction to modified gravity theories, and then proceed to the particular modified gravity models I am interested in. Most modified gravities are equivalent to General Relativity equipped with some additional degrees of freedom. In our case these d.o.f. are obtained by the explicit breaking of general covariance through an algebraic constraint on the metric coefficients. I am going to discuss the covariantization of such theories, their duality to the well-known k-essence and self-gravitating media models, and applications to inflationary cosmology.
"Schwarzschild black hole thermodynamic entropy on a nice slice"
Dr. Jose Alejandro Rosabal, Asia Pacific Center for Theoretical Physics, Pohang
July 22, 2020 (Wednesday)
Abstract: In this seminar I will present the calculation of the thermodynamic entropy of a Schwarzschild black hole on a nice slice. This can be seen as a warm-up for addressing the calculation of the von Neumann entropy for two intervals that include a portion of the interior of the black hole, and of the Page curve. At the end I will comment on the relation between this work and some recent proposals.
ITMP seminar will be held on Wednesday, July 22 at 14:00. Given the situation the seminar will be arranged via Zoom.
"Gauge/gravity duality and the phenomenology of hadrons"
Dr. Frederic Brunner, Vienna University of Technology (TU Wien)
July 8, 2020 (Wednesday)
Abstract: I will give a pedagogical introduction to aspects of hadronic physics at low energy, and show how the methodology of gauge/gravity duality may be applied to certain problems arising in this context. As a concrete example, I will talk about the search for glueballs, bound states of gluons predicted by lattice gauge theory. These states have not been identified unambiguously among the hadrons we observe in collider experiments, in part due to the lack of theoretical predictions of their decay rates. I will show how gauge/gravity duality can be used to calculate glueball decay rates, and discuss potential implications.
ITMP seminar will be held on Wednesday, July 8 at 18:00. Given the situation the seminar will be arranged via Zoom.
"Current Algebra and Generalised Geometry"
Dr. David Osten, Ludwig Maximilian University/Max-Planck-Institute for Physics, Munich
Abstract: The first part of the talk will be a review of T-duality and generalised geometry in string theory. Generalised geometry is a generalisation of Riemannian geometry that captures certain 'non-geometric' backgrounds which are nevertheless well-defined in string theory. A convenient characterisation of such backgrounds is given in terms of the so-called generalised fluxes. In the second part of the talk I want to discuss the meaning of these generalised fluxes in the worldsheet theory. There, they describe a deformation of the canonical Poisson structure -- namely the current algebra. Besides applications to magnetic backgrounds and integrable sigma models which motivated this work, the virtue of the deformed current algebra lies in the fact that it gives clear-cut routes to non-commutative and non-associative interpretations of the generalised flux backgrounds, to generalisations of T-duality and to generalisations to M-theory.
ITMP seminar will be held on Wednesday, June 24 at 18:00. Given the situation the seminar will be arranged via Zoom.
"Introduction to the TT̄ deformation"
Dr. Riccardo Conti, Turin University
Abstract: In this talk I will give a pedagogical introduction to a special irrelevant deformation of 2-dimensional Quantum Field Theories, the so-called TT̄ deformation, which has recently attracted the attention of the theoretical physics community due to the interesting links with string theory and the AdS/CFT correspondence. I will review some basic aspects of this deformation and describe the principal features of the deformed models both at classical and quantum level. Finally, I will briefly comment on other similar irrelevant deformations and present a series of interesting open questions that still wait for answers.
"Efficient Rules for All Conformal Blocks: A Dream Come True"
Dr. Valentina Prilepina, Laval University
Abstract: In this talk, I will lay out a set of efficient rules for computing d-dimensional global conformal blocks in arbitrary Lorentz representations in the context of the embedding space operator product expansion (OPE) formalism. With these rules in place, the general procedure for determining all possible conformal blocks is reduced to (1) identifying the relevant group theoretic quantities and (2) applying the conformal rules to obtain the blocks. The rules represent a systematic prescription for computing the blocks in a convenient mixed OPE-three-point-function basis as well as a set of rotation matrices, which are necessary to translate these blocks to the pure three-point function basis relevant for the conformal bootstrap. I will start by tracing their origin by describing some of the essential ingredients present in the formalism that naturally give rise to these rules. I will then map out the derivation of the rules, first outlining the general algorithm for the rotation matrices and then proceeding to the conformal blocks. Along the way, I will introduce a convenient diagrammatic notation (somewhat reminiscent of Feynman diagrams), which serves to encode parts of the computation in a compact form. Finally, I will treat several interesting examples to demonstrate the application of these rules in practice.
ITMP seminar will be held on Wednesday, May 27 at 18:00. Given the situation the seminar will be arranged via Zoom.
"The echo method for axion dark matter detection"
Abstract: The echo method is a new idea for axion dark matter searches. It relies on the stimulated decay of cold axions into two photons when a powerful beam of microwave radiation is shot into space. From the axion decay, a feeble but detectable amount of electromagnetic radiation is received near the location where the outgoing beam was released. In this talk I will describe the essentials of this idea as well as the challenges for the future.
"Polyvector deformations in supergravity and the string/M-theory dynamics"
Dr. Ilya Bakhmatov, ITMP
Abstract: Integrability-preserving Yang-Baxter deformations of the string sigma-model have a simple supergravity description. We will review how, by introducing an extra bi-vector field to account for the r-matrix, one recovers the YB deformations after a simple open/closed string map. Things can be made clearer by using the formalism of double field theory, a T-duality covariant description of supergravity defined on an extended spacetime. We will discuss generalisations to the tri-vector deformations of d=11 supergravity and potential consequences for the fundamental membrane dynamics.
ITMP seminar will be held on Wednesday, April 29 at 18:00. Given the situation the seminar will be arranged via Zoom.
"Review on higher-spin theories"
Dr. Dmitry Ponomarev, ITMP
Abstract: I will start by explaining what is meant by higher-spin theories. Then I will illustrate why making higher-spin fields interact is not easy. I will then review the standard no-go theorems for higher-spin interactions. Next, I will discuss different approaches to circumvent these no-go's and review the most successful attempts. Finally, I will discuss some potentially promising directions for future research.
"Some aspects of massive gravity and Horndeski theory"
Prof. Mikhail S. Volkov (Université de Tours, France)
March 26, 2020 (Thursday)
Abstract: In the first part of the talk I plan to give a very brief introduction to the ghost-free massive gravity theory and then briefly describe the recent work on the theory of massive gravitons in arbitrary spacetimes. In the second part, I would like to present some new results obtained in the context of the Horndeski theory, in particular concerning the Palatini analysis of this theory.
ITMP seminar will be held on Thursday, March 26 at 16:00. Given the situation the seminar will be arranged via Skype.
Talk by Marc Henneaux, Professor at the Université Libre de Bruxelles: "Asymptotic symmetries of electromagnetism and gravity".
October 9 (Wednesday)
Marc Henneaux is a renowned theoretical physicist, Professor at the Université Libre de Bruxelles, Director of the International Solvay Institutes for Physics and Chemistry (Belgium), and Professor at the Collège de France. He is a laureate of several prestigious international prizes, including the Francqui Prize, the Humboldt Research Award, and the N.N. Bogoliubov Prize (JINR).
Abstract: The asymptotic symmetries of gravity and electromagnetism are remarkably rich. The talk will explain the asymptotic structure of gravity and electromagnetism in the asymptotically flat case by making central use of the Hamiltonian formalism. In particular, how the relevant infinite-dimensional asymptotic symmetry groups emerge at spatial infinity, and the extension to higher spacetime dimensions, will be discussed.
The talk may be of interest to researchers, senior undergraduate students, and postgraduate students specialising in theoretical physics. The working language is English.
Venue: Lomonosov Building of MSU, room G-716.
Hölder's theorem
In mathematics, Hölder's theorem states that the gamma function does not satisfy any algebraic differential equation whose coefficients are rational functions. This result was first proved by Otto Hölder in 1887; several alternative proofs have subsequently been found.[1]
The theorem also generalizes to the $q$-gamma function.
Statement of the theorem
For every $n\in \mathbb {N} _{0},$ there is no non-zero polynomial $P\in \mathbb {C} [X;Y_{0},Y_{1},\ldots ,Y_{n}]$ such that
$\forall z\in \mathbb {C} \smallsetminus \mathbb {Z} _{\leq 0}:\qquad P\left(z;\Gamma (z),\Gamma '(z),\ldots ,{\Gamma ^{(n)}}(z)\right)=0,$
where $\Gamma $ is the gamma function. $\quad \blacksquare $
For example, define $P\in \mathbb {C} [X;Y_{0},Y_{1},Y_{2}]$ by
$P~{\stackrel {\text{df}}{=}}~X^{2}Y_{2}+XY_{1}+(X^{2}-\nu ^{2})Y_{0}.$
Then the equation
$P\left(z;f(z),f'(z),f''(z)\right)=z^{2}f''(z)+zf'(z)+\left(z^{2}-\nu ^{2}\right)f(z)\equiv 0$
is called an algebraic differential equation, which, in this case, has the solutions $f=J_{\nu }$ and $f=Y_{\nu }$ — the Bessel functions of the first and second kind respectively. Hence, we say that $J_{\nu }$ and $Y_{\nu }$ are differentially algebraic (also algebraically transcendental). Most of the familiar special functions of mathematical physics are differentially algebraic. All algebraic combinations of differentially algebraic functions are differentially algebraic. Furthermore, all compositions of differentially algebraic functions are differentially algebraic. Hölder’s Theorem simply states that the gamma function, $\Gamma $, is not differentially algebraic and is therefore transcendentally transcendental.[2]
Proof
Let $n\in \mathbb {N} _{0},$ and assume that a non-zero polynomial $P\in \mathbb {C} [X;Y_{0},Y_{1},\ldots ,Y_{n}]$ exists such that
$\forall z\in \mathbb {C} \smallsetminus \mathbb {Z} _{\leq 0}:\qquad P\left(z;\Gamma (z),\Gamma '(z),\ldots ,{\Gamma ^{(n)}}(z)\right)=0.$
As a non-zero polynomial in $\mathbb {C} [X]$ can never give rise to the zero function on any non-empty open domain of $\mathbb {C} $ (by the fundamental theorem of algebra), we may suppose, without loss of generality, that $P$ contains a monomial term having a non-zero power of one of the indeterminates $Y_{0},Y_{1},\ldots ,Y_{n}$.
Assume also that $P$ has the lowest possible overall degree with respect to the lexicographic ordering $Y_{0}<Y_{1}<\cdots <Y_{n}<X.$ For example,
$\deg \left(-3X^{10}Y_{0}^{2}Y_{1}^{4}+iX^{2}Y_{2}\right)<\deg \left(2XY_{0}^{3}-Y_{1}^{4}\right)$
because the highest power of $Y_{0}$ in any monomial term of the first polynomial is smaller than that of the second polynomial.
Next, observe that for all $z\in \mathbb {C} \smallsetminus \mathbb {Z} _{\leq 0}$ we have:
${\begin{aligned}P\left(z+1;\Gamma (z+1),\Gamma '(z+1),\Gamma ''(z+1),\ldots ,\Gamma ^{(n)}(z+1)\right)&=P\left(z+1;z\Gamma (z),[z\Gamma (z)]',[z\Gamma (z)]'',\ldots ,[z\Gamma (z)]^{(n)}\right)\\&=P\left(z+1;z\Gamma (z),z\Gamma '(z)+\Gamma (z),z\Gamma ''(z)+2\Gamma '(z),\ldots ,z{\Gamma ^{(n)}}(z)+n{\Gamma ^{(n-1)}}(z)\right).\end{aligned}}$
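The last equality uses the general Leibniz rule: since every derivative of $z$ beyond the first vanishes, only two terms survive in each derivative of the product,
$[z\Gamma (z)]^{(k)}=\sum _{j=0}^{k}{\binom {k}{j}}\,z^{(j)}\,{\Gamma ^{(k-j)}}(z)=z\,{\Gamma ^{(k)}}(z)+k\,{\Gamma ^{(k-1)}}(z).$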
If we define a second polynomial $Q\in \mathbb {C} [X;Y_{0},Y_{1},\ldots ,Y_{n}]$ by the transformation
$Q~{\stackrel {\text{df}}{=}}~P(X+1;XY_{0},XY_{1}+Y_{0},XY_{2}+2Y_{1},\ldots ,XY_{n}+nY_{n-1}),$
then we obtain the following algebraic differential equation for $\Gamma $:
$\forall z\in \mathbb {C} \smallsetminus \mathbb {Z} _{\leq 0}:\qquad Q\left(z;\Gamma (z),\Gamma '(z),\ldots ,{\Gamma ^{(n)}}(z)\right)\equiv 0.$
Furthermore, if $X^{h}Y_{0}^{h_{0}}Y_{1}^{h_{1}}\cdots Y_{n}^{h_{n}}$ is the highest-degree monomial term in $P$, then the highest-degree monomial term in $Q$ is
$X^{h+h_{0}+h_{1}+\cdots +h_{n}}Y_{0}^{h_{0}}Y_{1}^{h_{1}}\cdots Y_{n}^{h_{n}}.$
Consequently, the polynomial
$Q-X^{h_{0}+h_{1}+\cdots +h_{n}}P$
has a smaller overall degree than $P$, and as it clearly gives rise to an algebraic differential equation for $\Gamma $, it must be the zero polynomial by the minimality assumption on $P$. Hence, defining $R\in \mathbb {C} [X]$ by
$R~{\stackrel {\text{df}}{=}}~X^{h_{0}+h_{1}+\cdots +h_{n}},$
we get
$Q=P(X+1;XY_{0},XY_{1}+Y_{0},XY_{2}+2Y_{1},\ldots ,XY_{n}+nY_{n-1})=R(X)\cdot P(X;Y_{0},Y_{1},\ldots ,Y_{n}).$
Now, let $X=0$ in $Q$ to obtain
$Q(0;Y_{0},Y_{1},\ldots ,Y_{n})=P(1;0,Y_{0},2Y_{1},\ldots ,nY_{n-1})=R(0)\cdot P(0;Y_{0},Y_{1},\ldots ,Y_{n})=0_{\mathbb {C} [Y_{0},Y_{1},\ldots ,Y_{n}]}.$
A change of variables then yields
$P(1;0,Y_{1},Y_{2},\ldots ,Y_{n})=0_{\mathbb {C} [Y_{0},Y_{1},\ldots ,Y_{n}]},$
and an application of mathematical induction (along with a change of variables at each induction step) to the earlier expression
$P(X+1;XY_{0},XY_{1}+Y_{0},XY_{2}+2Y_{1},\ldots ,XY_{n}+nY_{n-1})=R(X)\cdot P(X;Y_{0},Y_{1},\ldots ,Y_{n})$
reveals that
$\forall m\in \mathbb {N} :\qquad P(m;0,Y_{1},Y_{2},\ldots ,Y_{n})=0_{\mathbb {C} [Y_{0},Y_{1},\ldots ,Y_{n}]}.$
This is possible only if $P$ is divisible by $Y_{0}$, which contradicts the minimality assumption on $P$. Therefore, no such $P$ exists, and so $\Gamma $ is not differentially algebraic.[2][3] Q.E.D.
References
1. Bank, Steven B. & Kaufman, Robert. “A Note on Hölder’s Theorem Concerning the Gamma Function”, Mathematische Annalen, vol 232, 1978.
2. Rubel, Lee A. “A Survey of Transcendentally Transcendental Functions”, The American Mathematical Monthly 96: pp. 777–788 (November 1989). JSTOR 2324840
3. Boros, George & Moll, Victor. Irresistible Integrals, Cambridge University Press, 2004, Cambridge Books Online, 30 December 2011. doi:10.1017/CBO9780511617041.003
Power flow analysis and optimal locations of resistive type superconducting fault current limiters
Xiuchang Zhang, Harold S. Ruiz, Jianzhao Geng, Boyang Shen, Lin Fu, Heng Zhang & Tim A. Coombs
In conventional approaches to the integration of resistive-type superconducting fault current limiters (SFCLs) into electric distribution networks, SFCL models largely rely on the insertion of a step or exponential resistance that is determined by a predefined quenching time. In this paper, we expand the scope of the aforementioned models by considering the actual behaviour of an SFCL in terms of the dynamic, temperature-dependent power-law relation between the electrical field and the current density that is characteristic of high temperature superconductors. Our results are compared to the step-resistance models for the sake of discussion and clarity of the conclusions. Both SFCL models were integrated into a power system model built based on the UK power standard, to study the impact of these protection strategies on the performance of the overall electricity network. As a representative renewable energy source, a 90 MVA wind farm was considered for the simulations. Three fault conditions were simulated, and the figures for the fault current reduction predicted by both fault current limiting models have been compared in terms of multiple current measuring points and allocation strategies. Consequently, we have shown that the incorporation of the E–J characteristics and thermal properties of the superconductor at the simulation level of electric power systems is crucial for estimating the reliability and determining the optimal locations of resistive type SFCLs in distributed power networks. Our results may help decision making by distribution network operators regarding investment and promotion of SFCL technologies, as it is possible to determine the maximum number of SFCLs necessary to protect against different fault conditions at multiple locations.
With the persistent increase of conventional system generation and distributed generation (DG), such as photovoltaic plants, concentrating solar power plants, and wind farms, the likelihood of fault events capable of causing great and irreparable damage to a large set of electrical devices, or even system blackouts, has been rapidly rising (Zhang et al. 2013; Zheng et al. 2015). Various strategies for mitigating fault current levels have been implemented in the power industry, such as the construction of new substations, splitting existing substation buses, upgrading of multiple circuit breakers, and the installation of high impedance transformers. Nevertheless, all these operational practices involve a non-negligible degradation of the system's stability and performance, which ultimately means the occurrence of significant economic losses and further investment (Kovalsky et al. 2005). Series reactors and solid state fault current limiters are also widely used, although these insert a high impedance causing a continuous voltage drop and power losses during normal operation (Ye and Juengst 2004). However, superconducting fault current limiting technology can overcome all these difficulties, preserving the stability and reliability of the power system with minimum losses under normal conditions (Angeli et al. 2016), although a comparison of different fault protection approaches is beyond the scope of this manuscript. An exhaustive review on successful field tests and different existing numerical models of SFCLs can be found in Ref. Ruiz et al. (2015).
Two simplified SFCL models have been identified as in common use for simulating the performance of SFCLs installed in real power grids. The first approach is to model the SFCL as a step-resistance with a pre-defined triggering current, quench time, and recovery time, as in Ref. Khan et al. (2011) and Ref. Hwang et al. (2013). This approach allows us to consider a simplified scenario in which no energy loss occurs during the superconducting state and a high impedance is inserted in the normal state, by assuming that the SFCL responds to faults instantaneously. However, this may lead to significant inaccuracies, since the quenching and recovery characteristics depend on the thermal and electrical properties of the superconductors, which are both neglected in this simplified model. Modelling of a resistive type SFCL can also be simplified by using an exponential function for the dynamic resistance of the SFCL device, in which the quenching action of the superconducting material is solely determined by time. This method has been previously implemented in Ref. Park et al. (2010, 2011) in order to study the optimal locations and associated resistive values of SFCLs for a schematic power grid with an interconnected wind-turbine generation system, which found that the installation of SFCLs can not only reduce the short-circuit current level, but also dramatically enhance the reliability of the wind farm. Compared to the previous approach, this exponential resistance curve fits better with the real performance of an SFCL and furthermore provides aggregated computational benefits in terms of numerical convergence. Nevertheless, SFCL characteristics, including triggering current, quenching, and recovery time, must also be set before initialising the simulation. Therefore, under this scenario the physical properties of the superconductors are ignored as well. A more advanced model for a resistive-type SFCL was presented in Ref. Langston et al. (2005), in which both the physical properties and the real dimensions of superconductors were considered. A similar model was then built by Colangelo and Dutoit (2013) in order to simulate the behaviour of the SFCL designed in the ECCOFLOW project. Using this model the quenching action of the SFCL is no longer pre-defined. However, the computational complexity of these models is significantly increased, especially during large scale power network simulations. Hence, when simulating the performance of SFCLs installed in power systems, it is important to assess the necessity of considering the thermal and electrical properties of superconducting materials, in order to choose a better trade-off between computational complexity and model accuracy. For any of the adopted strategies, the research must ultimately address the process of finding the optimal locations for multiple SFCLs inside a power network, which, to our knowledge, has so far considered a maximum of just two SFCLs. This means that the cooperation between a larger number of SFCLs remains an open issue.
In this paper we present a comprehensive study into the performance and optimal location analysis of resistive type SFCLs in realistic power systems, starting from the simplest consideration of a single step-resistance for the activation of an SFCL, up to considering the actual electro-thermal behaviour of the superconducting component. We have simulated the performance of SFCLs described by two different models: (1) as a non-linear resistance depending on time, and (2) as a dynamic temperature-dependent model consisting of the actual E–J characteristics of the superconducting material. The power grid model applied, which includes interconnected dispersed energy resources, was built based on the UK network standard. Through simulation of the system behaviour under three fault conditions (two distribution network faults in different branches, and one transmission system fault), the optimum SFCL installation schemes were found from all the feasible combinations of SFCLs. In addition, a detailed comparison between the figures obtained for each of the above cases was performed, proving that the non-linear resistor model is insufficient for accurate estimation of the reliability and optimal location of an SFCL, as the complex thermal and electrical behaviours of the superconducting material during its transition to the normal state cannot be simplified to a single step-resistance.
This paper is organised as follows. "SFCLs and topology of the power system" section introduces the topology of the power system and the proposed SFCL models. "Network stability, current limiting performance, and recovery characteristics" section presents a comprehensive reliability study on the SFCL scheme, including analysis of the network stability, current limiting performance, and recovery characteristics of the SFCL both with and without the inclusion of a bypass switch. "Identification of the optimal location" section then describes a novel method for determining optimal locations of multiple SFCLs in a large scale electrical grid. Finally, the main conclusions of this paper are summarized in "Conclusion" section.
SFCLs and topology of the power system
The topology of the modelled power system depicted in Fig. 1 was built based on the UK network standards (Butler 2001). The power system has a 120 MVA conventional power plant emulated by a three-phase synchronous machine, which is additionally connected to a local industrial load of 40 MW located 5 km away from the main power plant. Afterwards, the voltage level is boosted from 23 to 275 kV by a step-up transformer (TR1), from which the conventional power plant is connected to an upstream power grid rated with a short circuit level of 2 GW through a 130 km distributed-parameters transmission line. Then, the 275 kV high-voltage transmission system is split into two distribution networks. First, after the voltage level is stepped down to 33 kV by substations TR2 and TR4, the upper branch (industrial branch) supplies power to three industrial loads with rated powers of 55, 15, and 10 MW, respectively. Likewise, the lower branch (domestic branch) is also connected to two step-down substations TR3 and TR7, with 70 km distance between them. The role of these two substations is to reduce the voltage of the lower sub-grid to 33 kV, which matches the voltage level of the interconnected 90 MVA wind power plant (emulating the Rhyl Flats offshore wind farm located in North Wales) after being boosted by TR10. This offshore wind power plant is composed of twenty-five fixed-speed induction-type wind turbines, each having a rating of 3.6 MVA, and is located 30 km away from its connecting point with the lower distribution network (Feng et al. 2010). After integration, the lower branch and the wind farm together provide electric energy to four domestic loads with rated powers of 50, 15, 12 and 10 MW, respectively. Finally, the industrial branch and the domestic branch are connected through a bus-bar coupler, and the power system is balanced in such a way that the current flowing through the bus-tie is only a few amperes during normal operation.
Power system model based on the UK grid standard as described in section "SFCLs and topology of the power system". Three prospective fault positions and five prospective SFCL locations are illustrated
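For reference, the main network parameters described above can be collected in a compact summary. The structure below is purely illustrative (it is not the actual simulation input); the key names are assumptions and the values are transcribed from the text.

```python
# Hypothetical summary of the modelled network of Fig. 1; keys and layout are
# illustrative only, with values transcribed from the description above.
NETWORK = {
    "conventional_plant": {"rating_MVA": 120, "voltage_kV": 23, "local_load_MW": 40},
    "upstream_grid": {"short_circuit_level_GW": 2, "voltage_kV": 275},
    "transmission_line_km": 130,
    "industrial_branch": {"voltage_kV": 33, "loads_MW": [55, 15, 10]},
    "domestic_branch": {"voltage_kV": 33, "loads_MW": [50, 15, 12, 10]},
    "wind_farm": {"rating_MVA": 90, "turbines": 25, "unit_rating_MVA": 3.6,
                  "distance_to_grid_km": 30},
    "fault_points": {"F1": "industrial branch", "F2": "domestic branch",
                     "F3": "transmission system"},
    "sfcl_locations": {1: "integrating point", 2: "wind farm",
                       3: "industrial branch", 4: "domestic branch", 5: "bus-tie"},
}
```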
It is generally accepted that a three-phase (symmetric) short-circuit fault provokes the highest fault current among all possible faults, since it will cause the most drastic decrease of the system impedance. In order to ensure safe operation, the maximum current and electrodynamic withstand capabilities of electrical equipment are primarily designed according to this situation. Therefore, it is essential to simulate the behaviour of the power system under three-phase short-circuit fault. The symmetric faults were initialised at three potential locations marked as Fault 1 (132 kV), Fault 2 (33 kV) and Fault 3 (275 kV), which represent prospective faults occurring at the industrial branch, the domestic branch, and the transmission system, respectively (see Fig. 1). Five positions for the installation of SFCLs were proposed as shown in Fig. 1, namely at: (1) the integrating point between the conventional power plant and the upstream power grid (Location 1), (2) the interconnection between the wind farm and the port of domestic branch (Location 2), (3) the industrial loads branch (Location 3), (4) the domestic loads branch (Location 4), and (5) the bus-tie coupling the two distribution networks (Location 5).
Identical single phase SFCLs were implemented for each one of the three phases of the system, as each phase of the SFCL is only triggered by the current flowing through its own phase. However, under symmetric faults, each phase of the SFCL will quench slightly asynchronously within the first cycle of the fault current, leading to an instantaneous imbalance between the phases (Blair et al. 2012). Hence, for all types of faults and at diverse locations, independent modules for each one of the three phases have been considered, in order to allow for an accurate simulation of the effects of an SFCL on the overall power grid. Two different models were considered to emulate the SFCL performance, as described below.
Step resistance SFCL
The current limiting performance of the developed step resistance SFCL model is dominated by five predefined parameters: (1) triggering current; (2) quenching resistance; (3) quenching time, which has been assumed to be equal to 1 ms in accordance with Refs. Sung et al. (2009) and Alaraifi et al. (2013); (4) a normal operating resistance of 0.01 \(\Omega\); and (5) a recovery time of 1 s. The values of the triggering current and quenching resistance are not provided in this section since they vary with the location of the SFCL. The structure of the step resistance model is illustrated in Fig. 2.
One phase of the step resistance SFCL model
The operating principle of this model can be summarised as follows: first, the SFCL model calculates both the absolute and the RMS values of the flowing current. If both values are lower than the triggering current, the model will consider the SFCL in the superconducting state and insert a normal operating resistance (0.01 \(\Omega\)) into the grid. Otherwise, if either the absolute value or the RMS value of a passing current exceeds the triggering current level, the output resistance will be increased to the quenching resistance after the predefined quenching time. Lastly, if the current flowing through the SFCL model falls below the triggering current due to the clearance of the fault, the SFCL will restore its superconducting state after the recovery time.
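This switching logic can be summarised in the short sketch below. It is a minimal illustration only: the class name, the default parameter values (a 30 \(\Omega\) quench resistance is used merely as an example) and the way the currents are supplied are assumptions, not the actual simulation code of this work.

```python
class StepResistanceSFCL:
    """Minimal sketch of the step-resistance SFCL logic described above."""

    def __init__(self, i_trigger, r_normal=0.01, r_quench=30.0,
                 t_quench=1e-3, t_recovery=1.0):
        self.i_trigger = i_trigger      # triggering current (A)
        self.r_normal = r_normal        # superconducting-state resistance (ohm)
        self.r_quench = r_quench        # quenched-state resistance (ohm)
        self.t_quench = t_quench        # delay before the full resistance appears (s)
        self.t_recovery = t_recovery    # time needed to restore superconductivity (s)
        self.state = "superconducting"
        self.t_quench_start = None      # instant the trigger level was exceeded
        self.t_below_trigger = None     # instant the current fell back below the trigger

    def resistance(self, t, i_abs, i_rms):
        """Return the inserted resistance at time t, given |i(t)| and the RMS current."""
        exceeded = max(i_abs, i_rms) > self.i_trigger
        if self.state == "superconducting":
            if exceeded:                          # quench is initiated
                self.state, self.t_quench_start = "quenched", t
            return self.r_normal
        # quenched state: wait for the fault to clear, then for the recovery time
        if exceeded:
            self.t_below_trigger = None
        elif self.t_below_trigger is None:
            self.t_below_trigger = t
        elif t - self.t_below_trigger >= self.t_recovery:
            self.state, self.t_below_trigger = "superconducting", None
            return self.r_normal
        return self.r_quench if t - self.t_quench_start >= self.t_quench else self.r_normal
```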
SFCL with E–J–T power law
The sudden change in the SFCL resistance can be macroscopically simplified into the E–J power law (Rhyner 1993), which can be divided into three sub-regions: the superconducting state defined by \(E(T,t)<E_{0}\) and \(T(t)<T_{c}\), the flux flow state defined by \(E(T,t)>E_{0}\) and \(T(t)<T_{c}\), and the normal conducting state defined by \(T(t)>T_{c}\), with \(T_{c}\) the critical temperature of the Bi2212 bar, and \(E_{0}=1\times 10^{-6}\hbox {V}\cdot \hbox {m}^{-1}\) (Langston et al. 2005; Blair et al. 2012; Bock et al. 2015). All three sub-regions follow different power laws, the combination of which forms the E–J characteristics of the SFCL as follows:
$$\begin{aligned} E(T,t)= {\left\{ \begin{array}{ll} E_{c}\left( \dfrac{J(t)}{J_{c}(T(t))}\right) ^{n} , &\quad \text {for } E(T,t)<E_{0} \text { and } T(t)<T_{c},\\ E_{0}\left( \dfrac{E_{c}}{E_{0}}\right) ^{m/n}\left( \dfrac{J_{c}(77 K)}{J_{c}(T(t))}\right) \left( \dfrac{J(t)}{J_{c}(77 K)}\right) ^{m} , &\quad \text {for } E(T,t)>E_{0} \text { and } T(t)<T_{c},\\ \rho (T_{c})\dfrac{T(t)}{T_{c}}J(t) , & \quad\text {for } T(t)>T_{c}, \end{array}\right. } \end{aligned}$$
$$\begin{aligned} J_{c}(T(t))=J_{c}(77K)\dfrac{T_{c}-T(t)}{T_{c}-77},\quad {\text{for}} \quad J>J_{c}. \end{aligned}$$
When modelling the SC state, we used \(n=9\) in accordance with Refs. Buhl et al. (1997), Paul and Meier (1993), Herrmann et al. (1996), Bock et al. (2005), Noe et al. (2001), and \(m=3\) for the flux flow state, in good agreement with the experimental data reported in Refs. Paul et al. (2001) and Chen et al. (2002). In addition, we have assumed that the normal conducting state resistivity is a linear function of temperature when \(T(t)> T_{c}\), with \(\rho (T_{c}) = 7\times 10^{-6}\,\Omega \cdot m\) for Bi2212 bars (Elschner et al. 2001). Furthermore, the relationship between the critical current density and the temperature was also set to be linear, as in Eq. (2), as this has been proven by Kozak et al. for the specific case of Bi2212 compounds (Kozak et al. 2005). To complete the SFCL model, a CuNi alloy \((\rho =40\,\mu \Omega \cdot m)\) resistor was connected in parallel with the superconductor on the basis of the project disclosed in Ref. Rettelbach and Schmitz (2003). This shunt resistance can protect the superconducting material from being damaged by hot spots that develop under limiting conditions, and furthermore prevents over-voltages that may possibly appear if the quench occurs too rapidly (Noe and Steurer 2007; Bock et al. 2004). Finally, assuming that the SC composite is homogeneous, the thermal modelling of the SFCL considers a first order approximation of the heat transfer between the superconductor and the liquid nitrogen bath, as follows:
$$\begin{aligned} R_{SC}&= \dfrac{1}{2\kappa \pi d_{SC}l_{SC}} , \end{aligned}$$
$$\begin{aligned} C_{SC}&= \dfrac{\pi d_{SC}^{2}}{4}l_{SC}c_{v} ,\end{aligned}$$
$$\begin{aligned} Q_{generation}(t)&= I(t)^{2}\times R_{SFCL}(t), \end{aligned}$$
$$\begin{aligned} Q_{cooling}(t)&= \dfrac{T(t)-77}{R_{SC}}, \end{aligned}$$
where \(R_{SC}\) stands for the thermal resistance from the SC material to its surrounding coolant, with \(\kappa\) the heat transfer coefficient to the coolant, \(C_{SC}\) is the heat capacity of the Bi2212 bar, whose volumetric specific heat is \(c_{v}=0.7\times 10^{6}\, J/(m^{3}\cdot K)\) (Meerovich and Sokolovsky 2007), and
$$\begin{aligned} T(t)=77+\dfrac{1}{C_{SC}}\int _{0}^{t}[Q_{generation}(t)-Q_{cooling}(t)]dt. \end{aligned}$$
The SC is modelled as a cylindrical wire of length \(l_{SC}\), which is adjusted at each installation location in order to limit the prospective fault current to the desired level. Likewise, the diameter \(d_{SC}\) is regulated to ensure that the SFCL not only remains in the superconducting state during normal operation, but also quenches within a few milliseconds once a short-circuit fault occurs at some location on the grid. In practice, although the wire diameter cannot be modified after fabrication, one can connect several wires in parallel to achieve the expected current limiting performance (Blair et al. 2011), which justifies the previous approach.
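A minimal numerical sketch of Eqs. (1)–(7) is given below, using an explicit Euler step for the temperature. All parameter values (critical temperature, \(J_{c}(77\,K)\), heat transfer coefficient, wire geometry) are illustrative assumptions, not the values used in the simulations of this paper.

```python
import math

# Illustrative parameters (assumed values, NOT those of the actual simulations):
T_C, T_BATH = 95.0, 77.0     # critical and coolant temperatures (K)
E_0 = 1e-6                   # flux-flow threshold field (V/m)
E_C = 1e-6                   # critical-field criterion (V/m)
N_EXP, M_EXP = 9, 3          # power-law exponents (superconducting / flux-flow states)
RHO_TC = 7e-6                # normal-state resistivity at T_c (ohm*m)
JC_77 = 1.5e7                # critical current density at 77 K (A/m^2), assumed
C_V = 0.7e6                  # volumetric specific heat (J/(m^3*K))
KAPPA = 1.5e3                # heat transfer coefficient to the bath (W/(m^2*K)), assumed
L_SC, D_SC = 50.0, 4e-3      # wire length (m) and diameter (m), assumed

AREA = math.pi * D_SC ** 2 / 4.0
R_TH = 1.0 / (2.0 * KAPPA * math.pi * D_SC * L_SC)   # Eq. (3): thermal resistance to coolant
C_SC = AREA * L_SC * C_V                             # Eq. (4): heat capacity of the bar

def jc(T):
    """Linear J_c(T) of Eq. (2), used only for T < T_c."""
    return JC_77 * (T_C - T) / (T_C - 77.0)

def e_field(J, T):
    """Piecewise E(J, T) of Eq. (1)."""
    if T >= T_C:                                     # normal conducting state
        return RHO_TC * (T / T_C) * J
    E_sc = E_C * (J / jc(T)) ** N_EXP                # superconducting branch
    if E_sc < E_0:
        return E_sc
    return (E_0 * (E_C / E_0) ** (M_EXP / N_EXP)     # flux-flow branch
            * (JC_77 / jc(T)) * (J / JC_77) ** M_EXP)

def step(I, T, dt):
    """One explicit-Euler update of the wire temperature and resistance, Eqs. (5)-(7)."""
    J = abs(I) / AREA
    R_sfcl = e_field(J, T) * L_SC / max(abs(I), 1e-12)   # V = E*l_SC, R = V/I
    q_gen = I ** 2 * R_sfcl                              # Eq. (5): Joule heating
    q_cool = (T - T_BATH) / R_TH                         # Eq. (6): heat removed by the bath
    T_new = T + dt * (q_gen - q_cool) / C_SC             # Eq. (7), stepped in time
    return T_new, R_sfcl
```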
Network stability, current limiting performance, and recovery characteristics
Performance comparison between the step SFCL model and the E–J power law based SFCL model: a resistance growth, b fault current characteristics, c current distribution in the SFCL, d temperature curves of each phase. The displayed insets in subplots a and d are measured in the corresponding units of the main plot
In order to compare the fault current limitation properties of the two SFCL models, in Fig. 3 we present the results for a three-phase to ground fault with negligible fault resistance when it is initialised at the domestic network (Fault 2), and a single SFCL is installed next to the fault position (Location 4). Figure 3a illustrates that the step resistance model and the E–J power law based model both respond almost simultaneously to the occurrence of a short-circuit fault. However, as the SFCL needs ~2 ms to fully quench due to its E–J characteristic and dynamic temperature (Fig. 3d), the first-peak reduction obtained with the step resistance model is overestimated by 11% (the first peak is limited to 6.5 kA by the step resistance model and to 7.6 kA by the E–J–T model, compared with 10 kA without SFCL), as shown in Fig. 3b. In addition, the shunt resistor diverts the major portion of the fault current after the superconductor develops its normal state (Fig. 3c). Therefore, the shunt resistance effectively lowers the thermal stress on the HTS wire, simultaneously preventing damage by overheating, whilst the recovery time is reduced (Morandi 2013).
Initial tests without integration of the SFCL model have confirmed that the power system operates at the rated state during normal operation. Then, under occurrence of three-phase to ground faults at Fault-1, Fault-2 and Fault-3 (see Fig. 1), the short-circuit currents were measured at the integrating point (Location 1), wind farm (Location 2), branch 1 (Location 3) and branch 2 (Location 4), such that the instantaneous fault current can be described by:
$$\begin{aligned} i_{k}&=\underbrace{I_{pm}\sin (\omega t+\alpha -\beta _{kl})}_{\text {periodic component}} \nonumber \\ &\quad + \underbrace{\left[ I_{m}\sin (\alpha -\beta )-I_{pm}\sin (\alpha -\beta _{kl})\right] e^{-\dfrac{t}{\tau _{k}}}}_{\text {aperiodic component}}, \end{aligned}$$
where \(I_{m}\) is the amplitude of the rated current of the power grid, \(\beta\) and \(\beta _{kl}\) represent the impedance angles before and after the fault, respectively, \(\alpha\) defines the fault inception angle, \(I_{pm}\) denotes the magnitude of the periodic component of the short-circuit current, and \(\tau _{k}\) stands for the time constant of the circuit. Hence, since the aperiodic offset is dominated by the term \(I_{pm}\sin (\alpha -\beta _{kl})\), the fault currents achieve their maximum values when \(\sin (\alpha -\beta _{kl})=\pm 1\), i.e. when \(\alpha -\beta _{kl}=(2n+1)\pi / 2\) with \(n\in \mathbb {Z}\). This condition was implemented all through our study in order to consider the most hazardous fault scenarios, and to assess the impact of the SFCLs on the generation side and on the voltage stability of the grid. For instance, Fig. 4 shows the response of the output electrical power, rotor speed, and terminal voltage of the conventional power plant (23 kV/120 MVA), together with the voltage output at the domestic branch (Branch 2), when a 200 ms three-phase to ground fault is applied at the industrial branch (Fault-1) after 1.2 s of normal operation.
Generator parameters and voltages of branch 2 in response to a 200 ms three-phase to ground fault at branch 1
Initially we consider the power system operation without the insertion of SFCLs. Under this scenario, the output electrical power drops sharply to 0.15 pu just after the fault incident (Fig. 4a), whilst the governors of the power plant, such as steam and hydro, still deliver the same mechanical power to the rotors. Thus, a rapid acceleration of the rotors occurs due to this power imbalance, as shown in Fig. 4b. However, when an SFCL is installed at Branch 1 (Location 1), its high resistance state enables the SFCL to dissipate the excess generator power during the fault condition, hence improving the energy balance of the system and effectively reducing the variation of the rotor speed. Furthermore, considering the conventional equal-area criterion for stability issues (Sung et al. 2009; Kundur et al. 1994), the SFCL could improve the damping characteristics of the generator speed and system frequency, as well as of the system current, because the insertion of a high resistance into the grid would significantly increase the damping ratio. Moreover, due to the short-circuit fault of Branch 1, a sharp voltage drop (Fig. 4c, d) can be seen at both the power plant terminal (0.5 pu) and the non-faulted Branch 2 (0.35 pu). Then, by introducing the SFCL, which acts as a voltage booster, the observed voltage dips are mitigated by 40 and 50%, respectively. This improvement allows the healthy parts of the system (without the fault inception) to be less affected, and makes the integration of an SFCL a reliable fault ride-through scheme.
A 200 ms short-circuit fault was initiated in Branch 1 (Fault 1) in order to study the relationship between the current limiting performance of an SFCL and its maximum normal resistance. First, without the protection of the SFCL, simulation results have shown that the first peak of the current flowing into Branch 1 reached ~3.8 kA, which is ~6.8 times the rated value (560 A). Then, after installation of the SFCL, a considerable reduction of the fault current was observed, as shown in Fig. 5. The insets (a) and (b) of this figure illustrate the variation of the limited current when the two SFCL models (step resistance, and E–J–T power law) were integrated at Branch 1 (Location 3).
Current curves of phase A under a branch 1 fault (Fault 1) when the SFCL resistance increases from 0.2 R to 2.0 R
For the step-resistance model, with the SFCL resistance increasing from 0.2 R to 2 R \((R=30\,\Omega )\) during the quenched state, the peak value of the fault current gradually decreased from ~3.8 to ~1.2 kA, showing a small displacement of the peak values. However, in the case of the E–J–T power law model, a noticeable kink appeared at 2.5 kA when the maximum resistance of the SFCL was greater than 1 R. Remarkably, this distinctive kink can be interpreted as the threshold value for the maximum reduction of the fault current by an SFCL, which, to the best of the knowledge of the authors, cannot be determined with any other model. To illustrate the difference, the step-resistance model predicts a continuous decrease of the first peak of the fault current as R increases (Fig. 5a, c), contrary to what is observed with the more realistic E–J–T model (Fig. 5b, d), which predicts that, regardless of the increase in the SFCL resistance, beyond a certain value it can only limit the first peak of the fault current to a well defined threshold. For instance, for the case illustrated in Fig. 5, we have determined that at the instant the kink appears, the current curves overlap at about 2.5 kA, hence defining the maximum peak reduction of the fault current at this location (Location 3), and therefore an optimal SFCL resistance. It is worth mentioning that the characteristic kink is also observed when the SFCL is located at any other position, e.g., at the bus-tie (Fig. 5c, d), which validates the generality of our statement. Thus, in terms of economic considerations, it represents a very valuable result for distribution operators as it allows a maximum threshold to be set on the required capacity of the SFCL, minimising material investments for specific locations, since beyond this threshold no further reduction of the first peak of the fault current can be achieved.
Although the passive transition of the SC material and the high normal resistance enable the SFCL to limit the fault current before it attains its first peak, in some cases the recovery characteristics of the SFCL need to be improved, because the SC may need several minutes to restore its superconducting state under load conditions. For instance, if a fault event quenches a single SFCL located at the domestic branch, it may take more than 300 seconds to recover once the fault current has been cleared. Therefore, in order to decrease the recovery time of the SFCL, we have connected a bypass switch in parallel to both the SC and the shunt resistance (Melhem 2011). Thus, when the SFCL can quickly recover the superconducting state under load conditions, the switch S1 remains closed after the fault is cleared. However, if the SFCL cannot be automatically recovered within a few seconds, then switch S2 can be closed and switch S1 instantaneously opened, to quickly disconnect the SC from the system. This allows the SC to undergo its recovery process without further accumulation of heat, as shown in Fig. 6 for an SFCL installed at Location 2 after encountering a 0.2 s three-phase to ground fault at the domestic branch (Fault 2). For this case, and without applying the bypass switch strategy, a certain amount of current will continue passing through the SFCL after the clearance of the fault. This current continues to generate heat inside the superconductor, which significantly slows down the decrease of temperature, and hence delays the recovery of the SFCL by over five minutes. However, with a properly designed control scheme, the E–J–T model can open the switch S1 and close the switch S2 at the moment that the fault ends, thus transferring the current to the S2 branch. In fact, by using this method we have determined that the recovery time can be reduced to less than 1.6 s without affecting the normal operation of the power grid. After the SC is restored to its superconducting state, the switches S1 and S2 act again to prepare the SFCL for the next fault. However, as it is not possible to foresee the location of a fault event, the optimal location for the installation of one or more SFCLs has to be assessed, this being the purpose of the following section.
SFCL resistance and temperature dynamics with and without the assistance of the Bypass switch strategy shown in the bottom of the figure
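For illustration, the S1/S2 switching logic described above can be sketched as follows. This is only a schematic outline: the temperature margin and the function interface are assumptions, not the controller actually implemented in the simulations.

```python
def bypass_control(fault_cleared, T_sc, T_bath=77.0, T_margin=1.0):
    """Schematic S1/S2 bypass logic (illustrative only).
    S1 is in series with the SC + shunt branch; S2 is the parallel bypass."""
    if not fault_cleared:
        # during the fault the SC branch must stay connected so it can limit the current
        return {"S1": "closed", "S2": "open"}
    if T_sc > T_bath + T_margin:
        # fault cleared but the SC is still warm: divert the load current to the bypass
        return {"S1": "open", "S2": "closed"}
    # SC has cooled back to the bath temperature: reconnect it for the next fault
    return {"S1": "closed", "S2": "open"}
```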
Identification of the optimal location
In order to attain an accurate estimation of the optimal location for the installation of one or more SFCLs, all possible SFCL combinations of the five proposed locations depicted in Fig. 1 were analysed for the three different fault points. This resulted in a total of 31 allocation strategies, including five different schemes for the integration of a single SFCL (Locations 1–5), 10 dual combinations of SFCLs, 10 further combinations of three SFCLs, five combinations of four SFCLs, and finally the cooperation between all five SFCLs.
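This count is simply the number of non-empty subsets of the five candidate locations:

$$\sum _{k=1}^{5}\binom{5}{k}=5+10+10+5+1=2^{5}-1=31.$$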
The current signals at both the wind farm terminal (Location 2) and the integrating point of the conventional power plant and the upstream power grid (Location 1) were measured for all three fault conditions (Fig. 1). We also analysed the current injection of the industrial branch (Location 3) and the domestic branch (Location 4) when faults happen at the two networks: Fault 1 and Fault 2, respectively. For the sake of brevity, we do not present the results for the measured current at the industrial branch when Fault 2 or Fault 3 occurs, because based on the analysis of the system impedance change, the magnitude of the current flowing into the industrial branch is actually reduced by the two faults to levels lower than the normal current, i.e., at this point the SFCL does not need to be triggered to protect this branch. The same argument applies to the domestic branch under Fault 1 and Fault 3 conditions. Our results are presented below in terms of the single or multiple SFCL strategies. The optimal SFCL installation scheme was found by following the algorithm shown in Fig. 7.
Flowchart of the algorithm for determining the optimal installation strategy of SFCLs. Parameters initialized during the third step: counter \(k=1\); number of installed SFCLs \(S_{k}=1\); maximal current reduction \(R_{m}=0\); current reduction margin of one additional SFCL \(=CRM\); number of measured points \(C_{m}\); optimal strategy \(OP=0\)
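In pseudocode form, the exhaustive evaluation of Fig. 7 can be sketched as below. The `evaluate` function is a placeholder for the full transient simulation of the grid, and the margin `crm` mirrors the current-reduction margin of one additional SFCL mentioned in the caption; both names and the default value are illustrative assumptions.

```python
from itertools import combinations

LOCATIONS = (1, 2, 3, 4, 5)   # candidate SFCL positions of Fig. 1

def evaluate(strategy):
    """Placeholder for the transient grid simulation: for a given tuple of SFCL
    locations it should return the accumulated first-peak current reduction (%)
    over all fault/measurement cases, and whether every case was actually limited."""
    raise NotImplementedError

def optimal_strategy(crm=20.0):
    """Exhaustive search over the 31 non-empty combinations of SFCL locations."""
    best = {n: ((), 0.0) for n in range(1, len(LOCATIONS) + 1)}
    for n in range(1, len(LOCATIONS) + 1):
        for strategy in combinations(LOCATIONS, n):
            reduction, all_limited = evaluate(strategy)
            if all_limited and reduction > best[n][1]:
                best[n] = (strategy, reduction)
    # add one more SFCL only while the extra accumulated reduction exceeds crm
    chosen = 1
    for n in range(2, len(LOCATIONS) + 1):
        if best[n][1] - best[chosen][1] > crm:
            chosen = n
    return best[chosen]
```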
Single SFCL installation
Figure 8a shows the reduction in the fault current under the three fault conditions illustrated in Fig. 1 when a single SFCL is installed at the referred locations (Locations 1 to 5). For the sake of comparison, the size of the superconductor, which has to be defined in the E–J–T power law model, was systematically adjusted so that it gave the same maximum resistance as the one used with the step resistance model. Thus, when the step resistance model was considered, the maximum reduction of the fault current was overestimated in comparison with the more realistic E–J–T model. For all five SFCL locations, the first peak of the fault current was always found to be lower in the first case. The reason for this difference is that, once the current exceeds the critical value of the SC, the SFCL described by the step resistance model jumps directly to the maximum resistance after the pre-defined response time, whilst in the E–J–T model the dynamic increase of the resistance depends not only on the passing current, but also on the temperature of the superconductor. Therefore, under the E–J–T model the SFCL cannot reach its maximum rated resistance before the first fault peak occurs, which leads to a relatively lower reduction of the fault current (~20%).
Reduction of the first peak of the fault current events at different locations, measured after the installation of a a single SFCL, b two SFCLs, c three SFCLs, and d up to four and five SFCLs installed at different locations
Based on both of the SFCL models tested, the simulations performed generally showed a negative impact on the reduction of the fault peak at certain integration points when the SFCL was installed at Location 1 or Location 2. In these cases the fault current was actually increased by the insertion of an SFCL. In more detail, when the SFCL was installed beside the wind farm (Location 2), the sudden increase in the fault current flowing through the integrating point under Fault 2 (at the domestic branch) was caused by the abrupt change of the impedance of the power system. This SFCL entered the normal state, reducing the current output of the wind farm due to its rapid rise in resistance, and hence the conventional power plant and the upstream power grid were forced to supply a higher current to the faulted branch. Similar behaviour was obtained under the fault conditions F1 and F2 when the SFCL was installed at Location 2 and the current was measured at the integrating point (see Figs. 1, 8a). Furthermore, when a single SFCL was installed at Location 1 (integrating point), following the E–J–T model the SFCL can only limit the fault current in two cases, whilst with the simplified step-resistance the benefits of the SFCL can be overrated, as it leads to a positive balance in up to four different fault conditions. This highlights the importance of finding a suitable optimal allocation strategy for the SFCLs under a wide number of fault conditions, and the need for considering adequate physical properties for the electro-thermal dynamics of the SC materials. It ultimately tries to fill the gap between acquired scientific knowledge and the demand for more reliable information from the standpoint of the power distribution companies. Thus, the final decision on an optimal location has to be made in the light of a twofold conclusion.
Firstly, the decision can be made according to the highest total reduction of the fault current passing through different points and under different fault circumstances, as shown in Fig. 8a. There, it can be observed that for the eight most important cases, combining the occurrence of a fault at certain positions and the measuring point for the current reduction, the SFCL installed at the port of the wind farm (Location 2) appears to be the best option, as in this case the fault current can be reduced in six of the eight different scenarios, with an accumulated reduction of 290% from the step resistance model and 220% from the E–J–T power law model, respectively. Nevertheless, this strategy also has an adverse impact on the two remaining scenarios (F1-IP & F2-IP). Secondly, a decision can be made in terms of the overall performance for achieving positive impacts under any of the prospective circumstances. In this sense, we have determined that placing the SFCL at Location 5, at the bus-tie between the industrial and domestic branches, is the most reliable option. An SFCL installed at the bus-tie is capable of reducing the harmonics and voltage dips, doubling the short-circuit power, and ensuring even loading of parallel transformers (Colmenar-Santos et al. 2016). Moreover, the recovery characteristics of the SFCL can also benefit from this arrangement, as after a quench of the SFCL the bus-tie can be switched open for a short time (a few seconds) to help the SFCL restore the superconducting state. However, a drawback of this switching strategy is that it may temporarily reduce the quality of the power supply, although a strong impact on the normal operation of the power system is not foreseen.
Multiple installation of SFCLs
Firstly, a double protection strategy, i.e., the installation of two SFCLs in different grid positions, was assessed. According to both the step resistance model and the E–J power law based model, the highest fault current reduction was always achieved when the SFCLs were installed at Location 2 (wind farm) and Location 3 (industrial branch) simultaneously, accomplishing a 400 and 330% total fault limitation, respectively (Fig. 8b). Indeed, this arrangement can be considered a much better strategy in comparison to the results obtained when just a single SFCL was considered, as the total current limitation is improved by ~110%. Furthermore, contrary to the previous case, the current flowing through the integrating point when the fault occurs at the domestic branch (Fault 2) significantly decreased, rather than having an adverse effect on the power system. Moreover, under this dual strategy the measured current reduction showed a balanced performance over all the different analysed cases, unlike the results obtained when a sole SFCL was installed. In addition, if system operators measure the optimal strategy for the installation of two SFCLs in terms of the number of limited cases, different conclusions can be obtained under the framework of different physical models, e.g., when the step-resistance or the E–J–T power law model is considered. According to the step resistance model, installing the two SFCLs at either Locations 1 & 2 or Locations 4 & 5 produced a positive response to all eight measured fault conditions. When the SFCLs were installed at Locations 1 & 2, a better performance was obtained, as the total reduction in the fault current (330%) was 40% greater than the performance obtained by SFCLs installed at Locations 4 and 5 (290%). However, when the E–J–T model was used, installing the SFCLs at Locations 1 & 2 increased the magnitude of the current at the integrating point under the occurrence of a fault in the domestic branch (Fault 2). This was due to the unsuccessful triggering of the SFCL at Location 1, as explained in the previous subsection. Therefore, from the point of view of the system operators, Locations 4 & 5 can be considered as the most reliable solution, as it is the only combination capable of limiting all fault conditions for all the considered scenarios.
Secondly, we added an additional SFCL to the grid to assess the overall performance of this new system. Figure 8c shows that most of the installation strategies for three SFCLs produced a reduction of the fault current in all eight measured scenarios. Both SFCL models agreed that the greatest reduction in the fault current was achieved when the SFCLs were installed simultaneously at Locations 2, 3 and 4. This strategy showed a 470% total reduction using the step resistance model and 375% using the E–J–T model, a significant increase in the overall performance of the system of about 70 and 45%, respectively, in comparison with the best performance achieved with the dual-SFCL strategy. Besides this large improvement, the three-SFCL strategy also responded positively to all fault conditions, which means that installing three SFCLs can be considered the most reliable strategy in terms of both the overall fault current reduction and the number of cases exhibiting fault current reduction. Moreover, it was found that, under all fault conditions, the fault current levels at all measured points can be reduced below the safety threshold, which was set at three times the normal current according to common practice. Until a significant reduction of the overall price of an SFCL is achieved, distribution network operators may not consider this strategy to be cost-effective in terms of the initial investment, but given the expected reduction in the price of second-generation high temperature superconducting wires, this decision can be seen as the most profitable strategy in terms of grid safety and reliability. However, a limit on the maximum number of SFCLs required must also be established in order to guarantee the maximum benefits at minimum cost.
Thus, in Fig. 8c we show the performance comparison among five different scenarios when four SFCLs were installed in the power system. With four SFCLs working together, all of the combinations effectively limited the fault current in all eight studied cases, except when the SFCLs were described using the E–J–T model and installed at Locations 1, 2, 3, and 5. Under this scheme the measured fault current increased when the fault was initiated at the domestic branch (Fault 2), due to the lack of action from the SFCL installed at Location 1. When the step resistance model was considered, the accumulated maximum reduction in the fault current was again overestimated, reaching a 480% reduction when the SFCLs were installed at Locations 1, 2, 3 and 4 or at Locations 2, 3, 4 and 5. In comparison, Locations 2, 3, 4 and 5 produced a prospective reduction of 395% when the more realistic E–J–T model was considered. The maximum accumulated reduction of the fault current achieved by any of the four-SFCL strategies was just over 10% more than that of the most effective three-SFCL strategy. This enables us to define an upper limit for the number of SFCLs needed.
In order to verify this statement, we also studied the effect of adding one more SFCL, as there are five prospective locations for SFCL installation in the power grid displayed in Fig. 1. Compared to the previous case (four SFCLs), the accumulated maximum reduction of the fault current was 15% greater when the SFCLs were simulated using the step resistance model, but, surprisingly, no further improvement was obtained when the more realistic E–J–T model was used. This result can be understood as a consequence of the mutual influence between the integrated SFCLs: when the fault current passing through one SFCL is substantially decreased by the influence of the others, the rate of heat accumulation reduces accordingly, slowing down the temperature rise and hence reducing the resistance that the SFCL can develop before the first peak of the fault.
Table 1 Optimal installation strategies for SFCLs according to the step-resistance and E–J power law models
Table 1 summarizes the optimal allocation strategies and the corresponding performances of the SFCLs modelled during our study. The preferable locations for the installation of the SFCLs have been determined in terms of the two identified standards: (1) the maximum accumulated fault current reduction, and (2) the maximum number of measuring conditions that could be limited. The results in the table are categorised by the number of SFCLs required by each strategy, and also the physical models used to emulate the characteristics of the SFCLs. In all the cases the step resistance model led to an overestimation of the actual performance figures achievable by the SFCLs when more realistic physical properties were considered. Finally, when the strategy is to maximize the benefits from installing only one or two SFCLs, a compromise must be made between increasing the fault current reduction, and maximizing the actual number of measuring conditions where the fault current can be limited. Therefore, based upon the comprehensive study presented in this paper, we conclude that the optimal installation strategy is the installation of a maximum of three SFCLs at Locations 2, 3, and 4, as this strategy produced the maximum reduction of the fault current for all fault conditions, and the addition of further SFCLs did not represent a significant enough improvement to justify the increased cost.
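The two selection criteria applied throughout this section, the maximum accumulated fault current reduction and the maximum number of limited measuring conditions, amount to a simple ranking over the candidate location sets. The Python sketch below illustrates that ranking only; the location sets and reduction figures are hypothetical placeholders rather than the values of Fig. 8 or Table 1, and each entry would in practice come from a dedicated transient simulation, since the limiting effects of several SFCLs do not simply add.

```python
# Hypothetical per-scenario fault-current reductions (%) obtained from separate
# simulations of each candidate SFCL location set; negative values indicate an
# increase of the current at that measuring point.
results = {
    (2, 3):    {"F1-IP": 30, "F1-WF": 65, "F2-IP": 25, "F2-DB": 70},
    (1, 2):    {"F1-IP": 20, "F1-WF": 60, "F2-IP": -5, "F2-DB": 55},
    (4, 5):    {"F1-IP": 25, "F1-WF": 15, "F2-IP": 35, "F2-DB": 40},
    (2, 3, 4): {"F1-IP": 40, "F1-WF": 70, "F2-IP": 45, "F2-DB": 75},
}

def accumulated_reduction(combo):
    """Criterion 1: total reduction summed over all measured scenarios."""
    return sum(results[combo].values())

def limited_cases(combo):
    """Criterion 2: number of scenarios in which the fault current is reduced."""
    return sum(1 for r in results[combo].values() if r > 0)

# Best combination by accumulated reduction.
best_by_reduction = max(results, key=accumulated_reduction)
# Most reliable combination: most scenarios limited, ties broken by total reduction.
best_by_reliability = max(results, key=lambda c: (limited_cases(c), accumulated_reduction(c)))

print("Highest accumulated reduction:", best_by_reduction)
print("Most scenarios limited:", best_by_reliability)
```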
The superconducting fault current limiter is a promising device that can limit the escalating fault levels caused by the expansion of power grids and the integration of renewable energy sources. This paper presents a comprehensive study on the performance and optimal allocation of resistive type SFCLs inside a power system based on UK network standards. In order to assess the impact of incorporating superconducting material properties on the performance of SFCLs, two different models were used throughout the study. First, the active operation of an SFCL was modelled using a Heaviside step function. Second, a more realistic model was used to simulate the operation of an SFCL, taking into consideration the proper E–J characteristics of the superconducting material and the dynamic temperature evolution. Independently of the model used, we have shown that SFCLs can effectively improve the damping characteristics of the generation system and can mitigate voltage dips in the grid. However, we have also shown that although computing time can be reduced when step-resistance models are used, such simplifications lead to strong overestimations of the actual prospective performance of the SFCL, in terms of the maximum reduction in the fault current and the correlated normal resistance. Thus, this comparison led us to the conclusion that adequate physical properties for the electro-thermal dynamics of the superconducting materials have to be considered in order to accurately predict the behaviour of SFCLs inside a power system.
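To make the difference between the two modelling approaches concrete, the sketch below contrasts a Heaviside step-resistance element with a simplified E–J power-law element whose critical current density decreases linearly with temperature. All parameter values (E_c, the exponent n, J_c at 77 K, the normal-state resistance, the trigger current, and so on) are illustrative assumptions, not the values used in this study.

```python
# --- Illustrative parameters (assumptions, not the values used in the paper) ---
E_c, n_exp = 1e-4, 15              # E-J power-law criterion (V/m) and exponent
length, area = 50.0, 1e-5          # superconductor length (m) and cross-section (m^2)
Jc_77, T0, Tc = 1.0e8, 77.0, 92.0  # Jc at 77 K (A/m^2), operating and critical temperatures (K)
R_normal = 4.0                     # normal-state resistance used as an upper bound (ohm)
R_step, I_trigger = 4.0, 1.5e3     # step-resistance (Heaviside) model parameters

def Jc(T):
    """Critical current density with a simple linear temperature dependence."""
    return Jc_77 * max(0.0, (Tc - T) / (Tc - T0))

def R_step_model(I):
    """Heaviside model: zero resistance below the trigger current, R_step above it."""
    return R_step if abs(I) > I_trigger else 0.0

def R_ejt_model(I, T):
    """E-J-T power law: R = E*L/|I| with E = E_c*(J/Jc(T))^n, capped at R_normal."""
    if Jc(T) <= 0.0:
        return R_normal                      # above Tc the element is fully normal
    J = abs(I) / area
    E = E_c * (J / Jc(T)) ** n_exp
    return min(E * length / max(abs(I), 1e-9), R_normal)

# Compare the two characteristics at 77 K and at 86 K (mimicking Joule heating).
for I in (500.0, 1.0e3, 1.5e3, 2.0e3, 3.0e3):
    print(f"I = {I:6.0f} A | step: {R_step_model(I):4.1f} ohm | "
          f"E-J-T @ 77 K: {R_ejt_model(I, 77.0):8.5f} ohm | "
          f"E-J-T @ 86 K: {R_ejt_model(I, 86.0):8.5f} ohm")
```

Evaluating the power-law element at two temperatures mimics the effect of Joule heating during the fault, and the comparison shows why a step model that switches instantly to its full resistance tends to overestimate the limitation achievable before the first current peak.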
A systematic study was then performed on the prospective strategies for the installation of one or more SFCLs. We have shown that installing more SFCLs does not necessarily mean a better overall performance. For our power system model, the simultaneous use of three SFCLs installed at Locations 2, 3, and 4 is the best protection strategy in terms of performance, economic efficiency and the reliability of the overall grid. In order to draw this conclusion, all the potential combinations of two, three, four, and five SFCLs were studied under a wide number of fault scenarios and measuring strategies.
SFCLs:
superconducting fault current limiters
DGs:
distributed generations
TR:
Alaraifi S, El Moursi M, Zeineldin H (2013) Optimal allocation of HTS-FCL for power system security and stability enhancement. IEEE Trans Power Syst 28(4):4701–4711
Angeli G, Bocchi M, Ascade M, Rossi V, Valzasina A, Ravetta C, Martini L (2016) Status of superconducting fault current limiter in Italy: final results from the in-field testing activity and design of the 9 kV/15.6 MVA device. IEEE Trans Appl Supercond 26(3):1–5
Blair SM, Booth CD, Singh NK, Burt GM, Bright CG (2011) Analysis of energy dissipation in resistive superconducting fault-current limiters for optimal power system performance. IEEE Trans Appl Supercond 21(4):3452–3457
Blair SM, Booth CD, Burt GM (2012) Current-time characteristics of resistive superconducting fault current limiters. IEEE Trans Appl Supercond 22(2):5600205–5600205
Bock J, Breuer F, Walter H, Noe M, Kreutz R, Kleimaier M, Weck K, Elschner S (2004) Development and successful testing of MCP BSCCO-2212 components for a 10 MVA resistive superconducting fault current limiter. Supercond Sci Technol 17(5):S122
Bock J, Breuer F, Walter H, Elschner S, Kleimaier M, Kreutz R, Noe M (2005) CURL 10: development and field-test of a 10 kV/10 MVA resistive current limiter based on bulk MCP-BSCCO 2212. IEEE Trans Appl Supercond 15(2):1955–1960
Bock J, Hobl A, Schramm J, Krämer S, Jänke C (2015) Resistive superconducting fault current limiters are becoming a mature technology. IEEE Trans Appl Supercond 25(3):1–4
Buhl D, Lang T, Gauckler L (1997) Critical current density of Bi-2212 thick films processed by partial melting. Supercond Sci Technol 10(1):32
Butler S (2001) UK Electricity Networks: the nature of UK electricity transmission and distribution networks in an intermittent renewable and embedded electricity generation future. http://www.parliament.uk/documents/post/e5.pdf. Accessed 11 Sep 2016
Chen M, Paul W, Lakner M, Donzel L, Hoidis M, Unternaehrer P, Weder R, Mendik M (2002) 6.4 MVA resistive fault current limiter based on Bi-2212 superconductor. Phys C 372:1657–1663
Colangelo D, Dutoit B (2013) MV power grids integration of a resistive fault current limiter based on HTS-CCs. IEEE Trans Appl Supercond 23(3):5600804–5600804
Colmenar-Santos A, Pecharromán-Lázaro J, de Palacio Rodríguez C, Collado-Fernández E (2016) Performance analysis of a superconducting fault current limiter in a power distribution substation. Electr Power Syst Res 136:89–99
Elschner S, Breuer F, Wolf A, Noe M, Cowey L, Bock J (2001) Characterization of BSCCO 2212 bulk material for resistive current limiters. IEEE Trans Appl Supercond 11(1):2507–2510
Feng Y, Tavner P, Long H (2010) Early experiences with UK Round 1 offshore wind farms. Proc Inst Civ Eng: Energy 163(4):167–181
Herrmann P, Cottevieille C, Leriche A, Elschner S (1996) Refrigeration load calculation of a HTSC current lead under AC conditions. IEEE Trans Magn 32(4):2574–2577
Hwang JS, Khan UA, Shin WJ, Seong JK, Lee JG, Kim YH, Lee BW (2013) Validity analysis on the positioning of superconducting fault current limiter in neighboring AC and DC microgrid. IEEE Trans Appl Supercond 23(3):5600204–5600204
Khan UA, Seong J, Lee S, Lim S, Lee B (2011) Feasibility analysis of the positioning of superconducting fault current limiters for the smart grid application using simulink and simpowersystem. IEEE Trans Appl Supercond 21(3):2165–2169
Kovalsky L, Yuan X, Tekletsadik K, Keri A, Bock J, Breuer F (2005) Applications of superconducting fault current limiters in electric power transmission systems. IEEE Trans Appl Supercond 15(2):2130–2133
Kozak S, Janowski T, Kondratowicz-Kucewicz B, Kozak J, Wojtasiewicz G (2005) Experimental and numerical analysis of energy losses in resistive SFCL. IEEE Trans Appl Supercond 15(2):2098–2101
Kundur P, Balu NJ, Lauby MG (1994) Power system stability and control, vol 7. McGraw-hill, New York
Langston J, Steurer M, Woodruff S, Baldwin T, Tang J (2005) A generic real-time computer simulation model for superconducting fault current limiters and its application in system protection studies. IEEE Trans Appl Supercond 15(2):2090–2093
Meerovich V, Sokolovsky V (2007) Thermal regimes of HTS cylinders operating in devices for fault current limitation. Supercond Sci Technol 20(5):457
Melhem Z (2011) High Temperature Superconductors (HTS) for Energy Applications. Elsevier, Amsterdam
Morandi A (2013) State of the art of superconducting fault current limiters and their application to the electric power system. Phys C 484:242–247
Noe M, Steurer M (2007) High-temperature superconductor fault current limiters: concepts, applications, and development status. Supercond Sci Technol 20(3):R15
Noe M, Juengst KP, Werfel F, Cowey L, Wolf A, Elschner S (2001) Investigation of high-Tc bulk material for its use in resistive superconducting fault current limiters. IEEE Trans Appl Supercond 11(1):1960–1963
Park WJ, Sung BC, Park JW (2010) The effect of SFCL on electric power grid with wind-turbine generation system. IEEE Trans Appl Supercond 20(3):1177–1181
Park WJ, Sung BC, Song KB, Park JW (2011) Parameter optimization of SFCL with wind-turbine generation system based on its protective coordination. IEEE Trans Appl Supercond 21(3):2153–2156
Paul W, Meier J (1993) Inductive measurements of voltage–current characteristics between 10^−12 and 10^−2 V/cm in rings of Bi2212 ceramics. Phys C 205(3–4):240–246
Paul W, Chen M, Lakner M, Rhyner J, Braun D, Lanz W (2001) Fault current limiter based on high temperature superconductors-different concepts, test results, simulations, applications. Phys C 354(1):27–33
Rettelbach T, Schmitz G (2003) 3D simulation of temperature, electric field and current density evolution in superconducting components. Supercond Sci Technol 16(5):645
Rhyner J (1993) Magnetic properties and AC-losses of superconductors with power law current–voltage characteristics. Phys C 212(3–4):292–300
Ruiz HS, Zhang X, Coombs T (2015) Resistive-type superconducting fault current limiters: concepts, materials, and numerical modeling. IEEE Trans Appl Supercond 25(3):1–5
Sung BC, Park DK, Park JW, Ko TK (2009) Study on a series resistive SFCL to improve power system transient stability: modeling, simulation, and experimental verification. IEEE Trans Ind Electron 56(7):2412–2419
Ye L, Juengst KP (2004) Modeling and simulation of high temperature resistive superconducting fault current limiters. IEEE Trans Appl Supercond 14(2):839–842
Zhang J, Dai S, Zhang Z, Zhang D, Zhao L, Shi F, Wu M, Xu X, Wang Z, Zhang F et al (2013) Development of a combined YBCO/Bi2223 coils for a model fault current limiter. IEEE Trans Appl Supercond 23(3):5601705–5601705
Zheng F, Deng C, Chen L, Li S, Liu Y, Liao Y (2015) Transient performance improvement of microgrid by a resistive superconducting fault current limiter. IEEE Trans Appl Supercond 25(3):1–5
XZ, HSR, and TAC have equally contributed to the writing of the manuscript, the design of the SFCL topology and the power grid, as well as to the formulation of the SFCL models for the study of transient states. The main coding tasks have been performed by XZ, while JG, BS, FL, and HZ have equally contributed to the data mining of the 93 different cases that were studied. All the authors have read and approved the final manuscript.
The authors would like to thank the editors and referees for giving useful suggestions for improving the work.
The datasets supporting the conclusions of this article are included within the article.
This work was supported by the Engineering and Physical Sciences Research Council (EPSRC), Project NMZF/064. The Ph.D. study of X. Zhang is funded by the China Scholarship Council (No. 201408060080).
Department of Engineering, University of Cambridge, 9 JJ Thomson Avenue, Cambridge, CB3 0FA, UK
Xiuchang Zhang, Jianzhao Geng, Boyang Shen, Lin Fu, Heng Zhang & Tim A. Coombs
Department of Engineering, University of Leicester, University Road, Leicester, LE1 7RH, UK
Harold S. Ruiz
Correspondence to Xiuchang Zhang.
Zhang, X., Ruiz, H.S., Geng, J. et al. Power flow analysis and optimal locations of resistive type superconducting fault current limiters. SpringerPlus 5, 1972 (2016). https://doi.org/10.1186/s40064-016-3649-4
Superconducting fault current limiter
Distributed power system
Short-circuit current
Optimal location
Journal of Innovation and Entrepreneurship
A Systems View Across Time and Space
Research | Open | Published: 17 March 2016
Factors influencing access to finance by SMEs in Mozambique: case of SMEs in Maputo central business district
Hezron Mogaka Osano &
Hilario Languitone
Journal of Innovation and Entrepreneurship volume 5, Article number: 13 (2016)
SMEs play an important role in the economic development of Mozambique, and access to finance is important for their growth. Thus, the purpose of the study was to establish the factors that influence access to finance by SMEs. The factors that were addressed included the structure of the financial sector, awareness of funding opportunities, collateral requirements, and small business support services. The target population was 2725, comprising 2075 staff of three banks, namely BIM Bank, BCI Bank, and Standard Bank, and 650 SMEs in Maputo Central Business District. The research focused on a sample of 242 SMEs and 324 staff of the named banks. A descriptive and inferential research design was used, and structured questionnaires were used to collect the primary data. The findings from the study were that there is a relationship between the structure of the financial sector and access to finance by SMEs; between awareness of funding and access to finance by SMEs; between collateral requirements and access to finance by SMEs; and between small business support and access to finance by SMEs. The study findings are significant since they would enable the government to come up with appropriate regulation, funding programs, and schemes to improve access to finance by SMEs. This study concludes that small business support services should be provided to SMEs to improve access to finance and that there is a need for more funding programs and financial schemes to assist SMEs. It is further concluded that, because awareness of funding opportunities depends on information, relevant information should be available and known to all players in the financial market.
The accessibility of finance by SMEs has attracted the attention of academics and policy makers worldwide for many decades. Discussion of the problem of access to finance by SMEs in Mozambique has taken place in the form of seminars and debates aimed at improving financing lines for SMEs and formally integrating their contributions into the economy (MIC, 2007). This is because finance is a significant element in determining the growth and survival of SMEs (ACCA, 2009). Access to finance allows small businesses to undertake productive investments and contribute to the development of the national economy and the alleviation of poverty in most Sub-Saharan African countries (Beck and Demirguc-Kunt, 2006). External finance for small and medium enterprises is essential for boosting start-up businesses. Without external finance, small and medium enterprises will probably not be able to compete in international markets, expand their businesses, or establish business linkages with large firms. Further, access to finance has been mentioned by existing SMEs and potential operators as the most serious barrier to business expansion and start-up (Olomi and Urassa, 2008).
In the Mozambican context, small enterprises are those with fewer than ten employees and medium enterprises those with between 11 and 50 employees. Large enterprises are those with more than 50 employees. It is observed that 98.6 % of Mozambican firms are SMEs. They provide employment, diversification, and a stimulus for innovation, mobilize social and economic resources, and provide a greater level of competition. In this regard, the government needs to employ suitable strategies in order to minimize the scarcity of bank financing for SMEs in the country and drive national economic development (MIC, 2007). This pattern is repeated in Brazil, where SMEs account for 99 % of formal companies (IBGE, 2007), and in the UK, where 99.8 % of jobs are in SMEs (FSB, 2012).
Only 5 % of SMEs are financed through banking institutions, meaning that they use other financing lines for both investment and working capital (MIC, 2007). In practice, many SMEs finance their projects through their own funds and funds from family and friends, owing to a number of difficulties in accessing bank financing (MIC, 2007). Non-banking institutions and non-governmental development banks also finance SMEs. The non-banking institutions include PODE (Development of Enterprises' Projects), which provides long-term financing over periods of 2.5 to 7 years (MIC, 2007). In addition, some other non-banking institutions focus on funding small agriculture and other sectors. Those institutions include GAPI (Society for Management in Financing and Promoting the SMEs), FFPI (Small Industries Funding Program), which provides funds for small industries, and FARE (Funds for Aiding and Rehabilitation of the Economy) (MPD, 2007). There are a number of challenges that prevent SMEs from conducting their businesses effectively and efficiently. The cost of finance products in the country is high (25–30 %). Even if the financial market is stabilized, banks face high overhead costs, and this influences the price of finance products (MIE, 2010). The financial institutions have highlighted several constraints encountered by SMEs which limit the provision of finance products to them. These constraints are associated with the lack of clear financial plans, poor accounting documentation, high interest rates, and the lack of collateral (MIC, 2007). Important policy challenges remain in the country on the lending side.
As in other countries, Mozambique focuses on both the demand and the supply side (Central Bank of Mozambique, 2013). On the demand side, commercial banks are encouraged to provide finance to small businesses through guarantees and to provide more financial assistance through an affordable cost of capital, micro-finance, and innovation funds. On the supply side, the focus is on diminishing the asymmetries of information between the two players (lenders and borrowers). The relevant information between lenders and borrowers should be provided to improve the situation (Central Bank of Mozambique, 2013). Yet small and medium enterprises still face a number of constraints in accessing bank financing, attributed to the lack of collateral, the structure of the financial sector, awareness of funding opportunities, and small business support services (Manasseh, 2004).
Indeed, financial constraints in Mozambique weigh down the SMEs and many of them collapse during the first year of start-up due to lack of financial resources to run the business. SMEs contribute over 20 % of the country's revenue (MPD, 2007). Therefore, an improvement of the financial systems is necessary as is the need to adopt suitable mechanisms in order to assist small businesses financially.
Therefore, this study investigated the factors influencing access to finance by SMEs in Mozambique in order to shed some light on how the problem of access to finance could be addressed and the number of SMEs collapsing reduced.
This section presents both the theoretical and empirical review of the related literature on the subject under study.
Theoretical review
Information asymmetry theory
Information asymmetry theory postulates that when two parties are making decisions or transactions, a situation may exist in which one party has more or better information than the other. Thus, information asymmetry may cause an imbalance of power between the parties.
In this context, for example, the borrowers are likely to have more information than the lenders, since information related to the risk associated with the investments is more readily available to the borrowers. Matthews and Thompson (2008) observed that this may lead to the problem of moral hazard, where a party takes risks because it does not bear the final cost of that risk, as well as to adverse selection, where adverse results occur because the parties have different or imperfect information; these problems may cause inefficiency in the flow or transfer of funds from the lenders (surplus) to the borrowers.
To overcome these issues, financial intermediaries use three major approaches: committing to long-term relationships with clients, sharing information, and delegating the monitoring of credit applicants. When customers borrow money directly from banks, the banks should consider the need for relevant information so as to redress the asymmetry of information (Matthews and Thompson, 2008).
It is argued that the acuteness of information asymmetries between bankers and entrepreneurs is the main stumbling block to SME financing in Sub-Saharan Africa. However, the gap between banks and SMEs can be narrowed by developing financial systems that are more adapted to local contexts. In addition, avenues should be explored for sharing of risks and reduction of perceived risks by banks by promoting sustainable guarantee funds to facilitate better access to financing by SMEs (Leffileur, 2009).
Empirical review
Many factors have been considered for the purpose of explaining the scarcity of bank financing for SMEs. Anzoategui and Rocha (2010) suggested that competition in the financial sector is crucial: the lack of it can raise the price of financial products and directly affect the growth of small and younger firms. They also added that a low level of competition in the financial sector can affect the stability of the banking industry.
In the UK context, it is believed that access to finance by SMEs is closely affected by differences among commercial banks, that is, by the practices and policies of the supply side of finance. It is argued that most commercial banks in the UK differ in terms of the relationship between the lending institution and the entrepreneur (BBA, 2002; Watanabe, 2005). The World Bank (2003) identified a number of factors that constrain SMEs' access to finance. These include distortions of the financial sector, lack of know-how on the part of banks, information asymmetry (access to business information), and the high risk involved in lending to small businesses.
Beck (2007) identified the weaknesses in the financial and legal systems of developing countries as an obstacle to accessing finance products. Analysing 70 developing countries, Beck concluded that local government bears the responsibility for building institutions. Market activities should be undertaken in a friendly manner in order to provide a proper regulatory framework that reduces the financial constraints on SMEs. Some studies focusing on developing countries (Bigsten, 2003; Yitayal, 2004) identified the lack of collateral, high risks, information asymmetries, small credit transactions (particularly of rural households), and the distance between lenders and borrowers as the main causes of credit variation among the different existing sources of credit. In addition, the same researchers state that the policy and the type of financial institution in one way or another determine access to finance.
It has been remarked that the interest rates charged by banks in Sub-Saharan Africa create disincentives for most borrowers to acquire funds to invest in their businesses on the one hand, and on the other hand discourage most small businesses from applying for bank financing (Diagne and Zeller, 2002; Foltz, 2004). Fatoki and Smit (2011), in South Africa, grouped the major factors that influence the low access to finance by SMEs into two categories: internal and external. The internal factors include business information, collateral, networking, and managerial competences. External factors constitute the legal environment, crime and corruption, ethical perceptions, and the macro-economy.
Olomi and Urassa (2008), in a study based in Tanzania, identified three major groups of constraints on access to finance by SMEs. The first group of factors included capacity (low levels of knowledge and skills), an under-developed business culture, the non-separation of the business from personal and family matters, the credit history of SMEs, and a lack of knowledge of available finance services. The second group of factors included the limited number of competent personnel and the lack of experience of SMEs. The third group of factors is related to the regulation of the environment where transactions occur between lenders and borrowers, the lack of identification systems, and credit reference bureaus.
From the study of Brownbridge (2002), it is noted that the loan term is an important element in lending. The loan term affects the revenue of the lending institutions (banks), the repayment schedule of credit applicants, the financial cost to customers, and also the sustainability of the use of the finance products. It is further stated that in most cases the loan period and size present obstacles to accessing bank financing, while the interest rate affects access to finance in a few cases. Several studies (Kaufmann and Wilhelm, 2006; World Bank, 2003; USAID, 2005; USAID, 2007) found that the major problems concerning access to finance for small businesses in Mozambique are basically related to interest rates charged on financial products that are higher than would be justified by economic reasons, and to inefficient banking services. Thus, interest rates in Mozambique are at higher levels compared to other Sub-Saharan African countries. Furthermore, interest rates vary according to the currency: loans made in the domestic currency, the Metical (MZN), carry higher interest rates than those made in US dollars (USAID, 2007).
The commercial banks of Mozambique impose excessively high interest rates and fees for various services, such as transfers of funds, account statements, banking guarantees, and letters of credit. Many SMEs do not attempt to acquire finance from commercial banks due to the high interest rates charged (Kaufmann and Wilhelm, 2006). Consequently, most SMEs use their own capital and funding from family and friends for both working capital and investment capital. The World Bank (2003) shows that 90 % of working capital and 64.9 % of new investments were financed by SMEs' own capital, compared to 6.9 % of working capital and 8.2 % of new investments financed by banking institutions, indicating low access to finance by SMEs in Mozambique. We consider a number of factors that affect access to finance by SMEs in Mozambique.
Collateral requirements
Collateral refers to the extent to which assets are committed by borrowers to a lender as security for debt payment (Gitman, 2003). The security assets should be used to recover the principal in case of default. SMEs in particular provide security in the form of property (houses, businesses, cars, and anything else that could recover the principal) in case of default on loans (Garrett, 2009). Security for loans must be capable of being sold under normal market conditions, at a fair market value and with reasonable promptness. However, in most banks, in order to finance SMEs and accept loan proposals, the collateral must be equal to 100 % or more of the amount of the credit extension or finance product (Mullei and Bokea, 2000).
Collateral requirements can reduce moral hazard issues by adding a potential cost for borrowers who do not make their best effort. Sometimes borrowers divert the funds provided by lenders to their own personal and private use. Therefore, collateral requirements, when in place, can reduce the negative consequences that can arise from improper utilization of the funds by SMEs. It is evident that most SMEs are denied financing and discriminated against by lenders because of high risk and because they do not have adequate resources to provide as collateral (Kihimbo et al. 2012).
Small business support services
Governments all over the world have designed a number of support services for SMEs which include the policy initiatives and support programs for the purpose of creating and developing the SME sector. Support programs are designed to assist SMEs in order to link them to the larger developmental vision of the nation with the main focus being poverty reduction and growth of small firms (Charbonneau and Menon, 2013).
A number of initiatives for SME support are in place in many countries, including Brazil, Argentina, Chile, Uruguay, and Mexico. In the European Union and other countries, such initiatives are covered by specific SME acts: in India by the Micro and SME Development Act, in Kenya by the Micro and Small Enterprises Act, in Malaysia by the SME Masterplan, in Tanzania by the SME Development Policy, and in the USA by the Small Business Act (Charbonneau and Menon, 2013). The government of Kenya has put in place the Micro and Small Enterprises Act as an initiative aimed at encouraging all Kenyans to establish SMEs by creating an enabling environment for small businesses to thrive and enhancing access to funding (Rambo, 2013).
For their sustenance, SMEs need to use ICT in order to become more competitive and to gain opportunities to participate in global value chains (Charbonneau and Menon, 2013). Small business support services are provided by national agencies, both private and public. Indeed, most SMEs are not aware of funding programs, and most SMEs face difficulty in accessing funds to invest in their projects.
It is pointed out that in South Africa there are a number of financial schemes and funding programs that support SMEs' access to finance. These schemes and funding programs are promoted by both private and public agencies. Despite the availability of those funding programs in South Africa, there was low awareness of them, especially of government support schemes (DTI, 2010). In Mozambique, a number of difficulties in supporting SMEs have been discussed. In resolving these difficulties, some initiatives have been developed for the purpose of dealing with the issue of access to finance by SMEs. One such initiative was the conference "Know and Use Financing SMEs" organized by IPEME, entrepreneurs, banks, and insurers to discuss the finance constraints faced by SMEs (MIC, 2007). Banking schemes and insurance institutions are the bridge for linking these funds to SMEs and for the implementation of funding. Therefore, there is a need for a mutual understanding of the obstacles and difficulties in accessing bank financing. It is pointed out that SMEs generate, in total, more jobs than larger companies and are fundamental to the competitiveness of the country as well as to stimulating innovation (IPEME, 2013).
The government plays a crucial role in leveraging small business owners and implementing additional funding mechanisms for SMEs by encouraging, promoting and supporting private initiatives (MIC, 2007). In addition, the Institute for the Promotion of Small and Medium Enterprises has several programs for creating and strengthening businesses, providing integrated assistance in management and business development (training and preparation of business plans). However, these programs seem not to be enough, as the rejection rate is high, particularly for bank-sponsored schemes financing SMEs (IPEME, 2013).
Structure of financial sector
Competition in the financial sector is important, particularly for the cost of services and products in the banking industry. Furthermore, the level of competition in the financial sector determines the price of financial products and the level of access to finance by small businesses (Thorsten and Maksimovic, 2003). Direct competition in the banking industry may affect the growth of new and younger firms. If there is low competition, this will undermine the overall stability of the banking industry; in addition, products and services may be expensive and there will be less growth of new firms (Anzoategui and Rocha, 2010).
The regulatory structure of the banking system has important implications for the relationship between market concentration and access to finance. It is important to note that under a highly regulated regime, entry barriers may increase. In most cases, the competitiveness of the banking system will not depend on the actual market structure but on the regulatory regime of the country (Black and Philip, 2002). There is no clear relation between regulatory restrictions, government interference in the process of intermediation, the competitiveness of the banking system, and SMEs' access to finance. However, regulatory restrictions may reduce the efficiency and competitiveness of the banking system and further block banks from using their information advantages (Scott and William, 2001).
The ownership structure of banks may influence the relation between access to finance, market power, and the costs of external financing. Local domestic banks are more likely to have better information and enforcement mechanisms than foreign-owned banks, and foreign banks may be willing to lend to opaque borrowers (Cetorelli and Michele, 2001).
Awareness of funding opportunities
The flow of information in the financial market is crucial for both SMEs and financial providers (Falkena et al. 2001). In order to identify potential suppliers of financial services, SMEs require enough information. The financial institutions require information to enable them to evaluate the potential risks associated with the SMEs that apply for bank financing and also to assess the location where the SMEs will be operating and their market segments (Othieno, 2010). Information is concerned with awareness of funding opportunities by SMEs. Information asymmetry means that relevant information is not available and known to all players in the financial market (Agostino, 2008). Information asymmetries concern the two players in the financial market: on the one hand, the borrowers know more about their business cases than the bankers; on the other hand, there is a lack of timely, accurate, and complete information regarding the ability of the applicants to repay the loan and to access financial products from the banking institutions (Bazibu, 2005).
A study by Agostino (2008), conducted on the agricultural sector, pointed out that the failure of the current African market is due to a number of current agricultural credit problems. These problems are associated with imperfect information in the presence of risk. The market failures mostly occur because it is costly to screen credit applicants. The imperfections of information affect almost all smallholder farmers, who are in most cases African women.
For SMEs, there are two types of external financing that are most important for financing their businesses. The first is equity financing, which is provided in the form of venture capital and is available to new small businesses (Deakins, 2008). However, due to the lack of equity financing, small businesses turn to debt financing, which is mostly provided by banks and non-banking institutions. Indeed, access to debt financing is very limited, especially for SMEs, due to the requirements for the provision of debt (Deakins, 2008).
Equity financing
Equity financing refers to the extent to which a company issues a certain portion of shares of its stock and in return receives money. Depending on how the SMEs raise the equity capital, the owners have to relinquish a certain portion of the business, often 25 to 75 % (Covas and Haan, 2006). Gomes et al. (2006) pointed out that equity financing is a method of raising funds for any investment. In this context, equities are issued in the form of common stocks, which give a claim to a share in net income after expenses and taxes. Equity holders are paid periodically in the form of dividends, and equity can be considered a long-term security as there is no maturity date. In South Africa, as in other countries in the world, there is a misperception that a broad supply of debt financing to SMEs will overcome most of the development problems that SMEs face. In practice, SMEs without a proven track record experience a shortage of debt financing (Mahembe, 2011).
The UK Cruickshank Commission highlighted that past government interventions meant to stimulate the provision of debt finance had been misdirected. Public policy intervention has to be reoriented away from debt financing in order to put in place and emphasize initiatives that help facilitate the provision of equity financing to small businesses. The national government should then support the expansion and establishment of venture capital funds. According to the Cruickshank Commission, there are a number of consequences of the market failure in the provision of SMEs' equity financing. These include insufficient risk capital being available to the SME sector, mainly to high-growth-potential SMEs, and an illiquid equity market for the small-firm sector (Falkena et al., 2001).
Debt financing refers to the case where companies obtain finance products in the form of a loan from lending institutions and promise to repay the loan within a given period of time and at a given interest rate (Cooper and Ejarque, 2003). Debt financing is the most common instrument used in the financial market for obtaining funds for investments and for financing new businesses, including SMEs. It involves an agreement between the lenders and the borrowers concerning the fixed interest rate to be paid for the loan over a given period of time. A debt with a maturity of less than 1 year is considered short-term debt, and one of more than a year long-term debt (Mahembe, 2011). Debt financing therefore includes secured loans, which involve collateral requirements for securing bank financing. When SMEs default on their loan commitments, banks usually rely on collateral to recover the money invested in a particular business (Falkena et al., 2001).
In the case of unsecured loans, the lender provides loans taking into account the borrower's reputation. For that transaction to take place, a strong relationship between the borrower and the bank is needed. Loans of this kind are usually short term and the rate of interest is often high (Cole, 2003). Most lenders are unlikely to provide unsecured loans to small businesses unless a lot of business has been done in the past between the borrower and the lender; otherwise the lender will still insist that the borrower provide collateral for the loan. The lenders' insistence on collateral depends on the borrower's present financial and economic conditions. Loans are subject to repayment periods which include the short term (less than 1 year, or between 6 and 18 months), the intermediate term (repayment within about 3 years), and the long term (paid back over 5 years) (Falkena et al., 2001).
Bank financing is important for the growth of small firms. Young businesses and other enterprises, including SMEs, depend mostly on bank financing to boost their business and to carry out new investments and projects. In addition, the funding of investment includes equity and other forms of debt. Various studies in the area of access to finance have pointed out the challenges that SMEs face, in Mozambique in particular and in the world in general, which include the lack of collateral, the cost of capital, business plans, and the limited number of lending institutions (USAID, 2007; Anzoategui and Rocha, 2010; Brownbridge, 2002; Fatoki and Smit, 2011; Olomi and Urassa, 2008; Beck, 2007).
Conceptual framework of the study
The conceptual framework of this study shows the focus on the factors influencing access to finance by small and medium enterprises. The variables in the conceptual framework are tested as hypotheses to establish the relationships between variables.
The independent variables of this study include the collateral requirements, structure of the financial sector, small business support services, and awareness of funding opportunities, and the dependent variable is access to finance by SMEs. The measures or indicators for access to finance include the amount of financing provided to SMEs as a share of total funding, the increase in the number of SMEs accessing bank loans, and the percentage of bank financing in SMEs' total funding. Figure 1 shows the conceptual framework and the relationships among the variables.
Descriptive and inferential research design was used. In this study, both simple random sampling and stratified sampling were used. Simple random sampling was used in order to select the SMEs from the total population. The stratified sampling was used to classify the respondents into categories that included the relevant management and the staff dealing with SMEs from BIM Bank, BCI Bank, and Standard Bank.
The target population of the study was 2725, comprising 2075 bank staff of BIM Bank, BCI Bank, and Standard Bank and 650 small and medium enterprises in Maputo Central Business District. The SMEs were those licensed and operating under the legal framework for doing business in Mozambique. The population of the banks included the staff of BIM Bank, BCI Bank, and Standard Bank, who were accessed at their respective banks' head offices (INE, 2012).
To determine the sample, the following formula provided by Easterby-Smith et al. (1999) was used by the researcher.
$$ SS = \frac{Z^2\, p\, q\, N}{E^2\,(N-1) + Z^2\, p\, q} $$
SS = required sample size;
Z = Z value at the 95 % confidence level (1.96);
p = the proportion of the target population estimated to have the characteristic being measured (50 %);
q = 100 − p = 50 %;
N = total population;
E = margin of error.
Assuming that 50 % of the population has the characteristic being measured, a sample size of 242 SMEs was computed from a population of 650 SMEs, and a sample of 324 employees was computed from a population of 2075 employees of the three banks, split as in Table 1.
Table 1 Sample size for banks
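As a quick check of the reported figures, the formula above can be evaluated directly. The snippet below is illustrative; it assumes a 5 % margin of error (E = 0.05), which is not stated explicitly in the text but reproduces the reported sample sizes of 242 and 324.

```python
def sample_size(N, Z=1.96, p=0.5, E=0.05):
    """Sample size for a finite population using the formula quoted above."""
    q = 1.0 - p
    return Z**2 * p * q * N / (E**2 * (N - 1) + Z**2 * p * q)

print(round(sample_size(650)))   # SMEs: 242
print(round(sample_size(2075)))  # bank staff: 324
```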
From the data collected, out of the 242 questionnaires administered to SME owners, 123 were filled in and returned, and out of the 324 questionnaires administered to employees working in the banks, 222 were filled in and returned. This represented response rates of 50.8 and 68.5 %, respectively, which corroborates Bailey's (2000) assertion that a response rate greater than 50 % is adequate; on this basis the response rates in this case are good. Data obtained from the research instrument were analyzed using the Statistical Package for the Social Sciences (SPSS) and arranged in a meaningful form in tables of frequencies, percentages, and charts.
This section presents the results and findings from the study. The first section deals with the background information of the respondents, while the other five sections present the findings of the analysis based on the objectives of the study, where both descriptive and inferential statistics have been employed.
Validity and reliability results
Validity results
Validity refers to the degree to which an instrument measures what it is supposed to measure (Joppe 2000; Mugenda, 2008). Face validity of the measuring instrument was established by pre-testing the questionnaires on four SMEs. The researcher assessed the clarity and ease of use of the research instruments, and any sensitive or biased items were identified and modified.
Content validity was determined by first discussing the items in the instrument with three experts, who indicated against each item in the questionnaire (on a rating scale of 1–4) whether or not it measured what it was meant to measure in relation to the research objectives. A content validity index of 0.802 was computed. Mugenda and Mugenda (2003) recommend a content validity index above 0.5, indicating that the validity of the instrument was acceptable.
Reliability results
Reliability of research instruments indicates the degree to which the research is free from bias and therefore ensures consistent measurement across time and across the several items within the instrument (Kothari, 2004). The study used the Cronbach's alpha coefficient to determine the internal consistency of the scale used to measure the reliability of the variables of the study. In this regard, a Cronbach's alpha of 0.6 is considered satisfactory and 0.7 to 0.8 good (Cooper and Schindler 2008; Mugenda and Mugenda, 2003; Sekaran and Bougie, 2013). The alpha coefficients were all greater than 0.7, indicating an acceptable reliability of the instruments. The instrument was therefore appropriate for the study (see Table 2).
Table 2 Reliability results
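For readers who wish to reproduce the internal-consistency check outside SPSS, Cronbach's alpha can be computed directly from the item scores. The sketch below uses fabricated placeholder responses, not the study's data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1.0)) * (1.0 - item_vars.sum() / total_var)

# Placeholder 5-point Likert responses for one scale (6 respondents x 4 items)
demo = [[4, 5, 4, 4],
        [3, 3, 4, 3],
        [5, 5, 5, 4],
        [2, 3, 2, 3],
        [4, 4, 5, 4],
        [3, 4, 3, 3]]
print(f"Cronbach's alpha = {cronbach_alpha(demo):.2f}")
```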
Pearson's correlation matrix
The study conducted a correlation analysis of the variables of the study, which included collateral requirements, awareness of funding opportunities, structure of financial sector, small business support services, and access to finance. To establish the relationships between the variables, the study used Karl Pearson's coefficient of correlation (see Table 3). It was found that there was a positive correlation between collateral requirements and small business support services (r = 0.331). It was also found that there was a positive correlation between collateral requirements and structure of financial sector (r = 0.564, sig. 0.1, two-tailed). However, none of the independent variables showed a significant correlation with access to finance.
Table 3 Pearson correlation coefficient matrix
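A correlation matrix of this kind can be reproduced from the questionnaire scores with standard tools. The sketch below uses pandas with randomly generated placeholder data standing in for the study's composite scores, so the coefficients it prints are illustrative only.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=0)
n = 123  # number of SME questionnaires returned

# Placeholder composite scores for each construct; the real scores would be
# averages of the Likert items measuring each variable.
df = pd.DataFrame({
    "collateral_requirements":    rng.normal(3.5, 0.8, n),
    "awareness_of_funding":       rng.normal(3.0, 0.9, n),
    "financial_sector_structure": rng.normal(3.2, 0.7, n),
    "business_support_services":  rng.normal(3.4, 0.8, n),
    "access_to_finance":          rng.normal(2.9, 1.0, n),
})

# Karl Pearson's product-moment correlation matrix
print(df.corr(method="pearson").round(3))
```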
Multiple regression analysis was used to establish the relationship between the variables of the study. In doing so, the following regression model was used: y = β0 + β1x1 + β2x2 + β3x3 + β4x4 + ε, where y = the dependent variable (access to finance); β0–β4 = model parameters or coefficients; x1–x4 = the independent variables, namely structure of financial sector, awareness of funding opportunities, collateral requirements, and small business support services; and ε = the error term.
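A model of this form can also be estimated outside SPSS. The sketch below fits an ordinary least squares regression with statsmodels on simulated placeholder scores, so the coefficients, R-squared, and F-statistic it reports are purely illustrative and are not the study's results.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(seed=1)
n = 123
# Placeholder composite scores; real values would come from the questionnaires.
df = pd.DataFrame({
    "financial_sector_structure": rng.normal(3.2, 0.7, n),
    "awareness_of_funding":       rng.normal(3.0, 0.9, n),
    "collateral_requirements":    rng.normal(3.5, 0.8, n),
    "business_support_services":  rng.normal(3.4, 0.8, n),
})
# Simulated dependent variable so that the fit has some structure to recover
df["access_to_finance"] = (0.4 * df["financial_sector_structure"]
                           + 0.3 * df["awareness_of_funding"]
                           + 0.2 * df["collateral_requirements"]
                           + 0.3 * df["business_support_services"]
                           + rng.normal(0.0, 0.5, n))

X = sm.add_constant(df.drop(columns="access_to_finance"))  # four predictors + intercept
y = df["access_to_finance"]
result = sm.OLS(y, X).fit()
print(result.summary())  # reports R-squared, adjusted R-squared, the F-test and betas
```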
Table 4 shows the model summary and indicates the adjusted R square used as a test of model fitness. The F-test was carried out to test the significance of the regression model in predicting the dependent variable (access to finance). From the results, it was found that the four independent variables moderately predict access to finance by SMEs (adjusted R squared = 0.703). This means the model explains 70.3 % of the variance in access to finance; the remaining 29.7 % of the variation is brought about by factors not captured in the objectives. Therefore, further research should be conducted to investigate the other factors (29.7 %) that affect access to finance by SMEs. The regression equation appears to be very useful for making predictions since the value of R2 is close to 1. Table 5 shows the ANOVA (F-test) results for the regression model.
Table 4 Coefficient of determination (R 2)
Table 5 ANOVA
The linear regression F-test results (F = 4.244; 446 df) are significant at p < 0.05. Therefore, the null hypothesis (Ho) was rejected, and it was concluded that the regression model linearly explains access to finance. The study therefore accepted the alternative hypotheses:
Ha1: There is a relationship between collateral requirement and access to finance;
Ha2: There is a relationship between small business support services and access to finance;
Ha3: There is a relationship between awareness of funding opportunities and the access to finance; and
Ha4: There is a relationship between structure of financial sector and the access to finance
The study conducted a multiple regression analysis to determine the regression coefficients (β), which show that all the independent variables (collateral requirements, awareness of funding opportunities, structure of financial sector, and small business support services) make a significant contribution to the dependent variable, access to finance (Table 6).
Table 6 Regression coefficients
This section discusses the findings from the study, and also it draws the conclusions based on the objectives of the study.
Discussion of findings
Influence of collateral requirements in access to finance
The study sought to establish the influence of collateral requirements on access to finance. Collateral refers to the assets committed by borrowers to a lender as security for debt payment (Gitman, 2003).
The study found that collateral requirements influence access to finance by SMEs in Mozambique. It is evident that most SMEs are denied financing and discriminated against by lenders because of high risk and because they do not have adequate resources to provide as collateral. The study also found that houses, land, and businesses are used as security and that banks demand that SMEs post collateral in order to reduce moral hazard. This finding is in line with the findings of Mullei and Bokea (2000) that banks ask for collateral in order to finance SMEs and accept loan proposals, and that the collateral must therefore be equal to 100 % or more of the amount of the credit extension or finance product.
Further, the study revealed that collateral creates a disincentive for SMEs to acquire bank financing and that SMEs are discriminated against by banks due to the high risks of lending to them. This finding concurs with Kihimbo et al. (2012), who found that most SMEs are denied financing and discriminated against by lenders.
Effect of small business support services in access to finance
The study found that small business support services influence access to finance by SMEs. Charbonneau and Menon (2013) suggest that, for their sustenance, SMEs need to use ICT, which can make them more competitive and provide opportunities to participate in global value chains. Small business support services are provided by national agencies, both private and public.
The respondents also agreed that small business support services could improve access to finance and that there is a sufficient number of funding programs and financial schemes to assist SMEs, but most SMEs are not able to make proper proposals for funding. This finding agrees with Rambo's (2013) observations that most SMEs are not aware of funding programs and face difficulties in accessing funds to invest in their projects.
Influence of the structure of financial sector in access to finance
The study showed that the regulatory regime in Mozambique influences access to finance. This finding disagrees with the findings of Scott and William (2001) that there is no clear relationship between regulatory restrictions, government interference in the process of intermediation, the competitiveness of the banking system, and SMEs' access to finance. This study therefore concludes that the regulatory regime should increase the efficiency and competitiveness of the banking system and make access to finance easier.
Effect of awareness of funding opportunities in access to finance
The study found that awareness of funding affects access to finance, and that there is information asymmetry: the financial institutions know very little about the SMEs. Information asymmetries concern the two players in the financial market; on the one hand, the borrowers know more about their business cases than the bankers, and on the other hand, there is a lack of timely, accurate, and complete information regarding the ability of the applicants to repay the loan and to access financial products from the banking institutions.
The study revealed that the banks require more information to evaluate the potential risks associated with SMEs. The flow of information in the financial market is crucial for both SMEs and financial providers (Falkena et al., 2001). In order to identify potential suppliers of financial services, SMEs require enough information. This study therefore infers that the availability of information is essential to both the banks and the SMEs, as it will enhance the understanding of the potential risks associated with the SMEs that apply for bank financing and also the assessment of the location where the SMEs will be operating and their market segments (Othieno, 2010).
The study makes a number of observations: SMEs need to use ICT to sustain their businesses and become more competitive, and small business support services could improve access to finance; there are not enough funding programs and financial schemes to assist SMEs; the majority of SMEs are not aware of the funding programs and financial schemes provided by the government and the private sector; and the public and private sectors have not put in place enough funding programs and financial schemes to assist SMEs. In addition, it was observed that houses, land, and businesses are provided as security and that banks demand collateral from SMEs in order to reduce moral hazard; collateral creates a disincentive for SMEs to acquire bank financing, and SMEs are discriminated against by banks owing to the high risks of lending to them. The banking system and regulatory structure impede access to finance by SMEs. It is further concluded that the banks require more information to evaluate the potential risks associated with SMEs in Mozambique. The study concludes that SMEs should be sensitized about the funding programs and financial schemes provided by the government and the private sector, and that the public and private sectors should put in place funding programs and financial schemes to assist SMEs. The present study was confined to SMEs in Mozambique. For further research, it would be useful to carry out a similar study across East Africa and beyond to see whether the same results would be replicated.
ACCA. (2009). Access to finance for small and medium enterprises sector: the evidence and the conclusion.
Agostino, M. (2008). Effects of screening and monitoring on credit rationing of SMEs. Economic Notes, 37(2), 155–179. Banca Monte dei Paschi di Siena SpA. Oxford: Blackwell Publishing Ltd.
Anzoategui, D., & Rocha, R. (2010). The competition of banks in the Middle East and Northern Africa region. Washington: Policy Research Working Paper 5363. Washington: World Bank.
Bailey, R. (2000). Research findings. New York: McGraw-Hill.
Bazibu, M. (2005). Information Asymmetry and Borrowers' Performance on Loans in Commercial Banks. Unpublished MBA research dissertation. Kampala: Makerere University.
BBA. (2002). Ethnic minority business in the UK: access to finance and business support. London: British Bankers Association Research Report.
Beck, T. (2007). Financing constraints of SMEs in developing countries: the evidence, determinants and solutions. Retrieved January 2010.
Beck, T., & Demirguc-Kunt, A. (2006). Small-medium enterprise sector: access to finance as a growth constraint. Journal of Finance and Banking, 30(11), 2931-2943.
Bigsten, A. (2003). Credit constraints in the manufacturing enterprises in Africa. Journal of African Economics, 12(1), 104–125.
Black, S. E., & Philip, S. (2002). Entrepreneurship and bank credit availability. Journal of Finance, 57, 2807–2833.
Brownbridge, M. (2002). Banking reforms in Africa: what has been learnt. The New.
Central Bank of Mozambique. (2013). Annual Report, the Bank of Mozambique.
Cetorelli, N., & Michele, G. (2001). Banking market structure, financial dependence and growth: international evidence from industry data. Journal of Finance, 56, 617–640.
Charbonneau, J., & Menon, H. (2013). A strategic approach to SME exports growth. The section of Enterprise Competitiveness- ITC. Taipei-Taiwan: Secretariat, Confederation of Asia-Pacific Chambers of Commerce and Industry.
Cole, R. (2003). How did the financial crisis affect small business lending in the US?
Cooper, R., & Ejarque, J. (2003). Financial friction and investment: requiem in Q review of economic dynamics.
Cooper, C. R., & Schindler, P. S. (2008). Business research methods (10th ed.). Boston: McGraw-Hill.
Covas, F., & Den Haan, W. J. (2006). The cyclical behaviour of debt and equity: evidence from a panel of Canadian firms.
Deakins, D. (2008). SMEs' access to finance: is there still a debt finance gap. Belfast: The Institute of Small Business and the Entrepreneurship.
Diagne, A., & Zeller, M. (2002). The determinant of household access and participation in formal and informal credit market. The Institute of International Food Policy Research, 7(2), 23–31.
DTI. (2010). National Directory of Small Business Support Programs. Pretoria: The DTI.
Easterby-Smith, M., Thorpe, R., & Jackson, P. R. (1999). The management research: an introduction. London: the Sage Publication Limited.
Falkena, H., Abedian, I., von Blottnitz, M., Coovadia, C., Davel, G., Madungandaba, J., Masilela, E., & Rees, S. (2001). SMEs access to finance in South Africa - a supply side regulatory review. Pretoria, South Africa: Policy Board for Financial Services and Regulation.
Fatoki, O. O., & Smit, A. V. (2011). Constraints to credit access by new SMEs in South Africa: a supply side analysis. The African Journal of Business and Management, 5(4), 1413–1425.
Foltz, J. D. (2004). Credit market access and profitability in Tunisian agriculture. Journal of Agricultural Economics, 130, 229–240.
FSB. (2012). The UK leading business organization. London: Federation of Small Business.
Garrett, J. F. (2009). Bank and their customers. New York: Dobbs Ferry: Oceana Publications.
Gitman, L. J. (2003). The principles of managerial finance (7th ed.). New York: Pearson Education Inc.
Gomes, J. F., Yaron, A., & Zhang, L. (2006). Asset pricing implications of firms financing constraints. Review of the financial studies.
IBGE. (2007). The research of innovation technology—PINTEC 2005. Rio de Janeiro: The Brazilian institute of geography and statistic.
INE. (2012). The profile of SMEs in Mozambique. Maputo: National Statistics.
IPEME. (2013). Financing SMEs in Mozambique (p. 1,3). Maputo: Know and Use financing SMEs.
Joppe, M. (2000). The research process; Retrieved 25 February 1998, from www.ryerson.ca
Kaufmann, F., & Wilhelm, P. (2006). The dilemma of small business in Mozambique: a research note, in the developmental entrepreneurship: adversity, risk, and isolation. Amsterdam: Galbraith and Curt Stiles.
Kihimbo, B. W., Ayako, B. A., & Omoka, K. W. (2012). Collateral requirements for financing of small and medium enterprises (SMEs) in Kakamega municipality in Kenya. International Journal of Current Research, 4(6), 21–26.
Kothari, C. R. (2004). Research methodology: methods and techniques (2nd ed.). New Delhi: New Age International Publisher.
Leffileur, J. (2009). Financing SMEs in the context of strong information asymmetry. Issue 1, Financing SMEs in Sub Saharan Africa. Private Sector Development. Paris: Proparco.
Mahembe, E. (2011). Literature review on small and medium enterprises' access to finance. Pretoria: National Credit Regulator.
Manasseh, P. N. (2004). A text book of business finance (3rd ed.). Nairobi: McMore Accounting Books.
Matthews, K., & Thompson, J. (2008). The economics of banking. Chichester: Wiley.
MIC. (2007). Small and medium enterprises in Mozambique situation, perspectives and challenges. Maputo, Mozambique: the Ministry of Trade and Industry.
MIE. (2010). The strategic program of industry development. Maputo, Mozambique: the Ministry of Energy.
MPD. (2007). The development of enterprises in Mozambique: results based on questionnaire in manufacturer sector 2002–2006. Maputo, Mozambique.
Mugenda, A. G. (2008). Social science research: theory and principles. Nairobi: Applied Research and Training Services.
Mugenda, M. O., & Mugenda, A. G. (2003). Research methods: quantitative and qualitative approaches. Nairobi: Acts Press.
Mullei, A., & Bokea, A. (2000). Micro and small enterprises in Kenya: agenda for improving the policy environment. Nairobi: I.C.E.G.
Olomi, D., & Urassa, G. (2008). The constraints to access the capital by SMEs of Tanzania. Dar es Salaam: REPOA.
Othieno, E. A. (2010). Bank lending, information on asymmetry, credit. Kampala: Makerere University.
Rambo, C. M. (2013). Time required to break-even for small and medium enterprises: the Evidence from Kenya. The International Journal of Marketing Research and Management, 6(1), 81-94.
Scott, J., & William, D. (2001). Competition and credit market outcomes: small firm perspectives. Philadelphia, PA: The Temple University mimeo.
Sekaran, U., & Bougie, R. (2013). Research methods for business: a skill-building approach (6th ed.). New York: John Wiley & Sons.
Thorsten, B., & Maksimovic, V. (2003). The competition of the bank and access to finance: international evidence.
USAID. (2005). Financial services to support international trade in Mozambique. Maputo: Nathan associates Inc.
USAID. (2007). Constraints in financial sector for development of private sector in Mozambique. Maputo: Nathan associates Inc.
Watanabe, W. (2005). How are loans by their main bank priced? The bank effects, the information and non price terms of a contract. RIETI Discussion Paper Series 05-E-028.
World Bank. (2003). Pilot investment assessment: Mozambique industrial performance and investment climate. Maputo: CTA/CPI/RPED/APSG/WB.
Yitayal, A. M. (2004). Determinants of smallholder farmers access to formal credit: the case of Metema Woreda North Gondar in Ethiopia. Addis Ababa: The World Bank Report.
Africa Nazarene University, P.O. Box 53067 - 00200, Nairobi, Kenya
Hezron Mogaka Osano
Gorongosa Turismo Transportes e Serviços, Lda., Avenida Eduardo Mondlane, no. 1139, Beira City, Mozambique
Hilario Languitone
Correspondence to Hezron Mogaka Osano.
The corresponding author wrote the article, and the co-author contributed to the collection, analysis, and interpretation of the results. Both authors read and approved the final manuscript.
\begin{document}
\begin{frontmatter} \title{One-dimensional completed scattering and quantum nonlocality of entangled states} \author{N L Chuprikov}
\address{Tomsk State Pedagogical University, 634041, Tomsk, Russia}
\begin{abstract} Entanglement is usually associated with compound systems. We first show that a one-dimensional (1D) completed scattering of a particle on a static potential barrier represents an entanglement of two alternative one-particle sub-processes, transmission and reflection, which are macroscopically distinct at the final stage of scattering. The wave function for the whole ensemble of scattering particles can be uniquely presented as the sum of two isometrically evolving wave packets that describe the (to-be-)transmitted and (to-be-)reflected subensembles of particles at all stages of scattering. A noninvasive Larmor-clock timing procedure adapted to either subensemble shows that it is the dwell time that gives the time spent, on the average, by a particle in the barrier region, and it denies the Hartman effect. As regards the group time, it cannot be measured and hence cannot be accepted as a measure of the tunneling time. We argue that the nonlocality of entangled states appears in quantum mechanics due to the inconsistency of its superposition principle with the corpuscular properties of a particle. For example, this principle associates a 1D completed scattering with a single (one-way) process, whereas a particle, as an indivisible object, cannot take part in transmission and reflection simultaneously. \end{abstract}
\begin{keyword} transmission\sep reflection\sep Larmor\sep nonlocality\sep entanglement
\PACS 03.65.Ca, 03.65.Xp \end{keyword} \end{frontmatter}
\newcommand{A_{in}}{A_{in}} \newcommand{B_{in}}{B_{in}} \newcommand{A_{out}}{A_{out}} \newcommand{B_{out}}{B_{out}} \newcommand{a_{in}}{a_{in}} \newcommand{b_{in}}{b_{in}} \newcommand{a_{out}}{a_{out}} \newcommand{b_{out}}{b_{out}} \newcommand{a_{in}}{a_{in}} \newcommand{b_{in}}{b_{in}} \newcommand{a_{out}}{a_{out}} \newcommand{b_{out}}{b_{out}}
\section{Introduction} \label{aI}
For a long time, the scattering of a particle on one-dimensional (1D) static potential barriers has been considered in quantum mechanics as a representative of well-understood phenomena. However, solving the so-called tunneling time problem (TTP) for a 1D completed scattering (see reviews \cite{Ha2,La1,Olk,Ste,Mu0,Nu0,Ol3} and references therein) showed that this is not the case.
At present there is a variety of approaches to introducing characteristic times for the process. They are the group (Wigner) tunneling times (better known as the ``phase'' tunneling times) \cite{Ha2,Wig,Har,Ha1,Ter}, different variants of the dwell time \cite{Ha1,Smi,Ja1,Ja2,But,Le1,Nus,Go1,Mue,Bra}, the Larmor time \cite{But,Baz,Ryb,Aer,Bu1,Lia,Zhi}, and the concept of the time of arrival, which is based on introducing either a suitable time operator (see, e.g., \cite{Aha,Mu4,Hah,Noh}) or a positive operator valued measure \cite{Mu0,Mu9} (see also \cite{Le3,Le5,Mu5,Le6}). A particular class of approaches to studying the temporal aspects of a 1D scattering includes the Bohmian \cite{Le3,Le2,Gru,Bo1,Kr1}, Feynman and Wigner ones (see \cite{Sok,Yam,Ymm,Ya1,Kre} as well as \cite{La1,Mu0} and references therein). One should also point out the papers \cite{Ga1,Ga2,Ga3}, which study the characteristic times of ``the forerunner preceding the main tunneling signal of the wave created by a source with a sharp onset''.
The source of a long-lived controversy in solving the TTP, which still persists, is usually associated with the absence of a Hermitian time operator. However, our analysis shows that this problem is closely connected to the mystery of quantum nonlocality of entangled states \cite{Ens,Bel}. As is known, the main peculiarity of such states is the availability of nonzero correlations between two events separated with space-like intervals.
The main intrigue is that, though this prediction of quantum theory contradicts special relativity, it has by now been reliably established (theoretically and experimentally \cite{Asp}) that nonlocality is indeed an inherent property of existing quantum mechanics (a deep analysis of this question is given in \cite{No1,Wis}).
It is now widely accepted that nonlocal correlations of entangled states do not violate special relativity, for they are not associated with a superluminal transmission of signals (see, e.g., \cite{Gis,Shi}). However, with regards to this 'no-signalling' interpretation, Bell pointed out that "... we have lost the idea that correlations can be explained, or at least this idea awaits reformulation. More importantly, the ‘no signaling’ notion rests on concepts which are desperately vague, or vaguely applicable...." (quoted from \cite{No1}).
We agree entirely with this doubt: if nonzero correlations between two events are not a consequence of a causal relationship between them, then the very notion of `correlations' becomes physically meaningless. The main challenge to quantum mechanics is precisely that its principles imply such strange correlations. It is therefore worthwhile to reveal the imperfection in the foundations of quantum theory that creates such a paradoxical situation.
In this paper, the origin of quantum nonlocality is analyzed in the case of a 1D completed scattering. Studying this particular problem suggests a way to reconcile quantum mechanics with special relativity. We show (Section \ref{a0}) that existing quantum mechanics does not allow any consistent model of this process: its superposition principle, applied to entangled states, contradicts the corpuscular properties of particles. A new, consistent model of a 1D completed scattering, free of nonlocality, is presented in Sections \ref{a2} and \ref{a3}.
\section{Towards a local model of a 1D completed scattering.} \label{a0} \subsection{On the inconsistency of the existing model of a 1D completed scattering.} \label{a01}
It is evident that a proper theoretical description of any physical phenomenon must obey the following three requirements which are connected with each other: {\it (i) it must explain the phenomenon; (ii) it must be consistent; (iii) it can be verified experimentally}. However, the existing quantum-mechanical model of a 1D completed scattering does not obey these requirements.
{\bf Firstly}, {\it existing quantum mechanics endows a 1D completed scattering with quantum nonlocality whose reality is questionable}.
Some manifestations of nonlocality arising in the existing approaches have been pointed out and analyzed by Leavens and co-workers (see \cite{Aer,Le5,Le6,Le2}). For example, the Bohmian model of a 1D completed scattering predicts that the fate of the incident particle (to be transmitted or to be reflected by the barrier) depends on the coordinate of its starting point (see \cite{Le2}). In this case, the location of the critical spatial point separating the starting regions of to-be-transmitted and to-be-reflected particles depends on the shape of the potential barrier, even though the barrier is located at a considerable distance from the particle's source.
Further, the time-of-arrival concept \cite{Mu9} predicts a nonzero probability for a particle to arrive at spatial regions where the probability density is {\it a priori} zero (see \cite{Le5,Le6}). The Larmor-time concept predicts the precession of the average spin of reflected particles under a magnetic field localized beyond the barrier, on the transmission side, where reflected particles are {\it a priori} absent (see \cite{Aer}).
However, perhaps the most known manifestation of quantum nonlocality, predicted by the existing model of a 1D completed scattering, is the so-called Hartman effect (and its versions) which is associated with the anomalously short (or even negative) times of tunneling a particle through the barrier region (see, e.g., \cite{Har,Mu6,Wi1,Ol1,So1,Zh1,So5,Nim,Mar,Wi2,Ran}).
The existing explanations of this effect (see, e.g., \cite{Wi2,Nim}) are made, in fact, in the spirit of the `no-signalling' theories. They suggest that anomalously short dwell and group times do not mean a {\it superluminal} transmission of a particle through the barrier region. In fact, this means that the notions of the dwell and group times, as characteristics of the particle's dynamics in the barrier region, lose their initial physical sense.
So, in the existing form, conventional quantum mechanics endows a 1D completed scattering with quantum nonlocality. Our next step is to show that this prediction results from inconsistency of the quantum-mechanical principles.
{\bf Secondly}, {\it within the existing framework of quantum mechanics, any procedure of timing the motion of a scattering particle (both with and without distinguishing transmission and reflection) is a priori inconsistent}.
On the one hand, the very nature of a particle, as an indivisible object, implies that it cannot be simultaneously transmitted and reflected by the potential barrier. Thus, a 1D scattering should be considered as a combined process consisting of two alternative sub-processes, transmission and reflection, macroscopically distinct at the final stage of scattering. Accordingly, there should be two experimenters for studying the subensembles of transmitted and reflected particles.
In this problem, introducing characteristic times and other observables, common for these two subensembles, has no physical sense. Such quantities simply cannot be measured, since they describe neither transmitted nor reflected particles. Their introduction necessitates quantum nonlocality, and they cannot be properly interpreted (about the interpretation problem for the dwell time, see in \cite{Ha1,Wi2}). For example, the average value of the particle's position (or momentum), calculated for the whole ensemble of scattering particles, does not give the expectation (i.e., most probable) value of this quantity.
On the other hand, the superposition principle, as it stands, demands of treating a 1D completed scattering as a single one-particle process, even at its final stage. By this principle, the set of one-particle's observables should be introduced namely for the whole ensemble of scattering particles, i.e., without distinguishing transmission and reflection.
One has to stress that the existing model of a 1D completed scattering denies, and not only on the conceptual level, the introduction of individual characteristic times and observables for transmission and reflection. This model does not provide any description of these sub-processes at all stages of scattering. All the existing approaches that nevertheless introduce the transmission (or reflection) time deal, in fact, with subensembles in which the number of particles is not conserved.
{\bf Thirdly}, {\it existing quantum mechanics does not allow a consistent procedure of measuring the time spent by a particle in the barrier region.}
This equally concerns experiments on photonic tunneling, which are at present more reliable than those on electronic tunneling. As is known (see, e.g., \cite{Wi2}), such experiments involve two steps. At the first step, a light pulse is sent through a barrier-free region. The arrival time of the peak of this pulse at a detector serves as a reference time. At the second step, the investigated potential barrier is inserted in the path of the pulse. The arrival time of the transmitted peak at the detector is measured and then compared with the reference time. The difference of these two arrival times is considered as the searched-for group delay time.
The main difficulty of measuring this {\it asymptotic} characteristic time is usually associated with the reshaping of the incident light pulse (or wave packet) in the barrier region. At the same time, there is one more problem that has remained obscure. It relates to the fact that the above procedure is based on the implicit assumption that the transmitted and freely evolved peaks start from the same spatial point.
However, as follows from our model of a 1D completed scattering, this is not the case even for resonant tunneling. Thus this procedure gives the time delay neither for the transmitted nor for the reflected part of the incident wave packet. We are sure that the same is valid for photonic tunneling. Moreover, as will be seen from our analysis, there is a reason why the group time is a physical quantity of secondary importance.
\subsection{How to reconcile a quantum model of a 1D scattering with special relativity?} \label{a02}
So, as follows from the above analysis, a principal shortcoming of the existing quantum model of a 1D completed scattering is that it endows a particle with properties that contradict its corpuscular nature.
In this paper we present a new model of this process, which is based on two main ideas: (i) the state of a particle taking part in a 1D completed scattering is an entangled (combined) state; (ii) quantum mechanics must distinguish, on the conceptual level, entangled and unentangled (elementary) states.
We first show that in the problem under consideration, for a given potential and initial state of a particle, the wave function to describe the whole ensemble of particles can be uniquely presented as a sum of two isometrically evolved wave packets which describe alternative sub-processes, transmission and reflection, at all stages of scattering.
Note, at present all quantum-mechanical rules are equally applied to macroscopically distinct states and their superpositions. However, the main lesson of solving the TTP is just that this rule is erroneous. A single system (however, macroscopic or microscopic) cannot take part simultaneously in two or more macroscopically distinct sub-processes. This means that the averaging rule (Born's formula) is not applicable to entangled states.
The phenomenon of quantum nonlocality results from ignoring this restriction. In other words, it appears when one attempts to associate, contrary to the nature of entangled states, the interference pattern formed by a superposition of macroscopically distinct sub-processes with a single causally evolved process.
\newcommand{\kappa_0^2}{\kappa_0^2} \newcommand{\kappa_j^2}{\kappa_j^2} \newcommand{\kappa_j d_j}{\kappa_j d_j} \newcommand{\kappa_0\kappa_j}{\kappa_0\kappa_j}
\newcommand{R_{j+1}}{R_{j+1}} \newcommand{R_{(1,j)}}{R_{(1,j)}} \newcommand{R_{(1,j+1)}}{R_{(1,j+1)}}
\newcommand{T_{j+1}}{T_{j+1}} \newcommand{T_{(1,j)}}{T_{(1,j)}} \newcommand{T_{(1,j+1)}}{T_{(1,j+1)}}
\newcommand{w_{j+1}}{w_{j+1}} \newcommand{w_{(1,j)}}{w_{(1,j)}} \newcommand{w_{(1,j+1)}}{w_{(1,j+1)}}
\newcommand{u^{(+)}_{(1,j)}}{u^{(+)}_{(1,j)}} \newcommand{u^{(-)}_{(1,j)}}{u^{(-)}_{(1,j)}}
\newcommand{t_{j+1}}{t_{j+1}} \newcommand{t_{(1,j)}}{t_{(1,j)}} \newcommand{t_{(1,j+1)}}{t_{(1,j+1)}}
\newcommand{\vartheta_{(1,j)}}{\vartheta_{(1,j)}}
\newcommand{\tau_{j+1}}{\tau_{j+1}} \newcommand{\tau_{(1,j)}}{\tau_{(1,j)}} \newcommand{\tau_{(1,j+1)}}{\tau_{(1,j+1)}}
\newcommand{\chi_{(1,j)}}{\chi_{(1,j)}}
\newcommand {\aro}{(k)} \newcommand {\da}{\partial} \newcommand{\mbox{\hspace{5mm}}}{\mbox{\hspace{5mm}}} \newcommand{\mbox{\hspace{3mm}}}{\mbox{\hspace{3mm}}} \newcommand{\mbox{\hspace{1mm}}}{\mbox{\hspace{1mm}}}
\section{Wave functions for transmission and reflection}\label{a2} \subsection{Setting the problem} \label{a1}
Let us consider a particle incident from the left on the static potential barrier
$V(x)$ confined to the finite spatial interval $[a,b]$ $(a>0)$; $d=b-a$ is the barrier width. Let its in-state, $\psi_{in}(x),$ at $t=0$ be a normalized function belonging to the set $S_{\infty}$ of infinitely differentiable functions vanishing exponentially in the limit $|x|\to \infty$. The Fourier transforms of such functions are known to belong to the set $S_{\infty},$ too. In this case the position, $\hat{x},$ and momentum, $\hat{p},$ operators are both well defined. Without loss of generality we will suppose that \begin{eqnarray} \label{444}
<\psi_{in}|\hat{x}|\psi_{in}>=0,\mbox{\hspace{3mm}} <\psi_{in}|\hat{p}|\psi_{in}> =\hbar k_0 >
0,\mbox{\hspace{3mm}} <\psi_{in}|\hat{x}^2|\psi_{in}> =l_0^2; \end{eqnarray} here $l_0$ is the wave-packet's half-width at $t=0$ ($l_0<<a$).
We consider a completed scattering. This means that the average velocity, $\hbar k_0/m,$ is large enough that the transmitted and reflected wave packets do not overlap each other at late times. Otherwise, the ratio of the average energy of a particle to the barrier height may take any value.
We begin our analysis with the derivation of expressions for the incident, transmitted and reflected wave packets to describe, in the problem at hand, the whole ensemble of particles. For this purpose we will use the variant (see \cite{Ch1}) of the well-known transfer matrix method \cite{Mez}. Let the wave function $\psi_{full}(x,k)$ to describe the stationary state of a particle in the out-of-barrier regions be written in the form \begin{eqnarray} \label{1} \psi_{full}(x;k)=e^{ikx}+b_{out}(k)e^{ik(2a-x)}, \mbox{\hspace{3mm}} for \mbox{\hspace{3mm}} x\le a; \end{eqnarray} \begin{eqnarray} \label{2} \psi_{full}(x;k)=a_{out}(k)e^{ik(x-d)}, \mbox{\hspace{3mm}} for \mbox{\hspace{3mm}} x>b. \end{eqnarray} \newcommand{\mbox{\hspace{10mm}}}{\mbox{\hspace{10mm}}} The coefficients entering this solution are connected by the transfer matrix ${\bf Y}$: \begin{eqnarray} \label{50} \left(\begin{array}{c} 1 \\ b_{out}e^{2ika} \end{array} \right)={\bf Y} \left(\begin{array}{c} a_{out}e^{-ikd} \\ 0 \end{array} \right), \mbox{\hspace{5mm}} {\bf Y}=\left(\begin{array}{cc} q & p \\ p^* & q^* \end{array} \right); \end{eqnarray} \begin{eqnarray} \label{500} q=\frac{1}{\sqrt{T(k)}}\exp\left[i(kd-J(k))\right],\mbox{\hspace{1mm}} p=\sqrt{\frac{R(k)}{T(k)}}\exp\left[i\left(\frac{\pi}{2}+ F(k)-ks\right)\right] \end{eqnarray} \noindent where $T$, $J$ and $F$ are the real tunneling parameters: $T(k)$ (the transmission coefficient) and $J(k)$ (phase) are even and odd functions of $k$, respectively; $F(-k)=\pi-F(k)$; $R(k)=1-T(k)$; $s=a+b$. We will suppose that the tunneling parameters have already been calculated.
In the case of many-barrier structures, for this purpose one may use the recurrence relations obtained in \cite{Ch1} just for these real parameters. For the rectangular barrier of height $V_0$, \begin{eqnarray} \label{501} T=\left[1+\vartheta^2_{(+)}\sinh^2(\kappa d)\right]^{-1},\mbox{\hspace{3mm}} J=\arctan\left(\vartheta_{(-)}\tanh(\kappa d)\right),\\F=0,\mbox{\hspace{3mm}} \kappa=\sqrt{2m(V_0-E)}/\hbar,\nonumber \end{eqnarray} if $E<V_0$; and \begin{eqnarray} \label{502} T=\left[1+\vartheta^2_{(-)}\sin^2(\kappa d)\right]^{-1},\mbox{\hspace{3mm}} J=\arctan\left(\vartheta_{(+)}\tan(\kappa d)\right),\\ F=\left\{\begin{array}{c} 0,\mbox{\hspace{3mm}} if \mbox{\hspace{3mm}} \vartheta_{(-)}\sin(\kappa d)\geq 0 \\ \pi,\mbox{\hspace{3mm}} otherwise, \end{array} \right.\mbox{\hspace{3mm}} \kappa=\sqrt{2m(E-V_0)}/\hbar,\nonumber \end{eqnarray} if $E\geq V_0$; in both cases $\vartheta_{(\pm)}=\frac{1}{2}\left(\frac{k}{\kappa}\pm \frac{\kappa}{k}\right)$ (see \cite{Ch1}).
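For the reader's convenience, the tunneling parameters of Exps. (\ref{501}) and (\ref{502}) are easy to evaluate numerically. The following minimal Python sketch is not part of the derivation; it assumes units with $\hbar=m=1$ and purely illustrative parameter values.

\begin{verbatim}
import numpy as np

hbar = m = 1.0   # illustrative units; an assumption, not part of the text

def rect_barrier_params(E, V0, d):
    # Tunneling parameters T, J, F of a rectangular barrier of height V0
    # and width d at energy E (sub-barrier and above-barrier cases).
    k = np.sqrt(2.0 * m * E) / hbar
    if E < V0:
        kappa = np.sqrt(2.0 * m * (V0 - E)) / hbar
        tp = 0.5 * (k / kappa + kappa / k)        # theta_(+)
        tm = 0.5 * (k / kappa - kappa / k)        # theta_(-)
        T = 1.0 / (1.0 + tp**2 * np.sinh(kappa * d)**2)
        J = np.arctan(tm * np.tanh(kappa * d))
        F = 0.0
    else:
        kappa = np.sqrt(2.0 * m * (E - V0)) / hbar
        tp = 0.5 * (k / kappa + kappa / k)
        tm = 0.5 * (k / kappa - kappa / k)
        T = 1.0 / (1.0 + tm**2 * np.sin(kappa * d)**2)
        J = np.arctan(tp * np.tan(kappa * d))
        F = 0.0 if tm * np.sin(kappa * d) >= 0.0 else np.pi
    return T, J, F

print(rect_barrier_params(E=0.5, V0=1.0, d=3.0))   # opaque sub-barrier case
print(rect_barrier_params(E=1.5, V0=1.0, d=3.0))   # above-barrier case
\end{verbatim}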
Now, taking into account Exps. (\ref{50}) and (\ref{500}), we can write down in-asymptote $\psi_{in}(x,t)$ and out-asymptote $\psi_{out}(x,t)$ for the time-dependent scattering problem (see \cite{Tei}): \begin{eqnarray} \label{59} \psi_{in}(x,t)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f_{in}(k,t) e^{ikx}dk,\mbox{\hspace{3mm}} f_{in}=A_{in}(k)\exp[-i E(k)t/\hbar]; \end{eqnarray} \begin{eqnarray} \label{60} \psi_{out}(x,t)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f_{out}(k,t) e^{ikx}dk,\mbox{\hspace{3mm}} f_{out}= f_{out}^{tr}+f_{out}^{ref}; \end{eqnarray} \begin{eqnarray} \label{61} f_{out}^{tr}=\sqrt{T(k)}A_{in}(k) \exp[i(J(k)-kd-E(k)t/\hbar)] \end{eqnarray} \begin{eqnarray} \label{62} f_{out}^{ref}=\sqrt{R(k)}A_{in}(-k)\exp[-i(J(k)-F(k)-\frac{\pi}{2}+2ka+E(k)t/\hbar)]; \end{eqnarray} where Exps. (\ref{59}), (\ref{61}) and (\ref{62}) describe, respectively, the incident, transmitted and reflected wave packets. Here $A_{in}(k)$ is the Fourier-transform of $\psi_{in}(x).$ For example, for the Gaussian wave packet to obey condition (\ref{444}), $A_{in}(k)=c\cdot \exp(-l_0^2(k-k_0)^2);$ $c$ is a normalization constant.
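The asymptotes (\ref{59})--(\ref{62}) can also be synthesized numerically by direct quadrature over $k$. The sketch below (Python, $\hbar=m=1$; the barrier and packet parameters are illustrative assumptions, and the packet is chosen narrow enough in $k$ that only the sub-barrier formulas of Exp. (\ref{501}) are needed) builds the free in-asymptote and the transmitted packet of Exp. (\ref{61}) and checks that the norm of the latter approaches the average transmission coefficient.

\begin{verbatim}
import numpy as np

hbar = m = 1.0                      # illustrative units (assumption)
k0, l0 = 1.0, 20.0                  # mean momentum and half-width of the in-state
V0, a, d = 1.2, 200.0, 2.0          # hypothetical rectangular barrier
nk, nx = 2001, 701

def rect_T_J(k):
    # sub-barrier tunneling parameters of the rectangular barrier (E(k) < V0)
    kappa = np.sqrt(2.0*m*V0 - (hbar*k)**2)/hbar
    tp = 0.5*(k/kappa + kappa/k)
    tm = 0.5*(k/kappa - kappa/k)
    T = 1.0/(1.0 + tp**2*np.sinh(kappa*d)**2)
    J = np.arctan(tm*np.tanh(kappa*d))
    return T, J

k = np.linspace(k0 - 5.0/l0, k0 + 5.0/l0, nk)   # support of the Gaussian A_in(k)
dk = k[1] - k[0]
A = np.exp(-l0**2*(k - k0)**2)
A /= np.sqrt(np.sum(np.abs(A)**2)*dk)           # <psi_in|psi_in> = 1
E = (hbar*k)**2/(2.0*m)
T, J = rect_T_J(k)

def packet(f, x, t):
    # psi(x,t) = (2 pi)^(-1/2) Int f(k) exp(i(k x - E t/hbar)) dk
    ph = np.exp(1j*(np.outer(x, k) - E*t/hbar))
    return ph.dot(f)*dk/np.sqrt(2.0*np.pi)

x = np.linspace(-100.0, 700.0, nx)
dx = x[1] - x[0]
t = 2.0*a*m/(hbar*k0)                           # well after the collision
psi_in = packet(A, x, t)                        # free in-asymptote
psi_tr = packet(np.sqrt(T)*A*np.exp(1j*(J - k*d)), x, t)   # transmitted packet
print(np.sum(np.abs(psi_in)**2)*dx)             # ~ 1
print(np.sum(np.abs(psi_tr)**2)*dx)             # ~ average transmission
\end{verbatim}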
Let us now show that by the final states (\ref{60})-(\ref{62}) one can uniquely reconstruct the prehistory of the subensembles of transmitted and reflected particles, impinging the barrier from the left, at all stages of scattering. Let $\psi_{tr}(x,t)$ and $\psi_{ref}(x,t)$ be searched-for wave packets to describe transmission (TWF) and reflection (RWF), respectively. By our approach their sum should give the (full) wave function $\psi_{full}(x,t)$ to describe the whole 1D completed scattering: \begin{eqnarray} \label{261} \psi_{full}(x,t)=\psi_{tr}(x,t)+\psi_{ref}(x,t). \end{eqnarray}
\subsection{Incoming waves for transmission and reflection}\label{a21}
We begin our analysis with searching for the stationary wave functions for reflection, $\psi_{ref}(x;k),$ and transmission, $\psi_{tr}(x;k),$ in the region $x \le a$. Let us write down these two solutions to the stationary Schr\"odinger equation in the form, \begin{eqnarray} \label{265} \psi_{ref}(x;k)=A_{in}^{ref}e^{ikx}+B_{out}^{ref}e^{-ikx},\mbox{\hspace{3mm}} \psi_{tr}(x;k)= A_{in}^{tr}e^{ikx}+B_{out}^{tr}e^{-ikx}, \end{eqnarray} where coefficients obey the following conditions: \begin{eqnarray} \label{2650} A_{in}^{tr}+A_{in}^{ref}=1;\mbox{\hspace{3mm}} B_{out}^{tr}=0;\mbox{\hspace{3mm}} B_{out}^{ref}=b_{out}e^{2ika}. \end{eqnarray} Besides, we suppose that reflected particles do not cross the barrier region, and, hence, the probability flux for $\psi_{ref}(x;k)$ should be equal to zero: \begin{eqnarray} \label{264}
|A_{in}^{ref}|^2=|b_{out}|^2\equiv R(k). \end{eqnarray} By the same reason, the probability flux for $\psi_{full}(x;k)$ and $\psi_{tr}(x;k)$ should be the same, \begin{eqnarray}\label{263}
|A_{in}^{tr}|^2=|a_{out}|^2\equiv T(k) \end{eqnarray} Taking into account that $\psi_{tr}=\psi_{full}-\psi_{ref},$ we can exclude $\psi_{tr}$ from Eq. (\ref{263}). As a result, we obtain \begin{eqnarray} \label{2630}
\Re\left(A_{in}^{ref} \right)-|b_{out}|^2=0. \end{eqnarray} Thus, from Eqs. (\ref{264}) and (\ref{2630}) it follows that $A_{in}^{tr}=\sqrt{T}(\sqrt{T}\mp i\sqrt{R})$; $A_{in}^{ref}=\sqrt{R}(\sqrt{R}\pm i\sqrt{T}) \equiv \sqrt{R}\exp(i\lambda)$; $\lambda=\pm\arctan(\sqrt{T/R})$.
So, a coherent superposition of the incoming waves to describe transmission and reflection, for a given $E$, yields the incoming wave of unite amplitude, that describes the whole ensemble of incident particles. In this case, not only
$A_{in}^{tr}+A_{in}^{ref}=1$, but also $|A_{in}^{tr}|^2+|A_{in}^{ref}|^2=1$! Besides, the phase difference for the incoming waves to describe reflection and transmission equals $\pi/2$ irrespective of the value of $E$.
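These properties of the decomposition are elementary to check numerically. The short sketch below (Python; the value of $T$ is an arbitrary illustrative number, and the upper sign of the two roots is taken) verifies that the two incoming amplitudes sum to unity, that their squared moduli also sum to unity, and that their phase difference equals $\pi/2$.

\begin{verbatim}
import numpy as np

T = 0.3                     # hypothetical transmission coefficient at some k
R = 1.0 - T

A_tr  = np.sqrt(T)*(np.sqrt(T) - 1j*np.sqrt(R))   # upper-sign root
A_ref = np.sqrt(R)*(np.sqrt(R) + 1j*np.sqrt(T))

print(A_tr + A_ref)                        # -> 1 (unit incident amplitude)
print(abs(A_tr)**2 + abs(A_ref)**2)        # -> 1 as well
print(np.angle(A_ref) - np.angle(A_tr))    # -> pi/2, independent of T
\end{verbatim}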
Our next step is to show that only one root of $\lambda$ gives a searched-for $\psi_{ref}(x;k).$ For this purpose the above solution should be extended into the region $x>a$. To do this, we will further restrict ourselves by symmetric potential barriers, though the above analysis is valid in the general case.
\subsection{Wave functions for transmission and reflection in the region of a symmetric potential barrier}\label{a22}
Let $V(x)$ be such that $V(x-x_c)=V(x_c-x);$ $x_c=(a+b)/2.$ As is known, for the region of a symmetric potential barrier, one can always find odd, $u(x-x_c)$, and even, $v(x-x_c)$, solutions to the Schr\"odinger equation. We will suppose here that these functions are known. For example, for the rectangular potential barrier (see Exps. (\ref{501}) and (\ref{502})), \[ u(x)=\sinh(\kappa x),\mbox{\hspace{3mm}} v(x)=\cosh(\kappa x),\mbox{\hspace{3mm}} if\mbox{\hspace{3mm}} E\le V_0;\] \[ u(x)=\sin(\kappa x),\mbox{\hspace{3mm}} v(x)=\cos(\kappa x),\mbox{\hspace{3mm}} if\mbox{\hspace{3mm}} E\ge V_0.\] Note, $\frac{du}{dx}v-\frac{dv}{dx}u$ is a constant, which equals $\kappa$ in the case of the rectangular barrier. Without loss of generality we will keep this notation for any symmetric potential barrier.
Before finding $\psi_{ref}(x;k)$ and $\psi_{tr}(x;k)$ in the barrier region, we have firstly to derive expressions for the tunneling parameters of symmetric barriers. Let in the barrier region $\psi_{full}(x;k)=a_{full}\cdot u(x-x_c,k)+b_{full}\cdot v(x-x_c,k).$ "Sewing" this expression together with Exps. (\ref{1}) and (\ref{2}) at the points $x=a$ and $x=b$, respectively, we obtain \begin{eqnarray*} a_{full}=\frac{1}{\kappa}\left(P+P^*b_{out}\right)e^{ika}= -\frac{1}{\kappa}P^*a_{out}e^{ika};\nonumber\\ b_{full}=\frac{1}{\kappa}\left(Q+Q^*b_{out}\right)e^{ika}= \frac{1}{\kappa}Q^*a_{out}e^{ika};\nonumber \end{eqnarray*} \begin{eqnarray*}
Q=\left(\frac{du(x-x_c)}{dx}+i k u(x-x_c)\right)\Bigg|_{x=b};\nonumber\\
P=\left(\frac{dv(x-x_c)}{dx}+i k v(x-x_c)\right)\Bigg|_{x=b}.\nonumber \end{eqnarray*} As a result, \begin{eqnarray} \label{300} a_{out}=\frac{1}{2}\left(\frac{Q}{Q^*}-\frac{P}{P^*}\right);\mbox{\hspace{3mm}} b_{out}=-\frac{1}{2}\left(\frac{Q}{Q^*}+\frac{P}{P^*}\right). \end{eqnarray} As it follows from (\ref{50}), $a_{out}=\sqrt{T}\exp(iJ),$ $b_{out}=\sqrt{R}\exp\left(i\left(J-F-\frac{\pi}{2}\right)\right)$. Hence
$T=|a_{out}|^2,$ $R=|b_{out}|^2,$ $J=\arg(a_{out})$. Besides, for symmetric potential barriers $F=0$ when $\Re(QP^*)>0$; otherwise, $F=\pi$.
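As an illustration, the sketch below (Python, $\hbar=m=1$, illustrative sub-barrier parameters) evaluates $Q$ and $P$ for the rectangular barrier, for which $u(x)=\sinh(\kappa x)$ and $v(x)=\cosh(\kappa x)$, forms $a_{out}$ and $b_{out}$ according to Exp. (\ref{300}), and checks them against the closed-form parameters of Exp. (\ref{501}).

\begin{verbatim}
import numpy as np

hbar = m = 1.0                        # illustrative units (assumption)
E, V0, d = 0.4, 1.0, 3.0              # sub-barrier case (E < V0)
k = np.sqrt(2.0*m*E)/hbar
kappa = np.sqrt(2.0*m*(V0 - E))/hbar

# u = sinh(kappa*(x-x_c)), v = cosh(kappa*(x-x_c)); at x = b, x - x_c = d/2
Q = kappa*np.cosh(kappa*d/2) + 1j*k*np.sinh(kappa*d/2)   # u' + i k u at x = b
P = kappa*np.sinh(kappa*d/2) + 1j*k*np.cosh(kappa*d/2)   # v' + i k v at x = b

a_out = 0.5*(Q/np.conj(Q) - P/np.conj(P))
b_out = -0.5*(Q/np.conj(Q) + P/np.conj(P))

tp = 0.5*(k/kappa + kappa/k)
tm = 0.5*(k/kappa - kappa/k)
T_closed = 1.0/(1.0 + tp**2*np.sinh(kappa*d)**2)
J_closed = np.arctan(tm*np.tanh(kappa*d))

print(abs(a_out)**2, T_closed)             # coincide
print(np.angle(a_out), J_closed)           # coincide
print(abs(a_out)**2 + abs(b_out)**2)       # -> 1 (unitarity)
\end{verbatim}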
Then, one can show that "sewing" the general solution $\psi_{ref}(x;k)$ in the barrier region together with Exp. (\ref{265}) at $x=a$, for both the roots of $\lambda$, gives odd and even functions in this region. For the problem considered, only the former has a physical meaning. The corresponding roots for $A_{in}^{ref}$ and $A_{in}^{tr}$ read as \begin{eqnarray} \label{301} A_{in}^{ref}=b_{out}\left(b^*_{out}-a^*_{out}\right);\mbox{\hspace{1mm}} A_{in}^{tr}=a^*_{out}\left(a_{out}+b_{out}\right) \end{eqnarray} One can easily show that in this case \begin{eqnarray} \label{302} \frac{Q^*}{Q}=-\frac{A_{in}^{ref}}{b_{out}}=\frac{A_{in}^{tr}}{a_{out}}; \end{eqnarray} for $a\le x\le b$ \begin{eqnarray} \label{3000} \psi_{ref}=\frac{1}{\kappa}\left(PA_{in}^{ref}+P^*b_{out}\right)e^{ika}u(x-x_c,k). \end{eqnarray} The extension of this solution onto the region $x\ge b$ gives \begin{eqnarray} \label{3001} \psi_{ref}=-b_{out}e^{ik(x-d)}-A_{in}^{ref}e^{-ik(x-s)}. \end{eqnarray}
So, Exps. (\ref{265}), (\ref{3000}) and (\ref{3001}) give the solution to the Schr\"odinger equation, which we expect to describe reflection. Then, the corresponding solution for transmission is $\psi_{tr}(x;k)=\psi_{full}(x;k)-\psi_{ref}(x;k).$
Note that $\psi_{full}(x;k)$ does not contain an incoming wave impinging the barrier from the right, while the found TWF and RWF include such waves. That is, the superposition of these probability waves leads, due to interference, to their macroscopical reconstruction: in the superposition, both outgoing waves are connected only with the left source of particles. One can show that, in this case, the reflected and transmitted waves are connected causally with the incoming waves $A_{in}^{ref}e^{ikx}$ and $A_{in}^{tr}e^{ikx}$, respectively.
Indeed, let us firstly consider reflection. As is seen from Exp. (\ref{3000}), $\psi_{ref}(x;k)$ is equal to zero at the point $x_c$, for all values of $k$. As a result, the probability flux, for any time-dependent wave function formed from $\psi_{ref}(x;k)$, is equal to zero at this point, for any value of time. This means that, in the case of reflection, particles impinging the symmetric barrier from the left do not enter the region $x\ge x_c$. In other words, the wave packet, $\tilde{\psi}_{ref}(x;k),$ to describe such particles can be written in the form \begin{eqnarray} \label{3002} \tilde{\psi}_{ref}(x;k)\equiv \psi_{ref}(x;k)\mbox{\hspace{3mm}} for\mbox{\hspace{3mm}} x\le x_c; \mbox{\hspace{5mm}} \tilde{\psi}_{ref}(x;k)\equiv 0 \mbox{\hspace{3mm}} for\mbox{\hspace{3mm}} x\ge x_c. \end{eqnarray}
Note that, for a given potential, $\tilde{\psi}_{ref}(x;k)$ does not obey the Schr\"odinger equation at the point $x=x_c$. Nevertheless the probability density for this function is everywhere continuous and the probability flux is everywhere equal to zero. This means that the wave packet $\tilde{\psi}_{ref}(x,t),$ formed from the functions $\tilde{\psi}_{ref}(x;k)$ with different $k$, is everywhere continuous and evolves with a fixed norm, despite the discontinuity of its first derivative at the point $x_c$. As said above, it is this packet that describes the subensemble of particles which impinge the barrier from the left and are reflected by it.
The above suggests that the subensemble of incident particles to be transmitted by the barrier is described by the incident wave $A_{in}^{tr}e^{ikx}$ of the solution $\psi_{tr}(x;k).$ Namely this incident wave is causally connected with the transmitted one $a_{out}(k)e^{ik(x-d)}$ of the solution $\psi_{full}(x;k)$.
One can easily show that the function $\tilde{\psi}_{tr}(x;k),$ where $\tilde{\psi}_{tr}(x;k)=\psi_{full}(x;k) -\tilde{\psi}_{ref}(x;k),$ is everywhere continuous and the corresponding probability flux is everywhere constant. In this case, \begin{eqnarray} \label{3003} \tilde{\psi}_{tr}(x;k)\equiv \psi_{tr}(x;k)\mbox{\hspace{1mm}} for\mbox{\hspace{1mm}} x\le x_c; \mbox{\hspace{3mm}} \tilde{\psi}_{tr}(x;k)\equiv \psi_{full}(x;k) \mbox{\hspace{1mm}} for\mbox{\hspace{1mm}} x\ge x_c. \end{eqnarray}
As in the case of reflection, the wave packet $\tilde{\psi}_{tr}(x,t),$ formed from the functions $\tilde{\psi}_{tr}(x;k)$, is everywhere continuous and evolves with a fixed norm, despite the discontinuity of its first derivative at the point $x_c$. Hence it is this packet that describes the subensemble of particles which impinge the barrier from the left and are transmitted by it.
One can easily show that \begin{eqnarray} \label{303} \tilde{\psi}_{tr}(x;k)=a^l_{tr}u(x-x_c,k)+b_{tr}v(x-x_c,k)\mbox{\hspace{3mm}} for\mbox{\hspace{3mm}} a\le x\le x_c; \end{eqnarray} \begin{eqnarray} \label{304} \tilde{\psi}_{tr}(x;k)=a^r_{tr}u(x-x_c,k)+b_{tr}v(x-x_c,k)\mbox{\hspace{3mm}} for\mbox{\hspace{3mm}} x_c\le x\le b; \end{eqnarray} \begin{eqnarray} \label{305} \tilde{\psi}_{tr}(x;k)=a_{out}e^{ik(x-d)}\mbox{\hspace{3mm}} for\mbox{\hspace{3mm}} x\ge b.; \end{eqnarray} where \begin{eqnarray*} a^l_{tr}=\frac{1}{\kappa}PA_{in}^{tr}e^{ika},\mbox{\hspace{1mm}} b_{tr}=b_{full}=\frac{1}{\kappa}Q^*a_{out}e^{ika},\mbox{\hspace{1mm}} a^r_{tr}=a_{full}=-\frac{1}{\kappa}P^*a_{out}e^{ika} \end{eqnarray*}
Note, for any value of $t$
\begin{eqnarray*} {\bf T}=<\tilde{\psi}_{tr}(x,t)|\tilde{\psi}_{tr}(x,t)>=const;\mbox{\hspace{1mm}} {\bf R}=<\tilde{\psi}_{ref}(x,t)|\tilde{\psi}_{ref}(x,t)>=const; \end{eqnarray*} ${\bf T}$ and ${\bf R}$ are the average transmission and reflection coefficients, respectively. Besides, \begin{eqnarray} \label{700100}
<\psi_{full}(x,t)|\psi_{full}(x,t)> ={\bf T}+{\bf R}=1. \end{eqnarray}
From this it follows, in particular, that the scalar product of the wave packets for transmission and reflection, $<\tilde{\psi}_{tr}(x,t)|\tilde{\psi}_{ref}(x,t)>,$ is a purely imaginary quantity that approaches zero as $t\to\infty$.
We have to stress that these wave packets are not solutions to the Schr\"odinger equation for a given potential, just as transmission and reflection described by them are not independent quantum processes. These wave packets may be considered only as parts of an entangled state to describe a 1D completed scattering, like the sub-processes may be considered only as two different alternatives to constitute the same one-particle scattering process.
Of importance is that it is these two wave packets that describe the (to-be-)transmitted and (to-be-)reflected subensembles of particles at all stages of scattering. In this case \begin{eqnarray} \label{2610} \psi_{full}(x,t)=\psi_{tr}(x,t)+\psi_{ref}(x,t)= \tilde{\psi}_{tr}(x,t)+\tilde{\psi}_{ref}(x,t). \end{eqnarray} (Below we will deal only with $\tilde{\psi}_{ref}$ and $\tilde{\psi}_{tr}.$ For this reason these notations will be used without the tilde.)
Now we can proceed to the study of temporal aspects of a 1D completed scattering. The found wave packets for transmission and reflection permit us to introduce characteristic times for either sub-process. As will be seen from the following, the motion of either subensemble of particles in the barrier region can be investigated with help of the Larmor-clock timing procedure adapted to the sub-processes.
\section{Characteristic times for transmission and reflection} \label{a3}
So, our main purpose now is to find, for each sub-process, the time spent, on the average, by a particle in the barrier region. In doing so, we have to take into account that a chosen timing procedure must not influence an original value of the characteristic time.
Under such conditions, perhaps, the only way to measure the tunneling time for a completed scattering is to exploit internal degrees of freedom of quantum particles. As is known, namely this idea underlies the Larmor-time concept based on the Larmor precession of the particle's spin under the infinitesimal magnetic field.
However, as will be seen from the following, the Larmor-time concept is directly connected to the dwell time to describe the stationary scattering problem. By this reason, we define firstly the dwell times for transmission and reflection for a particle in the stationary state.
\subsection{Dwell times for transmission and reflection}\label{a32}
Note, in the case of transmission the density of the probability flux, $I_{tr}$, for $\psi_{tr}(x;k)$ is everywhere constant and equal to $T\cdot\hbar k/m$. The velocity, $v_{tr}(x,k)$, of an infinitesimal element of the flux, at the point $x,$
equals $v_{tr}(x)=I_{tr}/|\psi_{tr}(x;k)|^2.$ Outside the barrier region the velocity is everywhere constant: $v_{tr}=\hbar k/m$. In the barrier region it depends on $x$. In the case of an opaque rectangular potential barrier, $v_{tr}(x)$ decreases exponentially when the infinitesimal element approaches the midpoint
$x_c$. One can easily show that $|\psi_{tr}(a;k)|=|\psi_{tr}(b;k)|=\sqrt{T}$, but
$|\psi_{tr}(x_c;k)|\sim\sqrt{T}\exp(\kappa d/2)$.
Thus, any selected infinitesimal element of the flux passes the barrier region for the time $\tau^{tr}_{dwell}$, where \begin{eqnarray} \label{4005}
\tau^{tr}_{dwell}(k)=\frac{1}{I_{tr}}\int_a^b|\psi_{tr}(x;k)|^2 dx. \end{eqnarray} By analogy with \cite{But} we will call this time scale the dwell time for transmission.
For the rectangular barrier this time reads (for $E< V_0$ and $E\ge V_0$, respectively) as \begin{eqnarray} \label{4007} \tau^{tr}_{dwell}=\frac{m}{2\hbar k\kappa^3}\left[\left(\kappa^2-k^2\right)\kappa d +\kappa_0^2 \sinh(\kappa d)\right], \end{eqnarray} \begin{eqnarray} \label{4009} \tau^{tr}_{dwell}=\frac{m}{2\hbar k\kappa^3}\left[\left(\kappa^2+k^2\right)\kappa d -\beta \kappa_0^2 \sin(\kappa d)\right]. \end{eqnarray}
In the case of reflection the situation is less simple. The above arguments are not applicable here, for the probability flux for $\psi_{ref}(x,k)$ is zero. However, as is seen, the dwell time for transmission coincides, in fact, with Buttiker's dwell time introduced however on the basis of the wave function for transmission. Therefore, making use of the arguments by Buttiker, let us define the dwell time for reflection, $\tau^{ref}_{dwell}$, as \begin{eqnarray} \label{40014}
\tau^{ref}_{dwell}(k)=\frac{1}{I_{ref}} \int_a^{x_c}|\psi_{ref}(x,k)|^2 dx; \end{eqnarray} where $I_{ref}=R\cdot \hbar k/m$ is the incident probability flux for reflection.
Again, for the rectangular barrier \begin{eqnarray} \label{40030} \tau^{ref}_{dwell}=\frac{m k}{\hbar \kappa}\cdot\frac{\sinh(\kappa d)-\kappa d}{\kappa^2+\kappa^2_0 \sinh^2(\kappa d/2)}\mbox{\hspace{3mm}} for \mbox{\hspace{3mm}} E<V_0; \end{eqnarray} \begin{eqnarray} \label{40031} \tau^{ref}_{dwell}=\frac{m k}{\hbar \kappa}\cdot\frac{\kappa d-\sin(\kappa d)}{\kappa^2+\beta\kappa^2_0 \sin^2(\kappa d/2)}\mbox{\hspace{3mm}} for \mbox{\hspace{3mm}} E\ge V_0. \end{eqnarray} As is seen, for rectangular barriers the dwell times for transmission and reflection do not coincide with each other, unlike the asymptotic group times.
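The dependence of these dwell times on the barrier width is easy to inspect numerically. The sketch below (Python, $\hbar=m=1$) treats only the sub-barrier case and reads $\kappa_0^2$ as $k^2+\kappa^2=2mV_0/\hbar^2$, which is an assumption about the notation; for an opaque barrier the transmission dwell time grows roughly as $e^{\kappa d}$, while the reflection dwell time saturates, in line with the discussion of the Hartman effect above.

\begin{verbatim}
import numpy as np

hbar = m = 1.0                        # illustrative units (assumption)
E, V0 = 0.3, 1.0                      # sub-barrier case only (E < V0)
k = np.sqrt(2.0*m*E)/hbar
kappa = np.sqrt(2.0*m*(V0 - E))/hbar
k0_sq = 2.0*m*V0/hbar**2              # kappa_0^2 read as k^2 + kappa^2 (assumption)

def tau_tr(d):
    # transmission dwell time (sub-barrier formula above)
    return m/(2.0*hbar*k*kappa**3)*((kappa**2 - k**2)*kappa*d
                                    + k0_sq*np.sinh(kappa*d))

def tau_ref(d):
    # reflection dwell time (sub-barrier formula above)
    return (m*k/(hbar*kappa))*(np.sinh(kappa*d) - kappa*d) \
           / (kappa**2 + k0_sq*np.sinh(kappa*d/2)**2)

for d in (1.0, 2.0, 4.0, 8.0):
    print(d, tau_tr(d), tau_ref(d))
# tau_tr grows without saturation as d increases, tau_ref tends to a constant
\end{verbatim}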
We have to stress once more that Exps. (\ref{4005}) and (\ref{40014}), unlike Smith's, Buttiker's and Bohmian dwell times, are defined in terms of the TWF and RWF. As will be seen from the following, the dwell times introduced can be justified in the framework of the Larmor-time concept.
\subsection{Larmor times for transmission and reflection}\label{a33}
As was said above, neither the group-time concept (see \cite{Ch5}) nor the dwell-time concept gives a way of measuring the time spent by a particle in the barrier region. This task can be solved in the framework of the Larmor-time concept. As is known, the idea of using the Larmor precession as a clock was proposed by Baz' \cite{Baz} and developed later by Rybachenko \cite{Ryb} and B\"{u}ttiker \cite{But} (see also \cite{Aer,Lia}).
However, we have to stress that the existing concept of the Larmor time was introduced on the basis of incoming and outgoing waves (see \cite{But,Aer,Lia}). In this connection, our next step is to redefine the Larmor times for the barrier region, making use the expressions of the corresponding wave functions just for this region.
\subsubsection{Preliminaries} \label{a330}
Let us consider a quantum ensemble of electrons moving along the $x$-axis and interacting with a symmetric time-independent potential barrier $V(x)$ and a small magnetic field (parallel to the $z$-axis) confined to the finite spatial interval $[a,b].$ Let this ensemble be a mixture of two parts. One of them consists of electrons with spin parallel to the magnetic field. The other is formed from particles with antiparallel spin.
Let the in-state of this mixture at $t=0$ be described by the spinor \begin{eqnarray} \label{9001} \Psi_{in}(x)=\frac{1}{\sqrt{2}}\left(\begin{array}{c} 1 \\ 1 \end{array} \right)\psi_{in}(x), \end{eqnarray} where $\psi_{in}(x)$ is a normalized function satisfying conditions (\ref{444}). Thus we consider the case when the spin-coherent in-state (\ref{9001}) is the eigenvector of $\sigma_x$ with eigenvalue 1 (the average spin of the ensemble of incident particles is oriented along the $x$-direction); hereinafter, $\sigma_x,$ $\sigma_y$ and $\sigma_z$ are the Pauli spin matrices.
For electrons with spin up (down), the potential barrier effectively decreases (increases), in height, by the value $\hbar\omega_L/2$; here $\omega_L$ is the frequency of the Larmor precession; $\omega_L=2\mu B/\hbar,$ $\mu$ denotes the magnetic moment. The corresponding Hamiltonian has the following form, \begin{eqnarray} \label{900200} \hat{H}=\frac{\hat{p}^2}{2m}+V(x)-\frac{\hbar\omega_L}{2}\sigma_z, \mbox{\hspace{3mm}} if\mbox{\hspace{3mm}} x\in[a,b];\mbox{\hspace{1mm}} \hat{H}=\frac{\hat{p}^2}{2m}, \mbox{\hspace{3mm}} otherwise. \end{eqnarray} For $t>0$, due to the influence of the magnetic field, the states of particles with spin up and down become different. The probability to pass the barrier is different for them. Let for any value of $t$ the spinor to describe the state of particles read as \begin{eqnarray} \label{9002} \Psi_{full}(x,t)=\frac{1}{\sqrt{2}}\left(\begin{array}{c} \psi_{full}^{(\uparrow)}(x,t) \\ \psi_{full}^{(\downarrow)}(x,t) \end{array} \right). \end{eqnarray}
In accordance with (\ref{261}) (or (\ref{2610})), either spinor component can be uniquely presented as a coherent superposition of two probability fields describing transmission and reflection: \begin{eqnarray} \label{9003} \psi_{full}^{(\uparrow\downarrow)}(x,t)= \psi_{tr}^{(\uparrow\downarrow)}(x,t)+\psi_{ref}^{(\uparrow\downarrow)}(x,t); \end{eqnarray} note that $\psi_{ref}^{(\uparrow\downarrow)}(x,t)\equiv 0$ for $x\ge x_c$. As a consequence, the same decomposition takes place for spinor (\ref{9002}): $\Psi_{full}(x,t)= \Psi_{tr}(x,t)+\Psi_{ref}(x,t).$
We will suppose that all the wave functions for transmission and reflection are known. It is important to stress here (see (\ref{700100})) that \begin{eqnarray} \label{900100}
<\psi_{full}^{(\uparrow\downarrow)}(x,t)|\psi_{full}^{(\uparrow\downarrow)}(x,t)> =T^{(\uparrow\downarrow)}+R^{(\uparrow\downarrow)}=1;\\
T^{(\uparrow\downarrow)}=<\psi_{tr}^{(\uparrow\downarrow)}(x,t)| \psi_{tr}^{(\uparrow\downarrow)}(x,t)>=const;\nonumber\\
R^{(\uparrow\downarrow)}=<\psi_{ref}^{(\uparrow\downarrow)}(x,t)| \psi_{ref}^{(\uparrow\downarrow)}(x,t)>=const;\nonumber \end{eqnarray} $T^{(\uparrow\downarrow)}$ and $R^{(\uparrow\downarrow)}$ are the (real) transmission and reflection coefficients, respectively, for particles with spin up $(\uparrow)$ and down $(\downarrow)$. Let further $T=(T^{(\uparrow)}+T^{(\downarrow)})/2$ and $R=(R^{(\uparrow)}+R^{(\downarrow)})/2$ be quantities to describe all particles.
\subsubsection{Time evolution of the spin polarization of particles} \label{a332}
To study the time evolution of the average particle's spin, we have to find the expectation values of the spin projections $\hat{S}_x$, $\hat{S}_y$ and $\hat{S}_z$. Note, for any $t$ \begin{eqnarray*} <\hat{S}_x>_{full}\equiv \frac{\hbar}{2}\sin(\theta_{full})\cos(\phi_{full})=\hbar
\cdot \Re(<\psi_{full}^{(\uparrow)}|\psi_{full}^{(\downarrow)}>); \end{eqnarray*} \begin{eqnarray} \label{9006} <\hat{S}_y>_{full}\equiv \frac{\hbar}{2}\sin(\theta_{full})\sin(\phi_{full})=
\hbar\cdot \Im(<\psi_{full}^{(\uparrow)}|\psi_{full}^{(\downarrow)}>); \end{eqnarray} \begin{eqnarray*} <\hat{S}_z>_{full}\equiv \frac{\hbar}{2}\cos(\theta_{full})=\frac{\hbar}{2}
\left[<\psi_{full}^{(\uparrow)}|\psi_{full}^{(\uparrow)}>
-<\psi_{full}^{(\downarrow)}|\psi_{full}^{(\downarrow)}>\right]. \end{eqnarray*} Similar expressions are valid for transmission and reflection: \begin{eqnarray*} <\hat{S}_x>_{tr}=\frac{\hbar}{T}
\Re(<\psi_{tr}^{(\uparrow)}|\psi_{tr}^{(\downarrow)}>),\mbox{\hspace{3mm}} <\hat{S}_y>_{tr}=\frac{\hbar}{T}
\Im(<\psi_{tr}^{(\uparrow)}|\psi_{tr}^{(\downarrow)}>),\\ <\hat{S}_z>_{tr}=\frac{\hbar}{2T}
\Big(<\psi_{tr}^{(\uparrow)}|\psi_{tr}^{(\uparrow)}>
-<\psi_{tr}^{(\downarrow)}|\psi_{tr}^{(\downarrow)}>\Big), \end{eqnarray*} \begin{eqnarray*} <\hat{S}_x>_{ref}=\frac{\hbar}{R}
\Re(<\psi_{ref}^{(\uparrow)}|\psi_{ref}^{(\downarrow)}>),\mbox{\hspace{3mm}} <\hat{S}_y>_{ref}=\frac{\hbar}{R}
\Im(<\psi_{ref}^{(\uparrow)}|\psi_{ref}^{(\downarrow)}>),\\ <\hat{S}_z>_{ref}=\frac{\hbar}{2R}
\left(<\psi_{ref}^{(\uparrow)}|\psi_{ref}^{(\uparrow)}>
-<\psi_{ref}^{(\downarrow)}|\psi_{ref}^{(\downarrow)}>\right). \end{eqnarray*}
Note, $\theta_{full}=\pi/2,$ $\phi_{full}=0$ at $t=0.$ However, this is not the case for transmission and reflection. Namely, for $t=0$ we have \begin{eqnarray*}
\phi_{tr,ref}^{(0)}=\arctan\left(\frac{\Im(<\psi_{tr,ref}^{(\uparrow)}(x,0)| \psi_{tr,ref}^{(\downarrow)}(x,0)>)}
{\Re(<\psi_{tr,ref}^{(\uparrow)}(x,0)|\psi_{tr,ref}^{(\downarrow)}(x,0)>)}\right); \end{eqnarray*} \begin{eqnarray*}
\theta_{tr,ref}^{(0)}=\arccos\Big(<\psi_{tr,ref}^{(\uparrow)}(x,0)| \psi_{tr,ref}^{(\uparrow)}(x,0)>\\
-<\psi_{tr,ref}^{(\downarrow)}(x,0)|\psi_{tr,ref}^{(\downarrow)}(x,0)>\Big); \end{eqnarray*}
Since the norms of $\psi_{tr}^{(\uparrow\downarrow)}(x,t)$ and $\psi_{ref}^{(\uparrow\downarrow)}(x,t)$ are constant, $\theta_{tr}(t)=\theta_{tr}^{(0)}$ and $\theta_{ref}(t)=\theta_{ref}^{(0)}$ for any value of $t$. For the $z$-components of spin we have \begin{eqnarray} \label{90018} <\hat{S}_z>_{tr}(t)=\hbar\frac{T^{(\uparrow)}- T^{(\downarrow)}}{T^{(\uparrow)}+T^{(\downarrow)}},\mbox{\hspace{3mm}} <\hat{S}_z>_{ref}(t)=\hbar\frac{R^{(\uparrow)}- R^{(\downarrow)}}{R^{(\uparrow)}+R^{(\downarrow)}}. \end{eqnarray}
So, since the operator $\hat{S}_z$ commutes with Hamiltonian (\ref{900200}), this projection of the particle's spin should be constant, on the average, both for transmission and reflection. From the very beginning, the subensembles of transmitted and reflected particles possess a nonzero average $z$-component of spin (though it equals zero for the whole ensemble of particles in the case considered), which is conserved in the course of scattering. By our approach, the angles $\theta_{tr}^{(0)}$ and $\theta_{ref}^{(0)}$ cannot be used as a measure of the time spent by a particle in the barrier region.
\subsubsection{Larmor precession caused by the infinitesimal magnetic field confined to the barrier region} \label{a333}
As in \cite{But,Lia}, we will suppose further that the applied magnetic field is infinitesimal. In order to introduce characteristic times let us find the derivations $d\phi_{tr}/dt$ and $d\phi_{ref}/dt.$ For this purpose we will use the Ehrenfest equations for the average spin of particles: \begin{eqnarray*} \frac{d<\hat{S}_x>_{tr}}{dt}=-\frac{\hbar\omega_L}{T} \int_a^b \Im[(\psi_{tr}^{(\uparrow)}(x,t))^*\psi_{tr}^{(\downarrow)}(x,t)]dx\\ \frac{d<\hat{S}_y>_{tr}}{dt}=\frac{\hbar\omega_L}{T} \int_a^b \Re[(\psi_{tr}^{(\uparrow)}(x,t))^*\psi_{tr}^{(\downarrow)}(x,t)]dx\\ \frac{d<\hat{S}_x>_{ref}}{dt}=-\frac{\hbar\omega_L}{R} \int_a^{x_c} \Im[(\psi_{ref}^{(\uparrow)}(x,t))^*\psi_{ref}^{(\downarrow)}(x,t)]dx\\ \frac{d<\hat{S}_y>_{ref}}{dt}=\frac{\hbar\omega_L}{R} \int_a^{x_c} \Re[(\psi_{ref}^{(\uparrow)}(x,t))^*\psi_{ref}^{(\downarrow)}(x,t)]dx. \end{eqnarray*} Note, $\phi_{tr,ref}= \arctan\left(<\hat{S}_y>_{tr,ref}/<\hat{S}_x>_{tr,ref}\right).$ Hence, considering that the magnetic field is infinitesimal and
$|<\hat{S}_y>_{tr,ref}|\ll|<\hat{S}_x>_{tr,ref}|,$ we have \begin{eqnarray*} \frac{d \phi_{tr}}{dt}=\frac{1}{<\hat{S}_x>_{tr}}\cdot \frac{d<\hat{S}_y>_{tr}}{dt};\mbox{\hspace{3mm}} \frac{d \phi_{ref}}{dt}=\frac{1}{<\hat{S}_x>_{ref}}\cdot \frac{d<\hat{S}_y>_{ref}}{dt}. \end{eqnarray*} Then, considering the above expressions for the spin projections and their derivatives on $t$, we obtain \[\frac{d \phi_{tr}}{dt}=\omega_L \frac{\int_a^b \Re[(\psi_{tr}^{(\uparrow)}(x,t))^*\psi_{tr}^{(\downarrow)}(x,t)]dx} {\int_{-\infty}^\infty \Re[(\psi_{tr}^{(\uparrow)}(x,t))^*\psi_{tr}^{(\downarrow)}(x,t)]dx};\] \[\frac{d \phi_{ref}}{dt}=\omega_L \frac{\int_a^{x_c} \Re[(\psi_{ref}^{(\uparrow)}(x,t))^*\psi_{ref}^{(\downarrow)}(x,t)]dx} {\int_{-\infty}^{x_c} \Re[(\psi_{ref}^{(\uparrow)}(x,t))^*\psi_{ref}^{(\downarrow)}(x,t)]dx}.\] Or, taking into account that in the first order approximation on $\omega_L$, when $\psi_{tr}^{(\uparrow)}(x,t)=\psi_{tr}^{(\downarrow)}(x,t)=\psi_{tr}(x,t)$ and $\psi_{ref}^{(\uparrow)}(x,t)= \psi_{ref}^{(\downarrow)}(x,t)=\psi_{ref}(x,t),$ we have \[\frac{d \phi_{tr}}{dt}\approx\frac{\omega_L}{{\bf T}} \int_a^b
|\psi_{tr}(x,t)|^2dx;\mbox{\hspace{3mm}} \frac{d \phi_{ref}}{dt}\approx\frac{\omega_L}{{\bf R}}
\int_a^{x_c} |\psi_{ref}(x,t)|^2dx;\] note, in this limit, $T\to{\bf T}$ and $R\to{\bf R}$.
As is supposed in our setting of the problem, at both the initial and final instants of time a particle does not interact with the potential barrier and the magnetic field. In this case, without loss of exactness, the angles of rotation ($\Delta\phi_{tr}$ and $\Delta\phi_{ref}$) of the spin under the magnetic field, in the course of a completed scattering, can be written in the form \begin{eqnarray} \label{90020}
\Delta\phi_{tr}=\frac{\omega_L}{{\bf T}} \int_{-\infty}^\infty dt \int_a^b dx|\psi_{tr}(x,t)|^2,\mbox{\hspace{1mm}} \Delta\phi_{ref}=\frac{\omega_L}{{\bf R}}
\int_{-\infty}^\infty dt \int_a^{x_c} dx|\psi_{ref}(x,t)|^2. \end{eqnarray} On the other hand, both quantities can be written in the form $\Delta\phi_{tr}=\omega_L \tau^L_{tr}$ and $\Delta\phi_{ref}=\omega_L \tau^L_{ref},$ where $\tau^L_{tr}$ and $\tau^L_{ref}$ are the Larmor times for transmission and reflection. Comparing these expressions with (\ref{90020}), we eventually obtain \begin{eqnarray} \label{922}
\tau^L_{tr}=\frac{1}{{\bf T}} \int_{-\infty}^\infty dt \int_a^b dx|\psi_{tr}(x,t)|^2,\mbox{\hspace{3mm}} \tau^L_{ref}=\frac{1}{{\bf R}} \int_{-\infty}^\infty dt
\int_a^{x_c} dx|\psi_{ref}(x,t)|^2. \end{eqnarray} These are just the desired definitions of the Larmor times for transmission and reflection.
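To illustrate the structure of definitions (\ref{922}) numerically, a minimal sketch is given below (in Python). It is only a proxy computation: the full packet is propagated with a standard split-operator scheme and the double integral $\int dt\int_a^b dx\,|\psi(x,t)|^2$ is accumulated; the decomposition into $\psi_{tr}$ and $\psi_{ref}$, which (\ref{922}) actually requires, is not attempted here. The units $\hbar=m=1$ and all numerical parameters are arbitrary illustrative choices, not taken from this paper.
\begin{verbatim}
import numpy as np

# Split-operator propagation of a Gaussian packet across a rectangular
# barrier (hbar = m = 1).  The quantity accumulated in the loop is
#   int dt int_a^b dx |psi(x,t)|^2 ,
# i.e. the double integral entering the Larmor-time definitions; since the
# FULL packet is used instead of psi_tr, psi_ref, the result is only a
# dwell-time-like proxy, not tau^L_tr or tau^L_ref themselves.

L, N = 400.0, 4096
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]
k = 2*np.pi*np.fft.fftfreq(N, d=dx)            # angular wave numbers

a, b, V0 = 0.0, 2.0, 2.0                       # barrier [a, b], height V0
V = np.where((x >= a) & (x <= b), V0, 0.0)

x0, k0, sigma = -60.0, 1.5, 10.0               # incident packet, E ~ k0^2/2 < V0
psi = np.exp(-(x - x0)**2/(4*sigma**2) + 1j*k0*x)
psi /= np.sqrt(np.sum(np.abs(psi)**2)*dx)

dt, nsteps = 0.05, 2400
expV = np.exp(-1j*V*dt/2)                      # half-step potential factor
expK = np.exp(-1j*(k**2/2)*dt)                 # full-step kinetic factor

inside = (x >= a) & (x <= b)
barrier_integral = 0.0
for _ in range(nsteps):
    psi = expV*np.fft.ifft(expK*np.fft.fft(expV*psi))
    barrier_integral += np.sum(np.abs(psi[inside])**2)*dx*dt

T_est = np.sum(np.abs(psi[x > b])**2)*dx       # transmission estimate at late time
print("int dt int_a^b |psi|^2 dx =", barrier_integral)
print("estimated transmission T  =", T_est)
\end{verbatim}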
Let us write down the wave packets for transmission and reflection in the form \begin{eqnarray*} \psi_{tr,ref}(x,t)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} A_{in}(k)\psi_{tr,ref}(x,k)e^{-i E(k)t/\hbar}dk; \end{eqnarray*} the expressions for $\psi_{tr}(x,k)$ and $\psi_{ref}(x,k)$ are given in Section \ref{a2}. Then Exps. (\ref{922}) can be rewritten in terms of dwell times (\ref{4005}) and (\ref{40014}): \begin{eqnarray} \label{823} \tau^L_{tr}=\frac{1}{{\bf T}}\int_{0}^{\infty}\varpi(k) T(k)\tau^{tr}_{dwell}(k) dk,\mbox{\hspace{3mm}} \tau^L_{ref}=\frac{1}{{\bf R}}\int_{0}^{\infty}\varpi(k) R(k)\tau^{ref}_{dwell}(k) dk, \end{eqnarray}
where $\varpi(k)=|A_{in}(k)|^2-|A_{in}(-k)|^2.$ Thus, the Larmor times for transmission and reflection are, in fact, the average values of dwell times (\ref{4005}) and (\ref{40014}), respectively.
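The averaging in (\ref{823}) can be sketched numerically as follows. Here $T(k)$ is the textbook transmission coefficient of a rectangular barrier (assumed, but not verified here, to match the paper's conventions), the dwell times are deliberately left as placeholders, since Exps. (\ref{4005}) and (\ref{40014}) are not reproduced in this section, and ${\bf T}=\int_0^\infty\varpi(k)T(k)dk$ is assumed for a normalized $\varpi(k)$:
\begin{verbatim}
import numpy as np

# Weighted averaging of stationary dwell times as in Eq. (823), hbar = m = 1.
# tau_tr(k), tau_ref(k) are PLACEHOLDERS for Exps. (4005) and (40014);
# T(k) is the textbook rectangular-barrier transmission coefficient.

V0, d = 2.0, 2.0
kk = np.linspace(0.05, 4.0, 4000)
dk = kk[1] - kk[0]

kappa0 = np.sqrt(2*V0)
kap = np.sqrt(np.abs(kappa0**2 - kk**2)) + 1e-12
s = np.where(kk < kappa0, np.sinh(kap*d), np.sin(kap*d))
T = 4*kk**2*kap**2/(4*kk**2*kap**2 + kappa0**4*s**2)
R = 1.0 - T

w = np.exp(-(kk - 1.5)**2/(2*0.1**2))          # stands for varpi(k)
w /= np.sum(w)*dk

tau_tr = 1.0/kk                                # placeholder dwell times
tau_ref = 2.0/kk

T_bar = np.sum(w*T)*dk                         # assumption: bold T = <T(k)>
R_bar = 1.0 - T_bar
tau_L_tr = np.sum(w*T*tau_tr)*dk/T_bar
tau_L_ref = np.sum(w*R*tau_ref)*dk/R_bar
print(tau_L_tr, tau_L_ref)
\end{verbatim}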
At the end of this section it is useful to address the rectangular barrier. For the stationary case, in addition to Larmor times (\ref{4007}), (\ref{4009}), (\ref{40030}) and (\ref{40031}), we present explicit expressions for the initial angles. To first order in $\omega_L$, we have $\theta_{tr}^{(0)}=\frac{\pi}{2}-\omega_L \tau_z,$ $\phi_{tr}^{(0)}=\omega_L \tau_0,$ $\theta_{ref}^{(0)}=\frac{\pi}{2}+\omega_L \tau_z$ and $\phi_{ref}^{(0)}=-\omega_L \tau_0,$ where \begin{eqnarray*} \tau_z=\frac{m\kappa_0^2}{\hbar\kappa^2}\cdot\frac{(\kappa^2-k^2)\sinh(\kappa d)+\kappa^2_0\kappa d\cosh(\kappa d)}{4k^2\kappa^2+\kappa_0^4\sinh^2(\kappa d)} \sinh(\kappa d)\\ \tau_z=\frac{m\kappa_0^2}{\hbar\kappa^2}\cdot\frac{\kappa^2_0\kappa d\cos(\kappa d)-\beta(\kappa^2+k^2)\sin(\kappa d)}{4k^2\kappa^2+\kappa_0^4\sin^2(\kappa d)} \sin(\kappa d), \end{eqnarray*} for $E<V_0$ and $E\geq V_0$, respectively; \begin{eqnarray} \label{90028} \tau_0=\frac{2mk}{\hbar\kappa}\cdot\frac{(\kappa^2-k^2)\sinh(\kappa d)+\kappa^2_0\kappa d\cosh(\kappa d)}{4k^2\kappa^2+\kappa_0^4\sinh^2(\kappa d)},\nonumber\\ \tau_0=\frac{2mk}{\hbar\kappa}\cdot\frac{\beta\kappa^2_0\kappa d\cos(\kappa d)-(\kappa^2+k^2)\sin(\kappa d)}{4k^2\kappa^2+\kappa_0^4\sin^2(\kappa d)}, \end{eqnarray} for $E<V_0$ and $E\geq V_0$, respectively.
Note that $\tau_z$ is just the characteristic time introduced in \cite{But} (see Exp. (2.20a)). However, we have to stress once more that this quantity does not describe the duration of the scattering process (see the end of Section \ref{a332}). As regards $\tau_0,$ this quantity is directly associated with timing a particle in the barrier region: it describes the {\it initial} position of the "clock-pointers", which they have before the particle enters this region.
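As a quick numerical check of the $E<V_0$ expressions above, $\tau_z$, $\tau_0$ and the corresponding initial angles can be evaluated directly. The wave-number conventions $k=\sqrt{2mE}/\hbar$, $\kappa_0=\sqrt{2mV_0}/\hbar$ and $\kappa=\sqrt{\kappa_0^2-k^2}$ are assumed here and may differ from those adopted earlier in the paper; all numbers are purely illustrative:
\begin{verbatim}
import numpy as np

# tau_z and tau_0 for E < V0, as written above (hbar = m = 1).
# Assumed conventions: k = sqrt(2mE), kappa0 = sqrt(2mV0),
# kappa = sqrt(kappa0**2 - k**2).

hbar = m = 1.0
V0, d, E = 2.0, 5.0, 1.0
k = np.sqrt(2*m*E)/hbar
kappa0 = np.sqrt(2*m*V0)/hbar
kappa = np.sqrt(kappa0**2 - k**2)

num = (kappa**2 - k**2)*np.sinh(kappa*d) + kappa0**2*kappa*d*np.cosh(kappa*d)
den = 4*k**2*kappa**2 + kappa0**4*np.sinh(kappa*d)**2

tau_z = (m*kappa0**2/(hbar*kappa**2))*(num/den)*np.sinh(kappa*d)
tau_0 = (2*m*k/(hbar*kappa))*(num/den)

omega_L = 1e-3                         # "infinitesimal" Larmor frequency
theta_tr0 = np.pi/2 - omega_L*tau_z    # initial polar angle, transmission
phi_tr0 = omega_L*tau_0                # initial azimuthal angle, transmission
theta_ref0 = np.pi/2 + omega_L*tau_z
phi_ref0 = -omega_L*tau_0
print(tau_z, tau_0)
print(theta_tr0, phi_tr0, theta_ref0, phi_ref0)
\end{verbatim}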
\subsection{Tunneling a particle through an opaque rectangular barrier}\label{a4}
Note that the problem of scattering a particle with a well-defined energy on an opaque rectangular potential barrier is the most suitable case for verifying our approach. Let us denote the final measured azimuthal angles, for transmission and reflection, as $\phi_{tr}^{(\infty)}$ and $\phi_{ref}^{(\infty)},$ respectively. By our approach, $\phi_{tr,ref}^{(\infty)}=\phi_{tr,ref}^{(0)}+\Delta\phi_{tr,ref}$. That is, the final times expected to be registered by the Larmor clock, for transmission and reflection, should equal $\tau^L_{tr}+\tau_0$ and $\tau^L_{ref}-\tau_0,$ respectively.
Note that for a particle scattering on an opaque rectangular barrier (when $\kappa d\gg 1$) we have $|\tau_0|\ll\tau^L_{ref}\ll\tau^L_{tr}$ (see Exps. (\ref{4007}), (\ref{40030}) and (\ref{90028})). As is known, Smith's dwell time $\tau^{Smith}_{dwell}$ (which coincides with the "phase" time) and B\"uttiker's dwell time saturate in this case as the barrier width increases (see Exps. (3.2) and (2.20b) in \cite{But}). It is precisely this property of the tunneling times that is interpreted as the Hartman effect.
At the same time, our approach denies the existence of this effect: the transmission time (\ref{4007}) increases exponentially as $d\to\infty.$ The reflection time (\ref{40030}), of course, naturally saturates in this case.
As regards the Bohmian approach, it formally denies this effect, too. It predicts that the average time, $\tau_{dwell}^{Bohm},$ spent by a transmitted particle in the opaque rectangular barrier is \begin{eqnarray*} \tau^{Bohm}_{dwell}\equiv\frac{1}{T}\tau^{Smith}_{dwell}=\frac{m}{2\hbar k^3\kappa^3}\Big[\left(\kappa^2-k^2\right)k^2\kappa d +\kappa_0^4 \sinh(2\kappa d)/2\Big]. \end{eqnarray*} Thus, for $\kappa d\gg 1$ we have $\tau^{Bohm}_{dwell}/\tau^{tr}_{dwell}\sim \cosh(\kappa d),$ i.e., \[\tau^{Bohm}_{dwell}\gg\tau^{tr}_{dwell}\gg \tau^{Smith}_{dwell} \sim \tau^{Butt}_{dwell}.\]
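These opaque-barrier estimates are easy to check numerically. In the sketch below ($\hbar=m=1$, purely illustrative parameters), $\tau^{Bohm}_{dwell}$ is evaluated from the expression above, and $\tau^{Smith}_{dwell}$ is obtained as $T\,\tau^{Bohm}_{dwell}$ with the textbook transmission coefficient $T=4k^2\kappa^2/[4k^2\kappa^2+\kappa_0^4\sinh^2(\kappa d)]$, which is assumed (but not verified here) to match the paper's conventions:
\begin{verbatim}
import numpy as np

# Opaque-barrier asymptotics (hbar = m = 1, illustrative parameters).
# tau_Bohm is the expression quoted above; tau_Smith = T * tau_Bohm,
# with the textbook rectangular-barrier transmission coefficient T.

hbar = m = 1.0
V0, E = 2.0, 0.8
k = np.sqrt(2*m*E)/hbar
kappa0 = np.sqrt(2*m*V0)/hbar
kappa = np.sqrt(kappa0**2 - k**2)

for d in (2.0, 5.0, 10.0, 20.0):
    tau_bohm = m/(2*hbar*k**3*kappa**3)*(
        (kappa**2 - k**2)*k**2*kappa*d + kappa0**4*np.sinh(2*kappa*d)/2)
    T = 4*k**2*kappa**2/(4*k**2*kappa**2 + kappa0**4*np.sinh(kappa*d)**2)
    print(f"d={d:5.1f}  tau_Smith={T*tau_bohm:10.4f}  tau_Bohm={tau_bohm:12.4e}")
\end{verbatim}
The output shows $\tau^{Smith}_{dwell}$ saturating with $d$ (the Hartman effect), while $\tau^{Bohm}_{dwell}$ grows exponentially, in line with the relations displayed above.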
As is seen from these estimates, in comparison with our definition, $\tau^{Bohm}_{dwell}$ overestimates the time spent by transmitted particles in the barrier region. Of course, at this point we should recall that the existing Bohmian model of the scattering process is inconsistent, since it contains nonlocality.
However, it is also useful to point out that $\tau_{dwell}^{Bohm}$, intended to describe transmission, was obtained in terms of $\psi_{full}.$ One can show that the contribution of to-be-reflected particles to $\int_a^b|\psi_{full}(x,k)|^2 dx$ is dominant inside the region of an opaque potential barrier. Therefore, treating this time scale as the characteristic time for transmission has no basis.
So, we state that the "causal" trajectories of transmitted and reflected particles introduced in Bohmian mechanics are, in fact, ill-defined. However, we have to stress that our approach does not at all deny Bohmian mechanics. It rather says that the "causal" trajectories of scattered particles should be redefined. Indeed, an incident particle should have two possibilities (to be transmitted or to be reflected by the barrier) irrespective of the location of its starting point. This means that exactly two causal trajectories should evolve from each starting point. Both sets of causal trajectories must be defined on the basis of $\psi_{tr}(x,t)$ and $\psi_{ref}(x,t).$ As to the rest, all the mathematical tools developed in Bohmian mechanics (see, e.g., \cite{Bo1,Kr1}) remain in force.
At the end of this section it is very important to stress that the group transmission and reflection times (see \cite{Ch5}), which coincide for symmetric potential barriers, lead to the Hartman effect, as in the previous approaches. Thus, our model reveals a deep difference between the dwell and group times. Only one of them has a physical meaning, and the Larmor-clock timing procedure resolves this dilemma in favor of the former. As regards the group time, it cannot be measured for scattering particles and, hence, says nothing about the effective velocity with which a particle (signal, information) passes through the barrier region.
\section{Conclusion}
It is shown that a 1D completed scattering can be considered as an entanglement of two alternative sub-processes, transmission and reflection, which are macroscopically distinct at the final stage of scattering. For this quantum process, the (entangled) state of the whole ensemble of particles can be uniquely presented as the sum of two solutions to the Schr\"odinger equation, which carry all the information needed about the time evolution of either sub-process at all stages of scattering.
We develop a Larmor timing procedure that allows measuring the average time spent in the barrier region by the particles of either subensemble. This procedure shows that it is the dwell time that gives the time spent, on average, by a particle in the barrier region. As regards the group time, it cannot be measured with the Larmor clock for scattering particles and, hence, has no physical meaning in this case.
\end{document}