Major change coming to FAFSA filing
Instead of having to wait until January, students will be able to start filling it out in October.
People sang and prayed at the event.
Hundreds of people walked Sunday at White River State Park to cut that number.
The attack happened last Saturday at Obadiah’s Smoke Shop on the 3800 block of Georgetown Road.
Bruce Caffee, 44, died at the scene, according to deputies.
The fire forced eight people out of their apartments.
It happened near the intersection of 34th Street and Georgetown Road.
A waterspout formed over the water of Tampa Bay and was visible from downtown St. Pete.
The Perfect Guy was able to post strong numbers on what is traditionally the weakest weekend of the year.
At least 100 homes were destroyed.
INDIANAPOLIS (AP) — Shavonte Zellous scored 11 of her season-high 22 points in the final five minutes on Sunday to help the Indiana Fever be…
It happened in the 5200 block of Brookville Road just after 7 a.m. Sunday.
“Those stories are false.”
The Beech Group has been awarded for being one of the fastest growing companies in America. Beech …
To Positively Impact the lives we touch
Your Health SOLUTIONS Company helping you achieve better outcomes.
We provide innovative programs in population health, complex case management and home healthcare to Insurance Companies, Employers, State Medicaid, Hospitals, Healthcare Providers and Individuals
Per-Capitated Home Health
Quick Facts
Beech Care News
inBusiness Magazine
Best of 2016 Award
Sale!
CEBE Wild Translucent Grey (Photochromic) 2 Lens Set
Unisex style
Matte translucent grey frame – 2 Lens Interchangeable Set supplied with photochromic Grey lenses with silver flash mirror and a Clear lenses – Cebe Zone Vario photochromic lenses automatically adjust to weather & light conditions (Cat. 1-3) – Class 1 optical quality lenses – Impact resistant, shatterproof polycarbonate lenses – Ultra-lightweight frame – Anti-scratch lens coating – Non-slip nose pads and temple tips – Adjustable nose pads – UV400 rated
Supplied with soft carry pouch and hard case
Free UK delivery
£69.95
Out of stock
TITLE: how to write the Radau 2nd order methods (Butcher’s table)
QUESTION [1 upvotes]: I would like to solve the Robertson problem; you can find the system here.
This is a stiff system of ODEs, which requires an implicit solver of high (more than first) order. In particular, the RADAU IIA method was used, as reported in several papers.
I would like to understand and know the final form of the second-order RADAU IIA method,
for which I have the Butcher tableau:
1/3 | 5/12 -1/12
1 | 3/4 1/4
_____|_______________
| 3/4 1/4
You can find references about Radau methods and Butcher tableaux here.
Could someone explain how to derive the Radau method, and how the tableau works? Thanks in advance.
REPLY [4 votes]: Let $y'(t) = f(t,y(t))$, $y(t_0)=y_0$ be your initial value problem. Set $u_0 = y_0$. Then the $(i+1)$-th iteration of a Runge–Kutta method with $s$ stages is defined as
$$
u_{i+1} = u_i + h \sum_{j=1}^{s} b_j \cdot f(t_i+c_j \cdot h,\, u^{(j)}_{i+1}) \\
u^{(j)}_{i+1} = u_i + h \sum_{k=1}^{s} a_{jk} \cdot f(t_i + c_k \cdot h,\, u^{(k)}_{i+1})
$$
where
$$\begin{array}{c|ccc}
c_1 & a_{11} & \cdots & a_{1s}\\
\vdots & \vdots & \ddots & \vdots \\
c_s & a_{s1} & \cdots & a_{ss} \\ \hline
& b_1 & \cdots & b_s
\end{array}
$$
is the given Butcher tableau.
The Butcher tableau for the second-order RADAU IIA method (which is an L-stable Runge–Kutta method) yields:
$$
u_{i+1} = u_i + h \cdot \left( \tfrac34 \cdot f(t_i+\tfrac13 h,\, u^{(1)}_{i+1}) + \tfrac14 \cdot f(t_i+h,\, u^{(2)}_{i+1}) \right)
$$
with
$$
u^{(1)}_{i+1} = u_i + h \cdot \left( \tfrac{5}{12} \cdot f(t_i + \tfrac13 h,\, u^{(1)}_{i+1}) - \tfrac{1}{12} \cdot f(t_i + h,\, u^{(2)}_{i+1}) \right) \\
u^{(2)}_{i+1} = u_i + h \cdot \left( \tfrac{3}{4} \cdot f(t_i + \tfrac13 h,\, u^{(1)}_{i+1}) + \tfrac{1}{4} \cdot f(t_i + h,\, u^{(2)}_{i+1}) \right)
$$
Because RADAU IIA is an implicit method, you have to solve the nonlinear system of equations defined by the last two lines for $u^{(1)}_{i+1}$ and $u^{(2)}_{i+1}$ in each iteration.
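To make this concrete, here is a minimal numerical sketch (my own illustration, not part of the original answer) of a single step of this method for a scalar ODE. The two coupled stage equations are solved with Newton's method, the standard approach for stiff problems; the test problem and all names are chosen just for the example.

```python
import math

# Butcher tableau of the RADAU IIA method discussed above
A = [[5/12, -1/12],
     [3/4,   1/4]]
b = [3/4, 1/4]
c = [1/3, 1.0]

def radau_iia_step(f, dfdy, t, y, h, tol=1e-12, maxit=50):
    """Advance y' = f(t, y) (scalar) by one step of size h.

    The stage derivatives K_j = f(t + c_j*h, u_j) satisfy the nonlinear
    system K_j = f(t + c_j*h, y + h*sum_l A[j][l]*K_l); we solve it with
    Newton's method, using dfdy = df/dy to build the 2x2 Jacobian."""
    k = [f(t, y), f(t, y)]  # initial guess for the stage derivatives
    for _ in range(maxit):
        u = [y + h * (A[j][0] * k[0] + A[j][1] * k[1]) for j in range(2)]
        r = [k[j] - f(t + c[j] * h, u[j]) for j in range(2)]  # residuals
        # Jacobian of the residual: J[j][l] = delta_jl - h * f_y(u_j) * A[j][l]
        J = [[(1.0 if j == l else 0.0) - h * dfdy(t + c[j] * h, u[j]) * A[j][l]
              for l in range(2)] for j in range(2)]
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        # Newton update: solve J * dk = -r by Cramer's rule
        dk0 = (-r[0] * J[1][1] + r[1] * J[0][1]) / det
        dk1 = (-r[1] * J[0][0] + r[0] * J[1][0]) / det
        k[0] += dk0
        k[1] += dk1
        if abs(dk0) + abs(dk1) < tol:
            break
    return y + h * (b[0] * k[0] + b[1] * k[1])

# Stiff test problem: y' = -1000*(y - cos(t)), y(0) = 1.
# An explicit method would need h << 1/1000; the implicit step stays stable.
lam = -1000.0
f = lambda t, y: lam * (y - math.cos(t))
dfdy = lambda t, y: lam

t, y, h = 0.0, 1.0, 0.1
for _ in range(10):
    y = radau_iia_step(f, dfdy, t, y, h)
    t += h
print(y, math.cos(t))  # y closely tracks the slow solution cos(t)
```

For a system like the Robertson problem, $y$ and the stages become vectors and the $2\times 2$ solve becomes a $2n\times 2n$ linear solve, but the structure is identical; in practice one would use a library implementation such as `scipy.integrate.solve_ivp(..., method='Radau')`.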
Last.
LOL, thanks for sharing! I love the headband. Your hair looks nice as always :-)
Aww.. Thanks! You flatter me!!
TITLE: Why are these algebraic manipulations needed to conclude this fixed point theorem?
QUESTION [0 upvotes]: In Evasiveness of Graph Properties and Topological Fixed Point Theorems on page 46 it states that (where $F$ is the associated chain map for any function $f$):
$$Tr_F(H_0(\Delta, \mathbb{F}_p)) = 1$$
Because $F$ acts "trivially" on $H_0$.
Later (on the same page) it mentions that, since in this case $f$ is an automorphism, the matrix $F$ has zeros on the diagonal, and hence the trace is always zero. So:
$$Tr_F(H_n(\Delta, \mathbb{F}_p)) = 0$$ for $n \geq 0$.
The paper then proceeds to give a long, complicated series of summations and algebraic manipulations to show that $0$ does not in fact equal $1$. Why is it not already a contradiction to note that the trace is always $1$ (because $F$ acts trivially) and simultaneously $0$ (because $f$ being an automorphism implies a zero trace for $H_0$)?
REPLY [2 votes]: You're confusing two different actions of $f$. The reason that $Tr_F(H_n(\Delta, \mathbb{F}_p)) = 0$ for $n>0$ is not that the matrix has zeroes on the diagonal; it's simply that $H_n(\Delta, \mathbb{F}_p)=0$ so you are taking the trace of a $0\times 0$ matrix. This argument only applies for $n>0$, since $H_0(\Delta,\mathbb{F}_p)$ is not trivial.
The matrix which has zeroes on the diagonal is the action of $f$ on $K_n(\Delta,\mathbb{F}_p)$: that is, on the chains, not the homology. This is because we are assuming $f$ fixes no simplices (not because $f$ is an automorphism), and a basis for the chains is given by the simplices. This does not apply to the action of $f$ on homology, and in particular does not tell you that $Tr_F(H_0(\Delta, \mathbb{F}_p))$ should be $0$.
\begin{document}
\vspace{20pt}
\begin{center}
\large
\textbf{AN EXTENSION OF FRANKLIN'S BIJECTION}
\\
\normalsize
\vspace{12pt}
\textsc{BY DAVID P. LITTLE}
\footnote{Research carried out under NSF grant support}\\
University of California, San Diego\\
\today
\end{center}
\vspace{20pt}
\begin{abstract}
We are dealing here with the power series expansion of the product
$F_m(q)=\prod_{n> m} (1-q^n)$. This expansion may be readily obtained from an
identity of Sylvester and the latter, in turn, may be given a relatively
simple combinatorial proof. Nevertheless, the problem remains to give a
combinatorial explanation for the massive cancellations which produce the
final result. The case $m=0$ is clearly explained by Franklin's proof of the
Euler Pentagonal Number Theorem. Efforts to extend the same mechanism of proof
to the general case $m>0$ have led to the discovery of an extension
of the Franklin involution which explains all the components of
the final expansion.
\end{abstract}
\vspace{12pt}
\section{Introduction}
Sylvester \cite[p. 281]{Syl82} used Durfee squares to prove the following result.
\begin{theorem}
\begin{equation}
\label{SYLVESTER}
\prod_{n\geq1}(1+zq^n)=1+\sum_{n\geq1} z^n q^{\frac{3n^2-n}{2}}(1+zq^{2n})(-zq)_{n-1}/(q)_n
\end{equation}
\end{theorem}
where $(z)_n=(1-z)(1-zq)\cdots (1-zq^{n-1}).$ Multiplying the above equation
by $1+z$ and then setting $z=-q^{m+1}$ for any $m \geq 0$ yields
\begin{equation}
\label{GENERALFORMULA}
\prod_{n>m}(1-q^n)=\sum_{n\geq0}(-1)^n \qbin{n+m}{m} q^{\frac{3n^2+n}{2}+nm}(1-q^{2n+m+1})
\end{equation}
where
\begin{equation}
\label{GAUSSIAN}
\qbin{n+m}{m}=\frac{(q)_{n+m}}{(q)_n(q)_m}
\end{equation}
is the usual $q$-analog of the binomial coefficients. When $m=0$,
formula (\ref{GENERALFORMULA}) is none other than Euler's Pentagonal Number
Theorem,
\begin{eqnarray}
\label{PENTAGONAL}
\prod_{n>0}(1-q^n)&=&\sum_{n\geq0}(-1)^nq^{\frac{3n^2+n}{2}}(1-q^{2n+1}).
\end{eqnarray}
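For orientation, the first several terms of this expansion (easily checked from the right-hand side of (\ref{PENTAGONAL})) read
\begin{equation*}
\prod_{n>0}(1-q^n)=1-q-q^2+q^5+q^7-q^{12}-q^{15}+q^{22}+q^{26}-\cdots,
\end{equation*}
the surviving exponents being the pentagonal numbers $\frac{3n^2\pm n}{2}$.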
Of course, in the process of setting $z=-q^{m+1}$, we invite a tremendous
amount of cancelation to occur, none of which is explained by Sylvester's
proof of (\ref{SYLVESTER}), which has been included in the following section
for the sake of completeness. However, Franklin's proof of (\ref{PENTAGONAL})
does exactly that, and in fact, offers an explanation for {\em every} single
cancelation which occurs. It would be of historical interest to extend
Franklin's ideas to explain as many of the cancelations as possible in
(\ref{GENERALFORMULA}) for any $m\geq 1$. This will be the focus of the
remainder of the paper.
\section{Sylvester's Proof of Theorem \ref{SYLVESTER}}
The left-hand side of (\ref{SYLVESTER}) can be thought of as the generating
function for partitions $\la{}$, with $k$ distinct parts $>0$ weighted by
$z^kq^{|\la{}|}$, where $|\la{}|=\la{1}+\la{2}+\cdots +\la{k}$. To prove
Sylvester's identity, we need to show that the right-hand side of (\ref{SYLVESTER})
enumerates the exact same objects.
We begin by noting that the Durfee square associated with $\la{}$, $\mathcal{D}(\la{})$, is the largest square contained in the Ferrers diagram \cite[p. 7]{And76} of $\la{}$. The dimension, $d(\la{})$, of this square can be defined as the maximum $i$ such that $\la{i} \geq i$. Using the Durfee square to classify these partitions, we see that $\la{}$ can fall into one of two distinct categories. The first category is comprised of partitions $\la{}$ such that $\la{n+1}<n$, where for convenience we have set $n=d(\la{})$. A typical partition in this category might look like the diagram below.
\begin{center}
\begin{pspicture}(0,0)(4,2)
\psset{unit=.33cm}
\pscustom[linewidth=.5pt,fillstyle=solid,fillcolor=white]{
\pspolygon(0,0) (12,0) (12,1) (9,1) (9,2) (6,2) (6,3) (5,3) (5,4) (3,4) (3,5) (1,5) (1,6) (0,6)
\pspolygon(0,0) (4,0) (4,4) (0,4)
}
\rput(2,2){$\mathcal{D}(\la{})$}
\end{pspicture}
\end{center}
Directly above $\mathcal{D}(\la{})$ can be any partition with distinct parts $<n$.
These partitions are generated by $(-zq)_{n-1}$. Directly to the right of
$\mathcal{D}(\la{})$ can be any partition with exactly $n$ distinct parts $\geq0$.
The generating function for these partitions is $z^nq^{n \choose 2}/(q)_n$. Putting
this all together, any partition falling into this category can be accounted for in
the following term
\begin{equation}
\label{TYPE1}
z^nq^{n^2+{n \choose 2}}(-zq)_{n-1}/(q)_n.
\end{equation}
The second category is comprised of partitions $\la{}$ such that $\la{n+1}=n$.
Note that this is the only other possibility since $\la{n+1}$ cannot be $\geq
n+1$ by the definition of $d(\la{})$. In this case, $\la{}$ must be of the
following form.
\begin{center}
\begin{pspicture}(0,0)(4.5,2.7)
\psset{unit=.33cm}
\pscustom[linewidth=.5pt,fillstyle=solid,fillcolor=white]{
\pspolygon(0,0) (13,0) (13,1) (10,1) (10,2) (7,2) (7,3) (6,3) (6,4) (4,4) (4,5) (3,5) (3,6) (1,6) (1,7) (0,7)
\pspolygon(0,0) (5,0) (5,4) (0,4)
\pspolygon(0,0) (4,0) (4,5) (0,5)
}
\psset{unit=1cm}
\rput(.7,.7){$\mathcal{D}(\la{})$}
\rput(-.1,1.5){1}
\rput(1.5,-.2){1}
\end{pspicture}
\end{center}
Directly above $\mathcal{D}(\la{})$ can be any partition with distinct parts $\leq n$
and largest part equal to $n$. Directly to the right of $\mathcal{D}(\la{})$ can be any
partition with exactly $n$ distinct parts $>0$. The following term accounts for
any partition falling into this category.
\begin{equation}
\label{TYPE2}
z^{n+1}q^{n^2+{n \choose 2}+2n}(-zq)_{n-1}/(q)_n.
\end{equation}
Combining (\ref{TYPE1}) and (\ref{TYPE2}), we get the summand in the
right-hand side of (\ref{SYLVESTER}), and summing over all values of $n\geq1$
completes the proof.
\section{Extending Franklin's Bijection}
Franklin's proof \cite[p. 10]{And76} of Euler's Pentagonal Number Theorem begins by
defining two sets of cells contained in the Ferrers diagram associated with a fixed
partition. For our purposes we will need to extend these definitions as well as
further classify the cells involved.
Fix $m\geq0$ and $\la{}$, a partition with $n$ distinct parts $>m$. Define a
{\em stair} to be a cell in the Ferrers diagram associated with $\la{}$ at the end of
a row or the top of one of the $\la{n}-m-1$ left-most columns. Of the remaining
cells, define a {\em landing} to be any cell that does not have another cell above
it. The {\em m-landing staircase} is the sequence of neighboring stairs and
landings, starting with the stair at the end of the first row, with exactly $m$
landings, using as many stairs occurring at the end of a row as possible. Let
$\mathcal{S}_m(\la{})$ refer to the cells in the {\em m}-landing staircase, with
$s_m(\la{})$ defined to be $|\mathcal{S}_m(\la{})|$, and let $\mathcal{T}(\la{})$
refer to the cells in the top row of $\la{}$, with $t(\la{})$ defined to be
$|\mathcal{T}(\la{})|=\la{n}$. Lastly, we define the weight of $\la{}$, $w(\la{})$,
to be $(-1)^nq^{|\la{}|}$.
For example, let $m=3$ and $\la{}=(14,11,9,8,6)$, then the Ferrers diagram would be
labelled as in the figure below, with stairs and landings denoted by S's and L's,
respectively and cells belonging to $\mathcal{S}_3(\la{})$ shaded.
\begin{center}
\begin{pspicture}(0,0)(5,1.7)
\mylabelledrow{yellow}{.33}{0}{0}{15}{S}{(.08,.06)}
\mylabelledrow{yellow}{.33}{0}{0}{13}{L}{(.08,.06)}
\mylabelledrow{white}{.33}{0}{0}{11}{ }{(0,0)}
\mylabelledrow{yellow}{.33}{0}{1}{11}{S}{(.08,.39)}
\mylabelledrow{yellow}{.33}{0}{1}{10}{L}{(.08,.39)}
\mylabelledrow{white}{.33}{0}{1}{9}{ }{(0,0)}
\mylabelledrow{yellow}{.33}{0}{2}{9}{S}{(.08,.72)}
\mylabelledrow{white}{.33}{0}{2}{8}{ }{(0,0)}
\mylabelledrow{yellow}{.33}{0}{3}{8}{S}{(.08,1.05)}
\mylabelledrow{white}{.33}{0}{3}{7}{L}{(.08,1.05)}
\mylabelledrow{white}{.33}{0}{3}{6}{ }{(0,0)}
\mylabelledrow{white}{.33}{0}{4}{6}{S}{(.08,1.38)}
\mylabelledrow{white}{.33}{0}{4}{5}{L}{(.08,1.38)}
\mylabelledrow{white}{.33}{0}{4}{2}{S}{(.08,1.38)}
\end{pspicture}
\end{center}
By definition, an {\em m}-landing staircase must have exactly $m$ landings and can
have anywhere from $1$ to $n$ stairs. Since it will be an extremely useful fact for
proving (\ref{GENERALFORMULA}), we shall restate this in the following form.
\begin{lemma}
\label{s_mLEMMA}
Let $\la{}$ be a partition with $n$ distinct parts $>m$. Then the
following inequalities must hold.
\begin{equation}
m+1 \leq s_m(\la{}) \leq m+n
\end{equation}
\end{lemma}
Armed with these definitions and the above lemma, we are now in a position
to prove the following
\begin{lemma}
\label{LEMMAM=1}
\begin{equation}
\prod_{n>1}(1-q^n)=\sum_{n\geq0}(-1)^nq^{\frac{3n^2+n}{2}}(1+q+q^2+\cdots+q^{2n}).
\end{equation}
\end{lemma}
Although its validity can be readily checked by dividing both sides of
(\ref{PENTAGONAL}) by $(1-q)$, it will prove more insightful to obtain
(\ref{LEMMAM=1}) through a combinatorial means which can be easily extended to
prove (\ref{GENERALFORMULA}).
\vspace{12pt}
\noindent \textbf{Proof of Lemma \ref{LEMMAM=1}} \newline
Notice that the left-hand side of (\ref{LEMMAM=1}) can be written in the form
\begin{equation}
\label{EXPANDLHS}
\sum_{n\geq 0}\sum_{\la{}=(\la{1}>\cdots >\la{n})}w(\la{})
\end{equation}
We will proceed by defining a bijection, $I$, that pairs off a partition,
$\la{}$, with $I(\la{})$, in such a way that $w(I(\la{}))=-w(\la{})$ whenever
$\la{} \ne I(\la{})$. This will allow us to reduce the inner summation of
(\ref{EXPANDLHS}) to a finite sum that accounts only for the fixed points of
$I$. The idea is to use 1-landing staircases in a manner similar to the way
Franklin used staircases (i.e. 0-landing staircases) to prove
(\ref{PENTAGONAL}). The basic principle of the involution is this:
\begin{enumerate}
\item If $t(\la{})\leq s_1(\la{})$, move $\mathcal{T}(\la{})$, if possible, to the outside of $\mathcal{S}_1(\la{})$ so that $s_1(I(\la{}))=t(\la{})$ and
\item If $t(\la{})> s_1(\la{})$, move $\mathcal{S}_1(\la{})$, if possible, to the empty row above $\mathcal{T}(\la{})$.
\end{enumerate}
The best way to see what is meant by ``if possible", is to break up the
definition of $I$ into two cases. Case 1 is when $s_1(\la{})<1+n$, which
means that $\mathcal{S}_1(\la{})$ {\em cannot} reach the top row of $\la{}$, and thus
it will always be possible to move either $\mathcal{T}(\la{})$ or
$\mathcal{S}_1(\la{})$. In the event that $t(\la{})\leq s_1(\la{})$, move the landing
in $\mathcal{T}(\la{})$ so that it is directly above the landing in the first
$t(\la{})-2$ rows. If there is no landing in these rows, then place the landing at
the end of the first row. Now move the stairs in $\mathcal{T}(\la{})$ by placing one
at the end of the first $t(\la{})-1$ rows. Moving $\mathcal{T}(\la{})$ in this
manner will guarantee that $s_1(I(\la{}))=t(\la{})$, as required. This procedure is
illustrated in the following example.
\begin{center}
\begin{equation}
\label{CASE11}
\begin{pspicture}(0,0)(11,1)
\psset{unit=.3cm}
\myrow{white}{0}{0}{10}
\myrow{white}{0}{1}{8}
\myrow{white}{0}{2}{7}
\myrow{white}{0}{3}{5}
\myrow{yellow}{0}{4}{4}
\psline{->}(10.33,3)(12,3)
\myrow{white}{13}{0}{10}
\myrow{yellow}{13}{1}{9}
\myrow{white}{13}{1}{8}
\myrow{white}{13}{2}{7}
\myrow{white}{13}{3}{5}
\myrow{yellow}{13}{4}{2}
\myrow{yellow}{16}{4}{1}
\psline{->}(23.33,3)(25,3)
\myrow{yellow}{26}{0}{11}
\myrow{white}{26}{0}{10}
\myrow{yellow}{26}{1}{10}
\myrow{white}{26}{1}{8}
\myrow{yellow}{26}{2}{8}
\myrow{white}{26}{2}{7}
\myrow{white}{26}{3}{5}
\end{pspicture}
\end{equation}
\end{center}
In the event that $t(\la{})> s_1(\la{})$, move $\mathcal{S}_1(\la{})$ to the top
row, as in the diagram below.
\begin{center}
\begin{equation}
\label{CASE12}
\begin{pspicture}(0,0)(8.66,1.2)
\psset{unit=.33cm}
\myrow{yellow}{0}{0}{11}
\myrow{white}{0}{0}{10}
\myrow{yellow}{0}{1}{10}
\myrow{white}{0}{1}{8}
\myrow{yellow}{0}{2}{8}
\myrow{white}{0}{2}{7}
\myrow{white}{0}{3}{5}
\psline{->}(12,3)(14.4,3)
\myrow{white}{16}{0}{10}
\myrow{white}{16}{1}{8}
\myrow{white}{16}{2}{7}
\myrow{white}{16}{3}{5}
\myrow{yellow}{16}{4}{4}
\end{pspicture}
\end{equation}
\end{center}
Notice that this operation will not result in a partition with a part $<2$,
since $t(I(\la{}))=s_1(\la{})\geq2$, by Lemma \ref{s_mLEMMA}.
Case 2 of the involution is when $s_1(\la{})=1+n$. In this case,
$\mathcal{S}_1(\la{})$ {\em must} reach the top row of $\la{}$, and thus it might not
be possible to move either $\mathcal{T}(\la{})$ or $\mathcal{S}_1(\la{})$. In other
words, $\mathcal{S}_1(\la{})$ shares at least one cell with $\mathcal{T}(\la{})$ and
possibly two, if the landing in $\mathcal{S}_1(\la{})$ occurs in the last row of
$\la{}$. For this reason, we'll denote the row of $\la{}$ in which the landing
occurs by $r(\la{})$. For Case 2a, we will assume that $r(\la{})<n$. If $t(\la{})
\leq s_1(\la{})-1$, move $\mathcal{T}(\la{})$ in a similar manner to (\ref{CASE11})
\begin{center}
\begin{equation}
\label{CASE21}
\begin{pspicture}(0,0)(8.33,1.2)
\psset{unit=.33cm}
\myrow{white}{0}{0}{9}
\myrow{white}{0}{1}{8}
\myrow{white}{0}{2}{7}
\myrow{white}{0}{3}{5}
\myrow{yellow}{0}{4}{4}
\psline{->}(10,3)(12.4,3)
\myrow{yellow}{14}{0}{11}
\myrow{white}{14}{0}{9}
\myrow{yellow}{14}{1}{9}
\myrow{white}{14}{1}{8}
\myrow{yellow}{14}{2}{8}
\myrow{white}{14}{2}{7}
\myrow{white}{14}{3}{5}
\end{pspicture}
\end{equation}
\end{center}
and if $t(\la{})-1>s_1(\la{})$, move $\mathcal{S}_1(\la{})$ in a similar manner
to (\ref{CASE12}).
\begin{center}
\begin{equation}
\label{CASE22}
\begin{pspicture}(0,0)(8.66,1.2)
\psset{unit=.33cm}
\myrow{yellow}{0}{0}{11}
\myrow{white}{0}{0}{10}
\myrow{yellow}{0}{1}{10}
\myrow{white}{0}{1}{9}
\myrow{yellow}{0}{2}{9}
\myrow{white}{0}{2}{7}
\myrow{yellow}{0}{3}{7}
\myrow{white}{0}{3}{6}
\psline{->}(12,3)(14.4,3)
\myrow{white}{16}{0}{10}
\myrow{white}{16}{1}{9}
\myrow{white}{16}{2}{7}
\myrow{white}{16}{3}{6}
\myrow{yellow}{16}{4}{5}
\end{pspicture}
\end{equation}
\end{center}
For Case 2b, we will assume that $r(\la{})=n$. If $t(\la{}) \leq s_1(\la{})-1$,
then the involution is performed just as in (\ref{CASE11}) and (\ref{CASE21}).
\begin{center}
\begin{pspicture}(0,0)(8.33,1.6)
\psset{unit=.33cm}
\myrow{white}{0}{0}{9}
\myrow{white}{0}{1}{8}
\myrow{white}{0}{2}{7}
\myrow{white}{0}{3}{6}
\myrow{yellow}{0}{4}{5}
\psline{->}(10,3)(12.4,3)
\myrow{yellow}{14}{0}{11}
\myrow{white}{14}{0}{9}
\myrow{yellow}{14}{1}{9}
\myrow{white}{14}{1}{8}
\myrow{yellow}{14}{2}{8}
\myrow{white}{14}{2}{7}
\myrow{yellow}{14}{3}{7}
\myrow{white}{14}{3}{6}
\end{pspicture}
\end{center}
Notice that the above example was previously a fixed point of Franklin's involution.
And finally, if $t(\la{})-2>s_1(\la{})$, then the involution is similar to
(\ref{CASE12}) and (\ref{CASE22}).
\begin{center}
\begin{pspicture}(0,0)(8.66,1.6)
\psset{unit=.33cm}
\myrow{yellow}{0}{0}{11}
\myrow{white}{0}{0}{10}
\myrow{yellow}{0}{1}{10}
\myrow{white}{0}{1}{9}
\myrow{yellow}{0}{2}{9}
\myrow{white}{0}{2}{8}
\myrow{yellow}{0}{3}{8}
\myrow{white}{0}{3}{6}
\psline{->}(12,3)(14.4,3)
\myrow{white}{16}{0}{10}
\myrow{white}{16}{1}{9}
\myrow{white}{16}{2}{8}
\myrow{white}{16}{3}{6}
\myrow{yellow}{16}{4}{5}
\end{pspicture}
\end{center}
In the event that $\la{}$ does not fit into one of the above categories,
simply define $I(\la{})=\la{}$. For example, moving $\mathcal{T}(\la{})$
could shorten $\mathcal{S}_1(\la{})$ to the point that $\mathcal{T}(\la{})$ is
too big to move, as in (\ref{FIXED}a). Similarly, moving
$\mathcal{S}_1(\la{})$ could shorten $\mathcal{T}(\la{})$ to the point where
$\mathcal{S}_1(\la{})$ is also too big, as in (\ref{FIXED}b).
\begin{center}
\begin{equation}
\label{FIXED}
\begin{pspicture}(0,0)(8,1)
\psset{unit=.33cm}
\rput(-1,2){a)}
\myrow{white}{0}{0}{9}
\myrow{white}{0}{1}{7}
\myrow{white}{0}{2}{6}
\myrow{yellow}{0}{3}{5}
\rput(12.6,2){b)}
\myrow{yellow}{14}{0}{10}
\myrow{white}{14}{0}{9}
\myrow{yellow}{14}{1}{9}
\myrow{white}{14}{1}{7}
\myrow{yellow}{14}{2}{7}
\myrow{white}{14}{2}{6}
\myrow{yellow}{14}{3}{6}
\myrow{white}{14}{3}{5}
\end{pspicture}
\end{equation}
\end{center}
\noindent The following table summarizes the fixed points of $I$.
\begin{center}
\begin{tabular}{|c|c|c|l|}
\hline
$s_1(\la{})$ & $t(\la{})$ & $r(\la{})$ & $|\la{}|$ \\
\hline
$n+1$ & $n+1$ & $\{1,2,\dots,n-1\}$ & $n^2 + {n+1\choose 2} + r(\la{})$ \\
$n+1$ & $n+2$ & $\{1,2,\dots,n-1\}$ & $n^2 + {n+1\choose 2} + n + r(\la{})$ \\
$n+1$ & $n+1$ & $n$ & $n^2 + {n+1\choose 2}$ \\
$n+1$ & $n+2$ & $n$ & $n^2 + {n+1\choose 2} + n$ \\
$n+1$ & $n+3$ & $n$ & $n^2 + {n+1\choose 2} + 2n$ \\
\hline
\end{tabular}
\end{center}
\noindent We can now replace the inner summation in (\ref{EXPANDLHS}) with
$$
\sum_{\la{}=I(\la{})} w(\la{})
=(-1)^n q^{\frac{3n^2+n}{2}}(1+q+q^2+\cdots+q^{2n})
$$
which completes our proof.
\vspace{20pt}
We are now in possession of a mechanism that can be easily generalized to prove formula (\ref{GENERALFORMULA}). However, we must first formalize the definition of our involution for a fixed $m\geq 1$. Having done that, a simple observation regarding $m$-landing staircases will provide the key to determining a necessary and sufficient characteristic of fixed points.
\vspace{20pt}
\noindent \textbf{Proof of Formula (\ref{GENERALFORMULA})} \newline
Let $\la{}$ be a partition with $n$ distinct parts $>m$. Let $\tau(\la{})$ be the result of moving $\mathcal{T}(\la{})$ to the outside of $\mathcal{S}_m(\la{})$. This is accomplished by placing a landing from $\mathcal{T}(\la{})$ on top of each landing in the $t(\la{})-m-1$ bottommost rows of $\mathcal{S}_m(\la{})$. Any landings still remaining in $\mathcal{T}(\la{})$ should be placed at the end of the first row. Next, place the stairs from $\mathcal{T}(\la{})$ at the ends of the $t(\la{})-m$ bottommost rows. This process will insure that $s_m(\tau(\la{}))=t(\la{})$, which is necessary in order to reverse the process. Let $\sigma(\la{})$ be the result of moving $\mathcal{S}_m(\la{})$ to the empty row above $\mathcal{T}(\la{})$. Notice that we cannot apply $\tau$ and $\sigma$ to just any partition $\la{}$ with parts $>m$, so to make up for this, we define $I$ as follows.
$$
I(\la{})=
\left\{
\begin{array}{cl}
\tau(\la{})& \mbox{\rm if $t(\la{})\leq s_m(\la{})$ \quad \& \quad
$t(\la{})<m+n$,}\\
\sigma(\la{})& \mbox{\rm if $t(\la{})-|\mathcal{T}(\la{}) \cap
\mathcal{S}_m(\la{})|>s_m(\la{})$,}\\
\la{}& \mbox{\rm otherwise.}
\end{array}
\right.
$$
$I$ is an involution since $\tau$ and $\sigma$ are inverses of each other and if
$\mu=\tau(\la{})$, then
$$
t(\mu)-|\mathcal{T}(\mu) \cap \mathcal{S}_m(\mu)| = \la{n-1}
> \la{n} = t(\la{}) = s_m(\mu)
$$
and if $\mu=\sigma(\la{})$, then
$$
t(\mu)=s_m(\la{})\leq s_m(\mu) \quad \& \quad t(\mu)=s_m(\la{})\leq m+n.
$$
Notice that if $\la{}$ is a fixed point, then $t(\la{})\geq m+n$ and $s_m(\la{})=m+n$.
This means that the partition $\la{}^*=(2n-1+m,2n-2+m,\ldots ,n+m)$ is the smallest
fixed point of $I$ with exactly $n$ parts. The weight of $\la{}^*$ is given by
\begin{equation}
w(\la{}^*)=(-1)^nq^{|\la{}^*|}=(-1)^nq^{\frac{3n^2-n}{2}+nm}.
\end{equation}
Unfortunately, it is not enough for $t(\la{})\geq m+n$ and $s_m(\la{})=m+n$. In
order to come up with a necessary and sufficient condition for $\la{}$ to be a fixed
point, we need the following observation.
\begin{center}
If $s_m(\la{})=m+n$ then $\mathcal{S}_m(\la{})$ will start and finish at\\
opposite corners of an $n\times (m+n)$ rectangle.
\end{center}
Of course this is none other than a simple fact regarding taxicab distances, but
using this observation, we can prove the following crucial lemma.
\renewcommand{\theenumi}{\alph{enumi}}
\renewcommand{\labelenumi}{\rm \theenumi)}
\begin{lemma}
Let $\la{}=(\mu_1+2n-1+m,\mu_2+2n-2+m,\ldots ,\mu_n+n+m)$ where
$\mu_1\geq \mu_2 \geq \cdots \geq \mu_n \geq 0$. Then
$\la{}$ is a fixed point if and only if
$$
\mu_1\leq m \qquad \mbox{\rm or} \qquad \mu_1=m+1 \;\; \& \;\; \mu_n\geq 1.
$$
\end{lemma}
\noindent \textbf{Proof} \newline
Let us start by assuming that $\la{}$ is a fixed point. In particular, this means
that $s_m(\la{})=m+n$ and that $\mathcal{S}_m(\la{})$ cannot be moved, or
symbolically,
\begin{equation}
\label{CANTMOVESM}
t(\la{})-|\mathcal{T}(\la{}) \cap \mathcal{S}_m(\la{})| \leq m+n.
\end{equation}
Notice that the observation we made above allows us to compute the left-hand side of
(\ref{CANTMOVESM}) exactly.
\begin{equation}
\label{TMINUSS}
t(\la{})-|\mathcal{T}(\la{}) \cap \mathcal{S}_m(\la{})| = \mu_1+n-1
\end{equation}
Therefore, $\mu_1 \leq m+1$. If $\mu_1 \leq m$, then we are done. If $\mu_1
= m+1$, then using the observation again, the left-most cell of
$\mathcal{S}_m(\la{})$ occurs in the top row of $\mu$, and thus we must
also have that $\mu_n \geq 1$.
Now we need to show that this condition is sufficient. If $\mu_1 \leq m$,
then one of the stairs in $\mathcal{S}_m(\la{}^*)$ will be used as a landing
in $\mathcal{S}_m(\la{})$. This insures that $s_m(\la{})=m+n$. It also
allows us to use equation (\ref{TMINUSS}) again to see that
$$
t(\la{})-|\mathcal{T}(\la{}) \cap \mathcal{S}_m(\la{})| = \mu_1+n-1
\leq m+n-1,
$$
which means that $I(\la{})=\la{}$.
In the event that $\mu_1 = m+1$ and $\mu_n \geq 1$, one of the cells in
the first column of $\mu$ will be used as a landing, insuring that
$s_m(\la{})=m+n$. Again we see that
$$
t(\la{})-|\mathcal{T}(\la{}) \cap \mathcal{S}_m(\la{})| = \mu_1+n-1 = m+n,
$$
which means that $I(\la{})=\la{}$ in this case as well.
\vspace{20pt}
Using this lemma, we see that any partition $\mu$ that fits in an $n \times
m$ box will lead to a fixed point, as will any partition $\tilde{\mu}$ that
fits in an $n \times m+1$ box with $\tilde{\mu}_1=m+1$ and $\tilde{\mu}_n\geq
1$. Therefore, the weights of all fixed points with exactly $n$ parts are accounted
for in
\begin{equation}
\label{ALLFIXED}
w(\la{}^*)\left(\qbin{n+m}{m}+q^{n+m}\qbin{n+m-1}{m}\right).
\end{equation}
Summing (\ref{ALLFIXED}) over all values of $n\geq 0$, we see that
\begin{equation}
\label{FINAL}
\prod_{n>m}(1-q^n)=\sum_{n\geq0}(-1)^n
q^{\frac{3n^2-n}{2}+nm}\qbin{n+m-1}{m-1}\frac{1-q^{2n+m}}{1-q^m}.
\end{equation}
Multiplying both sides of equation (\ref{FINAL}) by $(1-q^m)$ and making a change of
variable
$m\rightarrow m+1$ yields (\ref{GENERALFORMULA}).
\vspace{20pt}
One property of Franklin's bijection is that it accounts for all of the cancelation
occurring in the left-hand side of equation (\ref{GENERALFORMULA}).
Unfortunately, this is not always the case for $I$. In fact, as soon as $m=3$ there
is some unexplained cancelation. For example, the two partitions $(14,13,12,11)$ and
$(12,11,10,9,8)$ are both partitions of 50 and both are fixed points of $I$. On the
other hand, there are 31,571,191 partitions of 250 with parts $>10$. Of those
31,571,191 partitions, 3,537 are fixed points of $I$. Of those 3,537 fixed points,
just 47 have a positive sign associated with them, and can therefore be cancelled out.
TITLE: With regards to vector spaces, what does it mean to be 'closed under addition?'
QUESTION [11 upvotes]: My linear algebra book uses this term in the definition of a vector space with no prior explanation whatsoever. It proceeds to then use this term to explain the proofs.
Is there something painfully obvious I'm missing about this terminology and is this something I should already be familiar with?
The proof uses the fact that $u + v$ is in $V$.
REPLY [8 votes]: Sometimes it's easiest to understand a definition by finding objects that don't satisfy the definition. Let me show you a few sets that are not closed under addition. (A set $S$ is "closed under addition" if whenever you pick two elements of $S$, their sum is again in $S$.)
The set $S=\{1,2,3\}$ is not closed under addition because $2$ and $3$ are in $S$ but $2+3=5$ is not in $S$.
The set $S=\{1\}$ is not closed under addition because $1$ and $1$ are in $S$ but $1+1=2$ is not in $S$. (So we can pick the same point twice.)
Along the lines of the previous two examples, any finite set of real numbers except for $\{0\}$ or $\varnothing$ is not closed under addition.
The circle of radius $1$ centered at the origin (which consists of points $(x,y)$ where $x^2+y^2=1$) is not closed under addition because, for example, $(0,1)$ and $(0,-1)$ lie on the circle but $(0,1)+(0,-1)=(0,0)$ does not.
Why is the "closed under addition" property useful in linear algebra? As the name suggests, linear algebra is the study of objects that you can add together and multiply by scalars -- vectors -- which live in vector spaces. It would be problematic for a vector space to not be closed under addition since that would violate the "linear" part of linear algebra.
REPLY [3 votes]: If a set of vectors is closed under addition, it means that if you perform vector addition on any two vectors within that set, the result is another vector within the set.
For instance, the set containing vectors of the form $\langle x, 2x\rangle$ would be closed under vector addition. Adding two arbitrary vectors from this space, say $\langle x_1, 2x_1\rangle + \langle x_2, 2x_2\rangle$, results in $\langle x_1+x_2, 2(x_1+x_2)\rangle$, which is also in the vector space (let $x = x_1+x_2$).
However, the set containing vectors of the form $\langle 1, x\rangle$ would not be closed under vector addition (add two arbitrary vectors from the set and you'll see that the resulting vector is $\langle 2, x_1+x_2\rangle$, which is not in the set, as $1\neq2$).
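As a quick sanity check of the two examples above, here is a tiny script (mine, purely illustrative) that tests closure numerically:

```python
# Vectors of the form (x, 2x) vs. vectors of the form (1, x):
# the first family is closed under addition, the second is not.
def on_line(v):
    """Is v of the form (x, 2x)?"""
    return v[1] == 2 * v[0]

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

u, v = (3, 6), (-1, -2)      # both satisfy y = 2x
assert on_line(u) and on_line(v)
assert on_line(add(u, v))    # the sum (2, 4) is still of that form

p, q = (1, 5), (1, 7)        # both of the form (1, x)
s = add(p, q)                # first coordinate of the sum is 2, not 1
assert s[0] != 1
```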
REPLY [1 votes]: Consider the collection of points that literally lie on the $x$-axis or on the $y$-axis. We could still use Cartesian vector addition to add two such things together, like $(2,0)+(0,3)=(2,3)$, but we end up with a result that is not part of the original set. So this set is not closed under this kind of addition.
If addition is defined at all on a set, then for the set to be closed under it, the result of an addition needs to still be in that set.
TITLE: application of entire function $f$ of order $\gamma$, $f(z) = e^{g(z)}$ for polynomial case.
QUESTION [1 upvotes]: First, let me explain the definition.
Let $f: \mathbb{C} \rightarrow \mathbb{C}$ be an entire function. We say that $f$ is of finite order if, for some constants $\alpha, \beta, \gamma > 0$,
\begin{align}
|f(z)| \leq \alpha e^{\beta|z|^{\gamma}}, \quad \forall z \in \mathbb{C}
\end{align}
and the infimum of the exponents $\gamma$ is called the order of $f$.
Now consider a polynomial $F$; clearly
\begin{align}
|F(z)| = |a_n z^n + a_{n-1} z^{n-1} + \cdots + a_1 z + a_0| \leq \alpha e^{|z|^{\epsilon}}
\end{align}
so I see that a polynomial $F$ has finite order, and since this holds for every $\epsilon > 0$, its order is $0$.
Now I have following Lemma
An entire function $f(z)$ of order $\gamma$ which does not have zeros can be written as $f(z) = e^{g(z)}$ where $g(z)$ is a polynomial of degree $\gamma$.
Now I have trouble. If I assume $f(z)$ is a polynomial of degree $n$, then $\gamma=0$ implies $g(z) = C$ where $C$ is a constant. Then $f(z) = e^C = D$ with $D$ a constant. But this is not true.
Is something missing in the lemma? The lemma above comes from lemma 8.1 "lectures on the riemann zeta function" by iwaniec.
REPLY [1 votes]: Let me read the lemma again: an entire function $f(z)$ of order $\gamma$ which does not have zeros can be written as $f(z)=e^{g(z)}$ where $g(z)$ is a polynomial of degree $\gamma$.
Note: the polynomial is of degree $\gamma$; it does not say that the polynomial is of order $\gamma$.
The order of a polynomial is zero, but the degree of that polynomial may not be zero; this was your primary confusion.
When you assumed "$f(z)$ is a polynomial of degree $n$", the order of that polynomial is indeed zero, but you overlooked the fact that it has zeros in $\Bbb C$ [by the fundamental theorem of algebra]. The lemma only applies to functions which have no zeros.
Hope this works. | 33,490 |
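A concrete example (our illustration, not part of the original answer) shows how the lemma is meant to be used:

```latex
% f(z) = e^z is entire and has no zeros.  Since |f(z)| = e^{Re z} <= e^{|z|},
% its order is gamma = 1, and indeed f(z) = e^{g(z)} with g(z) = z:
% a polynomial of *degree* 1 = gamma, whose *order* as an entire function
% is nevertheless 0.
\[
  f(z) = e^{z}, \qquad |f(z)| = e^{\operatorname{Re} z} \le e^{|z|},
  \qquad \gamma = 1, \qquad g(z) = z, \quad \deg g = 1 = \gamma .
\]
```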
TITLE: Hartshorne Chapter 1 exercise 6.4: Maps of curves and function fields.
QUESTION [0 upvotes]: I'm solving problems in Hartshorne. I don't know how to solve the following exercise(6.4 of Chapter 1):
Let $Y$ be a nonsingular projective curve. Show that every nonconstant rational function $f$ on $Y$ defines a surjective morphism $\phi:Y\rightarrow \mathbb P^1$.
I know I can use 6.7 and 6.8 to extend the morphism $f$ to such a $\phi$. But I don't know how to prove it's surjective. I have tried something:
Through $\phi$ we get an injection $k(x)\rightarrow K(Y)$. Since every DVR in $k(x)$ looks like $k[x]_{(x)}$, it's enough to prove that there is a DVR induced by a point in $Y$ that dominates $k[x]_{(x)}$. I can find one DVR (denote it by $B$) by taking the integral closure of $k[x]$ in $K$ and localizing at a prime that dominates $(x)$ (just as in the proof of 6.5). But I can't show it is a DVR induced by a point in $Y$. Could you provide some help? Thanks!
REPLY [0 votes]: Question: "I know I can use 6.7 and 6.8 to extend the morphism f to such a ϕ. But I don't know how to prove it's surjective."
Answer: Let $C$ be an irreducible smooth projective curve over an algebraically closed field $k$ and let $K:=K(C)$ be the function field of $C$ with $x\in K$ a non-constant rational function. Let $P(t)\in k[t]$ be nonzero and assume $P(x)=0$. Since $k$ is algebraically closed, this implies $x\in k$, a contradiction; hence $x$ is transcendental over $k$, meaning the subring $k[x] \subseteq K$ generated by $k$ and $x$ is isomorphic to a polynomial ring. You get an embedding $k \subseteq k(x) \subseteq K$ where $x$ behaves like an independent variable over $k$. This gives (by the proof of HH.I.6.12) a surjective morphism
$$\phi: C \rightarrow \mathbb{P}^1_k.$$
Since $C$ is projective and irreducible, the image is closed and irreducible and hence $Im(\phi)=\mathbb{P}^1_k$. The inverse image $\phi^{-1}(x)$ of a closed point $x\in \mathbb{P}^1_k$ is a strict closed subvariety of $C$ (since $\phi$ is non-constant) and since $C$ is irreducible it follows $\phi^{-1}(x)$ must be a finite set of points. | 194,109 |
Draghi, Lagarde & more QE, as Macron moves to rule Europe (Video)
The Duran’s Alex Christoforou and Editor-in-Chief Alexander Mercouris discuss European Central Bank President Mario Draghi decision to cut deposit rates and start open-ended bond purchases. Draghi is making a final run at reflating the euro-area economy, before Christine Lagarde takes over as ECB chief.
Draghi and Lagarde are moving to further integrate the EU along the lines of French President Emmanuel Macron’s vision of an all-powerful, centralized Brussels ruling over weakened, periphery member states.
To push through QE and to preserve his legacy, Mario Draghi just started the countdown on central banking.
Apparently ripping a page out of Jean-Claude Juncker's playbook – "When it's serious, you have to lie" – the outgoing ECB President appears to have been caught in a big fat fib.
In his grand finale press conference today, Draghi unleashed QEternity (albeit with the limits we have noted), proudly proclaiming that “there was no need to vote” because there was “full agreement on the need to act” and a “significant majority” were for QE.
When pressed by a reporter, he further refused to detail who dissented – apparently signaling that it was as good as consensus as it could be.
.”
It turns out – he lied (just as he lied to Zero Hedge when he said there was no Plan B over Greece… when there was).
Bloomberg is reporting that in an "unprecedented revolt", ECB governors representing the top European economies defied Mario Draghi's ultimately successful bid to restart quantitative easing, according to officials with knowledge of the matter.
Those three governors alone represent roughly half of the euro region as measured by economic output and population. Other dissenters included, but weren't limited to, their colleagues from Austria and Estonia, as well as members of the ECB's Executive Board including Sabine Lautenschlaeger and the markets chief, Benoit Coeure, the officials said.
Indeed, as Reuters adds, the QE motion was actually passed with a “relatively narrow majority” which explains why Draghi refused to take a vote as it would show that only Europe’s B and C grade nations – such as Italy, Spain, Portugal, Estonia, Malta and Cyprus – were for a restart in debt monetization.
As Bloomberg concludes, such disagreement over a major monetary policy measure has never been seen during Draghi’s eight-year tenure.
As Bloomberg adds, current stimulus commitments, an approach that could provoke a market backlash.
Good luck dealing with that mutiny Christine!
TITLE: How do I prove the set S is finite?
QUESTION [1 upvotes]: I have a complex analysis question that is giving me a lot of trouble. I think I have to argue by contradiction, assuming $S$ is not finite - but I don't know where to go from here. My friend said something about comparing it with the function $g(z)=\frac{z}{z+1}$. Here is the question:
Let $U\subseteq \Bbb{C}$ be an open set such that $R_2(0)\subset U$, where
$R_2(0)=[-2,2]+i[-2,2]=\{z\in\Bbb{C} : \vert \operatorname{Re}(z)\vert \le 2,\ \vert \operatorname{Im}(z)\vert \le 2\}$,
and suppose $f: U\to \Bbb{C}$ is a holomorphic function on $U$. Define the (possibly empty) set $S\subseteq \Bbb{N}$ by
$S = \{n\in\Bbb{N} : f(\frac{1}{n})=\frac{1}{n+1}\}.$
Show that the number of elements in $S$ is finite.
Thank you so much for the help!
REPLY [2 votes]: Let $g(z)=(1+z)f(z)-z$. If $n \in S$ then $g(\frac 1 n )=0$. If $S$ had infinitely many elements, then $0$ would be a limit point of zeros of $g$, which by the identity theorem forces $g \equiv 0$ on the connected component of $U$ containing $0$. But then $z=(1+z)f(z)$ for all $z \in R_2 (0)$. You get a contradiction by putting $z=-1$: the right-hand side is $0 \cdot f(-1) = 0$, while the left-hand side is $-1$.
Andy Blue: on behalf of the Glasgow ATLAS upgrade group
• Replacing all the Silicon in the SCT
• Presentation will focus on the Si Strip detectors
  • (also work at Glasgow on the upgrade to the pixel detectors)
• Major changes from previous modules:
  • More ASICs (now 20); will be reduced to 10 @ 130nm CMOS
  • More strips: 20*128*2 = 5120/module
  • However, no fan-in/link between Si; hybrids now glued directly ON to the Si
• Taken from "Parameters of the phase2 ITK Draft 27.5.2012":
  • Layer 1: 28 staves (at r = 412) [Short @ 2.4cm]
  • Layer 2: 40 staves (at r = 555) [Short @ 2.4cm]
  • Layer 3: 48 staves (at r = 698) [Long @ 4.8cm]
  • Layer 4: 60 staves (at r = 866) [Long @ 4.8cm]
  • Layer 5: 72 staves (at r = 996) [Long @ 4.8cm]
  • = 248 staves in total
• Number of modules per stave = 26
• 6448 modules in 3 years (150 weeks)
  • 42.986 modules per week
  • 8.597 modules per day for all module building groups
  • Assumes 100% yield!
• Assembly:
  1) Glue ASICs onto an FR4 strip ('Hybrid'); wire bonds for ASIC-ASIC
  2) Glue 2 hybrids onto a Si sensor ('Module'); wire bond from ASIC to Si strips
• Hybrid assembly: goal is to accurately glue 20 Si CMOS chips onto an FR4-based panel to make a hybrid
  • Place ASICs in holder (machined acetal with metal top)
  • Attach a vacuum jig to the holder
  • Apply a stencil and spread glue evenly on the back to leave 5 spots per ASIC
  • Place the vacuum jig onto the FR4 board and apply a brass weight to cure the glue
• Module assembly: goal is to glue two hybrids (shown before) to a Si sensor
  • Frame is placed on a jig, then the Si sensor placed in the frame (held by a small lip round the edge of the frame)
  • A cardboard stencil is placed on top of the Si sensor and glue is spread; the glue used is an Epolite epoxy
  • 2 hybrid pick-up tools are then attached to the module jig
• Hybrid building / wire bonding / readout bonding:
  • Purchased and installed Bondjet 820
  • Fully wire bonded our first hybrid panel
  • Wire bonded 1st half of module
  • Successfully bonded trial of new pitch for modified module design
• HSIO FPGA development board
  • Runs SCTDAQ software
  • Can test: hybrids, modules, stavelets (multiple modules)
• Testing both hybrids and modules now at Glasgow
• Next steps: test hybrids bonded at Glasgow; begin construction of modules; test; begin QA and full assembly structure; get ready for pre-production
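The production-rate arithmetic on these slides can be reproduced directly (a quick check we added; the 5-day working week is our assumption, as the slides do not state how the per-day figure was derived):

```python
# Reproduce the module-throughput arithmetic quoted in the slides.
staves_per_layer = {1: 28, 2: 40, 3: 48, 4: 60, 5: 72}
total_staves = sum(staves_per_layer.values())       # 248 staves in total
modules_per_stave = 26
total_modules = total_staves * modules_per_stave    # 6448 modules

weeks = 150            # 3 years of production
days_per_week = 5      # assumed working days (not stated on the slide)

per_week = total_modules / weeks
per_day = per_week / days_per_week

print(total_staves, total_modules)
print(round(per_week, 3), round(per_day, 3))   # slide truncates to 42.986
```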
TITLE: How do I choose an element from a non-empty set?
QUESTION [16 upvotes]: Suppose I have a non-empty set $A$.
How do I choose an element $x\in A$?
More precisely, I believe I would like to find a formula $P(x,y)$ of ZF such that for every non-empty set $y$ there is exactly one $x\in y$ such that $P(x,y)$ is true.
I have thought about this for a while without success and I am beginning to doubt such a formula is even possible. But in mathematical proofs it is quite common to "choose a fixed element" from a non-empty set. So how exactly do we achieve such a thing? Am I missing something?
Thank you in advance.
REPLY [2 votes]: David Hilbert introduced his $\tau$-operator (sometimes called the $\epsilon$-operator) to extract such a choice function. Foundationally this sits somewhere between choice and global choice. When Bourbaki developed their own foundations, they chose to use Hilbert's $\epsilon$, getting themselves into no end of trouble, as described in detail in the writings of Adrian Mathias; see e.g. this article.
The documentary film "Mustafa", which tells the story of the life of the leader of the Crimean Tatar people, Mustafa Dzhemilev, will be shown at the Berlin International Festival "Berlinale", producer Tamila Tasheva announced on her Facebook page:
“Going on a work "holiday".
From February 11 to 20 we will be in Berlin at the "Berlinale".
We will have 2 screenings of the film on the European Film Market (EFM).”
Reference: The film "Mustafa" tells about the facts of Mustafa Dzhemilev’s life and reveals his character through the prism of numerous interviews with human rights activists and dissidents in five countries.
Premiere took place on October 28, 2016 in Kyiv.
Photo: Internet
QHA | 165,543 |
\section{Unsupervised label shift}
\label{sec:unlabeled-target-data}
In this section, we consider the unsupervised label shift problem. In this problem, the learner has access to $ \dataunlabeled $, which consists of $ n_P $ many labeled data-points from source domain $ (X_1^P, Y_1^P), \dots , (X_{n_P}^P , Y_{n_P}^P) \sim \iid \ P $ and $ n_Q $ many unlabeled data-points from the target domain $ X_1^Q , \dots , X_{n_Q}^Q \sim \iid\ Q_X \triangleq Q(\cdot, Y\in \{0,1\}) $. We assume the data generating distribution in both domains are from $\Pi$ (see definition \ref{def:distribution-class}).
First, we present a lower bound for the convergence rate of the excess risk in the unsupervised label shift problem.
The lower bound is valid for any learning algorithm $\cA:\samplespaceunlabeled \to \cH$, where $\samplespaceunlabeled \triangleq (\cX \times \cY)^{n_P}\times \cX^{n_Q}$ is the space of possible datasets in the unsupervised label shift problem and $\cH \triangleq \{ h : [0,1]^d \to\{0,1\} \}$ is the set of classifiers on $[0,1]^d.$
\begin{theorem}[lower bound for the unsupervised label shift problem]
\label{th:lower-bound-unlabeled}
Let $ C_\beta \ge \left( \frac {38} {13} \right)^\beta $ and $ \mu_- \le \frac 3 {16} \le 3 \le \mu_+ $ in definition \ref{def:distribution-class}. Then \[ \inf_{\cA : \samplespaceunlabeled \to \cH }\left\{ \sup_{(P,Q)\in \Pi} \Ex_{\dataunlabeled} \left[\cE_Q\left( \cA \left( \dataunlabeled \right)\right)\right]\right\} \gtrsim \left( n_P^{-\frac{2\alpha}{2\alpha+d}} \vee n_Q^{-1} \right)^{\frac{1+\beta}{2}}. \]
\end{theorem}
To show that the lower bound is sharp, we show that the distributional matching approach of \citet{lipton2018Detecting} has the same rate of convergence under the standing assumptions. The superior empirical performance of this approach has led researchers to study its theoretical properties \cite{azizzadenesheli2019Regularized,garg2020UnifiedView}.
At a high-level, the distributional matching approach estimates the class probability ratios $ w_0 $ and $ w_1 .$ Looking at the population counterparts of this approach we see that:
\begin{align*}
[C_P(g)w]_1 & = C_{0,0}(g)w_0 + C_{0,1}(g)w_1\\ & = P(g(X) = 0, Y = 0) \frac{Q(Y = 0)}{P(Y = 0)} + P(g(X) = 0, Y = 1) \frac{Q(Y = 1)}{P(Y = 1)}\\
& = P(g(X) = 0| Y = 0) Q(Y = 0) + P(g(X)=0|Y = 1)Q(Y = 1) \\
& = Q(g(X) = 0| Y = 0) Q(Y = 0) + Q(g(X)=0|Y = 1)Q(Y = 1) \\
& = Q(g(X) = 0) = 1- \xi_Q(g),
\end{align*}
where we recalled $P(\cdot\vert Y = k)= Q(\cdot|Y = k)$ in the fourth step.
Similarly, it is possible to show that $ [C_P(g)w]_2 = \xi_Q(g). $ This implies \[C_P(g)w = \begin{bmatrix}1-\xi_Q(g) & \xi_Q(g)\end{bmatrix}^T.\]
The distributional matching approach uses an empirical version of the above equality to estimate $w.$
Once we have an estimate of the class probability ratios, it is possible to train a classifier for the target domain using data from the source domain by reweighting. We summarize the distributional matching approach in algorithm \ref{alg:distributional-matching}.
\begin{algorithm}
\caption{distributional matching}
\label{alg:distributional-matching}
\begin{algorithmic}[1]
\State {\bfseries inputs:} pilot classifier $g:[0,1]^d\to\{0,1\}$ such that $C_P(g)$ is invertible
\State estimate $C(g)$: $\widehat C_{i,j} (g) = \frac1{n_P} \sum_{l=1}^{n_P} \indicator\left\{ g(X_l^P) = i, Y_l^P = j \right\}$
\State estimate $\xi_Q(g)$: $\hat \xi_Q(g) = \frac{1}{n_Q} \sum_{l=1}^{n_Q} \indicator \left\{ g(X_l^Q) = 1 \right\}$
\State estimate $\widehat w \triangleq \begin{bmatrix}\widehat w_0 \\ \widehat w_1\end{bmatrix}$: $\widehat w = \widehat C_P(g) ^{-1} \begin{bmatrix}1 - \hat \xi_Q(g) \\ \hat \xi_Q(g)\end{bmatrix}$
\end{algorithmic}
\end{algorithm}
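A minimal numerical sketch of the approach (our illustration: the toy data, the thresholding pilot classifier, and all variable names are our own, not from the paper):

```python
import numpy as np

def distributional_matching(g, X_P, y_P, X_Q):
    """Estimate class-probability ratios w = (w0, w1) by confusion matching.

    g        : pilot classifier mapping an array of inputs to 0/1 labels
    X_P, y_P : labeled source sample;  X_Q : unlabeled target sample
    """
    g_P = g(X_P)
    # 2x2 empirical confusion matrix C[i, j] = P_hat(g(X) = i, Y = j)
    C = np.zeros((2, 2))
    for i in range(2):
        for j in range(2):
            C[i, j] = np.mean((g_P == i) & (y_P == j))
    xi_Q = np.mean(g(X_Q) == 1)          # target positive-prediction rate
    # Solve C w = (1 - xi_Q, xi_Q)^T for the ratios w
    return np.linalg.solve(C, np.array([1.0 - xi_Q, xi_Q]))

# Toy example: source is balanced, target is 80% positives, and the
# (hypothetical) pilot classifier thresholds the single feature at 1.
rng = np.random.default_rng(0)
n = 20000
y_P = rng.integers(0, 2, size=n)
X_P = rng.normal(loc=2.0 * y_P, scale=1.0, size=n)
y_Q = (rng.random(size=n) < 0.8).astype(int)
X_Q = rng.normal(loc=2.0 * y_Q, scale=1.0, size=n)

w = distributional_matching(lambda x: (x > 1.0).astype(int), X_P, y_P, X_Q)
# True ratios here: w0 = 0.2/0.5 = 0.4 and w1 = 0.8/0.5 = 1.6
print(np.round(w, 2))
```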
Armed with estimates of the class probability ratios $\hat{w}_0$ and $\hat{w}_1$ from distributional matching, we estimate the regression function $ \eta_Q(x) $ by reweighting the usual non-parametric estimator of $\eta_Q$:
\[
\hat \eta_Q(x) = \argmin_{a\in [0,1]} \left[\sum_{ l =1}^{n_P}\ell (Y_l^P, a) K_h (x-X_l^P)\left( \widehat w_1 Y_l^P + \widehat w_0 (1-Y_l^P) \right)\right],
\]
where $\ell$ is a loss function. If $\ell$ is the squared loss function, then the estimate of $\eta_Q$ has the closed form
\begin{equation}
\hat \eta_Q(x) = \frac{\frac1{n_P}\sum_{ l =1}^{n_P} Y_l^P\widehat w_1K_h (x-X_l^P)}{\frac1{n_P}\sum_{ l =1}^{n_P}Y_l^P \widehat w_1 K_h (x-X_l^P) + \frac1{n_P}\sum_{ l =1}^{n_P}(1-Y_l^P) \widehat w_0 K_h (x-X_l^P)}.
\label{eq:unlabeled-regfn}
\end{equation}
As we shall see, this $\hat{\eta}_Q$ is essentially a plug-in estimator for $\eta_Q$. Let $ n_{P,1} $ and $ n_{P,0} $ be the number of samples from the source domain with label $ 1 $ and $ 0 $ respectively. The estimated regression function is equivalently
\[
\hat \eta_Q(x) = \frac{\frac{n_{P,1}}{n_P}\widehat w_1 \frac1{n_{P,1}}\sum_{ l =1}^{n_P} Y_l^PK_h (x-X_l^P)}{\frac{n_{P,1}}{n_P}\widehat w_1 \frac1{n_{P,1}}\sum_{ l =1}^{n_P} Y_l^PK_h (x-X_l^P) + \frac{n_{P,0}}{n_P}\widehat w_0 \frac1{n_{P,0}}\sum_{ l =1}^{n_P}(1-Y_l^P) K_h (x-X_l^P)}.
\]
To simplify the preceding expression, we note that
\begin{itemize}
\item $ \hat \pi_P = \frac{n_{P,1}}{n_P} $ is an estimator of $\pi_P$. Recalling $ \widehat w_1 $ is the estimator of the ratio $ \frac{Q(Y=1)}{P(Y=1)} $ from distributional matching, we see that $ \widetilde \pi_Q \triangleq \hat \pi_P \widehat w_1 $ is an estimator of $ \pi_Q$. Similarly, it is not hard to see that $ \widetilde{1-\pi_Q} \triangleq \frac{n_{P,0}}{n_P}\widehat w_0 $ is an estimator of $ 1- \pi_Q$.
\item $\widetilde g_1(x) \triangleq \frac1{n_{P,1}}\sum_{ l =1}^{n_P} Y_l^PK_h (x-X_l^P) $ is a kernel density estimator of the class conditional density $ g_1(x) $ at a point $ x $. Similarly, $ \widetilde g_0(x) \triangleq \frac1{n_{P,0}}\sum_{ l =1}^{n_P}(1-Y_l^P) K_h (x-X_l^P) $ is a kernel density estimator of $ g_0(x). $
\end{itemize}
In terms of $\widetilde \pi_Q$, $\widetilde{1-\pi_Q}$, $\widetilde g_0$, and $\widetilde g_1$, the estimator of the regression function $\hat{\eta}_Q$ is
\[ \hat \eta_Q(x) = \frac{\widetilde{\pi}_Q\widetilde g_1(x) }{\widetilde{\pi}_Q\widetilde{g}_1(x) + (\widetilde{1-\pi_Q})\widetilde g_0(x)}. \]
Comparing the preceding expression with \eqref{eq:regression-function}, we recognize $\hat{\eta}_Q$ as a plug-in estimator of the regression function $\eta_Q$.
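The reweighted kernel estimate can be sketched as follows (our illustration; the Gaussian kernel and the particular ratios $w_0=0.4$, $w_1=1.6$ are assumptions for the demo, not choices made in the paper):

```python
import numpy as np

def eta_Q_hat(x, X_P, y_P, w0, w1, h):
    """Reweighted kernel estimate of the target regression function.

    A Gaussian kernel K_h(u) = exp(-(u/h)^2 / 2) stands in for the generic
    kernel of the text; w0, w1 are the estimated class-probability ratios.
    """
    K = np.exp(-((x - X_P) / h) ** 2 / 2.0)
    num = np.sum(w1 * y_P * K)
    den = num + np.sum(w0 * (1 - y_P) * K)
    return num / den

rng = np.random.default_rng(1)
n = 5000
y_P = rng.integers(0, 2, size=n)
X_P = rng.normal(loc=2.0 * y_P, scale=1.0, size=n)

# With w0 = w1 = 1 this reduces to the usual source-domain estimate;
# up-weighting the positive class (w1 > w0) pushes the estimate upward.
e_src = eta_Q_hat(1.0, X_P, y_P, 1.0, 1.0, h=0.3)
e_tgt = eta_Q_hat(1.0, X_P, y_P, 0.4, 1.6, h=0.3)
print(round(e_src, 2), round(e_tgt, 2))
```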
Before moving on to the theoretical properties of this estimator, we elaborate on two practical issues with the estimator. First, the estimator of the regression function depends on a bandwidth parameter $ h>0. $ As we shall see, there is a choice of $h$ (depending on the smoothness parameter $ \alpha $, sample size $n_P$, and dimension $d$) that leads to a minimax rate optimal plug-in classifier: $ \hat f(x) \triangleq \ones\{\hat{\eta}_Q(x) \ge \frac12\} $. In practice, we pick $h$ by cross-validation.
Second, the pilot classifier $ g $ in algorithm \ref{alg:distributional-matching} plays a crucial role in forming $ \hat f. $ Finding the best choice of $ g $ is a practically relevant area of research, but it is beyond the scope of this paper. We remark that the only requirement on the pilot classifier is non-singularity of the confusion matrix $ C_P(g) $ in the source domain. In our simulations, we use logistic regression in the source domain to obtain a pilot classifier $g(x) \triangleq \indicator\{\hat b^Tx > 0\}$, where \[ \hat b \triangleq (\hat b_0, \hat b_1^T) \in \argmin_{(b_0, b_1^T)^T \in \reals^{d+1}} \frac 1{n_P} \sum_{l=1}^{n_P} \left( Y_l^P (b_0 + b_1^TX_l^P) - \log\left(1+ e^{b_0 + b_1^TX_l^P} \right) \right). \]
As long as there are $ \delta >0 $ and $ \phi>0 $ such that $\inf_{\|b-b^*\|_2 \le \delta }|\text{det}(C_P(h_b))| \ge \phi$, where $b^*$ is the population counterpart of $\hat b$ in the source domain, Theorem \ref{th:upper-bound-unlabeled} provides an upper bound for the excess risk (see appendix \ref{appendix:choice-of-g}, Theorem \ref{th:upper-bound-no-label-g}).
\begin{theorem}[upper bound for the unsupervised label shift problem]
\label{th:upper-bound-unlabeled}
Let $ \hat f$ be the plug-in classifier defined above with any pilot classifier $g$ that leads to a non-singular confusion matrix $C_P(g)$ and bandwidth $ h \triangleq C_1 n_P^{-\frac{1}{2\alpha+d}} $ for some $C_1>0.$ Then \[ \sup_{(P,Q)\in \Pi} \Ex_{\dataunlabeled} \left[\cE_Q \left( \hat f
\right)\right] \lesssim \left( n_P^{-\frac{2\alpha}{2\alpha+d}} \vee n_Q^{-1} \right)^{\frac{1+\beta}{2}}. \]
\end{theorem}
Proofs of Theorems \ref{th:upper-bound-unlabeled} and \ref{th:lower-bound-unlabeled} are presented in appendix \ref{sec:supp-results}. Theorems \ref{th:upper-bound-unlabeled} and \ref{th:lower-bound-unlabeled} together show that the minimax convergence rate of the excess risk in the unsupervised label shift problem is \[ \inf_{\cA : \samplespaceunlabeled \to \cH}\left\{ \sup_{(P,Q)\in \Pi} \Ex_{\dataunlabeled} \left[\cE_Q\left( \cA \left( \dataunlabeled \right)\right)\right]\right\} \asymp \left( n_P^{-\frac{2\alpha}{2\alpha+d}} \vee n_Q^{-1} \right)^{\frac{1+\beta}{2}}. \]
Before moving on, we compare the minimax rates in the supervised and unsupervised label shift problems. The only difference between the minimax rates is in the first term in the rate, which, we recall, depends on the hardness of estimating the class conditional densities. In the supervised label shift problem, the samples from the target domain come with labels, so they can be used to estimate the class conditional densities along with the source data, whilst in the unsupervised label shift problem, the samples from the target domain are unlabeled and cannot be used to estimate the conditional densities, and we rely solely on the data from the source distribution. This changes the first term from $(n_P + n_Q)^{-\frac{2\alpha}{2\alpha + d}}$ in the supervised problem to $n_P^{-\frac{2\alpha}{2\alpha + d}}$ in its unsupervised counterpart. We wrap up with a few additional remarks about the minimax rate in the unsupervised label shift problem.
\begin{remark}
\label{rem:distributional-matching-convergence-rate}
The second term in the rate that depends on the hardness of estimating the class probability ratios is actually $O(\frac{1}{n_P}) \vee O(\frac{1}{n_Q})$, but we drop the $O(\frac{1}{n_P})$ term because the first term in the rate (that depends on the hardness of estimating the class conditional densities) is slower.
\end{remark}
\begin{remark}
In practice, it is common to have $ n_P \gg n_Q. $ In this setting, the minimax rate simplifies to
\[ \inf_{\cA : \samplespaceunlabeled \to \cH}\left\{ \sup_{(P,Q)\in \Pi} \Ex_{\dataunlabeled} \left[\cE_Q\left( \cA \left( \dataunlabeled \right)\right)\right]\right\} \asymp \begin{cases}
n_P^{-\frac{\alpha(1+\beta)}{2\alpha + d}} & \text{if } n_P \ll n_Q^{1+\frac d{2\alpha}},\\
n_Q ^{-\frac{1+\beta}{2}} & \text{if } n_P \gg n_Q^{1+\frac d{2\alpha}}.
\end{cases} \]
We can interpret these rates in the following way. Looking back at the classifier, we see that there are two main sources of errors that contribute to the excess risk:
\begin{enumerate}
\item errors in the estimation of class probability ratios $ w_0 $ and $ w_1, $ which lead to the $O(n_Q ^{-\frac{1+\beta}{2}})$ term in the minimax rate,
\item error in estimation of the class conditional densities $ g_0(x) $ and $ g_1(x), $ which lead to the $O(n_P^{-\frac{\alpha(1+\beta)}{2\alpha + d}})$ term in the rate.
\end{enumerate}
If $ n_P \gg n_Q^{1+\frac d{2\alpha}} $ then the errors in estimation of $ w_0 $ and $ w_1 $ dominate the excess risk. In this case, improving the estimates of the class conditional densities (by increasing $ n_P $) does not improve the overall convergence rate.
\end{remark}
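The two regimes can be seen concretely by comparing the two terms of the rate for sample values (the parameter choices $\alpha=1$, $\beta=1$, $d=2$ below are illustrative and ours):

```python
# Compare the two terms of the minimax rate
#     (n_P^{-2a/(2a+d)}  v  n_Q^{-1})^{(1+b)/2}
# around the crossover n_P ~ n_Q^{1 + d/(2a)}.
alpha, beta, d = 1.0, 1.0, 2.0
n_Q = 10_000
crossover = n_Q ** (1 + d / (2 * alpha))   # here n_Q^2 = 1e8

def density_term(n_P):
    """The n_P term of the rate, before raising to the (1+beta)/2 power."""
    return n_P ** (-2 * alpha / (2 * alpha + d))

for n_P in (10_000, crossover, 10 ** 12):
    dominant = "n_P term" if density_term(n_P) > 1 / n_Q else "n_Q term"
    print(int(n_P), dominant)
```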
\begin{remark}
If $ n_P \ll n_Q^{1+\frac d{2\alpha}}, $ the minimax rate simplifies to \[ \inf_{\cA : \samplespaceunlabeled \to \cH} \left\{ \sup_{(P,Q)\in \Pi} \Ex_{\dataunlabeled} \left[\cE_Q\left( \cA \left( \dataunlabeled \right)\right)\right] \right\} \asymp n_P^{-\frac{\alpha(1+\beta)}{2\alpha + d}}, \]
which is the minimax rate of IID non-parametric classification in the source domain. In other words, given enough unlabeled samples from target distribution, the error in the non-parametric parts of the unsupervised label shift problem dominate. As this is also the essential difficulty in the IID classification problem in the source domain, it is unsurprising that the minimax rates coincide.
\end{remark}
\begin{remark}
Recall the minimax rate of IID non-parametric classification in the target domain is $n_Q^{-\frac{\alpha(1+\beta)}{2\alpha + d}}$ \cite{audibert2007fast}. Comparing the minimax rates of IID non-parametric classification (in the target domain) and the unsupervised label shift problem, we see that as long as there are enough samples in the target domain (so we are in the regime of the preceding remark), labeled examples from the source domain are as informative as labeled examples from the target domain. An extremely important practical implication of this observation is that if labeled examples are hard to obtain in the target domain, then it is possible to substitute them with labeled examples from a label shifted source domain.
\end{remark} | 167,858 |
section \<open>Robbins Conjecture\<close>
theory Robbins_Conjecture
imports Main
begin
text \<open>The document gives a formalization of the proof of the Robbins
conjecture, following A. Mann, \emph{A Complete Proof of the
Robbins Conjecture}, 2003, DOI 10.1.1.6.7838\<close>
section \<open>Axiom Systems\<close>
text \<open>The following presents several axiom systems that shall be under study.
The first axiom sets common systems that underly all of
the systems we shall be looking at.
The second system is a reformulation of Boolean algebra. We shall
follow pages 7--8 in S. Koppelberg. \emph{General Theory of Boolean
Algebras}, Volume 1 of \emph{Handbook of Boolean Algebras}. North
Holland, 1989. Note that our formulation deviates slightly from this,
as we only provide one distribution axiom, as the dual is redundant.
The third system is Huntington's algebra and the fourth system is
Robbins' algebra.
Apart from the common system, all of these systems are demonstrated
to be equivalent to the library formulation of Boolean algebra, under
appropriate interpretation.\<close>
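Before the Isabelle development, a quick exhaustive sanity check (ours, outside the theory file, and of course no substitute for the formal proof over all models) confirms that the Huntington and Robbins identities both hold in the two-element Boolean algebra:

```python
from itertools import product

# Two-element Boolean algebra: sup is bitwise 'or', complement is 'not'.
def sup(x, y):
    return x | y

def neg(x):
    return 1 - x

for x, y in product((0, 1), repeat=2):
    # Huntington: -(-x v -y) v -(-x v y) = x
    assert sup(neg(sup(neg(x), neg(y))), neg(sup(neg(x), y))) == x
    # Robbins: -(-(x v y) v -(x v -y)) = x
    assert neg(sup(neg(sup(x, y)), neg(sup(x, neg(y))))) == x

print("both identities hold on {0, 1}")
```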
subsection \<open>Common Algebras\<close>
class common_algebra = uminus +
fixes inf :: "'a \<Rightarrow> 'a \<Rightarrow> 'a" (infixl "\<sqinter>" 70)
fixes sup :: "'a \<Rightarrow> 'a \<Rightarrow> 'a" (infixl "\<squnion>" 65)
fixes bot :: "'a" ("\<bottom>")
fixes top :: "'a" ("\<top>")
assumes sup_assoc: "x \<squnion> (y \<squnion> z) = (x \<squnion> y) \<squnion> z"
assumes sup_comm: "x \<squnion> y = y \<squnion> x"
context common_algebra begin
definition less_eq :: "'a \<Rightarrow> 'a \<Rightarrow> bool" (infix "\<sqsubseteq>" 50) where
"x \<sqsubseteq> y = (x \<squnion> y = y)"
definition less :: "'a \<Rightarrow> 'a \<Rightarrow> bool" (infix "\<sqsubset>" 50) where
"x \<sqsubset> y = (x \<sqsubseteq> y \<and> \<not> y \<sqsubseteq> x)"
definition minus :: "'a \<Rightarrow> 'a \<Rightarrow> 'a" (infixl "-" 65) where
"minus x y = (x \<sqinter> - y)"
(* We shall need some object in order to define falsum and verum *)
definition secret_object1 :: "'a" ("\<iota>") where
"\<iota> = (SOME x. True)"
end
class ext_common_algebra = common_algebra +
assumes inf_eq: "x \<sqinter> y = -(- x \<squnion> - y)"
assumes top_eq: "\<top> = \<iota> \<squnion> - \<iota>"
assumes bot_eq: "\<bottom> = -(\<iota> \<squnion> - \<iota>)"
subsection \<open>Boolean Algebra\<close>
class boolean_algebra_II =
common_algebra +
assumes inf_comm: "x \<sqinter> y = y \<sqinter> x"
assumes inf_assoc: "x \<sqinter> (y \<sqinter> z) = (x \<sqinter> y) \<sqinter> z"
assumes sup_absorb: "x \<squnion> (x \<sqinter> y) = x"
assumes inf_absorb: "x \<sqinter> (x \<squnion> y) = x"
assumes sup_inf_distrib1: "x \<squnion> y \<sqinter> z = (x \<squnion> y) \<sqinter> (x \<squnion> z)"
assumes sup_compl: "x \<squnion> - x = \<top>"
assumes inf_compl: "x \<sqinter> - x = \<bottom>"
subsection \<open>Huntington's Algebra\<close>
class huntington_algebra = ext_common_algebra +
assumes huntington: "- (-x \<squnion> -y) \<squnion> - (-x \<squnion> y) = x"
subsection \<open>Robbins' Algebra\<close>
class robbins_algebra = ext_common_algebra +
assumes robbins: "- (- (x \<squnion> y) \<squnion> - (x \<squnion> -y)) = x"
section \<open>Equivalence\<close>
text \<open>With our axiom systems defined, we turn to providing equivalence
results between them.
We shall begin by illustrating equivalence for our formulation and
the library formulation of Boolean algebra.\<close>
subsection \<open>Boolean Algebra\<close>
text \<open>The following provides the canonical definitions for order and
relative complementation for Boolean algebras. These are necessary
since the Boolean algebras presented in the Isabelle/HOL library have
a lot of structure, while our formulation is considerably simpler.
Because our formulation is so simple, it is easy to show that
the library instantiates our axioms.\<close>
context boolean_algebra_II begin
lemma boolean_II_is_boolean:
"class.boolean_algebra minus uminus (\<sqinter>) (\<sqsubseteq>) (\<sqsubset>) (\<squnion>) \<bottom> \<top>"
apply unfold_locales
apply (metis inf_absorb inf_assoc inf_comm inf_compl
less_def less_eq_def minus_def
sup_absorb sup_assoc sup_comm
sup_compl sup_inf_distrib1
sup_absorb inf_comm)+
done
end
context boolean_algebra begin
lemma boolean_is_boolean_II:
"class.boolean_algebra_II uminus inf sup bot top"
apply unfold_locales
apply (metis sup_assoc sup_commute sup_inf_absorb sup_compl_top
inf_assoc inf_commute inf_sup_absorb inf_compl_bot
sup_inf_distrib1)+
done
end
subsection \<open>Huntington Algebra\<close>
text \<open>We shall illustrate here that every Boolean algebra in our
formulation is a Huntington algebra, and that every Huntington
algebra may be interpreted as a Boolean algebra.
Since the Isabelle/HOL library has good automation, it is convenient
to first show that the library instantiates Huntington algebras to exploit
previous results, and then use our previously derived correspondence.\<close>
context boolean_algebra begin
lemma boolean_is_huntington:
"class.huntington_algebra uminus inf sup bot top"
apply unfold_locales
apply (metis double_compl inf_sup_distrib1 inf_top_right
compl_inf inf_commute inf_compl_bot
compl_sup sup_commute sup_compl_top
sup_compl_top sup_assoc)+
done
end
context boolean_algebra_II begin
lemma boolean_II_is_huntington:
"class.huntington_algebra uminus (\<sqinter>) (\<squnion>) \<bottom> \<top>"
proof -
interpret boolean:
boolean_algebra minus uminus "(\<sqinter>)" "(\<sqsubseteq>)" "(\<sqsubset>)" "(\<squnion>)" \<bottom> \<top>
by (fact boolean_II_is_boolean)
show ?thesis by (simp add: boolean.boolean_is_huntington)
qed
end
context huntington_algebra begin
lemma huntington_id: "x \<squnion> -x = -x \<squnion> -(-x)"
proof -
from huntington have
"x \<squnion> -x = -(-x \<squnion> -(-(-x))) \<squnion> -(-x \<squnion> -(-x)) \<squnion>
(-(-(-x) \<squnion> -(-(-x))) \<squnion> -(-(-x) \<squnion> -(-x)))"
by simp
also from sup_comm have
"\<dots> = -(-(-x) \<squnion> -(-x)) \<squnion> -(-(-x) \<squnion> -(-(-x))) \<squnion>
(-(-(-x) \<squnion> -x) \<squnion> -(-(-(-x)) \<squnion> -x))"
by simp
also from sup_assoc have
"\<dots> = -(-(-x) \<squnion> -(-x)) \<squnion>
(-(-(-x) \<squnion> -(-(-x))) \<squnion> -(-(-x) \<squnion> -x)) \<squnion>
-(-(-(-x)) \<squnion> -x)"
by simp
also from sup_comm have
"\<dots> = -(-(-x) \<squnion> -(-x)) \<squnion>
(-(-(-x) \<squnion> -x) \<squnion> -(-(-x) \<squnion> -(-(-x)))) \<squnion>
-(-(-(-x)) \<squnion> -x)"
by simp
also from sup_assoc have
"\<dots> = -(-(-x) \<squnion> -(-x)) \<squnion> -(-(-x) \<squnion> -x) \<squnion>
(-(-(-x) \<squnion> -(-(-x))) \<squnion> -(-(-(-x)) \<squnion> -x))"
by simp
also from sup_comm have
"\<dots> = -(-(-x) \<squnion> -(-x)) \<squnion> -(-(-x) \<squnion> -x) \<squnion>
(-(-(-(-x)) \<squnion> -(-x)) \<squnion> -(-(-(-x)) \<squnion> -x))"
by simp
also from huntington have
"\<dots> = -x \<squnion> -(-x)"
by simp
finally show ?thesis by simp
qed
lemma dbl_neg: "- (-x) = x"
apply (metis huntington huntington_id sup_comm)
done
lemma towards_sup_compl: "x \<squnion> -x = y \<squnion> -y"
proof -
from huntington have
"x \<squnion> -x = -(-x \<squnion> -(-y)) \<squnion> -(-x \<squnion> -y) \<squnion> (-(-(-x) \<squnion> -(-y)) \<squnion> -(-(-x) \<squnion> -y))"
by simp
also from sup_comm have
"\<dots> = -(-(-y) \<squnion> -x) \<squnion> -(-y \<squnion> -x) \<squnion> (-(-y \<squnion> -(-x)) \<squnion> -(-(-y) \<squnion> -(-x)))"
by simp
also from sup_assoc have
"\<dots> = -(-(-y) \<squnion> -x) \<squnion> (-(-y \<squnion> -x) \<squnion> -(-y \<squnion> -(-x))) \<squnion> -(-(-y) \<squnion> -(-x))"
by simp
also from sup_comm have
"\<dots> = -(-y \<squnion> -(-x)) \<squnion> -(-y \<squnion> -x) \<squnion> -(-(-y) \<squnion> -x) \<squnion> -(-(-y) \<squnion> -(-x))"
by simp
also from sup_assoc have
"\<dots> = -(-y \<squnion> -(-x)) \<squnion> -(-y \<squnion> -x) \<squnion> (-(-(-y) \<squnion> -x) \<squnion> -(-(-y) \<squnion> -(-x)))"
by simp
also from sup_comm have
"\<dots> = -(-y \<squnion> -(-x)) \<squnion> -(-y \<squnion> -x) \<squnion> (-(-(-y) \<squnion> -(-x)) \<squnion> -(-(-y) \<squnion> -x))"
by simp
also from huntington have
"y \<squnion> -y = \<dots>" by simp
finally show ?thesis by simp
qed
lemma sup_compl: "x \<squnion> -x = \<top>"
by (simp add: top_eq towards_sup_compl)
lemma towards_inf_compl: "x \<sqinter> -x = y \<sqinter> -y"
by (metis dbl_neg inf_eq sup_comm sup_compl)
lemma inf_compl: "x \<sqinter> -x = \<bottom>"
by (metis dbl_neg sup_comm bot_eq towards_inf_compl inf_eq)
lemma towards_idem: "\<bottom> = \<bottom> \<squnion> \<bottom>"
by (metis dbl_neg huntington inf_compl inf_eq sup_assoc sup_comm sup_compl)
lemma sup_ident: "x \<squnion> \<bottom> = x"
by (metis dbl_neg huntington inf_compl inf_eq sup_assoc
sup_comm sup_compl towards_idem)
lemma inf_ident: "x \<sqinter> \<top> = x"
by (metis dbl_neg inf_compl inf_eq sup_ident sup_comm sup_compl)
lemma sup_idem: "x \<squnion> x = x"
by (metis dbl_neg huntington inf_compl inf_eq sup_ident sup_comm sup_compl)
lemma inf_idem: "x \<sqinter> x = x"
by (metis dbl_neg inf_eq sup_idem)
lemma sup_nil: "x \<squnion> \<top> = \<top>"
by (metis sup_idem sup_assoc sup_comm sup_compl)
lemma inf_nil: "x \<sqinter> \<bottom> = \<bottom>"
by (metis dbl_neg inf_compl inf_eq sup_nil sup_comm sup_compl)
lemma sup_absorb: "x \<squnion> x \<sqinter> y = x"
by (metis huntington inf_eq sup_idem sup_assoc sup_comm)
lemma inf_absorb: "x \<sqinter> (x \<squnion> y) = x"
by (metis dbl_neg inf_eq sup_absorb)
lemma partition: "x \<sqinter> y \<squnion> x \<sqinter> -y = x"
by (metis dbl_neg huntington inf_eq sup_comm)
lemma demorgans1: "-(x \<sqinter> y) = -x \<squnion> -y"
by (metis dbl_neg inf_eq)
lemma demorgans2: "-(x \<squnion> y) = -x \<sqinter> -y"
by (metis dbl_neg inf_eq)
lemma inf_comm: "x \<sqinter> y = y \<sqinter> x"
by (metis inf_eq sup_comm)
lemma inf_assoc: "x \<sqinter> (y \<sqinter> z) = x \<sqinter> y \<sqinter> z"
by (metis dbl_neg inf_eq sup_assoc)
lemma inf_sup_distrib1: "x \<sqinter> (y \<squnion> z) = (x \<sqinter> y) \<squnion> (x \<sqinter> z)"
proof -
from partition have
"x \<sqinter> (y \<squnion> z) = x \<sqinter> (y \<squnion> z) \<sqinter> y \<squnion> x \<sqinter> (y \<squnion> z) \<sqinter> -y" ..
also from inf_assoc have
"\<dots> = x \<sqinter> ((y \<squnion> z) \<sqinter> y) \<squnion> x \<sqinter> (y \<squnion> z) \<sqinter> -y" by simp
also from inf_comm have
"\<dots> = x \<sqinter> (y \<sqinter> (y \<squnion> z)) \<squnion> x \<sqinter> (y \<squnion> z) \<sqinter> -y" by simp
also from inf_absorb have
"\<dots> = (x \<sqinter> y) \<squnion> (x \<sqinter> (y \<squnion> z) \<sqinter> -y)" by simp
also from partition have
"\<dots> = ((x \<sqinter> y \<sqinter> z) \<squnion> (x \<sqinter> y \<sqinter> -z)) \<squnion>
((x \<sqinter> (y \<squnion> z) \<sqinter> -y \<sqinter> z) \<squnion> (x \<sqinter> (y \<squnion> z) \<sqinter> -y \<sqinter> -z))" by simp
also from inf_assoc have
"\<dots> = ((x \<sqinter> y \<sqinter> z) \<squnion> (x \<sqinter> y \<sqinter> -z)) \<squnion>
((x \<sqinter> ((y \<squnion> z) \<sqinter> (-y \<sqinter> z))) \<squnion> (x \<sqinter> ((y \<squnion> z) \<sqinter> (-y \<sqinter> -z))))" by simp
also from demorgans2 have
"\<dots> = ((x \<sqinter> y \<sqinter> z) \<squnion> (x \<sqinter> y \<sqinter> -z)) \<squnion>
((x \<sqinter> ((y \<squnion> z) \<sqinter> (-y \<sqinter> z))) \<squnion> (x \<sqinter> ((y \<squnion> z) \<sqinter> -(y \<squnion> z))))" by simp
also from inf_compl have
"\<dots> = ((x \<sqinter> y \<sqinter> z) \<squnion> (x \<sqinter> y \<sqinter> -z)) \<squnion>
((x \<sqinter> ((y \<squnion> z) \<sqinter> (-y \<sqinter> z))) \<squnion> (x \<sqinter> \<bottom>))" by simp
also from inf_nil have
"\<dots> = ((x \<sqinter> y \<sqinter> z) \<squnion> (x \<sqinter> y \<sqinter> -z)) \<squnion>
((x \<sqinter> ((y \<squnion> z) \<sqinter> (-y \<sqinter> z))) \<squnion> \<bottom>)" by simp
also from sup_idem have
"\<dots> = ((x \<sqinter> y \<sqinter> z) \<squnion> (x \<sqinter> y \<sqinter> z) \<squnion> (x \<sqinter> y \<sqinter> -z)) \<squnion>
((x \<sqinter> ((y \<squnion> z) \<sqinter> (-y \<sqinter> z))) \<squnion> \<bottom>)" by simp
also from sup_ident have
"\<dots> = ((x \<sqinter> y \<sqinter> z) \<squnion> (x \<sqinter> y \<sqinter> z) \<squnion> (x \<sqinter> y \<sqinter> -z)) \<squnion>
(x \<sqinter> ((y \<squnion> z) \<sqinter> (-y \<sqinter> z)))" by simp
also from inf_comm have
"\<dots> = ((x \<sqinter> y \<sqinter> z) \<squnion> (x \<sqinter> y \<sqinter> z) \<squnion> (x \<sqinter> y \<sqinter> -z)) \<squnion>
(x \<sqinter> ((-y \<sqinter> z) \<sqinter> (y \<squnion> z)))" by simp
also from sup_comm have
"\<dots> = ((x \<sqinter> y \<sqinter> z) \<squnion> (x \<sqinter> y \<sqinter> z) \<squnion> (x \<sqinter> y \<sqinter> -z)) \<squnion>
(x \<sqinter> ((-y \<sqinter> z) \<sqinter> (z \<squnion> y)))" by simp
also from inf_assoc have
"\<dots> = ((x \<sqinter> y \<sqinter> z) \<squnion> (x \<sqinter> (y \<sqinter> z)) \<squnion> (x \<sqinter> y \<sqinter> -z)) \<squnion>
(x \<sqinter> (-y \<sqinter> (z \<sqinter> (z \<squnion> y))))" by simp
also from inf_absorb have
"\<dots> = ((x \<sqinter> y \<sqinter> z) \<squnion> (x \<sqinter> (y \<sqinter> z)) \<squnion> (x \<sqinter> y \<sqinter> -z)) \<squnion> (x \<sqinter> (-y \<sqinter> z))"
by simp
also from inf_comm have
"\<dots> = ((x \<sqinter> y \<sqinter> z) \<squnion> (x \<sqinter> (z \<sqinter> y)) \<squnion> (x \<sqinter> y \<sqinter> -z)) \<squnion> (x \<sqinter> (z \<sqinter> -y))"
by simp
also from sup_assoc have
"\<dots> = ((x \<sqinter> y \<sqinter> z) \<squnion> ((x \<sqinter> (z \<sqinter> y)) \<squnion> (x \<sqinter> y \<sqinter> -z))) \<squnion> (x \<sqinter> (z \<sqinter> -y))"
by simp
also from sup_comm have
"\<dots> = ((x \<sqinter> y \<sqinter> z) \<squnion> ((x \<sqinter> y \<sqinter> -z) \<squnion> (x \<sqinter> (z \<sqinter> y)))) \<squnion> (x \<sqinter> (z \<sqinter> -y))"
by simp
also from sup_assoc have
"\<dots> = ((x \<sqinter> y \<sqinter> z) \<squnion> (x \<sqinter> y \<sqinter> -z)) \<squnion> ((x \<sqinter> (z \<sqinter> y)) \<squnion> (x \<sqinter> (z \<sqinter> -y)))"
by simp
also from inf_assoc have
"\<dots> = ((x \<sqinter> y \<sqinter> z) \<squnion> (x \<sqinter> y \<sqinter> -z)) \<squnion> ((x \<sqinter> z \<sqinter> y) \<squnion> (x \<sqinter> z \<sqinter> -y))" by simp
also from partition have "\<dots> = (x \<sqinter> y) \<squnion> (x \<sqinter> z)" by simp
finally show ?thesis by simp
qed
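The derived laws above (De Morgan, absorption, inf_sup_distrib1) all flow from defining meet in terms of join and complement, as in the theory's inf_eq. The same definition can be exercised on the bitmask model as a sketch of our own; the names `inf`, `elems`, etc. are assumptions for illustration.

```python
# Model check: defining meet by  x /\ y = -(-x \/ -y)  (the inf_eq
# definition the proofs above rely on) yields the derived laws.
U = 0b111
neg = lambda a: U ^ a
sup = lambda a, b: a | b
inf = lambda a, b: neg(sup(neg(a), neg(b)))   # mirrors inf_eq

elems = range(8)
demorgan1 = all(neg(inf(x, y)) == sup(neg(x), neg(y))
                for x in elems for y in elems)           # demorgans1
absorb    = all(sup(x, inf(x, y)) == x
                for x in elems for y in elems)           # sup_absorb
distrib   = all(inf(x, sup(y, z)) == sup(inf(x, y), inf(x, z))
                for x in elems for y in elems for z in elems)  # inf_sup_distrib1
print(demorgan1, absorb, distrib)   # True True True
```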
lemma sup_inf_distrib1:
"x \<squnion> (y \<sqinter> z) = (x \<squnion> y) \<sqinter> (x \<squnion> z)"
proof -
from dbl_neg have
"x \<squnion> (y \<sqinter> z) = -(-(-(-x) \<squnion> (y \<sqinter> z)))" by simp
also from inf_eq have
"\<dots> = -(-x \<sqinter> (-y \<squnion> -z))" by simp
also from inf_sup_distrib1 have
"\<dots> = -((-x \<sqinter> -y) \<squnion> (-x \<sqinter> -z))" by simp
also from demorgans2 have
"\<dots> = -(-x \<sqinter> -y) \<sqinter> -(-x \<sqinter> -z)" by simp
also from demorgans1 have
"\<dots> = (-(-x) \<squnion> -(-y)) \<sqinter> (-(-x) \<squnion> -(-z))" by simp
also from dbl_neg have
"\<dots> = (x \<squnion> y) \<sqinter> (x \<squnion> z)" by simp
finally show ?thesis by simp
qed
lemma huntington_is_boolean_II:
"class.boolean_algebra_II uminus (\<sqinter>) (\<squnion>) \<bottom> \<top>"
apply unfold_locales
apply (metis inf_comm inf_assoc sup_absorb
inf_absorb sup_inf_distrib1
sup_compl inf_compl)+
done
lemma huntington_is_boolean:
"class.boolean_algebra minus uminus (\<sqinter>) (\<sqsubseteq>) (\<sqsubset>) (\<squnion>) \<bottom> \<top>"
proof -
interpret boolean_II:
boolean_algebra_II uminus "(\<sqinter>)" "(\<squnion>)" \<bottom> \<top>
by (fact huntington_is_boolean_II)
show ?thesis by (simp add: boolean_II.boolean_II_is_boolean)
qed
end
subsection \<open>Robbins' Algebra\<close>
context boolean_algebra begin
lemma boolean_is_robbins:
"class.robbins_algebra uminus inf sup bot top"
apply unfold_locales
apply (metis sup_assoc sup_commute compl_inf double_compl sup_compl_top
inf_compl_bot diff_eq sup_bot_right sup_inf_distrib1)+
done
end
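The easy direction just proved, boolean_is_robbins, says every Boolean algebra satisfies Robbins' axiom. On the bitmask model this can again be confirmed exhaustively; this is a sketch of our own, outside the Isabelle development.

```python
# Brute-force check of Robbins' axiom
#   -(-(x \/ y) \/ -(x \/ -y)) = x
# over the Boolean algebra of bitmasks 0..7.
U = 0b111
neg = lambda a: U ^ a
sup = lambda a, b: a | b

robbins_ok = all(
    neg(sup(neg(sup(x, y)), neg(sup(x, neg(y))))) == x
    for x in range(8) for y in range(8)
)
print(robbins_ok)              # True
```

The hard direction, that every Robbins algebra is Boolean, is the subject of the rest of this theory and cannot be settled by finite checks.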
context boolean_algebra_II begin
lemma boolean_II_is_robbins:
"class.robbins_algebra uminus inf sup bot top"
proof -
interpret boolean:
boolean_algebra minus uminus "(\<sqinter>)" "(\<sqsubseteq>)" "(\<sqsubset>)" "(\<squnion>)" \<bottom> \<top>
by (fact boolean_II_is_boolean)
show ?thesis by (simp add: boolean.boolean_is_robbins)
qed
end
context huntington_algebra begin
lemma huntington_is_robbins:
"class.robbins_algebra uminus inf sup bot top"
proof -
interpret boolean:
boolean_algebra minus uminus "(\<sqinter>)" "(\<sqsubseteq>)" "(\<sqsubset>)" "(\<squnion>)" \<bottom> \<top>
by (fact huntington_is_boolean)
show ?thesis by (simp add: boolean.boolean_is_robbins)
qed
end
text \<open>Before diving into the proof that every Robbins algebra is Boolean,
we shall first present some shorthand machinery.\<close>
text \<open>Before diving into the proof that every Robbins algebra is Boolean,
we shall first present some shorthand machinery.\<close>
context common_algebra begin
(* Iteration Machinery/Shorthand *)
primrec copyp :: "nat \<Rightarrow> 'a \<Rightarrow> 'a" (infix "\<otimes>" 80)
where
copyp_0: "0 \<otimes> x = x"
| copyp_Suc: "(Suc k) \<otimes> x = (k \<otimes> x) \<squnion> x"
no_notation
Product_Type.Times (infixr "\<times>" 80)
primrec copy :: "nat \<Rightarrow> 'a \<Rightarrow> 'a" (infix "\<times>" 85)
where
"0 \<times> x = x"
| "(Suc k) \<times> x = k \<otimes> x"
(* Theorems for translating shorthand into syntax *)
lemma one: "1 \<times> x = x"
proof -
have "1 = Suc(0)" by arith
hence "1 \<times> x = Suc(0) \<times> x" by metis
also have "\<dots> = x" by simp
finally show ?thesis by simp
qed
lemma two: "2 \<times> x = x \<squnion> x"
proof -
have "2 = Suc(Suc(0))" by arith
hence "2 \<times> x = Suc(Suc(0)) \<times> x" by metis
also have "\<dots> = x \<squnion> x" by simp
finally show ?thesis by simp
qed
lemma three: "3 \<times> x = x \<squnion> x \<squnion> x"
proof -
have "3 = Suc(Suc(Suc(0)))" by arith
hence "3 \<times> x = Suc(Suc(Suc(0))) \<times> x" by metis
also have "\<dots> = x \<squnion> x \<squnion> x" by simp
finally show ?thesis by simp
qed
lemma four: "4 \<times> x = x \<squnion> x \<squnion> x \<squnion> x"
proof -
have "4 = Suc(Suc(Suc(Suc(0))))" by arith
hence "4 \<times> x = Suc(Suc(Suc(Suc(0)))) \<times> x" by metis
also have "\<dots> = x \<squnion> x \<squnion> x \<squnion> x" by simp
finally show ?thesis by simp
qed
lemma five: "5 \<times> x = x \<squnion> x \<squnion> x \<squnion> x \<squnion> x"
proof -
have "5 = Suc(Suc(Suc(Suc(Suc(0)))))" by arith
hence "5 \<times> x = Suc(Suc(Suc(Suc(Suc(0))))) \<times> x" by metis
also have "\<dots> = x \<squnion> x \<squnion> x \<squnion> x \<squnion> x" by simp
finally show ?thesis by simp
qed
lemma six: "6 \<times> x = x \<squnion> x \<squnion> x \<squnion> x \<squnion> x \<squnion> x"
proof -
have "6 = Suc(Suc(Suc(Suc(Suc(Suc(0))))))" by arith
hence "6 \<times> x = Suc(Suc(Suc(Suc(Suc(Suc(0)))))) \<times> x" by metis
also have "\<dots> = x \<squnion> x \<squnion> x \<squnion> x \<squnion> x \<squnion> x" by simp
finally show ?thesis by simp
qed
(* Distribution Laws *)
lemma copyp_distrib: "k \<otimes> (x \<squnion> y) = (k \<otimes> x) \<squnion> (k \<otimes> y)"
proof (induct k)
case 0 show ?case by simp
case Suc thus ?case by (simp, metis sup_assoc sup_comm)
qed
corollary copy_distrib: "k \<times> (x \<squnion> y) = (k \<times> x) \<squnion> (k \<times> y)"
by (induct k, (simp add: sup_assoc sup_comm copyp_distrib)+)
lemma copyp_arith: "(k + l + 1) \<otimes> x = (k \<otimes> x) \<squnion> (l \<otimes> x)"
proof (induct l)
case 0 have "k + 0 + 1 = Suc(k)" by arith
thus ?case by simp
case (Suc l) note ind_hyp = this
have "k + Suc(l) + 1 = Suc(k + l + 1)" by arith+
hence "(k + Suc(l) + 1) \<otimes> x = (k + l + 1) \<otimes> x \<squnion> x" by (simp add: ind_hyp)
also from ind_hyp have
"\<dots> = (k \<otimes> x) \<squnion> (l \<otimes> x) \<squnion> x" by simp
also note sup_assoc
finally show ?case by simp
qed
lemma copy_arith:
assumes "k \<noteq> 0" and "l \<noteq> 0"
shows "(k + l) \<times> x = (k \<times> x) \<squnion> (l \<times> x)"
using assms
proof -
from assms have "\<exists> k'. Suc(k') = k"
and "\<exists> l'. Suc(l') = l" by arith+
from this obtain k' l' where A: "Suc(k') = k"
and B: "Suc(l') = l" by fast+
from this have A1: "k \<times> x = k' \<otimes> x"
and B1: "l \<times> x = l' \<otimes> x" by fastforce+
from A B have "k + l = Suc(k' + l' + 1)" by arith
hence "(k + l) \<times> x = (k' + l' + 1) \<otimes> x" by simp
also from copyp_arith have
"\<dots> = k' \<otimes> x \<squnion> l' \<otimes> x" by fast
also from A1 B1 have
"\<dots> = k \<times> x \<squnion> l \<times> x" by fastforce
finally show ?thesis by simp
qed
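The copyp/copy lemmas above use only associativity and commutativity of \<squnion>, so as a sanity check of the counting we can model \<squnion> by integer addition, where the iteration counts become visible: k \<otimes> x is then (k+1)·x and, for k \<ge> 1, k \<times> x is k·x. Note the quirk in the primrec definitions that 0 \<times> x = x. This recursive model and the name `add` are our own sketch, outside the theory.

```python
# Python transcription of the copyp/copy primrecs, with join modelled by
# integer addition so that repetition counts are observable.
def copyp(k, x, sup):                  # k (*) x:  join of k+1 copies of x
    return x if k == 0 else sup(copyp(k - 1, x, sup), x)

def copy(k, x, sup):                   # k (x) x:  Suc k maps to copyp k
    return x if k == 0 else copyp(k - 1, x, sup)

add = lambda a, b: a + b
assert copy(0, 5, add) == 5            # the quirk: 0 x = x
assert copy(1, 5, add) == 5            # lemma one
assert copy(3, 5, add) == 15           # lemma three: x + x + x
# copy_arith: (k + l) x = (k x) join (l x)  for k, l nonzero
assert all(copy(k + l, 5, add) == add(copy(k, 5, add), copy(l, 5, add))
           for k in range(1, 6) for l in range(1, 6))
print("copy model checks pass")
```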
end
text \<open>The theorem asserting that all Robbins algebras are Boolean
comes in six movements.
First: The Winker identity is proved.
Second: Idempotence for a particular object is proved.
Note that falsum is defined in terms of this object.
Third: An identity law for falsum is derived.
Fourth: Idempotence for supremum is derived.
Fifth: The double negation law is proved.
Sixth: Robbins algebras are proved to be Huntington algebras.\<close>
context robbins_algebra begin
definition secret_object2 :: "'a" ("\<alpha>") where
"\<alpha> = -(-(\<iota> \<squnion> \<iota> \<squnion> \<iota>) \<squnion> \<iota>)"
definition secret_object3 :: "'a" ("\<beta>") where
"\<beta> = \<iota> \<squnion> \<iota>"
definition secret_object4 :: "'a" ("\<delta>") where
"\<delta> = \<beta> \<squnion> (-(\<alpha> \<squnion> -\<beta>) \<squnion> -(\<alpha> \<squnion> -\<beta>))"
definition secret_object5 :: "'a" ("\<gamma>") where
"\<gamma> = \<delta> \<squnion> -(\<delta> \<squnion> -\<delta>)"
definition winker_object :: "'a" ("\<rho>") where
"\<rho> = \<gamma> \<squnion> \<gamma> \<squnion> \<gamma>"
definition fake_bot :: "'a" ("\<bottom>\<bottom>") where
"\<bottom>\<bottom> = -(\<rho> \<squnion> -\<rho>)"
(* Towards Winker's Identity *)
(* These lemmas are due to Alan Mann *)
lemma robbins2: "y = -(-(-x \<squnion> y) \<squnion> -(x \<squnion> y))"
by (metis robbins sup_comm)
lemma mann0: "-(x \<squnion> y) = -(-(-(x \<squnion> y) \<squnion> -x \<squnion> y) \<squnion> y)"
by (metis robbins sup_comm sup_assoc)
lemma mann1: "-(-x \<squnion> y) = -(-(-(-x \<squnion> y) \<squnion> x \<squnion> y) \<squnion> y)"
by (metis robbins sup_comm sup_assoc)
lemma mann2: "y = -(-(-(-x \<squnion> y) \<squnion> x \<squnion> y \<squnion> y) \<squnion> -(-x \<squnion> y))"
by (metis mann1 robbins sup_comm sup_assoc)
lemma mann3: "z = -(-(-(-(-x \<squnion> y) \<squnion> x \<squnion> y \<squnion> y) \<squnion> -(-x \<squnion> y) \<squnion> z) \<squnion> -(y \<squnion> z))"
proof -
let ?w = "-(-(-x \<squnion> y) \<squnion> x \<squnion> y \<squnion> y) \<squnion> -(-x \<squnion> y)"
from robbins[where x="z" and y="?w"] sup_comm mann2
have "z = -(-(y \<squnion> z) \<squnion> -(?w \<squnion> z))" by metis
thus ?thesis by (metis sup_comm)
qed
lemma mann4: "-(y \<squnion> z) =
-(-(-(-(-x \<squnion> y) \<squnion> x \<squnion> y \<squnion> y) \<squnion> -(-x \<squnion> y) \<squnion> -(y \<squnion> z) \<squnion> z) \<squnion> z)"
proof -
from robbins2[where x="-(-(-x \<squnion> y) \<squnion> x \<squnion> y \<squnion> y) \<squnion> -(-x \<squnion> y) \<squnion> z"
and y="-(y \<squnion> z)"]
mann3[where x="x" and y="y" and z="z"]
have "-(y \<squnion> z) =
-(z \<squnion> -(-(-(-x \<squnion> y) \<squnion> x \<squnion> y \<squnion> y) \<squnion> -(-x \<squnion> y) \<squnion> z \<squnion> -(y \<squnion> z)))"
by metis
with sup_comm sup_assoc show ?thesis by metis
qed
lemma mann5: "u =
-(-(-(-(-(-x \<squnion> y) \<squnion> x \<squnion> y \<squnion> y) \<squnion>
-(-x \<squnion> y) \<squnion> - (y \<squnion> z) \<squnion> z) \<squnion> z \<squnion> u) \<squnion>
-(-(y \<squnion> z) \<squnion> u))"
using robbins2[where x="-(-(-(-x \<squnion> y) \<squnion> x \<squnion> y \<squnion> y) \<squnion>
-(-x \<squnion> y) \<squnion> -(y \<squnion> z) \<squnion> z) \<squnion> z"
and y="u"]
mann4[where x=x and y=y and z=z]
sup_comm
by metis
lemma mann6:
"-(- 3\<times>x \<squnion> x) = -(-(-(- 3\<times>x \<squnion> x) \<squnion> - 3\<times>x) \<squnion> -(-(- 3\<times>x \<squnion> x) \<squnion> 5\<times>x))"
proof -
have "3+2=(5::nat)" and "3\<noteq>(0::nat)" and "2\<noteq>(0::nat)" by arith+
with copy_arith have \<heartsuit>: "3\<times>x \<squnion> 2\<times>x = 5\<times>x" by metis
let ?p = "-(- 3\<times>x \<squnion> x)"
{ fix q
from sup_comm have
"-(q \<squnion> 5\<times>x) = -(5\<times>x \<squnion> q)" by metis
also from \<heartsuit> mann0[where x="3\<times>x" and y="q \<squnion> 2\<times>x"] sup_assoc sup_comm have
"\<dots> = -(-(-(3\<times>x \<squnion> (q \<squnion> 2\<times>x)) \<squnion> - 3\<times>x \<squnion> (q \<squnion> 2\<times>x)) \<squnion> (q \<squnion> 2\<times>x))"
by metis
also from sup_assoc have
"\<dots> = -(-(-((3\<times>x \<squnion> q) \<squnion> 2\<times>x) \<squnion> - 3\<times>x \<squnion> (q \<squnion> 2\<times>x)) \<squnion> (q \<squnion> 2\<times>x))" by metis
also from sup_comm have
"\<dots> = -(-(-((q \<squnion> 3\<times>x) \<squnion> 2\<times>x) \<squnion> - 3\<times>x \<squnion> (q \<squnion> 2\<times>x)) \<squnion> (q \<squnion> 2\<times>x))" by metis
also from sup_assoc have
"\<dots> = -(-(-(q \<squnion> (3\<times>x \<squnion> 2\<times>x)) \<squnion> - 3\<times>x \<squnion> (q \<squnion> 2\<times>x)) \<squnion> (q \<squnion> 2\<times>x))" by metis
also from \<heartsuit> have
"\<dots> = -(-(-(q \<squnion> 5\<times>x) \<squnion> - 3\<times>x \<squnion> (q \<squnion> 2\<times>x)) \<squnion> (q \<squnion> 2\<times>x))" by metis
also from sup_assoc have
"\<dots> = -(-(-(q \<squnion> 5\<times>x) \<squnion> (- 3\<times>x \<squnion> q) \<squnion> 2\<times>x) \<squnion> (q \<squnion> 2\<times>x))" by metis
also from sup_comm have
"\<dots> = -(-(-(q \<squnion> 5\<times>x) \<squnion> (q \<squnion> - 3\<times>x) \<squnion> 2\<times>x) \<squnion> (2\<times>x \<squnion> q))" by metis
also from sup_assoc have
"\<dots> = -(-(-(q \<squnion> 5\<times>x) \<squnion> q \<squnion> - 3\<times>x \<squnion> 2\<times>x) \<squnion> 2\<times>x \<squnion> q)" by metis
finally have
"-(q \<squnion> 5\<times>x) = -(-(-(q \<squnion> 5\<times>x) \<squnion> q \<squnion> - 3\<times>x \<squnion> 2\<times>x) \<squnion> 2\<times>x \<squnion> q)" by simp
} hence \<spadesuit>:
"-(?p \<squnion> 5\<times>x) = -(-(-(?p \<squnion> 5\<times>x) \<squnion> ?p \<squnion> - 3\<times>x \<squnion> 2\<times>x) \<squnion> 2\<times>x \<squnion> ?p)"
by simp
from mann5[where x="3\<times>x" and y="x" and z="2\<times>x" and u="?p"]
sup_assoc three[where x=x] five[where x=x] have
"?p =
-(-(-(-(?p \<squnion> 5\<times>x) \<squnion> ?p \<squnion> -(x \<squnion> 2\<times>x) \<squnion> 2\<times>x) \<squnion> 2\<times>x \<squnion> ?p) \<squnion>
-(-(x \<squnion> 2\<times>x) \<squnion> ?p))" by metis
also from sup_comm have
"\<dots> =
-(-(-(-(?p \<squnion> 5\<times>x) \<squnion> ?p \<squnion> -(2\<times>x \<squnion> x) \<squnion> 2\<times>x) \<squnion> 2\<times>x \<squnion> ?p) \<squnion>
-(-(2\<times>x \<squnion> x) \<squnion> ?p))" by metis
also from two[where x=x] three[where x=x] have
"\<dots> =
-(-(-(-(?p \<squnion> 5\<times>x) \<squnion> ?p \<squnion> - 3\<times>x \<squnion> 2\<times>x) \<squnion> 2\<times>x \<squnion> ?p) \<squnion>
-(- 3\<times>x \<squnion> ?p))" by metis
also from \<spadesuit> have "\<dots> = -(-(?p \<squnion> 5\<times>x) \<squnion> -(- 3\<times>x \<squnion> ?p))" by simp
also from sup_comm have "\<dots> = -(-(?p \<squnion> 5\<times>x) \<squnion> -(?p \<squnion> - 3\<times>x))" by simp
also from sup_comm have "\<dots> = -(-(?p \<squnion> - 3\<times>x) \<squnion> -(?p \<squnion> 5\<times>x))" by simp
finally show ?thesis .
qed
lemma mann7:
"- 3\<times>x = -(-(- 3\<times>x \<squnion> x) \<squnion> 5\<times>x)"
proof -
let ?p = "-(- 3\<times>x \<squnion> x)"
let ?q = "?p \<squnion> - 3\<times>x"
let ?r = "-(?p \<squnion> 5\<times>x)"
from robbins2[where x="?q"
and y="?r"]
mann6[where x=x]
have "?r = - (?p \<squnion> - (?q \<squnion> ?r))" by simp
also from sup_comm have "\<dots> = - (- (?q \<squnion> ?r) \<squnion> ?p)" by simp
also from sup_comm have "\<dots> = - (- (?r \<squnion> ?q) \<squnion> ?p)" by simp
finally have \<spadesuit>: "?r = - (- (?r \<squnion> ?q) \<squnion> ?p)" .
from mann3[where x="3\<times>x" and y="x" and z="- 3\<times>x"]
sup_comm have
"- 3\<times>x = -(-(-(?p \<squnion> 3\<times>x \<squnion> x \<squnion> x) \<squnion> ?p \<squnion> - 3\<times>x) \<squnion> ?p)" by metis
also from sup_assoc have
"\<dots> = -(-(-(?p \<squnion> (3\<times>x \<squnion> x \<squnion> x)) \<squnion> ?q) \<squnion> ?p)" by metis
also from three[where x=x] five[where x=x] have
"\<dots> = -(-(?r \<squnion> ?q) \<squnion> ?p)" by metis
finally have "- 3\<times>x = -(-(?r \<squnion> ?q) \<squnion> ?p)" by metis
with \<spadesuit> show ?thesis by simp
qed
lemma mann8:
"-(- 3\<times>x \<squnion> x) \<squnion> 2\<times>x = -(-(-(- 3\<times>x \<squnion> x) \<squnion> - 3\<times>x \<squnion> 2\<times>x) \<squnion> - 3\<times>x)"
(is "?lhs = ?rhs")
proof -
let ?p = "-(- 3\<times>x \<squnion> x)"
let ?q = "?p \<squnion> 2\<times>x"
let ?r = "3\<times>x"
have "3+2=(5::nat)" and "3\<noteq>(0::nat)" and "2\<noteq>(0::nat)" by arith+
with copy_arith have \<heartsuit>: "3\<times>x \<squnion> 2\<times>x = 5\<times>x" by metis
from robbins2[where x="?r" and y="?q"] and sup_assoc
have "?q = -(-(- 3\<times>x \<squnion> ?q) \<squnion> -(3\<times>x \<squnion> ?p \<squnion> 2\<times>x))" by metis
also from sup_comm have
"\<dots> = -(-(?q \<squnion> - 3\<times>x) \<squnion> -(?p \<squnion> 3\<times>x \<squnion> 2\<times>x))" by metis
also from \<heartsuit> sup_assoc have
"\<dots> = -(-(?q \<squnion> - 3\<times>x) \<squnion> -(?p \<squnion> 5\<times>x))" by metis
also from mann7[where x=x] have
"\<dots> = -(-(?q \<squnion> - 3\<times>x) \<squnion> - 3\<times>x)" by metis
also from sup_assoc have
"\<dots> = -(-(?p \<squnion> (2\<times>x \<squnion> - 3\<times>x)) \<squnion> - 3\<times>x)" by metis
also from sup_comm have
"\<dots> = -(-(?p \<squnion> (- 3\<times>x \<squnion> 2\<times>x)) \<squnion> - 3\<times>x)" by metis
also from sup_assoc have
"\<dots> = ?rhs" by metis
finally show ?thesis by simp
qed
lemma mann9: "x = -(-(- 3\<times>x \<squnion> x) \<squnion> - 3\<times>x )"
proof -
let ?p = "-(- 3\<times>x \<squnion> x)"
let ?q = "?p \<squnion> 4\<times>x"
have "4+1=(5::nat)" and "1\<noteq>(0::nat)" and "4\<noteq>(0::nat)" by arith+
with copy_arith one have \<heartsuit>: "4\<times>x \<squnion> x = 5\<times>x" by metis
with sup_assoc robbins2[where y=x and x="?q"]
have "x = -(-(-?q \<squnion> x) \<squnion> -(?p \<squnion> 5\<times>x))" by metis
with mann7 have "x = -(-(-?q \<squnion> x) \<squnion> - 3\<times>x)" by metis
moreover
have "3+1=(4::nat)" and "1\<noteq>(0::nat)" and "3\<noteq>(0::nat)" by arith+
with copy_arith one have \<spadesuit>: "3\<times>x \<squnion> x = 4\<times>x" by metis
with mann1[where x="3\<times>x" and y="x"] sup_assoc have
"-(-?q \<squnion> x) = ?p" by metis
ultimately show ?thesis by simp
qed
lemma mann10: "y = -(-(-(- 3\<times>x \<squnion> x) \<squnion> - 3\<times>x \<squnion> y) \<squnion> -(x \<squnion> y))"
using robbins2[where x="-(- 3\<times>x \<squnion> x) \<squnion> - 3\<times>x" and y=y]
mann9[where x=x]
sup_comm
by metis
theorem mann: "2\<times>x = -(- 3\<times>x \<squnion> x) \<squnion> 2\<times>x"
using mann10[where x=x and y="2\<times>x"]
mann8[where x=x]
two[where x=x] three[where x=x] sup_comm
by metis
corollary winkerr: "\<alpha> \<squnion> \<beta> = \<beta>"
using mann secret_object2_def secret_object3_def two three
by metis
corollary winker: "\<beta> \<squnion> \<alpha> = \<beta>"
by (metis winkerr sup_comm)
corollary multi_winkerp: "\<beta> \<squnion> k \<otimes> \<alpha> = \<beta>"
by (induct k, (simp add: winker sup_comm sup_assoc)+)
corollary multi_winker: "\<beta> \<squnion> k \<times> \<alpha> = \<beta>"
by (induct k, (simp add: multi_winkerp winker sup_comm sup_assoc)+)
(* Towards Idempotence *)
lemma less_eq_introp:
"-(x \<squnion> -(y \<squnion> z)) = -(x \<squnion> y \<squnion> -z) \<Longrightarrow> y \<sqsubseteq> x"
by (metis robbins sup_assoc less_eq_def
sup_comm[where x=x and y=y])
corollary less_eq_intro:
"-(x \<squnion> -(y \<squnion> z)) = -(x \<squnion> y \<squnion> -z) \<Longrightarrow> x \<squnion> y = x"
by (metis less_eq_introp less_eq_def sup_comm)
lemma eq_intro:
"-(x \<squnion> -(y \<squnion> z)) = -(y \<squnion> -(x \<squnion> z)) \<Longrightarrow> x = y"
by (metis robbins sup_assoc sup_comm)
lemma copyp0:
assumes "-(x \<squnion> -y) = z"
shows "-(x \<squnion> -(y \<squnion> k \<otimes> (x \<squnion> z))) = z"
using assms
proof (induct k)
case 0 show ?case
by (simp, metis assms robbins sup_assoc sup_comm)
case Suc note ind_hyp = this
show ?case
by (simp, metis ind_hyp robbins sup_assoc sup_comm)
qed
lemma copyp1:
assumes "-(-(x \<squnion> -y) \<squnion> -y) = x"
shows "-(y \<squnion> k \<otimes> (x \<squnion> -(x \<squnion> -y))) = -y"
using assms
proof -
let ?z = "-(x \<squnion> - y)"
let ?ky = "y \<squnion> k \<otimes> (x \<squnion> ?z)"
have "-(x \<squnion> -?ky) = ?z" by (simp add: copyp0)
hence "-(-?ky \<squnion> -(-y \<squnion> ?z)) = ?z" by (metis assms sup_comm)
also have "-(?z \<squnion> -?ky) = x" by (metis assms copyp0 sup_comm)
hence "?z = -(-y \<squnion> -(-?ky \<squnion> ?z))" by (metis sup_comm)
finally show ?thesis by (metis eq_intro)
qed
corollary copyp2:
assumes "-(x \<squnion> y) = -y"
shows "-(y \<squnion> k \<otimes> (x \<squnion> -(x \<squnion> -y))) = -y"
by (metis assms robbins sup_comm copyp1)
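copyp2 is a conditional identity, so a model check has to filter by its hypothesis. Over the bitmask model the hypothesis -(x \<squnion> y) = -y amounts to x \<le> y, and the conclusion can be checked exhaustively for small k. This is our own sketch, outside the Isabelle development.

```python
# Exhaustive check of copyp2 on bitmasks 0..7:
# whenever -(x \/ y) = -y, also -(y \/ k (*) (x \/ -(x \/ -y))) = -y.
U = 0b111
neg = lambda a: U ^ a
sup = lambda a, b: a | b

def copyp(k, x):                       # k (*) x: join of k+1 copies of x
    r = x
    for _ in range(k):
        r = sup(r, x)
    return r

copyp2_ok = all(
    neg(sup(y, copyp(k, sup(x, neg(sup(x, neg(y))))))) == neg(y)
    for x in range(8) for y in range(8) for k in range(4)
    if neg(sup(x, y)) == neg(y)        # hypothesis of copyp2
)
print(copyp2_ok)                       # True
```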
lemma two_threep:
assumes "-(2 \<times> x \<squnion> y) = -y"
and "-(3 \<times> x \<squnion> y) = -y"
shows "2 \<times> x \<squnion> y = 3 \<times> x \<squnion> y"
using assms
proof -
from assms two three have
A: "-(x \<squnion> x \<squnion> y) = -y" and
B: "-(x \<squnion> x \<squnion> x \<squnion> y) = -y" by simp+
with sup_assoc
copyp2[where x="x" and y="x \<squnion> x \<squnion> y" and k="0"]
have "-(x \<squnion> x \<squnion> y \<squnion> x \<squnion> -(x \<squnion> -y)) = -y" by simp
moreover
from sup_comm sup_assoc A B
copyp2[where x="x \<squnion> x" and y="y" and k="0"]
have "-(y \<squnion> x \<squnion> x \<squnion> -(x \<squnion> x \<squnion> -y)) = -y" by fastforce
with sup_comm sup_assoc
have "-(x \<squnion> x \<squnion> y \<squnion> -(x \<squnion> (x \<squnion> -y))) = -y" by metis
ultimately have
"-(x \<squnion> x \<squnion> y \<squnion> -(x \<squnion> (x \<squnion> -y))) = -(x \<squnion> x \<squnion> y \<squnion> x \<squnion> -(x \<squnion> -y))" by simp
with less_eq_intro have "x \<squnion> x \<squnion> y = x \<squnion> x \<squnion> y \<squnion> x" by metis
with sup_comm sup_assoc two three show ?thesis by metis
qed
lemma two_three:
assumes "-(x \<squnion> y) = -y \<or> -(-(x \<squnion> -y) \<squnion> -y) = x"
shows "y \<squnion> 2 \<times> (x \<squnion> -(x \<squnion> -y)) = y \<squnion> 3 \<times> (x \<squnion> -(x \<squnion> -y))"
(is "y \<squnion> ?z2 = y \<squnion> ?z3")
using assms
proof
assume "-(x \<squnion> y) = -y"
with copyp2[where k="Suc(0)"]
copyp2[where k="Suc(Suc(0))"]
two three
have "-(y \<squnion> ?z2) = -y" and "-(y \<squnion> ?z3) = -y" by simp+
with two_threep sup_comm show ?thesis by metis
next
assume "-(-(x \<squnion> -y) \<squnion> -y) = x"
with copyp1[where k="Suc(0)"]
copyp1[where k="Suc(Suc(0))"]
two three
have "-(y \<squnion> ?z2) = -y" and "-(y \<squnion> ?z3) = -y" by simp+
with two_threep sup_comm show ?thesis by metis
qed
lemma sup_idem: "\<rho> \<squnion> \<rho> = \<rho>"
proof -
from winkerr two
copyp2[where x="\<alpha>" and y="\<beta>" and k="Suc(0)"] have
"-\<beta> = -(\<beta> \<squnion> 2 \<times> (\<alpha> \<squnion> -(\<alpha> \<squnion> -\<beta>)))" by simp
also from copy_distrib sup_assoc have
"\<dots> = -(\<beta> \<squnion> 2 \<times> \<alpha> \<squnion> 2 \<times> (-(\<alpha> \<squnion> -\<beta>)))" by simp
also from sup_assoc secret_object4_def two
multi_winker[where k="2"] have
"\<dots> = -\<delta>" by metis
finally have "-\<beta> = -\<delta>" by simp
with secret_object4_def sup_assoc three have
"\<delta> \<squnion> -(\<alpha> \<squnion> -\<delta>) = \<beta> \<squnion> 3 \<times> (-(\<alpha> \<squnion> -\<beta>))" by simp
also from copy_distrib[where k="3"]
multi_winker[where k="3"]
sup_assoc have
"\<dots> = \<beta> \<squnion> 3 \<times> (\<alpha> \<squnion> -(\<alpha> \<squnion> -\<beta>))" by metis
also from winker sup_comm two_three[where x="\<alpha>" and y="\<beta>"] have
"\<dots> = \<beta> \<squnion> 2 \<times> (\<alpha> \<squnion> -(\<alpha> \<squnion> -\<beta>))" by fastforce
also from copy_distrib[where k="2"]
multi_winker[where k="2"]
sup_assoc two secret_object4_def have
"\<dots> = \<delta>" by metis
finally have \<heartsuit>: "\<delta> \<squnion> -(\<alpha> \<squnion> -\<delta>) = \<delta>" by simp
from secret_object4_def winkerr sup_assoc have
"\<alpha> \<squnion> \<delta> = \<delta>" by metis
hence "\<delta> \<squnion> \<alpha> = \<delta>" by (metis sup_comm)
hence "-(-(\<delta> \<squnion> -\<delta>) \<squnion> -\<delta>) = -(-(\<delta> \<squnion> (\<alpha> \<squnion> -\<delta>)) \<squnion> -\<delta>)" by (metis sup_assoc)
also from \<heartsuit> have
"\<dots> = -(-(\<delta> \<squnion> (\<alpha> \<squnion> -\<delta>)) \<squnion> -(\<delta> \<squnion> -(\<alpha> \<squnion> -\<delta>)))" by metis
also from robbins have
"\<dots> = \<delta>" by metis
finally have "-(-(\<delta> \<squnion> -\<delta>) \<squnion> -\<delta>) = \<delta>" by simp
with two_three[where x="\<delta>" and y="\<delta>"]
secret_object5_def sup_comm
have "3 \<times> \<gamma> \<squnion> \<delta> = 2 \<times> \<gamma> \<squnion> \<delta>" by fastforce
with secret_object5_def sup_assoc sup_comm have
"3 \<times> \<gamma> \<squnion> \<gamma> = 2 \<times> \<gamma> \<squnion> \<gamma>" by fastforce
with two three four five six have
"6 \<times> \<gamma> = 3 \<times> \<gamma>" by simp
moreover have "3 + 3 = (6::nat)" and "3 \<noteq> (0::nat)" by arith+
moreover note copy_arith[where k="3" and l="3" and x="\<gamma>"]
winker_object_def three
ultimately show ?thesis by simp
qed
(* Idempotence implies the identity law *)
lemma sup_ident: "x \<squnion> \<bottom>\<bottom> = x"
proof -
have I: "\<rho> = -(-\<rho> \<squnion> \<bottom>\<bottom>)"
by (metis fake_bot_def inf_eq robbins sup_comm sup_idem)
{ fix x have "x = -(-(x \<squnion> -\<rho> \<squnion> \<bottom>\<bottom>) \<squnion> -(x \<squnion> \<rho>))"
by (metis I robbins sup_assoc) }
note II = this
have III: "-\<rho> = -(-(\<rho> \<squnion> -\<rho> \<squnion> -\<rho>) \<squnion> \<rho>)"
by (metis robbins[where x="-\<rho>" and y="\<rho> \<squnion> -\<rho>"]
I sup_comm fake_bot_def)
hence "\<rho> = -(-(\<rho> \<squnion> -\<rho> \<squnion> -\<rho>) \<squnion> -\<rho>)"
by (metis robbins[where x="\<rho>" and y="\<rho> \<squnion> -\<rho> \<squnion> -\<rho>"]
sup_comm[where x="\<rho>" and y="-(\<rho> \<squnion> -\<rho> \<squnion> -\<rho>)"]
sup_assoc sup_idem)
hence "-(\<rho> \<squnion> -\<rho> \<squnion> -\<rho>) = \<bottom>\<bottom>"
by (metis robbins[where x="-(\<rho> \<squnion> -\<rho> \<squnion> -\<rho>)" and y="\<rho>"]
III sup_comm fake_bot_def)
hence "-\<rho> = -(\<rho> \<squnion> \<bottom>\<bottom>)"
by (metis III sup_comm)
hence "\<rho> = -(-(\<rho> \<squnion> \<bottom>\<bottom>) \<squnion> -(\<rho> \<squnion> \<bottom>\<bottom> \<squnion> -\<rho>))"
by (metis II sup_idem sup_comm sup_assoc)
moreover have "\<rho> \<squnion> \<bottom>\<bottom> = -(-(\<rho> \<squnion> \<bottom>\<bottom>) \<squnion> -(\<rho> \<squnion> \<bottom>\<bottom> \<squnion> -\<rho>))"
by (metis robbins[where x="\<rho> \<squnion> \<bottom>\<bottom>" and y="\<rho>"]
sup_comm[where y="\<rho>"]
sup_assoc sup_idem)
ultimately have "\<rho> = \<rho> \<squnion> \<bottom>\<bottom>" by auto
hence "x \<squnion> \<bottom>\<bottom> = -(-(x \<squnion> \<rho>) \<squnion> -(x \<squnion> \<bottom>\<bottom> \<squnion> -\<rho>))"
by (metis robbins[where x="x \<squnion> \<bottom>\<bottom>" and y=\<rho>]
sup_comm[where x="\<bottom>\<bottom>" and y=\<rho>]
sup_assoc)
thus ?thesis by (metis sup_assoc sup_comm II)
qed
(* The identity law implies double negation *)
lemma dbl_neg: "- (-x) = x"
proof -
{ fix x have "\<bottom>\<bottom> = -(-x \<squnion> -(-x))"
by (metis robbins sup_comm sup_ident)
} note I = this
{ fix x have "-x = -(-(-x \<squnion> -(-(-x))))"
by (metis I robbins sup_comm sup_ident)
} note II = this
{ fix x have "-(-(-x)) = -(-(-x \<squnion> -(-(-x))))"
by (metis I II robbins sup_assoc sup_comm sup_ident)
} note III = this
show ?thesis by (metis II III robbins)
qed
(* Double negation implies Huntington's axiom, hence Boolean*)
theorem robbins_is_huntington:
"class.huntington_algebra uminus (\<sqinter>) (\<squnion>) \<bottom> \<top>"
apply unfold_locales
apply (metis dbl_neg robbins sup_comm)
done
theorem robbins_is_boolean_II:
"class.boolean_algebra_II uminus (\<sqinter>) (\<squnion>) \<bottom> \<top>"
proof -
interpret huntington:
huntington_algebra uminus "(\<sqinter>)" "(\<squnion>)" \<bottom> \<top>
by (fact robbins_is_huntington)
show ?thesis by (simp add: huntington.huntington_is_boolean_II)
qed
theorem robbins_is_boolean:
"class.boolean_algebra minus uminus (\<sqinter>) (\<sqsubseteq>) (\<sqsubset>) (\<squnion>) \<bottom> \<top>"
proof -
interpret huntington:
huntington_algebra uminus "(\<sqinter>)" "(\<squnion>)" \<bottom> \<top>
by (fact robbins_is_huntington)
show ?thesis by (simp add: huntington.huntington_is_boolean)
qed
end
no_notation secret_object1 ("\<iota>")
and secret_object2 ("\<alpha>")
and secret_object3 ("\<beta>")
and secret_object4 ("\<delta>")
and secret_object5 ("\<gamma>")
and winker_object ("\<rho>")
and less_eq (infix "\<sqsubseteq>" 50)
and less (infix "\<sqsubset>" 50)
and inf (infixl "\<sqinter>" 70)
and sup (infixl "\<squnion>" 65)
and top ("\<top>")
and bot ("\<bottom>")
and copyp (infix "\<otimes>" 80)
and copy (infix "\<times>" 85)
notation
Product_Type.Times (infixr "\<times>" 80)
end
TITLE: The relation between group cohomology and the cohomology of the classifying space
QUESTION [8 upvotes]: We know that the Borel group cohomology (group cohomology defined with measurable cochains) of a group $G$, ${\cal H}_B^d(G,Z)$, is given by the cohomology of the classifying space: ${\cal H}_B^d(G,Z)=H^d(BG,Z)$. However, ${\cal H}_B^d(G,R/Z)\neq H^d(BG,R/Z)$: for example, $H^d(BU(1),R/Z) = R/Z$ for even $d$ and $H^d(BU(1),R/Z) = 0$ for odd $d$, while ${\cal H}_B^d(U(1),R/Z)=0$ for even $d$ and ${\cal H}_B^d(U(1),R/Z)=Z$ for odd $d$. Instead, we have ${\cal H}_B^d(G,R/Z) = H^{d+1}(BG,Z)$.
My question is: are there any relations between ${\cal H}_B^*(G,M)$ and $H^*(BG,M')$, where $G$ can be continuous, $M$ can be $Z_n$, and $M'$ can be different from $M$? In particular, I would like to know how ${\cal H}_B^*(G,Z_n)$ is related to $H^*(BG,M')$ when $G$ is continuous.
A related question, group cohomology and cohomology of classifying space, is closed. I hope it can be reopened.
REPLY [1 votes]: For a modern and general treatment of group cohomology and the cohomology of the classifying space, see Flach's paper.
1/20/2020 – (1) End-of-Life Caregiving, (2) Expanding End-of-Life Rights
20 Jan 2020
On today’s broadcast of The Senior Zone, we discussed the following:
Segment 1: “End-of-Life Care with Humor, Honesty & Wisdom”. Guest: Ann Marie Hancock, Author, You Can’t Drive Your Own Car To Your Own Funeral.
Segment 2: “Expanding End-of-Life Rights”. Guest, Donna Smith, Legislative & Field Consultant, Compassion & Choices.
Podcast: Play in new window | Download
TAIPEI (Reuters) - Taiwan President Ma Ying-jeou said on Thursday his upcoming meeting with President Xi Jinping was about further normalizing ties with China and had nothing to do with trying to revive his party’s fortunes ahead of the island’s elections in January.
The talks in Singapore are about “the future of cross-strait ties,” Ma said in his first public remarks since the surprise announcement of the meeting at midnight.
Communist China deems Taiwan a breakaway province to be taken back, by force if necessary, particularly if it makes moves toward formal independence.
“If the DPP returns to power next year, ties across the Taiwan Strait will likely deteriorate,” Eric Chu, the KMT’s presidential candidate, who started his own campaigning only three weeks ago, told reporters.
The Communists and KMT both agree there is “one China” but agree to disagree on the interpretation. Taiwan has been self-ruled since Chiang Kai-shek’s KMT fled to the island following their defeat by Mao Zedong’s Communists at the end of the Chinese civil war.
Chu, who met Xi in May after he succeeded Ma as KMT party chief, said he told the Chinese leader at the time that on the basis of such “one China” both sides can “continue to cooperate and reach win-win”. The opposition said news of the Singapore meeting had come out of the blue and that the timing was suspect, with elections 10 weeks away.
The Global Times, an influential tabloid published by the Chinese Communist Party’s official People’s Daily, said in an editorial.
Additional reporting by Ben Blanchard and Megha Rajagopalan in Beijing; Writing by Dean Yates; Editing by Alex Richardson and Nick Macfie
Lyrics of Don't turn his crushed face on me
Dead Infection.
Other songs by Dead Infection
A wild stench
Dead Infection
After accident
Dead Infection
Airplane's catastrophe
Dead Infection
Lyrics of Ambulance
Dead Infection
Autophagia
Dead Infection
Colitis ulcerosa
Dead Infection
Damaged elevator
Dead Infection
Lyrics of Day of decay
Dead Infection
Dead again
Dead Infection
Deformed creature
Dead Infection
Fermentation
Dead Infection
Lyrics of Fire in the forest
Dead Infection
From the anatomical deeps
Dead Infection
Gangrene of skin
Dead Infection
Haemorrhage of uterus
Dead Infection
Lyrics of Her heart in your hands
Dead Infection
Hospital
Dead Infection
In the grey memory
Dead Infection
In the name of gore
Dead Infection
Lyrics of "Don't turn his crushed face on me" by Dead Infection.
Lippo-Caesars South Korea Casino Project Clouded by ‘Uncertainties’
Hong Kong-based property developer Lippo Ltd. said earlier this week that its joint project with US gaming giant Caesars Entertainment Corp. to build an integrated resort in Incheon, South Korea, may not materialize due to ‘a number of uncertainties.’
Late in 2014, a consortium of Lippo and Caesars Entertainment subsidiaries reached a conditional deal to purchase a 90,000-square-meter parcel of land for the planned hotel and casino resort from MIDAN City Development Co. Ltd. Lippo holds a 55% stake in the latter business.
Earlier this week, however, it became clear that the involved parties had not agreed on all of the necessary conditions concerning the sale of the said parcel of land. Here it is important to note that the purchase agreement is set to expire on December 31, 2015. Lippo said in a filing to the Hong Kong Stock Exchange that it may not be able to continue with the casino project due to ‘a number of uncertainties.’
The real-estate developer explained that the said ‘uncertainties’ relate to whether the conditional land deal will eventually be finalized and whether a consortium member will agree on various investment terms.
LOCZ Korea Corp., as the consortium is called, comprises Lippo International, a wholly owned subsidiary of Lippo; OUE International, a business partly owned by the Hong Kong-based real-estate developer; and Caesars Entertainment’s Caesars Korea.
We loved the place and our host Rico's attitude :) He told us everything we needed to know, advised us which restaurants are worth visiting and what interesting places there are in Cagliari! The rooms are very cosy and nice :)
December 30, 2005
US: Department of Justice Launches Investigation in NSA Leak Scandal
Fox News reports the Department of Justice has launched an investigation to find the leaker or leakers who informed the New York Times about the "Domestic Spying" program run by the National Security Agency. The program, authorized by President Bush and reviewed by Congressional so-called leaders, allowed the NSA to intercept electronic communications initiated outside the United States by individuals with known terrorist ties. Critics of the program claim it violates the U.S. Constitution's prohibition against "unreasonable searches". (My opinion on this: what's so friggin unreasonable about that? At any rate, if somebody wants to initiate a debate on this matter please do so in the comments.)
To conclude, I wish the Department of Justice the best of luck in finding the compulsive talkers who could not keep their mouth shut on a matter of crucial national security. The leaking culture should not stand; our citizens should not be slaughtered because some hyperventilating moralizers in Washington D.C. cannot understand what the U.S. constitution actually says.
Posted by Ruy Diaz at December 30, 2005 12:46 PM | 295,007 |
ASM Failure Analysis Case Histories: Power Generating Equipment
“On-Load Corrosion” in Tubes of High Pressure Boilers
- Published: 2019
Abstract
The phenomenon of on-load corrosion is directly associated with the production of magnetite on the water-side surface of boiler tubes. On-load corrosion may first be manifested by the sudden, violent rupture of a boiler tube, such failures being found to occur predominantly on the fire-side surface of tubes situated in zones exposed to radiant heat where high rates of heat transfer pertain. In most instances, a large number of adjacent tubes are found to have suffered, the affected zone frequently extending in a horizontal band across the boiler. In some instances, pronounced local attack has taken place at butt...
“On-Load Corrosion” in Tubes of High Pressure Boilers, ASM Failure Analysis Case Histories: Power Generating Equipment, ASM International, 2019,
TITLE: Bayesian Statistics: Estimators and Posterior Probability
QUESTION [0 upvotes]: If I let $M ∼ Γ(α,β)$ (where $α, β$ are known)
Let $X_1,...,X_n$ be discrete random variables such that
$X_i$|$θ$ ∼ i.i.d. Poisson with parameter $θ$, where $θ$ is a realization of $M$.
I have two questions...
How do I compute the posterior probability for $θ$?
How can I then compute the Bayesian estimators of $θ$ for the quadratic loss?
Here is the solution I came up with so far...
$Y\sim\Gamma(\alpha,\beta)$ with density $f_Y(\theta)=\frac{1}{\Gamma(\alpha)\beta^\alpha}\theta^{\alpha-1}e^{-\theta/\beta}$ for $\theta\in(0,\infty)$
$Z\sim P(\theta)$ : $P_Z(z)=e^{-\theta}\frac{\theta^z}{z!}$
$f_{Y,X_1,X_2,\ldots}(\theta,x_1,x_2,\ldots)$ =(Bayes) $f_Y(\theta)$... (???)
=$\frac{1}{\Gamma(\alpha)\beta^\alpha}\theta^{\alpha-1}e^{-\theta/\beta}$... where $\theta\in(0,\infty)$
$\Gamma(\alpha',\beta')$ with parameters $\alpha'=\alpha+\sum_{i=1}^n x_i$, and $\beta'$ = ???
I have no idea where to go from here.
REPLY [0 votes]: It seems that your parameter $\theta\sim\Gamma(\alpha,\beta)$ and, given $\theta$, the observations $(X_1,X_2,\ldots,X_n)$ are i.i.d. $P(\theta)$.
So the prior density $\pi(\theta)$ is the density of $\Gamma(\alpha,\beta)$, and the joint p.m.f. of the observations is $P(x_1,x_2,\ldots,x_n\mid\theta)$.
Therefore, the posterior density is $f(\theta\mid x_1,x_2,\ldots,x_n)=\frac{\pi(\theta)P(x_1,x_2,\ldots,x_n \mid \theta)}{\int_0^\infty \pi(\theta)P(x_1,x_2,\ldots,x_n \mid \theta)\,d\theta}$, where $\theta\in (0,\infty)$.
The Bayes estimator of $\theta$ with respect to quadratic loss is the posterior mean, that is, $E(\theta\mid x_1,x_2,\ldots,x_n)=\int_0^\infty\theta f(\theta\mid x_1,x_2,\ldots,x_n)\,d\theta$. Carrying out the integrals shows the posterior is again Gamma, namely $\Gamma\!\left(\alpha+\sum_{i=1}^n x_i,\ \frac{\beta}{n\beta+1}\right)$ in the scale parametrization, so the Bayes estimator is $\frac{\left(\alpha+\sum_{i=1}^n x_i\right)\beta}{n\beta+1}$.
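As a numerical sanity check (an illustrative sketch, not part of the original exchange): the Gamma prior is conjugate to the Poisson likelihood, so the posterior is $\Gamma(\alpha+\sum x_i,\ \beta/(n\beta+1))$ in the scale parametrization. The code below compares that conjugate update against direct numerical integration of the posterior-mean integral; the prior parameters and data are illustrative assumptions.

```python
import math

# Illustrative assumptions (not from the post): prior theta ~ Gamma(alpha, beta),
# scale parametrization, and some observed Poisson counts.
alpha, beta = 2.0, 1.5
data = [3, 1, 4, 2, 2]
n, s = len(data), sum(data)

# Conjugate update: posterior is Gamma(alpha + sum(x_i), beta / (n*beta + 1)).
alpha_post = alpha + s
beta_post = beta / (n * beta + 1)
bayes_estimate = alpha_post * beta_post  # posterior mean = Bayes estimator under quadratic loss

def prior(t):
    """Gamma(alpha, beta) density (scale parametrization)."""
    return t ** (alpha - 1) * math.exp(-t / beta) / (math.gamma(alpha) * beta ** alpha)

def likelihood(t):
    """Joint Poisson p.m.f. of the data given theta = t."""
    return math.exp(-n * t) * t ** s / math.prod(math.factorial(x) for x in data)

# Posterior mean via the integral formula in the answer (midpoint rule on (0, 30)).
dt = 0.001
grid = [dt * (i + 0.5) for i in range(30000)]
joint = [prior(t) * likelihood(t) for t in grid]
norm = sum(joint) * dt
post_mean = sum(t * j for t, j in zip(grid, joint)) * dt / norm

print(round(bayes_estimate, 4), round(post_mean, 4))
```

With these numbers the two values agree, matching $(\alpha+\sum x_i)\beta/(n\beta+1)=42/17\approx 2.4706$.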
Terminator 2: Judgment Day (1991)
The sequel reunited director James Cameron and the two major stars, Arnold Schwarzenegger and Linda Hamilton. The screenplay was co-written by Cameron and William Wisher, and Cameron was responsible for both production and direction. The science-fiction blockbuster is known for its computer-generated special effects (created by George Lucas' Industrial Light and Magic) and dazzling, non-stop action sequences. In the first film, the Terminator was stop-motion animated as an armature model, unlike the second Terminator, which was a product of CGI (computer-generated imagery). The film won four Academy Awards for its Sound, Visual Effects, Makeup, and Sound Effects Editing. It was also the first film in history to cost $100 million to produce.
Under the credits, the film opens with a scene of Los Angeles on a hot, sunny, summer day. [It is soon learned that it is August 29, 1997 - pre-Holocaust.] Cars are moving along on the freeway. Children are playing on swings in a sun-lit playground - a destructive, apocalyptic, unholy white light suddenly envelopes the scene and vaporizes everything - hotter than many suns combined.
As a title card fades in: Los Angeles 2029 A.D., the camera pans from left to right over desolate images of future death and destruction - blackened cars, skeletal drivers, a dark sky. The intense heat has dissolved and half-melted everything, including the bars of the jungle-gym where the children were playing. In the smoking ruins, skulls lie on the ground amidst the ash-drifts - the camera lingers on the charred remains of toys, swings, and slides, and then pauses on one tiny skull, as a voice over [of Sarah Connor] speaks:
Three billion human lives ended on August 29, 1997. The survivors of the nuclear fire called the war Judgment Day. They lived only to face a new nightmare, a war against the machines...
A metal foot from a high-tech figure crushes the skull from above with a bone-shattering sound. The camera pans up to reveal a silvery, skeletal, humanoid machine holding a massive battle rifle. It scans the black horizon of the war-torn terrain, revealing its red, glowing eyes. War is raging behind the chrome skeleton in the post-nuclear inferno - there are flashes of light from searchlights. Bombs explode and laser-like beam-weapons shoot across the sky. A battle is in progress between human guerrilla troops fighting against stalking robots (terminators), tanks, flying HK's and death-hungry machines. The voice-over continues, describing the supercomputer of the future - Skynet:.
Terminators have been dispatched to the past from the future:
- The first Terminator was sent to the year 1984 [the setting of the first film]. The relentless, unstoppable Terminator character was on a mission to kill Sarah Connor, the waitress whose unborn son John would lead the rebel war against Skynet. That mission failed and the young son was born.
- Around the year 1995, two Terminators are then sent to the world of young John Connor, the boy who is destined to grow up to become the savior-leader of the Resistance against the cyborgs. One Terminator (sent by the humans) will be programmed to protect the ten-year-old boy, the other (a new prototype sent by Skynet and the artificially-intelligent robotic cyborgs) will try to destroy him.
As the above voice-over introduces the main context for the film and the major characters, the camera pans in on the figure of John Connor, the rebel, freedom-forces leader, who scans the combat with night-vision binoculars. He leads the remaining human Resistance forces after the nuclear disaster left the world under the domination of evil, killer cyborgs in a life-and-death struggle. His face is rugged with heavy scars.
The remainder of the credits play above reddish-yellow, billowing flames and the burning furnace of the war - the playground horses, swings, seesaw, and other apparatus are on fire. A Terminator endo-skeleton emerges from the fire - the camera ominously closes in on the eyes of the evil, shiny figure.
Electrical arcs of blue-white light snap and spark behind two parked tractor-trailers in an all-night truck stop. A global time-machine delivers the figure of a naked man, a Terminator (Arnold Schwarzenegger). He is a replica of the Terminator model T-800 from the original film - with a muscle-bound frame and a perfect physique. [Whether he is sent to protect or kill John Connor is left open to question.] He scans his surroundings without any emotion, and his computerized brain registers the results of a digitized, electronic scan of the Harley-Davidson motorcycles sitting outside a bikers' hangout called The Corral.
In an amusing scene, he calmly strolls stark-naked into the country-western cafe. As waitresses and patrons turn their wide-open eyes toward him, his alphanumeric readouts calculate body outlines to estimate and analyze which one of the customers is deemed suitable for leather clothing and boots. One of the tough-looking, cigar-smoking bikers is a "MATCH." The Terminator walks up and demands his attire - and bike:
I need your clothes, your boots, and your motorcycle.
The biker laughs with his pool-playing buddies and responds: "You forgot to say please." Then, he takes a long, red-hot draw on his cigar and stubs it out on the Terminator's chest. The Terminator, naturally, feels no pain. In the ensuing action sequence, the Terminator breaks the man's upper arm, throws the man's pool partner out the nearest window, and then heaves the cigar-smoking biker into the kitchen. He lands on the hamburger grill - his hands sizzle like bacon. That's enough to be convincing - the Terminator takes the man's .45 automatic gun and bike keys - and his clothes (off-screen).
In the next scene, a direct cut, the Terminator is already outside - from a boots-eye view. To the tune of "Bad to the Bone," the camera pans up showing him fully dressed in the bruised biker's leather clothes. As he swings his leg over the biker's wheels, another biker appears at the diner's door with a shotgun, threatening that he can't take the man's bike. The cyborg turns and coldly stops, sets the bike's kickstand, and walks over to the guy. He quickly yanks the 10-gauge shotgun from the man, closes in, and then snatches the man's sunglasses from his shirt pocket. He puts them on and then takes off on the Harley.
In another area of run-down Los Angeles where papers swirl in the night air, a Los Angeles policeman investigates a blue-white glare and more crackling electrical arcs in the air. While surveying a vaporized, circular section of chain-link fence, he is attacked from behind by another menacing, naked man - the second Terminator time traveler sent from the future. The lean cyborg changes into the man's uniform and sits in the squad car.
With access to the onboard computer terminal in the car, the Terminator (Robert Patrick) types in an on-screen inquiry for: "Connor, John" - the dramatic reason for his mission. Although John is only ten years old [it is 1995], his police record is extensive:
Information concerning his natural mother and father is unknown. His legal guardians (foster parents) are Todd and Janelle Voight - the cyborg memorizes their address in Reseda, California, a suburb of Los Angeles.
- Trespassing
- Shoplifting
- Disturbing the Peace
- Vandalism
[Those who saw the earlier 1984 film assume that the first Terminator is there to complete the job that his predecessor failed to finish - to kill the boy. For a while, that appears to be the case - their clothes seem to reflect their personalities: the 'Arnold' Terminator wears bad-ass biker clothes, and the other Terminator wears a policeman's outfit. To turn the tables - in a neat role reversal - the former cyborg assassin from the first film is really a good-guy Terminator, programmed to protect the young boy.]
In a smooth transitional cut to the next scene, it is the next day. John Connor (Edward Furlong) is working on reassembling the carburetor of his Honda 125 dirtbike - amidst the noise of his boombox music and bike, he ignores his foster mother Janelle (Jenette Goldstein) yelling at him to clean up his room. When Todd (Xander Berkeley) orders his foster-son to get inside and obey his mother, John responds defiantly toward his parental authority figures: "She's not my mother, Todd!" - and zooms off on his bike. John is being raised in a foster home because his mother has been institutionalized in an asylum.
In another transitional scene to the next character - John's mother - a sign on a fence reads:
PESCADERO STATE HOSPITAL - State of California - A Criminally Disordered Retention Facility
In one of the institutional, bare brick cubicles of the high-security wing, one of the female inmates is grunting and sweating while doing pull-ups on the upturned frame of her bed - the tendons and muscles of her arms bulge as she dips and pulls up rhythmically, like a machine.
In the corridor outside, a group of young hospital interns are led by Dr. Peter Silberman (Earl Boen), who introduces the next patient. Because of her mad ravings about terminator robots and her delusional fantasies and recurring dreams about Judgment Day, she has been diagnosed as paranoid schizophrenic:.
At the door to the patient's room, Silberman greets her through the intercom: "Morning Sarah." She (Linda Hamilton) turns and her wild, angry eyes peer out through a tangle of hair, as she responds: "Good morning, Dr. Silberman. How's the knee?" He turns to the interns and is forced to confess that she has made repeated attempts to escape: "She stabbed me in the kneecap with my pen a few weeks ago. Repeated escape attempts."
The police squad car (with its emblem "to protect and to serve" emblazoned on the car door) pulls up in front of the Voight home. At the door, the Terminator questions John's foster parents and finds that John is away. He borrows a snapshot of John, and then registers what they tell him: "There was a guy here this morning looking for him, too...Yeah, a big guy on a bike." [Both Terminators are hunting for John - up until this point, it is unclear which one is the bad-guy killer cyborg from the future.]
John's character is demonstrated in the next scene at a bank's ATM machine. In a voice-over, he flippantly reveals that he is robbing the automatic teller machine: "Please insert your stolen card now." The stolen ATM card is rigged with a ribbon wire-band that is attached to the back of his lap-top computer, where he can crack the PIN number. He tells his friend Tim (Danny Cooksey) how he learned to defraud the bank: "From my mom. My real mom, I mean." After withdrawing three hundred dollars, his friend notices a picture in a plastic sleeve in his knapsack - it is a Polaroid of John's mother. Sounding macho, John tells Tim about his screwed-up mother, but reveals hurt in his eyes:
She's a complete psycho. That's why she's up at Pescadero. It's a mental institute, OK? She tried to blow up a computer factory, but she got shot and arrested...She's a total loser.
In the Pescadero Hospital, in one of the brightly-lit interview rooms, a video screen plays a tape of a previous session with her at least six months earlier - Sarah and the doctor watch dispassionately as she hysterically describes her recurring nightmare about the cataclysmic end of the world on Judgment Day, August 29, 1997:
It's like a giant strobe light, burning right through my eyes, but somehow I can still see. We know the dream's the same every night, why do I have to...Children look like burnt paper. Black, not moving. And then the blast wave hits them. And they fly apart like leaves...It's not a dream, you moron, it's real. I know the date it happens...on August 29th, 1997, it's gonna feel pretty f--kin' real to you, too! Anybody not wearing two million sunblock is gonna have a real bad day. Get it?!...God, you think you're safe and alive. You're already dead. Everybody! Him. You. You're dead already. This whole place, everything you see is gone. You're the one livin' in a f--kin' dream, 'cause I know it happens. IT HAPPENS!
After the tape is freeze-framed on her angry hysterics, Sarah stonily comments to the doctor: "I feel much better now. Clearer." As she is questioned further, the camera withdraws back behind a one-way mirror in an adjacent room. From there, Sarah is being videotaped and notes are being taken as she explains her improvement over six months and her desire to see her son John:
Silberman: Let's go back to what you were saying about those terminator machines. Now you think they don't exist?
Sarah: (in a hollow voice)?
The scene transitions to the "company," Cyberdyne Systems, the corporate headquarters of a mega-electronics corporation - Sarah was taped saying that there is no "evidence" of remaining artifacts left from the Terminator that was crushed in a hydraulic press. One long tracking shot follows Mr. Miles Dyson (Joe Morton), a black computer scientist, into a bluish-lighted, high-security area. He then enters a high-tech, stainless-steel vault to check out the artifacts from the first Terminator. In front of the cabinet, he expresses blind fascination at the first artifact, computer chips from the Terminator which are sealed in a glass-container. Then he moves over to the second artifact - it is an entirely intact metallic fist and forearm - a mechanical arm which stands upright in the vacuum-sealed cabinet. [Obviously, Sarah has been telling the truth about the wreckage of the Terminator, but no one believes her.]
Back in the interview room at the hospital, Sarah is denied her request to see John by her doctor: "I know how smart you are, and I think you're just telling me what I want to hear. I don't think you really believe what you're telling me today. I think if I put you in minimum security, you'd just try to escape again....I don't see any choice but to recommend to the review board that you stay here for another six months." Not taking the news well that she can't see her son and is ordered into isolation for another six months, Sarah leaps across the table and grabs Silberman's throat, viciously attacking him. She is quickly restrained by attendants and sedated. To the camera on the other side of the one-way mirror, Silberman quips: "Model citizen."
The Terminator has been riding around Los Angeles on his motorcycle trying to positively ID John. In contrast, the second, more advanced Terminator has been using sophisticated methods to track John: the use of the police computer system, questioning of the foster parents and two girls on the sidewalk, etc. From an overpass, the Terminator spots John coming up from a drainage canal, and pursues him to a large shopping mall called The Galleria (in Sherman Oaks, CA) where he parks his larger bike next to John's smaller Honda.
From this point on, the film is composed of five exciting, action sequences:
- Action Sequence One:
The Galleria chase sequence with both Terminators in pursuit of John. The sequence concludes with the high-speed Flood Control Channel Chase involving a commandeered big-rig tow truck
- Action Sequence Two:
The hospital rescue and breakout sequence, in which the Terminator and the boy help Sarah get away from the hospital. This sequence is followed by their retreat to the desert. Sarah experiences her apocalyptic dream in full-force
- Action Sequence Three:
The sequence of Sarah's aborted attempt to terminate Miles Dyson, the futuristic computer scientist who will be responsible for the eventual creation of Skynet, a supercomputer of the future
- Action Sequence Four:
The laboratory sequence at Cyberdyne, in which the Terminator holds off hundreds of police officers, while Dyson meets a heroic death by destroying the artifacts from the future. A helicopter chase after a SWAT van leads to the last sequence
- Action Sequence Five:
The final action sequence at the steel foundry, where the two Terminators duel and battle to the death. The 'good' Terminator sends the T-1000 to his death in a vat of molten steel - and then heroically sacrifices himself for the sake of humankind - to ensure that his own micro-chip will not be found.
TITLE: Difficulty in understanding the proof of measurable functions
QUESTION [0 upvotes]: If $f$ and $g$ are measurable, then
show that the integer powers $f^k$, $k\ge 1$, are measurable.
The proof goes as
Proof:- we consider the following cases:-
I case : k is odd
$\{ f^k > a\} = \{f > a^{\frac{1}{k}}\}$
Now since $f$ is measurable,
$\{f > a^{\frac{1}{k}}\}$ is measurable, and so $\{f^k > a\}$ is measurable.
II case:- k is even
Subcase I: $a < 0$
In this case $\{f^k > a\} = E$, where $E$ is the domain of $f$, which is measurable.
Subcase II: $a \ge 0$
In similar lines we prove this**
Now I have a doubt: in case II, subcase I, how do we get $\{ f^k > a\}$ as the set $E$?
And why are such cases considered? When $k$ is odd there are no subcases for $a$.
Please help
REPLY [1 votes]: The graph of $x^k$ with $k$ even looks like the quadratic function. So when $a>0$:
$$\{f^k>a\}=\{f>a^\frac{1}{k}\}\cup\{f<-a^\frac{1}{k}\}$$
Both $\{f>a^\frac{1}{k}\}$ and $\{f<-a^\frac{1}{k}\}$ are measurable (as preimages of a [measurable] interval through a measurable function), so their union $\{f^k>a\}$ is measurable by the properties of a $\sigma$-algebra. As for your first doubt: when $k$ is even, $f^k\ge 0$ everywhere, so if $a<0$ then every point of the domain satisfies $f^k>a$; hence $\{f^k>a\}=E$. No splitting into subcases is needed for odd $k$, because $x\mapsto x^k$ is then strictly increasing on all of $\mathbb{R}$.
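A quick pointwise sanity check of these set identities (illustrative only; the values of $k$ and $a$ below are arbitrary choices): for even $k$ and $a>0$, $x^k>a$ holds exactly when $x>a^{1/k}$ or $x<-a^{1/k}$, while for odd $k$ the single condition $x>a^{1/k}$ suffices.

```python
# Check the even-k decomposition and the odd-k case on a grid of sample points.
# k and a below are arbitrary illustrative choices.
a = 2.0
xs = [i / 100.0 for i in range(-500, 501)]

# Even k: {x : x^k > a} = {x : x > a^(1/k)} union {x : x < -a^(1/k)}
k = 4
root = a ** (1.0 / k)
for x in xs:
    assert (x ** k > a) == (x > root or x < -root)

# Odd k: {x : x^k > a} = {x : x > a^(1/k)}  (x^k is increasing on all of R)
k = 3
root = a ** (1.0 / k)
for x in xs:
    assert (x ** k > a) == (x > root)

print("decompositions verified at", len(xs), "sample points")
```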
\begin{document}
\bibliographystyle{abbrv}
\title{A Class of few-Lee weight $\Z_2[u]$-linear codes using simplicial complexes and minimal codes via Gray map}
\author{Pramod Kumar Kewat$^1$ and Nilay Kumar Mondal$^{1,*}$}
\footnotetext[1]{
\small{\,
Department of Mathematics and Computing, Indian Institute of Technology (Indian School of Mines), Dhanbad 826 004, India.\\}}
\email {[email protected] (P.K. Kewat),
[corresponding author][email protected] (N.K. Mondal)}
\subjclass{94B05}
\keywords{Few-Lee weight codes, Mixed alphabet ring, Simplicial complexes, Minimal codes.}
\thispagestyle{plain} \setcounter{page}{1}
\begin{abstract}
Recently some mixed alphabet rings have been used to construct few-Lee weight codes with optimal or minimal Gray images using suitable defining sets or down-sets. Inspired by these works, we choose the mixed alphabet ring $\Z_2\Z_2[u]$ to construct a special class of linear codes $C_L$ over $\Z_2[u]$ with $u^2=0$ by employing simplicial complexes generated by a single maximal element. We show that $C_L$ has few Lee weights by determining the Lee weight distribution of $C_L$. We also obtain an infinite family of minimal codes over $\Z_2$ via a Gray map, which can be used in secret sharing schemes.
\end{abstract}
\maketitle
\markboth{P.K. Kewat, N.K. Mondal}{A class of few-Lee weight $\Z_2[u]$-linear codes from simplicial complexes and minimal codes via Gray map}
\section{Introduction}\label{section 1}
Constructing few-weight codes over finite fields is a long-standing area of interest in algebraic coding theory, with key applications in graph theory, finite geometry and combinatorial designs (see \cite{calderbank1986applications}). The authors in \cite{shi2016first} first introduced few-Lee weight codes over the finite ring $\F_2+u\F_2$ with $u^2=0$, and as an application they found some infinite families of optimal (with respect to the Griesmer bound) few-weight codes over $\F_2$ through the Gray map. Apart from this, using a famous characterization of minimal codes, namely the Ashikhmin-Barg lemma, they showed that some of the Gray images are also minimal over $\F_2$; these have direct applications in secret sharing schemes and authentication codes (see \cite{anderson1998secret}, \cite{carlet2005secret}, \cite{ding2007authentication}, \cite{yuan2006secret}). Indeed, finding minimal codes over finite fields is itself a central topic in algebraic coding theory, and in \cite{shi2016first} the authors successfully clubbed these two important topics together. After that, many researchers have constructed several classes of few-Lee weight codes over different finite chain rings of characteristic $p$, having optimal or minimal codes as applications. In particular, for $p=2$, see \cite{shi2018trace2}, \cite{shi2021simplicial}, \cite{shi2021simplicial2}, \cite{shi2021trace3}, \cite{shi2020generator}, \cite{wu2020simplicial}. The researchers mainly used suitable defining sets, simplicial complexes or down-sets (for $p>2$ only) to construct such codes (see \cite{chang2018simplicial}, \cite{ding2015trace}, \cite{wu2020down}). Very recently, the authors in \cite{shi2022nonchain} first studied few-Lee weight codes over the non-chain ring $\F_2+u\F_2+v\F_2+uv\F_2$.
\vskip 2pt
On the other hand, the authors, in \cite{dougherty2016mixedfirst} constructed one-Lee weight additive codes over the mixed alphabet ring $\Z_2\Z_4$. In \cite{shi2020mixedsecond}, the authors studied one-Lee weight and two-Lee weight codes over $\Z_2\Z_2[u,v]$ with $u^2=0,v^2=0$ and $uv=vu$. The authors, in \cite{sole2021mixedthird} generalized \cite{dougherty2016mixedfirst} and studied the construction of one-Lee weight and two-Lee weight $\Z_2\Z_4[u]$-additive codes. In \cite{wu2021mixedtrace}, the authors constructed few-Lee weight additive codes over $\Z_p\Z_p[u]$ with $u^2=0$ having good codes as Gray images by using suitable defining sets. After that the authors, in \cite{wang2021mixeddown}, used the mixed alphabet ring $\Z_p\Z_p[u]$ with $u^2=u$ and $p>2$ to study some few-Lee weight codes by employing down-sets.
\vskip 2pt
Inspired by these works, in this paper, we choose the mixed alphabet ring $\Z_2\Z_2[u]$ to construct a class of linear codes over $\Z_2[u]$ with $u^2=0$ using simplicial complexes generated by a single element. Further we compute the Lee weight distribution for this class of codes and find it to be of few-Lee weight. We have also studied their Gray images and have an infinite family of binary minimal codes with the help of Ashikhmin-Barg lemma. This shows that we may employ simplicial complexes to obatin few weight codes even in the case of mixed alphabet rings.
\vskip 5pt
This paper is organized as follows. Section 2 covers the basic definitions and related prerequisites. In Section 3, we compute the Lee weight distributions of the linear code $C_L$ over $\Z_2[u]$ as defined in Section 2. As an application, we also find an infinite family of minimal codes over $\Z_2$ as Gray images of $C_L$ in Section 4. Finally we give an overview of this material in Section 5.
\section{Preliminaries}\label{section 2}
\subsection{Rings and simplicial complexes}
Let $\Z_2$ be the ring of integers modulo $2$ and $\Z_2[u]=\Z_2+u\Z_2,~u^2=0$. Since $\Z_2$ is a subring of $\Z_2[u]$, we define $\mathcal{R}=\Z_2\Z_2[u]=\{(x,y+uz)|x,y,z\in\Z_2\}$. We define $\mathcal{R}^m=\{(p,q+ur)|p,q,r\in\Z_2^m\}$, where $m$ is any positive integer and it is easy to observe that $\mathcal{R}^m$ is a $\Z_2[u]$-module under component-wise addition and $\Z_2[u]$-scalar multiplication defined as: $(p_1,q_1+ur_1)+(p_2,q_2+ur_2)=((p_1+p_2),(q_1+q_2)+u(r_1+r_2))$ and $(y+uz)(p,q+ur)=(yp,yq+u(yr+zq))$.
\vskip 2pt
For any vector $w\in\Z_2^m$, the support of $w$ denoted by $supp(w)$ is defined as the set of all nonzero coordinate positions of $w$. Clearly, $w\mapsto supp(w)$ is a bijection between $\Z_2^m$ and the power set of $[m]=\{1,\ldots,m\}$ i.e., $2^{[m]}$.
Let $\Delta\subseteq\Z_2^m$ such that if $w_1\in\Delta$ and $supp(w_2)\subseteq supp(w_1)$ imply $w_2\in\Delta$, then $\Delta$ is called a simplicial complex. For a vector $w\in\Delta$, if there does not exist any vector $v\in\Delta$ such that $supp(w)\subseteq supp(v)$, then we call $w$ is a maximal element of $\Delta$. Clearly a simplical complex $\Delta$ may admit more than one maximal elements. Here we consider the simplicial complexes with a single maximal element. The simplicial complex $\Delta_{supp(w)}$ generated by a single maximal element $w$ is defined to be the set of all subsets of $supp(w)\subseteq[m]$ and is of size $2^{|supp(w)|}$, where $|\cdot|$ denotes the size of a set.
\subsection{Lee weight, inner product and code}\label{subsec 2.2}
A Gray map $\phi:\Z_2[u]\rightarrow\Z_2^2$ is defined as $y+uz\mapsto(z,y+z)$, where $y,z\in\Z_2$.
We can naturally extend the map $\phi$ to $\Phi:\Z_2^m+u\Z_2^m\rightarrow\Z_2^{2m}$ as
$\Phi(q+ur)=(r,q+r)$, where $q,r\in\Z_2^m$. For a vector $w\in\Z_2^m$, the Hamming weight of $w$ denoted by $wt_H(w)$ is defined as $wt_H(w)=|supp(w)|$. The Lee weight of a vector $w'=(q+ur)\in\Z_2^m+u\Z_2^m$ denoted by $wt_L(a)$ is defined as its Hamming weight under the Gray map $\Phi$, i.e. $wt_L(w')=wt_L(q+ur)=wt_H(r)+wt_H(q+r)$. The Lee distance of $w'_1,w'_2\in\Z_2^m+u\Z_2^m$ is defined as $wt_L(w'_1-w'_2)$. It is quite easy to check that $\Phi$ is an isometry between $(\Z_2^m+u\Z_2^m,d_L)$ and $(\Z_2^{2m},d_H)$.
\vskip 2pt
A linear code $C$ of length $m$ over a finite commutative ring $R$ is an $R$-submodule of $R^m$. The Euclidean inner product of vectors $a=(a_1,a_2,\ldots,a_m),b=(b_1,b_2,\ldots,b_m)\in R^m$ is $\langle a,b\rangle=\sum\limits_{i=1}^ma_ib_i\in R$. Let $w_1,w_2\in\Z_2^m$ and $w'_1,w'_2\in\Z_2^m+u\Z_2^m$. Now we define the inner product of two vectors $(w_1,w'_1)$ and $(w_2,w'_2)$ in $\mathcal{R}^m$ as $((w_1,w'_1)\cdot(w_2,w'_2))=u\langle w_1,w_2\rangle+\langle w'_1,w'_2\rangle$.
\vskip 2pt
Let $\Delta_1,\Delta_2,\Delta_3\subseteq\Z_2^m$ be three simplicial complexes generated by a single maximal elements having support $D,E,F\subseteq[m]$ respectively and they are not equal to $\Z_2^m$ at the same time. From now on we denote $\Delta_1,\Delta_2,\Delta_3$ as $\Delta_D,\Delta_E,\Delta_F$ throughout this paper. We set $L=\{l=(t_1,t_2+ut_3)|t_1\in\Delta_D^c,t_2\in\Delta_E^c,t_3\in\Delta_F^c\}\subseteq\mathcal{R}^m$, where $\Delta^c=\Z_2^m\setminus\Delta$. We define $C_L=\{c_a=((a\cdot l))_{l\in L}|a\in\mathcal{R}^m\}$. It is easy to check that $C_L$ is a linear code over $\Z_2[u]$ of length $|L|$. The code $C_L$ is said to be a $t$-Lee weight code if all of its codewords have only $t$ different nonzero Lee weights.
\subsection{Generating function and characteristic function}
Let $X$ be a subset of $\Z_2^m$. Chang and Hyun (\cite{chang2018simplicial}) introduced the following $m$-varriable generating function associated with the set $X$:
\begin{equation}\label{eq 1}
\mathcal{H}_X(x_1,x_2,\ldots,x_m)=\sum\limits_{u\in X}\prod\limits_{i=1}^m(x_i)^{u_i}\in\Z[x_1,x_2,\ldots,x_m], ~\text{where}~u=(u_1,u_2,\ldots,u_m)\in\Z_2^m.
\end{equation}
\begin{lemma}\cite[Lemma 2]{chang2018simplicial}\label{lem 2.1} Let $\Delta\subseteq\Z_2^m$ be a simplicial complex. Then the following results hold.
\begin{itemize}
\item [(1)] $\mathcal{H}_\Delta(x_1,x_2,\ldots,x_m)+\mathcal{H}_{\Delta^c}(x_1,x_2,\ldots,x_m)=\mathcal{H}_{\Z_2^m}(x_1,x_2,\ldots,x_m)=\prod\limits_{i\in[m]}(1+x_i)$,
\item [(2)] $\mathcal{H}_{\Delta_S}(x_1,x_2,\ldots,x_m)=\prod\limits_{i\in S}(1+x_i)$, where $\Delta_S$ is a simplicial complex generated by a single maximal elemnt having support $S\subseteq[m]$.
\end{itemize}
\end{lemma}
\vskip 2pt
Suppose $X$ and $Y$ are two subsets of $[m]$. We define a characteristic function $\chi:2^{[m]}\times2^{[m]}\rightarrow\{0,1\}$ as $\chi(X|Y)=1$ if and only if $X\cap Y=\emptyset$ and zero otherwise.
\begin{lemma}\cite[Lemma 3.1]{wu2020simplicial}\label{lem 2.2} Let $D,E\subseteq[m]$. Then
\begin{itemize}
\item [(1)] $|\{X\subseteq[m]~|~\emptyset\ne X,\chi(X|D)=1\}|=2^{m-|D|}-1$ and $|\{X\subseteq[m]~|~\chi(X|D)=0\}|=2^m-2^{m-|D|}$.
\item [(2)] $|\{X\subseteq[m]~|~\emptyset\ne X,\chi(X|D)\chi(X|E)=1\}|=2^{m-|D\cup E|}-1$ and $|\{X\subseteq[m]~|~\chi(X|D)\chi(X|E)=0\}|=2^m-2^{m-|D\cup E|}$.
\item [(3)] Define $D\oplus E=(D\cup E)\setminus(D\cap E)$. Let $T_i=|\{(X,Y)~|~\emptyset\ne X,Y\subseteq[m],X\ne Y,\chi(Y|E)=1,\chi(X|D)+\chi(X\oplus Y|D)=i\}|$, where $i\in\{0,1,2\}$. Then $T_0=2^m(2^{m-|E|}-1)+2^{m-|D|}(1+2^{m-|D\cup E|}-2^{m+1-|E|})$, $T_1=2(2^{m-|D|}-1)(2^{m-|E|}-2^{m-|D\cup E|})$, and $T_2=(2^{m-|D|}-2)(2^{m-|D\cup E|}-1)$.
\end{itemize}
\end{lemma}
\section{Main Result}\label{section 3}
Let $a=(p,q+ur),l=(t_1,t_2+ut_3)$, where $a\in\mathcal{R}^m$ and $l\in L$. The Lee weigt of the codeword $c_a$ of $C_L$ is
\begin{align*}
wt_L(c_a)&=wt_L(((p,q+ur)\cdot(t_1,t_2+ut_3))_{t_1,t_2,t_3})\\
&=wt_L((u\langle p,t_1\rangle+\langle q+ur,t_2+ut_3\rangle)_{t_1,t_2,t_3})\\
&=wt_L((q t_2+u(p t_1+q t_3+r t_2))_{t_1,t_2,t_3})\\
&=wt_H((p t_1+q t_3+r t_2)_{t_1,t_2,t_3})+wt_H((p t_1+(q+r) t_2+q t_3)_{t_1,t_2,t_3})\\
&=|L|-\frac{1}{2}\sum\limits_{y\in\Z_2}\sum\limits_{t_1\in\Delta_D^c}\sum\limits_{t_2\in\Delta_E^c}\sum\limits_{t_3\in\Delta_F^c}(-1)^{(p t_1+q t_3+r t_2)y}\\
&~+|L|-\frac{1}{2}\sum\limits_{y\in\Z_2}\sum\limits_{t_1\in\Delta_D^c}\sum\limits_{t_2\in\Delta_E^c}\sum\limits_{t_3\in\Delta_F^c}(-1)^{(p t_1+(q+r) t_2+q t_3)y}\\
&=|L|-\frac{1}{2}\sum\limits_{t_1\in\Delta_D^c}(-1)^{p t_1}\sum\limits_{t_2\in\Delta_E^c}(-1)^{r t_2}\sum\limits_{t_3\in\Delta_F^c}(-1)^{q t_3}\\
&~-\frac{1}{2}\sum\limits_{t_1\in\Delta_D^c}(-1)^{p t_1}\sum\limits_{t_2\in\Delta_E^c}(-1)^{(q+r) t_2}\sum\limits_{t_3\in\Delta_F^c}(-1)^{q t_3}.
\end{align*}
Using Equation \eqref{eq 1}, it is easy to observe that for $X=\Delta_D$ and $x_i=(-1)^{p_i}$, where $\Delta_D$ is defined as above and $p=(p_1,p_2,\ldots,p_m)\in\Z_2^m$, we have $\mathcal{H}_{\Delta_D}((-1)^{p_1},(-1)^{p_2},\ldots,(-1)^{p_m})=\sum\limits_{t\in \Delta_D}\prod\limits_{i=1}^m(-1)^{p_it_i}=\sum\limits_{t\in\Delta_D}(-1)^{p t}$, where $t=(t_1,t_2,\ldots,t_m)\in\Delta_D\subseteq\Z_2^m$. So, by Lemma \ref{lem 2.1}, $\sum\limits_{t\in\Delta_D^c}(-1)^{p t}= \mathcal{H}_{\Delta_D^c}((-1)^{p_1},(-1)^{p_2},\ldots,(-1)^{p_m})=\prod\limits_{i\in[m]}(1+(-1)^{p_i})-\mathcal{H}_{\Delta_D}((-1)^{p_1},(-1)^{p_2},\ldots,(-1)^{p_m})$ $=2^m\delta_{0,p}-\mathcal{H}_{\Delta_D}((-1)^{p_1},(-1)^{p_2},\ldots,(-1)^{p_m})$, where $\delta_{i,j}$ is the Kronecker delta function i.e. $\delta_{i,j}=1$ if and only if $i=j$ and zero otherwise. Also we have $\mathcal{H}_{\Delta_D}((-1)^{p_1},(-1)^{p_2},\ldots,(-1)^{p_m})=\prod\limits_{i\in D}(1+(-1)^{p_i})=2^{|D|}\chi(supp(p)|D)$, where $\chi$ is the characteristic function on the set $2^{[m]}\times2^{[m]}$ defined in Section \ref{section 2}.
\vskip 2pt
So, further we can write
\begin{align}\label{eq 2}
wt_L(c_a)=\nonumber&|L|-\frac{1}{2}[(2^m\delta_{0,p}-2^{|D|}\chi(supp(p)|D))(2^m\delta_{0,r}-2^{|E|}\chi(supp(r)|E))\\\nonumber&(2^m\delta_{0,q}-2^{|F|}\chi(supp(q)|F))]-\frac{1}{2}[(2^m\delta_{0,p}-2^{|D|}\chi(supp(p)|D))\\
&(2^m\delta_{0,(q+r)}-2^{|E|}\chi(supp(q+r)|E))(2^m\delta_{0,q}-2^{|F|}\chi(supp(q)|F))].
\end{align}
\begin{lemma}\label{lem 3.1}
Let $D,E,F\subseteq[m]$. Then
\begin{itemize}
\item [(1)] $|\{X\subseteq[m]~|~\chi(X|D)=0,\chi(X|E)=0\}|=2^m-(2^{m-|D\cup E|})[(2^{|D|-|D\cap E|})+(2^{|E|-|D\cap E|})-1]$ and $|\{X\subseteq[m]~|~\chi(X|D)=0,\chi(X|E)=1\}|=(2^{m-|D\cup E|})(2^{|D|-|D\cap E|}-1)$ and $|\{X\subseteq[m]~|~\chi(X|D)=1,\chi(X|E)=0\}|=(2^{m-|D\cup E|})(2^{|E|-|D\cap E|}-1)$.
\item [(2)] Let $T'_i=|\{(X,Y)~|~\emptyset\ne X,Y\subseteq[m],X\ne Y,\chi(Y|E)=0,\chi(X|D)+\chi(X\oplus Y|D)=i\}|$, where $i\in\{0,1,2\}$. Then $\sum\limits_{i=1}^{i=3}T'_i=(2^m-2^{m-|E|})(2^m-2)$.
\end{itemize}
\end{lemma}
\begin{proof}
\begin{itemize}
\item [(1)] Let us consider the set $\{X\subseteq[m]~|~\chi(X|D)=0,\chi(X|E)=1\}$. Since $X\cap D\ne\emptyset$, $X$ has $(2^{|D|}-1)$ choices but also $X\cap E=\emptyset$, so the choices redeuces to $(2^{|D|-|D\cap E|}-1)$ and for each of these choices we have $(2^{m-|D\cup E|})$ possibilities as $X\subseteq[m]$. So all total we have $|\{X\subseteq[m]~|~\chi(X|D)=0,\chi(X|E)=1\}|=(2^{m-|D\cup E|})(2^{|D|-|D\cap E|}-1)$. Using similar arguments we have $|\{X\subseteq[m]~|~\chi(X|D)=1,\chi(X|E)=0\}|=(2^{m-|D\cup E|})(2^{|E|-|D\cap E|}-1)$.
\vskip 2pt
Now it is easy to observe that $\{X\subseteq[m]~|~\chi(X|D)\chi(X|E)=0\}=\{X\subseteq[m]~|~\chi(X|D)=0,\chi(X|E)=0\}\sqcup \{X\subseteq[m]~|~\chi(X|D)=0,\chi(X|E)=1\}\sqcup \{X\subseteq[m]~|~\chi(X|D)=1,\chi(X|E)=0\}$. So using the above two results and (2) of lemma \ref{lem 2.2}, we have $|\{X\subseteq[m]~|~\chi(X|D)=0,\chi(X|E)=0\}|=2^m-(2^{m-|D\cup E|})[(2^{|D|-|D\cap E|})+(2^{|E|-|D\cap E|})-1]$.
\item [(2)] It is easy to observe that $\sum\limits_{i=1}^{i=3}T'_i=|\{(X,Y)~|~ \emptyset\ne X,Y\subseteq[m],X\ne Y,\chi(Y|E)=0\}|$. Hence the result is clear as $Y$ has $(2^m-2^{m-|E|})$ choices (using (1) of Lemma \ref{lem 2.2}) and $X$ has $(2^m-2)$ as $X\ne\emptyset$ and $X\ne Y$.
\end{itemize}
\end{proof}
\begin{theorem}
The linear code $C_L$ over $\Z_2[u]$, as defined in Subsection \ref{subsec 2.2}, is of length $(2^m-2^{|D|})(2^m-2^{|E|})(2^m-2^{|F|})$, size $2^{3m}$ with the Lee weight distribution given in Table \ref{Tab 3.1}.
\end{theorem}
\begin{proof}
The length of $C_L$ is quite straightforward. Now, first we compute the Lee weight distribution of $C_L$ and then we compute the size of $C_L$.
\vskip 2pt
We have $supp(q+r)=supp(q)\oplus supp(r)$ as decsribed in Lemma \ref{lem 2.2}. Let $f_i~1\le i\le57$ be the frequency of $wt_L(c_a)$. Next we divide the proof in the following cases and calculate $f_i$ using Equation \eqref{eq 2}, Lemma \ref{lem 2.2} and Lemma \ref{lem 3.1}.
\begin{itemize}
\item [(1)] $p=q=r=0$, $wt_L(c_a)=0$ and $f_1=1$.
\item [(2)] $p=q=0,r\ne0$, $wt_L(c_a)=|L|-[(2^m-2^{|D|})(-2^{|E|}\chi(supp(r)|E))(2^m-2^{|F|})]$.
\begin{itemize}
\item If $\chi(supp(r)|E)=0$, then $wt_L(c_a)=|L|$ and $f_2=2^m-2^{m-|E|}$.
\item If $\chi(supp(r)|E)=1$, then $wt_L(c_a)=|L|+[(2^m-2^{|D|})(2^{|E|})(2^m-2^{|F|})]$ and $f_3=2^{m-|E|}-1$.
\end{itemize}
\item [(3)] $p=0,q\ne0,r=0$, $wt_L(c_a)=|L|-\frac{1}{2}[(2^m-2^{|D|})(2^m-2^{|E|})(-2^{|F|}\chi(supp(q)|F))]-\frac{1}{2}[(2^m-2^{|D|})(-2^{|E|}\chi(supp(q)|E))(-2^{|F|}\chi(supp(q)|F))]$.
\begin{itemize}
\item If $\chi(supp(q)|E)=0$, $\chi(supp(q)|F)=0$, then $wt_L(c_a)=|L|$ and $f_4=2^m-(2^{m-|E\cup F|})$ $[(2^{|E|-|E\cap F|})+(2^{|F|-|E\cap F|})-1]$.
\item If $\chi(supp(q)|E)=1$, $\chi(supp(q)|F)=0$, then $wt_L(c_a)=|L|$ and $f_5=(2^{m-|E\cup F|})$ $(2^{|F|-|E\cap F|}-1)$.
\item If $\chi(supp(q)|E)=0$, $\chi(supp(q)|F)=1$, then $wt_L(c_a)=|L|+\frac{1}{2}[(2^m-2^{|D|})(2^m-2^{|E|})(2^{|F|})]$ and $f_6=(2^{m-|E\cup F|})(2^{|E|-|E\cap F|}-1)$.
\item If $\chi(supp(q)|E)=1$, $\chi(supp(q)|F)=1$, $wt_L(c_a)=|L|+\frac{1}{2}[(2^m-2^{|D|})(2^m-2^{|E|})(2^{|F|})]-\frac{1}{2}[(2^m-2^{|D|})(2^{|E|})(2^{|F|})]$ and $f_7=2^{m-|E\cup F|}-1$.
\end{itemize}
\item [(4)] $p=0,q\ne0,r\ne0$, $wt_L(c_a)=|L|-\frac{1}{2}[(2^m-2^{|D|})(-2^{|E|}\chi(supp(r)|E))(-2^{|F|}\chi(supp(q)|F))]-\frac{1}{2}[(2^m-2^{|D|})(2^m\delta_{0,(q+r)}-2^{|E|}\chi(supp(q+r)|E))(-2^{|F|}\chi(supp(q)|F))]$.
\begin{itemize}
\item Let $q\ne r$, $wt_L(c_a)=|L|-\frac{1}{2}[(2^m-2^{|D|})(-2^{|E|}\chi(supp(r)|E))(-2^{|F|}\chi(supp(q)|F))]-\frac{1}{2}[(2^m-2^{|D|})(-2^{|E|}\chi(supp(q+r)|E))(-2^{|F|}\chi(supp(q)|F))]$.
\begin{itemize}
\item If $\chi(supp(q)|F)=0$, $\chi(supp(r)|E)=0$, $\chi(supp(q+r)|E)=0$, then $wt_L(c_a)=|L|$ and let $f_8$ be the frequency in this case.
\item If $\chi(supp(q)|F)=0$, $\chi(supp(r)|E)=0$, $\chi(supp(q+r)|E)=1$, then $wt_L(c_a)=|L|$ and let $f_9$ be the frequency in this case.
\item If $\chi(supp(q)|F)=0$, $\chi(supp(r)|E)=1$, $\chi(supp(q+r)|E)=0$, then $wt_L(c_a)=|L|$ and let $f_{10}$ be the frequency in this case.
\item If $\chi(supp(q)|F)=0$, $\chi(supp(r)|E)=1$, $\chi(supp(q+r)|E)=1$, then $wt_L(c_a)=|L|$ and let $f_{11}$ be the frequency in this case. Now, $\sum\limits_{i=8}^{11}f_i=(2^m-2^{m-|F|})(2^m-2)$.
\item If $\chi(supp(q)|F)=1$, $\chi(supp(r)|E)=0$, $\chi(supp(q+r)|E)=0$, then $wt_L(c_a)=|L|$ and $f_{12}=2^m(2^{m-|F|}-1)+2^{m-|E|}(1+2^{m-|E\cup F|}-2^{m+1-|F|})$.
\item If $\chi(supp(q)|F)=1$, $\chi(supp(r)|E)=0$, $\chi(supp(q+r)|E)=1$, then $wt_L(c_a)=|L|-\frac{1}{2}[(2^m-2^{|D|})(2^{|E|})(2^{|F|})]$ and let $f_{13}$ be the frequency in this case.
\item If $\chi(supp(q)|F)=1$, $\chi(supp(r)|E)=1$, $\chi(supp(q+r)|E)=0$, then $wt_L(c_a)=|L|-\frac{1}{2}[(2^m-2^{|D|})(2^{|E|})(2^{|F|})]$ and let $f_{14}$ be the frequency in this case. Now, $f_{13}+f_{14}=2(2^{m-|E|}-1)(2^{m-|F|}-2^{m-|E\cup F|})$.
\item If $\chi(supp(q)|F)=1$, $\chi(supp(r)|E)=1$, $\chi(supp(q+r)|E)=1$, then $wt_L(c_a)=|L|-\frac{1}{2}[(2^m-2^{|D|})(2^{|E|})(2^{|F|})]-\frac{1}{2}[(2^m-2^{|D|})(2^{|E|})(2^{|F|})]$ and $f_{15}=(2^{m-|E|}-2)(2^{m-|E\cup F|}-1)$.
\end{itemize}
\item Let $q=r$, $wt_L(c_a)=|L|-\frac{1}{2}[(2^m-2^{|D|})(-2^{|E|}\chi(supp(r)|E))(-2^{|F|}\chi(supp(q)|F))]-\frac{1}{2}[(2^m-2^{|D|})(2^m-2^{|E|})(-2^{|F|}\chi(supp(q)|F))]$.
\begin{itemize}
\item If $\chi(supp(r)|E)=0$, $\chi(supp(q)|F)=0$, then $wt_L(c_a)=|L|$ and $f_{16}=2^m-(2^{m-|E\cup F|})[(2^{|E|-|E\cap F|})+(2^{|F|-|E\cap F|})-1]$.
\item If $\chi(supp(r)|E)=1$, $\chi(supp(q)|F)=0$, then $wt_L(c_a)=|L|$ and $f_{17}=(2^{m-|E\cup F|})$ $(2^{|F|-|E\cap F|}-1)$.
\item If $\chi(supp(r)|E)=0$, $\chi(supp(q)|F)=1$, then $wt_L(c_a)=|L|+\frac{1}{2}[(2^m-2^{|D|})(2^m-2^{|E|})(2^{|F|})]$ and $f_{18}=(2^{m-|E\cup F|})(2^{|E|-|E\cap F|}-1)$.
\item If $\chi(supp(r)|E)=1$, $\chi(supp(q)|F)=1$, $wt_L(c_a)=|L|-\frac{1}{2}[(2^m-2^{|D|})(2^{|E|})(2^{|F|})]+\frac{1}{2}[(2^m-2^{|D|})(2^m-2^{|E|})(2^{|F|})]$ and $f_{19}=2^{m-|E\cup F|}-1$.
\end{itemize}
\end{itemize}
\item [(5)] $p\ne0,q=r=0$, $wt_L(c_a)=|L|+[(2^{|D|}\chi(supp(p)|D))(2^m-2^{|E|})(2^m-2^{|F|})]$.
\begin{itemize}
\item If $\chi(supp(p)|D)=0$, then $wt_L(c_a)=|L|$ and $f_{20}=2^m-2^{m-|D|}$.
\item If $\chi(supp(p)|D)=1$, then $wt_L(c_a)=|L|+[(2^{|D|})(2^m-2^{|E|})(2^m-2^{|F|})]$ and $f_{21}=2^{m-|D|}-1$.
\end{itemize}
\item [(6)] $p\ne0,q=0,r\ne0$, $wt_L(c_a)=|L|-\frac{1}{2}[(-2^{|D|}\chi(supp(p)|D))(-2^{|E|}\chi(supp(r)|E))(2^m-2^{|F|})]-\frac{1}{2}[(-2^{|D|}\chi(supp(p)|D))(-2^{|E|}\chi(supp(r)|E))(2^m-2^{|F|})]$.
\begin{itemize}
\item If $\chi(supp(p)|D)=0$, $\chi(supp(r)|E)=0$, then $wt_L(c_a)=|L|$ and $f_{22}=(2^m-2^{m-|D|})(2^m-2^{m-|E|})$.
\item If If $\chi(supp(p)|D)=1$, $\chi(supp(r)|E)=0$, then $wt_L(c_a)=|L|$ and $f_{23}=(2^{m-|D|}-1)(2^m-2^{m-|E|})$.
\item If $\chi(supp(p)|D)=0$, $\chi(supp(r)|E)=1$, then $wt_L(c_a)=|L|$ and $f_{24}=(2^m-2^{m-|D|})$ $(2^{m-|E|}-1)$.
\item If $\chi(supp(p)|D)=1$, $\chi(supp(r)|E)=1$, $wt_L(c_a)=|L|-[(2^{|D|})(2^{|E|})(2^m-2^{|F|})]$ and $f_{25}=(2^{m-|D|}-1)(2^{m-|E|}-1)$.
\end{itemize}
\item [(7)] $p\ne0,q\ne0,r=0$, $wt_L(c_a)=|L|-\frac{1}{2}[(-2^{|D|}\chi(supp(p)|D))(2^m-2^{|E|})(-2^{|F|}\chi(supp(q)|F))]-\frac{1}{2}[(-2^{|D|}\chi(supp(p)|D))(-2^{|E|}\chi(supp(q)|E))(-2^{|F|}\chi(supp(q)|F))]$.
\begin{itemize}
\item If $\chi(supp(p)|D)=0$, $\chi(supp(q)|F)=0$, $\chi(supp(q)|E)=0$, then $wt_L(c_a)=|L|$ and let $f_{26}$ be the frequency in this case.
\item If $\chi(supp(p)|D)=0$, $\chi(supp(q)|F)=0$, $\chi(supp(q)|E)=1$, then $wt_L(c_a)=|L|$ and let $f_{27}$ be the frequency in this case.
\item If $\chi(supp(p)|D)=0$, $\chi(supp(q)|F)=1$, $\chi(supp(q)|E)=0$, then $wt_L(c_a)=|L|$ and let $f_{28}$ be the frequency in this case.
\item If $\chi(supp(p)|D)=0$, $\chi(supp(q)|F)=1$, $\chi(supp(q)|E)=1$, then $wt_L(c_a)=|L|$ and let $f_{29}$ be the frequency in this case. Now, $\sum\limits_{i=26}^{29}f_i=(2^m-2^{m-|D|})(2^m-1)$.
\item If $\chi(supp(p)|D)=1$, $\chi(supp(q)|F)=0$, $\chi(supp(q)|E)=0$, then $wt_L(c_a)=|L|$ and $f_{30}=(2^{m-|D|}-1)[2^m-(2^{m-|E\cup F|})\{(2^{|E|-|E\cap F|})+(2^{|F|-|E\cap F|})-1\}]$.
\item If $\chi(supp(p)|D)=1$, $\chi(supp(q)|F)=0$, $\chi(supp(q)|E)=1$, then $wt_L(c_a)=|L|$ and $f_{31}=(2^{m-|D|}-1)[(2^{m-|E\cup F|})(2^{|F|-|E\cap F|}-1)]$.
\item If $\chi(supp(p)|D)=1$, $\chi(supp(q)|F)=1$, $\chi(supp(q)|E)=0$, then $wt_L(c_a)=|L|-\frac{1}{2}[(2^{|D|})(2^m-2^{|E|})(2^{|F|})]$ and $f_{32}=(2^{m-|D|}-1)[(2^{m-|E\cup F|})(2^{|E|-|E\cap F|}-1)]$.
\item If $\chi(supp(p)|D)=1$, $\chi(supp(q)|F)=1$, $\chi(supp(q)|E)=1$, then $wt_L(c_a)=|L|-\frac{1}{2}[(2^{|D|})(2^m-2^{|E|})(2^{|F|})]+\frac{1}{2}[(2^{|D|})(2^{|E|})(2^{|F|})]$ and $f_{33}=(2^{m-|D|}-1)[2^{m-|E\cup F|}-1]$.
\end{itemize}
\item [(8)] $p\ne0,q\ne0,r\ne0$, $wt_L(c_a)=|L|-\frac{1}{2}[(-2^{|D|}\chi(supp(p)|D))(-2^{|E|}\chi(supp(r)|E))(-2^{|F|}\\\chi(supp(q)|F))]-\frac{1}{2}[(-2^{|D|}\chi(supp(p)|D))(2^m\delta_{0,(q+r)}-2^{|E|}\chi(supp(q+r)|E))(-2^{|F|}\chi(supp(q)|F))]$.
\begin{itemize}
\item Let $q\ne r$, $wt_L(c_a)=|L|-\frac{1}{2}[(-2^{|D|}\chi(supp(p)|D))(-2^{|E|}\chi(supp(r)|E))\\(-2^{|F|}\chi(supp(q)|F))]-\frac{1}{2}[(-2^{|D|}\chi(supp(p)|D))(-2^{|E|}\chi(supp(q+r)|E))(-2^{|F|}\chi(supp(q)|F))]$.
\begin{itemize}
\item If $\chi(supp(p)|D)=0$, $\chi(supp(q)|F)=0$, $\chi(supp(r)|E)=0$,
$\chi(supp(q+r)|E)=0$, then $wt_L(c_a)=|L|$ and let $f_{34}$ be the frequency in this case.
\item If $\chi(supp(p)|D)=0$, $\chi(supp(q)|F)=0$, $\chi(supp(r)|E)=0$,
$\chi(supp(q+r)|E)=1$, then $wt_L(c_a)=|L|$ and let $f_{35}$ be the frequency in this case.
\item If $\chi(supp(p)|D)=0$, $\chi(supp(q)|F)=0$, $\chi(supp(r)|E)=1$,
$\chi(supp(q+r)|E)=0$, then $wt_L(c_a)=|L|$ and let $f_{36}$ be the frequency in this case.
\item If $\chi(supp(p)|D)=0$, $\chi(supp(q)|F)=0$, $\chi(supp(r)|E)=1$,
$\chi(supp(q+r)|E)=1$, then $wt_L(c_a)=|L|$ and let $f_{37}$ be the frequency in this case.
\item If $\chi(supp(p)|D)=0$, $\chi(supp(q)|F)=1$, $\chi(supp(r)|E)=0$,
$\chi(supp(q+r)|E)=0$, then $wt_L(c_a)=|L|$ and let $f_{38}$ be the frequency in this case.
\item If $\chi(supp(p)|D)=0$, $\chi(supp(q)|F)=1$, $\chi(supp(r)|E)=0$,
$\chi(supp(q+r)|E)=1$, then $wt_L(c_a)=|L|$ and let $f_{39}$ be the frequency in this case.
\item If $\chi(supp(p)|D)=0$, $\chi(supp(q)|F)=1$, $\chi(supp(r)|E)=1$,
$\chi(supp(q+r)|E)=0$, then $wt_L(c_a)=|L|$ and let $f_{40}$ be the frequency in this case.
\item If $\chi(supp(p)|D)=0$, $\chi(supp(q)|F)=1$, $\chi(supp(r)|E)=1$,
$\chi(supp(q+r)|E)=1$, then $wt_L(c_a)=|L|$ and let $f_{41}$ be the frequency in this case.
Now, $\sum\limits_{i=34}^{41}f_i=(2^m-2^{m-|D|})(2^m-1)(2^m-2)$.
\item If $\chi(supp(p)|D)=1$, $\chi(supp(q)|F)=0$, $\chi(supp(r)|E)=0$,
$\chi(supp(q+r)|E)=0$, then $wt_L(c_a)=|L|$ and let $f_{42}$ be the frequency in this case.
\item If $\chi(supp(p)|D)=1$, $\chi(supp(q)|F)=0$, $\chi(supp(r)|E)=0$,
$\chi(supp(q+r)|E)=1$, then $wt_L(c_a)=|L|$ and let $f_{43}$ be the frequency in this case.
\item If $\chi(supp(p)|D)=1$, $\chi(supp(q)|F)=0$, $\chi(supp(r)|E)=1$,
$\chi(supp(q+r)|E)=0$, then $wt_L(c_a)=|L|$ and let $f_{44}$ be the frequency in this case.
\item If $\chi(supp(p)|D)=1$, $\chi(supp(q)|F)=0$, $\chi(supp(r)|E)=1$,
$\chi(supp(q+r)|E)=1$, then $wt_L(c_a)=|L|$ and let $f_{45}$ be the frequency in this case. Now, $\sum\limits_{i=42}^{45}f_i=(2^{m-|D|}-1)[(2^m-2^{m-|F|})(2^m-2)]$.
\item If $\chi(supp(p)|D)=1$, $\chi(supp(q)|F)=1$, $\chi(supp(r)|E)=0$,
$\chi(supp(q+r)|E)=0$, then $wt_L(c_a)=|L|$ and $f_{46}=(2^{m-|D|}-1)[2^m(2^{m-|F|}-1)+2^{m-|E|}(1+2^{m-|E\cup F|}-2^{m+1-|F|})]$.
\item If $\chi(supp(p)|D)=1$, $\chi(supp(q)|F)=1$, $\chi(supp(r)|E)=0$,
$\chi(supp(q+r)|E)=1$, then $wt_L(c_a)=|L|+\frac{1}{2}[(2^{|D|})(2^{|E|})(2^{|F|})]$ and let $f_{47}$ be the frequency in this case.
\item If $\chi(supp(p)|D)=1$, $\chi(supp(q)|F)=1$, $\chi(supp(r)|E)=1$,
$\chi(supp(q+r)|E)=0$, then $wt_L(c_a)=|L|+\frac{1}{2}[(2^{|D|})(2^{|E|})(2^{|F|})]$ and let $f_{48}$ be the frequency in this case. Now $f_{47}+f_{48}=(2^{m-|D|}-1)[2(2^{m-|E|}-1)(2^{m-|F|}-2^{m-|E\cup F|})]$.
\item If $\chi(supp(p)|D)=1$, $\chi(supp(q)|F)=1$, $\chi(supp(r)|E)=1$,
$\chi(supp(q+r)|E)=1$, then $wt_L(c_a)=|L|+[(2^{|D|})(2^{|E|})(2^{|F|})]$ and $f_{49}=(2^{m-|D|}-1)[(2^{m-|E|}-2)(2^{m-|E\cup F|}-1)]$.
\end{itemize}
\item Let $q=r$, $wt_L(c_a)=|L|-\frac{1}{2}[(-2^{|D|}\chi(supp(p)|D))(-2^{|E|}\chi(supp(r)|E))\\(-2^{|F|}\chi(supp(q)|F))]-\frac{1}{2}[(-2^{|D|}\chi(supp(p)|D))(2^m-2^{|E|})(-2^{|F|}\chi(supp(q)|F))]$.
\begin{itemize}
\item If $\chi(supp(p)|D)=0$, $\chi(supp(q)|F)=0$, $\chi(supp(r)|E)=0$, then $wt_L(c_a)=|L|$ and let $f_{50}$ be the frequency in this case.
\item If $\chi(supp(p)|D)=0$, $\chi(supp(q)|F)=0$, $\chi(supp(r)|E)=1$, then $wt_L(c_a)=|L|$ and let $f_{51}$ be the frequency in this case.
\item If $\chi(supp(p)|D)=0$, $\chi(supp(q)|F)=1$, $\chi(supp(r)|E)=0$, then $wt_L(c_a)=|L|$ and let $f_{52}$ be the frequency in this case.
\item If $\chi(supp(p)|D)=0$, $\chi(supp(q)|F)=1$, $\chi(supp(r)|E)=1$, then $wt_L(c_a)=|L|$ and let $f_{53}$ be the frequency in this case. Now, $\sum\limits_{i=50}^{53}f_i=(2^m-2^{m-|D|})(2^m-1)$.
\item If $\chi(supp(p)|D)=1$, $\chi(supp(q)|F)=0$, $\chi(supp(r)|E)=0$, then $wt_L(c_a)=|L|$ and $f_{54}=(2^{m-|D|}-1)[2^m-(2^{m-|E\cup F|})\{(2^{|E|-|E\cap F|})+(2^{|F|-|E\cap F|})-1\}]$.
\item If $\chi(supp(p)|D)=1$, $\chi(supp(q)|F)=0$, $\chi(supp(r)|E)=1$, then $wt_L(c_a)=|L|$ and $f_{55}=(2^{m-|D|}-1)[(2^{m-|E\cup F|})(2^{|F|-|E\cap F|}-1)]$.
\item If $\chi(supp(p)|D)=1$, $\chi(supp(q)|F)=1$, $\chi(supp(r)|E)=0$, then $wt_L(c_a)=|L|-\frac{1}{2}[(2^{|D|})(2^m-2^{|E|})(2^{|F|})]$ and $f_{56}=(2^{m-|D|}-1)[(2^{m-|E\cup F|})(2^{|E|-|E\cap F|}-1)]$.
\item If $\chi(supp(p)|D)=1$, $\chi(supp(q)|F)=1$, $\chi(supp(r)|E)=1$, then $wt_L(c_a)=|L|+\frac{1}{2}[(2^{|D|})(2^{|E|})(2^{|F|})]-\frac{1}{2}[(2^{|D|})(2^m-2^{|E|})(2^{|F|})]$ and $f_{57}=(2^{m-|D|}-1)[2^{m-|E\cup F|}-1]$.
\end{itemize}
\end{itemize}
\end{itemize}
To compute the size of $C_L$, first we observe from the above computation that $((a\cdot l))_{l\in L}=0$ if and only if $a=0$. Now to show $C_L$ has full size, we have to prove that $((a_1\cdot l))_{l\in L}=((a_2\cdot l))_{l\in L}$ if and only if $a_1=a_2$ or equivalently it will be enough to show that $(a_1\cdot l)+(a_2\cdot l)=((a_1+a_2)\cdot l)$. Let $a_1=(p_1,q_1+ur_1)$ and $a_2=(p_2,q_2+ur_2)$. Then $(a_1\cdot l)=(q_1 t_2+u(p_1 t_1+q_1 t_3+r_1 t_2))$ and $(a_2\cdot l)=(q_2 t_2+u(p_2 t_1+q_2 t_3+r_2 t_2))$ and $((a_1+a_2)\cdot l)=((q_1+q_2) t_2+u((p_1+p_2) t_1+(q_1+q_2) t_3+(r_1+r_2) t_2))=(a_1\cdot l)+(a_2\cdot l)$ (as $\Z_2[u]$ is a ring under component-wise addition).
\end{proof}
\begin{table}[h]
\begin{center}
\caption{Lee weight distribution of $C_L$} \label{Tab 3.1}
\begin{tabular}{|c|c|c|}
\hline
Lee weight&frequency\\\hline
$0$&$f_1=1$\\\hline
$|L|$&$\sum\limits_{i=2}^{i=57}f_i,~i\ne3,6,7,13,14,15,18,19,$\\
&$21,25,32,33,47,48,49,56,57$\\\hline
$|L|+(2^m-2^{|D|})(2^{|E|})(2^m-2^{|F|})$&$f_3$\\\hline
$|L|+\frac{1}{2}[(2^m-2^{|D|})(2^m-2^{|E|})(2^{|F|})]$&$f_6+f_{18}$\\\hline
$|L|+\frac{1}{2}[(2^m-2^{|D|})(2^m-2^{|E|})(2^{|F|})-(2^m-2^{|D|})(2^{|E|})(2^{|F|})]$&$f_7+f_{19}$\\\hline
$|L|-\frac{1}{2}[(2^m-2^{|D|})(2^{|E|})(2^{|F|})]$&$f_{13}+f_{14}$\\\hline
$|L|-(2^m-2^{|D|})(2^{|E|})(2^{|F|})$&$f_{15}$\\\hline
$|L|+(2^{|D|})(2^m-2^{|E|})(2^m-2^{|F|})$&$f_{21}$\\\hline
$|L|-(2^{|D|})(2^{|E|})(2^m-2^{|F|})$&$f_{25}$\\\hline
$|L|-\frac{1}{2}[(2^{|D|})(2^m-2^{|E|})(2^{|F|})]$&$f_{32}+f_{56}$\\\hline
$|L|-\frac{1}{2}[(2^{|D|})(2^m-2^{|E|})(2^{|F|})-(2^{|D|})(2^{|E|})(2^{|F|})]$&$f_{33}+f_{57}$\\\hline
$|L|+\frac{1}{2}[(2^{|D|})(2^{|E|})(2^{|F|})]$&$f_{47}+f_{48}$\\\hline
$|L|+(2^{|D|})(2^{|E|})(2^{|F|})$&$f_{49}$\\\hline
\end{tabular}
\end{center}
\end{table}
\section{Minimal codes via Gray map}\label{section 4}
First of all we observe that: Since $\Phi$ is a linear map and $C_L$ is a linear code over $\Z_2[u]$, $\Phi(C_L)$ is a linear code over $\Z_2$. Next we will use a characterization for linear codes regarding minimality.
\vskip 5pt
We call a linear code $C$ over $\F_p$ is minimal if all of its nonzero codewords are minimal. Further a codeword $c\in C$ is said to be minimal if $supp(c')\subseteq supp(c)$ if and only if $c'=ac$, where $a(\ne0)\in\F_p$. In general, it is hard to determine whether a linear code is minimal or not but the famous Ashikhmin-Barg condition (\cite{barg1998minimal}) is a sufficient condition for a linear code to be minimal and it is relatively easy to check.
\begin{lemma}(Ashikhmin-Barg condition for p=2) \cite{barg1998minimal}\label{lem 4.1}
Let $C$ be a linear code over $\Z_2$. Denote $w_0$ as the minimum(nonzero) and $w_{\infty}$ as the maximum Hamming weights. If $\frac{w_0}{w_{\infty}}>\frac{1}{2}$, then $C$ is minimal.
\end{lemma}
\begin{theorem}\label{thm 4.2}
Suppose $|D|=|E|=|F|=n(<m)$. Then the code $\Phi(C_L)$ is minimal if $n\le m-2$.
\end{theorem}
\begin{proof}
Suppose $|D|=|E|=|F|=n(<m)$. Then $|L|=(2^m-2^n)^3$ and $(2^m-2^n)\ge2^n,\forall m,n$. Now we divide this proof in two cases.\\
\textit{Case I}: Let $n<m-1$. Then we observe that $(2^m-2^n)>2^n$. Now with these assumptions and using Table \ref{Tab 3.1}, we have $w_0=|L|-[(2^m-2^n)(2^n)^2]=(2^m-2^n)(2^m)(2^m-2^{n+1})$ and $w_\infty=|L|+[(2^m-2^n)^2(2^n)]=(2^m-2^n)^2(2^m)$. So, $\frac{w_0}{w_\infty}=\frac{2^m-2^{n+1}}{2^m-2^n}$ (as $m\ne0$ and $n\ne m$). Now to satisfy Lemma \ref{lem 4.1}, it must be $2^{m+1}-2^{n+2}>2^m-2^n$ i.e. $2^m>2^{n+2}-2^n$ and this will be true if and only if $n+2\le m$, which is also our assumption in this case.\\
\textit{Case II}: Let $n=m-1$. Then we observe that $|L|=(2^n)^3$ and $(2^m-2^n)=2^n$. Now with these assumptions and using Table \ref{Tab 3.1}, we have $w_0=|L|-\frac{1}{2}(2^n)^3=\frac{1}{2}|L|$ and $w_\infty=|L|+(2^n)^3=2|L|$. So, $\frac{w_0}{w_\infty}=\frac{1}{4}$ (as $n\ne0$). Since Lemma \ref{lem 4.1} is only a sufficient condition, we cannot conclude anything in this case. Hence the theorem.
\end{proof}
\begin{example}
Suppose $m=4$ in Theorem \ref{thm 4.2}. Then
\begin{itemize}
\item [(1)] If $n=3$, we can not conclude anything about the minimality of the binary linear four-Lee weight code $\Phi(C_L)$ having parameters $[1024,12,256]$.
\item [(2)] If $n=2$, then $\Phi(C_L)$ is a minimal binary linear nine-Lee weight code with parameters $[3456,12,1536]$.
\item [(3)] If $n=1$, then $\Phi(C_L)$ is a minimal binary linear nine-Lee weight code with parameters $[5488,12,2688]$.
\end{itemize}
\end{example}
\section{Conclusion}\label{section5}
In \cite{wu2021mixedtrace}, the authors constructed additive few-Lee weight codes over the mixed alphabet $\Z_p\Z_p[u],u^2=0$ with optimal Gray images by using suitable defining sets. In \cite{wang2021mixeddown}, the authors used the mixed alphabet ring $\Z_p\Z_p[u]$ with $u^2=u$ and $p>2$ to study some few-Lee weight codes by employing down-sets. In this paper, we choose the mixed alphabet ring $\Z_2\Z_2[u]$ to construct a class of few-Lee weight linear codes over $\Z_2[u]$ with $u^2=0$ using simplicial complexes generated by a single element and also have an infinite family of minimal codes over $\Z_2$ as Gray images. This shows that like in the case of single alphabet rings, we may employ all three popular means of constructing few-Lee weight codes including simplicial complexes when mixed alphabet rings are involved. | 112,466 |
Once he visits the branch office, appreciates Giri’s sincerity, gives him the promotion and he is transferred to head office. Yesudas in India and abroad. Pandian’s adamance to marry Seetha causes a strange turn of events. Music released on Audio Company. Captain Nagarjuna Nagarjuna, Khushboo. While working with the troupe, he penned his first composition, Ilaiyaraaja specialized in classical guitar and had taken a course in it at the Trinity College of Music, London. Hema Hemeelu is a Telugu Action film, produced by S. Films directed by Relangi Narasimha Rao.
The film was remade in Hindi as Takkar under the same banner. Chikkadu Dorakadu Rajendra Prasad, Rajani. Chattu Nair was musically inclined and all his children grew with an ability to sing. Dream Rajendra Prasad, Pavani Reddy. Allarodu Rajendra Prasad, Surabhi. The film was remade as the Hindi movie Amiri Garibi Kothapelli Kuthuru Chandra Mohan, Vijayashanthi. And after almost 25 years, he received his second Nandi Award for Best Actor for Aa Naluguru, additionally, he has also received Honorary doctorate from Andhra University.
Non-consanguineous arranged marriage is one where the bride and groom do not share a grandparent or near ancestor and this type of arranged marriages is common in Hindu and Buddhist South Asia, Southeast Asia, East Asia and Christian Latin America and sub-Saharan Africa.
Prasad at the audio launch event of Quick Gun Murugunsporting his look from the film. The film was remade in Hindi as Takkar under the same banner. Coincidentally, those in the wrong house are also expecting a lad to see their daughter on the same time and the same day. With the suggestion from N. Pelllam came with the group to Chennai and acted in the film and he even sang a song for the film under the baton of G. His nakh predeceased him in Navayugam Rajendra Prasad, Vinod Kumar.
In a career spanned over four decades, he worked in over feature films in Malayalam, Tamil, Telugu.
Raakshasudu Chiranjeevi, Radha, Suhasini, Sumalatha. Naku pellam kavali nutan prasad chandramohan rajendra prasad comedy scene.
After being hired as the assistant to Kannada film composer G. Malaysia Vasudevans parents were from Palakkad, in the early years of the last century, Chattu Nair of Ottappalam, Ammalu of Polpulli along with their respective families migrated to Malaysia in search of livelihood. Desamlo Dongalupaddaru Suman, Vijayshanti. Comexy Suman, Bhanupriya. Makutamleni Maharaju is a Telugu Action drama pellsm, produced by B.
Then she went on recording some songs under A. R in My Movie Minutes. Nevertheless, he takes a liking to the girl, Seetha and decides to marry her. Kobbari Bondam Rajendra Prasad, Nirosha.
The film marks the debut of famous Telugu actor Kota Shankar Rao. Brundavanam Rajendra Prasad, Satyanarayana Kaikala. At the age of 14, he joined a musical troupe headed by his elder brother, Pavalar Varadarajan. The film was recorded as a Super Hit at the box office. Chithra, or simply Chithra, is an Indian playback singer from Kerala.
Appuchesi Pappukudu Rajendra Prasad, Madhumita. Chithra received her training in Carnatic music from Dr. Beena and her younger brother Mano is also a playback singer.
Jeevana Ganga Rajendra Prasad, Rajani. Ladies Doctor Rajendra Prasad, Keerthana. Ammaye Navvithe Rajendra Prasad, Bhavana. Parugo Parugu Rajendra Prasad, Sruthi. The film was a remake of the Tamil film Akka Thangai. This chance was made possible by his friendship with the film's producer; after that he joined the Pavalar Brothers troupe, which was run by Ilaiyaraaja and his brothers. His family had a friendship with N. Sampoorna Premayanam Sobhan Babu, Jayaprada.
The film was remade as the Hindi movie Amiri Garibi. He was also honoured to walk the green carpet at the IIFA film festival, marking his performance in the Bollywood film Quick Gun Murugun.
Mahendran was searching for a cinematographer to shoot his directorial debut Mullum Malarum, Babu who was busy with his Malayalam films at the time suggested Ashok Kumar to Mahendran.
The film was recorded as a Blockbuster at the box office. However, following differences between them, they started living separately from and were granted divorce on 23 April by Chennai Additional Family Court. Revathi said in an interview that they would remain good friends even after the divorce.
Yesterday morning I decided to take a short drive before work to see what I could find to photograph. I drove out to a rural area and caught the sun coming up over a pasture.
It was a little foggy, but the shot turned out pretty well.
And this interesting creation made out of hay bales. I am guessing it is a lion but am not really sure. It was a fun find!
Have a great Thursday!
xoxo,
SouthernSass
Gorgeous sunrise. Love the rolling fields with the geese. The hay bales are so cute, some people are so creative.
Those are some great shots! The sunset is beautiful. I look at geese all day long and the haybale lion is just too cute.
You really found some wonderful scenes.
Have a great day.
Pam
Wonderful shots! The hay bale bear or whatever creature it's supposed to be is so adorable! Lucky you!
What a gorgeous drive! I loved all the photographs, especially the one with the geese.
What a beautiful sunrise. Love the whimsical hay bale!
Thanks everyone!
The sky is gorgeous. A beautiful entry.
Love the bales. They sure made me smile.
Happy Friday!
Great post! LOVE that sunrise - and the haybale animal is sooooo cute!!!
Wonderful photos!
Sun rises are always something special to witness, I like seeing them on my drive home from work (as I work midnights).
We have lots of farms around here (there's even a corn maze a couple towns over) — one of the local farms has a giant pig made out of hay bales — it's quite amusing.
: ) | 354,017 |
TITLE: What classes of mathematical programs can be solved exactly or approximately, in polynomial time?
QUESTION [31 upvotes]: I am rather confused by the continuous optimization literature and TCS literature about which types of (continuous) mathematical programs (MPs) can be solved efficiently, and which cannot. The continuous optimization community seems to claim that all convex programs can be solved efficiently, but I believe their definition of "efficient" does not coincide with the TCS definition.
This question has been bothering me a lot in the last few years, and I cannot seem to find a clear answer to it. I hope you can help me settle this once and for all: which classes of MPs can be solved exactly in polynomial time, and by which means; and what is known about approximating the optimal solution of MPs that we cannot solve exactly in polynomial time?
Below, I give an incomplete answer to this question that is also possibly incorrect at some places, so I hope you could verify and correct me at the points where I'm wrong. It also states some questions that I cannot answer.
We all know that linear programming can be solved exactly in polynomial time, by running the ellipsoid method or an interior point method, and subsequently running some rounding procedure. Linear programming can even be solved in time polynomial in the number of variables when facing a family of LPs with any super large number of linear constraints, as long as one can provide a "separation oracle" for it: an algorithm that, given a point, either determines that the point is feasible or outputs a hyperplane that separates the point from the polyhedron of feasible points. Similarly, linear programming can be solved in time polynomial in the number of constraints when facing a family of LPs with any super large number of variables, if one provides a separation oracle for the duals of these LPs.
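As an aside, to make the "separation oracle" notion concrete: the sketch below implements the textbook central-cut ellipsoid method for the feasibility version of an LP, using only numpy. The constraint data, the starting radius and all the helper names are invented for this illustration; a production solver would need careful stopping criteria and precision analysis.

```python
import numpy as np

def ellipsoid_feasibility(oracle, n, radius=10.0, max_iters=1000):
    """Locate a point of a convex set given only a separation oracle.

    oracle(x) must return None when x is feasible, and otherwise a
    vector g defining a cut: every feasible y satisfies g @ y <= g @ x.
    The search starts from the ball of the given radius around 0.
    """
    x = np.zeros(n)
    P = (radius ** 2) * np.eye(n)      # ellipsoid {y : (y-x)' P^-1 (y-x) <= 1}
    for _ in range(max_iters):
        g = oracle(x)
        if g is None:
            return x                   # feasible point found
        gn = g / np.sqrt(g @ P @ g)    # normalize the cut in the P-metric
        x = x - (P @ gn) / (n + 1)     # central-cut update of the center
        P = (n * n / (n * n - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(P @ gn, P @ gn))
    return None                        # gave up

# Toy instance: feasibility of {x : A x <= b}.  The oracle only ever
# reports one violated inequality, never the whole system.
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0], [-1.0, -1.0]])
b = np.array([1.0, 0.0, 0.0, -0.9])

def oracle(x):
    violations = A @ x - b
    worst = int(np.argmax(violations))
    return A[worst] if violations[worst] > 0 else None

x = ellipsoid_feasibility(oracle, 2)
assert x is not None and np.all(A @ x <= b + 1e-9)
```

The point is that the method never sees the whole constraint system at once; the oracle only reports a single violated inequality per query, which is exactly why this approach tolerates families of LPs with a huge number of constraints.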
The ellipsoid method is also able to solve quadratic programs in polynomial time, in case the matrix in the objective function is positive (semi?)definite. I suspect that, by using the separation oracle trick, we can in some cases also do this if we are dealing with an incredible number of constraints. Is that true?
Lately semidefinite programming (SDP) has gained a lot of popularity in the TCS community. One can solve SDPs up to arbitrary precision by using interior point methods, or the ellipsoid method. I think SDPs cannot be solved exactly, due to the problem that square roots cannot be computed exactly. (?) Would it then be correct if I say there is an FPTAS for SDP? I have not seen that stated anywhere, so that's probably not right. But why?
We can solve LPs exactly and SDPs up to arbitrary precision. What about other classes of conic programs? Can we solve second-order cone programs up to arbitrary precision, using the ellipsoid method? I don't know.
On which classes of MPs can we use the ellipsoid method? What properties does such an MP need to satisfy such that an answer can be given up to arbitrary precision, and what additional properties do we need in order to be able to obtain an exact solution in polynomial time? Same questions for interior point methods.
Oh, and finally, what is it that causes continuous optimizers to say that convex programs can be solved efficiently? Is it true that an arbitrary-precision answer to a convex program can be found in polynomial time? I believe not, so in what aspects does their definition of "efficient" differ from ours?
Any contribution is appreciated! Thanks in advance.
REPLY [5 votes]: I don't know if all convex problems are in P, but I can answer a related question: nonconvex optimization is NP-hard. See "Quadratic programming with one negative eigenvalue is NP-hard". | 36,715 |
New Cake Decorating Ideas. Decorating cakes is a good hobby to learn, enjoy and unleash your creativity with, and it is also a great skill that can make good money for you. Whether they are birthday cakes, wedding cakes, or cakes for your kids, they all demand a unique and appealing decoration to make the celebration complete. However, you don't always have to rely on the chefs; you can actually learn some easy cake decorating ideas yourself.
New Cake Decorating Ideas inside 7 Easy Cake Decorating Trends For Beginners | Baking | Cake Photo source from i.pinimg.com
There are many ways you can decorate a cake, but before you begin, it is important to learn a few things about cake decoration. Be sure you know what kind of frosting your cake will have, and make sure it suits the temperature of the place where you will be bringing the cake. If you are going to decorate a wedding cake, or any cake for an outdoor celebration on a hot summer day, you may want to stay away from whipped cream for your frosting, because it may melt on exposure to the sunlight. Of course, you would not want that to happen to your creation.
New Cake Decorating Ideas in Top 20 Easy Birthday Cake Decorating Ideas – Oddly Satisfying Cake Videos Cakes Style 2017 Photo source from i.ytimg.com
One of the easy cake decorating ideas you can use is to experiment with different frostings for your cake. If you want beautifully made cakes for weddings, go ahead and coat your wedding cake with fondant icing and decorate it with buttercream or royal icing. You can also add edible real flowers, 3D figures that are likewise made of icing, lace, butterflies and other structures that can enhance its design.
How to deploy ADF Application on Oracle WebLogic 10.3
By raghu yadav on Nov 11, 2008
Referring.
Posted by Mucahid on December 29, 2008 at 06:42 AM PST # | 407,617 |
REVEALED: Why Napoli's Koulibaly snubbed Chelsea
Kalidou Koulibaly snubbed the chance to play at Chelsea because of his ambitions to lift the Serie A trophy with Napoli, according to his agent, Bruno Satin.
With former captain, John Terry joining Aston Villa, the English Premier League champions were keen on reinforcing their backline ahead of the new season.
Instead, Antonio Conte completed a £29 million deal for Antonio Rudiger from Roma when a deal for the 26-year-old failed to materialise. Koulibaly's agent has now revealed that the conviction that a Maurizio Sarri-led Napoli stand a better chance of lifting the Scudetto was the reason the Senegal star stayed at the Stadio San Paolo.
"The Scudetto pact?" Satin told Radio Kiss Kiss Napoli.
"Kalidou stayed because he has a strong contract and he knows that with these teammates and this staff they can have important ambitions and try to win the Scudetto." | 48,738 |
A lost generation of working women: How to bring them back
Despite the inroads women have made into management positions in recent years, the majority of boards of directors are still dominated by men. Women comprise 28% of the boards of directors for S&P 500 companies and minority women make up just 10% of those seats. Since March 2020, nearly 1.8 million women have dropped out of the labor force and a survey of 1,508 women found that 69% expect to stay at home instead of returning to work. It’s an issue that Christine Tao, CEO and co-founder of Sounding Board, an AI-powered coaching and mentoring solution, will discuss during a panel titled “Improving the Women’s Leadership Pipeline” at the Women in HR Tech Summit next week at the HR Tech Conference in Las Vegas.
HRE spoke with Tao recently to discuss the pandemic’s impact on women who are balancing responsibilities at work and at home, how technology can help and why coaching has a role in advancing women leaders in the hybrid office. Here’s a lightly edited version of the conversation.
HRE: Women left the workforce in record numbers during the pandemic. How can HR leaders help them return to their former roles or to even better positions? Do we have to wait out the pandemic before work/life gets back to normal?
Christine Tao of Sounding Board
Tao: No. At Sounding Board, we put out a white paper entitled Beyond Burnout about the [pandemic’s] impact on women business leaders. The reason that we called it Beyond Burnout was [to emphasize] just how urgent the issue is, and behind that was a belief that there needs to be very intentional and immediate action on a systemic level to address [these issues]. Otherwise, there is going to be a longstanding impact on the number of women in the workforce.
All of the progress that we had made in the last 10 years has been negatively impacted. Deloitte put out a survey that said more than 53% of women were not only responsible for work but were also homeschooling their children. More than 50% of women said they were feeling overwhelmed or that they had to be constantly available. It shows the importance of the issue.
Related: How the pandemic is affecting women’s progress to pay equity
HRE: And Gen X female workers are often caring for elderly parents.
Tao: It’s like a sandwich, right? You’ve got children and older folks who were sick and then you have companies where there was [growth during] the pandemic. Some companies benefited greatly and went into scale-up mode, which still increases demands at work. Other companies were negatively impacted by the pandemic. Then you’ve also got the stress of keeping your job, staying employed and all of that. We know that there was a huge impact of the pandemic across the workforce but for women, it was a disproportionate impact.
HRE: How do we get them into their rightful roles?
Tao: At Sounding Board, we have a solution that is aimed at scaling leadership coaching. I think of it as investing in upskilling managers and leaders to be more effective in the current environment.
Companies themselves have to be intentional about bringing women back into the workforce. This means thinking about how you bring that intentionality into your recruiting and hiring, and then once you get them there, making sure that you have flexibility around your policies and how you allow people to come to work in a hybrid environment. [This includes] company policies to give them the ability to balance these different things that they’re juggling. Then, how do you help them further develop the skills that they need to be successful?
HRE: You mentioned upskilling and reskilling. What specific skills do you mean?
Tao: There’s a broad range. During the pandemic, you saw a huge increase in investment in the growth of companies that were offering health and wellness benefits. How do you address what we call individual needs around being able to make sure your nutrition, sleep, wellness, and mental and financial health are all taken care of? The second set is those skills that they will need to be more effective, functional and productive in the current environment. We talk a lot about adaptability, resilience and communication because [those are among] the most critical challenges in being able to keep people aligned in this work environment. These are what I call more functional or tactical skills, such as time management, prioritization and being able to adapt and juggle many things.
HRE: Can technology play a role?
Tao: Absolutely. Our mission is to help companies develop their most impactful leaders. My co-founder, Lori Mazen, and I wanted to see a model that would allow for investment earlier in people’s career paths to help them move up the leadership ladder, and technology has been a core part of that. We use it to deliver coaching virtually because companies are able to scale that more broadly. They’re able to offer it to leaders that they wouldn’t have been able to do before because the cost was prohibitive. The other simple part is just that we’re all at home. So, without technology, how else are you going to be able to access the learning opportunities? We’re seeing a lot of increases in digital experience that allow for connection to others in a remote environment as a way to develop and connect their employees.
HRE: How can employees network and grow their careers in a remote office? Can Zoom ever replace the office water cooler?
Tao: We have a fundamental belief that leaders have to be very intentional about the culture that they’re creating and how they generate and encourage these moments of interaction that maybe would have happened more naturally if they were in person. Now, you have to create the space for them and model that very visibly through your leaders. One thing we saw during the pandemic is that the investment in manager enablement increased because when you’re in your home now, you don’t have the physical location, the space and all of these other things that contributed to your employee experience. It becomes a smaller circle. It’s their team and specifically their manager that is now driving a large percentage of understanding and experience of the company.
Related: Bersin: These are the 2 disruptions reshaping coaching
This led to a lot of companies realizing that [they have] to get back to basics. Managers have to be much more skilled. They have had to be good managers [in the pandemic and] understand how to communicate, prioritize, adapt and empathize with their team because that is going to be the thing that keeps their employees engaged. | 402,786 |
More Hot Leads – SEO Winnipeg | A Digital Marketing Company That Cares About Getting Your Business Results! & Internet Marketing Services | Search Engine Marketing Experts in MB
A good online marketing company should be able to rank itself. Here is an example, and another, and some rankings we have achieved for our Winnipeg clients here and here.
Check us out on Google Plus. Follow us on Facebook and Twitter. Check out our reviews on Google+
Phone: 204-818-0877
Walmart, a chain of discount stores started by Sam Walton in 1962, is the nation’s largest retailer. The company uses competitive prices and marketing to squeeze out retailers across the country. Consumers shop at Walmart because of their prices, selection, and locations. Unfortunately, with the high traffic of customers at Walmart’s, there are thousands of people each year injured in Walmart stores and/or by-products sold at Walmart.
Were you assaulted in a parking lot or on the premises at a Walmart? You may have legal rights to compensation under premises liability law.
There are situations where a property owner may be held legally responsible for an attack or assault that occurs on his or her property. Business property owners and managers have a duty to safeguard the well-being of anyone visiting or doing business there. When businesses such as Walmart fail to protect people from any type of assault, the experienced California Walmart accident attorneys at the Law Office of Howard Alan Kitay are prepared to help victims hold them accountable. We represent Walmart victims of assault, sexual assault and other acts of physical violence.
The potential complexity of a Walmart assault case makes it all the more important to involve an experienced Walmart premises liability attorney as soon as possible. You need a legal professional who understands the ins and outs of the law regarding property owner negligence in order to prove that your assault or attack should be grounds for a civil lawsuit. You will need proper legal counsel if you are to have the best opportunity of recovering full financial compensation for your injuries.
Our experienced assault injury attorneys understand that victims struggle with long-term challenges in dealing with an assault, particularly victims of sexual assault who may never have told anyone out of fear or guilt. Our firm helps its clients aggressively pursue justice and compensation for the physical, emotional and financial consequences of an assault or sexual assault on Walmart premises.
Often assaults and attacks result in injuries such as soft tissue damage, broken bones, traumatic brain injury, damage to internal organs, and knife or gunshot wounds. The injuries can require expensive prolonged hospitalization and long-term rehabilitation. An injured victim may be unable to work and thus experience stress and financial hardship from a loss of income.
At the Law Office of Howard Alan Kitay, our experienced Walmart accident attorneys have over two decades of extensive litigation experience, and top personal injury attorney Kitay has the skills needed to make your case a success. Contact us today at (877) 442-0542 to set up a free no-hassle, no-fee consultation.
DISCLAIMER: The publication of this page does not insure that The Law Offices of Howard Kitay offers any legal representation for cases of this type. Any information on this page is for knowledge and general informational purposes.
Free Case Evaluation | 44,990 |
Is it a Good Time to Invest in Real Estate in India? (Video)
It may now be a great time to invest in India, finally. Previously, there had been significant hurdles for foreign investors in India. However, according to Anuj Puri, Country Chairman and Head of JLL India, the election of a new government, the first in 30 years, bodes very well for foreign investing, particularly in commercial real estate.
With a great deal of excitement, Anuj tells me that, with an overwhelming public mandate, a new government was swept into power in India, headed by Narendra Modi, as Prime Minister. The new Prime Minister ran on a promise to end corruption and improve the economy.
Formerly the Chief Minister of the State of Gujarat from 2001 – 2014, Modi led Gujarat with strong agricultural policies and an emphasis on industrialization with very business and investment friendly policies. The hope is that he will continue the same policies for the whole of India.
Anuj believes that this change in the government will generate a great deal of interest from private equity and institutional funds, particularly in commercial real estate.
Here is a not so surprising fact: Blackstone is the 2nd largest developer in India with over 21 million square feet of office space in India.
Will REITs open up in India over the next 12 – 18 months?
Expect capital to start flowing in from China, the United States, the middle east, Singapore, etc.
Despite all of the excitement and high expectations, Anuj doesn't think that Modi will be able to "wave a magic wand". It will take some time, and Anuj counsels patience.
This video was filmed in Las Vegas in May 2014 at the ICSC Annual Recon convention. | 347,460 |
Young Mammals Emerge From Hibernation At Big Star Tonight
Rocks Off is still getting used to running into (or emailing) local musicians we haven't seen for a while, asking them where they've been and getting the answer "on tour." We're not complaining.
That's the case with Young Mammals - after a sharp set at Free Press Summer Fest, the band.
RO: Have you been recording at all? When can we expect some new music?
We're recording now for a 2011 release.
RO: What has the band been listening to lately?
We're all pretty obsessed with the Lower Dens album. Carlos is listening to Syd Barrett a lot lately. [Brian Eno's] Here Come the Warm Jets, Baby Huey and The Walkmen are old favorites that came back pretty constantly on the last tour, along with Balaclavas over and over.
RO: Have you seen any good shows lately? What were they?
Weird Weeds blew everyone in attendance at Mango's away on Sunday. Twin Sister ripped it up at Walter's last week and they've been our summer jam. The Energy opening for Screeching Weasel was outstanding.
RO: Are you looking forward to any fall shows? Which ones?
Devo at Fun Fun Fun. Of Montreal's new record is great and Janelle Monae promises to be inspiring. We also can't wait to play with Warpaint in October.
RO: What's the best rumor you heard in or about the local music scene this summer?
There was the one about that club and the owner and the band that plays there sometimes. Heard that might not happen! | 307,400 |
\begin{document}
\bibliographystyle{amsplain}
\relax
\title[ Local geometry of bihamiltonian structures]{ Webs, Lenard schemes, and
the local geometry of bihamiltonian Toda and Lax structures }
\author{ Israel~M.~Gelfand }
\author{ Ilya Zakharevich }
\address{ Dept. of Mathematics, Rutgers University, Hill Center, New
Brunswick, NJ, 08903}
\email{igelfand\atSign{}math.rutgers.edu}
\address{ Department of Mathematics, Ohio State University, 231 W.~18~Ave,
Columbus, OH, 43210 }
\email {ilya\atSign{}math.ohio-state.edu}
\date{ March 1999 (Revision III: March 2000) Archived as
\url{math.DG/9903080} Printed: \today }
\setcounter{section}{-1}
\maketitle
\begin{abstract}
We introduce a criterion that a given bihamiltonian structure
admits a local coordinate system where both brackets have constant
coefficients. This criterion is applied to the bihamiltonian open Toda
lattice in a generic point, which is shown to be locally isomorphic to a
Kronecker odd-dimensional pair of brackets with constant coefficients.
This shows that the open Toda lattice cannot be locally represented as a
product of two bihamiltonian structures.
In a generic point the bihamiltonian periodic Toda lattice is shown
to be isomorphic to a product of two open Toda lattices (one of which is
a (trivial) structure of dimension 1).
While the above results might be obtained by more traditional
methods, we use an approach based on general results on geometry of webs.
This demonstrates the possibility of applying a geometric language to
problems on bihamiltonian integrable systems; such a possibility may be no
less important than the particular results proven in this paper.
Based on these geometric approaches, we conjecture that
decompositions similar to the decomposition of the periodic Toda lattice
exist in local geometry of the Volterra system, the complete Toda
lattice, the multidimensional Euler top, and a regular bihamiltonian Lie
coalgebra. We also state general conjectures about geometry of more
general ``homogeneous'' finite-dimensional bihamiltonian structures.
The class of homogeneous structures is shown to coincide with the class
of system integrable by Lenard scheme. The bihamiltonian structures which
admit a non-degenerate Lax structure are shown to be locally isomorphic to
the open Toda lattice.
\end{abstract}
\tableofcontents
\section{Introduction }\label{h01}\myLabel{h01}\relax
A {\em local-geometric approach\/} consists of considering a geometric
structure (for the purpose of our discussion this is a collection of
tensor fields) up to a local diffeomorphism, studying its local
automorphisms, invariant tensor fields for these automorphisms, and a
possibility to decompose the structure into direct products. When applied
to integrable systems, this accounts to forgetting all the information
related to the given coordinate system (say, whether the structure is
polynomial {\em in this system\/}).
This approach cannot explain the phenomenon of integrability of a
Hamiltonian system, when the initial geometric structure is a Poisson
bracket and a function on a manifold. This local geometric structure has
too large a group of automorphisms, and there are no additional invariant
functions one could use to integrate the system. One needs global
(or non-invariant) data to integrate a Hamiltonian system.
There is an alternative {\em bihamiltonian\/} approach to dynamic
systems in which integrability becomes meaningful on the local level
already. In this approach one starts with two {\em compatible\/}\footnote{Two Poisson brackets $ \left\{,\right\}_{1} $ and $ \left\{,\right\}_{2} $ on $ M $ are {\em compatible\/} if
the bracket $ \lambda_{1}\left\{,\right\}_{1} +\lambda_{2}\left\{,\right\}_{2} $ is Poisson for any $ \lambda_{1} $, $ \lambda_{2} $.
} Poisson
brackets $ \left\{,\right\}_{1} $ and $ \left\{,\right\}_{2} $ on $ M $. Based on these brackets one constructs a
dynamical system which is Hamiltonian with respect to any one of these
brackets (and in fact to any linear combination of the brackets). The
construction of the dynamical system based on the brackets is called
{\em Lenard scheme}. It provides a family of functions in involution (w.r.t.~
any linear combination of the brackets). Considering any function of this
family as a Hamiltonian w.r.t.~either of the two brackets, one obtains many
Hamiltonian flows. In most cases which appear in practice the above
family of functions is large enough to make these dynamics integrable
(compare with examples in Section~\ref{h48} and statements of Section~\ref{h55}).
Lenard scheme was formalized in \cite{Lax76Alm,Mag78Sim,
GelDor79Ham,FokFuch80Str}, see also \cite{KosMag96Lax}. Most of these
formalizations assume that at least one of the brackets is symplectic\footnote{Any symplectic structure carries a Poisson bracket. We call such Poisson
brackets {\em symplectic}.}
(thus $ M $ is even-dimensional). At that time it was not realized how these
formalizations relate to known applications of Lenard scheme, which
consist of a recurrence relation, and of initial data for these
relations. The above formalizations of \cite{Lax76Alm,Mag78Sim,
GelDor79Ham,FokFuch80Str,KosMag96Lax} studied the recurrence
relations only, ignoring the initial data.
When even-dimensional bihamiltonian structures were classified in
\cite{Tur89Cla,Mag88Geo,Mag95Geo,McKeanPC,GelZakh93}, it became
clear that there is exactly one case where the above ``symplectic''
formalizations are compatible with the initial data for recurrence. This
case is in no way analogous to known examples (see Remark~\ref{rem55.80}).
Later, when the analysis of \cite{GelZakhFAN,GelZakh94Spe} had shown
that the periodic KdV system should be considered as an odd-dimensional
(though infinite-dimensional) bihamiltonian structure, an alternative
approach to the Lenard scheme became necessary. The philosophy of
\cite{GelZakhWeb} and \cite{GelZakh93} is that such a substitute is given by the
local classification of bihamiltonian structures.
By this philosophy the above-mentioned ``symplectic'' formalizations
of Lenard scheme are substituted by the local descriptions of generic
even-dimensional bihamiltonian structures in \cite{Tur89Cla,Mag88Geo,
Mag95Geo,McKeanPC,GelZakh93}. Indeed, these descriptions provide
all the information contained in \cite{Mag78Sim} and \cite{GelDor79Ham}, and
demystify the assumptions of the former papers.
From the classification of even-dimensional bihamiltonian structures
in general position, it turns out that this geometry is pretty rigid: on
an open subset the structure may be canonically decomposed into a direct
product of two-dimensional components, with one distinguished canonically
defined coordinate on each of these components. (It is this rigidity
which allows a local construction of a big family of commuting
Hamiltonians.) However, as in the case of a Hamiltonian system, locally
it has discrete parameters only (up to minor details the only parameter
is dimension). The moral of this classification is that only
$ 2 $-dimensional geometry is important, anything else can be combined from
$ 2 $-dimensional building blocks.
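
(A side remark, in notation of our own choosing, on why dimension $ 2 $ is
special here: on a $ 2 $-dimensional manifold with local coordinates $ p $, $ q $
any bivector field is automatically Poisson, since the Jacobi identity is
vacuous; hence any two brackets
\[ \left\{f,g\right\}_{i}=a_{i}\left(p,q\right)\left(\frac{\partial f}{\partial p}\frac{\partial g}{\partial q}-\frac{\partial f}{\partial q}\frac{\partial g}{\partial p}\right), \qquad i=1,2, \]
are compatible, and the ratio $ \lambda=a_{2}/a_{1} $ is invariantly associated
to the pair wherever $ a_{1}\neq0 $; the distinguished coordinate mentioned
above may be thought of as arising from this ratio.)
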
The situation becomes very different in an odd-dimensional case: the
structures in general position are indecomposable. In fact such
structures are even {\em micro-indecomposable}, i.e., one cannot represent them
as a product of two structures of smaller dimension---even if one
restricts attention to one tangent space to a point of the manifold. For
{\em analytical\/} structures in general position a local classification is also
possible (\cite{GelZakhWeb,GelZakh93}), but it is equivalent to a (local)
classification of {\em non-linear\/} $ 1 $-dimensional bundles over a rational curve,
i.e., analytical surfaces which have a submanifold isomorphic to $ {\mathbb P}^{1} $ and a
fixed projection onto this curve\footnote{Dimension of the initial bihamiltonian structure depends on the
degree of the normal (line) bundle to this curve.}. This classification involves
functional parameters (several functions of two complex variables).
The geometry of such bihamiltonian structures is also very rigid,
so based on local geometric data one can canonically construct enough
functions in involution, and thus produce integrable systems. Out of this
huge pool of micro-indecomposable integrable systems of the given odd
dimension one can single out one particular {\em flat\/} structure, with {\em both\/}
Poisson structures having constant coefficients in the same coordinate
system (any two flat odd-dimensional indecomposable structures are
locally isomorphic, compare \cite{GelZakhFAN}).
However, after the heuristic of \cite{GelZakhFAN} that the KdV system is
in fact an infinite-dimensional analogue of an odd-dimensional
bihamiltonian structure, no other bihamiltonian structures were
(explicitly) considered from the point of view of classification up to a
diffeomorphism\footnote{Since the geometry of many ``classical'' bihamiltonian structures is
investigated up to minor details, a specialist could easily concoct an
answer to such a question from the known results. The conjectural reason
why this was not done before is that the answer would not fit into the
fixed mindset of ``everything is a product of $ 2 $-dimensional components'',
compare with discussion in Section~\ref{h005}.}. One of the targets of this paper is to investigate from
this point of view the simplest classical bihamiltonian structures: the
open and the periodic finite-dimensional Toda lattices.
While we proceed to this goal, we also provide generally useful,
easy-to-check criteria of flatness, investigate the Lenard scheme in the
context of odd-dimensional bihamiltonian geometry, and provide a geometric
description of systems which admit a Lax representation.
For the detailed overview of our results presented in Section~\ref{h003}, we
need to introduce some notions which will be used throughout the
paper. We do this in Section~\ref{h002}. Here we only list the principal steps
of our presentation:
\begin{enumerate}
\item
criteria of being homogeneous and being Kronecker of corank 1;
\item
introduction of webs as a way to encode mutual positions of Casimir
functions;
\item
proof of the criteria;
\item
examples of bihamiltonian structures which demonstrate purposes of
different conditions of the criteria;
\item
relation of Lenard integrability and homogeneous structures;
\item
relation of Lax structures and flatness;
\item
application of criteria to Toda lattices.
\end{enumerate}
We also discuss a geometric conjecture which might provide a geometric
description of many other finite-dimensional bihamiltonian structures.
The authors are indebted to A.~S.~Fokas, A.~Givental,
Y.~Kosmann-Schwarzbach, F.~Magri, H.~McKean, T.~Ratiu, N.~Reshetikhin for
fruitful discussions, and to A.~Gorokhovsky, M.~Braverman, B.~Khesin,
A.~Panasyuk, and V.~Serganova for the remarks which led to improvements
of this paper. Special thanks go to A.~Panasyuk for letting us see the
preprint of \cite{Pan99Ver} before it went to print, and to M.~Gekhtman for
his suggestions on using known B\"acklund--Darboux transformation for
Volterra systems.
{\bf Revisions: }The revision II of this paper (January 2000) introduced
references to new papers \cite{Tur99Equi} and \cite{Zakh99Kro}, expanded
bibliography on ``classical'' bi-Hamiltonian systems, and minor stylistic
corrections. The revision III (March 2000) added Remark~\ref{rem48.91}.
Numbering of statements did not change. The archive name of this paper is
\url{math.DG/9903080} at \url{http://arXiv.org/math/abs}.
\section{Basic notions }\label{h002}\myLabel{h002}\relax
All the geometric definitions which follow are applicable in $ C^{\infty} $ and
analytic geometry. We state only the $ C^{\infty} $-variant, the analytic one can be
obtained by substituting $ {\mathbb R} $ by $ {\mathbb C} $.
In what follows if $ f $ is a function or a tensor field on $ M $, $ f|_{m} $
denotes the value of $ f $ at $ m\in M $.
\begin{definition} A {\em bracket\/} on a manifold $ M $ is a $ {\mathbb R} $-bilinear skewsymmetric
mapping $ f,g \mapsto \left\{f,g\right\} $ from pairs of smooth functions on $ M $ to smooth
functions on $ M $. This mapping should satisfy the Leibniz identity
$ \left\{f,gh\right\}=g\left\{f,h\right\}+h\left\{f,g\right\} $. A bracket is {\em Poisson\/} if it satisfies Jacobi
identity too (thus defines a structure of a Lie algebra on functions on
$ M $).
A {\em Poisson structure\/} is a manifold $ M $ equipped with a Poisson
bracket. \end{definition}
\begin{remark} \label{rem002.20}\myLabel{rem002.20}\relax Leibniz identity implies $ \left\{f,g\right\}|_{m}=0 $ if $ f $ has a zero of
second order at $ m\in M $. Thus a bracket is uniquely determined by describing
functions $ \left\{f_{i},f_{j}\right\} $, here $ \left\{f_{i}\right\}_{i\in I} $ is an arbitrary collection of smooth
functions on $ M $ such that for any $ m\in M $ the collection $ \left\{df_{i}|_{m}\right\}_{i\in I} $ of vectors
in $ {\mathcal T}_{m}^{*}M $ generates $ {\mathcal T}_{m}^{*}M $ as a vector space. \end{remark}
\begin{definition} Call two Poisson brackets $ \left\{,\right\}_{1} $ and $ \left\{,\right\}_{2} $ on $ M $ {\em compatible\/} if
the bracket $ \lambda_{1}\left\{,\right\}_{1} +\lambda_{2}\left\{,\right\}_{2} $ is Poisson for any $ \lambda_{1} $, $ \lambda_{2} $.
A {\em bihamiltonian structure\/} is a manifold $ M $ with a pair of compatible
Poisson brackets. \end{definition}
In fact it is possible to show that if {\em one\/} linear combination $ \lambda_{1}\left\{,\right\}_{1}
+\lambda_{2}\left\{,\right\}_{2} $ of two Poisson brackets is Poisson and $ \lambda_{1}\not=0 $, $ \lambda_{2}\not=0 $, then {\em any\/}
linear combination $ \lambda_{1}\left\{,\right\}_{1} +\lambda_{2}\left\{,\right\}_{2} $ is Poisson. In the analytic situation
the coefficients $ \lambda_{1} $, $ \lambda_{2} $ may be taken to be complex numbers.
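This claim is a standard consequence of the Schouten-bracket formalism. As a sketch (here $ \eta_{1} $, $ \eta_{2} $ denote the bivector fields associated with the two brackets as in Definition~\ref{def01.120} below, and one uses the fact that a bracket is Poisson iff the Schouten bracket $ \left[\eta,\eta\right] $ of its bivector with itself vanishes):

```latex
\begin{equation}
\left[\lambda_{1}\eta_{1}+\lambda_{2}\eta_{2},\lambda_{1}\eta_{1}+\lambda_{2}\eta_{2}\right]
  =\lambda_{1}^{2}\left[\eta_{1},\eta_{1}\right]
  +2\lambda_{1}\lambda_{2}\left[\eta_{1},\eta_{2}\right]
  +\lambda_{2}^{2}\left[\eta_{2},\eta_{2}\right]
  =2\lambda_{1}\lambda_{2}\left[\eta_{1},\eta_{2}\right].
\notag\end{equation}
```

Since $ \left[\eta_{1},\eta_{1}\right]=\left[\eta_{2},\eta_{2}\right]=0 $, one linear combination with $ \lambda_{1}\lambda_{2}\not=0 $ being Poisson forces $ \left[\eta_{1},\eta_{2}\right]=0 $, hence every linear combination is Poisson.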
If $ M $ is a $ C^{\infty} $-manifold with a bracket, we may consider the extension
of the bracket to the $ {\mathbb C} $-vector space of complex-valued functions on $ M $. In
this case $ \lambda_{1}\left\{,\right\}_{1} +\lambda_{2}\left\{,\right\}_{2} $ is well-defined even for complex values of
$ \lambda_{1},\lambda_{2} $. By the above remarks, complex linear combinations of brackets of a
bihamiltonian structure are also Poisson. In what follows we always
consider brackets as acting on the spaces of complex-valued functions.
\begin{definition} Given two brackets, $ \left\{\right\}_{M} $ on $ M $ and $ \left\{\right\}_{N} $ on $ N $, the {\em direct
product\/} of brackets $ \left\{\right\}_{M} $ and $ \left\{\right\}_{N} $ is the bracket on $ M\times N $ defined by
\begin{equation}
\left\{f_{M}\times f_{N},g_{M}\times g_{N}\right\}_{M\times N} \buildrel{\text{def}}\over{=} \left\{f_{M},g_{M}\right\}_{M}\times\left(f_{N}g_{N}\right)+\left(f_{M}g_{M}\right)\times\left\{f_{N},g_{N}\right\}_{N}.
\notag\end{equation}
Call a bihamiltonian structure {\em decomposable\/} if it is isomorphic to a direct
product of two bihamiltonian structures of positive dimension. \end{definition}
Obviously, a direct product of two Poisson structures is a Poisson
structure, and a direct product of two bihamiltonian structures is a
bihamiltonian structure.
\begin{definition} \label{def002.40}\myLabel{def002.40}\relax Consider a bihamiltonian structure $ \left(V,\left\{,\right\}_{1},\left\{,\right\}_{2}\right) $,
here $ V $ is a vector space. The bihamiltonian structure is
{\em translation-invariant\/} if $ \left\{{\mathfrak T}f,{\mathfrak T}g\right\}_{a}={\mathfrak T}\left\{f,g\right\}_{a} $, $ a=1,2 $, for any parallel
translation $ {\mathfrak T} $ on $ V $, any $ f $, and any $ g $. \end{definition}
\begin{definition} \label{def002.43}\myLabel{def002.43}\relax A bihamiltonian structure on $ M $ is {\em flat\/} if it is
locally isomorphic to a translation-invariant bihamiltonian structure,
i.e., there is a collection of open subsets $ M_{i}\subset M $ such that $ M=\bigcup_{i\in I}M_{i} $, and
for any $ i\in I $ the restriction of the bihamiltonian structure on $ M $ to $ M_{i} $ is
isomorphic to an open subset $ \widetilde{M}_{i}\subset V_{i} $, here $ V_{i} $ is a vector space with a
translation-invariant bihamiltonian structure.
A bihamiltonian structure on $ M $ is {\em generically flat\/} if it is flat on
a dense open subset $ U\subset M $. \end{definition}
\begin{remark} Throughout the paper the phrase ``{\em at generic points\/}'' means ``at
points of an appropriate open dense subset''. Similarly, a ``{\em small open
subset\/}'' is used instead of ``an appropriate neighborhood of any given
point''. \end{remark}
\begin{remark} It is possible to give a complete classification of
translation-invariant bihamiltonian structures and a complete local
classification of flat bihamiltonian structures. (See Remark~\ref{rem6.13}.)
Classification of generically flat bihamiltonian structures is an
interesting unsolved problem which we do not consider in this paper. \end{remark}
\begin{remark} Any flat structure is generically flat, and any
translation-invariant structure is flat, but the opposite is not true. To
construct an example of non-translation-invariant flat structure one can
take a quotient of a translation-invariant structure on $ V $ by an arbitrary
discrete subgroup of $ V $. Later we will construct many generically flat
structures which are not flat. One of the simplest possible cases will be
provided in Example~\ref{ex002.45}, see also Theorems~\ref{th01.60},~\ref{th01.70}.
Not every bihamiltonian structure is generically flat. Important
examples of non-generically-flat structures will be constructed in
Section~\ref{h47}. \end{remark}
\begin{remark} The classification of Remark~\ref{rem6.13} shows that {\em indecomposable\/}
flat bihamiltonian structures break into two types with principally
different geometries: even-dimensional structures are modeled by Jordan
blocks, and odd-dimensional ones are modeled by Kronecker blocks. \end{remark}
Consider an interesting example of a translation-invariant
bihamiltonian structure. In fact it is going to be a key example of this
paper: we are going to show that this example is a ``building block'' in
decomposition of many ``classical'' examples of bihamiltonian structures.
\begin{example} Consider a vector space $ V $ with coordinates $ x_{0},\dots ,x_{2k-2} $ and
the Poisson brackets of coordinates
\begin{equation}
\left\{x_{2l},x_{2l+1}\right\}_{1}=1,\qquad \left\{x_{2l+1},x_{2l+2}\right\}_{2}=1,\qquad 0\leq l\leq k-2,
\label{equ45.20}\end{equation}\myLabel{equ45.20,}\relax
with all other brackets of the coordinate functions $ x_{0},\dots ,x_{2k-2} $ vanishing. This
pair of brackets is in fact a translation-invariant bihamiltonian
structure. \end{example}
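As a quick check (a sketch; the normalization of the family below is ours), one can exhibit a polynomial family of Casimir functions of the pencil $ \lambda\left\{,\right\}_{1}+\left\{,\right\}_{2} $ directly. For $ F=\sum_{l}c_{l}x_{2l} $ the brackets with even coordinates vanish automatically, and

```latex
\begin{equation}
\left\{F,x_{2l+1}\right\}
  =\lambda c_{l}\left\{x_{2l},x_{2l+1}\right\}_{1}
  +c_{l+1}\left\{x_{2l+2},x_{2l+1}\right\}_{2}
  =\lambda c_{l}-c_{l+1},
\qquad 0\leq l\leq k-2,
\notag\end{equation}
```

so $ c_{l+1}=\lambda c_{l} $, and $ F_{\lambda}=\sum_{l=0}^{k-1}\lambda^{l}x_{2l} $ is Casimir w.r.t.~$ \lambda\left\{,\right\}_{1}+\left\{,\right\}_{2} $; it depends on $ \lambda $ polynomially of degree $ k-1 $.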
The following example is the simplest of the classical examples of
bihamiltonian structures arising in the theory of integrable systems.
\begin{example} \label{ex002.45}\myLabel{ex002.45}\relax Given a Lie algebra $ {\mathfrak g} $ and an element $ \alpha\in{\mathfrak g}^{*} $, define a
bihamiltonian structure on $ {\mathfrak g}^{*} $ as in \cite{Bol91Com}. An element $ X\in{\mathfrak g} $ defines a
linear function $ f_{X} $ on $ {\mathfrak g}^{*} $. Due to Remark~\ref{rem002.20}, to define a
bihamiltonian structure on $ {\mathfrak g}^{*} $ it is enough to describe brackets $ \left\{f_{X},f_{Y}\right\}_{a} $,
$ a=1,2 $, $ X,Y\in{\mathfrak g} $.
Let $ \left\{f_{X},f_{Y}\right\}_{1} $ be a constant function on $ {\mathfrak g}^{*} $ and $ \left\{f_{X},f_{Y}\right\}_{2} $ be a linear
function on $ {\mathfrak g}^{*} $ given by the formulae
\begin{equation}
\left\{f_{X},f_{Y}\right\}_{1}\equiv c\left(X,Y\right)\buildrel{\text{def}}\over{=}f_{\left[X,Y\right]}\left(\alpha\right),\qquad \left\{f_{X},f_{Y}\right\}_{2}=f_{\left[X,Y\right]}.
\notag\end{equation}
The bracket $ \left\{,\right\}_{2} $ is the natural Lie--Kirillov--Kostant--Souriau Poisson
bracket on $ {\mathfrak g}^{*} $. The bracket $ \left\{,\right\}_{1} $ is translation-invariant. The bracket
$ \left\{,\right\}_{2} $ is translation-invariant only if $ {\mathfrak g} $ is abelian.
Call this bihamiltonian structure {\em regular\/} if $ {\mathfrak g} $ is semisimple and $ \alpha $
is regular semisimple. In such a case Conjecture~\ref{con01.100} states that
this structure is in fact generically flat (compare with \cite{Pan99Ver},
where a weaker property is proven\footnote{Paper \cite{Zakh99Kro} contains a proof of generic flatness of this
structure.}). In the case $ {\mathfrak g}={\mathfrak s}{\mathfrak l}_{2} $ the conjecture
follows from Theorem~\ref{th1.10}. This provides an example of a generically
flat, but not flat and not translation-invariant, structure.
In the case $ {\mathfrak g}={\mathfrak s}{\mathfrak l}_{2} $ it is easy to see that this structure is not flat.
Indeed, $ \left\{f,g\right\}_{2}|_{0}=0 $ for any $ f,g $. If the structure were flat, this would
imply $ \left\{f,g\right\}_{2}=0 $ for any $ f,g $, which is obviously false. \end{example}
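To make the case $ {\mathfrak g}={\mathfrak s}{\mathfrak l}_{2} $ explicit (an illustration; the choice of basis and of $ \alpha $ is ours), take the standard basis $ e,h,f $ with $ \left[h,e\right]=2e $, $ \left[h,f\right]=-2f $, $ \left[e,f\right]=h $, and $ \alpha\in{\mathfrak g}^{*} $ with $ \alpha\left(e\right)=\alpha\left(f\right)=0 $, $ \alpha\left(h\right)=a\not=0 $ (under the trace form $ \alpha $ corresponds to a regular semisimple element). Then the brackets of the linear coordinate functions are

```latex
\begin{gather}
\left\{f_{e},f_{f}\right\}_{1}=a,\qquad
\left\{f_{h},f_{e}\right\}_{1}=\left\{f_{h},f_{f}\right\}_{1}=0,
\notag\\
\left\{f_{e},f_{f}\right\}_{2}=f_{h},\qquad
\left\{f_{h},f_{e}\right\}_{2}=2f_{e},\qquad
\left\{f_{h},f_{f}\right\}_{2}=-2f_{f}.
\notag\end{gather}
```

In particular, all the brackets $ \left\{,\right\}_{2} $ of linear functions vanish at $ 0\in{\mathfrak g}^{*} $, as used in the argument above, while $ \left\{,\right\}_{1} $ is a (degenerate) constant bracket.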
By its definition, any flat bihamiltonian structure is locally
isomorphic to a direct product of several translation-invariant
indecomposable bihamiltonian structures. We introduce a special class of
bihamiltonian structures by allowing only a special class of factors in the
above direct product.
\begin{definition} \label{def01.103}\myLabel{def01.103}\relax A bihamiltonian structure is a {\em Kronecker\/} structure
if it is locally isomorphic to a direct product of several
translation-invariant odd-dimensional indecomposable structures. A {\em type\/}
of a Kronecker structure is the sequence of dimensions of factors in the
above direct product. The Kronecker structure is {\em indecomposable\/} if the
above product consists of one factor only.
A structure is {\em generically Kronecker\/} if it is Kronecker on an open
dense subset. \end{definition}
Note that a direct product of translation-invariant structures is
translation-invariant. In Section~\ref{h25} we will see that the components of a
product of translation-invariant structures are uniquely determined by
the product. Thus Kronecker structures are those flat structures whose open
subsets have {\em no even-dimensional\/} indecomposable components.
\begin{remark} The restriction of having no even-dimensional factors looks
very artificial. Moreover, one may think that bihamiltonian structures
which have {\em only\/} Jordan blocks should be the common case. Say, the
classification of even-dimensional bihamiltonian structures in general
position (\cite{Tur89Cla,Mag88Geo,Mag95Geo,McKeanPC,GelZakh93})
shows that on an open dense subset such pairs are isomorphic to direct
product of $ 2 $-dimensional bihamiltonian factors (thus have Jordan blocks
only in their decompositions). However, as we show later, some
``classical'' bihamiltonian systems are in fact generically Kronecker, and
we conjecture that many more such examples exist.
The condition of having no Jordan blocks is equivalent to the
condition of {\em completeness\/} of \cite{Bol91Com}. Note that the last
condition is intended as one of the possible {\em integrability criteria\/}: bihamiltonian
structures which are complete deserve to be called integrable. \end{remark}
By Remark~\ref{rem6.13}, flat bihamiltonian structures are essentially
pairs of skewsymmetric pairings on vector spaces, thus objects of linear
algebra. These objects of linear algebra have a classification, but the
building blocks of this classification are not only Jordan blocks, but
also some new blocks, constructed by Kronecker one year after Jordan.
This was the reason for our choice of the name.
\begin{remark} As Remark~\ref{rem6.13} will show, indecomposable odd-dimensional
flat bihamiltonian structures are locally isomorphic to the structure
given by~\eqref{equ45.20}. Thus the local geometry of a Kronecker structure is
uniquely determined by its type. \end{remark}
\begin{definition} \label{def01.120}\myLabel{def01.120}\relax Consider a bracket $ \left\{,\right\} $ on a manifold $ M $. The
{\em associated bivector\/}\footnote{A {\em bivector field\/} is a skewsymmetric contravariant tensor of valence 2.} {\em field\/} $ \eta $ is the section of $ \Lambda^{2}{\mathcal T}M $ given by $ \left\{f,g\right\}|_{m}=\left<
\eta|_{m},df\wedge dg|_{m} \right> $, $ m\in M $, here $ \left<, \right> $ denotes the canonical pairing between
$ \Lambda^{2}{\mathcal T}_{m}M $ and $ \Omega_{m}^{2}M $. \end{definition}
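In local coordinates $ x^{1},\dots ,x^{n} $ on $ M $ this amounts to the familiar expression (stated here for convenience; the overall normalization depends on the convention chosen for the pairing $ \left<, \right> $):

```latex
\begin{equation}
\eta=\sum_{i<j}\left\{x^{i},x^{j}\right\}\frac{\partial}{\partial x^{i}}\wedge\frac{\partial}{\partial x^{j}},
\qquad
\left\{f,g\right\}=\sum_{i,j}\left\{x^{i},x^{j}\right\}
\frac{\partial f}{\partial x^{i}}\frac{\partial g}{\partial x^{j}}.
\notag\end{equation}
```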
\begin{definition} Consider a bracket $ \left\{,\right\} $ on $ M $ and $ m_{0}\in M $. The {\em associated pairing\/}
(,) in $ {\mathcal T}_{m_{0}}^{*}M $ is defined as $ \left(\alpha,\beta\right)=\left\{f,g\right\}|_{m_{0}} $ if $ \alpha=df|_{m_{0}} $, $ \beta=dg|_{m_{0}} $. \end{definition}
Obviously, the associated bivector field uniquely determines the
bracket and vice versa. The associated pairing is a skewsymmetric
bilinear pairing.
Given a pair of brackets $ \left\{,\right\}_{1} $ and $ \left\{,\right\}_{2} $, one obtains two bivector
fields $ \eta_{1} $, $ \eta_{2} $. Analogously, one obtains two skewsymmetric bilinear
pairings $ \left(,\right)_{1} $, $ \left(,\right)_{2} $ on $ {\mathcal T}_{m}^{*}M $, so that $ \left(\alpha,\beta\right)_{a}=\left\{f,g\right\}_{a}|_{m} $ if $ \alpha=df|_{m} $, $ \beta=dg|_{m} $,
$ a=1,2 $.
\begin{definition} The {\em rank\/} of the bracket $ \left\{,\right\} $ at $ m\in M $ is $ r $ if the associated
skewsymmetric bilinear pairing on $ {\mathcal T}_{m}^{*}M $ has rank $ r $. In this case the
{\em corank\/} of the bracket is $ \dim M-r $.
A bracket has a {\em constant (co)rank\/} if its rank does not depend on the
point $ m\in M $. A bracket is {\em symplectic\/} if the corank is constant and equal to
0. \end{definition}
\begin{definition} Given a pair of vector spaces $ V^{\alpha} $ and $ V^{\beta} $, each equipped with
a pair of skewsymmetric bilinear pairings, equip $ V^{\alpha}\oplus V^{\beta} $ with two pairings
$ \left(,\right)_{a}\buildrel{\text{def}}\over{=}\left(,\right)_{a}^{\alpha}\oplus\left(,\right)_{a}^{\beta} $, $ a=1,2 $. If a pair is isomorphic to such a direct sum
with $ \dim V^{i}\not=0 $, $ i=\alpha,\beta $, it is {\em decomposable}. \end{definition}
It is possible to provide a complete description of indecomposable
pairs of skewsymmetric pairings (we will do it in Theorem~\ref{th6.10}).
\begin{definition} \label{def01.105}\myLabel{def01.105}\relax A bihamiltonian structure $ \left(M,\left\{\right\}_{1},\left\{\right\}_{2}\right) $ is {\em homogeneous\/}\footnote{A similar definition appears in \cite{Pan99Ver}.}
of type $ \left(2k_{1}-1,2k_{2}-1,\dots ,2k_{l}-1\right) $ if for any $ m\in M $ the pair of bilinear
pairings on $ {\mathcal T}_{m}^{*}M $ decomposes into a direct sum of indecomposable blocks of
dimensions $ 2k_{1}-1 $, $ 2k_{2}-1 $, \dots , $ 2k_{l}-1 $.
Such a homogeneous system is {\em micro-indecomposable\/} if $ l=1 $. \end{definition}
By uniqueness of decomposition into indecomposable blocks (Theorem
~\ref{th6.10}), Kronecker structures are those bihamiltonian structures which
are simultaneously homogeneous and flat. There exist important examples
of homogeneous structures which are not flat (see Section~\ref{h47}).
What makes homogeneous structures important is the fact that the
standard algorithm of ``complete integration'' (so-called {\em anchored Lenard
scheme\/}) is applicable to these structures, and this algorithm provides
enough functions in involution for these structures only. (See Section
~\ref{h55} for details.)
In fact Kronecker structures are a {\em very special\/} case of homogeneous
structures:
\begin{conjecture} Given a sequence $ \left(2k_{1}-1,2k_{2}-1,\dots ,2k_{l}-1\right) $ there exist $ N>0 $
and a natural way to assign tensor fields $ K_{1},\dots ,K_{N} $ to a homogeneous
bihamiltonian structure such that the structure is Kronecker iff $ K_{i}=0 $,
$ 1\leq i\leq N $. \end{conjecture}
In \cite{GelZakhWeb} we proved this conjecture in the case of
micro-indecomposable structures of dimension 3. This generalizes to the
case of a general micro-indecomposable structure. In these cases $ N=1 $, and
the tensor field $ K_{1} $ is in fact a $ 2 $-form of curvature of a connection on
an appropriate line bundle (compare with \cite{Rig98Sys}). This $ 2 $-form plays
the same r\^ole for bihamiltonian structures as tensor of curvature plays
for Riemannian structures.
In what follows we provide criteria of homogeneity and of being an
indecomposable Kronecker structure. All these criteria are going to be
expressed in the following terms:
\begin{definition} Call a smooth function $ F $ on a manifold $ M $ with a Poisson
bracket $ \left\{,\right\} $ a {\em Casimir\/} function if $ \left\{F,f\right\}=0 $ for any smooth function $ f $ on $ M $.
\end{definition}
Obviously, any function $ \varphi\left(F_{1},F_{2},\dots ,F_{k}\right) $ of several Casimir functions
is again Casimir.
\begin{definition} A collection of smooth functions $ F_{1},\dots ,F_{r} $ on $ M $ is {\em dependent\/}
if $ \varphi\left(F_{1},\dots ,F_{r}\right)\equiv 0 $ for an appropriate smooth function $ \varphi\not\equiv 0 $. \end{definition}
We will use this definition when we want to pick out a small
independent collection of Casimir functions from the set of all Casimir
functions (possibly Casimir functions for several different brackets).
\section{Overview }\label{h003}\myLabel{h003}\relax
One of the principal targets of this paper is to state three
criteria which for a given bihamiltonian structure determine whether it
is
\begin{enumerate}
\item
a homogeneous micro-indecomposable structure (Theorem~\ref{th1.07});
\item
an indecomposable Kronecker structure (Theorem~\ref{th1.10});
\item
a homogeneous structure (Amplification~\ref{amp1.07}).
\end{enumerate}
We will use the criterion of Theorem~\ref{th1.10} to prove that open and
periodic {\em Toda lattices\/} are generically Kronecker (in Theorems~\ref{th01.60}
and~\ref{th01.70}), and to show that so-called {\em Lax structures\/} are
indecomposable Kronecker structures provided some conditions of general
position hold (in Theorem~\ref{th60.30}).
The most interesting feature of all these criteria is that they are
stated in terms of {\em mutual position\/}\footnote{Given several functions $ \left\{F_{i}\right\}_{i\in I} $ on a manifold $ M $ and a point $ m_{0}\in M $,
consider the directions of differentials $ dF_{i}|_{m_{0}} $ of these functions at $ m_{0} $.
These directions can be considered as points of the projectivization
$ {\mathcal P}\left({\mathcal T}_{m_{0}}^{*}M\right) $ of the vector space $ {\mathcal T}_{m_{0}}^{*}M $. Thus we obtain a configuration of $ |I| $
points in a projective space, and this configuration depends on $ m_{0}\in M $. The
term ``mutual position'' refers to studying these configurations of points.} of Casimir functions for different
linear combinations $ \lambda_{1}\left\{,\right\}_{1} +\lambda_{2}\left\{,\right\}_{2} $ of Poisson brackets of the
bihamiltonian structure. We propose a way to encode these mutual
positions in a geometric structure of a new type, which we call a {\em web}.
Recall that the traditional Liouville approach to complete
integration of a dynamical system is to provide a system of so-called
{\em action-angle variables}. It so happens that in typical examples the
Casimir functions depend on action variables only. Moreover, the action
variables are typically much easier to find than the angle variables. This
indicates a fundamental asymmetry between action variables and angle
variables.
The notion of a web (Definition~\ref{def02.20}) amplifies this asymmetry by
providing a way to remove angle variables from consideration altogether.
Since the Casimir functions do not depend on angle variables, it is
possible to study the mutual position of Casimir functions in terms of
the geometry of the web which corresponds to the given bihamiltonian
structure. Thus the conditions of the above criteria (of
being homogeneous or Kronecker structures) may be reformulated in terms
of webs.
The webs for micro-indecomposable bihamiltonian structures coincide
with {\em Veronese webs\/} which were studied\footnote{A beginning of a similar study in the case of general homogeneous
structures is done in \cite{Pan99Ver}.} in \cite{GelZakhWeb} and \cite{GelZakh93}.
After the criterion of being a Kronecker structure is reformulated as a
statement about webs, it becomes a direct corollary of results of
\cite{GelZakhWeb}. The results of \cite{GelZakhWeb} we use here only scratch the
surface of the beautiful theories of \cite{GelZakhWeb,GelZakh93,
Pan99Ver}; in Section~\ref{h2} we provide an independent formulation of
these results, and prove the simplest of them. In Section~\ref{h45} we deduce
from these results the criterion~\ref{th1.10} of being an indecomposable
Kronecker structure.
Though the criteria~\ref{th1.07} and~\ref{amp1.07} of being a homogeneous
system may be formulated in terms of webs, in fact both the hypotheses
and the conclusions of these statements may be stated in terms of
individual cotangent spaces $ {\mathcal T}_{m}^{*}M $ to the bihamiltonian structure $ M $. Thus
these statements may be reduced to appropriate statements of linear
algebra. We do this reduction in Section~\ref{h25}.
The criterion~\ref{th1.10} of being an indecomposable Kronecker structure
is expressed in terms of several inequalities. In Sections~\ref{h47} and~\ref{h62}
we provide examples of bihamiltonian structures which show that no
inequality may be weakened without breaking the criterion. These examples
are homogeneous bihamiltonian structures which are not flat. One of these
examples shows that even the presence of a family of Casimir functions
which depend {\em polynomially\/} on a parameter does not guarantee flatness.
Note that all the examples of Section~\ref{h47} are completely
integrable. Here we use this vague term in the following sense: the
``anchored'' Lenard scheme works for these examples, and provides enough
functions in involution to construct action-angle variables. In Section
~\ref{h48} we describe the anchored Lenard scheme, and show its relations with
Casimir functions (thus with webs).
In Section~\ref{h55} we show that any homogeneous structure is completely
integrable via the anchored Lenard scheme. Theorem~\ref{th55.50} shows that in
fact the class of bihamiltonian structures which may be completely
integrated via the anchored Lenard scheme {\em coincides\/} with the class of
homogeneous structures. This answers a long-standing question in the
theory of integrable systems.
We finish the paper with applications of the criterion of flatness
to classical examples of integrable systems. After recalling (in Section
~\ref{h0}) definitions of {\em Toda lattices}, we show that the open and the periodic
Toda lattices are in fact generically flat (Theorems~\ref{th01.60} and
~\ref{th01.70}).
In Section~\ref{h60} we introduce a notion of a {\em Lax structure}. It is a
natural modification of the notion of Lax operator from \cite{KosMag96Lax}. We
show that under appropriate non-degeneracy conditions all the Lax
structures (in generic points) are indecomposable Kronecker structures.
In particular, two non-degenerate Lax structures of the same dimension
become isomorphic when restricted to appropriate open subsets.
Section~\ref{h005} contains conjectures which extend results of this
paper to the case of homogeneous systems which are not
micro-indecomposable.
\section{The principal criteria }\label{h1}\myLabel{h1}\relax
One of the key ideas of this paper (compare with Conjecture
~\ref{con01.100}) is that many integrable systems admit a decomposition into a
product of ``simple'' bihamiltonian structures given by~\eqref{equ45.20}. Theorem
~\ref{th1.10} will provide an easy-to-check criterion for when an open subset of a
given bihamiltonian structure is {\em isomorphic\/} to one given by~\eqref{equ45.20}.
Note that to check the criterion all one needs to know are Casimir
functions.
Note that a structure is locally isomorphic to one given by~\eqref{equ45.20}
iff it is an indecomposable Kronecker structure. In other words, it is
simultaneously a micro-indecomposable homogeneous structure, and a flat
structure. The following statement provides a criterion for the first
part, being a micro-indecomposable homogeneous structure.
\begin{theorem} \label{th1.07}\myLabel{th1.07}\relax Consider a manifold $ M $, $ \dim M\not=0 $, with two compatible
Poisson structures $ \left\{,\right\}_{1} $ and $ \left\{,\right\}_{2} $. Consider an open subset $ {\mathcal U}\subset{\mathbb R} $ and a
family of smooth functions $ F_{\lambda} $, $ \lambda\in{\mathcal U} $, on $ M $. Suppose that for any $ \lambda\in{\mathcal U} $ the
function $ F_{\lambda} $ is Casimir w.r.t.~the Poisson bracket $ \lambda\left\{,\right\}_{1}+\left\{,\right\}_{2} $, and that
$ dF_{\lambda}|_{m}\in{\mathcal T}_{m}^{*}M $ depends continuously on $ \lambda $ for any $ m\in M $. For $ m\in M $ denote by
$ W_{1}\left(m\right)\subset{\mathcal T}_{m}^{*}M $ the vector subspace spanned by the differentials $ dF_{\lambda}|_{m} $ for
all possible $ \lambda\in{\mathcal U} $. If
\begin{enumerate}
\item
for one particular value $ m_{0}\in M $ one has $ \dim W_{1}\left(m_{0}\right)\geq\frac{\dim M}{2} $;
\item
for one particular value of $ \left(\lambda_{1},\lambda_{2}\right)\in{\mathbb R}^{2} $ the Poisson structure
$ \lambda_{1}\left\{,\right\}_{1}+\lambda_{2}\left\{,\right\}_{2} $ has at most one independent Casimir function on any open
subset of $ M $ near $ m_{0} $;
\end{enumerate}
then $ \dim M $ is odd, and the bihamiltonian structure on $ M $ is
homogeneous of type $ \left(\dim M\right) $ on an open subset $ U\subset M $ such that $ m_{0} $ is in
the closure of $ U $. \end{theorem}
The proof of this theorem is finished with the proof of Corollary
~\ref{cor25.40} in Section~\ref{h25}. Note that this proof implies also that $ \dim
W_{1}\left(m_{0}\right)=\frac{\dim M+1}{2} $. In fact the proof will show that if the Poisson
bracket $ \lambda_{1}\left\{,\right\}_{1}+\lambda_{2}\left\{,\right\}_{2} $ is of constant corank 1, then one may require that
$ m_{0}\in U $.
Amplification~\ref{amp1.07} provides a similar criterion of homogeneity
with an arbitrary type.
The following statement shows what one needs to know about
a micro-indecomposable homogeneous structure to ensure its flatness (thus
its being Kronecker):
\begin{theorem} \label{th1.10}\myLabel{th1.10}\relax In addition to the conditions of Theorem~\ref{th1.07} suppose
that $ M $ is analytic, and $ F_{\lambda}\left(m\right) $ depends polynomially on $ \lambda $:
\begin{equation}
F_{\lambda}\left(m\right)=\sum_{k=0}^{d}f_{k}\left(m\right)\lambda^{k},
\notag\end{equation}
with analytic coefficients $ f_{k}\left(m\right) $ and the degree $ d $ satisfying $ d<\frac{\dim M}{2} $.
Then the bihamiltonian structure on $ M $ is flat indecomposable of odd
dimension on an open subset $ U $ the closure of which contains $ m_{0} $. \end{theorem}
The proof of this theorem takes up to Section~\ref{h45}. Note that this
proof implies also that $ d=\frac{\dim M -1}{2} $. Note that Conjecture~\ref{con01.120}
may provide a similar criterion applicable to arbitrary (i.e., not
necessarily indecomposable) Kronecker structures. The proof will actually
show the following statement (which cannot be expressed in terms of
Casimir functions only):
\begin{amplification} \label{amp1.12}\myLabel{amp1.12}\relax In the case when in addition to conditions of
Theorem~\ref{th1.10} the Poisson structure $ \lambda_{1}\left\{,\right\}_{1}+\lambda_{2}\left\{,\right\}_{2} $ is of constant corank
1, the open subset $ U $ is in fact a neighborhood of $ m_{0} $. \end{amplification}
Remark~\ref{rem6.13} will show that all flat indecomposable structures of
dimension $ 2k-1 $ are locally isomorphic to each other, thus to the
structure given by~\eqref{equ45.20}. It is easy to see that for the structure of
~\eqref{equ45.20} one has $ \dim M=2k-1 $, the vector space $ W_{1}\left(m\right) $ is spanned by $ dx_{0} $,
$ dx_{2},\dots $, $ dx_{2k-2} $, and the family $ F_{\lambda}\left(x\right) $ of degree $ k-1 $ is given by
~\eqref{equ45.25}.
\begin{remark} Not all homogeneous bihamiltonian structures of type $ \left(2k-1\right) $ are
flat, as the examples of Section~\ref{h47} show (already in the case $ k=2 $).
The example of $ \left\{,\right\}_{1}=\left\{,\right\}_{2}\equiv 0 $ shows that in Theorem~\ref{th1.07} one cannot
drop the restriction on the number of independent Casimir functions.
Considering a direct product of $ M $ with any bihamiltonian structure shows
the significance of the bound on $ \dim W_{1} $. Moreover, Proposition~\ref{prop47.40}
implies that one cannot weaken the bound $ d<\frac{\dim M}{2} $ of Theorem~\ref{th1.10}.
\end{remark}
\begin{remark} As Theorem~\ref{th01.60} will show, one can also consider Theorem
~\ref{th1.10} as a criterion that a given bihamiltonian structure is locally
isomorphic to an open subset of the open Toda lattice. \end{remark}
\begin{remark} Theorems~\ref{th1.07} and~\ref{th1.10} are almost immediate corollaries
of results of \cite{GelZakhWeb} and \cite{GelZakh93}. However, since we will need
many results of these papers anyway, the following three sections provide
almost self-contained proof of these theorems. The only component of the
proof which requires a reference to \cite{GelZakhWeb} is the last statement of
Theorem~\ref{th2.07}. The proof of this statement is outside of the scope of
this paper (compare with Remark~\ref{rem2.017}). \end{remark}
\section{Linear case and criterion of homogeneity }\label{h25}\myLabel{h25}\relax
Recall the classification of pairs of skewsymmetric bilinear
pairings from \cite{GelZakhFAN} (see also \cite{GelZakhWeb,GelZakh93}). For $ k\in{\mathbb N} $
consider the identity $ k\times k $ matrix $ I_{k} $. For $ \mu\in{\mathbb C} $ consider the Jordan block
$ J_{k,\mu} $ of size $ k $ and eigenvalue $ \mu $. The pair of matrices
\begin{equation}
{\text H}_{1}^{\left(\mu\right)}= \left(
\begin{matrix}
0 & J_{k,\mu}
\\
-J_{k,\mu}^{t} & 0
\end{matrix}
\right),\qquad {\text H}_{2}^{\left(\mu\right)}=\left(
\begin{matrix}
0 & I_{k}
\\
-I_{k} & 0
\end{matrix}
\right)
\notag\end{equation}
defines a pair of skewsymmetric bilinear pairings on vector space $ {\mathbb C}^{2k} $. The
limit case of $ \mu \to \infty $ may be deformed to
\begin{equation}
{\text H}_{1}^{\left(\infty\right)}= \left(
\begin{matrix}
0 & I_{k}
\\
-I_{k} & 0
\end{matrix}
\right),\qquad {\text H}_{2}^{\left(\infty\right)}= \left(
\begin{matrix}
0 & J_{k,0}
\\
-J_{k,0}^{t} & 0
\end{matrix}
\right).
\notag\end{equation}
Denote the pair $ \left({\text H}_{1}^{\left(\mu\right)},{\text H}_{2}^{\left(\mu\right)}\right) $ of skewsymmetric bilinear pairings by
$ {\mathcal J}_{2k,\mu} $, $ k\in{\mathbb N} $, $ \mu\in{\mathbb C}{\mathbb P}^{1} $.
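As a quick numerical illustration (ours, not from the cited papers; the helper names are our choosing), one can check with {\tt numpy} that for a Jordan-type pair the pencil $ \lambda{\text H}_{1}^{\left(\mu\right)}+{\text H}_{2}^{\left(\mu\right)} $ is nondegenerate for generic $ \lambda $, while its kernel jumps up exactly at $ \lambda=-1/\mu $:

```python
import numpy as np

def jordan_pair(k, mu):
    """Matrices (H1, H2) of the Jordan-type pair J_{2k,mu}."""
    J = mu * np.eye(k) + np.diag(np.ones(k - 1), 1)  # Jordan block J_{k,mu}
    Z = np.zeros((k, k))
    H1 = np.block([[Z, J], [-J.T, Z]])
    H2 = np.block([[Z, np.eye(k)], [-np.eye(k), Z]])
    return H1, H2

k, mu = 3, 2.0
H1, H2 = jordan_pair(k, mu)

# for generic lambda the pencil lambda*H1 + H2 is nondegenerate ...
rank_generic = int(np.linalg.matrix_rank(0.7 * H1 + H2))
# ... and its kernel jumps up (here to dimension 2) exactly at lambda = -1/mu
rank_special = int(np.linalg.matrix_rank((-1 / mu) * H1 + H2))
print(rank_generic, rank_special)  # 6 4
```

Indeed, $ \det \left(\lambda J_{k,\mu}+I_{k}\right)=\left(\lambda\mu +1\right)^{k} $, which vanishes only at $ \lambda=-1/\mu $.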
Add to this list the so-called Kronecker pair $ {\mathcal K}_{2k-1} $. This is a pair
in a vector space $ {\mathbb C}^{2k-1} $ with a basis $ \left({\mathbit w}_{0},{\mathbit w}_{1},\dots ,{\mathbit w}_{2k-2}\right) $. The only non-zero
pairings are
\begin{equation}
\left({\mathbit w}_{2l},{\mathbit w}_{2l+1}\right)_{1}=1,\qquad \left({\mathbit w}_{2l+1},{\mathbit w}_{2l+2}\right)_{2}=1,
\label{equ2.10}\end{equation}\myLabel{equ2.10,}\relax
for $ 0\leq l\leq k-2 $. Obviously, different pairs from this list are not isomorphic.
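By contrast, on a Kronecker pair the corank of the pencil never jumps. The following sketch (ours, with {\tt numpy}) checks that for $ {\mathcal K}_{5} $ every nonzero combination $ \lambda_{1}\left(,\right)_{1}+\lambda_{2}\left(,\right)_{2} $ has a $ 1 $-dimensional null-space:

```python
import numpy as np

def kronecker_pair(k):
    """Matrices (H1, H2) of the Kronecker pair K_{2k-1} in the basis w_0, ..., w_{2k-2}."""
    n = 2 * k - 1
    H1, H2 = np.zeros((n, n)), np.zeros((n, n))
    for l in range(k - 1):
        H1[2 * l, 2 * l + 1] = 1      # (w_{2l}, w_{2l+1})_1 = 1
        H2[2 * l + 1, 2 * l + 2] = 1  # (w_{2l+1}, w_{2l+2})_2 = 1
    return H1 - H1.T, H2 - H2.T       # impose skew symmetry

H1, H2 = kronecker_pair(3)  # K_5
# the null-space of l1*H1 + l2*H2 is 1-dimensional for every (l1, l2) != (0, 0)
ranks = {int(np.linalg.matrix_rank(l1 * H1 + l2 * H2))
         for l1, l2 in [(1, 0), (0, 1), (1, 1), (2, -3), (0.3, 7)]}
print(ranks)  # {4}
```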
\begin{theorem} \label{th6.10}\myLabel{th6.10}\relax (\cite{GelZakhFAN,Thom91Pen}) Any pair of skewsymmetric
bilinear pairings on a finite-dimensional complex vector space can be
decomposed into a direct sum of pairs of the pairings isomorphic to
$ {\mathcal J}_{2k,\mu} $, $ k\in{\mathbb N} $, $ \mu\in{\mathbb C}{\mathbb P}^{1} $, and $ {\mathcal K}_{2k-1} $, $ k\in{\mathbb N} $. The types of the components of this
decomposition are uniquely determined. \end{theorem}
Though this simple statement was known for a long time (say, the
preprint of \cite{Thom91Pen} existed in 1973), we do not know whether it was
published before it was used in \cite{GelZakhFAN}. The discussions in
\cite{Gan59The} and \cite{TurAith61Int} come very close, but do not state this
result.
\begin{remark} The papers \cite{GelZakhFAN,GelZakh94Spe} described the significance
of Kronecker blocks in the spectral theory of pencils $ A_{\lambda}=A+\lambda B $, $ \lambda\in{\mathbb C} $, of
differential operators. Though it is not used in this paper, let us
highlight the details of this description.
The Jordan blocks which appear in the spectral theory of pencils
correspond to values of $ \lambda $ where the dimension of $ \operatorname{Ker} A_{\lambda} $ jumps up.
However, due to special properties of the pencil $ A_{\lambda} $ (say, skew
symmetry of the operators) it may happen that $ \operatorname{Ker} A_{\lambda}\not=0 $ for every $ \lambda $ (this is
what actually happens in the pencil related to the periodic case of the KdV
equation). In such a case the direct sum of Jordan blocks has a
non-trivial complement in the vector space where the pencil acts.
For so-called {\em finite gap potentials\/} this {\em defect space\/} happens to be
exactly the Kronecker block $ {\mathcal K}_{2k-1} $ (here $ k $ is the number of gaps), thus the
situation is absolutely parallel to the finite-dimensional case discussed
above. In the case of infinitely many gaps an appropriate
infinite-dimensional analogue of Kronecker blocks may be described.
Note, however, that it is absolutely unclear how to translate this
description of the linear situation (which is associated to one cotangent
space to the phase space of KdV) to the nonlinear bihamiltonian geometry
of KdV. While the results and conjectures of this paper illuminate the
bihamiltonian geometry of finite-dimensional systems in considerable detail,
they do not look applicable to the infinite-dimensional situation.
The main obstruction is that while all the Kronecker blocks of the
same dimension are isomorphic, infinite-dimensional Kronecker blocks
acquire new invariants---{\em fuzzy eigenvalues}. Though fuzzy, these data in
fact completely disambiguate points which may be distinguished by Casimir
functions (at least for real-analytic potentials, for details see
\cite{GelZakhFAN}).
One can see that the linearized geometry of periodic KdV is very
similar to geometry on odd-dimensional manifolds---there is exactly one
Kronecker block, the rest is Jordan blocks with $ k=1 $, and in generic
points there is no Jordan block. But the non-linear geometry of KdV is in
some regards also similar to even-dimensional geometry in the sense that
the points $ m_{1},m_{2}\in M $ which are separated by Casimir functions also have
non-isomorphic pairings in $ {\mathcal T}_{m_{1}}^{*}M $, $ {\mathcal T}_{m_{2}}^{*}M $. \end{remark}
\begin{remark} \label{rem6.13}\myLabel{rem6.13}\relax Given a skewsymmetric bilinear pairing (,) on a vector
space $ V^{*} $, consider the bracket $ \left\{,\right\} $ on the vector space $ V $ described by
$ \left\{f,g\right\}|_{m}=\left(df|_{m},dg|_{m}\right) $. As it is easy to check, this bracket is
translation-invariant and Poisson. Given a pair of such pairings $ \left(,\right)_{1} $,
$ \left(,\right)_{2} $ on $ V^{*} $ one obtains a translation-invariant bihamiltonian structure on
$ V $. Obviously, any translation-invariant bihamiltonian structure may be
obtained this way.
Similarly, any decomposable flat bihamiltonian structure is locally
isomorphic to a product of two flat bihamiltonian structures. Indeed, it
is enough to show that if an open subset $ U $ of the above bihamiltonian
structure on $ V $ is decomposable, then the pair of pairings on $ V^{*} $ is
decomposable, which is obvious.
Thus Theorem~\ref{th6.10} gives also a complete classification of
translation-invariant bihamiltonian structures, a complete local
classification of flat bihamiltonian structures, and a description of
indecomposable flat structures. \end{remark}
For the topics we discuss here it is not necessary to answer the
following question, but it is interesting nevertheless:
\begin{conjecture} Consider two bihamiltonian structures on $ M_{1} $ and $ M_{2} $.
Suppose that $ M_{1}\times M_{2} $ is flat. Then $ M_{1} $ and $ M_{2} $ are flat. \end{conjecture}
The first step in the proof of Theorem~\ref{th1.07} is the following
\begin{proposition} \label{prop6.15}\myLabel{prop6.15}\relax Consider a pair of skewsymmetric bilinear pairings
$ \left(,\right)_{1} $, $ \left(,\right)_{2} $ on a finite-dimensional complex vector space $ W $. Suppose there
is a finite set $ L $ and there are families of vectors $ w_{l,\lambda}\in W $, $ l\in L $,
polynomially depending on $ \lambda $ such that $ \lambda\left(w_{l,\lambda},w\right)_{1}+\left(w_{l,\lambda},w\right)_{2}=0 $ for any $ w\in W $,
$ l\in L $, and $ \lambda\in{\mathbb C} $. Denote by $ W_{1} $ the vector subspace spanned by $ w_{l,\lambda} $, $ l\in L $, $ \lambda\in{\mathbb C} $.
Suppose that for one particular value of $ \lambda_{1} $, $ \lambda_{2} $ the corank of the
bilinear pairing $ \lambda_{1}\left(,\right)_{1}+\lambda_{2}\left(,\right)_{2} $ is $ r $. If $ \dim W_{1} \geq \frac{\dim W+r-1}{2} $, then the
pair $ \left(,\right)_{1} $, $ \left(,\right)_{2} $ is isomorphic to $ \oplus_{t=1}^{r}{\mathcal K}_{2k_{t}-1} $ with $ \sum_{t}k_{t}=\dim W_{1} $. In
particular, $ \dim W_{1} = \frac{\dim W+r}{2} $. \end{proposition}
\begin{proof} We may assume that the pair $ \left(,\right)_{1} $, $ \left(,\right)_{2} $ is a direct sum of
several blocks of the form $ {\mathcal J}_{2k,\mu} $ and $ {\mathcal K}_{2k-1} $, and that for any $ l\in L $ the
family $ w_{l,\lambda}\not\equiv 0 $. We suppose that $ \left(\lambda_{1},\lambda_{2}\right)\not=\left(0,0\right) $; it is easy to consider the
remaining case separately.
Start with supposing that there are only blocks of the form $ {\mathcal K}_{2k_{t}-1} $,
$ t=1,\dots ,T $. Then the only things we need to prove are that $ T=r $ and that $ \dim W_{1}
\leq\sum_{t}k_{t} $. The first statement is obvious.
The following lemma follows immediately from the explicit
description of the pair $ {\mathcal K}_{2k-1} $:
\begin{lemma} \label{lm25.20}\myLabel{lm25.20}\relax For the pair $ {\mathcal K}_{2k-1} $ of skewsymmetric pairings there exists a
family of vectors $ \widetilde{w}_{\lambda}\in W $ polynomially depending on $ \lambda $ such that
$ \lambda\left(\widetilde{w}_{\lambda},w\right)_{1}+\left(\widetilde{w}_{\lambda},w\right)_{2}=0 $ for any $ w\in W $ and $ \lambda\in{\mathbb C} $, and the degree of $ \widetilde{w}_{\lambda} $ in $ \lambda $ is $ k-1 $.
This family is defined uniquely up to multiplication by a constant, and
it spans a $ k $-dimensional vector subspace. Any other polynomial family $ w_{\lambda} $
such that $ \lambda\left(w_{\lambda},w\right)_{1}+\left(w_{\lambda},w\right)_{2}=0 $ for any $ w\in W $ and $ \lambda\in{\mathbb C} $ may be written as $ p\left(\lambda\right)\widetilde{w}_{\lambda} $
for an appropriate scalar polynomial $ p $. \end{lemma}
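In the basis of~\eqref{equ2.10} one explicit choice is $ \widetilde{w}_{\lambda}=\sum_{l=0}^{k-1}\lambda^{l}{\mathbit w}_{2l} $ (this formula is easy to verify by hand, though writing it out is our addition). The sketch below confirms numerically, for $ k=3 $, that this family is annihilated by $ \lambda\left(,\right)_{1}+\left(,\right)_{2} $, has degree $ k-1 $, and spans a $ k $-dimensional subspace:

```python
import numpy as np

k = 3
n = 2 * k - 1
H1, H2 = np.zeros((n, n)), np.zeros((n, n))
for l in range(k - 1):
    H1[2 * l, 2 * l + 1] = 1      # (w_{2l}, w_{2l+1})_1 = 1
    H2[2 * l + 1, 2 * l + 2] = 1  # (w_{2l+1}, w_{2l+2})_2 = 1
H1, H2 = H1 - H1.T, H2 - H2.T

def w_tilde(lam):
    """Candidate family: sum over l of lam^l * w_{2l} (degree k-1 in lam)."""
    v = np.zeros(n)
    for l in range(k):
        v[2 * l] = lam ** l
    return v

# (lam*H1 + H2) annihilates w_tilde(lam) for every sampled lam ...
residuals = [np.linalg.norm((lam * H1 + H2) @ w_tilde(lam))
             for lam in (0.0, 1.0, -2.5, 4.0)]
# ... and the family spans a k-dimensional subspace (Vandermonde in lam)
span_rank = int(np.linalg.matrix_rank(
    np.column_stack([w_tilde(lam) for lam in (0.0, 1.0, 2.0)])))
print(max(residuals) < 1e-9, span_rank)  # True 3
```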
Denote the family $ \widetilde{w}_{\lambda} $ for the Kronecker block $ {\mathcal K}_{2k_{t}-1} $ by $ \widetilde{w}_{\lambda}^{\left(t\right)} $. Due to
this lemma one can write $ w_{l,\lambda}=\sum_{t=1}^{T}p_{lt}\left(\lambda\right)\widetilde{w}_{\lambda}^{\left(t\right)} $, thus $ \dim W_{1}\leq\sum_{t=1}^{r}k_{t}=\frac{\dim
W +r}{2} $. Since $ \dim W + r $ is even, this shows that $ \dim W_{1} = \frac{\dim W+r}{2} $,
thus finishes proof of the proposition in the case when there are no
Jordan blocks.
Consider now the general case. First of all, $ w_{l,\lambda}\not=0 $ for a generic $ \lambda $,
thus $ w_{l,\lambda} $ (for a generic $ \lambda $) is in the null-space of the linear
combination $ \lambda\left(,\right)_{1}+\left(,\right)_{2} $. Since for a block of the form $ {\mathcal J}_{2k,\mu} $ and generic $ \lambda $
this combination has no null-space, it is obvious that $ w_{l,\lambda} $ is in the sum
of components of the form $ {\mathcal K}_{2k-1} $. Since removing a component of the form
$ {\mathcal J}_{2k,\mu} $ decreases $ \dim W $ by $ 2k $, does not change $ \dim W_{1} $, and may only
decrease $ r $, one can see that the conditions of the proposition remain
applicable to the sum of the components of the form $ {\mathcal K}_{2k-1} $, but the
inequality on $ \dim W_{1} $ is strengthened by at least $ k $. However, we have
seen that this inequality cannot be strengthened by more than $ \frac{1}{2} $,
which proves that $ W $ contains no Jordan components. \end{proof}
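The equality $ \dim W_{1}=\frac{\dim W+r}{2} $ can also be observed numerically. The sketch below (ours; the helpers are our choosing) assembles $ W={\mathcal K}_{3}\oplus{\mathcal K}_{5} $ and computes the span of the null-spaces of the pencil over several values of $ \lambda $:

```python
import numpy as np

def kronecker_pair(k):
    """Matrices (H1, H2) of the Kronecker block K_{2k-1} in the basis w_0, ..., w_{2k-2}."""
    n = 2 * k - 1
    H1, H2 = np.zeros((n, n)), np.zeros((n, n))
    for l in range(k - 1):
        H1[2 * l, 2 * l + 1] = 1      # (w_{2l}, w_{2l+1})_1 = 1
        H2[2 * l + 1, 2 * l + 2] = 1  # (w_{2l+1}, w_{2l+2})_2 = 1
    return H1 - H1.T, H2 - H2.T

def direct_sum(A, B):
    n = A.shape[0]
    C = np.zeros((n + B.shape[0], n + B.shape[0]))
    C[:n, :n], C[n:, n:] = A, B
    return C

def null_space(M, tol=1e-9):
    # columns returned span the kernel of M (computed via SVD)
    _, s, Vt = np.linalg.svd(M)
    return Vt[np.sum(s > tol):].T

# W = K_3 (+) K_5: dim W = 8, r = 2 Kronecker blocks, k_1 + k_2 = 2 + 3 = 5
(A1, A2), (B1, B2) = kronecker_pair(2), kronecker_pair(3)
H1, H2 = direct_sum(A1, B1), direct_sum(A2, B2)

# W_1 = span of the null-spaces of lambda*H1 + H2 over several lambda
W1 = np.column_stack([null_space(lam * H1 + H2) for lam in (0.0, 1.0, -1.0, 2.0, 3.0)])
print(int(np.linalg.matrix_rank(W1)))  # 5, i.e. (dim W + r)/2
```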
\begin{amplification} In Lemma~\ref{lm25.20} and Proposition~\ref{prop6.15} one may
suppose (without changing the conclusions\footnote{With an obvious exception that $ p $ in Lemma~\ref{lm25.20} becomes a continuous
function.} of these statements) that
families $ w_{l,\lambda} $ are continuous functions of $ \lambda $ defined on a given open
subset $ {\mathcal U}\subset{\mathbb C} $ or $ {\mathcal U}\subset{\mathbb R} $. \end{amplification}
\begin{corollary} \label{cor25.40}\myLabel{cor25.40}\relax Under the conditions of Theorem~\ref{th1.07} the dimension of $ M $
is odd. There is a point $ m_{1} $ of $ M $ such that the pair of skewsymmetric
bilinear pairings in $ {\mathcal T}_{m_{1}}^{*}M $ is isomorphic to $ {\mathcal K}_{2k-1} $ with $ \dim M=2k-1 $. \end{corollary}
\begin{proof} In this proof we assume that $ M $ is a complex manifold, so that
$ {\mathcal T}_{m}^{*}M $ is a complex vector space for any $ m\in M $. If $ M $ is a $ C^{\infty} $-manifold, one
should substitute $ {\mathcal T}_{m}^{*}M\otimes{\mathbb C} $ for $ {\mathcal T}_{m}^{*}M $ in the arguments below.
Under the conditions of Theorem~\ref{th1.07}, if $ m_{1} $ is in a neighborhood $ \widetilde{U} $ of
the point $ m_{0}\in M $, then vectors $ dF_{\lambda}|_{m_{1}}\in{\mathcal T}_{m_{1}}^{*}M $ span a vector subspace $ W_{1}\left(m_{1}\right) $
satisfying $ \dim W_{1}\left(m_{1}\right)>\frac{\dim M}{2} $. There is an open subset $ U_{r}\subset\widetilde{U} $ where
$ \lambda_{1}\left\{,\right\}_{1}+\lambda_{2}\left\{,\right\}_{2} $ has a constant corank $ r $. Obviously, there is $ r\in{\mathbb Z} $ such that
the point $ m_{0} $ is in the closure of $ U_{r} $. Restrict our attention to this
value of $ r $. Let $ m_{1} $ be in $ U_{r} $, and $ W={\mathcal T}_{m_{1}}^{*}M $, $ L=\left\{\bullet\right\} $, and $ w_{\bullet,\lambda}=dF_{\lambda}|_{m_{1}} $. Then
the span $ W_{1} $ of vectors $ w_{\bullet,\lambda} $ considered for all possible $ \lambda\in{\mathcal U} $ satisfies $ \dim
W_{1}>\frac{\dim W}{2} $, thus $ \dim W_{1}\geq\frac{\dim W+1}{2} $.
The vector space $ W $ is equipped with two skewsymmetric bilinear
pairings $ \left(,\right)_{1} $, $ \left(,\right)_{2} $ given by values of $ \eta_{1} $, $ \eta_{2} $ (see Definition
~\ref{def01.120}) at $ m_{1} $. Obviously, $ w_{\bullet,\lambda} $ is in the kernel of $ \lambda\left(,\right)_{1}+\left(,\right)_{2} $.
By the conditions of Theorem~\ref{th1.07}, there is at most one
independent Casimir function near $ m_{1} $, thus $ r\leq1 $. Obviously, this is the
same $ r $ as in Proposition~\ref{prop6.15}, thus $ \dim M\not=0 $ implies $ r\not=0 $. Hence the
pair $ \left(,\right)_{1} $, $ \left(,\right)_{2} $ is isomorphic to $ {\mathcal K}_{2k-1} $ for an appropriate $ k $, thus $ \dim M $
is odd. \end{proof}
This proves Theorem~\ref{th1.07}. In Section~\ref{h2} we show that it also
allows one to apply the results of \cite{GelZakhWeb,GelZakh93} to prove
Theorem~\ref{th1.10} as well.
Corollary~\ref{cor25.40} uses a particular case of Proposition~\ref{prop6.15}
with $ r=1 $. While we will not need it in this paper, it is possible to
strengthen Corollary~\ref{cor25.40} so that it uses the full power of
Proposition~\ref{prop6.15}. This result would move us one step in the
direction of Conjecture~\ref{con01.120}.
\begin{amplification} \label{amp1.07}\myLabel{amp1.07}\relax Consider a manifold $ M $ with two compatible Poisson
structures $ \left\{,\right\}_{1} $ and $ \left\{,\right\}_{2} $. Consider a finite set $ L $, open subsets $ {\mathcal U}_{l}\subset{\mathbb C} $,
$ l\in L $, and families of smooth functions $ F_{l,\lambda} $, $ l\in L $, $ \lambda\in{\mathcal U}_{l} $, on $ M $. Suppose that
for any $ l\in L $ and any $ \lambda\in{\mathcal U}_{l} $ the function $ F_{l,\lambda} $ is Casimir w.r.t.~the Poisson
bracket $ \lambda\left\{,\right\}_{1}+\left\{,\right\}_{2} $, and that $ dF_{l,\lambda}|_{m}\in{\mathcal T}_{m}^{*}M $ depends continuously on $ \lambda $ for
any $ l\in L $ and $ m\in M $. For $ m\in M $ denote by $ W_{1}\left(m\right)\subset{\mathcal T}_{m}^{*}M $ the vector subspace spanned
by the differentials $ dF_{l,\lambda}|_{m} $ for all possible $ l $ and $ \lambda\in{\mathcal U}_{l} $. If for an
appropriate $ R\in{\mathbb Z}_{\geq0} $
\begin{enumerate}
\item
for one particular value $ m_{0}\in M $ one has $ \dim W_{1}\left(m_{0}\right)\geq\frac{\dim M+R}{2} $;
\item
for one particular value of $ \lambda_{1},\lambda_{2}\in{\mathbb C}^{2} $ the Poisson structure
$ \lambda_{1}\left\{,\right\}_{1}+\lambda_{2}\left\{,\right\}_{2} $ has at most $ R $ independent Casimir functions on any open
subset of $ M $ near $ m_{0} $;
\end{enumerate}
then $ \dim M-R $ is even, $ \dim W_{1}\left(m_{0}\right)=\frac{\dim M+R}{2} $, and the bihamiltonian
structure on $ M $ is homogeneous of type $ \left(t_{1},\dots ,t_{R}\right) $ on an open subset $ U\subset M $
such that $ m_{0} $ is in the closure of $ U $. Here $ t_{k}\in{\mathbb Z}_{>0} $ are appropriate numbers
with $ \sum_{k}t_{k}=\dim M $. \end{amplification}
\begin{proof} First of all, one can proceed as in Corollary~\ref{cor25.40} up to
the moment we concluded $ r\leq1 $. Under the conditions of the amplification we
conclude that $ r\leq R $, thus $ \dim W_{1}\left(m_{1}\right)\geq\frac{\dim W+r}{2} $. Proposition~\ref{prop6.15}
implies that $ \dim W_{1}\left(m_{1}\right)=\frac{\dim M+r}{2} $, thus $ r\geq R $. This shows that in fact
$ r=R $.
We can conclude that for $ m $ in an appropriate open subset $ U\subset M $ the pair
of bilinear pairings on the vector space $ {\mathcal T}_{m}^{*}M $ is isomorphic to a direct sum
of $ R $ Kronecker blocks. What remains to prove is that the dimensions of
these blocks do not depend on $ m $ in an appropriate open subset of $ U $.
Fix a vector space $ V $. For a sequence $ T=\left(t_{1}\leq\dots \leq t_{R}\right) $ denote by
$ {\mathfrak F}_{T}\subset\Lambda^{2}V^{*}\times\Lambda^{2}V^{*} $ the set of pairs of skewsymmetric bilinear pairings which are
isomorphic to $ \bigoplus_{k=1}^{R}{\mathcal K}_{t_{k}} $. In particular, $ {\mathfrak F}_{T} $ is not empty iff all $ t_{k} $ are
odd and $ \sum t_{k}=\dim V $. Moreover, $ {\mathfrak F}_{T} $ is a $ \operatorname{GL}\left(V\right) $-orbit.
It follows that if $ {\mathfrak F}_{T'} $ intersects the closure of $ {\mathfrak F}_{T} $, then $ {\mathfrak F}_{T'} $ is
contained in this closure. Fix a neighborhood $ U_{1} $ of $ m_{0} $, let $ T^{\left(1\right)},\dots ,T^{\left(N\right)} $
be such sequences that there are points $ m $ in $ U\cap U_{1} $ where the pair of
pairings is in each of $ {\mathfrak F}_{T^{\left(k\right)}} $, $ 1\leq k\leq N $. Suppose that $ {\mathfrak F}_{T^{\left(1\right)}},\dots ,{\mathfrak F}_{T^{\left(M\right)}} $ are of
maximal possible dimension among $ {\mathfrak F}_{T^{\left(1\right)}},\dots ,{\mathfrak F}_{T^{\left(N\right)}} $, then the points $ m $ in
$ U\cap U_{1} $ where the pair of pairings is in any one of $ {\mathfrak F}_{T^{\left(k\right)}} $, $ 1\leq k\leq M $, form an open
subset. Obviously, at least one of these subsets has $ m_{0} $ in its closure. \end{proof}
\begin{remark} It is not clear whether one can improve the statement of
Amplification~\ref{amp1.07} provided that the rank of $ \lambda_{1}\left\{,\right\}_{1}+\lambda_{2}\left\{,\right\}_{2} $ is
constant near $ m_{0} $. Recall that in Theorem~\ref{th1.07} one {\em could\/} conclude that
the structure is homogeneous in a {\em neighborhood\/} of $ m_{0} $. However, under the
condition of constant rank one can weaken the condition on dimension to
become $ \dim W_{1}\left(m_{0}\right)\geq\frac{\dim M+R-1}{2} $.
To recognize a possibility of a jump of the type of decomposition of
$ {\mathcal T}_{m}^{*}M $, consider the vector space with a basis $ {\mathbit w}_{0},\dots ,{\mathbit w}_{4},{\mathbit W} $ with the only
non-zero pairings being
\begin{equation}
\left({\mathbit w}_{2l},{\mathbit w}_{2l+1}\right)_{1}=1,\qquad \left({\mathbit w}_{2l+1},{\mathbit w}_{2l+2}\right)_{2}=1,
\notag\end{equation}
for $ 0\leq l\leq1 $, and $ \left({\mathbit W},{\mathbit w}_{1}\right)=\left({\mathbit W},{\mathbit w}_{3}\right)=\varepsilon $. If $ \varepsilon\not=0 $, then this pair is of the type
$ {\mathcal K}_{3}\oplus{\mathcal K}_{3} $, if $ \varepsilon=0 $, it is of the type $ {\mathcal K}_{5}\oplus{\mathcal K}_{1} $. Thus different orbits $ {\mathfrak F}_{T} $ may be
adjacent\footnote{The recent preprint \cite{Pan99Ver} contains an example of a {\em bihamiltonian
structure\/} where such an adjacency takes place. Thus the statement of
Amplification~\ref{amp1.07} cannot be improved.} indeed. \end{remark}
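One can watch this jump numerically. In the sketch below (ours) we place the extra pairings $ \left({\mathbit W},{\mathbit w}_{1}\right)=\left({\mathbit W},{\mathbit w}_{3}\right)=\varepsilon $ into the {\em first\/} pairing, since the remark leaves the placement unspecified; a $ {\mathcal K}_{1} $ summand is detected as a common kernel vector of both pairings, while the corank of the pencil stays equal to 2 in both cases:

```python
import numpy as np

def pair(eps):
    """The six-dimensional pair from the remark, basis (w0, ..., w4, W).
    The pairings (W, w1) = (W, w3) = eps are placed into the first pairing;
    this placement is our assumption, the remark does not specify it."""
    H1, H2 = np.zeros((6, 6)), np.zeros((6, 6))
    for l in range(2):
        H1[2 * l, 2 * l + 1] = 1      # (w_{2l}, w_{2l+1})_1 = 1
        H2[2 * l + 1, 2 * l + 2] = 1  # (w_{2l+1}, w_{2l+2})_2 = 1
    H1[5, 1] = H1[5, 3] = eps
    return H1 - H1.T, H2 - H2.T

for eps in (1.0, 0.0):
    H1, H2 = pair(eps)
    # corank of l1*H1 + l2*H2 equals 2 for every (l1, l2) != 0, so r = 2 either way
    coranks = {int(6 - np.linalg.matrix_rank(l1 * H1 + l2 * H2))
               for l1, l2 in [(1, 0), (0, 1), (1, 1), (2, -3)]}
    # a K_1 summand shows up as a common kernel vector of H1 and H2
    common = int(6 - np.linalg.matrix_rank(np.vstack([H1, H2])))
    print(eps, coranks, common)
```

For $ \varepsilon\not=0 $ the common kernel is trivial (type $ {\mathcal K}_{3}\oplus{\mathcal K}_{3} $), while at $ \varepsilon=0 $ the vector $ {\mathbit W} $ becomes a common kernel vector (the $ {\mathcal K}_{1} $ summand of $ {\mathcal K}_{5}\oplus{\mathcal K}_{1} $).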
\section{Bihamiltonian structures and webs }\label{h02}\myLabel{h02}\relax
Consider a manifold $ M $ with a Poisson bracket $ \left\{,\right\} $. To define the
notion of a symplectic leaf on $ M $, consider Casimir functions on $ M $. The
local classification of Poisson structures of {\em constant rank\/} \cite{Kir76Loc,
Wei83Loc} shows that for an arbitrary Poisson bracket there is an open
(and in interesting cases dense) subset $ U\subset M $ and $ k\in{\mathbb Z}_{\geq0} $ such that on $ U $
there are $ k $ independent Casimir functions $ F_{1},\dots ,F_{k} $, and any Casimir
function on $ U $ may be written as a function of $ F_{1},\dots ,F_{k} $ (we do not
exclude the case $ k=0 $). The common level sets $ F_{1}=C_{1},\dots ,F_{k}=C_{k} $ form an
invariantly defined foliation on $ U $, which is called the {\em symplectic
foliation}. Note that one can define this foliation as an equivalence
relation given by $ m_{1}\sim m_{2} $ iff $ F\left(m_{1}\right)=F\left(m_{2}\right) $ for any Casimir function $ F $ on $ U $.
Consider now a pair $ \left\{,\right\}_{1} $, $ \left\{,\right\}_{2} $ of compatible Poisson structures on $ M $
(i.e., a bihamiltonian structure). Proceed as with the above construction
of leaves, and consider
\begin{definition} A smooth function $ F $ on $ M $ is {\em semi-Casimir\/} if there is
$ \left(\lambda_{1},\lambda_{2}\right)\not=\left(0,0\right) $ such that $ F $ is a Casimir function for $ \lambda_{1}\left\{,\right\}_{1}+\lambda_{2}\left\{,\right\}_{2} $. \end{definition}
For any open subset $ U\subset M $ define an equivalence relation on $ U $ by $ m_{1}\sim m_{2} $
iff $ F\left(m_{1}\right)=F\left(m_{2}\right) $ for any semi-Casimir function $ F $ on $ U $. Denote by $ {\mathcal B}_{U} $ the
topological space of equivalence classes. Then any semi-Casimir function
$ F $ on $ U $ induces a continuous function on $ {\mathcal B}_{U} $. Any function on $ {\mathcal B}_{U} $ induces a
pull-back function on $ U $.
As a result, to any local bihamiltonian structure $ \left(U,\left\{,\right\}_{1},\left\{,\right\}_{2}\right) $ we
associated a topological space $ {\mathcal B}_{U} $. Let $ \lambda=\left(\lambda_{1}:\lambda_{2}\right)\in{\mathbb C}{\mathbb P}^{1} $, let $ {\mathfrak C}_{\lambda} $ be the
vector space of functions on $ {\mathcal B}_{U} $ the pull-backs of which are Casimir functions
for $ \lambda_{1}\left\{,\right\}_{1}+\lambda_{2}\left\{,\right\}_{2} $. Note that $ \varphi\left(F_{1},F_{2},\dots ,F_{k}\right)\in{\mathfrak C}_{\lambda} $ if $ \varphi $ is smooth and
$ F_{1},F_{2},\dots ,F_{k}\in{\mathfrak C}_{\lambda} $. This allows one to consider $ {\mathfrak C}_{\lambda} $ as a $ C^{0} $-analogue of a
set of local equations of a foliation.
Later we will see that in the cases we study here $ {\mathcal B}_{U} $ is a
manifold, and for any $ \lambda $ the space $ {\mathfrak C}_{\lambda} $ is the set of local equations of a
foliation on $ {\mathcal B}_{U} $. The codimension of this foliation is not going to depend
on $ \lambda\in{\mathbb C}{\mathbb P}^{1} $. Anyway, we come to
\begin{definition} \label{def02.20}\myLabel{def02.20}\relax A {\em web\/}\footnote{The reason for this name is that $ {\mathcal B} $ is equipped with a huge family of
canonically defined subsets: for any $ \lambda $ one considers intersections of
level sets of functions from $ {\mathfrak C}_{\lambda} $. Moreover, one can consider intersections
of such subsets for different values of $ \lambda $. If one assumes that $ {\mathcal B} $ and
these intersections are manifolds, then one gets a delicate network of
submanifolds, with infinitely many of them passing through each given
point $ b\in{\mathcal B} $.} is a topological space $ {\mathcal B} $ with a given subset
$ {\mathfrak C}_{\lambda} $ of the set of continuous functions on $ {\mathcal B} $ for any $ \lambda\in{\mathbb C}{\mathbb P}^{1} $. We require that
$ \varphi\left(F_{1},F_{2},\dots ,F_{k}\right)\in{\mathfrak C}_{\lambda} $ if $ \varphi $ is smooth and $ F_{1},F_{2},\dots ,F_{k}\in{\mathfrak C}_{\lambda} $. \end{definition}
One can also introduce a notion of $ {\mathcal U} $-{\em web\/} for any subset $ {\mathcal U}\subset{\mathbb C}{\mathbb P}^{1} $, the
only change being that $ \lambda\in{\mathcal U} $ instead of $ \lambda\in{\mathbb C}{\mathbb P}^{1} $.
\begin{proposition} To any bihamiltonian structure $ \left(M,\left\{,\right\}_{1},\left\{,\right\}_{2}\right) $ one can
associate a structure of a web on $ {\mathcal B}_{M}=M/\sim $. \end{proposition}
In \cite{GelZakhWeb} and \cite{GelZakh93} it was shown that in some
particularly interesting types of bihamiltonian structures the class of
the web $ {\mathcal B}_{U} $ up to an isomorphism determines the class of bihamiltonian
structure on $ U $ up to an isomorphism (compare Theorem~\ref{th2.07}), at least
for small open subsets $ U\subset M $. This is going to be the main instrument used
in this paper: we show that the bihamiltonian structure from Theorem
~\ref{th1.10} and the structure given by~\eqref{equ45.20} are of the type mentioned
above, and show that the corresponding webs are locally isomorphic. This
will imply a local isomorphism of bihamiltonian structures.
To illustrate advantages of the approach of \cite{GelZakhWeb} and
\cite{GelZakh93} introduce
\begin{definition} A smooth function $ F $ on $ M $ is an {\em action\/} function if locally on
each small open subset $ U\subset M $ it is a pull-back from a function on $ {\mathcal B}_{U} $. \end{definition}
Obviously, any function of the form $ \varphi\left(F_{1},\dots ,F_{l}\right) $ with semi-Casimir
functions $ F_{1},\dots ,F_{l} $ (not necessarily corresponding to the same $ \lambda $) is an
action function. (The name is related to the fact that in bihamiltonian
geometry {\em action-\/} and {\em angle-variables\/} may be defined by local means.
Action functions are functions of the action variables.)
In these terms the approach of \cite{GelZakhWeb} and \cite{GelZakh93} states
that to construct an isomorphism of bihamiltonian structures $ M' $ and $ M'' $
it is enough to associate to each action function on $ M' $ an action
function on $ M'' $ (with appropriate compatibility conditions this is
equivalent to constructing a diffeomorphism of the webs). One need not
care about ``angle'' variables. Since the explicit construction of ``angle''
variables is the most complicated step of the integration of a dynamical
system, this leads to very significant simplifications.
In particular, we are going to construct an isomorphism of manifolds
of (approximately) half the dimension of the initial manifolds. Moreover,
these smaller manifolds have a very {\em rigid\/} geometric structure\footnote{In particular, it has an at most $ 1 $-dimensional group of automorphisms
preserving a given point, as opposed to the group of automorphisms of
bihamiltonian structures themselves. Recall that in \cite{GelZakhWeb} and
\cite{GelZakh93} it was shown that automorphisms of bihamiltonian structures
are enumerated by several functions of two variables.}, so it is
quite straightforward to construct an explicit diffeomorphism---the
moment one suspects that such a diffeomorphism exists.
\section{Webs for odd-dimensional bihamiltonian structures }\label{h2}\myLabel{h2}\relax
In this section we suppose that $ \dim M=2k-1 $.
\begin{definition} A pair of bilinear skewsymmetric pairings on a
finite-dimensional vector space $ V $ is {\em indecomposable\/} if the decomposition
of Theorem~\ref{th6.10} has only one component.
Call a pair of brackets $ \left\{,\right\}_{1} $ and $ \left\{,\right\}_{2} $ on $ M $ {\em micro-indecomposable\/} at
$ m\in M $ if the corresponding pair of bilinear pairings on $ {\mathcal T}_{m}^{*}M $ is
indecomposable. \end{definition}
\begin{definition} A pair of brackets $ \left\{,\right\}_{1} $ and $ \left\{,\right\}_{2} $ on $ M $ is {\em generic\/} at $ m\in M $, if
two corresponding bilinear pairings on $ {\mathcal T}_{m}^{*}M $ are in general position\footnote{For the purpose of this discussion, this means that $ \operatorname{GL}\left({\mathcal T}_{m}^{*}M\right) $-orbit of
the given pair of pairings is open.}. \end{definition}
Note that Theorem~\ref{th6.10} implies that an indecomposable pair of
pairings on an odd-dimensional vector space $ W $ is isomorphic to $ {\mathcal K}_{2k-1} $, where
$ \dim W=2k-1 $.
Now we can codify the program outlined in Section~\ref{h02}:
\begin{theorem} \label{th2.07}\myLabel{th2.07}\relax (\cite{GelZakhWeb,GelZakh93}) Consider a pair of compatible
Poisson structures on an odd-dimensional manifold $ M $. This pair is generic
at $ m $ iff it is micro-indecomposable at $ m $. If it is micro-indecomposable
at $ m $, then it is micro-indecomposable at $ m' $ for any $ m' $ in a neighborhood
of $ m $.
If a pair is micro-indecomposable at $ m\in M $, then
\begin{enumerate}
\item
The web $ {\mathcal B}_{U} $ is a manifold for any small open neighborhood $ U $ of $ m $, in
other words, for any open $ U\ni m $ there is an open subset $ U' $, $ m\in U'\subset U $, such
that $ {\mathcal B}_{U'} $ is a manifold;
\item
The dimension of the manifold $ {\mathcal B}_{U} $ is $ \frac{\dim M+1}{2} $;
\item
For any $ \lambda\in{\mathbb C}{\mathbb P}^{1} $ there is a foliation $ {\mathcal F}_{\lambda} $ on $ {\mathcal B}_{U} $ of codimension 1 such
that the subspace $ {\mathfrak C}_{\lambda} $ consists of smooth functions which are constant on
leaves of the foliation $ {\mathcal F}_{\lambda} $.
\item
Consider a micro-indecomposable pair of compatible Poisson
structures on a manifold $ M' $, and the corresponding manifold $ {\mathcal B}_{U'} $ with
foliations $ {\mathcal F}_{\lambda}' $. Suppose that both $ M $ and $ M' $ are analytic. If there is a
diffeomorphism $ \xi\colon {\mathcal B}_{U} \to {\mathcal B}_{U'} $ which sends the foliation $ {\mathcal F}_{\lambda} $ to the foliation
$ {\mathcal F}_{\lambda}' $ for any $ \lambda\in{\mathbb C}{\mathbb P}^{1} $, then the bihamiltonian structures on $ M $ and $ M' $ are
locally diffeomorphic. This local diffeomorphism is compatible with the
diffeomorphism $ \xi $.
\end{enumerate}
\end{theorem}
\begin{remark} Note that the conjecture of \cite{GelZakhWeb} implies that the last
statement of this theorem holds in the $ C^{\infty} $-case too. In \cite{Tur99Equi}
it is announced that this conjecture holds. \end{remark}
We are not going to repeat the proof of this theorem here, but we
sketch some arguments which should convince the reader that the first
several statements are true (it is the last one which is complicated).
Note that for the pair~\eqref{equ2.10} the pairing $ \lambda_{1}\left(,\right)_{1}+\lambda_{2}\left(,\right)_{2} $ is
degenerate (as any skewsymmetric pairing on an odd-dimensional vector
space) for any $ \left(\lambda_{1},\lambda_{2}\right)\in{\mathbb C}^{2} $, $ \left(\lambda_{1},\lambda_{2}\right)\not=\left(0,0\right) $, and has a $ 1 $-dimensional
null-space. (In other words, the dimension of the kernel does not jump up
for any $ \left(\lambda_{1},\lambda_{2}\right)\not=\left(0,0\right) $.) Moreover, Theorem~\ref{th6.10} immediately implies
that a pair of pairings is indecomposable iff for any $ \left(\lambda_{1},\lambda_{2}\right)\not=\left(0,0\right) $ the
null-space of $ \lambda_{1}\left(,\right)_{1}+\lambda_{2}\left(,\right)_{2} $ is $ 1 $-dimensional.
This together with compactness of $ {\mathbb C}{\mathbb P}^{1} $ immediately implies that a
small deformation of $ {\mathcal K}_{2k-1} $ is indecomposable, thus isomorphic to $ {\mathcal K}_{2k-1} $.
In turn, this implies that a Zariski open (thus dense) subset of all
possible pairs consists of pairs isomorphic to $ {\mathcal K}_{2k-1} $. This shows that
the property of being generic coincides with indecomposability.
As a corollary, if a pair of brackets $ \left\{,\right\}_{1} $ and $ \left\{,\right\}_{2} $ is generic at
$ m\in M $, then in an appropriate neighborhood $ U $ of $ m $ the bracket $ \left\{,\right\}^{\left(\lambda_{1},\lambda_{2}\right)}
\buildrel{\text{def}}\over{=} \lambda_{1}\left\{,\right\}_{1}+\lambda_{2}\left\{,\right\}_{2} $ has corank 1 in $ U $ for any $ \left(\lambda_{1},\lambda_{2}\right)\in{\mathbb C}^{2}\smallsetminus\left\{0\right\} $.
Now suppose that the bracket $ \left\{,\right\}^{\left(\lambda_{1},\lambda_{2}\right)} $ is Poisson. Since the rank
of the corresponding tensor field $ \eta $ is constant, it is easy to see
(\cite{Kir76Loc,Wei83Loc}) that there is a (locally defined) Casimir
function $ F_{\lambda_{1},\lambda_{2}} $. Since the corank is 1, the level hypersurfaces of $ F_{\lambda_{1},\lambda_{2}} $
are canonically defined.
On the other hand, the normal direction $ {\mathbit n}_{\lambda_{1},\lambda_{2}} $ to the level
hypersurfaces of $ F_{\lambda_{1},\lambda_{2}} $ at $ m $ is the kernel of the corresponding
skewsymmetric pairing on $ {\mathcal T}_{m}^{*}M $. Let $ \lambda=\left(\lambda_{1}:\lambda_{2}\right)\in{\mathbb C}{\mathbb P}^{1} $, $ {\mathbit n}_{\lambda}\buildrel{\text{def}}\over{=}{\mathbit n}_{\lambda_{1},\lambda_{2}} $. Use the
isomorphism of $ {\mathcal T}_{m}^{*}M $ with the form~\eqref{equ2.10} to investigate how $ {\mathbit n}_{\lambda} $
depends on $ \lambda $. It is easy to see that the image of the vectors $ {\mathbit n}_{\lambda} $ in the
coordinate system of~\eqref{equ2.10} is proportional to
\begin{equation}
{\mathbit w}_{0}+\lambda{\mathbit w}_{2}+\dots +\lambda^{k}{\mathbit w}_{2k},
\label{equ2.20}\end{equation}\myLabel{equ2.20,}\relax
thus, taken at any $ k+1 $ distinct values $ \left\{\lambda_{i}\right\} $ of $ \lambda $, the vectors $ {\mathbit n}_{\lambda_{i}} $ span the
vector subspace $ \left< {\mathbit w}_{0},{\mathbit w}_{2},\dots ,{\mathbit w}_{2k} \right> $. Translating back to the language of
differential geometry, one obtains
\begin{corollary} Consider foliations $ {\mathcal F}_{\lambda} $ given by level sets of $ F_{\lambda_{1},\lambda_{2}} $, here
$ \lambda=\left(\lambda_{1}:\lambda_{2}\right) $. For any $ k+1 $ distinct values $ \lambda^{\left(0\right)},\lambda^{\left(1\right)},\dots ,\lambda^{\left(k\right)}\in{\mathbb C}{\mathbb P}^{1} $ the
foliations $ {\mathcal F}_{\lambda^{\left(0\right)}} $, $ {\mathcal F}_{\lambda^{\left(1\right)}} $, \dots , $ {\mathcal F}_{\lambda^{\left(k\right)}} $ intersect transversally, and the
intersection foliation $ {\mathcal F} $ does not depend on the choice of
$ \lambda^{\left(0\right)},\lambda^{\left(1\right)},\dots ,\lambda^{\left(k\right)} $. The foliation $ {\mathcal F} $ is a subfoliation of the foliation $ {\mathcal F}_{\lambda} $
for any $ \lambda\in{\mathbb C}{\mathbb P}^{1} $. \end{corollary}
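The normal form~\eqref{equ2.10} itself is not reproduced in this excerpt, so the following numerical sketch uses the standard Kronecker-block model of such a pair (the basis $ w_{0},\dots ,w_{2k} $ and the sign conventions below are our assumptions, taken for $ k=2 $); up to the parametrization of $ \lambda $, the computed kernel vector is exactly~\eqref{equ2.20}:

```python
import numpy as np

# Kronecker-block model of the pair of pairings (k = 2, dim = 5), an
# illustrative convention: (w_{2i+1}, w_{2i})_1 = 1, (w_{2i+1}, w_{2i+2})_2 = 1.
k, n = 2, 5
O1, O2 = np.zeros((n, n)), np.zeros((n, n))
for i in range(k):
    O1[2*i + 1, 2*i] = 1
    O2[2*i + 1, 2*i + 2] = 1
O1 = O1 - O1.T   # make both pairings skew-symmetric
O2 = O2 - O2.T

for l1 in (-2.0, -1.0, 0.5, 3.0):
    for l2 in (-1.0, 1.0, 2.0):
        M = l1 * O1 + l2 * O2
        # the corank stays 1 for every (l1, l2) != (0, 0) ...
        assert np.linalg.matrix_rank(M) == n - 1
        # ... and the kernel is spanned by w_0 + t w_2 + t^2 w_4 with
        # t = -l1/l2: a polynomial in the pencil parameter, as in (2.20)
        t = -l1 / l2
        v = np.array([1.0, 0.0, t, 0.0, t**2])
        assert np.allclose(M @ v, 0)

# the pure pairings (l2 = 0 or l1 = 0) also have corank exactly 1
assert np.linalg.matrix_rank(O1) == n - 1 and np.linalg.matrix_rank(O2) == n - 1
print("corank 1 everywhere; kernel depends polynomially on the parameter")
```

In this model the non-jumping of the kernel dimension for all $ \left(\lambda_{1},\lambda_{2}\right)\not=\left(0,0\right) $ is visible directly.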
Now one can immediately see that the local base of the foliation $ {\mathcal F} $
coincides with the web $ {\mathcal B}_{U} $ of the bihamiltonian structure, and
the push-forward of $ {\mathcal F}_{\lambda} $ to the base of $ {\mathcal F} $ is a foliation on $ {\mathcal B}_{U} $ which
corresponds to the subspace $ {\mathfrak C}_{\lambda} $ of functions on $ {\mathcal B}_{U} $.
\begin{remark} \label{rem2.017}\myLabel{rem2.017}\relax The proof of the last statement of Theorem~\ref{th2.07} might
be broken into two parts. The first one proves this statement under the
condition that both bihamiltonian structures admit an involution $ i\colon M \to
M $ such that $ \pi\circ i=i\circ\pi $, and such that $ i^{*}\left\{f,g\right\}_{a}=-\left\{i^{*}f,i^{*}g\right\}_{a} $ for any functions
$ f $ and $ g $ on $ M $ and any $ a=1,2 $ (here $ \pi $ is the projection from $ M $ to its web
$ {\mathcal B}_{M} $, and we suppose that $ M $ is small enough for the conditions of Theorem
~\ref{th2.07} to be applicable). Given such an involution, one can use the set
of fixed points of $ i $ as the common level set $ \left\{\varphi_{i}=0\right\} $ of would-be angle
variables $ \varphi_{i} $. After this choice it is possible to construct the angle
variables in purely geometric terms.
The second part of the proof consists of showing local existence of
such a section for any bihamiltonian structure of Theorem~\ref{th2.07}. This
part of the proof uses a hard cohomological statement related to
solvability of some overdetermined partial differential equations with
variable coefficients. The constant-coefficient variant of this
cohomological statement bears some similarity to the Dolbeault lemma.
In fact for the proof of Theorem~\ref{th1.10} only this
constant-coefficient variant is needed, so it is possible that our proof of
Theorem~\ref{th1.10} may be significantly simplified. \end{remark}
\begin{remark} Note also that in many applications one may avoid using the
above cohomological statement, since one may be able to construct an
involution $ i $ explicitly. Say, for the structure~\eqref{equ45.20} the involution
is given by $ x_{j} \mapsto \left(-1\right)^{j}x_{j} $. \end{remark}
\section{Criterion of flatness }\label{h45}\myLabel{h45}\relax
Here we prove Theorem~\ref{th1.10}. Suppose that conditions of Theorem
~\ref{th1.07} are satisfied. By Corollary~\ref{cor25.40} we know that the manifold
in question is odd-dimensional, and the pair of Poisson structures is
micro-indecomposable on an open subset $ U $. By Theorem~\ref{th2.07} the
corresponding web $ {\mathcal B}_{U} $ is a manifold with a family of foliations depending
on parameter $ \lambda\in{\mathbb C}{\mathbb P}^{1} $.
On the other hand, it is easy to describe this web explicitly. By
definition, for any given $ \lambda\in{\mathcal U} $ the level sets of the function $ F_{\lambda} $ are
unions of leaves of the foliation $ {\mathcal F}_{\lambda} $ on $ U $. Since the foliation $ {\mathcal F} $ is a
subfoliation of $ {\mathcal F}_{\lambda} $, the function $ F_{\lambda} $ is constant on leaves of $ {\mathcal F} $, thus
induces a function $ \widetilde{F}_{\lambda} $ on $ {\mathcal B}_{U} $. We obtain a mapping $ \widetilde{F}_{\bullet}\colon {\mathcal B}_{U}\times{\mathcal U} \to {\mathbb C}\colon \left(b,\lambda\right) \mapsto
\widetilde{F}_{\lambda}\left(b\right) $. Considered for variable $ \lambda\in{\mathcal U} $, the mapping $ \widetilde{F}_{\bullet} $ induces a mapping from
$ {\mathcal B}_{U} $ to the topological vector space $ C^{0}\left({\mathcal U}\right) $ of continuous functions on $ {\mathcal U} $.
This mapping sends a given point $ b\in{\mathcal B}_{U} $ to the function $ \widetilde{F}_{\lambda}\left(b\right) $ considered as
a function of $ \lambda $.
The topological space $ C^{0}\left({\mathcal U}\right) $ carries a canonical structure of a $ {\mathcal U} $-web
with $ \lambda\in{\mathcal U} $: the subspace $ {\mathfrak C}_{\lambda} $ consists of functions on $ C^{0}\left({\mathcal U}\right) $ of
the form $ f \mapsto \varphi\left(f|_{\lambda}\right) $ with an arbitrary smooth function $ \varphi $. The above
description of the mapping $ {\mathcal B}_{U} \mapsto C^{0}\left({\mathcal U}\right) $ shows that the $ {\mathcal U} $-web structure on
$ {\mathcal B}_{U} $ is induced from the $ {\mathcal U} $-web structure on $ C^{0}\left({\mathcal U}\right) $. One can also note that
by the condition of Theorem~\ref{th1.07} the mapping $ \widetilde{F}_{\bullet} $ is an immersion.
Indeed, the rank of $ d\widetilde{F}_{\bullet}|_{b} $ is exactly the dimension of the span of $ dF_{\lambda}|_{m} $
for $ \lambda\in{\mathcal U} $ (here $ b $ is the projection of $ m $ to $ {\mathcal B}_{U} $), which coincides with $ \dim
{\mathcal B}_{U} $.
Now suppose that the conditions of Theorem~\ref{th1.10} are satisfied, so
$ F_{\lambda} $ depends polynomially on $ \lambda $. Denote the degree of $ F_{\lambda} $ in $ \lambda $ by $ d $. By
conditions of Theorem~\ref{th1.10} one has $ d<\frac{\dim M}{2} $; since $ \dim M $ is odd,
this means $ d\leq\frac{\dim M-1}{2} $. On the other hand, by Lemma~\ref{lm25.20} the
degree in $ \lambda $ of $ dF_{\lambda}|_{m} $ cannot be less than $ \frac{\dim M-1}{2} $.
Thus the degree of $ F_{\lambda}\left(m\right) $ is exactly $ \frac{\dim M-1}{2} $ near $ m_{0} $.
In particular, for any $ b\in{\mathcal B}_{U} $ the image $ \widetilde{F}_{\bullet}\left(b\right) $ of $ b $ is in fact inside
the $ \left(d+1\right) $-dimensional vector space of polynomials of degree $ d $ in $ \lambda $.
Denote this vector space by $ {\mathcal P}_{d} $. Similarly to $ C^{0}\left({\mathcal U}\right) $, it carries a natural
structure of a $ {\mathbb C} $-web; moreover, this structure may be extended to a
$ {\mathbb C}{\mathbb P}^{1} $-web by noting that $ {\mathcal P}_{d}=\Gamma\left({\mathbb C}{\mathbb P}^{1},{\mathcal O}\left(d\right)\right) $. Since $ \dim {\mathcal P}_{d}=d+1=\dim {\mathcal B}_{U} $, and $ \widetilde{F} $ is
an immersion, one can see that
\begin{proposition} Under the conditions of Theorem~\ref{th1.10}
\begin{enumerate}
\item
The mapping $ \widetilde{F} $ is a local diffeomorphism compatible with structures
of webs on $ {\mathcal B}_{U} $ and $ {\mathcal P}_{d} $;
\item
The structure of the web on $ {\mathcal P}_{d} $ is invariant w.r.t.~parallel
translations on $ {\mathcal P}_{d} $;
\item
For any polynomial $ p\left(\lambda\right) $ of degree at most $ d $ the
family of functions $ G_{\lambda}\left(m\right)\buildrel{\text{def}}\over{=}F_{\lambda}\left(m\right)+p\left(\lambda\right) $ on $ M $ satisfies the conditions of
Theorem~\ref{th1.10};
\item
The mapping $ \widetilde{G}_{\bullet}\colon {\mathcal B}_{U} \to {\mathcal P}_{d} $ associated to $ G_{\lambda}\left(m\right) $ is a parallel translation
of the mapping $ \widetilde{F}_{\bullet} $.
\end{enumerate}
\end{proposition}
In other words, for any point $ p\in{\mathcal P}_{d} $ and any point $ b\in{\mathcal B}_{U} $ one can find a
local diffeomorphism of the webs $ {\mathcal B}_{U} $ and $ {\mathcal P}_{d} $ which sends $ b $ to $ p $. This shows
that if two bihamiltonian structures satisfy the conditions of Theorem
~\ref{th1.10}, then two corresponding webs are isomorphic.
\begin{lemma} There is a pair of compatible translation-invariant Poisson
brackets on $ {\mathbb C}^{2k-1} $ which may be equipped with a family of functions
satisfying Theorem~\ref{th1.10}. \end{lemma}
\begin{proof} For translation-invariant brackets the tensor fields $ \eta_{1} $ and $ \eta_{2} $
are constant, thus to describe the bracket we need to describe the
pairing on {\em one\/} cotangent space. On the other hand, we know that to
satisfy Theorem~\ref{th1.07} this pairing should be isomorphic to $ {\mathcal K}_{2k-1} $, thus
any pair which satisfies the lemma is isomorphic to one given by the
brackets~\eqref{equ45.20}.
Obviously,
\begin{equation}
F_{\lambda}=x_{0}+\lambda x_{2}+\lambda^{2}x_{4}+\dots +\lambda^{k-1}x_{2k-2}
\label{equ45.25}\end{equation}\myLabel{equ45.25,}\relax
satisfies the conditions of Theorem~\ref{th1.10}, and the bracket $ \left\{,\right\}_{1} $ has
exactly one independent Casimir function $ x_{2k-2} $. \end{proof}
\begin{corollary} \label{cor45.40}\myLabel{cor45.40}\relax Under the conditions of Theorem~\ref{th1.10} the web $ {\mathcal B}_{U} $ is
locally isomorphic to the web corresponding to the bihamiltonian structure
given by Equation~\eqref{equ45.20}, here $ \dim M=2k-1 $. \end{corollary}
Indeed, both these webs are locally isomorphic to the web on $ {\mathcal P}_{k-1} $.
Now the last part of Theorem~\ref{th2.07} implies that the bihamiltonian
structure on $ M $ is isomorphic to the structure given by~\eqref{equ45.20}, which
finishes the proof of Theorem~\ref{th1.10}.
\section{Examples of non-flat structures }\label{h47}\myLabel{h47}\relax
Here we show that Theorem~\ref{th1.10} is not a tautology. To do this, we
construct a huge pool of bihamiltonian structures which satisfy the
conditions of Theorem~\ref{th1.07}, but are not isomorphic to each other (in
particular, only one of them is flat). All these structures are
integrable by the anchored Lenard scheme (see Section~\ref{h48}, compare with
descriptions \cite{Mag78Sim,GelDor79Ham} in symplectic settings).
All the constructions below can be performed in $ C^{\infty} $-geometry and in
analytic geometry (unless explicitly specified). We state the $ C^{\infty} $-case
only.
Fix an open subset $ {\mathcal B}\subset{\mathbb R}^{2} $ and a smooth function $ f\left(x,y\right) $ of two
variables $ \left(x,y\right)\in{\mathcal B} $. Consider two brackets on $ {\mathcal B}\times{\mathbb R} $ defined by
\begin{equation}
\begin{gathered}
\left\{x,y\right\}_{1}=\left\{y,z\right\}_{1}=0,\qquad \left\{x,y\right\}_{2}=\left\{x,z\right\}_{2}=0,
\\
\left\{x,z\right\}_{1}=\frac{\partial f\left(x,y\right)}{\partial y},\qquad \left\{y,z\right\}_{2}=-\frac{\partial f\left(x,y\right)}{\partial x}.
\end{gathered}
\label{equ47.10}\end{equation}\myLabel{equ47.10,}\relax
Obviously, these two brackets form a bihamiltonian structure on $ {\mathcal B}\times{\mathbb R} $.
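The word ``obviously'' hides a finite symbolic computation: with $ f $ kept symbolic, the Jacobi identity for every member of the pencil $ \lambda_{1}\left\{,\right\}_{1}+\lambda_{2}\left\{,\right\}_{2} $ can be checked mechanically. The following sketch (an illustration, not part of the original argument) does exactly that:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
l1, l2 = sp.symbols('lambda1 lambda2')
f = sp.Function('f')(x, y)
X = (x, y, z)

def skew(entries):
    # build a 3x3 skew-symmetric Poisson matrix from its upper entries
    P = sp.zeros(3, 3)
    for (i, j), v in entries.items():
        P[i, j] = v
        P[j, i] = -v
    return P

# bivectors of (47.10): only {x,z}_1 = f_y and {y,z}_2 = -f_x are nonzero
P1 = skew({(0, 2): sp.diff(f, y)})
P2 = skew({(1, 2): -sp.diff(f, x)})
P = l1 * P1 + l2 * P2          # an arbitrary member of the pencil

def jacobiator(P, i, j, k):
    # vanishing of this cyclic sum for all i, j, k is the Jacobi identity
    return sum(P[l, i] * sp.diff(P[j, k], X[l])
               + P[l, j] * sp.diff(P[k, i], X[l])
               + P[l, k] * sp.diff(P[i, j], X[l]) for l in range(3))

assert all(sp.simplify(jacobiator(P, i, j, k)) == 0
           for i in range(3) for j in range(3) for k in range(3))
print("every linear combination of the two brackets is Poisson")
```

Since the Jacobiator vanishes for arbitrary $ \lambda_{1},\lambda_{2} $, the two brackets are each Poisson and compatible.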
\begin{definition} \label{def47.11}\myLabel{def47.11}\relax Denote this bihamiltonian structure on $ {\mathcal B}\times{\mathbb R} $ by $ M_{f} $. \end{definition}
Assume that both $ \frac{\partial f}{\partial x} $ and $ \frac{\partial f}{\partial y} $ do not vanish in $ {\mathcal B} $. One can see that
any non-zero linear combination $ \lambda_{1}\left\{,\right\}_{1}+\lambda_{2}\left\{,\right\}_{2} $ has rank 2, thus has
exactly one independent Casimir function near $ \left(b,z\right)\in{\mathcal B}\times{\mathbb R} $. Moreover, it is
easy to construct a family of Casimir functions for different $ \lambda\buildrel{\text{def}}\over{=}\lambda_{1}:\lambda_{2} $
which depend smoothly on $ \lambda\in{\mathcal U}\subset{\mathbb R} $ (compare with Section~\ref{h48}). Thus the
structure~\eqref{equ47.10} satisfies conditions of Theorem~\ref{th1.07}, and is
homogeneous of type\footnote{Using Theorem~\ref{th2.07}, it is easy to show that any {\em analytic\/} homogeneous
bihamiltonian structure of type (3) is locally isomorphic to this
structure for an appropriate $ f $. For the discussion of global geometry for
such structures, see \cite{Rig95Tis}.} (3).
One can write Casimir functions explicitly for the values $ \lambda_{1}:\lambda_{2} $
being $ \infty $, 0, and 1, i.e., for $ \left\{,\right\}_{1} $, for $ \left\{,\right\}_{2} $, and for $ \left\{,\right\}_{1}+\left\{,\right\}_{2} $. They have
the form $ F_{\infty}\left(y\right) $, $ F_{0}\left(x\right) $, and $ F_{1}\left(f\left(x,y\right)\right) $ for arbitrary functions $ F_{0} $, $ F_{1} $,
$ F_{\infty} $. This implies
\begin{lemma} Consider a bihamiltonian structure $ \left(M,\left\{,\right\}_{a},\left\{,\right\}_{b}\right) $ and a mapping
$ p\colon M \to M_{f} $ which is a local isomorphism of bihamiltonian structures.
Consider $ x,y,f $ as functions on $ M_{f} $. Then $ x\circ p $ is a Casimir function for
$ \left\{,\right\}_{b} $, $ y\circ p $ is a Casimir function for $ \left\{,\right\}_{a} $, and $ f\circ p $ is a Casimir function
for $ \left\{,\right\}_{a}+\left\{,\right\}_{b} $. \end{lemma}
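The explicit Casimir functions above are easy to confirm symbolically; the sketch below (illustrative only, with $ f $ kept symbolic) checks that $ y $, $ x $, and $ f\left(x,y\right) $ Poisson-commute with all three coordinates under $ \left\{,\right\}_{1} $, $ \left\{,\right\}_{2} $, and $ \left\{,\right\}_{1}+\left\{,\right\}_{2} $ respectively:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.Function('f')(x, y)
X = (x, y, z)

def skew(entries):
    # 3x3 skew-symmetric Poisson matrix from its upper entries
    P = sp.zeros(3, 3)
    for (i, j), v in entries.items():
        P[i, j] = v
        P[j, i] = -v
    return P

# the two brackets of (47.10): only {x,z}_1 = f_y and {y,z}_2 = -f_x are nonzero
P1 = skew({(0, 2): sp.diff(f, y)})
P2 = skew({(1, 2): -sp.diff(f, x)})

def bracket(P, g, h):
    # {g, h} = sum_{i,j} P[i,j] dg/dx_i dh/dx_j
    return sum(P[i, j] * sp.diff(g, X[i]) * sp.diff(h, X[j])
               for i in range(3) for j in range(3))

# F_infty(y), F_0(x), F_1(f(x,y)) are Casimirs of {,}_1, {,}_2, {,}_1 + {,}_2
for C, P in [(y, P1), (x, P2), (f, P1 + P2)]:
    assert all(sp.simplify(bracket(P, C, g)) == 0 for g in X)
print("x, y and f(x, y) are Casimir functions as claimed")
```

By the chain rule the same holds for $ F_{\infty}\left(y\right) $, $ F_{0}\left(x\right) $, $ F_{1}\left(f\right) $ with arbitrary $ F_{0} $, $ F_{1} $, $ F_{\infty} $.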
Since $ \left\{,\right\}_{1} $, $ \left\{,\right\}_{2} $, and $ \left\{,\right\}_{1}+\left\{,\right\}_{2} $ are of corank 1, the structure
$ \left(M,\left\{,\right\}_{a},\left\{,\right\}_{b}\right) $ of the lemma determines the functions $ x'=x\circ p $, $ y'=y\circ p $, and
$ f'=f\circ p $ uniquely up to transformations of the form $ \widetilde{A}=\alpha_{A}\left(A\right) $, $ A\in\left\{x',y',f'\right\} $.
Thus $ \left(M,\left\{,\right\}_{a},\left\{,\right\}_{b}\right) $ determines the function $ f\left(x,y\right) $ up to transformations
of the form $ \widetilde{f}=\gamma\left(f\left(\alpha\left(x\right),\beta\left(y\right)\right)\right) $. Indeed, the graph of $ f\left(x,y\right) $ coincides with
the image of $ M $ w.r.t.~the mapping $ x'\times y'\times f' $.
\begin{lemma} Consider a smooth function $ f\colon U_{1}\times U_{2} \to U_{3} $, $ U_{1,2,3}\subset{\mathbb R} $, and
diffeomorphisms $ \alpha\colon U_{1}' \to U_{1} $, $ \beta\colon U_{2}' \to U_{2} $, $ \gamma\colon U_{3} \to U_{3}' $. Let
\begin{equation}
g\left(x,y\right) \buildrel{\text{def}}\over{=} \gamma\left(f\left(\alpha\left(x\right),\beta\left(y\right)\right)\right).
\notag\end{equation}
Then the bihamiltonian structures $ M_{f} $ and $ M_{g} $ are isomorphic. \end{lemma}
\begin{proof} It is easy to write the diffeomorphism explicitly as $ x'=\alpha\left(x\right) $,
$ y'=\beta\left(y\right) $, $ z'=\xi\left(x,y\right)z $ with an appropriate function $ \xi\left(x,y\right) $. \end{proof}
The next step is to introduce additional conditions on a function
$ g\left(x,y\right) $ which would define the diffeomorphisms $ \alpha $, $ \beta $, $ \gamma $ almost uniquely.
First, assume $ \left(0,0\right)\in{\mathcal B} $, $ f\left(0,0\right)=0 $. Consider $ O=\left(0,0,0\right) $ as a marked point on
$ {\mathcal B}\times{\mathbb R} $. Then the condition that the $ x' $- and $ y' $-coordinates of $ O $ remain 0 leads
to $ \alpha\left(0\right)=0 $ and $ \beta\left(0\right)=0 $.
Given a function $ f\left(x,y\right) $ such that $ f\left(0,0\right)=0 $, $ \partial f/\partial x\not=0 $ and $ \partial f/\partial y\not=0 $ near
$ x=y=0 $, one can find local coordinate changes $ x'=\alpha\left(x\right) $, $ y'=\beta\left(y\right) $, and
$ f'=\gamma\left(f\right) $, $ \alpha\left(0\right)=\beta\left(0\right)=\gamma\left(0\right)=0 $, such that $ \varphi\left(x',y'\right)\buildrel{\text{def}}\over{=}\gamma\left(f\left(\alpha^{-1}\left(x'\right),\beta^{-1}\left(y'\right)\right)\right) $
satisfies
\begin{equation}
\begin{gathered}
\frac{\partial\varphi}{\partial x'}|_{x'=0}=1\text{, }\frac{\partial\varphi}{\partial y'}|_{x'=0}=1,
\\
\frac{\partial\varphi}{\partial x'}|_{y'=0}=\frac{\partial\varphi}{\partial y'}|_{y'=0},
\end{gathered}
\qquad \varphi\left(0,0\right)=0.
\label{equ47.20}\end{equation}\myLabel{equ47.20,}\relax
Moreover, such a coordinate change and the function $ \varphi $ are defined
uniquely up to simultaneous multiplication of $ x' $, $ y' $ and $ \varphi $ by the same
constant. If $ \frac{\partial^{2}\varphi}{\partial x'\partial y'}|_{\left(0,0\right)}\not=0 $, then this last degree of freedom may
be eliminated by a requirement that $ \frac{\partial^{2}\varphi}{\partial x'\partial y'}|_{\left(0,0\right)}=1 $. (In fact if
$ \varphi\left(x',y'\right)\not\equiv x'+y' $, then one can fix coordinates $ x' $ and $ y' $ by normalizing an
appropriate derivative of $ \varphi $ of higher order.)
These arguments lead to the following statement from (\cite{GelZakhWeb,
GelZakh93}):
\begin{theorem} \label{th47.13}\myLabel{th47.13}\relax Consider two functions $ \varphi\left(x,y\right) $ and $ \varphi'\left(x,y\right) $ defined in
neighborhoods $ {\mathcal B} $, $ {\mathcal B}' $ of (0,0). Suppose that both $ \varphi $ and $ \varphi' $ satisfy
~\eqref{equ47.20}. There exists a local diffeomorphism between $ M_{\varphi} $ and $ M_{\varphi'} $ which
preserves the point (0,0,0) iff there exists $ C\in{\mathbb R} $, $ C\not=0 $, such that
$ \varphi\left(Cx,Cy\right)\equiv C\varphi'\left(x,y\right) $ near $ x=y=0 $. \end{theorem}
\begin{corollary} \label{cor47.30}\myLabel{cor47.30}\relax If $ \varphi\left(x,y\right) $ satisfies~\eqref{equ47.20}, then $ M_{\varphi} $ is flat iff
$ \varphi\left(x,y\right)=x+y $. \end{corollary}
\begin{proof} Indeed, $ \varphi\left(x,y\right)=f\left(x,y\right)=x+y $ defines a structure with constant
coefficients (compare with~\eqref{equ45.20}), thus a flat one. Any other
function $ \varphi\left(x,y\right) $ which satisfies~\eqref{equ47.20} will define a non-isomorphic
bihamiltonian structure, thus a non-flat one. \end{proof}
As a corollary, one obtains a lot of structures which are not flat,
thus cannot satisfy the conditions of Theorem~\ref{th1.10}. The next logical
step is to check whether these bihamiltonian structures are ``integrable''.
To do so, we need a formalization of the notion of integrability. One of
the simplest such notions is integrability by the anchored Lenard scheme,
which is introduced in Section~\ref{h48}. Example~\ref{ex48.80} will demonstrate
that any homogeneous bihamiltonian structure $ M_{f} $ is integrable\footnote{In fact Proposition~\ref{prop55.60} will show that {\em any\/} homogeneous structure is
Lenard-integrable.} in this
sense.
\section{One counterexample }\label{h62}\myLabel{h62}\relax
The examples of Section~\ref{h47} show that one cannot expect to prove
the conclusion of Theorem~\ref{th1.10} in conditions of Theorem~\ref{th1.07}, even
if one requires $ {\mathcal U} $ to be the whole complex plane $ {\mathbb C} $, and requires $ F_{\lambda} $ to
depend analytically on $ \lambda $. Recall that the notation $ M_{f} $ was introduced in
Definition~\ref{def47.11}.
To show that in fact even the restriction on the degree of the
polynomial in Theorem~\ref{th1.10} cannot be improved if $ k=2 $, we need the following definition.
\begin{definition} Consider a bihamiltonian structure on $ M $ and a smooth
function on $ {\mathbb R}\times M $, $ \left(\lambda,m\right) \mapsto C_{\lambda}\left(m\right) $. Call this function a $ \left[d\right] $-{\em family\/} if
for any fixed $ m\in M $ the function $ C_{\lambda}\left(m\right) $ is a polynomial in $ \lambda $ of
degree at most $ d $, and for any fixed $ \lambda\in{\mathbb R} $ the function $ C_{\lambda} $ of $ m $ is a Casimir
function for $ \lambda\left\{,\right\}_{1}+\left\{,\right\}_{2} $. \end{definition}
\begin{proposition} \label{prop47.40}\myLabel{prop47.40}\relax There exists a function $ \varphi\left(x,y\right)\not\equiv x+y $ which satisfies
~\eqref{equ47.20}, and such that the bihamiltonian structure $ M_{\varphi} $ admits a
$ \left[2\right] $-family $ F_{\lambda} $. \end{proposition}
\begin{proof} Actually it is possible to describe {\em all\/} analytic functions
$ f\left(x,y\right) $ such that $ M_{f} $ admits a $ \left[2\right] $-family $ F_{\lambda} $, at least if we are allowed to
restrict our attention to smaller open subsets.
If $ G_{\lambda} $ is a $ \left[1\right] $-family, then $ \left(d\lambda+e\right)G_{\lambda}+a\lambda^{2}+b\lambda+c $, $ a,b,c,d,e\in{\mathbb C} $, gives a
$ \left[2\right] $-family. Call such families {\em simple\/} families. By Theorem~\ref{th1.10} a
simple family may exist on a flat bihamiltonian structure only.
Given an open subset $ {\mathcal U}\subset{\mathbb C} $ with two analytic functions $ \eta,\zeta\colon {\mathcal U} \to {\mathbb C} $,
and an open subset $ {\mathfrak B}\subset{\mathbb C}\times{\mathbb C} $ with an analytic function $ \Lambda\colon {\mathfrak B} \to {\mathcal U} $, let
\begin{equation}
F_{\lambda}^{\left(\eta\zeta\Lambda\right)}\left(x,y\right)\buildrel{\text{def}}\over{=}\left(\Lambda\left(x,y\right)-\lambda\right)^{2}y+\zeta\left(\Lambda\left(x,y\right)\right)+\lambda\eta\left(\Lambda\left(x,y\right)\right).
\notag\end{equation}
\begin{lemma} \label{lm62.20}\myLabel{lm62.20}\relax Consider an analytic bihamiltonian structure $ M_{f} $ with a
$ \left[2\right] $-family $ F_{\lambda} $ which is not simple. Then there exists an open subset $ {\mathcal U}\subset{\mathbb C} $
with two analytic functions $ \eta,\zeta\colon {\mathcal U} \to {\mathbb C} $, and an open subset $ {\mathfrak B}\subset{\mathbb C}\times{\mathbb C} $ with an
analytic function $ \Lambda\colon {\mathfrak B} \to {\mathcal U} $ such that $ F_{\lambda}\left(x,y,z\right)=F_{\lambda}^{\left(\eta\zeta\Lambda\right)}\left(x,y\right) $ and
\begin{equation}
\frac{d\zeta}{dt}+t\frac{d\eta}{dt}=0\quad \text{if }t\in{\mathcal U},\qquad x=\Lambda^{2}\left(x,y\right)y+\zeta\left(\Lambda\left(x,y\right)\right)\quad \text{if }\left(x,y\right)\in{\mathfrak B}.
\notag\end{equation}
In particular, $ M_{f} $ is locally isomorphic to an open subset of
$ M_{F_{1}^{\left(\eta\zeta\Lambda\right)}} $. \end{lemma}
\begin{proof} Write $ F_{\lambda}=\lambda^{2}H_{0}+\lambda H_{1}+H_{2} $. Being a Casimir function for $ \left\{,\right\}_{2} $, $ H_{2} $
depends on $ x $ only; similarly $ H_{0} $, being a Casimir function for $ \left\{,\right\}_{1} $, depends on $ y $ only. If $ H_{0} $ does not depend
on $ y $, then $ H_{0}=\operatorname{const} $, thus $ F_{\lambda} $ is simple. Similarly one can exclude the
case when $ H_{2} $ does not depend on $ x $. Restricting to a smaller open subset,
one can assume that $ x=\alpha\left(H_{2}\right) $, $ y=\beta\left(H_{0}\right) $, thus one can consider $ H_{0} $ and $ H_{2} $
instead of coordinates $ x $ and $ y $ on $ {\mathcal B} $. In particular, we may assume that
$ H_{2}=x $, $ H_{0}=y $.
By Lemma~\ref{lm25.20}, given a point $ \left(x,y\right)\in{\mathcal B} $, $ dF_{\lambda}|_{\left(x,y\right)} $ may be written
as $ p\left(\lambda\right)\widetilde{w}_{\lambda} $, here $ p\left(\lambda\right) $ is a scalar polynomial in $ \lambda $, and $ \widetilde{w}_{\lambda} $ is a
vector-valued polynomial of degree 1 in $ \lambda $. Thus $ \deg p=1 $, denote the zero
of $ p $ by $ \Lambda $. We conclude that for each $ \left(x,y\right) $ there is $ \Lambda\left(x,y\right) $ such that
$ dF_{\lambda}|_{\left(x,y\right)}=0 $ if $ \lambda=\Lambda\left(x,y\right) $.
Restricting to an appropriate open subset of $ {\mathcal B} $, we may assume that $ \Lambda $
depends analytically on $ x $ and $ y $. If $ \Lambda $ is constant, then $ dF_{\Lambda}=0 $ implies
that $ \frac{F_{\lambda}\left(x,y\right)-F_{\lambda}\left(x_{0},y_{0}\right)}{\lambda-\Lambda} $ is linear, thus $ F_{\lambda} $ is simple. Thus,
decreasing $ {\mathcal B} $ again, we may assume that either $ \left(\Lambda,y\right) $ or $ \left(\Lambda,x\right) $ give a local
coordinate system on $ {\mathcal B} $. Assume that $ \left(\Lambda,y\right) $ is a local coordinate system.
The condition $ dF_{\Lambda}|_{\left(x,y\right)}=0 $ implies
\begin{equation}
dx=-\Lambda^{2}dy-\Lambda dH_{1}.
\label{equ47.30}\end{equation}\myLabel{equ47.30,}\relax
Thus the $ 2 $-form $ d\left(-\Lambda^{2}dy-\Lambda dH_{1}\right) $ vanishes; in other words, $ 2\Lambda\,d\Lambda\wedge dy+d\Lambda\wedge dH_{1}=0 $. One
can conclude that in the coordinate system $ \left(\Lambda,y\right) $ one has
$ \frac{\partial H_{1}}{\partial y}|_{\Lambda=\operatorname{const}}=-2\Lambda $, or $ H_{1}=-2\Lambda y+\eta\left(\Lambda\right) $ with an unknown function
$ \eta\left(t\right) $. Equation~\eqref{equ47.30} leads to
\begin{equation}
x=\Lambda^{2}y+\zeta\left(\Lambda\right),\qquad \frac{d\zeta}{dt}=-t\frac{d\eta}{dt}.
\label{equ47.40}\end{equation}\myLabel{equ47.40,}\relax
This leads to a formula for $ F_{\lambda} $ in coordinates $ y $ and $ \Lambda $:
\begin{equation}
F_{\lambda}=\left(\lambda-\Lambda\right)^{2}y+\zeta\left(\Lambda\right)+\lambda\eta\left(\Lambda\right),\qquad \frac{d\zeta}{dt}=-t\frac{d\eta}{dt}.
\label{equ47.45}\end{equation}\myLabel{equ47.45,}\relax
By~\eqref{equ47.10}, $ F_{1}=\gamma\left(f\right) $ for an appropriate function $ \gamma $. If
$ F_{1}\left(x,y\right)\equiv \operatorname{const} $, then $ \frac{F_{\lambda}-F_{1}}{\lambda-1} $ is linear, thus $ F_{\lambda} $ is simple. Hence
decreasing $ {\mathcal B} $ we may assume that one can write $ f=\varepsilon\left(F_{1}\right) $ for an appropriate
function $ \varepsilon $. Thus $ M_{F_{1}} $ is locally isomorphic to $ M_{f} $.
Moreover,~\eqref{equ47.40} implies that $ \left(\Lambda,x\right) $ is also a coordinate system
on an open subset of $ {\mathcal B} $. Exchanging $ x $ and $ y $, we see that our assumption
that $ \left(\Lambda,y\right) $ is a local coordinate system is {\em always\/} satisfied. \end{proof}
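The integration step leading from~\eqref{equ47.30} to~\eqref{equ47.40} in the proof above can be re-checked symbolically. The sketch below (illustrative only, with $ \eta $ kept as an undetermined function) verifies that once $ H_{1}=-2\Lambda y+\eta\left(\Lambda\right) $, the 1-form prescribing $ dx $ is closed, and that its integration forces $ \zeta'=-t\,\eta' $:

```python
import sympy as sp

L, y = sp.symbols('Lambda y')
eta = sp.Function('eta')(L)
H1 = -2 * L * y + eta                      # the form of H_1 found in the proof

# write dx = -Lambda^2 dy - Lambda dH_1 (47.30) as A dLambda + B dy
A = -L * sp.diff(H1, L)
B = -L**2 - L * sp.diff(H1, y)

# closedness (dA/dy = dB/dLambda) is what allows integrating to x(Lambda, y)
assert sp.simplify(sp.diff(A, y) - sp.diff(B, L)) == 0

# subtracting d(Lambda^2 y) leaves zeta'(Lambda) dLambda with
# zeta' = -Lambda eta', i.e. exactly (47.40)
zeta_prime = sp.simplify(A - sp.diff(L**2 * y, L))
assert sp.simplify(zeta_prime + L * sp.diff(eta, L)) == 0
print("x = Lambda^2 y + zeta(Lambda) with zeta' = -Lambda eta'")
```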
\begin{lemma} \label{lm62.30}\myLabel{lm62.30}\relax There is a way to associate to an open subset $ {\mathcal U}\subset{\mathbb C} $ and a
function $ \eta\colon {\mathcal U} \to {\mathbb C} $ a homogeneous bihamiltonian structure $ M^{\left(\eta\right)} $ of type (3)
with a family of functions $ F_{\lambda}^{\left(\eta\right)} $ quadratic in $ \lambda $. Under the conditions of
Lemma~\ref{lm62.20} there is a diffeomorphism of an open subset of $ M_{f} $ with an
open subset of $ M^{\left(\eta\right)} $ and $ C\in{\mathbb C} $ such that the diffeomorphism sends one
bihamiltonian structure to another, and the family $ F_{\lambda} $ to the family
$ F_{\lambda}^{\left(\eta\right)}+C $. A change of $ \eta $ of the form $ \eta'\left(t\right)=\eta\left(t\right)+at+b $ leads to an isomorphic
bihamiltonian structure with the isomorphism sending $ F_{\lambda}^{\left(\eta\right)} $ to
$ F_{\lambda}^{\left(\eta'\right)}+A\lambda^{2}+B\lambda+C $, $ A,B,C\in{\mathbb C} $. \end{lemma}
\begin{proof} Indeed, given the functions $ \zeta $ and $ \eta $, let $ \bar{\Sigma} = \left\{ \left(x,y,\Lambda\right)\in{\mathbb C}^{3} \mid x
=\Lambda^{2}y+\zeta\left(\Lambda\right)\right\} $. Let $ \Sigma=\left\{\left(x,y,\Lambda\right)\in\bar{\Sigma} \mid y\not=\frac{1}{2}\frac{d\eta}{d\Lambda},\Lambda\not=0\right\} $ be the subset of $ \bar{\Sigma} $
where $ x $ and $ y $ provide a local coordinate system. Plugging into~\eqref{equ47.45},
one obtains functions $ \Lambda $, $ x $, $ y $, $ F_{\lambda} $ defined on $ \Sigma $.
Functions $ x $ and $ y $ provide a local coordinate system near any point
of $ \Sigma $. On an open subset $ \Sigma_{1}\subset\Sigma $ one has $ \frac{\partial F_{1}}{\partial x}\not=0 $, $ \frac{\partial F_{1}}{\partial y}\not=0 $, thus putting
$ F_{1} $ into~\eqref{equ47.10} instead of $ f $ defines a homogeneous bihamiltonian
structure on $ M^{\left(\eta,\zeta\right)}\buildrel{\text{def}}\over{=}\Sigma_{1}\times{\mathbb C} $. As we have seen in the proof of Lemma
~\ref{lm62.20}, an open piece of $ M_{f} $ is isomorphic to an open piece of $ M^{\left(\eta,\zeta\right)} $,
moreover, the families $ F_{\lambda} $ are preserved by this isomorphism.
By~\eqref{equ47.40},~\eqref{equ47.45}, a change of the form $ \eta'\left(t\right)=\eta\left(t\right)+2a t+b $
together with the change $ \zeta'\left(t\right)=\zeta\left(t\right)-at^{2}+d $ would lead to a parallel
translation of the surface $ \bar{\Sigma} $, and to the required change of functions $ F_{\lambda} $.
In particular, a change in $ \zeta $ only will not change $ M^{\left(\eta,\zeta\right)} $, and will change
the family $ F_{\lambda} $ by an additive constant only. \end{proof}
\begin{lemma} In the conditions of Lemma~\ref{lm62.30} the family $ F_{\lambda}^{\left(\eta\right)} $ on $ M^{\left(\eta\right)} $
is a $ \left[2\right] $-family. \end{lemma}
\begin{proof} By construction of the bihamiltonian structure on $ M^{\left(\eta\right)} $, the
functions $ F_{0}^{\left(\eta\right)}\equiv x $, $ F_{1}^{\left(\eta\right)} $, and the leading coefficient $ H_{0} $ of the quadratic
family $ F_{\lambda}^{\left(\eta\right)} $ are Casimir functions for $ \left\{,\right\}_{2} $, $ \left\{,\right\}_{1}+\left\{,\right\}_{2} $, and $ \left\{,\right\}_{1} $
correspondingly. Fix $ m_{0}\in M $. Then $ dF_{\lambda}^{\left(\eta\right)}|_{m_{0}} $ is a vector-function which is
quadratic in $ \lambda $. Moreover, at $ \lambda=\Lambda\left(m_{0}\right) $ this vector-function vanishes, thus
$ dF_{\lambda}^{\left(\eta\right)}|_{m_{0}}=\left(\lambda-\Lambda\left(m_{0}\right)\right)w\left(\lambda\right) $, here $ w\left(\lambda\right) $ is a vector-function of degree 1 in $ \lambda $.
Extend $ w $ to become a homogeneous function $ w\left(\lambda_{1},\lambda_{2}\right) $ of homogeneity degree
1, here $ \lambda=\lambda_{1}:\lambda_{2} $.
This function $ w\left(\lambda_{1},\lambda_{2}\right) $ is in the null-space of $ \lambda_{1}\left(,\right)_{1}+\lambda_{2}\left(,\right)_{2} $ for
three values 0, 1 and $ \infty $ of $ \lambda_{1}:\lambda_{2} $. However, the null-space for a pair of
pairings which is isomorphic to $ {\mathcal K}_{3} $ depends linearly on $ \lambda_{1}:\lambda_{2} $ (compare with
~\eqref{equ2.10}). We conclude that $ w\left(\lambda_{1},\lambda_{2}\right) $ is in the null-space for
$ \lambda_{1}\left(,\right)_{1}+\lambda_{2}\left(,\right)_{2} $ for any $ \lambda_{1},\lambda_{2} $, thus $ F_{\lambda}^{\left(\eta\right)} $ is a $ \left[2\right] $-family indeed.\end{proof}
\begin{lemma} The bihamiltonian structure $ M^{\left(\eta\right)} $ of Lemma~\ref{lm62.30} is flat on
any open subset iff $ \frac{d^{2}\eta}{dt^{2}}\equiv 0 $. \end{lemma}
\begin{proof} By arguments of Section~\ref{h47}, $ M_{F_{1}^{\left(\eta\right)}} $ is flat on an open subset
iff there is a local dependence between $ F_{1}^{\left(\eta\right)} $, $ x $ and $ y $ of the form
$ a\left(F_{1}^{\left(\eta\right)}\right)+b\left(x\right)+c\left(y\right)=0 $. If $ \eta=\zeta\equiv 0 $, then $ F_{\lambda}^{\left(\eta\right)}=\lambda^{2}y\pm2\lambda\sqrt{xy}+x $, thus
$ F_{1}^{\left(\eta\right)}=\left(\sqrt{x}\pm\sqrt{y}\right)^{2} $, thus $ M_{F_{1}^{\left(\eta\right)}} $ is flat. By Lemma~\ref{lm62.30} this proves the
``if'' part.
If a dependence $ a\left(F_{1}^{\left(\eta\right)}\right)+b\left(x\right)+c\left(y\right)=0 $ exists, then
$ \frac{\partial}{\partial\Lambda}|_{y=\operatorname{const}}\left(a\left(F_{1}^{\left(\eta\right)}\right)+b\left(x\right)\right)=0 $, or
\begin{equation}
\left(\Lambda-1\right)\alpha\left(F_{1}^{\left(\eta\right)}\right)=-\Lambda\beta\left(x\right),
\label{equ47.47}\end{equation}\myLabel{equ47.47,}\relax
here $ \alpha\left(F_{1}^{\left(\eta\right)}\right)=da/dF_{1}^{\left(\eta\right)} $, $ \beta\left(x\right)=db/dx. $
Taking the derivative $ \frac{\partial}{\partial y}|_{\Lambda=\operatorname{const}} $ of~\eqref{equ47.47}, and dividing by the
cube of~\eqref{equ47.47}, one obtains $ \left(\alpha^{-3}\frac{d\alpha}{dF_{1}^{\left(\eta\right)}}\right)\left(F_{1}^{\left(\eta\right)}\right)=\beta^{-3}\frac{d\beta}{dx}\left(x\right) $.
Since $ F_{1}^{\left(\eta\right)} $ and $ x $ are independent, and $ \alpha\not\equiv 0 $, $ \beta\not\equiv 0 $, we conclude that
\begin{equation}
\frac{d\alpha}{dF_{1}^{\left(\eta\right)}}=C\alpha^{3},\qquad \frac{d\beta}{dx}=C\beta^{3}.
\notag\end{equation}
Thus $ \alpha\left(F_{1}^{\left(\eta\right)}\right)=D/\sqrt{F_{1}^{\left(\eta\right)}-\varphi_{0}} $, $ \beta\left(x\right)=D/\sqrt{x-x_{0}} $, $ x_{0},\varphi_{0},D\in{\mathbb C} $, $ D\not=0 $. Hence
$ \left(\Lambda-1\right)^{2}\left(x-x_{0}\right)=\Lambda^{2}\left(F_{1}-\varphi_{0}\right) $ by~\eqref{equ47.47}, and $ \left(-2\Lambda+1\right)\left(\zeta\left(\Lambda\right)-\zeta_{0}\right)=\Lambda^{2}\left(\eta\left(\Lambda\right)-\eta_{0}\right) $ for
appropriate $ \eta_{0},\zeta_{0}\in{\mathbb C} $. Together with $ \zeta'\left(t\right)=-t\eta'\left(t\right) $ this shows that
$ \eta\left(\Lambda\right)=C\Lambda+E $. \end{proof}
This finishes the proof of Proposition~\ref{prop47.40}. \end{proof}
\section{Anchored Lenard scheme }\label{h48}\myLabel{h48}\relax
Recall how the Lenard scheme works\footnote{For the history of the Lenard scheme and of the term ``Lenard scheme'' see
\cite{KosMag96Lax}.}. Since descriptions of the Lenard scheme
are in many cases based on the assumption that the Poisson bracket is
symplectic, here we supply as many details as possible to unravel the
relation of the anchored Lenard scheme with Casimir functions which
depend on parameters (such functions do not exist in the symplectic
situation). In turn, such functions are directly related to webs.
\begin{remark} \label{rem48.02}\myLabel{rem48.02}\relax Before we proceed with a description of the problem which
the Lenard scheme solves, we need to resolve a possible ambiguity. The
Lenard scheme ``as a method of integration'' consists of recurrence
relations and initial data for these relations. However, the existing
{\em formalizations\/} of the Lenard scheme (e.g., \cite{Mag78Sim,GelDor79Ham,
KosMag96Lax}) consider the recurrence relations only, omitting the
initial data. The latter approach has an advantage of being more general,
in particular, it works in symplectic case too. However, this approach
does not address the question when recurrence relations have solutions
(these relations are overdetermined), in particular they do not specify
how to find the initial data which would make the Lenard scheme succeed.
Since in our settings symplectic Poisson structures have only a
tangential r\^ole, here we consider only the variant of the Lenard scheme which
is important in applications, when both the initial data and the
recurrence relation are specified. To avoid any confusion, we call this
variant the {\em anchored Lenard scheme}. For this scheme we will not only be
able to describe when recurrence relations are solvable, we will also
describe which bihamiltonian systems are completely integrable by the Lenard
scheme. As Theorem~\ref{th55.50} will show, such systems are never symplectic,
which may explain why the anchored Lenard scheme was not formalized
before. \end{remark}
The method of {\em first integrals\/} to ``integrate'' a system of ordinary
differential equations $ \frac{d}{dt}m\left(t\right)=v\left(m\left(t\right)\right) $, $ m\in M $, starts with writing the
{\em Hamiltonian representation\/} for this system, i.e.,
\begin{equation}
\frac{d}{dt}\left(f|_{m\left(t\right)}\right)=\left\{H,f\right\}|_{m\left(t\right)}\qquad \text{for any function }f\text{ on }M.
\notag\end{equation}
To do this one needs to find the Poisson bracket $ \left\{,\right\} $ and the function $ H $
(called the {\em Hamiltonian\/} of the equation). Note that $ H $ and $ \left\{,\right\} $ uniquely
determine the initial vector field $ v $.
Additionally, one needs to find a large enough independent
collection of functions $ H_{i} $ on $ M $ which all commute with each other w.r.t.~
$ \left\{,\right\} $ and such that $ H $ can be expressed as a function of $ H_{i} $. Alternatively,
one starts with a given bracket $ \left\{,\right\} $ and a function $ H $, then the problem is
to find the family $ H_{i} $. In fact, given the family $ H_{i} $, one can take as $ H $
any function of $ H_{i} $.
Thus to construct an integrable system a key problem is to find a
large family of independent functions $ H_{i} $ {\em in involution}, i.e., such that
$ \left\{H_{i},H_{j}\right\}=0 $. The anchored Lenard scheme is a particular algorithm to
construct such a family on a bihamiltonian manifold.
Start with a way to find many functions in involution, not
necessarily independent. Most statements below are applicable both in
$ C^{\infty} $-geometry and in analytic geometry. In such cases we state the smooth
variant only, for the corresponding analytic statement one needs to
substitute $ {\mathbb R}{\mathbb P}^{1} $ by $ {\mathbb C}{\mathbb P}^{1} $.
\begin{definition} \label{def48.05}\myLabel{def48.05}\relax Consider a bihamiltonian structure on $ M $, an open
subset $ {\mathcal U}\subset{\mathbb R}{\mathbb P}^{1} $ and a smooth function on $ {\mathcal U}\times M $, $ \left(\lambda,m\right) \mapsto C_{\lambda}\left(m\right) $. Consider this
function as a family $ C_{\lambda} $, $ \lambda\in{\mathcal U} $, of functions on $ M $. Call $ C_{\lambda} $ a
$ \lambda $-{\em Casimir family\/} on $ M $ if $ C_{\lambda_{1}:\lambda_{2}} $ is a Casimir function for $ \lambda_{1}\left\{,\right\}_{1}+\lambda_{2}\left\{,\right\}_{2} $
for $ \left(\lambda_{1}:\lambda_{2}\right)\in{\mathcal U} $. \end{definition}
\begin{proposition} \label{prop48.10}\myLabel{prop48.10}\relax Consider a bihamiltonian structure $ M $ and a point
$ m_{0}\in M $. Fix $ \lambda^{0}\in{\mathbb R}{\mathbb P}^{1} $. Suppose that the corank of the bracket $ \lambda_{1}\left\{,\right\}_{1}+\lambda_{2}\left\{,\right\}_{2} $ at
$ m $ is $ r\in{\mathbb Z} $ for any $ m $ near $ m_{0} $ and any $ \lambda_{1}:\lambda_{2} $ near $ \lambda^{0} $. Then there is a
neighborhood $ U\times{\mathcal U} $ of $ \left(m_{0},\lambda^{0}\right)\in M\times{\mathbb R}{\mathbb P}^{1} $ and $ r $ families $ C_{t,\lambda} $, $ 1\leq t\leq r $, $ \lambda\in{\mathcal U} $, of
functions on $ U $ such that
\begin{enumerate}
\item
for any given $ t $, $ 1\leq t\leq r $, $ C_{t,\lambda} $ is a $ \lambda $-Casimir family on $ U $, and
\item
for any given $ \lambda\in{\mathcal U} $ the functions $ C_{t,\lambda} $, $ 1\leq t\leq r $, are independent.
\end{enumerate}
\end{proposition}
\begin{proof} Let $ \lambda^{0}=\left(\lambda_{1}^{0}:\lambda_{2}^{0}\right)\in{\mathbb R}{\mathbb P}^{1} $. Consider the symplectic leaf of
$ \lambda_{1}^{0}\left\{,\right\}_{1}+\lambda_{2}^{0}\left\{,\right\}_{2} $ passing through $ m_{0} $. The codimension of this leaf is $ r $,
fix a transversal manifold $ N $ of dimension $ r $, and coordinate functions $ c_{t} $,
$ 1\leq t\leq r $, on this manifold. Obviously, if $ \left(\lambda_{1}:\lambda_{2}\right) $ is close to $ \left(\lambda_{1}^{0}:\lambda_{2}^{0}\right) $ and
$ m $ is close to $ m_{0} $, then the symplectic leaf of $ \lambda_{1}\left\{,\right\}_{1}+\lambda_{2}\left\{,\right\}_{2} $ passing
through $ m $ intersects $ N $ in exactly one point, and this point depends
smoothly on $ \lambda=\lambda_{1}:\lambda_{2} $ and $ m $.
Thus there is exactly one Casimir function $ C_{t,\lambda} $ for $ \lambda_{1}\left\{,\right\}_{1}+\lambda_{2}\left\{,\right\}_{2} $
which coincides with $ c_{t} $ when restricted to $ N $. Obviously, it satisfies the
conditions of the proposition. \end{proof}
\begin{lemma} Consider two $ \lambda $-Casimir families $ C_{\lambda} $, $ \lambda\in{\mathcal U} $, and $ C'_{\lambda} $, $ \lambda\in{\mathcal U}' $, on $ M $.
Then
\begin{equation}
\left\{C_{\lambda},C'_{\mu}\right\}_{1}=\left\{C_{\lambda},C'_{\mu}\right\}_{2}=0,\qquad \lambda\in{\mathcal U},\quad \mu\in{\mathcal U}'.
\notag\end{equation}
\end{lemma}
\begin{proof} To simplify notations assume $ \infty\notin{\mathcal U} $. Let $ \left\{,\right\}^{\lambda}\buildrel{\text{def}}\over{=}\lambda\left\{,\right\}_{1}+\left\{,\right\}_{2} $. Since
$ \left\{C_{\lambda},f\right\}^{\lambda}=\left\{f,C'_{\mu}\right\}^{\mu}=0 $ for any function $ f $, we see that $ \left\{C_{\lambda},C'_{\mu}\right\}^{\nu}=0 $ if $ \nu=\lambda $ or
$ \nu=\mu $. Since $ \left\{,\right\}^{\nu} $ is linear in $ \nu $, $ \left\{C_{\lambda},C'_{\mu}\right\}^{\nu}=0 $ for any $ \nu $ as long as $ \lambda\not=\mu $. On
the other hand, the same identity is true for $ \lambda=\mu $ by continuity in $ \lambda $. \end{proof}
Proposition~\ref{prop48.10} provides a way to obtain a giant collection
of functions which commute with each other w.r.t.~{\em both\/} the brackets. Out
of this huge collection of functions on $ M $ only a finite number of
functions are independent (since this number is bounded by the dimension
of the manifold). One needs a way to extract a finite subset out of this
continuum. The anchored Lenard scheme provides such a way, moreover, it
allows one to find this small collection without actually finding the whole
continuum of Casimir functions.
The idea of the anchored Lenard scheme is to put $ \lambda^{0}=\infty $ and write a
formal series in\footnote{To follow the standard description of Lenard scheme, we use expansion in
formal variable $ \lambda^{-1} $, though by exchanging $ \left\{,\right\}_{1} $ and $ \left\{,\right\}_{2} $ one might use
more natural expansion in $ \lambda $.} $ \lambda^{-1} $ for a $ \lambda $-Casimir family $ C_{\lambda} $ defined near $ \lambda^{0} $:
\begin{equation}
C_{\lambda}=H_{0}+\lambda^{-1}H_{1}+\lambda^{-2}H_{2}+\dots .
\notag\end{equation}
Obviously, commutativity of Casimir functions implies $ \left\{H_{i},H_{j}\right\}_{1}=\left\{H_{i},H_{j}\right\}_{2}=0 $
for any $ i,j $. On the other hand, the condition
\begin{equation}
\left\{H_{0}+\lambda^{-1}H_{1}+\lambda^{-2}H_{2}+\dots ,f\right\}_{1}+\lambda^{-1}\left\{H_{0}+\lambda^{-1}H_{1}+\lambda^{-2}H_{2}+\dots ,f\right\}_{2}=0,
\notag\end{equation}
which describes the formal-variables analogue of the condition on a
$ \lambda $-Casimir family, can be written as
\begin{enumerate}
\item
function $ H_{0} $ is a Casimir function for $ \left\{,\right\}_{1} $;
\item
for any function $ f $ on $ M $
\begin{equation}
\left\{H_{i},f\right\}_{2}=-\left\{H_{i+1},f\right\}_{1}.
\label{equ48.20}\end{equation}\myLabel{equ48.20,}\relax
\end{enumerate}
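Indeed, matching the coefficients at powers of $ \lambda^{-1} $ in the condition above
makes this explicit: the coefficient at $ \lambda^{0} $ gives $ \left\{H_{0},f\right\}_{1}=0 $, i.e., the
first condition, while the coefficient at $ \lambda^{-i-1} $, $ i\geq0 $, gives
\begin{equation}
\left\{H_{i+1},f\right\}_{1}+\left\{H_{i},f\right\}_{2}=0,
\notag\end{equation}
which is exactly~\eqref{equ48.20}.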
\begin{remark} It is easy to see that given $ H_{i} $, the relation~\eqref{equ48.20} is
equivalent to a system of equations of the form $ dH_{i+1}|_{{\mathcal L}}=\omega_{{\mathcal L}} $, here $ {\mathcal L} $ runs
over symplectic leaves of $ \left\{,\right\}_{1} $, and $ \omega_{{\mathcal L}} $ is a $ 1 $-form on $ {\mathcal L} $ which is
determined by $ H_{i} $. In particular, if $ \left\{,\right\}_{1} $ has a constant rank, then
~\eqref{equ48.20} has a local solution iff all the forms $ \omega_{{\mathcal L}} $ are closed.
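(Explicitly, up to sign conventions, ~\eqref{equ48.20} states that the
Hamiltonian vector field of $ H_{i+1} $ w.r.t.~$ \left\{,\right\}_{1} $ coincides with minus the
Hamiltonian vector field of $ H_{i} $ w.r.t.~$ \left\{,\right\}_{2} $; the latter field is
determined by $ H_{i} $ alone, it must be tangent to the symplectic leaves $ {\mathcal L} $
of $ \left\{,\right\}_{1} $ for a solution to exist, and contracting it with the symplectic
form of $ {\mathcal L} $ produces the form $ \omega_{{\mathcal L}} $.)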
If so, one can find $ H_{i+1} $ by integrating $ \omega_{{\mathcal L}} $. Thus if a solution to
~\eqref{equ48.20} exists, it is easy to find. One can also see why in Lenard
scheme one takes $ \lambda^{0}=\infty $: in applications $ \left\{,\right\}_{1} $ is much simpler than $ \left\{,\right\}_{2} $ or
any other combination $ \lambda_{1}\left\{,\right\}_{1}+\lambda_{2}\left\{,\right\}_{2} $, thus taking $ \lambda^{0}=\infty $ simplifies the
integration of relations~\eqref{equ48.20}. \end{remark}
\begin{definition} \label{def48.25}\myLabel{def48.25}\relax Consider a formal series $ H_{0}+\lambda^{-1}H_{1}+\lambda^{-2}H_{2}+\dots $ in $ \lambda^{-1} $
with $ H_{i} $ being functions on $ M $. Call it a {\em formal\/} $ \lambda $-{\em family\/} on $ M $ if the
sequence $ H_{k} $ satisfies the recurrence relation~\eqref{equ48.20}. Call this formal
$ \lambda $-family {\em anchored\/} if $ H_{0} $ is a Casimir function for $ \left\{,\right\}_{1} $. \end{definition}
\begin{proposition} Given two anchored formal $ \lambda $-families $ H_{0}+\lambda^{-1}H_{1}+\lambda^{-2}H_{2}+\dots $
and $ H'_{0}+\lambda^{-1}H'_{1}+\lambda^{-2}H'_{2}+\dots $ one has $ \left\{H_{i},H'_{j}\right\}_{1}=\left\{H_{i},H'_{j}\right\}_{2}=0 $ for any $ i $ and $ j $.
\end{proposition}
\begin{proof} Put $ H_{i}=H'_{i}=0 $ for $ i<0 $. This makes~\eqref{equ48.20} applicable for $ i<0 $
too. For any $ i $ and $ j $
\begin{equation}
\left\{H_{i},H'_{j}\right\}_{1}=-\left\{H_{i-1},H'_{j}\right\}_{2}=\left\{H'_{j},H_{i-1}\right\}_{2}=-\left\{H'_{j+1},H_{i-1}\right\}_{1}=\left\{H_{i-1},H'_{j+1}\right\}_{1}.
\notag\end{equation}
Repeating this process $ i+1 $ times, one gets $ \left\{H_{i},H'_{j}\right\}_{1}=\left\{H_{-1},H'_{i+j+1}\right\}_{1}=0 $.
Moreover, $ \left\{H_{i},H'_{j}\right\}_{2}=-\left\{H_{i+1},H'_{j}\right\}_{1}=0 $. \end{proof}
If one considers one chain of solutions to~\eqref{equ48.20}, then the
anchoring condition may be dropped:
\begin{amplification} Given a formal $ \lambda $-family $ H_{0}+\lambda^{-1}H_{1}+\lambda^{-2}H_{2}+\dots $ one has
$ \left\{H_{i},H_{j}\right\}_{1}=\left\{H_{i},H_{j}\right\}_{2}=0 $ for any $ i $ and $ j $. \end{amplification}
\begin{proof} For any $ i\geq1 $ and $ j\geq0 $ one gets $ \left\{H_{i},H_{j}\right\}_{1}=\left\{H_{i-1},H_{j+1}\right\}_{1} $ again.
Repeating this several times, one can decrease $ |i-j| $ until it becomes 0
or 1 (depending on $ i-j $ being even or odd). If $ i-j $ is even, use
$ \left\{H_{k-l},H_{k+l}\right\}_{1}=\left\{H_{k},H_{k}\right\}_{1}=0 $, if $ i-j $ is odd, use
$ \left\{H_{k-l+1},H_{k+l}\right\}_{1}=\left\{H_{k+1},H_{k}\right\}_{1}=-\left\{H_{k},H_{k}\right\}_{2}=0 $. Thus $ \left\{H_{i},H_{j}\right\}_{1} $ is always 0,
moreover, $ \left\{H_{i},H_{j}\right\}_{2}=-\left\{H_{i+1},H_{j}\right\}_{1}=0 $. \end{proof}
\begin{lemma} \label{lm48.30}\myLabel{lm48.30}\relax Suppose that conditions of Proposition~\ref{prop48.10} are
satisfied. Fix $ n\geq0 $. Given $ m_{0}\in M $ and any sequence of functions $ H_{0},\dots ,H_{n} $ on
$ M $ such that $ H_{0} $ is a Casimir function for $ \left\{,\right\}_{1} $, and $ H_{k} $ satisfy equations
~\eqref{equ48.20} for $ i=0,\dots ,n-1 $, there exists a neighborhood $ U $ of $ \left(m_{0},\infty\right)\in M\times{\mathbb R}{\mathbb P}^{1} $
and a $ \lambda $-Casimir family $ C_{\lambda}\left(m\right) $ defined for $ \left(m,\lambda\right)\in U $ such that
\begin{equation}
C_{\lambda}=H_{0}+\lambda^{-1}H_{1}+\lambda^{-2}H_{2}+\dots +\lambda^{-n}H_{n}+o\left(\lambda^{-n}\right).
\notag\end{equation}
In particular, there is a function $ H_{n+1} $ defined near $ m_{0} $ which solves
~\eqref{equ48.20} for $ i=n $. \end{lemma}
\begin{proof} To simplify notations, suppose $ r=1 $ (the case of general $ r $ is
absolutely parallel). Then $ H_{0} $ is defined uniquely up to a change
$ H_{0}'=\varphi_{0}\left(H_{0}\right) $. Additionally, given $ H_{i} $, Equation~\eqref{equ48.20} determines $ H_{i+1} $ up
to a change $ H_{i+1}'=H_{i+1}+\varphi_{i+1}\left(H_{0}\right) $.
Moreover, the Taylor series for $ C_{1,\lambda} $ provides {\em one\/} solution to the
recursion relations~\eqref{equ48.20}. Since the change of the form $ H_{0}'=\varphi_{0}\left(H_{0}\right) $
corresponds to a change of the form $ c_{1}'=\varphi_{0}\left(c_{1}\right) $ in the proof of
Proposition~\ref{prop48.10}, we conclude that there is a locally defined
solution to the recursion relations~\eqref{equ48.20} for any initial data $ H_{0} $
which is a Casimir function for $ \left\{,\right\}_{1} $.
Next, proceed by induction in $ n $. To do the step of induction, it is
enough to prove the following statement: given a $ \lambda $-Casimir family $ C_{\lambda} $ near
$ \lambda=\infty $ such that $ C_{\infty}=H_{0} $, and given any function $ \varphi_{n}\left(h\right) $ of one variable defined
in a neighborhood of $ h=H_{0}\left(m_{0}\right) $ one can find another $ \lambda $-Casimir family $ C'_{\lambda} $
such that $ C'_{\lambda}-C_{\lambda} =\varphi_{n}\left(H_{0}\right)\lambda^{-n}+o\left(\lambda^{-n}\right) $. Putting $ C'_{\lambda}=C_{\lambda}+\varphi_{n}\left(C_{\lambda}\right)\lambda^{-n} $ finishes the
proof in the case $ r=1 $. \end{proof}
The following statement is obvious:
\begin{lemma} Suppose that $ r=1 $ and a sequence $ \left(H_{i}\right) $ satisfies conditions of
Lemma~\ref{lm48.30}. If $ H_{k} $ depends functionally on $ H_{0},\dots ,H_{k-1} $, then $ H_{l} $
depends functionally on $ H_{0},\dots ,H_{k-1} $ for any $ l $ such that $ k\leq l\leq n $. \end{lemma}
This shows that a maximal independent subset of the sequence $ \left(H_{l}\right) $
can be chosen to be the starting subsequence. The situation in the case
$ r>1 $ is slightly more complicated; however, it is easy to show that
\begin{proposition} \label{prop48.50}\myLabel{prop48.50}\relax Consider a maximal collection $ H_{0}^{\left(1\right)},\dots ,H_{0}^{\left(r\right)} $ of
independent Casimir functions for $ \left\{,\right\}_{1} $ near $ m_{0}\in M $. Let $ H_{i}^{\left(t\right)} $, $ t=1,\dots ,r $,
$ i\geq0 $, be solutions to recursion relations~\eqref{equ48.20} with $ H_{0}^{\left(t\right)} $ as the
initial data. Then there are numbers $ k_{1},\dots ,k_{r} $ such that the collection
$ \left\{H_{i}^{\left(t\right)}\right\} $, $ 1\leq t\leq r $, $ 0\leq i\leq k_{t} $, is functionally independent, and all functions
$ H_{i}^{\left(t\right)} $, $ 1\leq t\leq r $, $ i\geq0 $, depend functionally on this collection. \end{proposition}
\begin{definition} {\em Anchored Lenard scheme\/} of finding a large family of
functions on a bihamiltonian structure which mutually commute w.r.t.~both
brackets consists of two steps: first one finds a maximal independent
collection of Casimir functions for the bracket $ \left\{,\right\}_{1} $, then one solves
recurrence relations~\eqref{equ48.20} with these functions as initial data until
new functions start to depend on the old ones. \end{definition}
In fact it is not necessary to consider many chains of solutions of
recurrence relations:
\begin{amplification} In conditions of Proposition~\ref{prop48.10} suppose that the
bihamiltonian structure on $ M $ is analytic. Then there is a sequence of
functions $ H_{0},\dots ,H_{n} $ defined near a given point $ m_{0}\in M $ such that
\begin{enumerate}
\item
function $ H_{0} $ is a Casimir function for $ \left\{,\right\}_{1} $;
\item
functions $ \left(H_{i}\right) $ satisfy the recurrence relation~\eqref{equ48.20};
\item
functions $ \left(H_{i}\right) $ are independent, and for any $ 1\leq t\leq r $ and $ \lambda $ near $ \infty $ the
function $ C_{t,\lambda} $ of Proposition~\ref{prop48.50} depends on $ \left(H_{0},\dots ,H_{n}\right) $.
\end{enumerate}
\end{amplification}
\begin{proof} Since the Taylor series for $ C_{t,\lambda} $ in $ \lambda^{-1} $ converge, it is enough
to show that the Taylor coefficients for $ C_{t,\lambda} $ depend on $ \left(H_{0},\dots ,H_{n}\right) $.
Fix numbers $ \alpha_{t} $, $ 2\leq t\leq r $. Let $ n=\sum_{t=1}^{r}k_{t}+r-1 $, and
\begin{equation}
C_{\lambda}= C_{1,\lambda}+\alpha_{2}\lambda^{-k_{1}+1}C_{2,\lambda}+\dots +\alpha_{r}\lambda^{-k_{1}-\dots -k_{r-1}+1}C_{r,\lambda}.
\notag\end{equation}
Obviously, this is a $ \lambda $-Casimir family.
It is easy to show that for generic values of $ \alpha_{2},\dots ,\alpha_{r} $ the first
$ n+1 $ Taylor coefficients $ H_{0},\dots ,H_{n} $ of $ C_{\lambda} $ are independent, which finishes
the proof. \end{proof}
\begin{remark} Since the functions $ H_{i}^{\left(t\right)} $ of the anchored Lenard scheme are
obtained by doing manipulations (taking Taylor coefficients) with Casimir
functions, they can be pushed down to the web $ {\mathcal B}_{M} $ of $ M $. Thus they should
be considered as action functions on $ M $ (see Section~\ref{h02}).
In interesting cases (see Section~\ref{h55} and \cite{Pan99Ver}) the functions
$ H_{i}^{\left(t\right)} $ provide a local coordinate system on $ {\mathcal B}_{M} $. (This shows that in fact
$ {\mathcal B}_{U} $ is a smooth manifold if $ U $ is a small subset of $ M $.) In these cases the
submanifolds $ \left\{H_{i}^{\left(t\right)}=\operatorname{const}_{i}^{\left(t\right)} \mid i\geq0\right\} $ carry a natural local affine
structure, thus one can find a complementary set of {\em angle variables\/} $ \varphi_{j} $
such that the functions $ \left\{H_{i},\varphi_{j}\right\}_{k}\buildrel{\text{def}}\over{=}c_{\text{ijk}} $ depend on $ H_{l} $ only\footnote{Another problem is to find a change of variables in the action variables
$ \overset{\,\,{}_\circ}{H}_{i}=\overset{\,\,{}_\circ}{H}_{i}\left(H_{0},\dots \right) $ such that the corresponding functions $ \overset{\,\,{}_\circ}{c}_{\text{ijk}} $ become as simple as
possible. As Theorem~\ref{th47.13} shows, in general it is {\em not possible\/} to
make all $ \overset{\,\,{}_\circ}{c}_{\text{ijk}} $ into constants. However, it is obviously possible for
bihamiltonian structures with constant coefficients, thus for flat
structures.}.
\end{remark}
\begin{example} \label{ex48.80}\myLabel{ex48.80}\relax Consider the bihamiltonian structure defined by
~\eqref{equ47.10}. In this case $ r=1 $, and Casimir functions are functions of $ x $ and
$ y $ only. Thus $ H_{i} $ are functions of $ x $ and $ y $ too. Moreover, one can write an
explicit formula for $ H_{i} $.
Indeed, let $ \Phi\left(x,y\right)=\frac{\partial f/\partial x}{\partial f/\partial y} $. Obviously, the symplectic leaves
for $ \left\{,\right\}_{1}+\lambda^{-1}\left\{,\right\}_{2} $ can be described as surfaces $ \left\{\left(x,y,z\right) \mid y=\Psi\left(x\right)\right\} $, here $ \Psi $
is a solution of the ODE
\begin{equation}
\frac{d\Psi}{dx}=-\lambda^{-1}\Phi\left(x,\Psi\right)
\label{equ48.81}\end{equation}\myLabel{equ48.81,}\relax
Given $ \left(x_{0},y_{0}\right) $ which is close to (0,0), let $ \Psi_{\lambda,x_{0},y_{0}}\left(x\right) $ be the solution of
~\eqref{equ48.81} which passes through the point $ \left(x_{0},y_{0}\right) $. Let
$ F_{\lambda}\left(x_{0},y_{0}\right)\buildrel{\text{def}}\over{=}\Psi_{\lambda,x_{0},y_{0}}\left(0\right) $. Obviously, $ F_{\lambda}\left(x,y\right) $ is well-defined for large
$ |\lambda| $ and small $ \left(x,y\right) $. Moreover, $ F_{\lambda} $ is a Casimir function for $ \left\{,\right\}_{1}+\lambda^{-1}\left\{,\right\}_{2} $.
Taking Laurent coefficients of $ F_{\lambda} $ near $ \lambda=\infty $, one obtains functions $ H_{i} $
from the anchored Lenard scheme. Obviously,
\begin{equation}
H_{0}\left(x,y\right) = y,\qquad H_{1}\left(x,y\right)=\int_{0}^{x}\frac{\partial f\left(t,y\right)/\partial t}{\partial f\left(t,y\right)/\partial y}dt=x+o\left(x\right).
\notag\end{equation}
This implies that all other functions $ H_{i} $ depend on $ H_{0} $ and $ H_{1} $. One can see
that $ z $ provides an example of an angle variable, and any other angle
variable can be written as $ a\left(x,y\right)z+b\left(x,y\right) $ with arbitrary $ a\left(x,y\right) $ and
$ b\left(x,y\right) $.
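For illustration, take $ f\left(x,y\right)=x+y $ (assuming this is an admissible choice
in~\eqref{equ47.10}; it corresponds to the flat case of the proposition below).
Then $ \Phi\equiv1 $, the ODE~\eqref{equ48.81} becomes $ \frac{d\Psi}{dx}=-\lambda^{-1} $, thus
$ \Psi_{\lambda,x_{0},y_{0}}\left(x\right)=y_{0}-\lambda^{-1}\left(x-x_{0}\right) $ and
\begin{equation}
F_{\lambda}\left(x_{0},y_{0}\right)=y_{0}+\lambda^{-1}x_{0},
\notag\end{equation}
so that $ H_{0}=y $, $ H_{1}=x $, and $ H_{i}=0 $ for $ i\geq2 $, in accordance with the formulas
above.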
\end{example}
Summing up, we obtain
\begin{proposition} The bihamiltonian structure $ M_{f} $ given by~\eqref{equ47.10} is
completely integrable by the anchored Lenard scheme. If $ \varphi $ satisfies
~\eqref{equ47.20} and $ \varphi\left(x,y\right)\not\equiv x+y $, then $ M_{\varphi} $ is not flat. \end{proposition}
\begin{remark} It is possible to provide similar examples of homogeneous but
not flat bihamiltonian structures of any given type. In Section~\ref{h55} we
will see that all these structures are completely integrable by the
anchored Lenard scheme. In the case of type $ \left(2k-1\right) $, $ k\in{\mathbb N} $, one can write
such a bihamiltonian structure\footnote{In a slightly different language such bihamiltonian structures were
described in \cite{GelZakhWeb} and \cite{GelZakh93}.} based on $ k-1 $ functions
$ \varphi_{1}\left(x,y\right),\dots ,\varphi_{k-1}\left(x,y\right) $ of two complex variables (though one cannot do it
as explicitly as in~\eqref{equ47.10}). Any two of these bihamiltonian structures
are not locally isomorphic, thus only one of them (for any given $ k\in{\mathbb N} $) is
flat. What is very surprising is that (apparently) they did not appear in
examples of integrable systems arising in problems of mathematical
physics. \end{remark}
\begin{remark} \label{rem48.91}\myLabel{rem48.91}\relax Let us point out the relation of the anchored Lenard
scheme with the {\em algebraic Zakharov\/}--{\em Shabat scheme\/} of \cite{DriSok84Alg}.
Recall how the latter scheme works. Given a Poisson structure $ \left\{,\right\} $ on $ M $ and a
function $ H $ on $ M $, define the {\em Hamiltonian vector field\/} $ {\mathcal V}_{H} $ of $ H $ by the
identity $ {\mathcal V}_{H}\cdot f=\left\{H,f\right\} $ for any function $ f $ on $ M $. Given two Poisson
structures $ \left\{,\right\}_{1} $, $ \left\{,\right\}_{2} $, one obtains two Hamiltonian vector fields $ {\mathcal V}_{H}^{\left(1\right)} $,
$ {\mathcal V}_{H}^{\left(2\right)} $. Note that the Hamiltonian vector field of $ H $ for the bracket
$ \lambda\left\{,\right\}_{1}+\left\{,\right\}_{2} $ is $ \lambda{\mathcal V}_{H}^{\left(1\right)}+{\mathcal V}_{H}^{\left(2\right)} $.
Consider a family $ {\mathcal H}_{\lambda} $ of functions on $ M $ which depends polynomially on
$ \lambda $. Say that a vector field $ V $ on $ M $ is {\em associated\/} with $ {\mathcal H}_{\lambda} $ if
$ V=\lambda{\mathcal V}_{{\mathcal H}_{\lambda}}^{\left(1\right)}+{\mathcal V}_{{\mathcal H}_{\lambda}}^{\left(2\right)} $ for any $ \lambda $, in particular, for the association to hold,
the right-hand side should not depend on $ \lambda $. The associated vector fields
are the central tool of the algebraic Zakharov--Shabat scheme. In many
examples such vector fields commute, and are plentiful enough to
completely integrate the bihamiltonian structure.
To explain this phenomenon write $ {\mathcal H}_{\lambda}=\sum_{k=0}^{K}H_{k}\lambda^{K-k} $. Clearly, the finite
sequence $ \left(H_{k}\right) $ satisfies the same conditions as an anchored formal $ \lambda $-family:
function $ H_{0} $ is a Casimir function for $ \left\{,\right\}_{1} $, and the relation~\eqref{equ48.20}
holds. Moreover, any vector field in the span of Hamiltonian vector
fields of $ \left(H_{k}\right) $ w.r.t.~any Poisson structure $ \lambda_{1}\left\{,\right\}_{1}+\lambda_{2}\left\{,\right\}_{2} $ can be written
as an associated vector field of an appropriate family.
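Indeed, substituting $ {\mathcal H}_{\lambda}=\sum_{k=0}^{K}H_{k}\lambda^{K-k} $ into $ V=\lambda{\mathcal V}_{{\mathcal H}_{\lambda}}^{\left(1\right)}+{\mathcal V}_{{\mathcal H}_{\lambda}}^{\left(2\right)} $ and collecting
powers of $ \lambda $, one gets: the coefficient at $ \lambda^{K+1} $ gives $ {\mathcal V}_{H_{0}}^{\left(1\right)}=0 $, i.e., $ H_{0} $
is a Casimir function for $ \left\{,\right\}_{1} $; the coefficient at $ \lambda^{K-k} $, $ 0\leq k<K $, gives
\begin{equation}
{\mathcal V}_{H_{k+1}}^{\left(1\right)}+{\mathcal V}_{H_{k}}^{\left(2\right)}=0,
\notag\end{equation}
which is~\eqref{equ48.20} written in terms of vector fields; and the coefficient
at $ \lambda^{0} $ gives $ V={\mathcal V}_{H_{K}}^{\left(2\right)} $.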
Thus one can consider the algebraic Zakharov--Shabat scheme as
a different formulation of the anchored Lenard scheme. \end{remark}
\section{Lenard-integrable structures }\label{h55}\myLabel{h55}\relax
Here we show that the class of bihamiltonian structures for which
the anchored Lenard scheme gives ``many'' functions in involution coincides
with the class of homogeneous structures. In fact, since our approach to
Lenard scheme is based on a formal analogue of $ \lambda $-Casimir families, the
results of this section are closely related to those in \cite{Bol91Com} (compare
with discussion of ``completeness'' in \cite{Pan99Ver}).
\begin{definition} The {\em action dimension\/} of a Poisson structure $ \left(M,\left\{,\right\}_{1}\right) $ of
constant corank $ r $ is $ \frac{\dim M+r}{2} $. The action dimension of an arbitrary
Poisson structure on $ M $ at $ m_{0}\in M $ is the minimum action dimension of open
subsets $ U\subset M $ which contain $ m_{0} $ in their closure, and such that the Poisson
structure is of constant corank on $ U $. \end{definition}
This definition gives a {\em lower\/} bound on the number of functions in
involution which are enough to completely integrate the dynamical system
on $ M $ given by some Hamiltonian $ H $. Indeed, in the case of constant corank
$ r $ one needs $ r $ functions to distinguish symplectic leaves, and $ \frac{\dim
M-r}{2} $ functions to provide action variables inside the leaves.
To do the same in the case of a bihamiltonian structure, introduce
\begin{definition} The {\em action dimension\/} of a complex vector space $ V $ with two
skewsymmetric bilinear pairings is $ \frac{\dim V +r}{2} $, here $ r $ is the number of
Kronecker blocks of $ V $. \end{definition}
\begin{definition} The {\em action dimension\/} at $ m_{0}\in M $ of a bihamiltonian structure on
$ M $ is the lower limit of action dimensions of\footnote{If $ M $ is analytic, one should consider $ {\mathcal T}_{m}^{*}M $ instead of $ {\mathcal T}_{m}^{*}M\otimes{\mathbb C} $.} $ {\mathcal T}_{m}^{*}M\otimes{\mathbb C} $ for $ m \to m_{0} $. \end{definition}
Note that the number of Kronecker blocks of a pair of skewsymmetric
pairings $ \left(,\right)_{1} $, $ \left(,\right)_{2} $ is equal to $ \min _{\lambda_{1},\lambda_{2}}\dim \operatorname{Ker} \left(\lambda_{1}\left(,\right)_{1}+\lambda_{2}\left(,\right)_{2}\right) $, here $ \operatorname{Ker} $
denotes the null-space of the pairing. Thus the action dimension of a
bihamiltonian structure provides a {\em lower\/} bound on the number of functions
in involution necessary to completely integrate the structure w.r.t.~{\em at
least one particular\/} Poisson structure of the form $ \lambda_{1}\left\{,\right\}_{1}+\lambda_{2}\left\{,\right\}_{2} $ on an
open subset of $ M $ near $ m_{0} $.
\begin{definition} Call a bihamiltonian structure on $ M $ {\em Lenard-integrable\/} at
$ m_{0}\in M $ if the number of independent functions provided by the anchored
Lenard scheme in an appropriate neighborhood of $ m_{0} $ coincides with the
action dimension of $ M $ at $ m_{0} $.
Call a bihamiltonian structure on $ M $ {\em strictly Lenard-integrable\/} at $ m_{0} $
if it is Lenard-integrable at $ m_{0} $ and the sequences $ H_{i}^{\left(t\right)} $ of the anchored
Lenard scheme can be continued for $ i>k_{t} $ as well. \end{definition}
\begin{remark} Recall that Section~\ref{h48} describes the anchored Lenard scheme
as a formal-series counterpart of $ \lambda $-Casimir families. For this
description to work one needs to assume some constant rank conditions, as
in Proposition~\ref{prop48.10}. The condition of Proposition~\ref{prop48.10} was
not very restrictive, since one could achieve it by a small deformation
of $ \left(m_{0},\lambda^{0}\right) $. However, in the anchored Lenard scheme $ \lambda^{0} $ is fixed to be $ \infty $,
thus the restriction of Proposition~\ref{prop48.10} is in fact not vacuous. Thus
Lemma~\ref{lm48.30} {\em does not\/} imply that any Lenard-integrable structure is
strictly Lenard-integrable. \end{remark}
\begin{theorem} \label{th55.50}\myLabel{th55.50}\relax If a bihamiltonian system on $ M $ is strictly
Lenard-integrable at $ m_{0}\in M $, then it is homogeneous on an open subset $ U $ of
$ M $ with $ m_{0} $ being in the closure of $ U $. \end{theorem}
\begin{proof} Indeed, if a structure is Lenard-integrable at $ m_{0} $, then it is
also Lenard-integrable at $ m $ for $ m $ in an appropriate open subset of $ M $. It
is easy to show that by decreasing this subset $ U $ one may assume that at
any point $ m\in U $ the sizes of Kronecker blocks of the pair of pairings on
$ {\mathcal T}_{m}^{*}M\otimes{\mathbb C} $ are the same.
Functions $ H_{i}^{\left(t\right)} $, $ 1\leq t\leq r $, $ 0\leq i\leq k_{t} $, given by the anchored Lenard scheme
provide a mapping $ {\mathbit H}\colon U \to {\mathbb R}^{K} $, $ K=\sum_{t}\left(k_{t}+1\right) $. Decreasing $ U $ yet more, we may
assume that the differential of this mapping is of constant rank $ K $
(recall that components of $ {\mathbit H} $ are independent). Fix a point $ m\in U $ and $ t $,
$ 1\leq t\leq r $. Let $ \beta_{i}=dH_{i}^{\left(t\right)}|_{m}\in{\mathcal T}_{m}^{*}M $. Let $ W_{{\mathbb R}}={\mathcal T}_{m}^{*}M $, $ W=W_{{\mathbb R}}\otimes{\mathbb C} $. By Equation~\eqref{equ48.20},
$ \beta_{0} $ is in the null space of pairing $ \left(,\right)_{1} $ on $ W $, and $ \left(\beta_{i},w\right)_{2}=\left(\beta_{i+1},w\right)_{1} $ for
any $ w\in W $.
An immediate check shows that if $ W\simeq{\mathcal J}_{2k,\mu} $, $ \mu\in{\mathbb C}{\mathbb P}^{1} $, then $ \beta_{i}=0 $, $ i=0,1,\dots $.
Similarly, if $ W\simeq{\mathcal K}_{2k-1} $, then all vectors $ \beta_{i} $ are in the subspace
$ W_{1}=\left<{\mathbit w}_{0},{\mathbit w}_{2},\dots ,{\mathbit w}_{2k-2}\right> $ of $ {\mathcal K}_{2k-1} $. The dimension of this subspace is $ k $ (it
is the same subspace which appears in a similar context in Lemma
~\ref{lm25.20}). In the general case, taking a decomposition of $ W $ into a sum of
indecomposable components, one can see that all vectors $ \beta_{i} $ are in the sum
of Kronecker blocks of $ W $, moreover, they are in a direct sum of subspaces
$ W_{1} $ for these blocks.
This shows that $ N\leq\sum_{t=1}^{r}k_{t} $, here $ {\mathcal K}_{2k_{t}-1} $, $ 1\leq t\leq r $, are Kronecker blocks
of $ W $. The restriction on the action dimension shows that $ \dim
W\leq\sum_{t=1}^{r}\left(2k_{t}-1\right) $, thus $ W $ has no Jordan blocks, which finishes the proof of
the theorem. \end{proof}
\begin{proposition} \label{prop55.60}\myLabel{prop55.60}\relax Any homogeneous bihamiltonian structure is
strictly Lenard-integrable on small open subsets. \end{proposition}
\begin{proof} A tiny modification of the above proof together with
Proposition~\ref{prop48.10} imply this statement immediately. \end{proof}
This shows that the ``strict'' anchored Lenard scheme integrates
homogeneous structures and only them. Note that a linear combination of
brackets of a homogeneous structure is never symplectic.
\begin{remark} The Lenard schemes of \cite{Mag78Sim,GelDor79Ham,KosMag96Lax}
differ from what we describe here, the difference being that they
consider non-anchored formal $ \lambda $-families. Though our condition is more
restrictive, note that in applications the Lenard scheme usually provides
an anchored $ \lambda $-family. Moreover, in non-symplectic cases there is no
simple way to find a non-anchored family, thus it is not obvious whether
non-anchored Lenard scheme may be used to integrate a system (unless
applied to the traces of powers of recursion operator). \end{remark}
\begin{remark} \label{rem55.80}\myLabel{rem55.80}\relax The amount of our knowledge about classification of
bihamiltonian structures is not enough to describe finite-dimensional
Lenard-integrable systems which are not strict. The situation is slightly
more promising if one considers non-strict structures for which {\em one\/}
anchored Lenard chain provides enough functions in involution.
In this case slightly more elaborate arguments than those in the
proof of Theorem~\ref{th55.50} show that there is an open subset $ U\subset M $ such that
at a point $ m $ of $ U $ the pair of brackets in $ {\mathcal T}_{m}^{*}M $ has one Jordan block only,
and this block is of the form $ {\mathcal J}_{2k,\infty} $. (The remaining blocks are
Kronecker.)
In particular, if at least one linear combination $ \lambda_{1}\left\{,\right\}_{1}+\lambda_{2}\left\{,\right\}_{2} $ is
symplectic near $ m_{0} $, then there are no Kronecker blocks. Thus the pairings
on $ {\mathcal T}_{m}^{*}M $ are isomorphic to $ {\mathcal J}_{2k,\infty} $ for any $ m\in U $.
Such symplectic structures were classified in \cite{Tur89Cla}, they turn
out to be flat (thus isomorphic to the natural bihamiltonian structure on
the dual space to the vector space $ {\mathcal J}_{2k,\infty} $). These are exactly the
structures for which the arguments of \cite{Mag78Sim} and \cite{GelDor79Ham} are
actually applicable to {\em anchored\/} formal $ \lambda $-families. It is again an
interesting question to find physically interesting bihamiltonian
structures of this form. \end{remark}
\begin{remark} Note the asymmetry between Theorem~\ref{th55.50} and Proposition
~\ref{prop55.60}: one of them is applicable on small open neighborhoods of any
point, the other on small open neighborhoods of a {\em dense\/} collection of
points. Note that \cite{Pan99Ver} introduces a more general notion than
homogeneity: bihamiltonian structure is {\em complete\/} if the pairs of pairings
at $ {\mathcal T}_{m}^{*}M $ for any $ m\in M $ do not contain a Jordan block (thus the condition of
Kronecker blocks having the same sizes for all the points of $ M $ is
dropped). For complete structures Proposition~\ref{prop48.10} is applicable
for any point of $ M $, and it is easy to see that the following statement
holds: \end{remark}
\begin{amplification} The class of bihamiltonian structures which are strictly
Lenard-integrable at any point of $ M $ coincides with the class of complete
bihamiltonian structures. \end{amplification}
\section{Bihamiltonian Toda lattices }\label{h0}\myLabel{h0}\relax
\begin{definition} The {\em open Toda lattice\/} (\cite{FadTakh87Ham}) is the
$ \left(2k+1\right) $-dimensional vector space $ V_{2k+1} $ over $ {\mathbb C} $ with coordinates $ v_{0},\dots ,v_{2k} $
and the two compatible Poisson brackets defined as follows. The bracket
$ \left\{,\right\}_{1} $ is defined by the condition $ \left\{v_{i},v_{j}\right\}_{1}=0 $ for $ |i-j|>1 $, and
\begin{equation}
\left\{v_{2l},v_{2l\pm1}\right\}_{1} = \mp v_{2l\pm1}.
\label{equ0.20}\end{equation}\myLabel{equ0.20,}\relax
The bracket $ \left\{,\right\}_{2} $ is defined by the condition $ \left\{v_{i},v_{j}\right\}_{2}=0 $ for $ |i-j|>2 $, and
\begin{equation}
\begin{aligned}
\left\{v_{2l},v_{2l\pm1}\right\}_{2} & = \mp v_{2l}v_{2l\pm1},
\\
\left\{v_{2l},v_{2l+2}\right\}_{2} & = -2v_{2l+1}^{2},
\\
\left\{v_{2l-1},v_{2l+1}\right\}_{2} & = -\frac{1}{2}v_{2l-1}v_{2l+1},
\end{aligned}
\label{equ0.10}\end{equation}\myLabel{equ0.10,}\relax
for all $ l $ such that the left-hand sides make sense. \end{definition}
We denote a point of $ V_{2k+1} $ by $ {\mathbit v} $. Define the transformation $ {\mathfrak T}_{\lambda} $, $ \lambda\in{\mathbb C} $, by
\begin{equation}
{\mathfrak T}_{\lambda}\colon V_{2k+1} \to V_{2k+1}\colon {\mathbit v} \mapsto {\mathbit v}+\lambda{\mathbit v}^{0},\qquad {\mathbit v}^{0}=\left(1,0,1,0,1,\dots ,0,1\right).
\label{equ0.15}\end{equation}\myLabel{equ0.15,}\relax
Translating bracket $ \left\{,\right\}_{2} $ by the transformation $ {\mathfrak T}_{-\lambda} $, one obtains a Poisson
bracket $ \left\{,\right\}^{\left(\lambda\right)} $ which depends on a parameter $ \lambda $.
\begin{remark} \label{rem01.57}\myLabel{rem01.57}\relax Note that for any $ i $, $ j $ the bracket $ \left\{v_{i},v_{j}\right\}_{2} $ depends
linearly on $ v_{2l} $, $ l=0,\dots ,k $, thus $ \left\{,\right\}^{\left(\lambda\right)} $ depends linearly on $ \lambda $. In fact
$ \left\{,\right\}^{\left(\lambda\right)} $ may be written as
\begin{equation}
\left\{,\right\}^{\left(\lambda\right)} =\lambda\left\{,\right\}_{1} + \left\{,\right\}_{2}.
\notag\end{equation}
One can use this remark to simplify the proof of compatibility and
Poisson property of brackets $ \left\{,\right\}_{1} $ and $ \left\{,\right\}_{2} $. Indeed, if we know that $ \left\{,\right\}_{2} $
is Poisson, then $ \left\{,\right\}^{\left(\lambda\right)} $ is Poisson, thus so is $ \left\{,\right\}_{1} $ as a limit of $ \left\{,\right\}^{\left(\lambda\right)}/\lambda $ as $ \lambda\to\infty $. To
check that $ \left\{,\right\}_{2} $ is Poisson, one can use the symmetry of~\eqref{equ0.10} of the
form $ l \mapsto 2m\pm l $, so it is enough to check Jacobi identity for $ v_{0},v_{1},v_{2} $,
for $ v_{1},v_{2},v_{3} $, for $ v_{0},v_{1},v_{3} $, for $ v_{0},v_{2},v_{3} $, for $ v_{0},v_{2},v_{4} $, and for $ v_{1},v_{3},v_{5} $.
\end{remark}
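The finite list of identities mentioned in the remark can also be checked mechanically. The following sketch (an added illustration, not part of the original argument; it assumes SymPy and transcribes the conventions of~\eqref{equ0.20} and~\eqref{equ0.10}) verifies the Jacobi identity for $ \lambda\left\{,\right\}_{1}+\left\{,\right\}_{2} $ on $ V_{7} $ with a symbolic $ \lambda $, so the Poisson property of both brackets and their compatibility are confirmed at once.

```python
import sympy as sp

k = 3
n = 2*k + 1                       # dimension of V_{2k+1}
lam = sp.symbols('lambda')
v = sp.symbols(f'v0:{n}')

def br1(i, j):                    # {v_i, v_j}_1, transcribed from (equ0.20)
    if i > j:
        return -br1(j, i)
    if j != i + 1:
        return sp.Integer(0)      # zero for |i-j| > 1 (and for i == j)
    return -v[j] if i % 2 == 0 else -v[i]

def br2(i, j):                    # {v_i, v_j}_2, transcribed from (equ0.10)
    if i == j:
        return sp.Integer(0)
    if i > j:
        return -br2(j, i)
    if j - i > 2:
        return sp.Integer(0)      # zero for |i-j| > 2
    if j == i + 1:
        return -v[i]*v[j]
    # j == i + 2
    return -2*v[i+1]**2 if i % 2 == 0 else -sp.Rational(1, 2)*v[i]*v[j]

# bivector of lambda*{,}_1 + {,}_2 with a symbolic lambda
P = sp.Matrix(n, n, lambda i, j: lam*br1(i, j) + br2(i, j))

def jacobi(i, j, l):              # cyclic sum {v_i,{v_j,v_l}} + ...
    return sp.expand(sum(P[i, m]*sp.diff(P[j, l], v[m])
                         + P[j, m]*sp.diff(P[l, i], v[m])
                         + P[l, m]*sp.diff(P[i, j], v[m])
                         for m in range(n)))

assert all(jacobi(i, j, l) == 0
           for i in range(n) for j in range(n) for l in range(n))
```

Vanishing identically in $ \lambda $ covers the six triples listed in the remark among all the others.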
\begin{definition} The {\em infinite Toda lattice\/} is the manifold with coordinates
$ v_{l} $, $ l\in{\mathbb Z} $, and the Poisson brackets\footnote{These brackets are well-defined on functions which depend on a finite
number of coordinates $ v_{l} $ only.}~\eqref{equ0.20},~\eqref{equ0.10}. Considering
sequences $ v_{l} $ with period $ 2k $, one obtains a pair of well-defined Poisson
brackets on a $ 2k $-dimensional subvariety. Denote this bihamiltonian
structure by $ V_{2k} $, call it the {\em periodic Toda lattice}. \end{definition}
In Sections~\ref{h4} and~\ref{h10} we prove that open dense subsets of the
bihamiltonian structures $ V_{2k+1} $ and $ V_{2k} $ are Kronecker bihamiltonian
structure. In other words, in these sections we prove the following
theorems:
\begin{theorem} \label{th01.60}\myLabel{th01.60}\relax The open Toda lattice (of dimension $ 2k-1 $) is a
bihamiltonian structure which is generically Kronecker of type $ \left(2k-1\right) $. \end{theorem}
\begin{theorem} \label{th01.70}\myLabel{th01.70}\relax The periodic Toda lattice (of dimension 2k) is a
bihamiltonian structure which is generically Kronecker of type $ \left(2k-1,1\right) $.
\end{theorem}
\begin{remark} \label{rem0.20}\myLabel{rem0.20}\relax Note that one can also consider a manifold $ \widetilde{V}_{2k} $ with
coordinates $ v_{0},\dots ,v_{2k-1} $ and brackets~\eqref{equ0.20},~\eqref{equ0.10}. It is also
bihamiltonian, but it is not a Kronecker structure, so it cannot be
described by the methods of this paper. Say, at a generic point both the
Poisson structures are in fact symplectic, while all linear combinations
of Poisson structures of a Kronecker structure are degenerate. While
this structure may be described by the means of \cite{Tur89Cla,Mag88Geo,
Mag95Geo,McKeanPC,GelZakh93}, note that in applications $ \widetilde{V}_{2k} $
appears not by itself, but as a reduction of the structure $ V_{2k+1} $ w.r.t.~
forgetting the variable $ v_{2k} $.
This supports the point of view from Section~\ref{h005} that Kronecker
structures are more important in applications than structures which may
be described in symplectic terms. \end{remark}
\section{Casimir families on the open Toda lattice }\label{h4}\myLabel{h4}\relax
Apply the description of Section~\ref{h2} to the bihamiltonian Toda
structure. First, construct a family of would-be semi-Casimir functions
$ F_{\lambda} $, $ \lambda\in{\mathbb C} $.
Consider the inclusion $ \iota $ of $ V_{2k+1} $ into $ \operatorname{Mat}\left(k+1,k+1\right) $ which sends
$ \left(v_{0},\dots ,v_{2k}\right) $ to a symmetric $ 3 $-diagonal matrix with diagonal elements
$ \left(v_{0},v_{2},\dots ,v_{2k}\right) $ and over-diagonal elements $ \left(v_{1},v_{3},\dots ,v_{2k-1}\right) $. Taking
the determinant of the resulting matrix, one obtains a polynomial function $ F_{0} $
on $ V_{2k+1} $.
Any proof of integrability of the Toda lattice is based on the following
statement:
\begin{lemma} \label{lm4.05}\myLabel{lm4.05}\relax The function $ F_{0} $ is Casimir, in other words, for any
function $ f $ on $ V_{2k+1} $ the Poisson bracket $ \left\{F_{0},f\right\}_{2} $ is identically 0. \end{lemma}
\begin{proof} Let $ d_{2m} $ be the determinant of the upper-left minor of
$ \iota\left({\mathbit v}\right) $ of size $ \left(m+1\right)\times\left(m+1\right) $. We need to show that $ \left\{v_{l},d_{2k}\right\}_{2}=0 $, $ 0\leq l\leq2k $. Let
us show that $ \left\{v_{l},d_{2m}\right\}_{2}=0 $, $ 0\leq l\leq2m $, $ m\leq k $.
Use induction in $ m $. Plugging the identity
\begin{equation}
d_{2m}=v_{2m}d_{2m-2}-v_{2m-1}^{2}d_{2m-4}
\notag\end{equation}
into $ \left\{v_{l},d_{2m}\right\}_{2} $ shows that the step of induction will work as long as
$ l\leq2m-4 $. On the other hand, due to the obvious symmetry $ v_{t}\leftrightarrow v_{2m-t} $ of brackets
~\eqref{equ0.10} and the determinant $ d_{2m} $, it is enough to check $ \left\{v_{l},d_{2m}\right\}_{2}=0 $ for
$ 0\leq l\leq m $. Moreover, if we know $ \left\{v_{l},d_{2m}\right\}_{2}=0 $ for $ 0\leq l\leq m-1 $, then we know it for
$ m+1\leq l\leq2m $, thus $ \left\{d_{2m},d_{2m}\right\}=\frac{\partial d_{2m}}{\partial v_{m}}\left\{v_{m},d_{2m}\right\} $. Since all these expressions
are polynomials in $ v_{i} $, and $ \frac{\partial d_{2m}}{\partial v_{m}}\not\equiv 0 $, one would be able to conclude
that $ \left\{v_{m},d_{2m}\right\}=0 $.
Thus the only relations to check are $ \left\{v_{l},d_{2m}\right\}_{2}=0 $ for $ 0\leq l\leq m-1 $ such
that $ 2m-l\leq3 $. This leaves only $ \left\{v_{0},d_{2}\right\} $ and $ \left\{v_{1},d_{4}\right\} $, which are easy to
check (using one step of induction for the latter one). \end{proof}
\begin{remark} In Remark~\ref{rem01.57} we used the fact that the right-hand sides
of~\eqref{equ0.10} are linear in variables $ v_{2l} $. The last sentence of the above
proof is the only other place where we use the particular form of
right-hand sides of~\eqref{equ0.10}. \end{remark}
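Lemma~\ref{lm4.05} admits the same kind of mechanical confirmation. The sketch below (an added illustration, assuming SymPy; the bracket is transcribed from~\eqref{equ0.10}) builds $ \iota\left({\mathbit v}\right) $ for $ k=3 $ and checks that $ \left\{F_{0},v_{l}\right\}_{2}=0 $ for all $ l $.

```python
import sympy as sp

k = 3
n = 2*k + 1
v = sp.symbols(f'v0:{n}')

def br2(i, j):                    # {v_i, v_j}_2, transcribed from (equ0.10)
    if i == j:
        return sp.Integer(0)
    if i > j:
        return -br2(j, i)
    if j - i > 2:
        return sp.Integer(0)
    if j == i + 1:
        return -v[i]*v[j]
    return -2*v[i+1]**2 if i % 2 == 0 else -sp.Rational(1, 2)*v[i]*v[j]

# iota(v): symmetric 3-diagonal (k+1)x(k+1) matrix with diagonal
# (v_0, v_2, ..., v_{2k}) and off-diagonal (v_1, v_3, ..., v_{2k-1})
A = sp.zeros(k + 1, k + 1)
for m in range(k + 1):
    A[m, m] = v[2*m]
for m in range(k):
    A[m, m + 1] = A[m + 1, m] = v[2*m + 1]

F0 = sp.expand(A.det())           # the Casimir candidate F_0 = det iota(v)

def bracket_with_F0(l):           # {F_0, v_l}_2
    return sp.expand(sum(sp.diff(F0, v[a])*br2(a, l) for a in range(n)))

assert all(bracket_with_F0(l) == 0 for l in range(n))
```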
Consider the translation $ {\mathfrak T}_{\lambda} $ defined in~\eqref{equ0.15}. Motivated by the
above lemma, define $ F_{\lambda}\buildrel{\text{def}}\over{=}{\mathfrak T}_{\lambda}^{*}F_{0} $, $ \lambda\in{\mathbb C} $. By definition of $ \left\{,\right\}^{\left(\lambda\right)} $, the bracket
$ \left\{F_{\lambda},f\right\}^{\left(\lambda\right)} $ is identically 0 for any function $ f $. On the other hand, for any
given $ {\mathbit v}\in V_{2k+1} $ the function $ F_{\lambda}\left({\mathbit v}\right) $ of $ \lambda $ is the characteristic polynomial of
$ \iota\left({\mathbit v}\right) $. Thus the degree of $ F_{\lambda}\left({\mathbit v}\right)+\left(-1\right)^{k}\lambda^{k+1} $ in $ \lambda $ is $ k $. We obtain
\begin{proposition} The family $ \overset{\,\,{}_\circ}{F}_{\lambda}\left({\mathbit v}\right)\buildrel{\text{def}}\over{=}F_{\lambda}\left({\mathbit v}\right)+\left(-1\right)^{k}\lambda^{k+1} $ of functions on $ V_{2k+1} $
depends polynomially on $ \lambda $ with the degree being $ k $. For each $ \lambda $ the
function $ \overset{\,\,{}_\circ}{F}_{\lambda} $ is a Casimir function for the bracket $ \lambda\left\{,\right\}_{1}+\left\{,\right\}_{2} $. \end{proposition}
However, this proposition is not yet enough to put us in the context
of Theorem~\ref{th1.10}, since we do not know the dimension of the span of
$ d\overset{\,\,{}_\circ}{F}_{\lambda}|_{{\mathbit v}} $ for any given $ {\mathbit v} $ and variable $ \lambda $. To find this dimension, we need to
investigate the functions $ F_{\lambda} $ in more detail.
Denote the set of polynomials of degree $ d $ in $ \lambda $ with the leading
coefficient $ \left(-1\right)^{d} $ by $ {\mathfrak P}_{d} $. Functions $ F_{\lambda} $ (considered as polynomials in $ \lambda $)
define a mapping $ F_{\bullet}\colon V_{2k+1} \to {\mathfrak P}_{k+1} $, $ {\mathbit v} \mapsto F_{\bullet}\left({\mathbit v}\right) $.
To describe the geometry of this mapping, associate with each
$ {\mathbit v}=\left(v_{i}\right)\in V_{2k+1} $ a finite sequence of polynomials $ C_{I_{p}} $ in $ \lambda $. First, construct
a partition of the set of even numbers $ \left\{0,2,\dots ,2k\right\} $: consider numbers
$ 2l+1 $ such that $ v_{2l+1}=0 $ as walls, they separate $ \left\{0,2,\dots ,2k\right\} $ into
continuous intervals $ I_{1},\dots ,I_{q} $, which we call {\em runs}. To each run
$ I_{p}=\left\{2l_{p},2l_{p}+2,\dots ,2l_{p+1}-2\right\} $ associate the characteristic polynomial $ C_{I_{p}} $ of
the corresponding principal minor (with columns and rows $ l_{p}+1,\dots ,l_{p+1} $)
of the matrix $ \iota\left({\mathbit v}\right) $. Obviously, $ \det \left(\iota\left({\mathbit v}\right)-\lambda\right) $ coincides with the product of
polynomials $ C_{I_{p}} $.
Call $ {\mathbit v}\in V_{2k+1} $ {\em $ S $-generic\/} if any two of the polynomials $ C_{I_{p}} $ are mutually
prime. Non-$ S $-generic points form a submanifold of codimension 2: one of
$ v_{2l+1} $ should vanish, and two polynomials should have a common zero.
\begin{proposition} At an $ S $-generic point $ {\mathbit v}\in V_{2k+1} $ the mapping $ F_{\bullet}\colon V_{2k+1} \to {\mathfrak P}_{k+1} $
is a submersion\footnote{I.e., its derivative is an epimorphism.}. At non-$ S $-generic points it is not a submersion. \end{proposition}
\begin{proof} It is enough to consider the case when no $ v_{2l+1} $ vanishes.
Indeed, if we leave all the variables $ v_{m} $ except $ v_{2l+1} $ fixed, then $ \det \iota\left({\mathbit v}\right) $
is quadratic in $ v_{2l+1} $ without the linear term. Thus $ v_{2l+1}=0 $ implies
$ \frac{\partial\det }{\partial v_{2l+1}}=0 $. On the other hand, if $ v_{2l+1}=0 $, the matrix breaks into
two blocks, and the derivatives w.r.t.~other variables can be calculated
when we consider two blocks separately. Now the case when some $ v_{2l+1} $
vanish can be proved by induction using the following obvious
\begin{lemma} \label{lm4.12}\myLabel{lm4.12}\relax The multiplication mapping $ {\mathfrak P}_{a}\times{\mathfrak P}_{b} \to {\mathfrak P}_{a+b} $ is a submersion
at $ \left(P_{1},P_{2}\right) $ iff $ P_{1} $ and $ P_{2} $ are mutually prime. \end{lemma}
In the case when all $ v_{2l+1}\not=0 $ the matrix $ \iota\left({\mathbit v}\right) $ is similar to a
$ 3 $-diagonal matrix with diagonal entries $ v_{2l} $, above-diagonal entries 1,
and below-diagonal entries $ v_{2l+1}^{2} $. Denote by $ Q_{k+1} $ the set of $ 3 $-diagonal
$ \left(k+1\right)\times\left(k+1\right) $ matrices with the above-diagonal entries being 1. Denote by
$ \widetilde{F}_{\bullet} $ the mapping $ Q_{k+1} \to {\mathfrak P}_{k+1} $ of taking the characteristic polynomial.
Denote the diagonal entries of $ q\in Q_{k+1} $ by $ a_{l} $, $ l=0,\dots ,k $, the below-diagonal
entries by $ b_{l} $, $ l=1,\dots ,k $. Now the proposition is an immediate corollary
of the following
\begin{lemma} The mapping $ \widetilde{F}_{\bullet} $ restricted to the subset $ b_{l}\not=0 $, $ l=1,\dots ,k $, is a
submersion. \end{lemma}
To prove this lemma, denote the characteristic polynomial of the
upper-left principal $ l\times l $ minor by $ d_{l} $. The lemma is an immediate corollary
of
\begin{lemma} The mapping $ \left(d_{k},d_{k+1}\right)\colon Q_{k+1} \to {\mathfrak P}_{k}\times{\mathfrak P}_{k+1} $ restricted to the subset
$ b_{l}\not=0 $, $ l=1,\dots ,k $, is a bijection onto the subset of mutually prime
polynomials $ \left(P_{1},P_{2}\right)\in{\mathfrak P}_{k}\times{\mathfrak P}_{k+1} $. \end{lemma}
This lemma is a direct discrete analogue of the inverse problem for
the Sturm--Liouville equation by the spectrum with fixed ends and normalizing
numbers (compare \cite{Lev87Inv}). In fact zeros of $ d_{k+1} $ determine the
spectrum, and
values of $ d_{k} $ at these points determine the normalizing numbers.
\begin{proof} Indeed, extending the sequence $ d_{l} $ by $ d_{0}=1 $, $ d_{-1}=0 $, one can see
that this sequence is uniquely determined by the recurrence relation
\begin{equation}
d_{l}=\left(a_{l-1}-\lambda\right)d_{l-1}-b_{l-1}d_{l-2}.
\notag\end{equation}
From this relation one can immediately see that if $ b_{m} $, $ m<l $, do not
vanish, then $ d_{l} $ and $ d_{l-1} $ are mutually prime. On the other hand, given
mutually prime $ d_{l}\in{\mathfrak P}_{l} $ and $ d_{l-1}\in{\mathfrak P}_{l-1} $, one can uniquely determine $ d_{l-2}\in{\mathfrak P}_{l-2} $
and two numbers $ a_{l-1} $ and $ b_{l-1} $ from the above relation, and $ b_{l-1}\not=0 $. \end{proof}
This finishes the proof of the proposition. \end{proof}
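The descent in the last proof is effective: dividing $ d_{l} $ by $ d_{l-1} $ yields the quotient $ a_{l-1}-\lambda $ and the remainder $ -b_{l-1}d_{l-2} $, with $ b_{l-1} $ fixed by the normalization of leading coefficients in $ {\mathfrak P}_{l-2} $. The following sketch (an added illustration, assuming SymPy; the sample entries are arbitrary nonzero numbers) runs this reconstruction on a concrete matrix and recovers its entries.

```python
import sympy as sp

lam = sp.symbols('lambda')

def minors(a, b):
    """Characteristic polynomials d_l of the upper-left l x l minors of the
    3-diagonal matrix with diagonal a_0..a_k, above-diagonal entries 1,
    below-diagonal entries b_1..b_k (b is indexed so that b[0] = b_1)."""
    d = [sp.Integer(0), sp.Integer(1)]                # d_{-1} = 0, d_0 = 1
    for l in range(1, len(a) + 1):
        prev2 = b[l - 2]*d[-2] if l >= 2 else 0
        d.append(sp.expand((a[l - 1] - lam)*d[-1] - prev2))
    return d[1:]                                      # [d_0, ..., d_{k+1}]

def reconstruct(d_top, d_sub):
    """Recover (a, b) from the mutually prime pair (d_{k+1}, d_k) by running
    d_l = (a_{l-1} - lambda) d_{l-1} - b_{l-1} d_{l-2} backwards."""
    a_list, b_list = [], []
    hi, lo = sp.expand(d_top), sp.expand(d_sub)
    while sp.degree(lo, lam) > 0:
        q, r = sp.div(hi, lo, lam)                    # q = a_{l-1} - lambda
        a_list.append(q.coeff(lam, 0))
        m = sp.degree(lo, lam)                        # lo = d_{l-1}, degree l-1
        b_val = (-1)**m * sp.LC(r, lam)               # r = -b_{l-1} d_{l-2}
        b_list.append(b_val)
        hi, lo = lo, sp.expand(r/(-b_val))            # descend to (d_{l-1}, d_{l-2})
    a_list.append(hi.coeff(lam, 0))                   # last step: d_1 = a_0 - lambda
    return list(reversed(a_list)), list(reversed(b_list))

a = [sp.Integer(x) for x in (3, -1, 2)]               # sample diagonal a_0..a_2
b = [sp.Integer(x) for x in (2, 5)]                   # sample nonzero b_1, b_2
d = minors(a, b)
a_rec, b_rec = reconstruct(d[-1], d[-2])
assert a_rec == a and b_rec == b
```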
We conclude that at an $ S $-generic point $ {\mathbit v} $ the derivatives $ d\overset{\,\,{}_\circ}{F}_{\lambda}|_{{\mathbit v}} $ span
a $ \left(k+1\right) $-dimensional space (since $ \dim {\mathfrak P}_{k+1}=k+1 $). Now the only condition of
Theorem~\ref{th1.10} (in fact, of Amplification~\ref{amp1.12}) which is missing is
the calculation of the rank of $ \lambda_{1}\left\{,\right\}_{1}+\lambda_{2}\left\{,\right\}_{2} $ for an appropriate $ \lambda_{1} $ and
$ \lambda_{2} $. One can easily see that
\begin{lemma} \label{lm6.50}\myLabel{lm6.50}\relax The rank of the bracket $ \left\{,\right\}_{1} $ at the point $ {\mathbit v} $ is $ 2k-2d $, here
$ d $ is the number of indices $ l=0,\dots ,k-1 $, such that $ v_{2l+1}=0 $. \end{lemma}
This shows that on the subset $ v_{2l+1}\not=0 $, $ l=0,\dots ,k-1 $, the bihamiltonian
structure satisfies conditions of Theorem~\ref{th1.10} and Amplification
~\ref{amp1.12}, thus is flat indecomposable. This finishes the proof of Theorem
~\ref{th01.60}.
Moreover, since for a flat indecomposable structure both brackets
have corank 1 everywhere, Lemma~\ref{lm6.50} implies that in a neighborhood of
a point $ {\mathbit v} $ with $ v_{2l+1}=0 $ for some $ l=0,\dots ,k-1 $ the bihamiltonian open Toda
structure is {\em not\/} flat indecomposable.
\section{Periodic Toda lattice }\label{h10}\myLabel{h10}\relax
Recall that $ V_{2k} $ denotes the periodic Toda lattice.
\begin{lemma} The function $ N=v_{1}v_{3}\dots v_{2k-1} $ on $ V_{2k} $ is Casimir w.r.t.~both Poisson
brackets $ \left\{,\right\}_{1} $ and $ \left\{,\right\}_{2} $. \end{lemma}
\begin{proof} Since this function is invariant w.r.t.~translation $ {\mathfrak T}_{\lambda} $, it is
enough to show this for the bracket $ \left\{,\right\}_{2} $. When one calculates $ \left\{N,v_{2l}\right\} $,
only the factor $ v_{2l-1}v_{2l+1} $ of $ N $ matters, and by~\eqref{equ0.10}
$ \left\{v_{2l-1}v_{2l+1},v_{2l}\right\} $ vanishes. Similarly, for $ \left\{N,v_{2l-1}\right\} $ only
$ \left\{v_{2l-3}v_{2l+1},v_{2l-1}\right\} $ matters, and it also vanishes. \end{proof}
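The computation in this proof can be repeated symbolically. The sketch below (an added illustration, assuming SymPy; the brackets are transcribed from~\eqref{equ0.20} and~\eqref{equ0.10} with the periodic identification $ v_{l+2k}=v_{l} $) checks for $ k=3 $ that $ N $ commutes with every coordinate w.r.t.~$ \lambda\left\{,\right\}_{1}+\left\{,\right\}_{2} $ with a symbolic $ \lambda $, which covers both brackets at once.

```python
import sympy as sp

k = 3
n = 2*k                               # periodic Toda lattice V_{2k}
lam = sp.symbols('lambda')
v = sp.symbols(f'v0:{n}')
V = lambda i: v[i % n]                # periodic identification v_{l+2k} = v_l

def br1(i, j):                        # {v_i, v_j}_1 on the infinite lattice
    if i > j:
        return -br1(j, i)
    if j != i + 1:
        return sp.Integer(0)
    return -V(j) if i % 2 == 0 else -V(i)

def br2(i, j):                        # {v_i, v_j}_2 on the infinite lattice
    if i == j:
        return sp.Integer(0)
    if i > j:
        return -br2(j, i)
    if j - i > 2:
        return sp.Integer(0)
    if j == i + 1:
        return -V(i)*V(j)
    return -2*V(i+1)**2 if i % 2 == 0 else -sp.Rational(1, 2)*V(i)*V(j)

def P(a, b):
    """Periodic bracket lam*{,}_1 + {,}_2 between v_a and v_b; since
    2k = 6 > 4, each pair (a, b) has at most one interacting lift."""
    return sum(lam*br1(a, j) + br2(a, j)
               for j in (b - n, b, b + n) if abs(j - a) <= 2)

N = V(1)*V(3)*V(5)                    # N = v_1 v_3 ... v_{2k-1}
for b in range(n):
    assert sp.expand(sum(sp.diff(N, v[a])*P(a, b) for a in range(n))) == 0
```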
Since the dimension of $ V_{2k} $ is even, this shows that symplectic leaves of
$ \lambda_{1}\left\{,\right\}_{1}+\lambda_{2}\left\{,\right\}_{2} $ have codimension at least 2. Any hypersurface $ N=\operatorname{const} $ is
decomposed into a union of such leaves for any $ \left(\lambda_{1},\lambda_{2}\right)\not=\left(0,0\right) $. In
particular, each hypersurface $ N=\operatorname{const} $ carries an odd-dimensional
bihamiltonian structure.
\begin{theorem} \label{th10.10}\myLabel{th10.10}\relax For any $ c\not=0 $ the bihamiltonian structure on the
hypersurface $ N=c $ is generically flat indecomposable. \end{theorem}
Note that this theorem implies Theorem~\ref{th01.70}, since one can
easily modify Theorem~\ref{th2.07} to cover families of bihamiltonian
structures as well:
\begin{amplification} Consider a family of bihamiltonian structures
$ \left(\left\{,\right\}_{1}^{\left(\mu\right)},\left\{\right\}_{2}^{\left(\mu\right)}\right) $ on a manifold $ M $ which depends smoothly on a parameter
$ \mu\in{\mathcal M} $. Suppose that for any $ \mu $ the bihamiltonian structure is flat
indecomposable. Then for any $ m_{0}\in M $ and $ \mu_{0}\in{\mathcal M} $ there is a neighborhood $ U $ of
$ m_{0} $, a neighborhood $ U' $ of $ \mu_{0} $ and a family of coordinate systems $ \left(x_{i}^{\left(\mu\right)}\right) $ on $ U $
depending smoothly on a parameter $ \mu\in U' $ such that the bihamiltonian
structure $ \left(\left\{,\right\}_{1}^{\left(\mu\right)},\left\{\right\}_{2}^{\left(\mu\right)}\right) $ in the coordinate system $ \left(x_{i}^{\left(\mu\right)}\right) $ is given by
~\eqref{equ45.20} for any $ \mu\in U' $. \end{amplification}
Since the bihamiltonian structure corresponding to $ {\mathcal K}_{1} $ has both
brackets equal to 0, this amplification implies Theorem~\ref{th01.70}.
\begin{proof}[Proof of Theorem~\ref{th10.10} ] Associate to a point $ {\mathbit v} $ of the infinite Toda
lattice an infinite $ 3 $-diagonal matrix $ \iota\left({\mathbit v}\right) $ in the same way we did it in
Section~\ref{h4}. Consider a matrix equation $ \iota\left({\mathbit v}\right){\mathbit x}=0 $, here $ {\mathbit x}\in{\mathbb C}^{\infty} $ is a
two-sided infinite vector. Since this equation may be written as the
recursion relation
\begin{equation}
v_{2l-1}x_{l-1}+v_{2l}x_{l}+v_{2l+1}x_{l+1}=0,\qquad l\in{\mathbb Z},
\label{equ10.10}\end{equation}\myLabel{equ10.10,}\relax
this matrix equation has a two-dimensional space of solutions if $ v_{2l-1}\not=0 $
for any $ l\in{\mathbb Z} $.
If $ {\mathbit v} $ is in the periodic Toda lattice, then the equation $ \iota\left({\mathbit v}\right){\mathbit x}=0 $ is
invariant with respect to the shift $ x_{l} \mapsto x_{l+k} $ of coordinates of $ {\mathbit x} $. This
shift induces a linear monodromy transformation $ {\mathcal M}={\mathcal M}\left({\mathbit v}\right) $ in the
$ 2 $-dimensional vector space of solutions. As in Section~\ref{h0}, denote by $ {\mathbit v}^{0} $
an element of $ {\mathbb C}^{\infty} $ with 1 on even positions, 0 on odd positions.
\begin{lemma} If $ v_{2l-1}\not=0 $ for any $ l\in{\mathbb Z} $, then $ \det {\mathcal M}=1 $, and $ \operatorname{Tr} {\mathcal M}\left({\mathbit v}-\lambda{\mathbit v}^{0}\right) $ is a
polynomial of degree $ k $ in $ \lambda $ with the leading coefficient $ N^{-1} $. \end{lemma}
\begin{proof} Indeed, the recursion~\eqref{equ10.10} induces a linear transformation
$ \left(x_{l},x_{l+1}\right)=m_{l}\left(x_{l-1},x_{l}\right)/v_{2l+1} $, $ m_{l}=\left(
\begin{matrix}
0 & v_{2l+1} \\ -v_{2l-1} & -v_{2l}
\end{matrix}
\right) $. In an appropriate
basis $ N\cdot{\mathcal M} $ can be written as $ m_{k}m_{k-1}\dots m_{1} $, and each matrix $ m_{l}=m_{l}\left({\mathbit v}\right) $ has
determinant $ v_{2l-1}v_{2l+1} $. Moreover, $ m_{l}\left({\mathbit v}-\lambda{\mathbit v}^{0}\right) $ is of degree 1 in $ \lambda $ with the
leading term $ \left(
\begin{matrix}
0 & 0 \\ 0 & \lambda
\end{matrix}
\right) $.
Thus $ N\cdot{\mathcal M}\left({\mathbit v}-\lambda{\mathbit v}^{0}\right) $ is a polynomial in $ \lambda $ of degree $ k $ with the leading term
being $ \left(
\begin{matrix}
0 & 0 \\ 0 & \lambda^{k}
\end{matrix}
\right) $, which finishes the proof. \end{proof}
\begin{lemma} \label{lm10.50}\myLabel{lm10.50}\relax The function $ \operatorname{Tr} {\mathcal M}\left({\mathbit v}\right) $ defined on the open subset $ v_{2l-1}\not=0 $,
$ l=1,\dots ,k $, of $ V_{2k} $ is a Casimir function for the Poisson bracket $ \left\{,\right\}_{2} $. \end{lemma}
We do not prove this standard statement about the periodic Toda
lattice. As in the case of Lemma~\ref{lm4.05}, the proof is reduced to a check
of a finite number of identities.
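For a fixed small $ k $ this finite check can be carried out by machine. The sketch below (an added illustration, assuming SymPy) takes $ k=3 $, forms $ N\cdot{\mathcal M}\left({\mathbit v}-\lambda{\mathbit v}^{0}\right)=m_{k}\dots m_{1} $, confirms the determinant and degree claims of the previous lemma, and verifies the Casimir property of Lemma~\ref{lm10.50} via $ N\cdot\operatorname{Tr} {\mathcal M} $ (legitimate since $ N $ is itself Casimir and nonzero on the subset considered).

```python
import sympy as sp

k = 3
n = 2*k
lam = sp.symbols('lambda')
v = sp.symbols(f'v0:{n}')
V = lambda i: v[i % n]                       # periodicity v_{l+2k} = v_l

# transfer matrices at v - lambda*v^0 (only even coordinates are shifted)
def m(l):
    return sp.Matrix([[0, V(2*l + 1)],
                      [-V(2*l - 1), -(V(2*l) - lam)]])

NM = sp.eye(2)
for l in range(1, k + 1):
    NM = m(l)*NM                             # N*M = m_k ... m_1

N = V(1)*V(3)*V(5)                           # N = v_1 v_3 v_5
tr = sp.expand(NM.trace())                   # N * Tr M(v - lambda v^0)

assert sp.expand(NM.det() - N**2) == 0       # det(N*M) = N^2, hence det M = 1
assert sp.degree(tr, lam) == k               # degree k in lambda ...
assert sp.LC(tr, lam) == 1                   # ... with leading coefficient 1

def br2(i, j):                               # {v_i, v_j}_2, as in (equ0.10)
    if i == j:
        return sp.Integer(0)
    if i > j:
        return -br2(j, i)
    if j - i > 2:
        return sp.Integer(0)
    if j == i + 1:
        return -V(i)*V(j)
    return -2*V(i+1)**2 if i % 2 == 0 else -sp.Rational(1, 2)*V(i)*V(j)

def P2(a, b):                                # periodic second bracket
    return sum(br2(a, j) for j in (b - n, b, b + n) if abs(j - a) <= 2)

T = tr.subs(lam, 0)                          # N * Tr M(v)
for b in range(n):
    assert sp.expand(sum(sp.diff(T, v[a])*P2(a, b) for a in range(n))) == 0
```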
The following lemma is obvious:
\begin{lemma} On the open subset $ v_{2l-1}\not=0 $, $ l=1,\dots ,k $, of $ V_{2k} $ the Poisson
bracket~\eqref{equ0.20} has symplectic leaves of codimension 2 given by
the equations $ v_{0}+v_{2}+\dots +v_{2k-2}=C_{0} $, $ v_{1}v_{3}\dots v_{2k-1}=C_{1} $. \end{lemma}
This shows that $ r=2 $ in Proposition~\ref{prop6.15}.
To demonstrate Theorem~\ref{th10.10} the only thing which remains to be
proved is that at a generic point $ {\mathbit v}\in V_{2k} $ the differentials $ d \operatorname{Tr} {\mathcal M}\left({\mathbit v}-\lambda{\mathbit v}^{0}\right)|_{{\mathbit v}} $
for different $ \lambda\in{\mathbb C} $ and the differential of $ N\equiv v_{1}v_{3}\dots v_{2k-1} $ span a
$ \left(k+1\right) $-dimensional vector subspace of $ {\mathcal T}_{{\mathbit v}}^{*}V_{2k} $. It is enough to show that for
a generic $ {\mathbit v} $ the differentials of $ N\cdot\operatorname{Tr} {\mathcal M}\left({\mathbit v}-\lambda{\mathbit v}^{0}\right) $ for different $ \lambda\in{\mathbb C} $
span a $ k $-dimensional vector subspace of the hyperplane $ d\left(v_{1}v_{3}\dots v_{2k-1}\right)=0 $
in $ {\mathcal T}_{{\mathbit v}}^{*}V_{2k} $.
The leading coefficient in $ \lambda $ of $ N\cdot\operatorname{Tr} {\mathcal M}\left({\mathbit v}-\lambda{\mathbit v}^{0}\right) $ is 1, thus the
function $ N\cdot\operatorname{Tr} {\mathcal M}\left({\mathbit v}-\lambda{\mathbit v}^{0}\right)-\lambda^{k} $ defines a mapping $ {\mathfrak M}\colon V_{2k} \to {\mathcal P}_{k-1} $. Again, it is
enough to show that the restriction of this polynomial mapping to
$ H_{c}=\left\{v_{1}v_{3}\dots v_{2k-1}=c\right\} $ is a submersion for a generic $ {\mathbit v} $ and $ c\not=0 $. On the other
hand, multiplication of $ v_{i} $ by the same non-zero constant does not change
$ {\mathcal M}\left({\mathbit v}\right) $, thus if we prove this statement for one $ c\not=0 $, it is true for any
$ c\not=0 $. Thus it is enough to demonstrate this statement for $ c\approx0 $, $ c\not=0 $. Again,
it is enough to show that the restriction of $ {\mathfrak M} $ to an open subset of $ c=0 $
is a submersion.
However, if $ v_{1}=v_{3}=\dots =v_{2k-1}=0 $, then
\begin{equation}
\lambda^{k}+{\mathfrak M}\left({\mathbit v}\right)=\left(\lambda-v_{2}\right)\left(\lambda-v_{4}\right)\dots \left(\lambda-v_{2k}\right),
\notag\end{equation}
thus the restriction of $ {\mathfrak M} $ to $ \left\{v_{1}=v_{3}=\dots =v_{2k-1}=0\right\} $ is a surjection, thus is
a submersion at a generic point. This shows that Theorem~\ref{th1.10} is
applicable, thus the bihamiltonian structure is indeed flat
indecomposable at a generic point. \end{proof}
\section{Lax structures }\label{h60}\myLabel{h60}\relax
The following definition is inspired by \cite{KosMag96Lax}. In that paper
a notion of a Lax operator is introduced: a matrix-valued
function on a bihamiltonian structure which satisfies certain compatibility
relations. However, since these relations are expressed in terms of the
characteristic polynomial of the matrix, it is more convenient to work
directly with the mapping into polynomials.
Recall that $ {\mathcal P}_{n} $ was defined in Section~\ref{h45}. Denote the value at $ \lambda $ of
a polynomial $ p\in{\mathcal P}_{n} $ by $ p|_{\lambda} $.
\begin{definition} Consider a bihamiltonian structure $ \left(M,\left\{,\right\}_{1},\left\{,\right\}_{2}\right) $. Consider a
mapping $ {\mathbit L} $ from $ M $ to the set $ {\mathcal P}_{n-1} $ of polynomials of degree $ n-1 $. This
mapping is a {\em weak Lax structure\/} on $ M $ of rank $ n $ if for any $ \lambda\in{\mathbb R} $ the
function $ C_{\lambda} $ on $ M $ defined by $ m \mapsto {\mathbit L}\left(m\right)|_{\lambda} $ is a Casimir function for
$ \lambda\left\{,\right\}_{1}+\left\{,\right\}_{2} $.
Consider a point $ m_{0}\in M $. Suppose that the action dimension of $ M $ at
$ m_{0}\in M $ is $ n $. A {\em Lax structure\/} on $ M $ near $ m_{0} $ is a weak Lax structure $ {\mathbit L} $ of rank
$ n $ such that the mapping $ {\mathbit L} $ is a submersion. \end{definition}
Note that if the bihamiltonian structure is in fact analytic, then
$ C_{\lambda} $ is Casimir for complex $ \lambda $ too (since the conditions of being a
$ \lambda $-Casimir family are polynomial in $ \lambda $).
\begin{theorem} \label{th60.30}\myLabel{th60.30}\relax If an analytic bihamiltonian structure on $ M $ admits a
Lax structure near $ m_{0}\in M $, and for one particular $ \left(\lambda_{1},\lambda_{2}\right)\in{\mathbb C}^{2} $ the Poisson
structure $ \lambda_{1}\left\{,\right\}_{1}+\lambda_{2}\left\{,\right\}_{2} $ has a constant corank 1, then the bihamiltonian
structure is a Kronecker structure of type $ \left(\dim M\right) $ near $ m_{0} $. \end{theorem}
In other words, the manifold $ M $ is odd-dimensional and one can find a
local coordinate system where both brackets have constant coefficients
and are given by~\eqref{equ45.20}. In particular, all such bihamiltonian
structures of the same dimension are locally isomorphic.
\begin{proof} Reduce this statement to that of Amplification~\ref{amp1.12}.
In our case $ d=n-1 $, and, by the submersion condition, $ \dim W_{1}=n $. Thus the
only thing one needs to show is that $ \dim M=2n-1 $, which follows immediately
from the definition of the action dimension. \end{proof}
\begin{remark} In applications the Poisson bracket $ \left\{,\right\}_{1} $ usually has a much
simpler form than $ \left\{,\right\}_{2} $, thus most of the time one would check the rank
condition for the bracket $ \left\{,\right\}_{1} $. (Recall that for Kronecker structures {\em all\/}
the nonzero linear combinations of brackets have the same rank.) \end{remark}
Let us spell out the relation of our definition with one of
\cite{KosMag96Lax}. Consider the Newton symmetric functions $ s_{k}=\sum_{i}\lambda_{i}^{k} $ of roots
$ \left\{\lambda_{i}\right\} $ of polynomial $ \lambda^{n}+p\left(\lambda\right) $, $ p\in{\mathcal P}_{n-1} $ as functions on $ {\mathcal P}_{n-1} $, let
$ H_{k-1}\buildrel{\text{def}}\over{=}s_{k}\circ{\mathbit L}/k $. Then the conditions of \cite{KosMag96Lax} are that $ H_{k} $, $ k\geq0 $,
satisfy Lenard recursion relations~\eqref{equ48.20}. As in Section~\ref{h48},
consider a formal power series $ c\left(t\right)=\sum_{k\geq1}s_{k}t^{1-k}/k $ in $ t^{-1} $ with coefficients
in functions on $ {\mathcal P}_{n-1} $. Then $ e^{-c\left(t\right)/t}=\Pi_{i}\left(1-\lambda_{i}/t\right)=t^{-n}\left(t^{n}+p\left(t\right)\right) $, $ p\in{\mathcal P}_{n-1} $. Let
$ C\left(t\right)=\sum_{k\geq0}H_{k}t^{-k} $, then $ e^{-C\left(t\right)/t}|_{m}=t^{-n}{\mathbit L}\left(m\right)|_{t} $ for any $ m\in M $.
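The identity $ e^{-c\left(t\right)/t}=\Pi_{i}\left(1-\lambda_{i}/t\right) $ follows by taking logarithms:
\begin{equation}
\log \Pi_{i}\left(1-\lambda_{i}/t\right)=\sum_{i}\log \left(1-\lambda_{i}/t\right)=-\sum_{i}\sum_{k\geq1}\frac{\lambda_{i}^{k}}{kt^{k}}=-\sum_{k\geq1}\frac{s_{k}}{kt^{k}}=-c\left(t\right)/t,
\notag\end{equation}
since $ c\left(t\right)=\sum_{k\geq1}s_{k}t^{1-k}/k $ gives $ c\left(t\right)/t=\sum_{k\geq1}s_{k}t^{-k}/k $.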
Since the latter expression is a formal series in $ t^{-1} $ with a finite
number of non-zero coefficients, it is an anchored formal $ \lambda $-family iff it
is a $ \lambda $-Casimir family. Since for any function $ \alpha $ of one variable $ \alpha\left(C\right) $ is a
Casimir function if $ C $ is such, we conclude that $ {\mathbit L}|_{t} $ is a $ \lambda $-Casimir family
iff $ C\left(t\right) $ is an anchored formal $ \lambda $-family. Thus the condition that $ {\mathbit L} $ is a
weak Lax structure is equivalent to the pair of conditions: of $ H_{k} $
satisfying Lenard recursion relations~\eqref{equ48.20}, {\em and additionally\/} of $ H_{0} $
being a Casimir function for $ \left\{,\right\}_{1} $. This shows
\begin{proposition} Suppose that $ L\colon M \to \operatorname{Mat}\left(n\right) $ is a Lax operator in the
sense of \cite{KosMag96Lax}. Let $ {\mathbit L} $ be the mapping
\begin{equation}
M \to {\mathcal P}_{n-1}\colon m \mapsto \det \left(t\boldsymbol1-L\left(m\right)\right)-t^{n}.
\notag\end{equation}
Then $ {\mathbit L} $ is a weak Lax structure iff $ \operatorname{Tr} L $ is a Casimir function for $ \left\{,\right\}_{1} $.
\end{proposition}
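The generating-series identity $ e^{-c\left(t\right)/t}=t^{-n}\left(t^{n}+p\left(t\right)\right) $ can be sanity-checked for a concrete matrix. The following SymPy sketch (the tridiagonal matrix is an arbitrary illustrative choice, not tied to any particular Lax operator) verifies it for $ n=3 $ in the variable $ u=1/t $:

```python
import sympy as sp

u = sp.symbols('u')  # u = 1/t
n = 3
# An arbitrary tridiagonal (open-Toda-like) matrix; any matrix works here.
L = sp.Matrix([[1, 1, 0],
               [1, 2, 1],
               [0, 1, 3]])

# Newton sums s_k = Tr(L^k); in the variable u = 1/t one has c(t)/t = sum s_k u^k / k
s = [(L**k).trace() for k in range(1, n + 1)]
cu = sum(sp.Rational(1, k) * s[k - 1] * u**k for k in range(1, n + 1))

# Expand e^{-c(t)/t} up to order u^{n+1}
lhs = sp.series(sp.exp(-cu), u, 0, n + 1).removeO()

# t^{-n} det(t*Id - L) = det(Id - u*L), i.e. t^{-n}(t^n + p(t))
rhs = sp.expand((sp.eye(n) - u * L).det())

assert sp.expand(lhs - rhs) == 0
print("e^{-c(t)/t} = t^{-n}(t^n + p(t)) verified for this L")
```

Truncating the exponent at $ k=n $ is harmless: the omitted Newton sums $ s_{k} $, $ k>n $, only affect coefficients of $ u^{n+1} $ and higher.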
As in Section~\ref{h55}, note that in applications the Lenard scheme is
most frequently used when $ \operatorname{Tr} L $ is a Casimir function for $ \left\{,\right\}_{1} $. Note also
that one can consider a weak Lax structure as an ``anchored'' variant of a
Lax operator of \cite{KosMag96Lax} (compare with Remark~\ref{rem48.02} and
Definition~\ref{def48.25}).
\begin{remark} By Theorem~\ref{th01.60}, in conditions of Theorem~\ref{th60.30} an open
subset of the bihamiltonian structure is locally isomorphic to the
structure of Toda lattice. This isomorphism provides the subset $ U $ with a
Lax operator in the most usual sense of this word, i.e., with a mapping
$ L\colon U \to \operatorname{Mat}\left(n\right) $ such that for any action function\footnote{See Section~\ref{h02}.} $ H $ on $ U $ there is a
mapping $ A_{H}\colon U \to \operatorname{Mat}\left(n\right) $ such that $ H $-Hamiltonian flow on $ U $ corresponds to
$ \frac{dL}{dt}=\left[A_{H},L\right] $.
In other words, Theorem~\ref{th60.30} provides a partial explanation for
the relation between Lax operator and Lax--Nijenhuis operators discovered
in \cite{KosMag96Lax}. \end{remark}
\begin{remark} Note that the conditions of Theorem~\ref{th60.30} break into four
separate parts: the condition of being a weak Lax structure, the
condition that coefficients of $ {\mathbit L} $ provide enough functions to completely
integrate $ M $, the submersion condition, and the condition of having small
corank. Note that the corank of the structure cannot be less than 1,
since we {\em require\/} the existence of a Casimir function for any $ \lambda $. Thus the
last two conditions taken together may be interpreted as conditions of
non-degeneracy of the Lax structure. \end{remark}
\begin{nwthrmi} Which conditions on a weak Lax family imply that the
bihamiltonian structure is Kronecker at generic points? \end{nwthrmi}
Conjecture~\ref{con01.100} claims that many bihamiltonian structures
which admit a Lax structure are in fact Kronecker at generic points. An
answer to the above question might provide a better understanding
of the statement of Conjecture~\ref{con01.100}.
\section{Geometric conjectures }\label{h005}\myLabel{h005}\relax
Note that the Theorems~\ref{th01.60},~\ref{th01.70}, and~\ref{th60.30} run against
the common intuition, which says that integrable systems should be
expressed as direct products of two-dimensional blocks. However, this
point of view comes from the symplectic approach to integrable systems,
where everything is {\em forced\/} to be even-dimensional.
The above theorems show that this common intuition has historical
roots only, and some new type of intuition for geometric approach to
integrable systems may be needed.
Our meta-conjecture is that the mindset of ``everything is a product
of odd-dimensional components (given by~\eqref{equ45.20})'' is much more
appropriate for the geometric study of bihamiltonian structures, compare
with Remark~\ref{rem01.95} and Conjecture~\ref{con01.100}.
Again, if one believes in the above meta-conjecture, one can see
that the Procrustean approach of symplectic geometry forces a reduction
of dimension (as in Remark~\ref{rem0.20}, which gives an analogue of
restriction to a hypersurface), which reduces a feature-rich
bihamiltonian structure to a non-rigid symplectic structure.
\begin{remark} \label{rem01.95}\myLabel{rem01.95}\relax Definition~\ref{def01.105} provides an example of micro-local
approach to bihamiltonian systems. By Theorem~\ref{th6.10}, in each tangent
space any bihamiltonian structure decomposes into a direct sum of Jordan
blocks and Kronecker blocks. Thus a natural question arises: given a
bihamiltonian structure $ M $, which indecomposable pairs $ {\mathcal J}_{2k,\lambda} $ and $ {\mathcal K}_{2k-1} $
appear at which points of $ M $?
Theorems~\ref{th01.60} and~\ref{th01.70} answer this question for generic
points of the open and the periodic {\em Toda lattice}. We think we can answer
this question\footnote{After the initial release of this paper M.~Gekhtman explained to us that
the result on the open Toda lattice implies the statements about
the open odd-dimensional Kac--van~Moerbeke--Volterra lattice, as well as a
similar statement about the open relativistic Toda lattice \cite{Ruj90Rel}.
\endgraf
This is an immediate corollary of the existence of local
isomorphisms of these bihamiltonian systems similar to those constructed in
\cite{DeiLi91Poi,Dam94Mul}, see \cite{GekhShap99Non} and \cite{FayGekh99Ele}.} for generic points of the odd-dimensional open or
even-dimensional periodic {\em Kac\/}--{\em van\/}~{\em Moerbeke\/}--{\em Volterra system\/}
\cite{Kacvan75Exp,FerSan97Int}, of the {\em full Toda lattice\/} \cite{Kos79Sol},
and of the multidimensional {\em Euler top\/} \cite{MorPiz96Eul}. In tangent spaces at
generic points the open Toda lattice is an indecomposable Kronecker block,
the periodic Toda lattice is a direct product of indecomposable
$ 1 $-dimensional and $ \left(2k-1\right) $-dimensional Kronecker blocks. The full Toda
lattice and the multidimensional Euler top are products of Kronecker
blocks with the dimensions of components being $ \left(2k-1,2k-3,2k-5,\dots \right) $ and
$ \left(2k-1,2k-5,2k-9,\dots \right) $ respectively.
Additionally, results of \cite{Pan99Ver} show that a similar
decomposition exists for the regular case of Example~\ref{ex002.45}. In this
case the dimensions of components have the form $ 2e_{1}-1,\dots ,2e_{r}-1 $, $ e_{i} $ being
the exponents of the Weyl group of $ {\mathfrak g} $, $ r $ being the rank of $ {\mathfrak g} $. \end{remark}
The above descriptions of tangent spaces together with Theorems
~\ref{th01.60} and~\ref{th01.70} suggest the following
\begin{conjecture} \label{con01.100}\myLabel{con01.100}\relax The odd-dimensional open Volterra system, the
even-dimensional periodic Volterra system, the full Toda lattice, the
multidimensional Euler top, and the regular case of Example~\ref{ex002.45}
are\footnote{Paper \cite{Zakh99Kro} contains a proof of the part of the conjecture related
to Example~\ref{ex002.45}, see the previous footnote for some other cases.} generically Kronecker bihamiltonian structures. \end{conjecture}
As shown in this paper, the powerful methods of \cite{GelZakhWeb,
GelZakh93} are enough to translate some simple properties\footnote{The existence of Casimir functions given by Lemmas~\ref{lm4.05} and~\ref{lm10.50}.} of the open
and the periodic Toda lattices into a description of the {\em local\/} geometry of
these structures. One may hope that it is possible to generalize the
results of \cite{GelZakhWeb,GelZakh93} so that they cover structures with
geometry of tangent spaces as in Remark~\ref{rem01.95}. This would allow one
to prove Conjecture~\ref{con01.100} using some simple results about these
integrable systems\footnote{Again, since the geometry of these systems is very well investigated, it
may be possible to prove this conjecture directly using appropriate
systems of action-angle variables for these manifolds.
\endgraf
However, an approach based on Conjecture~\ref{con01.110} would allow one to
prove Conjecture~\ref{con01.100} using only simple-to-obtain action
variables, i.e., families of Hamiltonians for the above manifolds.}.
Using the language of Section~\ref{h02}, one can state such conjectures in the
following form.
\begin{conjecture} \label{con01.110}\myLabel{con01.110}\relax Suppose that two bihamiltonian structures
$ \left(M,\left\{\right\}_{1},\left\{\right\}_{2}\right) $ and $ \left(M',\left\{\right\}'_{1},\left\{\right\}'_{2}\right) $ are both homogeneous. Consider webs\footnote{See Section~\ref{h02}.} $ {\mathcal B}_{U} $
and $ {\mathcal B}_{U'} $ which correspond to small open subsets $ U\subset M $, $ U'\subset M' $. If webs $ {\mathcal B}_{U} $ and
$ {\mathcal B}_{U'} $ are locally isomorphic, then the bihamiltonian structures on $ M $ and $ M' $
are locally isomorphic. In particular, the types of $ M $ and $ M' $ coincide. \end{conjecture}
This conjecture may be augmented by the following description
of webs for homogeneous structures \cite{Pan99Ver}:
\begin{proposition} The web $ {\mathcal B}_{U} $ corresponding to a small open subset $ U $ of
homogeneous bihamiltonian structure of type $ \left(2k_{1}-1,2k_{2}-1,\dots ,2k_{l}-1\right) $ is a
manifold of dimension $ k_{1}+k_{2}+\dots +k_{l} $, and the subspace $ {\mathfrak C}_{\lambda} $ of the space of
functions on $ {\mathcal B}_{U} $ consists of local equations of a foliation $ {\mathcal F}_{\lambda} $ on $ {\mathcal B}_{U} $ of
codimension $ l $. \end{proposition}
Conjecture~\ref{con01.110}, together with Amplification~\ref{amp1.07}, leads to
the following
\begin{conjecture} \label{con01.120}\myLabel{con01.120}\relax Consider a manifold $ M $ with two compatible Poisson
structures $ \left\{,\right\}_{1} $ and $ \left\{,\right\}_{2} $. Consider a finite set $ L $ with $ r $ elements.
Consider families of smooth functions $ F_{l,\lambda} $, $ l\in L $, $ \lambda\in{\mathbb C} $, on $ M $ such that for
any $ l\in L $ and any $ \lambda\in{\mathbb C} $ the function $ F_{l,\lambda} $ is Casimir w.r.t.~the Poisson
bracket $ \lambda\left\{,\right\}_{1}+\left\{,\right\}_{2} $. Suppose that $ F_{l,\lambda} $ depends polynomially on $ \lambda $
\begin{equation}
F_{l,\lambda}\left(m\right)= \sum_{k=0}^{d_{l}}f_{l,k}\left(m\right)\lambda^{k},
\notag\end{equation}
with smooth coefficients $ f_{l,k}\left(m\right) $. For $ m\in M $ denote by $ W_{1}\left(m\right)\subset{\mathcal T}_{m}^{*}M $ the vector
subspace spanned by the differentials $ df_{l,k}|_{m} $ for all possible $ l $ and
$ 0\leq k\leq d_{l} $. If
\begin{enumerate}
\item
for one particular value $ m_{0}\in M $ one has $ \dim W_{1}\left(m_{0}\right)\geq\frac{\dim M+r}{2} $;
\item
for one particular value $ \left(\lambda_{1},\lambda_{2}\right)\in{\mathbb C}^{2} $ the Poisson structure
$ \lambda_{1}\left\{,\right\}_{1}+\lambda_{2}\left\{,\right\}_{2} $ has at most $ r $ independent Casimir functions on any open
subset of $ M $ near $ m_{0} $;
\item
the degrees $ d_{l} $ satisfy $ \sum_{L}\left(2d_{l}+1\right)\leq\dim M $;
\end{enumerate}
then $ \dim M-r $ is even, $ \dim W_{1}\left(m_{0}\right)=\frac{\dim M+r}{2} $, the degrees $ d_{l} $ satisfy
$ 2\sum_{L}d_{l}+r=\dim M $, and the bihamiltonian structure on $ M $ is Kronecker of
type $ \left(2d_{1}+1,\dots ,2d_{r}+1\right) $ on an open subset $ U\subset M $ such that $ m_{0} $ is in the
closure of $ U $. \end{conjecture}
Conjecture~\ref{con01.120} immediately implies Conjecture~\ref{con01.100},
since the explicit formulae for Hamiltonians for the dynamic systems of
Conjecture~\ref{con01.100} are well-known and may be included into families as
in Conjecture~\ref{con01.120}.
To understand the significance of Conjecture~\ref{con01.120}, note that
by Remark~\ref{rem6.13} all the Kronecker structures of the given type are
locally isomorphic, and obviously satisfy the conditions of the
conjecture. Thus this conjecture provides a criterion of being a
Kronecker structure in terms of the mutual position of Casimir functions
for the combinations of brackets of bihamiltonian structure.
\begin{conjecture} In the settings of Conjecture~\ref{con01.120} if one supposes
that the Poisson structure $ \lambda_{1}\left\{,\right\}_{1}+\lambda_{2}\left\{,\right\}_{2} $ has constant corank $ r $, then one
may weaken the condition on $ \dim W_{1} $ to become $ \dim W_{1}\left(m_{0}\right)\geq\frac{\dim M+r-1}{2} $,
and strengthen the conclusion so that the open subset $ U $ contains $ m_{0} $. \end{conjecture}
The above theorems and conjectures lead one to the following
\begin{nwthrmii} Why does each ``classical'' finite-dimensional bihamiltonian
structure either have an open subset which is Kronecker, or admit a
``natural'' description as a reduction of dimension starting from a larger
bihamiltonian structure which is Kronecker? \end{nwthrmii}
This question is amplified by the fact that in \cite{GelZakhWeb,
GelZakh93} we constructed a huge family of non-Kronecker integrable
bihamiltonian structures (see also examples in Section~\ref{h47} for the
dimension being 3). Such integrable systems are {\em actually nonlinear}, as
opposed to {\em manifestly nonlinear\/} systems, which may become linear after an
appropriate coordinate change (compare with Definition~\ref{def002.43}). One
can see that an answer to the above question would unravel the
mechanism by which the actually nonlinear integrable systems escape the
attention of mathematical physicists.
Note that Theorem~\ref{th01.60} allows one to restate the above question
using direct products of open Toda lattices instead of Kronecker
structures:
{\em Why are many ``classical'' bihamiltonian structures (at generic points)
locally isomorphic to direct products of open Toda lattices?\/}
While Section~\ref{h60} singles out flat indecomposable structures as
those which admit non-degenerate Lax structures, we do not consider this
as a legitimate explanation of the above selection principle. Lax
representation is only one of multiple approaches to integration of
dynamical systems, so explaining the above selection principle by using
Theorem~\ref{th60.30} merely replaces one question (why all the classical
systems are flat) with another one (why all the classical systems admit a Lax
representation).
\begin{remark} Note that a flat bihamiltonian structure of dimension $ d $ may be
extended locally to a $ d\left(d-1\right)/2 $-parametric linear family of Poisson
structures: those which have constant coefficients in the above
coordinate system. Our meta-conjecture about the r\^ole of Kronecker
structures may explain an abundance of multi-hamiltonian structures in
mathematical physics (for example, see \cite{DamPasSok95Tri,OlvRos96Tri,
TsuTakKaj97Tri,BlaFerGom98Som}).\footnote{However, note that what is commonly called a ``multi-hamiltonian''
structure is frequently just a figure of speech: the additional
``brackets'' which augment the bihamiltonian structure are not only not
Poisson (thus do not satisfy the Jacobi identity), but not even brackets
(thus $ \left\{f,g\right\}_{3} $ would be defined only for {\em some\/} $ f $ and $ g $).} \end{remark}
\bibliography{ref,outref,mathsci}
\end{document} | 133,942 |
TITLE: Confidence interval and normal distribution
QUESTION [1 upvotes]: For question (a), is the answer 0.7143?
For question (b), is the answer 10.85 and 11.95 ?
REPLY [0 votes]: For the first problem, note that $9$ months is $1.5$ standard deviation units below the mean. From a table of the standard normal, we can see that the probability that $Z\gt -1.5$ is approximately $0.9332$. Call this number $p$.
If the probability of "success" is $p$, then the probability of $9$ or more successes in $10$ trials is
$$\binom{10}{9}p^9(1-p)+\binom{10}{10}p^{10}(1-p)^0.$$
The calculator gives an answer substantially larger than yours.
We have not verified your second computation. | 84,281 |
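A quick numeric check of the binomial expression above, with the table value of $p$ quoted in the answer:

```python
from math import comb

p = 0.9332  # P(Z > -1.5) from the standard normal table
prob = comb(10, 9) * p**9 * (1 - p) + comb(10, 10) * p**10
print(round(prob, 4))  # about 0.859, substantially larger than 0.7143
```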
Nutritional Info
- Servings Per Recipe: 60
- Amount Per Serving
- Calories: 96.7
- Total Fat: 4.3 g
- Cholesterol: 16.9 mg
- Sodium: 67.2 mg
- Total Carbs: 14.6 g
- Dietary Fiber: 0.6 g
- Protein: 1.6 g
Susie's Honey Maple Cookies
Submitted by: LIVE_TO_LOVE
Introduction
This is a wholesome chewy cookie that satisfies the sweet tooth, but isn't just empty calories--it's a great way to incorporate whole grains. Substitute a multi-grain mix for added nutrition and fiber. Add mini chocolate chips, raisins, nuts, or coconut for variation.
Number of Servings: 60
Ingredients
1/2 cup pure maple syrup
1/4 cup pure raw honey
1 egg
1 teaspoon vanilla extract
1 cup shortening
~*~*~*~*~*~*~*~*~*~*~*~*~
4 cups seven grain cereal mix or rolled oats (old fashioned oatmeal)
2 cups all-purpose flour
1 teaspoon baking soda
1 teaspoon salt
Tips
Bob's Red Mill makes both seven- and nine-grain hot cereal mixes that would be suitable for this recipe.
Directions
Heat oven to 375 degrees. Mix the first 6 ingredients. Add and stir in remaining ingredients till moistened and well mixed. Shape dough into balls. Place them about 2 inches apart on ungreased cookie sheet. Bake until golden brown for 8-10 minutes.
Makes about 5 dozen cookies
60 servings
Member Ratings For This Recipe
- Its junk. I love using SparkPeople, but I have yet to understand why they allow the posting of junk recipes. There's nothing clean about brown sugar or flour. Multi-grain does not alway = healthy. It must be whole grain. And really, 100 calories for 1 unfilling cookie. Think for yourself! - 3/25/13
- Sounds good, I plan to try these. Those of you with negative comments, please realize if a person is craving sweets, this is far better than many other choices. Sometimes we are also cooking for other family members who are not dieting, and all can share the same "healthier" treats! - 3/25/13
- I like the idea of using whole grains, but not shortening.
"Margarine, vegetable shortening and most commercial baked goods contain these artificially hardened fats and, along with them, TFAs. TFAs are just as bad if not worse for the heart and arteries than saturated fats. - Dr.Weil - 12/18/12
- These are wonderful! Will make them again and again and.....
Thank You for the recipe. - 7/15/09
Reply from LIVE_TO_LOVE (9/17/09)
I'm glad you like them! :) Thank YOU!
- These are great! - 3/21/09
Reply from LIVE_TO_LOVE (3/21/09)
Thank you!! How nice to get a comment!! =] (and a positive one too!)
Susie
- It is sad that some people are narrow-minded and can't learn to substitute good items for what they call bad ones. I have not made this yet but I will and if I prefer not to use shortening I will use 1/2c. unsweetened applesauce in its place like I have in other recipes and they have been good. - 12/12/13
- For those who just want to have a cookie and have budgeted it into their daily calories, this is a great sounding recipe. I'm glad to see the high oats content. I'll substitute gluten-free flour for the flour as both DH and I are gluten-free. Definitely going to try this! - 12/11/13
- I 'm not crazy about recipes that depend on totally unrealistic serving sizes to meet a low calorie ideal. How many people were actually able to make 60 servings out of this recipe? Mine got stale long before I could finish what I made and resisting scarfing down 10 in one sitting is tough. - 3/25/13
January 10th 2015
Ever find yourself stumped on what to ask at the end of an interview? We spoke to a careers coach about the top 5 things that you need to be probing and what to avoid…
That final interview hurdle of your job application is arguably the trickiest part to master. You have to tell the interviewer why you're so great without seeming overly cocky, make sure you get all of your achievements across clearly and dodge your way around a minefield of difficult and awkward questions.
Then when things are wrapping up, you hear those five words: “Do you have any questions?” This question can make or break you and is actually one of the most important you'll be asked during any job interview. Potential employers EXPECT you to ask questions, and many have noted that failure to ask any can result in a big fat cross next to your name.
So to help you out, we spoke to Executive and Careers Coach Anna Percy-Davis for the 5 types of questions you should have up your sleeve and what you should stay clear of…
WHAT TO AVOID
- Firstly, any questions about pay, working hours, or holiday entitlement (particularly if it’s the first time you’re meeting the interviewer) need to be avoided. You need to be asking questions from the perspective of how you can add value to the job - not what is in it for you.
- Don’t ask questions that you could have got answered by looking up the company on the net. Do as much research as you can about the role and the organisation beforehand so you ask informed questions.
- Finally, don't ask questions that imply that you see the role as just a stepping stone to something bigger and better. It is fine to come across as ambitious but you need to show genuine commitment to the role.
5 TYPES OF QUESTIONS YOU SHOULD BE ASKING
1. What is the single most important thing that you would like to see achieved by this role in the next 3/6/12 months?
This demonstrates that you’re thinking about the role from a long-term point-of-view and will also give you a detailed and clearer picture of what your potential employer will expect from you.
2. Where does the role sit within the organisation and where do you see the role adding most value?
By asking this, you’re demonstrating that you’re thinking about your position in the company of the whole and which areas you can make yourself most useful in.
3. What is the most important trait that the individual needs to have in this role? (if this hasn't come up already)
This gives you an opening to further explain why you’re most suited to the position, especially if the trait is something you haven’t discussed much during the first part of the interview.
4. Ask a question about the organisation that demonstrates you have done some research (For example, I notice that you have recently announced a commitment to recycling as an organisation - will this role have any involvement in this?)
5. Finally, putting forward a question about colleagues can be good. Discussing the size of the team may well have been covered, but the split of responsibilities and how communication and team-working style operate may all be good topics to query.
TITLE: A sequence of roots of polynomials depending on an integer parameter
QUESTION [3 upvotes]: For $n\in \mathbb N-\{0\}$, let
$$Q_n=(2n-1)X^n+(2n-3)X^{n-1}+(2n-5)X^{n-2}+\cdots+3X^2+X$$
I want to show that there is a unique $x_n\geq 0$ such that $Q_n(x_n)=1$ and then show that the sequence $x_n$ is convergent.
Since $Q_n(0)=0$ and $\lim_{x\to \infty}Q_n=\infty$, it suffices to show that $Q_n$ is increasing, but how do I do that, and how do I calculate the limit of the sequence $x_n$? Thank you for your help!
REPLY [3 votes]: Here is an alternative proof.
Let $P_n=Q_n-1$. Then $P_n=\sum_{j=0}^n (2j-1)X^j$. $P'_n(x)=\sum_{j=1}^n j(2j-1)x^{j-1}$
is plainly nonnegative when $x\geq 0$, so $P_n$ is increasing on $[0,+\infty)$.
Since $P_n(0)=-1<0$ and ${\lim}_{+\infty}P_n=+\infty$, we see
that there is a unique $x_n\geq 0$ such that $P_n(x_n)=0$.
Also, we have the identity (see DanielR’s answer for more explanations
on how it is obtained)
$$
P_n=\frac{(2n-1)X^{n+2}-(2n+1)X^{n+1}+3X-1}{(1-X)^2} \tag{1}
$$
whence
$$
P_n(\frac{1}{3})=\frac{9}{4}\left(\frac{1}{3}\right)^{n+1}
(-\frac{4n+4}{3})<0 \tag{2}
$$
and for $\varepsilon >0$,
$$
P_n(\frac{1}{3}+\varepsilon)=\frac{1}{(\frac{2}{3}-\varepsilon)^2}
\Bigg((2n-1)\left(\frac{1}{3}+\varepsilon\right)^{n+2}-(2n+1))\left(\frac{1}{3}+\varepsilon\right)^{n+1}+3\varepsilon\Bigg)
\tag{3}
$$
It follows that
$$
\lim_{n\to+\infty} P_n(\frac{1}{3}+\varepsilon)=
\frac{3\varepsilon}{(\frac{2}{3}-\varepsilon)^2}>0, \tag{4}
$$
so that $x_n\in[\frac{1}{3},\frac{1}{3}+\varepsilon]$ for large enough $n$.
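Identity $(1)$ can be checked symbolically for small $n$; a quick SymPy sketch (not part of the argument above):

```python
import sympy as sp

X = sp.symbols('X')

# Check P_n = ((2n-1)X^{n+2} - (2n+1)X^{n+1} + 3X - 1) / (1-X)^2 for n = 1..7
for n in range(1, 8):
    Pn = sum((2*j - 1) * X**j for j in range(n + 1))
    rhs = ((2*n - 1)*X**(n + 2) - (2*n + 1)*X**(n + 1) + 3*X - 1) / (1 - X)**2
    assert sp.simplify(Pn - rhs) == 0
print("identity (1) holds for n = 1..7")
```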
REPLY [1 votes]: To show that $Q_n(x)$ is increasing for $x \geqslant 0$, take the derivative and note that all of the terms are positive.
Once convergence is shown (see lhf's answer), the limit can be calculated this way:
$$\begin{align}
\sum_{k=1}^\infty(2k-1)x^k&=2\sum_{k=1}^\infty kx^k-\sum_{k=1}^\infty x^k\\
&=\{x\in(0,1)\}\\
&=\frac{2x}{(x-1)^2}+\frac{x}{x-1} \equiv p(x)\\
\end{align} $$
Now, solving $p(x)-1=0$ gives you $x=\frac13$. | 198,928 |
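The convergence is also easy to see numerically. Since $P_n$ is increasing on $[0,\infty)$ with $P_n(0)=-1$ and $P_n(1)=n^2-1\ge 0$, bisection on $[0,1]$ locates $x_n$ (a plain-Python sketch):

```python
def x_n(n, tol=1e-12):
    """Unique nonnegative root of P_n(x) = sum_{j=0}^n (2j-1) x^j."""
    P = lambda t: sum((2*j - 1) * t**j for j in range(n + 1))
    lo, hi = 0.0, 1.0  # P(0) = -1 < 0 and P(1) = n^2 - 1 >= 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if P(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for n in (2, 5, 10, 20):
    print(n, x_n(n))  # the values decrease toward 1/3 = 0.3333...
```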
Infrastructure Destroyed and Residents Displaced
Hurricane Florence arrived on the Carolina Coast in the fall of 2018 and destroyed infrastructure and property and displaced residents, and the recovery is not over. Making matters worse is the area’s recent encounter with a similar hurricane two years ago: Hurricane Matthew. Hurricane Matthew was first hypothesized to be a storm that would only be experienced once in every half millennium.

Only two years later, Hurricane Florence was said to be a storm bringing in the type of waterlogged destruction only to be seen every 1,000 years, and it impacted the same region.
“In an area like Robeson County, there is an immense area of 100-year floodplain, because it’s so flat and the areas that are close to the river, the swampy areas, are large,” said Martin Farley, professor and chair of the department of geology and geography at the University of North Carolina at Pembroke.

Cities like Lumberton, which sit beside the Lumber River, have floodplains within their city limits.

“My simple advice, don’t build in floodplains,” said Wesley Highfield, an associate professor of marine biology at Texas Agricultural & Mechanical University.
Some residents who live in floodplains are advised to move and simply choose not to. The cost of moving to another location, or of leaving behind a home that they hold dear, makes the risk of flooding seem worth taking.

For the floodplains to be fully evacuated and used for environmental conservation, the residents living in the area must all agree to let their land be purchased by the local government. This is difficult, first because the local government must get all residents to agree to the process, and second because the funding necessary for projects like this is often too great to be covered without outside sponsors.
Pembroke officials recently published new zoning plans that should alleviate damage from the flooding experienced in past hurricane seasons.

In the wake of Hurricane Matthew, many questions concerning public safety, emergency relief, and city plans for future disasters have arisen. One major area of concern is the possibility that the city of Lumberton and CSX, a railroad company, neglected to do all that they could to deter flooding, potentially endangering the lives of residents.

According to an article posted on WRAL.com, a gap in the town’s levees created by a CSX railroad underpass through Interstate 95 was the cause of significant flooding in the western section of Lumberton.

The city of Lumberton opened a lawsuit against CSX claiming that the company prioritized its railways over the lives of Lumberton citizens.
North Carolina dams have been pushed to their limits after Matthew and Florence came in such quick succession. Officials sent warnings to evacuate to cities like New Bern in fear of dams breaching and heavily monitored water activity around the dams. In Fayetteville, residents are suing the city because it failed to rebuild broken and damaged dams in the area.

Slow response rates to matters of public safety have left many citizens questioning their local government.

False reports of breaking dams were issued by members of communities like Hope Mills, causing many families to evacuate prematurely in fear for their lives. Hoke County also experienced a similar situation, where residents jumped the gun and, instead of waiting for official instruction, hastily left the area because of false reports of the dam breaking.

The dam in Hope Mills broke under the stress of the consistent downpour caused by Hurricane Matthew two years prior.

Due to Florence, dams in Anson County, Boiling Springs and Wilmington broke and caused considerable damage and resident displacement, especially in Wilmington’s case, where the city was surrounded by water that trapped residents in the city limits.
According to data submitted to the National Inventory of Dams, North Carolina has 1,445 dams rated high hazard out of about 5,700 dams total. The hazard rating doesn’t indicate the condition of the dam but rather the risk the community will face if the dam were to break. However, of those 1,445 dams that are placed in precarious positions, 185 are classified as being in poor or unsatisfactory condition.

Another area of concern left unaddressed by local government is renovated drainage systems. In the city of Lumberton, residents said that they don’t believe the city was prepared for Hurricane Matthew and that little was changed in preparation for Florence.
“I can recall...driving around my neighborhood, watching the water rise, thinking this can’t happen again,” said James Bass, director of the Givens Performing Arts Center at the University of North Carolina at Pembroke.

Many residents claim that they have yet to receive any help from FEMA after applying. In some cases, applicants were simply denied, while in other cases the applicants were given amounts of money too insubstantial to make a lasting difference in their situation. Another issue with FEMA is the inconsistency in who is accepted and who is denied. Many claim that those in better financial positions who suffered less from the storm are often given assistance priority over poorer, needier applicants.

“They denied me, because I got help with the first flood...but I know people that they helped with Matthew and also helped them with [Hurricane Florence],” said Urshula Locklear, a resident of Pembroke whose trailer was damaged by both Hurricanes Matthew and Florence.
BreAnna Branch, Program Director for
Communities in Schools of Robeson County,
saw a need and filled it. Branch helped get the
community of Lumberton involved in the relief
effort by organizing groups of students from the
University of North Carolina at Pembroke and
providing resources for middle schoolers whose
families were impacted by the storm.
On what matters most, Branch stated: “[The]
Local community getting things done and
mobilizing immediately...Our area knows our area.”
Members of the nearby University of North
Carolina at Pembroke also came to the aid of
those devastated by Hurricane Florence.
“I’m from the west coast so I never realized
how much damage water could do,” said
Michael Perez, a business marketing major
who attends the University of North Carolina at
Pembroke.
Perez, after seeing the destruction that Florence
wreaked on the community, raised over $3,000
for the relief effort. “It’s a problem that a lot of
people who aren’t from here don’t understand.
When I showed them the pictures of the houses
we were helping at they were happy to help.”
| 369,729 |
TITLE: Does the left and right eigenvector of a matrix corresponding to different eigenvalues ALWAYS have a zero dot product?
QUESTION [2 upvotes]: Let $\mathcal{E} = \lbrace v^1 ,v^2, \dotsm, v^m \rbrace$ be the set of right
eigenvectors of $P$ and let $\mathcal{E^*} = \lbrace \omega^1 ,\omega^2,
\dotsm, \omega^m \rbrace$ be the set of left eigenvectors of $P.$ Given any two
vectors $v \in \mathcal{E}$ and $ \omega \in \mathcal{E^*}$ which correspond to
the eigenvalues $\lambda_1$ and $\lambda_2$ respectively. If $\lambda_1 \neq
\lambda_2$ then $\langle v, \omega^\tau\rangle = 0.$
Proof. For any eigenvector $v\in \mathcal{E}$ and $ \omega \in
\mathcal{E^*}$ which correspond to the eigenvalues $\lambda_1$ and $\lambda_2$
where $\lambda_1 \neq \lambda_2$ we have, \begin{equation*} \begin{split}\langle\omega,v\rangle = \frac{1}{\lambda_2}\langle \lambda_2 \omega, v\rangle = \frac{1}{\lambda_2} \langle P^ \tau \omega ,v\rangle = \frac{1}{\lambda_2}\langle\omega,P v\rangle = \frac{1}{\lambda_2} \langle \omega,\lambda_1v\rangle = \frac{\lambda_1}{\lambda_2}\langle \omega,v\rangle .\end{split}
\end{equation*} This implies $(\frac{\lambda_1}{\lambda_2} - 1)\langle\omega,v\rangle =
0.$ If $\lambda_1 \neq \lambda_2$ then $\langle \omega,v\rangle = 0.$
My question: what if $ \lambda_2 = 0 \neq \lambda_1,$ how can I include this case in my proof.
REPLY [4 votes]: Let $Ax=\lambda x$ and $y^T A = \mu y^T$ with $\lambda \neq \mu$. Multiply $Ax=\lambda x$ from the left by $y^T$ and you get $\mu y^T x = \lambda y^T x$ or equivalently $(\lambda - \mu) y^T x =0$. Since $\lambda - \mu \neq 0$ this yields $y^T x=0$.
Note that I assumed that we can "cancel" $(\lambda - \mu)$. This is true as long as the set of our scalars forms an integral domain. For the purposes of standard linear algebra this is always true, since the scalars form a field, and a field is an integral domain.
If say $\lambda=0$, then the relation $(\lambda - \mu) y^T x =0$ becomes $\mu y^T x =0$. Since $\mu \neq 0$ (we assume $\lambda \neq \mu$), this yields $y^T x=0$.
REPLY [4 votes]: Do not be afraid to use the formulation of the dot/inner product as a transpose/conjugate vector multiplied by actual vector. Recall that $v$ is a right eigenvector if and only if $Pv=\lambda_1 v$ and that $\omega$ is a left eigenvector if and only if $\omega^t P=\lambda_2\omega^t$.
Then $\left<\omega,v\right>=\omega^tv$. Sticking a $P$ in the middle and using associativity of matrix multiplication and commutativity of scalar multiplication we get
$$\lambda_2(\omega^tv)=(\lambda_2\omega^t)v=(\omega^tP)v=\omega^t(Pv)=\omega^t(\lambda_1v)=\lambda_1(\omega^tv)$$
Subtracting the far sides from each other we obtain $(\lambda_1-\lambda_2)(\omega^tv)=0$, and since the scalars $\lambda_1-\lambda_2$ and $\left<\omega,v\right>=\omega^tv$ belong to a field (which in particular has no zero-divisors other than $0$ itself) we must have either $\lambda_1=\lambda_2$ or $0=\omega^tv=\left<\omega,v\right>$. | 166,866 |
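A quick numerical sanity check of the above (the $2\times 2$ matrix $P$ and its hand-solved eigenvectors below are just an illustrative example, not taken from the question):

```python
# Hand-solved 2x2 example (illustrative): P has eigenvalues 2 and 3.
P = [[2.0, 1.0],
     [0.0, 3.0]]

# Right eigenpairs: P v = lambda v.
right = {2.0: (1.0, 0.0), 3.0: (1.0, 1.0)}
# Left eigenpairs: w^T P = mu w^T.
left = {2.0: (1.0, -1.0), 3.0: (0.0, 1.0)}

def matvec(M, v):
    return tuple(sum(M[i][k] * v[k] for k in range(2)) for i in range(2))

def vecmat(w, M):
    return tuple(sum(w[k] * M[k][j] for k in range(2)) for j in range(2))

# Confirm the eigenvector equations...
for lam, v in right.items():
    assert matvec(P, v) == tuple(lam * c for c in v)
for mu, w in left.items():
    assert vecmat(w, P) == tuple(mu * c for c in w)

# ...and that distinct eigenvalues force w^T v = 0.
for lam, v in right.items():
    for mu, w in left.items():
        if lam != mu:
            assert sum(a * b for a, b in zip(w, v)) == 0.0
```

Note that the argument (and the check) makes no use of whether any eigenvalue is zero — only the difference $\lambda-\mu$ matters.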
March 16, 2015 was another exceptional day at The Eye Newspaper, as we got another interesting email that started with the word “congratulations”. It was strange and fascinating that the small idea we initiated and launched last year in Nkambe, during The Eye Annual National Awards of Excellence/Donga Mantung Achievement Awards, had made the final of the 2degrees Champions Awards 2015; we could hardly believe it. Yet it came to pass that our programme, a small idea with a huge impact, commonly known as “A Clean Community Program”, had won international admiration. The Eye Newspaper is entered in the Social Value Category.
What You Can Do
The 2degrees Champions Awards, it should be noted, offer a huge opportunity to judge the most innovative projects, solutions, organizations and individuals making sustainable business happen around the globe. As for who wins? It is you, our readers, who decide. Just by voting, you could be a winner as well. Cast your vote below to be entered into a prize draw to win five tickets for you and your colleagues to attend the 2degrees Champions Awards Ceremony taking place at Porchester Hall in London on 8 July 2015*. It is very easy to cast a vote, especially if you have a LinkedIn account.
What We Did
Given that the idea was part of a social communication and sensitization strategy, we first toured the communities to announce the competition, carried out radio talk shows, and then created a coordination unit to oversee the cleanup campaign in the villages (communities) and the planting of trees. When the results were announced, winners were handed basic equipment (wheelbarrows, spades, cutlasses and hoes) to continue keeping their communities clean. The traditional rulers representing the various villages also received awards at a fun-filled National Award of Excellence ceremony held by The Eye Newspaper in Nkambe.
This entry deserves to win due to its uniqueness and the impact it has created within a very short time. It is a great initiative that should be considered at this point in time, when Africa is witnessing emerging diseases like Ebola.
A Clean Community Program was sponsored by Yaah Patience Tamfu (USA), Dr. Nick Ngwanyam (CEO St. Louis), Juliette Schlegl (Journalist/Writer-Germany), Mafor Achidi Achu Judith (Commercial Director-Camtel), Hon. Awudu Mbaya (MP Donga Mantung Centre), Awah Cletus alias AC Risky (CEO Necla Hotel Bamenda), Tata Mbinglo (Youth Leader-Thailand) and Eric Motumo (Publisher/Editor-Chronicle newspaper), with technical support from YODO Cameroon and logistics from Blue Pearl Hotel-Bamenda.
Click on the link below to give us your Vote. Thanks for the support
When News Breaks Out, We Break In. (The 2014 Bloggies Finalist) | 160,246 |
\begin{document}
\title{On computing the Lyapunov exponents of reversible cellular automata\thanks{The work was partially supported by the Academy of Finland grant 296018 and by the Vilho, Yrjö and Kalle Väisälä Foundation}}
\author{Johan Kopra}
\affil{Department of Mathematics and Statistics, \\FI-20014 University of Turku, Finland}
\affil{[email protected]}
\date{}
\maketitle
\setcounter{page}{1}
\begin{abstract}
We consider the problem of computing the Lyapunov exponents of reversible cellular automata (CA). We show that the class of reversible CA with right Lyapunov exponent $2$ cannot be separated algorithmically from the class of reversible CA whose right Lyapunov exponents are at most $2-\delta$ for some absolute constant $\delta>0$. Therefore there is no algorithm that, given as an input a description of an arbitrary reversible CA $F$ and a positive rational number $\epsilon>0$, outputs the Lyapunov exponents of $F$ with accuracy $\epsilon$. We also compute the average Lyapunov exponents (with respect to the uniform measure) of the CA that perform multiplication by $p$ in base $pq$ for coprime $p,q>1$.
\end{abstract}
\providecommand{\keywords}[1]{\textbf{Keywords:} #1}
\noindent\keywords{Cellular automata, Lyapunov exponents, Reversible computation, Computability}
\section{Introduction}
A cellular automaton (CA) is a model of parallel computation consisting of a uniform (in our case one-dimensional) grid of finite state machines, each of which receives input from a finite number of neighbors. All the machines use the same local update rule to update their states simultaneously at discrete time steps. The following question of error propagation arises naturally: If one changes the state at some of the coordinates, then how long does it take for this change to affect the computation at other coordinates that are possibly very far away? Lyapunov exponents provide one tool to study the asymptotic speeds of error propagation in different directions. The concept of Lyapunov exponents originally comes from the theory of differentiable dynamical systems, and the discrete variant of Lyapunov exponents for CA was originally defined in~\cite{She92}.
The Lyapunov exponents of a cellular automaton $F$ are interesting also when one considers $F$ as a topological dynamical system, because they can be used to give an upper bound for the topological entropy of $F$ \cite{Tis00}. It is previously known that the entropy of one-dimensional cellular automata is uncomputable \cite{HKC92} (and furthermore from \cite{GZ12} it follows that there exists a single cellular automaton whose entropy is uncomputable), which gives reason to suspect that also the Lyapunov exponents are uncomputable in general.
The uncomputability of Lyapunov exponents is easy to prove for (not necessarily reversible) cellular automata by using the result from \cite{Kari92} which says that nilpotency of cellular automata with a spreading state is undecidable. We will prove the more specific claim that the Lyapunov exponents are uncomputable even for reversible cellular automata. In the context of proving undecidability results for reversible CA one cannot utilize undecidability of nilpotency for non-reversible CA. An analogous decision problem, the (local) immortality problem, has been used to prove undecidability results for reversible CA \cite{Luk10}. We will use in our proof the undecidability of a variant of the immortality problem, which in turn follows from the undecidability of the tiling problem for $2$-way deterministic tile sets. This result has been published previously in the proceedings of UCNC 2019~\cite{Kop19}.
In the other direction, there are interesting classes of cellular automata whose Lyapunov exponents have been computed. In \cite{DMM03} a closed formula for the Lyapunov exponents of linear one-dimensional cellular automata is given. We present results concerning the family of multiplication automata that simulate multiplication by $p$ in base $pq$ for coprime $p,q>1$. It is easy to find out their Lyapunov exponents and it is more interesting to consider a measure theorical variant, the average Lyapunov exponent. We compute the average Lyapunov exponents (with respect to the uniform measure) for these automata. This computation is originally from the author's Ph.D. thesis~\cite{KopDiss}.
\section{Preliminaries}
For sets $A$ and $B$ we denote by $B^A$ the collection of all functions from $A$ to $B$.
A finite set $A$ of \emph{letters} or \emph{symbols} is called an \emph{alphabet}. The set $A^{\Z}$ is called a \emph{configuration space} or a \emph{full shift} and its elements are called \emph{configurations}. An element $x\in A^{\Z}$ is interpreted as a bi-infinite sequence and we denote by $x[i]$ the symbol that occurs in $x$ at position $i$. A \emph{factor} of $x$ is any finite sequence $x[i] x[i+1]\cdots x[j]$ where $i,j\in\Z$, and we interpret the sequence to be empty if $j<i$. Any finite sequence $w=w[1] w[2]\cdots w[n]$ (also the empty sequence, which is denoted by $\epsilon$) where $w[i]\in A$ is a \emph{word} over $A$. If $w\neq\epsilon$, we say that $w$ \emph{occurs} in $x$ at position $i$ if $x[i]\cdots x[i+n-1]=w[1]\cdots w[n]$ and we denote by $w^\Z\in A^\Z$ the configuration in which $w$ occurs at all positions of the form $in$ ($i\in\Z$). The set of all words over $A$ is denoted by $A^*$, and the set of non-empty words is $A^+=A^*\setminus\{\epsilon\}$. More generally, for $L,K\subseteq A^*$ we denote $LK=\{w_1w_2\mid w_1\in L, w_2\in K\}$ and $L^*=\{w_1\cdots w_n\mid n\geq 0, w_i\in L\}$. If $\epsilon\notin L$, define $L^+=L^*\setminus\{\epsilon\}$ and if $\epsilon\in L$, define $L^+=L^*$. The set of words of length $n$ is denoted by $A^n$. For a word $w\in A^*$, $\abs{w}$ denotes its length, i.e. $\abs{w}=n\iff w\in A^n$.
If $A$ is an alphabet and $C$ is a countable set, then $A^C$ becomes a compact metrizable topological space when endowed with the product topology of the discrete topology of $A$ (in particular a set $S\subseteq A^C$ is compact if and only if it is closed). In our considerations $C=\Z$ or $C=\Z^2$. We define the \emph{shift} $\sigma:A^\Z\to A^\Z$ by $\sigma(x)[i]=x[i+1]$ for $x\in A^\Z$, $i\in\Z$, which is a homeomorphism. We say that a closed set $X\subseteq A^\Z$ is a \emph{subshift} if $\sigma(X)=X$. Any $w\in A^+$ and $i\in\Z$ determine a \emph{cylinder} of $X$
\[\cyl_X(w,i)=\{x\in X\mid w \mbox{ occurs in }x\mbox{ at position }i\}.\]
Every cylinder is an open set of $X$ and the collection of all cylinders
\[\mathcal{C}_X=\{\cyl_X(w,i)\mid w\in A^+, i\in\Z\}\]
form a basis for the topology of $X$.
Occasionally we consider configuration spaces $(A_1\times A_2)^\Z$ and then we may write $(x_1,x_2)\in (A_1\times A_2)^\Z$ where $x_i\in A_i^\Z$ using the natural bijection between the sets $A_1^\Z\times A_2^\Z$ and $(A_1\times A_2)^\Z$. We may use the terminology that $x_1$ is on the upper layer or on the $A_1$-layer, and similarly that $x_2$ is on the lower layer or on the $A_2$-layer.
\begin{definition}\label{cadef}
Let $X\subseteq A^\Z$ be a subshift. We say that the map $F:X\to X$ is a \emph{cellular automaton} (or a CA) on $X$ if there exist integers $m\leq a$ (memory and anticipation) and a \emph{local rule} $f:A^{a-m+1}\to A$ such that $F(x)[i]=f(x[i+m],\dots,x[i],\dots,x[i+a])$. If we can choose $f$ so that $-m=a=r\geq 0$, we say that $F$ is a radius-$r$ CA and if we can choose $m=0$ we say that $F$ is a \emph{one-sided} CA. A one-sided CA with anticipation $1$ is called a radius-$\frac{1}{2}$ CA.\end{definition}
We can extend any local rule $f:A^{a-m+1}\to B$ to words $w=w[1]\cdots w[a-m+n]\in A^{a-m+n}$ with $n\in\Npos$ by $f(w)=u=u[1]\cdots u[n]$, where $u[i]= f(w[i],\dots,w[i+a-m])$.
The CA-functions on $X$ are characterized as those continuous maps on $X$ that commute with the shift \cite{Hed69}. We say that a CA $F:X\to X$ is \emph{reversible} if it is bijective. Reversible CA are homeomorphisms on $X$. The book \cite{LM95} is a standard reference for subshifts and cellular automata on them.
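As a minimal illustration of Definition \ref{cadef} (a sketch added for exposition, not part of the formal development; the XOR rule is an arbitrary stand-in for a general local rule $f$), one step of a radius-$1$ CA can be applied to a finite cyclic approximation of a configuration as follows:

```python
def ca_step(config, f, m, a):
    """One step of a CA with memory m and anticipation a, applied to a
    finite cyclic configuration approximating some x in A^Z."""
    n = len(config)
    return [f(tuple(config[(i + k) % n] for k in range(m, a + 1)))
            for i in range(n)]

# Radius-1 example over A = {0, 1}: XOR of the two nearest neighbours
# (an arbitrary linear rule; any f: A^3 -> A could be substituted).
xor_rule = lambda w: w[0] ^ w[2]

x = [0, 0, 0, 1, 0, 0, 0, 0]
print(ca_step(x, xor_rule, m=-1, a=1))  # [0, 0, 1, 0, 1, 0, 0, 0]
```

For a one-sided CA one would take $m=0$, and for a radius-$\frac{1}{2}$ CA additionally $a=1$.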
For a given reversible CA $F:X\to X$ and a configuration $x\in X$ it is often helpful to consider the \emph{space-time diagram} of $x$ with respect to $F$. Formally it is the map $\theta\in A^{\Z^2}$ defined by $\theta(i,-j)=F^j(x)[i]$: the minus sign in this definition signifies that time increases in the negative direction of the vertical coordinate axis. Informally the space-time diagram of $x$ is a picture which depicts elements of the sequence $(F^i(x))_{i\in\Z}$ in such a way that $F^{i+1}(x)$ is drawn below $F^i(x)$ for every $i$.
The definition of Lyapunov exponents is from \cite{She92,Tis00}. For a fixed subshift $X\subseteq A^\Z$ and for $x\in X$, $s\in\Z$, denote $W_{s}^{+}(x)=\{y\in X \mid \forall i\geq s: y[i]=x[i]\}$ and $W_{s}^{-}(x)=\{y\in X \mid \forall i\leq s: y[i]=x[i]\}$. Then for given cellular automaton $F:X\to X$, $x\in X$, $n\in\N$, define
\begin{flalign*}
&\Lambda_{n}^{+}(x,F)=\min\{s\geq 0\mid\forall 1\leq i\leq n: F^i(W_{-s}^{+}(x))\subseteq W_0^{+}(F^i(x))\} \\
&\Lambda_{n}^{-}(x,F)=\min\{s\geq 0\mid\forall 1\leq i\leq n: F^i(W_{s}^{-}(x))\subseteq W_0^{-}(F^i(x))\}.
\end{flalign*}
These have shift-invariant versions $\overline{\Lambda}_n^{\pm}(x,F)=\max_{i\in\Z} \Lambda_n^{\pm}(\sigma^i(x),F)$. The quantities
\[\lambda^{+}(x,F)=\lim_{n\to\infty}\frac{\overline{\Lambda}_n^{+}(x,F)}{n},\qquad \lambda^{-}(x,F)=\lim_{n\to\infty}\frac{\overline{\Lambda}_n^{-}(x,F)}{n}\]
are called (when the limits exist) respectively the right and left \emph{Lyapunov exponents of} $x$ (with respect to $F$).
A global version of these are the quantities
\[\lambda^{+}(F)=\lim_{n\to\infty}\max_{x\in X}\frac{\Lambda_n^{+}(x,F)}{n},\qquad \lambda^{-}(F)=\lim_{n\to\infty}\max_{x\in X}\frac{\Lambda_n^{-}(x,F)}{n}\]
that are called respectively the right and left \emph{(maximal) Lyapunov exponents} of $F$. These limits exist by an application of Fekete's subadditive lemma (e.g. Lemma 4.1.7 in \cite{LM95}).
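Informally, $\lambda^+$ and $\lambda^-$ bound how fast a perturbation can spread to the right and to the left. As a rough numerical illustration (a heuristic sketch added here, not used anywhere in the proofs; the rule, the lattice size and the number of steps are arbitrary), one can flip a single cell of a configuration on a finite cyclic approximation and track the disagreement front:

```python
def step(cfg, rule, r):
    """One step of a radius-r CA on a finite cyclic configuration."""
    n = len(cfg)
    return [rule(tuple(cfg[(i + k) % n] for k in range(-r, r + 1)))
            for i in range(n)]

# Illustrative rule: new state = left neighbour XOR own state.
# Differences travel to the right at speed 1 and never to the left,
# so the sampled speeds should match lambda^+ = 1 and lambda^- = 0.
rule = lambda w: w[0] ^ w[1]

n, steps, p = 256, 60, 128
x = [0] * n                       # the all-zero configuration
y = list(x); y[p] ^= 1            # same configuration with one cell flipped
for _ in range(steps):
    x, y = step(x, rule, 1), step(y, rule, 1)
diff = [i for i in range(n) if x[i] != y[i]]
right_speed = (max(diff) - p) / steps   # -> 1.0
left_speed = (p - min(diff)) / steps    # -> 0.0
```

This front-tracking is only a finite proxy for the exponents, not their definition: $\Lambda_n^{\pm}$ quantify over all perturbed configurations, whereas the sketch samples a single one.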
Occasionally we consider cellular automata from the measure theoretical point of view. For a subshift $X$ we denote by $\Sigma(\mathcal{C}_X)$ the \emph{sigma-algebra} generated by the collection of cylinders $\mathcal{C}_X$. It is the smallest collection of subsets of $X$ which contains all the elements of $\mathcal{C}_X$ and which is closed under complement and countable unions. A \emph{measure} on $X$ is a countably additive function $\mu:\Sigma(\mathcal{C}_X)\to[0,1]$ such that $\mu(X)=1$, i.e. $\mu(\bigcup_{i=0}^\infty A_i)=\sum_{i=0}^\infty\mu(A_i)$ whenever all $A_i\in \Sigma(\mathcal{C}_X)$ are pairwise disjoint. A measure $\mu$ on $X$ is completely determined by its values on cylinders. We say that a cellular automaton $F:X\to X$ is \emph{measure preserving} if $\mu(F^{-1}(S))=\mu(S)$ for all $S\in\Sigma(\mathcal{C}_X)$.
On full shifts $A^\Z$ we are mostly interested in the \emph{uniform measure} determined by $\mu(\cyl(w,i))=\abs{A}^{-\abs{w}}$ for $w\in A^+$ and $i\in\Z$. By Theorem 5.4 in \cite{Hed69} any surjective CA $F:A^\Z\to A^\Z$ preserves this measure. For more on measure theory of cellular automata, see \cite{Piv09}.
Measure theoretical variants of Lyapunov exponents are defined as follows. Given a measure $\mu$ on $X$ and for $n\in\N$, let $I_{n,\mu}^{+}(F)=\int_{x\in X}\Lambda_n^{+}(x,F) d\mu$ and $I_{n,\mu}^{-}(F)=\int_{x\in X}\Lambda_n^{-}(x,F) d\mu$. Then the quantities
\[I_{\mu}^{+}(F)=\liminf_{n\to\infty}\frac{I_{n,\mu}^{+}(F)}{n},\qquad I_{\mu}^{-}(F)=\liminf_{n\to\infty}\frac{I_{n,\mu}^{-}(F)}{n}\]
are called respectively the right and left \emph{average Lyapunov exponents of} $F$ (with respect to the measure $\mu$).
We will write $W_{s}^{\pm}(x)$, $\Lambda_n^{\pm}(x)$, $\lambda^{+}(x)$, $I_{n,\mu}^{+}$ and $I_{\mu}^{+}$ when $X$ and $F$ are clear by the context.
\section{Tilings and Undecidability}
In this section we recall the well-known connection between cellular automata and tilings on the plane. We use this connection to prove an auxiliary undecidability result for reversible cellular automata.
\begin{definition}A \emph{Wang tile} is formally a function $t:\{N,E,S,W\}\to C$ whose value at $I$ is denoted by $t_I$. Informally, a Wang tile $t$ should be interpreted as a unit square with edges colored by elements of $C$. The edges are called \emph{north}, \emph{east}, \emph{south} and \emph{west} in the natural way, and the colors in these edges of $t$ are $t_N,t_E,t_S$ and $t_W$ respectively. A \emph{tile set} is a finite collection of Wang tiles.\end{definition}
\begin{definition}A tiling over a tile set $T$ is a function $\eta\in T^{\Z^2}$ which assigns a tile to every integer point of the plane. A tiling $\eta$ is said to be valid if neighboring tiles always have matching colors in their edges, i.e. for every $(i,j)\in\Z^2$ we have $\eta(i,j)_N=\eta(i,j+1)_S$ and $\eta(i,j)_E=\eta(i+1,j)_W$. If there is a valid tiling over $T$, we say that $T$ \emph{admits} a valid tiling.\end{definition}
We say that a tile set $T$ is NE-deterministic if for every pair of tiles $t,s\in T$ the equalities $t_N=s_N$ and $t_E=s_E$ imply $t=s$, i.e. a tile is determined uniquely by its north and east edge. A SW-deterministic tile set is defined similarly. If $T$ is both NE-deterministic and SW-deterministic, it is said to be \emph{2-way deterministic}.
The \emph{tiling problem} is the problem of determining whether a given tile set $T$ admits a valid tiling.
\begin{theorem}{\cite[Theorem 4.2.1]{Luk10}}\label{tiling} The tiling problem is undecidable for 2-way deterministic tile sets.\end{theorem}
\begin{definition}Let $T$ be a 2-way deterministic tile set and $C$ the collection of all colors which appear in some edge of some tile of $T$. $T$ is \emph{complete} if for each pair $(a,b)\in C^2$ there exist (unique) tiles $t,s\in T$ such that $(t_N,t_E)=(a,b)$ and $(s_S,s_W)=(a,b)$.\end{definition}
A 2-way deterministic tile set $T$ can be used to construct a complete tile set. Namely, let $C$ be the set of colors which appear in tiles of $T$, let $X\subseteq C\times C$ be the set of pairs of colors which do not appear in the northeast of any tile and let $Y\subseteq C\times C$ be the set of pairs of colors which do not appear in the southwest of any tile. Since $T$ is 2-way deterministic, there is a bijection $p:X\to Y$. Let $T^\complement$ be the set of tiles formed by matching the northeast corners $X$ with the southwest corners $Y$ via the bijection $p$. Then the tile set $A=T\cup T^\complement$ is complete.
Every complete 2-way deterministic tile set $A$ determines a local rule $f:A^2\to A$ defined by $f(a,b)=c\in A$, where $c$ is the unique tile such that $a_S=c_N$ and $b_W=c_E$. This then determines a reversible CA $F:A^\Z\to A^\Z$ with memory $0$ by $F(x)[i]=f(x[i],x[i+1])$ for $x\in A^\Z$, $i\in\Z$. The space-time diagram of a configuration $x\in A^\Z$ corresponds to a valid tiling $\eta$ via $\theta(i,-j)=F^j(x)[i]=\eta(i,-i-j)$, i.e. configurations $F^j(x)$ are diagonals of $\eta$ going from northwest to southeast and the diagonal corresponding to $F^{j+1}(x)$ is below the diagonal corresponding to $F^j(x)$.
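This correspondence between complete tile sets and local rules can be sketched concretely (a toy illustration added here; the two-colour tile set below is a hypothetical example chosen only so that the determinism and completeness checks pass, not a tile set from the paper):

```python
from itertools import product

# Tiles as (N, E, S, W) colour tuples. A toy complete 2-way
# deterministic tile set: for every northeast pair (n, e),
# take the tile (n, e, n, e).
C = (0, 1)
tiles = [(n, e, n, e) for n, e in product(C, C)]

# NE- and SW-determinism: each corner pair identifies a unique tile.
assert len({(t[0], t[1]) for t in tiles}) == len(tiles)
assert len({(t[2], t[3]) for t in tiles}) == len(tiles)

# Local rule: local_rule(a, b) is the unique tile whose north edge
# matches a's south edge and whose east edge matches b's west edge.
by_ne = {(t[0], t[1]): t for t in tiles}

def local_rule(a, b):
    return by_ne[(a[2], b[3])]

def tile_step(cfg):
    # One CA step with memory 0 and anticipation 1 (cyclic config).
    n = len(cfg)
    return [local_rule(cfg[i], cfg[(i + 1) % n]) for i in range(n)]
```

Completeness of the tile set is exactly what makes the dictionary lookup in \texttt{local\_rule} total, and 2-way determinism is what makes the resulting CA reversible.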
\begin{definition}
A cellular automaton $F:A^\Z\to A^\Z$ is $(p,q)$-locally immortal ($p,q\in\N$) with respect to a subset $B\subseteq A$ if there exists a configuration $x\in A^\Z$ such that $F^{iq+j}(x)[ip]\in B$ for all $i\in\Z$ and $0\leq j\leq q$. Such a configuration $x$ is a $(p,q)$-witness.
\end{definition}
Generalizing the definition in \cite{Luk10}, we call the following decision problem the $(p,q)$-\emph{local immortality problem}: given a reversible CA $F:A^\Z\to A^\Z$ and a subset $B\subseteq A$, find whether $F$ is $(p,q)$-locally immortal with respect to $B$.
\begin{theorem}{\cite[Theorem 5.1.5]{Luk10}}\label{01immortal} The $(0,1)$-local immortality problem is undecidable for reversible CA.\end{theorem}
We now adapt the proof of Theorem \ref{01immortal} to get the following result, which we will use in the proof of Theorem~\ref{TheoremLyapSofic}.
\begin{lemma}\label{local}The $(1,5)$-local immortality problem is undecidable for reversible radius-$\frac{1}{2}$ CA.\end{lemma}
\begin{proof}We will reduce the problem of Theorem~\ref{tiling} to the $(1,5)$-local immortality problem. Let $T$ be a 2-way deterministic tile set and construct a complete tile set $T\cup T^\complement$ as indicated above. Then also $A_1=(T\times T_1)\cup(T^\complement\times T_2)$ ($T_1$ and $T_2$ as in \rfig{arrows}) is a complete tile set.\footnote{The arrow markings are used as a shorthand for some coloring such that the heads and tails of the arrows in neighboring tiles match in a valid tiling.} We denote the blank tile of the set $T_1$ by $t_b$ and call the elements of $R=A_1\setminus(T\times\{t_b\})$ arrow tiles. As indicated above, the tile set $A_1$ determines a reversible radius-$\frac{1}{2}$ CA $G_1:A_1^\Z\to A_1^\Z$.
\begin{figure}
\centering
\begin{tikzpicture}
\def\rectanglepath{-- ++(1.5cm,0cm)-- ++(0cm,1.5cm)-- ++(-1.5cm,0cm) -- cycle}
\draw (0,0) \rectanglepath;
\draw (2,0) \rectanglepath; \draw [->] (3.5cm,0.75cm) -- (2.05cm,0.75cm);
\draw (4,0) \rectanglepath; \draw [->] (4.75cm,1.5cm) -- (4.75cm,0.05cm);
\draw (6,0) \rectanglepath; \draw [->] (7.5cm,0.75cm) -- (6.05cm,0.75cm); \draw [->] (6.75cm,1.5cm) -- (6.75cm,0.05cm);
\draw (0,-2) \rectanglepath; \draw [->] (0.75,-1.25) -- (0.05,-1.25);
\draw (2,-2) \rectanglepath; \draw [->] (3.5,-1.25) -- (2.05,-1.25); \draw [->] (2.75,-1.25) -- (2.75,-1.95);
\draw (4,-2) \rectanglepath; \draw [->] (4.75cm,-0.5cm) -- (4.75cm,-1.95cm);
\draw (6,-2) \rectanglepath; \draw [->] (7.5cm,-1.25cm) -- (6.80cm,-1.25cm); \draw [->] (6.75cm,-0.5cm) -- (6.75cm,-1.20cm);
\end{tikzpicture}
\caption{The tile sets $T_1$ (first row) and $T_2$ (second row). These are originally from \cite{Luk10} (up to a reflection with respect to the northwest - southeast diagonal).}
\label{arrows}
\end{figure}
Let $A_2=\{0,1,2\}$. Define $A=A_1\times A_2$ and natural projections $\pi_i:A\to A_i$, $\pi_i(a_1,a_2)=a_i$ for $i\in\{1,2\}$. By extension we say that $a\in A$ is an arrow tile if $\pi_1(a)\in R$. Let $G:A^\Z\to A^\Z$ be defined by $G(c,e)=(G_1(c),e)$ where $c\in A_1^\Z$ and $e\in A_2^\Z$, i.e. $G$ simulates $G_1$ in the upper layer. We construct involutive CA $J_1$, $J_2$ and $H$ of memory $0$ with local rules $j_1:A_2\to A_2$, $j_2:A_2^2\to A_2$ and $h:(A_1\times A_2)\to (A_1\times A_2)$ respectively defined by
\begin{flalign*}
&\begin{array}{c}
j_1(0)=0 \\
j_1(1)=2 \\
j_1(2)=1
\end{array}
\qquad
j_2(a,b)=
\left\{
\begin{array}{l}
1 \mbox{ when } (a,b)=(0,2) \\
0 \mbox{ when } (a,b)=(1,2) \\
a \mbox{ otherwise }
\end{array}
\right. \\
&h((a,b))=
\left\{
\begin{array}{l}
(a,1) \mbox{ when } a\in R \mbox{ and } b=0 \\
(a,0) \mbox{ when } a\in R \mbox{ and } b=1 \\
(a,b) \mbox{ otherwise.}
\end{array}
\right.
\end{flalign*}
If $\id:A_1^\Z\to A_1^\Z$ is the identity map, then $J=(\id\times J_2)\circ (\id\times J_1)$ is a CA on $A^\Z=(A_1\times A_2)^\Z$. We define the radius-$\frac{1}{2}$ automaton $F=H\circ J\circ G:A^\Z\to A^\Z$ and select $B=(T\times\{t_b\})\times\{0\}$. We will show that $T$ admits a valid tiling if and only if $F$ is $(1,5)$-locally immortal with respect to $B$.
Assume first that $T$ admits a valid tiling $\eta$. Then by choosing $x\in A^\Z$ such that $x[i]=((\eta(i,-i),t_b),0)\in A_1\times A_2$ for $i\in\Z$ it follows that $F^j(x)[i]\in B$ for all $i,j\in\Z$ and in particular that $x$ is a $(1,5)$-witness.
Assume then that $T$ does not admit any valid tiling and for a contradiction assume that $x$ is a $(1,5)$-witness. Let $\theta$ be the space-time diagram of $x$ with respect to $F$. Since $x$ is a $(1,5)$-witness, it follows that $\theta(i,-j)\in B$ whenever $(i,-j)\in N$, where $N=\{(i,-j)\in \Z^2\mid 5i\leq j\leq 5(i+1)\}$. There is a valid tiling $\eta$ over $A_1$ such that $\pi_1(\theta(i,j))=\eta(i,j-i)$ for $(i,j)\in\Z^2$, i.e. $\eta$ can be recovered from the upper layer of $\theta$ by applying a suitable linear transformation on the space-time diagram. In drawing pictorial representations of $\theta$ we want that the heads and tails of all arrows remain properly matched in neighboring coordinates, so we will use tiles with ``bent'' labelings, see \rfig{bentarrows}. Since $T$ does not admit valid tilings, it follows by a compactness argument that $\eta(i,j)\notin T\times T_1$ for some $(i,j)\in D$ where $D=\{(i,j)\in\Z^2\mid j>-6i \}$ and in particular that $\eta(i,j)$ is an arrow tile. Since $\theta$ contains a ``bent'' version of $\eta$, it follows that $\theta(i,j)$ is an arrow tile for some $(i,j)\in E$, where $E=\{(i,j)\in\Z^2\mid j>-5i\}$ is a ``bent'' version of the set $D$. In \rfig{stdArr} we present the space-time diagram $\theta$ with arrow markings of tiles from $T_1$ and $T_2$ replaced according to the \rfig{bentarrows}. In \rfig{stdArr} we have also marked the sets $N$ and $E$. Other features of the figure become relevant in the next paragraph.
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\def\rectanglepath{-- ++(1.5cm,0cm)-- ++(0cm,1.5cm)-- ++(-1.5cm,0cm) -- cycle}
\draw (0,0) \rectanglepath;
\draw (2,0) \rectanglepath; \draw [->] (3.5cm,1.5cm) -- (2.05cm,0.05cm);
\draw (4,0) \rectanglepath; \draw [->] (4.75cm,1.45cm) -- (4.75cm,0cm);
\draw (6,0) \rectanglepath; \draw [->] (7.5cm,1.5cm) -- (6.05cm,0.05cm); \draw [->] (6.75cm,1.45cm) -- (6.75cm,0cm);
\draw (0,-2) \rectanglepath; \draw [->] (0.75,-1.25) -- (0.05,-1.95);
\draw (2,-2) \rectanglepath; \draw [->] (3.5,-0.5) -- (2.05,-1.95); \draw [->] (2.75,-1.25) -- (2.75,-1.95);
\draw (4,-2) \rectanglepath; \draw [->] (4.75cm,-0.5cm) -- (4.75cm,-1.95cm);
\draw (6,-2) \rectanglepath; \draw [->] (7.5cm,-0.5cm) -- (6.85cm,-1.20cm); \draw [->] (6.75cm,-0.5cm) -- (6.75cm,-1.20cm);
\end{tikzpicture}
\caption{The tile sets $T_1$ and $T_2$ presented in a ``bent'' form.}
\label{bentarrows}
\vspace{3.00mm}
\begin{tikzpicture}[scale=0.6]
\begin{scope}[yscale=-1,xscale=1]
\def\band{-- ++(0,5)-- ++(1,0) ++(0,-6) -- ++(0,5)-- ++(1,0)}
\def\rectanglepath{-- ++(1cm,0cm)-- ++(0cm,1cm)-- ++(-1cm,0cm) -- cycle}
\draw (0,1) \band; \draw (1,6) \band;
\draw [->] (2.5,0) -- (2.5,3);
\draw [->] (2.5,3) -- (2.5,4.4);
\draw [->] (6.5,0.5) -- (2.6,4.4);
\draw [->] (3.5,3.5) -- (3.5,6.4);
\draw [->] (7.5,2.5) -- (3.6,6.4);
\draw (5.5,1.5) -- (5.5,11);
\draw [decorate,decoration=brace] (0.5,1.5) -- (2.5,1.5) node[midway,yshift=10pt]{$d_1$};
\draw [decorate,decoration=brace] (2.5,4.5) -- (2.5,10.5) node[midway,xshift=10pt]{$d_2$};
\draw[thick] (2,2) \rectanglepath;
\draw[thick] (2,4) \rectanglepath;
\node at (1,5.5) {$N$};
\node at (4.5,7.5) {$E$};
\node at (3.9,2.5) {$\theta(p,q)$};
\node at (4.4,4.5) {$\theta(p,q-2)$};
\end{scope}
\end{tikzpicture}
\caption{The space-time diagram $\theta$ with ``bent'' arrow markings. An arrow tile $\theta(p,q-2)$ in $E$ with minimal horizontal and vertical distances to $N$ has been highlighted.}
\label{stdArr}
\end{figure}
The minimal distance between a tile in $N$ and an arrow tile in $E$ situated on the same horizontal line in $\theta$ is denoted by $d_1>0$. Then, among those arrow tiles in $E$ at horizontal distance $d_1$ from $N$, there is a tile with minimal vertical distance $d_2>0$ from $N$ (see \rfig{stdArr}). Fix $p,q\in\Z$ so that $\theta(p,q-2)$ is one such tile and in particular $(p-d_1,q-2),(p,q-2-d_2)\in N$. Then $\theta(p,q-j)$ contains an arrow for $-2\leq j \leq 2$, because if there is a $j\in[-2,2)$ such that $\theta(p,q-j)$ does not contain an arrow and $\theta(p,q-j-1)$ does, then $\theta(p,q-j-1)$ must contain one of the three arrows on the left half of \rfig{bentarrows}. These three arrows continue to the southwest, so then also $\theta(p-1,q-j-2)$ contains an arrow. Because $\theta(p',q')\in B$ for $(p',q')\in N$, it follows that $(p-1,q-j-2)\notin N$ and thus $(p-1,q-j-2)\in E$. Since $(p-d_1,q-2)\in N$, it follows that one of the $(p-d_1-1,q-j-2)$, $(p-d_1,q-j-2)$ and $(p-d_1+1,q-j-2)$ belong to $N$. Thus the horizontal distance of the tile $\theta(p-1,q-j-2)$ from the set $N$ is at most $d_1$, and is actually equal to $d_1$ by the minimality of $d_1$. Since $N$ is invariant under translation by the vector $-(1,-5)$, then from $(p,q-2-d_2)\in N$ it follows that $(p-1,q+3-d_2)\in N$ and that the vertical distance of the tile $\theta(p-1,q-j-2)$ from $N$ is at most $(q-j-2)-(q+3-d_2)\leq d_2-3$, contradicting the minimality of $d_2$. Similarly, $\theta(p-i,q-j)$ does not contain an arrow for $0<i\leq d_1,$ $-2\leq j\leq 2$ by the minimality of $d_1$ and $d_2$.
Now consider the $A_2$-layer of $\theta$. For the rest of the proof let $y=F^{-q}(x)$. Assume that $\pi_2(\theta(p-i,q))=\pi_2(y[p-i])$ is non-zero for some $i\geq 0$, $(p-i,q)\in E$, and fix the greatest such $i$, i.e. $\pi_2(y[s])=0$ for $s$ in the set
\[I_0=\{p'\in\Z\mid p'<p-i, (p',q)\in N\cup E\}.\]
We start by considering the case $\pi_2(y[p-i])=1$. Denote
\[I_1=\{p'\in\Z\mid p'<p-i, (p',q-1)\in N\cup E\}\subseteq I_0.\]
From the choice of $(p,q)$ it follows that $\pi_1(\theta(s,q-1))=\pi_1(G(y)[s])$ are not arrow tiles for $s\in I_1$, and therefore we can compute step by step that
\begin{flalign*}
&\pi_2((\id\times J_1)(G(y))[p-i])=2, \quad\pi_2((\id\times J_1)(G(y))[s])=0\mbox{ for }s\in I_0, \\
&\pi_2(J(G(y))[p-(i+1)])=1, \quad\pi_2(J(G(y))[s])=0\mbox{ for }s\in I_1\setminus\{p-(i+1)\}, \\
&\pi_2(F(y))[p-(i+1)])=1, \quad\pi_2(F(y)[s])=0\mbox{ for }s\in I_1\setminus\{p-(i+1)\}
\end{flalign*}
and $\pi_2(\theta(p-(i+1),q-1))=1$. By repeating this argument inductively we see that the digit $1$ propagates to the lower left in the space-time diagram as indicated by \rfig{left} and eventually reaches $N$, a contradiction. If on the other hand $\pi_2(\theta(p-i,q))=2$, a similar argument shows that the digit $2$ propagates to the upper left in the space-time diagram as indicated by \rfig{left} and eventually reaches $N$, also a contradiction.
\begin{figure}
\centering
\begin{tikzpicture}
\begin{scope}[yscale=-1,xscale=1]
\def\rectanglepath{-- ++(1cm,0cm)-- ++(0cm,1cm)-- ++(-1cm,0cm) -- ++(0cm,-1cm)}
\def\fivesquares{\rectanglepath ++(0,1) \rectanglepath ++(0,1) \rectanglepath ++(0,1) \rectanglepath ++(0,1) \rectanglepath}
\def\fivelines{++(0.5,0) -- ++(0,1) -- ++(0,1) -- ++(0,1) -- ++(0,1) -- ++(0,0.4)}
\draw (3,0) \fivesquares; \draw [->] (3,0) \fivelines; \draw [->] (4,4) -- (3.6,4.4);
\draw[dashed] (2,2) \rectanglepath; \draw[dashed] (1,3) \rectanglepath; \draw[dashed] (0,4) \rectanglepath;
\node at (0.5,4.5) {$1$}; \node at (1.5,4.5) {$2$};
\node at (0.5,3.5) {$0$}; \node at (1.5,3.5) {$1$}; \node at (2.5,3.5) {$2$};
\node at (0.5,2.5) {$0$}; \node at (1.5,2.5) {$0$}; \node at (2.5,2.5) {$1$};
\node at (4.7,2.5) {$\theta(p,q)$};
\draw (9,0) \fivesquares; \draw [->] (9,0) \fivelines; \draw [->] (10,4) -- (9.6,4.4);
\draw[dashed] (8,2) \rectanglepath; \draw[dashed] (7,1) \rectanglepath; \draw[dashed] (6,0) \rectanglepath;
\node at (6.5,2.5) {$0$}; \node at (7.5,2.5) {$0$}; \node at (8.5,2.5) {$2$};
\node at (6.5,1.5) {$0$}; \node at (7.5,1.5) {$2$}; \node at (8.5,1.5) {$1$};
\node at (6.5,0.5) {$2$}; \node at (7.5,0.5) {$1$};
\node at (10.7,2.5) {$\theta(p,q)$};
\end{scope}
\end{tikzpicture}
\caption{Propagation of digits to the left of $\theta(p,q)$.}
\label{left}
\end{figure}
Assume then that $\pi_2(\theta(p-i,q))$ is zero whenever $i\geq 0$, $(p-i,q)\in E$. If $\pi_2(\theta(p+1,q))=\pi_2(y[p+1])\neq 1$, then $\pi_2((\id\times J_1)(G(y))[p+1])\neq 2$ and $\pi_2(J(G(y))[p])=0$. Since $\pi_1(\theta(p,q-1))$ is an arrow tile, it follows that $\pi_2(\theta(p,q-1))=\pi_2(H(J(G(y)))[p])=1$. The argument of the previous paragraph shows that the digit $1$ propagates to the lower left in the space-time diagram as indicated by the left side of \rfig{right} and eventually reaches $N$, a contradiction.
Finally consider the case $\pi_2(\theta(p+1,q))=\pi_2(y[p+1])=1$. Then
\begin{flalign*}
&\pi_2(J(G(y))[p])\pi_2(J(G(y))[p+1])=12 \mbox{ and } \\
&\pi_2(F(y)[p])\pi_2(F(y)[p+1])=02.
\end{flalign*}
As in the previous paragraph we see that $\pi_2(\theta(p,q-2))=1$. This occurrence of the digit $1$ propagates to the lower left in the space-time diagram as indicated by the right side of \rfig{right} and eventually reaches $N$, a contradiction.
\begin{figure}
\centering
\begin{tikzpicture}
\begin{scope}[yscale=-1,xscale=1]
\def\rectanglepath{-- ++(1cm,0cm)-- ++(0cm,1cm)-- ++(-1cm,0cm) -- ++(0cm,-1cm)}
\def\threesquares{\rectanglepath ++(0,1) \rectanglepath ++(0,1) \rectanglepath}
\def\threelines{++(0.5,0) -- ++(0,1) -- ++(0,1) -- ++(0,0.5)}
\draw (3,1) \threesquares; \draw [->] (3,1) \threelines; \draw [->] (4,3) -- (3.6,3.4);
\draw[dashed] (2,3) \rectanglepath; \draw[dashed] (1,4) \rectanglepath;
\node at (1.5,4.5) {$1$}; \node at (2.5,4.5) {$2$};
\node at (1.5,3.5) {$0$}; \node at (2.5,3.5) {$1$}; \node[fill=white] at (3.5,3.5) {$2$};
\node at (1.5,2.5) {$0$}; \node at (2.5,2.5) {$0$}; \node[fill=white] at (3.5,2.5) {$1$};
\node at (1.5,1.5) {$0$}; \node at (2.5,1.5) {$0$}; \node[fill=white] at (3.5,1.5) {$0$}; \node at (4.5,1.5) {$\not 1$};
\node at (3.5,0.5) {$\theta(p,q)$};
\draw (9,1) \threesquares; \draw [->] (9,1) \threelines; \draw [->] (10,3) -- (9.6,3.4);
\draw[dashed] (8,4) \rectanglepath;
\node at (7.5,4.5) {$0$}; \node at (8.5,4.5) {$1$}; \node at (9.5,4.5) {$2$};
\node at (7.5,3.5) {$0$}; \node at (8.5,3.5) {$0$}; \node[fill=white] at (9.5,3.5) {$1$};
\node at (7.5,2.5) {$0$}; \node at (8.5,2.5) {$0$}; \node[fill=white] at (9.5,2.5) {$0$}; \node at (10.5,2.5) {$2$};
\node at (7.5,1.5) {$0$}; \node at (8.5,1.5) {$0$}; \node[fill=white] at (9.5,1.5) {$0$}; \node at (10.5,1.5) {$1$};
\node at (9.5,0.5) {$\theta(p,q)$};
\end{scope}
\end{tikzpicture}
\caption{Propagation of digits at $\theta(p,q)$.}
\label{right}
\end{figure}
\end{proof}
\begin{remark}
It is possible that the $(p,q)$-local immortality problem is undecidable for reversible radius-$\frac{1}{2}$ CA whenever $p\in\N$ and $q\in\Npos$. We proved this in the case $(p,q)=(1,5)$ but for our purposes it is sufficient to prove this just for some $p>0$ and $q>0$. The important (seemingly paradoxical) part will be that for $(1,5)$-locally immortal radius-$\frac{1}{2}$ CA $F$ the ``local immortality'' travels to the right in the space-time diagram even though in reality there cannot be any information flow to the right because $F$ is one-sided.
\end{remark}
\section{Uncomputability of Lyapunov exponents}
In this section we will prove our main result saying that there is no algorithm that can compute the Lyapunov exponents of a given reversible cellular automaton on a full shift to an arbitrary precision.
To achieve greater clarity we first prove this result in a more general class of subshifts. For the statement of the following theorem, we recall for completeness that a sofic shift $X\subseteq A^\Z$ is a subshift that can be represented as the set of labels of all bi-infinite paths on some labeled directed graph. This precise definition will not be of any particular importance, because the sofic shifts that we construct are of very specific form. We will appeal to the proof of the following theorem during the course of the proof of our main result.
\begin{theorem}\label{TheoremLyapSofic}For reversible CA $F:X\to X$ on sofic shifts such that $\lambda^{+}(F)\in [0,\frac{5}{3}]\cup\{2\}$ it is undecidable whether $\lambda^{+}(F)\leq \frac{5}{3}$ or $\lambda^{+}(F)=2$.\end{theorem}
\begin{proof}We will reduce the decision problem of \rlem{local} to the present problem. Let $G:A_2^\Z\to A_2^\Z$ be a given reversible radius-$\frac{1}{2}$ cellular automaton and $B\subseteq A_2$ some given set. Let $A_1=\{0,\wall,\lfast,\rfast,\lslow,\rslow\}$ and define a sofic shift $Y\subseteq A_1^\Z$ as the set of those configurations containing a symbol from $Q=\{\lfast,\rfast,\lslow,\rslow\}$ in at most one position. We interpret elements of $Q$ as particles that move in different directions at different speeds and bounce between walls, denoted by $\wall$. Let $S:Y\to Y$ be the reversible radius-$2$ CA which does not move occurrences of $\wall$ and which moves $\lfast$ (resp. $\rfast$, $\lslow$, $\rslow$) to the left at speed $2$ (resp. to the right at speed $2$, to the left at speed $1$, to the right at speed $1$) with the additional condition that when an arrow meets a wall, it changes into the arrow with the same speed and opposite direction. More precisely, $S$ is the CA with memory $2$ and anticipation $2$ determined by the local rule $f:A_1^5\to A_1$ defined as follows (where $*$ denotes arbitrary symbols):
\begin{flalign*}
\begin{array}{l l l}
f(\rfast,0,0,*,*)=\rfast & \quad & f(*,\rslow,0,*,*)=\rslow \\
f(*,\rfast,0,\wall,*)=\lfast & \quad & f(*,*,\rslow,0,*) = 0, \\
f(*,*,\rfast,0,*) = 0 & \quad & f(*,*,\rslow,\wall,*) =\lslow,\\
f(*,0,\rfast,\wall,*)=0 & & \\
f(*,\wall,\rfast,\wall,*)=\rfast & & \\
f(*,*,0,\rfast,\wall)=\lfast & &
\end{array}
\end{flalign*}
with symmetric definitions for arrows in the opposite directions at reflected positions and $f(*,*,a,*,*)=a$ ($a\in A_1$) otherwise. Then let $X=Y\times A_2^\Z$ and $\pi_1:X\to Y$, $\pi_2:X\to A_2^\Z$ be the natural projections $\pi_i(x_1,x_2)=x_i$ for $x_1\in Y,x_2\in A_2^\Z$ and $i\in\{1,2\}$.
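For concreteness, the listed right-arrow cases of $f$ can be transcribed into code. The following sketch is our own illustration and covers only the cases displayed above (left-moving arrows would be handled by the mirrored cases, which are omitted); the concrete cell symbols are an arbitrary choice, not notation from the construction.

```python
# Cell symbols for this sketch (our own choice): fast/slow right and left
# arrows, a wall, and the empty cell.
R2, R1, L2, L1, WALL, E = ">>", ">", "<<", "<", "#", "."

def f_right(a, b, m, d, e):
    """The explicitly listed cases of the local rule f for right-moving
    arrows; the mirrored cases for left-moving arrows are omitted here.
    In all remaining cases f keeps the middle cell unchanged."""
    if (a, b, m) == (R2, E, E): return R2        # f(>>,0,0,*,*) = >>
    if (b, m, d) == (R2, E, WALL): return L2     # f(*,>>,0,#,*) = <<
    if (m, d) == (R2, E): return E               # f(*,*,>>,0,*) = 0
    if (b, m, d) == (E, R2, WALL): return E      # f(*,0,>>,#,*) = 0
    if (b, m, d) == (WALL, R2, WALL): return R2  # f(*,#,>>,#,*) = >>
    if (m, d, e) == (E, R2, WALL): return L2     # f(*,*,0,>>,#) = <<
    if (b, m) == (R1, E): return R1              # f(*,>,0,*,*) = >
    if (m, d) == (R1, E): return E               # f(*,*,>,0,*) = 0
    if (m, d) == (R1, WALL): return L1           # f(*,*,>,#,*) = <
    return m

def step(cells):
    """One step of S on a finite window, padded with empty cells."""
    g = lambda i: cells[i] if 0 <= i < len(cells) else E
    return [f_right(g(i - 2), g(i - 1), g(i), g(i + 1), g(i + 2))
            for i in range(len(cells))]
```

Starting from a fast right arrow approaching a wall, the arrow advances at speed $2$ and is reflected into $\lfast$ next to the wall, as described above.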
Let $x_1\in Y$ and $x_2\in A_2^\Z$ be arbitrary. We define reversible CA $G_2,F_1:X\to X$ by $G_2(x_1,x_2)=(x_1,G^{10}(x_2))$, $F_1(x_1,x_2)=(S(x_1),x_2)$. Additionally, let $F_2:X\to X$ be the involution which maps $(x_1,x_2)$ as follows: $F_2$ replaces an occurrence of $\rfast 0\in A_1^2$ in $x_1$ at a coordinate $i\in\Z$ by an occurrence of $\lslow\wall\in A_1^2$ (and vice versa) \emph{if and only if}
\begin{flalign*}
& G^j(x_2)[i]\notin B\mbox{ for some }0\leq j\leq 5 \\
\mbox{or } &G^j(x_2)[i+1]\notin B \mbox{ for some } 5\leq j\leq 10,
\end{flalign*}
and otherwise $F_2$ makes no changes. Finally, define $F=F_1\circ G_2\circ F_2:X\to X$. The reversible CA $F$ works as follows. Typically, particles from $Q$ move in the upper layer in the intuitive manner indicated by the map $S$, and the lower layer is transformed according to the map $G^{10}$. There are some exceptions to the usual particle movements: if there is a particle $\rfast$ without a wall immediately in front of it and $x_2$ does not satisfy a local immortality condition in the next $10$ time steps, then $\rfast$ changes into $\lslow$ and at the same time leaves behind a wall segment $\wall$. Conversely, if there is a particle $\lslow$ to the left of the wall $\wall$ and $x_2$ does not satisfy a local immortality condition, $\lslow$ changes into $\rfast$ and removes the wall segment.
We will show that $\lambda^{+}(F)=2$ if $G$ is $(1,5)$-locally immortal with respect to $B$ and $\lambda^{+}(F)\leq \frac{5}{3}$ otherwise. Intuitively the reason for this is that if $x,y\in X$ are two configurations that differ only to the left of the origin, then the difference between $F^i(x)$ and $F^i(y)$ can propagate to the right at speed $2$ only via an arrow $\rfast$ that travels on top of a $(1,5)$-witness. Otherwise, a signal that attempts to travel to the right at speed $2$ is interrupted at bounded time intervals and forced to return at a slower speed beyond the origin before being able to continue its journey to the right. We will give more details.
Assume first that $G$ is $(1,5)$-locally immortal with respect to $B$. Let $x_2\in A_2^\Z$ be a $(1,5)$-witness and define $x_1\in Y$ by $x_1[0]=\rfast$ and $x_1[i]=0$ for $i\neq 0$. Let $x=(0^\Z,x_2)\in X$ and $y=(x_1,x_2)\in X$. It follows that $\pi_1(F^i(x))[2i]=0$ and $\pi_1(F^i(y))[2i]=\rfast$ for every $i\in\N$, so $\lambda^{+}(F)\geq 2$. On the other hand, $F$ has memory $2$ so necessarily $\lambda^{+}(F)=2$.
Assume then that there are no $(1,5)$-witnesses for $G$. Let us denote
\[C(n)=\{x\in A_2^\Z\mid G^{5i+j}(x)[i]\in B\mbox{ for } 0\leq i\leq n,0\leq j\leq 5\}\mbox{ for } n\in\N.\]
Since there are no $(1,5)$-witnesses, by a compactness argument we may fix some $N\in\Npos$ such that $C(2N)=\emptyset$. We claim that $\lambda^{+}(F)\leq \frac{5}{3}$, so let us assume that $(x^{(n)})_{n\in\N}$ with $x^{(n)}=(x_1^{(n)},x_2^{(n)})\in X$ is a sequence of configurations such that $\Lambda_n^{+}(x^{(n)},F)=s_n n$ where $(s_n)_{n\in\N}$ tends to $\lambda^{+}(F)$. There exist $y^{(n)}=(y_1^{(n)},y_2^{(n)})\in X$ such that $x^{(n)}[i]=y^{(n)}[i]$ for $i>-s_n n$ and $F^{t_n}(x^{(n)})[i_n]\neq F^{t_n}(y^{(n)})[i_n]$ for some $0\leq t_n\leq n$ and $i_n\geq 0$.
First assume that there are arbitrarily large $n\in\N$ for which $x_1^{(n)}[i]\in \{0,\wall\}$ for $i>-s_n n$ and consider the subsequence of such configurations $x^{(n)}$ (starting with sufficiently large $n$). Since $G$ is a one-sided CA, it follows that $\pi_2(F^{t_n}(x^{(n)}))[j]=\pi_2(F^{t_n}(y^{(n)}))[j]$ for $j\geq 0$. Therefore the difference between $x^{(n)}$ and $y^{(n)}$ can propagate to the right only via an arrow from $Q$, so without loss of generality (by swapping $x^{(n)}$ and $y^{(n)}$ if necessary) $\pi_1(F^{t_n}(x^{(n)}))[j_n]\in Q$ for some $0\leq t_n\leq n$ and $j_n\geq i_n-1$. Fix some such $t_n,j_n$ and let $w_n\in Q^{t_n+1}$ be such that $w_n(i)$ is the unique state from $Q$ in the configuration $F^i(x^{(n)})$ for $0\leq i\leq t_n$. The word $w_n$ has a factorization of the form $w_n=u(v_1 u_1\cdots v_k u_k)v$ ($k\in\N$) where $v_i\in\{\rfast\}^+$, $v\in\{\rfast\}^*$ and $u_i\in(Q\setminus\{\rfast\})^+$, $u\in (Q\setminus\{\rfast\})^*$. By the choice of $N$ it follows that all $v_i,v$ have length at most $N$ and by the definition of the CA $F$ it is easy to see that each $u_i$ contains at least $2(\abs{v_i}-1)+1$ occurrences of $\lslow$ and at least $2(\abs{v_i}-1)+1$ occurrences of $\rslow$ (after $\rfast$ turns into $\lslow$, it must return to the nearest wall to the left and back and at least once more turn into $\lslow$ before turning back into $\rfast$. If $\rfast$ were to turn into $\lfast$ instead, it would signify an impassable wall on the right). If we denote by $k_n$ the number of occurrences of $\rfast$ in $w_n$, then $k_n\leq \abs{w_n}/3+\ord(1)$ (this upper bound is achieved by assuming that $\abs{v_i}=1$ for every $i$) and
\[s_n n\leq \abs{w_n}+2k_n\leq \abs{w_n}+\frac{2}{3}\abs{w_n}+\ord(1)\leq\frac{5}{3}n+\ord(1).\]
After dividing this inequality by $n$ and passing to the limit we find that $\lambda^{+}(F)\leq\frac{5}{3}$.\footnote{By performing more careful estimates it can be shown that $\lambda^{+}(F)=1$, but we will not attempt to formalize the argument for this.}
Next assume that there are arbitrarily large $n\in\N$ for which $x_1^{(n)}[i]\in Q$ for some $i>-s_n n$. The difference between $x^{(n)}$ and $y^{(n)}$ can propagate to the right only after the element from $Q$ in $x^{(n)}$ reaches the coordinate $-s_n n$, so without loss of generality there are $0<t_{n,1}<t_{n,2}\leq n$ and $i_n\geq 0$ such that $\pi_1(F^{t_{n,1}}(x^{(n)}))[-s]\in Q$ for some $s\geq s_n n$ and $\pi_1(F^{t_{n,2}}(x^{(n)}))[i_n]\in Q$. From this the contradiction follows in the same way as in the previous paragraph.
\end{proof}
We are ready to prove the result for CA on full shifts.
\begin{theorem}For reversible CA $F:A^\Z\to A^\Z$ such that $\lambda^{+}(F)\in [0,\frac{5}{3}]\cup\{2\}$ it is undecidable whether $\lambda^{+}(F)\leq \frac{5}{3}$ or $\lambda^{+}(F)=2$.\end{theorem}
\begin{proof}
Let $G:A_2^\Z\to A_2^\Z$, $A_1$, $F=F_1\circ G_2\circ F_2:X\to X$, etc. be as in the proof of the previous theorem. We will adapt the conveyor belt construction from \cite{GS17} to define a CA $F'$ on a full shift which simulates $F$ and has the same right Lyapunov exponent as $F$.
Denote $Q=\{\lfast,\rfast,\lslow,\rslow\}$, $\Sigma=\{0,\wall\}$, $\Delta=\{-,0,+\}$, define the alphabets
\[\Gamma=(\Sigma^2\times\{+,-\})\cup(Q\times \Sigma\times\{0\})\cup(\Sigma\times Q\times\{0\})\subseteq A_1\times A_1\times\Delta\]
and $A=\Gamma\times A_2$ and let $\pi_{1,1},\pi_{1,2}:A^\Z \to A_1^\Z$, $\pi_\Delta:A^\Z\to\Delta^\Z$, $\pi_2:A^\Z\to A_2^\Z$ be the natural projections $\pi_{1,1}(x)=x_{1,1}$, $\pi_{1,2}(x)=x_{1,2}$, $\pi_\Delta(x)=x_\Delta$, $\pi_2(x)=x_2$ for $x=(x_{1,1},x_{1,2},x_\Delta,x_2)\in A^\Z\subseteq (A_1\times A_1\times \Delta\times A_2)^\Z$. For arbitrary $x=(x_1,x_2)\in(\Gamma\times A_2)^\Z$ define $G_2':A^\Z\to A^\Z$ by $G_2'(x)=(x_1,G^{10}(x_2))$.
Next we define $F_1':A^\Z\to A^\Z$. Every element $x=(x_1,x_2)\in(\Gamma\times A_2)^\Z$ has a unique decomposition of the form
\[(x_1,x_2)=\cdots(w_{-2},v_{-2})(w_{-1},v_{-1})(w_{0},v_{0})\allowbreak(w_{1},v_{1})(w_{2},v_{2})\cdots\]
where
\begin{flalign*}
w_i\in &(\Sigma^2\times\{+\})^*((Q\times \Sigma\times\{0\})\cup(\Sigma\times Q\times\{0\}))(\Sigma^2\times\{-\})^* \\
&\cup(\Sigma^2\times\{+\})^*(\Sigma^2\times\{-\})^*
\end{flalign*}
with the possible exception of the leftmost $w_i$ beginning or the rightmost $w_i$ ending with an infinite sequence from $\Sigma^2\times\{+,-\}$.
Let $(c_i,e_i)\in (\Sigma\times \Sigma)^*((Q\times \Sigma)\cup (\Sigma\times Q))(\Sigma\times \Sigma)^*\cup(\Sigma\times \Sigma)^*$ be the word that is derived from $w_i$ by removing the symbols from $\Delta$. The pair $(c_i,e_i)$ can be seen as a conveyor belt by gluing the beginning of $c_i$ to the beginning of $e_i$ and the end of $c_i$ to the end of $e_i$. The map $F_1'$ will shift arrows like the map $F_1$, and at the junction points of $c_i$ and $e_i$ the arrow can turn around to the opposite side of the belt. More precisely, define the permutation $\rho:A_1\to A_1$ by
\begin{flalign*}
\begin{array}{l l l l l l l}
\rho(0)=0 & \quad & \rho(\wall)=\wall & & & & \\
\rho(\lfast)=\rfast & \quad & \rho(\rfast)=\lfast & \quad & \rho(\lslow)=\rslow & \quad & \rho(\rslow)=\lslow
\end{array}
\end{flalign*}
and for a word $u\in A_1^*$ let $\rho(u)$ denote the coordinatewise application of $\rho$. For any word $w=w[1]\cdots w[n]$ define its reversal by $w^R[i]=w[n+1-i]$ for $1\leq i\leq n$. Then consider the periodic configuration $y=[(c_i,v_i)(\rho(e_i),v_i)^R]^\Z\in(A_1\times A_2)^\Z$. The map $F_1:X\to X$ extends naturally to configurations of the form $y$: $y$ can contain infinitely many arrows, but they all point in the same direction and occur in identical contexts. By applying $F_1$ to $y$ we get a new configuration of the form $[(c_i',v_i)(\rho(e_i'),v_i)^R]$. From this we extract the pair $(c'_i,e'_i)$, and by adding plusses and minuses to the left and right of the arrow (or in the same coordinates as in $(c_i,e_i)$ if there is no occurrence of an arrow) we get a word $w_i'$ which is of the same form as $w_i$. We define $F_1':A^\Z\to A^\Z$ by $F_1'(x)=x'$ where $x'=\cdots(w_{-2}',v_{-2})(w_{-1}',v_{-1})(w_{0}',v_{0})\allowbreak(w_{1}',v_{1})(w_{2}',v_{2})\cdots$. Clearly $F_1'$ is shift invariant, continuous and reversible.
We define the involution $F_2':A^\Z\to A^\Z$ as follows. For $x\in A^\Z$ and $j\in\{1,2\}$, the map $F_2'$ replaces an occurrence of $\rfast 0$ in $\pi_{1,j}(x)$ at coordinate $i\in\Z$ by an occurrence of $\lslow\wall$ (and vice versa) \emph{if and only if} $\pi_\Delta(x)[i+1]=-$ and
\begin{flalign*}
&G^j(\pi_2(x))[i]\notin B\mbox{ for some }0\leq j\leq 5 \\
\mbox{or } &G^j(\pi_2(x))[i+1]\notin B \mbox{ for some } 5\leq j\leq 10,
\end{flalign*}
and otherwise $F_2'$ makes no changes. $F_2'$ simulates the map $F_2$ and we check the condition $\pi_\Delta(x)[i+1]=-$ to ensure that $F_2'$ does not transfer information between neighboring conveyor belts.
Finally, we define $F'=F_1'\circ G_2'\circ F_2':A^\Z\to A^\Z$. The reversible CA $F'$ simulates $F:X\to X$ simultaneously on two layers and has the same right Lyapunov exponent as $F$.
\end{proof}
The following corollary is immediate.
\begin{corollary}There is no algorithm that, given a reversible CA $F:A^\Z\to A^\Z$ and a rational number $\epsilon>0$, returns the Lyapunov exponent $\lambda^{+}(F)$ within precision $\epsilon$. \end{corollary}
\section{Lyapunov Exponents of Multiplication \\Automata}
In this section we present a class of multiplication automata which perform multiplication by nonnegative numbers in some integer base. After the definitions and preliminary lemmas we compute their average Lyapunov exponents.
For this section denote $\digs_n=\{0,1,\dots,n-1\}$ for $n\in\N$, $n>1$. To perform multiplication using a CA we need to be able to represent a nonnegative real number as a configuration in $\digs_n^{\Z}$. If $\xi\geq0$ is a real number and $\xi=\sum_{i=-\infty}^{\infty}{\xi_i n^i}$ is the unique base-$n$ expansion of $\xi$ such that $\xi_i\neq n-1$ for infinitely many $i<0$, we define $\config_n(\xi)\in \digs_n^{\Z}$ by
\[\config_n(\xi)[i]=\xi_{-i}\]
for all $i\in\Z$. In reverse, whenever $x\in \digs_n^{\Z}$ is such that $x[i]=0$ for all sufficiently small $i$, we define
\[\real_n(x)=\sum_{i=-\infty}^{\infty}{x[-i] n^i}.\]
For words $w=w[1]w[2]\cdots w[k]\in \digs_n^k$ we define analogously
\[\real_n(w)=\sum_{i=1}^{k}{w[i]n^{-i}}.\]
Clearly $\real_n(\config_n(\xi))=\xi$ and $\config_n(\real_n(x))=x$ for every $\xi\geq0$ and every $x\in \digs_n^{\Z}$ such that $x[i]=0$ for all sufficiently small $i$ and $x[i]\neq n-1$ for infinitely many $i>0$.
The fractional part of a number $\xi\in\R$ is
\[\fractional(\xi)=\xi-\lfloor \xi\rfloor\in[0,1).\]
For integers $p,n\geq2$ where $p$ divides $n$ let $\mul_{p,n}:\digs_{n}\times \digs_{n}\to \digs_{n}$ be defined as follows. Let $q$ be such that $pq=n$. Digits $a,b\in \digs_{pq}$ are represented as $a=a_1q+a_0$ and $b=b_1q+b_0$, where $a_0,b_0\in \digs_q$ and $a_1,b_1\in \digs_p$: such representations always exist and they are unique. Then
\[\mul_{p,n}(a,b)=\mul_{p,n}(a_1q+a_0,b_1q+b_0)=a_0p+b_1.\]
An example in the particular case $(p,n)=(3,6)$ is given in Figure \ref{taul}.
\begin{figure}[h]
\centering
\begin{tabular} {c | c c c c c c}
$a\backslash b$ & 0 & 1 & 2 & 3 & 4 & 5 \\ \hline
0 & 0 & 0 & 1 & 1 & 2 & 2 \\
1 & 3 & 3 & 4 & 4 & 5 & 5 \\
2 & 0 & 0 & 1 & 1 & 2 & 2 \\
3 & 3 & 3 & 4 & 4 & 5 & 5 \\
4 & 0 & 0 & 1 & 1 & 2 & 2 \\
5 & 3 & 3 & 4 & 4 & 5 & 5 \\
\end{tabular}
\caption{The values of $\mul_{3,6}(a,b)$.}
\label{taul}
\end{figure}
We define the CA $\Mul_{p,n}:\digs_{n}^{\Z}\to \digs_{n}^{\Z}$ by $\Mul_{p,n}(x)[i]=\mul_{p,n}(x[i],x[i+1])$, so $\Mul_{p,n}$ has memory $0$ and anticipation $1$. The CA $\Mul_{p,n}$ performs multiplication by $p$ in base $n$ in the sense of the following lemma.
\begin{lemma}\label{vastaavuus}$\real_{n}(\Mul_{p,n}(\config_{n}(\xi)))=p\xi$ for all $\xi\geq 0$.\end{lemma}
We omit the proof of the lemma, which can be found for example in~\cite{Kari12b}. The idea of the proof is to notice that the local rule $\mul_{p,n}$ mimics the usual school multiplication algorithm. Because $p$ divides $n$, the carry digits cannot propagate arbitrarily far to the left.
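The local rule $\mul_{p,n}$ and the claim of Lemma \ref{vastaavuus} are easy to test on configurations of finite support. The sketch below is our own illustration (not part of the paper's constructions); it stores a configuration as a dictionary mapping cells to their nonzero digits.

```python
def mul(p, n, a, b):
    """The local rule mul_{p,n}: write a = a1*q + a0 and b = b1*q + b0
    with q = n // p, and return a0*p + b1."""
    q = n // p
    a1, a0 = divmod(a, q)
    b1, b0 = divmod(b, q)
    return a0 * p + b1

def config(n, m):
    """Base-n configuration of a nonnegative integer m: the digit of
    n**i is stored at cell -i; unlisted cells are 0."""
    cells, i = {}, 0
    while m > 0:
        m, cells[-i] = divmod(m, n)
        i += 1
    return cells

def real(n, cells):
    """real_n of a finite-support configuration."""
    return sum(d * n**(-i) for i, d in cells.items())

def step(p, n, cells):
    """One step of Mul_{p,n}: y[i] = mul(x[i], x[i+1]); at most one new
    nonzero cell can appear on the left."""
    lo = min(cells, default=0) - 1
    hi = max(cells, default=0)
    y = {i: mul(p, n, cells.get(i, 0), cells.get(i + 1, 0))
         for i in range(lo, hi + 1)}
    return {i: d for i, d in y.items() if d != 0}
```

For instance, with $(p,n)=(3,6)$ the configuration of $5$ is mapped to the configuration of $15$ in one step, in accordance with Lemma \ref{vastaavuus}.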
For the statement of the following lemmas, which were originally proved in~\cite{KK17}, we define a function $\integ:\digs_{pq}^+\to\N$ by
\[\integ(w[1]w[2]\cdots w[k])=\sum_{i=0}^{k-1}w[k-i](pq)^i,\]
i.e. $\integ(w)$ is the integer having $w$ as a base-$pq$ representation.
\begin{lemma}Let $w_1,w_2\in \digs_{pq}^k$ for some $k\geq 2$ and let $t>0$ be a natural number. Then
\begin{enumerate}
\item $\integ(w_1)<q^t \implies \integ(\mul_{p,pq}(w_1))<q^{t-1}$ and
\item $\integ(w_2)\equiv \integ(w_1)+q^t\pmod{(pq)^k} \\ \implies \integ(\mul_{p,pq}(w_2))\equiv \integ(\mul_{p,pq}(w_1))+q^{t-1} \pmod{(pq)^{k-1}}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $x_i\in \digs_{pq}^\Z$ ($i=1,2$) be such that $x_i[-(k-1),0]=w_i$ and $x_i[j]=0$ for $j<-(k-1)$ and $j>0$. From this definition of $x_i$ it follows that $\integ(w_i)=\real_{pq}(x_i)$. Denote $y_i=\Mul_{p,pq}(x_i)$. We have
\[\sum_{j=-\infty}^{\infty}y_i[-j](pq)^j=\real_{pq}(y_i)=p\real_{pq}(x_i)=p\integ(w_i)\]
and
\begin{flalign*}
\integ(\mul_{p,pq}(w_i))&=\integ(y_i[-(k-1),-1]) \\
&=\sum_{j=1}^{k-1}y_i[-j](pq)^{j-1}\equiv\lfloor \integ(w_i)/q\rfloor\pmod{(pq)^{k-1}}.
\end{flalign*}
Also note that $\integ(\mul_{p,pq}(w_i))<(pq)^{k-1}$.
For the proof of the first part, assume that $\integ(w_1)<q^t$. Combining this with the observations above yields $\integ(\mul_{p,pq}(w_1))\leq\lfloor \integ(w_1)/q\rfloor<q^{t-1}$.
\begin{sloppypar}
For the proof of the second part, assume that $\integ(w_2)\equiv \integ(w_1)+q^t\pmod{(pq)^k}$. Then there exists $n\in\Z$ such that $\integ(w_2)=\integ(w_1)+q^t+n(pq)^k$ and
\end{sloppypar}
\begin{flalign*}
\integ(\mul_{p,pq}(w_2))&\equiv \lfloor \integ(w_2)/q\rfloor\equiv\lfloor \integ(w_1)/q\rfloor+q^{t-1}+np(pq)^{k-1} \\
&\equiv\lfloor \integ(w_1)/q\rfloor+q^{t-1}\equiv \integ(\mul_{p,pq}(w_1))+q^{t-1}\pmod{(pq)^{k-1}}.
\end{flalign*}
\end{proof}
\begin{lemma}\label{godometer}Let $t>0$ and $w_1,w_2\in \digs_{pq}^k$ for some $k\geq t+1$.
\begin{enumerate}
\item If $\integ(w_1)<q^{t}$, then $\integ(\mul_{p,pq}^t(w_1))=0$.
\item If $\integ(w_2)\equiv \integ(w_1)+q^{t}\pmod{(pq)^k}$, then \\ $\integ(\mul_{p,pq}^t(w_2))\equiv \integ(\mul_{p,pq}^t(w_1))+1 \pmod{(pq)^{k-t}}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Both claims follow by repeated application of the previous lemma.
\end{proof}
The content of Lemma \ref{godometer} is as follows. Assume that $\{w_i\}_{i=0}^{(pq)^k-1}$ is the enumeration of all the words in $\digs_{pq}^k$ in the lexicographical order, meaning that $w_0=00\cdots 00$, $w_1=00\cdots 01$, $w_2=00\cdots 02$ and so on. Then let $i$ run through all the integers between $0$ and $(pq)^k-1$. For the first $q^{t}$ values of $i$ we have $\mul_{p,pq}^t(w_i)=00\cdots 00$, for the next $q^{t}$ values of $i$ we have $\mul_{p,pq}^t(w_i)=00\cdots 01$, and for the following $q^{t}$ values of $i$ we have $\mul_{p,pq}^t(w_i)=00\cdots 02$. Eventually, as $i$ is incremented from $q^{t}(pq)^{k-t}-1$ to $q^{t}(pq)^{k-t}$, the word $\mul_{p,pq}^t(w_i)$ loops from $(pq-1)(pq-1)\cdots (pq-1)(pq-1)$ back to $00\cdots 00$.
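This odometer-like behaviour can be verified by brute force for small parameters. The following sketch (our own code, not part of the constructions above) checks for $p=3$, $q=2$, $k=3$, $t=2$ that $\integ(\mul_{p,pq}^t(w_i))$ increases by one every $q^t$ words of the lexicographic enumeration, modulo $(pq)^{k-t}$.

```python
from itertools import product

def mul(p, n, a, b):
    """The local rule mul_{p,n}."""
    q = n // p
    a1, a0 = divmod(a, q)
    b1, b0 = divmod(b, q)
    return a0 * p + b1

def mul_word(p, n, w):
    """Apply the local rule to a word, shortening it by one symbol."""
    return [mul(p, n, w[j], w[j + 1]) for j in range(len(w) - 1)]

def integ(n, w):
    """The integer having w as a base-n representation."""
    v = 0
    for d in w:
        v = v * n + d
    return v

def odometer_values(p, q, k, t):
    """integ(mul^t(w_i)) for all words w_i of length k over digs_{pq},
    in lexicographic order (itertools.product is lexicographic)."""
    n = p * q
    out = []
    for w in product(range(n), repeat=k):
        u = list(w)
        for _ in range(t):
            u = mul_word(p, n, u)
        out.append(integ(n, u))
    return out
```

By Lemma \ref{godometer} the $i$-th value should equal $\lfloor i/q^t\rfloor \bmod (pq)^{k-t}$, which is what the check below confirms.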
From now on let $p,q>1$ be coprime integers. We consider the Lyapunov exponents of the multiplication automaton $\Mul_{p,pq}$. Since $\Mul_{p,pq}$ has memory $0$ and anticipation $1$, it is easy to see that for any $x\in \digs_{pq}^\Z$ we must have $\lambda^{+}(x)=0$ and $\lambda^{-}(x)\leq 1$ and therefore $\lambda^{+}(\Mul_{p,pq})=0$, $\lambda^{-}(\Mul_{p,pq})\leq 1$.
Now consider a positive integer $m>0$. Multiplying $m$ by $p^n$ yields a number whose base-$pq$ representation has length approximately equal to $\log_{pq}(mp^n)=n(\log_{pq}p)+\log_{pq} m$. By translating this observation to the configuration space $\digs_{pq}^\Z$ it follows that $\lambda^{-}(0^\Z,\Mul_{p,pq})=\log_{pq}p$. One might be tempted to conclude from this that $\lambda^{-}(\Mul_{p,pq})=\log_{pq}p$. It turns out that this conclusion is not true.
\begin{theorem}
For coprime $p,q>1$ there is a configuration $x\in\digs_{pq}^\Z$ such that $\lambda^{-}(x,\Mul_{p,pq})=1$. In particular $\lambda^{-}(\Mul_{p,pq})=1$.
\end{theorem}
\begin{proof}
For every $n\in\Npos$ define $x_n=\config_{pq}(q^n-1)$ and $y_n=\config_{pq}(q^n)$. By Lemma \ref{vastaavuus}, $\real_{pq}(\Mul_{p,pq}^n(x_n))=p^n(q^n-1)<(pq)^n$ and $\real_{pq}(\Mul_{p,pq}^n(y_n))=p^n q^n=(pq)^n$, which means that $\Mul_{p,pq}^n(x_n)[-n]=0$ and $\Mul_{p,pq}^n(y_n)[-n]=1$. Since $\Mul_{p,pq}$ has memory $0$ and anticipation $1$, it follows that $\Mul_{p,pq}^i(x_n)[-i]\neq \Mul_{p,pq}^i(y_n)[-i]$ when $0\leq i\leq n$ (note that $q^n$ is not divisible by $pq$ for any $n\in\Npos$, which means that $x_n$ and $y_n$ differ only at the origin). Then choose $x,y\in \digs_{pq}^\Z$ such that $(x,y)\in \digs_{pq}^\Z\times \digs_{pq}^\Z$ is the limit of some converging subsequence of $((x_n,y_n))_{n\in\Npos}$. Then $x$ and $y$ differ only at the origin and $\Mul_{p,pq}^i(x)[-i]\neq \Mul_{p,pq}^i(y)[-i]$ for all $i\in\N$. It follows that $\lambda^{-}(x,\Mul_{p,pq})=1$.
\end{proof}
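The divergence of the two orbits can also be observed purely arithmetically: by Lemma \ref{vastaavuus}, cell $-i$ of $\Mul_{p,pq}^i(\config_{pq}(m))$ is the base-$pq$ digit of $p^i m$ at position $i$. The sketch below (our own code) uses this to check, for $p=3$ and $q=2$, that the orbits of $\config_{pq}(q^n-1)$ and $\config_{pq}(q^n)$ differ in cell $-i$ at every time $0\leq i\leq n$.

```python
def digit(m, base, i):
    """Base-`base` digit of m at position i (coefficient of base**i);
    this is the symbol in cell -i of the configuration of m."""
    return (m // base**i) % base

def diverging_cells(p, q, n):
    """Times 0 <= i <= n at which the orbits of config(q**n - 1) and
    config(q**n) under Mul_{p,pq} differ in cell -i, replacing CA steps
    by multiplication by p via Lemma `vastaavuus`."""
    base = p * q
    return [i for i in range(n + 1)
            if digit(p**i * (q**n - 1), base, i)
            != digit(p**i * q**n, base, i)]
```

For example, with $p=3$, $q=2$, $n=3$ the two initial configurations are the base-$6$ expansions of $7$ and $8$, which differ only in cell $0$, yet the orbits disagree in cell $-i$ for every $0\leq i\leq 3$.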
The intuition that the left Lyapunov exponent of $\Mul_{p,pq}$ ``should be'' equal to $\log_{pq}p$ is explained by the following computation of the average Lyapunov exponent.
\begin{theorem}\label{mulmeasLyap}For coprime $p,q>1$ we have $I_{\mu}^{-}(\Mul_{p,pq})=\log_{pq}p$, where $\mu$ is the uniform measure on $\digs_{pq}^\Z$.\end{theorem}
\begin{proof}
First note that for any $n\in\Npos$ and any $w\in \digs_{pq}^{n+1}$ the equality $\Lambda_n^{-}(x)=\Lambda_n^{-}(y)$ holds for each pair $x,y\in\cyl(w,0)$, so we may define the quantity $\Lambda_n^{-}(w)=\Lambda_n^{-}(x)$ for $x\in\cyl(w,0)$. For any $i\in\N$ denote $(\Lambda_n^{-})^{-1}(i)=\{x\in\digs_{pq}^\Z\mid \Lambda_n^{-}(x)=i\}$. Then, note that always $\Lambda_n^{-}(x)\leq n$ and define for $0\leq i\leq n$
\[P_n(i)=\{w\in \digs_{pq}^{n+1}\mid \Lambda_n^{-}(w)=i\}\]
which form a partition of $\digs_{pq}^{n+1}$.
From these definitions it follows that
\[I_{n,\mu}^{-}=\int_{x\in \digs_{pq}^\Z}\Lambda_n^{-}(x) d\mu=\sum_{i=0}^{\infty} i\mu((\Lambda_n^{-})^{-1}(i))=(pq)^{-(n+1)}\sum_{i=0}^{n} i\abs{P_n(i)}.\]
To compute $\abs{P_n(i)}$ we define an auxiliary quantity
\[p_n(i)=\{w\in \digs_{pq}^{n+1}\mid i\leq \Lambda_n^{-}(w)\leq n\}:\]
then clearly $P_n(n)=p_n(n)$ and $P_n(i)=p_n(i)\setminus p_n(i+1)$ for $0\leq i<n$. Note that $w\in p_n(i)$ $(0\leq i\leq n)$ is equivalent to the existence of words $u\in \digs_{pq}^i$, $v_1,v_2\in \digs_{pq}^{n+1-i}$ such that $w=uv_1$ and $\mul_{p,pq}^t(uv_1)[1]\neq\mul_{p,pq}^t(uv_2)[1]$ for some $i\leq t\leq n$. By denoting
\[d_n(i)=\{u\in \digs_{pq}^{i}\mid \exists v_1,v_2\in \digs_{pq}^{n+1-i},t\in[i,n]:\mul_{p,pq}^t(uv_1)[1]\neq \mul_{p,pq}^t(uv_2)[1]\},\]
it follows that $\abs{p_n(i)}=(pq)^{n+1-i}\abs{d_n(i)}$. By Lemma \ref{godometer}, for a word $u\in \digs_{pq}^i$ the condition $u\in d_n(i)$ is equivalent to the existence of a number divisible by $q^t$ on the open interval $J(u)_t=(\integ(u)(pq)^{t+1-i},(\integ(u)+1)(pq)^{t+1-i})$ for some $t\in[i,n]$. Furthermore, if an integer $m$ is divisible by $q^t$ and $m\in J(u)_t$, then $m(pq)^{n-t}\in J(u)_n$ is divisible by $q^n$. Thus it is sufficient to consider only the interval $J(u)_n$. We use this to compute $\abs{d_n(i)}$.
In the case $(pq)^{n+1-i}>q^n$ (equivalently: $n\log_{pq}q+i<n+1$) each interval $J(u)_n$ contains a number divisible by $q^n$ and therefore $\abs{d_n(i)}=(pq)^i$.
In the case $(pq)^{n+1-i}<q^n$ (equivalently: $n\log_{pq}q+i>n+1$) each interval $J(u)_n$ contains at most one number divisible by
$q^n$. Then $\abs{d_n(i)}$ equals the number of elements on the interval $[0,(pq)^{n+1})$ which are divisible by $q^n$ but not divisible by $(pq)^{n+1-i}$. Divisibility by both $q^n$ and $(pq)^{n+1-i}$ is equivalent to divisibility by $q^n p^{n+1-i}$ because $p$ and $q$ are coprime. Therefore $\abs{d_n(i)}=(pq)^{n+1}/q^n-(pq)^{n+1}/(q^n p^{n+1-i})=(pq)p^n-qp^i$.
Let us denote $\kappa=\left\lfloor n-n\log_{pq}q+1\right\rfloor$. We can see that when $i<\kappa$,
\begin{flalign*}
\abs{P_n(i)}&=\abs{p_n(i)}-\abs{p_n(i+1)}=(pq)^{n+1-i}\abs{d_n(i)}-(pq)^{n-i}\abs{d_n(i+1)} \\
&=(pq)^{n+1}-(pq)^{n+1}=0.
\end{flalign*}
We may compute
\begin{flalign*}
&(pq)^{n+1}I_{n,\mu}^{-}=\sum_{i=0}^{\kappa-1} i\abs{P_n(i)}+ \sum_{i=\kappa}^{n} i\abs{P_n(i)} \\
&=n\abs{p_n(n)}+\sum_{i=\kappa}^{n-1}i(\abs{p_n(i)}-\abs{p_n(i+1)})=\kappa\abs{p_n(\kappa)}+\sum_{i=\kappa+1}^{n}\abs{p_n(i)},
\end{flalign*}
in which
\[\kappa\abs{p_n(\kappa)}=\kappa(pq)^{n+1-\kappa}\abs{d_n(\kappa)}=\kappa(pq)^{n+1-\kappa}(pq)^\kappa=\kappa(pq)^{n+1}\]
and
\begin{flalign*}
&\sum_{i=\kappa+1}^{n}\abs{p_n(i)}=\sum_{i=\kappa+1}^{n}(pq)^{n+1-i}\abs{d_n(i)}=\sum_{i=\kappa+1}^{n}(pq)^{n+1-i}((pq)p^n-qp^i) \\
&=(pq)p^n\sum_{i=\kappa+1}^{n}(pq)^{n+1-i}-q(pq)^{n+1}\sum_{i=\kappa+1}^{n}q^{-i}\leq(pq)p^n(pq)^{n-\kappa}\sum_{i=0}^{\infty}(pq)^{-i} \\
&\leq 2(pq)p^n(pq)^{n-(n-n\log_{pq}q+1)+1}\leq 2(pq)p^n(pq)^{\log_{pq}q^n}=2(pq)^{n+1}.
\end{flalign*}
Finally, the left average Lyapunov exponent is
\begin{flalign*}
I_{\mu}^{-}&=\lim_{n\to\infty}\frac{I_{n,\mu}^{-}}{n}=\lim_{n\to\infty}\frac{\kappa\abs{p_n(\kappa)}}{(pq)^{n+1}n}+\lim_{n\to\infty}\frac{\sum_{i=\kappa+1}^{n}\abs{p_n(i)}}{(pq)^{n+1}n}=\lim_{n\to\infty}\frac{\kappa}{n} \\
&=1-\log_{pq}q=\log_{pq}p.
\end{flalign*}
\end{proof}
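The two-case formula for $\abs{d_n(i)}$ derived in the proof can be sanity-checked numerically against the interval criterion used there: $u\in d_n(i)$ precisely when the open interval $J(u)_n$ contains a multiple of $q^n$. The following brute-force sketch (our own code) compares the two for $p=3$, $q=2$ and small $n$.

```python
def count_d(p, q, n, i):
    """|d_n(i)| via the interval criterion from the proof: u is counted
    iff the open interval (u*(pq)**(n+1-i), (u+1)*(pq)**(n+1-i))
    contains a multiple of q**n."""
    pq = p * q
    length = pq**(n + 1 - i)
    cnt = 0
    for u in range(pq**i):
        lo = u * length
        # smallest multiple of q**n strictly greater than lo
        m = (lo // q**n + 1) * q**n
        if m < lo + length:
            cnt += 1
    return cnt

def formula_d(p, q, n, i):
    """The closed form from the proof: (pq)**i when the intervals are
    longer than q**n, and pq*p**n - q*p**i otherwise."""
    pq = p * q
    if pq**(n + 1 - i) > q**n:
        return pq**i
    return pq * p**n - q * p**i
```

(For coprime $p,q>1$ the boundary case $(pq)^{n+1-i}=q^n$ cannot occur for $0\leq i\leq n$, so the two cases of the formula are exhaustive.)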
\begin{remark}
Multiplication automata can also be defined more generally. Denote by $\Mul_{\alpha,n}:\digs_n^\Z\to\digs_n^\Z$ the CA that performs multiplication by $\alpha\in\Rpos$ in base $n\in\N$, $n>1$ (if it exists). A characterization of all admissible pairs $\alpha,n$ can be extracted from the paper~\cite{BHM96}, which considers multiplication automata on one-sided configuration spaces $\digs_n^\N$. We believe that $I_{\mu}^{-}(\Mul_{\alpha,n})=\log_{n}\alpha$ for all $\alpha\geq 1$ and all natural numbers $n>1$ such that $\Mul_{\alpha,n}$ is defined (when $\mu$ is the uniform measure of $\digs_n^\Z$). Replacing the application of Lemma~\ref{godometer} by an application of Lemma~5.7 from \cite{KK17} probably yields the result for $\Mul_{p/q,pq}$ when $p>q>1$ are coprime. A unified approach to cover the general case would be desirable.
\end{remark}
\section{Conclusions}
We have shown that the Lyapunov exponents of a given reversible cellular automaton on a full shift cannot be computed to arbitrary precision. Ultimately this turned out to follow from the fact that the tiling problem for 2-way deterministic Wang tile sets reduces to the problem of computing the Lyapunov exponents of reversible CA. Note that the result does not restrict the size of the alphabet $A$ of the CA $F:A^\Z\to A^\Z$ whose Lyapunov exponents are to be determined. Standard encoding methods might be sufficient to solve the following problem.
\begin{problem}
Is there a fixed full shift $A^\Z$ such that the Lyapunov exponents of a given reversible CA $F:A^\Z\to A^\Z$ cannot be computed to arbitrary precision? Can we choose here $\abs{A}=2$?
\end{problem}
In our constructions we controlled only the right exponent $\lambda^+(F)$ and let the left exponent $\lambda^-(F)$ vary freely. Controlling both Lyapunov exponents would be necessary to answer the following.
\begin{problem}Is it decidable whether the equality $\lambda^{+}(F) +\lambda^{-}(F)=0$ holds for a given reversible cellular automaton $F:A^\Z\to A^\Z$?\end{problem}
We mentioned in the introduction that there exists a single CA whose topological entropy is an uncomputable number. We ask whether a similar result holds also for the Lyapunov exponents.
\begin{problem}Does there exist a single cellular automaton $F:A^\Z\to A^\Z$ such that $\lambda^+(F)$ is an uncomputable number?\end{problem}
By an application of Fekete's lemma the limit that defines $\lambda^+(F)$ is actually the infimum of a sequence whose elements are easily computable when $F:A^\Z\to A^\Z$ is a CA on a full shift. This yields the natural obstruction that $\lambda^+(F)$ has to be an upper semicomputable number. We are not aware of a cellular automaton on a full shift that has an irrational Lyapunov exponent (see Question 5.7 in \cite{CFK19}), so constructing such a CA (or proving the impossibility of such a construction) should be the first step. This problem has an answer for CA $F:X\to X$ on general subshifts $X$, and furthermore for every real $t\geq 0$ there is a subshift $X_t$ and a reversible CA $F_t:X_t\to X_t$ such that $\lambda^+(F_t)=\lambda^-(F_t)=t$ \cite{Hoch11}.
In the previous section we computed that the average Lyapunov exponent $I_{\mu}^{-}$ of the multiplication automaton $\Mul_{p,pq}$ is equal to $\log_{pq}p$ when $p,q>1$ are coprime integers. This in particular shows that average Lyapunov exponents can be irrational numbers. We do not know whether such examples can be found in earlier literature.
\bibliographystyle{plain}
\bibliography{mybib}{}
\end{document} | 111,190 |
Boylan's Lemon Soda 8oz Glass Bottle
Product Description
Boylan's Lemon Soda comes in an 8 oz (237 mL) glass bottle with a vintage-style molded design and slightly eggshell tinted glass. The almost 1920s Art Deco-style logo starkly captures the nostalgia of the time of neighborhood soda fountains. The brand's Irish name is in lime green, referencing the lime association with the slightly tart, citrus flavor, reading "Boylan Bottling Works. Brand LEMON Trademark.". A white slogan on the neck reads "All Natural with Pure Cane Sugar", and the bottle cap reads "Boylan Bttlg Co. 1891".
Containing citrus oils, Boylan's Lemon Soda gives off a lemon and nighttime perfume of white flowers with an undertone of peach. With a gentle, tea-like soft pour, it has low viscosity and carbonation, offering a small amount of nuanced, tiny bubbles. Be sure to serve it chilled for best results. Tasting it, you'll discover a soft, warm lemon flavor along with a blush of tangerine, peach, and twist of lime. The subtle aftertaste of dry white melon is so refreshing, you'll want to gulp it all down quickly.
In summary, Boylan's Lemon Soda:
- captures the nostalgia of neighborhood soda fountains
- is all natural with pure cane sugar
- offers a lemon flavor with a touch of peach and lime
Pair it with deep, and earthy flavor profiles such as salted nuts or even a chocolate lava cake. It's a delicious, complex, and fun flavor profile in a comfortable-to-hold bottle. Try Boylan's Lemon Soda today!
Boylan's Lemon Soda is bottled under the authority of: Boylan Bottling Co. Moonachie, NJ. 07074
Product Details
Ingredients: Carbonated Water, Cane Sugar, Citric Acid, Citrus Oils, Natural Lemon Flavors, Potassium Citrate. Caffeine Free, Sodium Free, Preservative Free.
Nutrition Facts: Serving Size 1 Bottle: Amount Per Serving, Calories 100, Total Fat 0g (0% DV), Sodium 0mg (0% DV), Total Carb. 26g (9% DV), Sugars 26g, Protein 0g (0% DV). Percent Daily Values are based on a 2,000 calorie diet.
UPC: 760712200029
Product Categories
We Also Recommend
Special Discount from Specialty Sodas!
Receive $5 Off Your Order!
Join our email club to receive the coupon code. | 132,560 |
TITLE: Characteristic Subgroups in Finite Cyclic Groups
QUESTION [1 upvotes]: Suppose $E = <e>$ is a cyclic subgroup of $C = <c>$ such that $e=c^j$. How can I show that for any automorphism $F: C \rightarrow C$ we have $e \in F(E)$, i.e. that $E \subset F(E)$?
REPLY [1 votes]: $\newcommand{\Z}{\mathbb{Z}}$ Ok, I'm going to pick some different notation because I want to use $e$ for the identity. Let $G$ be a cyclic group of order $n$ generated by $g$, so it is canonically isomorphic to $\Z_n$ under the isomorphism that sends $g$ to $1$; call this isomorphism $f$. Then let $H$ be the cyclic subgroup of $G$ of order $j$ generated by $h$. We have that $H$ sits as a subgroup of $\Z_n$ under $f$, where $f(h) = m$. Every automorphism of $\Z_n$ is given by the map that sends $1$ to an element of $A = \{ a : 0 < a < n, \gcd(a,n)=1 \}$. If $a \in A$, then $\phi(k) = ak$ is an automorphism of $\Z_n$, and we need to prove that the set $\{ab : b \in f(H) \}$ contains $f(H)$. Once you've reduced this to the $\Z_n$ case, the rest of the proof is Lagrange's theorem and a coprimality condition on $b$... Do you see it?
REPLY [0 votes]: Well, one way to get at this would be to recall two facts:
1) $E$ is the unique subgroup of $C$ of order $|E|$ (cyclic groups have exactly one subgroup of order $d$ for each divisor $d$ of $|C|$).
2) Automorphisms preserve the order of an element. That is, if $f$ is an automorphism of $C$, then the order of $f(e)$ is equal to the order of $e$.
Fact 2 implies that the order of $f(e)$ is equal to the order of $e$, and therefore $f(e)$ generates a cyclic subgroup of order $|E|$. In other words, $|<f(e)>| = |<e>| = |E|$. Since there is only one subgroup of order $|E|$ (which is of course $E$), we can conclude that $f(E)$ must be equal to $E$. | 43,698 |
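Both answers reduce to the same fact: multiplication by a unit of $\Z_n$ maps each subgroup onto itself. A quick computational sanity check (my addition, not part of either answer) in Python:

```python
# Sanity check: in Z_n, every automorphism k -> a*k (with gcd(a, n) = 1)
# maps the unique subgroup of order d onto itself, i.e. the subgroup is
# characteristic.
from math import gcd

def subgroup_of_order(n, d):
    # The unique subgroup of Z_n of order d is generated by n // d.
    assert n % d == 0
    g = n // d
    return {(g * k) % n for k in range(d)}

n = 12
for d in (1, 2, 3, 4, 6, 12):
    H = subgroup_of_order(n, d)
    for a in range(1, n):
        if gcd(a, n) == 1:  # a defines an automorphism of Z_n
            image = {(a * h) % n for h in H}
            assert image == H  # the automorphism fixes H as a set

print("all automorphisms of Z_12 fix every subgroup")
```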
Stefan Lumiere, 46, was sentenced by U.S. District Judge Jed Rakoff in Manhattan federal court, according to federal prosecutors.
Prosecutors had said in a brief filed in court that they believed a sentence of up to 14 years would be appropriate, while Lumiere's lawyers argued for a lighter sentence.
"We are pleased that our arguments resonated with the court," Lumiere's lawyers, Jonathan Halpern and Jon Friedman, said in a statement. They said Lumiere would appeal his conviction.
Lumiere, whose sister was married to Visium founder Jacob Gottlieb when he worked at the hedge fund, was found guilty of conspiracy, securities fraud and wire fraud charges in January after a six-day trial.
The trial followed an investigation of Visium that prompted the $8 billion firm's wind-down and charges against three others, including Sanjay Valvani, a portfolio manager who committed suicide in June after being accused of insider trading.
Prosecutors said Lumiere.
The case is U.S. v. Lumiere, U.S. District Court, Southern District of New York, No. 16-cr-00483.
(Reporting By Brendan Pierson in New York; Additional reporting by Nate Raymond; Editing by Bill Trott) | 265,414 |
Leawo Video Converter Free Download is the ultimate video converting software for Windows as well as Mac operating systems. The best free video converter helps you convert any of your SD or HD video and audio files from one file format to any other. Moreover, this free video converter supports converting various types of video and audio file formats, including HD formats. This best video converter also supports creating 3D videos out of common and HD videos with 6 different 3D effects.
The video converting software also supports converting videos to play well on various portable devices, such as iPod, iPod Touch 4, iPhone, iPhone 4, iPad, iPad 2, Apple TV, Apple TV 2, PSP, PS3, mobile phones, Android devices, etc. Moreover, the video converter also provides basic video editing tools to give your video and audio a more attractive look. With a simple click, you can make creative videos of your own easily and quickly.
Furthermore, Leawo Video Converter Full Version is freeware currently available on the market. If you want to download it from our website, click on the download button at the end of this page. Leawo Video Converter works on Windows Vista, 7, 8, 8.1, and 10. Lastly, this professional and popular video converter is compatible with both 32-bit and 64-bit Windows configurations.
Leawo Video Converter Free Download Full Features:
- Leawo Video Converter Free Download is well-designed with a new, modern, graphical user-friendly interface. So, even first-time users can manage or use it quickly and effortlessly.
- The best free video converter supports converting any HD or SD video or audio file into any popular video or audio file format easily.
- At the same time, it allows you to easily watch HD movies without quality loss on trending devices like iPhone 6, iPad Air, Galaxy S 5, Lumia 920, etc.
- Leawo Video Converter Full Version supports a wide range of video and audio file formats, such as AVI, DivX, Xvid, VOB, MOV, WMV, ASF, RMVB, RM, MPEG, FLV, MPG, MP4, MP3, WMA, 3GP, MKV, HD, UHD, BD, etc.
- Moreover, it also provides fantastic 2D to 3D video converter by using various 3D setting modes for your preferences like Red/Cyan, Red/Green, Red/Blue, Blue/Yellow, Interleaved, and Side by Side.
- Leawo Video Converter Free Download also lets you adjust video and audio parameters like video codec, bit rate, aspect ratio, frame rate, audio codec, channel, etc.
- You can also apply many special effects to give your videos a more attractive look: crop the video image for a better display, adjust the brightness, contrast, and saturation of the video, and add text or image watermarks.
- Besides, it also provides a built-in video player that helps to preview the HD videos and capture the video frames and then save them to JPG, BMP, or PNG files.
- The video converter allows you to convert a batch of HD videos at a time; processing multiple videos simultaneously helps save time and energy.
- Leawo Video Converter Full Version joins several HD or SD video files or different segments into a single file.
- Finally, the welcome window of Leawo Video Converter is presently available in multiple languages, including German, Spanish, Italian, Danish, Russian, Dutch, English, Portuguese, and others.
Leawo Video Converter for Windows System Requirements:
- Operating System: Windows Vista, 7, 8, 8.1, and 10 (both 32-bit and 64-bit).
- Processor: 1.2 GHz Intel/AMD CPU or higher.
- RAM: 1 GB of Memory.
- Hard Disk: 120 MB free disk space.
- Developer: Leawo Software
Leawo Video Converter Conclusion:
Leawo Video Converter Free Download is the best video converter: it converts your downloaded, recorded, imported, or existing video or audio files into any other video or audio format. The best free video converter converts and plays different types of video and audio files, including HD files. Moreover, it allows you to import video and audio files from all internal and external storage drives as well as all popular social sites. Lastly, Leawo Video Converter is free to download from our website and works on all versions of Windows and Mac platforms.
Chelsea Handler’s in love – the brassy talk show host has spilled details about her very high profile relationship with boyfriend Andre Balazs. She also had nice words to say about her former flame, 50 Cent.
Handler gushed over her 56-year-old hotelier beau, telling Oprah Winfrey in a new episode of “Oprah’s Next Chapter” that she’s in love.
“I’ve met my match. We couldn’t be more opposite. I bring out the stupidity in him probably.”
The 38-year-old E! personality even shared details about their plans for the future.
“We were talking about living our lives together, my boyfriend and I,” she said. “We’ve broken up a couple of times and gotten back together, and we’re trying to figure out how to live a life together.”
However, the “Chelsea Lately” host admits that Balazs’ jet-set lifestyle can be “taxing” on their relationship since she’s based in Los Angeles for her show.
“It’s just so hard and it’s just so taxing,” she complained.
Handler explained that she’s not concerned about tying the knot or having kids with her beau.
As for her previous relationship with 50 Cent, Handler said “he’s a sweetheart and he’s so cute.”
The two met in 2010 and dated for several months.
“It wasn’t the most serious relationship. He came on my show and he sent me flowers. And I was like, ‘I’m not gonna date somebody whose name is a number.'” | 212,626 |
Inside Sales International (m/f/d)
- Location: Ravensburg
- Working time: Full-time
- Job level: Professionals
- Place of work: Hybrid work
- Department: Marketing & Sales
- Managing the day-to-day operational business within the defined area of responsibility
- Order processing, including planning, scheduling and monitoring of goods shipments
- Preparation of all shipping and export documents required for customs clearance
- Acting as interface and point of contact for our international customers and colleagues
This is what you can score highly with
- Commercial training with professional experience in a comparable environment
- Very good written and spoken English
- Confident use of MS Office applications; SAP knowledge is an advantage
- Independent, conscientious and structured way of working
- Flexibility, team spirit and strong communication skills
Your Benefits
Flexible working hours
With our flexible working time models, you can easily manage your job and your private life. And if your position allows it, you can work part of the time from home.
More in your pocket
In addition to your attractive salary, you will benefit from employer contributions towards employee savings and employee discounts on our products.
Perfectly insured
We offer you a company pension plan and special conditions for occupational disability, accident and health insurance.
Live your dreams
We are happy to support you if you need some time off, want to realize your dreams and take a sabbatical. Let's discuss the possibilities together.
Robert Keddie
International. | 105,300 |
Bitvavo Review
Bitvavo is one of the largest and best-known European crypto exchanges, with over a million users. Bitvavo has only been around for a few years but has grown rapidly. I, too, have been using Bitvavo for quite some time to buy and sell crypto. In this review, I will share my experiences with this trading platform so you can decide whether Bitvavo is a good fit for you.
Would you like to try trading cryptos with Bitvavo? Use the button to register and trade your first €1000 in cryptos without paying commissions:
Bitvavo review summary
Bitvavo is a good choice for anyone looking for an easy-to-use solution for buying and selling Bitcoin and other cryptocurrencies. The transaction fees are low (never more than 0.25%) and the selection of cryptocurrencies is growing rapidly. Bitvavo is a reliable crypto exchange: they are supervised by the Dutch Central Bank and must adhere to strict regulations.
Advantages of Bitvavo
- You never pay more than 0.25% in fees
- Reliable & Dutch party
- User-friendly online trader
- Over 150+ different cryptocurrencies
- Wallets insured up to $255 million
Disadvantages of Bitvavo
- No phone support
- Credit cards are not accepted
- Not suitable for active speculation
- Basic mobile application
- Verification required
Free €1000 at Bitvavo
Before we discuss Bitvavo in this review, I’d like to share this special promotion with you! When you open an account with Bitvavo using the link below, you can trade your first €1000 in cryptos for free. This makes it the perfect exchange to start trading cryptos:
How does the crypto exchange work?
Bitvavo is a so-called crypto exchange: this means that you can buy popular cryptocurrencies like Bitcoin, Ethereum and Cardano directly on the platform. Buying your first crypto within the platform only takes a few minutes.
The first step is to open an account with Bitvavo. To open an account, you have to fill in some basic information.
After you have opened an account, the second step is to link your bank account to your Bitvavo account. Click on the Deposit button and make your first deposit. Don’t forget to verify your identity to lift your account restrictions.
Opening your first crypto position within Bitvavo is easy! Navigate to the cryptocurrency you want to buy and indicate for which amount you want to open a position. After you have placed your order, Bitvavo will do the rest and the crypto will be added to your account.
Do you want to learn in more detail how the Bitvavo crypto exchange works? Then read this extensive guide. In the rest of this article, we will discuss in more detail what the advantages & disadvantages of Bitvavo are.
What are the trading fees at Bitvavo?
At Bitvavo, you always benefit from low costs. Many platforms charge at least a 1% transaction fee on every transaction. This is not the case with Bitvavo: you will never pay more than 0.25%!
These transaction fees decrease even more when you place a limit order that adds liquidity to the order book: your maximum transaction fee will then be 0.15%. By placing such an order, you help Bitvavo to create a market, and they want to reward you for that.
You will also receive substantial discounts when you trade in large volumes. When trading in large volumes, transaction fees can even drop to 0.03%. Since Bitvavo’s costs are relative to your order size, the exchange is attractive for both beginning crypto traders and serious investors.
Don’t forget to also pay attention to the spread: this is the difference between the buy and sell price of a crypto. Especially in times of high volatility, this spread can rise sharply, which can affect your results.
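To make the fee arithmetic concrete, here is a tiny illustrative calculator. It is only a sketch based on the two rates quoted in this review (0.25% taker, 0.15% maker); Bitvavo's actual volume-tier schedule is more detailed and may differ.

```python
# Illustrative only: fee estimate using the headline rates from this review.
# Real tiers depend on 30-day trading volume and can go as low as 0.03%.
def transaction_fee(order_eur, maker=False):
    rate = 0.0015 if maker else 0.0025  # maker vs. taker rate (assumed flat)
    return order_eur * rate

print(transaction_fee(1000))              # 2.5 EUR taker fee on a 1000 EUR order
print(transaction_fee(1000, maker=True))  # 1.5 EUR maker fee on the same order
```

Because the fee is proportional to order size, small periodic purchases and large one-off trades are charged at the same percentage, which is why the exchange suits both groups.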
Do you want to read about the costs at Bitvavo in more detail? Then read this article about costs at Bitvavo.
The Bitvavo trading platform
You can use the user-friendly Bitvavo trading platform in your web browser. After logging in, you will immediately see a clear overview of the cryptocurrencies you can trade. It is possible to sort the cryptos by name, but also by degree of increase/decrease in price.
Within the standard trading platform, you’ll always immediately see how your portfolio performs. It’s a pity that profits and losses are not properly tracked within the application. It is therefore advisable to keep a record of your purchase prices somewhere, so you can keep a clear overview of your winning and losing trades.
The advanced trader of Bitvavo
For the more serious trader, there is also an advanced trading platform. Within the advanced trader, you can perform analyses on the charts using technical analysis. Within the advanced trader, you also have access to the orderbook, which enables you to see how many buy and sell orders are coming in.
Useful within the advanced trader are the different orders you can place:
- Market order: the order is placed directly, and you will receive your cryptos at the best available price
- Limit order: you determine the price at which you want to buy the crypto. It is not certain that the order will actually be executed.
The advanced trader is especially useful for the active trader, who wants to execute several transactions per day. The extra analysis tools can help you make the right decisions.
Bitvavo mobile app
Bitvavo also has its own mobile application which excels in usability. With the mobile application, you can quickly buy or sell crypto regardless of where you are. You can also use the mobile application to track your results.
It is a pity that all the advanced trading possibilities are missing from the mobile application. If you want to trade actively, the Bitvavo mobile application is not sufficient. In that case, it’s recommended to use a computer.
Which cryptos can you trade at Bitvavo?
In the past, the number of tradeable cryptos at Bitvavo was limited, but they have more than made up for this. In 2021, Bitvavo added dozens of cryptocurrencies, which means they now offer more than 150 different ones. They clearly pay attention to the latest developments: new hype coins like Dogecoin and Shiba Inu are added quickly.
At some international exchanges, you can trade in an even wider number of cryptos. Some exotic cryptos may be missing at Bitvavo. This does mean that the tradeable cryptos at Bitvavo are already known somewhat, which decreases the chances of investing in a scam coin.
However, in my experience, there is a lack of alternative crypto products. For example, you cannot trade leveraged products or sell short. NFTs and special crypto credit cards are also missing. Still, if you are specifically looking for a solid crypto exchange, this is not a big problem.
Staking on Bitvavo
At Bitvavo, you also have the option to stake your coins. When you stake a cryptocurrency, you will receive interest on it. At the same time, you also contribute to the technology behind the cryptocurrency.
It is easy to stake coins with Bitvavo. All you have to do is indicate that you want to stake your cryptos in the settings. Within the staking section of your account, you can immediately see how much you have already earned.
Don’t forget, however, that staking doesn’t come without risks! In this article, we will discuss in detail how staking works, so you are well-prepared.
Verification with Bitvavo
Because Bitvavo is registered with the Dutch Central Bank, you need to verify your account first. Bitvavo has to verify that a customer really is who they say they are. This is done by uploading your ID and confirming your bank account by making a small deposit.
Fortunately, you only have to go through the verification process once: irritating, but necessary to comply with the rules of the Dutch government. Fortunately, the verification process doesn’t take long at Bitvavo and even during busy times you can start trading cryptos within 24 hours.
Withdrawing & depositing funds at Bitvavo
Depositing money at Bitvavo is easy: you can use bank transfer or iDEAL. It is possible to start trading from as little as 1 euro. With iDEAL, the funds will be added to your account right away so you can quickly get started.
I have also regularly withdrawn money from my Bitvavo account. Officially, this can take up to 48 hours, but from experience I know it is often processed immediately. The money is then visible on your European bank account within minutes.
Reliability of Bitvavo
I mainly choose Bitvavo as my crypto exchange because of its high level of reliability. Bitvavo is a Dutch company registered with the Dutch Central Bank, which means it has to comply with European laws and regulations.
At Bitvavo, your funds are always stored in a separate foundation. This protects you from losing your money in case Bitvavo would file for bankruptcy.
The wallets stored at Bitvavo are insured up to $255 million by custody providers. The crypto assets are stored in high-security vaults worldwide, making it difficult for criminals to rob the company.
Even though Bitvavo is doing everything it can to protect the assets and cryptos of its users, it can still be smart to store some of them in cold wallets. A cold wallet is not connected to the internet, making it more difficult for hackers to gain access.
Do you want to read more about the reliability of Bitvavo? Please read our more extensive article about the reliability of Bitvavo.
Bitvavo wallet
Another big advantage of Bitvavo is that they offer a wallet for every crypto they list. This allows you to easily store the cryptos you buy at Bitvavo without further technical knowledge.
It is also possible to transfer cryptos stored at another exchange to your Bitvavo wallet. Just make sure you copy the Bitvavo wallet address correctly: if you make a mistake, you might lose your entire deposit.
Do you desire even more security and safety? Then you can also easily transfer the cryptos on your Bitvavo wallet to an external (cold) wallet. Do you want to read more about how wallets work within the Bitvavo platform? Read this article.
Bitvavo customer service
Bitvavo’s customer service is adequate, but not very personal. You can reach the crypto exchange via email or live chat during business hours. However, it’s not possible to speak to someone over the phone, which would have been useful for more complex questions.
Bitvavo’s customer service quickly resolves any problems. However, they could score some extra points by making their customer service more easily accessible.
The minimum deposit at Bitvavo is €1. Even with small amounts, it can be attractive to invest in crypto’s: for example, you can periodically buy a small amount of crypto.
There is no maximum investment: it is possible to trade cryptos for millions of euros at Bitvavo. If you trade with a large amount, you can often only buy a crypto in smaller batches. This way, Bitvavo prevents you from paying too much.
Everyone over the age of 18 can open an account with Bitvavo.
Bitvavo has been verifying the identity of anyone who wants to trade crypto at Bitvavo in accordance with European guidelines since 2021.
Bitvavo pays out your money within 48 hours. Often, the money will be in your bank account right away.
Bitvavo is available in all European countries. Currently, the site is translated into English, French, German, Dutch, Spanish and Italian.
You can use Bitvavo to store and manage cryptos. After registering, you will receive a unique wallet address for each crypto that you can use.
At Bitvavo, you pay a maximum of 0.25% in trading costs for your transactions. This is unprecedentedly low: at many other crypto exchanges, you can easily pay 1% or more. Moreover, as a new user, you don’t have to pay transaction fees on your first €1000 in transactions.
When you use bank transfer or iDEAL, you pay no extra costs for depositing money. Using Bitvavo’s products is 100% free, you only pay for executed transactions.
Bitvavo review conclusion
I have excellent experiences with the trading platform Bitvavo. The low costs, the user-friendly platform and the wide range of cryptocurrencies make it a good choice for anyone interested in investing in crypto.
Are you wondering whether my opinion differs from the crowd? Below, you can see how Bitvavo is rated by hundreds of other users:
Not convinced about Bitvavo yet? Open a free account & pay no transaction fees on your first €1000 of crypto trades.
On this website, you will find much more useful information about Bitvavo to help you get started. | 92,365 |
Buy from Amazon.ca, Amazon.com, or IndieBound
Author’s website
Publication date – June 21, 2011
Summary: (Taken from GoodReads)!
Thoughts: There’s something I want to get out of the way here: I almost didn’t read this book past the first chapter. It started out seeming like a big mess, like the author didn’t know if she wanted to create a fantasy world or an alternate earth. Real-world mythology and religion (or rather, religious organizations all co-existing peacefully without any mention of actual religion) existing side-by-side with magic, fictional places mentioned alongside real places. It felt like a mess, like the author was perhaps banking on nobody having ever heard of an angel named Mastema or a place called Walachia, instead just hoping they’ll consider it all a part of the fantasy.
Then chapter 2 hits, and you realize, with a jump to the modern real world, that things aren’t actually as messed up as they seem, at least not when it comes to the world that the novel takes place in. It’s revealed that there are layers of reality, worlds in addition to our own, and that the veil between them sometimes gets thin enough to allow people to pass through from one world to the next. Not an original concept, I’ll grant you, but it did explain why mentions of real and fake places went hand in hand. There was a method to the madness, and it renewed my faith in the novel and made me want to keep reading.
Heavy with Judeo-Christian-Islamic mythology but still inclusive of any other belief system you can think of, Miserere takes place in Woerld, the plane of reality that’s one step closer to Hell than we are. The real action takes place around Lucian, who escapes the clutches of his power-hungry sister Catarina, the woman who’s working with a Fallen Angel to acquire yet more power and to take over Woerld. After his escape he meets Lindsay, a young girl who passed through the veil from our world into Woerld and who has become, in an instant, his protege. But Catarina’s not the only one looking to bring Lucian back. The forces of God, believing Lucian to be a criminal in exile, are after him too. But conspiracy runs deep, and even those who claim to follow the light may have a sinister purpose.
What started off so chaotically ended up making a lot of sense by the end, and the story had a great deal of depth to it that isn’t always easy to come by when you’re essentially saying that God, Heaven, and Hell are real. Miserere was far from bible-thumping; it had quite a good message of inclusion, acceptance, and tolerance for the fact that even when people pray to different gods they’re still essentially praying to the same powers of goodness and light. Frohock plays with mythology in a wonderful and compelling way that makes you desperate to keep turning pages. The characters are richly detailed, well defined and interesting, and even though you’ve got adversaries who are working for the forces of evil, they remain three-dimensional and don’t simply become caricatures.
Frohock’s got some real talent here, and I was very impressed to find that this was her debut novel. This is normally the kind of quality you get from people who’ve been around the block a few times, so to speak. If this is Frohock’s starting point, then I’m very excited to see what she’s going to do next.
When all is said and done, the real reason this book lost points with me is because of the beginning. First impressions are important, and I know I can’t expect everything to be revealed within the first ten pages, but it sat so wrongly with me until I forced my way through what seemed like a poor and unpolished opening that I can’t help but have that impression colour my final review. I can only caution others to not be so thrown off when they read it. But in spite of a shaky start, the book turned out so much better than I thought it was going to, and this is one I can definitely recommend to those who enjoy a little world-crossing in their fantasy novels.
(Received for review from the publisher via NetGalley.)
Microsoft MB2-716 Dumps PDF
Name: Microsoft Dynamics 365 Customization and Configuration
Exam Code: MB2-716
Vendor: Microsoft
Total Question: 99
Last Updated: Aug 02 2022
Best Microsoft MB2-716 Dumps PDF To Help You Out In Your Exam:
Looking online for the most appropriate MB2-716 dumps PDF for your preparation? Then you are in the right place for the Microsoft MB2-716 exam dumps. We provide you with the most effective and efficient MB2-716 dumps PDF. Passing the exam is no longer just a dream for students, and they no longer need to burn the midnight oil to pass the MB2-716 exam. How? Just visit Gratisdumps and get the MB2-716 study material. The MB2-716 dumps PDF comes in a user-friendly PDF format and will surely help you in your MB2-716 exam preparation.
Price : $59.99 | 157,312 |
TITLE: What's the probability that you find a working pen?
QUESTION [2 upvotes]: You find a container of $27$ old pens in your
school supplies and continue to test them (without replacement), until you find
one that works. If each individual pen works $25$% of the time (regardless of the
other pens), what is the probability that you find one that works within the
first four tries?
My best guess was
$0.25 = \cfrac{1}{4}$ is the Probability of us having the 1st pen work ($0.25$ = probability of success)
$0.75 \cdot 0.25 = \cfrac{3}{16}$ is the probability of us having the 2nd pen work where $0.75$ is the probability of the 1st pen failing
$0.75 \cdot 0.75 \cdot 0.25 = \cfrac{9}{64}$ is the probability of us having the 3rd pen work where $0.75$ is the probability of the 1st and 2nd pen failing
$0.75 \cdot 0.75 \cdot 0.75 \cdot 0.25 = \cfrac{27}{256}$ is the probability of us having the 4th pen work where $0.75$ is the probability of the 1st and 2nd and 3rd pen failing
So probability of us having a success within four tries $= \cfrac{1}{4} + \cfrac{3}{16} + \cfrac{9}{64} + \cfrac{27}{256} = \cfrac{175}{256}$
What I wanted to ask this website is if I had this right? I ask because the fact that they emphasize that there's exactly 27 pens and that you test them without replacement makes me think that I'm missing a key step/have a logical flaw somewhere. Can someone tell me if I am missing a step, because it feels like I am.
REPLY [0 votes]: You are correct. The number $27$ is a bit of a red herring; it doesn't really matter how many there are (so long as there are at least four).
Another way to do it: suppose that all four pens fail. The chance of this is $$\left(\frac34\right)^4 = \frac{81}{256}.$$
Now, if that didn't happen, then at least one of them worked: $$1-\frac{81}{256}=\frac{175}{256}.$$
This agrees with your answer; the advantage is that it's a much simpler calculation, no matter how many pens you have to test (try both methods with all $27$ instead of $4$ pens...). | 16,199 |
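For readers who want to double-check the arithmetic, here is a quick sketch in Python (my own addition, not part of the original answer). It verifies both the term-by-term sum and the complement calculation exactly, and runs a small simulation of the experiment:

```python
from fractions import Fraction
import random

# Exact: P(at least one of the first 4 pens works) = 1 - (3/4)^4
p_fail = Fraction(3, 4)
p_success_within_4 = 1 - p_fail**4
assert p_success_within_4 == Fraction(175, 256)

# Term-by-term sum from the answer above: sum of (3/4)^k * (1/4), k = 0..3
terms = [Fraction(3, 4)**k * Fraction(1, 4) for k in range(4)]
assert sum(terms) == Fraction(175, 256)

# Monte Carlo check: each pen works independently with probability 1/4,
# so "without replacement" just means we test 4 distinct pens.
random.seed(0)
trials = 200_000
hits = sum(
    any(random.random() < 0.25 for _ in range(4))  # test the first 4 pens
    for _ in range(trials)
)
print(hits / trials, float(Fraction(175, 256)))  # both close to 0.6836
```

Note that the number 27 never enters the calculation, matching the "red herring" observation in the answer.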
\begin{document}
\maketitle
\begin{abstract} Let $d\geq 2$ be given and let $\mu$ be an
involution-invariant probability measure on
the space of trees $T\in\cT_d$ with maximum
degrees at most $d$. Then $\mu$ arises as the local limit of some sequence
$\{G_n\}^\infty_{n=1}$ of graphs with all degrees at most $d$.
This answers Question 6.8 of Bollob\'as and Riordan \cite{Bol}.
\end{abstract}
\section{Introduction}
Let $\Gd$ denote the set of all finite simple
graphs $G$ (up to isomorphism)
for which $\deg(x) \leq d$ for every $x \in V(G)$.
For a graph $G$ and $x,y \in V(G)$ let $d_G(x,y)$ denote the distance
of $x$ and $y$, that is the length of the shortest path from $x$ to
$y$. A rooted $(r,d)$-ball is a graph $G \in \Gd$ with a marked
vertex $x \in V(G)$ called the root such that $d_G(x,y) \leq r$ for
every $y \in V(G)$. By $U^{r,d}$ we shall denote the set of rooted
$(r,d)$-balls.
If $G \in \Gd$ is a graph and $x\in V(G)$ then $B_r(x) \in U^{r,d}$
shall denote the rooted $(r,d)$-ball around $x$ in $G$.
For any $\alpha \in U^{r,d}$ and $G \in \Gd$ we define the set
$T(G,\alpha) \defeq \{x \in V(G): B_r(x) \cong \alpha\}$ and let
$p_G(\alpha) \defeq \frac{|T(G,\alpha)|}{|V(G)|}$.
A graph sequence $\G = \{G_n\}_{n=1}^{\infty} \subset \Gd$ is
{\bf weakly convergent} if $\lim_{n \to \infty} |V(G_n)| = \infty$
and for every $r$ and every $\alpha \in U^{r,d}$ the limit
$\lim_{n\to\infty}p_{G_n}(\alpha)$ exists (see \cite{BS}).
\noindent
Let $\Grd$ denote the set of all countable, connected rooted
graphs
$G$ for which $\deg(x) \leq d$ for every $x \in V(G)$.
If $G,H\in\Grd$ let $d_g(G,H)=2^{-r}$, where
$r$ is the maximal number such that the $r$-balls around the roots
of $G$ resp. $H$ are rooted isomorphic. The distance $d_g$ makes
$\Grd$ a compact metric space. Given an
$\alpha \in U^{r,d}$ let $T(\Grd,\alpha) = \{(G,x) \in \Grd : B_r(x)
\cong\alpha\}$. The sets $T(\Grd,\alpha)$ are closed-open sets.
A convergent graph sequence $\{G_n\}^\infty_{n=1}$ defines a
{\bf local limit measure} $\mu_{\bf G}$ on $\Grd$,
where $\mug(T(\Grd,\alpha)) =\limn p_{G_n}(\alpha)$.
However, not all the probability measures on $\Grd$ arise as local
limits. A necessary condition for a measure $\mu$ being a local limit
is its {\bf involution invariance} (see Section \ref{invol}).
The goal of this paper is to answer a question of
Bollob\'as and Riordan (Question 6.8 \cite{Bol}):
\begin{theorem}\label{theorem1}
Any involution-invariant measure $\mu$ on $\Grd$ concentrated on
trees arises as a local limit of some convergent graph sequence.
\end{theorem}
As was pointed out in \cite{Bol},
such graph sequences are asymptotically treelike, thus
$\mu$ must arise as the local limit of a convergent large-girth
sequence.
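\noindent
For instance, the Dirac measure concentrated on the rooted $d$-regular tree
is involution-invariant, and it arises as the local limit of any sequence of
$d$-regular graphs whose girth tends to infinity: for every fixed $r$, the
$r$-ball around each vertex of such a graph is eventually isomorphic to the
$r$-ball of the $d$-regular tree.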
\section{Involution invariance} \label{invol}
Let $\ogr$ be the compact space of all connected
countable rooted graphs $\vec{G}$ (up to isomorphism)
of vertex degree bound $d$
with a distinguished directed edge pointing out from the root.
Note that $\vec{G}$ and $\vec{H}$ are considered isomorphic if there
exists a rooted isomorphism between them mapping distinguished
edges into each other.
Let $\our$ be the isomorphism classes
of all rooted $(r,d)$-graphs $\oa$ with a distinguished edge
$e(\oa)$ pointing out from the root.
Again, $T(\ogr,\oa)$ is well-defined for any $\oa\in\our$ and defines
a closed-open set in $\ogr$. Clearly,
the forgetting map $\cF:\ogr\to\Grd$ is continuous.
Let $\mu$ be a probability measure on $\Grd$. Then we define
a measure $\omu$ on $\ogr$ the following way.
\noindent
Let $\oa\in \our$ and let $\cF(\oa)=\alpha\in\urd$ be the
underlying rooted ball. Clearly,
$\cF(T(\ogr,\oa))=T(\Grd,\alpha)$. Let
$$\omu(T(\ogr,\oa)):=l\,\mu(T(\Grd,\alpha))\,,$$ where
$l$ is the number of edges $e$ pointing out from the root such that there
exists a rooted automorphism of $\alpha$ mapping $e(\oa)$ to $e$.
Observe that
$$\omu(\cF^{-1}(T(\Grd,\alpha)))=\deg(\alpha)\mu(T(\Grd,\alpha))\,.$$
We define the map $T:\ogr\to\ogr$ as follows. Let $T(\vec{G})=\vec{H}$,
where :
\begin{itemize}
\item the underlying graphs of $\vec{G}$ and $\vec{H}$ are the same,
\item the root of $\vec{H}$ is the endpoint of $e(\vec{G})$,
\item the distinguished edge of $\vec{H}$ is pointing
to the root of $\vec{G}$.
\end{itemize}
Note that $T$ is a continuous involution.
Following Aldous and Steele \cite{AS}, we
call $\mu$ {\bf involution-invariant} if $T_*(\omu)=\omu$.
It is important to note \cite{AS},\cite{AL} that the limit measures
of convergent graph sequences are always involution-invariant.
\noindent
We need to introduce the notion of {\bf edge-balls}.
Let $\vec{G}\in\ogr$. The edge-ball $B^e_r(\vec{G})$ of radius
$r$ around the root of $\vec{G}$ is the following spanned rooted
subgraph of $\vec{G}$:
\begin{itemize}
\item The root of $B^e_r(\vec{G})$ is the same as the root of $\vec{G}$.
\item $y$ is a vertex of $B^e_r(\vec{G})$ if
$d(x,y)\leq r$ or $d(x',y)\leq r$, where $x$ is the root of $\vec{G}$
and $x'$ is the endpoint of the directed edge $e(\vec{G})$.
\item The distinguished edge of $B^e_r(\vec{G})$ is $(\vec{x,x'})$.
\end{itemize}
Let $\erd$ be the set of all edge-balls of radius $r$ up to isomorphism.
Then if $\vep\in\erd$, let
$s(\vep)\in \our$ be the rooted ball around the root of $\vep$.
Also, let $t(\vep)\in\our$ be the $r$-ball around $x'$ with
distinguished edge $(\vec{x',x})$.
\noindent
The involution $T^{r,d}:\erd\to\erd$ is
defined the obvious way and $t(T^{r,d}(\vep))=s(\vep)$,
$s(T^{r,d}(\vep))=t(\vep)$.
Since $\omu$ is a measure we have
\begin{equation} \label{e1}
\omu(T(\ogr,\oa))=\sum_{\vep,s(\vep)=\oa}\omu(T(\ogr,\vep)).
\end{equation}
Also, by the involution-invariance
\begin{equation} \label{e3}
\omu(T(\ogr,\vep))=\omu(T(\ogr,T^{r,d}(\vep))),
\end{equation}
since $T(T(\ogr,\vep))=T(\ogr,T^{r,d}(\vep))$.
Therefore by (\ref{e1}),
\begin{equation} \label{e2}
\omu(T(\ogr,\oa))=\sum_{\vep,t(\vep)=\oa}\omu(T(\ogr,\vep))
\end{equation}
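\noindent
In words, (\ref{e1}) computes the $\omu$-mass of the balls of type $\oa$ by
summing over the edge-balls whose root has type $\oa$, while (\ref{e2})
computes the same mass by summing over the edge-balls whose other endpoint
has type $\oa$. This combinatorial form of involution invariance is closely
related to the mass transport principle discussed in \cite{AL}.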
\section{Labeled graphs}
Let $\ogrn$ be the isomorphism classes of
\begin{itemize}
\item connected countable rooted graphs with vertex degree bound $d$
\item with a distinguished edge pointing out from the root
\item with vertex labels from the set $\{1,2,\dots,n\}$.
\end{itemize}
Note that if $\vec{G}_*$ and $\vec{H}_*$ are such graphs then
they are called isomorphic if there exists a map $\rho:V(\vec{G}_*)\to
V(\vec{H}_*)$ preserving both the underlying $\ogr$-structure and
the vertex labels.
The labeled $r$-balls $\our_n$ and the labeled $r$-edge-balls
$\erd_n$ are defined accordingly.
Again, $\ogrn$ is a compact metric space and
$T(\ogrn,\oa_*)$,\,$T(\ogrn,\vep_*)$ are closed-open sets,
where $\oa_*\in \our$, $\vep_*\in \erd_n$.
Now let $\mu$ be an involution-invariant probability
measure on $\Grd$ with induced measure $\omu$. The associated
measure $\omu_n$ on $\ogrn$ is defined the following way.
\noindent
Let $\oa\in\our$ and $\kappa_1,\kappa_2$ be vertex labelings of $\oa$ by
$\{1,2,\dots,n\}$. We say that $\kappa_1$ and $\kappa_2$ are equivalent
if there exists a rooted automorphism of $\oa$ preserving the
distinguished edge and mapping $\kappa_1$ to $\kappa_2$.
Let $C(\kappa)$ be the equivalence class of the vertex labeling $\kappa$
of $\oa$. Then we define
$$\omu_n(T(\ogrn,[\kappa])):=\frac{|C(\kappa)|}{n^{|V(\oa)|}}
\omu(T(\ogr,\oa))\,.$$
\begin{lemma} \label{borel}
\begin{enumerate}
\item $\omu_n$ extends to a Borel-measure.
\item $\omu(T(\ogr,\oa))=\sum_{\oa_*,\,\cF(\oa_*)=\oa}
\omu_n(T(\ogrn,\oa_*))\,.$
\end{enumerate} \end{lemma}
\proof
The second equation follows directly from the definition. In order to prove
that $\omu_n$ extends to a Borel-measure it is enough to prove that
$$\omu_n(T(\ogrn,\oa_*))=
\sum_{\ob_*\in N_{r+1}(\oa_*)}
\omu_n(T(\ogrn,\ob_*))\,,$$
where $\oa_*\in\our_n$ and $N_{r+1}(\oa_*)$ is the
set of elements $\ob_*$ in $\vec{U}^{r+1,d}_n$ such that the
$r$-ball around the root of $\ob_*$ is isomorphic to $\oa_*$.
Let $\oa=\cF(\oa_*)\in\our$ and let
$N_{r+1}(\oa)\subset \our$ be the set of elements $\ob$ such that
the $r$-ball around the root of $\ob$ is isomorphic to $\oa$.
Clearly
\begin{equation} \label{gyors}
\omu(T(\ogr,\oa))=\sum_{\ob\in N_{r+1}(\oa)} \omu(T(\ogr,\ob))\,.
\end{equation}
Let $\kappa$ be a labeling of $\oa$ by $\{1,2,\dots,n\}$ representing
$\oa_*$. For $\ob\in N_{r+1}(\oa)$ let $L(\ob)$ be the set of labelings
of $\ob$ that extends some labeling of $\oa$ that is equivalent to $\kappa$.
\noindent
Note that
$$\omu_n(T(\ogrn,\oa_*))=\omu(T(\ogr,\oa))\frac{|C(\kappa)|}{n^{|V(\oa)|}}\,.$$
Also,
$$\sum_{\ob_*\in N_{r+1}(\oa_*)} \omu_n (T(\ogrn,\ob_*))=
\sum_{\ob\in N_{r+1}(\oa)} \omu(T(\ogr,\ob))
\frac{|L(\ob)|}{n^{|V(\ob)|}}\,.$$
Observe that
$|L(\ob)|=|C(\kappa)|\, n^{|V(\ob)|-|V(\oa)|}$.
Hence
$$\sum_{\ob_*\in N_{r+1}(\oa_*)} \omu_n (T(\ogrn,\ob_*))=
\sum_{\ob\in N_{r+1}(\oa)}\omu(T(\ogr,\ob))
\frac{|C(\kappa)|}{n^{|V(\oa)|}}\,.$$
Therefore using equation (\ref{gyors}) our lemma follows. \qed
\vskip0.2in
\noindent
The following proposition shall be crucial in our construction.
\begin{propo}\label{master}
For any $\oa_*\in\our_n$ and $\veps_*\in\erd_n$
\begin{itemize}
\item
$\omu_n(T(\ogrn,\oa_*))=\sum_{\vep_*\in\erd_n,\,s(\vep_*)=\oa_*}
\omu_n(T(\ogrn,\vep_*))$
\item
$\omu_n(T(\ogrn,\oa_*))=\sum_{\vep_*\in\erd_n,\,t(\vep_*)=\oa_*}
\omu_n(T(\ogrn,\vep_*))$
\item
$\omu_n(T(\ogrn,\veps_*))=
\omu_n(T(\ogrn,T^{r,d}_n(\veps_*)))\,.$
\end{itemize}
\end{propo}
\proof
The first equation follows from the fact that $\omu_n$ is a Borel-measure.
Thus the second equation will be an immediate corollary of the third one.
So, let us turn to the third equation.
Let $\cF(\veps_*)=\veps\in\erd$ and let $\kappa$ be a vertex-labeling
of $\veps$ representing $\veps_*$. It is enough to prove that
$$\omu_n(T(\ogrn,\veps_*))=
\frac{|C(\kappa)|}{n^{|V(\veps)|}} \omu(T(\ogr,\veps))\,,$$
where $C(\kappa)$ is the set of labelings of $\veps$ equivalent to $\kappa$.
Let $N_{r+1}(\veps)\subset\our$ be the set of elements $\ob$ such that the
edge-ball of radius $r$ around the root of $\ob$ is isomorphic to $\veps$.
Then
\begin{equation} \label{gyors2}
\omu(T(\ogr,\veps))=\sum_{\ob\in N_{r+1}(\veps)} \omu(T(\ogr,\ob))\,.
\end{equation}
Observe that
$$\omu_n(T(\ogrn,\veps_*))=\sum_{\ob\in N_{r+1}(\veps)} \omu(T(\ogr,\ob))
\frac{k(\ob,\veps_*)}{n^{|V(\ob)|}}\,,$$
where $k(\ob,\veps_*)$ is the number of labelings
of $\ob$ extending an element that is equivalent to $\kappa$.
Notice that $k(\ob,\veps_*)=|C(\kappa)|n^{|V(\ob)|-|V(\veps)|}\,.$
Hence by (\ref{gyors2})
$\omu_n(T(\ogrn,\veps_*))=
\frac{|C(\kappa)|}{n^{|V(\veps)|}} \omu(T(\ogr,\veps))\,,$ thus our
proposition follows. \qed
\section{Label-separated balls}
Let $\grnd$ be the isomorphism classes of
\begin{itemize}
\item connected countable rooted graphs with vertex degree bound $d$
\item with vertex labels from the set $\{1,2,\dots,n\}$.
\end{itemize}
Again, we define the space of labeled $r$-balls $\urd_n$. Then
$\grnd$ is a compact space with closed-open sets $T(\grnd,M), M\in\urnd$.
Similarly to the previous section we define an associated probability measure
$\mun$, where $\mu$ is an involution-invariant probability measure on $\Grd$.
\noindent
Let $M\in\urnd$ and let $R(M)$ be the set of elements of
$\our_n$ with underlying graph $M$.
If $A\in R(M)$, then the multiplicity of $A$, $l_A$ is the number of
edges $e$ pointing out from the root of $A$ such that
there is a label-preserving rooted automorphism of $A$ moving the
distinguished edge to $e$.
Now let
$$\mun(M):=\frac{1}{\deg(M)}\sum_{A\in R(M)} l_A\omu_n(A)\,.$$
The following lemma is an immediate consequence of Lemma \ref{borel}.
\begin{lemma} \label{egyenletek}
$\mu_n$ is a Borel-measure on $\grnd$ and
$\sum_{M\in M(\alpha)}\mu_n(M)=\mu(\alpha)$ if
$\alpha\in\urd$ and $M(\alpha)$ is the set of labelings of $\alpha$ by
$\{1,2,\dots,n\}$.
\end{lemma}
\begin{defin}
$M\in\urnd$ is called label-separated
if all the labels of $M$ are different.
\end{defin}
\begin{lemma}
For any $\alpha\in\urd$ and $\delta>0$ there exists an $n>0$
such that
$$|\sum_{M\in M(\alpha),\,M\,\,\mbox{ is label-separated}}
\mu_n(T(\Grd,M))-\mu(T(\Grd,\alpha))|<\delta\,.$$
\end{lemma}
\proof
Observe that
$$\sum_{M\in M(\alpha),\,M\,\,\mbox{ is label-separated}}
\mu_n(T(\Grd,M))=\frac{T(n,\alpha)}{n^{|V(\alpha)|}}\mu(T(\Grd,\alpha))\,,$$
where $T(n,\alpha)$ is the number of
$\{1,2,\dots,n\}$-labelings of $\alpha$ with
different labels.
Clearly, $\frac{T(n,\alpha)}{n^{|V(\alpha)|}}\to 1$ as $n\to\infty$. \qed
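\noindent
Indeed, writing $k=|V(\alpha)|$, the number of $\{1,2,\dots,n\}$-labelings of
$\alpha$ with pairwise different labels is
$T(n,\alpha)=n(n-1)\cdots(n-k+1)$, hence
$$\frac{T(n,\alpha)}{n^{k}}=\prod_{i=0}^{k-1}\left(1-\frac{i}{n}\right)
\longrightarrow 1\quad\mbox{as } n\to\infty\,.$$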
\section{The proof of Theorem \ref{theorem1}}
Let $\mu$ be an involution-invariant probability measure on $\Grd$
supported on trees.
It is enough to prove that for any $r\geq 1$ and $\epsilon >0$ there exists a
finite graph $G$ such that for any $\alpha\in\urd$
$$|p_G(\alpha)-\mu(T(\Grd,\alpha))|<\epsilon\,.$$
The idea we follow is close to the one used by Bowen in \cite{Bow}.
First, let $n>0$ be a natural number such that
\begin{equation} \label{becs1}
|\sum_{M\in M(\alpha),\,M\,\,\mbox{ is label-separated}}
\mu_n(T(\Grd,M))-\mu(T(\Grd,\alpha))|<\frac{\epsilon}{10}\,.
\end{equation}
\noindent
Then we define a directed labeled finite graph $H$ to encode some
information on $\omu_n$. If $A\in\vec{U}^{r+1,d}_n$ then let $L_A$ be the
unique element of $\erd_n$ contained in $A$.
\noindent
The set of vertices of $H$; $V(H):=\vec{U}^{r+1,d}_n$.
If $A,B\in \vec{U}^{r+1,d}_n$ and $L_A=L^{-1}_B$ (we use
the inverse notation instead of writing out the involution operator) then
there is a directed edge $(A,L_A,B)$ from $A$ to $B$ labeled by $L_A$ and
a directed edge $(B,L_B,A)$ from $B$ to $A$ labeled by $L_B=L^{-1}_A$.
Note that we might have loops.
We define the weight function $w$ on $H$ by
\begin{itemize}
\item $w(A)=\omu_n(T(\ogrn,A))$.
\item $w(A,L_A,B)=\omu_n(T(\ogrn,L_{A,B}))\,,$
where $L_{A,B}\in\vec{E}^{r+1,d}_n$ the unique element such that
$s(L_{A,B})=A, t(L_{A,B})=B$.
\end{itemize}
By Proposition \ref{master} we have the following equation for
all $A,B$ that are connected in $H$:
\begin{equation} \label{d1}
w(A,L_A,B)=w(B,L^{-1}_A,A)\,.
\end{equation}
Also,
\begin{equation} \label{d2}
w(A)=\sum_{(A,L_A,B)\in E(H)} w(A,L_A,B)
\end{equation}
\begin{equation} \label{d3}
w(A)=\sum_{(B,L^{-1}_A,A)\in E(H)} w(B,L^{-1}_A,A)
\end{equation}
Also if $M\in U^{r+1,d}_n$ then
\begin{equation} \label{d4}
\mu_n(M)=\frac{1}{\deg{(M)}}\sum_{A\in R(M)} l_A w(A),
\end{equation}
where $l_A$ is the multiplicity of $A$.
\vskip0.2in
\noindent
Since the equations (\ref{d1}), (\ref{d2}), (\ref{d3}) have rational
coefficients we also have weight functions $\we$ on $H$
\begin{itemize}
\item taking only rational values
\item satisfying equations (\ref{d1}), (\ref{d2}), (\ref{d3})
\item such that $|\we(A)-w(A)|<\delta $ for any $A\in V(H)$, where
the exact value of $\delta$ will be given later.
\end{itemize}
Now let $N$ be a natural number such that
\begin{itemize}
\item $\frac{N\we(A)}{l_A}\in\N$ if $A\in V(H)$.
\item $N\we(A,L_A,B)\in\N$ if $(A,L_A,B)\in E(H)$.
\end{itemize}
\vskip0.2in
\noindent
{\bf Step 1.} We construct an edge-less graph $Q$ such that:
\begin{itemize}
\item $V(Q)=\cup_{A\in V(H)} Q(A)$\quad (disjoint union)
\item $|Q(A)|=N \we(A)\,$
\item each $Q(A)$ is partitioned into
$\cup_{(A,L_A,B)\in E(H)} Q(A,L_A,B)$ such that $|Q(A,L_A,B)|=
N\we(A,L_A,B)$.
\end{itemize}
Since $\we$ satisfies our equations, such a $Q$ can be constructed.
\vskip0.2in
\noindent
{\bf Step 2.} We add edges to $Q$ in order to obtain the
graph $R$. For each pair $A,B$ that are connected in the graph $H$
form a bijection $Z_{A,B}:Q(A,L_A,B)\to Q(B,L_B,A)$.
If there is a loop in $H$ consider a bijection $Z_{A,A}$.
Then draw an edge between $x\in Q(A,L_A,B)$ and $y\in Q(B,L_B,A)$
if $Z_{A,B}(x)=y$.
\vskip0.2in
\noindent
{\bf Step 3.} Now we construct our graph $G$.
If $M\in U^{r+1,d}_n$ is a rooted labeled tree
such that $\mu_n(M)\neq 0$, let $Q(M)=\cup_{A\in R(M)} Q(A)$. We partition
$Q(M)$ into $\cup^{s_M}_{i=1} Q_i(M)$ in such a way that each
$Q_i(M)$ contains exactly $l_A$ elements from the set $Q(A)$.
By the definition of $N$, we can make such a partition.
\noindent
The elements of $V(G)$ will be the sets $\{Q_i(M)\}_{M\in
U^{r+1,d}_n\,,1\leq i \leq s_M}$. We draw one edge between
$Q_i(M)$ and $Q_j(M')$ if there exists $x\in Q_i(M), y\in Q_j(M')$
such that $x$ and $y$ are connected in $R$. We label the
vertex $Q_i(M)$ by the label of the root of $M$.
Let $Q_i(M)$ be a vertex of $G$ such that $M$ is a label-separated
tree. Note that if $M$ is not a rooted tree then $\mu_n(M)=0$.
It is easy to see that the $(r+1)$-ball around $Q_i(M)$ in the graph $G$
is isomorphic to $M$ as a rooted labeled ball.
Also, if $M$ is not label-separated, then the $(r+1)$-ball around $Q_i(M)$
cannot be a label-separated tree.
Therefore
\begin{align} \label{becs2}
\sum_{L\in\urd_n\,,\mbox{ $L$ is not a label-separated tree}}
p_G(L)
=\sum_{L\in\urd_n\,,\mbox{ $L$ is not a label-separated tree}}
\sum_{A\in R(L)}\we(A)\leq \frac{\epsilon}{10} +\delta d |\urd_n|\,.
\end{align}
Also, if $M$ is a label-separated tree then
\begin{equation} \label{becs3}
|p_G(M)-\mu_n(T(\Grd,M))|\leq |R(M)|\delta\leq d\delta\,.
\end{equation}
Thus by (\ref{becs1}),(\ref{becs2}),(\ref{becs3})
if $\delta$ is chosen small enough then for any $\alpha\in U^{r+1,d}$
$$|p_G(\alpha)-\mu(T(\Grd,\alpha))|<\epsilon\,.$$
Thus our Theorem follows. \qed | 200,114 |
TITLE: If $f \circ V=f$ implies $f$ is constant, then $V$ must be ergodic.
QUESTION [1 upvotes]: I'm taking a course in measure theory, and we've been introduced to the definition of ergodicity as shown below:
"Let $(X,\mathcal{A},m)$ be a probability space. Let $V:X \to X$ be a measure-preserving bijection. We say that $V$ is ergodic if, for every measurable set $A$ such that $V^{-1}(A)=A$, we have $m(A)=0$ or $m(A)=1$."
Following this is the remark:
"Equivalently, $V$ is ergodic if every random variable $f : X \to \mathbb{R}$ such that
$f \circ V = f$ is constant almost everywhere."
My question is why does this remark follow from the definition of ergodicity as provided above?
I know that the converse of this statement holds true, as shown at If $g$ is invariant under an ergodic map then it's almost everywhere constant.
My attempt
Inspired by the answer in the link, I'm wondering if $V^{-1}(A)=A$ implies that $A=\{x \in X: f\ \geq c_1\}$ where $c_1$ is some arbitrary constant and $f:X \to \mathbb R$ is some measurable function that satisfies $f \circ V=f$. If this implication is true, then it is clear to me why we can conclude $m(A)=0$ or $m(A)=1$. But I don't know whether the implication is true. The most difficult part for me is that I don't know how to progress from the statement $V^{-1}(A)=A$.
REPLY [1 votes]: Welcome to MSE!
Hint: What happens if $f = \chi_A$ is the characteristic function of a set $A$? What does it mean for $\chi_A \circ V = \chi_A$? What does it mean for $\chi_A$ to be constant a.e?
To follow up with your question in the comments, it's a kind of "standard trick" in measure theory. If you know things about sets, then we can often pass to functions by first considering characteristic functions, then simple functions (by taking linear combinations), then all measurable functions (by taking limits). For instance, this is how we define the Lebesgue integral.
Conversely, if we know things about functions, then we can often recover information about sets by seeing what happens to characteristic functions. Notice this is the "easier" direction of the two, because functions are the more complicated object. That said, this trick of considering characteristic functions is still extremely useful!
I hope this helps ^_^ | 111,350 |
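To make the characteristic-function hint concrete, here is a small, self-contained toy example in Python (my own illustration, with hypothetical names, not from the original thread). On $X=\{0,\dots,n-1\}$ with uniform measure, the shift by 1 is ergodic and admits no non-constant invariant functions, while the shift by 2 on an even number of points leaves the indicator of the even residues invariant:

```python
def is_invariant(f, V, X):
    """Check f(V(x)) == f(x) for all x, i.e. the condition f o V = f."""
    return all(f(V(x)) == f(x) for x in X)

n = 6
X = range(n)

shift1 = lambda x: (x + 1) % n   # ergodic: the orbit of any point is all of X
shift2 = lambda x: (x + 2) % n   # not ergodic: the evens form an invariant set

# Indicator (characteristic function) of the set A = {0, 2, 4}
chi_A = lambda x: 1 if x % 2 == 0 else 0

assert not is_invariant(chi_A, shift1, X)  # chi_A is not invariant under shift1
assert is_invariant(chi_A, shift2, X)      # chi_A is invariant but NOT constant

# For shift1, an invariant f must be constant: f(0)=f(1)=...=f(n-1)
f = lambda x: 7  # any constant function works
assert is_invariant(f, shift1, X)
```

Here `chi_A` invariant under `shift2` corresponds exactly to $V^{-1}(A)=A$ with $m(A)=1/2$, which is why `shift2` fails ergodicity under either definition.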
We are taught in the book of Mishlei-Proverbs by King Solomon that it is better to hear criticism from a friend than compliments from someone who is truly one’s enemy. This week’s Torah reading abounds in compliments given to the Jewish people by the leading prophet of the non-Jewish world, Bilaam. From all of the compliments showered upon us by this person of evil, we are able to learn the true intentions of the one blessing us. Our sages remark that the criticism leveled by our father Jacob against Shimon is to be counted amongst the blessings that he bestowed individually on each of his children.
The words of review and correction serve to save these tribes from extinction and wrongdoing. It is not only the superficial words of blessing that are important but, perhaps, much more importantly, it is the intent and goal of the one who is blessing that determines whether these seemingly beautiful words contain within them the poison of hatred and curses.
The Talmud teaches us that from the words of blessing that escaped the mouth of Bilaam, we can determine what his true intent was. The rabbis read his blessings as being delivered with a voice of sarcasm and criticism. Words and inflections can have many meanings, and since we did not actually hear the tone of voice used by Bilaam, we may be tempted to accept his words at face value and become flattered and seduced by the compliments he granted to us. The Talmud, however, judged his words more deeply, and realized that unless the Jewish people were careful in their observance of the Torah’s commandments, the words of blessing of Bilaam would only serve to mock them in later generations.
It is difficult in the extreme to resist the temptation of actually believing that flattering words could conceal an inglorious deception. A thousand years later, the prophets would warn us to remember the true intent of both Balak and Bilaam. Over our long history, and especially during the millennia of exile, we have suffered much persecution and hatred directed towards us. We also, paradoxically, have had to withstand the blandishments and false compliments paid to Judaism by those who only wish to destroy our faith and our future.
There is no question that one would rather be liked in this life. The true intent has to be judged correctly, and factored into the acceptance of compliments, seemingly bestowed by our former or current enemies and critics. The compliments given by Bilaam caused the death of thousands of Jews. That is the reason that the Jews felt justified in avenging themselves upon Bilaam.
Poison is often injected into candies and other sweet objects that are pleasant to the palate but are destructive to the existence of the human being. This is one of the overriding messages contained in this week’s reading.
Shabbat shalom,
Rabbi Berel Wein | 413,680 |
s
Safer London - Feat. Abstract Benna
2017
In collaboration with the spoken word artist Abstract Benna, Oliver provides piano, electric piano, strings and beats for the charity 'Safer London', addressing a range of issues young Londoners come across in the city.
Oliver Patrice Weder is the keyboard player for the tropical folk-rock band 'Time for T' and is currently producing their new album.
The Tortugas (2013)
The Peppermint Beat Band (2010 - 2013) | 235,082 |
Review of Xristanbod Nigeria Limited
5.0
Xristanbod Nigeria Limited, 29 Abeokuta Street, Bariga. Phone: 08098095242
Your last project at our warehouse at the port was really useful, great work Surveyor!
LISTING OWNER: Reply by Wale Omodele
16 Nov, 2015
Thanks a lot
LISTING OWNER: Reply by Wale Omodele
27 Nov, 2015
Thank you very, very much our Engineer, greetings to your family.
Advice
Corvus is always available to answer any questions you have about American taxation. We're proud to offer this service free of charge to our existing clients.
Our team is also happy to be contracted for advice if you’re not already using our services. We provide advisory services to international clients who have investments and/or activities in American entities, and are available to provide expert testimony in cases concerning American taxation. If you think we can be of service to you, please feel free to contact us to arrange a complimentary meeting.
Chest Piercings
Chest Piercings
Aura faint transmutation; CL 3rd
Slot body; Cost minor (2,200 gp), major (3,750 gp), greater (5,750 gp); Weight –
Description
These excruciating body ornaments come in the form of sharp barbs and serrated rings inserted in the nipples, abdomen, or along the ribs. The pain they cause sharpens the mind and blocks out distractions.
- Minor: The wearer gains a +2 profane bonus on saves against effects that cause the dazed, nauseated, and sickened conditions.
- Major: The wearer gains a +5 competence bonus on concentration checks.
- Greater: The wearer gains a +4 profane bonus on saving throws against effects that deal ability damage or drain.
Construction
Requirements Craft Shadow Piercing, bear’s endurance (minor and greater), fox’s cunning (major); Cost 1,100 gp (minor), 1,875 gp (major), 2,875 gp (greater)
TITLE: If we have sequences $X_n,Y_m \in L^2[0,1]$, for $n \neq m$, why does it follow that $\mathbb{E}[X_nY_m] = \int_0^1X_nY_m \, dP$ for all $n,m \geq 0$?
QUESTION [2 upvotes]: If we have sequences $X_n,Y_m \in L^2[0,1]$, for $n \neq m$, why does it follow that $\mathbb{E}[X_nY_m] = \int_0^1X_nY_m \, dP$ for all $n,m \geq 0$ where $P$ is the probability measure on the space $L^2[0,1]$? This appears to be different than how expectations are normally defined, is there a trick to the form above? Thanks!
REPLY [4 votes]: The phrase "$P$ is the probability measure" could be construed as meaning the only probability measure, so "$P$ is a probability measure" is more appropriate here.
It's not a measure on $L^2[0,1]$ but rather on Borel subsets of $[0,1]$. Informally one says it's a measure on $[0,1]$. The space $L^2[0,1]$ is the set of functions $X : [0,1] \to \mathbb C$ for which $\int_{[0,1]} |X|^2\,dP<\infty$.
The usual definition of the expected value of a random variable $X$ on a probability space $\Omega$ with probability measure $P$ is
$$
\operatorname{E}(X) = \int_\Omega X\,dP.
$$
To see how this is related to the way in which expected values are often computed in courses not relying on measure theory, you might think of it as $f(x)\,dx = dP(x)$ where $f$ is the density. Then you have
$$
\operatorname{E}(X) = \int_\Omega X\,dP = \int_0^1 x \Big( f(x)\,dx\Big).
$$ | 189,847 |
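As a numerical sanity check of $\operatorname{E}(X)=\int_\Omega X\,dP$ (a sketch of my own, not part of the original answer): take $\Omega=[0,1]$ with density $f(x)=2x$ and $X(\omega)=\omega^2$, so that $\operatorname{E}(X)=\int_0^1 x^2\cdot 2x\,dx=\tfrac12$. A Monte Carlo sample mean under that density agrees:

```python
import random

# P on [0,1] with density f(x) = 2x; the random variable is X(w) = w^2.
# Exact value: E(X) = integral of x^2 * 2x dx over [0,1] = 2/4 = 1/2.
exact = 0.5

# Sample from the density 2x by inverse transform: if U ~ Uniform(0,1),
# then sqrt(U) has CDF x^2 on [0,1], i.e. density 2x.
random.seed(1)
N = 100_000
samples = (random.random() ** 0.5 for _ in range(N))
mean = sum(w ** 2 for w in samples) / N

assert abs(mean - exact) < 0.01  # sample mean matches the integral
```

The same recipe (replace `f` and `X`) checks any density-based expectation against the abstract integral definition.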
Preston Kia in the Community
Here at Preston Kia, we’re extremely fortunate to have the support of a warm, welcoming community that has helped us to flourish into the successful business that we are today. We’re very grateful to our customers, friends, and neighbors for their support, which is why we’re always eager to give back to our community.
In fact, the staff at our Burton Kia dealership is always ready to return the favor by lending our support to local organizations and charities to help make our neighborhood even better. We’re eager to help build a world where the members of our community have all of the resources and support that they need.
Extending a Helping Hand
The team at our Kia dealer in Burton recognizes that everyone’s needs are different, and there’s no one organization that would be able to to offer support for every individual and every issue. That’s why we’re proud, as a local business leader, to offer support and sponsorship for as many charities and organizations as we can.
From supporting our local schools and youth sports leagues to working with organizations to fund medical research and support for veterans, we’re always eager to get behind groups that are working toward a better tomorrow. We believe that all of our neighbors deserve to lead rich, fulfilling lives, and we’re grateful to be in a position to play a small role in making that a reality.
Below, you’ll find a full list of different groups and charities that we support. If you’d like to learn more about the efforts that Preston Kia is making in our community, or if you’re interested in support for an organization you’re involved with, don’t hesitate to contact us by calling (844) 286-7905 today! | 365,256 |
\typeout{TCILATEX Macros for Scientific Word 2.0 <1 Oct 94>.}
\makeatletter
\def\FILENAME#1{#1}
\newcount\GRAPHICSTYPE
\GRAPHICSTYPE=\z@
\def\GRAPHICSPS#1{
\ifcase\GRAPHICSTYPE
ps: #1
\or
language "PS", include "#1"
\fi
}
\def\GRAPHICSHP#1{include #1}
\def\graffile#1#2#3#4{
\leavevmode
\raise -#4 \BOXTHEFRAME{
\hbox to #2{\raise #3\hbox{\null #1}}}
}
\def\draftbox#1#2#3#4{
\leavevmode\raise -#4 \hbox{
\frame{\rlap{\protect\tiny #1}\hbox to #2
{\vrule height#3 width\z@ depth\z@\hfil}
}
}
}
\newcount\draft
\draft=\z@
\def\GRAPHIC#1#2#3#4#5{
\ifnum\draft=\@ne\draftbox{#2}{#3}{#4}{#5}
\else\graffile{#1}{#3}{#4}{#5}
\fi
}
\def\addtoLaTeXparams#1{
\edef\LaTeXparams{\LaTeXparams #1}}
\newif\ifBoxFrame \BoxFramefalse
\newif\ifOverFrame \OverFramefalse
\def\BOXTHEFRAME#1{
\hbox{
\ifBoxFrame
\frame{#1}
\else
{#1}
\fi
}
}
\def\doFRAMEparams#1{\BoxFramefalse\OverFramefalse\readFRAMEparams#1\end}
\def\readFRAMEparams#1{
\ifx#1\end
\let\next=\relax
\else
\ifx#1i\dispkind=\z@\fi
\ifx#1d\dispkind=\@ne\fi
\ifx#1f\dispkind=\tw@\fi
\ifx#1t\addtoLaTeXparams{t}\fi
\ifx#1b\addtoLaTeXparams{b}\fi
\ifx#1p\addtoLaTeXparams{p}\fi
\ifx#1h\addtoLaTeXparams{h}\fi
\ifx#1X\BoxFrametrue\fi
\ifx#1O\OverFrametrue\fi
\let\next=\readFRAMEparams
\fi
\next
}
\def\IFRAME#1#2#3#4#5#6{
\bgroup
\parindent=0pt
\setbox0 = \hbox{#6}
\@tempdima = #1
\ifOverFrame
\typeout{This is not implemented yet}
\show\HELP
\else
\ifdim\wd0>\@tempdima
\advance\@tempdima by \@tempdima
\ifdim\wd0 >\@tempdima
\textwidth=\@tempdima
\setbox1 =\vbox{
\noindent\hbox to \@tempdima{\hfill\GRAPHIC{#5}{#4}{#1}{#2}{#3}\hfill}\\%
\noindent\hbox to \@tempdima{\parbox[b]{\@tempdima}{#6}}
}
\wd1=\@tempdima
\else
\textwidth=\wd0
\setbox1 =\vbox{
\noindent\hbox to \wd0{\hfill\GRAPHIC{#5}{#4}{#1}{#2}{#3}\hfill}\\%
\noindent\hbox{#6}
}
\wd1=\wd0
\fi
\else
\hsize=\@tempdima
\setbox1 =\vbox{
\unskip\GRAPHIC{#5}{#4}{#1}{#2}{0pt}
\break
\unskip\hbox to \@tempdima{\hfill #6\hfill}
}
\wd1=\@tempdima
\fi
\@tempdimb=\ht1
\advance\@tempdimb by \dp1
\advance\@tempdimb by -#2
\advance\@tempdimb by #3
\leavevmode
\raise -\@tempdimb \hbox{\box1}
\fi
\egroup
}
\def\DFRAME#1#2#3#4#5{
\begin{center}
\ifOverFrame
#5\par
\fi
\GRAPHIC{#4}{#3}{#1}{#2}{\z@}
\ifOverFrame \else
\par #5
\fi
\end{center}
}
\def\FFRAME#1#2#3#4#5#6#7{
\begin{figure}[#1]
\begin{center}\GRAPHIC{#7}{#6}{#2}{#3}{\z@}\end{center}
\caption{\label{#5}#4}
\end{figure}
}
\newcount\dispkind
% \FRAME: top-level graphics dispatcher. #1 holds the option letters read by
% \doFRAMEparams (i/d/f select inline, display, or floating placement).
\def\FRAME#1#2#3#4#5#6#7#8{
\def\LaTeXparams{}
\dispkind=\z@
\doFRAMEparams{#1}
\ifnum\dispkind=\z@\IFRAME{#2}{#3}{#4}{#7}{#8}{#5}\else
\ifnum\dispkind=\@ne\DFRAME{#2}{#3}{#7}{#8}{#5}\else
\ifnum\dispkind=\tw@
\edef\@tempa{\noexpand\FFRAME{\LaTeXparams}}
\@tempa{#2}{#3}{#5}{#6}{#7}{#8}
\fi
\fi
\fi
}
\def\TEXUX#1{"texux"}
\def\BF#1{{\bf {#1}}}
\def\NEG#1{\hbox{\rlap{\thinspace/}{$#1$}}}
\def\func#1{\mathop{\rm #1}}
\def\limfunc#1{\mathop{\rm #1}}
\long\def\QQQ#1#2{
\long\expandafter\def\csname#1\endcsname{#2}}
\@ifundefined{QTP}{\def\QTP#1{}}{}
\@ifundefined{Qcb}{\def\Qcb#1{#1}}{}
\@ifundefined{Qct}{\def\Qct#1{#1}}{}
\@ifundefined{Qlb}{\def\Qlb#1{#1}}{}
\@ifundefined{Qlt}{\def\Qlt#1{#1}}{}
\def\QWE{}
\long\def\QQA#1#2{}
\def\QTR#1#2{{\csname#1\endcsname #2}}
\long\def\TeXButton#1#2{#2}
\long\def\QSubDoc#1#2{#2}
\def\EXPAND#1[#2]#3{}
\def\NOEXPAND#1[#2]#3{}
\def\PROTECTED{}
\def\LaTeXparent#1{}
\def\ChildStyles#1{}
\def\ChildDefaults#1{}
\def\QTagDef#1#2#3{}
\def\QQfnmark#1{\footnotemark}
\def\QQfntext#1#2{\addtocounter{footnote}{#1}\footnotetext{#2}}
\def\MAKEINDEX{\makeatletter\input gnuindex.sty\makeatother\makeindex}
\@ifundefined{INDEX}{\def\INDEX#1#2{}{}}{}
\@ifundefined{SUBINDEX}{\def\SUBINDEX#1#2#3{}{}{}}{}
\def\initial#1{\bigbreak{\raggedright\large\bf #1}\kern 2\p@
\penalty3000}
\def\entry#1#2{\item {#1}, #2}
\def\primary#1{\item {#1}}
\def\secondary#1#2{\subitem {#1}, #2}
\@ifundefined{ZZZ}{}{\MAKEINDEX\makeatletter}
\@ifundefined{abstract}{
\def\abstract{
\if@twocolumn
\section*{Abstract (Not appropriate in this style!)}
\else \small
\begin{center}{\bf Abstract\vspace{-.5em}\vspace{\z@}}\end{center}
\quotation
\fi
}
}{
}
\@ifundefined{endabstract}{\def\endabstract
{\if@twocolumn\else\endquotation\fi}}{}
\@ifundefined{maketitle}{\def\maketitle#1{}}{}
\@ifundefined{affiliation}{\def\affiliation#1{}}{}
\@ifundefined{proof}{\def\proof{\paragraph{Proof. }}}{}
\@ifundefined{endproof}{\def\endproof{\mbox{\ \rule{.1in}{.1in}}}}{}
\@ifundefined{newfield}{\def\newfield#1#2{}}{}
\@ifundefined{chapter}{\def\chapter#1{\par(Chapter head:)#1\par }
\newcount\c@chapter}{}
\@ifundefined{part}{\def\part#1{\par(Part head:)#1\par }}{}
\@ifundefined{section}{\def\section#1{\par(Section head:)#1\par }}{}
\@ifundefined{subsection}{\def\subsection#1
{\par(Subsection head:)#1\par }}{}
\@ifundefined{subsubsection}{\def\subsubsection#1
{\par(Subsubsection head:)#1\par }}{}
\@ifundefined{paragraph}{\def\paragraph#1
{\par(Subsubsubsection head:)#1\par }}{}
\@ifundefined{subparagraph}{\def\subparagraph#1
{\par(Subsubsubsubsection head:)#1\par }}{}
\@ifundefined{therefore}{\def\therefore{}}{}
\@ifundefined{backepsilon}{\def\backepsilon{}}{}
\@ifundefined{yen}{\def\yen{\hbox{\rm\rlap=Y}}}{}
\@ifundefined{registered}{
\def\registered{\relax\ifmmode{}\r@gistered
\else$\m@th\r@gistered$\fi}
\def\r@gistered{^{\ooalign
{\hfil\raise.07ex\hbox{$\scriptstyle\rm\text{R}$}\hfil\crcr
\mathhexbox20D}}}}{}
\@ifundefined{Eth}{\def\Eth{}}{}
\@ifundefined{eth}{\def\eth{}}{}
\@ifundefined{Thorn}{\def\Thorn{}}{}
\@ifundefined{thorn}{\def\thorn{}}{}
\def\TEXTsymbol#1{\mbox{$#1$}}
\@ifundefined{degree}{\def\degree{{}^{\circ}}}{}
\def\BibTeX{{\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08em
T\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}}
\newdimen\theight
\def\Column{
\vadjust{\setbox\z@=\hbox{\scriptsize\quad\quad tcol}
\theight=\ht\z@\advance\theight by \dp\z@\advance\theight by \lineskip
\kern -\theight \vbox to \theight{
\rightline{\rlap{\box\z@}}
\vss
}
}
}
\def\qed{
\ifhmode\unskip\nobreak\fi\ifmmode\ifinner\else\hskip5\p@\fi\fi
\hbox{\hskip5\p@\vrule width4\p@ height6\p@ depth1.5\p@\hskip\p@}
}
\def\cents{\hbox{\rm\rlap/c}}
\def\miss{\hbox{\vrule height2\p@ width 2\p@ depth\z@}}
\def\vvert{\Vert}
\def\tcol#1{{\baselineskip=6\p@ \vcenter{#1}} \Column}
\def\dB{\hbox{{}}}
\def\mB#1{\hbox{$#1$}}
\def\nB#1{\hbox{#1}}
\def\note{$^{\dag}}
\def\newfmtname{LaTeX2e}
\def\chkcompat{
\if@compatibility
\else
\usepackage{latexsym}
\fi
}
\ifx\fmtname\newfmtname
\DeclareOldFontCommand{\rm}{\normalfont\rmfamily}{\mathrm}
\DeclareOldFontCommand{\sf}{\normalfont\sffamily}{\mathsf}
\DeclareOldFontCommand{\tt}{\normalfont\ttfamily}{\mathtt}
\DeclareOldFontCommand{\bf}{\normalfont\bfseries}{\mathbf}
\DeclareOldFontCommand{\it}{\normalfont\itshape}{\mathit}
\DeclareOldFontCommand{\sl}{\normalfont\slshape}{\@nomath\sl}
\DeclareOldFontCommand{\sc}{\normalfont\scshape}{\@nomath\sc}
\chkcompat
\fi
\def\alpha{\Greekmath 010B }
\def\beta{\Greekmath 010C }
\def\gamma{\Greekmath 010D }
\def\delta{\Greekmath 010E }
\def\epsilon{\Greekmath 010F }
\def\zeta{\Greekmath 0110 }
\def\eta{\Greekmath 0111 }
\def\theta{\Greekmath 0112 }
\def\iota{\Greekmath 0113 }
\def\kappa{\Greekmath 0114 }
\def\lambda{\Greekmath 0115 }
\def\mu{\Greekmath 0116 }
\def\nu{\Greekmath 0117 }
\def\xi{\Greekmath 0118 }
\def\pi{\Greekmath 0119 }
\def\rho{\Greekmath 011A }
\def\sigma{\Greekmath 011B }
\def\tau{\Greekmath 011C }
\def\upsilon{\Greekmath 011D }
\def\phi{\Greekmath 011E }
\def\chi{\Greekmath 011F }
\def\psi{\Greekmath 0120 }
\def\omega{\Greekmath 0121 }
\def\varepsilon{\Greekmath 0122 }
\def\vartheta{\Greekmath 0123 }
\def\varpi{\Greekmath 0124 }
\def\varrho{\Greekmath 0125 }
\def\varsigma{\Greekmath 0126 }
\def\varphi{\Greekmath 0127 }
\def\nabla{\Greekmath 0272}
\def\Greekmath#1#2#3#4{
\if@compatibility
\ifnum\mathgroup=\symbold
\mbox{\boldmath$\mathchar"#1#2#3#4$}
\else
\mathchar"#1#2#3#4
\fi
\else
\ifnum\mathgroup=5
\mbox{\boldmath$\mathchar"#1#2#3#4$}
\else
\mathchar"#1#2#3#4
\fi
\fi}
\newif\ifGreekBold \GreekBoldfalse
\let\SAVEPBF=\pbf
\def\pbf{\GreekBoldtrue\SAVEPBF}
\@ifundefined{theorem}{\newtheorem{theorem}{Theorem}}{}
\@ifundefined{lemma}{\newtheorem{lemma}[theorem]{Lemma}}{}
\@ifundefined{corollary}{\newtheorem{corollary}[theorem]{Corollary}}{}
\@ifundefined{conjecture}{\newtheorem{conjecture}[theorem]{Conjecture}}{}
\@ifundefined{proposition}{\newtheorem{proposition}[theorem]{Proposition}}{}
\@ifundefined{axiom}{\newtheorem{axiom}{Axiom}}{}
\@ifundefined{remark}{\newtheorem{remark}{Remark}}{}
\@ifundefined{example}{\newtheorem{example}{Example}}{}
\@ifundefined{exercise}{\newtheorem{exercise}{Exercise}}{}
\@ifundefined{definition}{\newtheorem{definition}{Definition}}{}
\expandafter\ifx\csname ds@amstex\endcsname\relax
\else\message{amstex already loaded}\makeatother\endinput\fi
\let\DOTSI\relax
\def\RIfM@{\relax\ifmmode}
\def\FN@{\futurelet\next}
\newcount\intno@
\def\iint{\DOTSI\intno@\tw@\FN@\ints@}
\def\iiint{\DOTSI\intno@\thr@@\FN@\ints@}
\def\iiiint{\DOTSI\intno@4 \FN@\ints@}
\def\idotsint{\DOTSI\intno@\z@\FN@\ints@}
\def\ints@{\findlimits@\ints@@}
\newif\iflimtoken@
\newif\iflimits@
\def\findlimits@{\limtoken@true\ifx\next\limits\limits@true
\else\ifx\next\nolimits\limits@false\else
\limtoken@false\ifx\ilimits@\nolimits\limits@false\else
\ifinner\limits@false\else\limits@true\fi\fi\fi\fi}
\def\multint@{\int\ifnum\intno@=\z@\intdots@
\else\intkern@\fi
\ifnum\intno@>\tw@\int\intkern@\fi
\ifnum\intno@>\thr@@\int\intkern@\fi
\int}
\def\multintlimits@{\intop\ifnum\intno@=\z@\intdots@\else\intkern@\fi
\ifnum\intno@>\tw@\intop\intkern@\fi
\ifnum\intno@>\thr@@\intop\intkern@\fi\intop}
\def\intic@{
\mathchoice{\hskip.5em}{\hskip.4em}{\hskip.4em}{\hskip.4em}}
\def\negintic@{\mathchoice
{\hskip-.5em}{\hskip-.4em}{\hskip-.4em}{\hskip-.4em}}
\def\ints@@{\iflimtoken@
\def\ints@@@{\iflimits@\negintic@
\mathop{\intic@\multintlimits@}\limits
\else\multint@\nolimits\fi
\eat@}
\else
\def\ints@@@{\iflimits@\negintic@
\mathop{\intic@\multintlimits@}\limits\else
\multint@\nolimits\fi}\fi\ints@@@}
\def\intkern@{\mathchoice{\!\!\!}{\!\!}{\!\!}{\!\!}}
\def\plaincdots@{\mathinner{\cdotp\cdotp\cdotp}}
\def\intdots@{\mathchoice{\plaincdots@}
{{\cdotp}\mkern1.5mu{\cdotp}\mkern1.5mu{\cdotp}}
{{\cdotp}\mkern1mu{\cdotp}\mkern1mu{\cdotp}}
{{\cdotp}\mkern1mu{\cdotp}\mkern1mu{\cdotp}}}
\def\RIfM@{\relax\protect\ifmmode}
\def\text{\RIfM@\expandafter\text@\else\expandafter\mbox\fi}
\let\nfss@text\text
\def\text@#1{\mathchoice
{\textdef@\displaystyle\f@size{#1}}
{\textdef@\textstyle\tf@size{\firstchoice@false #1}}
{\textdef@\textstyle\sf@size{\firstchoice@false #1}}
{\textdef@\textstyle \ssf@size{\firstchoice@false #1}}
\glb@settings}
\def\textdef@#1#2#3{\hbox{{
\everymath{#1}
\let\f@size#2\selectfont
#3}}}
\newif\iffirstchoice@
\firstchoice@true
\def\Let@{\relax\iffalse{\fi\let\\=\cr\iffalse}\fi}
\def\vspace@{\def\vspace##1{\crcr\noalign{\vskip##1\relax}}}
\def\multilimits@{\bgroup\vspace@\Let@
\baselineskip\fontdimen10 \scriptfont\tw@
\advance\baselineskip\fontdimen12 \scriptfont\tw@
\lineskip\thr@@\fontdimen8 \scriptfont\thr@@
\lineskiplimit\lineskip
\vbox\bgroup\ialign\bgroup\hfil$\m@th\scriptstyle{##}$\hfil\crcr}
\def\Sb{_\multilimits@}
\def\endSb{\crcr\egroup\egroup\egroup}
\def\Sp{^\multilimits@}
\let\endSp\endSb
\newdimen\ex@
\[email protected]
\def\rightarrowfill@#1{$#1\m@th\mathord-\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\mathord-\mkern-2mu$}\hfill
\mkern-6mu\mathord\rightarrow$}
\def\leftarrowfill@#1{$#1\m@th\mathord\leftarrow\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\mathord-\mkern-2mu$}\hfill\mkern-6mu\mathord-$}
\def\leftrightarrowfill@#1{$#1\m@th\mathord\leftarrow
\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\mathord-\mkern-2mu$}\hfill
\mkern-6mu\mathord\rightarrow$}
\def\overrightarrow{\mathpalette\overrightarrow@}
\def\overrightarrow@#1#2{\vbox{\ialign{##\crcr\rightarrowfill@#1\crcr
\noalign{\kern-\ex@\nointerlineskip}$\m@th\hfil#1#2\hfil$\crcr}}}
\let\overarrow\overrightarrow
\def\overleftarrow{\mathpalette\overleftarrow@}
\def\overleftarrow@#1#2{\vbox{\ialign{##\crcr\leftarrowfill@#1\crcr
\noalign{\kern-\ex@\nointerlineskip}$\m@th\hfil#1#2\hfil$\crcr}}}
\def\overleftrightarrow{\mathpalette\overleftrightarrow@}
\def\overleftrightarrow@#1#2{\vbox{\ialign{##\crcr
\leftrightarrowfill@#1\crcr
\noalign{\kern-\ex@\nointerlineskip}$\m@th\hfil#1#2\hfil$\crcr}}}
\def\underrightarrow{\mathpalette\underrightarrow@}
\def\underrightarrow@#1#2{\vtop{\ialign{##\crcr$\m@th\hfil#1#2\hfil
$\crcr\noalign{\nointerlineskip}\rightarrowfill@#1\crcr}}}
\let\underarrow\underrightarrow
\def\underleftarrow{\mathpalette\underleftarrow@}
\def\underleftarrow@#1#2{\vtop{\ialign{##\crcr$\m@th\hfil#1#2\hfil
$\crcr\noalign{\nointerlineskip}\leftarrowfill@#1\crcr}}}
\def\underleftrightarrow{\mathpalette\underleftrightarrow@}
\def\underleftrightarrow@#1#2{\vtop{\ialign{##\crcr$\m@th
\hfil#1#2\hfil$\crcr
\noalign{\nointerlineskip}\leftrightarrowfill@#1\crcr}}}
\def\qopnamewl@#1{\mathop{\operator@font#1}\nlimits@}
\let\nlimits@\displaylimits
\def\setboxz@h{\setbox\z@\hbox}
\def\varlim@#1#2{\mathop{\vtop{\ialign{##\crcr
\hfil$#1\m@th\operator@font lim$\hfil\crcr
\noalign{\nointerlineskip}#2#1\crcr
\noalign{\nointerlineskip\kern-\ex@}\crcr}}}}
\def\rightarrowfill@#1{\m@th\setboxz@h{$#1-$}\ht\z@\z@
$#1\copy\z@\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\box\z@\mkern-2mu$}\hfill
\mkern-6mu\mathord\rightarrow$}
\def\leftarrowfill@#1{\m@th\setboxz@h{$#1-$}\ht\z@\z@
$#1\mathord\leftarrow\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\copy\z@\mkern-2mu$}\hfill
\mkern-6mu\box\z@$}
\def\projlim{\qopnamewl@{proj\,lim}}
\def\injlim{\qopnamewl@{inj\,lim}}
\def\varinjlim{\mathpalette\varlim@\rightarrowfill@}
\def\varprojlim{\mathpalette\varlim@\leftarrowfill@}
\def\varliminf{\mathpalette\varliminf@{}}
\def\varliminf@#1{\mathop{\underline{\vrule\@depth.2\ex@\@width\z@
\hbox{$#1\m@th\operator@font lim$}}}}
\def\varlimsup{\mathpalette\varlimsup@{}}
\def\varlimsup@#1{\mathop{\overline
{\hbox{$#1\m@th\operator@font lim$}}}}
\def\tfrac#1#2{{\textstyle {#1 \over #2}}}
\def\dfrac#1#2{{\displaystyle {#1 \over #2}}}
\def\binom#1#2{{#1 \choose #2}}
\def\tbinom#1#2{{\textstyle {#1 \choose #2}}}
\def\dbinom#1#2{{\displaystyle {#1 \choose #2}}}
\def\QATOP#1#2{{#1 \atop #2}}
\def\QTATOP#1#2{{\textstyle {#1 \atop #2}}}
\def\QDATOP#1#2{{\displaystyle {#1 \atop #2}}}
\def\QABOVE#1#2#3{{#2 \above#1 #3}}
\def\QTABOVE#1#2#3{{\textstyle {#2 \above#1 #3}}}
\def\QDABOVE#1#2#3{{\displaystyle {#2 \above#1 #3}}}
\def\QOVERD#1#2#3#4{{#3 \overwithdelims#1#2 #4}}
\def\QTOVERD#1#2#3#4{{\textstyle {#3 \overwithdelims#1#2 #4}}}
\def\QDOVERD#1#2#3#4{{\displaystyle {#3 \overwithdelims#1#2 #4}}}
\def\QATOPD#1#2#3#4{{#3 \atopwithdelims#1#2 #4}}
\def\QTATOPD#1#2#3#4{{\textstyle {#3 \atopwithdelims#1#2 #4}}}
\def\QDATOPD#1#2#3#4{{\displaystyle {#3 \atopwithdelims#1#2 #4}}}
\def\QABOVED#1#2#3#4#5{{#4 \abovewithdelims#1#2#3 #5}}
\def\QTABOVED#1#2#3#4#5{{\textstyle
{#4 \abovewithdelims#1#2#3 #5}}}
\def\QDABOVED#1#2#3#4#5{{\displaystyle
{#4 \abovewithdelims#1#2#3 #5}}}
\def\tint{\textstyle \int}
\def\tiint{\mathop{\textstyle \iint }}
\def\tiiint{\mathop{\textstyle \iiint }}
\def\tiiiint{\mathop{\textstyle \iiiint }}
\def\tidotsint{\mathop{\textstyle \idotsint }}
\def\toint{\textstyle \oint}
\def\tsum{\mathop{\textstyle \sum }}
\def\tprod{\mathop{\textstyle \prod }}
\def\tbigcap{\mathop{\textstyle \bigcap }}
\def\tbigwedge{\mathop{\textstyle \bigwedge }}
\def\tbigoplus{\mathop{\textstyle \bigoplus }}
\def\tbigodot{\mathop{\textstyle \bigodot }}
\def\tbigsqcup{\mathop{\textstyle \bigsqcup }}
\def\tcoprod{\mathop{\textstyle \coprod }}
\def\tbigcup{\mathop{\textstyle \bigcup }}
\def\tbigvee{\mathop{\textstyle \bigvee }}
\def\tbigotimes{\mathop{\textstyle \bigotimes }}
\def\tbiguplus{\mathop{\textstyle \biguplus }}
\def\dint{\displaystyle \int }
\def\diint{\mathop{\displaystyle \iint }}
\def\diiint{\mathop{\displaystyle \iiint }}
\def\diiiint{\mathop{\displaystyle \iiiint }}
\def\didotsint{\mathop{\displaystyle \idotsint }}
\def\doint{\displaystyle \oint }
\def\dsum{\mathop{\displaystyle \sum }}
\def\dprod{\mathop{\displaystyle \prod }}
\def\dbigcap{\mathop{\displaystyle \bigcap }}
\def\dbigwedge{\mathop{\displaystyle \bigwedge }}
\def\dbigoplus{\mathop{\displaystyle \bigoplus }}
\def\dbigodot{\mathop{\displaystyle \bigodot }}
\def\dbigsqcup{\mathop{\displaystyle \bigsqcup }}
\def\dcoprod{\mathop{\displaystyle \coprod }}
\def\dbigcup{\mathop{\displaystyle \bigcup }}
\def\dbigvee{\mathop{\displaystyle \bigvee }}
\def\dbigotimes{\mathop{\displaystyle \bigotimes }}
\def\dbiguplus{\mathop{\displaystyle \biguplus }}
\def\stackunder#1#2{\mathrel{\mathop{#2}\limits_{#1}}}
\begingroup \catcode `|=0 \catcode `[= 1
\catcode`]=2 \catcode `\{=12 \catcode `\}=12
\catcode`\\=12
|gdef|@alignverbatim#1\end{align}[#1|end[align]]
|gdef|@salignverbatim#1\end{align*}[#1|end[align*]]
|gdef|@alignatverbatim#1\end{alignat}[#1|end[alignat]]
|gdef|@salignatverbatim#1\end{alignat*}[#1|end[alignat*]]
|gdef|@xalignatverbatim#1\end{xalignat}[#1|end[xalignat]]
|gdef|@sxalignatverbatim#1\end{xalignat*}[#1|end[xalignat*]]
|gdef|@gatherverbatim#1\end{gather}[#1|end[gather]]
|gdef|@sgatherverbatim#1\end{gather*}[#1|end[gather*]]
|gdef|@multilineverbatim#1\end{multiline}[#1|end[multiline]]
|gdef|@smultilineverbatim#1\end{multiline*}[#1|end[multiline*]]
|gdef|@arraxverbatim#1\end{arrax}[#1|end[arrax]]
|gdef|@sarraxverbatim#1\end{arrax*}[#1|end[arrax*]]
|gdef|@tabulaxverbatim#1\end{tabulax}[#1|end[tabulax]]
|gdef|@stabulaxverbatim#1\end{tabulax*}[#1|end[tabulax*]]
|endgroup
\def\align{\@verbatim \frenchspacing\@vobeyspaces \@alignverbatim
You are using the "align" environment in a style in which it is not defined.}
\let\endalign=\endtrivlist
\@namedef{align*}{\@verbatim\@salignverbatim
You are using the "align*" environment in a style in which it is not defined.}
\expandafter\let\csname endalign*\endcsname =\endtrivlist
\def\alignat{\@verbatim \frenchspacing\@vobeyspaces \@alignatverbatim
You are using the "alignat" environment in a style in which it is not defined.}
\let\endalignat=\endtrivlist
\@namedef{alignat*}{\@verbatim\@salignatverbatim
You are using the "alignat*" environment in a style in which it is not defined.}
\expandafter\let\csname endalignat*\endcsname =\endtrivlist
\def\xalignat{\@verbatim \frenchspacing\@vobeyspaces \@xalignatverbatim
You are using the "xalignat" environment in a style in which it is not defined.}
\let\endxalignat=\endtrivlist
\@namedef{xalignat*}{\@verbatim\@sxalignatverbatim
You are using the "xalignat*" environment in a style in which it is not defined.}
\expandafter\let\csname endxalignat*\endcsname =\endtrivlist
\def\gather{\@verbatim \frenchspacing\@vobeyspaces \@gatherverbatim
You are using the "gather" environment in a style in which it is not defined.}
\let\endgather=\endtrivlist
\@namedef{gather*}{\@verbatim\@sgatherverbatim
You are using the "gather*" environment in a style in which it is not defined.}
\expandafter\let\csname endgather*\endcsname =\endtrivlist
\def\multiline{\@verbatim \frenchspacing\@vobeyspaces \@multilineverbatim
You are using the "multiline" environment in a style in which it is not defined.}
\let\endmultiline=\endtrivlist
\@namedef{multiline*}{\@verbatim\@smultilineverbatim
You are using the "multiline*" environment in a style in which it is not defined.}
\expandafter\let\csname endmultiline*\endcsname =\endtrivlist
\def\arrax{\@verbatim \frenchspacing\@vobeyspaces \@arraxverbatim
You are using a type of "array" construct that is only allowed in AmS-LaTeX.}
\let\endarrax=\endtrivlist
\def\tabulax{\@verbatim \frenchspacing\@vobeyspaces \@tabulaxverbatim
You are using a type of "tabular" construct that is only allowed in AmS-LaTeX.}
\let\endtabulax=\endtrivlist
\@namedef{arrax*}{\@verbatim\@sarraxverbatim
You are using a type of "array*" construct that is only allowed in AmS-LaTeX.}
\expandafter\let\csname endarrax*\endcsname =\endtrivlist
\@namedef{tabulax*}{\@verbatim\@stabulaxverbatim
You are using a type of "tabular*" construct that is only allowed in AmS-LaTeX.}
\expandafter\let\csname endtabulax*\endcsname =\endtrivlist
\def\@@eqncr{\let\@tempa\relax
\ifcase\@eqcnt \def\@tempa{& & &}\or \def\@tempa{& &}
\else \def\@tempa{&}\fi
\@tempa
\if@eqnsw
\iftag@
\@taggnum
\else
\@eqnnum\stepcounter{equation}\fi
\fi
\global\tag@false
\global\@eqnswtrue
\global\@eqcnt\z@\cr}
\def\endequation{
\iftag@
\addtocounter{equation}{-1}
\eqno \hbox{\@taggnum}
\global\tag@false
$$\global\@ignoretrue
\else
\eqno \hbox{\@eqnnum}
$$\global\@ignoretrue
\fi
}
\newif\iftag@ \tag@false
\def\tag{\@ifnextchar*{\@tagstar}{\@tag}}
\def\@tag#1{
\global\tag@true
\global\def\@taggnum{(#1)}}
\def\@tagstar*#1{
\global\tag@true
\global\def\@taggnum{#1}
}
\makeatother
\endinput | 206,486 |
Thunder assign Jeremy Lamb and Daniel Orton to D-League
Posted by Inside Hoops, Dec 8
The Oklahoma City Thunder has assigned guard Jeremy Lamb and center Daniel Orton to the Tulsa 66ers of the NBA Development League, it was announced today by Executive Vice President and General Manager Sam Presti.
Lamb has appeared in eight games this season with Oklahoma City, averaging 2.1 points in 4.3 minutes per game.
Orton has seen action in one game for the Thunder, scoring two points in two minutes versus Cleveland on Nov. 11.
Both will be in uniform tonight when the Tulsa 66ers take on the Rio Grande Valley Vipers at the SpiritBank Event Center. | 2,470 |
We're working to improve the quality, relevance and overall value of the CyberWire’s content, and so we’ve put together a short audience survey that should take five minutes or less to complete. This survey is (obviously, we needn't add, but will) completely voluntary, anonymous and confidential. Click here to take our survey and look for your chance to win some official CyberWire swag at the.
Ars Technica and others report that RiskSense has a BlueKeep proof-of-concept exploit.
The EU's mission to Moscow suffered a long-running "sophisticated cyber espionage event" that began in February 2017 and continued through its discovery in April, BuzzFeed reports. Russian organizations, probably intelligence services, are believed to be behind the attack, which netted the hackers an undisclosed haul of information. The EU did not disclose the incident, evidently not wishing to roil political waters on the eve of European elections.
Symantec's report on Russian influence operations in the 2016 US elections reveals Moscow's efforts to have been more extensive, more patient, and more balanced, ideologically, than previously assumed. A core group of main accounts (often bogus news services) was supported by a very large number of auxiliary accounts responsible for amplification. Messaging was designed to appeal to left and right roughly equally, with the most disaffected partisans most heavily targeted.
C4ISRNET suggests a possible motive for Russian GPS spoofing in the Black Sea: executive protection against drones. The incidents were highly correlated with President Putin's movements.
Lookout finds the advertising plug-in "BeiTaAd" in a lot of Google Play apps (about 230). This is more than just mildly irritating: BeiTaAd uses obfuscation normally seen in malware to obtrude itself into users' attention, yammering wildly across lockscreens, hooting video ads while the phone's supposed to be asleep, and so on. More than 440 million devices are believed to be infested. BeiTaAd can be hyperactive enough to render a phone effectively unusable, Threatpost comments.
Today's issue includes events affecting Australia, Canada, China, the European Union, France, Israel, Russia, the United Kingdom, the United States and Vietnam.
Bring your own context.
Is it possible to devise a security system that can't be defeated by inventive human laziness, more or less well-intentioned, but still at bottom ergophobic? What about this blockchain thing we've heard about?
"Basically, the way you interact with the blockchain is you have a secret, which is known as a private key. If you're the holder of that private key, you can commit funds to the blockchain and you can take funds out. The private key is basically like a PIN number to your bank account. If anybody is able to get that private key, they can steal your funds. I was researching one day how exactly your private key is generated, and during my research, I found that people were using the private key of 1. The private key is supposed to be 78 digits long... But, you know, somebody decided, hey, let's use 77 digits, all of those being zero, and then the last digit is 1. So, effectively, they had the private key of 1. And if you go in and look at that address that's generated from a private key of 1, you'll see thousands of transactions committed to that key. So there've been lots of people interacting and colliding using this shared private key."
—Adrian Bednarek, senior security analyst at Independent Security Evaluators, hipping everyone to Ethercombing on Research Saturday's 6.1.19 edition.
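An editorial footnote of ours, not Mr. Bednarek's: the "78 digits" figure is simply the decimal length of a 256-bit number. A quick Python check, in which 2^256 stands in for the key space (the true upper bound is the secp256k1 group order, slightly below 2^256, but the decimal digit count is the same):

```python
# A 256-bit private key, written out in decimal, is at most 78 digits long.
max_key = 2**256 - 1
print(len(str(max_key)))  # 78

# The notorious weak key is simply the integer 1: perfectly valid in range,
# but trivially guessable, which is why funds sent to it get swept.
weak_key = 1
assert 1 <= weak_key <= max_key
```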
What? Technically, seventy-seven zeroes followed by a one is seventy-eight digits, right? So what's the problem? Next time make it seventy-seven ones and a zero. What? That wouldn't do it either? There's no pleasing this blockchain thing...

In today's podcast, we hear from our partners at the SANS Institute, as Johannes Ullrich (Dean of Research and proprietor of the ISC Stormcast podcast) discusses the implications of Google's throwing its weight behind MTA-STS, a protocol intended to make e-mail more secure. Our guest, Josh Stella from Fugue, talks about security and compliance in cloud infrastructure.
In case you missed it, Recorded Future's latest podcast, produced in partnership with the CyberWire, is also up. This episode, #110, deals with "Advocating OWASP, Securing Elections, and Standing Your Ground." The featured guest is Tanya Janca, senior cloud advocate at Microsoft.
And, of course, Hacking Humans is out. In this episode, we hear from the hacker named “Alien” in Jeremy Smith’s Breaking and Entering. She has her own book coming out later this year, Data Breaches: Crisis and Opportunity.
Since the SECURE Act passed in December of 2019, several clients have reached out regarding the so-called “10 Year Rule,” which stipulates that all retirement assets must be distributed to certain beneficiaries within 10 years of the client’s passing. Many clients who have listed their trusts as beneficiaries are concerned about how this rule impacts their estate plans. For example, some trusts are set up to provide a beneficiary with assets based on a yearly Required Minimum Distribution rule that is no longer in effect. If IRA assets must be distributed within 10 years, this may directly contradict the language in your trust document, which distributes assets over a much longer time frame.
In order to address this concern, it is important to first understand how the old rules applied.
Before the passing of the SECURE Act, non-designated beneficiaries (certain ‘non-persons’ such as trusts or charities) which did not qualify as “See-Through Trusts” (more on this in a moment) were required to distribute retirement funds by the end of the 5th year after the owner’s death. If however, the trust qualified as a See-Through Trust, the trust was treated as a single designated beneficiary and was able to ‘stretch’ the distribution using the oldest applicable trust beneficiary’s life expectancy.
A See-Through Trust is a properly drafted document which meets four requirements:
1) It is valid under state law,
2) It is irrevocable (can’t be amended) upon death,
3) Its underlying beneficiaries are identifiable, and
4) A copy of the trust document must be provided to the IRA custodian no later than October 31st of the year following death.
Requirement #3 is subject to a lot of confusion. What if some of your beneficiaries are identifiable and others are not? What if you have beneficiaries who receive income every year and other beneficiaries who will one day receive the principal (or the remainder)? Frequently clients list a charity as the final beneficiary should no living heirs survive. If you list a successor interest as a charity, does that render the trust a non-See-Through Trust? To address this, Treasury Regulations state that if a trust requires the RMD from the IRA to pass directly and immediately to the underlying income beneficiary (typically a spouse), then only the income beneficiary’s life expectancy must be considered. This is the Conduit Trust provision. All other remainder beneficiaries’ life expectancies can be ignored for purposes of determining the oldest. Some trusts, however, draft language in which the trust will continue to hold the RMD as it leaves the IRA and the trustee has the discretion to pass assets to the beneficiary. This is an Accumulation Trust, and all beneficiaries are considered for purposes of determining the oldest life expectancy.
The rules for non-designated beneficiaries haven’t changed. They are still subject to the 5 Year Rule, but the SECURE Act did change the rules for designated beneficiaries. They can now be broken into two categories: eligible (those listed below) and non-eligible (those that are not eligible).
Eligible designated beneficiaries are able to ‘stretch’ their distributions (interestingly they may also opt out). They must now be one of the following:
- Surviving Spouse
- Disabled Person
- Chronically ill person
- Individual who is not more than 10 years younger than the decedent
- Minor child of decedent (and this is only applicable until they reach age of majority)
Non-Eligible designated beneficiaries (those not listed above) are subject to the 10 Year Rule.
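To make the classification above concrete, here is a minimal sketch in Python. This is our illustration, not IRS language: the function name and the age-of-majority handling are simplifications, and real determinations depend on the facts, state law, and IRS guidance.

```python
# Hypothetical helper illustrating the eligible vs. non-eligible split above.
# "Eligible designated beneficiaries" may stretch; everyone else gets 10 years.

def classify_beneficiary(relationship, age_gap_years=None, is_minor_child=False,
                         is_disabled=False, is_chronically_ill=False):
    """Return the payout rule suggested by the SECURE Act categories above."""
    if relationship == "spouse" or is_disabled or is_chronically_ill:
        return "eligible: may stretch over life expectancy"
    if is_minor_child:
        # Only until the child reaches the age of majority; then 10 years.
        return "eligible until age of majority, then 10-year rule"
    if age_gap_years is not None and age_gap_years <= 10:
        # Not more than 10 years younger than the decedent.
        return "eligible: may stretch over life expectancy"
    return "non-eligible: 10-year rule"

# A sibling 5 years younger stretches; an unrelated adult heir does not.
print(classify_beneficiary("sibling", age_gap_years=5))
print(classify_beneficiary("friend", age_gap_years=25))
```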
So back to trusts. Remember, there are two types of see-through trusts which could qualify for the stretch: Conduit and Accumulation. A conduit trust requires distributions to be made each year and paid to the trust beneficiaries. If a conduit trust names an eligible designated beneficiary (see list above) then they will be able to continue to stretch the distributions. But if they haven’t named eligible designated beneficiaries, they could be hit hard with the new rule.
Since Conduit Trusts require all distributions received to be passed outright to beneficiaries, the funds can’t get stuck inside the trust where they would be taxed at trust income tax rates. Conduit distributions are taxed at the beneficiary’s personal income tax rate, but at the expense of a loss of control (because the money is no longer in the trust itself). Accumulation Trusts, however, will result in the assets being held within the trust and subject to trust income tax rates. So if a client is more focused on control, making sure their current document is (or will be) an accumulation trust may be recommended. The assets would still be paid out within ten years and subject to higher taxes, but control would be the pay-off.
For Conduit Trusts in which there is a single eligible designated beneficiary (typically the spouse), the stretch still exists. A potential problem arises with multiple income beneficiaries, all of whom are eligible designated beneficiaries. (For example, you name your sister, who is 5 years younger, and your nieces and nephews, all of whom are minors.) What age should be used to calculate the RMD? The sister, who could stretch (because she is not more than 10 years younger), or the minor children, who would force the trust to switch to the 10-year payout at the age of majority? It is unclear. Some have suggested creating single-income-beneficiary trusts, one for each beneficiary. This does require multiple trusts to be created and maintained, however.
So let’s say now that your client has a Conduit Trust with at least one non-eligible designated beneficiary. This will render the stretch unavailable, and the 10 Year Rule will apply.
Lastly, the trust should contain language as to how quickly it allows distributions to be made. The trust language could allow the trustee to take out more than the RMD every year (which would provide the most flexibility), or it could have been drafted to require the trustee to take out “only the required minimum distribution each year.” The latter may not be advisable, as it would mean the trustee technically can only take the distribution in the 10th year (because that is now the only RMD requirement).
As you can see, the question as to whether or not you should name your trust as a beneficiary of your retirement accounts is more nuanced than ever before. Wondering how your estate plan works in today’s environment? Our Wealth Management team can discuss options to help you achieve your financial goals and connect you with local estate attorneys to implement your plan.
Questions? Contact PBMares’ Wealth Management team today.
About the Author: | 61,588 |
TITLE: $\mathbb R[X]/\langle X^4-1\rangle \cong \mathbb R \times \mathbb R \times \mathbb C$
QUESTION [2 upvotes]: I am trying to prove the isomorphism $\mathbb R[X]/\langle X^4-1\rangle \cong \mathbb R \times \mathbb R \times \mathbb C$. I will write what I did so you can help me from there.
First notice that $x^4-1=(x-1)(x+1)(x^2+1)$. If I could justify $\mathbb R[X]/\langle X^4-1 \rangle \cong \mathbb R[X]/\langle X-1 \rangle \times \mathbb R[X]/\langle X+1 \rangle \times \mathbb R[X]/\langle X^2+1\rangle$, then I define the following morphisms: $$\phi_1: \mathbb R[X] \to \mathbb R$$$$f \to f(1),$$$$\phi_2: \mathbb R[X] \to \mathbb R$$$$f \to f(-1),$$$$\phi_3: \mathbb R[X] \to \mathbb C$$$$f \to f(i)$$ By the first isomorphism theorem we have $\mathbb R[X]/\langle X-1 \rangle \cong \mathbb R$, $\mathbb R[X]/\langle X+1 \rangle \cong \mathbb R$ and $\mathbb R[X]/\langle X^2+1 \rangle \cong \mathbb C$, so the original isomorphism follows from here.
I would appreciate if someone could tell me how do I justify $\mathbb R[X]/\langle X^4-1\rangle \cong \mathbb R \times \mathbb R \times \mathbb C$. I know of the existence of the chinese remainder theorem,$I,J \lhd R, I+J=R \implies R/IJ \cong R/I \times R/J$. If I could show $\langle x-1 \rangle + \langle x+1 \rangle=\mathbb R[X]$, and $\langle x^2-1 \rangle + \langle x^2+1 \rangle=\mathbb R[X]$, then I would be done.
REPLY [2 votes]: Take any $f \in \mathbb R [x]$. Observe that $f = (x+1)(\frac{1}{2}f) + (x-1)(-\frac{1}{2}f)$, so $\langle x-1 \rangle + \langle x+1 \rangle=\mathbb R[x]$. Similarly, $f = (x^2+1)(\frac{1}{2}f) + (x^2-1)(-\frac{1}{2}f)$, so $\langle x^2-1 \rangle + \langle x^2+1 \rangle=\mathbb R[x]$.
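As a quick sanity check, the two comaximality identities can be verified with sympy (the test polynomial `f` below is an arbitrary choice, not part of the argument):

```python
from sympy import symbols, expand, Rational

x = symbols('x')
f = 3*x**2 + 5*x + 7  # an arbitrary test polynomial

# f = (x+1)*(f/2) + (x-1)*(-f/2), so f lies in <x-1> + <x+1>
lhs = expand((x + 1)*(f/2) + (x - 1)*(-f/2))
assert lhs == expand(f)

# 1 = (x^2+1)/2 - (x^2-1)/2, so <x^2-1> and <x^2+1> are comaximal
one = expand(Rational(1, 2)*(x**2 + 1) - Rational(1, 2)*(x**2 - 1))
assert one == 1
```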
The hits just keep coming for Xbox Game Pass, the monthly subscription service that gives members access to a huge library of games. The platform has already seen a lot of heavy-hitters these last couple of months and you can add The Elder Scrolls 5: Skyrim Special Edition to the list.
As of December 15th, it will be available for members to play to their heart’s content. The Special Edition includes all kinds of new features and content, including upgraded art and effects, screen-space reflections, and dynamic depth of field.
Even just accessing the base experience is an incredible opportunity, especially for those that haven’t played this fifth main installment in a beloved and award-winning series. It was well received at launch and still continues to provide compelling RPG experiences for those that have stuck around.
You would be hard-pressed to find a more in-depth experience with limitless possibilities, from the ways you can customize your character to the compelling quests you can go on in an open-world environment that immerses you from the jump.
Bethesda has a lot to be proud of with Skyrim. It’s a game you can pick up and get lost in a world full of mystery and excitement. The main storyline is compelling, centering on the protagonist, the Dragonborn, and a quest to defeat a powerful dragon by the name of Alduin the World-Eater.
But if you decide to hold off on the main storyline, you’ll find all kinds of interesting side quests and characters to engage with. The hours and hours of gameplay that you can put into this RPG are downright impressive.
Now that the Special Edition is coming to Xbox Game Pass, maybe a new wave of gamers can experience its awesome elements and appreciate one of the best RPGs in recent memory.
If you have an Xbox Game Pass membership, these sweet deals have been pretty frequent. Microsoft is doing everything possible to make the program worth members’ hard-earned money, whether it’s adding brand-new titles like Doom Eternal or getting others excited with tried and true gems like Skyrim.
This RPG is an amazing addition that paints a bright picture for the monthly membership program. Who knows what other superb titles are coming to the platform? You know Microsoft has a lot of great plans in store for fans.
2021 could even be one of the biggest years for the platform as Microsoft looks to keep pushing the Xbox Series X. We shall see. | 407,139 |
Good, clear stuff from Paco Gonzalez
Cooperation vs Collaboration
We often use these words interchangeably, but they represent fundamentally different ways of contributing to a group and each comes with its own dynamics and power structures that shape groups in different ways ….
Useful to bear in mind when getting students to do group work. I wonder how beneficial it would be to make this distinction explicit to students? It would probably be enough to give them a goal to collaborate towards on the one hand, and co-operative settings (like knowledge markets) on the other.
TITLE: Approximating $1/z$ by polynomials
QUESTION [18 upvotes]: Let $C=\{\mathrm e^{\mathrm it}, 0\le t\le 3\pi/2\}$ and $f(z)=1/z$. By Runge's theorem, there is a sequence of polynomials $p_n(z)$ such that $$\lim_n p_n(z)=f(z)$$ uniformly on $C$.
Does anyone know such a sequence?
REPLY [8 votes]: This solution is specialized to the particular problem. As in my other solution, I am working with the arc $C = \{ e^{i \theta} : \pi/4 < \theta < 7 \pi/4 \}$. Let $T_n(z)$ be the $n$-th Chebyshev polynomial, so it is the polynomial with leading term $2^{n-1} z^n$ which has $|T_n(z)|\leq 1$ for $-1 \leq z \leq 1$.
Set
$$g_n(z) = z^{-1} - \frac{(1 + 1/\sqrt{2})^n z^{n - 1}}{2^{n-1}}
T_n\left(\frac{z + z^{-1} +1-1/\sqrt{2}}{1 + 1/\sqrt{2}} \right)$$
Okay, what's going on here? The map $z \mapsto z + z^{-1}$ takes $C$ to $[-2, \sqrt{2}]$. The linear function inside the Chebyshev polynomial takes $[-2, \sqrt{2}]$ to $[-1,1]$, so the $T_n$ term is $\leq 1$. So the whole second term has absolute value at most $(1+1/\sqrt{2})^n/2^{n-1} \approx 2 \cdot 0.853^n$. This will be much less than $z^{-1}$, for $n$ large, so $g_n(z) \approx z^{-1}$.
On the other hand, $T_n\left( \mbox{stuff} \right)$ will be a Laurent polynomial with most negative term $\frac{2^{n-1} z^{-n} }{(1+1/\sqrt{2})^n}$.
After multiplying by $\frac{(1 + 1/\sqrt{2})^n z^{n - 1}}{2^{n-1}}$, the second term will be of the form $z^{-1} + \mbox{polynomial}$. So $g_n(z)$ is a polynomial.
In this picture, the black arc is $e^{i \theta}$ for $\pi/4 \leq \theta \leq \pi$. The red, blue and green arcs are $g_{10}$, $g_{20}$ and $g_{30}$ evaluated on the black arc.
How did I come up with this? Roughly: I want $z^{-1} \approx p(z)$ for a polynomial $p$. I want $z^{-1} - p(z) \approx 0$. I want $z^{-n} + \cdots + z^n \approx 0$. I want Laurent polynomials with leading term $1$ and very small values on $C$. I know a family of polynomials with leading term $1$ and very small values on $[-1, 1]$: Namely, $2^{-n+1} T_n(z)$. How can I turn one into the other?
Note that these have much smaller degree than the polynomials in my other solution. The thing I called $f_N$ in my other solution has degree $N^3$, this $g_n$ has degree $2n-1$. | 169,236 |
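For readers who want to see the convergence numerically, here is a short numpy sketch evaluating $g_n$ on the arc and checking the bound $|g_n(z)-z^{-1}|\le (1+1/\sqrt{2})^n/2^{n-1}$ derived above (the grid resolution and the choices $n=10,20,30$ are arbitrary):

```python
import numpy as np
from numpy.polynomial import chebyshev as Ch

def g(n, z):
    a = 1 + 1/np.sqrt(2)
    # T_n via its coefficient vector in the Chebyshev basis
    Tn = Ch.chebval((z + 1/z + 1 - 1/np.sqrt(2)) / a, [0]*n + [1])
    return 1/z - (a**n * z**(n - 1) / 2**(n - 1)) * Tn

# the arc C = { e^{i theta} : pi/4 <= theta <= 7 pi/4 }
theta = np.linspace(np.pi/4, 7*np.pi/4, 2001)
z = np.exp(1j * theta)

for n in (10, 20, 30):
    err = np.abs(g(n, z) - 1/z).max()
    bound = (1 + 1/np.sqrt(2))**n / 2**(n - 1)  # ~ 2 * 0.853^n
    assert err <= bound * (1 + 1e-9)
```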
TITLE: For an abelian group $G$, the induced map on $H_{\bullet}(G,M)$ is the identity
QUESTION [2 upvotes]: I have a question about the proof of a lemma I was reading.
Let $G$ be an abelian group and $M$ a $G$-module. My question is about the following part:
"Since $G$ is an abelian group, it is well known that for any $g\in G$, the map $g:M\rightarrow M$ induces the identity homomorphism on all homology groups."
I assume that the map from above is multiplication by $g$. Now I was trying the following (the idea is from Chapter 3, Section VIII of Kenneth Brown's Cohomology of Groups).
Since $G$ is abelian, the map $\alpha=id:G\rightarrow G$ satisfies that the map $g:M\rightarrow M$ is compatible with $\alpha$; i.e. $g(hm)=\alpha(h)g(m)$. Now if $F$ is the bar resolution of $\mathbb{Z}$ over $\mathbb{Z}G$, the map $\tau:F\rightarrow F$ given by $\tau(x)=g^{-1}x$ is a chain map compatible with $\alpha$.
Therefore I have the chain map $\tau\otimes g:F\otimes_{G} M\rightarrow F\otimes_{G} M$ given by $x\otimes m\mapsto g^{-1}x\otimes gm$.
We have that $F$ is also a right $G$-module but since $G$ is abelian, we have that $gx=xg$ for all $x\in F$ and $g\in G$. Thus $g^{-1}x\otimes gm=xg^{-1}\otimes gm=x(g^{-1}g)\otimes m=x\otimes m$
Thus this chain map is the identity and therefore induces identity homomorphism on all homology groups.
That's the only idea I could think, Is that correct, specially the last part where I assume that $F$ is also a right module.
Any suggestions are welcome.
REPLY [1 votes]: Tensoring the inhomogeneous bar resolution with M we obtain a sequence of abelian groups:$$\cdots \stackrel{d}\to M^{G\times G\times G}
\stackrel{d}\to M^{G\times G}
\stackrel{d}\to M^{G}
\stackrel{d}\to M\to 0.
$$
For $m\in M$ and $g_1,g_2,\cdots,g_k\in G$, let $[g_k,g_{k-1},\cdots,g_1,m]\in M^{G^k}$ be the element with co-ordinate $m$ on the summand $M$ corresponding to $(g_1,\cdots,g_k)$ and all other co-ordinates $0$. Then the above differentials are given by: $$d\colon[g_k,g_{k-1},\cdots,g_1,m]\mapsto
$$
$$
[g_{k-1},\cdots,g_1,m]-[g_kg_{k-1},\cdots,g_1,m]+[g_k,g_{k-1}g_{k-2},\cdots,g_1,m]-\cdots +(-1)^k[g_k,g_{k-1},\cdots,g_1m]
$$
The maps induced by multiplication by $h\in G$ are given by: $$h_*\colon[g_k,g_{k-1},\cdots,g_1,m]\mapsto[g_k,g_{k-1},\cdots,g_1,hm].$$
These are isomorphisms of abelian groups, regardless of $G$ being abelian. However, if $h$ is central in $G$ then $h_*$ is a chain map. That is, if $h$ is central in $G$ then $h_*d=dh_*$ as $hg_1=g_1h$.
We define $I\colon M^{G^k}\to M^{G^{k+1}}$ for each $k\geq 0$ by:
$$
I\colon [g_k,g_{k-1},\cdots,g_1,m]\mapsto
$$
$$
[h,g_k,g_{k-1},\cdots,g_1,m]-[g_k,h,g_{k-1},\cdots,g_1,m]+\cdots+(-1)^k[g_k,g_{k-1},\cdots,g_1,h,m].
$$
Again assume that $h$ is central in $G$. The key result is: $$dI+Id=1-h_*.\qquad\qquad[1]$$
To see this, consider $(dI+Id)[g_k,g_{k-1},\cdots,g_1,m]$ and note that all terms of $Id [g_k,g_{k-1},\cdots,g_1,m]$ are cancelled out by the corresponding term in $dI[g_k,g_{k-1},\cdots,g_1,m]$.
The remaining terms in $dI[g_k,g_{k-1},\cdots,g_1,m]$ of the form $\pm [g_k,g_{k-1},\cdots,hg_r,\cdots,g_1,m]$ and $\mp [g_k,g_{k-1},\cdots,g_rh,\cdots,g_1,m]$ cancel out, as $h$ is central in $G$. This leaves the two desired terms:$$(dI+Id)[g_k,g_{k-1},\cdots,g_1,m]=[g_k,g_{k-1},\cdots,g_1,m]-[g_k,g_{k-1},\cdots,g_1,hm],$$ and we have proven $[1]$.
Finally note that if $z$ is a cycle, then $dz=0$ so by $[1]$: $$z-h_*z=dIz+Idz=dIz,$$
which is a boundary. Thus $z$ and $h_*z$ represent the same element in homology. | 152,311 |
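As a numerical sanity check of the key identity $dI+Id=1-h_*$, the sketch below implements the bar differential $d$, the homotopy $I$, and $h_*$ for an arbitrarily chosen abelian group $G=\mathbb Z/3$ acting on $M=\mathbb Z[G]$ by left translation (my own test setup, not from the answer), and verifies the identity on random chains:

```python
import numpy as np

N = 3   # G = Z/N (abelian), elements 0..N-1; M = Z[G] as length-N integer vectors
h = 1   # the (central) element h

def act(g, m):
    # left translation: g . e_j = e_{(g+j) mod N}
    return np.roll(m, g)

def add(chain, key, vec):
    chain[key] = chain.get(key, np.zeros(N, dtype=int)) + vec

def d(chain, k):
    # bar differential in degree k; keys are tuples (g_k, ..., g_1)
    out = {}
    for t, m in chain.items():
        add(out, t[1:], m)                                       # drop g_k
        for i in range(1, k):                                    # merge adjacent entries
            add(out, t[:i-1] + ((t[i-1] + t[i]) % N,) + t[i+1:], (-1)**i * m)
        add(out, t[:-1], (-1)**k * act(t[-1], m))                # g_1 acts on m
    return out

def I(chain, k):
    # homotopy: insert h in every position, with alternating signs
    out = {}
    for t, m in chain.items():
        for j in range(k + 1):
            add(out, t[:j] + (h,) + t[j:], (-1)**j * m)
    return out

def combine(*signed_chains):
    out = {}
    for sign, ch in signed_chains:
        for t, m in ch.items():
            add(out, t, sign * m)
    return out

rng = np.random.default_rng(0)
for k in (1, 2, 3):
    c = {tuple(rng.integers(0, N, size=k)): rng.integers(-5, 5, size=N)
         for _ in range(5)}
    lhs = combine((1, d(I(c, k), k + 1)), (1, I(d(c, k), k - 1)))   # dI + Id
    rhs = combine((1, c), (-1, {t: act(h, m) for t, m in c.items()}))  # 1 - h_*
    assert all(np.array_equal(lhs.get(t, np.zeros(N, dtype=int)),
                              rhs.get(t, np.zeros(N, dtype=int)))
               for t in set(lhs) | set(rhs))
```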
For the past several weeks, the Web has been buzzing with the prospective news that Apple will announce its latest iteration of the popular iPhone - most likely dubbed the iPhone 5 - today, Sept. 12.
USA Today's Jefferson Graham says the new phone is expected to be slightly bigger with a larger screen, faster processor (iOS 6) and better camera. The phone should also have a new mapping feature - called Apple Maps. The Huffington Post reports that Apple is likely to change the phone's look, shrink the charging dock and equip it with a bigger battery for longer life on one charge.
A JP Morgan economist has said he believes the release of the new iPhone could result in a major boost to the nation's economy during the last quarter of the year.
So, will you be one of those early buyers? What do you hope the phone has?
TITLE: Simple proof of mulitnomial covariance?
QUESTION [0 upvotes]: Perform $m$ independent trials, each of which results in any of $r$ possible outcomes with probabilities $p_1, p_2, \ldots, p_r$, where $\sum_{i=1}^r {p_i} = 1$.
Let $N_i$ denote the number of trials that result in outcome $i = 1,...,r$.
Find the covariance between $N_i$ and $N_j$
Now I know that $Cov(N_i,N_j) = -mp_ip_j$ for $i \neq j$, but what's a quick proof of that using basic laws of expected values and variance?
REPLY [0 votes]: You might as well let $m=1$. (Because the sum of two independent multinomial count vectors based on the same probability vector, with sample sizes $m$ and $n$, is again multinomial with sample size $m+n$, and the covariances of independent vectors add.) Then $\operatorname{Cov}(N_i,N_j)=E N_i N_j - EN_i EN_j$, where the counts $N_k$ are $0,1$ random variables, the indicators that the single pick was an $i$ or a $j$. So for $i\neq j$, $N_iN_j=0$ with probability 1, while $EN_i = p_i$ and $EN_j=p_j$, giving $\operatorname{Cov}(N_i,N_j)=-p_ip_j$ for a single trial. Summing over the $m$ independent trials then gives $\operatorname{Cov}(N_i,N_j) = -mp_ip_j$.
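A quick Monte Carlo check of $\operatorname{Cov}(N_i,N_j)=-mp_ip_j$ (the values of $m$, $p$ and the sample size below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
m, p = 5, np.array([0.2, 0.3, 0.5])
counts = rng.multinomial(m, p, size=200_000)   # rows are multinomial draws

emp_cov = np.cov(counts[:, 0], counts[:, 1])[0, 1]
# theoretical value: -m * p_i * p_j = -5 * 0.2 * 0.3 = -0.3
assert abs(emp_cov - (-m * p[0] * p[1])) < 0.02
```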
\subsection{Proof of Proposition \ref{prop:NE_unique}}\label{append:Prop_2}
Similar to the proof of Proposition \ref{prop:NE_unique_AF} in Appendix \ref{append:Prop_NE_unique_AF}, the proof of this proposition follows by showing that the function ${\pmb{\mathcal B}^{DF}}\left( \pmb \rho \right)$ of the formulated game is standard. In the following, we show that the function ${\pmb{\mathcal B}^{DF}}\left( \pmb \rho \right)$ satisfies the three properties of the standard function.
\emph{(1) Positivity}: As shown in Appendix \ref{append:lemma_BR_function_DF}, for any player $i$ and any strategy profile $\pmb \rho$, the best response function ${{\mathcal B}}_i^{DF}\left( \pmb \rho \right) $ is always larger than 0, which guarantees the positivity of the function ${\pmb{\mathcal B}^{DF}}\left( \pmb \rho \right)$.
\emph{(2) Monotonicity}: Suppose $\pmb \rho$ and ${\pmb \rho}^\prime$ are two different strategy profiles and $\pmb \rho \ge {\pmb \rho}^\prime$. Then, the corresponding best response functions of any player $i$ can be written as
\begin{equation}\label{eq:app_1}
\begin{split}
{{\mathcal B}_i^{DF}}\left(\pmb \rho \right)&= \left[ {\left( {{X_i}{W_i} + {X_i} + {Y_i}{Z_i} + {Z_i}} \right) - } \right. \\
&\left. {\sqrt {{{\left( {{X_i}{W_i} + {X_i} - {Y_i}{Z_i} + {Z_i}} \right)}^2} + 4{Y_i}Z_i^2} } \right]/\left( {2{Y_i}{Z_i}} \right),
\end{split}
\end{equation}
and
\begin{equation}\label{eq:app_2}
\begin{split}
{{\mathcal B}_i^{DF}}\left({\pmb \rho}^\prime \right)&= \left[ {\left( {{X_i}{W_i^\prime} + {X_i} + {Y_i}{Z_i} + {Z_i}} \right) - } \right. \\
&\left. {\sqrt {{{\left( {{X_i}{W_i^\prime} + {X_i} - {Y_i}{Z_i} + {Z_i}} \right)}^2} + 4{Y_i}Z_i^2} } \right]/\left( {2{Y_i}{Z_i}} \right),
\end{split}
\end{equation}
where $X_i$, $Y_i$, $Z_i$, $W_i$ are defined in (\ref{eq:def_XYZW}), and ${W_i^\prime} = \sum\nolimits_{j = 1,j \ne i}^N {{\rho _j^\prime}\eta \left( {\sum\nolimits_{n = 1}^N {{P_n}{{\left| {{g_{nj}}} \right|}^2}} } \right){{\left| {{h_{ji}}} \right|}^2}}/ {\sigma ^2}$.
Analogous to the analyses in Appendix \ref{append:Prop_NE_unique_AF}, the proof of ${{\mathcal B}_i^{DF}}\left(\pmb \rho \right) \ge{{\mathcal B}_i^{DF}}\left({\pmb \rho}^\prime \right)$ is equivalent to proving that ${\textstyle{{\partial {{\mathcal B}_i^{DF}}\left( {{W_i}} \right)} \over {\partial {W_i}}}} \ge 0$. Expanding ${\textstyle{{\partial {{\mathcal B}_i^{DF}}\left( {{W_i}} \right)} \over {\partial {W_i}}}}$, we have
\begin{equation}\label{eq:app_3}
\begin{split}
&\frac{{\partial {{\mathcal B}_i^{DF}}\left( {{W_i}} \right)}}{{\partial {W_i}}} \\
&= \frac{{{X_i}}}{{2{Y_i}{Z_i}}}\left[ {1 - \frac{{{X_i}{W_i} + {X_i} - {Y_i}{Z_i} + {Z_i}}}{{\sqrt {{{\left( {{X_i}{W_i} + {X_i} - {Y_i}{Z_i} + {Z_i}} \right)}^2} + 4{Y_i}Z_i^2} }}} \right].
\end{split}
\end{equation}
Since the term ${{{X_i}}}/{{\left(2{Y_i}{Z_i}\right)}} >0$ and the term in the square bracket of (\ref{eq:app_3}) is always larger than zero, we can claim that ${\textstyle{{\partial {{\mathcal B}_i^{DF}}\left( {{W_i}} \right)} \over {\partial {W_i}}}} > 0$, which completes the proof of monotonicity.
\emph{(3) Scalability}: For any $\alpha >1$, we define the function $ {\mathcal F}_i\left( {\alpha ,{\pmb \rho} } \right) = \alpha {{\mathcal B}}_i^{DF}\left( \pmb \rho \right) - {{\mathcal B}}_i^{DF}\left( \alpha {\pmb \rho} \right)$. Then, the proof of the scalability is equivalent to proving that ${\mathcal F}_i\left( {\alpha ,{\pmb \rho} } \right) > 0$ for any $\alpha >1$. Firstly, it is obvious that $ {\mathcal F}_i\left( {1 ,{\pmb \rho} } \right) = 0$. Thus, a sufficient condition for ${\mathcal F}_i\left( {\alpha ,{\pmb \rho} } \right) > 0$ is that ${\mathcal F}_i\left( {\alpha ,{\pmb \rho} } \right)$ is an increasing function of $\alpha$, i.e., ${\textstyle{{\partial {{\cal F}_i}\left( {\alpha ,{ \pmb \rho} } \right)} \over {\partial \alpha }}} > 0$. To proceed, we first derive the first-order and second-order partial derivatives of ${\mathcal F}_i\left( {\alpha ,{\pmb \rho} } \right)$ w.r.t $\alpha$ and obtain
\begin{equation}\label{eq:app_4_1}
\begin{split}
\frac{{\partial {{\cal F}_i}\left( {\alpha ,{\pmb \rho} } \right)}}{{\partial \alpha }} &= \frac{1}{{2{Y_i}{Z_i}}}\left\{ {{X_i} + {Y_i}{Z_i} + {Z_i}} \right. \\
&{ + \frac{{\left( {\alpha {X_i}{W_i} + {X_i} - {Y_i}{Z_i} + {Z_i}} \right){X_i}{W_i}}}{{\sqrt {{{\left( {\alpha {X_i}{W_i} + {X_i} - {Y_i}{Z_i} + {Z_i}} \right)}^2} + 4{Y_i}{Z_i}^2} }}} \\
&\left. - \sqrt {{{\left( {{X_i}{W_i} + {X_i} - {Y_i}{Z_i} + {Z_i}} \right)}^2} + 4{Y_i}Z_i^2} \right\},
\end{split}
\end{equation}
\begin{equation}\label{eq:app_4}
\begin{split}
\frac{{{\partial ^2}{{\cal F}_i}\left( {\alpha ,{\pmb \rho} } \right)}}{{\partial {\alpha ^2}}} &= \frac{{2{Z_i}{{\left( {{X_i}{W_i}} \right)}^2}}}{\left[{{{\left( {\alpha{X_i}{W_i} + {X_i} - {Y_i}{Z_i} + {Z_i}} \right)}^2} + 4{Y_i}Z_i^2}\right]^{3/2}}.
\end{split}
\end{equation}
From (\ref{eq:app_4}), we can see that $\frac{{{\partial ^2}{{\cal F}_i}\left( {\alpha ,{\pmb \rho} } \right)}}{{\partial {\alpha ^2}}}$ is always larger than 0, which indicates that ${\textstyle{{\partial {{\cal F}_i}\left( {\alpha ,{ \pmb \rho} } \right)} \over {\partial \alpha }}}$ is increasing in $\alpha$. Thus, a sufficient condition for ${\mathcal F}_i\left( {\alpha ,{\pmb \rho} } \right) > 0$ can now be simplified as ${\left. {{\textstyle{{\partial {{\cal F}_i}\left( {\alpha ,{\pmb \rho} } \right)} \over {\partial \alpha }}}} \right|_{\alpha = 1}} > 0$. Substituting $\alpha = 1$ into (\ref{eq:app_4_1}), we get
\begin{equation}\label{eq:app_5}
\begin{split}
&{\left. {\frac{{\partial {{\cal F}_i}\left( {\alpha ,{\pmb \rho} } \right)}}{{\partial \alpha }}} \right|_{\alpha {\rm{ = }}1}} = \frac{1}{{2{Y_i}{Z_i}}} \left\{ {{X_i} + {Y_i}{Z_i} + {Z_i}} \right.\\
&~~~~~~~~~~+ \frac{{\left( {{X_i}{W_i} + {X_i} - {Y_i}{Z_i} + {Z_i}} \right){X_i}{W_i}}}{{\sqrt {{{\left( {{X_i}{W_i} + {X_i} - {Y_i}{Z_i} + {Z_i}} \right)}^2} + 4{Y_i}Z_i^2} }}\\
&~~~~~~~~~~\left.- \sqrt {{{\left( {{X_i}{W_i} + {X_i} - {Y_i}{Z_i} + {Z_i}} \right)}^2} + 4{Y_i}Z_i^2}\right\}.
\end{split}
\end{equation}
To proceed, we derive the first-order derivative for the RHS of (\ref{eq:app_5}) with respect to $W_i$. After some algebraic manipulations, we obtain
\begin{equation*}
\begin{split}
&\partial \left( {{{\left. {{\textstyle{{\partial {{\cal F}_i}\left( {\alpha ,{\pmb \rho} } \right)} \over {\partial \alpha }}}} \right|}_{\alpha = 1}}} \right)/\partial {W_i} \\
&= \frac{{2{Z_i}{W_i}X_i^2}}{{{{\left[ {{{\left( {{X_i}{W_i} + {X_i} - {Y_i}{Z_i} + {Z_i}} \right)}^2} + 4{Y_i}Z_i^2} \right]}^{3/2}}}},
\end{split}
\end{equation*}
which is always positive. Thus, ${\left. {\frac{{\partial {{\cal F}_i}\left( {\alpha ,{\pmb \rho} } \right)}}{{\partial \alpha }}} \right|_{\alpha {\rm{ = }}1}}$ is an increasing function of $W_i$. Since $W_i > 0$, we further have
\begin{equation}\label{eq:app_6}
\begin{split}
{\left. {\frac{{\partial {{\cal F}_i}\left( {\alpha ,{\pmb \rho} } \right)}}{{\partial \alpha }}} \right|_{\alpha {\rm{ = }}1}} &> {\left. {\frac{{\partial {{\cal F}_i}\left( {\alpha ,{\pmb \rho} } \right)}}{{\partial \alpha }}} \right|_{\alpha = 1,{W_i} = 0}}\\
&= \frac{1}{{2{Y_i}{Z_i}}}\left\{ {{X_i} + {Y_i}{Z_i} + {Z_i}} \right.\\
&~~~~\left.- \sqrt {{{\left( {{X_i} + {Y_i}{Z_i} + {Z_i}} \right)}^2} - 4{X_i}{Y_i}Z_i}\right\}\\
&>0.
\end{split}
\end{equation}
where the last inequality holds because ${{\left( {{X_i} + {Y_i}{Z_i} + {Z_i}} \right)}^2} - 4{X_i}{Y_i}{Z_i} < {{\left( {{X_i} + {Y_i}{Z_i} + {Z_i}} \right)}^2}$ when $X_i, Y_i, Z_i > 0$. Therefore, we can claim that $\alpha {{\mathcal B}}_i^{DF}\left( \pmb \rho \right) > {{\mathcal B}}_i^{DF}\left( \alpha {\pmb \rho} \right)$, which completes the proof.
TITLE: Name for functions that are anti-symmetric about $y=x$
QUESTION [1 upvotes]: Even functions are functions that are symmetric about the $Y$-axis, and odd functions are functions that are symmetric about the origin. Functions that are symmetric about $y=x$ ($y=f(x)$ implies $x=f(y)$) are involutions, i.e. functions that are their own inverse.
Is there a special name for functions that are anti-symmetric about $y=x$? In other words, is there a name for the property: If $y=f(x)$ and $x=f(y)$ then $x=y$? The word anti-involution seems to be in use, but according to Wikipedia it has a rather technical definition in terms of antihomomorphisms and doesn't seem to be what I'm looking for.
REPLY [1 votes]: Any monotonic increasing function works. If $f(x)<x$ then $f(f(x))\leq f(x)<x,$ so $f(f(x))\neq x.$ Similarly, $f(x)>x,$ then $f(f(x))\geq f(x)>x,$ so again $f(f(x))\neq x.$ You don't need $f$ to be strictly monotonic.
Also, any $f$ such that $f(x)\geq x$ for all $x.$ If $f(x)\neq x,$ then $f(x)>x$ and $f(f(x))\geq f(x)>x.$ This includes functions like $f(x)=x+x^2,$ or more generally $f(x)=x+g(x)^2,$ for any function $g,$ which are not strictly increasing.
Similarly, if $f(x)\leq x$ for all $x,$ then when $f(x)\neq x$ we get $f(x)<x$ and thus $f(f(x))\leq f(x)<x.$ So this includes functions like $f(x)=x-g(x)^2.$
$x+g(x)^2$ and $x-g(x)^2$ allows solutions to $g(x)=0,$ so we can get functions $f$ with arbitrary sets of solutions to $f(x)=0.$ Given any closed $C\subseteq\mathbb R,$ we can find continuous $g$ with $C=\{x\mid g(x)=0\}$ and then $C=\{x\mid f(x)=x\}.$
There are a lot of such functions.
Given a general $f,$ if $h(x)=f(x)-x,$ your condition $f(f(x))=x$ becomes $h(x+h(x))+h(x)=0$ so you want $$h(x+h(x))+h(x)=0\implies h(x)=0.\tag1$$
It is worth considering what $(1)$ means if $f$ is monotonic increasing.
If $h$ is differentiable, monotonicity of $f$ means $h'(x)\neq -2$ for all $x.$ Then $h(x+h(x))=h(x)+h'(c)h(x)$ for some $c$ in $(x,x+h(x)).$ So $0=h(x+h(x))+h(x)=(2+h'(c))h(x).$ But $2+h'(c)\neq 0,$ so this means $h(x)=0.$
So we can apparently use any differentiable $h(x)$ such that for all $x,$ $h'(x)\neq -2.$ Equivalently, this means any differentiable $f(x)$ such that $f'(x)\neq -1.$
If $f$ is differentiable, and $f'(x)\neq -1$ for all $x,$ we have $f(f(x))=x$ if and only if $f(x)=x.$
Proven directly: If $u=f(x)$ and $u\neq x,$ then $f(u)=x$ and $u-x=f(x)-f(u)=f'(c)(x-u)$ for some $c$ between $u$ and $x.$ But $f'(c)\neq -1,$ this isn't possible.
So this includes a lot of functions which decrease "too quickly," like $f(x)=-2x.$
Darboux's Theorem says that any function which is the derivative of another satisfies the intermediate value property, so if $f$ is differentiable and $f'(x)\neq -1$ for all $x,$ then either $f'(x)<-1$ for all $x$ or $f'(x)>-1$ for all $x.$
If course, there are $f$ with some $f'(x)=-1$ which satisfy this condition. For example, when $f(x)=-(x+x^3).$ Then $f'(0)=-1,$ but $f(f(x))=x$ still implies $f(x)=x.$ This is because while $f'(0)=-1,$ there is no $u\neq v$ with $f(u)-f(v)=v-u,$ because the function $f$ crosses the tangent at $x=0.$
Of course, our mean value condition on $u,v$ excluded $f(u)+u=f(v)+v$ when $u\neq v,$ which is stronger than excluding $f(u)=v, f(v)=u.$
For example, $f(x)=x^2-x$ has $f(f(x))-x = x^4-2x^3,$ so the roots of $f(f(x))=x$ are $x=0,2$ but $f(0)=0, f(2)=2.$ So $f$ has your property, but $f(x)+x = f(-x)+(-x)=x^2,$ so $f$ has infinitely many pairs $u\neq v$ with $f(u)+u=f(v)+v$ but no such pair with $f(u)=v$ and $f(v)=u.$ | 155,451 |
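The $f(x)=x^2-x$ computation above is easy to confirm with sympy:

```python
from sympy import symbols, expand, solve

x = symbols('x')
f = x**2 - x

ffx = expand(f.subs(x, f))          # f(f(x))
assert expand(ffx - x) == x**4 - 2*x**3

# every solution of f(f(x)) = x is already a fixed point of f
roots = solve(ffx - x, x)
assert set(roots) == {0, 2}
assert all(f.subs(x, r) == r for r in roots)
```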
\begin{document}
\pagestyle{plain}
\title{Symmetric multisets of permutations}
\author{Jonathan S. Bloom}
\date{\today\\[10pt]}
\maketitle
\begin{abstract}
The following long-standing problem in combinatorics was first posed in 1993 by Gessel and Reutenauer \cite{GesReut93}. For which multisubsets $B$ of the symmetric group $\fS_n$ is the quasisymmetric function
$$Q(B) = \sum_{\pi \in B}F_{\Des(\pi), n}$$
a symmetric function? Here $\Des(\pi)$ is the descent set of $\pi$ and $F_{\Des(\pi), n}$ is Gessel's fundamental basis for the vector space of quasisymmetric functions. The purpose of this paper is to provide a useful characterization of these multisets. Using this characterization we prove a conjecture of Elizalde and Roichman from~\cite{ElizaldeRoichman2015}. Two other corollaries are also given. The first is a short new proof that conjugacy classes are symmetric sets, a well known result first proved by Gessel and Reutenauer~\cite{GesReut93}. Our second corollary is a unified explanation that both left and right multiplication of symmetric multisets, by inverse $J$-classes, is symmetric. The case of right multiplication was first proved by Elizalde and Roichman in~\cite{ElizaldeRoichman2015}.
\end{abstract}
\section{Introduction}\label{sec:intro}
For integers $m\leq n$ set $[m,n] := \{m,m+1,\ldots, n\}$. When $m=1$ we simply write $[n]$ instead of $[1,n]$. We denote by $\fS_n$ the symmetric group on $[n]$. A multiset $B$ whose elements are taken from $\fS_n$ is denoted by $B\Subset \fS_n$. The standard notation $B\subseteq \fS_n$ is reserved to indicate that $B$ is a set. Additionally, given two multisets $A,B\Subset \fS_n$ we write $A\sqcup B$ to denote their disjoint union.
For any $\pi\in \fS_n$ its \emph{descent set} is
$$\Des(\pi):=\makeset{i\in [n-1]}{\pi_i>\pi_{i+1}}\subseteq [n-1].$$
For each $J\subseteq[n-1]$ define the \emph{inverse $J$-class} as
$$R_J^{-1} = \makeset{\pi^{-1}\in \fS_n}{\Des(\pi)\subseteq J}.$$
For multisets $A,B\Subset \fS_n$ we write $A\equiv B$ to indicate that there is a descent-set-preserving bijection between $A$ and $B$ or, equivalently, in terms of generating functions, that
$$\sum_{\pi\in A} \textbf{x}^{\Des(\pi)} = \sum_{\pi\in B} \textbf{x}^{\Des(\pi)}.$$
To indicate that $\Des(\pi) = \Des(\tau)$ for $\pi,\tau\in\fS_n$ we abuse this notation and sometimes write $\pi\equiv \tau$. Further, define $AB$ to be the multiset of all products $\pi\tau$ where $\pi\in A$ and $\tau\in B$. In the case when $A= \{\pi\}$ we simply write $\pi B$. Playing a crucial role in this paper are the products $BR_J^{-1}$ and $R_J^{-1}B$. In the case that $B$ is such that
$$BR_J^{-1}\equiv R_J^{-1}B$$
for all $J\subseteq[n-1]$ we say that $B$ is \emph{$D$-commutative with all inverse $J$-classes}.
Next we establish some basic notions for the theory of symmetric functions. Let $\mathbf{x}= \{x_1,x_2,\ldots\}$ be a countably infinite set of commuting variables. We say a formal power series in $\mathbb{Q}[[\mathbf{x}]]$ is \emph{symmetric} if it is of bounded degree and invariant under all permutations of its indices. The vector space of all homogeneous symmetric functions of degree $n$ is denoted by $\Lambda(n)$. We make use of two classical bases for $\Lambda(n)$. Given an integer partition $\lambda = (\lambda_1 \geq \lambda_2\geq \cdots\geq \lambda_p )\vdash n$ we set $m_\lambda$ to be the symmetric function obtained by symmetrizing the monomial
$$x_{1}^{\lambda_1}\cdots x_{p}^{\lambda_p}.$$
The $m_\lambda$ are called \emph{monomial symmetric functions} and they constitute our first basis. We also need the \emph{Schur functions} which we denote by $\{s_\lambda\}_{\lambda\vdash n}$. For a combinatorial definition of these functions and a proof that they are a basis for $\Lambda(n)$ we direct the reader to \cite{FultonBook} or \cite{EC2}. We say that $f\in \Lambda(n)$ is \emph{Schur positive} provided that it can be written as
$$f= \sum_{\lambda\vdash n} c_\lambda s_\lambda$$
where $c_\lambda$ are nonnegative integer coefficients.
We next recall the quasisymmetric functions. For each integer composition $\alpha=(\alpha_1,\ldots, \alpha_p)\models n$ we define the \emph{monomial symmetric function} as
$$M_\alpha: = \sum_{i_1<i_2<\cdots <i_p} x_{i_1}^{\alpha_1}x_{i_2}^{\alpha_2}\cdots x_{i_p}^{\alpha_p}.$$
We denote by $\QSYM(n)$ the $\mathbb{Q}$-span of $\{M_\alpha\}_{\alpha\models n}$ and call the elements of this space \emph{quasisymmetric functions}. As the monomial symmetric functions, which are indexed by compositions of $n$, are a canonical basis for this space, we have $\dim(\QSYM(n)) = 2^{n-1}$. Lastly, since
$$m_\lambda = \sum_{\alpha} M_\alpha,$$
where the sum is over compositions $\alpha$ obtained by permuting the parts of $\lambda$, it follows that $\Lambda(n) \subseteq \QSYM(n)$.
We shall also need Gessel's fundamental basis for $\QSYM(n)$. To define this basis set
$$F_\alpha = \sum_{\beta\leq \alpha} M_\beta,$$
where $\beta\leq \alpha$ indicates that $\beta$ is a refinement of $\alpha$. The collection $\{F_\alpha\}_{\alpha\models n}$ is called the \emph{fundamental} basis for $\QSYM(n)$.
To connect quasisymmetric functions to multisets of permutations we recall the well-worn bijection between subsets $J = \{j_1<j_2<\cdots < j_s\}$ of $[n-1]$ and compositions of $n$ given by
$$J \mapsto (j_1, j_2-j_1, \ldots , j_{s}-j_{s-1}, n-j_s).$$
We denote the image of $J$ under this bijection by $\co(J)$. Using this correspondence we also index the fundamental basis of $\QSYM(n)$ by subsets of $[n-1]$ and write $F_{J,n}: = F_\alpha$ where $\alpha$ corresponds to $J\subseteq[n-1]$. For any $B\Subset \fS_n$ we recall the quasisymmetric function
$$Q(B) := \sum_{\pi\in B} F_{\Des(\pi),n}\in \QSYM(n),$$
first defined in \cite{Ges83}.
We say that $B$ is \emph{symmetric} if $Q(B)$ is symmetric. If, moreover, $Q(B)$ is Schur-positive then, following the language first established by Adin and Roichman in \cite{AdinRoichman15}, we say $B$ is \emph{fine}.
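As a small illustrative example (a computation added here for concreteness, using only the definitions above), consider the full symmetric group on two letters.
\begin{exa}
Let $B=\fS_2=\{12,21\}$. Since $\Des(12)=\emptyset$ and $\Des(21)=\{1\}$, we have
$$Q(B) = F_{\emptyset,2} + F_{\{1\},2} = \left(M_{(2)} + M_{(1,1)}\right) + M_{(1,1)} = m_{(2)} + 2m_{(1,1)} = s_{(2)} + s_{(1,1)},$$
so $B$ is a fine (in particular, symmetric) multiset.
\end{exa}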
Writing $J\sim K$ whenever $\co(J)$ is a permutation of $\co(K)$, the main result of this paper is a proof that the following are equivalent:
\begin{enumerate}
\item[a)] $B$ is $D$-symmetric (defined in Section~\ref{sec:char})
\item[b)] $B$ is $D$-commutative with all $J$-classes
\item[c)] $BR_J^{-1}\equiv BR_K^{-1}$ whenever $J\sim K$
\item[d)] $R_J^{-1}B\equiv R_K^{-1}B$ whenever $J\sim K$
\item[e)] $B$ is symmetric.
\end{enumerate}
As an immediate consequence we prove Conjecture~10.4 in \cite{ElizaldeRoichman2015}, due to Elizalde and Roichman, in which they hypothesize that if $B$ is fine then
$$BD_J^{-1} \equiv D_J^{-1}B$$
where $D_J^{-1} = \makeset{\pi^{-1}\in \fS_n}{\Des(\pi) = J}$. Two additional corollaries are proved. The first gives a new proof of the fact that conjugacy classes are symmetric. This was first established by Gessel and Reutenauer in~\cite{GesReut93}, in which they proved the stronger result, by way of a more involved proof, that conjugacy classes are actually fine sets. Our second corollary gives a unified explanation that the multiset obtained by either left or right multiplication of symmetric multisets by $R_J^{-1}$ is symmetric. The case of right multiplication in the context of fine sets was first proved by Elizalde and Roichman in~\cite{ElizaldeRoichman2015}, although their techniques were unable to establish the case of left multiplication.
The paper is organized as follows. In the next section we establish key definitions and lemmas used throughout. In Section~\ref{sec:char} we formally state our main theorem and prove the aforementioned corollaries. As the proof of our main theorem is involved, we break it into several propositions. The propositions that follow immediately from definitions are in the main part of Section~\ref{sec:char}. Statements that imply symmetry are contained in Subsection~\ref{subsec:sufficienty} as they involve a careful analysis of the change of basis between the fundamental basis and the monomial quasisymmetric functions. The proof that symmetry implies $D$-symmetry involves techniques from the theory of tableaux and occupies Subsection~\ref{subsec:Knuth}.
\section{Preliminaries}
An \emph{ordered set partition} of $[n]$ is a sequence $\cU=(U_1,\ldots, U_s)$ of nonempty disjoint sets $U_i$, called \emph{blocks}, whose union is $[n]$. We write $\cU\vdash [n]$ to indicate that $\cU$ is an ordered set partition of $[n]$ and we denote by $\Pi(n)$ the set of all ordered set partitions of $[n]$. For each composition $\alpha\models n$ we further define $\Pi(\alpha)$ to be the collection of all $\cU\vdash [n]$ where $|U_i| = \alpha_i$. If $\alpha$ corresponds to $J\subseteq [n-1]$ then we also denote this set by $\Pi(n,J)$.
\begin{exa}
The set partitions
$$\cU=(\{2,5,8\},\{1,3,4\},\{6,7,9\})\quad\myand\quad \cV=(\{2,5,8\},\{6,7,9\},\{1,3,4\})$$
are distinct ordered set partitions with $\cU,\cV\in \Pi(3,3,3)= \Pi(9,\{3,6\})$.
\end{exa}
For the remainder of this section fix $J=\{j_1<\cdots <j_s\}\subseteq [n-1]$ and adopt the convention that $j_0=0$ and $j_{s+1} = n$. To avoid repeating ourselves the symbol $\cU$ will always denote an element of $\Pi(n,J)$ and $U_i$ will always denote its $i$th part.
\begin{definition}
Let $U$ and $V$ be disjoint alphabets and let $\pi$ and $\tau$ be permutations of $U$ and $V$ respectively. We define the \emph{shuffle} of $\pi$ and $\tau$ to be the set $\pi\shuffle \tau$ consisting of all permutations of $U\cup V$ where the letters in $U$ appear in the same order as in $\pi$ and the letters in $V$ appear in the same order as in $\tau$.
Additionally, for $\pi\in\fS_n$ and $\tau\in\fS_m$ we define
$$\pi\cshuffle \tau : = \pi \shuffle \tau^{+n}$$
where $\tau^{+n}$ is the word obtained by adding $n$ to each term of $\tau$. So $\pi\cshuffle \tau \subseteq \fS_{n+m}$.
\end{definition}
\begin{exa}
If $\pi = 12$ and $\tau = 21$ then
$$\pi\cshuffle \tau = \{ 1243, 1423, 4123, 1432, 4132, 4312\},$$
where we have dropped commas for readability.
\end{exa}
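Both shuffle operations are easy to compute directly. The following Python sketch (an informal illustration only, not part of the formal development) enumerates $\pi\cshuffle \tau$ and reproduces the six permutations listed in the example above.

```python
from itertools import combinations

def shuffle(pi, tau):
    """All interleavings of the words pi and tau (over disjoint
    alphabets), preserving the relative order within each word."""
    n, m = len(pi), len(tau)
    words = set()
    for pi_positions in combinations(range(n + m), n):
        word = [None] * (n + m)
        for letter, pos in zip(pi, pi_positions):
            word[pos] = letter
        tau_letters = iter(tau)
        word = tuple(w if w is not None else next(tau_letters) for w in word)
        words.add(word)
    return words

def cshuffle(pi, tau):
    """pi shifted-shuffle tau: add len(pi) to each letter of tau, then shuffle."""
    return shuffle(pi, tuple(t + len(pi) for t in tau))

print(sorted(cshuffle((1, 2), (2, 1))))
# the six words 1243, 1423, 1432, 4123, 4132, 4312
```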
As the main results in this paper involve the sets $R_J^{-1}$ we now look at multiple ways of describing such sets. First note that
\begin{equation}\label{eq:R_J as shuf}
R_J^{-1} = (1\ldots j_1) \shuffle (j_1+1\ldots j_2) \shuffle \cdots \shuffle (j_s+1\ldots n).
\end{equation}
Hence to every $\cU\in \Pi(n,J)$ there corresponds the permutation $\delta_\cU \in R_J^{-1}$ defined so that $U_1$ is the set of positions occupied by the subsequence $(1,2,\ldots,j_1)$, $U_2$ is the set of positions occupied by the subsequence $(j_1+1,\ldots, j_2)$, etc. So
\begin{equation}\label{eq:R_J as delta}
R_J^{-1} = \makeset{\delta_\cU}{\cU\in \Pi(n,J)}
\end{equation}
from which it follows that $\Pi(n,J)$ is in correspondence with $R_J^{-1}$.
\begin{exa}
If $n=4$ and $J = \{2\}\subseteq [3]$ then we obtain the following correspondence:
\begin{center}
\begin{tabular}{ccc}
$\cU$ & $\to$ & $\delta_\cU$\\
\hline
(12, 34)&& 1234\\
(13, 24)&& 1324\\
(14, 23)&& 1342\\
(23, 14)&& 3124\\
(24, 13)&& 3142\\
(34, 12)&& 3412
\end{tabular}
\end{center}
where, for example, we have written $(12, 34)$ instead of the more verbose $(\{1,2\}, \{3,4\})$.
\end{exa}
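The correspondence $\cU\mapsto \delta_\cU$ can be sketched in a few lines of Python (again an informal aid, with blocks encoded as sets of positions); it reproduces the table above.

```python
def delta(U):
    """One-line notation for delta_U: the i-th block of the ordered set
    partition U gives the positions of the next |U_i| consecutive values."""
    n = sum(len(block) for block in U)
    perm = [0] * n
    value = 1
    for block in U:
        for position in sorted(block):
            perm[position - 1] = value
            value += 1
    return perm

print(delta([{1, 2}, {3, 4}]))  # [1, 2, 3, 4]
print(delta([{2, 3}, {1, 4}]))  # [3, 1, 2, 4]
print(delta([{3, 4}, {1, 2}]))  # [3, 4, 1, 2]
```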
Next we consider right multiplication by inverse $J$-classes. Recall that if $\pi,\tau\in \fS_n$ then
$$\pi\cdot (\tau_1\tau_2\ldots\tau_n) = \pi_{\tau_1}\ldots \pi_{\tau_n},$$
where $\cdot$ denotes group multiplication. Applying this to (\ref{eq:R_J as shuf}) it follows that
\begin{equation}\label{eq:shuffle pi}
\pi R_J^{-1} = (\pi_1\ldots \pi_{j_1}) \shuffle(\pi_{j_1+1}\ldots \pi_{j_2})\shuffle \cdots \shuffle (\pi_{j_s+1}\ldots \pi_{n}).
\end{equation}
\begin{exa}
When $\cU =(\{2,5,8\},\{1,3,4\},\{6,7,9\})$ we see that
$$\delta_\cU = {\color{blue}4}\ {\color{red}1}\ {\color{blue}5}\ {\color{blue}6}\ {\color{red}2}\ 7\ 8\ {\color{red}3}\ 9.$$
and if we take $\pi = {\color{red} 9\ 1\ 4\ }{\color{blue} 5\ 2\ 8\ }7\ 3\ 6$
then
$$\pi\cdot\delta_\cU = {\color{blue}5}\ {\color{red}9}\ {\color{blue}2}\ {\color{blue}8}\ {\color{red}1}\ 7\ 3\ {\color{red}4}\ 6.$$
Note that the first three terms in $\pi$ (colored red) are in positions $2,5,8$, the next three (colored blue) are in positions $1,3,4$ and the last three (black) are in positions $6,7,9$.
\end{exa}
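With the product convention $\pi\cdot (\tau_1\ldots\tau_n) = \pi_{\tau_1}\ldots\pi_{\tau_n}$ used above, this computation can be checked mechanically; the short Python sketch below is again only illustrative.

```python
def multiply(pi, tau):
    """The product pi . tau in one-line notation: its i-th letter is pi_{tau_i}."""
    return [pi[t - 1] for t in tau]

pi      = [9, 1, 4, 5, 2, 8, 7, 3, 6]
delta_U = [4, 1, 5, 6, 2, 7, 8, 3, 9]   # delta_U for U = ({2,5,8},{1,3,4},{6,7,9})
print(multiply(pi, delta_U))             # [5, 9, 2, 8, 1, 7, 3, 4, 6]
```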
We now require the following consequence of Stanley's famous shuffling theorem (see \cite[Exercise~3.161]{EC1}).
\begin{lemma}\label{lem:shuf invariant}
Assume $\pi$ and $\tau$ are permutations of disjoint alphabets as are $\pi'$ and $\tau'$. If $\pi\equiv \pi'$ and $\tau \equiv \tau'$ then
$$\pi\shuffle \tau \equiv \pi'\shuffle \tau'.$$
\end{lemma}
Using this lemma together with (\ref{eq:shuffle pi}) we see that
\begin{equation}\label{eq:R_J as cshuf}
\pi R_J^{-1} \equiv \std(\pi_1\ldots \pi_{j_1})
\cshuffle \std(\pi_{j_{1}+1}\ldots \pi_{j_2})
\cshuffle \cdots
\cshuffle \std(\pi_{j_s+1}\ldots \pi_{n}),
\end{equation}
where $\std$ is \emph{standardization}.
Although not apparent at this point it turns out that it is much easier to work with the elements on the right side of (\ref{eq:R_J as cshuf}). To do this we make the following definition.
\begin{definition}
For $\cU\in \Pi(n,J)$ and $\pi\in \fS_n$ we define $\sigma_\cU(\pi)$ to be the element on the right side of (\ref{eq:R_J as cshuf}) in which $U_1$ is the set of positions occupied by the subsequence $\std(\pi_1\ldots \pi_{j_1})$, $U_2$ is the set of positions occupied by the subsequence $\std(\pi_{j_1+1}\ldots \pi_{j_2})^{+j_1}$, etc.
\end{definition}
The next lemma is now immediate and makes use of a standard convention. For any function $f:\fS_n \to \fS_n$ and $B\Subset\fS_n$ we denote by $f(B)$ the multiset obtained by applying $f$ to each element in $B$.
\begin{lemma}\label{lem:r as sigma}
For any $B\Subset \fS_n$ we have
$$BR_J^{-1} \equiv \bigsqcup_{\cU\in \Pi(n,J)} \sigma_\cU(B).$$
\end{lemma}
\medskip
Next consider left multiplication by inverse $J$-classes. Although it trivially follows by our definitions that
$$R_J^{-1}\pi = \makeset{\delta_\cU \pi}{\cU\in \Pi(n,J)},$$
a ``twist'' is needed in order for us to compare left and right multiplication. As such we make the following definition.
\begin{definition}
For each $\pi\in \fS_n$ define
$$\rho_\cU(\pi) := \delta_\cW\cdot \pi$$
where $\cW$ is the ordered set partition in $\Pi(n,J)$ whose $i$th block is $\pi(U_i)$.
\end{definition}
Since the mapping $\pi:\Pi(n,J) \to \Pi(n,J)$ given by
$$(U_1,U_2,\ldots) \mapsto (\pi(U_1),\pi(U_2),\ldots)$$
is clearly bijective we obtain the next lemma.
\begin{lemma}\label{lem:l as rho}
For any $B\Subset \fS_n$ we have
$$R_J^{-1}B = \bigsqcup_{\cU\in \Pi(n,J)} \rho_\cU(B).$$
\end{lemma}
We end this section with an important description of how the descent structures of $\rho_\cU(\pi)$ and $\sigma_\cU(\pi)$ are related. For any $S\subseteq \mathbb{Z}$ set $S^*=\makeset{i\in S}{i+1\in S}$. Using this define for any $\cU= (U_1,U_2,\ldots) \vdash [n]$ the set
$$\cU^* := U_1^* \cup U_2^* \cup \cdots\subseteq [n-1].$$
\begin{lemma}\label{lem:des char}
Let $\pi,\tau\in \fS_n$ and $\cU\in \Pi(n,J)$. Then $\rho_\cU(\pi) \equiv \sigma_\cU(\tau)$
if and only if for each $u\in \cU^*$ we have
$$u \in \Des(\pi) \iff \delta_\cU(u) \in \Des(\tau).$$
Moreover we have
\begin{equation}\label{eq:des rho}
\Des(\rho_\cU(\pi)) = \Des(\delta_\cU) \cup \left(\Des(\pi)\cap \cU^*\right)
\end{equation}
and
\begin{equation}\label{eq:des sigma}
\Des(\sigma_\cU(\tau)) = \Des(\delta_\cU) \cup \makeset{u\in \cU^*}{\delta_\cU(u)\in\Des(\tau)}.
\end{equation}
\end{lemma}
\begin{proof}
Since $\delta_\cU$ is increasing on the blocks of $\cU$, the sets $\Des(\delta_\cU)$ and $\Des(\pi)\cap\cU^*$ are disjoint. Hence the two sets on the right sides of (\ref{eq:des rho}) and (\ref{eq:des sigma}) are disjoint in each case. Therefore to prove the first claim it suffices to show that (\ref{eq:des rho}) and (\ref{eq:des sigma}) hold. Before continuing we make the following definition. For finite sets of integers $A$ and $B$ we write $A<B$ provided that $\max(A) < \min(B)$.
We first prove (\ref{eq:des rho}). Take $\cW\in \Pi(n,J)$ so that its $i$th block is $W_i:=\pi(U_i)$. By our definition $\rho_\cU(\pi) = \delta_\cW\cdot \pi$. Observe that $\delta_\cW$ is increasing on each block $W_i$ and $\delta_\cW(W_i)> \delta_\cW(W_j)$ whenever $i>j$. So $\delta_\cW(\pi(u)) > \delta_\cW(\pi(u+1))$
if and only if either
\begin{itemize}
\item[a)] $\pi(u)> \pi(u+1)$ and $u,u+1\in U_i$, or
\item[b)] $\pi(u) \in W_i$ and $\pi(u+1)\in W_j$ with $i>j$.
\end{itemize}
As the first condition is equivalent to $u\in \Des(\pi)\cap\cU^*$ and the second is equivalent to $u\in U_i$ and $u+1\in U_j$ with $i>j$, i.e., $u\in \Des(\delta_\cU)$, we arrive at (\ref{eq:des rho}).
Next we prove (\ref{eq:des sigma}). From the definitions we see that
$$\sigma_\cU(\tau)(U_i) > \sigma_\cU(\tau)(U_j) \iff i>j \iff \delta_\cU(U_i)>\delta_\cU(U_j).$$
If $u,u+1$ are in distinct blocks of $\cU$ then
$$u\in \Des(\sigma_\cU(\tau))\iff u\in\Des(\delta_\cU).$$
Next assume $u,u+1\in U_i$, i.e., $u\in \cU^*$. Note that the subsequences of $\sigma_\cU(\tau)$ and $\delta_\cU$ indexed by $U_i$ are
$$\std(\tau_{j_{i-1}+1},\ldots, \tau_{j_i})^{+j_{i-1}}\quad \myand \quad (j_{i-1}+1, j_{i-1}+2, \ldots, j_i),$$
respectively. In this case we have
$$u\in \Des(\sigma_\cU(\tau))\iff \delta_\cU(u)\in \Des(\tau).$$
Together these cases prove that (\ref{eq:des sigma}) holds.
\end{proof}
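Equations (\ref{eq:des rho}) and (\ref{eq:des sigma}) are easy to stress-test by brute force. The Python sketch below is purely a sanity check; the encodings of $\delta_\cU$, $\rho_\cU$ and $\sigma_\cU$ follow the definitions above, and the sample set partition is our own choice. It verifies both descent formulas for every permutation in $\fS_4$.

```python
from itertools import permutations

def descents(w):
    return {i + 1 for i in range(len(w) - 1) if w[i] > w[i + 1]}

def delta(U):
    # one-line notation for delta_U (blocks hold consecutive values)
    n = sum(len(b) for b in U)
    perm, value = [0] * n, 1
    for b in U:
        for pos in sorted(b):
            perm[pos - 1] = value
            value += 1
    return perm

def stars(U):
    # U^* = { u : u and u+1 lie in the same block }
    return {u for b in U for u in b if u + 1 in b}

def std(w):
    rank = {v: i + 1 for i, v in enumerate(sorted(w))}
    return [rank[v] for v in w]

def rho(U, pi):
    # rho_U(pi) = delta_W . pi where the i-th block of W is pi(U_i)
    dW = delta([{pi[u - 1] for u in b} for b in U])
    return [dW[p - 1] for p in pi]

def sigma(U, tau):
    # place the standardized consecutive segments of tau (suitably
    # shifted) on the blocks of U, in increasing position order
    out, start = [0] * len(tau), 0
    for b in U:
        for pos, v in zip(sorted(b), std(tau[start:start + len(b)])):
            out[pos - 1] = v + start
        start += len(b)
    return out

U = [{1, 2, 4}, {3}]
dU = delta(U)
for w in permutations(range(1, 5)):
    w = list(w)
    assert descents(rho(U, w)) == descents(dU) | (descents(w) & stars(U))
    assert descents(sigma(U, w)) == descents(dU) | {
        u for u in stars(U) if dU[u - 1] in descents(w)}
print("descent formulas verified for all of S_4")
```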
\begin{lemma}\label{lem:mult invariance}
For any $A,B\Subset \fS_n$ if $A\equiv B$ then
$$AR_J^{-1} \equiv BR_J^{-1}\quad\myand\quad R_J^{-1}A \equiv R_J^{-1}B.$$
\end{lemma}
\begin{proof}
Let $f:A\to B$ be our descent-set-preserving bijection and let $\pi\in A$. By Equations (\ref{eq:des sigma}) and (\ref{eq:des rho}) from the previous lemma, it follows that $\sigma_\cU(\pi)\equiv \sigma_\cU(f(\pi))$ and $\rho_\cU(\pi)\equiv \rho_\cU(f(\pi))$. The lemma now follows from Lemmas~\ref{lem:r as sigma} and \ref{lem:l as rho}, respectively.
\end{proof}
\section{A characterization of symmetric sets}\label{sec:char}
The purpose of this section is to state and prove a useful characterization of symmetric multisets. Using this characterization we then simultaneously explain several well known results in the theory of symmetric sets as well as prove the aforementioned conjecture of Elizalde and Roichman. To state our main theorem we require a couple of definitions. Throughout this section $B\Subset \fS_n$ and $J,K\subseteq[n-1]$.
\begin{notation}
Let $\alpha,\beta\models n$. We write $\alpha \sim \beta$ to indicate that the sequence $\beta$ is a permutation of the sequence $\alpha$. For $J, K\subseteq [n-1]$ we also write $J\sim K$ provided that $\co(J)\sim \co(K)$.
\end{notation}
\begin{definition}
We say $B\Subset \fS_n$ is \emph{$D$-symmetric} if for every $\cU\vdash[n]$ there exists a bijection $\Psi_\cU:B\to B$ so that
for each $u \in \cU^*$ and $\pi \in B$ we have
\begin{equation}\label{eq:Dsym cond}
u\in \Des(\pi) \iff \delta_{\cU}(u) \in \Des(\Psi_\cU(\pi)).
\end{equation}
\end{definition}
We now come to our main theorem.
\begin{thm}\label{thm:main}
The following are equivalent:
\begin{enumerate}
\item[a)] $B$ is $D$-symmetric
\item[b)] $B$ is $D$-commutative with all inverse $J$-classes
\item[c)] $BR_J^{-1}\equiv BR_K^{-1}$ whenever $J\sim K$
\item[d)] $R_J^{-1}B\equiv R_K^{-1}B$ whenever $J\sim K$
\item[e)] $B$ is symmetric.
\end{enumerate}
\end{thm}
Before diving into the proof of this theorem we state and prove several corollaries.
In \cite{ElizaldeRoichman2015} Elizalde and Roichman conjecture that if $B$ is fine then
$$BD_J^{-1} \equiv D_J^{-1}B$$
where $D_J^{-1} = \makeset{\pi^{-1}\in \fS_n}{\Des(\pi) = J}$. By a straightforward application of inclusion-exclusion their conjecture is equivalent to showing that fine sets (which of course are symmetric) $D$-commute with all inverse $J$-classes. As this is immediate from Theorem~\ref{thm:main} we record it as our first corollary.
\begin{corollary}\label{cor:conjecture}
If $B\Subset \fS_n$ is fine then $B$ is $D$-commutative with all inverse $J$-classes.
\end{corollary}
Another immediate corollary of our theorem is the well known fact that conjugacy classes are symmetric. Gessel and Reutenauer first proved the stronger fact in~\cite{GesReut93} that such sets are actually fine. Their proof relies on ideas from representation theory.
\begin{corollary}\label{cor:conjugacy}
Let $C\subseteq\fS_n$ be a conjugacy class. Then $C$ is symmetric.
\end{corollary}
\begin{proof}
As $C$ is a conjugacy class we know that $C\pi = \pi C$ for all $\pi\in \fS_n$. So for any $S\subseteq\fS_n$ we have $CS = SC$. In particular $C$ is $D$-commutative with all inverse $J$-classes and our claim follows by Theorem~\ref{thm:main}.
\end{proof}
Our next corollary simultaneously explains why the collection of symmetric multisets of $\fS_n$ is closed under multiplication by inverse $J$-classes on the right and on the left. In \cite{ElizaldeRoichman2015} Elizalde and Roichman prove that right multiplication of fine multisets by inverse $J$-classes yields fine multisets. Although not explicitly done in their paper, one can easily extend this result to conclude that the same holds for symmetric multisets. Their results again use ideas from representation theory. That said, they were unable to obtain similar results in the context of left multiplication which, one can speculate, is the reason for their Conjecture 10.4. We now provide a short uniform explanation for symmetric invariance under both left and right multiplication by inverse $J$-classes.
\begin{corollary}
For any symmetric $B\Subset \fS_n$ the multisets $R_J^{-1}B$ and $BR_J^{-1}$ are also symmetric.
\end{corollary}
\begin{proof}
Take $B$ as stated and consider $K\sim K'\subseteq [n-1]$. As $B$ is symmetric Theorem~\ref{thm:main} tells us that
$$R_K^{-1}B \equiv R_{K'}^{-1}B\quad\myand\quad BR_K^{-1} \equiv BR_{K'}^{-1}.$$
Therefore for any $J\subseteq[n-1]$ it follows from Lemma~\ref{lem:mult invariance} that
$$R_K^{-1}BR_J^{-1}\equiv R_{K'}^{-1}BR_J^{-1}\quad\myand\quad R_J^{-1}BR_K^{-1}\equiv R_{J}^{-1}BR_{K'}^{-1}.$$
By another application of Theorem~\ref{thm:main} we conclude that $BR_J^{-1}$ and $R_J^{-1}B$ are both symmetric.
\end{proof}
We note that the above corollary does not hold if $R_J^{-1}$ is replaced by an arbitrary symmetric set $A$. For example if $A = \{ 1324, 4132\}$ and $B = \{2143, 2314\}$ then
$$Q(A) =Q(B)= m_{22} + m_{211} + 2m_{1111}$$
but $Q(AB) =M_{31} + M_{22} + 2M_{112} + 2M_{121} + 2M_{211} + 4M_{1111}$.
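This counterexample can be checked mechanically. The Python sketch below (illustrative only) uses the fact, recalled in the next subsection, that the coefficient of $M_{\co(K)}$ in $Q(B)$ is $|\makeset{\pi\in B}{\Des(\pi)\subseteq K}|$, and tests whether these coefficients depend only on the multiset of parts of $\co(K)$.

```python
from itertools import chain, combinations

def descents(w):
    return frozenset(i + 1 for i in range(len(w) - 1) if w[i] > w[i + 1])

def composition(K, n):
    """Composition of n corresponding to the subset K of [n-1]."""
    cuts = [0] + sorted(K) + [n]
    return tuple(cuts[i + 1] - cuts[i] for i in range(len(cuts) - 1))

def monomial_coefficients(B, n):
    """Coefficient of M_{co(K)} in Q(B) is #{pi in B : Des(pi) subset of K}."""
    subsets = chain.from_iterable(
        combinations(range(1, n), r) for r in range(n))
    return {composition(K, n): sum(1 for w in B if descents(w) <= set(K))
            for K in subsets}

def is_symmetric(coeffs):
    """Q(B) lies in Lambda(n) iff coefficients depend only on sorted parts."""
    seen = {}
    for alpha, c in coeffs.items():
        seen.setdefault(tuple(sorted(alpha)), set()).add(c)
    return all(len(v) == 1 for v in seen.values())

def multiply(pi, tau):
    return tuple(pi[t - 1] for t in tau)

A = [(1, 3, 2, 4), (4, 1, 3, 2)]
B = [(2, 1, 4, 3), (2, 3, 1, 4)]
AB = [multiply(a, b) for a in A for b in B]

print(monomial_coefficients(A, 4) == monomial_coefficients(B, 4))  # True
print(is_symmetric(monomial_coefficients(A, 4)))                   # True
print(is_symmetric(monomial_coefficients(AB, 4)))                  # False
```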
\medskip
We now turn our attention to the proof of Theorem~\ref{thm:main}. As the proof has several parts we start with an outline. In this section we show that:
\begin{itemize}
\item a) $\then$ b) in Proposition~\ref{prop:Dsym->com}
\item a) $\then$ c) in Proposition~\ref{prop:Dsym->right}.
\end{itemize}
In Subsection~\ref{subsec:sufficienty} we establish the following sufficient conditions for symmetry:
\begin{itemize}
\item c) $\then$ e) and d) $\then$ e) in Proposition~\ref{prop:leftright->sym}
\item b) $\then$ e) in Proposition~\ref{prop:com->sym}.
\end{itemize}
In Subsection~\ref{subsec:Knuth} we finally show:
\begin{itemize}
\item e) $\then$ a) in Proposition~\ref{prop:sym->Dsym}.
\end{itemize}
Carrying out the above agenda shows that a), b), c), and e) are equivalent and that d) implies e). It remains to show that e) implies d). Assuming, for the moment, that all but d) are equivalent we can give a short proof of this fact. We do so next and then return to the agenda outlined above.
\begin{prop}
If $B$ is symmetric then $R_J^{-1}B \equiv R_K^{-1}B$ whenever $J\sim K$.
\end{prop}
\begin{proof}
If $B$ is symmetric then we know that it is $D$-commutative with all inverse $J$-classes and that $BR_J^{-1} \equiv BR_K^{-1}$ whenever $J\sim K$. Therefore
$$R_J^{-1}B \equiv BR_J^{-1}\equiv BR_K^{-1} \equiv R_K^{-1}B$$
whenever $J\sim K$.
\end{proof}
We now begin with a proof that a) implies b).
\begin{prop}\label{prop:Dsym->com}
If $B\Subset \fS_n$ is $D$-symmetric then it is $D$-commutative with all inverse $J$-classes.
\end{prop}
\begin{proof}
As $B$ is $D$-symmetric there exist bijections $\Psi_\cU:B\to B$ for each $\cU\in \Pi(n,J)$ that satisfy (\ref{eq:Dsym cond}). It now follows by Lemma~\ref{lem:des char} that
\begin{align*}
u\in \Des(\rho_\cU(\pi)) &\iff u\in \Des(\delta_\cU) \cup (\Des(\pi)\cap \cU^*)\\
&\iff u\in \Des(\delta_\cU) \myor \left(u \in \cU^*\myand \delta_\cU(u) \in \Des(\Psi_\cU(\pi))\right)\\
& \iff u\in \Des(\sigma_\cU(\Psi_\cU(\pi))).
\end{align*}
Using this we obtain our desired result since
$$R_J^{-1}B =\bigsqcup_{\pi \in B\atop \cU\in \Pi(n,J)} \{\rho_\cU(\pi) \}
\equiv \bigsqcup_{\pi \in B\atop \cU\in \Pi(n,J)}\sigma_\cU(\Psi_\cU(\pi))
\equiv \bigsqcup_{\pi \in B\atop \cU\in \Pi(n,J)} \{\sigma_\cU(\pi)\}
\equiv BR_J^{-1},$$
where the first step follows from Lemma~\ref{lem:l as rho}, the second step follows our previous calculation, the third since $\Psi_\cU:B\to B$ is a bijection, and the last step because of Lemma~\ref{lem:r as sigma}.
\end{proof}
Next we prove that a) implies c).
\begin{prop}\label{prop:Dsym->right}
If $B$ is $D$-symmetric, then $BR_J^{-1} \equiv BR_K^{-1}$ whenever $J\sim K$.
\end{prop}
\begin{proof}
Assume $B$ is $D$-symmetric and set $J=\{j_1<\cdots<j_p\}$. It suffices to prove the proposition when $K$ is such that $\co(K)$ is the composition obtained by transposing the $k$th and $(k+1)$st parts of $\co(J)$. In particular, if we set $s' = r+t-s$ where
$$r = j_{k-1},\ s = j_k,\ \myand\ t = j_{k+1},$$
then $K = J\setminus\{s\} \cup \{s'\}$. Now define
$$\cU = ([r], [s+1,t], [r+1,s], [t+1,n])$$
noting that
$$\delta_\cU(u) = \begin{cases}
u & \textrm{ if } u \in [r] \cup [t+1,n]\\
u-(s-r) &\textrm{ if } u \in [s+1,t]\\
u+(t-s) &\textrm{ if } u \in [r+1,s].
\end{cases}$$
As $B$ is $D$-symmetric there exists a bijection $\Psi:B\to B$, corresponding to $\cU$, that satisfies (\ref{eq:Dsym cond}). Letting $\pi\in B$ and $\tau = \Psi(\pi)$ it follows that
$$\pi_1\ldots\pi_{r} \equiv \tau_1\ldots\tau_{r}\quad \myand \quad \pi_{t+1}\ldots\pi_{n} \equiv \tau_{t+1}\ldots\tau_{n}$$
and that
$$\pi_{s+1}\ldots\pi_{t} \equiv \tau_{r+1}\ldots\tau_{s'}\quad\myand \quad \pi_{r+1}\ldots\pi_{s} \equiv \tau_{s'+1}\ldots\tau_{t}.$$
So Lemma~\ref{lem:shuf invariant} together with (\ref{eq:R_J as cshuf}) gives $\pi R_J^{-1} \equiv \Psi(\pi)R_K^{-1}$. As $\Psi:B\to B$ is bijective the proposition now follows.
\end{proof}
\bigskip
To conclude this section we establish some needed properties of $D$-symmetric sets. Our first lemma follows directly from the definition of $D$-symmetry. In that lemma we make use of the following convention. For any finite set of positive integers $S$ set
$$\bx^S: =\prod_{i\in S}x_i.$$
Our second lemma follows directly from the first lemma. In both cases we omit formal proofs.
\begin{lemma}\label{lem:Dsym GFcond}
Let $B\Subset \fS_n$. Then $B$ is $D$-symmetric if and only if for all $\cU\vdash [n]$ we have
\begin{equation}\label{eq:Dsym GFcond}
\sum_{\pi \in B} \bx^{\delta_\cU(\Des(\pi)\ \cap\ \cU^*)} = \sum_{\pi \in B} \bx^{\Des(\pi)\ \cap\ \delta_\cU(\cU^*)}.
\end{equation}
\end{lemma}
\begin{lemma}\label{lem:Dsym prop}
Set $A,B\Subset \fS_n$. We then have
\begin{enumerate}
\item[a)] If $B$ is $D$-symmetric and $B\equiv A$ then $A$ is also $D$-symmetric.
\item[b)] If $\{B_i\}_{i\in I}$ is a collection of $D$-symmetric multisets then $\bigsqcup_{i\in I} B_i$ is $D$-symmetric.
\item[c)] If $A,B$ are $D$-symmetric with $A\subseteq B$ then so is $B\setminus A$.
\end{enumerate}
\end{lemma}
\begin{lemma}\label{lem:Dsym reduction}
A multiset $B\Subset \fS_n$ is $D$-symmetric if and only if for each $\cU=(U,V)\vdash [n]$ there exists a bijection $\Psi_\cU:B\to B$ satisfying (\ref{eq:Dsym cond}).
\end{lemma}
\begin{proof}
The forward direction is trivial. We concentrate on the reverse direction. For the set partition $(\{1,2,\ldots, n\})$ consisting of 1 block, observe that the identity function on $B$ satisfies (\ref{eq:Dsym cond}). Now assume, for an inductive proof, that for every set partition with $p$ blocks there exists a corresponding bijection that satisfies (\ref{eq:Dsym cond}). Consider a set partition $\cU = (U_1,\ldots, U_p, U_{p+1})$ with $p+1$ blocks and define
$$\cV = (U_1,\ldots, U_{p-1}, U_p\cup U_{p+1}).$$
As $\cV$ has $p$ blocks we know by induction that there exists some bijection $\Psi_\cV:B\to B$ that satisfies (\ref{eq:Dsym cond}). Define $\cW = ([n]\setminus W, W)\vdash [n]$ where $W = \delta_\cV(U_{p+1})$. As this set partition has two blocks there exists a bijection $\Psi_\cW:B\to B$ satisfying (\ref{eq:Dsym cond}). It now suffices to prove that $\Psi_\cW\circ \Psi_\cV$ is a bijection corresponding to $\cU$ that satisfies (\ref{eq:Dsym cond}).
For $u\in \cU^*\subseteq \cV^*$ we know that $u,u+1$ are in the same block in $\cV$ and hence $\delta_\cV(u)+1 = \delta_\cV(u+1)$. As the pair $u,u+1$ are also in the same block of $\cU$ then $u,u+1\in U_{p+1}$ or $u,u+1\in [n]\setminus U_{p+1}$. So $\delta_\cV(u),\delta_\cV(u+1)$ are in the same block of $\cW$, i.e., $\delta_\cV(u)\in \cW^*$. Combining these pieces it now follows that if $u\in \cU^*$ then
\begin{align*}
u\in \Des(\pi) &\iff \delta_\cV(u)\in \Des(\Psi_\cV(\pi)) \\
&\iff \delta_\cW\circ \delta_\cV(u) \in \Des(\Psi_\cW\circ \Psi_\cV(\pi)) \\
&\iff \delta_\cU(u) \in \Des(\Psi_\cW\circ \Psi_\cV(\pi)),
\end{align*}
where the last equivalence follows by the easy-to-check fact that $\delta_\cU = \delta_\cW \circ \delta_\cV$.
\end{proof}
\subsection{Sufficiency}\label{subsec:sufficienty}
The goal of this subsection is to prove that each of b), c), and d) individually implies e). We begin with some discussion and definitions. Recall that
\begin{equation}\label{eq:Q(B)2}
Q(B) = \sum_{\pi\in B} F_{\Des(\pi),n} = \sum_{\alpha\models n}b_\alpha M_\alpha \in \QSYM(n)
\end{equation}
for some integers $b_\alpha$. Next recall the correspondence $\co$ between subsets and compositions and the notation $\alpha\geq \beta$, for $\alpha,\beta\models n$, meaning that $\beta$ is a refinement of $\alpha$. Now for each $J\subseteq [n-1]$ the definition of the fundamental basis means that
$$b_{\co(J)} = |\makeset{\pi\in B}{\co(\Des(\pi)) \geq \co(J)}| = |\makeset{\pi\in B}{\Des(\pi) \subseteq J}|.$$
Defining $B_J:= \makeset{\pi\in B}{\Des(\pi)\subseteq J}$ we have $b_{\co(J)} = |B_J|$. Recall that the monomial symmetric function $m_\lambda$ can be written as
$$m_\lambda = \sum_{\alpha\sim \lambda} M_\alpha.$$
Therefore to show $Q(B) \in \Lambda(n)$ it suffices to prove that $|B_J| = |B_K|$ whenever $J\sim K$.
With this discussion in mind we turn to the proof that c) and d) each imply e).
\begin{prop}\label{prop:leftright->sym}
If $BR_J^{-1} \equiv BR_K^{-1}$ whenever $J\sim K$ or $R_J^{-1}B \equiv R_K^{-1}B$ whenever $J\sim K$, then $B$ is symmetric.
\end{prop}
\begin{proof}
We start with the first claim. By Lemma~\ref{lem:r as sigma} we have
$$\sum_{\pi \in BR_J^{-1}} \bx^{\Des(\pi)} = \sum_{\cU\in \Pi(n,J)\atop \pi\in B} \bx^{\Des(\sigma_\cU(\pi))}.$$
Now observe that the coefficient on $\bx^\emptyset$ in this expression is
$$|\makeset{\pi\in B}{\Des(\pi)\subseteq J}| = |B_J|.$$
This can be seen by considering (\ref{eq:des sigma}) of Lemma~\ref{lem:des char} and noting that in order for $\Des(\sigma_\cU(\pi)) = \emptyset$ we must have $\delta_\cU = 1$. By our assumption we know that if $J\sim K$ then $BR_J^{-1} \equiv BR_K^{-1}$. So $|B_J| = |B_K|$ which, in light of the discussion above, proves our first claim.
To prove the second claim, we know by Lemma~\ref{lem:l as rho} that
$$\sum_{\pi \in R_J^{-1}B} \bx^{\Des(\pi)} = \sum_{\cU\in \Pi(n,J)\atop \pi\in B} \bx^{\Des(\rho_\cU(\pi))}.$$
By appealing to (\ref{eq:des rho}) in Lemma~\ref{lem:des char}, a similar proof to that in the first case establishes our second claim.
\end{proof}
We now turn our attention to proving that if $B$ is $D$-commutative with all inverse $J$-classes then it is symmetric, i.e., that b) implies e) in our main theorem. For reference and to set the stage note that Lemmas~\ref{lem:r as sigma} and \ref{lem:l as rho} imply that if $BR_J^{-1} \equiv R_J^{-1}B$ then
\begin{equation}\label{eq:B commutes}
\sum_{\cU\in\Pi(n,J)\atop \pi\in B} \bx^{\Des(\sigma_\cU(\pi))}= \sum_{\cU\in\Pi(n,J)\atop \pi\in B} \bx^{\Des(\rho_\cU(\pi))}.
\end{equation}
The key idea in the coming proofs is to consider the coefficient $c_i$ on $\bx^{\{i\}}$ in this generating function. To describe this coefficient we make the following definitions.
\begin{definitions}
For any $\cU\in \Pi(n)$ define
$$r(\cU):= [n-1]\setminus \cU^*\quad \myand\quad s(\cU):=[n-1]\setminus \delta_\cU(\cU^*).$$
Additionally, for any $\alpha\models n$ and $i\in [n-1]$ set
$$\Pi_i(\alpha) = \makeset{\cU\in \Pi(\alpha)}{\Des(\delta_\cU) = \{i\}}$$
and let $\Gamma_i(\alpha)$ consist of all $\cU\in \Pi_i(\alpha)$ such that all blocks in $\cU$ are intervals.
\end{definitions}
We now have the following lemma.
\begin{lemma}\label{lem:coef on i}
Fix $\alpha\models n$. Take $f_\cU= \rho_\cU$ or $\sigma_\cU$ and define $c_i$ to be the coefficient on $\bx^{\{i\}}$ in
$$\sum_{\cU\in \Pi(\alpha)\atop \pi\in B} \bx^{\Des(f_\cU(\pi))}.$$
Then there is some $a\geq 0$ so that
$$c_i = \begin{cases}
a+ \sum_{\cU\in \Pi_i(\alpha)}|B_{s(\cU)}|& \textrm{ if } f_\cU = \sigma_\cU\\
a+ \sum_{\cU\in \Pi_i(\alpha)}|B_{r(\cU)}|& \textrm{ if } f_\cU = \rho_\cU.
\end{cases}
$$
\end{lemma}
\begin{proof}
First consider the case when $f_\cU=\rho_\cU$. Appealing to (\ref{eq:des rho}) in Lemma~\ref{lem:des char} we have $\Des(\rho_\cU(\pi))=\{i\}$ if and only if
$$\Des(\delta_\cU)=\emptyset \quad\myand\quad \Des(\pi)\cap \cU^* = \{i\}$$
or vice versa. In the displayed case, note that $\Des(\delta_\cU)=\emptyset$ if and only if $\cU$ is the unique partition $\cI\in \Pi(\alpha)$ whose first block is the interval $[\alpha_1]$ and whose second block is the following $\alpha_2$ integers, etc. Set $a = |\makeset{\pi\in B}{\Des(\pi)\cap \cI^* = \{i\}}|$.
Now consider the other case when
$$\Des(\delta_\cU)=\{i\} \quad\myand \quad \Des(\pi)\cap \cU^* = \emptyset.$$
This occurs if and only if $\cU\in \Pi_i(\alpha)$ and $\Des(\pi)\subseteq [n-1]\setminus \cU^* = r(\cU)$. This establishes the case when $f_\cU = \rho_\cU$.
Now consider the case when $f_\cU=\sigma_\cU$. Appealing to (\ref{eq:des sigma}) in Lemma~\ref{lem:des char} we have $\Des(\sigma_\cU(\pi)) = \{i\}$ if and only if
$$\Des(\delta_\cU) = \emptyset \quad \myand\quad \makeset{u\in \cU^*}{\delta_\cU(u)\in\Des(\pi)} = \{i\}$$
or vice versa. As before, the displayed case can only occur when $\cU = \cI$. As $\delta_\cI = 1$ we further have
$$\{i\} = \makeset{u\in \cI^*}{\delta_\cI(u)\in\Des(\pi)} = \Des(\pi)\cap \cI^*,$$
so that we can take $a$ as above in this case as well. Now consider the case when
$$\Des(\delta_\cU) = \{i\} \quad \myand\quad \makeset{u\in \cU^*}{\delta_\cU(u)\in\Des(\pi)} = \emptyset.$$
This occurs when $\cU\in \Pi_i(\alpha)$ and $\Des(\pi) \cap \delta_\cU(\cU^*) = \emptyset$. As the latter is equivalent to $\Des(\pi) \subseteq [n-1] \setminus \delta_\cU(\cU^*) = s(\cU)$, this explains our second term.
\end{proof}
\begin{lemma}\label{lem:r and s perms}
Fix $\alpha\models n$. For any $\cU\in \Pi(\alpha)$ we have $r(\cU) \sim s(\cU)$.
\end{lemma}
\begin{proof}
First consider the case when all the blocks of $\cU$ are intervals. Let $J\subseteq[n-1]$ be such that $\co(J) = \alpha$. As the blocks in $\cU$ are intervals we see that
$$\delta_\cU(\cU^*) = [n-1]\setminus J.$$
This means that $s(\cU)=J$. We must now show that $\co(r(\cU)) \sim \alpha = \co(J)$. Again using the fact that the blocks of $\cU$ are intervals we may define $\cW=(W_1, W_2,\ldots)$ to be the ordered set partition obtained by permuting the blocks of $\cU$ so that $\max(W_i) < \min(W_{i+1})$. As $\cW^* = \cU^*$ it follows that
$$r(\cU) = r(\cW)=\{\max(W_1)<\max(W_2)<\cdots\}\setminus \{n\}$$
which, in turn, implies that $\co(r(\cU)) = (|W_1|, |W_2|, \ldots)$. By our choice of $\cW$ our claim now follows.
Now consider an arbitrary $\cU\in \Pi(\alpha)$ and let $\cV$ be the refinement given by replacing each block $U_i$ of $\cU$ with the sequence $(I_1,I_2,\ldots)$ of maximal nonempty intervals in $U_i$ ordered so that $\max(I_i)< \min(I_{i+1})$. Observe that $\cV^* = \cU^*$ and $\delta_\cU = \delta_\cV$. Consequently, $r(\cU) = r(\cV)$ and $s(\cU) = s(\cV)$. The general claim now follows from our first paragraph.
\end{proof}
For the next few proofs, some additional terminology relating to permutations of compositions is required.
For any composition $\alpha\models n$ with $p$ parts and $I=\{i_1<i_2<\cdots <i_s\}\subseteq [p]$ we write
$$\alpha(I):= (\alpha_{i_1},\alpha_{i_2},\ldots, \alpha_{i_s},\alpha_{j_1},\alpha_{j_2},\ldots, \alpha_{j_t})$$
where $j_1<j_2<\ldots <j_t$ are the elements in $[p]\setminus I$. In particular for any $m\leq p$ we have $\alpha([m]) = \alpha$ and, in general, $\alpha\sim \alpha(I)$. We also define $S_k(\alpha)$ to be the set of all $I\subseteq[p]$ such that $\sum_{i\in I}\alpha_i = k$. As all the parts of $\alpha$ are positive integers there is at most one $I=[m]\subseteq [p]$ with $I\in S_k(\alpha)$. In this case set $S_k'(\alpha) = S_k(\alpha)\setminus\{I\}$ otherwise set $S_k'(\alpha) = S_k(\alpha)$.
\begin{lemma}\label{lem:coef equal}
Fix $\lambda\vdash n$ and for each composition $\alpha\sim \lambda$ let $C_\alpha\geq 0$ be given. If for each such $\alpha$ and $k\leq n$ we have
\begin{equation}\label{eq:constance}
|S_k(\alpha)|\cdot C_\alpha = \sum_{I\in S_k(\alpha)} C_{\alpha(I)},
\end{equation}
then the $C_\alpha$'s are equal.
\end{lemma}
\begin{proof}
For $\alpha,\beta\sim \lambda$ there exists a sequence of compositions
$$\alpha=\alpha^{(1)}, \alpha^{(2)}, \ldots, \alpha^{(t)}=\beta$$
so that $\alpha^{(i+1)} = \alpha^{(i)}(\{j\})$ for some $j$. Now choose $\alpha$ so that $C_\alpha$ is maximized. By this choice of $\alpha$ together with (\ref{eq:constance}) and $k = \alpha_j$ it follows that $C_\alpha = C_{\alpha(\{j\})}$. This with our first observation implies our lemma.
\end{proof}
\begin{lemma}\label{lem:Gamma bij}
Set $k< n$ and $\alpha\models n$. There exists a bijection
$$f:S_k'(\alpha)\to \Gamma_k(\alpha)$$
so that for each $I\in S_k'(\alpha)$ we have $\co(r(f(I))) = \alpha(I)$.
\end{lemma}
\begin{proof}
Fix $I \in S_k'(\alpha)$ and let $J\subseteq[n-1]$ be such that $\co(J) = \alpha(I)$. Define $f(I)=(U_1,\ldots, U_p)\vdash [n]$ by
$$(U_i)_{i\in I} = ([j_1], [j_1+1,j_2],\ldots, [j_{t-1}+1,j_{t}]) \vdash [k]$$
and
$$(U_i)_{i\in [p]\setminus I}= ([j_t+1,j_{t+1}],\ldots, [j_{p-1}+1,n]) \vdash [n]\setminus [k],$$
where $t=|I|$, so that $j_t = k$.
As $I$ is not of the form $[m]$ for any $m\leq p$, it follows that $f(I)\in \Gamma_k(\alpha)$. Our map $f$ is certainly injective. It also follows that every partition in $\Gamma_k(\alpha)$ can be constructed in this manner. Hence $f$ is also surjective.
Continuing with the notation above we see that $r(f(I)) = [n-1]\setminus f(I)^* = J$. As $\co(J) = \alpha(I)$ this justifies our last claim.
\end{proof}
\begin{prop}\label{prop:com->sym}
If $B\Subset \fS_n$ is $D$-commutative with all inverse $J$-classes then $B$ is symmetric.
\end{prop}
\begin{proof}
Define $B_\alpha:= B_J$ where $\co(J) = \alpha$. By the discussion at the start of this subsection, it suffices to show that $|B_\alpha| = |B_\beta|$ whenever $\alpha\sim \beta$. We proceed by induction on the number of parts in our compositions where the base case is when our composition has $n$ parts. This case holds trivially since $(1^n)$ is the only composition with $n$ parts.
Take $\alpha\models n$ with $p$ parts. As $B$ is $D$-commutative with all inverse $J$-classes, we know that (\ref{eq:B commutes}) holds for this $\alpha$. In light of this equation and Lemma~\ref{lem:coef on i} it follows that
$$\sum_{\cU\in \Pi_k(\alpha)}|B_{r(\cU)}|=\sum_{\cU\in \Pi_k(\alpha)}|B_{s(\cU)}|$$
holds for all $1\leq k<n$. Now consider a particular $\cU\in \Pi_k(\alpha)\setminus \Gamma_k(\alpha)$. As $\cU$ has $p$ blocks and at least one of them is not an interval, it follows that $|[n-1]\setminus \cU^*|\geq p$. Therefore the compositions corresponding to $r(\cU)$ and $s(\cU)$ have at least $p+1$ parts and by Lemma~\ref{lem:r and s perms} we know that $r(\cU) \sim s(\cU)$. It now follows by induction that $|B_{r(\cU)}|= |B_{s(\cU)}|$.
Consequently
$$\sum_{\cU\in \Gamma_k(\alpha)}|B_{r(\cU)}| = \sum_{\cU\in \Gamma_k(\alpha)}|B_{s(\cU)}| = |\Gamma_k(\alpha)|\cdot |B_\alpha|,$$
where the second equality follows from the fact that if $\cU\in \Gamma_k(\alpha)$ then $\co(s(\cU)) = \alpha$. By appealing to the bijection in Lemma~\ref{lem:Gamma bij} we have for all $k< n$
\begin{equation}\label{eq:gamma2}
\sum_{I\in S_k'(\alpha)} |B_{\alpha(I)}| = |S_k'(\alpha)|\cdot |B_\alpha|.
\end{equation}
If $I \in S_k(\alpha)\setminus S_k'(\alpha)$ then $\alpha(I) = \alpha$. By adding the term $|B_\alpha|$ to both sides of (\ref{eq:gamma2}) if necessary we have
\begin{equation}\label{eq:gamma3}
\sum_{I\in S_k(\alpha)} |B_{\alpha(I)}| = |S_k(\alpha)|\cdot |B_\alpha|
\end{equation}
for all $k<n$ and compositions $\alpha$ with $p$ parts. As the case $k=n$ yields a trivial equality, (\ref{eq:gamma3}) in fact holds for all $k\leq n$. Appealing to Lemma~\ref{lem:coef equal} with $C_\alpha := |B_\alpha|$ we conclude that $|B_\alpha| = |B_\beta|$ for compositions $\alpha\sim \beta$ with $p$ parts. This completes our proof.
\end{proof}
\subsection{Necessity: Symmetry implies $D$-symmetry}\label{subsec:Knuth}
The goal of this subsection is to prove that symmetric multisets are $D$-symmetric, i.e., to prove Proposition~\ref{prop:sym->Dsym}. We first show that this proposition holds for fine sets and, using this fact, ``bootstrap'' up to the general result. As fine sets are intimately connected to the theory of tableaux we begin by introducing the needed ideas from this theory.
A \emph{standard Young tableau} of shape $\mu\vdash n$ is a filling of the Young diagram of $\mu$ with each number in $[n]$ used exactly once so that rows and columns are strictly increasing. We denote by $\SYT(\mu)$ the set of all standard Young tableaux of shape $\mu$ and set $\SYT(n) = \cup_{\mu\vdash n}\SYT(\mu)$. For any $P\in \SYT(n)$ and $m\leq n$ we define $P_{<m}$ to be the standard Young tableau in $\SYT(m-1)$ given by the entries in $P$ that are $<m$. We refer to a coordinate location in a Young tableau as a \emph{box} and the element in a box as a \emph{value}. All boxes are coordinatized using matrix coordinates.
For any $P\in \SYT(n)$ we define its \emph{descent set} by
$$\Des P: = \makeset{i}{\textrm{$i+1$ is on a row below $i$ in $P$}}\subseteq [n-1].$$
\ytableausetup{mathmode, boxsize=2.3em}
Given $P\in\SYT(n)$ we say a sequence of boxes $b_0,\ldots, b_m$ in $P$ is a \emph{promotion path} provided that for all $0\leq i<m$ the box $b_{i+1}$ is whichever of the boxes immediately below or to the right of $b_i$ contains the smaller value. So given an initial box $b_0$ the maximal promotion path starting at $b_0$ is uniquely determined. Consequently it makes sense to define the \emph{$v$-promotion path} in $P$ to be the maximal promotion path that starts at the box containing the value $v$.
For any $\mu\vdash n$ and $a,b\in [n]$ with $a\leq b$ define the \emph{promotion operator}
$$\partial_a^b: \SYT(\mu) \to \SYT(\mu)$$
as follows. Fix some $P\in \SYT(\mu)$ and consider the \emph{skew tableau} formed by the values in $[a,b]$. Let $c_0,\ldots, c_m$ be the $a$-promotion path in this skew tableau. Next delete the entry in box $c_0$ and slide the value in $c_{i+1}$ into $c_i$. As $c_m$ is now empty, place in it the value $b+1$. Finally decrement each value in this skew tableau by 1 to obtain $\partial_a^b P\in \SYT(\mu)$.
\begin{exa}
Take $P$ to be the tableau on the left then $\partial_3^{12}P$ is the tableau on the right:
\ytableausetup{mathmode, boxsize=2.2em}
$$\begin{ytableau}
1& *(gray)3& *(lightgray)6& *(lightgray)7\\
2& *(gray)5& *(gray)9& *(gray)11\\
*(lightgray)4& *(lightgray)10& 13& 15\\
*(lightgray)8& 14\\
*(lightgray)12
\end{ytableau}
\qquad\qquad \qquad
\begin{ytableau}
1& *(lightgray)4& *(lightgray)5& *(lightgray)6\\
2& *(lightgray)8& *(lightgray)10& *(lightgray)12\\
*(lightgray)3& *(lightgray)9& 13& 15\\
*(lightgray)7& 14\\
*(lightgray)11
\end{ytableau}\ .$$
Here the boxes corresponding to the skew tableau are in gray and the promotion path in $P$ is in dark gray.
\end{exa}
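The sliding rule above is mechanical enough to implement directly. The following Python sketch of $\partial_a^b$ (our own illustration: the function name and the list-of-rows encoding of a tableau are not from the text) reproduces the example:

```python
def promote(P, a, b):
    """Apply the promotion operator d_a^b to a standard Young tableau P,
    given as a list of rows.  Following the text: find the a-promotion
    path inside the skew tableau of values in [a, b], slide along it,
    place b + 1 in the vacated box, then decrement the skew entries."""
    T = [row[:] for row in P]
    # boxes of the skew tableau formed by the values in [a, b]
    skew = {(r, c) for r, row in enumerate(T)
                   for c, v in enumerate(row) if a <= v <= b}

    # locate the value a and follow its promotion path
    (r, c), = [box for box in skew if T[box[0]][box[1]] == a]
    path = [(r, c)]
    while True:
        nbrs = [(T[rr][cc], (rr, cc)) for rr, cc in ((r + 1, c), (r, c + 1))
                if (rr, cc) in skew]
        if not nbrs:
            break
        _, (r, c) = min(nbrs)              # step to the smaller neighbour
        path.append((r, c))

    # slide values one step back along the path, insert b + 1 at the end
    for (r0, c0), (r1, c1) in zip(path, path[1:]):
        T[r0][c0] = T[r1][c1]
    T[path[-1][0]][path[-1][1]] = b + 1

    # decrement the skew tableau box-wise
    for r, c in skew:
        T[r][c] -= 1
    return T

P = [[1, 3, 6, 7], [2, 5, 9, 11], [4, 10, 13, 15], [8, 14], [12]]
assert promote(P, 3, 12) == [[1, 4, 5, 6], [2, 8, 10, 12],
                             [3, 9, 13, 15], [7, 14], [11]]
```

One detail worth stressing: the final decrement is applied to the boxes of the skew tableau, not to a range of values. In the example the value $13 = b+1$ already occurs in a box outside the skew tableau and must be left untouched.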
We point out two important properties of the promotion operator. First it is clear that
$$P_{<a}=(\partial_a^b P)_{<a}.$$
Second, we see that for each $\mu\vdash n$ the mapping $\partial_a^b:\SYT(\mu) \to \SYT(\mu)$ is bijective as its inverse can be constructed as follows. Let $c_0$ be the box containing $b$ and define the unique maximal sequence of boxes $c_0,\ldots, c_m$ so that $c_{i+1}$ is whichever of the boxes immediately above or to the left of $c_i$ contains the larger value. Now delete the value $b$ in $c_0$ and slide the value in $c_{i+1}$ into box $c_{i}$. Next increment all the values in $[a,b]$ by 1 and place $a$ in the empty box $c_m$.
We introduce more theory related to tableaux below as it is needed. For now we have enough to prove our first few lemmas.
\begin{lemma}\label{lem:u iff u-1}
For $Q\in \SYT(n)$ and $a<u<b$ we have
$$u\in \Des Q\ \Longleftrightarrow\ u-1 \in \Des(\partial_a^b Q).$$
\end{lemma}
\begin{proof}
First assume that $u\in \Des Q$. In the calculation of $\partial_a^b Q$ the values in $[a+1,b]$ in $Q$ shift up one unit, shift left one unit, or remain fixed before being decremented by 1. Hence the only way $u-1\notin \Des(\partial_a^b Q)$ is if $Q$ contains one of the following configurations:
\begin{center}
\ytableausetup{mathmode, boxsize=2.2em}
\begin{ytableau}
*(lightgray) w &z_1 & \cdots & z_\ell & u \\
*(lightgray) u{+}1
\end{ytableau}
\quad\myor\quad
\begin{ytableau}
*(lightgray)w & *(lightgray)u \\
z_1 & *(lightgray)u{+}1
\end{ytableau}\ ,
\end{center}
where the promotion path is highlighted in gray. In both cases a simple check shows that such promotion paths are impossible. (E.g., in the second case $z_1<u$.) So neither of these two cases can occur. We conclude that the forward direction of our lemma holds.
To prove the reverse direction, we need to show that if $u\notin\Des Q$ then $u-1\notin\Des(\partial_a^bQ)$. Observe that if $Q^*$ denotes the conjugate of $Q$ then $u\in \Des Q^*$. Also note that the operation of taking the conjugate commutes with $\partial_a^b$. So to prove this direction we need only apply the above argument to $Q^*$. The lemma now follows.
\end{proof}
\begin{lemma}\label{lem:v,v+1}
Let $Q\in \SYT(n)$ and $u\leq k+1<n$. Then
$$u\in \Des Q\iff k+1\in \Des(\partial_{u}^{k+1}\circ \partial_{u+1}^{k+2} Q ).$$
\end{lemma}
\begin{proof}
Set $\partial:= \partial_{u}^{k+1}\circ \partial_{u+1}^{k+2}$. Suppose for a contradiction that $u\in \Des(Q)$ and $k+1\notin \Des(\partial Q)$. Let $B=(b_0,\ldots, b_m)$ be the promotion path used by $\partial_{u+1}^{k+2}$ and let $C=(c_0,\ldots,c_\ell)$ be the promotion path used by $\partial_u^{k+1}$. So in $Q$ boxes $b_0$ and $c_0$ contain $u+1$ and $u$ respectively, and in $\partial Q$ boxes $b_m$ and $c_\ell$ contain $k+2$ and $k+1$ respectively. As $u\in \Des Q$ then $c_0$ is strictly above $b_0$. By our assumption that $k+1\notin\Des(\partial Q)$ we must also have that $c_\ell$ is weakly below $b_m$. In fact since $b_m$ contains $k+2$ in $\partial_{u+1}^{k+2} Q$ then $c_\ell$ is also strictly to the left of $b_m$. Now consider the first time a box in $C$ is weakly below and strictly to the left of some box in $B$. Certainly this box cannot be $c_0$ (since $c_0$ is above $b_0$) and in fact we must have the following configuration of values in $Q$:
\ytableausetup{mathmode, boxsize=2.2em}
\begin{center}
\definecolor{red}{HTML}{f12e2e}
\definecolor{blue}{HTML}{204ae2}
\definecolor{purple}{HTML}{893C88}
\begin{ytableau}
*(blue) & w \\
*(purple) & *(red)u
\end{ytableau}\ ,
\end{center}
where the blue and purple boxes are in $C$ and the red and purple boxes are in $B$. So $w<u$. Therefore in $\partial_{u+1}^{k+2}Q$ the purple box contains $u-1$ and the white box contains $w' =w$ or $w-1$. Since the standard Young tableaux $\partial_{u+1}^{k+2}Q$ contains exactly one occurrence of $w$ it follows that $u-1>w'$. It is now immediate that if the blue box is in $C$ then $C$ must contain the white box and not the purple box. We arrive at our desired contradiction proving that the forward direction holds.
The converse can be proved by an argument similar to that used to prove the converse in the previous lemma.
\end{proof}
\begin{definition}
For any $\mu\vdash n$ and any $V=\{v_1<v_2<\cdots< v_{n-k}\}\subseteq [n]$, where $k = n - |V|$, define the mapping $\partial_V:\SYT(\mu) \to \SYT(\mu)$
by
$$\partial_{V}:= \partial_{v_1}^{k+1}\circ\partial_{v_2}^{k+2}\circ \cdots\circ \partial_{v_{n-k}}^n.$$
Take $\partial_{\emptyset}$ to be the identity function.
\end{definition}
We pause to comment on our choice of indexing above. In what follows the operator $\partial_V$ appears in the context where $V$ is the second block of an ordered partition where the first block has size $k$. As a result we have chosen to index the elements of $V$ as above.
\begin{exa}
Let $V= \{3,9,10\}\subseteq [15]$. Then we have
\begin{center}
\ytableausetup{mathmode, boxsize=1.9em}
\begingroup
\setlength{\tabcolsep}{10pt}
\renewcommand{\arraystretch}{2}
\begin{tabular}{cccc}
$P$& $P_1=\partial_{10}^{15}P$ & $P_2=\partial_{9}^{14}P_1$ & $\partial_V P=\partial_{3}^{13}P_2$ \\
$\begin{ytableau}
1& 3& 6& 7\\
2& 5& 9& *(lightgray)11\\
4& *(lightgray)10& *(lightgray)13& *(lightgray)15\\
8& *(lightgray)14\\
*(lightgray)12
\end{ytableau}$
&
$\begin{ytableau}
1& 3& 6& 7\\
2& 5& *(lightgray)9& *(lightgray)10\\
4& *(lightgray)12& *(lightgray)14& 15\\
8& *(lightgray)13\\
*(lightgray)11
\end{ytableau}$
&
$\begin{ytableau}
1& *(lightgray)3& *(lightgray)6& *(lightgray)7\\
2& *(lightgray)5& *(lightgray)9& 14\\
*(lightgray)4& *(lightgray)11& *(lightgray)13& 15\\
*(lightgray)8& *(lightgray)12\\
*(lightgray)10
\end{ytableau}$
&
$\begin{ytableau}
1& 4& 5& 6\\
2& 8& 12& 14\\
3& 10& 13& 15\\
7& 11\\
9
\end{ytableau}$
\end{tabular}
\endgroup
\end{center}
where in each tableau the skew tableau used to obtain the next one is highlighted.
\end{exa}
The next lemma follows immediately from the fact that each promotion operator is bijective.
\begin{lemma}\label{lem:partial bij}
Fix $\mu\vdash n$ and some $V\subseteq [n]$. Then $\partial_V:\SYT(\mu) \to \SYT(\mu)$ is a bijection.
\end{lemma}
\begin{lemma}\label{lem:Knuth Dsym}
Let $Q\in \SYT(n)$ and fix $\cU=(U,V)\in \Pi(n)$. For $u\in \cU^*$ we have
$$u\in \Des Q\iff \delta_\cU(u)\in \Des(\partial_{V} Q).$$
\end{lemma}
\begin{proof}
We prove this by induction on $| V|$. Observe that when $V=\emptyset$ then $\partial_VQ = Q$ and $\delta_\cU = 1$ so the lemma holds in this case.
Now take $|U|=k<n$ so that $|V|>0$ and let $v_1 = \min(V)$. Define $V'=V\setminus \{v_1\}$ and $U' = U \cup \{v_1\}$ and set $\cW = (U',V')$. For $x\in [v_1]$ we then have $\delta_\cW(x) = x$ and by induction we know that if $u\in \cW^*$ then
$$u\in \Des Q \iff \delta_\cW(u) \in \Des( \partial_{V'} Q).$$
Now define the function $f:[n]\to [n]$ by setting
$$f(x) = \begin{cases}
x - 1 & \textrm{ if } v_1<x\leq k+1\\
x & \textrm{ otherwise}.
\end{cases}$$
Observe that for $x\neq v_1$ we have $f\circ\delta_\cW(x) =\delta_\cU(x)$.
Next recall that the positions of values less than $v_1$ and greater than $k+1$ are invariant under the action of $\partial_{v_1}^{k+1}$. This together with Lemma~\ref{lem:u iff u-1} tells us that for any $T\in\SYT(n)$ and $u\neq v_1-1,v_1,k+1$ we have
\begin{align}
u\in \Des T &\iff f(u) \in \Des(\partial_{v_1}^{k+1}T).
\end{align}
We now consider the following two cases.
\medskip
\noindent
\textbf{Case 1: } $u\in \cU^*$ and $u\neq v_1$
\medskip
As $u\neq v_1$ then $\delta_\cW(u) \neq v_1$. As $v_1=\min(V)$ then $u\neq v_1-1$ and so $\delta_\cW(u) \neq v_1-1$. As $u\neq \max(U)$ then $\delta_\cW(u) \neq k+1$. Computing we now have
\begin{align*}
u\in \Des Q &\iff \delta_\cW(u) \in \Des( \partial_{V'} Q)\\
& \iff f\circ\delta_\cW(u) \in \Des( \partial_{v_1}^{k+1}\partial_{V'} Q)\\
& \iff \delta_\cU(u) \in \Des( \partial_V Q).
\end{align*}
So the lemma holds in this case.
\medskip
\noindent
\textbf{Case 2: } $u=v_1\in \cU^*$
\medskip
In this case $|V|\geq 2$ and $v_1+1 = v_2$. Set $V'' = V\setminus \{v_1,v_2\}$ so that $V''$ is either empty or all its values are $>v_2$. By definition we have
$$\partial_V Q = \partial_{v_1}^{k+1}\circ \partial_{v_2}^{k+2}R$$
where $R = \partial_{V''}Q$. (Note that if $|V| = 2$ then $Q=R$.) As $Q_{<v_2+1} = R_{<v_2+1}$ and $v_1<v_2$ we have $v_1\in\Des Q \iff v_1\in \Des R$. This together with Lemma~\ref{lem:v,v+1} gives
\begin{align*}
v_1\in \Des Q &\iff v_1 \in \Des R \\
&\iff k+1\in \Des(\partial_V Q) \\
&\iff \delta_\cU(v_1)\in \Des(\partial_V Q).
\end{align*}
This completes our proof.
\end{proof}
In order to continue we recall some additional well known results from the theory of tableaux. We denote the \emph{Robinson-Schensted correspondence} by
$$\RS:\fS_n \to \bigcup_{\mu\vdash n} \SYT(\mu)\times \SYT(\mu).$$
If $\pi\mapsto (P,Q)$ under this bijection we call $P$ the \emph{insertion tableau}. We omit the definition of $\RS$ as it is not needed here but we point out two key properties of this mapping that are well-known. First $\RS$ is a bijection. The second well known property we state as a lemma. For a reference see \cite{SaganBook}, Chapter 5, Exercise 22.
\begin{lemma}\label{lem:des under RS}
If $\RS(\pi) = (P,Q)$, then $\Des(\pi) = \Des Q$.
\end{lemma}
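For small $n$ this lemma is easy to confirm by brute force. The Python sketch below implements ordinary row insertion (a standard description of $\RS$; the helper names are ours, not the paper's) and checks $\Des(\pi)=\Des Q$ for every permutation in $\fS_4$:

```python
from itertools import permutations

def rsk(pi):
    """Row-insert the word pi(1), ..., pi(n); return the insertion
    tableau P and the recording tableau Q as lists of rows."""
    P, Q = [], []
    for step, x in enumerate(pi, start=1):
        row = 0
        while True:
            if row == len(P):                 # start a new row
                P.append([x]); Q.append([step]); break
            bigger = [j for j, y in enumerate(P[row]) if y > x]
            if not bigger:                    # x sits at the end of this row
                P[row].append(x); Q[row].append(step); break
            j = bigger[0]                     # bump the displaced entry down
            P[row][j], x = x, P[row][j]
            row += 1
    return P, Q

def descents_word(pi):
    return {i for i in range(1, len(pi)) if pi[i - 1] > pi[i]}

def descents_tab(Q):
    rownum = {v: r for r, row in enumerate(Q) for v in row}
    return {i for i in range(1, len(rownum)) if rownum[i + 1] > rownum[i]}

for pi in permutations(range(1, 5)):
    P, Q = rsk(pi)
    assert descents_word(pi) == descents_tab(Q)
```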
For each $P\in \SYT(n)$ we define the corresponding \emph{Knuth class} to be the set
$$K(P):= \makeset{\pi\in \fS_n}{\textrm{$P$ is the insertion tableau for $\pi$}}.$$
Additionally we need the following well-known result due to Gessel.
\begin{thm}[\cite{Ges84}]\label{thm:ges}
For any $\lambda\vdash n$ we have
$$s_\lambda = \sum_{Q\in\SYT(\lambda)} F_{\Des Q,n}.$$
\end{thm}
This result together with the Robinson-Schensted correspondence tells us that if $P$ has shape $\lambda$ then $Q(K(P)) = s_\lambda$. In fact we can say more. Consider any fine multiset $B\Subset\fS_n$ with $Q(B) = \sum_{\lambda\vdash n} c_\lambda s_\lambda$ where $c_\lambda\geq 0$. For each shape $\lambda$ fix some $P_\lambda$ of that shape. Define the multiset
$$A = \bigsqcup_{\lambda\vdash n}\bigsqcup_{i=1}^{c_\lambda} K(P_\lambda)$$
so that $Q(A) = Q(B)$ and hence $A\equiv B$. We make use of this in our next lemma.
\begin{lemma}\label{lem:fine->Dsym}
If $B\Subset\fS_n$ is fine then it is $D$-symmetric.
\end{lemma}
\begin{proof}
As $B$ is fine it follows from the discussion preceding the statement of this lemma that $B\equiv \bigsqcup K(P_i)$ where our disjoint union is over some finite index set $I$ and $P_i\in \SYT(n)$. By parts a) and b) of Lemma~\ref{lem:Dsym prop} it then suffices to show that individual Knuth classes are $D$-symmetric.
To show this fix $P\in \SYT(\mu)$ and consider the Knuth class $K(P)$. Take $\cU=(U,V)\vdash [n]$ and consider the mapping $\Psi_\cU:K(P)\to K(P)$ given by
$$\pi \mapsto \RS^{-1}(P,\partial_V Q)$$
where $\RS(\pi) = (P,Q)$. The bijectivity of this mapping follows from Lemma~\ref{lem:partial bij} and the fact that $\RS$ is bijective. Now take $u\in \cU^*$ and fix some $\pi\in K(P)$ with $\RS(\pi) = (P,Q)$. By Lemma~\ref{lem:des under RS} and Lemma~\ref{lem:Knuth Dsym}, since $\cU=(U,V)$, we obtain our desired result
$$u\in \Des(\pi) \iff u\in \Des Q \iff \delta_\cU(u) \in \Des(\partial_V Q)\iff \delta_\cU(u) \in \Des(\Psi_\cU(\pi)).$$
By Lemma~\ref{lem:Dsym reduction} we now conclude that $B$ is $D$-symmetric.
\end{proof}
\begin{prop}\label{prop:sym->Dsym}
If $B\Subset \fS_n$ is symmetric then it is $D$-symmetric.
\end{prop}
\begin{proof}
As $B$ is assumed to be symmetric and the Schur functions $\{s_\lambda\}_{\lambda\vdash n}$ are an integral basis for $\Lambda(n)_\mathbb{Z}$, the ring of symmetric functions of degree $n$ with integral coefficients, we can write
$$Q(B) = \sum_{\lambda\vdash n} c_\lambda s_\lambda$$
where $c_\lambda\in \mathbb{Z}$. For each $\lambda\vdash n$ fix some $P_\lambda\in\SYT(\lambda)$ and define the multiset
$$A = \bigsqcup_{\lambda\vdash n\atop c_\lambda<0}\bigsqcup_{i = 1}^{|c_\lambda|} K(P_\lambda)$$
so that by Theorem~\ref{thm:ges} we have
$$Q(A) = \sum_{\lambda\vdash n \atop c_\lambda<0}|c_\lambda|s_\lambda,$$
so that $A$ is fine, and by our choice of $A$ we also see that
$$Q(A\sqcup B) = Q(A) + Q(B) = \sum_{\lambda\vdash n\atop c_\lambda>0} c_\lambda s_\lambda,$$
so that $A\sqcup B$ is also fine. By Lemma~\ref{lem:fine->Dsym} we now see that $A$ and $A\sqcup B$ are both $D$-symmetric. It now follows from Lemma~\ref{lem:Dsym prop} that $B$ is $D$-symmetric as well.
\end{proof}
\bibliographystyle{alpha}
\bibliography{mybib}
\end{document} | 44,559 |
TITLE: On the sharpness of the Harada-Sai lemma
QUESTION [2 upvotes]: Let $A$ be a finite dimensional $\mathbb{K}$-algebra, where $\mathbb{K}$ is an algebraically closed field. The following result is well known (see for example Assem & Coelho - Basic representation theory of algebras):
Lemma (Harada-Sai). Let $m>0$ and
$$M_1\xrightarrow[]{f_1} M_2\xrightarrow[]{f_2} \dots \xrightarrow[]{f_{2^m-1}} M_{2^m}$$
be a radical path where each $M_i$ has composition length at most equal to $m$. Then, $f_{2^m-1}\dots f_2 f_1=0$.
Another well known result is that this bound on the length of non-zero radical paths is a sharp bound (see exercise VI.1.2 of Assem & Coelho).
My question is the following : Is the bound given by Harada-Sai's lemma sharp when we restrict ourselves to representation finite algebras? The exercise pointed out above gives an example of the sharpness for the algebra $\mathbb{K}[t_1,t_2]/\langle t_1^2,t_2^2\rangle$ which is representation infinite.
More precisely, I am looking for an example of a representation finite algebra $A$ such that the composition of $2^{m}-2$ radical morphisms (between modules whose composition lengths are bounded by $m$) is non-zero.
REPLY [3 votes]: Bongartz gives an example, for any $m$, in Section A.1 on page 326 of
Bongartz, Klaus, Treue einfach zusammenhaengende Algebren. I, Comment. Math. Helv. 57, 282-330 (1982). ZBL0502.16022.
The algebra is given by the quiver
$$
1
\begin{array}{c}
\xrightarrow{a}\\
\xleftarrow[b]{}
\end{array}
2
\begin{array}{c}
\xrightarrow{a}\\
\xleftarrow[b]{}
\end{array}
3
\quad\cdots\quad
m{-}1
\begin{array}{c}
\xrightarrow{a}\\
\xleftarrow[b]{}
\end{array}
m
$$
with relations $ab=ba=0$.
As an example I'll list the modules in a chain of $14$ morphisms for $m=4$. Arrows to the right indicate the action of $a$; arrows to the left indicate the action of $b$.
$$1\to 2\to3\to4$$
$$1\to2\to3$$
$$1\to2\to3\leftarrow4$$
$$1\to2$$
$$1\to2\leftarrow3\to4$$
$$1\to2\leftarrow3$$
$$1\to2\leftarrow3\leftarrow4$$
$$1$$
$$1\leftarrow2\to3\to4$$
$$1\leftarrow2\to3$$
$$1\leftarrow2\to3\leftarrow4$$
$$1\leftarrow2$$
$$1\leftarrow2\leftarrow3\to4$$
$$1\leftarrow2\leftarrow3$$
$$1\leftarrow2\leftarrow3\leftarrow4$$ | 125,575 |
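The list has a tidy ruler-sequence structure: the $t$-th module (for $1\le t\le 2^m-1$) is a string on $m - v_2(t)$ vertices, where $v_2$ is the 2-adic valuation, and its arrow directions are the binary digits of $t$ after stripping the trailing zeros and the lowest set bit. The Python sketch below regenerates the chain for any $m$; this indexing is our own reading, inferred from the $m=4$ list above, not taken from Bongartz. ASCII arrows `->`/`<-` stand for $\to$/$\leftarrow$.

```python
def harada_sai_chain(m):
    """Generate the 2^m - 1 string modules of the chain as strings like
    '1->2->3<-4'.  Conjectural indexing inferred from the m = 4 example:
    the t-th module uses m - v2(t) vertices and its arrow directions are
    the higher binary digits of t, most significant bit first."""
    modules = []
    for t in range(1, 2 ** m):
        v2 = (t & -t).bit_length() - 1        # 2-adic valuation of t
        length = m - v2                       # number of vertices used
        bits = t >> (v2 + 1)                  # direction word as an integer
        word = '1'
        for i in range(length - 1, 0, -1):    # read bits MSB first
            word += ('<-' if (bits >> (i - 1)) & 1 else '->')
            word += str(length - i + 1)
        modules.append(word)
    return modules

# m = 4 reproduces the fifteen modules listed above
assert harada_sai_chain(4)[:4] == ['1->2->3->4', '1->2->3', '1->2->3<-4', '1->2']
```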
/-
Copyright (c) 2018 Jeremy Avigad. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Jeremy Avigad, Mario Carneiro, Simon Hudon
-/
import data.fin.fin2
import data.typevec
import logic.function.basic
import tactic.basic
/-!
Functors between the category of tuples of types, and the category Type
Features:
`mvfunctor n` : the type class of multivariate functors
`f <$$> x` : notation for map
-/
universes u v w
open_locale mvfunctor
/-- multivariate functors, i.e. functor between the category of type vectors
and the category of Type -/
class mvfunctor {n : ℕ} (F : typevec n → Type*) :=
(map : Π {α β : typevec n}, (α ⟹ β) → (F α → F β))
localized "infixr ` <$$> `:100 := mvfunctor.map" in mvfunctor
variables {n : ℕ}
namespace mvfunctor
variables {α β γ : typevec.{u} n} {F : typevec.{u} n → Type v} [mvfunctor F]
/-- predicate lifting over multivariate functors -/
def liftp {α : typevec n} (p : Π i, α i → Prop) (x : F α) : Prop :=
∃ u : F (λ i, subtype (p i)), (λ i, @subtype.val _ (p i)) <$$> u = x
/-- relational lifting over multivariate functors -/
def liftr {α : typevec n} (r : Π {i}, α i → α i → Prop) (x y : F α) : Prop :=
∃ u : F (λ i, {p : α i × α i // r p.fst p.snd}),
(λ i (t : {p : α i × α i // r p.fst p.snd}), t.val.fst) <$$> u = x ∧
(λ i (t : {p : α i × α i // r p.fst p.snd}), t.val.snd) <$$> u = y
/-- given `x : F α` and a projection `i` of type vector `α`, `supp x i` is the set
of `α.i` contained in `x` -/
def supp {α : typevec n} (x : F α) (i : fin2 n) : set (α i) :=
{ y : α i | ∀ ⦃p⦄, liftp p x → p i y }
theorem of_mem_supp {α : typevec n} {x : F α} {p : Π ⦃i⦄, α i → Prop} (h : liftp p x) (i : fin2 n):
∀ y ∈ supp x i, p y :=
λ y hy, hy h
end mvfunctor
/-- laws for `mvfunctor` -/
class is_lawful_mvfunctor {n : ℕ} (F : typevec n → Type*) [mvfunctor F] : Prop :=
(id_map : ∀ {α : typevec n} (x : F α), typevec.id <$$> x = x)
(comp_map : ∀ {α β γ : typevec n} (g : α ⟹ β) (h : β ⟹ γ) (x : F α),
(h ⊚ g) <$$> x = h <$$> g <$$> x)
open nat typevec
namespace mvfunctor
export is_lawful_mvfunctor (comp_map)
open is_lawful_mvfunctor
variables {α β γ : typevec.{u} n}
variables {F : typevec.{u} n → Type v} [mvfunctor F]
variables (p : α ⟹ repeat n Prop) (r : α ⊗ α ⟹ repeat n Prop)
/-- adapt `mvfunctor.liftp` to accept predicates as arrows -/
def liftp' : F α → Prop :=
mvfunctor.liftp $ λ i x, of_repeat $ p i x
/-- adapt `mvfunctor.liftp` to accept relations as arrows -/
def liftr' : F α → F α → Prop :=
mvfunctor.liftr $ λ i x y, of_repeat $ r i $ typevec.prod.mk _ x y
variables [is_lawful_mvfunctor F]
@[simp]
lemma id_map (x : F α) :
typevec.id <$$> x = x :=
id_map x
@[simp]
lemma id_map' (x : F α) :
(λ i a, a) <$$> x = x :=
id_map x
lemma map_map (g : α ⟹ β) (h : β ⟹ γ) (x : F α) :
h <$$> g <$$> x = (h ⊚ g) <$$> x :=
eq.symm $ comp_map _ _ _
section liftp'
variables (F)
lemma exists_iff_exists_of_mono {p : F α → Prop} {q : F β → Prop} (f : α ⟹ β) (g : β ⟹ α)
(h₀ : f ⊚ g = id)
(h₁ : ∀ u : F α, p u ↔ q (f <$$> u)) :
(∃ u : F α, p u) ↔ (∃ u : F β, q u) :=
begin
split; rintro ⟨u,h₂⟩; [ use f <$$> u, use g <$$> u ],
{ apply (h₁ u).mp h₂ },
{ apply (h₁ _).mpr _,
simp only [mvfunctor.map_map,h₀,is_lawful_mvfunctor.id_map,h₂] },
end
variables {F}
lemma liftp_def (x : F α) : liftp' p x ↔ ∃ u : F (subtype_ p), subtype_val p <$$> u = x :=
exists_iff_exists_of_mono F _ _ (to_subtype_of_subtype p) (by simp [mvfunctor.map_map])
lemma liftr_def (x y : F α) :
liftr' r x y ↔
∃ u : F (subtype_ r), (typevec.prod.fst ⊚ subtype_val r) <$$> u = x ∧
(typevec.prod.snd ⊚ subtype_val r) <$$> u = y :=
exists_iff_exists_of_mono _ _ _ (to_subtype'_of_subtype' r)
(by simp only [map_map, comp_assoc, subtype_val_to_subtype']; simp [comp])
end liftp'
end mvfunctor
open nat
namespace mvfunctor
open typevec
section liftp_last_pred_iff
variables {F : typevec.{u} (n+1) → Type*} [mvfunctor F] [is_lawful_mvfunctor F]
{α : typevec.{u} n}
variables (p : α ⟹ repeat n Prop)
(r : α ⊗ α ⟹ repeat n Prop)
open mvfunctor
variables {β : Type u}
variables (pp : β → Prop)
private def f : Π (n α), (λ (i : fin2 (n + 1)), {p_1 // of_repeat (pred_last' α pp i p_1)}) ⟹
λ (i : fin2 (n + 1)), {p_1 : (α ::: β) i // pred_last α pp p_1}
| _ α (fin2.fs i) x := ⟨ x.val, cast (by simp only [pred_last]; erw const_iff_true) x.property ⟩
| _ α fin2.fz x := ⟨ x.val, x.property ⟩
private def g : Π (n α), (λ (i : fin2 (n + 1)), {p_1 : (α ::: β) i // pred_last α pp p_1}) ⟹
(λ (i : fin2 (n + 1)), {p_1 // of_repeat (pred_last' α pp i p_1)})
| _ α (fin2.fs i) x := ⟨ x.val, cast (by simp only [pred_last]; erw const_iff_true) x.property ⟩
| _ α fin2.fz x := ⟨ x.val, x.property ⟩
lemma liftp_last_pred_iff {β} (p : β → Prop) (x : F (α ::: β)) :
liftp' (pred_last' _ p) x ↔ liftp (pred_last _ p) x :=
begin
dsimp only [liftp,liftp'],
apply exists_iff_exists_of_mono F (f _ n α) (g _ n α),
{ clear x _inst_2 _inst_1 F, ext i ⟨x,_⟩, cases i; refl },
{ intros, rw [mvfunctor.map_map,(⊚)],
congr'; ext i ⟨x,_⟩; cases i; refl }
end
open function
variables (rr : β → β → Prop)
private def f' :
Π (n α),
(λ (i : fin2 (n + 1)),
{p_1 : _ × _ // of_repeat (rel_last' α rr i (typevec.prod.mk _ p_1.fst p_1.snd))}) ⟹
λ (i : fin2 (n + 1)), {p_1 : (α ::: β) i × _ // rel_last α rr (p_1.fst) (p_1.snd)}
| _ α (fin2.fs i) x := ⟨ x.val, cast (by simp only [rel_last]; erw repeat_eq_iff_eq) x.property ⟩
| _ α fin2.fz x := ⟨ x.val, x.property ⟩
private def g' :
Π (n α), (λ (i : fin2 (n + 1)), {p_1 : (α ::: β) i × _ // rel_last α rr (p_1.fst) (p_1.snd)}) ⟹
(λ (i : fin2 (n + 1)),
{p_1 : _ × _ // of_repeat (rel_last' α rr i (typevec.prod.mk _ p_1.1 p_1.2))})
| _ α (fin2.fs i) x := ⟨ x.val, cast (by simp only [rel_last]; erw repeat_eq_iff_eq) x.property ⟩
| _ α fin2.fz x := ⟨ x.val, x.property ⟩
lemma liftr_last_rel_iff (x y : F (α ::: β)) :
liftr' (rel_last' _ rr) x y ↔ liftr (rel_last _ rr) x y :=
begin
dsimp only [liftr,liftr'],
apply exists_iff_exists_of_mono F (f' rr _ _) (g' rr _ _),
{ clear x y _inst_2 _inst_1 F, ext i ⟨x,_⟩ : 2, cases i; refl, },
{ intros, rw [mvfunctor.map_map,mvfunctor.map_map,(⊚),(⊚)],
congr'; ext i ⟨x,_⟩; cases i; refl }
end
end liftp_last_pred_iff
end mvfunctor
| 171,688 |
- lozspeyer
Bandcamp links now appearing!
Just finding out how to embed a Bandcamp player into this website, let's see if this works...
Clave Sin Embargo
The latest album from Loz Speyer's Time Zone, released Oct 2019. Eight compositions featuring Martin Hathaway, Stuart Hall, Dave Manington, Andy Ball, Maurizio Ravalico and Loz Speyer
#CubanJazzFromLondon #OddMeterClave | 404,274 |
Characterised by the yellow colour of turmeric and coconut milk, this dish is colourful, cheerful and super healthy. It’s served everywhere throughout Indonesia and you must have a go – it’s so simple and delicious.
Swap the tofu for chicken if you prefer a meaty version and serve it on rice.
Serves 2
Ingredients
2 tbsp vegetable oil, soy oil or rice bran oil
200g tofu
2 cloves garlic smashed
2 shallots peeled and finely chopped
1 bird’s eye chilli de-seeded
1 stick lemongrass cut into pieces 3cm long
15g palm sugar
1 good thumb of peeled fresh turmeric
1 thumb of peeled fresh galangal
1 thumb of peeled fresh ginger
1 teaspoon of whole coriander
1 macadamia nut
pinch of salt
150ml coconut milk
150ml water
vegetables (you can use what you like here, try for a mix of colour)
1/4 butternut squash cubed
1/4 red pepper sliced
a handful of Chinese spinach / spinach / chard
1/4 onion sliced
2 mushrooms sliced
Method
1. Put shallots, galangal, garlic, ginger, turmeric, macadamia, coriander, chilli, sugar and salt in a blender with the water and blend to a paste.
2. Heat the oil in a wok. When the oil is hot add the tofu and lemongrass.
3. When the tofu has browned add the vegetables and cook.
4. Add the paste and heat through releasing the flavours.
5. Add the coconut milk.
6. Serve.
Enjoy. | 286,258 |
Here’s the web page…
Now before everybody goes ballistic, this is actually correct for a COUPLE of reasons.
One, NIPR is a totally unclassified system, e.g. NO classified material of any kind allowed. If you’ve looked at the Guardian article, the documents have a TS classification. If Airman Shmukatelli were to go look at that page and pull up that document to look at it, he is effectively (even though the document is public on an unclassified server) committing a security violation.
Second, if Airman Shmukatelli were to go look at that page and pull up that document to look at it and SAVE it to his drive, he is effectively (even though the document is public on an unclassified server) committing a more severe security violation, and if he sent it via NIPR to anyone, that ramps up to yet again ANOTHER level of security violation.
It does not matter whether the document is public or not, it is the classification level on the document that is viable and valid since that document has never been declassified by higher authority.
Actually, they are doing their folks a favor by preventing them from committing, however unwittingly, a security violation they would “legally” have to be prosecuted for. Same as with the Manning/Wikileaks stuff…
A gazillion years ago we did a lot of survey work on missile bases, positioning the silos. In the 80s, a security wonk came to get the originals. He redacted a lot of angles with black magic marker and took the originals. We were to retain copies, so we took shifts at the copier as he worked on the redaction. It was, of course, faster to leave the copier lid up and just slap sheet down, hit button, yank sheet out, repeat. And I noticed something. It’s against regulations to write survey observations in pencil. Pens leave a goodly imprint in paper. Every time the copier light flashed, the imprint of the redacted data showed through, clear as could be. If I had wanted…But I didn’t. I just laughed at their concept of security.
PH- Yep, strange things happened with copiers then, and with Adobe now… But you ‘have’ to play by the rules, even if they make no sense…
Double redaction — it’s the only way to be sure. Basically, re-redact the copies. (In the case of Adobe Acrobat “redaction” [sic], that means actually turning it into a flat file — the Adobe soi disant redaction bars are a layer that can be lifted, last I checked. That’s not even really redaction. . . )
Yep… sigh…
The military runs on minutia — particularly the security side of things. I remember receiving blue channel traffic behind CNN during Gulf War 1. The blue channel info was TS/SI-TK/NOFORN/WNINTEL/ORCON/LIMDIS and wasn’t as accurate as CNN because the CNN feed was real time. I could comment on what I saw on television but to comment on the less accurate national security feed would have landed me in jail.
Go figure. It’s rules.
LL- LOL, yep I know ‘exactly’ what you’re talking about… Rules it is… Right, wrong or otherwise…
If a PC that’s on NIPR touches anything classified, meaning either SIPR or JDIS… then it’s not necessarily a security violation. You can avoid the security violation by immediately pulling the ethernet cable and saying “Looks like we have a new SIPR (or JDIS) PC in the office!”
This assumes you have the appropriate security clearance associated with the content in the first place.
Tango- LOL, true!!!
I notice that this document does not actually order not to view the documents, just not to view them from the AF NIPR net.
I remember taking a TS Stamp on my First Boat (USS John Marshall, SSBN 611) and inking up a few Playboys that were floating around. I placed them on the Mess Deck, grabbed a cup of coffee, and told my Shipmates that only those with a “Need to Know” AND the Clearance could look at them. The poor Junior Strikers thought I was serious! Then the Chief Yeoman walked in, got “That Smile,” called up MY Chief, and guess who was assigned to do the Burn Bags when we pulled back into Guam?
Of course, nowadays, the mere Presence of a Playboy aboard a Boat would probably send my Ass to the Brig.
JR- Correct!
Les- And he made you burn them didn’t he…
Just the ones the chief didn’t take. 🙂
Linked back from my place…
Something occurred to me today:
By treating those documents as classified material, the government is more or less confirming their authenticity.
Absent this sort of reaction, they could perfectly well be forgeries. After all, anyone with a PC can create documents, slapping a downloaded NSA logo on a Powerpoint presentation is a trivial exercise, and it’s not like the average member of press or public could tell the difference.
Whatever became of “neither confirm nor deny”?
Tango- 🙂
SiG- Thanks!
Eric- True!!! And it’s the same drill that occurred over the Wikileaks stuff… | 99,124 |
\begin{document}
\maketitle
\begin{abstract}
This paper defines the $q$-analogue of a matroid and establishes several properties like duality, restriction and contraction. We discuss possible ways to define a $q$-matroid, and why they are (not) cryptomorphic. Also, we explain the motivation for studying $q$-matroids by showing that a rank metric code gives a $q$-matroid. \\
Keywords: matroid theory, $q$-analogue, rank metric codes
\end{abstract}
This paper establishes the definition and several basic properties of $q$-matroids. Also, we explain the motivation for studying $q$-matroids by showing that a rank metric code gives a $q$-matroid. We give definitions of a $q$-matroid in terms of its rank function and independent spaces. The dual, restriction and contraction of a $q$-matroid are defined, as well as truncation, closure, and circuits. Several definitions and results are straightforward translations of facts for ordinary matroids, but some notions are more subtle. We illustrate the theory by some running examples and conclude with a discussion on further research directions involving $q$-matroids. \\
Many theorems in this article have a proof that is a straightforward $q$-analogue of the proof for the case of ordinary matroids. Although this makes them appear very easy, we feel it is necessary to include them for completeness, and also because it is not guaranteed that $q$-analogues of proofs exist.
\section{$q$-Analogues}
The $q$-analogue of the number $n$ is defined by
\[ [n]_q = 1 + q + \cdots + q^{n-1} = \frac{q^n-1}{q-1}. \]
This forms the basis of \emph{quantum calculus}, and we refer to Kac and Cheung \cite{kac:2002} for an introduction to the subject. In combinatorics, one can view the $q$-analogue as what happens if we generalize from a finite set to a finite dimensional vector space. The ``$q$'' in $q$-analogue does not only refer to quantum, but also to the size of a finite field. In the latter case, $[n]_q$ is the number of $1$-dimensional vector spaces of a vector space $\mathbb{F}_q^n$; but also in general, we can view $1$-dimensional subspaces of a finite dimensional space as the $q$-analogues of the elements of a finite set. In this text we keep in mind finite fields, because of applications, but we will consider finite dimensional vector spaces over both finite and infinite fields. \\
Most notions concerned with sets have a straightforward $q$-analogue, as given in the following table:
\begin{center}
\begin{tabular}{|c|c|}
\hline
finite set & finite dim space \\
\hline
element & $1$-dim subspace \\
$\emptyset$ & $\mathbf{0}$ \\
size & dimension \\
$n$ & $\frac{q^n-1}{q-1}$ \\
intersection & intersection \\
union & sum \\
\hline
\end{tabular}
\end{center}
Furthermore, the Newton binomial
\[ \binom{n}{k} = \frac{n!}{k! (n-k)!} = \frac{n\cdots (n-k+1)}{1\cdots k} \]
counts the number of subsets of $\{1, \ldots ,n\}$ of size $k$. The $q$-analogue is given by the Gaussian binomial, or $q$-binomial
\[ \qbinom{n}{k}_q = \frac{[n]_q!}{[k]_q! [n-k]_q!} = \frac{(q^n-1)\cdots (q^{n-k+1}-1)}{(q-1)\cdots (q^k-1)}. \]
If we consider $q$ as the size of a finite field, the $q$-binomial counts the number of subspaces of $\mathbb{F}_q^n$ of dimension $k$. For infinite fields, we get a polynomial in $q$ that can be considered as the counting polynomial of the Grassmann variety of $k$-dimensional subspaces of an $n$-dimensional vector space, see \cite{plesken:2009}.
In most cases, we can go from the $q$-analogue to the ``normal'' case by taking the limit as $q\to1$. This can also be viewed as projective geometry over the field $\mathbb{F}_1$, as is nicely explained by Cohn \cite{cohn:2004}. \\
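As a quick sanity check of these formulas, the $q$-analogues above can be computed and compared against a brute-force subspace count. A minimal sketch in Python (all function names are ours, not from the literature):

```python
from itertools import combinations

def q_int(n, q):
    """[n]_q = 1 + q + ... + q^(n-1); also valid at q = 1, where it equals n."""
    return sum(q**i for i in range(n))

def q_binomial(n, k, q):
    """Gaussian binomial coefficient via the product formula."""
    if k < 0 or k > n:
        return 0
    num = den = 1
    for i in range(k):
        num *= q_int(n - i, q)
        den *= q_int(i + 1, q)
    return num // den  # the quotient is always an integer

def count_2dim_subspaces_f2(n):
    """Brute-force count of 2-dimensional subspaces of F_2^n.
    Vectors are bitmasks; the span of independent a, b is {0, a, b, a^b}."""
    spans = set()
    for a, b in combinations(range(1, 2**n), 2):
        spans.add(frozenset({0, a, b, a ^ b}))
    return len(spans)
```

At $q=1$ the $q$-binomial reduces to the ordinary binomial coefficient, matching the limit $q\to1$ described above.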
Two notions that were not mentioned above, because they need a bit more caution, are the difference and the complement. When taking the difference $A-B$ of two subsets $A$ and $B$, we mean ``all elements that are in $A$ but not in $B$''. The $q$-analogue of this would be ``all $1$-dimensional subspaces that are in $A$ but not in $B$''. The problem is that when $A$ and $B$ are finite dimensional spaces, all these $1$-dimensional subspaces together do not form a subspace. Sometimes this is not a problem, as we will see for example in property (I3) later on. We have several options for $A-B$ as a subspace. We can take a subspace $C$ with $C\cap B=\mathbf{0}$ and $C\oplus(A\cap B)=A$. However, this space is not uniquely defined. We can also take the orthogonal complement, but this has the disadvantage that $A\cap A^\perp$ can be non-trivial. Using the quotient space as a complement will lower the dimension of the ambient space, which makes it perfect for the definition of contraction but not very suitable for other purposes. \\
The solution to this problem is to use all options described above, depending on for which property of $A-B$ we need a $q$-analogue.
\section{Rank function}
Although it is not strictly necessary to know about matroids before defining their $q$-analogue, the subject probably makes a lot more sense with ordinary matroids in mind. A great resource on matroids is Oxley \cite{oxley:2011}. Another one, which we will follow in our search for cryptomorphic definitions of a $q$-matroid and the proofs of their equivalence, is Gordon and McNulty \cite{gordon:2012}.
\begin{definition}
A $q$-matroid $M$ is a pair $(E,r)$ in which $E$ is a finite dimensional vector space over a field $\mathbb{F}$ and $r$ an integer-valued function defined on the subspaces of $E$, called the \emph{rank}, such that for all subspaces $A,B$ of $E$:
\begin{itemize}
\item[(r1)] $0\leq r(A)\leq\dim A$
\item[(r2)] If $A\subseteq B$, then $r(A)\leq r(B)$.
\item[(r3)] $r(A+B)+r(A\cap B)\leq r(A)+r(B)$
\end{itemize}
\end{definition}
Note that this definition is a straightforward $q$-analogue of the definition of a matroid in terms of its rank. In the same way, we define the following.
\begin{definition}
Let $M=(E,r)$ be a $q$-matroid and let $A$ be a subspace of $E$. If $r(A)=\dim A$, we call $A$ an \emph{independent space}. If not, $A$ is called \emph{dependent}. If $A$ is independent and $r(A)=r(E)$, we call $A$ a \emph{basis}. The \emph{rank of $M$} is denoted by $r(M)$ and is equal to $r(E)$. A $1$-dimensional subspace that is dependent, is called a \emph{loop}.
\end{definition}
These definitions might cause some confusion at first: we assign a rank to a subspace that has little to do with its dimension, and we call a complete subspace (in)dependent. However, we stick to these notions because they are a direct $q$-analogue of what happens in ordinary matroids. Before we go to an example, we prove a Lemma that will be used repeatedly.
\begin{lemma}\label{unit-rank-increase}
Let $(E,r)$ be a $q$-matroid. Let $A$ be a subspace of $E$ and let $x$ be a $1$-dimensional subspace of $E$. Then $r(A+x)\leq r(A)+1$.
\end{lemma}
\begin{proof}
First note that for any $q$-matroid $r(\mathbf{0})=0$ and $r(x)$ is either $0$ or $1$, by (r1). Now apply property (r3) to $A$ and $x$:
\begin{eqnarray*}
r(A+x) & = & r(A+x) + 0 \\
& = & r(A+x)+r(A\cap x) \\
& \leq & r(A)+r(x) \\
& \leq & r(A)+1.
\end{eqnarray*}
\end{proof}
\begin{example}\label{ex-uniform}
Let $E$ be a finite dimensional vector space of dimension $n$. Let $0\leq k\leq n$ be an integer. Define a function $r$ on the subspaces of $E$ as follows:
\[ r(A)=\left\{ \begin{array}{ll}
\dim A & \text{if }\dim A\leq k \\
k & \text{if }\dim A>k
\end{array} \right. \]
To show that $(E,r)$ is a $q$-matroid, we have to show that $r$ satisfies the properties (r1),(r2),(r3). First of all, $r$ is an integer-valued function. It is clear from the definition of $r$ that (r1) and (r2) hold. For (r3), let $A,B$ be subspaces of $E$. We distinguish three cases, depending on the dimensions of $A$ and $B$. \\
If $r(A)=\dim A$ and $r(B)=\dim B$, then the definition of $r$ implies that $r(A\cap B)=\dim A\cap B$. By the modularity of dimension and (r2) it follows that
\begin{eqnarray*}
r(A+B)+r(A\cap B) & = & r(A+B)+\dim A\cap B \\
& = & r(A+B)+\dim A+\dim B-\dim(A+B) \\
& = & r(A+B)+r(A)+r(B)-\dim(A+B) \\
& \leq & r(A)+r(B).
\end{eqnarray*}
If $r(A)=r(B)=k$, then $\dim A\geq k$, so $\dim(A+B)\geq k$ and hence $r(A+B)=k$. Since $r(A\cap B)\leq k$ by definition, we have that
\begin{eqnarray*}
r(A+B)+r(A\cap B) & \leq & k+k \\
& = & r(A)+r(B).
\end{eqnarray*}
Finally, let $r(A)=\dim A$ and $r(B)=k$. Since $\dim B\geq k$, we also have that $\dim(A+B)\geq k$, hence $r(A+B)=k$.
\begin{eqnarray*}
r(A+B)+r(A\cap B) & = & k+r(A\cap B) \\
& \leq & k+\dim A\cap B \\
& \leq & k+\dim A \\
& = & r(B)+r(A).
\end{eqnarray*}
We conclude that $(E,r)$ is indeed a $q$-matroid. We call it the \emph{uniform $q$-matroid} and denote it by $U_{k,n}$. Its independent spaces are all subspaces of dimension at most $k$, and its bases are all subspaces of dimension $k$.
\end{example}
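For $\mathbb{F}=\mathbb{F}_2$ and small $n$, the ground space has only finitely many subspaces, so the axioms (r1),(r2),(r3) for $U_{2,4}$ can also be checked exhaustively by machine. A brute-force sketch in Python, encoding vectors of $\mathbb{F}_2^4$ as bitmasks and a subspace as the frozenset of its vectors (all names are ours):

```python
from itertools import combinations

N, K = 4, 2  # ground space F_2^4, uniform q-matroid U_{2,4}

def span(gens):
    """Linear span over F_2 of a collection of bitmask vectors."""
    s = {0}
    for g in gens:
        s |= {v ^ g for v in s}
    return frozenset(s)

def dim(space):
    return len(space).bit_length() - 1  # a d-dim subspace has 2^d vectors

# Enumerate all subspaces of F_2^4 as spans of at most N generators.
subspaces = {span([])}
for r in range(1, N + 1):
    for gens in combinations(range(1, 2**N), r):
        subspaces.add(span(gens))

def rank(space):
    return min(dim(space), K)  # the rank function of U_{K,N}

def axioms_hold():
    for A in subspaces:
        if not 0 <= rank(A) <= dim(A):                 # (r1)
            return False
        for B in subspaces:
            if A <= B and rank(A) > rank(B):           # (r2)
                return False
            S, I = span(A | B), frozenset(A & B)
            if rank(S) + rank(I) > rank(A) + rank(B):  # (r3)
                return False
    return True
```

Running the check over all $67$ subspaces of $\mathbb{F}_2^4$ confirms the axioms; the $35$ two-dimensional subspaces are exactly the bases.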
The following two Propositions can be viewed as variations of (r3). We will use them in later proofs.
\begin{proposition}\label{p-rank1}
Let $r$ be the rank function of a $q$-matroid $(E,r)$ and let $A,B$ be subspaces of $E$. Suppose $r(A+x)=r(A)$ for all $1$-dimensional subspaces $x\subseteq B$, $x\not\subseteq A$. Then $r(A+B)=r(A)$.
\end{proposition}
\begin{proof}
We prove this by induction on $k=\dim B-\dim(A\cap B)$. Let $\{x_1,\ldots,x_k\}$ be $1$-dimensional subspaces of $E$ that are in $B$ but not in $A$ such that $A+x_1+\cdots+x_k=A+B$. So the $x_i$ are generated by linearly independent vectors. Note that $k$ is finite, since $\dim(A+B)$ is finite and $k\leq\dim(A+B)$. If $k=0$, then $B\subseteq A$ so clearly $r(A+B)=r(A)$. \\
Now assume that $r(A+x_1+\cdots+x_t)=r(A)$ for all $t<k$.
We have to show that $r(A+x_1+\cdots+x_k)=r(A)$. By (r2) we have that $r(A)\leq r(A+x_1+\cdots+x_k)$.
By (r3) we have that
\begin{multline*}
r((A+x_1+\cdots+x_{k-1})+(A+x_k))+r((A+x_1+\cdots+x_{k-1})\cap(A+x_k)) \\
\leq r(A+x_1+\cdots+x_{k-1})+r(A+x_k)
\end{multline*}
Note that $(A+x_1+\cdots+x_{k-1})+(A+x_k)=A+x_1+\cdots+x_k$. By the assumption of the proposition, $r(A+x_k)=r(A)$, and by the induction hypothesis, $r(A+x_1+\cdots+x_{k-1})=r(A)$. Moreover, $A\subseteq(A+x_1+\cdots+x_{k-1})\cap(A+x_k)$, so by (r2) the second term on the left hand side is at least $r(A)$. This gives
\[ r(A+x_1+\cdots+x_k)+r(A)\leq r(A)+r(A) \]
and thus $r(A+x_1+\cdots+x_k)\leq r(A)$. We conclude that equality holds, and since $A+x_1+\cdots+x_k=A+B$, this proves the statement.
\end{proof}
\begin{proposition}\label{p-rank2}
Let $r$ be the rank function of a $q$-matroid $(E,r)$, let $A$ be a subspace of $E$ and let $x,y$ be $1$-dimensional subspaces of $E$. Suppose $r(A+x)=r(A+y)=r(A)$. Then $r(A+x+y)=r(A)$.
\end{proposition}
\begin{proof}
Applying (r3) to $A+x$ and $A+y$ gives
\[ r((A+x)+(A+y))+r((A+x)\cap(A+y))\leq r(A+x)+r(A+y)=r(A)+r(A). \]
Now $(A+x)+(A+y)=A+x+y$ and, since $A\subseteq(A+x)\cap(A+y)$, property (r2) gives $r((A+x)\cap(A+y))\geq r(A)$. It follows that $r(A+x+y)\leq r(A)$.
On the other hand, by (r2) we have that $r(A)\leq r(A+x+y)$, so equality must hold.
\end{proof}
We end this section with a remark about the difference between matroids and $q$-matroids. Let $(\mathbb{F}_q^n,r)$ be a $q$-matroid defined over a finite field. Let $X$ be the set of $1$-dimensional subspaces of $E$ and define a function on the subsets of $X$ as follows:
\[ \rho(A)=r(\langle A\rangle), \]
that is, we take the rank in the $q$-matroid of the span of $A$. Then it is not difficult to show that $(X,\rho)$ is a matroid. However, this matroid behaves quite differently from the $q$-matroid that we started with. For example, it has $\frac{q^n-1}{q-1}$ elements and rank $n$, which means its rank is very low in comparison to its cardinality. Also, if we take the usual duality, we do not get the dual $q$-matroid (that we define later), because the complement of a subspace in $X$ is not a subspace. Similar remarks hold for restriction and contraction, as well as for the link with rank metric codes. In short, by changing to the matroid $(X,\rho)$, we lose a lot of the structure of the $q$-matroid $(\mathbb{F}_q^n,r)$.
\section{Independent spaces}
Now that we have defined a $q$-matroid in terms of its rank function, a logical question is to ask if we could also define it in terms of its independent spaces, bases, and so on. Unfortunately, the answer to this question is not as easy as just taking the $q$-analogues of cryptomorphic definitions of an ordinary matroid. The goal of this section is to establish the next cryptomorphic definition of a $q$-matroid.
\begin{theorem}\label{indep-rank}
Let $E$ be a finite dimensional space. If $\mathcal{I}$ is a family of subspaces of $E$ that satisfies the conditions:
\begin{itemize}
\item[(I1)] $\mathcal{I}\neq\emptyset$.
\item[(I2)] If $J\in\mathcal{I}$ and $I\subseteq J$, then $I\in\mathcal{I}$.
\item[(I3)] If $I,J\in\mathcal{I}$ with $\dim I<\dim J$, then there is some $1$-dimensional subspace $x\subseteq J$, $x\not\subseteq I$ with $I+x\in\mathcal{I}$.
\item[(I4)] Let $A,B\subseteq E$ and let $I,J$ be maximal independent subspaces of $A$ and $B$, respectively. Then there is a maximal independent subspace of $A+B$ that is contained in $I+J$.
\end{itemize}
and $r$ is the function defined by $r_{\mathcal{I}}(A)=\max\{\dim I:I\in\mathcal{I},I\subseteq A\}$ for all $A\subseteq E$,
then $(E,r_{\mathcal{I}})$ is a $q$-matroid and its family of independent spaces is equal to $\mathcal{I}$.\\
Conversely, if $\mathcal{I}_r$ is the family of independent spaces of a $q$-matroid $(E,r)$,
then $\mathcal{I}_r$ satisfies the conditions (I1),(I2),(I3),(I4) and $r=r_{\mathcal{I}_r}$.
\end{theorem}
The first three properties are a direct $q$-analogue of the axioms we use when we define an ordinary matroid in terms of its independent sets. The property (I4), however, is really needed, as the next example and counterexample show.
\begin{example}\label{ex-2}
Let $E=\mathbb{F}_2^4$ and let $\mathcal{I}$ be the set of all subspaces of dimension at most $2$ that do not contain the $1$-dimensional space $\langle0001\rangle$. Now $\mathcal{I}$ is not empty, so it satisfies (I1). If a space does not contain $\langle0001\rangle$, then none of its subspaces contains $\langle0001\rangle$, hence (I2) holds. For (I3), the interesting case to check is $\dim I=1$ and $\dim J=2$ with $I\not\subseteq J$. Of the three $1$-dimensional subspaces $x\subseteq J$, at most one can be such that $I+x$ contains $\langle0001\rangle$, hence we have proved (I3). We will see in the next section that $\mathcal{I}$ is indeed the family of independent subspaces of a $q$-matroid.
\end{example}
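The verification of (I1),(I2),(I3) in this example can also be done exhaustively by machine, since $\mathbb{F}_2^4$ has only finitely many subspaces. A Python sketch, encoding the vector $0001$ as the bitmask $1$ (all names are ours):

```python
from itertools import combinations

def span(gens):
    """Linear span over F_2 of a collection of bitmask vectors."""
    s = {0}
    for g in gens:
        s |= {v ^ g for v in s}
    return frozenset(s)

def dim(space):
    return len(space).bit_length() - 1

# The family: all subspaces of F_2^4 of dimension <= 2 avoiding <0001>.
family = {span([])}
for r in (1, 2):
    for gens in combinations(range(1, 16), r):
        S = span(gens)
        if dim(S) <= 2 and 1 not in S:  # 1 is the bitmask of 0001
            family.add(S)

def I1():
    return len(family) > 0

def I2():  # every subspace of a member is again a member
    def subspaces_of(S):
        return {span(g) for r in range(len(S))
                for g in combinations(sorted(S - {0}), r)}
    return all(T in family for S in family for T in subspaces_of(S))

def I3():  # the augmentation property
    return all(any(span(I | {x}) in family for x in J - I)
               for I in family for J in family if dim(I) < dim(J))
```

The family has $1+14+28=43$ members, and all three axioms hold.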
\begin{example}\label{counterexample}
Let $E=\mathbb{F}_2^4$ and let $\mathcal{I}$ be the family consisting of
\[ I=\left\langle\begin{array}{cccc} 1 & 0 & 0 & 1 \\ 0 & 1 & 1 & 0 \end{array}\right\rangle \]
and all its subspaces. It is not difficult to see that $\mathcal{I}$ satisfies (I1),(I2),(I3): in fact, $\mathcal{I}$ is the family of independent spaces of the uniform $q$-matroid $U_{2,2}$ embedded into the space $E$. Consider the subspaces
\[ A=\left\langle\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{array}\right\rangle, \quad B=\left\langle\begin{array}{cccc} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right\rangle. \]
Both $A$ and $B$ have $\langle0110\rangle$ as a maximal independent subspace. But $A+B=E$ has $I$ as a maximal independent subspace, and $I$ is not contained in $\langle0110\rangle$. So $\mathcal{I}$ does not satisfy (I4). \\
Let $r_\mathcal{I}$ be the rank function defined in Theorem \ref{indep-rank}. Then
\[ r_\mathcal{I}(A+B)+r_\mathcal{I}(A\cap B)=2+1>1+1=r_\mathcal{I}(A)+r_\mathcal{I}(B), \]
so property (r3) does not hold for $r_\mathcal{I}$ and $\mathcal{I}$ is not the family of independent spaces of a $q$-matroid.
\end{example}
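This failure is small enough to confirm directly by machine. A Python sketch, with vectors of $\mathbb{F}_2^4$ encoded as bitmasks (so $1001\mapsto 9$ and $0110\mapsto 6$; all names are ours):

```python
def span(gens):
    """Linear span over F_2 of a collection of bitmask vectors."""
    s = {0}
    for g in gens:
        s |= {v ^ g for v in s}
    return frozenset(s)

def dim(space):
    return len(space).bit_length() - 1

# The family: I = <1001, 0110> together with all of its subspaces.
I = span([9, 6])
family = [span([]), span([9]), span([6]), span([9 ^ 6]), I]

def r(space):
    """The candidate rank function r_I: the largest dimension of a
    family member contained in the given space."""
    return max(dim(S) for S in family if S <= space)

A = span([8, 4, 2])     # <1000, 0100, 0010>
B = span([4, 2, 1])     # <0100, 0010, 0001>
A_plus_B = span(A | B)  # equals E = F_2^4
A_meet_B = frozenset(A & B)
```

The computation returns $r_\mathcal{I}(A)=r_\mathcal{I}(B)=1$ while $r_\mathcal{I}(A+B)+r_\mathcal{I}(A\cap B)=3$, exhibiting the failure of (r3).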
In order to understand why we need the extra axiom (I4), let us investigate what goes wrong in the counterexample.
\begin{lemma}\label{loopsum}
Let $x$ and $y$ be loops of a $q$-matroid. Then the space $x+y$ has rank $0$.
\end{lemma}
\begin{proof}
Apply property (r3) to $x$ and $y$:
\begin{eqnarray*}
r(x+y) & = & r(x+y)+0 \\
& = & r(x+y)+r(x\cap y) \\
& \leq & r(x)+r(y) \\
& = & 0.
\end{eqnarray*}
By (r1), it follows that $r(x+y)=0$.
\end{proof}
Or in other words: loops come in subspaces. This Lemma might look trivial, but it is exactly what goes wrong in Example \ref{counterexample}. Take the loops $\langle1000\rangle$ and $\langle0001\rangle$: their sum has rank $1$. The difference with ordinary matroids is that for sets, $A\cup B$ contains only elements that were already in either $A$ or $B$. In the $q$-analogue this is not true: the space $A+B$ contains $1$-dimensional subspaces that are in neither $A$ nor $B$. Therefore, it is ``more difficult'' to bound $r(A+B)$, making it also more difficult for property (r3) to hold.
\begin{remark}\label{r-axioms}
Let $\mathcal{I}$ be the family of independent spaces of a $q$-matroid with ground space $E$. Embed $\mathcal{I}$ in a space $E^\prime$ with $\dim E^\prime>\dim E$, resulting in a family $\mathcal{I}^\prime$. Then $\mathcal{I}^\prime$ is \emph{not} the family of independent spaces of a $q$-matroid over $E^\prime$. This is because all $1$-dimensional spaces that are in $E^\prime$ but not in $E$ are loops, but they do not form a subspace: this contradicts Lemma \ref{loopsum}. It follows that a set of axioms for $\mathcal{I}$ that is invariant under embedding can never be a full set of axioms that defines a $q$-matroid.
\end{remark}
Again, looking back at Example \ref{counterexample}, we see that this counterexample was created by embedding a uniform $q$-matroid in a space of bigger dimension. So in order to completely determine a $q$-matroid in terms of its independent spaces, we need an extra axiom that regulates how the spaces in $\mathcal{I}$ interact with the other subspaces of the $q$-matroid. This is what the axiom (I4) does. We will now prove in three steps that (I4) holds for every $q$-matroid.
\begin{proposition}\label{p-maxindep1}
Let $(E,r)$ be a $q$-matroid. Let $A\subseteq E$ and let $I$ be a maximal independent subspace of $A$. Let $x\subseteq E$ be a $1$-dimensional space. Then there is a maximal independent subspace of $A+x$ that is contained in $I+x$.
\end{proposition}
\begin{proof}
If $x\subseteq A$, the result is clear. If $r(A)=r(A+x)$ then $I$ is a maximal independent subspace of $A+x$ and $I\subseteq I+x$, so we are also done. Therefore assume that $x$ is not contained in $A$ and $r(A)\neq r(A+x)$. By Lemma \ref{unit-rank-increase} this means $r(A+x)=r(A)+1$. \\
If $A$ is independent, then $A+x=I+x$ also has to be independent, so the statement is proven. Assume that $A$ is not independent. Then there are a subspace $A^\prime\subseteq A$ and a $1$-dimensional subspace $y\subseteq A$ such that $A=A^\prime+y$ and $I\subseteq A^\prime$. Since $I\subseteq A^\prime\subseteq A$ and $I$ is maximal independent in $A$, property (r2) gives $r(A)=r(A^\prime)$.
Now we use Proposition \ref{p-rank2} on $A^\prime$, $x$, and $y$. We have that $r(A^\prime+x+y)=r(A+x)\neq r(A)$ by assumption, so $r(A^\prime)$, $r(A^\prime+x)$ and $r(A^\prime+y)$ can not all be equal since this would contradict Proposition \ref{p-rank2}. Because $r(A^\prime)=r(A)$ and $r(A^\prime+y)=r(A)$, it must be that $r(A^\prime+x)\neq r(A)$. In fact, $r(A^\prime+x)>r(A)$. \\
If $A^\prime$ is independent, then $A^\prime=I$ and we have that $A^\prime+x=I+x$ is independent as well. This proves the statement. If $A^\prime$ is not independent, we repeat the procedure above: find $A^{\prime\prime}$ and $y^\prime$ such that $A^\prime=A^{\prime\prime}+y^\prime$ and $I\subseteq A^{\prime\prime}$, and apply Proposition \ref{p-rank2}. We keep doing this until we arrive at $r(I+x)>r(A)$, which means $I+x$ is independent. Since $\dim(I+x)=r(A)+1=r(A+x)$, the space $I+x$ is itself a maximal independent subspace of $A+x$, proving the statement.
\end{proof}
This result has the following consequence. Note first that it holds for every maximal independent subspace $I\subseteq A$. Suppose that $r(A+x)=r(A)+1$. For every $1$-dimensional subspace $z\subseteq A+x$ with $z\not\subseteq A$ we have that $A+x=A+z$. Hence, for all these $z$, the space $I+z$ is independent. Also, all these $z$ are themselves independent, by (I2). So if enlarging a space raises its rank, then all added $1$-dimensional subspaces are independent and all the spaces $I+z$ are independent as well.
\begin{proposition}\label{p-maxindep2}
Let $(E,r)$ be a $q$-matroid. Let $A\subseteq E$ and let $I$ be a maximal independent subspace of $A$. Let $B\subseteq E$. Then there is a maximal independent subspace of $A+B$ that is contained in $I+B$.
\end{proposition}
\begin{proof}
The proof goes by induction on $\dim B$. If $\dim B=0$ then the statement is trivially true. If $\dim B=1$ the statement is true by Proposition \ref{p-maxindep1} above. Assume $\dim B>1$ and that the statement is true for all subspaces of dimension less than $\dim B$. \\
Let $B^\prime$ be a subspace of $B$ of codimension $1$. By the induction hypothesis there is a maximal independent subspace $J$ of $A+B^\prime$ that is contained in $I+B^\prime$. Let $x$ be a $1$-dimensional subspace $x\subseteq B$, $x\not\subseteq B^\prime$, so $B=B^\prime+x$ and $A+B=A+B^\prime+x$. Now apply Proposition \ref{p-maxindep1} to $A+B^\prime$ and $J$: there is a maximal independent subspace of $A+B$ that is contained in $J+x\subseteq I+B^\prime+x=I+B$.
\end{proof}
\begin{proposition}\label{p-maxindep3}
Let $(E,r)$ be a $q$-matroid. Let $A,B\subseteq E$ and let $I,J$ be maximal independent subspaces of $A$ and $B$, respectively. Then there is a maximal independent subspace of $A+B$ that is contained in $I+J$.
\end{proposition}
\begin{proof}
By Proposition \ref{p-maxindep2} there is a maximal independent subspace of $A+B$ that is contained in $I+B$. This subspace is also maximal independent in $I+B$, by (r2) and $I+B\subseteq A+B$. So $r(A+B)=r(I+B)$. On the other hand, if we apply the same Proposition \ref{p-maxindep2} to $B$ and $I$, we find a maximal independent subspace $K$ of $B+I$ that is contained in $J+I$. Again, $K$ is also maximal independent in $J+I$, by (r2) and $J+I\subseteq B+I$. So $r(I+B)=r(I+J)$. This implies that $r(A+B)=r(I+J)$, hence the subspace $K\subseteq I+J$ is maximal independent in $A+B$, as was to be shown.
\end{proof}
Before finally proving Theorem \ref{indep-rank}, we prove a variation of the properties (I1),(I2),(I3). We denote by $\mathbf{0}$ the $0$-dimensional subspace that contains only the zero vector.
\begin{proposition}\label{indep-prime}
Let $E$ be a finite dimensional space and let $\mathcal{I}$ be a family of subspaces of $E$.
Then the family $\mathcal{I}$ satisfies the properties (I1),(I2),(I3) above if and only if it satisfies:
\begin{itemize}
\item[(I1')] ${\bf 0} \in\mathcal{I}$.
\item[(I2)\phantom{'}] If $J\in\mathcal{I}$ and $I\subseteq J$, then $I\in\mathcal{I}$.
\item[(I3')] If $I,J\in\mathcal{I}$ with $\dim J=\dim I+1$, then there is some $1$-dimensional subspace $x\subseteq J$, $x\not\subseteq I$ with $I+x\in\mathcal{I}$.
\end{itemize}
\end{proposition}
\begin{proof}
We need to show that (I1),(I2),(I3)$\iff$(I1'),(I2),(I3'). \\
$\Rightarrow$: Since $\mathcal{I}\neq\emptyset$ by (I1) and every subspace of an independent space is independent by (I2), we have $\mathbf{0}\in\mathcal{I}$, which is (I1'). (I3') is just a special case of (I3). \\
$\Leftarrow$: (I1') directly implies (I1). Let $I,J\in\mathcal{I}$ with $\dim I<\dim J$. Let $I^\prime$ be some subspace of $J$ with $\dim I^\prime=\dim I+1$. Then $I^\prime$ is independent by (I2) and we can use (I3') to find a $1$-dimensional subspace $x\subseteq I^\prime$, $x\not\subseteq I$ with $I+x\in\mathcal{I}$. Since $I^\prime\subseteq J$, clearly $x\subseteq J$, $x\not\subseteq I$, so (I3) follows.
\end{proof}
\begin{proof}[Proof of Theorem \ref{indep-rank}]
The proof consists of three parts.
\begin{enumerate}
\item $r\to\mathcal{I}$. Given a function $r$ with properties (r1),(r2),(r3), define $\mathcal{I}$ as $\{A\subseteq E:r(A)=\dim A\}$ and prove (I1),(I2),(I3),(I4).
\item $\mathcal{I}\to r$. Given a family $\mathcal{I}$ with properties (I1),(I2),(I3),(I4), define
$r(A)$ as $\max_{I\subseteq A}\{\dim I:I\in\mathcal{I}\}$ and prove (r1),(r2),(r3).
\item The first two constructions are each other's inverses, that is: $\mathcal{I}\to r\to\mathcal{I}^\prime$ implies $\mathcal{I}=\mathcal{I}^\prime$, and $r\to\mathcal{I}\to r^\prime$ implies $r=r^\prime$.
\end{enumerate}
$\bullet$ Part 1. Let $M=(E,r)$ be a $q$-matroid and define the family $\mathcal{I}$ to be those subspaces $I$ of $E$ for which $r(I)=\dim I$. We will show $\mathcal{I}$ satisfies (I1'),(I2),(I3'),(I4). \\
By (r1), $r(\mathbf{0})=0$, so $r(\mathbf{0})=\dim\mathbf{0}$ and $\mathbf{0}\in\mathcal{I}$, hence (I1'). (I4) was proven in Proposition \ref{p-maxindep3}. \\
For (I2), let $J\in\mathcal{I}$ and $I\subseteq J$. We use (r3) with $A=I$ and $B$ a subspace of $J$ such that $A\cap B= {\bf 0}$ and $A+B=J$, to show $\dim I=r(I)$. The following is independent of the choice of $B$. Since $\dim J=r(J)$, we have
\[ r(I+B)+r(I\cap B)=r(J)+r(\mathbf{0})=\dim J. \]
By (r1), we have
\[ r(I)+r(B)\leq\dim I+\dim(B)=\dim J. \]
Combining and using (r3) gives
\[ \dim J=r(J)+r(\mathbf{0})\leq r(I)+r(B)\leq\dim I+\dim(B)=\dim J, \]
so we must have equality everywhere. This means, with (r1), that $r(B)=\dim(B)$ and $r(I)=\dim I$.
Therefore $I\in\mathcal{I}$ and (I2) holds. \\
We will prove (I3) by contradiction. Let $I,J\in\mathcal{I}$ with $\dim I<\dim J$ and suppose that (I3) fails: for every $1$-dimensional subspace $x\subseteq J$ with $x\not\subseteq I$ we have $I+x\notin\mathcal{I}$. Fix such an $x$. Then $r(I)=\dim I$ but $r(I+x)\neq\dim(I+x)=\dim I+1$. By (r1) and (r2) we have that
\[ \dim I=r(I)\leq r(I+x)\leq\dim(I+x)=\dim I+1. \]
The second inequality can not be an equality, so the first one has to be: $r(I+x)=r(I)$. Since this holds for every $1$-dimensional subspace $x\subseteq J$, $x\not\subseteq I$, Proposition \ref{p-rank1} gives $r(I)=r(I+J)$. But $J\in\mathcal{I}$ and we have that
\[ r(I+J)=r(I)=\dim I<\dim J=r(J) \]
which contradicts (r2) because $J\subseteq I+J$. So (I3) has to hold. \\
$\bullet$ Part 2. Let $\mathcal{I}$ be a family of subspaces of $E$ that satisfies (I1),(I2),(I3),(I4). Define $r(A)$ to be the dimension of the largest independent space contained in $A$. We show $r$ satisfies (r1),(r2),(r3). \\
Since the rank is a dimension, it is a non-negative integer. From the definition of $r$ we have $r(A)\leq\dim A$ and from (I1') we have $0\leq r(A)$. This proves (r1). If $A\subseteq B\subseteq E$, then every independent subspace of $A$ is an independent subspace of $B$, so
\[ r(A)=\max_{I\subseteq A}\{\dim I:I\in\mathcal{I}\}\leq\max_{I\subseteq B}\{\dim I:I\in\mathcal{I}\}=r(B) \]
and thus (r2). The difficult part is to prove (r3). \\
Let $A,B\subseteq E$ and let $I_{A\cap B}$ be a maximal independent space in $A\cap B$. Use (I3) as many times as possible to extend $I_{A\cap B}$ to a maximal independent space $I_A\subseteq A$, and similarly to a maximal independent space $I_B\subseteq B$. By (I4) there is a maximal independent space $I_{A+B}$ of $A+B$ that is contained in $I_A+I_B$. Furthermore, $I_A\cap I_B=I_{A\cap B}$: we have $I_{A\cap B}\subseteq I_A\cap I_B$ by construction, while $I_A\cap I_B$ is an independent subspace of $A\cap B$ by (I2), so its dimension is at most that of the maximal independent space $I_{A\cap B}$. Combining all this, we have
\begin{eqnarray*}
r(A+B)+r(A\cap B) & = & \dim I_{A+B}+\dim I_{A\cap B} \\
& \leq & \dim(I_A+I_B)+\dim I_{A\cap B} \\
& = & \dim I_A+\dim I_B-\dim I_{A\cap B}+\dim I_{A\cap B} \\
& = & \dim I_A+\dim I_B \\
& = & r(A)+r(B)
\end{eqnarray*}
and this is exactly (r3). \\
$\bullet$ Part 3. Given a rank function $r$ satisfying (r1),(r2),(r3), create a family $\mathcal{I}$ by $I\in\mathcal{I}$ if $\dim I=r(I)$. Then use $\mathcal{I}$ to create a (possibly new) rank function $r^\prime(A)=\max_{I\subseteq A}\{\dim I:I\in\mathcal{I}\}$. We want to show that $r^\prime(A)=r(A)$ for all $A\subseteq E$. Note that $r^\prime(A)=\dim I=r(I)$ for some $I\subseteq A$. By (r2), $r(I)\leq r(A)$ so $r^\prime(A)\leq r(A)$. For the reverse inequality, assume $r^\prime(A)<r(A)$ for some $A\subseteq E$. Then by definition of $r^\prime$, for all $I\in\mathcal{I}$ with $I\subseteq A$ we must have $r(A)>\dim I$. Let $I$ be a maximum-dimension such space. Then for all $1$-dimensional subspaces $x\subseteq A$ that intersect trivially with $I$, we have $I+x\notin\mathcal{I}$. Thus $r(I)=r(I+x)$ for all such $x$ and by Proposition \ref{p-rank1} we have $r(I)=r(A)=\dim I$. Contradiction, so $r^\prime(A)\geq r(A)$. Together we have $r^\prime(A)=r(A)$. \\
Given a family $\mathcal{I}$ satisfying (I1),(I2),(I3),(I4), define $r$ by $r(A)=\max_{I\subseteq A}\{\dim I:I\in\mathcal{I}\}$. Then let $\mathcal{I}^\prime$ be defined by $I\in\mathcal{I}^\prime$ if $r(I)=\dim I$. We want to show that $\mathcal{I}=\mathcal{I}^\prime$. Let $I\in\mathcal{I}$, then $r(I)=\dim I$ by the definition of $r$, and thus $I\in\mathcal{I}^\prime$. Now let $I\in\mathcal{I}^\prime$, then $r(I)=\dim I$ by the definition of $\mathcal{I}^\prime$, and thus $I$ is the largest independent subspace of $I$ and $I\in\mathcal{I}$.
\end{proof}
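The two constructions in the proof can be illustrated by machine on the uniform $q$-matroid $U_{2,4}$ over $\mathbb{F}_2$: starting from its rank function, extract the independent spaces, rebuild a rank function from them, and observe that the round trip is the identity. A Python sketch (bitmask encoding as before; all names are ours):

```python
from itertools import combinations

def span(gens):
    """Linear span over F_2 of a collection of bitmask vectors."""
    s = {0}
    for g in gens:
        s |= {v ^ g for v in s}
    return frozenset(s)

def dim(space):
    return len(space).bit_length() - 1

# All subspaces of F_2^4, as spans of at most 4 generators.
subspaces = {span([])}
for k in range(1, 5):
    for gens in combinations(range(1, 16), k):
        subspaces.add(span(gens))

def r(A):
    """Rank function of the uniform q-matroid U_{2,4}."""
    return min(dim(A), 2)

# r -> I: the independent spaces are those with r(I) = dim I.
indep = {A for A in subspaces if r(A) == dim(A)}

# I -> r': dimension of a largest independent space contained in A.
def r_prime(A):
    return max(dim(I) for I in indep if I <= A)
```

Comparing `r` and `r_prime` on every subspace confirms that the round trip returns the original rank function.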
\section{Rank metric codes}\label{sec-codes}
Now that we have established some basic facts about $q$-matroids, we are ready to discuss the motivation for studying them. We show that every rank metric code gives rise to a $q$-matroid. For more on rank metric codes, see Gabidulin \cite{gabidulin:1985}. We consider codes over $L$, where $L$ is a finite Galois field extension of a field $K$. This generalizes the case $K=\mathbb{F}_q$ and $L=\mathbb{F}_{q^m}$ studied by Gabidulin \cite{gabidulin:1985} to arbitrary characteristic, as considered by Augot, Loidreau and Robert \cite{augot:2013,augot:2014}. Much of the material here about rank metric codes is taken from \cite{jurrius:2015a,jurrius:2016}. See also \cite{martinez-penas:2016}. \\
Let $K$ be a field and let $L$ be a finite Galois extension of $K$. A \emph{rank metric code} is an $L$-linear subspace of $L^n$. To every codeword we associate a matrix as follows. Choose a basis $B=\{ \alpha_1, \ldots ,\alpha _m \} $ of $L$ as a vector space over $K$. Let $\mathbf{c} =(c_1, \ldots ,c_n)\in L^n$. We associate to $\mathbf{c}$ the $m \times n$ matrix $M_B(\mathbf{c})$ whose $j$-th column consists of the coordinates of $c_j$ with respect to the chosen basis: $c_j = \sum_{i=1}^m c_{ij}\alpha_i$. So $M_B(\mathbf{c} )$ has entries $c_{ij}$. \\
The $K$-linear row space in $K^n$ and the rank of $M_B(\mathbf{c} )$ do not depend on the choice of the basis $B$,
since for another basis $B'$ there exists an invertible matrix $A$ such that $M_B(\mathbf{c})= AM_{B'}(\mathbf{c})$.
If the choice of basis is not important, we will write $M(\mathbf{x})$ for $M_{B}(\mathbf{x})$.
The rank weight $\wt_R(\mathbf{c} )=\rk(\mathbf{c})$ of $\mathbf{c} $ is by definition the rank of the matrix $M(\mathbf{c} )$,
or equivalently the dimension over $K$ of the row space of $M_B(\mathbf{c} )$. This definition comes from the rank distance, which is defined by $d_R(\mathbf{x}, \mathbf{y} ) = \rk(\mathbf{x}-\mathbf{y})$. The rank distance is in fact a metric on the collection of all $m \times n$ matrices, see \cite{augot:2013,gabidulin:1985}.
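Once the coordinate matrix $M_B(\mathbf{c})$ is written down, the rank weight is a plain matrix rank computation over $K$. A small sketch in Python for $K=\mathbb{F}_2$ and $L=\mathbb{F}_4$ with basis $B=\{1,\alpha\}$, where we represent each entry $c_j$ directly by its coordinate pair $(c_{1j},c_{2j})$ (all names are ours):

```python
def rank_f2(rows):
    """Rank over F_2 of a binary matrix whose rows are given as bitmasks."""
    basis = {}  # leading-bit position -> pivot row with that leading bit
    for row in rows:
        while row:
            hb = row.bit_length() - 1
            if hb not in basis:
                basis[hb] = row
                break
            row ^= basis[hb]  # eliminate the leading bit and continue
    return len(basis)

def rank_weight(c):
    """wt_R(c): rank of the 2 x n coordinate matrix M_B(c) over K = F_2.
    c is a sequence of coordinate pairs (a, b) meaning a*1 + b*alpha."""
    row1 = row2 = 0
    for j, (a, b) in enumerate(c):
        row1 |= a << j  # coordinates with respect to 1
        row2 |= b << j  # coordinates with respect to alpha
    return rank_f2([row1, row2])
```

For example, the codeword $(1,\alpha,1+\alpha)$ has coordinate matrix $\left(\begin{smallmatrix}1&0&1\\0&1&1\end{smallmatrix}\right)$ and rank weight $2$.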
\begin{definition}
Let $C$ be an $L$-linear code.
Let $\mathbf{c} \in C$. Then $\Rsupp(\mathbf{c})$, the \emph{rank support} of $\mathbf{c}$,
is the $K$-linear row space of $M_B(\mathbf{c})$. So $\wt_R(\mathbf{c})$ is the dimension of $\Rsupp(\mathbf{c})$.
\end{definition}
Note that this definition is the rank metric analogue of the support weights, or weights of subcodes, for codes in the Hamming metric.
\begin{definition}\label{dJ2}
For a $K$-linear subspace $J$ of $K^n$ we define:
\[ C(J)=\{\mathbf{c}\in C :\Rsupp(\mathbf{c})\subseteq J^\perp\}. \]
\end{definition}
From this definition it is clear that $C(J)$ is a $K$-linear subspace of $C$, but in fact it is also an $L$-linear subspace.
\begin{lemma}\label{lJ2}
Let $C$ be an $L$-linear code of length $n$ and let $J$ be a $K$-linear subspace of $K ^n$.
Then $\mathbf{c} \in C(J)$ if and only if $\mathbf{c}\cdot\mathbf{y}=0$ for all $ \mathbf{y} \in J $.
Furthermore $C(J)$ is an $L$-linear subspace of $C$.
\end{lemma}
\begin{proof}
The following statements are equivalent:
\[ \begin{array}{c}
\mathbf{c} \in C(J)\\
\sum_{j=1}^nc_{ij}y_j =0 \mbox{ for all } \mathbf{y} \in J \mbox{ and } i=1, \ldots ,m\\
\sum_{i=1}^m(\sum_{j=1}^nc_{ij}y_j) \alpha_i=0 \mbox{ for all } \mathbf{y} \in J\\
\sum_{j=1}^n(\sum_{i=1}^mc_{ij} \alpha_i) y_j=0 \mbox{ for all } \mathbf{y} \in J\\
\sum_{j=1}^n c_j y_j=0 \mbox{ for all }\mathbf{y} \in J\\
\mathbf{c} \cdot \mathbf{y} =0\mbox{ for all } \mathbf{y} \in J\\
\end{array} \]
Hence $C(J) = \{ \mathbf{c} \in C : \mathbf{c} \cdot \mathbf{y}=0 \mbox{ for all } \mathbf{y} \in J \} $.
From this description it follows directly that $C(J)$ is an $L$-linear subspace of $C$.
\end{proof}
\begin{definition}\label{dH3}
Let $C$ be an $L$-linear code of length $n$.
Let $J$ be a $K$-linear subspace of $K ^n$ of dimension $t$ with generator matrix $Y$.
Define the map $\pi _J : L^n \rightarrow L^t$ by $\pi _J (\mathbf{x})= \mathbf{x} Y^T$, and $C_J = \pi_J(C)$.
\end{definition}
\begin{lemma}\label{lJ3}
Let $C$ be an $L$-linear code of length $n$.
Let $J$ be a $K$-linear subspace of $K ^n$ of dimension $t$ with generator matrix $Y$.
Then $\pi _J $ is an $L$-linear map and $C_J$ is an $L$-linear code of length $t$ and its dimension does not depend on the chosen
generator matrix. Furthermore we have an exact sequence of vector spaces:
\[ 0 \longrightarrow C(J) \longrightarrow C \longrightarrow C_J \longrightarrow 0. \]
\end{lemma}
\begin{proof}
The map $\pi _J $ is defined by a matrix with entries in $K$ so it is $L $-linear.
The image of $C$ under $\pi_J$ is $C_J$. Hence $C_J$ is an $L$-linear code.\\
If $G$ is a generator matrix of $C$, then $C_J$ is the row space of $GY^T$ and the dimension of $C_J$ is equal to the rank of $GY^T$.
If $G'$ is another generator matrix of $C$ and $Y'$ another generator matrix of $J$, then
there exists an invertible $k\times k$ matrix $A$ with entries in $L$, where $k=\dim_L C$, and an invertible $t\times t$ matrix $B$ with entries in $K$
such that $G'=AG$ and $Y'=BY$. The row space of $G'(Y')^T$ is the space $C_J$ with respect to $Y'$. Now
\[ G'(Y')^T=(AG)(BY)^T = A(GY^T) B^T, \]
and $A$ and $B^T$ are invertible. Hence $G'(Y')^T$ and $GY^T$ have the same rank. Therefore the dimension of $C_J$ does not depend on the chosen generator matrix for $J$.\\
The map $C(J) \rightarrow C$ is injective and the map $\pi_J: C \rightarrow C_J $ is surjective, both by definition.
Furthermore the kernel of $\pi_J: C \rightarrow C_J $ is equal to $\{ \mathbf{c} \in C : \mathbf{c} \cdot \mathbf{y}=0 \mbox{ for all } \mathbf{y} \in J \} $, which is equal to $C(J)$ by Lemma \ref{lJ2}.
Hence the given sequence is exact.
\end{proof}
\begin{definition}\label{dH4}
Let $C$ be an $L$-linear code of length $n$.
Let $J$ be a $K$-linear subspace of $K^n$ of dimension $t$.
Define $l(J) = \dim_{L} C(J)$ and $r(J) = \dim_{L} C_J$.
\end{definition}
\begin{corollary}\label{cJ4}
Let $C$ be an $L$-linear code of length $n$ and dimension $k$ and let $J$ be a $K$-linear subspace of $K^n$. Then $l(J)+r(J)=k$.
\end{corollary}
\begin{proof}
This is a direct consequence of Lemma \ref{lJ3}.
\end{proof}
We now know enough about rank metric codes to show that there is a $q$-matroid associated to them.
\begin{theorem}
Let $C$ be a linear rank metric code over $L$, $E=K^n$ and $r$ the function from Definition \ref{dH4}. Then $(E,r)$ is a $q$-matroid.
\end{theorem}
\begin{proof}
First of all, it is clear that $r$ is an integer valued function defined on the subspaces of $E$. We need to show that $r$ satisfies the properties (r1),(r2),(r3). Let $I,J\subseteq E$. We will make heavy use of Corollary \ref{cJ4}, saying $r(J)=k-l(J)$. \\
$\bullet$ (r1) $0\leq r(J)\leq\dim J$. \\
This follows from the definition of $r(J)= \dim C_J$ and the fact that $C_J$ is a subspace of $L^t$ with $t=\dim J$. \\
$\bullet$ (r2) If $I\subseteq J$ then $r(I)\leq r(J)$. \\
Let $I \subseteq J$ and let $ \mathbf{c}\in C(J)$. Then $I \subseteq J \subseteq \Rsupp(\mathbf{c})^\perp$. So $ \mathbf{c}\in C(I)$. Hence $C(J) \subseteq C(I)$ and $l(J) \leq l(I)$. Therefore $r(I) \leq r(J)$. \\
$\bullet$ (r3) $r(I+J) +r(I\cap J) \leq r(I) + r(J)$. \\
Let $I$, $J$ and $H$ be linear subspaces of $K^n$. If $I \subseteq H$ and $J \subseteq H$, then $I +J \subseteq H$, since $H$ is a subspace. On the other hand, if $I+J \subseteq H$, then $I \subseteq I+J \subseteq H$ so $I \subseteq H$ and similarly $J \subseteq H$. Hence $I+J \subseteq H$ if and only if $I \subseteq H$ and $J \subseteq H$.\\
The following statements are then equivalent:
\[ \begin{array}{c}
\mathbf{c} \in C(I) \cap C(J)\\
\mathbf{c} \in C(I) \ \mbox{ and } \ \mathbf{c} \in C(J) \\
I \subseteq\Rsupp(\mathbf{c})^\perp \ \mbox{ and } \ J\subseteq\Rsupp(\mathbf{c})^\perp \\
I+J \subseteq \Rsupp(\mathbf{c})^\perp \\
\mathbf{c} \in C(I+J)
\end{array} \]
Hence $C(I) \cap C(J) = C(I+J)$. \\
Now if $\mathbf{c} \in C(I)$ then $I \subseteq \Rsupp(\mathbf{c})^\perp$ so $I\cap J \subseteq (\Rsupp(\mathbf{c}))^\perp $. Hence $\mathbf{c} \in C(I\cap J)$. So $C(I) \subseteq C(I\cap J)$ and similarly $C(J) \subseteq C(I\cap J)$. Therefore $C(I)+C(J) \subseteq C(I\cap J)$.\\
Combining the above and using the modularity of dimension, we now have
\begin{eqnarray*}
l(I)+l(J) & = & \dim C(I) + \dim C(J) \\
& = & \dim(C(I)\cap C(J))+\dim(C(I)+C(J)) \\
& \leq & \dim(C(I+J))+\dim(C(I\cap J)) \\
& = & l(I+J)+l(I\cap J)
\end{eqnarray*}
It follows that $r(I+J)+r(I\cap J) \leq r(I) + r(J)$. \\
We have shown that the function $r$ satisfies (r1),(r2),(r3), so we conclude that $(E,r)$ is indeed a $q$-matroid.
\end{proof}
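The rank axioms can also be checked mechanically on a small example. The following Python sketch is ours, not part of the paper: it verifies (r1),(r2),(r3) for the rank function $r(A)=\min\{\dim A,2\}$ of the uniform $q$-matroid $U_{2,3}$ on $E=\mathbb{F}_2^3$, with vectors encoded as bitmasks; the helper names (`span`, `all_subspaces`, `dim`) are our own.

```python
from itertools import combinations

def span(gens):
    """F_2-span of bitmask vectors: close the generators under XOR."""
    s = {0}
    for g in gens:
        s |= {x ^ g for x in s}
    return frozenset(s)

def all_subspaces(n):
    """All F_2-linear subspaces of F_2^n, each a frozenset of bitmasks."""
    subs = {span([])}
    for k in range(1, n + 1):
        for gens in combinations(range(1, 2 ** n), k):
            subs.add(span(gens))
    return subs

def dim(s):
    """A subspace with 2^d elements has dimension d."""
    return len(s).bit_length() - 1

n, k = 3, 2
r = lambda a: min(dim(a), k)        # rank function of U_{2,3}
subs = all_subspaces(n)             # the 16 subspaces of F_2^3

for a in subs:
    assert 0 <= r(a) <= dim(a)                            # (r1)
    for b in subs:
        if a <= b:
            assert r(a) <= r(b)                           # (r2)
        assert r(span(a | b)) + r(a & b) <= r(a) + r(b)   # (r3)
print("rank axioms hold on", len(subs), "subspaces")
```

The same enumeration works for any rank function on $\mathbb{F}_2^n$ for small $n$, since sums of subspaces are spans of unions and intersections of subspaces are set intersections.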
\begin{corollary}
The rank of the $q$-matroid $M(C)$ associated to a rank metric code $C$ is $\dim C$.
\end{corollary}
\begin{proof}
We have that $r(M(C))=r(E)=\dim C-l(E)$ and also $E^\perp=\mathbf{0}$, so $C(E)=\mathbf{0}$ and $r(M(C))=\dim C$.
\end{proof}
\begin{corollary}
Let $L^\prime$ be a field extension of $L$ such that $L^\prime$ is Galois over $K$. Let $C\otimes L^\prime$ be the $L^\prime$-linear code obtained by taking all $L^\prime$-linear combinations of words of $C$. Then the $q$-matroids associated to $C$ and $C\otimes L^\prime$ are the same.
\end{corollary}
\begin{proof}
We first show that $(C(I))\otimes L^\prime = (C\otimes L^\prime )(I)$. \\
Let $\mathbf{c} \in (C(I))\otimes L^\prime$.
Let $\mathbf{b}_1, \ldots , \mathbf{b}_l$ be a basis of $C(I)$ over $L$.
Then $\mathbf{b}_1, \ldots , \mathbf{b}_l$ is also a basis of $(C(I))\otimes L^\prime$ over $L^\prime$ by the definition of taking $\otimes L^\prime$.
Also, $\mathbf{b}_i \cdot \mathbf{x} =0$ for all $\mathbf{x} \in I$ by Lemma \ref{lJ2}.
There exist $\lambda _1, \ldots ,\lambda_l \in L^\prime$ such that
$\mathbf{c} = \sum_{i=1}^l \lambda_i \mathbf{b}_i$.
So by linearity $\mathbf{c} \cdot \mathbf{x} =0$ for all $\mathbf{x} \in I$,
hence $\mathbf{c}\in (C\otimes L^\prime )(I)$ by Lemma \ref{lJ2}.
Therefore $(C(I))\otimes L^\prime \subseteq (C\otimes L^\prime )(I)$.\\
Conversely, let $\mathbf{c} \in (C\otimes L^\prime )(I)$.
Then $\mathbf{c} \cdot \mathbf{x} =0$ for all $\mathbf{x} \in I$.
Let $\mathbf{g}_1, \ldots , \mathbf{g}_k$ be a basis of $C$ over $L$.
Then $\mathbf{g}_1, \ldots , \mathbf{g}_k$ is also a basis of $C\otimes L^\prime$ over $L^\prime$.
There exist $\lambda _1, \ldots ,\lambda_k \in L^\prime$ such that
$\mathbf{c} = \sum_{i=1}^k \lambda_i\mathbf{g}_i$.
Let $\alpha_1, \ldots ,\alpha_m$ be a basis of $L^\prime $ over $L$.
Then for every $i$ there exist $\lambda _{i1}, \ldots ,\lambda_{im} \in L$ such that
$\lambda_i = \sum_{j=1}^m \lambda_{ij}\alpha_j$.
Let $\mathbf{x}\in I$. Then $\sum_{i=1}^k \lambda_{ij} \mathbf{g}_i \cdot \mathbf{x} \in L$ for all $j$,
$$
0 =\mathbf{c} \cdot \mathbf{x} =
\sum_{j=1}^m \left(\sum_{i=1}^k \lambda_{ij} \mathbf{g}_i \cdot \mathbf{x}\right) \alpha_j
$$
and $\alpha_1, \ldots ,\alpha_m$ is a basis of $L^\prime $ over $L$.
So $\sum_{i=1}^k \lambda_{ij} \mathbf{g}_i \cdot \mathbf{x}=0$ for all $j$ and all $\mathbf{x}\in I$.
Hence $\sum_{i=1}^k \lambda_{ij} \mathbf{g}_i \in C(I)$ for all $j$.
Therefore $\mathbf{c} \in (C(I))\otimes L^\prime$, and
$(C\otimes L^\prime )(I) \subseteq (C(I))\otimes L^\prime$.\\
We conclude that $l(I)$, the dimension of $C(I)$ over $L$ is also the dimension of
$(C\otimes L^\prime )(I)$ over $L^\prime$.
Hence the rank functions of the $q$-matroids $M(C)$ and $M((C\otimes L^\prime ))$ are the same.
\end{proof}
\begin{example}\label{ex-code}
Let $L=\mathbb{F}_{8} $ and $K=\mathbb{F}_2 $. Let $a \in \mathbb{F}_{8}$ with $a^3=1+a$. Let $C$ be the rank metric code over $L$ with generator matrix
\[ G=\left(\begin{array}{cccc}
1 & a & 0 & 0 \\
0 & 1 & a & 0
\end{array}\right). \]
We can find the matroid associated to $C$ by finding its bases. Since the rank of $M(C)$ is $\dim C=2$, the bases are the $2$-dimensional subspaces $J$ of $\mathbb{F}_2^4$ with $l(J)=0$. This means $C(J)=\mathbf{0}$, i.e., there is no nonzero codeword such that $\Rsupp(\mathbf{c})\subseteq J^\perp$. Now $\wt_R(\mathbf{c})$ can not be $0$ unless $\mathbf{c}=\mathbf{0}$. It can only be $1$ if all nonzero entries in the codeword are the same: that can not happen. So if $\mathbf{c}$ is nonzero, the dimension of $\Rsupp(\mathbf{c})$ is at least $2$. On the other hand, all codewords have last coordinate zero, so $\Rsupp(\mathbf{c})\subseteq\langle0001\rangle^\perp$. This means that if $J$ does not contain $\langle0001\rangle$, then $J^\perp\cap\langle0001\rangle^\perp$ is $1$-dimensional, so there can not be a nonzero codeword with $\Rsupp(\mathbf{c})\subseteq J^\perp$. Conversely, one checks that every $2$-dimensional subspace of $\langle0001\rangle^\perp$ is the rank support of some nonzero codeword, so the $2$-dimensional subspaces containing $\langle0001\rangle$ are not bases. The bases of $M(C)$ are thus the $2$-dimensional subspaces of $\mathbb{F}_2^4$ that do not contain $\langle0001\rangle$. This means the subspace $\langle0001\rangle$ is a loop. In fact, this is the matroid from Example \ref{ex-2}.
\end{example}
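The bases in Example \ref{ex-code} can be confirmed by brute force. Below is a Python sketch with our own encoding (not part of the paper): elements of $\mathbb{F}_8$ are bitmasks $b_0+b_1a+b_2a^2$ with $a^3=a+1$, vectors of $\mathbb{F}_2^4$ are $4$-bit masks with $0001$ the last coordinate, and a $2$-dimensional $J$ is a basis exactly when no nonzero codeword has $\Rsupp(\mathbf{c})\subseteq J^\perp$.

```python
from itertools import combinations

def span(gens):
    """F_2-span of bitmask vectors: close the generators under XOR."""
    s = {0}
    for g in gens:
        s |= {x ^ g for x in s}
    return frozenset(s)

def all_subspaces(n):
    """All F_2-linear subspaces of F_2^n."""
    subs = {span([])}
    for k in range(1, n + 1):
        for gens in combinations(range(1, 2 ** n), k):
            subs.add(span(gens))
    return subs

def dim(s):
    return len(s).bit_length() - 1

def perp(s, n=4):
    """Orthogonal complement w.r.t. the standard inner product on F_2^n."""
    return frozenset(x for x in range(2 ** n)
                     if all(bin(x & y).count("1") % 2 == 0 for y in s))

def gf8_mul(x, y):
    """Multiply in F_8 = F_2[a]/(a^3 + a + 1); elements are 3-bit masks."""
    r = 0
    for i in range(3):
        if (y >> i) & 1:
            r ^= x << i
    for i in (4, 3):                       # reduce modulo a^3 + a + 1
        if (r >> i) & 1:
            r ^= 0b1011 << (i - 3)
    return r

G = [(1, 0b010, 0, 0), (0, 1, 0b010, 0)]   # rows (1,a,0,0) and (0,1,a,0)

def rsupp(c):
    """Rank support: row space of the 3x4 expansion of c over F_2."""
    rows = [sum(((c[i] >> j) & 1) << (3 - i) for i in range(4))
            for j in range(3)]
    return span(rows)

codewords = [tuple(gf8_mul(l1, G[0][i]) ^ gf8_mul(l2, G[1][i]) for i in range(4))
             for l1 in range(8) for l2 in range(8) if l1 or l2]
supports = {rsupp(c) for c in codewords}

bases = [J for J in all_subspaces(4) if dim(J) == 2
         and not any(s <= perp(J) for s in supports)]
print(len(bases), all(0b0001 not in J for J in bases))
```

Of the $35$ two-dimensional subspaces of $\mathbb{F}_2^4$, the $7$ containing $\langle0001\rangle$ are excluded, leaving $28$ bases.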
Using the theory of rank metric codes, we can learn more about the function $l(J)$.
\begin{definition}
Let $C$ be an $L$-linear code of length $n$. Then the \emph{dual} of $C$, notated by $C^\perp$, consists of all vectors of $L^n$ that are orthogonal to all codewords of $C$.
\end{definition}
The next Proposition is the $q$-analogue of the well-known fact that the minimum distance is the minimal number of dependent columns in a parity check matrix of the code.
\begin{proposition}\label{pJ}
Let $C$ be an $L$-linear code of length $n$. Then $t<d_R(C^\perp)$ if and only if $\dim_{L} (C_J)=t$ for all $K $-linear subspaces $J$ of $K^n$ of dimension $t$.
\end{proposition}
\begin{proof}
See \cite[Theorem 1]{gabidulin:1985}.
\end{proof}
\begin{lemma}\label{lJ5}
Let $C$ be an $L$-linear code of length $n$.
Let $d_R$ and $d_R^{\perp}$ be the minimum rank distance of $C$ and $C^{\perp}$, respectively.
Let $J$ be a $K$-linear subspace of $K ^n$ of dimension $t$. Let $l(J) = \dim_{L} C(J)$.
Then
$$
l(J)=\left\{\begin{array}{cl}
k-t & \text{for all } t<d^{\perp}_R \\
0 & \text{for all } t>n-d_R
\end{array}\right.
$$
\end{lemma}
\begin{proof}
The first case is a direct consequence of Proposition \ref{pJ}.\\
Let $t>n-d_R$ and let $\mathbf{c}\in C(J)$. Then $J$ is contained in the orthogonal complement of $\Rsupp (\mathbf{c})$, so $t\leq n-\wt_R(\mathbf{c})$.
It follows that $\wt_R(\mathbf{c})\leq n-t<d_R$, so $\mathbf{c}$ is the zero word and therefore $l(J)=0$.
\end{proof}
\begin{example}\label{ex-uniform-MRD}
Let $m\geq n$ and let $C$ be an $L$-linear code of length $n$, dimension $k$ and minimum distance $d_R=n-k+1$. Such a code is called an MRD (\emph{maximum rank distance}) code. Gabidulin \cite{gabidulin:1985} constructed such codes over finite fields for all $n$, $k$ and $q$. The construction was generalized to characteristic $0$ and rational function fields by Augot, Loidreau and Robert \cite{augot:2013,augot:2014}. The dual of an MRD code is again an MRD code and its minimum distance is therefore $d_R^\perp=k+1$. If we apply Lemma \ref{lJ5}, we find that the function $l(J)$ is completely determined in terms of the dimension $t$ of $J$:
\[ l(J)=\left\{\begin{array}{cl}
k-t & \text{for all } t\leq k \\
0 & \text{for all } t>k
\end{array}\right. \]
This means that also $r(J)$ is completely determined:
\[ r(J)=\left\{\begin{array}{cl}
t & \text{for all } t\leq k \\
k & \text{for all } t>k
\end{array}\right. \]
As we have seen in Example \ref{ex-uniform}, this is the rank function of the uniform $q$-matroid $U_{k,n}$.
\end{example}
\section{Truncation}
We present the notion of truncation of a $q$-matroid, so that we can use it in our proofs concerning axioms for bases. From now on we denote by $\mathcal{I}(M)$ the independent spaces of the $q$-matroid $M$, and if a $q$-matroid is defined by $\mathcal{I}$, we denote it by $(M,\mathcal{I})$.
\begin{definition}
Let $M=(E,\mathcal{I})$ be a $q$-matroid with $r(E)\geq 1$. The \emph{truncated matroid} $\tau(M)$ is a $q$-matroid with ground space $E$ and independent spaces those members of $\mathcal{I}$ that have dimension at most $r(M)-1$; so
\[ \mathcal{I}(\tau(M))=\{I\in\mathcal{I}:\dim I<r(M)\}. \]
\end{definition}
Because the dimension of an independent space is at most $r(M)$, this means that we simply remove all maximal independent spaces from $\mathcal{I}(M)$ to get $\mathcal{I}(\tau(M))$.
\begin{theorem}\label{t-trunc}
The truncation $\tau(M)$ is indeed a $q$-matroid, that is, $\mathcal{I}(\tau(M))$ satisfies (I1),(I2),(I3),(I4).
\end{theorem}
\begin{proof}
Because $r(M)\geq 1$, we have that $\mathbf{0}\in\mathcal{I}(\tau(M))$ hence (I1) holds. Let $J\in\mathcal{I}(\tau(M))$ and $I\subseteq J$. Then $\dim I\leq\dim J$ and $\dim J<r(M)$, so $\dim I<r(M)$ and $I\in\mathcal{I}(\tau(M))$. This proves (I2). For (I3) and (I4), it is enough to note that $\mathcal{I}(\tau(M))\subseteq\mathcal{I}(M)$. We conclude that $\tau(M)$ is indeed a $q$-matroid.
\end{proof}
We have the following straightforward description of the rank function of the truncated matroid:
\begin{corollary}
Let $M=(E,r)$ be a $q$-matroid with $r(M)\geq 1$. Then the rank function $r_\tau$ of the truncation $\tau(M)$ is given by
\[ r_\tau(A)=\min\{r(A),r(M)-1\}. \]
\end{corollary}
This means that for all subspaces $A$ of $E$ with $r(A)<r(M)$, we have $r(A)=r_\tau(A)$. Only for subspaces with $r(A)=r(M)$ does the rank decrease: $r_\tau(A)=r(A)-1$.
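This rank formula can be verified mechanically on a small example. The sketch below (our own helpers, not part of the paper) takes $M=U_{2,3}$ over $\mathbb{F}_2$, computes the rank of each subspace in the truncation as the maximal dimension of an independent subspace it contains, and compares with $\min\{r(A),r(M)-1\}$.

```python
from itertools import combinations

def span(gens):
    """F_2-span of bitmask vectors: close the generators under XOR."""
    s = {0}
    for g in gens:
        s |= {x ^ g for x in s}
    return frozenset(s)

def all_subspaces(n):
    subs = {span([])}
    for k in range(1, n + 1):
        for gens in combinations(range(1, 2 ** n), k):
            subs.add(span(gens))
    return subs

def dim(s):
    return len(s).bit_length() - 1

n, k = 3, 2
subs = all_subspaces(n)
r = lambda a: min(dim(a), k)                   # rank function of U_{2,3}
indep = [i for i in subs if r(i) == dim(i)]    # independent spaces: dim <= 2
indep_tau = [i for i in indep if dim(i) < k]   # truncation drops the maximal ones

def rank_from(fam, a):
    """Rank of A = maximal dimension of a member of fam contained in A."""
    return max(dim(i) for i in fam if i <= a)

for a in subs:
    assert rank_from(indep_tau, a) == min(r(a), k - 1)
print("truncation rank formula verified: tau(U_{2,3}) = U_{1,3}")
```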
\begin{example}
Let $U_{k,n}$ be the uniform $q$-matroid of Example \ref{ex-uniform}. The truncation of $U_{k,n}$ has as independent spaces all subspaces of dimension at most $k-1$, so it is equal to $U_{k-1,n}$.
\end{example}
\begin{example}
Let $M$ the $q$-matroid of Example \ref{ex-2}. The truncation has as independent spaces $\mathbf{0}$ and all $1$-dimensional subspaces except $\langle0001\rangle$.
\end{example}
\section{Bases}
Remark \ref{r-axioms} about the axioms for independent spaces holds for bases as well: if a set of axioms is invariant under embedding the family of bases $\mathcal{B}$ in a space of higher dimension, then it can not completely determine a $q$-matroid. This is why we need a fourth axiom.
\begin{theorem}\label{indep-bases}
Let $E$ be a finite dimensional space. If $\mathcal{B}$ is a family of subspaces of $E$ that satisfies the conditions:
\begin{itemize}
\item[(B1)] $\mathcal{B}\neq\emptyset$
\item[(B2)] If $B_1,B_2\in\mathcal{B}$ and $B_1\subseteq B_2$, then $B_1=B_2$.
\item[(B3)] If $B_1,B_2\in\mathcal{B}$, then for every codimension $1$ subspace $A$ of $B_1$ with $B_1\cap B_2\subseteq A$ there is a $1$-dimensional subspace $y$ of $B_2$ with $A+y\in\mathcal{B}$.
\item[(B4)] Let $A,B\subseteq E$ and let $I,J$ be maximal intersections of some bases with $A$ and $B$, respectively. Then there is a maximal intersection of a basis and $A+B$ that is contained in $I+J$.
\end{itemize}
and $\mathcal{I}_\mathcal{B}$ is the family defined by $\mathcal{I}_\mathcal{B}=\{I:\exists B\in\mathcal{B},I\subseteq B \}$, then $(E,\mathcal{I}_\mathcal{B})$ is a $q$-matroid and its family of bases is $\mathcal{B}$. \\
Conversely, if $\mathcal{B}_\mathcal{I}$ is the family of bases of a $q$-matroid $(E,\mathcal{I})$, then $\mathcal{B}_\mathcal{I}$ satisfies the conditions (B1),(B2),(B3),(B4) and $\mathcal{I}=\mathcal{I}_{\mathcal{B}_\mathcal{I}}$.
\end{theorem}
\begin{remark}\label{r-ynotinB1}
In property (B3), it can not happen that $y\subseteq B_1$. Since by assumption $y\subseteq B_2$, this would mean $y\subseteq B_1\cap B_2\subseteq A$ and hence $A+y=A$. But $A$ can not be a basis, since $A\subsetneq B_1$ would contradict (B2). So $y\not\subseteq B_1$.
\end{remark}
\begin{example}
Let $M$ be the $q$-matroid of Example \ref{ex-2}. The bases are the subspaces of dimension $2$ that do not contain $\langle0001\rangle$. We illustrate the property (B3). Let $B_1=\langle1100,0010\rangle$ and $B_2=\langle1010,0100\rangle$. Then the intersection $B_1\cap B_2$ is $\langle1110\rangle$. This means we only have one choice for a codimension $1$ subspace of $B_1$ that contains $B_1\cap B_2$: it has to be $\langle1110\rangle$. If we add either $\langle1010\rangle$ or $\langle0100\rangle$ we get $B_2$, which is a basis.
\end{example}
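The basis axioms can also be checked exhaustively for this example. The sketch below (our own encoding, not from the paper) takes $\mathcal{B}$ to be all $2$-dimensional subspaces of $\mathbb{F}_2^4$ not containing $\langle0001\rangle$ and verifies (B1), (B2) and (B3); for a $2$-dimensional space, the codimension $1$ subspaces are its $1$-dimensional subspaces.

```python
from itertools import combinations

def span(gens):
    """F_2-span of bitmask vectors: close the generators under XOR."""
    s = {0}
    for g in gens:
        s |= {x ^ g for x in s}
    return frozenset(s)

def all_subspaces(n):
    subs = {span([])}
    for k in range(1, n + 1):
        for gens in combinations(range(1, 2 ** n), k):
            subs.add(span(gens))
    return subs

def dim(s):
    return len(s).bit_length() - 1

# bases of the q-matroid of Example ex-2: 2-dim subspaces avoiding <0001>
bases = {s for s in all_subspaces(4) if dim(s) == 2 and 0b0001 not in s}
lines = lambda s: [span([v]) for v in s if v]   # 1-dim subspaces of s

assert bases                                               # (B1)
assert not any(b1 < b2 for b1 in bases for b2 in bases)    # (B2)
for b1 in bases:                                           # (B3)
    for b2 in bases:
        for a in lines(b1):                 # codim-1 subspaces of b1
            if b1 & b2 <= a:
                assert any(span(a | y) in bases for y in lines(b2))
print("(B1),(B2),(B3) verified for", len(bases), "bases")
```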
Before proving the theorem, we first prove a slight variation of the axioms.
\begin{proposition}
Let $E$ be a finite dimensional space and let $\mathcal{B}$ be a family of subspaces of $E$.
Consider the condition:
\begin{itemize}
\item[(B2')] If $B_1,B_2\in\mathcal{B}$, then $\dim B_1=\dim B_2$.
\end{itemize}
The family $\mathcal{B}$ satisfies (B1),(B2),(B3) if and only if $\mathcal{B}$ satisfies (B1),(B2'),(B3).
\end{proposition}
\begin{proof}
$\Leftarrow$: It is clear that (B2') implies (B2). \\
$\Rightarrow$: Let $B_1,B_2\in\mathcal{B}$. Apply (B3) repeatedly to exchange $1$-dimensional subspaces of $B_1$ for $1$-dimensional subspaces of $B_2$; this yields a basis $B_1^\prime$ with $\dim B_1^\prime=\dim B_1$ and $B_1^\prime\subseteq B_2$. By (B2), $B_1^\prime=B_2$. Hence $\dim B_1=\dim B_1^\prime=\dim B_2$ and thus (B2').
\end{proof}
\begin{proof}[Proof (Theorem \ref{indep-bases})]
As with Theorem \ref{indep-rank}, the proof has three parts:
\begin{enumerate}
\item $\mathcal{I}\to\mathcal{B}$. Given $\mathcal{I}$ with properties (I1),(I2),(I3),(I4), define $\mathcal{B}$ as the family of independent spaces that are maximal with respect to inclusion and prove (B1),(B2),(B3),(B4).
\item $\mathcal{B}\to\mathcal{I}$. Given $\mathcal{B}$ with properties (B1),(B2),(B3),(B4), prove that the properties (I1),(I2),(I3),(I4) hold for $\mathcal{I}=\{I:\exists B\in\mathcal{B},I\subseteq B \}$.
\item The first two are each others inverse, that is:
$\mathcal{I}\to\mathcal{B}\to\mathcal{I}^\prime$ implies
$\mathcal{I}=\mathcal{I}^\prime$, and $\mathcal{B}\to\mathcal{I}\to\mathcal{B}^\prime$ implies $\mathcal{B}=\mathcal{B}^\prime$.
\end{enumerate}
$\bullet$ Part 1. Let $M=(E,\mathcal{I})$ be a $q$-matroid and define $\mathcal{B}$ to be the family of members of $\mathcal{I}$ that are maximal with respect to inclusion, i.e.,
$\mathcal{B}=\{B\in\mathcal{I}:\forall B^\prime\in\mathcal{I}, B\subseteq B^\prime\Rightarrow B=B^\prime\}$.
We need to show that $\mathcal{B}$ satisfies (B1),(B2'),(B3),(B4). \\
Now (B1) is easy: since $\mathbf{0}\in\mathcal{I}$ by (I1') and $E$ is finite-dimensional, we can find an element $B\in\mathcal{I}$ which is not properly contained in any other independent space. Then $B\in\mathcal{B}$ and hence $\mathcal{B}\neq\emptyset$. \\
(B2') is also easy: if there are $B_1,B_2\in\mathcal{B}$ with $\dim B_1<\dim B_2$, then, since $B_1,B_2\in\mathcal{I}$, by (I3) there is a $1$-dimensional subspace $x\subseteq B_2$, $x\not\subseteq B_1$, such that $B_1+x\in\mathcal{I}$.
But $B_1$ is a proper subspace of $B_1+x$, which contradicts the definition of $\mathcal{B}$, so $\dim B_1=\dim B_2$ and hence (B2'). \\
Next we show (B3). Let $B_1,B_2\in\mathcal{B}$. Let $A$ be a codimension $1$ subspace of $B_1$ with $B_1\cap B_2\subseteq A$.
Since $B_1,B_2\in\mathcal{I}$ and $A$ is a subspace of $B_1$, we have $A\in\mathcal{I}$ by (I2).
Apply (I3) to $A$ and $B_2$: since $\dim(A)<\dim B_2$ by (B2'), there is a $1$-dimensional subspace $y\subseteq B_2$,
$y\not\subseteq A$ such that $A+y\in\mathcal{I}$. Because $B_1\cap B_2\subseteq A$ we have that $y\not\subseteq B_1$. We show that $A+y$ is in $\mathcal{B}$.
Suppose not; then there is a $B_3\in\mathcal{B}$ such that $A+y\subsetneq B_3$ and $\dim(A+y)=\dim B_1<\dim B_3$, which contradicts (B2').
So (B3) holds. \\
Finally, for (B4) it is enough to notice that by (I3) every independent space is contained in a basis. So a maximal independent subspace of $A\subseteq E$ is the same as a maximal intersection between a member of $\mathcal{B}$ and $A$. Then (B4) is just a re-formulation of (I4) in terms of bases instead of independent spaces. \\
$\bullet$ Part 2. Let $\mathcal{B}$ be a family of subspaces of a finite dimensional space $E$ satisfying (B1),(B2),(B3),(B4).
Define $\mathcal{I}=\{I:\exists B\in\mathcal{B}, I\subseteq B \}$. We need to show $\mathcal{I}$ satisfies (I1),(I2),(I3),(I4). \\
Since $\mathcal{B}\neq\emptyset$ by (B1) and $\mathcal{B}\subseteq\mathcal{I}$, it follows that $\mathcal{I}\neq\emptyset$ and thus (I1). \\
To verify (I2), we need to show that if $I^\prime\subseteq I$ for some $I\in\mathcal{I}$, then $I^\prime\in\mathcal{I}$.
By the construction of $\mathcal{I}$, we know $I\subseteq B$ for some $B\in\mathcal{B}$. But then $I^\prime\subseteq I\subseteq B$ and so $I^\prime\in\mathcal{I}$ and (I2). \\
Now we prove (I3). Let $I_1,I_2\in\mathcal{I}$ with $\dim I_1<\dim I_2$. We may assume without loss of generality that $I_2$ is a basis, by truncating the matroid sufficiently many times by Theorem \ref{t-trunc}. Now $I_1$ is contained in a basis $B_1$ and $I_2=B_2$ is a basis.
There exists a codimension $1$ subspace $A$ of $B_1$ that contains $I_1$, since $\dim I_1<\dim I_2=\dim B_1$. Furthermore, we can choose $A$ such that $B_1\cap I_2\subseteq A$.
Hence by (B3) there is a $1$-dimensional subspace $y$ of $I_2$ such that $A+y$ is a basis and $\dim(A+y)=\dim I_2$.
Now $y$ is not contained in $A$, since $\dim A = \dim I_2 -1$.
Therefore $I_1+y \subseteq A+y$ and $A+y$ is independent, so $I_1+y$ is independent and $y$ is not contained in $I_1$.\\
Finally, for (I4) we have the same reasoning as in part 1: (I4) is a re-formulation of (B4) in terms of independent spaces instead of bases. \\
$\bullet$ Part 3. Given a family of subspaces $\mathcal{I}$ satisfying (I1),(I2),(I3),(I4) create a family
$\mathcal{B}=\{B\in\mathcal{I}:\forall B^\prime\in\mathcal{I}, B\subseteq B^\prime\Rightarrow B^\prime=B\}$.
Then use $\mathcal{B}$ to create a (possibly new) family
$\mathcal{I}^\prime=\{I:\exists B\in\mathcal{B}, I\subseteq B \}$. We want to show that $\mathcal{I}^\prime=\mathcal{I}$. Let $I\in\mathcal{I}$. Since $E$ is finite dimensional, $I\subseteq B$ for some maximal member $B$ of $\mathcal{I}$, that is, $B\in\mathcal{B}$; so $I\in\mathcal{I}^\prime$. On the other hand, if $I^\prime\in\mathcal{I}^\prime$, then $I^\prime\subseteq B$ for some $B\in\mathcal{B}\subseteq\mathcal{I}$. By (I2), $I^\prime\in\mathcal{I}$, so $\mathcal{I}^\prime=\mathcal{I}$. \\
Given a family $\mathcal{B}$ satisfying (B1),(B2),(B3),(B4) create a family $\mathcal{I}=\{I:\exists B\in\mathcal{B}, I\subseteq B\}$. Then let $\mathcal{B}^\prime$ be the members of $\mathcal{I}$ of maximal dimension, that is,
$\mathcal{B}^\prime=\{B\in\mathcal{I}:\forall B^\prime\in\mathcal{I},B\subseteq B^\prime\Rightarrow B^\prime=B\}$. We will show that $\mathcal{B}^\prime=\mathcal{B}$. Let $B\in\mathcal{B}$ and suppose $B\notin\mathcal{B}^\prime$. Then $B$ is not a member of $\mathcal{I}$ of maximal dimension, so $B\subsetneq B^\prime$ for some $B^\prime\in\mathcal{I}$, which contradicts (B2'). Hence $B\in\mathcal{B}^\prime$. If $B^\prime\in\mathcal{B}^\prime$, then $B^\prime\in\mathcal{I}$, so $B^\prime\subseteq B$ for some $B\in\mathcal{B}$. Since $B^\prime$ is a member of $\mathcal{I}$ of maximal dimension and $B\in\mathcal{I}$ as well, we get $B^\prime=B\in\mathcal{B}$, so $\mathcal{B}^\prime=\mathcal{B}$.
\end{proof}
\section{Duality}
\begin{definition}
Let $M=(E,r)$ be a $q$-matroid and let
\[ r^*(A)=\dim A-r(M)+r(A^\perp) \]
be an integer-valued function defined on the subspaces of $E$. Then $M^*=(E,r^*)$ is the \emph{dual} of the $q$-matroid $M$.
\end{definition}
We need to show that this definition is well-defined, so that the dual of a $q$-matroid is again a $q$-matroid.
\begin{theorem}
The dual $q$-matroid is indeed a $q$-matroid, that is, the function $r^*$ satisfies (r1),(r2),(r3).
\end{theorem}
\begin{proof}
Let $A,B\subseteq E$. We start with proving (r2), so assume $A\subseteq B$. Then $B^\perp\subseteq A^\perp$. This means we can find $1$-dimensional subspaces $x_1,\ldots,x_k$ such that $A^\perp=B^\perp+x_1+\cdots+x_k$, where $k=\dim A^\perp-\dim B^\perp$. By applying Lemma \ref{unit-rank-increase} repeatedly, we find that
\[ r(A^\perp)\leq r(B^\perp)+k=r(B^\perp)+\dim A^\perp-\dim B^\perp. \]
We have the following equivalent statements:
\begin{eqnarray*}
r(A^\perp) & \leq & r(B^\perp)+\dim A^\perp-\dim B^\perp \\
r(A^\perp)-\dim A^\perp & \leq & r(B^\perp)-\dim B^\perp \\
r(A^\perp)+\dim E-\dim A^\perp & \leq & r(B^\perp)+\dim E-\dim B^\perp \\
r(A^\perp)+\dim A & \leq & r(B^\perp)+\dim B
\end{eqnarray*}
Then it follows that
\begin{eqnarray*}
r^*(A) & = & \dim A-r(M)+r(A^\perp) \\
& \leq & \dim B-r(M)+r(B^\perp) \\
& = & r^*(B)
\end{eqnarray*}
and we have proved (r2). For (r1), notice that $r^*(\mathbf{0})=0-r(M)+r(E)=0$ and by (r2) it follows that $0\leq r^*(A)$ for all $A\subseteq E$. The other inequality of (r1) is proved via
\begin{eqnarray*}
r^*(A) & = & \dim A-r(M)+r(A^\perp) \\
& \leq & \dim A-r(M)+r(M) \\
& = & \dim A.
\end{eqnarray*}
We show (r3) using the modularity of dimension and semimodularity of $r$:
\begin{eqnarray*}
\lefteqn{r^*(A+B)+r^*(A\cap B)} \\
& = & \dim(A+B)+\dim(A\cap B)-2\cdot r(E)+r((A+B)^\perp)+r((A\cap B)^\perp) \\
& = & \dim A+\dim B-2\cdot r(E)+r(A^\perp\cap B^\perp)+r(A^\perp+B^\perp) \\
& \leq & \dim A+\dim B-2\cdot r(E)+r(A^\perp)+r(B^\perp) \\
& = & r^*(A)+r^*(B).
\end{eqnarray*}
Now $r^*$ satisfies (r1),(r2),(r3), so we conclude that the dual $q$-matroid is indeed a $q$-matroid.
\end{proof}
\begin{remark}\label{r-dual}
In the definition of duality we use the orthogonal complement of a subspace, with respect to the standard inner product in $E$. We could, however, have chosen any nondegenerate bilinear form on $E$ to define duality. Choosing another inner product (bilinear form) will result in isomorphic duals.
\end{remark}
An easy consequence of the definition of duality is the following:
\begin{corollary}
The rank of the dual $q$-matroid is $\dim E-r(M)$.
\end{corollary}
\begin{proof}
We have $r^*(M)=r^*(E)=\dim E-r(M)+r(\mathbf{0})=\dim E-r(M)$ as was to be shown.
\end{proof}
We can also characterize the bases of the dual $q$-matroid.
\begin{theorem}
Let $M=(E,r)$ be a $q$-matroid with $\mathcal{B}$ as collection of bases, and let
\[ \mathcal{B}^*=\{B^\perp:B\in\mathcal{B}\}. \]
Then the dual $q$-matroid $M^*$ has $\mathcal{B}^*$ as collection of bases.
\end{theorem}
\begin{proof}
The following statements are equivalent:
\begin{center}
\begin{tabular}{c}
$B$ is a basis of $M^*$ \\
$r^*(B)=\dim B=r^*(E)$ \\
$r(B^\perp)=r(M)$ and $\dim B=\dim E-r(M)$ \\
$r(B^\perp)=r(M)$ and $r(M)=\dim B^\perp$ \\
$B^\perp$ is a basis of $M$ \\
\end{tabular}
\end{center}
This proves that $M^*=(E,\mathcal{B}^*)$.
\end{proof}
This is a straightforward consequence of the theorem above:
\begin{corollary}
Let $M$ be a $q$-matroid. Then $(M^*)^*=M$.
\end{corollary}
\begin{example}
Consider the uniform $q$-matroid $U_{r,n}$ from Example \ref{ex-uniform}. We know that its bases are all subspaces of dimension $r$. This means the bases of the dual are all subspaces of dimension $n-r$. Thus, the dual of $U_{r,n}$ is $U_{n-r,n}$.
\end{example}
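This duality can be checked directly with a short computation. The sketch below (our own encoding, with the standard inner product on $\mathbb{F}_2^3$) verifies that the dual rank function of $U_{2,3}$ coincides with the rank function of $U_{1,3}$, and that the bases of the dual are the orthogonal complements of the bases.

```python
from itertools import combinations

def span(gens):
    """F_2-span of bitmask vectors: close the generators under XOR."""
    s = {0}
    for g in gens:
        s |= {x ^ g for x in s}
    return frozenset(s)

def all_subspaces(n):
    subs = {span([])}
    for k in range(1, n + 1):
        for gens in combinations(range(1, 2 ** n), k):
            subs.add(span(gens))
    return subs

def dim(s):
    return len(s).bit_length() - 1

n, k = 3, 2
subs = all_subspaces(n)
r = lambda a: min(dim(a), k)                   # rank function of U_{2,3}

def perp(s):
    """Orthogonal complement for the standard inner product on F_2^n."""
    return frozenset(x for x in range(2 ** n)
                     if all(bin(x & y).count("1") % 2 == 0 for y in s))

rM = r(span([0b001, 0b010, 0b100]))            # r(E) = 2
rstar = lambda a: dim(a) - rM + r(perp(a))     # dual rank function

for a in subs:
    assert rstar(a) == min(dim(a), n - k)      # U_{2,3}^* = U_{1,3}

# bases of the dual are the orthogonal complements of the bases
bases = {a for a in subs if dim(a) == k}
bases_star = {a for a in subs if dim(a) == n - k and rstar(a) == n - k}
assert {perp(b) for b in bases} == bases_star
print("U_{2,3}^* = U_{1,3} verified")
```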
We have discussed in Section \ref{sec-codes} that rank metric codes give rise to $q$-matroids.
We show that the $q$-matroid associated to the dual code is the same as the dual of the $q$-matroid associated to the code.
\begin{theorem}\label{thm-dualmatroidC}
Let $K\subseteq L$ be a finite Galois field extension and let $C\subseteq L^n$ be a rank metric code.
Let $M(C)$ be the $q$-matroid associated to the code $C$. Then $M(C)^*=M(C^\perp)$.
\end{theorem}
\begin{proof}
We will show that both $q$-matroids have the same set of bases. Let $C$ be a $k$-dimensional rank metric code over $L$ and let $G$ be a generator matrix of $C$.
A basis of $M(C)^*$ is of the form $B^\perp$ where $B$ is a basis of $M(C)$. Pick such a basis $B$ of $M(C)$, then $r(B)=r(M)=\dim B=k$.
After a $K$-linear coordinate change of $K^n$ we may assume without loss of generality that $B$ has generator matrix $Y=(I_k | O)$. (See Berger \cite{berger:2003} for more details on rank metric equivalence.) \\
Let $G=(G_1|G_2)$, where $G_1$ consists of the first $k$ columns of $G$ and $G_2$ consists of the last $n-k$ columns of $G$.
Then $GY^T=G_1$ is a generator matrix of $C_B$. Now $\dim_L(C_B)=k$, since $B$ is a basis. So $C_B=L^k$.
Hence, after a base change of $C$ we may assume without loss of generality that $C$ has generator matrix $G'=(I_k | P)$.
Therefore $H=(-P^T|I_{n-k})$ is a parity check matrix of $C$ and a generator matrix of $C^\perp$.
Now $Z=(O|I_{n-k})$ is a generator matrix of $B^\perp$ and $HZ^T=I_{n-k}$ is a generator matrix of $(C^\perp)_{B^\perp}$.
So $(C^\perp)_{B^\perp}=L^{n-k}$ and $B^\perp$ is a basis of $M(C^\perp)$. Therefore $\mathcal{B}(M(C)^*)\subseteq\mathcal{B}(M(C^\perp))$. \\
The other inclusion follows from using duality and replacing $C$ by $C^\perp$, leading to the following equivalent statements:
\begin{center}
\begin{tabular}{c}
$\mathcal{B}(M(C)^*)\subseteq\mathcal{B}(M(C^\perp))$ \\
$\mathcal{B}^*(M(C)^*)\subseteq\mathcal{B}^*(M(C^\perp))$ \\
$\mathcal{B}(M(C))\subseteq\mathcal{B}(M(C^\perp)^*)$ \\
$\mathcal{B}(M(C^\perp))\subseteq\mathcal{B}(M(C)^*)$ \\
\end{tabular}
\end{center}
We conclude that $\mathcal{B}(M(C)^*)=\mathcal{B}(M(C^\perp))$ and hence $M(C)^*=M(C^\perp)$.
\end{proof}
\begin{corollary}
The minimum rank distance $d_R(C)$ of a rank metric code $C$ is determined by $M(C)$. Moreover, rank metric codes that give rise to the same $q$-matroid will have the same minimum rank distance.
\end{corollary}
\begin{proof}
This is a direct consequence of Proposition \ref{pJ} and Theorem \ref{thm-dualmatroidC}.
\end{proof}
\begin{example}
Let $L=\mathbb{F}_{8} $ and $K=\mathbb{F}_2 $. Let $a\in\mathbb{F}_{8}$ with $a^3=1+a$. Let $C^\perp$ be the rank metric code that is the dual of the code defined in Example \ref{ex-code}. It is generated by
\[ H=\left( \begin{array}{cccc}
a^2 & a & 1 & 0 \\
0 & 0 & 0 &1
\end{array}\right). \]
We have seen that $M(C)$ is the $q$-matroid we defined in Example \ref{ex-2}. Its bases are the $2$-dimensional subspaces of $E=\mathbb{F}_2^4$ that do not contain $\langle0001\rangle$. This means the bases of $M(C)^*$ are the $2$-dimensional subspaces of $E$ that do not have $\langle0001\rangle$ in their orthogonal complement. We check that these spaces are indeed the bases of $M(C^\perp)$. As argued in Example \ref{ex-code}, we need to show that there are no nonzero codewords of $C^\perp$ such that $\Rsupp(\mathbf{c})\subseteq B$, where $B$ is a basis of $M(C)$ (which is the orthogonal complement of a basis of $M(C)^*$). But $\Rsupp(\mathbf{c})$ for a nonzero word of $C^\perp$ either has dimension $3$, because $a^2$, $a$ and $1$ are linearly independent over $\mathbb{F}_2$, or is equal to $\langle0001\rangle$. In both cases we can not have that $\Rsupp(\mathbf{c})\subseteq B$. So we find that the bases of $M(C^\perp)$ are the same as the bases of $M(C)^*$, hence the two $q$-matroids are the same.
\end{example}
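For this pair of codes, Theorem \ref{thm-dualmatroidC} can also be confirmed by brute force. The sketch below uses our own encoding (field elements of $\mathbb{F}_8$ as bitmasks with $a^3=a+1$, vectors of $\mathbb{F}_2^4$ as $4$-bit masks): the bases of $M(C^\perp)$ computed from $H$ are exactly the orthogonal complements of the bases of $M(C)$ computed from $G$.

```python
from itertools import combinations

def span(gens):
    """F_2-span of bitmask vectors: close the generators under XOR."""
    s = {0}
    for g in gens:
        s |= {x ^ g for x in s}
    return frozenset(s)

def all_subspaces(n):
    subs = {span([])}
    for k in range(1, n + 1):
        for gens in combinations(range(1, 2 ** n), k):
            subs.add(span(gens))
    return subs

def dim(s):
    return len(s).bit_length() - 1

def perp(s, n=4):
    return frozenset(x for x in range(2 ** n)
                     if all(bin(x & y).count("1") % 2 == 0 for y in s))

def gf8_mul(x, y):
    """Multiply in F_8 = F_2[a]/(a^3 + a + 1); elements are 3-bit masks."""
    r = 0
    for i in range(3):
        if (y >> i) & 1:
            r ^= x << i
    for i in (4, 3):                       # reduce modulo a^3 + a + 1
        if (r >> i) & 1:
            r ^= 0b1011 << (i - 3)
    return r

def rsupp(c):
    """Rank support: row space of the 3x4 expansion of c over F_2."""
    rows = [sum(((c[i] >> j) & 1) << (3 - i) for i in range(4))
            for j in range(3)]
    return span(rows)

def bases_of(gen):
    """Bases of M(code): 2-dim J with no nonzero codeword support in J^perp."""
    supports = {rsupp(tuple(gf8_mul(l1, gen[0][i]) ^ gf8_mul(l2, gen[1][i])
                            for i in range(4)))
                for l1 in range(8) for l2 in range(8) if l1 or l2}
    return {J for J in all_subspaces(4) if dim(J) == 2
            and not any(s <= perp(J) for s in supports)}

G = [(1, 0b010, 0, 0), (0, 1, 0b010, 0)]   # rows (1,a,0,0), (0,1,a,0)
H = [(0b100, 0b010, 1, 0), (0, 0, 0, 1)]   # rows (a^2,a,1,0), (0,0,0,1)

assert {perp(B) for B in bases_of(G)} == bases_of(H)
print("M(C)^* = M(C^perp) verified for this example")
```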
We conclude this section with a definition we will need later.
\begin{definition}
A $1$-dimensional subspace that is not contained in the orthogonal complement of any basis is called an \emph{isthmus}.
\end{definition}
\begin{corollary}
Let $e$ be a loop of the $q$-matroid $M$. Then $e$ is an isthmus of the dual $q$-matroid $M^*$.
\end{corollary}
\section{Restriction and contraction}
\begin{definition}
Let $M=(E,r)$ be a $q$-matroid and let $H$ be a hyperplane of $E$ that contains at least one basis of $M$. Then the \emph{restriction} $M|_H$ is a $q$-matroid with ground space $H$ and rank function
\[ r_{M|_H}(A)=r_M(A) \]
defined on the subspaces $A\subseteq H$.
\end{definition}
Before proving that restriction is well defined, we make a remark on deletion. For ordinary matroids, deletion of an element $e$ is the same as restriction to the complement of $e$. For $q$-matroids, we could say that restriction to $H$ is the same as deletion of the $1$-dimensional subspace $e$ orthogonal to $H$. However, since $H$ might contain $e$, the term ``deletion of $e$'' is a bit misleading. Therefore we prefer to talk about restriction.
\begin{theorem}
The restriction $M|_H$ is indeed a $q$-matroid, that is, $r_{M|_H}$ satisfies (r1),(r2),(r3).
\end{theorem}
\begin{proof}
For all $A\subseteq H$, we have that $A\subseteq E$. Hence the function $r_{M|_H}$ inherits the properties (r1),(r2),(r3) directly from $r_M$. We conclude that $M|_H$ is indeed a $q$-matroid.
\end{proof}
\begin{definition}
Let $M=(E,r)$ be a $q$-matroid and let $e$ be a $1$-dimensional subspace of $E$ that is not a loop. Consider the projection $\pi:E\to E/e$. For every $A\subseteq E/e$, let $B$ be the unique subspace of $E$ such that $e\subseteq B$ and $\pi(B)=A$. Then the \emph{contraction} $M/e$ is a $q$-matroid with ground space $E/e$ and rank function
\[ r_{M/e}(A)=r_M(B)-1 \]
defined on the subspaces $A\subseteq E/e$.
\end{definition}
\begin{theorem}
The contraction $M/e$ is indeed a $q$-matroid, that is, $r_{M/e}$ satisfies (r1),(r2),(r3).
\end{theorem}
\begin{proof}
Note that $\dim B=\dim A+1$. Because $e\subseteq B$ and $e$ is not a loop, $r_M(B)\geq1$ hence $r_{M/e}(A)\geq0$. Since $r_M(B)\leq\dim B=\dim A+1$, we have $r_{M/e}(A)\leq\dim A$. This proves (r1). For (r2), let $A_1\subseteq A_2\subseteq E/e$ with corresponding $B_1,B_2\subseteq E$. Then $B_1\subseteq B_2$ so $r_M(B_1)\leq r_M(B_2)$, and it follows that $r_{M/e}(A_1)\leq r_{M/e}(A_2)$. \\
For (r3), take $A_1,A_2\subseteq E/e$ with corresponding $B_1,B_2\subseteq E$. Since $\pi$ preserves inclusion, we have that $\pi(B_1\cap B_2)=\pi(B_1)\cap\pi(B_2)=A_1\cap A_2$, and because $\pi$ is a homomorphism, we have that $\pi(B_1+B_2)=\pi(B_1)+\pi(B_2)=A_1+A_2$. Hence
\begin{eqnarray*}
r_{M/e}(A_1+A_2)+r_{M/e}(A_1\cap A_2) & = & r_M(B_1+B_2)-1+r_M(B_1\cap B_2) \\
& \leq & r_M(B_1)-1+r_M(B_2)-1 \\
& = & r_{M/e}(A_1)+r_{M/e}(A_2).
\end{eqnarray*}
This proves (r3). We conclude that $M/e$ is indeed a $q$-matroid.
\end{proof}
Before we give examples, we describe the independent spaces of restriction and contraction.
\begin{theorem}
Let $M=(E,r)$ be a $q$-matroid. Let $e$ be a $1$-dimensional subspace of $E$ and consider the projection $\pi:E\to E/e$. Then the independent spaces of the restriction to $e^\perp$ and the contraction of $e$ are given by
\begin{itemize}
\item Restriction: $\mathcal{I}(M|_{e^\perp})=\{I\in\mathcal{I}(M):I\subseteq e^\perp\}$
\item Contraction: $\mathcal{I}(M/e)=\{\pi(I):I\in\mathcal{I}(M),e\subseteq I\}$
\end{itemize}
\end{theorem}
\begin{proof}
For restriction this is quite clear: a subspace $I$ is independent in $M|_{e^\perp}$ if and only if $r_{M|_{e^\perp}}(I)=\dim I$. By definition of the restriction, this means $r_M(I)=\dim I$, that is, $I$ is independent in $M$.
For contraction, let $I$ be an independent subspace of $M/e$. Let $J$ be the unique subspace of $E$ such that $\pi(J)=I$ and $e\subseteq J$. Then we have that
\[ \dim I=r_{M/e}(I)=r_M(J)-1\leq \dim J-1=\dim I, \]
so equality must hold everywhere. Hence $r_M(J)=\dim J$ and $J$ is independent in $M$.
\end{proof}
\begin{example}\label{ex-uniform-delcon}
Let $U_{k,n}$ be the uniform $q$-matroid of Example \ref{ex-uniform} and let $e$ be a $1$-dimensional subspace of $E$. Then the restriction $U_{k,n}|_{e^\perp}$ has as independent spaces all subspaces of dimension at most $k$ that are contained in $e^\perp$. So $U_{k,n}|_{e^\perp}=U_{k,n-1}$ for any $e$. The contraction $U_{k,n}/e$ has as independent subspaces all subspaces of dimension at most $k$ containing $e$, mapped to $E/e$. This gives all subspaces in $E/e$ of dimension at most $k-1$. So $U_{k,n}/e=U_{k-1,n-1}$ for any $e$.
\end{example}
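The dimension counts in this example can be checked by brute force in a small case. The following sketch (our own illustration, not part of the formal development) enumerates all subspaces of $\mathbb{F}_2^4$, takes $e=\langle(0,0,0,1)\rangle$, and verifies that the independent spaces of the restriction and contraction of $U_{2,4}$ are counted by the Gaussian binomials belonging to $U_{2,3}$ and $U_{1,3}$ respectively:

```python
from itertools import product

q, n, k = 2, 4, 2                      # toy case: U_{2,4} over F_2
ZERO = (0,) * n

def add(u, v):
    return tuple((a + b) % q for a, b in zip(u, v))

def extend(S, v):
    # over F_2, the span of S and <v> is S together with the coset S + v
    return frozenset(S | {add(s, v) for s in S})

# enumerate all subspaces of F_2^n by repeatedly adjoining vectors
vectors = [v for v in product(range(q), repeat=n) if v != ZERO]
subspaces = {frozenset([ZERO])}
frontier = set(subspaces)
while frontier:
    new = set()
    for S in frontier:
        for v in vectors:
            if v not in S:
                T = extend(S, v)
                if T not in subspaces:
                    subspaces.add(T)
                    new.add(T)
    frontier = new

def dim(S):
    return len(S).bit_length() - 1     # |S| = 2^dim(S)

def gauss(m, d):
    # Gaussian binomial [m choose d]_2: number of d-dim subspaces of F_2^m
    num = den = 1
    for i in range(d):
        num *= q ** (m - i) - 1
        den *= q ** (i + 1) - 1
    return num // den

e = frozenset([ZERO, (0, 0, 0, 1)])
e_perp = frozenset(v for v in [ZERO] + vectors if v[-1] == 0)

# restriction U_{2,4}|_{e^perp}: independent spaces contained in e^perp
for d in range(k + 1):
    assert sum(1 for S in subspaces if dim(S) == d and S <= e_perp) \
        == gauss(n - 1, d)             # counts of U_{2,3}

# contraction U_{2,4}/e: pi(I) for independent I containing e;
# dim pi(I) = dim I - 1, so we count spaces of dim d+1 containing e
for d in range(k):
    assert sum(1 for S in subspaces if dim(S) == d + 1 and e <= S) \
        == gauss(n - 1, d)             # counts of U_{1,3}
print("verified: restriction matches U_{2,3}, contraction matches U_{1,3}")
```

The names `extend`, `gauss` and the vector-set representation of subspaces are our own choices for this toy check.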
\begin{example}
Let $M$ be the matroid of Example \ref{ex-2} and let $e=\langle(0,0,0,1)\rangle$. Then we cannot contract $e$, since it is a loop. But we can restrict to $e^\perp$. The independent spaces that are contained in $e^\perp$ cannot contain $e$, because $e$ is not contained in $e^\perp$. This means all subspaces of $e^\perp$ of dimension $2$ or less are independent in the restriction, hence $M|_{e^\perp}$ is the uniform matroid $U_{2,3}$.
\end{example}
From now on, we will always assume that we never restrict to a hyperplane that does not contain a basis, nor contract loops. So if we talk about $M|_{e^\perp}$ we will assume $e$ is not an isthmus, and if we talk about $M/e$ we assume $e$ is not a loop. The following observations are needed to prove that restriction and contraction are dual operations:
\begin{remark}
Since $e^\perp$ and $E/e$ are both vector spaces over the same field and of the same dimension $\dim E-1$, they are isomorphic. We construct an explicit isomorphism as follows. Recall that all subspaces of $E/e$ can be obtained as $\pi(A)$ with $A\subseteq E$ and $e\subseteq A$. This gives an isomorphism between the subspaces of $E$ that contain $e$ and the subspaces of $E/e$. On the other hand, for a subspace $A$ that contains $e$ we can take the orthogonal complement of $e$ inside $A$ by restricting the inner product of $E$ to $A$. The result lies in $e^\perp$.
\end{remark}
\begin{definition}
We denote by $\varphi:E/e\to e^\perp$ the isomorphism taking $\pi(A)$ to the orthogonal complement of $e$ in $A$. On $e^\perp$ we have a canonical inner product, which is the restriction of the inner product of $E$. We denote it by $\langle\mathbf{x},\mathbf{y}\rangle_{e^\perp}$. Via $\varphi$, we can use it to define an inner product (bilinear form) $\langle\mathbf{x},\mathbf{y}\rangle_{E/e}$ on $E/e$ given by $\langle\mathbf{x},\mathbf{y}\rangle_{E/e}=\langle\varphi(\mathbf{x}),\varphi(\mathbf{y})\rangle_{e^\perp}$.
\end{definition}
\begin{theorem}
Let $M$ be a $q$-matroid and let $e\subseteq E$ be neither a loop nor an isthmus. Then restriction and contraction are dual notions, that is, $M^*/e=(M|_{e^\perp})^*$ and $(M/e)^*=M^*|_{e^\perp}$.
\end{theorem}
\begin{proof}
First, recall from Remark \ref{r-dual} that duality does not depend on the chosen inner product. We have the following equivalent statements:
\begin{center}
\begin{tabular}{c}
$\pi(B)\subseteq E/e$ is a basis of $M^*/e$ \\
$B\subseteq E$ is a basis of $M^*$ and $e\subseteq B$ \\
$B^\perp\subseteq E$ is a basis of $M$ and $B^\perp\subseteq e^\perp$ \\
$B^\perp\subseteq e^\perp$ is a basis of $M|_{e^\perp}$ \\
\end{tabular}
\end{center}
On the other hand, we have that $\varphi(\pi(B))\subseteq e^\perp$ and this is the orthogonal complement of $B^\perp$ in $e^\perp$. This shows that $\varphi$ maps bases of $M^*/e$ to bases of $(M|_{e^\perp})^*$, hence $M^*/e=(M|_{e^\perp})^*$. For the other equality, apply duality and replace $M$ by $M^*$ to get the following equivalent statements:
\begin{eqnarray*}
M^*/e & = & (M|_{e^\perp})^* \\
(M^*/e)^* & = & M|_{e^\perp} \\
(M/e)^* & = & M^*|_{e^\perp}
\end{eqnarray*}
This proves that restriction and contraction are dual operations.
\end{proof}
\section{Towards more cryptomorphisms}\label{sec-morecrypt}
An important strength of ordinary matroids is that they have so many cryptomorphic definitions. For $q$-matroids we already saw definitions in terms of the rank function, independent spaces, and bases. We also saw that taking the $q$-analogue of two cryptomorphic definitions of a matroid can result in statements that are not cryptomorphic. In this section we lay some groundwork for more cryptomorphisms.
\subsection{Circuits}
\begin{definition}
Let $M=(E,\mathcal{I})$ be a $q$-matroid and let $C\subseteq E$. Then $C$ is a \emph{circuit} of $M$
if $C$ is a dependent subspace of $E$ and every proper subspace of $C$ is independent.
\end{definition}
\begin{example}
Let $U_{k,n}$ be the uniform $q$-matroid of Example \ref{ex-uniform}. Its circuits are all the subspaces of $E$ of dimension $k+1$.
\end{example}
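For a small case this can be verified directly from the definition. The sketch below (our own illustration) enumerates the subspaces of $\mathbb{F}_2^3$ and checks that the circuits of $U_{1,3}$, i.e.\ the dependent spaces all of whose proper subspaces are independent, are exactly the $2$-dimensional subspaces:

```python
from itertools import product

q, n, k = 2, 3, 1                      # toy case: U_{1,3} over F_2
ZERO = (0,) * n

def add(u, v):
    return tuple((a + b) % q for a, b in zip(u, v))

def extend(S, v):
    # over F_2, the span of S and <v> is S together with the coset S + v
    return frozenset(S | {add(s, v) for s in S})

vectors = [v for v in product(range(q), repeat=n) if v != ZERO]
subspaces = {frozenset([ZERO])}
frontier = set(subspaces)
while frontier:
    new = set()
    for S in frontier:
        for v in vectors:
            if v not in S:
                T = extend(S, v)
                if T not in subspaces:
                    subspaces.add(T)
                    new.add(T)
    frontier = new

def dim(S):
    return len(S).bit_length() - 1     # |S| = 2^dim(S)

def independent(S):
    return dim(S) <= k                 # in U_{k,n}: r(A) = min(dim A, k)

def is_circuit(S):
    # dependent, with every proper subspace independent
    return (not independent(S)
            and all(independent(T) for T in subspaces if T < S))

circuits = {S for S in subspaces if is_circuit(S)}
assert circuits == {S for S in subspaces if dim(S) == k + 1}
print("number of circuits of U_{1,3}:", len(circuits))
```

The count agrees with the Gaussian binomial $\genfrac{[}{]}{0pt}{}{3}{2}_2=7$.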
\begin{example}
Let $M$ be the $q$-matroid of Example \ref{ex-2}. Its circuits are the $3$-dimensional spaces not containing $\langle0001\rangle$ and the $2$-dimensional spaces that do contain $\langle0001\rangle$.
\end{example}
The circuits of a $q$-matroid satisfy the following properties.
\begin{theorem}
Let $M=(E,\mathcal{I})$ be a $q$-matroid and $\mathcal{C}$ its family of circuits. Then $\mathcal{C}$ satisfies:
\begin{itemize}
\item[(C1)] $\mathbf{0}\notin\mathcal{C}$
\item[(C2)] If $C_1,C_2\in\mathcal{C}$ and $C_1\subseteq C_2$, then $C_1=C_2$.
\item[(C3)] If $C_1,C_2\in\mathcal{C}$ distinct and $x\subseteq C_1\cap C_2$ a $1$-dimensional subspace, then there is a $C_3\subseteq C_1+C_2$ with $x\not\subseteq C_3$ so that $C_3\in\mathcal{C}$.
\end{itemize}
\end{theorem}
\begin{proof}
Since $\mathbf{0}$ is independent by (I1'), it is not a circuit and thus (C1) holds. (C2) follows from the definition of a circuit. \\
To show (C3), let $C_1,C_2\in\mathcal{C}$ with nontrivial intersection. The space $C_1+C_2$ is dependent, since it contains $C_1$ and $C_2$, so it has to contain at least one circuit. We have to prove that for a $1$-dimensional $x\subseteq C_1\cap C_2$ we have such a circuit that trivially intersects $x$. Consider a codimension $1$ subspace $D$ of $C_1+C_2$ that does not contain $x$. Then $\dim D=\dim(C_1+C_2)-1$. Assume that $D$ is independent. \\
Now we know that $C_1-C_2$ cannot be empty, because then $C_1\subseteq C_2$, which violates (C2). Similarly, $C_2-C_1$ is nonempty. Let $X\subseteq C_1$ be a subspace of codimension $1$ with $C_1\cap C_2\subseteq X$. Such an $X$ exists because $C_1-C_2$ is nonempty. $X$ is independent, because it is a proper subspace of a circuit. Use (I3) multiple times to extend $X$ to a maximal independent space in $C_1+C_2$, call it $Y$. Now $Y$ contains $C_1\cap C_2$, but it does not contain all of $C_1$ or $C_2$ by construction. So $\dim Y\leq(\dim C_1-1)+(\dim C_2-1)-\dim(C_1\cap C_2)\leq\dim(C_1+C_2)-2$. \\
We now have two independent spaces in $C_1+C_2$: $D$ and $Y$. But $\dim Y<\dim D$ contradicts the maximality of $Y$. So $D$ has to be dependent and we can find a circuit $C_3\subseteq D$ with $x\not\subseteq C_3$. This proves (C3).
\end{proof}
We can already say that these three properties (C1),(C2),(C3) will not be enough to determine a $q$-matroid, for the same reasons as mentioned in Remark \ref{r-axioms}. If we take the family of circuits of a $q$-matroid and embed them in a space of higher dimension, then the properties (C1),(C2),(C3) still hold, but Lemma \ref{loopsum} fails.
\subsection{Closure}
\begin{definition}
Let $M=(E,r)$ be a $q$-matroid. For all subspaces $A\subseteq E$ we define the \emph{closure} of $A$ as
\[ \cl(A)=\bigcup\{x\subseteq E: r(A+x)=r(A)\}. \]
So $\cl$ is a function from the subspaces of $E$ to the subspaces of $E$. If a subspace is equal to its closure, we call it a \emph{flat}.
\end{definition}
Note that the closure is in fact a subspace, by Proposition \ref{p-rank2} and (r3).
\begin{example}
Let $U_{k,n}$ be the uniform $q$-matroid of Example \ref{ex-uniform}. All subspaces of dimension at most $k-1$ are flats, since adding a $1$-dimensional subspace will increase the rank. The closure of a basis is the whole space $E$ -- in fact, this is true for any $q$-matroid.
\end{example}
\begin{example}
Let $M$ be the $q$-matroid of Example \ref{ex-2}. To find the closure of a $1$-dimensional space, we can always add the loop $\langle0001\rangle$.
\end{example}
The closure satisfies the following properties.
\begin{theorem}
Let $M=(E,r)$ be a $q$-matroid and $\cl$ its closure. Then $\cl$ satisfies for all $A,B\subseteq E$ and $1$-dimensional subspaces $x,y\subseteq E$:
\begin{itemize}
\item[(cl1)] $A\subseteq\cl(A)$
\item[(cl2)] If $A\subseteq B$ then $\cl(A)\subseteq\cl(B)$.
\item[(cl3)] $\cl(A)=\cl(\cl(A))$
\item[(cl4)] If $y\subseteq\cl(A+x)$ and $y\not\subseteq\cl(A)$, then $x\subseteq\cl(A+y)$.
\end{itemize}
\end{theorem}
\begin{proof}
Property (cl1) follows directly from the definition of closure. For (cl2), assume $A\subseteq B$. By (r3) we have that
\[ r(\cl(A)+B)+r(\cl(A)\cap B)\leq r(\cl(A))+r(B)=r(A)+r(B). \]
Because $A\subseteq\cl(A)\cap B$ we have by (r2) that $r(A)\leq r(\cl(A)\cap B)$. Combining gives that $r(\cl(A)+B)\leq r(B)$. On the other hand, $B\subseteq\cl(A)+B$, hence (r2) gives that $r(B)\leq r(\cl(A)+B)$. It follows that equality must hold, so $r(B)=r(\cl(A)+B)$ and therefore $B+\cl(A)\subseteq\cl(B)$. Finally, since $\cl(A)\subseteq\cl(A)+B$, it follows that $\cl(A)\subseteq\cl(B)$. \\
We prove (cl3) by proving the two inclusions. From (cl1) it follows that $\cl(A)\subseteq\cl(\cl(A))$. For the other inclusion, let $x\subseteq\cl(\cl(A))$ be a $1$-dimensional subspace. Then we have $r(\cl(A)+x)=r(\cl(A))=r(A)$. But by (r2), we have $r(\cl(A)+x)\geq r(A+x)\geq r(A)$, so equality must hold throughout this statement. It follows that $x\subseteq\cl(A)$, hence $\cl(\cl(A))\subseteq\cl(A)$ and (cl3) is proved. \\
To prove (cl4), let $y\subseteq\cl(A+x)$ and $y\not\subseteq\cl(A)$. Then $r(A+x+y)=r(A+x)$ and $r(A+y)\neq r(A)$, so by Lemma \ref{unit-rank-increase} it follows that $r(A+y)=r(A)+1$. We have that
\[ r(A)+1=r(A+y)\leq r(A+y+x)=r(A+x)\leq r(A)+1, \]
so equality must hold everywhere. This means $r(A+y)=r(A+y+x)$, hence $x\subseteq\cl(A+y)$ and we have proved (cl4).
\end{proof}
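As a sanity check, the properties (cl1)--(cl4) can be verified exhaustively for a small uniform $q$-matroid. The following sketch (our own illustration) computes the closure in $U_{1,3}$ over $\mathbb{F}_2$, identifying a $1$-dimensional subspace with its nonzero vector, and tests all four properties by enumeration:

```python
from itertools import product

q, n, k = 2, 3, 1                      # toy case: U_{1,3} over F_2
ZERO = (0,) * n

def add(u, v):
    return tuple((a + b) % q for a, b in zip(u, v))

def extend(S, v):
    # over F_2, the span of S and <v> is S together with the coset S + v
    return frozenset(S | {add(s, v) for s in S})

vectors = [v for v in product(range(q), repeat=n) if v != ZERO]
subspaces = {frozenset([ZERO])}
frontier = set(subspaces)
while frontier:
    new = set()
    for S in frontier:
        for v in vectors:
            if v not in S:
                T = extend(S, v)
                if T not in subspaces:
                    subspaces.add(T)
                    new.add(T)
    frontier = new

def dim(S):
    return len(S).bit_length() - 1

def rank(S):
    return min(dim(S), k)              # rank function of U_{k,n}

def cl(S):
    # closure: adjoin every 1-dim <v> with r(S + <v>) = r(S);
    # by the remark after the definition, the result is a subspace
    C = S
    for v in vectors:
        if rank(extend(S, v)) == rank(S):
            C = extend(C, v)
    return C

for A in subspaces:
    assert A <= cl(A)                              # (cl1)
    assert cl(cl(A)) == cl(A)                      # (cl3)
    for B in subspaces:
        if A <= B:
            assert cl(A) <= cl(B)                  # (cl2)
    for x in vectors:                              # (cl4)
        for y in vectors:
            if y in cl(extend(A, x)) and y not in cl(A):
                assert x in cl(extend(A, y))
print("(cl1)-(cl4) verified for U_{1,3} over F_2")
```

In this toy case the only flats are the zero space and $E$ itself, consistent with the uniform example above.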
It is not known if these properties (cl1),(cl2),(cl3),(cl4) are enough to completely determine a $q$-matroid.
\section{Further research directions}
We have established the definitions and several basic properties of $q$-matroids. However, this is just the beginning of the research: potentially, all that is known about matroids could have a $q$-analogue. In this section we make a modest (and somewhat personal) wish-list on where to go next with the research in $q$-matroids. \\
At a late stage, we learned about the work of Crapo \cite{crapo:1964} on a very closely related topic. Defining an ordinary matroid by its rank function can be viewed as assigning a rank to every element of a Boolean lattice, in such a way that the following properties hold:
\begin{itemize}
\item[(r1)] $0\leq r(A)\leq h(A)$
\item[(r2)] If $A\leq B$, then $r(A)\leq r(B)$.
\item[(r3)] $r(A\vee B)+r(A\wedge B)\leq r(A)+r(B)$
\end{itemize}
Here $h(A)$ is the height of $A$ in the Boolean lattice, that is, the size of the subset. Join and meet in the Boolean lattice correspond to union and intersection. In this work, we assign a rank with the same properties to every element in a (finite) subspace lattice. The height of an element in the subspace lattice is its dimension, and the equivalents of join and meet are sum and intersection. The work of Crapo generalises this idea: it turns out that for every complemented modular lattice, one can give the elements a rank function that satisfies the above properties. \\
So, this work on $q$-matroids can be viewed as a special case of the work of Crapo. Where Crapo's motivation and point of view are much more combinatorial, our work relies heavily on linear algebra and therefore might not be easily generalised. We strongly believe that a combination of the two approaches can greatly benefit the study of $q$-matroids. \\
There are many more ways to define matroids that probably have a $q$-analogue, for example in terms of circuits, flats, hyperplanes, or the closure function. First steps in this direction were taken in Section \ref{sec-morecrypt}. Another property of matroids that could have a $q$-analogue is that of connectivity and the direct sum. Special properties of matroids for which we want to decide whether there is a $q$-analogue include Pappus, Desargues and V\'amos. \\
The motivation to study $q$-matroids comes from rank metric codes. There is a link between the weight enumerator of a linear code (in the Hamming metric) and the Tutte polynomial of the associated matroid. It can be established via the function $l(J)$. Can we do the same for $q$-matroids and rank metric codes? \\
To answer this question, we must first find the right definition of the Tutte polynomial. Originally, it was defined in terms of internal and external activity of bases of a matroid. It seems not so easy to do the same for $q$-matroids. A better place to start would be the rank generating polynomial:
\[ R_M(X,Y)=\sum_{A\subseteq E}X^{r(E)-r(A)}Y^{|A|-r(A)}. \]
First notice that in order to get a finite sum, we need $E$ to be a vector space over a finite field -- or maybe we need a different definition to begin with. In the case of a finite field the formula above has a straightforward $q$-analogue: just replace $|A|$ with $\dim A$. For normal matroids, this polynomial is equivalent to the Tutte polynomial. Greene \cite{greene:1976} was the first to prove the link between the Tutte polynomial and the weight enumerator. He used that both behave the same under deletion and contraction. How would that work in $q$-matroids? This is by no means straightforward. In ordinary matroids, we have that $|\mathcal{B}(M-e)|+|\mathcal{B}(M/e)|=|\mathcal{B}(M)|$, which can be used to show the relation between the Tutte polynomials (hence rank generating polynomials) of $M$, $M-e$ and $M/e$. For $q$-matroids, life is less pretty. $\mathcal{B}(M|_{e^\perp})$ comes from the bases of $M$ that are contained in $e^\perp$ while $\mathcal{B}(M/e)$ comes from the bases of $M$ that contain $e$. Because of self-duality in finite vector spaces, these families are not disjoint and also together they do not have to give all bases of $M$. \\
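To make the $q$-analogue just described concrete: for a uniform $q$-matroid the rank of a subspace depends only on its dimension, so the rank generating polynomial can be computed by grouping subspaces by dimension via Gaussian binomial coefficients. The sketch below (our own illustration; the function names are ours) does this and recovers, for $U_{1,2}$ over $\mathbb{F}_2$, the polynomial $X+3+Y$:

```python
def gauss(m, d, q):
    # Gaussian binomial [m choose d]_q: number of d-dim subspaces of F_q^m
    num = den = 1
    for i in range(d):
        num *= q ** (m - i) - 1
        den *= q ** (i + 1) - 1
    return num // den

def rank_gen_poly_uniform(k, n, q):
    """Coefficients {(i, j): c} of R(X, Y) = sum_A X^{r(E)-r(A)} Y^{dim A - r(A)}
    for the uniform q-matroid U_{k,n} over F_q, where r(A) = min(dim A, k)."""
    coeffs = {}
    for d in range(n + 1):
        r = min(d, k)
        key = (k - r, d - r)
        coeffs[key] = coeffs.get(key, 0) + gauss(n, d, q)
    return coeffs

# U_{1,2} over F_2: one 0-dim subspace, three 1-dim subspaces and E itself,
# giving R(X, Y) = X + 3 + Y
assert rank_gen_poly_uniform(1, 2, 2) == {(1, 0): 1, (0, 0): 3, (0, 1): 1}
```

The sum of all coefficients equals the total number of subspaces of $E$, as it must.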
Another question regarding the Tutte polynomial, that looks easier to solve, is how it behaves under duality. \\
For linear error-correcting codes and matroids, the notions of puncturing and shortening of codes generalize to deletion and contraction in matroids. For rank metric codes, the operations of puncturing and shortening are studied in \cite{martinez-penas:2016}. Linking the notions of restriction and contraction of $q$-matroids and puncturing and shortening in rank metric codes should help to find a $q$-analogue for the proof of Greene \cite{greene:1976} of the link between the Tutte polynomial and the weight enumerator. \\
We can consider $q$-matroids that arise from rank metric codes as \emph{representable}, analogous to the case of normal matroids. Are all $q$-matroids representable? A big difference between normal matroids and $q$-matroids is that all uniform $q$-matroids are representable by MRD codes, as we have seen in Example \ref{ex-uniform-MRD}, and MRD codes are known to exist for all parameters over finite fields \cite{gabidulin:1985}, in characteristic zero \cite{augot:2013}, as well as over rational function fields \cite{augot:2014}. \\
A very important reason why matroids are studied extensively is that they are generalizations of many objects in discrete mathematics. It is interesting to see if this holds for $q$-matroids as well. It is known \cite{deza:1992} that Steiner systems give matroids, so-called \emph{perfect matroid designs}: these are matroids where all flats of the same rank have the same size. Do $q$-ary Steiner systems, the $q$-analogue of Steiner systems, also give us a special kind of $q$-matroid? Currently, there is only one $q$-ary Steiner system known \cite{braun:2013}. Perfect matroid designs have been used to construct new Steiner systems. If a $q$-analogue of a perfect matroid design exists, it provides a new tool in the search for $q$-ary Steiner systems. \\
Matroids generalize graphs, and graphs are an important class of matroids. For $q$-matroids, it is not clear if they generalize a $q$-analogue of a graph. We would expect that if such an analogy exists, it follows directly from the notion of circuits of $q$-matroids. There are some results about $q$-Kneser graphs, see for example \cite{mussche:2009}, which are the $q$-analogues of Kneser graphs. But these $q$-Kneser graphs are still ``ordinary'' graphs, so it is unlikely that they play the same role for $q$-matroids as graphs do for matroids. \\
To summarize, we think that one should study $q$-matroids for the same reasons one should study matroids. There are a lot of problems and questions regarding $q$-matroids waiting for interested researchers.
\section*{Acknowledgement}
This paper has been ``work in progress'' for quite some time. The authors would like to thank all colleagues who have discussed the subject with us over time, specifically the members of the COST action ``Random network coding and designs over GF(q)'' and the participants of the 2016 International Workshop on Structure in Graphs and Matroids. We are grateful to Henry Crapo for sending us his thesis \cite{crapo:1964} and sharing his approach on the subject.
\bibliographystyle{plain}
\bibliography{qmatroid}
\end{document} | 130,635 |
\begin{document}
\maketitle
\begin{center}
\rule{\textwidth}{.8pt}
\textbf{Abstract}
\end{center}
\begin{abstract}
\small We prove the Manin--Peyre equidistribution principle for smooth projective split toric varieties over the rational numbers. That is, rational points of bounded anticanonical height outside of the boundary divisors are equidistributed with respect to the Tamagawa measure on the adelic space.
Based on a refinement of the universal torsor method due to Salberger, we provide asymptotic formulas with effective error terms for counting rational points in arbitrary adelic neighbourhoods, and develop an Ekedahl-type geometric sieve for such toric varieties. Several applications to counting rational points satisfying infinitely many local conditions are derived.
\end{abstract}
\rule{\textwidth}{1pt}
\tableofcontents
\section{Introduction}
\subsection{Empiricism and main result}
The \emph{strong approximation} property\footnote{See e.g. \cite[\S2.7]{Wittenberg} and \cite[\S1]{Cao-Huang1} for background material on this property.} measures the density of rational points on an algebraic variety over a number field inside its adelic space. When such a property is satisfied, a quantitative refinement is to count points satisfying appropriate bounded height conditions, which measures how fast the rational points grow to ``fill up'' an arbitrary adelic neighbourhood. If an asymptotic formula exists, then we expect that the leading constant should be a sum of products of local densities, each one being defined in terms of the integral with respect to a certain volume form.
Around the 1980s, Manin and his collaborators (see \cite{Bat-Manin,F-M-T}) put forward asymptotic formulas for counting rational points of bounded height on projective algebraic varieties, together with geometric interpretations of the order of magnitude.
Let $V$ be an ``almost-Fano'' variety (see Definition \ref{def:almostFano}) over a number field $k$. Let $H:V(k)\to\BR_{>0}$ be a height function induced by a fixed choice of adelic metrics associated to the anticanonical line bundle $-K_V$ (see \S\ref{se:adelicmetrics}). Then there should exist a Zariski dense open subset $U\subset V$ and $c_V>0$, such that as $B\to \infty$,
\begin{equation}\label{eq:Maninconj}
\#\{P\in U(k):H(P)\leqslant B\}\sim c_V B(\log B)^{r-1},
\end{equation} where once and for all, $r:=\operatorname{rank}(\Pic(V))$, and we expect $c_V$ to be a product of local densities (if there is no Brauer--Manin obstruction to weak approximation). We would like to stress that it is usually necessary to restrict to such an open subset, otherwise the number of rational points in the complement could dominate those on the whole variety.
In \cite{Peyre}, Peyre refined and generalised Manin's conjecture \eqref{eq:Maninconj} to the following equidistribution principle of rational points in adelic spaces. This in particular gives a conjectural description of the leading constant $c_V$. Let $\omega^V$ be the Tamagawa measure on the adelic space $V(\RA_k)$ induced by the fixed adelic metric on $V$ (see \eqref{eq:Tamagawameas}). For every infinite subset $\CM\subset V(k)$, we define the counting measure \begin{equation}\label{eq:countingmeasure}
\delta_{\CM_{\leqslant B}}=\sum_{P\in \CM:H(P)\leqslant B} \delta_P\footnote{Note that our definition of counting measures differs slightly from \cite[Definition 3.8]{PeyreBeyond}.}
\end{equation} on $V(\RA_\BQ)$. We can now state the equidistribution principle in the following form.
\begin{principle}[The Manin--Peyre equidistribution principle]\label{prin:Manin-Peyre}
There exists a thin subset $M\subset V(k)$ such that, as $B\to \infty$, the normalised sequence of counting measures
$$\frac{1}{B (\log B)^{r-1}}\delta_{(V(k)\setminus M)_{\leqslant B}} $$ on $V(\RA_k)$ converges \emph{vaguely} to the measure $\alpha(V)\beta(V)\omega^V$\footnote{i.e., for every continuous function with compact support $f: V(\RA_k)\to\BR$, we have as $B\to\infty$, $$(B (\log B)^{r-1})^{-1}\int_{V(\RA_k)}f\operatorname{d} \delta_{(V(k)\setminus M)_{\leqslant B}} \to \alpha(V)\beta(V)\int_{V(\RA_k)}f\operatorname{d}\omega^V.$$ Since $V(\RA_k)$ is compact, this is equivalent to the weak convergence. See however the footnote of Principle \ref{prin:purity}.}, where $\alpha(V)$ is related to the cone of pseudo-effective divisors $\operatorname{Eff}(\overline{V})$, and $\beta(V)$ is related to the first Galois cohomology of the geometric Picard group $\Pic(\overline{V})$ (see $\S$\ref{se:heightsTamagawa}).
\end{principle}
It was also pointed out by Peyre \cite[\S5]{Peyre} that the validity of Principle \ref{prin:Manin-Peyre} for \emph{at least one} choice of adelic metrics on $-K_V$ is equivalent to its validity for \emph{any} choice of adelic metrics on $-K_V$.
Since the removal of a codimension two closed subset does not introduce new cohomological obstruction to strong approximation (as first observed by Min\v{c}hev \cite{Minchev}), inspired by a question of Wittenberg (see \cite[Question 2.11]{Wittenberg}) on \emph{(arithmetic) purity of strong approximation} (APSA) (see \S\ref{se:adelicspaces}) and its recent progress on linear algebraic groups and their homogeneous spaces (see \cite{Wei,Cao-Liang-Xu,Cao-Huang1}), it is therefore natural to extend Principle \ref{prin:Manin-Peyre} to open subvarieties of any fixed almost-Fano variety. We keep using the same notation as above.
\begin{principle}[Purity of equidistribution]\label{prin:purity}
Let $V$ be an almost-Fano variety over a number field $k$. Suppose that $V$ satisfies the Manin--Peyre equidistribution principle \ref{prin:Manin-Peyre}, upon removing a thin subset $M\subset V(k)$. Then the equidistribution principle (with the removal of $M$) also holds for any Zariski open dense subset of $V$ whose complement has codimension at least two.
That is, let $W\subset V$ be such an open subset. Then as $B\to\infty$, the normalised sequence of counting measures
\begin{equation}\label{eq:seqW}
\frac{1}{B (\log B)^{r-1}}\delta_{(W(k)\setminus M)_{\leqslant B}}
\end{equation} on $W(\RA_k)$ converges \emph{vaguely} to the measure $\alpha(V)\beta(V)\omega^V|_W$, where $\omega^V|_W$ is the \emph{restriction} of the Tamagawa measure $\omega^V$ to $W(\RA_k)$. \footnote{We note that the topology of $W(\RA_k)$ is in general not the one induced by $V(\RA_k)$. See \S\ref{se:adelicspaces}. It is in general only locally compact. But the restriction of $\omega^V$ to $W(\RA_k)$ is well-defined, cf. Proposition \ref{prop:restTmeasure}.}
\end{principle}
Let us now compare Principle \ref{prin:purity} with \emph{arithmetic purity of the Hardy--Littlewood property} in the setting of affine varieties, which was first initiated in \cite[Question 1.1]{Cao-Huang2} and inspired by the work of Borovoi and Rudnick \cite{Borovoi-Rudnick}. For simplicity assume that $k=\BQ$. As seen from the discreteness of the embedding $\BQ\hookrightarrow \RA_\BQ$, in most cases, rational points on a $\BQ$-quasi-affine variety $Y$ can only be dense in $Y(\RA_\BQ^f)\subset \prod_{\nu\neq\infty} Y(\BQ_\nu)$, the adelic space \emph{off infinity} (see \cite[Definition 1.3]{Cao-Huang1}). Therefore, we expect (see \cite[Definition 1.2]{Cao-Huang2}) that rational points in each connected component of $Y(\BR)$ should be equidistributed in $Y(\RA_\BQ^f)$ with respect to the finite part of the Tamagawa measure (together with a certain density function measuring the failure of strong approximation). So the biggest difference between the affine setting and ours is that, in the latter, we consider the ``full'' adelic space $\prod_{\nu\in\operatorname{Val}(\BQ)}Y(\BQ_\nu)$ of local points, and we expect that rational points should be equidistributed with respect to the real Tamagawa measure ``jointly'' with the finite one.
Although the conjecture \eqref{eq:Maninconj} is shown for a number of varieties, it is worth emphasising that Principle \ref{prin:Manin-Peyre} is only established in very rare cases (among which are certain complete intersections with many variables and generalised flag varieties, cf. \cite[\S5--\S6]{Peyre}).
Furthermore, apart from projective spaces (cf. \cite{Ekedahl} and also \cite{Poonen,Bhargava}), the only classes of varieties for which Principle \ref{prin:purity} has been previously verified are projective quadrics due to Browning and Heath-Brown \cite[Corollaries 1.4, 1.5]{Browning-HB}.
The main goal of this article is to prove
\begin{mainthm}
Principles \ref{prin:Manin-Peyre} and \ref{prin:purity} hold for projective smooth split toric varieties over $\BQ$ whose anticanonical line bundle is globally generated. \footnote{In fact, our proof shows that the sequence \eqref{eq:seqW} also converges weakly in the space $\prod_{\nu\in\operatorname{Val}(\BQ)}W(\BQ_{\nu})$, which implies the \emph{purity of weak approximation} property analogous to (APSA). See Remark \ref{rmk:APWA} in \S\ref{se:proofofpurity}.}
\end{mainthm}
The distribution of rational points on toric varieties has long been a central object of study in arithmetic geometry. Manin's conjecture \eqref{eq:Maninconj} for toric varieties is established by Batyrev and Tschinkel in \cite{Bat-Tsch1,Bat-Tsch2} (see also the work of Chambert-Loir and Tschinkel \cite{Chambert-Loir-Tsch}) using harmonic analysis, and independently by Salberger in \cite{Salberger} using the \emph{universal torsor method}. In \cite{Breteche}, based on Salberger's approach and an analytic method about multivariate Dirichlet series attached to certain arithmetic functions \cite{Breteche2}, la Bretèche proves a refinement of Manin's conjecture \eqref{eq:Maninconj} with power-saving secondary terms. Subsequent work of Pieropan and Schindler \cite{Pieropan-Schindler} deals with the distribution of Campana points on toric varieties.
On the other hand, by \cite[Theorem]{Wei} and \cite[Theorem 1.3]{Cao-Liang-Xu}, smooth projective split toric varieties satisfy purity of strong approximation, i.e., for any such $W$ as in Principle \ref{prin:purity}, $W(k)$ is dense in $W(\RA_k)$.
In contrast to the harmonic analysis technique which allows to handle toric varieties over general number fields for \eqref{eq:Maninconj},
the universal torsor method has the advantage that the parametrisation of rational points is made in a totally explicit way that strongly ties to the combinatorial data of the structural fan. This method also proves powerful (and works over arbitrary number fields) in \cite{Huang}, in which the author studied Diophantine approximation of rational points, a problem of a completely different nature. Salberger's universal torsor method \cite{Salberger} is the main ingredient of our approach to proving an \emph{effective} version of Principle \ref{prin:Manin-Peyre} -- Theorem \ref{thm:mainequidist} -- and the geometric sieve -- Theorem \ref{thm:maingeomsieve}, which together with Principle \ref{prin:Manin-Peyre} imply Principle \ref{prin:purity}; some further applications will be exhibited in \S\ref{se:applifibration}.
\subsection{Results on effective equidistribution and the geometric sieve}
Let $X$ be a smooth projective split toric variety over $\BQ$ such that $-K_X$ is globally generated. Recall (see \S\ref{se:toricparaheight}) that we can associate a canonical toric adelic metric $(\|\cdot\|_{\operatorname{tor},\nu})_{\nu\in\Val(\BQ)}$ on $X$, which induces a toric height function $H_{\operatorname{tor}}:X(\BQ)\to\BR_{>0}$ and a toric Tamagawa measure $\omega_{\operatorname{tor}}^X$ on $X(\RA_\BQ)$. Let $D\subset X$ be the union of boundary divisors and let $\CT_{O}:=X\setminus D$ be the open orbit.
We state our main result on effective equidistribution in terms of every ``standard'' adelic neighbourhood $\CF=\CF_\infty\times\CF_{f}\subset X(\RA_\BQ)=X(\BR)\times X(\RA_\BQ^f)$, where $\CF_\infty\subset X(\BR)$ is real measurable and $\CF_{f}\subset X(\RA_\BQ^f)$ is open-closed (hence compact). Recall that $X$ admits a canonical smooth projective integral model $\CX$ over $\BZ$. Let us fix an embedding $X\hookrightarrow\BP^m_\BQ$ which extends to $\CX\hookrightarrow\BP_\BZ^m$. Recall that there is a unique smallest integer $\CL(\CF_f)>0$ such that the finite part $\CF_f$ ``is defined modulo $\CL(\CF_f)$'' (with respect to $\CX\subset\BP_\BZ^m$) (see Definition \ref{def:LCEf}).
\begin{theorem}[Effective equidistribution]\label{thm:mainequidist}
With the notation above, as $B\to\infty$, we have
\begin{multline*}
\#\{P\in \CF\cap \CT_{O}(\BQ):H_{\operatorname{tor}}(P)\leqslant B\}\\=\alpha(X)\omega_{\operatorname{tor}}^X (\CF)B(\log B)^{r-1}+O_{\CF_\infty,\varepsilon}\left(\CL(\CF_f)^{r+2\dim X+\varepsilon}B(\log B)^{r-\frac{5}{4}}\log\log B\right),
\end{multline*}
where the constant $\alpha(X)$ is defined by \eqref{eq:alphaV}.
\end{theorem}
Besides confirming Principle \ref{prin:Manin-Peyre} for such toric varieties, a key feature of Theorem \ref{thm:mainequidist} is that it provides, for the first time as far as the author is aware, an explicit uniform polynomial dependency on $\CL(\CF_{f})$ for the adelic neighbourhood $\CF$ in the error term, albeit allowing the implied constant to depend on $\CF_\infty$. \footnote{In studying Manin's conjecture \eqref{eq:Maninconj}, i.e., when $\CF$ is the whole adelic space $X(\RA_\BQ)$ (note that $\CL(X(\RA_\BQ^f))=1$), the works \cite{Bat-Tsch1,Bat-Tsch2,Breteche} provide finer asymptotic formulas of the form $$\#\{P\in \CT_{O}(\BQ):H_{\operatorname{tor}}(P)\leqslant B\}=B\CP(\log B)+O(B^{1-\iota}),$$ where $\CP$ is a polynomial with real coefficients of degree $r-1$ and $0<\iota<1$.}
To state our result on the geometric sieve, for every $P\in X(\BQ)$ we write $\widetilde{P}\in\CX(\BZ)$ for its unique lift. The next result, with effective error term, shows that the number of rational points which specialise into a Zariski closed subset of codimension at least two modulo some arbitrarily large prime is negligible.
\begin{theorem}[Geometric sieve]\label{thm:maingeomsieve}
Let $Z\subset X$ be Zariski closed of codimension at least two.
Let $\CZ\subset \CX$ be the Zariski closure of $Z$ in $\CX$. Then uniformly for every $N>1$,
\begin{multline*}
\#\{P\in \CT_{O}(\BQ):H_{\operatorname{tor}}(P)\leqslant B, \text{there exists } p\geqslant N,\widetilde{P}~\operatorname{mod}~ p \in\CZ(\BF_p)\}\\\ll_Z \frac{B(\log B)^{r-1}}{N\log N}+B(\log B)^{r-2}\log\log B.
\end{multline*}
\end{theorem}
Theorem \ref{thm:maingeomsieve} adds to the small number of results on the geometric sieve for higher dimensional projective varieties of degree $>1$
(see \cite[Theorems 1.1--1.3]{Browning-HB} and \cite[Theorem 1.6]{Cao-Huang2} concerning quadratic hypersurfaces).
\subsection{Applications to counting rational points arising from fibrations}\label{se:applifibration}
Motivated by a question of Serre \cite{Serre-Br}, and a series of papers of Loughran \emph{et al.} \cite{Loughran,BBL,Loughran-Smeets, Browning-Loughran}, we give further applications of Theorems \ref{thm:mainequidist} and \ref{thm:maingeomsieve} to the \emph{effective} estimation of the number of rational points of bounded height lying inside the adelic image of a fibration.
\begin{theorem}\label{thm:ratptsfibration}
Let $f:Y\to X$ be a dominant proper $\BQ$-morphism, where $X$ denotes a split smooth projective toric variety and $Y$ is proper, smooth and geometrically integral.
We define the counting function
\begin{equation}\label{eq:countingfibration}
\CN_{\operatorname{loc}}(f;B):=\#\{P\in\CT_{O}(\BQ):H(P)\leqslant B,P\in f(Y(\RA_\BQ))\}.
\end{equation}
\begin{enumerate}
\item Assume that $f$ is generically finite of degree $>1$ and $\dim Y=\dim X$. Then $$\CN_{\operatorname{loc}}(f;B)=O(B(\log B)^{r-1-\iota_{f}}),$$ where $0<\iota_{f}<1$ is a certain numerical constant depending on $f$.
\item Assume that $f$ has geometrically integral generic fibre.
\begin{enumerate}
\item Assume that there exists at least one non-pseudo-split fibre\footnote{Recall that (\cite[Definition 1.3]{Browning-Loughran}) a scheme $B$ over a perfect field $k$ is \emph{pseudo-split} if every element of the absolute Galois group $\operatorname{Gal}(\overline{k}/k)$ fixes an irreducible component of $B_{\overline{k}}$ of multiplicity one. This is weaker than being \emph{split} (\cite[Definition 0.1]{Skorobogatov}), i.e., the scheme $B$ containing an open subscheme which is geometrically integral.} over the codimension one points of $X$. Then $$\CN_{\operatorname{loc}}(f;B)=O\left(\frac{B(\log B)^{r-1}}{(\log\log B)^{\triangle(f)}}\right),$$ where $\triangle(f)>0$ is a certain constant depending on the fibration $f$.
\item Assume that the fibre of $f$ above any codimension one point of $X$ is pseudo-split, and $Y(\RA_\BQ)\neq \varnothing$. Then \begin{equation}\label{eq:everylocdensity}
\CN_{\operatorname{loc}}(f;B)\sim \alpha(X)\mathfrak{G}(f)B(\log B)^{r-1},
\end{equation} where $\mathfrak{G}(f)>0$ is equal to a product of local densities.
\end{enumerate}
\end{enumerate}
\end{theorem}
Theorem \ref{thm:ratptsfibration} (1) applies in particular to a special family of sets of rational points called \emph{thin sets}. Recall that (cf. \cite[\S3.1]{Serre}) \emph{a thin set of type II} is a subset of the form $g(U(\BQ))\subset X(\BQ)$, where $g:U\to X$ is a generically finite dominant morphism of degree $\geqslant 2$ and $U$ is geometrically integral over $\BQ$.
Theorem \ref{thm:ratptsfibration} (1) is therefore in accordance with the fact (which goes back to S. D. Cohen \cite{Cohen}) that thin sets ``have Tamagawa measure zero'' (see \cite[Propositions 3.5.1, 3.5.2, Theorem 3.6.2]{Serre} and the estimate \eqref{eq:est0})\footnote{Though their images in $X(\RA_\BQ)$ are in general not adelic neighbourhoods if they are of type II.}, and
its power-saving on $\log B$ may be reinterpreted as an \emph{effective Hilbert Irreducibility Theorem} with coefficients in a toric variety (cf. \cite[\S3.4]{Serre}).\footnote{If $A$ is \emph{a thin set of type I}, i.e. a subset contained in $Z(\BQ)$ where $Z\subset X$ is a proper Zariski closed subvariety, then we can prove a (slightly) better estimate. See Corollary \ref{cor:subvar}.}
Theorem \ref{thm:ratptsfibration} (2) generalises various results such as \cite[Theorem 1.1]{Loughran-Smeets} and \cite[Theorem 1.3]{BBL}, where the base is the toric variety $\BP^n$. We refer to \cite[\S1.1]{Loughran}, \cite[p. 1450--1451]{Loughran-Smeets}, \cite[p. 5762]{Browning-Loughran} for the definition of the constant $\triangle(f)$ in Theorem \ref{thm:ratptsfibration} (2a). In particular, $\triangle(f)>0$ if and only if the fibre over some codimension one point of $X$ is not pseudo-split.\footnote{In general the existence of a non-split fibre over some codimension one point is not sufficient to guarantee that $\CN_{\operatorname{loc}}(f;B)=o(B(\log B)^{r-1})$. See \cite[Example 5.9]{Loughran-Smeets} for an example studied by Colliot-Thélène, in which all fibres are everywhere locally soluble and pseudo-split, but two of them are non-split.} Theorem \ref{thm:ratptsfibration} (2b) also confirms a conjecture of Loughran \cite[Conjecture 1.7]{Loughran}. An immediate consequence of Theorem \ref{thm:ratptsfibration} (2b) is that, if in addition the smooth fibres of $f$ satisfy the Hasse principle, then $Y$ also satisfies the Hasse principle (cf. \cite[Corollary 1.5]{BBL}). All this is in accordance with the general philosophy that whether the image of $Y(\RA_\BQ)$ in $X(\RA_\BQ)$ ``has Tamagawa measure zero or not'' (see estimates \eqref{eq:est1} and \eqref{eq:est2}), or equivalently whether a positive proportion of fibres are everywhere locally soluble, depends on the geometry of the fibration, and more crucially on the splitting behaviour of fibres over codimension one points.
We remark that the error terms of \eqref{eq:everylocdensity} and for the convergence in Principle \ref{prin:purity} can all be made explicit. See Remark \ref{rmk:effectivity}.
\begin{comment}
\begin{theorem}\label{thm:mainthinset}
Let $A\subset X(\BQ)$ be a thin set. Then
$$\#\{P\in A\cap\CT_{O}(\BQ):H_{\operatorname{tor}}(P)\leqslant B\}=\begin{cases}
O(B(\log B)^{r-2}\log\log B) &\text{ if } A \text{ is of type I};\\ O(B(\log B)^{r-1-\delta_{A}}) &\text{ if } A \text{ is of type II},
\end{cases}$$ where in the second case, $0<\delta_{A}<1$ is certain numerical constant depending on $A$.
\end{theorem}
The following results are concerned with morphisms to toric varieties having geometrically integral generic fibre.
Our first result shows that the proportion of fibres which contain a rational point has density zero.
\begin{theorem}
Let $f_1:Y_1\to X$ be a dominant proper $\BQ$-morphism whose generic fibre is geometrically integral. Then
$$\#\{P\in\CT_{O}(\BQ):H_{\operatorname{tor}}(P)\leqslant B,P\in f_1((Y_1(\BQ)))\}=O\left(\frac{B(\log B)^{r-1}}{(\log\log B)^{\triangle(f_1)}}\right),$$ where $\triangle(f)\geqslant 0$ is certain constant depends on the fibration $f$, and $\triangle(f_1)>0$ if and only if the fibre over certain codimension one point is not pseudo-split (\cite[Definition 1.3]{Browning-Loughran}).
\end{theorem}
The next result confirms that the of everywhere locally soluble fibres of morphisms which are split in codimension one (in particular, $\triangle=0$) admit a positive density.
\begin{theorem}\label{thm:mainlocalsol}
Let $f_2:Y_2\to X$ be a dominant proper $\BQ$-morphism whose generic fibre is geometrically integral. Assume that \begin{equation}\label{eq:codimonesplit}
\text{the fibre of } f \text{ over every codimension one point is split},
\end{equation} and that $Y(\RA_\BQ)\neq\varnothing$. Then the limit
$$\lim_{B\to\infty}\frac{\#\{P\in\CT_{O}(\BQ):H_{\operatorname{tor}}(P)\leqslant B,f_2^{-1}(P)(\RA_\BQ)\neq\varnothing\}}{\#\{P\in\CT_{O}(\BQ):H_{\operatorname{tor}}(P)\leqslant B\}}$$ exists, and is equal to a product of local densities.
\end{theorem}
\end{comment}
\subsection{Sketch of methods}
We first sketch the basic strategy used in \cite[\S11]{Salberger} to prove Manin's conjecture \eqref{eq:Maninconj}. For any smooth projective split toric variety $X$, up to isomorphism, there is a unique universal torsor $X_0$ over $X$ (under the Néron--Severi torus $\CTNS$), which is itself an affine toric variety. They both admit canonical integral models $\CX,\CX_0$ over $\BZ$, and there is a morphism $\CX_0\to\CX$ which is a $\BZ$-torsor under $\FTNS$, a canonical $\BZ$-model of the Néron--Severi torus $\CTNS$. Counting rational points in $X(\BQ)$ reduces to counting integral points in $\CX_0(\BZ)$. Upon a canonical choice of toric norm, the toric height function is defined in a purely combinatorial way. Let us assume that $\dim X=d$, $\dim X_0=n$ and $\dim\CTNS=r$, so that $n=r+d$. The point counting procedure on subsets of $\CX_0(\BZ)$, consisting of $n$-tuples of integers of bounded toric height, may be thought of as first fixing an $r$-tuple satisfying a certain bounded height condition, and then counting the remaining $d$-tuples lying in a box whose side-lengths are defined as fractions of the fixed $r$-tuple. We then compare such sums with certain integrals defined in terms of $\CTNS$-invariant differential forms, from which we retrieve the leading constants in \eqref{eq:Maninconj}.
To obtain the effective equidistribution, as well as the geometric sieve, however, further refinements of the arguments in \cite[\S11]{Salberger} are needed. One key innovation of this article -- Theorem \ref{thm:CABCAAB} -- is showing that the contribution from the integral points of bounded toric height $B$, whose first $r$-tuple defines a lopsided box (more precisely, one of side-length less than a power of $\log B$) for the remaining $d$-tuple, is negligible. We tentatively name this treatment the \emph{toric van der Corput method}, since it is partly inspired in spirit by the classical van der Corput method, which estimates certain exponential sums over the integers by comparing them with integrals and controls the error via appropriate averaging processes. In the toric setting, we achieve this by comparing the point count with certain integrals over the Néron--Severi torus, observing that certain variables (within the first $r$-tuple) ``near'' the domain of integration have short ranges, so that the deviation arising from this comparison can be controlled satisfactorily.
The passage to the complement of such integral points is a crucial step, not least for executing lattice point counting in arbitrary congruence neighbourhoods intersecting with arbitrary real neighbourhoods (cf. the proof of Theorem \ref{thm:equidistrcong} in $\S$\ref{se:proofequistr}), but also for applying the (generalised) Ekedahl sieve (cf. the proof of Theorem \ref{thm:geomsieve1} in $\S$\ref{se:countingsubvar}--$\S$\ref{se:proofpropgeomsieve2}). This method and its applications mentioned above complement those in \cite[\S11]{Salberger} and in \cite{Breteche}.
The error terms of the weak convergence of the sequence of measures in Principles \ref{prin:Manin-Peyre} and \ref{prin:purity} may depend on each chosen function defined on the adelic space. We prove (Proposition \ref{prop:EEimpliesgeneral}) that the error terms can be made uniform, provided that this is true for a certain family of adelic subsets called \emph{projective congruence neighbourhoods}, defined in \S\ref{se:congneigh}. This motivates us to formulate, based on all known examples, an effective equidistribution condition \textbf{(EE)} in \S\ref{se:effdist}. Similarly, when lifting into universal torsors, we also formulate the condition \textbf{(EEUT)} in \S\ref{se:EEUT} in terms of \emph{affine congruence neighbourhoods}, and we prove (Proposition \ref{prop:univtorEEGS}) that \textbf{(EEUT)} implies \textbf{(EE)} (with a different choice of exponents). This is how we achieve the uniform dependency in Theorem \ref{thm:mainequidist}.
Principle \ref{prin:purity} and Theorem \ref{thm:ratptsfibration} are both concerned with counting rational points satisfying ``infinitely many'' local conditions, whose corresponding sets in the adelic space are not adelic neighbourhoods. The common difficulty (e.g. for operating harmonic analysis or using the height zeta function) in deriving asymptotic formulas is the lack of a group action, or the absence of stability under multiplication of points, on arbitrary adelic neighbourhoods. Our approach relies on a ``passage from local to global'' strategy established in Theorem \ref{thm:keyOmega}. Suppose that an almost-Fano variety $V$ satisfies Principle \ref{prin:Manin-Peyre}. The idea is, roughly speaking, to ``approximate'' these non-adelic sets by adelic neighbourhoods of $V(\RA_\BQ^f)$ defined by the finite collection of local conditions ``truncated'' up to a reasonably large parameter $N$ which goes to $\infty$ as $B$ grows. The geometric sieve then serves to handle the deviation of this approximation procedure, by controlling in a uniform way (in terms of $N$) the rational points violating a local condition associated to any single sufficiently large prime ``for a certain codimension two reason''. This sieve method goes back to Ekedahl \cite{Ekedahl}, and has been vastly generalised and applied to a variety of counting problems, notably in the works of Poonen--Stoll \cite[\S9]{Poonen-Stoll}, Bhargava--Shankar \cite[\S2.6]{Bhargava-Shankar}, Poonen \cite{Poonen}, Bright--Browning--Loughran \cite[\S3]{BBL}, Cao--Huang \cite[\S3]{Cao-Huang2}, Browning--Heath-Brown \cite[\S7]{Browning-HB}. As an opposite case of Theorem \ref{thm:keyOmega}, we also prove Theorem \ref{thm:Tamagawazero}, which describes in general when these infinitely many local conditions result in a negligible contribution to the point count.
To obtain the upper bounds in Theorem \ref{thm:ratptsfibration}, it suffices to relax the infinitely many local conditions to those truncated by $N$ as above. As an effective version of Theorem \ref{thm:Tamagawazero}, we develop a Selberg sieve -- Theorem \ref{thm:Selbergsieve} -- capturing these finitely many local conditions and applying to the situations of Theorem \ref{thm:ratptsfibration} (1) and (2a), in which each of these local conditions can be ``detected'' modulo a uniform power of primes. The optimal choice of $N$ relies on the form of the error term in the effective equidistribution.
\subsection{Structure and details of the article}
In $\S$\ref{se:adelicmetrics} we recall the definition of adelic metrics, adelic measures and heights. In \S\ref{se:adelicspaces} we recall the notion of adelic spaces. In $\S$\ref{se:heightsTamagawa}, we define almost-Fano varieties, the constants $\alpha(V),\beta(V)$, and the Tamagawa measures. In \S\ref{se:congneigh}, we introduce the notion of congruence neighbourhoods as a topological basis of the adelic space, and relate it to Principle \ref{prin:Manin-Peyre}. We establish Theorems \ref{thm:Tamagawazero} and \ref{thm:keyOmega} in \S\ref{se:local-to-global} based on Principle \ref{prin:Manin-Peyre}. We establish Theorem \ref{thm:Selberg} in \S\ref{se:Selbersieve} based on the condition \textbf{(EE)} in \S\ref{se:effdist}.
In $\S$\ref{se:univtor} we first recall the construction of Tamagawa measures on universal torsors, and how to lift rational points into integral points. Then we formulate an effective equidistribution condition on universal torsors \textbf{(EEUT)} in terms of congruence neighbourhoods and show how it implies the property \textbf{(EE)}.
In $\S$\ref{se:toricparaheight} we recall the construction of toric norms and toric Tamagawa measures based on toric geometry, following Salberger.
$\S$\ref{se:toricvandercorput} is devoted to proving the central technical result, Theorem \ref{thm:CABCAAB}.
The main ingredients are a comparison between certain sums and integrals (Proposition \ref{prop:keyprop2}), and another comparison between different integrals (Proposition \ref{prop:keyprop1}).
As applications, in $\S$\ref{se:purityproof} we prove Theorem \ref{thm:mainequidist} in the form of congruence neighbourhoods (Theorem \ref{thm:effective}). This is reduced to a lattice point counting problem with a toric bounded height condition in arbitrary real neighbourhoods (Proposition \ref{co:CAsigmaWBCDWsigmaB}), and a comparison between sums and integrals over the fixed real neighbourhoods (Proposition \ref{prop:CAxilBCAB}). To achieve this, in $\S$\ref{se:comparelatticepts} we furthermore need to prove various estimates strengthening those in $\S$\ref{se:toricvandercorput}.
In $\S$\ref{se:geomsieve} we prove Theorem \ref{thm:maingeomsieve}, by means of a universal torsor version of the geometric sieve (Theorem \ref{thm:geomsieve1}). This is done by establishing two separate sieve arguments, one working for all sufficiently large primes (Proposition \ref{prop:geomsieve1}) and another (Proposition \ref{prop:geomsieve2}) for each individual prime.
In \S\ref{se:application} we confirm Principle \ref{prin:purity} and prove Theorem \ref{thm:ratptsfibration}.
\subsection{Notation and conventions}
A \emph{nice} $\BQ$-variety is a separated, smooth, geometrically integral scheme of finite type over $\BQ$. Fix an algebraic closure $\overline{\BQ}$ of $\BQ$. For every $\BQ$-scheme $V$, let $\overline{V}:=V\times_\BQ\overline{\BQ}$, and for every $x\in V$, let $k(x)$ denote the residue field of $x$.
For a $\BZ$-scheme $\CV$, we write $\CV(\widehat{\BZ}):=\prod_{p}\CV(\BZ_p)$.
For every $\BQ$-subvariety $Z$ of $V$, we write $\codim_V(Z)$ for the codimension of $Z$ in $V$.
In this article, we shall use the ordinary absolute values for the valuations of $\BQ$. That is, if $\nu\in\operatorname{Val}(\BQ)$ is non-archimedean and corresponds to a prime $p$, then for $x\in\BQ^\times$, $|x|_\nu:=p^{-\operatorname{ord}_p(x)}$; if $\nu$ corresponds to the real place, then $|\cdot|_\nu=|\cdot|$ is the usual real absolute value. The normalised $\nu$-adic measures $\operatorname{d}x_\nu$ on $\BQ_\nu$ are given by
\begin{equation}\label{eq:nuadicmeasurenormal}
\begin{cases}
\int_{\BZ_p} \operatorname{d}x_\nu =1 &\text{ if } \nu \text{ corresponds to } p;\\
\int_{[0,1]}\operatorname{d}x_{\nu}=1 &\text{ if } \nu=\BR.
\end{cases}
\end{equation}
If $\nu\in\operatorname{Val}(\BQ)$ corresponds to a prime $p$ or $\infty$, they are written interchangeably as subscripts (e.g. $\BQ_p$ is the same as $\BQ_\nu$).
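With these normalisations, the product formula $\prod_{\nu\in\operatorname{Val}(\BQ)}|x|_\nu=1$ for $x\in\BQ^\times$ can be checked on a concrete value (included purely for illustration):

```latex
% x = 12 = 2^2 \cdot 3:
|12|_2=2^{-2}=\tfrac14,\qquad |12|_3=3^{-1}=\tfrac13,\qquad
|12|_p=1\ (p\neq 2,3),\qquad |12|_\infty=12,
\qquad\text{so}\qquad
\prod_{\nu\in\operatorname{Val}(\BQ)}|12|_\nu=12\cdot\tfrac14\cdot\tfrac13=1.
```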
We write $\operatorname{d}x_1\wedge\cdots\wedge\operatorname{d}x_d$ for differential forms as local sections of $K_V=\wedge^d\operatorname{Cot} (V)$ (the determinant line bundle of the cotangent bundle) of a nice $d$-dimensional variety $V$, and we write $\frac{\partial}{\partial x_1}\wedge\cdots\wedge \frac{\partial}{\partial x_d}$ for local sections of $-K_V=\wedge^d\operatorname{Tan} (V)$ (the determinant line bundle of the tangent bundle).
In this article we use interchangeably Vinogradov's symbol and Landau's symbol. For real-valued functions $f$ and $g$ defined over the real numbers with $g$ non-negative, $f \ll g$ and $f=O(g)$ both mean that there exists $C>0$ such that $|f|\leqslant Cg$. The dependency of the implied constant $C$ will be specified explicitly. The notation $f\asymp g$ means that $f\ll g$ and $g\ll f$ both hold. If $g$ is nowhere zero, $f(x)=o(g(x))$ means that $\lim_{x\to \infty}\frac{f(x)}{g(x)}=0$, and $f\sim g$ means that $\lim_{x\to \infty}\frac{f(x)}{g(x)}=1$, or equivalently, $f-g=o(g)$.
Starting from $\S$\ref{se:toricvandercorput}, we will be working with a fixed toric variety. Unless otherwise specified, all implied constants may depend on this ambient variety.
\section{Equidistribution and sieving}\label{se:equidistgeomsieve}
\subsection{Adelic metrics, adelic measures and heights}\label{se:adelicmetrics}
(See \cite[\S4]{Salberger}, \cite[\S2, \S3]{PeyreBeyond}.)
Let $Y$ be a nice $\BQ$-variety, and let $L$ be a line bundle on $Y$.
Recall that, for every $\nu\in\Val(\BQ)$, a \emph{$\nu$-adic metric} on $L$ is a map which associates at each $P\in Y(\BQ_\nu)$ a $\nu$-adic norm $\|\cdot\|_\nu$ on the $\BQ_\nu$-vector space $L(P):=L_P\otimes \BQ_\nu$, such that for every section $s_\nu$ of $L$ defined on a $\nu$-adic neighbourhood $U_\nu\subset Y(\BQ_\nu)$, the map $U_\nu\to\BR_{>0}$ given by $x\mapsto \|s_\nu(x)\|_\nu$ is continuous.
An \emph{adelic metric} on $L$ is a family $(\|\cdot\|_\nu)_{\nu\in\Val(\BQ)}$ of metrics for which there exist a finite set of places $S$ containing $\BR$ and a smooth model $(\CL,\CY)$ of $(L,Y)$ over $\BZ_S$ satisfying the following property. For every $\nu\in\Val(\BQ)\setminus S$ and every $\BZ_\nu$-point $\mathbf{P}_\nu: \operatorname{Spec}(\BZ_\nu)\to \CY$ with generic fibre $P_\nu\in Y(\BQ_\nu)$, there exists a generator $s_{0,\nu}(P_\nu)$ of the $\BZ_\nu$-module $\mathbf{P}_\nu^*(\CL)$ such that for every $s_\nu(P)\in L(P_\nu)$, writing $s_\nu(P)=\lambda s_{0,\nu}(P_\nu)$ with $\lambda\in \BQ_\nu$, we have $\|s_\nu(P)\|_\nu=\left|\lambda\right|_\nu$.
We sometimes call $(\|\cdot\|_\nu)_{\nu\in\Val(\BQ)}$ the \emph{model norm} attached to $(\CL,\CY)$ over $\BZ_S$.
Associated to a fixed adelic metric $(\|\cdot\|_\nu)_{\nu\in\Val(\BQ)}$ on $L$, the \emph{(Arakelov) height function} $H_L=H_{L,(\|\cdot\|_\nu)}: Y(\BQ)\to\BR_{>0}$ is defined by \begin{equation}\label{eq:heightmetric}
H_{L,(\|\cdot\|_\nu)}(x):=\prod_{\nu\in\Val(\BQ)}\|s(x)\|^{-1}_\nu,
\end{equation} where $s$ is any local section defined on a neighbourhood of $x$ such that $s(x)\neq 0$. By the product formula, this definition is independent of the choice of $s$.
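For instance (a standard computation, recorded here for orientation), taking $L=\CO(1)$ on $\BP^m_\BQ$ with the model norm attached to $(\CO(1),\BP^m_\BZ)$, the height \eqref{eq:heightmetric} recovers the classical naive height: if $x=[x_0:\cdots:x_m]$ with $x_i\in\BZ$ coprime, then all non-archimedean factors equal $1$, so

```latex
H_{\CO(1)}(x)=\max_{0\leqslant i\leqslant m}|x_i|,
\qquad\text{e.g.}\quad H_{\CO(1)}([4:6])=H_{\CO(1)}([2:3])=3.
```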
In this article we shall always work with the anticanonical line bundle $-K_Y=\det (\operatorname{Tan} Y)$, to which we equip an adelic metric $(\|\cdot\|_\nu)_{\nu\in\Val(\BQ)}$. Write $d:=\dim Y$. For every $\nu\in\operatorname{Val}(\BQ)$ and for every analytic neighbourhood $U_\nu\subset Y(\BQ_\nu)$, let $\phi_\nu:U_\nu\to \BQ_\nu^{d}$ be a local chart which trivialises $-K_Y$ with local section $\frac{\partial}{\partial x_1}\wedge\cdots\wedge \frac{\partial}{\partial x_{d}}$. Let $\operatorname{d}x_{1}\cdots\operatorname{d}x_{d}$ be the standard $\nu$-adic measure on $\BQ_\nu^{d}$ (normalised as in \eqref{eq:nuadicmeasurenormal}). Then the local $\nu$-adic measure
\begin{equation}\label{eq:adelicmeasure}
\left\|\frac{\partial}{\partial x_1}\wedge\cdots\wedge \frac{\partial}{\partial x_d}\right\|_\nu\operatorname{d}x_{1}\cdots\operatorname{d}x_{d}
\end{equation} on $U_\nu$ is compatible with change of coordinates, so these local measures glue together to a global $\nu$-adic measure $\omega^Y_\nu$ on $Y(\BQ_\nu)$ (cf. \cite[\S3.2 Construction 3.6]{PeyreBeyond}).
This measure is also uniquely determined by the positive linear functional $\Gamma_\nu$ defined locally by
$$\Gamma_\nu(f)=\int_{U_\nu} f\circ\phi_\nu^{-1}(x_1,\cdots,x_d)\left\|\frac{\partial}{\partial x_1}\wedge\cdots\wedge \frac{\partial}{\partial x_d}\right\|_\nu\operatorname{d}x_{1}\cdots\operatorname{d}x_{d}$$
for every $f$ compactly supported in $U_\nu$, according to the Riesz representation theorem.
If the adelic metric $(\|\cdot\|_\nu)_{\nu\in\Val(\BQ)}$ is attached to a smooth model $(K_\CY,\CY)$ over $\BZ_S$, we sometimes call $(\omega^Y_\nu)_{\nu\not\in S}$ the \emph{model measures} attached to $(K_\CY,\CY)$. It can be computed as follows.
Fix $\nu\not\in S$ corresponding to $p$,
for every $k\geqslant 1$, we write
\begin{equation}\label{eq:redmodpk}
\operatorname{Mod}_{p^k}:\CY(\BZ_p)\longrightarrow \CY(\BZ/p^k\BZ)
\end{equation}
for the reduction modulo $p^k$ map.
For every $\overline{\xi}\in\CY(\BZ/p^k\BZ)$, consider the $\nu$-adic neighbourhood $$\operatorname{Mod}_{p^k}^{-1}(\overline{\xi})=\{\bxi\in\CY(\BZ_p):\operatorname{Mod}_{p^k}(\bxi)= \overline{\xi}\}.$$ This is a non-empty open-closed compact subset of $Y(\BQ_\nu)$. Then we have (cf. e.g. \cite[\S2.2]{Oesterle}, \cite[Theorems 2.13, 2.14]{Salberger}) \begin{equation}\label{eq:modelmeasurecomp}
\omega^Y_\nu(\operatorname{Mod}_{p^k}^{-1}(\overline{\xi}))=\left(\frac{\int_{\BZ_\nu}\operatorname{d}x_\nu}{\#(\BZ/p^k\BZ)}\right)^d=\frac{1}{p^{kd}}.
\end{equation}
In particular, if $Y$ is projective, we have $\CY(\BZ_\nu)=Y(\BQ_\nu)$ and \begin{equation}\label{eq:omeganuV}
\omega^Y_\nu(Y(\BQ_\nu))=\frac{\#\CY(\BF_p)}{p^{d}}.
\end{equation}
In \S\ref{se:toricparaheight} later we shall see that the toric norm used by Salberger \cite{Salberger} for split toric varieties agrees with the model norm over $\BZ$.
\subsection{Adelic spaces}\label{se:adelicspaces}
(See \cite[\S3]{Oesterle}, \cite[\S2.7]{Wittenberg}.)
Let $Y$ be a nice $\BQ$-variety. Let $\CY_S$ be an integral model of $Y$ over $\BZ_S$, where $S\subset \operatorname{Val}(\BQ)$ is a finite set of places of $\BQ$ containing $\BR$. We define
$$\CY_S(\RA_\BQ):=\prod_{\nu\in S}Y(\BQ_\nu)\times\prod_{\nu\not\in S}\CY_S(\BZ_\nu),$$ equipped with the product topology. Set-theoretically it is $$``\{(x_\nu)\in\prod_{\nu\in\operatorname{Val}(\BQ)}Y(\BQ_\nu):x_\nu \text{ is integral for } \nu\not\in S\}".$$ The pairs $(\CY_S,S)$, ordered by inclusion and the evident morphisms, form a direct system.
The \emph{adelic space} (after Weil--Grothendieck) of $Y$ is
\begin{equation}\label{eq:adelization}
Y(\RA_\BQ)=\varinjlim_{(\CY_S,S):\BR\subset S}\CY_S(\RA_\BQ),
\end{equation} equipped with the direct-limit-topology.
It is a locally compact Hausdorff topological space with a countable base.
If $Y$ is proper, then $$Y(\RA_\BQ)=\prod_{\nu\in\operatorname{Val}(\BQ)} Y(\BQ_\nu),$$ and it is compact.
We say that $Y$ satisfies \emph{strong approximation}, if $Y(\BQ)$ is dense in $Y(\RA_\BQ)$.
We say that $Y$ satisfies \emph{(arithmetic) purity of strong approximation} (see \cite[p. 336]{Cao-Liang-Xu}, \cite[Definition 1.3 (ii)]{Cao-Huang1}), if for every Zariski closed subset $Z\subset Y$ of codimension at least two, the open subset $Y\setminus Z$ satisfies strong approximation.
\subsection{Almost-Fano varieties}\label{se:heightsTamagawa}
(See \cite[\S3.24]{PeyreBeyond}.)
\begin{definition}[See \cite{PeyreBeyond} Hypotheses 3.27]\label{def:almostFano}
We call a nice $\BQ$-projective variety $V$ \emph{almost Fano} if it satisfies the following hypotheses.
\begin{enumerate}
\item The set $V(\BQ)$ is dense in $V(\RA_\BQ)=\prod_{\nu\in\Val(\BQ)}V(\BQ_\nu)$.
\item The cohomology groups $H^1(V,\CO_V)$, $H^2(V,\CO_V)$ and $\operatorname{Br}(\overline{V})$ are all zero.
\item The group $\Pic(\overline{V})$ is torsion free.
\item The cone of pseudoeffective divisors $\operatorname{Eff}(\overline{V})$ is finitely generated.
\item The anticanonical line bundle $-K_V$ is nef and big.
\end{enumerate}
\end{definition}
\subsubsection{The constants $\alpha(V)$ and $\beta(V)$}
(See \cite[D\'efinition 2.4]{Peyre}.)
Let $\Pic(V)^\vee$ be the dual lattice of $\Pic(V)$ inside $\Pic(V)^\vee_\BR\simeq \BR^r$. Let $\operatorname{d}y$ be the Haar measure on $\Pic(V)^\vee_\BR$ normalised so that $\Pic(V)^\vee$ has covolume one. Let $\operatorname{Eff}(\overline{V})^\vee\subset\Pic(V)^\vee_\BR$ be the dual cone of $\operatorname{Eff}(\overline{V})$.
The $\alpha$-constant is
\begin{equation}\label{eq:alphaV}
\alpha(V):=\frac{1}{(r-1)!}\int_{\operatorname{Eff}(\overline{V})^\vee}\operatorname{e}^{-\langle-K_V,y\rangle}\operatorname{d}y.
\end{equation} Using Fubini's theorem, if $\operatorname{d}z$ denotes the Lebesgue measure on the affine hyperplane $$H_1:=\{y\in\Pic(V)^\vee_\BR:\langle -K_V,y\rangle=1\},$$ then
$$\alpha(V)=\int_{H_1\cap\operatorname{Eff}(\overline{V})^\vee}\operatorname{d}z.$$
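As a worked example (included for orientation), take $V=\BP^n$: then $r=1$, $\Pic(V)=\BZ H$ for the hyperplane class $H$, $-K_V=(n+1)H$, and $\operatorname{Eff}(\overline{V})^\vee\simeq\BR_{\geqslant 0}$, so

```latex
\alpha(\BP^n)=\frac{1}{(1-1)!}\int_{0}^{\infty}\operatorname{e}^{-(n+1)y}\,\operatorname{d}y
             =\frac{1}{n+1}.
```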
The $\beta$-constant is defined by \begin{equation}\label{eq:beta}
\beta(V):=\#H^1(\BQ,\operatorname{Pic}(\overline{V})).
\end{equation}
In many cases it equals $\#(\operatorname{Br}(V))$.
\subsubsection{Tamagawa measures on almost-Fano varieties}
(See \cite[\S2]{Peyre}.)
Let us fix a family of adelic measures $(\omega_\nu^V)$ on an almost-Fano variety $V$. As seen from \eqref{eq:omeganuV}, the infinite product $\prod_{\nu\in\Val(\BQ)}\omega^V_\nu(V(\BQ_\nu))$ does not converge in general, so a set of convergence factors needs to be introduced.
Let $\CV$ be an integral model of $V$ over $\BZ$. Let $S\subset\Val(\BQ)$ be a finite set of places containing $\BR$, such that $\Pic(\CV_{\overline{\BF_p}})\simeq \Pic(V_{\overline{\BQ}})$ for every $p\not\in S$. Then the geometric Frobenius $\operatorname{Fr}_p$ acts on $H^2_{\text{\'et}}(\CV_{\overline{\BF_p}},\BQ_l(1))\simeq \Pic(\overline{V})$ for any prime $l\neq p$.
Now on $\Re(s)>0$, we define the Artin L-functions
\begin{equation}\label{eq:artinLnu}
L_\nu(s,\Pic(\overline{V})):=\frac{1}{\det\left(1-p^{-s}\operatorname{Fr}_p\mid\Pic(\overline{V})_\BQ\right)},\quad \text{for every }\nu\not\in S,
\end{equation}
\begin{equation}\label{eq:artinLS}
L_S(s,\Pic(\overline{V})):=\prod_{\nu\not\in S}L_\nu(s,\Pic(\overline{V})).
\end{equation}
Then $L_S(s,\Pic(\overline{V}))$ has a pole at $s=1$ of order $r=\operatorname{rank}(\Pic(V))$ by a theorem of Artin.
We now define the set of convergence factors
\begin{equation}\label{eq:convergencefact}
\lambda_\nu:=\begin{cases}
1 & \text{ if } \nu\in S;\\ L_\nu(1,\Pic(\overline{V}))^{-1} &\text{ if }\nu\not\in S.
\end{cases}
\end{equation}
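For instance (a standard special case), when the Galois action on $\Pic(\overline{V})$ is trivial -- as for split toric varieties -- the Frobenius $\operatorname{Fr}_p$ acts as the identity on $\Pic(\overline{V})_\BQ\simeq\BQ^r$, so

```latex
L_\nu(s,\Pic(\overline{V}))=\frac{1}{(1-p^{-s})^{r}},\qquad
\lambda_\nu=L_\nu(1,\Pic(\overline{V}))^{-1}=\Bigl(1-\frac{1}{p}\Bigr)^{r}
\quad(\nu\not\in S),
```

and $L_S(s,\Pic(\overline{V}))$ agrees with $\zeta(s)^r$ up to finitely many Euler factors, making the pole of order $r$ at $s=1$ visible.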
By hypotheses (2) and (3), using the Grothendieck--Lefschetz trace formula and Deligne's proof of the Weil conjectures, one deduces that, for almost all $\nu$ corresponding to a prime $p$,
$$\omega_\nu^V(V(\BQ_\nu))=L_\nu(1,\Pic(\overline{V}))+O(p^{-\frac{3}{2}}).$$
Therefore, the product measure
\begin{equation}\label{eq:Tamagawameas}
\omega^V:=\left(\lim_{s\to 1}(s-1)^rL_S(s,\Pic(\overline{V}))\right)\prod_{\nu\in\Val(\BQ)}\lambda_\nu\omega_\nu^V
\end{equation} applied to $V(\RA_\BQ)$ is convergent, and is independent of the choice of the finite set $S$.\footnote{In general if the Brauer--Manin obstruction is non-empty and is the only one for weak approximation on $V$, one needs to restrict the measure $\omega^V$ to the (closed) Brauer--Manin set $V(\RA_\BQ)^{\operatorname{Br}}\subset V(\RA_\BQ)$.} We write $$\omega^V=\omega_\infty^V\times \omega_f^V,$$ where $ \omega_f^V$ is the finite part measure on $V(\RA^f_\BQ)$.
For each $\nu\in\operatorname{Val}(\BQ)$, let $\Omega_\nu\subset V(\BQ_{\nu})$ be a measurable $\nu$-adic subset. Then $\prod_{\nu\in\operatorname{Val}(\BQ)}\Omega_\nu\subset V(\RA_\BQ)$ is measurable, and $\omega^V\left(\prod_{\nu\in\operatorname{Val}(\BQ)}\Omega_\nu\right)$ is equal to the infinite product
\begin{equation}\label{eq:omeganu}
\left(\lim_{s\to 1}(s-1)^rL_S(s,\Pic(\overline{V}))\right)\prod_{\nu\in\Val(\BQ)}\lambda_\nu\omega_\nu^V(\Omega_\nu).
\end{equation}
(See e.g. \cite[\S38, p. 158 (4)]{Halmos}. Note that $\omega^V$ differs from the infinite product probability measure $\otimes_{\nu\in\operatorname{Val}(\BQ)} \frac{\omega_\nu^V}{\omega_\nu^V(V(\BQ_\nu))}$ by the constant $\omega^V(V(\RA_\BQ))$.) In particular, if $\omega_\nu^V(\Omega_\nu)\neq 0$ for all $\nu\in\operatorname{Val}(\BQ)$, then $\omega^V\left(\prod_{\nu\in\operatorname{Val}(\BQ)}\Omega_\nu\right)=0$ (resp. $\omega^V\left(\prod_{\nu\in\operatorname{Val}(\BQ)}\Omega_\nu\right)> 0$) if and only if the infinite product \eqref{eq:omeganu} diverges to zero (resp. converges).
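To illustrate the convergence (a standard computation), take $V=\BP^n$ with the model measures attached to $\BP^n_\BZ$ and $S=\{\BR\}$. Here $r=1$, the Galois action is trivial, and by \eqref{eq:omeganuV} and \eqref{eq:convergencefact}, at each prime $p$,

```latex
\lambda_p\,\omega_p^{V}(V(\BQ_p))
 =\Bigl(1-\frac{1}{p}\Bigr)\cdot\frac{1+p+\cdots+p^{n}}{p^{n}}
 =\Bigl(1-\frac{1}{p}\Bigr)\sum_{i=0}^{n}p^{-i}
 =1-p^{-(n+1)},
```

so $\prod_p\lambda_p\,\omega_p^{V}(V(\BQ_p))=\zeta(n+1)^{-1}$ converges absolutely, as the construction \eqref{eq:Tamagawameas} requires.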
Now let $W\subset V$ be an open dense subset whose complement $Z=V\setminus W$ is of codimension at least two. For every $\nu\in\operatorname{Val}(\BQ)$, the $\nu$-adic measure $\omega_\nu^V$ naturally restricts to the (open) measurable subset $W(\BQ_\nu)$, which we denote by $\omega_\nu^W$.
\begin{proposition}\label{prop:restTmeasure}
The family of adelic measures $(\omega_\nu^W)$ together with the set of convergence factors \eqref{eq:convergencefact} define in the same way as \eqref{eq:Tamagawameas} a $\sigma$-finite measure on $W(\RA_\BQ)$.
\end{proposition}
\begin{proof}[Sketch of proof] (cf. \cite[Proposition 2.1]{Cao-Huang2})
Let $\CV$ be an integral model of $V$ over $\BZ$ and let $\CW=\CV\setminus\overline{Z}$.
It is enough to show that there exists a finite set of places $S$ with $\BR\in S$ such that the infinite product $\prod_{\nu\not\in S}\lambda_\nu\omega_\nu^V(\CW(\BZ_p))$ is absolutely convergent. This follows from \eqref{eq:modelmeasurecomp} and the (relative) Lang--Weil estimates \cite{Lang-Weil} for these integral models. Hence the set of convergence factors $(\lambda_\nu)$ \eqref{eq:convergencefact} for $(\omega_\nu^V)$ on $V(\RA_\BQ)$ is also a set of convergence factors for $(\omega^W_\nu)$ on $W(\RA_\BQ)$.
\end{proof}
We shall denote by $\omega^V|_W$ (resp. $\omega_f^V|_W$) the measure constructed above on $W(\RA_\BQ)$ (resp. on $W(\RA_\BQ^f)$), and by abuse of terminology call it the \emph{restriction of the Tamagawa measure} to $W$.
\subsection{Equidistribution and congruence neighbourhoods}\label{se:congneigh}
The goal of this section is to reduce the proof of Principle \ref{prin:purity} for almost-Fano varieties to a specific family of adelic neighbourhoods (Principle \ref{prin:PManin-Peyre}). Compared to the affine case (\cite[\S2.2]{Cao-Huang2}), we need to define the corresponding projective congruence neighbourhoods (Definition \ref{def:projcong}).
Let $Y\subset\BP^m_\BQ$ be a quasi-projective variety. Let $\CY\subset\BP^m_\BZ$ be a $\BZ$-integral model of $Y$. We fix $l_0\in\BZ_{>0}$ such that $\prod_{p\nmid l_0}\CY(\BZ_p)\neq\varnothing$ (by the Lang--Weil estimate and Hensel's lemma, cf. \cite[\S1.7]{Cao-Huang2}). Let $l\in\BZ_{>0}$ with $l_0\mid l$ and let $\bxi=(\bxi_p)_{p\mid l}\in \prod_{p\mid l}Y(\BQ_p)$ be a collection of local points. Write $\bxi_p=[\bxi_{p,0}:\cdots:\bxi_{p,m}]$ with $\bxi_{p,i}\in\BZ_p$ for every $0\leqslant i\leqslant m$, such that some $\bxi_{p,i}$ is not in $p\BZ_p$. We define the (projective) $p$-adic neighbourhood of $\BP^m_\BZ$ associated to $l,\bxi_p$ to be
$$\CE_p^{\BP_\BZ^m}(l;\bxi_p):=\{[\zeta_{0,p}:\cdots:\zeta_{m,p}]:\zeta_{i,p}\in \bxi_{p,i}+p^{\operatorname{ord}_p(l)}\BZ_p\},$$ and the $p$-adic neighbourhood $\CE^{\CY}_p(l;\bxi_p)$ of $\CY$ to be $$\CE^{\CY}_p(l;\bxi_p):=Y(\BQ_p)\cap \CE_p^{\BP_\BZ^m}(l;\bxi_p).$$
Then for any two $\bxi_p^\prime,\bxi_p^{\dprime}\in Y(\BQ_p)$, we have \begin{equation}\label{eq:intercong}
\CE^{\CY}_p(l;\bxi^\prime_p)\cap \CE^{\CY}_p(l;\bxi_p^{\dprime})\neq\varnothing\Longleftrightarrow\CE^{\CY}_p(l;\bxi_p^\prime)= \CE^{\CY}_p(l;\bxi^{\dprime}_p).
\end{equation}
Upon replacing $l$ by its powers (depending on $\CY$ and $\bxi$), we may assume that $\CE^{\CY}_p(l;\bxi_p)$ is closed in $\CE_p^{\BP_\BZ^m}(l;\bxi_p)$ (hence compact) for every $p\mid l$. (See \cite[the discussion below Definition 2.2]{Cao-Huang2}.)
\begin{definition}\label{def:projcong}
The \emph{(projective) congruence neighbourhood of $\CY$ of level $l$ associated to $\bxi$} is the non-empty compact finite adelic neighbourhood $$\CE^\CY_f(l;\bxi):=\prod_{p\mid l}\CE^{\CY}_p(l;\bxi_p)\times \prod_{p\nmid l}\CY(\BZ_p)\subset Y(\RA_\BQ^f).$$
\end{definition}
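For instance, take $Y=\BP^1_\BQ$ with model $\CY=\BP^1_\BZ$ (so that one may take $l_0=1$), $l=p$ a prime, and $\bxi_p=[1:0]$. Then $$\CE^{\CY}_p(p;\bxi_p)=\left\{[\zeta_0:\zeta_1]\in\BP^1(\BQ_p):\zeta_0\in 1+p\BZ_p,\ \zeta_1\in p\BZ_p\right\}$$ is precisely the set of points of $\BP^1(\BQ_p)$ whose reduction modulo $p$ is $[1:0]\in\BP^1(\BF_p)$, and the congruence neighbourhood $\CE^{\CY}_f(p;\bxi_p)$ consists of the finite adelic points reducing to $[1:0]$ at $p$, with no condition at the other primes.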
The collection $\{\CE^\CY_f(l;\bxi)\}$ forms a topological basis of $Y(\RA_\BQ^f)$. Moreover, for any two collections of local points $\bxi^\prime=(\bxi_p^\prime),\bxi^{\dprime}=(\bxi^{\dprime}_p)\in \prod_{p\mid l}Y(\BQ_p)$, thanks to \eqref{eq:intercong},
$$\CE^\CY_f(l;\bxi^\prime)\cap \CE^\CY_f(l;\bxi^{\dprime})\neq\varnothing\Longleftrightarrow \CE^\CY_f(l;\bxi^\prime)= \CE^\CY_f(l;\bxi^{\dprime}).$$
Throughout the rest of this section, we shall be working with varieties $V$ satisfying the following hypothesis.
\begin{hypothesis}\label{hyp:almost-Fano}
The variety $V\subset\BP^m_\BQ$ is almost-Fano with a fixed $\BZ$-integral model $\CV\subset\BP_\BZ^m$, equipped with a family of adelic measures $(\omega_\nu^V)_{\nu\in\operatorname{Val}(\BQ)}$ and its associated Tamagawa measure $\omega^V$, and with $H=H_{-K_V}:V(\BQ)\to\BR_{>0}$ an anticanonical height function. Let $M\subset V(\BQ)$ be a fixed thin subset.
\end{hypothesis}
For every open dense subvariety $W\subset V$, for every adelic neighbourhood $\CE\subset W(\RA_\BQ)$,
consider the counting function ($M$ is considered fixed throughout and will not be specified explicitly) \begin{equation}\label{eq:countingfunction}
\CN_W(\CE;B):=\#\{P\in W(\BQ)\setminus M:H(P)\leqslant B,P\in\CE\}.
\end{equation}
It is naturally related to the counting measure \eqref{eq:countingmeasure} via
$$\int \mathds{1}_{\CE}\operatorname{d}\delta_{(W(\BQ)\setminus M)_{\leqslant B}}=\CN_W(\CE;B),$$ where $\mathds{1}_{\CE}$ stands for the indicator function of $\CE$ in $W(\RA_\BQ)$.
The integral model of $W$ is, by convention, chosen to be $\CW:=\CV\setminus(\overline{V\setminus W})$.
Since congruence neighbourhoods of $\CW$ form a topological basis of $W(\RA_\BQ)$, and every compactly supported continuous function can be approximated by indicator functions of unions of congruence neighbourhoods, obtaining Principle \ref{prin:purity} for $W$ boils down to
\begin{principle}\label{prin:PManin-Peyre}
Let $V$ be as in Hypothesis \ref{hyp:almost-Fano}. There exists a thin subset $M\subset V(\BQ)$ such that, for any adelic neighbourhood of the form $\CF=\CF_\infty\times \CF_f\subset W(\RA_\BQ)$, where $\CF_\infty\subset W(\BR)\subset V(\BR)$ is real measurable and $\CF_f\subset W(\RA_\BQ^f)$ is a congruence neighbourhood of $\CW$, as $B\to \infty$, we have $$\CN_W(\CF;B)\sim \alpha(V)\beta(V)\omega_\infty^V(\CF_\infty)\left(\omega_f^V|_W(\CF_f)\right) B(\log B)^{r-1},$$ where the rate of convergence may depend on $\CF$.
\end{principle}
\subsection{Local-to-global passage and the geometric sieve}\label{se:local-to-global}
Let $V$ be as in Hypothesis \ref{hyp:almost-Fano}.
Let $(\Omega_\nu\subset V(\BQ_\nu))_{\nu\in\operatorname{Val}(\BQ)}$ be a collection of $\nu$-adic measurable sets satisfying \begin{equation}\label{eq:posmeas}
\omega_\nu^V(\Omega_\nu)>0\text{ and } \omega^V_\nu(\partial\Omega_\nu)=0 \text{ for every } \nu\in\operatorname{Val}(\BQ).
\end{equation}
In this subsection we prove two theorems related to the counting function $$\CN_V\left(\prod_{\nu\in\operatorname{Val}(\BQ)}\Omega_\nu;B\right):=\#\{P\in V(\BQ)\setminus M:H(P)\leqslant B,\ P\in\Omega_\nu \text{ for every } \nu\in\operatorname{Val}(\BQ)\}.$$
The two theorems complement each other: Theorem \ref{thm:Tamagawazero} states that if $\prod_{\nu\in\operatorname{Val}(\BQ)}\Omega_\nu$ has measure zero, then $\CN_V\left(\prod_{\nu\in\operatorname{Val}(\BQ)}\Omega_\nu;B\right)$ has negligible contribution, while Theorem \ref{thm:keyOmega} provides a sufficient condition (namely the geometric condition \textbf{(GS)} \emph{infra}) guaranteeing that $\CN_V\left(\prod_{\nu\in\operatorname{Val}(\BQ)}\Omega_\nu;B\right)$ possesses the expected asymptotic formula.
A significant feature is that neither theorem requires effective error term estimates in Principle \ref{prin:Manin-Peyre} or in the condition \textbf{(GS)} below.
All implied constants in this subsection are allowed to depend on the family $(\Omega_\nu)$.
Our first theorem generalises \cite[Theorem 1.2]{Browning-Loughran}, which corresponds to the case where each $\Omega_\nu$ is the $\nu$-adic closure of a thin subset of $V(\BQ)$; it also applies to the situation of Theorem \ref{thm:ratptsfibration} (2a).
\begin{theorem}\label{thm:Tamagawazero}
Assume that the variety $V$ satisfies Principle \ref{prin:Manin-Peyre}. If $\omega^V\left(\prod_{\nu\in\operatorname{Val}(\BQ)}\Omega_\nu\right)=0$, then as $B\to \infty$,
$$\CN_V\left(\prod_{\nu\in\operatorname{Val}(\BQ)}\Omega_\nu;B\right)=o(B(\log B)^{r-1}).$$
\end{theorem}
\begin{proof}
Let $\varepsilon>0$ be fixed. Then $\prod_{\nu\in\operatorname{Val}(\BQ)}\Omega_\nu$ having measure zero implies that the infinite product
$\prod_{\nu\in\Val(\BQ)}\frac{\omega_\nu^V(\Omega_\nu)}{\omega_\nu^V(V(\BQ_\nu))}$ diverges to zero. So there exists $N(\varepsilon)\geqslant 1$ such that, $$\prod_{\substack{\nu\in\operatorname{Val}(\BQ)\setminus\{\BR\}\\ \nu\leftrightarrow p\leqslant N(\varepsilon)}}\frac{\omega_\nu^V(\Omega_\nu)}{\omega_\nu^V(V(\BQ_\nu))}<\varepsilon.$$
For every non-archimedean place $\nu$, we let $\overline{\Omega_\nu}$ be the closure of $\Omega_\nu$ inside $V(\BQ_\nu)$. Consider the adelic neighbourhood $$\CE((\Omega_p)_{p\leqslant N(\varepsilon)}):=V(\BR)\times \prod_{p>N(\varepsilon)} V(\BQ_p)\times \prod_{p\leqslant N(\varepsilon)}\overline{\Omega_p}\subset V(\RA_\BQ).$$
Then
$$\omega^V(\CE((\Omega_p)_{p\leqslant N(\varepsilon)}))=\omega^V(V(\RA_\BQ))\prod_{\substack{\nu\in\operatorname{Val}(\BQ)\setminus\{\BR\}\\ \nu\leftrightarrow p\leqslant N(\varepsilon)}}\frac{\omega_\nu^V(\Omega_\nu)}{\omega_\nu^V(V(\BQ_\nu))}<\varepsilon \omega^V(V(\RA_\BQ)).$$
On the other hand, Principle \ref{prin:Manin-Peyre} implies that there exists $B(\varepsilon)>0$ such that for all $B>B(\varepsilon)$,
$$\left|\frac{\CN_V(\CE((\Omega_p)_{p\leqslant N(\varepsilon)});B)}{\alpha(V)\beta(V)\omega^V(\CE((\Omega_p)_{p\leqslant N(\varepsilon)}))B(\log B)^{r-1}}-1\right|<\varepsilon.$$
It follows that for all $B>B(\varepsilon)$,
\begin{align*}
\CN_V\left(\prod_{\nu\in\operatorname{Val}(\BQ)}\Omega_\nu;B\right)\leqslant \CN_V(\CE((\Omega_p)_{p\leqslant N(\varepsilon)});B)&\leqslant (1+\varepsilon) \alpha(V)\beta(V)\omega^V(\CE((\Omega_p)_{p\leqslant N(\varepsilon)}))B(\log B)^{r-1}\\ &<\varepsilon(1+\varepsilon)\alpha(V)\beta(V)\omega^V(V(\RA_\BQ))B(\log B)^{r-1}.
\end{align*}This finishes the proof, upon rescaling $\varepsilon$.
\end{proof}
Our next theorem is inspired by the work of Poonen--Stoll \cite[Lemma 20]{Poonen-Stoll}. It serves as the key ingredient for proving Principle \ref{prin:purity} and Theorem \ref{thm:ratptsfibration} (2b).
\begin{theorem}\label{thm:keyOmega}
Assume that the variety $V$ satisfies Principle \ref{prin:PManin-Peyre} with respect to a $\BZ$-integral model $\CV\subset\BP_\BZ^m$. Assume moreover that the family $(\Omega_\nu)$ satisfies the following condition:
\begin{itemize}
\item \textbf{Condition (GS)}: We have $$\limsup_{B\to\infty} \frac{\CR((\Omega_\nu);N,B)}{B(\log B)^{r-1}}=o(1) \text{ as }N\to\infty,$$ where, for every $N\in\BZ_{>0}$ and $B>0$, $$\CR((\Omega_\nu);N,B):=\#\{P\in V(\BQ)\setminus M:H(P)\leqslant B,\ P\in\Omega_\infty,\ \exists p>N, P\not\in\Omega_p\}.$$
\end{itemize}
Then we have $\omega^V\left(\prod_{\nu\in\operatorname{Val}(\BQ)}\Omega_\nu\right)>0$, and as $B\to\infty$, \begin{equation}\label{eq:asympCS}
\CN_V\left(\prod_{\nu\in\operatorname{Val}(\BQ)}\Omega_\nu;B\right)\sim \alpha(V)\beta(V)\omega^V\left(\prod_{\nu\in\operatorname{Val}(\BQ)}\Omega_\nu\right)B(\log B)^{r-1}.
\end{equation}
\end{theorem}
\begin{proof}
To ease notation, we write in what follows, for each measurable $\CE\subset V(\RA_\BQ)$,
$$g(\CE;B):=\frac{\CN_V(\CE;B)}{\alpha(V)\beta(V)B(\log B)^{r-1}}.$$
For every $N_1,N_2\in\BZ_{>0}$ with $N_1<N_2$, let us define the measurable finite adelic set $$\Omega_f[N_1,N_2]:=\prod_{N_1<p\leqslant N_2}\Omega_p\times \prod_{p\leqslant N_1\text{ or } p>N_2}V(\BQ_p)\subset V(\RA_\BQ^f).$$
\textbf{Step I.} We first prove that, for every such fixed $N_1,N_2$, the limit \begin{equation}\label{eq:aN}
\lim_{B\to\infty}g(\Omega_\infty\times\Omega_f[N_1,N_2];B)
\end{equation} exists and is equal to $\omega^V(\Omega_\infty\times \Omega_f[N_1,N_2])$. This will only make use of Principle \ref{prin:PManin-Peyre}.
Indeed, we let
$$\overline{\Omega_f[N_1,N_2]}:=\prod_{N_1<p\leqslant N_2}\overline{\Omega_p}\times\prod_{p\leqslant N_1\text{ or } p>N_2}V(\BQ_p)$$ be the closure of $\Omega_f[N_1,N_2]$ in $V(\RA_\BQ^f)$, which is a closed compact measurable subset of $V(\RA_\BQ^f)$. Since $\omega_p^V(\partial\Omega_p)=0$ by assumption, we have $$\omega_f^V(\overline{\Omega_f[N_1,N_2]})=\omega_f^V(\Omega_f[N_1,N_2]).$$ For every $\varepsilon>0$, we can cover $\overline{\Omega_f[N_1,N_2]}$ by a finite disjoint union of projective congruence neighbourhoods $\{\CE_f^\CV(l_i;\bxi_i)\}_{i\in I}$, such that $$\omega_f^V\left(\bigsqcup_{i\in I}\CE_f^\CV(l_i;\bxi_i)\right)-\omega_f^V(\overline{\Omega_f[N_1,N_2]})<\varepsilon.$$
Then applying Principle \ref{prin:PManin-Peyre} to each $\CE_f^\CV(l_i;\bxi_i)$, we have
\begin{equation}\label{eq:upperepsilon}
\begin{split}
\limsup_{B\to\infty}g(\Omega_\infty\times\Omega_f[N_1,N_2];B)&\leqslant \limsup_{B\to\infty}\sum_{i\in I}g(\Omega_\infty\times \CE_f^\CV(l_i;\bxi_i);B)\\ &=\sum_{i\in I}\omega_\infty^V(\Omega_\infty)\omega_f^V(\CE_f^\CV(l_i;\bxi_i))\\ &=\omega_\infty^V(\Omega_\infty)\omega_f^V\left(\bigsqcup_{i\in I}\CE_f^\CV(l_i;\bxi_i)\right)\\ &<\omega_\infty^V(\Omega_\infty)\left(\omega_f^V(\Omega_f[N_1,N_2])+\varepsilon\right).
\end{split}
\end{equation}
On the other hand, for every prime $p$, we let $\overline{\Omega_p^c}$ be the closure of the measurable set $V(\BQ_p)\setminus\Omega_p$ in $V(\BQ_p)$, and let
$$\overline{\Omega_f[N_1,N_2]^c}:=\bigcup_{N_1<p_0\leqslant N_2}\left(\overline{\Omega_{p_0}^c}\times \prod_{N_1<p\leqslant N_2,p\neq p_0}\overline{\Omega_p}\times \prod_{p\leqslant N_1\text{ or } p>N_2}V(\BQ_p)\right),$$ which is also a closed compact subset of $V(\RA_\BQ^f)$. Moreover \begin{equation}\label{eq:complem}
V(\RA_\BQ^f)\setminus \Omega_f[N_1,N_2]\subset \overline{\Omega_f[N_1,N_2]^c},\quad \omega_f^V\left(V(\RA_\BQ^f)\setminus \Omega_f[N_1,N_2]\right)=\omega_f^V( \overline{\Omega_f[N_1,N_2]^c}).
\end{equation}
For every $\varepsilon>0$, on covering $\overline{\Omega_f[N_1,N_2]^c}$ with finitely many disjoint congruence neighbourhoods $\{\CE_f^\CV(l_j;\bxi_j)\}_{j\in J}$ such that $$\omega_f^V\left(\bigsqcup_{j\in J}\CE_f^\CV(l_j;\bxi_j)\right)-\omega_f^V(\overline{\Omega_f[N_1,N_2]^c})<\varepsilon,$$ we similarly obtain as \eqref{eq:upperepsilon},
$$\limsup_{B\to\infty}g(\Omega_\infty\times\overline{\Omega_f[N_1,N_2]^c};B)<\omega_\infty^V(\Omega_\infty)\left(\omega_f^V(\overline{\Omega_f[N_1,N_2]^c})+\varepsilon\right).$$
Therefore, using \eqref{eq:complem},
\begin{align*}
\liminf_{B\to\infty}g(\Omega_\infty\times\Omega_f[N_1,N_2];B)&\geqslant\liminf_{B\to\infty}\left(g(\Omega_\infty\times V(\RA_\BQ^f);B)-\sum_{j\in J}g(\Omega_\infty\times\CE_f^\CV(l_j;\bxi_j);B)\right)\\ &\geqslant \liminf_{B\to\infty}g(\Omega_\infty\times V(\RA_\BQ^f);B)-\sum_{j\in J}\limsup_{B\to\infty}g(\Omega_\infty\times\CE_f^\CV(l_j;\bxi_j);B)\\ &>\omega_\infty^V(\Omega_\infty)\left(\omega_f^V(V(\RA_\BQ^f))-\omega_f^V(\overline{\Omega_f[N_1,N_2]^c})-\varepsilon\right)\\ &=\omega_\infty^V(\Omega_\infty)\left(\omega_f^V(\Omega_f[N_1,N_2])-\varepsilon\right).
\end{align*}
Combining this with \eqref{eq:upperepsilon} yields the claim.
\textbf{Step II.} We next show that the infinite product \eqref{eq:omeganu} converges, so that the sequence $$a_N:=\omega^V(\Omega_\infty\times \Omega_f[1,N]),$$ which by Step I is the limit \eqref{eq:aN}, converges as $N\to\infty$ to $\omega^V\left(\prod_{\nu\in\operatorname{Val}(\BQ)}\Omega_\nu\right)$. The condition \textbf{(GS)} now comes into play.
By Cauchy's criterion, we want to show that $\frac{a_{N_2}}{a_{N_1}}=\prod_{N_1<p\leqslant N_2}\frac{\omega_p^V(\Omega_p)}{\omega_p^V(V(\BQ_p))}$ is sufficiently close to $1$ uniformly for arbitrarily large $N_1<N_2$; we first establish this for $\prod_{N_1<p\leqslant N_2}\lambda_p\omega_p^V(\Omega_p)$. Indeed, since $$\CN_V(\Omega_\infty\times V(\RA_\BQ^f);B)-\CN_V(\Omega_\infty\times\Omega_f[N_1,N_2];B)\leqslant \CR((\Omega_\nu);N_1,B),$$ on writing for convenience $$h(N):=\limsup_{B\to\infty} \frac{\CR((\Omega_\nu);N,B)}{\alpha(V)\beta(V)B(\log B)^{r-1}},$$ according to \eqref{eq:aN}, we have
\begin{align*}
&\omega^V(\Omega_\infty\times V(\RA_\BQ^f))\left|\prod_{N_1<p\leqslant N_2}\lambda_p\omega_p^V(\Omega_p)-1\right| \\ =&\lim_{B\to\infty}\left|\left(\prod_{N_1<p\leqslant N_2}\lambda_p\omega_p^V(V(\BQ_p))\right)g(\Omega_\infty\times\Omega_f[N_1,N_2];B)-g(\Omega_\infty\times V(\RA_\BQ^f);B)\right|\\ \leqslant &\left|\prod_{N_1<p\leqslant N_2}\lambda_p\omega_p^V(V(\BQ_p))-1\right|\lim_{B\to\infty}g(\Omega_\infty\times V(\RA_\BQ^f);B)\\ &\quad\quad+\limsup_{B\to\infty}\left(g(\Omega_\infty\times V(\RA_\BQ^f);B)-g(\Omega_\infty\times\Omega_f[N_1,N_2];B)\right)\\ \leqslant&\left|\prod_{N_1<p\leqslant N_2}\lambda_p\omega_p^V(V(\BQ_p))-1\right|\omega^V(\Omega_\infty\times V(\RA_\BQ^f))+h(N_1).
\end{align*}
By construction, the Tamagawa measure \eqref{eq:Tamagawameas} with the family of convergence factors $(\lambda_\nu)$ is absolutely convergent. So $\prod_{N_1<p\leqslant N_2}\lambda_p\omega_p^V(V(\BQ_p))$ is arbitrarily close to $1$ provided that $N_1$ is large.
So the hypothesis \textbf{(GS)} shows that $\prod_{N_1<p\leqslant N_2}\lambda_p\omega_p^V(\Omega_p)\to 1$ uniformly for $N_1<N_2$ as $N_1\to\infty$; hence the sequence $(a_N)$ is Cauchy, which confirms the convergence of the infinite product \eqref{eq:omeganu}.
\textbf{Step III.} We now return to the counting function $\CN_V\left(\prod_{\nu\in\operatorname{Val}(\BQ)}\Omega_\nu;B\right)$. We write for simplicity
$$g(B):=\frac{\CN_V\left(\prod_{\nu\in\operatorname{Val}(\BQ)}\Omega_\nu;B\right)}{\alpha(V)\beta(V)B(\log B)^{r-1}},\quad g_N(B):=g(\Omega_\infty\times\Omega_f[1,N];B).$$
To complete the proof we now show $\lim_{B\to\infty} g(B)$ exists and is equal to $\omega^V\left(\prod_{\nu\in\operatorname{Val}(\BQ)}\Omega_\nu\right)$, which is the limit of the sequence $(a_N)$.
First, observe that for every $N$,
$$\limsup_{B\to\infty} g(B)\leqslant\limsup_{B\to\infty}g_N(B)=a_N,$$ and therefore $$\limsup_{B\to\infty}g(B)\leqslant \lim_{N\to\infty}a_N=\omega^V\left(\prod_{\nu\in\operatorname{Val}(\BQ)}\Omega_\nu\right).$$
On the other hand, since $$g_N(B)-g(B)\leqslant \frac{\CR((\Omega_\nu);N,B)}{\alpha(V)\beta(V)B(\log B)^{r-1}},$$ we get
$$\liminf_{B\to\infty} g(B)\geqslant \liminf_{B\to\infty} g_N(B)-\limsup_{B\to\infty}\left(g_N(B)-g(B)\right)=a_N-h(N).$$ Hence
$$\liminf_{B\to\infty} g(B)\geqslant \lim_{N\to\infty} \left(a_N-h(N)\right)=\omega^V\left(\prod_{\nu\in\operatorname{Val}(\BQ)}\Omega_\nu\right).$$
This finishes the proof.
\end{proof}
\subsection{The effective equidistribution condition (EE)}\label{se:effdist}
Let $V$ be as in Hypothesis \ref{hyp:almost-Fano}.
\begin{definition}\label{def:LCEf}
For every non-empty open-closed (compact) subset $\CE_f\subset V(\RA_\BQ^f)$, we define $\CL(\CE_{f})$ to be the smallest integer $l$ (depending on the model $\CV\subset\BP_\BZ^m$) such that $\CE_f$ can be written as a finite disjoint union of congruence neighbourhoods $\{\CE^\CV_f(l;\bxi_i)\}_i$, all of level $l$ (recall Definition \ref{def:projcong}): $$\CE_{f}=\bigsqcup_{i} \CE^\CV_f(l;\bxi_i).$$
\end{definition}
Since $\CE_f$ is compact, and congruence neighbourhoods of $\CV$ form a topological basis of $V(\RA_\BQ^f)$, we have $\CL(\CE_{f})<\infty$.
The condition that we now formulate is an effective version of Principle \ref{prin:PManin-Peyre}; it depends on the fixed projective $\BZ$-integral model $\CV\subset\BP_\BZ^m$ of $V$, and requires the error terms to be uniform in the level.
\begin{itemize}
\item \textbf{Condition (EE).} \emph{There exist
$\gamma\geqslant 0$, and a continuous function $h:\BR_{>0}\to\BR_{>0}$ with $h(x)=o(1),x\to\infty$, satisfying the following property.
For every adelic neighbourhood of the form $\CF=\CF_\infty\times \CF_f$, where $\CF_\infty\subset V(\BR)$ is real measurable and $\CF_f=\CE^\CV_f(l;\bxi)\subset V(\RA_\BQ^f)$ is a projective congruence neighbourhood of $\CV$ of level $l\in\BZ_{\geqslant 2}$ associated to $\bxi\in\prod_{p\mid l} V(\BQ_p)$, we have
$$\CN_V(\CF;B)=B(\log B)^{r-1}\left(\alpha(V)\beta(V)\omega^V(\CF)+O_{\CF_\infty}(l^\gamma h(B))\right),$$ where the implied constant may depend also on $\CV$ but it is uniform for every such $l,\bxi$.}
\end{itemize}
\begin{remark} The polynomial-type bound in the level $l$ (i.e. the term $l^\gamma$) is compatible with all known examples (see e.g. \cite[\S4]{Browning-Loughran} for projective quadrics, and \cite[Theorem 3.1]{Cao-Huang2} for several affine varieties).
\end{remark}
\begin{proposition}\label{prop:EEimpliesgeneral}
Suppose that $V$ satisfies \textbf{(EE)} with respect to a $\BZ$-integral model $\CV\subset\BP_\BZ^m$. Then there exists $\eta\geqslant 0$ depending on $\CV\subset\BP_\BZ^m$ such that, for every adelic neighbourhood $\CF=\CF_\infty\times\CF_{f}$ where $\CF_\infty\subset V(\BR)$ is real measurable and $\CF_{f}\subset V(\RA_\BQ^f)$ is non-empty open-closed, we have
$$\CN_V(\CF;B)=B(\log B)^{r-1}\left(\alpha(V)\beta(V)\omega^V(\CF)+O_{\CF_\infty,\varepsilon}(\CL(\CF_{f})^{\gamma+\dim V+\eta+\varepsilon} h(B))\right),$$ where the integer $\CL(\CF_{f})$ is defined in Definition \ref{def:LCEf}, and the implied constant may depend on $\CV\subset\BP_\BZ^m$ but is uniform with respect to $\CF_{f}$. If $\CV$ is smooth over $\BZ$, then we can take $\eta=0$.
\end{proposition}
\begin{proof}
To ease notation, we write $\CL$ for $\CL(\CF_{f})$, given that $\CF_{f}$ is fixed throughout.
On covering $\CF_{f}$ by a finite disjoint union of congruence neighbourhoods $\{\CE^\CV_f(\CL;\bxi_i)\}_{i\in I}$ of level $\CL$:
$$\CF_{f}=\bigsqcup_{i\in I} \CE^\CV_f(\CL;\bxi_i),$$
it remains to estimate $\# I$. Indeed, since these neighbourhoods are pairwise disjoint, the local points $(\bxi_i)_{i\in I}$ have pairwise distinct images under the reduction-modulo-$\CL$ map
$$\prod_{p\mid \CL}\CV(\BZ_p)\to \prod_{p\mid\CL}\CV(\BZ/p^{\operatorname{ord}_p(\CL)}\BZ).$$
We now fix a finite set $S$ of places such that $\CV$ is smooth over $\BZ_S$. If $p\mid \CL$ and $p\not\in S$, using the Lang--Weil estimate \cite{Lang-Weil} (see also \cite[\S1.7]{Cao-Huang2}) and Hensel's lemma, we obtain
$$\#\CV(\BZ/p^{\operatorname{ord}_p(\CL)}\BZ)=p^{\dim V\operatorname{ord}_p(\CL)}\left(1+O(p^{-\frac{1}{2}})\right).$$
To estimate the contribution from $p\mid \CL$ and $p\in S$, we write $p_S:=\max_{p\in S} p$. Note that if $p\mid \CL$ then $2^{\operatorname{ord}_p(\CL)}\leqslant p^{\operatorname{ord}_p(\CL)}\leqslant \CL$, and hence $\operatorname{ord}_p(\CL)\leqslant \log_2(\CL)$. Applying the Lang--Weil estimate as above to $\BP_\BZ^m$, we have
\begin{align*}
\#I&\leqslant \prod_{p\mid\CL}\#\CV(\BZ/p^{\operatorname{ord}_p(\CL)}\BZ)\\&\leqslant\prod_{p\mid \CL, p\in S} \#\BP_\BZ^m(\BZ/p^{\operatorname{ord}_p(\CL)}\BZ)\times \prod_{p\mid \CL,p\notin S}\#\CV(\BZ/p^{\operatorname{ord}_p(\CL)}\BZ)\\ &=\prod_{p\mid \CL, p\in S} p^{m\operatorname{ord}_p(\CL)}\left(1+O(p^{-\frac{1}{2}})\right)\times \prod_{p\mid \CL,p\notin S}p^{\dim V\operatorname{ord}_p(\CL)}\left(1+O(p^{-\frac{1}{2}})\right)\\ &\ll_\varepsilon \CL^{\dim V+\varepsilon}\times \prod_{p\mid \CL, p\in S}p^{(m-\dim V)\operatorname{ord}_p(\CL)}\\ &\leqslant \CL^{\dim V+\varepsilon} p_S^{(m-\dim V)\#\{p\text{ prime}:p\in S\} \log_2 \CL}=\CL^{\dim V+(m-\dim V)\#\{p\text{ prime}:p\in S\} \log_2 p_S+\varepsilon}.
\end{align*}
Therefore, we define $$\eta:=(m-\dim V)\#\{p\text{ prime}:p\in S\} \log_2 p_S.$$ In particular, if $\CV$ is smooth over $\BZ$, we can take $S=\{\BR\}$ and thus $\eta=0$. We finally obtain \begin{align*}
&\CN_V(\CF_\infty\times\CF_{f};B)\\=&\sum_{i\in I}\CN_V(\CF_\infty\times\CE^\CV_f(\CL;\bxi_i);B)\\ =&B(\log B)^{r-1}\left(\alpha(V)\beta(V)\left(\sum_{i\in I}\omega^V(\CF_\infty\times\CE^\CV_f(\CL;\bxi_i))\right)+O_{\CF_\infty}\left((\# I)\CL^\gamma h(B)\right)\right)\\ =&B(\log B)^{r-1}\left(\alpha(V)\beta(V)\omega^V(\CF)+O_{\CF_\infty,\varepsilon}(\CL^{\gamma+\dim V+\eta+\varepsilon} h(B))\right).
\end{align*}
The proof is thus completed.
\end{proof}
\subsection{The Selberg sieve for almost-Fano varieties}\label{se:Selbersieve}
In this section, based on the condition \textbf{(EE)}, we establish a sieving result for rational points on almost-Fano varieties satisfying conditions ``modulo higher power primes'', as an effective version of Theorem \ref{thm:Tamagawazero}. Our key analytic input is the Selberg sieve, which is best suited to our application. The main result is Theorem \ref{thm:Selbergsieve}, which generalises \cite[Theorem 1.7]{Browning-Loughran}, where $X$ is a projective quadric hypersurface.
Let $\CP$ be a finite set of prime numbers. Let $$\Pi(\CP):=\prod_{p\in\CP} p.$$
Let $$\CA=(a_i)_{i\in I}$$ be a sequence of integers indexed by a finite set $I$.
The cardinality $|\CA|$ of the sequence $\CA$ is $\# I$.
For $d\in\BN_{\geqslant 1}$, we define the finite subsequence $$\CA_d:=(a_i)_{i\in I_d}$$ of $\CA$ with $I_d:=\{i\in I:d\mid a_i\}\subset I$.
The goal is to provide an upper-bound estimate for the sifting function
$$S(\CA,\CP):=\#\{i\in I:\gcd(a_i,\Pi(\CP))=1\}.$$
\begin{theorem}[cf. \cite{F-I}, Theorem 7.1]\label{thm:Selberg}
Let $\mathfrak{X}>0$ and let $g:\BZ_{>0}\to\BR_{>0}$ be a multiplicative arithmetic function with $0<g(p)<1$ for every $p\mid\Pi(\CP)$. Then for every $D>1$, we have
$$S(\CA,\CP)\leqslant \frac{\mathfrak{X}}{J(\CP,D)}+\sum_{d\mid \Pi(\CP),d<D}\tau_3(d)\left|\CR_{\CA_d}(\mathfrak{X})\right|,$$ where $$J(\CP,D):=\sum_{d\mid \Pi(\CP),d<\sqrt{D}}\prod_{p\mid d}\left(\frac{g(p)}{1-g(p)}\right)$$ and
$$ \CR_{\CA_d}(\mathfrak{X})=|\CA_d|-g(d)\mathfrak{X}.$$
\end{theorem}
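To see the shape of this bound in the classical setting, one may take $\CA=(n)_{n\leqslant x}$, $\CP=\{p<z\}$, $g(d)=1/d$ and $\mathfrak{X}=x$, so that $|\CA_d|=x/d+O(1)$ and $\CR_{\CA_d}(\mathfrak{X})=O(1)$. With $D=z^2$ one has $J(\CP,z^2)\geqslant\sum_{d<z}\mu^2(d)/d\gg\log z$, whence $$S(\CA,\CP)\ll \frac{x}{\log z}+O_\varepsilon(z^{2+\varepsilon}),$$ which, on choosing $z=x^{1/3}$ say, recovers a Chebyshev-type bound $\pi(x)\ll x/\log x$.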
Now we fix a variety $V$ satisfying Hypothesis \ref{hyp:almost-Fano}. We fix a finite set of places $S$ containing $\BR$,
and a collection of non-empty open-closed $\nu$-adic sets $(\Theta_\nu\subset V(\BQ_\nu))_{\nu\notin S}$, to which we associate the adelic neighbourhood
$$\CE[(\Theta_\nu)]:=\prod_{\nu\in S} V(\BQ_\nu)\times \prod_{\nu\notin S}\Theta_\nu,$$ and the counting function
$$\CN_V(\CE[(\Theta_\nu)];B)=\#\{P\in (V(\BQ)\setminus M)\cap \CE[(\Theta_\nu)]:H(P)\leqslant B\}.$$
We also write $\Theta_\nu^c:= V(\BQ_\nu)\setminus \Theta_\nu$.
\begin{theorem}\label{thm:Selbergsieve}
Assume that the condition \textbf{(EE)} (\S\ref{se:effdist}) holds with exponent $\gamma$ and function $h$ with respect to a fixed $\BZ$-integral model $\CV\subset\BP_\BZ^m$ of $V$. Assume also that there exists $k\in\BZ_{>0}$ such that, if $\nu_0\notin S$ corresponds to $p_0$, then $$\CL\left( \Theta_{\nu_0}\times\prod_{\substack{\nu\in\Val(\BQ)\setminus\{\BR\}\\\nu\neq \nu_0}}V(\BQ_\nu)\right)\leqslant p_0^k.$$ (Recall Definition \ref{def:LCEf} for $\CL$.) Then uniformly for every $N\geqslant 1$,
$$\CN_V(\CE[(\Theta_\nu)];B)\ll_\varepsilon B(\log B)^{r-1}\left(G(N)^{-1}+N^{2k(\gamma+\dim V+\eta)+2+\varepsilon} h(B)\right),$$ where $\eta\geqslant 0$ is as in Proposition \ref{prop:EEimpliesgeneral} and the function $G$ is defined by \begin{equation}\label{eq:GN}
G(x):=\sum_{\substack{j<x\\p\mid j\Rightarrow p\not\in S,\Theta_{p}\neq V(\BQ_p)}}\mu^2(j)\prod_{p\mid j}\frac{\omega_p^V(\Theta_p^c)}{\omega_p^V(\Theta_p)}.
\end{equation}
\end{theorem}
\begin{proof}[Proof of Theorem \ref{thm:Selbergsieve}]
For every $N>1$, we let $$\CP(N):=\{p\text{ prime}: p\not\in S,\Theta_p\neq V(\BQ_p),p<N\}.$$
For each prime $p$ corresponding to $\nu$, let
$$g(p):=\begin{cases}
\frac{\omega_\nu^V(\Theta_\nu^c)}{\omega_\nu^V(V(\BQ_\nu))} &\text{ if } p\in \CP(N);\\ 0 &\text{ otherwise}.
\end{cases}$$ Then $g$ extends to a multiplicative function and clearly $0<g(p)<1$ whenever $p\mid \Pi(\CP(N))$.
We shall apply Theorem \ref{thm:Selberg} to the sequence $\CA=(a(P))_{P\in I}$, indexed by $$I:=\{P\in V(\BQ)\setminus M:H(P)\leqslant B\},$$ where
$$a(P):=\prod_{\substack{p\not\in S,p<N\\ P\in \Theta_p^c}}p.$$
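Concretely, $\gcd(a(P),\Pi(\CP(N)))=1$ if and only if $P\in\Theta_p$ for every $p\in\CP(N)$. For a hypothetical example, if $P\not\in\Theta_p$ precisely for $p\in\{3,7\}\subset\CP(N)$, then $a(P)=21$, so $P$ is sifted out and does not contribute to $S(\CA,\CP(N))$.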
Let $$\mathfrak{X}:=B(\log B)^{r-1}\alpha(V)\beta(V)\omega^V(V(\RA_\BQ)).$$
For every $d\mid \Pi(\CP(N))$, the subsequence $(\CA_d)=(a(P))_{P\in I_d}$ is therefore indexed by the set $$I_d:=\{P\in V(\BQ)\setminus M:H(P)\leqslant B,P\in \Theta_p^c \text{ for all }p\mid d\}.$$ Define the finite adelic neighbourhood $$\CE(d,\Theta^c;N):=\prod_{p\in S\text{ or }p\nmid d}V(\BQ_p)\times\prod_{p\not \in S\text{ and }p\mid d}\Theta_p^c.$$
Then, with respect to the model $\CV\subset\BP_\BZ^m$,
$$\CL(\CE(d,\Theta^c;N))\leqslant\prod_{p\mid d}p^k=d^k,$$ since $d\mid\Pi(\CP(N))$ is squarefree. The condition \textbf{(EE)} and Proposition \ref{prop:EEimpliesgeneral} imply
\begin{align*}
|\CA_d| =&B(\log B)^{r-1}\left(\alpha(V)\beta(V)\omega_\infty^V(V(\BR))\omega_f^V(\CE(d,\Theta^c;N))+O_\varepsilon(d^{k(\gamma+\dim V+\eta)+\varepsilon}h(B))\right)\\ =&\frac{\omega_f^V(\CE(d,\Theta^c;N))}{\omega_f^V(V(\RA_\BQ^f))}\mathfrak{X}+O_\varepsilon(d^{k(\gamma+\dim V+\eta)+\varepsilon}B(\log B)^{r-1}h(B))\\
=&\left(\prod_{p\mid d}\frac{\omega_p^V(\Theta_p^c)}{\omega_p^V(V(\BQ_p))}\right)\mathfrak{X}+O_\varepsilon(d^{k(\gamma+\dim V+\eta)+\varepsilon}B(\log B)^{r-1}h(B))\\ =&\left(\prod_{p\mid d} g(p)\right)\mathfrak{X}+O_\varepsilon(d^{k(\gamma+\dim V+\eta)+\varepsilon}B(\log B)^{r-1}h(B)).
\end{align*}
We are now in a position to apply Theorem \ref{thm:Selberg}.
Since \begin{align*}
\CN_V(\CE[(\Theta_\nu)];B)&\leqslant S(\CA,\CP(N))\\ &=\#\{P\in V(\BQ)\setminus M:H(P)\leqslant B,P\in\Theta_p \text{ for all }p\not\in S,p<N\},
\end{align*}
on defining $$\CR_{\CA_d}(\mathfrak{X}):=|\CA_d|-\left(\prod_{p\mid d} g(p)\right)\mathfrak{X},$$ we obtain that uniformly for $N\geqslant 1$, by taking $D=N^2$ in Theorem \ref{thm:Selberg} and using $\tau_3(d)\ll_\varepsilon d^\varepsilon$,
\begin{align*}
\CN_V(\CE[(\Theta_\nu)];B)&\leqslant \frac{\mathfrak{X}}{J(\CP(N),N^2)}+\sum_{d<N^2}\tau_3(d)\left|\CR_{\CA_d}(\mathfrak{X})\right|\\ &\ll_\varepsilon B(\log B)^{r-1}\left(\left(\sum_{\substack{d<N\\ p\mid d\Rightarrow p\in \CP(N)}}\mu^2(d)\prod_{p\mid d}\left(\frac{g(p)}{1-g(p)}\right)\right)^{-1}+h(B)\sum_{d<N^2}d^{k(\gamma+\dim V+\eta)+\varepsilon}\right)\\ &\ll_\varepsilon B(\log B)^{r-1}\left(G(N)^{-1}+N^{2k(\gamma+\dim V+\eta)+2+\varepsilon} h(B)\right).
\end{align*}
This finishes the proof.
\end{proof}
\section{Universal torsor method}\label{se:univtor}
(See e.g. \cite[\S5]{Salberger}, \cite[\S2, \S3]{Frei-Pieropan}, \cite[\S4.3.2]{PeyreBeyond}.)
The notion of universal torsors was first introduced by Colliot-Thélène and Sansuc in \cite{CT-Sansuc}. Roughly speaking, universal torsors are quasi-affine varieties with simpler geometry, and rational points on $V$ lift to integral points on universal torsors, which are often easier to handle.
Let $V$ be a $\BQ$-variety satisfying Hypothesis \ref{hyp:almost-Fano}.
Let $\pi:\CTUT\to V$ be a universal torsor of $V$ under the N\'eron-Severi torus $\CTNS$, such that $\CTUT(\BQ)\neq\varnothing$. Write $$d:=\dim V,\quad r:=\dim\CTNS,\quad n:=\dim\CTUT,$$ so that $n=r+d$.
In what follows we assume the following hypothesis, which, for the sake of brevity, is tailored to the application to split toric varieties.\footnote{For thorough discussions of universal torsors over general number fields, of lifting rational points, and of the construction of relevant fundamental domains, see e.g. \cite[\S5, \S6, \S10]{Salberger} and \cite[\S4.3]{PeyreBeyond}. For the construction of models of universal torsors, see \cite[\S3]{Frei-Pieropan}.}
\begin{hypothesis}\label{hyp:univtor}
There exist $\FTUT\subset\BA_\BZ^k$ a quasi-affine integral model of $\CTUT$, $\FTNS$ an integral model of $\CTNS$, $\CV\subset\BP_\BZ^m$ an integral model for $V$, and a morphism $\widetilde{\pi}:\FTUT\to\CV$, all of which are over $\BZ$, such that $\FTNS$ is split over $\BZ$ (i.e. $\FTNS\simeq_\BZ \BG^{\dim\CTNS}_{\operatorname{m},\BZ}$) and $\FTUT$ is a $\BZ$-torsor of $\CV$ under $\FTNS$.
\end{hypothesis}
Hypothesis \ref{hyp:univtor} implies, by Hilbert's Theorem 90, that the $\CTNS$-torsor $\CTUT$ has no other $\BQ$-form,
and that the map $$\widetilde{\pi}|_{\FTUT(\BZ)}:\FTUT(\BZ)\to \CV(\BZ)=V(\BQ)$$ is surjective (cf. \cite[\S2.2, \S2.3]{CT-Sansuc}, \cite[Proposition 2.1]{Frei-Pieropan}, \cite[Remark 4.23]{PeyreBeyond}).
The goal of this section is to prove Propositions \ref{prop:equidistunivtor} and \ref{prop:univtorEEGS}. The former ``lifts'' Principle \ref{prin:Manin-Peyre} to universal torsors, and the latter ``lifts'' the condition \textbf{(EE)} on $\CV$ to the condition \textbf{(EEUT)} on $\FTUT$.
\subsection{Tamagawa measures on universal torsors}\label{se:Tamagawaunivtor}
Let $\CT$ be a $\BQ$-split torus of dimension $e$. Let $\varpi_{\CT}:=\frac{\operatorname{d}x_1}{x_1}\wedge\cdots\wedge \frac{\operatorname{d}x_e}{x_e}$ be a global $\CT$-invariant differential form \emph{of minimal $\operatorname{d}\log $-type} on $\CT$ (cf. \cite[3.28]{Salberger}). Then for every local section $s_\nu$ of $-K_{\CT}$ defined at $P_\nu\in \CT(\BQ_\nu)$, the \emph{order norm} is (cf. \cite[3.29]{Salberger})
\begin{equation}\label{eq:ordernorm}
\|s_\nu(P_\nu)\|_{\operatorname{ord},\nu}:=\left|\left\langle s_\nu ,\varpi_{\CT}\right\rangle(P_\nu)\right|_\nu.
\end{equation}
Now let $V$ be as in Hypotheses \ref{hyp:almost-Fano} and \ref{hyp:univtor}. The order norm on $-K_{\CTNS}$ induces, by pull-back, the \emph{constant norm} $(\|\cdot\|_{\CTUT/V,\nu})$ on $-K_{\CTUT/V}$ (\cite[3.30]{Salberger}). One can show (cf. \cite[3.31]{Salberger}) that the constant norm coincides with the model norm attached to $(-K_{\FTUT/\CV},\FTUT)$.
Having equipped $-K_V$ with an adelic norm $(\|\cdot\|_\nu)_{\nu\in\Val(\BQ)}$ with associated adelic measures $(\omega_\nu^V)$ (recall \eqref{eq:adelicmeasure}), since $$-K_{\CTUT}\simeq \pi^*(-K_V)\otimes -K_{\CTUT/V},$$ we define the \emph{induced norm} (cf. \cite[3.30]{Salberger}) $(\|\cdot\|_{\CTUT,\nu})_{\nu\in\Val(\BQ)}$ on $-K_{\CTUT}$ to be the product of the pull-back of $(\|\cdot\|_\nu)_{\nu\in\Val(\BQ)}$ with the constant norm $(\|\cdot\|_{\CTUT/V,\nu})_{\nu\in\Val(\BQ)}$, to which we associate the adelic measures $(\omega_\nu^{\CTUT})$.\footnote{For another definition of ``canonical measures'', see \cite[p. 245]{PeyreBeyond}.}
For all non-archimedean $\nu\in\Val(\BQ)$, the adelic measures $\omega^V_\nu,\omega_\nu^{\CTUT}$ ``patch together'', in the following sense (cf. \cite[Corollary 2.23]{Salberger}, \cite[Theorem 4.33]{PeyreBeyond}). Write $$\widehat{\pi}_\nu:\FTUT(\BZ_\nu)\to \CV(\BZ_\nu)=V(\BQ_\nu)$$ for the map $\widetilde{\pi}|_{\FTUT(\BZ_\nu)}$. Then for every $\nu$-adic neighbourhood $\CE_\nu\subset V(\BQ_\nu)$,
\begin{equation}\label{eq:patch}
\omega_\nu^{\CTUT}(\widehat{\pi}_\nu^{-1}(\CE_\nu))=L_\nu(1,\Pic(\overline{V}))^{-1}\omega_\nu^V(\CE_\nu).
\end{equation}
Therefore, recalling the choice of convergent factors for $(\omega^V_\nu)$ \eqref{eq:convergencefact}, the product measure on $\CTUT(\RA_\BQ)$
\begin{equation}\label{eq:Tmeasureunivtor}
\omega^{\CTUT}=\prod_{\nu\in\Val(\BQ)}\omega_\nu^{\CTUT}=\omega_\infty^{\CTUT}\times \omega_f^{\CTUT}
\end{equation}
is convergent (i.e., $(1)_\nu$ is a set of convergence factors), and it is independent of the choice of the form $\varpi_{\CTNS}$ by the product formula (cf. \cite[Lemma 5.16]{Salberger} and \cite[p. 246]{PeyreBeyond}).
We call $\omega_f^{\CTUT}$ the \emph{Tamagawa measure} on $\CTUT(\RA_\BQ)$ associated to the induced norm $(\|\cdot\|_{\CTUT,\nu})_{\nu\in\Val(\BQ)}$.
We shall show in \S\ref{se:toricparaheight} that this construction accords with the toric Tamagawa measures used by Salberger.
\subsection{Lifting into universal torsors}
We assume that Hypotheses \ref{hyp:almost-Fano} and \ref{hyp:univtor} hold. We first recall the definition of \emph{affine congruence neighbourhoods} in \cite[\S2.2]{Cao-Huang2}.
Let $l\in\BZ_{>0}$ and let $$\bXi=(\Xi_p)_{p\mid l}\subset \prod_{p\mid l}\FTUT(\BZ_p)\subset \prod_{p\mid l} \CTUT(\BQ_p)$$ be a collection of (integral) local points. We assume that for each $p\mid l$, the $p$-adic neighbourhood $$\CB^{\FTUT}_p(l;\Xi):=\FTUT(\BZ_p)\cap \left(\Xi_{p}+p^{\operatorname{ord}_p(l)}\BZ_p^k\right)\subset \BA_\BZ^k(\BZ_p)$$ is closed. This is always the case upon replacing $l$ by a sufficiently large power (depending on $\FTUT$ and $\bXi$). Since $\FTUT(\BZ_p)$ is compact, $\CB^{\FTUT}_p(l;\Xi)$ is also compact.
We then define the \emph{(integral affine) congruence neighbourhood} of $\FTUT$ of level $l$ associated to $\Xi$ to be (see \cite[Definition 2.2]{Cao-Huang2}) $$\CB^{\FTUT}_f(l;\Xi):=\prod_{p\mid l}\CB^{\FTUT}_p(l;\Xi)\times \prod_{p\nmid l}\FTUT(\BZ_p)\subset \FTUT(\widehat{\BZ})\subset\CTUT(\RA_\BQ^f).$$ Then $\CB^{\FTUT}_f(l;\Xi)$ is non-empty, open-closed and compact. The collection of congruence neighbourhoods of $\FTUT$ forms a topological basis for $ \FTUT(\widehat{\BZ})$.
We can reformulate the effective equidistribution property in terms of asymptotics of counting points in $\FTUT(\BZ)$ via the finite part Tamagawa measure $\omega_f^{\CTUT}$ on $\CTUT(\RA_\BQ^f)$. (See e.g. \cite[Conjecture 7.12]{Salberger}, and also \cite[p. 248--p. 252, Remark 4.34 (i)]{PeyreBeyond}.) Recall that $M\subset V(\BQ)$ denotes the fixed thin subset.
For every $\CF_\infty\subset V(\BR)$ measurable and for every congruence neighbourhood $\CB^{\FTUT}_f\subset\FTUT(\widehat{\BZ})$, let us consider the counting function
\begin{equation}\label{eq:NFTUT}
\CN_{\FTUT}(\CF_\infty,\CB^{\FTUT}_f;B):=\#\{\XX\in\FTUT(\BZ)\cap \CB^{\FTUT}_f:\widetilde{\pi}(\XX)\in \CF_\infty\setminus M,H(\widetilde{\pi}(\XX))\leqslant B\}.
\end{equation}
\begin{proposition}\label{prop:equidistunivtor}
Suppose that the variety $V$ satisfies Hypotheses \ref{hyp:almost-Fano} and \ref{hyp:univtor}. Then $V$ satisfies Principle \ref{prin:PManin-Peyre} if for all $\CF_\infty\subset V(\BR)$ and $\CB^{\FTUT}_f\subset\FTUT(\widehat{\BZ})$ as above, as $B\to\infty$, \begin{equation}\label{eq:asymptoteunivtor}
\CN_{\FTUT}(\CF_\infty,\CB^{\FTUT}_f;B)\sim \#\FTNS(\BZ)
\alpha(V)\omega^V_\infty(\CF_\infty)\omega_f^{\CTUT}(\CB^{\FTUT}_f)B(\log B)^{r-1},
\end{equation}
where the rate of convergence may depend on $\CF_\infty$ and $\CB^{\FTUT}_f$.
\end{proposition}
\begin{proof}
Since $\CTNS$ is split, taking $S=\{\BR\}$ in \eqref{eq:artinLnu} and \eqref{eq:artinLS} gives
$$L_\nu(s,\Pic(\overline{V}))=\frac{1}{(1-p^{-s})^r}\text{ for every finite place }\nu=p,\quad L_{\BR}(s,\Pic(\overline{V}))=\zeta(s)^r,$$
whence $\lim_{s\to 1}(s-1)^r L_{\BR}(s,\Pic(\overline{V}))=1$. We also have $\beta(V)=\#H^1(\BQ,\BX^*(\CTNS))=1$.
For every prime $p$, we write \begin{equation}\label{eq:pip}
\widehat{\pi}_p:\FTUT(\BZ_p)\to\CV(\BZ_p)
\end{equation} for the map obtained by restricting $\widetilde{\pi}$ to $\BZ_p$-points, and \begin{equation}\label{eq:pihat}
\widehat{\pi}:=\prod_p \widehat{\pi}_p:\FTUT(\widehat{\BZ})\to\CV(\widehat{\BZ}).
\end{equation}
We claim:
\begin{enumerate}
\item Every $\widehat{\pi}_p$ is proper\footnote{i.e., the preimage of any compact subset is compact.};
\item Every $\widehat{\pi}_p$ is surjective.
\end{enumerate}
Property (1) holds because $\widehat{\pi}_p$ is a continuous map between two compact Hausdorff topological spaces (cf. \cite[Corollary 2.5]{Salberger}). Consequently, the map $\widehat{\pi}$ is also proper (because the topology on $\FTUT(\widehat{\BZ})$ is the product topology).
Property (2) holds because $H^1_{\text{\'et}}(\BZ_p,\FTNS)=0$ for every prime $p$,
since $\FTNS$ is split over $\BZ$, combined with Hensel's lemma.\footnote{Indeed the weaker assumption that $\FTNS$ has good reduction modulo every prime $p$ suffices. This is a consequence of Lang's theorem, which asserts that $H^1(\BF_p,\CG)=0$ for every connected smooth algebraic group $\CG$ over $\BF_p$.}
Hence for any compact open projective congruence neighbourhood $\CE_f^\CV\subset V(\RA_\BQ^f)=\CV(\widehat{\BZ})$, the preimage $\widehat{\pi}^{-1}(\CE_f^\CV)\subset \FTUT(\widehat{\BZ})$ is non-empty, open-closed and compact. Therefore there exist finitely many disjoint affine congruence neighbourhoods $\{\CB^{\FTUT}_{f,i}\}_i$ of $\FTUT$ such that
\begin{equation*}
\widehat{\pi}^{-1}(\CE_f^\CV)=\bigsqcup_i \CB^{\FTUT}_{f,i}.
\end{equation*}
So for every measurable $\CF_\infty\subset V(\BR)$, the asymptotic formula \eqref{eq:asymptoteunivtor} for each of these pairs $(\CF_\infty,\CB^{\FTUT}_{f,i})$ implies, by \eqref{eq:patch} and \cite[Lemma 11.4]{Salberger},
\begin{align*}
\CN_V(\CF_\infty\times\CE_f^\CV;B)&=\frac{1}{\#\FTNS(\BZ)}\CN_{\FTUT}(\CF_\infty,\widehat{\pi}^{-1}(\CE_f^\CV);B)\\&=\sum_i \frac{1}{\#\FTNS(\BZ)} \CN_{\FTUT}(\CF_\infty,\CB^{\FTUT}_{f,i};B)\\ &\sim \alpha(V)\omega^V_\infty(\CF_\infty)\left(\sum_i\omega_f^{\CTUT}(\CB^{\FTUT}_{f,i})\right)B(\log B)^{r-1}\\ &=\alpha(V)\omega^V_\infty(\CF_\infty)\omega_f^{\CTUT}( \widehat{\pi}^{-1}(\CE_f^\CV))B(\log B)^{r-1}\\ &=\alpha(V)\omega^V_\infty(\CF_\infty)\omega_f^V(\CE_f^\CV)B(\log B)^{r-1},
\end{align*}
which coincides with the one for $\CF_\infty\times \CE_f^\CV$ in Principle \ref{prin:PManin-Peyre}.
\end{proof}
\subsection{The effective equidistribution on universal torsors (EEUT)}\label{se:EEUT}
In this subsection, we continue to assume that $V$ satisfies Hypotheses \ref{hyp:almost-Fano} and \ref{hyp:univtor}.
We now state the following condition as an effective version of Proposition \ref{prop:equidistunivtor} depending on the integral models $(\CV,\FTUT)$.
\begin{itemize}
\item \textbf{Condition (EEUT).} \emph{There exist $\gamma_0\geqslant 0$ and a continuous function $h_0:\BR_{>0}\to\BR_{>0}$ with $h_0(x)=o(1)$ as $x\to\infty$, satisfying the following property.
For every real measurable subset
$\CF_\infty\subset V(\BR)$ and every congruence neighbourhood $\CB^{\FTUT}_f(l;\Xi)\subset \CTUT(\widehat{\BZ})$ of $\FTUT$ of level $l\in\BZ_{\geqslant 2}$ associated to $\Xi\in\prod_{p\mid l} \FTUT(\BZ_p)$, we have (recall \eqref{eq:NFTUT})
\begin{multline*}
\CN_{\FTUT}(\CF_\infty,\CB_f(l;\Xi);B)\\=B(\log B)^{r-1}
\left( \#\FTNS(\BZ)\alpha(V)\beta(V)\omega_\infty^V(\CF_\infty)\omega_f^{\CTUT}(\CB_f(l;\Xi))+O_{\CF_\infty}(l^{\gamma_0} h_0(B))\right),
\end{multline*}
uniformly for every such $l,\Xi$.}
\end{itemize}
\begin{proposition}\label{prop:univtorEEGS}
The condition \textbf{(EEUT)} for $(\CV,\FTUT)$ implies the condition \textbf{(EE)} for $\CV$
with $$\gamma=\gamma_0+\dim\CTUT+\eta_0+\varepsilon\quad\text{and}\quad h=h_0,$$
where $\varepsilon>0$ is arbitrary and $\eta_0\geqslant 0$ depends on the model $\FTUT$. If $\FTUT$ is smooth over $\BZ$ we can take $\eta_0=0$.
\end{proposition}
\begin{proof}
We fix throughout a projective congruence neighbourhood $$\CE_f^\CV(l;\bxi):=\prod_{p\mid l}\CE^{\CV}_p(l;\xi_p)\times \prod_{p\nmid l}\CV(\BZ_p)\subset \CV(\widehat{\BZ})$$ of $\CV$ of level $l$ associated to $\bxi=(\xi_p)_{p\mid l}$.
Recall the map \eqref{eq:pip}. Then for each $\Xi_p\in\widehat{\pi}_p^{-1}(\CE_p^\CV(l;\xi_p))\subset\FTUT(\BZ_p)$, we have $\widehat{\pi}_p(\CB^{\FTUT}_p(l;\Xi_p))\subset \CE_p^{\CV}(l;\xi_p)$. This can be seen by expressing the morphism $\widetilde{\pi}$ in terms of homogeneous integral polynomials via the fixed embeddings $\CV\subset\BP^n_{\BZ}$ and $\FTUT\subset\BA_\BZ^k$. Therefore, for every $\Xi\in (\prod_{p\mid l}\widehat{\pi}_p)^{-1}(\bxi)$, the image under the map $\widehat{\pi}$ \eqref{eq:pihat} of the affine congruence neighbourhood $$\CB^{\FTUT}_f(l;\Xi):=\prod_{p\mid l}\CB^{\FTUT}_p(l;\Xi_p)\times \prod_{p\nmid l}\FTUT(\BZ_p)\subset\FTUT(\widehat{\BZ})$$ of level $l$ associated to $\Xi$ is contained in $\CE_f^\CV(l;\bxi)$.
By the properness and surjectivity of the maps $\widehat{\pi}_p$ established in the proof of Proposition \ref{prop:equidistunivtor}, for every $p\mid l$, the set $\widehat{\pi}_p^{-1}(\CE_p^\CV(l;\xi_p))$ is non-empty and compact, and thus it can be covered by finitely many $p$-adic affine congruence neighbourhoods of level $l$; their number is at most $\#\FTUT(\BZ/p^{\operatorname{ord}_p(l)}\BZ)$, by using the fibres of the reduction map $\FTUT(\BZ_p)\to\FTUT(\BZ/p^{\operatorname{ord}_p(l)}\BZ)$.
Consequently, we can find finitely many $\Xi_i\in\prod_{p\mid l}\FTUT(\BZ_p)$ such that
$$\widehat{\pi}^{-1}(\CE_f^\CV(l;\bxi))=\bigsqcup_{\Xi_i}\CB_f^{\FTUT}(l;\Xi_i).$$ By the Lang--Weil estimate \cite{Lang-Weil} and Hensel's lemma, as in the proof of Proposition \ref{prop:EEimpliesgeneral} (note that $\FTUT$ is quasi-affine), the number of such $\Xi_i$ is at most $$\#\FTUT(\BZ/l\BZ) =\prod_{p\mid l}\#\FTUT(\BZ/p^{\operatorname{ord}_p(l)}\BZ)\ll_\varepsilon l^{\dim \CTUT+\eta_0+\varepsilon}$$ for a certain $\eta_0\geqslant 0$ depending on the model $\FTUT$. We have $\eta_0=0$ when $\FTUT$ is smooth over $\BZ$.
So for every real measurable $\CF_\infty\subset V(\BR)$,
we finally get from the condition \textbf{(EEUT)}, and using again \eqref{eq:patch} and \cite[Lemma 11.4]{Salberger},
\begin{align*}
&\CN_V(\CF_\infty\times\CE_f^\CV(l;\bxi);B)\\=&\sum_{\Xi_i}\frac{1}{\#\FTNS(\BZ)}\CN_{\FTUT}(\CF_\infty,\CB^{\FTUT}_f(l;\Xi_i);B)\\ =&
\sum_{\Xi_i}B(\log B)^{r-1}\left(\alpha(V)\omega_\infty^V(\CF_\infty)\omega_f^{\CTUT}(\CB^{\FTUT}_f(l;\Xi_i))+O(l^{\gamma_0} h_0(B))\right)\\ =&B(\log B)^{r-1}\left(\alpha(V)\omega_\infty^V(\CF_\infty)\omega_f^{\CTUT}(\widehat{\pi}^{-1}(\CE_f^\CV(l;\bxi)))+O_\varepsilon(l^{\gamma_0+\dim \CTUT+\eta_0+\varepsilon} h_0(B))\right)\\ =&B(\log B)^{r-1}\left(\alpha(V)\omega^V(\CF_\infty\times\CE_f^\CV(l;\bxi))+O_\varepsilon(l^{\gamma_0+\dim \CTUT+\eta_0+\varepsilon} h_0(B))\right). \qedhere
\end{align*}
\end{proof}
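The bound $\#\FTUT(\BZ/l\BZ)\ll_\varepsilon l^{\dim\CTUT+\eta_0+\varepsilon}$ used in the proof above can be sanity-checked numerically on a toy example (a sketch of our own; the choice of $\BP^2$ and the function name are ours). For the principal universal torsor of $\BP^2$, namely $\BA^3_\BZ$ minus the origin, a $\BZ/l\BZ$-point is a triple $(x,y,z)$ with $\gcd(x,y,z,l)=1$, and the count is $l^3\prod_{p\mid l}(1-p^{-3})\leqslant l^3$, in accordance with $\dim\CTUT=3$ and $\eta_0=0$ for this smooth model.

```python
from math import gcd

def count_torsor_points(l):
    """#FTUT(Z/lZ) for the principal universal torsor of P^2
    (affine 3-space minus the origin): triples whose gcd with l is 1."""
    return sum(1 for x in range(l) for y in range(l) for z in range(l)
               if gcd(gcd(x, y), gcd(z, l)) == 1)

cnt = count_torsor_points(12)
assert cnt == 12**3 * 7 * 26 // (8 * 27)   # l^3 (1 - 2^{-3})(1 - 3^{-3}) = 1456
assert cnt <= 12**3                        # consistent with l^{dim T_UT}, eta_0 = 0
```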
\section{Parametrisation and heights on toric varieties}\label{se:toricparaheight}
In this section, we recall basic toric geometry, toric construction of universal torsors, toric Tamagawa measures and formulas for toric height functions.
\subsection{Construction and lifting}
(See \cite[\S8]{Salberger}, \cite[\S2]{Huang}.)
Fix an integral lattice $N\simeq \BZ^d$. Let $M=N^\vee=\operatorname{Hom}_\BZ(N,\BZ)$ be the dual lattice. A fan $\triangle$ considered in this article is a collection of (strongly convex, rational polyhedral and simplicial) cones inside $N_\BR\simeq \BR^d$, each containing the vector $\mathbf{0}$, satisfying the following.
\begin{itemize}
\item Any face of a cone in $\triangle$ is also a cone in $\triangle$;
\item The intersection of any two cones in $\triangle$ is a common face of both.
\end{itemize}
We moreover assume $\triangle$ is \emph{regular} and \emph{complete}, i.e.
\begin{itemize}
\item Every cone in $\triangle$ is generated by a family of integral vectors which form part of a $\BZ$-basis of the lattice $N$;
\item The support $\bigcup_{\sigma\in\triangle}\sigma$ is $N_\BR$.
\end{itemize}
Let $R$ be either $\BQ$ or $\BZ$ in what follows. For every $\sigma\in\triangle$, we consider $\sigma^\vee\subset M_\BR$ its dual cone and associate the affine open neighbourhood $U_\sigma=\operatorname{Spec}(R[\sigma^\vee\cap M])$ attached to the semigroup $\sigma^\vee\cap M$. Clearly if $\tau$ is a face of $\sigma$, then $U_\tau\subset U_\sigma$, all of which contain the open orbit $$\CT_{O}:=U_{\mathbf{0}}=\operatorname{Spec}(R[M])\simeq \mathbb{G}_{\operatorname{m},R}^d.$$
The toric variety $\operatorname{Tor}_R(\triangle)$ attached to $\triangle$ is constructed by gluing the affine pieces $(U_\sigma)_{\sigma\in\triangle}$; it is a smooth, flat and proper scheme over $R$.
Let $\triangle_{\operatorname{max}}$ be the set of all cones $\sigma$ whose $\BR$-span is of dimension $d$ (we call them \emph{maximal cones}). Let $\triangle(1)\subset \triangle$ be the set of one-dimensional cones, which we shall call \emph{rays}. Each ray $\varrho$ corresponds to a $\CT_{O}$-invariant boundary divisor $D_\varrho$. We denote by $n_\varrho$ the unique primitive element of $\varrho\cap N$. Similarly, for every cone $\sigma\in\triangle$, let $\sigma(1)$ be the set of rays as faces of $\sigma$. Then we have the following fundamental exact sequence of $\BZ$-modules (see \cite[Theorem 4.1.3]{Coxbook}) \begin{equation}\label{eq:exactsq}
\xymatrix{
0\ar[r]&M\ar[r]^-h& \BZ^{\triangle(1)}\ar[r]&
\operatorname{Pic}(X)\ar[r]& 0.}
\end{equation} We write $r=\operatorname{rank}\Pic(X)$, so that $\#\triangle(1)=r+d$. We write $\triangle(1)=\{\varrho_1,\cdots,\varrho_{r+d}\}$. For every $\sigma\in\triangle_{\operatorname{max}}$, by convention, we call an ordering of rays $\triangle(1)$ \emph{admissible} with respect to $\sigma$ if $\sigma(1)=\{\varrho_{r+j},1\leqslant j\leqslant d\}$.
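To fix ideas, the following sketch (our own illustration; the choice of $\BP^2$ and all names are ours, not from the text) instantiates the exact sequence \eqref{eq:exactsq} for the fan of $\BP^2$: here $d=2$, the rays are generated by $n_1=(1,0)$, $n_2=(0,1)$, $n_3=(-1,-1)$, and $r=\#\triangle(1)-d=1$ recovers $\operatorname{rank}\Pic(\BP^2)$.

```python
# Fan of P^2 (a toy example of our choosing): d = 2, three rays.
rays = [(1, 0), (0, 1), (-1, -1)]
d = 2
r = len(rays) - d  # the exact sequence forces #rays = r + d

def h(m):
    """The map h of (eq:exactsq): M -> Z^{rays}, m |-> (<m, n_rho>)_rho."""
    return tuple(m[0] * n[0] + m[1] * n[1] for n in rays)

# h is injective, its image is the kernel of the degree map
# (a_1, a_2, a_3) |-> a_1 + a_2 + a_3, and the cokernel is Pic(P^2) = Z.
assert r == 1
assert h((5, -3)) == (5, -3, -2)
assert all(sum(h(m)) == 0 for m in [(1, 0), (0, 1), (5, -3)])
```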
The exact sequence \eqref{eq:exactsq} induces, respectively when $R=\BQ$ and $\BZ$,
\begin{equation}\label{eq:toriexactseq}
\xymatrix{ 1\ar[r]& \mathcal{T}_{\operatorname{NS}}\ar[r]& \BG_{\operatorname{m},\BQ}^{\triangle(1)}\ar[r]&\CT_{O}\ar[r]& 1}\index{TNS@$\mathcal{T}_{\operatorname{NS}},\widetilde{\mathcal{T}}_{\operatorname{NS}}$}
\end{equation} between split $\BQ$-tori and $$\xymatrix{ 1\ar[r]&\FTNS\ar[r]& \BG_{\operatorname{m},\BZ}^{\triangle(1)}\ar[r]&\mathfrak{T}_{O}\ar[r]& 1}$$ between split $\BZ$-tori.
Now consider the lattice $N_0=\operatorname{Hom}_\BZ(\BZ^{\triangle(1)},\BZ)$, and the dual map $h^\vee:N_0\to N$ of $h$ in \eqref{eq:exactsq}, which, upon identifying $N_0$ with $\BZ^{\triangle(1)}$ via the standard basis, is given by
$$h^\vee((a_\varrho)_{\varrho\in\triangle(1)})=\sum_{\varrho\in\triangle(1)}a_\varrho n_\varrho.$$
For every $\sigma\in\triangle$, we associate the cone $\sigma_0=(h^\vee)^{-1}(\sigma)\subset N_{0,\BR}$. It is precisely the cone generated by the unit vectors in $\BR^{\triangle(1)}$ corresponding to elements of $\sigma(1)$, and it defines an affine scheme $U_{0,\sigma}=\operatorname{Spec}(R[\sigma_0^\vee\cap \BZ^{\triangle(1)}])$. The collection of cones $(\sigma_0)_{\sigma\in\triangle}$ inside $\BR^{\triangle(1)}$ forms a fan $\triangle_0$, to which we associate the toric $R$-scheme $\operatorname{Tor}_R(\triangle_0)$. This is an open toric subscheme of the affine space $\BA^{\triangle(1)}_R=\operatorname{Spec}(R[\BZ_{\geqslant 0}^{\triangle(1)}])$, the spectrum of the Cox ring. The morphisms $(U_{0,\sigma}\to U_\sigma)_{\sigma\in\triangle}$ induced by $h^\vee|_{\sigma_0}:\sigma_0\to \sigma$ glue together to a toric morphism $$\pi:\operatorname{Tor}_R(\triangle_0)\to \operatorname{Tor}_R(\triangle)$$ between toric schemes.
We write here and after $$X:=\operatorname{Tor}_\BQ(\triangle),\quad X_0:=\operatorname{Tor}_\BQ(\triangle_0),\quad \CX:=\operatorname{Tor}_\BZ(\triangle),\quad \CX_0:=\operatorname{Tor}_\BZ(\triangle_0),$$
$$\CT_{X_0}:=\BG_{\operatorname{m},\BQ}^{\triangle(1)},\quad \mathfrak{T}_{\CX_0}:=\BG_{\operatorname{m},\BZ}^{\triangle(1)}.$$
For every $(r+d)$-tuple $(X_\varrho)\in \BQ^{\triangle(1)}$ and every $\sigma\in\triangle_{\operatorname{max}}$, we write $$\XX^{\usigma}=\prod_{\varrho\in\triangle(1)\setminus\sigma(1)}X_\varrho.$$
\begin{theorem}[Colliot-Thélène--Sansuc \& Salberger]\label{thm:univtor}
\hfill
\begin{itemize}
\item The quasi-affine variety $X_0$ is a universal torsor over $X$ under the
N\'eron-Severi torus $\mathcal{T}_{\operatorname{NS}}$, unique up to $\BQ$-isomorphism.
\item Every point of $X(\BQ)$ lifts to a point of $\CX_0(\BZ)$, which consists precisely of the $(r+d)$-tuples $(X_\varrho)\in\BZ^{\triangle(1)}$ satisfying
$\gcd(\XX^{\usigma},\sigma\in\triangle_{\operatorname{max}})=1$; any two lifts differ by the action of $\FTNS(\BZ)$.
\end{itemize}
\end{theorem}
Following Salberger \cite[p. 191]{Salberger}, we call $X_0$ the \emph{principal universal torsor}.
\begin{proof}
See \cite[Propositions 8.4, 8.5, Corollary 8.8]{Salberger}, and also \cite[\S3A]{Huang}.
\end{proof}
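For $\BP^2$ (our running toy example; the helper below is our own sketch, not a routine from the cited references), each monomial $\XX^{\usigma}$ is a single coordinate, so the coprimality condition of the theorem reads $\gcd(X_1,X_2,X_3)=1$; lifting a rational point amounts to clearing denominators and removing the common factor, and the two primitive lifts differ by $\FTNS(\BZ)=\{\pm 1\}$.

```python
from fractions import Fraction
from math import gcd

def lift(p0, p1, p2):
    """Lift a point (p0 : p1 : p2) of P^2(Q) to a primitive triple in
    CX_0(Z) = {X in Z^3 : gcd(X_1, X_2, X_3) = 1} (hypothetical helper)."""
    fracs = [Fraction(c) for c in (p0, p1, p2)]
    common = 1
    for f in fracs:  # least common multiple of the denominators
        common = common * f.denominator // gcd(common, f.denominator)
    ints = [int(f * common) for f in fracs]
    g = gcd(gcd(ints[0], ints[1]), ints[2])
    return tuple(c // g for c in ints)

X = lift(Fraction(2, 3), Fraction(4, 5), 1)
assert X == (10, 12, 15)                  # (2/3 : 4/5 : 1) = (10 : 12 : 15)
assert gcd(gcd(X[0], X[1]), X[2]) == 1    # the gcd condition of the theorem
# the other primitive lift is (-10, -12, -15): the FTNS(Z) = {+-1} orbit
```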
\subsection{Toric norms and toric Tamagawa measures}
(See \cite[\S9]{Salberger}.)
For every $\nu\in\operatorname{Val}(\BQ)$, let us consider the group homomorphism (\cite[p. 196]{Salberger})
$$L_\nu:\CT_{O}(\BQ_\nu)\to N_\BR$$ given by regarding $\CT_{O}(\BQ_\nu)=\Hom_\BZ(M,\BQ_\nu^\times)$, $N_\BR=\Hom_\BZ(M,\BR)$ and composing with the map $\log (|\cdot|_\nu):\BQ_\nu^\times\to\BR$. We write, for every maximal cone $\sigma\in\triangle_{\max}$, $C_\sigma(\BQ_\nu)$ for the $\nu$-adic closure of $L_\nu^{-1}(-\sigma)$ in $X(\BQ_\nu)$. Since the fan $\triangle$ is complete, the $\nu$-adic locus $X(\BQ_\nu)$ admits the covering (cf. \cite[Proposition 9.2]{Salberger})
\begin{equation}\label{eq:divisionRpoints}
X(\BQ_\nu)=\bigcup_{\sigma\in\triangle_{\operatorname{max}}} C_\sigma(\BQ_\nu).
\end{equation}
From now on we suppose that the anti-canonical line bundle $-K_X$ is globally generated. We write the corresponding divisor $$D_0:=\sum_{\varrho\in\triangle(1)}D_{\varrho}.$$
For every $\sigma\in\triangle_{\max}$, let $m_{D_0}(\sigma)\in M$ be the unique element characterised by the property that $\langle m_{D_0}(\sigma),n_\varrho\rangle=1$ for every $\varrho\in\sigma(1)$.\footnote{This notation adopted here follows \cite{Salberger}. It differs from \cite{Huang} by a minus sign.
} The associated character $\chi^{m_{D_0}(\sigma)}\in\Hom(\CT_{O},\BG_{\operatorname{m}})$ defines $D_0$ on $U_\sigma$ and lifts to a global section of the line bundle $\CO_X(D_0)$. So on $U_\sigma$, $\CO_X(D_0)$ trivialises and is generated by the section $\chi^{-m_{D_0}(\sigma)}$, and $\chi^{-m_{D_0}(\sigma)}(P)\neq 0$ for every $P\in U_\sigma(\BQ_\nu)$.
Let us now recall the definition of \emph{toric norm} (or \emph{toric adelic metric}) $(\|\cdot\|_{\operatorname{tor},\nu})_{\nu\in\Val(\BQ)}$ on $D_0$.
For every $\nu\in\Val(\BQ),P\in X(\BQ_\nu)$, choose $\sigma\in\triangle_{\max}$ such that $P\in C_\sigma(\BQ_\nu)$. Then for every local section $s$ of $D_0$ defined at $P$ such that $s(P)\neq 0$, we set (\cite[Proposition 9.2]{Salberger})
\begin{equation}\label{eq:normD0}
\|s(P)\|_{D_0,\nu}:=\left|\frac{s}{\chi^{-m_{D_0}(\sigma)}}(P)\right|_\nu.
\end{equation}
It is independent of the cone $\sigma\in\triangle_{\max}$ with $P\in C_\sigma(\BQ_\nu)$.
By \cite[Proposition 9.11]{Salberger}, the order norm $(\|\cdot\|_{\operatorname{ord},\nu})_{\nu\in\operatorname{Val}(\BQ)}$ on $-K_{\CT_{O}}$ of the open orbit $\CT_{O}$ extends uniquely to a norm on the (trivial) line bundle $-K_X\otimes\CO_X(-D_0)$ of $X$, denoted by $(\|\cdot\|_\nu^\#)_{\nu\in\operatorname{Val}(\BQ)}$. Then we define \begin{equation}\label{eq:toricnorm}
\|\cdot\|_{\operatorname{tor},\nu}:=\|\cdot\|_{D_0,\nu}\|\cdot\|_\nu^\#.
\end{equation}
We write $(\omega^X_{\operatorname{tor},\nu})$ for the associated adelic measures, which we use to define the Tamagawa measure $\omega^X_{\operatorname{tor}}$ on $X(\RA_\BQ)$ by \eqref{eq:Tamagawameas}.
We let $(\|\cdot\|_{X_0,\nu})_{\nu\in\Val(\BQ)}$ be the induced norm on $-K_{X_0}$, and we let $(\omega^{X_0}_{\operatorname{tor},\nu})$ be the associated adelic measures, whose product defines the Tamagawa measure $\omega^{X_0}_{\operatorname{tor}}$ on $X_0(\RA_\BQ)$ \eqref{eq:Tmeasureunivtor}.
\begin{theorem}[cf. \cite{Salberger} Theorem 9.12]\label{thm:inducednormX0}
Let $\nu\in\Val(\BQ)$. Then for every $P\in\CTUT(\BQ_\nu)$ and every local section $s:=f(x_1,\cdots,x_n) \frac{\partial}{\partial x_{1}}\wedge\cdots\wedge\frac{\partial}{\partial x_{n}}$ defined at $P$ with $f$ regular at $P$,
$$\|s(P)\|_{X_0,\nu}=\frac{|f(P)|_\nu}{\sup_{\sigma\in\triangle_{\operatorname{max}}}|\XX(P)^{D_0(\sigma)}|_\nu}.$$
\end{theorem}
\begin{proof}[Sketch of proof]
Note that the restriction of $\|\cdot\|_{X_0,\nu}$ to $\CT_{X_0,\BQ_\nu}$ is the product of the order norm on $-K_{\CT_{X_0,\BQ_\nu}}$ with the (restriction of the) pull-back of $\|\cdot\|_{\operatorname{tor},\nu}$. We then apply \eqref{eq:normD0}.
\end{proof}
The following result settles the computation of toric Tamagawa measures for non-archimedean places (recall \eqref{eq:modelmeasurecomp}).
\begin{theorem}\label{thm:nonarchTmeas}
Let $\nu\in\Val(\BQ)$ be a non-archimedean place.
\begin{itemize}
\item The toric norm $\|\cdot\|_{\operatorname{tor},\nu}$ \eqref{eq:toricnorm} on $-K_X$ (resp. the adelic measure $\omega^X_{\operatorname{tor},\nu}$) coincides with the model norm (resp. the model measure) determined by $(\CX,-K_{\CX})$;
\item When restricted to $\CX_0(\BZ_\nu)$, the induced norm $\|\cdot\|_{X_0,\nu}$ on $-K_{X_0}$ (resp. the adelic measure $\omega^{X_0}_{\operatorname{tor},\nu}$) of the toric norm $\|\cdot\|_{\operatorname{tor},\nu}$ coincides with the model norm (resp. the model measure) determined by $(\CX_0,-K_{\CX_0})$.
\end{itemize}
\end{theorem}
\begin{proof}[Sketch of proof]
This is done via constructing a ``canonical splitting'', see \cite[Propositions 9.4, 9.5, 9.13]{Salberger}.
\end{proof}
Let us now discuss the computation of the real Tamagawa measure $\omega_{\operatorname{tor},\infty}^X$.
For every $\sigma\in\triangle_{\max}$, the lattice $N$ is generated by the elements $n_\varrho$, $\varrho\in\sigma(1)$, by the regularity assumption. Let us choose an admissible ordering $\sigma(1)=\{\varrho_{r+1},\cdots,\varrho_{r+d}\}$, and let $\{n_{\varrho_{r+1}}^{\vee},\cdots,n_{\varrho_{r+d}}^{\vee}\}$ be the corresponding dual basis.
According to the exact sequence \eqref{eq:exactsq}, the restriction of $\pi:X_0\to X$ to the affine neighbourhood $U_\sigma\simeq \BA^d$, with respect to the parametrisation given by $\sigma$, is
\begin{equation}\label{eq:univmap}
\begin{split}
\pi:\pi^{-1} U_\sigma&\longrightarrow U_\sigma\\
(X_\varrho)_{\varrho\in\triangle(1)}&\longmapsto \left(z_{r+j}:=\prod_{\varrho\in\triangle(1)}X_{\varrho}^{\langle n_{\varrho_{r+j}}^\vee,n_{\varrho}\rangle}\right)_{1\leqslant j\leqslant d}.
\end{split}
\end{equation}
If $P\in\CT_{O}(\BR)$ has coordinates $(P_{r+1},\cdots,P_{r+d})$ with respect to the parametrisation given by $\sigma$, then $L_\infty(P)=\sum_{j=1}^{d}\log|P_{r+j}|\, n_{\varrho_{r+j}}$, so $$L_\infty(P)\in -\sigma\Longleftrightarrow \log |P_{r+j}|\leqslant 0 \text{ for all }j.$$
So under the parametrisation \eqref{eq:univmap}, \begin{equation}\label{eq:CsigmaR}
C_{\sigma}(\BR)=\{(z_{r+1},\cdots,z_{r+d})\in\BR^d:|z_{r+j}|\leqslant 1 \text{ for all }j\}.
\end{equation}
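As an illustration of \eqref{eq:univmap} and \eqref{eq:CsigmaR} (a sketch with our own choice of example and names): for $\BP^2$, order the rays as $\varrho_1=(-1,-1)$, $\varrho_2=(1,0)$, $\varrho_3=(0,1)$, admissibly with respect to the maximal cone $\sigma$ generated by $n_{\varrho_2},n_{\varrho_3}$, so that $r=1$, $d=2$; the chart map sends $(X_1,X_2,X_3)$ to $(z_2,z_3)=(X_2/X_1,X_3/X_1)$.

```python
from fractions import Fraction

rays = [(-1, -1), (1, 0), (0, 1)]   # P^2, admissible ordering: sigma(1) = {rho_2, rho_3}
dual_basis = [(1, 0), (0, 1)]       # dual basis to (n_{rho_2}, n_{rho_3})

def pi(X):
    """The chart (eq:univmap): z_{r+j} = prod_rho X_rho^{<n_{rho_{r+j}}^vee, n_rho>}."""
    zs = []
    for nv in dual_basis:
        val = Fraction(1)
        for Xr, n in zip(X, rays):
            val *= Fraction(Xr) ** (nv[0] * n[0] + nv[1] * n[1])
        zs.append(val)
    return tuple(zs)

assert pi((2, 6, 10)) == (Fraction(3), Fraction(5))   # the chart (X_2/X_1, X_3/X_1)
# (eq:CsigmaR): the image lands in C_sigma(R) iff |z_{r+j}| <= 1 for all j
assert all(abs(z) <= 1 for z in pi((10, 6, 2)))
```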
\begin{theorem}\label{thm:realTamagawameas}
\hfill
\begin{itemize}
\item In terms of the parametrisation of $\sigma$, the adelic measure $\omega^X_{\operatorname{tor},\infty}$ associated to the toric norm $\|\cdot\|_{\operatorname{tor},\infty}$ restricted to $C_\sigma(\BR)$ is the $d$-dimensional Lebesgue measure;
\item For any two different maximal cones $\sigma_1,\sigma_2$, the set $C_{\sigma_1}(\BR)\cap C_{\sigma_2}(\BR)$ has $\omega^X_{\operatorname{tor},\infty}$-measure zero.
\end{itemize}
\end{theorem}
\begin{proof}[Sketch of proof]
(See the proof of \cite[Proposition 9.16]{Salberger}.)
Recall the parametrisation \eqref{eq:univmap}. We have $\chi^{-m_{D_0}(\sigma)}=\prod_{j=1}^d z_{r+j}$ and $\varpi_{\CT_{O}}=\frac{\operatorname{d} z_{r+1}}{z_{r+1}}\wedge\cdots\wedge\frac{\operatorname{d} z_{r+d}}{z_{r+d}}$.
Take $P=(P_1,\cdots,P_d)\in L_\infty^{-1}(-\sigma)\subset C_{\sigma}(\BR)$. Recall the definition of the order norm \eqref{eq:ordernorm}. The local section $\frac{\partial}{\partial z_{r+1}}\wedge\cdots\wedge\frac{\partial}{\partial z_{r+d}}$ of $-K_X=\CO_X(D_0)$ is defined at $P$. We have, by \eqref{eq:normD0},
\begin{align*}
&\left\|\frac{\partial}{\partial z_{r+1}}\wedge\cdots\wedge\frac{\partial}{\partial z_{r+d}}(P)\right\|_{\operatorname{tor},\infty}\\=&\left\|\frac{\partial}{\partial z_{r+1}}\wedge\cdots\wedge\frac{\partial}{\partial z_{r+d}}(P)\right\|_{D_0,\infty}\left\|\frac{\partial}{\partial z_{r+1}}\wedge\cdots\wedge\frac{\partial}{\partial z_{r+d}}(P)\right\|^\#_{\infty}\\=&\left|\frac{\frac{\partial}{\partial z_{r+1}}\wedge\cdots\wedge\frac{\partial}{\partial z_{r+d}}}{\chi^{-m_{D_0}(\sigma)}}(P)\right|\left|\left\langle \chi^{-m_{D_0}(\sigma)},\frac{\operatorname{d} z_{r+1}}{z_{r+1}}\wedge\cdots\wedge\frac{\operatorname{d} z_{r+d}}{z_{r+d}}\right\rangle (P)\right|=
\frac{|\prod_{j=1}^{d}P_j|}{|\prod_{j=1}^{d}P_j|}=1.
\end{align*}
Therefore, recalling \eqref{eq:adelicmeasure}, by continuity, the corresponding Tamagawa measure $\omega^X_{\operatorname{tor},\infty}$ is the $d$-dimensional Lebesgue measure on $C_{\sigma}(\BR)$.
To show the second statement, note that on $C_{\sigma_1}(\BR)\cap C_{\sigma_2}(\BR)$ we have $|\chi^{-m_{D_0}(\sigma_1)}|=|\chi^{-m_{D_0}(\sigma_2)}|$, and therefore this intersection has $d$-dimensional Lebesgue measure zero.
\end{proof}
\subsection{Toric height functions}
For every $\sigma\in\triangle_{\max}$, let \begin{equation}\label{eq:D0sigma}
D_0(\sigma):=D_0+\sum_{\varrho\in\triangle(1)}\langle -m_{D_0}(\sigma),n_{\varrho}\rangle D_{\varrho}=\sum_{\varrho\in\triangle(1)}a_{\varrho}(\sigma)D_\varrho,
\end{equation}
where for every $\varrho\in\triangle(1)$,
\begin{equation}\label{eq:arhosigma}
a_{\varrho}(\sigma):=1+\langle -m_{D_0}(\sigma),n_\varrho\rangle.
\end{equation} Since $D_0$ is globally generated, $D_0(\sigma)$ is an effective divisor (i.e. $a_{\varrho}(\sigma)\geqslant 0$ for every $\sigma\in\triangle_{\max},\varrho\in\triangle(1)$) with support in $\cup_{\varrho\in\triangle(1)\setminus\sigma(1)} D_\varrho$.
Then for every $(r+d)$-tuple $\XX=(X_\varrho)\in \BZ^{\triangle(1)}$, the expression
\begin{equation}\label{eq:XP0}
\XX^{D_0(\sigma)}=\prod_{\varrho\in\triangle(1):a_{\varrho}(\sigma)>0}X_\varrho^{a_{\varrho}(\sigma)},
\end{equation}
is a well-defined integer.
Let $H_{\operatorname{tor}}=H_{-K_X,(\|\cdot\|_{\operatorname{tor},\nu})}:X(\BQ)\to\BR_{>0}$ be the \emph{toric height function} induced by the toric norm $(\|\cdot\|_{\operatorname{tor},\nu})_{\nu\in\Val(\BQ)}$, i.e., for every $P\in X(\BQ)$ (recall \eqref{eq:heightmetric}),
$$H_{\operatorname{tor}}(P)=\prod_{\nu\in\operatorname{Val}(\BQ)}\|s(P)\|^{-1}_{\operatorname{tor},\nu},$$ for any local section $s$ of $-K_X$ at $P$ with $s(P)\neq 0$.
\begin{theorem}[cf. \cite{Salberger}, Propositions 10.5, 10.14]
\hfill
\begin{itemize}
\item For every $P\in X(\BQ)$, we have
$$H_{\operatorname{tor}}(P)=H_{D_0}(P):=\prod_{\nu\in\operatorname{Val}(\BQ)}\|s(P)\|^{-1}_{D_0,\nu},$$ for any local section $s$ of $\CO_X(D_0)$ at $P$ with $s(P)\neq 0$.
\item Suppose that $P\in X(\BQ)$ lifts to a point $P_0\in\CX_0(\BZ)$ with coordinates $\XX(P_0)\in\BZ^{\triangle(1)}$. Then
$$H_{\operatorname{tor}}(P)=\prod_{\nu\in\operatorname{Val}(\BQ)}\sup_{\sigma\in\triangle_{\operatorname{max}}}|\XX(P_0)^{D_0(\sigma)}|_\nu=\sup_{\sigma\in\triangle_{\operatorname{max}}}|\XX(P_0)^{D_0(\sigma)}|,$$ where $|\cdot|$ is the usual absolute value on $\BR$.
\end{itemize}
\end{theorem}
\begin{proof}[Sketch of proof]
We may assume that $P\in \CT_{O}(\BQ)$. Choosing $s^{\#}$ to be the dual of $\varpi_{\CT_{O}}$, we get
$$H_{(\|\cdot\|_\nu^\#)}(P):=\prod_{\nu\in\Val(\BQ)}\left(\|s^{\#}(P)\|^{\#}_\nu\right)^{-1}=1$$ by the product formula.
By \cite[Proposition 9.8]{Salberger}, in fact, $$\|s(P)\|_{D_0,\nu}=\inf_{\substack{\sigma\in\triangle_{\max}\\ P\in U_\sigma(\BQ_\nu)}}\left|\frac{s}{\chi^{-m_{D_0}(\sigma)}}(P)\right|_\nu.$$ So by choosing $\sigma_0\in\triangle_{\max}$ such that $P\in U_{\sigma_0}(\BQ)$, and by choosing the local section $\chi^{-m_{D_0}(\sigma_0)}$ generating $\CO_X(D_0)$ on $U_{\sigma_0}$, we have
\begin{align*}
H_{D_0}(P)&=\prod_{\nu\in\Val(\BQ)} \left\|\chi^{-m_{D_0}(\sigma_0)}(P)\right\|_{D_0,\nu}^{-1}\\ &=\prod_{\nu\in\operatorname{Val}(\BQ)}\frac{\sup_{\sigma\in\triangle_{\operatorname{max}}}|\XX(P_0)^{D_0(\sigma)}|_\nu}{|\XX(P_0)^{D_0(\sigma_0)}|_\nu}=\prod_{\nu\in\operatorname{Val}(\BQ)}\sup_{\sigma\in\triangle_{\operatorname{max}}}|\XX(P_0)^{D_0(\sigma)}|_\nu,
\end{align*}
by the product formula.
Theorem \ref{thm:univtor} implies that $\gcd(\XX(P_0)^{D_0(\sigma)},\sigma\in\triangle_{\operatorname{max}})=1$, thanks to our assumption that $-K_X$ is globally generated. Therefore for every non-archimedean valuation $\nu\in\operatorname{Val}(\BQ)$, $\sup_{\sigma\in\triangle_{\operatorname{max}}}|\XX(P_0)^{D_0(\sigma)}|_\nu=1$.
\end{proof}
We therefore define $H_0:\CX_0(\BZ)\to\BR_{>0}$ via the formula $$H_0(\XX):=H_{\operatorname{tor}}(\pi(\XX))$$ for every $\XX\in\CX_0(\BZ)$.
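On our toy example $\BP^2$, the formula $H_0(\XX)=\sup_{\sigma}|\XX^{D_0(\sigma)}|$ becomes completely explicit (a sketch with names and a brute-force search for $m_{D_0}(\sigma)$ of our own devising; the small search box suffices for this fan): the monomials $\XX^{D_0(\sigma)}$ are $X_1^3$, $X_2^3$, $X_3^3$, so the anticanonical toric height of a primitive triple is $\max_i|X_i|^3$.

```python
from itertools import combinations
from math import gcd

rays = [(1, 0), (0, 1), (-1, -1)]            # fan of P^2
max_cones = list(combinations(range(3), 2))  # every pair of rays spans a maximal cone

def m_D0(cone):
    """m_{D_0}(sigma): the unique m in M with <m, n_rho> = 1 for rho in sigma(1),
    found by brute force over a small box (sufficient for this fan)."""
    for m1 in range(-3, 4):
        for m2 in range(-3, 4):
            if all(m1 * rays[i][0] + m2 * rays[i][1] == 1 for i in cone):
                return (m1, m2)

def H0(X):
    """H_0(X) = sup_sigma |X^{D_0(sigma)}|, with a_rho(sigma) = 1 + <-m, n_rho>."""
    best = 0
    for cone in max_cones:
        m = m_D0(cone)
        val = 1
        for Xr, n in zip(X, rays):
            val *= abs(Xr) ** (1 - (m[0] * n[0] + m[1] * n[1]))
        best = max(best, val)
    return best

X = (10, 12, 15)
assert gcd(gcd(X[0], X[1]), X[2]) == 1   # a primitive lift
assert H0(X) == 15 ** 3                  # = max_i |X_i|^3
```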
Another, more elementary, way of describing the toric height function is the following. As the line bundle $-K_X$ is globally generated, its space of global sections induces a morphism $i:X\to\BP^K$, where $K:=\dim H^0(X,-K_X)-1$. Let $H_{\CO(1)}$ be the naive height function $\BP^K(\BQ)\to\BR_{>0}$ corresponding to the ordinary (sup-norm) metric. It is straightforward to verify that the induced height function $H_{\CO(1)}\circ i:X(\BQ)\to\BR_{>0}$ coincides with $H_{\operatorname{tor}}$.
\section{Toric van der Corput method}\label{se:toricvandercorput}
In this section, let $X$ be a smooth projective split toric variety over $\BQ$ defined by a complete regular fan $\triangle$. Let $X_0$ be the principal universal torsor. Our goal in this section is to establish Theorem \ref{thm:CABCAAB}, the main technical result of this paper.
We list several frequently used quantities from \cite[p. 242--p. 243]{Salberger}.
We consider the set \cite[11.6]{Salberger}
\begin{equation}\label{eq:CAB}
\CA(B):=\{\XX\in\BZ_{>0}^n:\max_{\sigma\in\triangle_{\operatorname{max}}}\XX^{D_0(\sigma)}\leqslant B\}. \footnote{Note that Salberger uses the notation $A(B)$ for the cardinality of this set.}
\end{equation}
Up to a product of local densities, the cardinality of $\CA(B)$ accounts for the real density and for the order of growth.
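For $\BP^2$ (our toy example; one checks that the monomials $\XX^{D_0(\sigma)}$ are $X_1^3,X_2^3,X_3^3$), the set $\CA(B)$ is a discrete cube with exactly $\lfloor B^{1/3}\rfloor^3$ elements, consistent with the growth $\asymp B(\log B)^{r-1}=B$ since $r=1$. A brute-force sanity check (names ours):

```python
def A_brute(B, box):
    """#A(B) for P^2: triples of positive integers with max_i X_i^3 <= B."""
    return sum(1 for x in range(1, box + 1) for y in range(1, box + 1)
               for z in range(1, box + 1) if max(x, y, z) ** 3 <= B)

B = 1000
M = 10                               # = floor(B^(1/3)); larger coordinates cannot occur
assert A_brute(B, M + 1) == M ** 3   # #A(B) = floor(B^{1/3})^3 = 1000
# consistent with #A(B) ~ c B (log B)^{r-1} = c B, as r = 1 for P^2
```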
For every $\sigma\in\triangle_{\max}$, we now define
\begin{equation}\label{eq:Fj}
F_\sigma(j):=\sum_{\varrho\in\triangle(1)}\langle n_{\varrho_{r+j}}^{\vee},n_{\varrho}\rangle D_\varrho,\quad 1\leqslant j\leqslant d,
\end{equation}
\begin{equation}\label{eq:Ej}
E_\sigma(j):=D_{\varrho_{r+j}}-F_\sigma(j),\quad 1\leqslant j\leqslant d.
\end{equation}
Note that $E_\sigma(j)$ has support\footnote{Here ``support'' means the union of all divisors with non-zero coefficients.} in $\cup_{i=1}^r D_{\varrho_i}$, and $\XX^{E_\sigma(j)}$ is a Laurent monomial in $X_1,\cdots,X_r$.
Since
$$\sum_{j=1}^{d}E_\sigma(j)=\sum_{j=1}^{d}D_{\varrho_{r+j}}+\sum_{\varrho\in\triangle(1)}\langle -m_{D_0}(\sigma),n_\varrho\rangle D_\varrho$$ and $$D_0(\sigma)=D_0+\sum_{\varrho\in\triangle(1)}\langle -m_{D_0}(\sigma),n_\varrho\rangle D_\varrho, $$ we obtain
\begin{equation}\label{eq:EjDsigma}
\sum_{j=1}^{d}E_\sigma(j)=D_0(\sigma)-\sum_{i=1}^{r}D_{\varrho_i},
\end{equation} which also has support in $\cup_{i=1}^r D_{\varrho_i}$.
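The identity \eqref{eq:EjDsigma} can be verified mechanically on $\BP^2$ (a sketch with our own choice of example and names): with the rays ordered admissibly for the maximal cone $\sigma$ spanned by $n_{\varrho_2},n_{\varrho_3}$, one finds $E_\sigma(1)=E_\sigma(2)=D_{\varrho_1}$, supported on $\cup_{i\leqslant r}D_{\varrho_i}$ as claimed.

```python
rays = [(-1, -1), (1, 0), (0, 1)]  # P^2, admissible ordering: sigma(1) = {rho_2, rho_3}
r, d = 1, 2
dual_basis = [(1, 0), (0, 1)]      # dual to (n_{rho_2}, n_{rho_3})
m_sigma = (1, 1)                   # m_{D_0}(sigma): pairs to 1 with n_{rho_2}, n_{rho_3}

def pair(m, n):
    return m[0] * n[0] + m[1] * n[1]

# F_sigma(j) and E_sigma(j) = D_{rho_{r+j}} - F_sigma(j), as exponent vectors
F = [[pair(nv, n) for n in rays] for nv in dual_basis]
E = [[(1 if i == r + j else 0) - F[j][i] for i in range(3)] for j in range(d)]
D0_sigma = [1 - pair(m_sigma, n) for n in rays]   # a_rho(sigma) = 1 + <-m, n_rho>

assert all(e[1] == e[2] == 0 for e in E)          # E_sigma(j) supported on D_{rho_1}
lhs = [sum(e[i] for e in E) for i in range(3)]
rhs = [D0_sigma[i] - (1 if i < r else 0) for i in range(3)]
assert lhs == rhs   # (eq:EjDsigma): sum_j E_sigma(j) = D_0(sigma) - sum_{i<=r} D_{rho_i}
```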
For every $A>0$, consider the subset
\begin{equation}\label{eq:CAAB}
\begin{split}
\CA^{(A)\flat}(B):=\{\XX\in\CA(B):&\text{ for every }\sigma\in\triangle_{\operatorname{max}} \text{ with an admissible ordering},\\ &\text{ for every }1\leqslant j\leqslant d,\XX^{E_\sigma(j)}\geqslant (\log B)^A\}.
\end{split}
\end{equation}
Our main result is the following; it shows that the sets $\CA(B)$ and $\CA^{(A)\flat}(B)$ differ by a negligible number of elements. It is crucial in establishing equidistribution (\S\ref{se:purityproof}) and the toric geometric sieve (\S\ref{se:geomsieve}).
\begin{theorem}\label{thm:CABCAAB}
We have $$\#\left(\CA(B)\setminus\CA^{(A)\flat}(B)\right)=O_A(B(\log B)^{r-2}\log\log B).$$
\end{theorem}
We now derive a consequence of Theorem \ref{thm:CABCAAB}. For every $\sigma\in\triangle_{\max}$ with an admissible ordering, let $C_{0,\sigma}(\BR)$ be the subset of $\CX_0(\BR)$ consisting of $\YY\in\BR^n$ such that $$|\YY^{F_\sigma(j)}|\leqslant 1 \quad\text{for every } 1\leqslant j\leqslant d.$$
Then we have
\begin{equation*}
C_\sigma(\BR)=\pi_\BR(C_{0,\sigma}(\BR))\subset X(\BR).
\end{equation*} On recalling \eqref{eq:CAB}, we define
\begin{equation}\label{eq:CAsigmaB}
\CA_\sigma(B):=\CA(B)\cap C_{0,\sigma}(\BR).
\end{equation}
Then, according to \eqref{eq:CsigmaR} (cf. also \cite[11.22, 11.23]{Salberger}),
\begin{equation}\label{eq:CAsigmaBcond}
\CA_\sigma(B)=\{\XX\in\BZ_{>0}^n:\max_{1\leqslant i\leqslant n}X_i\leqslant B,\XX^{D_0(\sigma)}\leqslant B,\text{ for every }1\leqslant j\leqslant d,X_{r+j}\leqslant \XX^{E_\sigma(j)}\}.
\end{equation}
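To illustrate these definitions on the simplest example (a sanity-check computation, not needed in the sequel), take $X=\BP^1\times\BP^1$ with the anticanonical divisor $D_0=\sum_{\varrho\in\triangle(1)}D_\varrho$. The fan in $N=\BZ^2$ has rays generated by $n_{\varrho_1}=e_1$, $n_{\varrho_2}=e_2$, $n_{\varrho_3}=-e_1$, $n_{\varrho_4}=-e_2$, so that $n=4$, $d=2$, $r=2$. For the maximal cone $\sigma$ spanned by $n_{\varrho_3},n_{\varrho_4}$, with the admissible ordering $\sigma(1)=\{\varrho_3,\varrho_4\}$, one has $m(\sigma)=(-1,-1)$, whence
$$D_0(\sigma)=2D_{\varrho_1}+2D_{\varrho_2},\quad E_\sigma(1)=D_{\varrho_1},\quad E_\sigma(2)=D_{\varrho_2},$$
in accordance with \eqref{eq:EjDsigma}. The description \eqref{eq:CAsigmaBcond} then reads
$$\CA_\sigma(B)=\{\XX\in\BZ_{>0}^4:X_1^2X_2^2\leqslant B,\ X_3\leqslant X_1,\ X_4\leqslant X_2\},$$
the condition $\max_{1\leqslant i\leqslant 4}X_i\leqslant B$ being automatic here; the other three maximal cones yield the analogous sets with the roles of $X_1,X_3$ and of $X_2,X_4$ exchanged.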
For every $A\geqslant 1$, on recalling \eqref{eq:CAAB}, we also let
\begin{equation}\label{eq:CAsigmaAB}
\CA^{(A)}_\sigma(B):=\{\XX\in\CA_\sigma(B):\text{ for every }1\leqslant j\leqslant d, \XX^{E_\sigma(j)}\geqslant (\log B)^A\}.
\end{equation}
\begin{corollary}\label{co:CAsigmaBCAAsigmaB}
For every $\sigma\in\triangle_{\max}$, we have $$\#\left(\CA_\sigma(B)\setminus\CA^{(A)}_\sigma(B)\right)=O_A(B(\log B)^{r-2}\log\log B).$$
\end{corollary}
\begin{proof}[Proof of Corollary \ref{co:CAsigmaBCAAsigmaB}]
Comparing \eqref{eq:CAAB} with \eqref{eq:CAsigmaAB}, we see $$\CA(B)\setminus\CA^{(A)\flat}(B)= \bigcup_{\sigma\in\triangle_{\operatorname{max}}}\left(\CA_\sigma(B)\setminus\CA_\sigma^{(A)}(B)
\right).$$ Therefore for each single $\sigma_0\in\triangle_{\operatorname{max}}$,
$$\#\left(\CA_{\sigma_0}(B)\setminus\CA^{(A)}_{\sigma_0}(B)\right)\leqslant \#\left(\CA(B)\setminus\CA^{(A)\flat}(B)\right)=O_A(B(\log B)^{r-2}\log\log B). \qedhere$$
\end{proof}
\begin{remark}\label{rmk:differenceCAABCAsigmaB}
From \eqref{eq:divisionRpoints}, the collection of sets $(\CA_\sigma(B))_{\sigma\in\triangle_{\operatorname{max}}}$ constitutes a subdivision of the set $\CA(B)$, i.e. $$\CA(B)=\bigcup_{\sigma\in\triangle_{\operatorname{max}}}\CA_\sigma(B).$$ In general we only have \begin{equation}\label{eq:CAABCAAsigmaB}
\CA^{(A)\flat}(B)\subset \bigcup_{\sigma\in\triangle_{\operatorname{max}}}\CA^{(A)}_\sigma(B),
\end{equation} where the inclusion reflects an extra constraint at the boundary: for any two distinct $\sigma_1,\sigma_2\in\triangle_{\operatorname{max}}$, $$\XX\in\CA^{(A)\flat}(B)\cap\CA_{\sigma_1}(B)\cap\CA_{\sigma_2}(B)\Longrightarrow \XX\in\CA^{(A)}_{\sigma_1}(B)\cap\CA^{(A)}_{\sigma_2}(B).$$
\end{remark}
\subsection{Passage to integrals}
The goal of this section is to state Propositions \ref{prop:keyprop2} and \ref{prop:keyprop1}. They are the main ingredients of the proof of Theorem \ref{thm:CABCAAB}.
Let
\begin{equation}\label{eq:CDB}
\CD(B):=\{\YY\in\BR_{\geqslant 1}^n:\max_{\sigma\in\triangle_{\operatorname{max}}}\YY^{D_0(\sigma)}\leqslant B\},
\end{equation}
and \begin{equation}\label{eq:CIB}
\CI(B):=\int_{\CD(B)}\left(\max_{\sigma\in\triangle_{\operatorname{max}}}\YY^{D_0(\sigma)}\right)\operatorname{d}\omega^{X_0}_{\operatorname{tor},\infty}(\YY).
\end{equation}
Thanks to Theorem \ref{thm:inducednormX0} (see also \cite[Proof of Lemma 11.29]{Salberger}), by taking $f$ to be the characteristic function of the region $\CD(B)$, we have
$$\CI(B)=\int_{\CD(B)}\left(\max_{\sigma\in\triangle_{\operatorname{max}}}\YY^{D_0(\sigma)}\right)\left\|\frac{\partial}{\partial x_{1}}\wedge\cdots\wedge\frac{\partial}{\partial x_{n}}(\YY)\right\|_{X_0,\infty}\operatorname{d}\xx=\int_{\CD(B)}\operatorname{d}\xx,$$ where $\operatorname{d}\xx=\operatorname{d}x_1\cdots\operatorname{d}x_n$ is the usual $n$-dimensional Lebesgue measure on $\BR^n$.
Salberger showed the following comparison formula between the cardinality $\#\CA(B)$ and the integral $\CI(B)$.
\begin{proposition}[\cite{Salberger} Lemma 11.29]\label{le:exchangeABIB}
$$\#\CA(B)=\CI(B)+O(B(\log B)^{r-2}).$$
\end{proposition}
Proposition \ref{prop:keyprop2} is an analogue of Proposition \ref{le:exchangeABIB} for certain subsets of $\CA(B)$ cut out by more complicated conditions similar to \eqref{eq:CAAB}, which we now define.
For every $A\geqslant 1$, let \begin{equation}\label{eq:CDAB}
\CD^{(A)\flat}(B):=\{\YY\in\CD(B):\text{ for every }\sigma\in\triangle_{\operatorname{max}},\text{ for every } 1\leqslant j\leqslant d, \YY^{E_\sigma(j)}\geqslant (\log B)^A\},
\end{equation} and also define \begin{equation}\label{eq:CIAB}
\CI^{(A)\flat}(B):=\int_{\CD^{(A)\flat}(B)}\left(\max_{\sigma\in\triangle_{\operatorname{max}}}\YY^{D_0(\sigma)}\right)\operatorname{d}\omega^{X_0}_{\operatorname{tor},\infty}(\YY).
\end{equation}
\begin{proposition}\label{prop:keyprop2}
Uniformly for every $A\geqslant 1$, we have $$\#\CA^{(A)\flat}(B)=\CI^{(A)\flat}(B)+O(B(\log B)^{r-2}).$$
\end{proposition}
Proposition \ref{prop:keyprop1} compares the integrals $\CI(B)$ \eqref{eq:CIB} and $\CI^{(A)\flat}(B)$ \eqref{eq:CIAB}.
\begin{proposition}\label{prop:keyprop1}
We have $$\CI(B)-\CI^{(A)\flat}(B)=O_A(B(\log B)^{r-2}\log\log B).$$
\end{proposition}
\subsection{Proof of Theorem \ref{thm:CABCAAB}}
Thanks to
\begin{align*}
\#\left(\CA(B)\setminus\CA^{(A)\flat}(B)\right)&=\#\CA(B)-\#\CA^{(A)\flat}(B)\\ &\leqslant \left|\#\CA(B)-\CI(B)\right|+\left(\CI(B)-\CI^{(A)\flat}(B)\right)+\left|\CI^{(A)\flat}(B)-\#\CA^{(A)\flat}(B)\right|,
\end{align*}
the desired estimate is a direct consequence of the following comparison results: Proposition \ref{le:exchangeABIB} (between $\#\CA(B)$ and $\CI(B)$), Proposition \ref{prop:keyprop1} (between $\CI(B)$ and $\CI^{(A)\flat}(B)$) and Proposition \ref{prop:keyprop2} (between $\CI^{(A)\flat}(B)$ and $\#\CA^{(A)\flat}(B)$).
\qed
\subsection{Estimates of boundary sums}
Before turning to the proof of Proposition \ref{prop:keyprop2}, which compares $\#\CA^{(A)\flat}(B)$ and $\CI^{(A)\flat}(B)$, we establish Lemmas \ref{le:deltaB} and \ref{le:CGAB}, which deal with various types of sums arising from boundary conditions.
A key tool is Proposition \ref{co:slicing}, based on the more elementary Lemma \ref{le:sublemma}.
\subsubsection{An inductive summation lemma}
We need a slight generalisation of a Fubini-type summation lemma due to Salberger \cite[Sublemma 11.24]{Salberger}.
\begin{lemma}\label{le:sublemma}
Let $m\in\BZ_{>0}$. Let $e\in\BR_{>0}$ and $\ee=(e_1,\cdots,e_m)\in\BZ_{\geqslant 0}^m$. For $B>3$ and $\lambda\in\BR_{>0}$ such that $1\leqslant \lambda \leqslant B$, we consider the sum
$$S_{e;\ee}^{[m]}(B,\lambda):=\sum_{(g_1,\cdots,g_m)\in\BZ_{>0}^m}^{*}\prod_{i=1}^{m}g_i^{\frac{e_i}{e}-1},$$ where $*$ means that the $m$-tuples $(g_1,\cdots,g_m)\in\BZ_{>0}^m$ satisfy
\begin{equation}\label{eq:condition2}
\prod_{i=1}^m g_i^{e_i}\leqslant \frac{B}{\lambda},\quad \text{and}\quad \max_{1\leqslant i\leqslant m}g_i\leqslant B.
\end{equation} Then
$$S_{e;\ee}^{[m]}(B,\lambda)=O_{m,e,\ee}\left(\max\left((\log B)^m,\left(\frac{B}{\lambda}\right)^{\frac{1}{e}}(\log B)^{m-1}\right)\right),$$ where the implied constant does not depend on $\lambda$.
\end{lemma}
\begin{remark}\label{rmk:sublemma}
\hfill
\begin{enumerate}
\item
The same bound holds for the integral $$R_{e;\ee}^{[m]}(B,\lambda):=\int_{(g_1,\cdots,g_m)\in\BR_{\geqslant 1}^m}^{*}\prod_{i=1}^{m}g_i^{\frac{e_i}{e}-1}\operatorname{d}\mathbf{g},$$
where $*$ means the $m$-tuples of real numbers $\mathbf{g}=(g_1,\cdots,g_m)$ satisfy \eqref{eq:condition2}.
\item The upper bound $(\log B)^m$ is achieved precisely when $\ee=\mathbf{0}$ and $\lambda\geqslant B/(\log B)^e$.
\end{enumerate}
\end{remark}
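Before giving the proof, we record a quick sanity check of the shape of the bound (a direct computation, not needed later). For $m=2$, $e=1$ and $\ee=(1,0)$, the sum is
$$S_{1;(1,0)}^{[2]}(B,\lambda)=\sum_{g_1\leqslant B/\lambda}\ \sum_{g_2\leqslant B}\frac{1}{g_2}\ll \frac{B}{\lambda}\log B,$$
which matches the second term of the bound, while for $\ee=(0,0)$ the sum is $\ll(\log B)^2$, which matches the first term.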
\begin{proof}[Proof of Lemma \ref{le:sublemma}]
The proof is a minor modification of the proof of \cite[Sublemma 11.24]{Salberger}, on keeping track of the extra parameter $\lambda$.
Assume first that $m=1$. If $e_1\geqslant 1$, then condition \eqref{eq:condition2} simplifies to $g_1\leqslant\left(\frac{B}{\lambda}\right)^{\frac{1}{e_1}}$, so we obtain
$$S^{[1]}_{e;e_1}(B,\lambda)\ll_{e,e_1}\left(\frac{B}{\lambda}\right)^\frac{1}{e}.$$
If $e_1=0$, then the first condition in \eqref{eq:condition2} is vacuous, and it follows that $$S^{[1]}_{e;e_1}(B,\lambda)=O(\log B).$$
We may now assume that $m\geqslant 2$, and we suppress the dependence on $m,e,\ee$ in the implied constants. We proceed by induction on $m$, summing first over $g_m$.
\begin{itemize}
\item If $e_m=0$, then we easily have
\begin{align*}
S_{e;\ee}^{[m]}(B,\lambda)&\leqslant S_{e;(e_1,\cdots,e_{m-1})}^{[m-1]}(B,\lambda)\times\left(\sum_{k=1}^{[B]}\frac{1}{k}\right)\\ &\ll S_{e;(e_1,\cdots,e_{m-1})}^{[m-1]}(B,\lambda)\times \log B\\ &\ll\max\left((\log B)^{m-1},\left(\frac{B}{\lambda}\right)^{\frac{1}{e}}(\log B)^{m-2}\right)\times \log B\\ &=O\left(\max\left((\log B)^m,\left(\frac{B}{\lambda}\right)^{\frac{1}{e}}(\log B)^{m-1}\right)\right),
\end{align*}
by induction hypothesis for $m-1$.
\item If $e_m\geqslant 1$, we then have
\begin{align*}
S_{e;\ee}^{[m]}(B,\lambda)&=\sum_{k=1}^{[(B/\lambda)^{1/e_m}]}k^{\frac{e_m}{e}-1}S_{e;(e_1,\cdots,e_{m-1})}^{[m-1]}(B,k^{e_m}\lambda)\\ &\ll \sum_{k=1}^{[(B/\lambda)^{1/e_m}]}k^{\frac{e_m}{e}-1}\max\left((\log B)^{m-1},\left(\frac{B}{k^{e_m}\lambda}\right)^\frac{1}{e}(\log B)^{m-2}\right)\\ &\ll (\log B)^{m-1}\sum_{k=1}^{[(B/\lambda)^{1/e_m}]}k^{\frac{e_m}{e}-1} +\left(\frac{B}{\lambda}\right)^\frac{1}{e}(\log B)^{m-2}\sum_{k=1}^{[(B/\lambda)^{1/e_m}]}\frac{1}{k}\\ &=O\left(\left(\frac{B}{\lambda}\right)^{\frac{1}{e}}(\log B)^{m-1}\right),
\end{align*} again by induction hypothesis for $m-1$.
\end{itemize} This finishes the proof.
\end{proof}
\subsubsection{Sums of slicing type}
For every $\sigma\in\triangle_{\operatorname{max}}$, we fix an admissible ordering $\sigma(1)=\{\varrho_{r+1},\cdots,\varrho_n\}$. Let $\aa(\sigma)=(a_1(\sigma),\cdots,a_r(\sigma))\in\BZ_{\geqslant 0}^r$ be such that
\begin{equation}\label{eq:D0sigmaai}
D_0(\sigma)=\sum_{i=1}^{r}a_i(\sigma) D_{\varrho_i}.
\end{equation}
For every $i=1,\cdots,r$, let $\aa_{*i}(\sigma)$ denote the $(r-1)$-subtuple of $\aa(\sigma)$ obtained by removing the term $a_{i}(\sigma)$.
\begin{proposition}\label{co:slicing}
For every $\sigma\in\triangle_{\operatorname{max}}$ with an admissible ordering, for every $1\leqslant i_0\leqslant r$, and for $B>2$, consider the sum $$\CN_\sigma(B)_{i_0,\eta[B]}:=\sum_{\substack{(X_1,\cdots,X_r)\in\BZ_{>0}^{r}\\ \prod_{i=1}^r X_i^{a_i(\sigma)}\leqslant B,\max_{1\leqslant i\leqslant r} X_i\leqslant B\\ X_{i_0}=\eta[B](X_i,i\neq i_0)}} \prod_{i=1}^r X_i^{a_i(\sigma)-1},$$ where $\eta[B]:\BZ_{>0}^{r-1}\to\BZ_{>0}$ is a map depending possibly on $B$.
Then $$\CN_\sigma(B)_{i_0,\eta[B]}=O(B(\log B)^{r-2}).$$
The implied constant is independent of the map $\eta[B]$.
\end{proposition}
\begin{proof}[Proof of Proposition \ref{co:slicing}]
We may assume that $i_0=1$ and we write $\CN_\sigma(B)_{i_0,\eta[B]}=\CN_\sigma(B)_{\eta}$ for simplicity. We shall discuss two cases separately.
If $a_1(\sigma)>0$, then, since $\XX^{D_0(\sigma)}\leqslant B$, the sum $\CN_\sigma(B)_{\eta}$ vanishes unless $$\eta[B](X_2,\cdots,X_r)\leqslant \left\lfloor\left(B/\prod_{i=2}^r X_i^{a_i(\sigma)}\right)^{\frac{1}{a_1(\sigma)}}\right\rfloor,$$
which we now assume. Then, we have, using Lemma \ref{le:sublemma} and its notation,
\begin{align*}
\CN_\sigma(B)_{\eta} &\leqslant \sum_{\substack{(X_2,\cdots,X_r)\in\BZ_{>0}^{r-1}\\ \prod_{i=2}^r X_i^{a_i(\sigma)}\leqslant B\\\max_{2\leqslant i\leqslant r} X_i\leqslant B}}\left(B/\prod_{i=2}^r X_i^{a_i(\sigma)}\right)^{\frac{a_1(\sigma)-1}{a_1(\sigma)}} \prod_{i=2}^r X_i^{a_i(\sigma)-1}\\ &\leqslant B^{1-\frac{1}{a_1(\sigma)}}\sum_{\substack{(X_2,\cdots,X_r)\in\BZ_{>0}^{r-1}\\ \prod_{i=2}^r X_i^{a_i(\sigma)}\leqslant B\\ \max_{2\leqslant i\leqslant r} X_i\leqslant B}}\prod_{i=2}^{r}X_i^{\frac{a_i(\sigma)}{a_1(\sigma)}-1}\\ &=B^{1-\frac{1}{a_1(\sigma)}}S^{[r-1]}_{a_1(\sigma);\aa_{*1}(\sigma)}(B,1)=O(B(\log B)^{r-2}).
\end{align*}
If instead $a_1(\sigma)=0$, then, since $\eta[B]\geqslant 1$,
\begin{align*}
\CN_\sigma(B)_{\eta}&\leqslant\sum_{\substack{(X_2,\cdots,X_r)\in\BZ_{>0}^{r-1}\\ \prod_{i=2}^r X_i^{a_i(\sigma)}\leqslant B\\\max_{2\leqslant i\leqslant r} X_i\leqslant B}}\frac{1}{\eta[B](X_2,\cdots,X_r)}\prod_{i=2}^r X_i^{a_i(\sigma)-1}\\ &\leqslant \sum_{\substack{(X_2,\cdots,X_r)\in\BZ_{>0}^{r-1}\\ \prod_{i=2}^r X_i^{a_i(\sigma)}\leqslant B\\\max_{2\leqslant i\leqslant r} X_i\leqslant B}}\prod_{i=2}^r X_i^{a_i(\sigma)-1}= S^{[r-1]}_{1;\aa_{*1}(\sigma)}(B,1)=O(B(\log B)^{r-2}).
\end{align*}
None of the estimates above depends on the map $\eta[B]$, nor do the implied constants. This finishes the proof.
\end{proof}
\subsubsection{Boundary sums induced by the toric height}
In comparing sums and integrals we frequently need to deal with the following kind of set.
For every $1\leqslant k\leqslant n$, let $\uu_k$ denote the $n$-tuple of integers whose $k$-th coordinate is $1$ and whose other coordinates are zero. Now let us consider the set
\begin{equation}\label{eq:deltaB}
\begin{split}
\delta(B):=\{\XX\in\CA(B):\text{ there exists } 1\leqslant k\leqslant n,\max_{\sigma\in\triangle_{\operatorname{max}}}(\XX+\uu_k)^{D_0(\sigma)}>B\}. \end{split}
\end{equation}
\begin{lemma}[\cite{Salberger}, Lemma 11.25 (b)]\label{le:deltaB}
We have $$\#\delta(B)=O(B(\log B)^{r-2}).$$
\end{lemma}
With this lemma we can quickly prove Proposition \ref{le:exchangeABIB}.
Indeed, the boundary introduced by the height condition can be covered by unit boxes, at least one vertex of which lies in $\delta(B)$. We then have $$\left|\#\CA(B)-\CI(B)\right|\leqslant 2^n\#\delta(B)=O(B(\log B)^{r-2})$$ by Lemma \ref{le:deltaB}.
\begin{proof}[Proof of Lemma \ref{le:deltaB} (Salberger)]
Clearly it is sufficient to deal with every subset
\begin{equation}\label{eq:deltakB}
\delta_{k,\sigma_0}(B):=\{\XX\in\CA(B):\XX+\uu_k\not\in\CA(B),\ \XX+\uu_k\in C_{0,\sigma_0}(\BR)\},
\end{equation} because $$\delta(B)=\bigcup_{k=1}^n\bigcup_{\sigma_0\in\triangle_{\operatorname{max}}}\delta_{k,\sigma_0}(B).$$
This is already carried out in \cite[p. 243--244]{Salberger}. To illustrate the idea, we briefly reproduce the proof here.
Now take $\XX\in\delta_{k,\sigma_0}(B)$.
Since $$\XX^{D_0(\sigma_0)}\leqslant\max_{\sigma\in\triangle_{\operatorname{max}}}\XX^{D_0(\sigma)}\leqslant B<(\XX+\uu_k)^{D_0(\sigma_0)},$$ we deduce that the variable $X_k$ has to divide the monomial $\XX^{D_0(\sigma_0)}$, that is to say, $\varrho_k\not\in \sigma_0(1)$. So we may reorder $\triangle(1)$ so that $\sigma_0(1)=\{\varrho_{r+1},\cdots,\varrho_{r+d}\}$ and $\varrho_k\mapsto\varrho_1$. We then have $a_1(\sigma_0)>0$.
Moreover, the conditions $\XX+\uu_1\not\in\CA(B)$ and $\XX+\uu_1\in C_{0,\sigma_0}(\BR)$ imply that \begin{equation}\label{eq:cond1}
\max_{\sigma\in\triangle_{\operatorname{max}}}(\XX+\uu_1)^{D_0(\sigma)}=(\XX+\uu_1)^{D_0(\sigma_0)}=(X_1+1)^{a_1(\sigma_0)}\prod_{i=2}^r X_i^{a_i(\sigma_0)}>B,
\end{equation} and \begin{equation}\label{eq:cond2}
X_{r+j}\leqslant(\XX+\uu_1)^{E_{\sigma_0}(j)},1\leqslant j\leqslant d.
\end{equation}
Therefore the cardinality of $\delta_{k,\sigma_0}(B)$ is bounded from above by the number of $\XX\in\BZ_{>0}^n$ satisfying
\begin{equation}\label{eq:cond3}
\prod_{i=1}^{r}X_i^{a_i(\sigma_0)}\leqslant B.
\end{equation}
For fixed $X_1,\cdots,X_r\geqslant 1$, by \eqref{eq:EjDsigma} the number of $d$-tuples of integers $(X_{r+1},\cdots,X_n)$ satisfying \eqref{eq:cond2} is at most
$$\prod_{j=1}^{d}(\XX+\uu_1)^{E_{\sigma_0}(j)}=(X_1+1)^{a_1(\sigma_0)-1}\prod_{i=2}^r X_i^{a_i(\sigma_0)-1}\leqslant 2^{a_1(\sigma_0)-1}\prod_{i=1}^r X_i^{a_i(\sigma_0)-1}.$$ Moreover, since $a_1(\sigma_0)>0$, conditions \eqref{eq:cond1} and \eqref{eq:cond3} together imply that, once the $(r-1)$-tuple $(X_2,\cdots,X_r)$ is fixed, $X_1$ equals the unique integer $\eta_k[B]=\eta_k[B](X_2,\cdots,X_r)\geqslant 1$ lying in the interval
$$\left]\left(B/\prod_{i=2}^r X_i^{a_i(\sigma_0)}\right)^{\frac{1}{a_1(\sigma_0)}}-1,\left(B/\prod_{i=2}^r X_i^{a_i(\sigma_0)}\right)^{\frac{1}{a_1(\sigma_0)}}\right].$$
Therefore, applying Proposition \ref{co:slicing} to the map $\eta_k[B]$, we obtain
\begin{align*}
\#\delta_{k,\sigma_0}(B)&\leqslant \sum_{\substack{(X_1,\cdots,X_r)\in\BZ_{>0}^{r}\\ \prod_{i=1}^r X_i^{a_i(\sigma_0)}\leqslant B,\max_{1\leqslant i\leqslant r} X_i\leqslant B\\ X_{1}=\eta_k[B](X_i,i\neq 1)}}\prod_{j=1}^{d}(\XX+\uu_1)^{E_{\sigma_0}(j)}\leqslant 2^{a_1(\sigma_0)-1}\CN_{\sigma_0}(B)_{1,\eta_k[B]}\\ &=O(B(\log B)^{r-2}).
\end{align*}
This establishes the announced upper bound of Lemma \ref{le:deltaB}.
\end{proof}
\subsubsection{Boundary sums related to $\CA^{(A)\flat}(B)$}\label{se:CGAB}
To define our second kind of boundary set, we need the following notation.
For every $\sigma\in\triangle_{\operatorname{max}}$ with an admissible ordering, for every $1\leqslant j\leqslant d$, since $E_\sigma(j)$ \eqref{eq:Ej} has support in $\cup_{i=1}^r D_{\varrho_i}$, we let $\bb^{(j,\sigma)}=(b_1^{(j,\sigma)},\cdots,b_r^{(j,\sigma)})\in\BZ^r$ be such that
\begin{equation}\label{eq:Esigmajbij}
E_\sigma(j)=\sum_{i=1}^{r}b_i^{(j,\sigma)}D_{\varrho_i},
\end{equation} so that
\begin{equation}\label{eq:bijsigma}
\XX^{E_\sigma(j)}=\prod_{i=1}^rX_i^{b_i^{(j,\sigma)}}.
\end{equation}
We observe that $\bb^{(j,\sigma)}\neq \mathbf{0}$ for every $j$, thanks to the completeness and regularity of the fan $\triangle$.
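As a quick illustration (not needed later): for $X=\BP^1\times\BP^1$ (rays generated by $\pm e_1,\pm e_2$, anticanonical $D_0$) and $\sigma$ the maximal cone spanned by $-e_1,-e_2$, one finds
$$E_\sigma(1)=D_{\varrho_1},\quad E_\sigma(2)=D_{\varrho_2},\quad\text{so that}\quad \bb^{(1,\sigma)}=(1,0),\quad \bb^{(2,\sigma)}=(0,1),$$
and both vectors are indeed non-zero.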
Now for any $A\geqslant 1$, we consider the subset $\CG_\sigma^{(A)}(B)$ consisting of $\XX\in\CA(B)$ such that
\begin{itemize}
\item $X_{i}>1\quad \text{for every } i\in\{1,\cdots,r\}$;
\item There exists $j_0\in\{1,\cdots,d\}$ such that \begin{equation}\label{eq:CGsigmaA}
\max(X_{r+j_0},(\log B)^A)\leqslant \XX^{E_{\sigma}(j_0)} ,\quad X_{r+j}\leqslant 2^{\sum_{i=1}^{r}|b_i^{(j,\sigma)}|}\XX^{E_\sigma(j)},\text{ for every }j\in\{1,\cdots,d\}\setminus\{j_0\},
\end{equation} and \begin{equation}\label{eq:condpos}
\prod_{\substack{1\leqslant i\leqslant r\\ b_{i}^{(j_0,\sigma)}>0}}\left(X_i-1\right)^{b_{i}^{(j_0,\sigma)}}\prod_{\substack{1\leqslant i\leqslant r\\ b_{i}^{(j_0,\sigma)}<0}}\left(X_i+1\right)^{b_{i}^{(j_0,\sigma)}}<(\log B)^A.
\end{equation}
\end{itemize}
\begin{lemma}\label{le:CGAB}
We have $$\#\CG_\sigma^{(A)}(B)=O(B(\log B)^{r-2}),$$ where the implied constant is independent of $A$.
\end{lemma}
\begin{proof}
Our proof is partly inspired by the proof of Lemma \ref{le:deltaB}.
In fact, for every $\XX\in\CG_\sigma^{(A)}(B)$, the conditions above imply that some coordinate $X_{i_0}$, $1\leqslant i_0\leqslant r$, is determined once the remaining variables $X_i$, $i\in \{1,\cdots,r\}\setminus \{i_0\}$, are fixed.
Indeed, fix $\XX\in \CG_\sigma^{(A)}(B)$. Then the first condition of \eqref{eq:CGsigmaA} and \eqref{eq:condpos} imply that there exist $1\leqslant i_0\leqslant r$ with $b_{i_0}^{(j_0,\sigma)}\neq 0$ and a collection of indices $J\subset \{1\leqslant i\leqslant r:b_{i}^{(j_0,\sigma)}\neq 0\}$ containing $i_0$, such that \begin{equation}\label{eq:pos1}
\left(\prod_{i\not\in J\setminus\{i_0\}}X_i^{b_{i}^{(j_0,\sigma)}}\right)\left(\prod_{i\in J\setminus\{i_0\}:b_{i}^{(j_0,\sigma)}>0}\left(X_i-1\right)^{b_{i}^{(j_0,\sigma)}}\right)\left(\prod_{i\in J\setminus\{i_0\}:b_i^{(j_0,\sigma)}<0}\left(X_i+1\right)^{b_{i}^{(j_0,\sigma)}}\right)\geqslant (\log B)^A,
\end{equation}
\begin{equation}\label{eq:pos2}
\left(\prod_{i\not\in J}X_i^{b_{i}^{(j_0,\sigma)}}\right)\left(\prod_{i\in J:b_{i}^{(j_0,\sigma)}>0}\left(X_i-1\right)^{b_{i}^{(j_0,\sigma)}}\right)\left(\prod_{i\in J:b_i^{(j_0,\sigma)}<0}\left(X_i+1\right)^{b_{i}^{(j_0,\sigma)}}\right)< (\log B)^A.
\end{equation}
These conditions, together with $X_{i_0}>1$, imply that $X_{i_0}$ is equal to
$$\eta_{\sigma,j_0,i_0,J}^{(A)}[B]=\eta_{\sigma,j_0,i_0,J}^{(A)}[B](X_i,i\neq i_0)=\max\left(\theta_{\sigma,j_0,i_0,J}^{(A)}[B],2\right),$$ where $\theta_{\sigma,j_0,i_0,J}^{(A)}[B]=\theta_{\sigma,j_0,i_0,J}^{(A)}[B](X_i,i\neq i_0)$ is the unique integer lying in the interval
\begin{equation}\label{eq:condition4}
\begin{cases}
\left[\Gamma^{1/b_{i_0}^{(j_0,\sigma)}},\Gamma^{1/b_{i_0}^{(j_0,\sigma)}}+1\right[ &\text{ if } b_{i_0}^{(j_0,\sigma)}>0;\\ \left]\Gamma^{1/b_{i_0}^{(j_0,\sigma)}}-1,\Gamma^{1/b_{i_0}^{(j_0,\sigma)}}\right]&\text{ if } b_{i_0}^{(j_0,\sigma)}<0,
\end{cases}
\end{equation} where $\Gamma=\Gamma_{\sigma,j_0,i_0,J}^{(A)}[B](X_i,i\neq i_0)$ is equal to
$$(\log B)^A\left(\prod_{i\not\in J}X_i^{b_{i}^{(j_0,\sigma)}}\right)\left(\prod_{i\in J\setminus\{i_0\}:b_{i}^{(j_0,\sigma)}>0}\left(X_i-1\right)^{b_{i}^{(j_0,\sigma)}}\right)\left(\prod_{i\in J\setminus\{i_0\}:b_i^{(j_0,\sigma)}<0}\left(X_i+1\right)^{b_{i}^{(j_0,\sigma)}}\right).$$
By the condition \eqref{eq:CGsigmaA}, applying Proposition \ref{co:slicing}, we obtain
\begin{align*}
\#\CG_\sigma^{(A)}(B)\leqslant &\sum_{\substack{1\leqslant j_0\leqslant d\\1\leqslant i_0\leqslant r:b_{i_0}^{(j_0,\sigma)}\neq 0\\ i_0\in J\subset \{1\leqslant i\leqslant r:b_{i}^{(j_0,\sigma)}\neq 0\}}}\sum_{\substack{(X_1,\cdots,X_r)\in\BZ_{>0}^{r}\\ \XX^{D_0(\sigma)}\leqslant B,\max_{1\leqslant i\leqslant r} X_i\leqslant B\\ X_{i_0}=\eta_{\sigma,j_0,i_0,J}^{(A)}[B](X_i,i\neq i_0)}}2^{\sum_{j\neq j_0}\sum_{i=1}^{r}|b_i^{(j,\sigma)}|}\prod_{j=1}^{d}\XX^{E_{\sigma}(j)}\\ \ll &\sum_{\substack{1\leqslant j_0\leqslant d\\1\leqslant i_0\leqslant r:b_{i_0}^{(j_0,\sigma)}\neq 0\\ i_0\in J\subset \{1\leqslant i\leqslant r:b_{i}^{(j_0,\sigma)}\neq 0\}}}\CN_\sigma(B)_{i_0,\eta_{\sigma,j_0,i_0,J}^{(A)}[B]}=O(B(\log B)^{r-2}),
\end{align*} where the implied constant is independent of the map $\eta_{\sigma,j_0,i_0,J}^{(A)}[B]$ and hence independent of $A$.
\end{proof}
\subsection{Proof of Proposition \ref{prop:keyprop2}}\label{se:proofkeyProp2}
We want to prove that any (real) point satisfying at least one of the boundary conditions in the definition \eqref{eq:CAAB} of $\CA^{(A)\flat}(B)$ is contained in an integral unit cube one of whose vertices lies in $$\delta(B)\bigcup\left(\bigcup_{\sigma\in\triangle_{\operatorname{max}}}\left(\CG^{(A)}_\sigma(B)\bigcup \left(\bigcup_{i=1}^r\{\XX\in\CA_\sigma(B):X_i\leqslant 2\}\right)\right)\right).$$
With the notation of Proposition \ref{co:slicing}, for every $\sigma\in\triangle_{\max}$ and every $1\leqslant i\leqslant r$, $$\#\{\XX\in\CA_\sigma(B):X_i\leqslant 2\}\leqslant\CN_\sigma(B)_{i,\eta\equiv 1}+\CN_\sigma(B)_{i,\eta\equiv 2}=O(B(\log B)^{r-2}).$$ Granting the covering claim above, we conclude from Lemmas \ref{le:deltaB} and \ref{le:CGAB} that the deviation resulting from replacing the cardinality $\#\CA^{(A)\flat}(B)$ by the integral $\CI^{(A)\flat}(B)$ satisfies
\begin{align*}
|\#\CA^{(A)\flat}(B)-\CI^{(A)\flat}(B)|&\leqslant 2^n\left(\#\delta(B)+\sum_{\sigma\in\triangle_{\operatorname{max}}}\left(\#\CG^{(A)}_\sigma(B)+\sum_{i=1}^{r}\left(\CN_\sigma(B)_{i,\eta\equiv 1}+\CN_\sigma(B)_{i,\eta\equiv 2}\right)\right)\right)\\ &=O(B(\log B)^{r-2}),
\end{align*} uniformly in $A$.
This gives the desired estimate of Proposition \ref{prop:keyprop2}, modulo the covering claim, which we now verify.
Now take any such $\YY_0=(Y_1,\cdots,Y_n)\in\CD^{(A)\flat}(B)$. Let $\sigma_0\in\triangle_{\max}$ be such that $\YY_0\in\CD_{\sigma_0}^{(A)}(B)$. Assume that $\YY_0$ satisfies
\begin{equation}\label{eq:condbd}
\YY_0^{E_{\sigma_0}(j_0)}=(\log B)^A,\text{ for certain }1\leqslant j_0\leqslant d,\quad Y_i\geqslant 3,\text{ for all }1\leqslant i\leqslant r,
\end{equation} and that it is not covered by integral cubes with one vertex in $\delta(B)$, that is, \begin{equation}\label{eq:conddeltaB}
\{\XX\in\CA(B): |Y_i-X_i|\leqslant 1,\text{ for all }1\leqslant i\leqslant n\}\bigcap \delta(B)=\varnothing.
\end{equation}
We want to show that it is contained in an integral cube with one vertex in $\CG^{(A)}_{\sigma_0}(B)$.
With the notation \eqref{eq:Esigmajbij}, define the integral vector $\yy_{0}=(y_1,\cdots,y_n)\in\BZ_{>0}^n$ by
\begin{equation}\label{eq:yi}
y_i=\begin{cases}
\text{the unique integer in } [Y_i,Y_i+1[ & \text{ if } 1\leqslant i\leqslant r \text{ and } b_{i}^{(j_0,\sigma_0)}>0,\\ \text{the unique integer in } ]Y_i-1,Y_i] & \text{ if } r+1\leqslant i\leqslant n, \text{ or if } 1\leqslant i\leqslant r \text{ and } b_{i}^{(j_0,\sigma_0)}\leqslant 0.
\end{cases}
\end{equation}
Then clearly $y_i\geqslant 2$ for every $1\leqslant i\leqslant r$ by the second assumption of \eqref{eq:condbd}. The assumption \eqref{eq:conddeltaB} guarantees that $\yy_{0}\in\CA(B)$. Moreover, by the first assumption of \eqref{eq:condbd} and the construction \eqref{eq:yi}, $$\yy_0^{E_{\sigma_0}(j_0)}\geqslant \YY_0^{E_{\sigma_0}(j_0)}=(\log B)^A\geqslant Y_{r+j_0}\geqslant y_{r+j_0}.$$ For every $j\neq j_0$, we observe that \begin{align*}
b_i^{(j,\sigma_0)}>0 &\Longrightarrow (y_i+1)^{b_i^{(j,\sigma_0)}}\leqslant 2^{b_i^{(j,\sigma_0)}} y_i^{b_i^{(j,\sigma_0)}};\\ b_i^{(j,\sigma_0)}<0 & \Longrightarrow (y_i-1)^{b_i^{(j,\sigma_0)}}\leqslant 2^{-b_i^{(j,\sigma_0)}}y_i^{b_i^{(j,\sigma_0)}}.
\end{align*} Since $\YY_0\in\CD_{\sigma_0}(B)$, the construction \eqref{eq:yi} implies that $$y_{r+j_0}\leqslant Y_{r+j_0}\leqslant \YY_0^{E_{\sigma_0}(j_0)}\leqslant \yy_0^{E_{\sigma_0}(j_0)},$$
$$y_{r+j}\leqslant Y_{r+j}\leqslant \YY_0^{E_{\sigma_0}(j)}\leqslant 2^{\sum_{i=1}^{r}|b_i^{(j,\sigma_0)}|}\yy_0^{E_{\sigma_0}(j)}, \text{ for every }j\neq j_0.$$
We have thus proved that $\yy_0$ satisfies \eqref{eq:CGsigmaA}.
Finally, since $\bb^{(j_0,\sigma_0)}\neq \mathbf{0}$, we have
$$\prod_{\substack{1\leqslant i\leqslant r\\ b_{i}^{(j_0,\sigma_0)}>0}}\left(y_i-1\right)^{b_{i}^{(j_0,\sigma_0)}}\prod_{\substack{1\leqslant i\leqslant r\\ b_i^{(j_0,\sigma_0)}<0}}\left(y_i+1\right)^{b_{i}^{(j_0,\sigma_0)}}<\YY_0^{E_{\sigma_0}(j_0)}=(\log B)^A.$$ This shows that $\yy_0$ satisfies \eqref{eq:condpos}.
So we have proved $\yy_0\in\CG^{(A)}_{\sigma_0}(B)$, and $\YY_0$ lies in one of the integral unit cubes having $\yy_0$ as a vertex. This finishes the proof.
\qed
\subsection{Proof of Proposition \ref{prop:keyprop1}}
Recalling \eqref{eq:CDB}, we define
\begin{equation}\label{eq:CDsigmaB}
\CD_\sigma(B):=\CD(B)\cap C_{0,\sigma}(\BR),
\end{equation}
so that $$\CD_\sigma(B)=\{\YY\in\BR_{\geqslant 1}^n:\max_{1\leqslant i\leqslant n}Y_i\leqslant B,\YY^{D_0(\sigma)}\leqslant B,\text{ for every }1\leqslant j\leqslant d, Y_{r+j}\leqslant\YY^{E_\sigma(j)}\}.$$
Recalling \eqref{eq:CIB}, we also let
\begin{equation}\label{eq:CIsigmaB}
\CI_\sigma(B):=\int_{\CD_\sigma(B)}\YY^{D_0(\sigma)}\operatorname{d}\omega^{X_0}_{\operatorname{tor},\infty}(\YY)=\int_{\CD_\sigma(B)}\operatorname{d}\xx.
\end{equation}
Correspondingly, for every $A\geqslant 1$, we define
\begin{equation}\label{eq:CDsigmaAB}
\CD_\sigma^{(A)}(B)=\{\YY\in\CD_\sigma(B):\text{ for every } 1\leqslant j\leqslant d,\YY^{E_\sigma(j)}\geqslant (\log B)^A\},
\end{equation} \begin{equation}\label{eq:CIsigmaAB}
\CI_\sigma^{(A)}(B):=\int_{\CD_\sigma^{(A)}(B)}\YY^{D_0(\sigma)}\operatorname{d}\omega^{X_0}_{\operatorname{tor},\infty}(\YY)=\int_{\CD_\sigma^{(A)}(B)}\operatorname{d}\xx.
\end{equation}
Then, similarly to Remark \ref{rmk:differenceCAABCAsigmaB}, we have \begin{equation}\label{eq:CDBcup}
\CD(B)=\bigcup_{\sigma\in\triangle_{\operatorname{max}}}\CD_\sigma(B),
\end{equation}
\begin{equation}\label{eq:CDABcup}
\CD^{(A)\flat}(B)\subset\bigcup_{\sigma\in\triangle_{\operatorname{max}}}\CD^{(A)}_\sigma(B),
\end{equation}
\begin{equation}\label{eq:CDABcup2}
\CD(B)\setminus\CD^{(A)\flat}(B)\subset \bigcup_{\sigma\in\triangle_{\operatorname{max}}}\left(\CD_\sigma(B)\setminus \CD_\sigma^{(A)}(B)\right).
\end{equation}
We begin with the following lemma.
\begin{lemma}\label{le:CIBCIsigmaB}
We have $$\CI^{(A)\flat}(B)=\sum_{\sigma\in\triangle_{\operatorname{max}}}\CI_\sigma^{(A)}(B).$$
\end{lemma}
\begin{proof}
It results from Theorem \ref{thm:realTamagawameas} that, for any two distinct maximal cones $\sigma_1,\sigma_2\in\triangle_{\operatorname{max}}$, the intersection $C_{\sigma_1}(\BR)\cap C_{\sigma_2}(\BR)$ has $\omega^X_{\operatorname{tor},\infty}$-measure zero, and hence $$\int_{C_{0,\sigma_1}(\BR)\cap C_{0,\sigma_2}(\BR)}\operatorname{d}\xx=0.$$ Since $$\CD_{\sigma_1}(B)\cap\CD_{\sigma_2}(B)\subset C_{0,\sigma_1}(\BR)\cap C_{0,\sigma_2}(\BR),$$
this, combined with \eqref{eq:CDBcup}, implies (cf. \cite[Proposition 11.32]{Salberger}) $$\CI(B)=\sum_{\sigma\in\triangle_{\operatorname{max}}}\CI_\sigma(B).$$
On the one hand, by \eqref{eq:CDABcup}, we have $$\CI^{(A)\flat}(B)\leqslant\sum_{\sigma\in\triangle_{\operatorname{max}}}\CI_\sigma^{(A)}(B).$$ On the other hand, we deduce from \eqref{eq:CDABcup2} that
$$\CI(B)-\CI^{(A)\flat}(B)\leqslant\sum_{\sigma\in\triangle_{\operatorname{max}}}\left(\CI_\sigma(B)-\CI_\sigma^{(A)}(B)\right)=\CI(B)-\sum_{\sigma\in\triangle_{\operatorname{max}}}\CI_\sigma^{(A)}(B).$$ Therefore we deduce the equality of Lemma \ref{le:CIBCIsigmaB}.
\end{proof}
Now Proposition \ref{prop:keyprop1} follows from Lemma \ref{le:CIBCIsigmaB} and the following one.
\begin{lemma}\label{le:CIABsum}
For every $\sigma\in\triangle_{\operatorname{max}}$, we have
$$\CI_\sigma(B)-\CI_\sigma^{(A)}(B)=O_A(B(\log B)^{r-2}\log\log B).$$
\end{lemma}
Our goal in the remainder of this section is to prove Lemma \ref{le:CIABsum}.
The idea, following \cite[p. 249--p. 250]{Salberger}, is to use Fubini's theorem to reduce the computation of each $\CI^{(A)}_\sigma(B)$, $\sigma\in\triangle_{\operatorname{max}}$, to integrals over fibres under the N\'eron--Severi torus $T(\BR)$, each fibre being diffeomorphic to a subregion of
\begin{equation}\label{eq:CFB}
\CF(B):=\{\YY\in U_0(\BR)\cap \Tns(\BR):\YY^{D_0}\leqslant B,\min_{1\leqslant i\leqslant n}Y_i\geqslant 1\}. \footnote{Note that $\CF(B)$ is independent of any $\sigma\in\triangle_{\max}$.}
\end{equation}
We recall that in $\Pic(X)$, for every $\sigma\in\triangle_{\max}$ with any admissible ordering, $$[D_0(\sigma)]=[D_0],\quad [D_{\varrho_{r+j}}]=[E_{\sigma}(j)],\quad 1\leqslant j\leqslant d.$$
If we denote by $x_i,1\leqslant i\leqslant n$ the coordinate regular functions on $X_0\subset\BA^n$, then on $\Tns(\BR)=\Hom_\BZ(\Pic(X),\BR)\subset U_0(\BR)$, thanks to \eqref{eq:D0sigma} and \eqref{eq:Ej}, we have
\begin{equation}\label{eq:coordfuneq}
\xx^{D_0(\sigma)}=\xx^{D_0},\quad \text{and}\quad\xx^{E_{\sigma}(j)}=x_{r+j},\quad\text{for every }1\leqslant j\leqslant d.
\end{equation}
We next fix, for the remainder of this section, a cone $\sigma_0\in\triangle_{\operatorname{max}}$ with an admissible ordering.
We consider the region
\begin{align*}
\CF_{\sigma_0}^{(A)}(B)&:=\{\YY\in\CF(B):\min_{1\leqslant j\leqslant d}Y_{r+j}\geqslant (\log B)^A\}\\ &=\{\YY\in U_0(\BR)\cap \Tns(\BR):\YY^{D_0}\leqslant B,\min_{1\leqslant i\leqslant n}Y_i\geqslant 1,\min_{1\leqslant j\leqslant d}Y_{r+j}\geqslant (\log B)^A\}.
\end{align*}
\begin{lemma}[cf. \cite{Salberger}, Lemma 11.38]\label{le:CIsigma0Bomega}
Let $\varpi_{\CTNS}$ be the global $\CTNS$-invariant differential form represented, in the coordinate system of $U_{\sigma_0}$ with respect to the admissible ordering, by $$\frac{\operatorname{d}x_1}{x_1}\wedge\cdots\wedge\frac{\operatorname{d}x_r}{x_r}.$$
We have, uniformly for $A\geqslant 1$,
$$\CI_{\sigma_0}(B)=\int_{\CF(B)}\xx^{D_0}\operatorname{d}\varpi_{\CTNS}+O(B(\log B)^{r-2}),$$
$$\CI_{\sigma_0}^{(A)}(B)=\int_{\CF_{\sigma_0}^{(A)}(B)}\xx^{D_0}\operatorname{d}\varpi_{\CTNS}+O(B(\log B)^{r-2}).$$
\end{lemma}
\begin{proof}[Proof of Lemma \ref{le:CIsigma0Bomega}]
We introduce the regions (cf. \cite[11.34]{Salberger})
\begin{equation}\label{eq:OmegasigmaB}
\Omega_{\sigma_0}(B):=\{\ZZ\in\BR_{\geqslant 1}^r:\ZZ^{D_0(\sigma_0)}\leqslant B,\min_{1\leqslant j\leqslant d}\ZZ^{E_{\sigma_0}(j)}\geqslant 1\},
\end{equation}
$$\Omega_{\sigma_0}^{(A)}(B):=\{\ZZ\in\BR_{\geqslant 1}^r:\ZZ^{D_0(\sigma_0)}\leqslant B,\min_{1\leqslant j\leqslant d}\ZZ^{E_{\sigma_0}(j)}\geqslant (\log B)^A\}.$$
Then $\CF(B)$ (resp. $\CF_{\sigma_0}^{(A)}(B)$) is diffeomorphic to $\Omega_{\sigma_0}(B)$ (resp. $\Omega_{\sigma_0}^{(A)}(B)$) in the real topology via projection onto the first $r$ coordinates.
We thus have, by \eqref{eq:EjDsigma} and \eqref{eq:coordfuneq} (cf. \cite[11.35--11.37]{Salberger})
\begin{align*}
\CI_{\sigma_0}(B)=\int_{\CD_{\sigma_0}(B)}\operatorname{d}\xx &=\int_{\Omega_{\sigma_0}(B)}\left(\prod_{j=1}^{d}(\xx^{E_{\sigma_0}(j)}-1)\right)\operatorname{d}x_1\cdots\operatorname{d}x_r\\&=\int_{\Omega_{\sigma_0}(B)}\frac{\xx^{D_0(\sigma_0)}}{\prod_{i=1}^{r}x_i}\left(\prod_{j=1}^{d}(1-\xx^{-E_{\sigma_0}(j)})\right)\operatorname{d}x_1\cdots\operatorname{d}x_r\\&=\int_{\CF(B)}\xx^{D_0}\left(\prod_{j=1}^{d}(1-x^{-1}_{r+j})\right)\frac{\operatorname{d}x_1}{x_1}\cdots\frac{\operatorname{d}x_r}{x_r}\\ &=\int_{\CF(B)}\xx^{D_0}\operatorname{d}\varpi_{\CTNS}+O\left(\sum_{j=1}^{d}\int_{\CF(B)}\left(\frac{\xx^{D_0}}{x_{r+j}}\right)\operatorname{d}\varpi_{\CTNS}\right).
\end{align*} Similarly, for $\CI_{\sigma_0}^{(A)}(B)$ we have
\begin{align*}
\CI_{\sigma_0}^{(A)}(B)=\int_{\CD_{\sigma_0}^{(A)}(B)}\operatorname{d}\xx &=\int_{\Omega_{\sigma_0}^{(A)}(B)}\left(\prod_{j=1}^{d}(\xx^{E_{\sigma_0}(j)}-1)\right)\operatorname{d}x_1\cdots\operatorname{d}x_r\\
&=\int_{\CF_{\sigma_0}^{(A)}(B)}\xx^{D_0}\left(\prod_{j=1}^{d}(1-x^{-1}_{r+j})\right)\frac{\operatorname{d}x_1}{x_1}\cdots\frac{\operatorname{d}x_r}{x_r}\\ &=\int_{\CF_{\sigma_0}^{(A)}(B)}\xx^{D_0}\operatorname{d}\varpi_{\CTNS}+O\left(\sum_{j=1}^{d}\int_{\CF_{\sigma_0}^{(A)}(B)}\left(\frac{\xx^{D_0}}{x_{r+j}}\right)\operatorname{d}\varpi_{\CTNS}\right).
\end{align*}
The proof of \cite[Lemma 11.38]{Salberger} shows that for every $1\leqslant j\leqslant d$, $$\int_{\CF(B)}
\left(\frac{\xx^{D_0}}{x_{r+j}}\right)\operatorname{d}\varpi_{\CTNS}=O(B(\log B)^{r-2}),$$ so \emph{a fortiori}
$$\int_{\CF_{\sigma_0}^{(A)}(B)}
\left(\frac{\xx^{D_0}}{x_{r+j}}\right)\operatorname{d}\varpi_{\CTNS}=O(B(\log B)^{r-2}).$$
This finishes the proof of Lemma \ref{le:CIsigma0Bomega}.
\end{proof}
In view of Lemma \ref{le:CIsigma0Bomega}, to finish the proof of Lemma \ref{le:CIABsum}, it remains to show
\begin{lemma}\label{le:boundrylogBA}
$$\int_{\CF(B)\setminus\CF_{\sigma_0}^{(A)}(B)}\xx^{D_0}\operatorname{d}\varpi_{\CTNS}=O_A(B(\log B)^{r-2}\log\log B).$$
\end{lemma}
\begin{proof}[Proof of Lemma \ref{le:boundrylogBA}]
Write $\CF(B)\setminus\CF_{\sigma_0}^{(A)}(B)=\cup_{j_0=1}^d \CH_{\sigma_0,j_0}^{(A)}(B)$, where, for each $j_0\in\{1,\cdots,d\}$ (with respect to the previously fixed admissible ordering of $\sigma_0$), $$\CH_{\sigma_0,j_0}^{(A)}(B)=\{\YY\in U_0(\BR)\cap \Tns(\BR):\YY^{D_0}\leqslant B,\min_{1\leqslant i\leqslant n}Y_i\geqslant 1, Y_{r+j_0}<(\log B)^A\}.$$
We now fix such a $j_0$. By the completeness of the fan $\triangle$, we may choose $\sigma_1\in\triangle_{\operatorname{max}}$ with $-n_{r+j_0}\in\sigma_1$. Then $\varrho_{r+j_0}\not\in \sigma_1(1)$.
For the computation of $\int_{\CH_{\sigma_0,j_0}^{(A)}(B)}\xx^{D_0}\operatorname{d}\varpi_{\CTNS}$, we now switch to an admissible ordering of $\sigma_1$ such that $\varrho_{r+j_0}\mapsto\varrho_1$ and $\triangle(1)\setminus\sigma_1(1)=\{\varrho_{r+1},\cdots,\varrho_{r+d}\}$. We write
$$D_0(\sigma_1)=\sum_{i=1}^{r}a_i(\sigma_1)D_{\varrho_i},$$ where $a_i(\sigma_1)\in\BZ_{\geqslant 0},1\leqslant i\leqslant r$. Then $a_1(\sigma_1)\geqslant 1$ thanks to our choice of $\sigma_1$. Using \eqref{eq:coordfuneq}, we then have
\begin{align*}
\int_{\CH_{\sigma_0,j_0}^{(A)}(B)}\xx^{D_0}\operatorname{d}\varpi_{\CTNS}&=\int_{\CH_{\sigma_0,j_0}^{(A)}(B)}\xx^{D_0(\sigma_1)}\operatorname{d}\varpi_{\CTNS}\\&=\int_{\substack{\forall 2\leqslant i\leqslant r,1\leqslant x_i\leqslant B\\ 1\leqslant x_1<(\log B)^A, \xx^{D_0(\sigma_1)}\leqslant B}} \xx^{D_0(\sigma_1)}\frac{\operatorname{d}x_1}{x_1}\cdots\frac{\operatorname{d}x_r}{x_r}\\
&=\int_{1\leqslant x_1<(\log B)^A}x_1^{a_1(\sigma_1)-1}\left(\int_{\star}\left(\prod_{i=2}^{r}x_i^{a_i(\sigma_1)-1}\right)\operatorname{d}x_2\cdots\operatorname{d}x_r\right)\operatorname{d}x_1,
\end{align*}
where the condition $\star$ means \begin{equation}\label{eq:condint}
\forall 2\leqslant i\leqslant r,1\leqslant x_i\leqslant B,\quad \prod_{i=2}^{r}x_i^{a_i(\sigma_1)}\leqslant B/x_1^{a_1(\sigma_1)}.
\end{equation}
In terms of the notation in Remark \ref{rmk:sublemma} (2), the integral above with condition \eqref{eq:condint} is
\begin{equation}\label{eq:intR}
\int_{1\leqslant x_1<(\log B)^A}x_1^{a_1(\sigma_1)-1} R_{1,\aa_{*1}(\sigma_1)}^{[r-1]}(B,x_1^{a_1(\sigma_1)})\operatorname{d}x_1.
\end{equation}
If $\aa_{*1}(\sigma_1)\neq\mathbf{0}$, then since $x_1\leqslant (\log B)^A$, using Remark \ref{rmk:sublemma}, \eqref{eq:intR} is
\begin{align*}
&\ll \int_{1\leqslant x_1<(\log B)^A}x_1^{a_1(\sigma_1)-1}\frac{B}{x_1^{a_1(\sigma_1)}}\left(\log B\right)^{r-2}\operatorname{d}x_1\\ &=B(\log B)^{r-2}\int_{1\leqslant x_1<(\log B)^A}\frac{\operatorname{d}x_1}{x_1}=O_A(B(\log B)^{r-2}\log\log B).
\end{align*}
If $\aa_{*1}(\sigma_1)=\mathbf{0}$, then \eqref{eq:intR} is $$\ll (\log B)^{r-1}\int_{1\leqslant x_1<(\log B)^A}x_1^{a_1(\sigma_1)-1}\operatorname{d}x_1=O((\log B)^{r-1+Aa_1(\sigma_1)}),$$ which is also $O_A(B(\log B)^{r-2})$, since any fixed power of $\log B$ is $o(B)$. We finally conclude that
$$\int_{\CF(B)\setminus\CF_{\sigma_0}^{(A)}(B)}\xx^{D_0}\operatorname{d}\varpi_{\CTNS}\leqslant\sum_{j_0=1}^{d} \int_{\CH_{{\sigma_0},j_0}^{(A)}(B)}\xx^{D_0}\operatorname{d}\varpi_{\CTNS}=O_A(B(\log B)^{r-2}\log\log B).$$
This finishes the proof of Lemma \ref{le:boundrylogBA}.
\end{proof}
\section{Effective equidistribution of rational points on toric varieties}\label{se:purityproof}
This section is devoted to the proof of the following theorem.
\begin{theorem}\label{thm:effective}
The effective equidistribution condition \textbf{(EE)} holds for the canonical $\BZ$-model $\CX$ of any smooth projective split toric variety $X$ over $\BQ$ whose anticanonical line bundle is globally generated, with $$\gamma=\dim X+\operatorname{rank}\Pic(X)+\varepsilon,\quad h(B)=\frac{\log\log B}{(\log B)^\frac{1}{4}},$$ for every $\varepsilon>0$.
\end{theorem}
\begin{proof}[Proof of Theorem \ref{thm:mainequidist}]
This is a direct consequence of Theorem \ref{thm:effective} and Proposition \ref{prop:EEimpliesgeneral}. We just need the fact that the canonical $\BZ$-model $\CX$ is smooth over $\BZ$.
\end{proof}
Throughout the rest of this section, we fix a smooth projective split toric variety $X$ over $\BQ$, and we write $\pi:X_0\to X$ for the map from the principal universal torsor $X_0$ to $X$.
\subsection{Counting integral points with congruence conditions}
Following \cite[Notation 11.6 (c)]{Salberger}, we define the set
\begin{equation}\label{eq:CCB}
\CC_0(B)^+:=\{\XX\in\BZ_{>0}^n:\max_{\sigma\in\triangle_{\operatorname{max}}}\XX^{D_0(\sigma)}\leqslant B,\gcd(\XX^{\usigma},\sigma\in\triangle_{\operatorname{max}})=1\},
\end{equation}
and we define the local densities \begin{equation}\label{eq:kappa}
\kappa_p:=\frac{\#\CX_0(\BZ/p\BZ)}{p^n},\quad \kappa:=\prod_p \kappa_p.
\end{equation}
This infinite product is absolutely convergent. In fact we have (cf. Theorem \ref{thm:nonarchTmeas})
$$\kappa_p=\omega_{\operatorname{tor},\nu}^{X_0}(\CX_0(\BZ_p)),\quad \kappa=\omega_{\operatorname{tor},f}^{X_0}(\CX_0(\widehat{\BZ})).$$
Let the constant $\alpha_0$ denote the smallest integer $\alpha\in\BZ_{>0}$ such that there exists a collection of $\alpha$ rays of $\triangle$ not contained in any cone of $\triangle$. Note that we always have $\alpha_0\geqslant 2$ since the fan $\triangle$ is complete.
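For instance, if $X=\mathbb{P}^n$ with $n\geqslant 1$, the rays of $\triangle$ are generated by $e_1,\cdots,e_n$ and $-(e_1+\cdots+e_n)$; any $n$ of these $n+1$ rays span a maximal cone, while the full collection of $n+1$ rays is contained in no cone, so that $\alpha_0=n+1$. If $X=\mathbb{P}^1\times\mathbb{P}^1$, the pair of opposite rays $\{e_1,-e_1\}$ lies in no cone, so that $\alpha_0=2$.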
\begin{theorem}[\cite{Salberger} ``Main Lemma 11.27'']\label{thm:Salbergermainlemma}
$$\#\CC_0(B)^+=\kappa \#\CA(B)+O_\varepsilon(B(\log B)^{r-2+\frac{1}{\alpha_0}+\varepsilon}).$$
\end{theorem}
Our goal is to refine Theorem \ref{thm:Salbergermainlemma}. We establish asymptotic formulas for the number of integral points in $\CC_0(B)^+$ lying in a given real neighbourhood with congruence conditions, and we express the leading term as the Tamagawa measure of the corresponding adelic neighbourhood.
We fix in the remainder of this section a cone $\sigma_0\in\triangle_{\operatorname{max}}$ with an admissible ordering.
We recall that $C_{\sigma_0}(\BR)=\{(z_{r+1},\cdots,z_{r+d})\in\BR^d:|z_{r+j}|\leqslant 1,1\leqslant j\leqslant d\}$, where the coordinates $(z_{r+j})_{1\leqslant j\leqslant d}$ are given by the parametrisation of $U_{\sigma_0}$ \eqref{eq:univmap}. For every $1\leqslant j\leqslant d$, let $\lambda_j\in \mathopen]0,1\mathclose[$, and consider the following standard real neighbourhood \begin{equation}\label{eq:Binfty}
\CB_\infty=\CB_\infty(\sigma_0):=\prod_{j=1}^d \mathopen]0,\lambda_j\mathclose]\subset C_{\sigma_0}(\BR)\subset X(\BR).
\end{equation}
For every integer $l\in\BZ_{>0}$ and for every residue $\bxi_l\in\CX_0(\BZ/l\BZ)$,
consider the set \begin{equation}\label{eq:CC0xilVBinfty}
\CC_0([\bxi_l,\CB_\infty];B)^+:=\{\XX\in\CC_0(B)^+: \pi(\XX)\in\CB_\infty,\XX\equiv \bxi_l\mod l\}.
\end{equation}
\begin{theorem}\label{thm:equidistrcong}
We have, uniformly for any pair $(l,\bxi_l)$,
$$\#\CC_0([\bxi_l,\CB_\infty];B)^+=\kappa_{(l)}\operatorname{vol}(\CB_\infty)\alpha(X)B(\log B)^{r-1}+O_\varepsilon(B(\log B)^{r-2+\frac{1}{\alpha_0}+\varepsilon}\log\log B),$$ where \begin{equation}\label{eq:kappal}
\kappa_{(l)}:=\frac{1}{l^n}\left(\prod_{p\nmid l}\kappa_p\right).
\end{equation}
\end{theorem}
By convention, all implied constants in this section are allowed to depend on $\CB_\infty$.
\begin{comment}
By Theorem \ref{thm:equidistrcong}, summing up all residues in $S_l$, we obtain
\begin{align*}
\#\CC_0([S_l,\CB_\infty];B)^+&=\sum_{\bxi_l\in S_l}\#\CC_0([\bxi_l,\CB_\infty];B)^+\\ &=\kappa_{(l)}\#S_l\#\CA(\CB_\infty;B)+O_\varepsilon((\#S_l) B(\log B)^{r-2+\frac{1}{\alpha_0}+\varepsilon}\log\log B).
\end{align*}
Hence
$$\#\CC_0([S_l,\CB_\infty];B)^+=\kappa_{(l)}\#S_l\operatorname{vol}(\CB_\infty)\alpha(X)B(\log B)^{r-1}+O_\varepsilon((\#S_l) B(\log B)^{r-2+\frac{1}{\alpha_0}+\varepsilon}\log\log B).$$
Since $$\CS_l=\Psi_l^{-1}(S_l)=\bigsqcup_{\bxi_l\in S_l}\left(\CB_p^{\FTUT}(l;\bxi_l)\times\prod_{p\nmid l}\CX_0(\BZ_p)\right),$$
now it is straightforward to check that $\omega^{\CTUT}_\infty(\CB_\infty)=\operatorname{vol}(\CB_\infty)$ and $$\omega^{\CTUT}_f(\CS_l)=\sum_{\bxi_l\in S_l}\omega^{\CTUT}_f\left(\CB_p^{\FTUT}(l;\bxi_l)\times\prod_{p\nmid l}\CX_0(\BZ_p)\right)=\sum_{\bxi_l\in S_l}\frac{1}{l^n}\left(\prod_{p\nmid l}\kappa_p\right)=\frac{\#S_l}{l^n}\left(\prod_{p\nmid l}\kappa_p\right).\qedhere$$
\end{comment}
\subsection{Proof of Theorem \ref{thm:effective}}
Let us start by fixing throughout a real measurable set $\CF_\infty\subset V(\BR)$ and an affine congruence neighbourhood $\CB^{\FTUT}_f(l;\Xi_l)\subset \FTUT(\widehat{\BZ})$ of level $l$ associated to $\bXi_l\in\prod_{p\mid l}\CX_0(\BZ_p)$.
Recall that our closed (thin) subset $M\subset V(\BQ)$ satisfies
$$\pi^{-1}(M)=\{\XX\in\CX_0(\BZ):\prod_{i=1}^{n}X_i=0\}.$$
We may therefore assume that $\CF_\infty\subset \CT_{O}(\BR)$. Recall from \eqref{eq:divisionRpoints} that $$X(\BR)=\bigcup_{\sigma\in\triangle_{\operatorname{max}}}C_\sigma(\BR),$$ so, by intersecting $\CF_\infty$ with each $C_\sigma(\BR)$, it suffices to restrict ourselves to a fixed single $C_{\sigma_0}(\BR)$.
The exact sequence \eqref{eq:toriexactseq} induces (cf. \cite[11.46]{Salberger})
$$\xymatrix{1\ar[r]& \CTNS(\BR)^+\ar[d]\ar[r]& \CT_{X_0}(\BR)^+\ar[d]\ar[r]&\CT_{O}(\BR)^+\ar[d]\ar[r]& 1\\ 1\ar[r]& \CTNS(\BR)\ar[r]& \CT_{X_0}(\BR)\ar[r]&\CT_{O}(\BR)\ar[r]& 1,}$$ where the superscript $+$ means the real connected component containing the identity. So the group $\CT_{X_0}(\BR)/\CT_{X_0}(\BR)^+$ acts on $\CT_{O}(\BR)/\CT_{O}(\BR)^+$ with stabilizer $\CTNS(\BR)/\CTNS(\BR)^+$. Since this action corresponds to properly interchanging the sign of coordinates, and moreover, under the parametrisation of $U_{\sigma_0}$, the cube neighbourhoods of the form $\prod_{j=1}^{d}\left([-\lambda_{j},\lambda_{j}]\setminus\{0\}\right),\lambda_{j}\leqslant 1$ form a topological basis of $C_{\sigma_0}(\BR)\cap\CT_{O}(\BR)$, we may assume in what follows $\CF_\infty=\CB_\infty$ \eqref{eq:Binfty}, whence by Theorem \ref{thm:realTamagawameas}, $\omega^X_{\operatorname{tor},\infty}(\CF_\infty)=\operatorname{vol}(\CB_\infty)$.
On the other hand, by \eqref{eq:modelmeasurecomp} and Theorem \ref{thm:nonarchTmeas}, we have, on recalling the constant $\kappa_{(l)}$ \eqref{eq:kappal},
$$\omega^{X_0}_{\operatorname{tor},f}(\CB^{\CX_0}_f(l;\Xi_l))=\prod_{p\mid l}\omega^{X_0}_{\operatorname{tor},\nu}(\CB_p^{\CX_0}(l;\bXi_l))\times \prod_{p\nmid l}\omega^{X_0}_{\operatorname{tor},\nu}(\CX_0(\BZ_p))=\kappa_{(l)}.$$
Now we let $\bxi_l$ be the image of $\bXi_l$ under the reduction modulo $l$ map
$$\CX_0(\widehat{\BZ})\to\prod_{p\mid l}\CX_0(\BZ_p)\to\prod_{p\mid l}\CX_0(\BZ/p^{\operatorname{ord}_p(l)})\simeq \CX_0(\BZ/l).$$ Recall \eqref{eq:NFTUT}.
Hence by \cite[Lemma 11.4 (a)(b)]{Salberger}, we have
\begin{align*}
&\CN_{\CX_0}(\CF_\infty,\CB^{\CX_0}_f(l;\Xi_l);B)\\=&\#\left(\CTNS(\BR)/\CTNS(\BR)^+\right)\#\CC_0([\bxi_l,\CB_\infty];B)^+\\
=&2^{\dim \CTNS}\kappa_{(l)} \operatorname{vol}(\CB_\infty)\alpha(X)B(\log B)^{r-1}+O_\varepsilon(B(\log B)^{r-2+\frac{1}{\alpha_0}+\varepsilon}\log\log B)\\=&\#\FTNS(\BZ)\omega^{X_0}_{\operatorname{tor},f}(\CB^{\CX_0}_f(l;\Xi_l))\omega^{X}_{\operatorname{tor},\infty}(\CF_\infty)\alpha(X)B(\log B)^{r-1}+O_\varepsilon(B(\log B)^{r-2+\frac{1}{\alpha_0}+\varepsilon}\log\log B).
\end{align*}
Since $r-2+\frac{1}{\alpha_0}\leqslant r-\frac{3}{2}$, we can fix $\varepsilon>0$ small enough in Theorem \ref{thm:equidistrcong} so that $r-2+\frac{1}{\alpha_0}+\varepsilon\leqslant r-1-\frac{1}{4}$.
We finally conclude that the condition \textbf{(EEUT)} holds for $\CX,\CX_0$ with $$\gamma_0=0,\quad \text{and}\quad h_0(B):=\frac{\log\log B}{(\log B)^\frac{1}{4}}.$$
Finally, since the canonical model $\CX_0$ is smooth over $\BZ$, it follows from Proposition \ref{prop:univtorEEGS} that the condition \textbf{(EE)} for $\CX$ is verified with the choices of $\gamma$ and $h$ stated in Theorem \ref{thm:effective}. \qed
\subsection{Lattice points of bounded toric height in real neighbourhoods}\label{se:comparelatticepts}
We define
\begin{equation}\label{eq:CACBinfty}
\CA(\CB_\infty;B):=\CA(B)\cap \pi^{-1}(\CB_\infty)\subset\CA_{\sigma_0}(B).
\end{equation}
We also consider
\begin{equation}\label{eq:CDCBinfty}
\CD(\CB_\infty;B):=\CD(B)\cap \pi^{-1}(\CB_\infty)\subset\CD_{\sigma_0}(B),
\end{equation} and
\begin{equation}\label{eq:CICBinfty}
\CI(\CB_\infty;B):=\int_{\CD(\CB_\infty;B)}\YY^{D_0(\sigma_0)}\operatorname{d}\omega^{X_0}_{\operatorname{tor},\infty}(\YY)=\int_{\CD(\CB_\infty;B)}\operatorname{d}\xx.
\end{equation}
The main results of this section are Proposition \ref{co:CAsigmaWBCDWsigmaB}, generalising Proposition \ref{le:exchangeABIB}, which compares the cardinality of $\CA(\CB_\infty;B)$ \eqref{eq:CACBinfty} with the integral $\CI(\CB_\infty;B)$ \eqref{eq:CICBinfty}, and Proposition \ref{prop:CAxilBCAB}, generalising \cite[Lemma 11.26]{Salberger}, which relates the cardinality of $\CA(\CB_\infty;B)$ to that of its subsets defined by imposing congruence conditions.
\begin{proposition}\label{co:CAsigmaWBCDWsigmaB}
We have $$\#\CA(\CB_\infty;B)=\CI(\CB_\infty;B)+O(B(\log B)^{r-2}\log\log B).$$
\end{proposition}
\begin{comment}
In order to prove Proposition \ref{co:CAsigmaWBCDWsigmaB}, we appeal additionally to the following generalisation of Proposition \ref{prop:keyprop2}. For every $A\geqslant 1$, we define
$$\CA^{(A)}(\CB_\infty;B):=\CA(\CB_\infty;B)\cap \CA_{\sigma_0}^{(A)}(B).$$
\begin{proposition}\label{prop:CDACIA}
We have $$\CA^{(A)}(\CB_\infty;B)=\CI^{(A)}(\CB_\infty;B)+O_A(B(\log B)^{r-2}\log\log B).$$
\end{proposition}
\begin{proof}[Proof of Proposition \ref{co:CAsigmaWBCDWsigmaB}]
Using Corollary \ref{co:CAsigmaBCAAsigmaB} of Theorem \ref{thm:CABCAAB}, we have
$$\#\left(\CA(\CB_\infty;B)\setminus\CA^{(A)}(\CB_\infty;B)\right)\leqslant \#\left(\CA_{\sigma_0}(B)\setminus\CA_{\sigma_0}^{(A)\flat}(B)\right)=O_A(B(\log B)^{r-2}\log\log B).$$
Therefore thanks to the inequality
\begin{align*}
&\left|\#\CA(\CB_\infty;B)-\CI(\CB_\infty;B)\right|\\ \leqslant & \#\left(\CA(\CB_\infty;B)\setminus\CA^{(A)}(\CB_\infty;B)\right)+\left|\CA^{(A)}(\CB_\infty;B)-\CI^{(A)}(\CB_\infty;B)\right|+\left( \CI(\CB_\infty;B)-\CI^{(A)}(\CB_\infty;B)\right),
\end{align*} Proposition \ref{co:CAsigmaWBCDWsigmaB} follows from Lemma \ref{le:CIAABsum} and Proposition \ref{prop:CDACIA} with fixed $A=1$.
\end{proof}
\end{comment}
For every square-free integer $l\in\BZ_{>0}$ and for every residue $\bxi_l\in\CX_0(\BZ/l\BZ)$, we also define
\begin{equation}\label{eq:CAxilBinfty}
\CA([\bxi_l,\CB_\infty];B):=\{\XX\in\CA(\CB_\infty;B):\XX\equiv \bxi_l\mod l\}.
\end{equation}
For $\dd_1=(d_{1,1},\cdots,d_{1,n}),\dd_2=(d_{2,1},\cdots,d_{2,n})\in\BZ_{>0}^n$, we write $\dd_1\mid \dd_2$ to mean that $d_{1,i}\mid d_{2,i}$ for every $1\leqslant i\leqslant n$.
Now for every $\dd\in\BZ_{>0}^n$, we define \begin{equation}\label{eq:CAdxilBinfty}
\CA_{\dd}([\bxi_l,\CB_\infty];B):=\{\XX\in\CA([\bxi_l,\CB_\infty];B):\dd\mid \XX\},
\end{equation} and we write \begin{equation}\label{eq:Pid}
\Pi(\dd):=\prod_{i=1}^{n}d_i.
\end{equation}
By convention, $(\dd,l)=1$ means that $p\mid l\Rightarrow p\nmid d_i$ for every $1\leqslant i\leqslant n$.
\begin{proposition}\label{prop:CAxilBCAB}
Uniformly for any pair $(l,\bxi_l)$ and $\dd\in\BZ_{>0}^n$ such that $(\dd,l)=1$, we have (recall \eqref{eq:Pid})
$$l^n\Pi(\dd)\#\CA_{\dd}([\bxi_l,\CB_\infty];B)=\#\CA(\CB_\infty;B)+O(l^n\Pi(\dd) B(\log B)^{r-2}\log\log B).$$
\end{proposition}
We recall that in this section all implied constants are allowed to depend on $\CB_\infty$.
The following corollary is an easy consequence, which will be used in proving Proposition \ref{prop:eqdistCAinftyB}.
\begin{corollary}\label{co:CAxilBCAB}
\hfill
\begin{enumerate}
\item
We have
$$l^n\#\CA([\bxi_l,\CB_\infty];B)=\#\CA(\CB_\infty;B)+O(l^n B(\log B)^{r-2}\log\log B).$$
\item
Uniformly for any $\dd\in\BZ_{>0}^n$ such that $(\dd,l)=1$, we have
$$\Pi(\dd)\#\CA_{\dd}([\bxi_l,\CB_\infty];B)=\#\CA([\bxi_l,\CB_\infty];B)+O(\Pi(\dd) B(\log B)^{r-2}
\log\log B).$$
\end{enumerate}
The implied constants are independent of $l$ and $\bxi_l$.
\end{corollary}
\begin{proof}[Proof of Corollary \ref{co:CAxilBCAB}]
For (1), we take $\dd=\underline{\mathbf{1}}$ in Proposition \ref{prop:CAxilBCAB}. For (2), it suffices to replace the term $\#\CA(\CB_\infty;B)$ in Proposition \ref{prop:CAxilBCAB} by the formula in (1) and divide both sides by $l^n$.
\end{proof}
\subsubsection{Three special types of boundary sums}
For each $1\leqslant j_0\leqslant d$, consider the sum
\begin{equation}\label{eq:SAB}
\begin{split}
S^{(A)}_{\aa(\sigma_0),j_0}(B)&:=\sum_{\substack{\XX\in\BZ_{>0}^r:\XX^{D_0(\sigma_0)}\leqslant B\\ \max_{1\leqslant i\leqslant r}X_i\leqslant B,\XX^{E_{\sigma_0}(j_0)}\geqslant (\log B)^A}}\prod_{\substack{1\leqslant j\leqslant d\\ j\neq j_0}}\XX^{E_{\sigma_0}(j)}.
\end{split}
\end{equation}
\begin{lemma}\label{le:twospecsum1}
Uniformly for $A\geqslant 1$, we have $$S^{(A)}_{\aa(\sigma_0),j_0}(B)=O(B(\log B)^{r-1-A}).$$
\end{lemma}
\begin{proof}
We recall \eqref{eq:EjDsigma} and \eqref{eq:D0sigmaai}. Using Lemma \ref{le:sublemma} and its notation, we have
\begin{equation*}\label{eq:SAbd}
\begin{split}
S^{(A)}_{\aa(\sigma_0),j_0}(B)=&\sum_{\substack{\XX\in\BZ_{>0}^r:\XX^{D_0(\sigma_0)}\leqslant B\\ \max_{1\leqslant i\leqslant r}X_i\leqslant B,\XX^{E_{\sigma_0}(j_0)}\geqslant (\log B)^A}}\frac{\prod_{\substack{1\leqslant j\leqslant d}}\XX^{E_{\sigma_0}(j)}}{\XX^{E_{\sigma_0}(j_0)}}\\
&\leqslant (\log B)^{-A} \sum_{\substack{\XX\in\BZ_{>0}^r:\XX^{D_0(\sigma_0)}\leqslant B\\ \max_{1\leqslant i\leqslant r}X_i\leqslant B}}\prod_{i=1}^{r}X_i^{a_i(\sigma_0)-1}\\ &=(\log B)^{-A}S^{[r]}_{1,\aa(\sigma_0)}(B,1)=O(B(\log B)^{r-1-A}).\qedhere
\end{split}
\end{equation*}\end{proof}
For each $1\leqslant i_0\leqslant r$, consider also \begin{equation}\label{eq:Sprimebd}
S^{[r]\prime}_{\aa(\sigma_0),i_0}(B):=\sum_{\substack{\XX\in\BZ_{>0}^r:\XX^{D_0(\sigma_0)}\leqslant B\\ \max_{1\leqslant i\leqslant r}X_i\leqslant B}}\frac{\prod_{\substack{1\leqslant j\leqslant d}}\XX^{E_{\sigma_0}(j)}}{X_{i_0}}.
\end{equation}
\begin{lemma}\label{le:twospecsum2}
We have $$S^{[r]\prime}_{\aa(\sigma_0),i_0}(B)=O(B(\log B)^{r-2}).$$
\end{lemma}
\begin{proof}
Again by \eqref{eq:D0sigmaai}, we have $$S^{[r]\prime}_{\aa(\sigma_0),i_0}(B)=\sum_{\substack{\XX\in\BZ_{>0}^r:\XX^{D_0(\sigma_0)}\leqslant B\\ \max_{1\leqslant i\leqslant r}X_i\leqslant B}}X_{i_0}^{a_{i_0}(\sigma_0)-2}\prod_{\substack{1\leqslant i\leqslant r\\ i\neq i_0}}X_i^{a_i(\sigma_0)-1}.$$
If $a_{i_0}(\sigma_0)=0$, then according to Lemma \ref{le:sublemma},
\begin{align*}
S^{[r]\prime}_{\aa(\sigma_0),i_0}(B)&=\left(\sum_{X_{i_0}=1}^{\lfloor B\rfloor}\frac{1}{X_{i_0}^2}\right) \left(\sum_{\substack{\XX=(X_i,i\neq i_0)\in\BZ_{>0}^{r-1}\\\prod_{i\neq i_0}X_{i}^{a_i(\sigma_0)}\leqslant B, \max_{i\neq i_0}X_i\leqslant B}}\prod_{\substack{1\leqslant i\leqslant r\\ i\neq i_0}}X_i^{a_i(\sigma_0)-1}\right)\\ &\leqslant S_{1;\aa_{*i_0}(\sigma_0)}^{[r-1]}(B,1)=O(B(\log B)^{r-2}).
\end{align*}
If $a_{i_0}(\sigma_0)>0$, then
\begin{align*}
S^{[r]\prime}_{\aa(\sigma_0),i_0}(B)&\leqslant\sum_{X_{i_0}=1}^{\lfloor B\rfloor}X_{i_0}^{a_{i_0}(\sigma_0)-2}
\sum_{\substack{\XX=(X_i,i\neq i_0)\in\BZ_{>0}^{r-1}\\ \prod_{i\neq i_0}X_{i}^{a_i(\sigma_0)}\leqslant B, \max_{i\neq i_0}X_i\leqslant B}}\prod_{\substack{ i\neq i_0}}X_i^{a_i(\sigma_0)-1}\\ &=\sum_{X_{i_0}=1}^{\lfloor B^{1/a_{i_0}(\sigma_0)}\rfloor}X_{i_0}^{a_{i_0}(\sigma_0)-2}S_{1;\aa_{*i_0}(\sigma_0)}^{[r-1]}\left(B,X_{i_0}^{a_{i_0}(\sigma_0)}\right)\\ &\ll \sum_{X_{i_0}=1}^{\lfloor B^{1/a_{i_0}(\sigma_0)}\rfloor}X_{i_0}^{a_{i_0}(\sigma_0)-2}\max\left(\frac{B}{X_{i_0}^{a_{i_0}(\sigma_0)}}(\log B)^{r-2},(\log B)^{r-1}\right)\\ &\ll B(\log B)^{r-2}+B^{1-\frac{1}{a_{i_0}(\sigma_0)}}(\log B)^{r-1}+(\log B)^r=O(B(\log B)^{r-2}). \qedhere
\end{align*}
\end{proof}
We next consider, for every $1\leqslant i_0\leqslant n,k\in\BZ_{>0}$, the set \begin{equation}\label{eq:CABik}
\CA(B)_{i_0,\eta\equiv k}:=\{\XX\in\CA(B):X_{i_0}=k\}.
\end{equation}
\begin{lemma}\label{le:CABik}
We have $$\#\CA(B)_{i_0,\eta\equiv k}=O(B(\log B)^{r-2}\log\log B).$$
\end{lemma}
\begin{proof}
It suffices to work with $\#\CA_{\sigma}(B)_{i_0,\eta\equiv k}$ where $$\CA_{\sigma}(B)_{i_0,\eta\equiv k}=\CA(B)_{i_0,\eta\equiv k}\cap\CA_{\sigma}(B), \quad \sigma\in\triangle_{\operatorname{max}}.$$ Now if $1\leqslant i_0\leqslant r$ with respect to an admissible ordering of $\sigma$, then by Proposition \ref{co:slicing}, $$\#\CA_{\sigma}(B)_{i_0,\eta\equiv k}\leqslant \CN_{\sigma}(B)_{i_0,\eta\equiv k}=O(B(\log B)^{r-2}).$$ However if $r+1\leqslant i_0=r+j_0\leqslant r+d$, then on recalling the sum \eqref{eq:SAB}, we have, by Lemma \ref{le:twospecsum1},
\begin{align*}
\#\CA_{\sigma}(B)_{i_0,\eta\equiv k}&\leqslant \#(\CA(B)\setminus \CA^{(A)}(B))+ S^{(A)}_{\aa(\sigma),j_0}(B)\\ &=O_A(B(\log B)^{r-1-A} +B(\log B)^{r-2}\log\log B).
\end{align*}
We thus obtain the desired upper bound by fixing $A\geqslant 1$.\footnote{We can in fact save the $\log\log B$ factor by arguing differently, based on a similar toric van der Corput method, but this gives no improvement in the overall error term. So for the sake of brevity we omit the details.}
\end{proof}
\subsubsection{Estimates of boundary sums associated to real neighbourhoods}\label{se:nusigmaB}
Recall the definition of $\CB_\infty$ \eqref{eq:Binfty}. Then for every $\YY\in \pi^{-1}(\CB_\infty)\subset C_{0,\sigma_0}(\BR)$, with respect to a fixed admissible ordering of $\sigma_0$, we have
\begin{equation}\label{eq:conditionEsigmaW}
Y_{r+j}\leqslant \lambda_j \YY^{E_{\sigma_0}(j)},\quad 1\leqslant j\leqslant d.
\end{equation}
In this section we estimate the size of the subset $\nu_{\sigma_0}(\CB_\infty;B)$ of $\CA(B)$ consisting of $n$-tuples $\XX$ with
\begin{itemize}
\item $ X_i>1 \quad\text{for every} \quad i\in\{1,\cdots,n\}$;
\item There exists $j_0\in\{1,\cdots,d\}$ such that
\begin{equation}\label{eq:nusigmaB}
X_{r+j_0}\leqslant\lambda_{j_0} \XX^{E_{\sigma_0}(j_0)},\quad X_{r+j}\leqslant\lambda_{j} 2^{\sum_{i=1}^{r}|b_{i}^{(j,\sigma_0)}|}\XX^{E_{\sigma_0}(j)},\text{ for every } j\neq j_0,
\end{equation}
and at least one of the following two possibilities happens.
\begin{align*}
\text{(a)} ~& X_{r+j_0}+3>\lambda_{j_0} \XX^{E_{\sigma_0}(j_0)};\\
\text{(b)} ~ & X_{r+j_0}>\lambda_{j_0}\prod_{\substack{1\leqslant i\leqslant r\\b_{i}^{(j_0,\sigma_0)}>0}}\left(X_i-1\right)^{b_{i}^{(j_0,\sigma_0)}}\prod_{\substack{1\leqslant i\leqslant r\\b_i^{(j_0,\sigma_0)}<0}}\left(X_i+1\right)^{b_i^{(j_0,\sigma_0)}}.
\end{align*}
\end{itemize}
\begin{proposition}\label{prop:nusigmaWB} We have
$$\#\nu_{\sigma_0}(\CB_\infty;B)=O(B(\log B)^{r-2}\log\log B).$$
\end{proposition}
\begin{proof}[Proof of Proposition \ref{prop:nusigmaWB}]
On defining
$$\nu^{(A)}_{\sigma_0}(\CB_\infty;B):=\nu_{\sigma_0}(\CB_\infty;B)\cap \CA^{(A)\flat}(B),$$ we start by passing from the set $\nu_{\sigma_0}(\CB_\infty;B)$ to the set $\nu^{(A)}_{\sigma_0}(\CB_\infty;B)$:
\begin{equation}\label{eq:passfromnutonuA}
\#\nu_{\sigma_0}(\CB_\infty;B)\leqslant \#(\CA(B)\setminus\CA^{(A)\flat}(B))+\#\nu^{(A)}_{\sigma_0}(\CB_\infty;B).
\end{equation}
We now partition $\nu^{(A)}_{\sigma_0}(\CB_\infty;B)$ into two subsets
$$\nu^{(A),[l]}_{\sigma_0}(\CB_\infty;B):=\{\XX\in \nu^{(A)}_{\sigma_0}(\CB_\infty;B):\text{ condition } (l) \text{ holds}\},\quad l=a,b.$$
We first treat the set $\nu^{(A),[a]}_{\sigma_0}(\CB_\infty;B)$. Recall \eqref{eq:conditionEsigmaW}. The first condition of \eqref{eq:nusigmaB} and (a) imply that, having fixed an $r$-tuple $(X_1,\cdots,X_r)$, $X_{r+j_0}$ lies in the interval $$\left]\lambda_{j_0} \XX^{E_{\sigma_0}(j_0)}-3,\lambda_{j_0} \XX^{E_{\sigma_0}(j_0)}\right],$$ and hence the number of $d$-tuples $(X_{r+1},\cdots,X_{r+d})$ is $$\leqslant 4\prod_{\substack{1\leqslant j\leqslant d\\ j\neq j_0}}\lambda_j 2^{\sum_{i=1}^{r}|b_{i}^{(j,\sigma_0)}|} \XX^{E_{\sigma_0}(j)}=2^{2+\sum_{j\neq j_0}\sum_{i=1}^{r}|b_{i}^{(j,\sigma_0)}|}\left(\prod_{j\neq j_0}\lambda_j\right) \prod_{\substack{1\leqslant j\leqslant d\\ j\neq j_0}}\XX^{E_{\sigma_0}(j)}.$$
Therefore, by \eqref{eq:SAB} and Lemma \ref{le:twospecsum1}, the contribution from points satisfying condition (a) can be bounded from above by
\begin{equation*}
\begin{split}
\#\nu^{(A),[a]}_{\sigma_0}(\CB_\infty;B)&\ll S^{(A)}_{\aa(\sigma_0),j_0}(B)=O(B(\log B)^{r-1-A}). \end{split}
\end{equation*}
Let us now turn to $\nu^{(A),[b]}_{\sigma_0}(\CB_\infty;B)$. Condition (b), together with the first condition of \eqref{eq:nusigmaB}, implies that $X_{r+j_0}$ lies in the interval
$$\left]\lambda_{j_0}\prod_{\substack{1\leqslant i\leqslant r\\b_{i}^{(j_0,\sigma_0)}>0}}\left(X_i-1\right)^{b_{i}^{(j_0,\sigma_0)}}\prod_{\substack{1\leqslant i\leqslant r\\b_i^{(j_0,\sigma_0)}<0}}\left(X_i+1\right)^{b_i^{(j_0,\sigma_0)}},\lambda_{j_0}\prod_{i=1}^r X_i^{b_{i}^{(j_0,\sigma_0)}}\right]$$ of length
\begin{align*}
&\lambda_{j_0}\left(\prod_{i:b_i^{(j_0,\sigma_0)}\neq 0}X_i^{b_{i}^{(j_0,\sigma_0)}}-\prod_{\substack{1\leqslant i\leqslant r\\b_{i}^{(j_0,\sigma_0)}>0}}\left(X_i-1\right)^{b_{i}^{(j_0,\sigma_0)}}\prod_{\substack{1\leqslant i\leqslant r\\b_i^{(j_0,\sigma_0)}<0}}\left(X_i+1\right)^{b_i^{(j_0,\sigma_0)}}\right)\\
\ll & \sum_{\substack{1\leqslant i_0\leqslant r\\ b_{i_0}^{(j_0,\sigma_0)}\neq 0}}X_{i_0}^{b_{i_0}^{(j_0,\sigma_0)}-1}\prod_{i\neq i_0}X_i^{b_{i}^{(j_0,\sigma_0)}}.
\end{align*}
So, having fixed an $r$-tuple $(X_1,\cdots,X_r)$, the number of $d$-tuples $(X_{r+1},\cdots,X_{r+d})$ is at most
\begin{equation}\label{eq:nucase2}
\begin{split}
&\ll 2^{\sum_{j\neq j_0}\sum_{i=1}^{r}|b_{i}^{(j,\sigma_0)}|}\prod_{j\neq j_0}\XX^{E_{\sigma_0}(j)}\left(1+\sum_{\substack{1\leqslant i_0\leqslant r\\ b_{i_0}^{(j_0,\sigma_0)}\neq 0}}X_{i_0}^{b_{i_0}^{(j_0,\sigma_0)}-1}\prod_{i\neq i_0}X_i^{b_{i}^{(j_0,\sigma_0)}}\right)\\ &\ll\left(1+\sum_{i_0=1}^{r}\frac{\XX^{E_{\sigma_0}(j_0)}}{X_{i_0}}\right) \prod_{j\neq j_0}\XX^{E_{\sigma_0}(j)}\\ &=\prod_{\substack{1\leqslant j\leqslant d\\ j\neq j_0}}\XX^{E_{\sigma_0}(j)}+\sum_{i_0=1}^{r}\frac{\prod_{1\leqslant j\leqslant d}\XX^{E_{\sigma_0}(j)}}{X_{i_0}}.
\end{split}\end{equation}
On recalling \eqref{eq:SAB} and \eqref{eq:Sprimebd}, we can control the contribution from condition (b) by
\begin{align*}
&\#\nu^{(A),[b]}_{\sigma_0}(\CB_\infty;B)\\ \ll&\sum_{\substack{\XX\in\BZ_{>0}^r:\XX^{D_0(\sigma_0)}\leqslant B\\ \max_{1\leqslant i\leqslant r}X_i\leqslant B,\XX^{E_{\sigma_0}(j_0)}\geqslant (\log B)^A}}\prod_{\substack{1\leqslant j\leqslant d\\ j\neq j_0}}\XX^{E_{\sigma_0}(j)} +\sum_{\substack{\XX\in\BZ_{>0}^r:\XX^{D_0(\sigma_0)}\leqslant B\\ \max_{1\leqslant i\leqslant r}X_i\leqslant B}} \sum_{i_0=1}^{r}\frac{\prod_{1\leqslant j\leqslant d}\XX^{E_{\sigma_0}(j)}}{X_{i_0}}\\ =& S^{(A)}_{\aa(\sigma_0),j_0}(B)+\sum_{i_0=1}^{r} S^{[r]\prime}_{\aa(\sigma_0),i_0}(B).
\end{align*}
By Lemmas \ref{le:twospecsum1} and \ref{le:twospecsum2}, we hence obtain $$\#\nu^{(A),[b]}_{\sigma_0}(\CB_\infty;B)=O\left(B(\log B)^{r-1-A}+B(\log B)^{r-2}\right).$$
\begin{comment}
For the last case (c), a similar inspection as \eqref{eq:nucase2} for case (b) shows that, for any fixed $r$-tuple $(X_1,\cdots,X_r)$, hence the number of $d$-tuples $(X_{r+1},\cdots,X_{r+d})$ is at most
\begin{align*}
&2^{\sum_{j\neq j_0}\sum_{i=1}^{r}|b_{i}^{(j,\sigma_0)}|}\left(1+\lambda_{j_0}\left(\prod_{i=1}^rX_i^{b_{i}^{(j_0,\sigma_0)}}-(X_{i_2}+1)^{b_{i_2}^{(j_0,\sigma_0)}}\prod_{i\neq i_2}X_i^{b_{i}^{(j_0,\sigma_0)}}\right)\right)\prod_{j\neq j_0}\XX^{E_{\sigma_0}(j)}\\ &\ll_{\CB_\infty} \prod_{j\neq j_0}\XX^{E_{\sigma_0}(j)}+\frac{\prod_{1\leqslant j\leqslant d}\XX^{E_{\sigma_0}(j)}}{X_{i_2}}.
\end{align*}
From this we likewise obtain as before
\begin{align*}
\nu^{(A),[c]}_{\sigma_0}(\CB_\infty;B) &\leqslant S^{(A)}_{\aa(\sigma_0),j_0}(B)+S^{[r]\prime}_{\aa(\sigma_0),i_2}(B)
= O(B(\log B)^{r-1-A}+B(\log B)^{r-2}).
\end{align*}
\end{comment}
Gathering together the estimates obtained above, we deduce from \eqref{eq:passfromnutonuA} and Corollary \ref{thm:CABCAAB} that
\begin{align*}
\#\nu_{\sigma_0}(\CB_\infty;B)=O_A(B(\log B)^{r-2}\log\log B+B(\log B)^{r-1-A}+B(\log B)^{r-2}).
\end{align*} Upon fixing $A\geqslant 1$, we finally get $$\#\nu_{\sigma_0}(\CB_\infty;B)=O(B(\log B)^{r-2}\log\log B).\qedhere$$
\end{proof}
\subsubsection{Proof of Proposition \ref{co:CAsigmaWBCDWsigmaB}}
To compare the number of lattice points in $\CA(\CB_\infty;B)$ with the integral $\CI(\CB_\infty;B)$, we need to cover the points affected by the boundary condition of $\CD(\CB_\infty;B)$ by integral boxes with at least one vertex lying in
$$\delta(B)\cup\nu_{\sigma_0}(\CB_\infty;B)\cup \left(\bigcup_{i_0=1}^{n}\bigcup_{k=1}^2\CA(B)_{i_0,\eta\equiv k}\right),$$ where
the set $\nu_{\sigma_0}(\CB_\infty;B)$ is defined in \S\ref{se:nusigmaB}.
Admitting this, by Proposition \ref{co:slicing}, Lemma \ref{le:deltaB}, Proposition \ref{prop:nusigmaWB} and Lemma \ref{le:CABik},
\begin{align*}
\left|\#\CA(\CB_\infty;B)-\CI(\CB_\infty;B)\right| &\leqslant 2^n\left(\#\delta(B)+\#\nu_{\sigma_0}(\CB_\infty;B)+\sum_{i_0=1}^{n}\sum_{k=1}^{2}\#\CA(B)_{i_0,\eta\equiv k}\right)\\ &=O(B(\log B)^{r-2}\log\log B).
\end{align*}
As we see from the proof of Proposition \ref{prop:keyprop2} in \S\ref{se:proofkeyProp2}, it remains to analyse this boundary condition.
Now fix $\YY_0=(Y_1,\cdots,Y_n)\in \CD(\CB_\infty;B)$ such that $$Y_i\geqslant 3,\text{ for every }1\leqslant i\leqslant n,\quad\text{and}\quad Y_{r+j_0}=\lambda_{j_0}\YY^{E_{\sigma_0}(j_0)},\text{ for a certain }1\leqslant j_0\leqslant d,$$ and \eqref{eq:conddeltaB} holds. Let $\yy_{0}=(y_1,\cdots,y_n)\in\BZ_{\geqslant 2}^n$ be defined as in \eqref{eq:yi}, with the only exception that $y_{r+j_0}$ is defined by
\begin{equation}\label{eq:yr+j0}
y_{r+j_0}=\begin{cases}
\text{the unique integer in } [Y_{r+j_0},Y_{r+j_0}+1[ & \text{ if } \lambda_{j_0}\yy_0^{E_{\sigma_0}(j_0)}-\lambda_{j_0}\widehat{\yy_0}^{E_{\sigma_0}(j_0)}> 2\\ &~\text{ and } Y_{r+j_0}-1\leqslant \lambda_{j_0}\widehat{\yy_0}^{E_{\sigma_0}(j_0)};\\ \text{the unique integer in } ]Y_{r+j_0}-1,Y_{r+j_0}] & \text{ otherwise}.
\end{cases}
\end{equation}
Here we denote by $$\widehat{\yy_0}^{E_{\sigma_0}(j_0)}:=\prod_{\substack{1\leqslant i\leqslant r\\b_{i}^{(j_0,\sigma_0)}>0}}\left(y_i-1\right)^{b_{i}^{(j_0,\sigma_0)}}\prod_{\substack{1\leqslant i\leqslant r\\b_i^{(j_0,\sigma_0)}<0}}\left(y_i+1\right)^{b_{i}^{(j_0,\sigma_0)}}.$$
Clearly $\yy_0\in\CA(B)$.
Let us now verify that $\yy_0$ satisfies conditions \eqref{eq:nusigmaB} and (a) or (b) in the definition of $\nu_{\sigma_0}(\CB_\infty;B)$. First, by exactly the same argument as in \S\ref{se:proofkeyProp2}, the second condition of \eqref{eq:nusigmaB} is satisfied.
\boxed{\emph{Case $\lambda_{j_0}\yy_0^{E_{\sigma_0}(j_0)}-\lambda_{j_0}\widehat{\yy_0}^{E_{\sigma_0}(j_0)}>2$.}} If $Y_{r+j_0}-1\leqslant \lambda_{j_0}\widehat{\yy_0}^{E_{\sigma_0}(j_0)}$, then
$$y_{r+j_0}\geqslant Y_{r+j_0}>\lambda_{j_0}\widehat{\yy_0}^{E_{\sigma_0}(j_0)},$$ and $$y_{r+j_0}<Y_{r+j_0}+1\leqslant\lambda_{j_0}\widehat{\yy_0}^{E_{\sigma_0}(j_0)}+2<\lambda_{j_0}\yy_0^{E_{\sigma_0}(j_0)}$$ by construction. If
$Y_{r+j_0}-1> \lambda_{j_0}\widehat{\yy_0}^{E_{\sigma_0}(j_0)}$, then $$\lambda_{j_0}\widehat{\yy_0}^{E_{\sigma_0}(j_0)}<Y_{r+j_0}-1<y_{r+j_0}\leqslant Y_{r+j_0}\leqslant \lambda_{j_0}\yy_0^{E_{\sigma_0}(j_0)}.$$ This shows that in both cases $\yy_0$ satisfies (b) and the first condition of \eqref{eq:nusigmaB}.
\boxed{\emph{Case $\lambda_{j_0}\yy_0^{E_{\sigma_0}(j_0)}-\lambda_{j_0}\widehat{\yy_0}^{E_{\sigma_0}(j_0)}\leqslant 2$}.} We have $$y_{r+j_0}+3>Y_{r+j_0}+2\geqslant \lambda_{j_0}\yy_0^{E_{\sigma_0}(j_0)}+Y_{r+j_0}-\lambda_{j_0}\widehat{\yy_0}^{E_{\sigma_0}(j_0)}>\lambda_{j_0}\yy_0^{E_{\sigma_0}(j_0)}.$$
So in this case $\yy_0$ satisfies (a), and the first condition of \eqref{eq:nusigmaB} follows from $y_{r+j_0}\leqslant Y_{r+j_0}\leqslant \lambda_{j_0}\yy_0^{E_{\sigma_0}(j_0)}$.
We have proved that $\yy_0\in\nu_{\sigma_0}(\CB_\infty;B)$.
This finishes the proof.
\qed
\subsubsection{Proof of Proposition \ref{prop:CAxilBCAB}}
We perform a refinement of the proof of \cite[Lemma 11.26]{Salberger}, taking special care of the uniformity of the dependence on $\dd$ and on $(l,\bxi_l)$.
We may assume that $\CA_{\dd}([\bxi_l,\CB_\infty];B)\neq\varnothing$. By the Chinese remainder theorem and the condition $(\dd,l)=1$, we can choose a point $\Xi_{l\dd}=(\Xi_{ld_1},\cdots,\Xi_{ld_n})\in \CX_0(\BZ)$, with $0<\Xi_{ld_i}\leqslant l\max_{1\leqslant k\leqslant n} d_k$, such that $$\CA_{\dd}([\bxi_l,\CB_\infty];B)=\CA([\Xi_{l\dd},\CB_\infty];B):=\{\XX=(X_1,\cdots,X_n)\in\CA(\CB_\infty;B):X_i\equiv \Xi_{ld_i}~\operatorname{mod}~ d_il\}.$$
For every $\XX\in\BZ^n$ and $\dd_{0}=(d_{0,1},\cdots,d_{0,n})\in\BZ_{>0}^n$, we define the box
$$\square_{\XX,\dd_0}:=\{\YY\in\BR_{>0}^n:\text{ for every }i,X_i\leqslant Y_i<X_i+d_{0,i}\}.$$
And we write
$$\CJ_1:=\bigcup_{\XX\in\CA([\Xi_{l\dd},\CB_\infty];B)} \square_{\XX,l\dd},\quad \text{with} \quad \CJ_2:=\bigcup_{\XX\in \CA(\CB_\infty;B)}\square_{\XX,\underline{\mathbf{1}}}.$$
Then, since the boxes $(\square_{\XX,l\dd})_{\XX\in\CA([\Xi_{l\dd},\CB_\infty];B)}$ are mutually disjoint,
$$\operatorname{Area}(\CJ_1)=l^n\Pi(\dd)\#\CA([\Xi_{l\dd},\CB_\infty];B),\quad \operatorname{Area}(\CJ_2)=\#\CA(\CB_\infty;B).$$
Let us now compare the region $\CJ_1$ with $\CJ_2$.
Write $$\square^\prime_{\XX,\dd_0}:=\{\YY\in\BR^n:\text{ for every }i,X_i-2d_{0,i}\leqslant Y_i\leqslant X_i+2d_{0,i}\}.$$
Firstly, we have
$$\CJ_1\subset \CJ_2\bigcup\left(\bigcup_{\XX\in\delta^{[l\dd]}(B)\cup\nu_{\sigma_0}^{[l\dd]}(\CB_\infty;B)}\square^\prime_{\XX,\one}\right)$$ where, on recalling $\delta(B)$ \eqref{eq:deltaB} and $\nu_{\sigma_0}(\CB_\infty;B)$ defined in $\S$\ref{se:nusigmaB}, $$\delta^{[l\dd]}(B):=\{\XX\in\BZ^n:\square^\prime_{\XX,l\dd}\cap\delta(B)\neq\varnothing\},$$ and $$\nu_{\sigma_0}^{[l\dd]}(\CB_\infty;B):=\{\XX\in\BZ^n:\square^\prime_{\XX,l\dd}\cap\nu_{\sigma_0}(\CB_\infty;B)\neq\varnothing\}.$$
Indeed, if a box $\square_{\XX,l\dd}$ does not hit the boundary of $\CD(\CB_\infty;B)$ (and we have seen in the proof of Proposition \ref{co:CAsigmaWBCDWsigmaB} that the boxes meeting the boundary are controlled by $\delta(B)\cup\nu_{\sigma_0}(\CB_\infty;B)$), then $$\square_{\XX,l\dd}=\bigcup_{\YY\in \square_{\XX,l\dd}\cap\BZ^n} \square_{\YY,\one}\subset \CJ_2.$$
On the other hand, we similarly have
$$\CJ_2\subset \CJ_1\cup\left(\bigcup_{\XX\in\delta^{[l\dd]}(B)\cup \nu_{\sigma_0}^{[l\dd]}(\CB_\infty;B)\cup \CM(\dd,l;B)}\square^\prime_{\XX,\one}\right),$$
where $$\CM(\dd,l;B):=\bigcup_{i_0=1}^n\bigcup_{k=1}^{l\max_{1\leqslant i\leqslant n} d_i} \CA(B)_{i_0,\eta\equiv k},$$ $\CA(B)_{i_0,\eta\equiv k}$ being defined by \eqref{eq:CABik}.
To estimate $\#\delta^{[l\dd]}(B)$, we note that for every $\XX\in\delta^{[l\dd]}(B)$, there exists $\ee=(e_1,\cdots,e_n)\in\BZ^n$ with $|e_i|\leqslant 2ld_i$ such that $\XX+\ee\in\delta(B)$. So using Lemma \ref{le:deltaB},
$$\#\delta^{[l\dd]}(B)\leqslant 4^n\Pi(l\dd)\#\delta(B)=O(l^n\Pi(\dd)B(\log B)^{r-2}).$$
Similarly, to estimate $\#\nu_{\sigma_0}^{[l\dd]}(\CB_\infty;B)$, for every $\XX\in\nu_{\sigma_0}^{[l\dd]}(\CB_\infty;B)$, there exists $\ff=(f_1,\cdots,f_n)\in\BZ^n$ such that $|f_i|\leqslant 2ld_i$ and $\XX+\ff\in \nu_{\sigma_0}(\CB_\infty;B)$. By Proposition \ref{prop:nusigmaWB}, we similarly have
$$\#\nu_{\sigma_0}^{[l\dd]}(\CB_\infty;B)\leqslant 4^n \Pi(l\dd)\#\nu_{\sigma_0}(\CB_\infty;B)=O(l^n\Pi(\dd)B(\log B)^{r-2}\log\log B).$$
Applying Lemma \ref{le:CABik} to each $\#\CA(B)_{i_0,\eta\equiv k}$ in order to control $\#\CM(\dd,l;B)$, we finally obtain
\begin{align*}
&\left|l^n\Pi(\dd)\#\CA_{\dd}([\bxi_l,\CB_\infty];B)-\#\CA(\CB_\infty;B)\right|\\ =&\left|l^n\Pi(\dd)\#\CA([\Xi_{l\dd},\CB_\infty];B)-\#\CA(\CB_\infty;B)\right|\\\leqslant &4^n\left(\sum_{i_0=1}^n\sum_{k=1}^{l\max_{1\leqslant i\leqslant n} d_i}\#\CA(B)_{i_0,\eta\equiv k}+\#\delta^{[l\dd]}(B)+\#\nu_{\sigma_0}^{[l\dd]}(\CB_\infty;B)\right)\\ =&O(l^n\Pi(\dd)B(\log B)^{r-2}\log\log B).
\end{align*}
This finishes the proof of Proposition \ref{prop:CAxilBCAB}. \qed
\subsection{Möbius inversions}
\begin{comment}
Next we recall the following.
\begin{lemma}\label{le:intxD0omega}
We have $$\CI_{\sigma_0}(B)=\alpha(X)B(\log B)^{r-1}+O(B(\log B)^{r-2}),$$ where $\alpha(X)$ is defined by \eqref{eq:alphaV}.
\end{lemma}
\begin{proof}
Combining Lemma \ref{le:CIsigma0Bomega} we are done.
\end{proof}
As a refinement of Lemma \ref{le:intxD0omega}, we now prove the following regarding integral over the real neighbourhood \eqref{eq:CDCBinfty}.
\begin{lemma}\label{le:CICBinfty}
We have $$\CI(\CB_\infty;B)=\operatorname{vol}(\CB_\infty)\alpha(X)B(\log B)^{r-1}+O(B(\log B)^{r-2}).$$
\end{lemma}
\begin{proof} We compute, in a similar manner as in the proof of Proposition \ref{prop:keyprop1},
\begin{align*}
\CI(\CB_\infty;B)&=\int_{\CD(\CB_\infty;B)}\operatorname{d}\xx\\ &=\int_{\Omega_{\sigma_0}(B)}\left(\prod_{i=1}^{d}\left(\lambda_j\xx^{E_{\sigma_0}(j)}-1\right)\right)\operatorname{d}x_1\cdots\operatorname{d}x_r\\ &=\left(\prod_{j=1}^d \lambda_j\right)\int_{\CF(B)}\xx^{D_0}\varpi_{\CTNS}+O(B(\log B)^{r-2})\\ &=\operatorname{vol}(\CB_\infty)\alpha(X)B(\log B)^{r-1}+O(B(\log B)^{r-2}),
\end{align*}
by Lemma \ref{le:intxD0omega}. \end{proof}
\end{comment}
We denote by $\mu_X$ the generalised Möbius function defined in \cite[p. 234]{Salberger}. It is characterised by the property that, for every $\dd=(d_1,\cdots,d_n)\in\BZ_{>0}^n$,
$$\sum_{\dd^\prime\mid \dd}\mu_X(\dd^\prime)=\begin{cases}
1 & \text{if } \gcd_{\sigma\in\triangle_{\operatorname{max}}}(\dd^{D_0(\sigma)})=1;\\
0 & \text{otherwise}.
\end{cases}$$
Note that $\mu_X(\dd)=0$ if there exists a prime $p$ such that $p^2\mid d_i$ for some $1\leqslant i\leqslant n$.
For every prime $p$, the constant $\kappa_p$ \eqref{eq:kappa} also has the expression
\begin{equation}\label{eq:kappanuX}
\kappa_p=\sum_{\substack{(e_1,\cdots,e_n)\in\BZ_{\geqslant 0}^n\\\dd=(p^{e_1},\cdots,p^{e_n})}}\frac{\mu_X(\dd)}{\Pi(\dd)}.
\end{equation}
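For orientation only (this classical example is not needed in the sequel): when $X=\BP^{n-1}_\BQ$, the coprimality condition on the universal torsor $\BA^n\setminus\{0\}$ reads $\gcd(X_1,\cdots,X_n)=1$, and $\mu_X$ is supported on the diagonal, namely $\mu_X(\dd)=\mu(d)$ if $d_1=\cdots=d_n=d$ and $\mu_X(\dd)=0$ otherwise. In this case \eqref{eq:kappanuX} recovers the familiar local density of primitive vectors:
$$\kappa_p=\sum_{e\geqslant 0}\frac{\mu(p^e)}{p^{ne}}=1-\frac{1}{p^n}.$$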
Recall that $\alpha_0$ is the smallest integer such that there exists $\{\varrho_1,\cdots,\varrho_{\alpha_0}\}\subset\triangle(1)$ which does not generate a cone in $\triangle$. We then have
\begin{lemma}[\cite{Salberger} Lemma 11.19]\label{le:summud}
The formal series $$\sum_{\dd\in\BZ_{>0}^n}\frac{|\mu_X(\dd)|}{\Pi(\dd)^s}$$ is convergent for $s>\frac{1}{\alpha_0}$. Consequently, we have
\begin{itemize}
\item $\sum_{\dd\in\BZ_{>0}^n:\Pi(\dd)\leqslant b}|\mu_X(\dd)|=O_\varepsilon(b^{\frac{1}{\alpha_0}+\varepsilon})$,
\item $\sum_{\dd\in\BZ_{>0}^n:\Pi(\dd)> b}\frac{|\mu_X(\dd)|}{\Pi(\dd)}=O_\varepsilon(b^{\frac{1}{\alpha_0}-1+\varepsilon})$.
\end{itemize}
\end{lemma}
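Both consequences follow from the convergence statement by Rankin's trick. Indeed, fix $\varepsilon\in]0,\frac{1}{2}[$ and write $C_\varepsilon:=\sum_{\dd\in\BZ_{>0}^n}|\mu_X(\dd)|\Pi(\dd)^{-\frac{1}{\alpha_0}-\varepsilon}<\infty$. Then
$$\sum_{\substack{\dd\in\BZ_{>0}^n\\\Pi(\dd)\leqslant b}}|\mu_X(\dd)|\leqslant \sum_{\substack{\dd\in\BZ_{>0}^n\\\Pi(\dd)\leqslant b}}|\mu_X(\dd)|\left(\frac{b}{\Pi(\dd)}\right)^{\frac{1}{\alpha_0}+\varepsilon}\leqslant C_\varepsilon b^{\frac{1}{\alpha_0}+\varepsilon},$$
and, since $\Pi(\dd)^{-1}\leqslant b^{\frac{1}{\alpha_0}+\varepsilon-1}\Pi(\dd)^{-\frac{1}{\alpha_0}-\varepsilon}$ whenever $\Pi(\dd)>b$ (note that $\alpha_0\geqslant 2$, so the exponent $\frac{1}{\alpha_0}+\varepsilon-1$ is negative),
$$\sum_{\substack{\dd\in\BZ_{>0}^n\\\Pi(\dd)> b}}\frac{|\mu_X(\dd)|}{\Pi(\dd)}\leqslant C_\varepsilon b^{\frac{1}{\alpha_0}-1+\varepsilon}.$$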
Recall the set \eqref{eq:CC0xilVBinfty}. The goal of this section is to prove:
\begin{proposition}\label{prop:eqdistCAinftyB}
We have, uniformly for any pair $(l,\bxi_l)$,
$$\#\CC_0([\bxi_l,\CB_\infty];B)^+=\kappa_{(l)}\#\CA(\CB_\infty;B)+O_\varepsilon(B(\log B)^{r-2+\frac{1}{\alpha_0}+\varepsilon}\log\log B),$$
where $\kappa_{(l)}$ is defined by \eqref{eq:kappal}.
\end{proposition}
\begin{proof}[Proof of Proposition \ref{prop:eqdistCAinftyB}]
We recall our convention that $(\dd,l)=1$ means that $p\mid l\Rightarrow p\nmid d_i$ for all $1\leqslant i\leqslant n$, and we write $(\dd,l)>1$ if there exist a prime $p$ and an index $i$ such that $p\mid \gcd(d_i,l)$.
By Möbius inversion, we have
\begin{align*}
\#\CC_0([\bxi_l,\CB_\infty];B)^+&=\sum_{\dd\in\BZ_{>0}^n}\mu_X(\dd)\#\CA_{\dd}([\bxi_l,\CB_\infty];B)\\ &=\left(\sum_{\dd\in\BZ_{>0}^n:(\dd,l)=1}+\sum_{\dd\in\BZ_{>0}^n:(\dd,l)>1}\right)\mu_X(\dd)\#\CA_{\dd}([\bxi_l,\CB_\infty];B).
\end{align*} Recall $\kappa_{(l)}$ \eqref{eq:kappal} and the expression \eqref{eq:kappanuX}. We have $$\kappa_{(l)}=\frac{1}{l^n}\left(\sum_{\dd\in\BZ_{>0}^n:(\dd,l)=1}\frac{\mu_X(\dd)}{\Pi(\dd)}\right).$$ So by Corollary \ref{co:CAxilBCAB} (1),
\begin{align*}
\kappa_{(l)}\#\CA(\CB_\infty;B)=&\left(\sum_{\dd\in\BZ_{>0}^n:(\dd,l)=1}\frac{\mu_X(\dd)}{\Pi(\dd)}\right)\frac{\#\CA(\CB_\infty;B)}{l^n}\\ =&\left(\sum_{\dd\in\BZ_{>0}^n:(\dd,l)=1}\frac{\mu_X(\dd)}{\Pi(\dd)}\right)\left(\#\CA([\bxi_l,\CB_\infty];B)+O(B(\log B)^{r-2}\log\log B)\right)\\ =&\left(\sum_{\dd\in\BZ_{>0}^n:(\dd,l)=1}\frac{\mu_X(\dd)}{\Pi(\dd)}\right)\#\CA([\bxi_l,\CB_\infty];B)+O(B(\log B)^{r-2}\log\log B).
\end{align*}
Therefore, using Corollary \ref{co:CAxilBCAB} (2) and Lemma \ref{le:summud} we have
\begin{align*}
&\sum_{\dd\in\BZ_{>0}^n:(\dd,l)=1}\mu_X(\dd)\#\CA_{\dd}([\bxi_l,\CB_\infty];B)-\left(\sum_{\dd\in\BZ_{>0}^n:(\dd,l)=1}\frac{\mu_X(\dd)}{\Pi(\dd)}\right)\#\CA([\bxi_l,\CB_\infty];B)\\ =&\sum_{\dd\in\BZ_{>0}^n:(\dd,l)=1}\mu_X(\dd)\left(\#\CA_{\dd}([\bxi_l,\CB_\infty];B)-\frac{\#\CA([\bxi_l,\CB_\infty];B)}{\Pi(\dd)}\right)\\ \ll& B(\log B)^{r-2}\log\log B\sum_{\substack{\dd\in\BZ_{>0}^n\\\Pi(\dd)\leqslant \log B}}|\mu_X(\dd)|+\#\CA(B)\sum_{\substack{\dd\in\BZ_{>0}^n\\\Pi(\dd)>\log B}}\frac{|\mu_X(\dd)|}{\Pi(\dd)}\\ \ll_\varepsilon& B(\log B)^{r-2+\frac{1}{\alpha_0}+\varepsilon}\log\log B,
\end{align*} where for the sum over $\dd$ with $\Pi(\dd)>\log B$, we have used the upper bounds (cf. \cite[Lemma 11.26]{Salberger})
$$\Pi(\dd)\#\CA_{\dd}([\bxi_l,\CB_\infty];B)\leqslant \Pi(\dd)\#\CA_{\dd}(B)\leqslant \#\CA(B),\quad \#\CA([\bxi_l,\CB_\infty];B)\leqslant \#\CA(B).$$
We therefore obtain
\begin{align*}
&\sum_{\dd\in\BZ_{>0}^n:(\dd,l)=1}\mu_X(\dd)\#\CA_{\dd}([\bxi_l,\CB_\infty];B)-\kappa_{(l)}\#\CA(\CB_\infty;B)=O_\varepsilon(B(\log B)^{r-2+\frac{1}{\alpha_0}+\varepsilon}\log\log B).
\end{align*}
To finish the proof, we are now going to show $$\sum_{\dd\in\BZ_{>0}^n:(\dd,l)>1}\mu_X(\dd)\#\CA_{\dd}([\bxi_l,\CB_\infty];B)=0.$$
It suffices to consider $\dd=(d_1,\cdots,d_n)\in\BZ_{>0}^n$ such that each $d_i$ is square-free (otherwise $\mu_X(\dd)=0$), a condition which we denote by ``$\dd~\square$-free'' by a slight abuse of terminology. For every such $\dd$ with $(\dd,l)>1$, let $\dd=\dd_0\dd_{(l)}$ be the unique factorisation such that $\dd_0$ is maximal among $\ee\in\BZ_{>0}^n$ with $\ee\mid \dd$ and $(\ee,l)=1$.
Note that $(\dd,l)>1$ implies that $\dd_{(l)}\neq \underline{\mathbf{1}}=(1,\cdots,1)$.
We lift $\bxi_l$ to $\Xi_l\in\CX_0(\BZ)$. It then satisfies $\gcd_{\sigma\in\triangle_{\operatorname{max}}}(\Xi_l^{D_0(\sigma)})=1$, hence we have $$1=\sum_{\dd\in\BZ_{>0}^n:\dd\mid \Xi_l}\mu_X(\dd).$$ Since $\mu_X(\underline{\mathbf{1}})=1$, this implies $$\sum_{\substack{\dd\in\BZ_{>0}^n\\\dd\mid \Xi_l,\dd\neq\underline{\mathbf{1}}}}\mu_X(\dd)=0.$$
We further observe that, since every component of $\dd$ (and hence $\dd_{(l)}$) is square-free, $\dd_{(l)}\mid (l,\cdots,l)$ and $$\CA_{\dd}([\bxi_l,\CB_\infty];B)\neq\varnothing\Longrightarrow\dd_{(l)}\mid \Xi_l.$$ Moreover, under this assumption we have $$\CA_{\dd_0\dd_{(l)}}([\bxi_l,\CB_\infty];B)=\CA_{\dd_0}([\bxi_l,\CB_\infty];B).$$
We can now compute:
\begin{align*}
&\sum_{\dd\in\BZ_{>0}^n:(\dd,l)>1}\mu_X(\dd)\#\CA_{\dd}([\bxi_l,\CB_\infty];B)\\&=\sum_{\substack{\dd\in\BZ_{>0}^n,\dd~\square-\text{free}\\ (\dd,l)>1,\dd_{(l)}\mid \Xi_l}}\mu_X(\dd)\#\CA_{\dd}([\bxi_l,\CB_\infty];B)\\&=\sum_{\substack{\dd_0\in\BZ_{>0}^n\\\dd_0~\square-\text{free},(\dd_0,l)=1}}\sum_{\substack{\dd_{(l)}\in\BZ_{>0}^n,\dd_{(l)}~\square-\text{free}\\\dd_{(l)}\neq\underline{\mathbf{1}},\dd_{(l)}\mid \Xi_l}}\mu_X(\dd_0\dd_{(l)})\#\CA_{\dd_0\dd_{(l)}}([\bxi_l,\CB_\infty];B)\\
&=\sum_{\dd_0\in\BZ_{>0}^n:(\dd_0,l)=1}\mu_X(\dd_0)\#\CA_{\dd_0}([\bxi_l,\CB_\infty];B)\sum_{\substack{\dd_{(l)}\in\BZ_{>0}^n\\\dd_{(l)}\neq\underline{\mathbf{1}},\dd_{(l)}\mid \Xi_l}}\mu_X(\dd_{(l)})=0,
\end{align*}
on observing that the sums are in fact all finite.
\end{proof}
\subsection{Proof of Theorem \ref{thm:equidistrcong}}\label{se:proofequistr}
Let $\varpi_{\CTNS}$ be the global $\Tns$-invariant differential form as in Lemma \ref{le:CIsigma0Bomega}. Recall the region $\CF(B)\subset \Tns(\BR)$ \eqref{eq:CFB}. Then by \cite[11.39--11.41]{Salberger},
$$\int_{\CF(B)}\xx^{D_0}\varpi_{\CTNS}=\alpha(X)B(\log B)^{r-1}+O(B(\log B)^{r-2}).$$ We compute, in a manner similar to the proof of Lemma \ref{le:CIsigma0Bomega},
\begin{align*}
\CI(\CB_\infty;B) &=\int_{\Omega_{\sigma_0}(B)}\left(\prod_{j=1}^{d}(\lambda_{j}\xx^{E_{\sigma_0}(j)}-1)\right)\operatorname{d}x_1\cdots\operatorname{d}x_r\\&=\int_{\Omega_{\sigma_0}(B)}\frac{\xx^{D_0(\sigma_0)}}{\prod_{i=1}^{r}x_i}\left(\prod_{j=1}^{d}(\lambda_{j}-\xx^{-E_{\sigma_0}(j)})\right)\operatorname{d}x_1\cdots\operatorname{d}x_r\\&=\int_{\CF(B)}\xx^{D_0}\left(\prod_{j=1}^{d}(\lambda_{j}-x^{-1}_{r+j})\right)\frac{\operatorname{d}x_1}{x_1}\cdots\frac{\operatorname{d}x_r}{x_r}\\&=\left(\prod_{j=1}^{d}\lambda_{j}\right)\int_{\CF(B)}\xx^{D_0}\varpi_{\CTNS}+O(B(\log B)^{r-2})\\ &=\operatorname{vol}(\CB_\infty)\alpha(X)B(\log B)^{r-1}+O(B(\log B)^{r-2}).
\end{align*}
Combined with Proposition \ref{co:CAsigmaWBCDWsigmaB}, we obtain
$$\#\CA(\CB_\infty;B)=\operatorname{vol}(\CB_\infty)\alpha(X)B(\log B)^{r-1}+O(B(\log B)^{r-2}\log\log B).$$
Finally using Proposition \ref{prop:eqdistCAinftyB}, we are done.
\qed
\section{The geometric sieve for toric varieties}\label{se:geomsieve}
We keep using the notation from $\S$\ref{se:toricvandercorput}. Write $\pi:X_0\to X$ for the map from the principal universal torsor $X_0$ to $X$. In this section we prove
\begin{theorem}\label{thm:geomsieve1}
Let $f,g\in\BZ[\BA^n_\BZ]$ be coprime (i.e. $\codim_{\BA_\BQ^n}(f=g=0)=2$). Then uniformly for $N> 1$,
\begin{multline*}
\#\{\XX\in\CA(B): \text{there exists } p\geqslant N,p\mid \gcd(f(\XX),g(\XX))\} \\ \ll_{f,g} \frac{B(\log B)^{r-1}}{N\log N}+B(\log B)^{r-2}\log\log B.
\end{multline*}
\end{theorem}
Theorem \ref{thm:maingeomsieve} is a direct consequence of Theorem \ref{thm:geomsieve1}; we give the deduction after the following remark.
\begin{remark}\label{re:effectiveGS}
In general, for $V$ an almost-Fano variety (or an affine variety equipped with a gauge form) and a Zariski closed subset $Z\subset V$ of codimension at least two, we fix an integral model $\CV\subset\BP_\BZ^m$ of $V$ and let $\CZ$ be the Zariski closure of $Z$ in $\CV$.
Write as usual $\widetilde{P}\in\CV(\BZ)$ for the lift of a point $P\in V(\BQ)$.
Summarising all known examples, we expect that the following effective \textbf{(GS)} condition holds (cf. \cite[Theorem 3.1 (ii) and its Remarks]{Cao-Huang2}):
\emph{There exist continuous functions $h_1,h_2:\BR_{>0}\to\BR_{>0}$ with $h_i(x)=o(1)$ as $x\to\infty$ for $i=1,2$, such that, uniformly for every $N\geqslant 2$,
\begin{multline*}
\#\{P\in V(\BQ)\setminus M:H(P)\leqslant B, \text{there exists } p\geqslant N,\widetilde{P}~\text{mod}~p\in\CZ(\BF_p)\}\\=O\left((h_1(N)+h_2(B)) \#\{P\in V(\BQ)\setminus M:H(P)\leqslant B\}\right),
\end{multline*}
where the implied constant may depend only on $\CV$ and $Z$.}
Theorem \ref{thm:geomsieve1} shows that for arbitrary closed $Z\subset X$ of $\codim_X(Z)\geqslant 2$, we can choose
$$h_1(x)=\frac{1}{x\log x},\quad h_2(x)=\frac{\log\log x}{\log x}.$$
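Indeed, the total count satisfies $\#\{P\in U(\BQ):H_{\operatorname{tor}}(P)\leqslant B\}\asymp B(\log B)^{r-1}$ (cf. Theorem \ref{thm:equidistrcong}), so dividing the bound of Theorem \ref{thm:geomsieve1} by this quantity gives, up to constants,
$$\frac{1}{N\log N}+\frac{\log\log B}{\log B}=h_1(N)+h_2(B).$$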
\end{remark}
\begin{proof}[Proof of Theorem \ref{thm:maingeomsieve}]
Let $\widetilde{\pi}:\CX_0\to\CX$ be the map between their canonical models. For every Zariski closed $Z\subset X$ with $\codim_{X}(Z)\geqslant 2$, we write $Z_0:=\pi^{-1}(Z)\subset X_0$. Then $Z_0$ is also closed, and $\codim_{X_0}(Z_0)=\codim_{X}(Z)\geqslant 2$. Let $\CZ_0$ be the Zariski closure of $Z_0$ in $\CX_0$. Then $\CZ_0=\widetilde{\pi}^{-1}(\CZ)$.
$$\xymatrix{\CZ_0\ar[d]\ar@{^{(}->}[r]& \CX_0\ar[d]\ar@{^{(}->}[r]&\BA_\BZ^n\\ \CZ\ar@{^{(}->}[r]& \CX\ar@{^{(}->}[r]&\BP_\BZ^m.}$$
Upon considering all irreducible components of $Z_0$, we may assume that $\CZ_0\subset (f_0=g_0=0)\cap \CX_0$ for certain non-zero coprime polynomials $f_0,g_0\in\BZ[\BA^n_\BZ]$. Therefore by Theorem \ref{thm:geomsieve1} applied to $f_0,g_0$,
\begin{align*}
&\#\{P\in U(\BQ):H(P)\leqslant B, \text{there exists } p\geqslant N,\widetilde{P}\mod p \in\CZ(\BF_p)\}\\ \leqslant & \#\mathfrak{T}_{\CX_0}(\BZ)\#\{\XX\in\CA(B): \text{there exists } p\geqslant N,p\mid \gcd(f_0(\XX),g_0(\XX))\}\\ \ll&_{f_0,g_0} \frac{B(\log B)^{r-1}}{N\log N}+B(\log B)^{r-2}\log\log B.
\end{align*}
The choice of $f_0,g_0$ depends only on $\CZ_0$ and hence on $Z$. This finishes the proof.
\end{proof}
The remainder of this section is devoted to proving Theorem \ref{thm:geomsieve1}.
\subsection{Counting points of bounded height on subvarieties}\label{se:countingsubvar}
The main results of this section are Proposition \ref{prop:subvar} and its Corollary \ref{cor:subvar}. The latter counts rational points of bounded height lying on a proper closed subvariety and saves a power of $\log B$.
We recall \eqref{eq:CAAB}.
\begin{proposition}\label{prop:subvar}
Let $\phi\in\BZ[\BA^n_\BZ]$ be non-zero. For $A\geqslant 1$ and $\sigma\in\triangle_{\operatorname{max}}$, consider the set $$\CA_{\sigma,\phi}^{(A)}(B):=\{\XX\in\CA_\sigma^{(A)}(B): \phi(\XX)=0\}.$$ Then we have
$$\#\CA_{\sigma,\phi}^{(A)}(B)\ll_A B(\log B)^{r-2}+B(\log B)^{r-1-A}.$$
\end{proposition}
\begin{corollary}\label{cor:subvar}
Let $Y\subset X$ be a proper closed subvariety.
Define
$$\CN(Y;B):=\#\{P\in (Y\cap \CT_{O})(\BQ):H_{\operatorname{tor}}(P)\leqslant B\}.$$
Then we have $$\CN(Y;B)=O(B(\log B)^{r-2}\log\log B).$$
\end{corollary}
\begin{proof}[Proof of Corollary \ref{cor:subvar}]
We consider the preimage $Y_0:=\pi^{-1}(Y)\subset X_0$.
We choose a non-zero $\phi\in \BZ[\BA^n_\BZ]$ which vanishes on $Y_0$.
Forgetting about the coprimality condition, we clearly have
\begin{equation}\label{eq:NYB}
\CN(Y;B)\leqslant \#\CU_0(\BZ)\left( \#\left(\CA(B)\setminus\CA^{(A)\flat}(B)\right)+\sum_{\sigma\in\triangle_{\operatorname{max}}}\#\CA_{\sigma,\phi}^{(A)}(B)\right).
\end{equation}
Using Theorem \ref{thm:CABCAAB} and Proposition \ref{prop:subvar}, with a fixed $A\geqslant 1$, we obtain
\begin{align*}
\CN(Y;B)&= O(B(\log B)^{r-2}\log\log B)+\sum_{\sigma\in\triangle_{\operatorname{max}}}O(B(\log B)^{r-2})\\ &=O(B(\log B)^{r-2}\log\log B).\qedhere
\end{align*}
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop:subvar}]
Upon passing to irreducible components and the reduced subscheme structure, we may assume that the subscheme $(\phi=0)\subset\BA^n_\BQ$ is integral.
Consider the projection $p_n:\BA^n\to\BA^{n-1}$ onto the first $n-1$ coordinates. Then either $\phi$ is constant in the variable $x_{n}$, i.e., $\phi$ factors through $p_n$, or the restriction $\pi_n:=p_n|_{(\phi=0)}:(\phi=0)\to\BA^{n-1}$ is a generically finite dominant morphism. In the latter case, we can choose a polynomial $\phi_n\in\BZ[x_1,\cdots,x_{n-1}]$ which vanishes on the ramification locus\footnote{i.e., the locus over which $\pi_n$ fails to be of its generic degree.} $Y_n\subset\BA^{n-1}$ of $\pi_n$, and we define
$$\CA_{\sigma,\phi,(n)}^{(A)}(B):=\{\XX\in\CA_{\sigma,\phi}^{(A)}(B):p_n(\XX)\not\in Y_n\},$$
$$\CA_{\sigma,\phi_n}^{(A)}(B):=\{\XX\in\CA_\sigma^{(A)}(B):\phi_n(p_n(\XX))=0\},$$
so that $$\CA_{\sigma,\phi}^{(A)}(B)\subset\CA_{\sigma,\phi,(n)}^{(A)}(B)\cup \CA_{\sigma,\phi_n}^{(A)}(B).$$ Note however that in the first case, where $\phi$ factors through $p_n$, we simply have $\CA_{\sigma,\phi}^{(A)}(B)=\CA_{\sigma,\phi_n}^{(A)}(B)$ and $\CA_{\sigma,\phi,(n)}^{(A)}(B)=\varnothing$.
In any case, we are reduced to $\BA^{n-1}$.
Continuing the analysis of the projection from $\BA^{n-1}$ with respect to the polynomial $\phi_n$, we further break $\CA^{(A)}_{\sigma,\phi_n}(B)$ into subsets. Finally we obtain
\begin{equation}\label{eq:decompCABf}
\CA_{\sigma,\phi}^{(A)}(B)\subset \bigcup_{i=1}^{n}\CA_{\sigma,\phi,(i)}^{(A)}(B),
\end{equation}
where each set $\CA_{\sigma,\phi,(i)}^{(A)}(B),i\leqslant n$, if non-empty, is constructed at the $(n-i+1)$-th step with respect to a non-zero polynomial $\phi_{n-i+1}\in\BZ[X_1,\cdots,X_i]$ (with the convention $\phi_1=\phi$), and has the property that, once $(X_1,\cdots,X_{i-1})\in\BZ_{>0}^{i-1}$ are fixed, there are only finitely many solutions for $X_i$, the number of which is bounded by the degree of $\phi_{n-i+1}$.
We first analyse the set $\CA_{\sigma,\phi,(r+j_0)}^{(A)}(B)$ for every fixed $j_0\in\{1,\cdots,d\}$. It suffices to assume that $\CA_{\sigma,\phi,(r+j_0)}^{(A)}(B)\neq\varnothing$. On recalling the sum \eqref{eq:SAB}, we thus have, by Lemma \ref{le:twospecsum1},
\begin{align*}
\#\CA_{\sigma,\phi,(r+j_0)}^{(A)}(B)&\leqslant \deg(\phi_{n-(r+j_0)+1}) S^{(A)}_{\aa(\sigma),j_0}(B)=O(B(\log B)^{r-1-A}).
\end{align*}
Now we treat $\CA_{\sigma,\phi,(i_0)}^{(A)}(B)$ for every fixed $1\leqslant i_0\leqslant r$, and we may as before assume that $\CA_{\sigma,\phi,(i_0)}^{(A)}(B)\neq\varnothing$.
By the construction above, there exists a finite collection of maps $(\eta_{i_0}(k):\BZ_{>0}^{i_0-1}\to\BZ_{>0})_{k\in K}$, with $\#K\leqslant\deg \phi_{n-i_0+1}$, such that for every $\XX\in\CA_{\sigma,\phi,(i_0)}^{(A)}(B)$, we have $X_{i_0}=\eta_{i_0}(k)(X_1,\cdots,X_{i_0-1})$ for certain $k\in K$. Employing Proposition \ref{co:slicing} and its notation, we obtain
$$\#\CA_{\sigma,\phi,(i_0)}^{(A)}(B)\leqslant \sum_{k\in K} \CN_\sigma(B)_{i_0,\eta_{i_0}(k)}=O(B(\log B)^{r-2}).$$
To conclude, we go back to \eqref{eq:decompCABf} and obtain
$$\#\CA_{\sigma,\phi}^{(A)}(B)\leqslant \sum_{i=1}^{n}\#\CA_{\sigma,\phi,(i)}^{(A)}(B)\ll_A B(\log B)^{r-2}+B(\log B)^{r-1-A}.$$
The proof is thus completed.
\end{proof}
\subsection{Two specific versions of the geometric sieve}
We shall fix the coprime polynomials $f,g\in\BZ[\BA^n_\BZ]$ throughout the rest of this section, and we let $\CY_0$ be the subscheme $(f=g=0)\subset\BA^n_\BZ$.
To every cone $\sigma\in\triangle_{\operatorname{max}}$, we associate an admissible ordering, so that $\sigma(1)=\{\varrho_{r+1},\cdots,\varrho_{r+d}\}$ and the parametrisation of $X_0\subset \BA^n$ is induced by $\pi^{-1}(U_\sigma)$. Consider the projection $\operatorname{pr}_\sigma:\BA^n\to\BA^r$ onto the first $r$ coordinates, and for every $\zz\in \BA^r$, we denote the fibre of $\operatorname{pr}_\sigma$ over $\zz$ by $$\CY_{0,\zz}(\sigma):=\operatorname{pr}_\sigma^{-1}(\zz)\cap \CY_0.$$ It is defined by the polynomials $f(\zz,X_{r+1},\cdots,X_{r+d}),g(\zz,X_{r+1},\cdots,X_{r+d})\in\BZ[X_{r+1},\cdots,X_{r+d}]$. Let $$\CZ_0(\sigma):=\{\zz\in \BA^r:\codim_{\BA_{k(\zz)}^{d}}(\CY_{0,\zz}(\sigma))\leqslant 1\}.$$ Then $\CZ_0(\sigma)$ is a constructible set by Chevalley's theorem (cf. \cite[Exercise II 3.22 (d)]{Hartshorne}). By a standard result in commutative algebra (cf. \cite[Theorem 15.1]{Matsumura}), for every $\zz\in\CZ_0(\sigma)$,
$$\codim_{\BA^n}(\CY_0)\leqslant \operatorname{ht}_{\zz}(\CZ_0(\sigma)|\BA^r)+\codim_{\BA_{k(\zz)}^{d}}(\CY_{0,\zz}(\sigma)),$$ where $\operatorname{ht}_{\zz}(\CZ_0(\sigma)|\BA^r)$ denotes the height of the prime ideal corresponding to $\CZ_0(\sigma)$ in the local ring $\CO_{\BA^r,\zz}$.
Therefore we have $$\codim_{\BA^r}(\CZ_0(\sigma))\geqslant \min_{\zz\in\CZ_0(\sigma)}\operatorname{ht}_{\zz}(\CZ_0(\sigma)|\BA^r)\geqslant 1.$$ Hence we can choose a polynomial $h_\sigma\in\BZ[X_1,\cdots,X_r]$ which vanishes on $\CZ_0(\sigma)$. Then, for any $\zz\in\BA^r\setminus (h_\sigma=0)$, the fibre $\CY_{0,\zz}(\sigma)$ has codimension at least two.
The following version of the geometric sieve is dedicated to controlling the contribution from arbitrarily large primes.
\begin{proposition}\label{prop:geomsieve1}
For every $\sigma\in\triangle_{\max}$ and for $A,M\geqslant 2$, consider the following set
\begin{equation}\label{eq:CBMB}
\CK_{f,g,\sigma}^{(A),[M],\natural}(B):=\{\XX\in\CA_\sigma^{(A)}(B):h_\sigma(\XX)\neq 0, \text{there exists } p\geqslant M,p\mid \gcd(f(\XX),g(\XX))\}.
\end{equation}
Then we have, uniformly for such $M$,
$$\#\CK_{f,g,\sigma}^{(A),[M],\natural}(B)\ll\frac{B(\log B)^r}{M\log M}+B(\log B)^{r-A},$$ where the implied constant depends only on $f,g$.
\end{proposition}
To get rid of the extra $\log B$ factor in the first term of Proposition \ref{prop:geomsieve1}, we also need the next sieve which treats the contribution from every single moderately large prime. We keep using the notation above.
\begin{proposition}\label{prop:geomsieve2}
For every $\sigma\in\triangle_{\max}$ and for every prime $p$, define analogously the set
$$\CK_{f,g,\sigma}^{(A),\{p\},\natural}(B):=\{\XX\in\CA_\sigma^{(A)}(B):h_\sigma(\XX)\neq 0,p\mid \gcd(f(\XX),g(\XX))\}.$$
Then we have, uniformly for $p\leqslant (\log B)^{A}$,
$$\#\CK_{f,g,\sigma}^{(A),\{p\},\natural}(B)\ll \frac{B(\log B)^{r-1}}{p^2}+p^{d-1}B(\log B)^{r-1-A}.$$
\end{proposition}
\subsection{Proof of Theorem \ref{thm:geomsieve1}}
We begin by assuming $N<(\log B)^2$. We consider the quantities
$$\Pi_1(B):=\sum_{\sigma\in\triangle_{\operatorname{max}}}\#\{\XX\in\CA_\sigma(B):h_\sigma(\XX)=0\},$$
$$\Pi_2^{(A),\{p\}}(B):=\sum_{\sigma\in\triangle_{\operatorname{max}}}\#\CK_{f,g,\sigma}^{(A),\{p\},\natural}(B),$$
$$\Pi_2^{(A),[M]}(B):=\sum_{\sigma\in\triangle_{\operatorname{max}}}\#\CK_{f,g,\sigma}^{(A),[M],\natural}(B).$$
By Corollary \ref{cor:subvar} we have
$$\Pi_1(B) \ll B(\log B)^{r-2}\log \log B.$$
(In fact the proof of Proposition \ref{prop:subvar} shows that $\Pi_1(B)\ll B(\log B)^{r-2}$.) Applying Proposition \ref{prop:geomsieve1} with $M=(\log B)^2$, we obtain
$$\Pi_2^{(A),[(\log B)^2]}(B)\ll_A B(\log B)^{r-2}+B(\log B)^{r-A}.$$
Applying Proposition \ref{prop:geomsieve2}, we obtain $$\sum_{N\leqslant p\leqslant (\log B)^2}\Pi_2^{(A),\{p\}}(B)\ll \frac{B(\log B)^{r-1}}{N\log N}+B(\log B)^{r-1-A+2d},$$
where $A>2$ is to be determined in the next step.
By using the following decomposition,
\begin{align*}
&\#\{\XX\in\CA(B): \text{there exists } p\geqslant N,p\mid \gcd(f(\XX),g(\XX))\}\\ &\leqslant \Pi_1(B)+\#\left(\CA(B)\setminus\CA^{(A)\flat}(B)\right)+\sum_{N\leqslant p\leqslant (\log B)^2}\Pi_2^{(A),\{p\}}(B) +\Pi_2^{(A),[(\log B)^2]}(B)\\ &\ll_A B(\log B)^{r-2}\log \log B+\frac{B(\log B)^{r-1}}{N\log N}+B(\log B)^{r-1-A+2d},
\end{align*}
and then by fixing $A=2d+1$, we achieve the desired estimate.
Now assume $N\geqslant (\log B)^2$. Then the second error term dominates the first one. We deduce directly from Proposition \ref{prop:geomsieve1} with $A=2$ and $M=N$ that
\begin{align*}
&\#\{\XX\in\CA(B): \text{there exists } p\geqslant N,p\mid \gcd(f(\XX),g(\XX))\}\\ &\leqslant \Pi_1(B)+\#\left(\CA(B)\setminus\CA^{(A=2)\flat}(B)\right)+\Pi_2^{(A=2),[N]}(B)\\ &\ll \frac{B(\log B)^r}{N\log N}+B(\log B)^{r-2}\log \log B=O(B(\log B)^{r-2}\log \log B),
\end{align*} which is also satisfactory.
This completes the proof of Theorem \ref{thm:geomsieve1}. \qed
\subsection{Proof of Proposition \ref{prop:geomsieve1}}
We need a quantitative and uniform version of Ekedahl's sieve \cite{Ekedahl} (cf. also \cite[Lemma 3.1 \& Theorem 3.3]{Bhargava}) for affine spaces, allowing boxes with unequal side-lengths.
We define the height of a polynomial to be the maximum of the absolute values of its coefficients.
\begin{lemma}[Browning--Heath-Brown \cite{Browning-HB}, Lemma 2.1]\label{le:EkedahlBrHB}
Uniformly for any $B_1,\cdots,B_m\geqslant 1,Q\geqslant 1,M\geqslant 2$, and for any polynomials $f_1,f_2\in\BZ[X_1,\cdots,X_m]$ which are coprime and of degree $\leqslant d$ and height $\leqslant Q$, we have
$$\#\{\XX\in\BZ^m:\text{ for every } i,|X_i|\leqslant B_i, \text{and there exists } p>M,p\mid \gcd(f_1(\XX),f_2(\XX))\}$$
$$ \ll \frac{\FB\log (\FB Q)}{M\log M}+\frac{\FB\log (\FB Q)}{B_{\min}},$$
where $$\FB=\FB(B_i):=\prod_{i=1}^{m}B_i,\quad B_{\min}:=\min_{1\leqslant i\leqslant m}(B_i).$$ The implied constant only depends on $m,d$.
\end{lemma}
Now let us start the proof of Proposition \ref{prop:geomsieve1}.
For each fixed $\YY\in\BZ_{>0}^r$, define the set
$$\CK_{f,g,\sigma}^{(A),[M],\natural}(\YY;B):=\{\ZZ\in\BZ_{>0}^d:Z_j\leqslant \YY^{E_\sigma(j)},1\leqslant j\leqslant d,\text{there exists } p\geqslant M,p\mid\gcd( f(\YY,\ZZ),g(\YY,\ZZ))\}.$$
Then by the construction of $h_\sigma$, the polynomials $f(\YY,X_{r+1},\cdots,X_{r+d}),g(\YY,X_{r+1},\cdots,X_{r+d})\in\BZ[X_{r+1},\cdots,X_{r+d}]$ are coprime for each $\YY\in\BZ_{>0}^r$ satisfying
\begin{equation}\label{eq:condition1}
\max_{1\leqslant i\leqslant r}Y_i\leqslant B,\quad \YY^{D_0(\sigma)}\leqslant B, \quad h_\sigma(\YY)\neq 0,\quad \min_{1\leqslant j\leqslant d}\YY^{E_\sigma(j)}\geqslant (\log B)^A.
\end{equation}
Then applying Lemma \ref{le:EkedahlBrHB} we get
\begin{align*}
\#\CK_{f,g,\sigma}^{(A),[M],\natural}(\YY;B)&\ll \log \left(\prod_{j=1}^{d}\YY^{E_\sigma(j)}\right)\left((M \log M)^{-1}+\left(\min_{1\leqslant j\leqslant d}\YY^{E_\sigma(j)}\right)^{-1}\right) \left(\prod_{j=1}^{d}\YY^{E_\sigma(j)}\right)\\ &\ll \left(\frac{\log B}{M \log M}+(\log B)^{1-A}\right)\left(\prod_{i=1}^{r}Y_i^{a_i(\sigma)-1}\right).
\end{align*}
The implied constant depends only on the degrees of $f$ and $g$.
Therefore we obtain
\begin{align*}
\#\CK_{f,g,\sigma}^{(A),[M],\natural}(B)&=\sum_{\substack{\YY\in\BZ_{>0}^r\\\eqref{eq:condition1}\text{ holds}}}\#\CK_{f,g,\sigma}^{(A),[M],\natural}(\YY;B)\\
&\ll \left(\frac{\log B}{M \log M}+(\log B)^{1-A}\right)\sum_{\substack{\YY\in\BZ_{>0}^r\\\max_{1\leqslant i\leqslant r}Y_i\leqslant B, \YY^{D_0(\sigma)}\leqslant B }}\prod_{i=1}^{r}Y_i^{a_i(\sigma)-1}\\ &\ll\frac{B(\log B)^r}{M\log M}+B(\log B)^{r-A},
\end{align*} by Lemma \ref{le:sublemma}.
We get the desired estimate.
\qed
\subsection{Proof of Proposition \ref{prop:geomsieve2}}\label{se:proofpropgeomsieve2}
As in the proof of Proposition \ref{prop:geomsieve1}, we define, for each $\YY\in\BZ_{>0}^r$, $$\CK_{f,g,\sigma}^{(A),\{p\},\natural}(\YY;B):=
\{\ZZ\in\BZ_{>0}^d:Z_j\leqslant \YY^{E_\sigma(j)},1\leqslant j\leqslant d,p\mid\gcd( f(\YY,\ZZ),g(\YY,\ZZ))\}.$$
We now estimate $\#\CK_{f,g,\sigma}^{(A),\{p\},\natural}(\YY;B)$ for every $\YY\in\BZ_{>0}^r$ satisfying the assumptions \eqref{eq:condition1}. Employing the Lang--Weil estimate \cite{Lang-Weil}, we have, uniformly for any such $\YY$, \begin{equation}\label{eq:LWcodim2}
\#\{\bxi\in (\BZ/p\BZ)^d:f(\YY,\bxi)\equiv g(\YY,\bxi)\equiv 0\mod p\}\ll p^{d-2}.
\end{equation} Here the implied constant depends only on the degrees of $f$ and $g$.
Now we break the $d$-dimensional box $\prod_{j=1}^{d}\left[1, \YY^{E_\sigma(j)}\right]$ into unions of integral cubes of side length $p$, the number of which is
$$\ll\frac{\prod_{j=1}^d \YY^{E_\sigma(j)}}{p^d}+\frac{\prod_{j=1}^d \YY^{E_\sigma(j)}}{\min_{1\leqslant j\leqslant d}\YY^{E_\sigma(j)}}\leqslant \left(p^{-d}+(\log B)^{-A}\right)\prod_{j=1}^d \YY^{E_\sigma(j)}.$$ Applying \eqref{eq:LWcodim2} together with the Chinese remainder theorem, we have
$$\#\CK_{f,g,\sigma}^{(A),\{p\},\natural}(\YY;B)\ll p^{d-2}\left(p^{-d}+(\log B)^{-A}\right)\prod_{j=1}^d \YY^{E_\sigma(j)}= \left(\frac{1}{p^2}+\frac{p^{d-2}}{(\log B)^A}\right)\prod_{i=1}^{r}Y_i^{a_i(\sigma)-1}.$$
We are now ready to estimate $\#\CK_{f,g,\sigma}^{(A),\{p\},\natural}(B)$ as follows.
\begin{align*}
\#\CK_{f,g,\sigma}^{(A),\{p\},\natural}(B)&=\sum_{\substack{\YY\in\BZ_{>0}^r\\ \eqref{eq:condition1}\text{ holds}}}\#\CK_{f,g,\sigma}^{(A),\{p\},\natural}(\YY;B)\\&\ll \left(\frac{1}{p^2}+\frac{p^{d-2}}{(\log B)^A}\right)\sum_{\substack{\YY\in\BZ_{>0}^r\\\max_{1\leqslant i\leqslant r}Y_i\leqslant B, \YY^{D_0(\sigma)}\leqslant B }}\prod_{i=1}^{r}Y_i^{a_i(\sigma)-1}\\ &\ll\frac{B(\log B)^{r-1}}{p^2}+p^{d-2}B(\log B)^{r-1-A},
\end{align*} by using Lemma \ref{le:sublemma}.
\qed
\section{Further applications to counting rational points on toric varieties}\label{se:application}
\subsection{A preliminary lemma}
Let $Y$ be a smooth geometrically integral $\BQ$-variety and let $\CY$ be an integral model of $Y$ over $\BZ$.
Fix $S$ a finite set of places containing $\BR$ such that $\CY$ is smooth with geometrically integral fibres over $\BZ_S$.
Recall the map \eqref{eq:redmodpk}.
\begin{lemma}\label{le:preparation}
Let $Z\subset Y$ be a proper Zariski closed subset, and let $W$ be the open subset $Y\setminus Z$.
Let $\CZ$ be the Zariski closure of $Z$ in $\CY$, and $\CW:=\CY\setminus\CZ$. Then for every $p\not\in S$ and every $k\geqslant 1$, we have $\operatorname{Mod}_{p^k}^{-1}(\CW(\BZ/p^k\BZ))=\CW(\BZ_p)$.
\end{lemma}
\begin{proof}
Fix $p\not\in S$ and $k\geqslant 1$. For every point $\overline{P}_k\in \CW(\BZ/p^k\BZ)$, we write $\overline{P}$ for its image in $\CW(\BF_p)$. By Hensel's lemma, $\overline{P}_k$ lifts to a point $\widetilde{P}:\operatorname{Spec}(\BZ_p)\to\CY$. Since the open subset $\widetilde{P}^{-1}(\CW)\subset\operatorname{Spec}(\BZ_p)$ contains the closed point (as $\widetilde{P}$ reduces to $\overline{P}\in\CW(\BF_p)$), it is non-empty and hence also contains the generic point of $\operatorname{Spec}(\BZ_p)$. This shows that $\widetilde{P}$ factors through $\CW$, that is, $\widetilde{P}\in\CW(\BZ_p)$, as desired.
\end{proof}
\subsection{Proof of purity of equidistribution -- Principle \ref{prin:purity}}\label{se:proofofpurity}
Let $Z\subset X$ be Zariski closed of codimension at least two, and let $W$ be the open subset $X\setminus Z$. Let $\CZ=\overline{Z}\subset \CX$ be the Zariski closure of $Z$, and let $\CW:=\CX\setminus \CZ$ be the integral model of $W$.
Recall the definition of $W(\RA_\BQ)$ \eqref{eq:adelization}. We are going to apply Theorem \ref{thm:keyOmega} to any adelic subset of the form $$\CF^W_\infty\times\prod_{\nu\mid l}\CF^W_\nu\times \prod_{\nu\nmid l} \CW(\BZ_\nu)\subset W(\RA_\BQ),$$ where $l$ is sufficiently large, $\CF^W_\infty\subset W(\BR)$ is a non-empty real open neighbourhood and each non-archimedean $\CF^W_\nu \subset W(\BQ_\nu)$ is non-empty, open and compact.
Clearly $\omega_{\operatorname{tor},\infty}^X(\CF^W_\infty)>0$ and $\omega_{\operatorname{tor},\nu}^X(\CF^W_\nu)>0$ for every $\nu\mid l$. By Lemma \ref{le:preparation}, Theorem \ref{thm:nonarchTmeas}, Equation \eqref{eq:modelmeasurecomp} and the Lang--Weil estimate \cite{Lang-Weil}, we see that, uniformly for $p\leftrightarrow\nu\nmid l$, $$\omega_{\operatorname{tor},\nu}^X(\CW(\BZ_\nu))=\frac{\#\CW(\BF_p)}{p^{\dim X}}=1+O\left(p^{-\frac{1}{2}}\right).$$ So $\omega_{\operatorname{tor},\nu}^X(\CW(\BZ_\nu))>0$ provided $l$ is sufficiently large. This shows that the family of $\nu$-adic neighbourhoods $(\CF^W_\infty)\cup(\CF^W_\nu)_{\nu\mid l}\cup (\CW(\BZ_{\nu}))_{\nu\nmid l}$ satisfies \eqref{eq:posmeas} of Theorem \ref{thm:keyOmega}.
We now verify the condition \textbf{(GS)}.
Lemma \ref{le:preparation} shows that, for each prime $p<\infty$ and every $P\in X(\BQ_p)$ lifting to $\widetilde{P}\in \CX(\BZ_p)$, if $\widetilde{P}\not\in \CW(\BZ_p)$ then $\widetilde{P}\mod p\in\CZ(\BF_p)$. So the condition \textbf{(GS)} follows from Theorem \ref{thm:maingeomsieve}. The variety $X$ satisfies Principle \ref{prin:Manin-Peyre} thanks to Theorem \ref{thm:mainequidist}. So the conclusion follows from Theorem \ref{thm:keyOmega}.
\qed
\begin{remark}\label{rmk:APWA}
The same argument also applies to the family $(\CF^W_\infty)\cup(\CF^W_\nu)_{\nu\mid l}\cup (W(\BQ_{\nu}))_{\nu\nmid l}$, whence the associated counting function also admits an asymptotic formula. This implies in particular that $W(\BQ)$ is dense in the larger space $\prod_{\nu\in\operatorname{Val}(\BQ)}W(\BQ_{\nu})$ equipped with the product topology.
\end{remark}
\subsection{Everywhere locally soluble fibres -- Proof of Theorem \ref{thm:ratptsfibration}}
Our proof is inspired by the strategies employed in \cite{BBL,Browning-Loughran,Loughran-Smeets}. We summarise them as follows.
\begin{theorem}\label{thm:locsolfibre}
Let $f_0:Y_1\to Y_2$ be a proper dominant morphism of smooth proper geometrically integral varieties over $\BQ$. Let $\widetilde{f}_0:\CY_1\to\CY_2$ be a proper model of $f_0$ over $\BZ_S$, where $S$ is a finite set of places containing $\BR$ such that $\CY_2(\BZ_{\nu})\neq\varnothing$ for every $\nu\notin S$.
\begin{enumerate}
\item Assume that $f_0$ is generically finite of degree $>1$ and $\dim Y_1=\dim Y_2$. Then there exist a constant $0<c(f_0)<1$ and a set of primes $\CP(f_0)$, containing almost all primes not in $S$ which split completely in the algebraic closure of $\BQ$ in the Galois closure of the extension $\BQ(Y_1)/\BQ(Y_2)$, such that for each prime $p\in\CP(f_0)$, we have \begin{equation}\label{eq:est0}
\#\left(\widetilde{f}_0(\CY_1(\BF_p))\right)\leqslant c(f_0)\#\CY_2(\BF_p).
\end{equation}
\item Assume that $f_0$ has geometrically integral generic fibre.
\begin{enumerate}
\item Assume that there exists at least one non-pseudo-split fibre over the codimension one points of $Y_2$. Then for every $p\not\in S$, we have $1-\frac{\#\left(\widetilde{f}_0(\CY_1(\BZ_p))\mod p^2\right)}{\#\CY_2(\BZ/p^2\BZ)}\ll \frac{1}{p}$ and there exists $\triangle(f_0)>0$ such that \begin{equation}\label{eq:est1}
\sum_{p\leqslant N,p\not\in S}\left(1-\frac{\#\left(\widetilde{f}_0(\CY_1(\BZ_p))\mod p^2\right)}{\#\CY_2(\BZ/p^2\BZ)}\right)\log p\sim\triangle(f_0) \log N,
\end{equation}
\begin{equation}\label{eq:est2}
\prod_{p\leqslant N,p\not\in S}\frac{\#\left(\widetilde{f}_0(\CY_1(\BZ_p))\mod p^2\right)}{\#\CY_2(\BZ/p^2\BZ)}\asymp (\log N)^{-\triangle(f_0)}. \footnote{Theorem \ref{thm:Tamagawazero} already gives the weaker bound $\CN_{\operatorname{loc}}(f;B)=o(B(\log B)^{r-1})$, provided either estimate \eqref{eq:est0} or \eqref{eq:est2} holds.}
\end{equation}
\item Assume that the fibre over each codimension-one point of $Y_2$ is pseudo-split. Then there exist a closed subset $Z\subset Y_2$ of codimension at least two and a finite set of places $S^\prime$ containing $S$ such that for every $\nu\not\in S^\prime$, on denoting $\CZ$ the Zariski closure of $Z$ in $\CY_2$, we have
\begin{equation}\label{eq:surj}
\left(\CY_2\setminus\CZ\right)(\BZ_\nu)\subset f_0(Y_1(\BQ_\nu)).
\end{equation}
\end{enumerate}
\end{enumerate}
\end{theorem}
\begin{proof}[Sketch of Proof]
Part (1) follows from the effective Hilbert irreducibility theorem (cf. \cite[Theorem 5.1, Lemma 5.2]{Cohen} and \cite[Theorem 3.6.2]{Serre}).
Part (2a) is \cite[Proposition 3.10]{Browning-Loughran}, which is deduced from \cite[Theorem 2.8]{Loughran-Smeets} and \cite[Corollary 2.4]{Browning-Loughran}. Part (2b) is \cite[Corollary 3.7]{BBL} and \cite[Proposition 4.1]{Loughran-Smeets}. The rough idea is as follows. Since the generic fibre of $f_0$ is geometrically integral, the Zariski closure $T$ of the locus parametrising non-split fibres, itself contained in the locus parametrising fibres which are not geometrically integral, is a proper closed subset. Let $\CT$ be its Zariski closure in $\CY_2$. Write $Y_2^{(1)}$ for the set of codimension-one points of $Y_2$. If $T$ has codimension one, then for each integral point $P\in \CY_2(\BZ_p)$ which meets the smooth locus of $\CT$ transversally (outside a set of codimension at least two) and whose fibre above $P\mod p$ is non-split, the fibre above the generic point of $P$ has no $\BQ_p$-point. The existence of at least one non-pseudo-split fibre over some point of $Y_2^{(1)}$ leads to the estimate \eqref{eq:est1}. On the other hand, if all fibres above $Y_2^{(1)}$ are pseudo-split, then on restricting $\widetilde{f}_0$ to $\widetilde{f}_0^{-1}(\CT)\to \CT$ (which is also proper and dominant), the (Frobenian) set parametrising non-split fibres of any irreducible component of $\CT$ has density zero (in the sense of \cite[\S3]{Loughran-Smeets}), and therefore is not Zariski dense (\cite[Corollary 3.8]{Loughran-Smeets}) in $\CT$. Then an application of the Lang--Weil estimate as in \cite[Lemma 2.2]{Skorobogatov} and Hensel's lemma yield the surjectivity \eqref{eq:surj}.
\end{proof}
To prove Theorem \ref{thm:ratptsfibration} (1) \& (2a), we fix $\widetilde{f}:\CY\to \CX$ a proper model of the morphism $f$ over $\BZ_S$, where $S$ is a finite set of places containing $\BR$.
\begin{proof}[Proof of Theorem \ref{thm:ratptsfibration} (1)]
Assume that $f$ is generically finite. Let the set of primes $\CP(f)$ and the constant $c(f)\in ]0,1[$ be as in Theorem \ref{thm:locsolfibre} (1). The set of primes $\CP(f)$ has positive density, say $l(f)$, by the Chebotarev density theorem.
Recall the reduction map \eqref{eq:redmodpk}. For every prime $p\in\CP(f)$ corresponding to $\nu$, let us consider the $p$-adic neighbourhood $$\Theta_\nu:=\operatorname{Mod}_{p}^{-1}(\widetilde{f}(\CY(\BF_p)))\subset\CX(\BZ_p).$$
Then by Theorem \ref{thm:nonarchTmeas} and Equation \eqref{eq:modelmeasurecomp}, we have
$$\frac{\omega_{\operatorname{tor},\nu}^X(\Theta_\nu^c)}{\omega_{\operatorname{tor},\nu}^X(\Theta_\nu)}=\frac{\#\left(\CX(\BF_p)\setminus \widetilde{f}(\CY(\BF_p))\right)}{\# \widetilde{f}(\CY(\BF_p))}>c(f)^{-1}-1>0.$$
Hence for every $N\geqslant 1$, the function $G$ \eqref{eq:GN} satisfies (cf. \cite[p. 104]{Serre})
\begin{align*}
G(N)&=\sum_{\substack{j<N\\p\mid j\Rightarrow p\in \CP(f)}}\mu^2(j)\prod_{\nu\mid j }\frac{\omega_{\operatorname{tor},\nu}^X(\Theta_\nu^c)}{\omega_{\operatorname{tor},\nu}^X(\Theta_\nu)}\\ &>\sum_{\substack{j<N\\p\mid j\Rightarrow p\in \CP(f)}}\mu^2(j)(c(f)^{-1}-1)^{\#\{p:p\mid j\}}\asymp N(\log N)^{l(f)(c(f)^{-1}-1)-1}.
\end{align*}
On applying Theorems \ref{thm:Selbergsieve} and \ref{thm:mainequidist}, we obtain that
$$\CN_{\operatorname{loc}}(f;B)\leqslant \CN_X(\CE[(\Theta_\nu)];B)\ll_\varepsilon \frac{B(\log B)^{r-1}}{N^{1-\varepsilon}}+N^{2(r+2\dim X+1)+\varepsilon}B(\log B)^{r-\frac{5}{4}}\log\log B.$$
So it suffices to take $N=(\log B)^\lambda$ where $0<\lambda<(4(2r+4\dim X+3))^{-1}$ to conclude.
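For the reader's convenience, let us record the exponent bookkeeping behind this choice of $N$ (a routine verification which we add here): with $N=(\log B)^\lambda$ we have $\lambda(2(r+2\dim X+1)+\varepsilon)<\frac{1}{4}$ for $\varepsilon$ small enough, so that
\begin{align*}
\frac{B(\log B)^{r-1}}{N^{1-\varepsilon}}&=B(\log B)^{r-1-\lambda(1-\varepsilon)}=o\left(B(\log B)^{r-1}\right),\\
N^{2(r+2\dim X+1)+\varepsilon}B(\log B)^{r-\frac{5}{4}}\log\log B&=B(\log B)^{r-\frac{5}{4}+\lambda(2(r+2\dim X+1)+\varepsilon)}\log\log B=o\left(B(\log B)^{r-1}\right),
\end{align*}
whence $\CN_{\operatorname{loc}}(f;B)=o(B(\log B)^{r-1})$.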
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:ratptsfibration} (2a)]
We now consider the reduction map \eqref{eq:redmodpk} with $k=2$ and, for every $p\notin S$ corresponding to $\nu$, the neighbourhood $$\Theta_\nu:=\operatorname{Mod}_{p^2}^{-1}\left(\widetilde{f}(\CY(\BZ_p))\mod p^2\right)\subset\CX(\BZ_p).$$ Then it follows from Theorem \ref{thm:nonarchTmeas}, Equation \eqref{eq:modelmeasurecomp} and Theorem \ref{thm:locsolfibre} (2a) that there exists $\triangle(f)>0$ such that
\begin{align*}
\sum_{\nu\leftrightarrow p\leqslant N,p\not\in S}\frac{\omega_{\operatorname{tor},\nu}^X(\Theta_\nu^c)}{\omega_{\operatorname{tor},\nu}^X(\Theta_\nu)}\log p &=\sum_{p\leqslant N,p\not\in S}\frac{\#\left(\CX(\BZ/p^2\BZ)\setminus\left(\widetilde{f}(\CY(\BZ_p))\mod p^2\right)\right)}{\#\left(\widetilde{f}(\CY(\BZ_p))\mod p^2\right)} \log p \\& \sim \triangle(f)\log N.
\end{align*}
So an application of Wirsing's theorem (cf. \cite[Lemma 3.11]{Browning-Loughran}) shows that for $N\geqslant 1$, the function $G$ \eqref{eq:GN} satisfies
$$G(N)=\sum_{\substack{j<N\\ p\mid j\Rightarrow p\notin S}}\mu^2(j)\prod_{p\mid j}\frac{\#\left(\CX(\BZ/p^2\BZ)\setminus\left(\widetilde{f}(\CY(\BZ_p))\mod p^2\right)\right)}{\#\left(\widetilde{f}(\CY(\BZ_p))\mod p^2\right)}\asymp (\log N)^{\triangle(f)}.$$ So it follows from Theorems \ref{thm:Selbergsieve} and \ref{thm:mainequidist} that in this case
$$\CN_{\operatorname{loc}}(f;B)\leqslant \CN_X(\CE[(\Theta_\nu)];B)\ll_\varepsilon \frac{B(\log B)^{r-1}}{(\log N)^{\triangle(f)}}+N^{4(r+2\dim X)+2+\varepsilon}B(\log B)^{r-\frac{5}{4}}\log\log B.$$
We take $N=(\log B)^\lambda$ where $0<\lambda<(4(4r+8\dim X+2))^{-1}$ to finish the proof. \end{proof}
\begin{proof}[Proof of Theorem \ref{thm:ratptsfibration} (2b)]
We first prove that the family $(f(Y(\BQ_\nu))\subset X(\BQ_\nu))_{\nu\in\operatorname{Val}(\BQ)}$ consists of $\nu$-adic measurable sets and satisfies \eqref{eq:posmeas}. The measurability of each such $\nu$-adic set follows from the theorems of Tarski and Seidenberg (at the real place) and of Macintyre (at the non-archimedean places), which show that it is semi-algebraic. By considering the smooth locus of $f$, since $Y(\BQ_\nu)\neq\varnothing$, the image $f(Y(\BQ_\nu))\subset X(\BQ_\nu)$ contains a non-empty open $\nu$-adic neighbourhood. Since locally the $\nu$-adic Tamagawa measure is glued from local measures absolutely continuous with respect to the standard $\nu$-adic Lebesgue measure on local charts, this shows that $\omega^X_{\operatorname{tor},\nu}(f(Y(\BQ_\nu)))>0$. See the proof of \cite[Corollary 3.9]{BBL} (which is stated with $X=\BA^n$ or $\BP^n$, but the same argument works for an arbitrary algebraic variety as base).
On applying Theorem \ref{thm:locsolfibre} (2b), we obtain a Zariski closed subset $Z\subset X$ of codimension at least two satisfying the following property. Let us take $\CZ$ to be the Zariski closure of $Z$ in $\CX$ and write $\CW :=\CX\setminus\CZ$. Then for every sufficiently large prime $p$, if a point $P\in X(\BQ)$ is such that $P\not \in f(Y(\BQ_p))$, then since $\CW(\BZ_p)\subset f(Y(\BQ_p))$, by Lemma \ref{le:preparation}, the lift $\widetilde{P}\in\CX(\BZ)$ satisfies $\widetilde{P}\mod p\in\CZ(\BF_p)$. By Theorem \ref{thm:maingeomsieve}, the condition \textbf{(GS)} is verified for the family $(f(Y(\BQ_\nu))\subset X(\BQ_\nu))_{\nu\in\operatorname{Val}(\BQ)}$.
So it follows directly from Theorem \ref{thm:keyOmega} that
$$\#\{P\in\CT_{O}(\BQ):H_{\operatorname{tor}}(P)\leqslant B,f^{-1}(P)(\RA_\BQ)\neq\varnothing\}\sim \alpha(X)\omega_{\operatorname{tor}}^X\left(\prod_{\nu\in\operatorname{Val}(\BQ)}f(Y(\BQ_\nu))\right)B(\log B)^{r-1}.$$
This shows that the asymptotic formula in Theorem \ref{thm:ratptsfibration} (2b) holds with $$\mathfrak{G}(f)=\omega_{\operatorname{tor}}^X\left(\prod_{\nu\in\operatorname{Val}(\BQ)}f(Y(\BQ_\nu))\right).\qedhere$$
\end{proof}
\begin{remark}\label{rmk:effectivity}
The condition \textbf{(EE)} confirmed by Theorem \ref{thm:mainequidist} and the effective version of the condition \textbf{(GS)} (cf. Remark \ref{re:effectiveGS}) furnished by Theorem \ref{thm:maingeomsieve} are stronger than the hypotheses in Theorem \ref{thm:keyOmega}. By adapting the strategy in \cite[\S3]{Cao-Huang2}, we can in fact provide effective error terms (depending on the family $(\Omega_\nu)$ and on the functions $h_1,h_2$) in the asymptotic formula \eqref{eq:asympCS}. Consequently, the error terms in the asymptotic formulas of Principle \ref{prin:purity} as well as \eqref{eq:everylocdensity} can all be made explicit. We choose not to pursue this in the present article.
\end{remark}
\section*{Acknowledgements}
We thank Tim Browning for valuable suggestions and encouragement, and we are grateful to Yang Cao for generously sharing numerous ideas since our collaborations \cite{Cao-Huang1,Cao-Huang2}. We thank Daniel Loughran and Florian Wilsch for their comments. The hospitality of Max-Planck-Institut für Mathematik in Bonn during the summer of 2021 is greatly appreciated. This work is supported by the FWF grant No. P32428-N35.
Impress Online Privacy Policy
We at Impress Online respect your privacy and value the relationship we have with you. Your visit to this website is subject to this Privacy Policy and our Terms and Conditions.
This Privacy Policy describes the types of personal information we collect at Impress Online. It does not govern any other information or communications that may reference Impress, e.g., communications from Impress counters located within brick and mortar retail stores.
Marketing Emails
If you so elect, the information you provide may be used by Impress Online to create and deliver to you emails such as our newsletters, surveys or other email messages containing product and event information, cosmetics tips or promotions (“Emails” ). If you do elect to receive them and later decide that you would no longer like to receive these Emails, see the
”??????” Section below. You may receive the benefit of hearing from Impress Online via mail or phone even if you have not elected to receive Emails. If you do not wish to receive these communications, see the “??????” Section below.
Customized Service
We may use your personal information to provide you with customized service and use of our Site. For example, we may suggest products that would be of particular interest to you.
Special Events, Questionnaires and Surveys
On occasion, Impress. In addition, we do not sell or otherwise disclose personal information about our Site visitors except as described here.
Impress Companies
We may share your personal information with our Affiliates that distribute and market Impress products (the “Impress Companies”). Impress Companies may use this in accordance with this Privacy Policy. If you prefer that we not share your personal information with Impress Companies, please do not provide it to us. We are unable to offer the benefits of the Impress Online to anyone who does not consent to the sharing of their personal information with Impress Companies. at: [email protected], Impress info@rxskinclinic [email protected]. [email protected]
In addition, if you would like to change your other preferences, e.g, with respect to the transfer of data, please contact us at [email protected].
We have taken great measures to ensure that your visit to our Site is an excellent one and that your privacy is constantly respected. If you have any questions, comments or concerns about our privacy practices, please contact us by e-mail at [email protected] or call 1-480-483-8902 (10:00am – 5:00pm Mtn Standard Time, Monday – Friday). | 417,413 |
What started as an independent crowdfunding campaign that exceeded its goal by selling over $500,000 worth of systems within a month soon became a competitor in home security.
Based in Chicago, Illinois, Scout was born out of a desire to create a home security product that wasn’t outdated or expensive, an honorable cause for sure. With a growing list of components and accessories to complement the hub, Scout strengthens its defenses the more you build it up.
Does more mean better? Extra components certainly boost your home’s security, but they also increase costs. Some good news for your wallet, though: whether you need a sensor on every window and door or just the basic essentials, Scout’s appeal comes from its customization, letting customers decide how much they are willing to invest in building it to meet their needs.
In our review, we outline our impressions and findings on the Scout Alarm security system in detail.
Scout Home Security Packages
Scout Home Security has four security packages for consumers to choose from. If the bundles don’t include what you need, or if you need additional items, Scout offers a fifth option to build your own package.
“Scout Small Pack” Bundle: This is the firm’s most economical system and includes eight items. It includes the pieces required to start your DIY security system, including motion and entry sensors. The pack also features two key fobs so you can conveniently arm and disarm the system. This pack is best suited for those who want basic security coverage or live in a smaller home.
“Scout Large Pack” Bundle: If you need to protect a larger area in your home or your home has multiple entry points, the Scout Large Pack supplies a security solution that covers all your bases. Equipped with more than twice the entry sensors of the Scout Small Pack, this package gives you peace of mind that all areas of your home will remain secure.
“Scout Elements Pack” Bundle: This package pairs home security with disaster prevention. The bundle comes with a smoke detector and a water sensor to help you catch costly fire or flooding damage before it gets out of hand.
“Scout Engineer Pack” Bundle: This is designed to be Scout’s most comprehensive home security offering. Pairing all the devices included in the Scout Small Pack, the Engineer Pack adds extra security measures like a smart lock and a glass-break sensor.
“Build Your System” Bundle: For consumers who want a more customized alarm system for their home, Scout offers the ability to create your own system. Once you choose your Scout Hub, you can pick as many devices as you want and build a system to fit your home security needs. You can select from the following 18 items when you build your own system: door panel, entry sensor, motion sensor, Scout indoor camera, Scout video doorbell, keypad, smoke detector, water sensor, glass-break sensor, door lock, yard sign, panic button, remote control, alarm, Zigbee repeater, window sticker, key fob, and an RFID sticker.
Scout pricing
Even if you don’t opt in to professional monitoring, you still need to pay a minimum of $10 per month. Essentially, you’re paying to use the app, which we think is somewhat ridiculous. Many apps, security or otherwise, have some kind of free tier.
Agreements
As with many DIY systems, there’s no contract with Scout. Pricing is month to month.
You can choose to make a 12-month commitment in exchange for a 10% discount. But even then, Scout will give you a prorated refund if you need to cancel. That’s a pretty sweet deal in an industry known for wringing steep cancellation fees out of customers.
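As a quick sanity check on the numbers above, here is a small Python sketch of the pricing math. The $10-per-month base fee and the 10% annual-commitment discount come from this review; the simple linear proration formula is our own assumption for illustration, not Scout’s published terms.

```python
# Pricing math from the review: $10/month base fee, 10% discount
# for a 12-month commitment. The proration formula (refund unused
# whole months at the discounted rate) is an assumption, not
# Scout's published policy.

def discounted_monthly(base_monthly, discount=0.10):
    """Monthly rate after the annual-commitment discount."""
    return base_monthly * (1 - discount)

def prorated_refund(base_monthly, months_used, discount=0.10):
    """Refund on cancelling an annual commitment after `months_used`
    months, assuming 12 discounted months were prepaid."""
    monthly = discounted_monthly(base_monthly, discount)
    return monthly * 12 - monthly * months_used

print(discounted_monthly(10.0))   # 9.0
print(prorated_refund(10.0, 5))   # 63.0
```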
Fees
You install Scout yourself, so you don’t have to pay an installation fee. And if you want to move, you can take everything down and reinstall it on your own. It’s a pain, to be sure, but it’s free.
Is Scout right for you?
Scout is great for no-contract self-monitoring and simple DIY installation. We also like that its professional monitoring costs less than the industry average of $32 a month. Here are some good reasons to choose Scout over other security systems:
Cellular backup: Scout’s cellular backup is unique compared to other security systems because it doesn’t require a professional monitoring plan. This means you have a reliable way to access your system through the Scout app without delegating your security to a monitoring center. (There’s also a battery backup to keep your system running for up to 12 hours.)
Text and phone call alerts: While Scout sends push notifications to a smartphone, its text message and phone call alerts make it an excellent choice for people who don’t use smartphones. Text and call alerts also give you a backup in case you miss a push notification.
Straightforward guest access (useful for those wanting to give short-term access to guests).
Equipment: Scout designs attractive devices that cover most security needs. The system is compatible with several smart home platforms like Amazon Alexa, Google Assistant†, Z-Wave, and Zigbee.
Expansion and portability: This is a great system for people planning to move from a smaller home to a larger one because it’s easy to take with you and add additional sensors as needed.
Scout Security Customer Service.
Scout customer support is available via phone at 844-287-2688 or can be reached at [email protected]. Customer support is available from 9:00 AM to 6:00 PM CST, except on major holidays. Scout also provides separate 24/7 monitoring, which is in effect every day of the year, at all times of day.
Scout Security does not yet have a rating with the Better Business Bureau, as the company is a newer one. Scout customer reviews show the company has been responsive and willing to work with customers who experience occasional reliability problems with Scout components.
We also found Scout Security’s customer service to be helpful when we contacted the company to test various elements of its service. Our Security Nerd evaluated each of the following processes:
Sales: We called customer service to ask about purchasing a security system and found the support staff to be knowledgeable and friendly but not pushy.
Design: We used the Scout website to build a customized security package.
Service: We contacted customer support to see how long it took to get a representative on the phone and were given quick answers to our questions.
Scout Alarm System FAQs.
What does the Google and Nest merger mean for Scout Alarm?
If you’ve already merged your Nest and Google accounts, customer support says the Nest integrations won’t work with your Scout system. We’ll keep you updated on movement in this space as we see it. Scout is undertaking work to revamp its systems to fit the Works with Google Assistant program (which will replace Works with Nest).
How does Scout Alarm protect my data?
Scout says all video footage is encrypted when sent to the cloud, and Roberts says all messaging is sent with AES 128-bit encryption. “I would say that customers can rest easy with Scout and know there’s no ad platform, there’s no other product that would benefit from the data of what’s going on inside your house, and we’re very sensitive about that,” said Roberts. Since Scout is so easily integrated with other devices, it’s still helpful to read the privacy policies and terms and conditions of other third parties to better understand what rights each company has to your data.
What’s on the horizon for Scout?
Roberts says Scout does not have a team of A.I. experts to enhance video as Nest does. This is one reason why Scout partners with other major players; however, more Scout-specific hardware, including cameras, may be on the way.
“Without being terribly specific about our roadmap and what devices are coming out, we want to expand the perimeter.
Obviously, you want to protect what’s going on inside, but once someone’s inside, they’re already in the house. I think that there are some outdoor camera products that are really interesting to expand the perimeter and to work in tandem with the stuff that’s going on in your house.”
I have most of my posts for the month already written and scheduled so that I won’t be stressing about writing something every day, but I decided to change things around a bit and actually write tonight. Today was difficult for me and I felt like I should share it with you. My goal, after all, is to help someone who may be struggling, and to do that I feel the need to talk about my own struggles.
For those who have been participating since the beginning, this was our 3rd day unplugged. I have only been on social media at nighttime, and not for very long then. Saturday and Sunday were pieces of cake, but we were really busy and were surrounded by people.
Today, however, was a different story. Everyone went to work, leaving the kids and myself at home. I am a stay-at-home-mom, so of course I don’t have a problem with that, but we were kind of stranded at the house today. We were all tired from a busy weekend and the weather has been horrible, so we couldn’t even have anyone over.
Our day began really early, and I had a feeling it was going to be a tough one…and it was. I had the urge SO many times to check on Facebook or Instagram really quickly, but I didn’t. I was bored, tired of talking to the kids, and just…grumpy. Normally when I am feeling like this, I would zone out online for a little while, but I couldn’t do that today. When the sun finally set and I could log onto Facebook, I realized something: I had missed nothing. Facebook really is a waste of time for me.
Instead of spending precious time on the Internet today, I got so much done around the house that I had been putting off. The kids and I went through their toy boxes and cleaned out Maddie’s closet. I ironed clothes for Jason, and the kids and I even watched a movie in the middle of the day. It feels so good to have those things done, and I honestly wouldn’t have done them if I had been online.
Our nighttime routine has run much more smoothly tonight, and I don’t feel stressed out at all right now.
I see the point of unplugging, and I hope you do, too. We need to pay attention to how we are spending our time. I am sharing this not to brag about what I accomplished today, but to tell you that it was hard for me today. Now I see that it was worth it, and I know that tomorrow will be easier.
No additional challenge today: just continue on with what we have started.
\begin{document}
\title{The talented monoid of a Leavitt path algebra}
\author{Roozbeh Hazrat}
\address{Roozbeh Hazrat:
Centre for Research in Mathematics\\
Western Sydney University\\
Australia} \email{[email protected]}
\author{Huanhuan Li}
\address{
Huanhuan Li: Centre for Research in Mathematics\\
Western Sydney University\\
Australia} \email{[email protected]}
\subjclass[2010]{18B40,16D25}
\keywords{Leavitt path algebra, graded Grothendieck group, graded ring, graph monoid}
\date{\today}
\begin{abstract}
There is a tight relation between the geometry of a directed graph and the algebraic structure of a Leavitt path algebra associated to it. In this note, we show a similar connection between the geometry of the graph and the structure of a certain monoid associated to it. This monoid is isomorphic to the positive cone of the graded $K_0$-group of the Leavitt path algebra which is naturally equipped with a $\mathbb Z$-action. As an example, we show that a graph has a cycle without an exit if and only if the monoid has a periodic element. Consequently a graph has Condition (L) if and only if the group $\mathbb Z$ acts freely on the monoid. We go on to show that the algebraic structure of Leavitt path algebras (such as simplicity, purely infinite simplicity, or the lattice of ideals) can be described completely via this monoid. Therefore an isomorphism between the monoids (or graded $K_0$'s) of two Leavitt path algebras implies that the algebras have similar algebraic structures.
These all confirm that the graded Grothendieck group could be a sought-after complete invariant for the classification of Leavitt path algebras.
\end{abstract}
\maketitle
\section{Introduction}
The theory of Leavitt path algebras has sparked a substantial amount of activity in recent years culminating in finding, on the one hand, a complete algebraic structure of these algebras via the geometry of the associated graphs and in finding, on the other hand, a complete invariant for the classification of the algebras. The first two papers in the subject appeared in 2005 and 2006 \cite{abrams2005, ara2006}. The first paper \cite{abrams2005} gave a graph criterion for when these algebras are simple and the second paper \cite{ara2006} proved that the non-stable $K$-theory of these algebras can be described via a natural monoid associated to their graphs. In this paper we tie these two threads together by showing how the geometry of a graph is closely related to the structure of the \emph{graded} monoid of the associated Leavitt path algebra.
Let $E$ be a (row-finite) directed graph, with vertices denoted by $E^0$ and edges by $E^1$. The monoid $M_E$ considered in \cite{ara2006} is defined as the free abelian monoid over the vertices subject to identifying a vertex with the sum of vertices it arrives at by the edges emitting from it:
\begin{equation*}
M_E= \Big \langle \, v \in E^0 \, \, \Big \vert \, \, v= \sum_{v\rightarrow u} u \, \Big \rangle.
\end{equation*}
It was proved in \cite{ara2006}, using Bergman's machinery, that $M_E \cong \mathcal V (L_F(E))$. Here $\mathcal V(L_F(E))$ is the monoid of finitely generated projective modules of the Leavitt path algebra $L_F(E)$, with coefficients in a field $F$. Thus the group completion of $M_E$ retrieves the Grothendieck group $K_0(L_F(E))$. For half a century, this group has played a key role in the classification of $C^*$-algebras and in particular graph $C^*$-algebras, which are the analytic counterpart of Leavitt path algebras~\cite{tomforde1}.
The ``graded'' version of this monoid is defined as
\begin{equation*}
\Mn= \Big \langle \, v(i), v \in E^0, i \in \mathbb Z \, \, \Big \vert \, \, v(i)= \sum_{v\rightarrow u} u(i+1) \, \Big \rangle.
\end{equation*}
Note that the only difference from the monoid $M_E$ is that we index the vertices by $\mathbb Z$ and keep track of the transformations. There is a natural action of $\mathbb Z$ on $\Mn$: the action of $n\in \mathbb Z$ on $v(k)$ is defined by $v(k+n)$ and denoted by ${}^n v(k)$. It was proved
in \cite{ahls} that $\Mn \cong \mathcal V^{\gr} (L_F(E))$ (see also Remark \ref{rmk}). Here $ \mathcal V^{\gr} (L_F(E))$ is the monoid of graded finitely generated projective modules of the Leavitt path algebra $L_F(E)$. Thus the group completion of $\Mn$ is the graded Grothendieck group $K^{\gr}_0(L_F(E))$. The action of $\mathbb Z$ on $\Mn$ corresponds to the shift operation on graded modules over the Leavitt path algebra $L_F(E)$, which is naturally a $\mathbb Z$-graded ring.
The aim of this note is to show that there is a beautiful and close relation between the geometry of a graph $E$ and the monoid structure of $\Mn$ parallel to the correspondence between the algebraic structure of $L_F(E)$ and the geometry of the graph $E$, as the figure below indicates.
\begin{center}
\tikzstyle{decision} = [diamond, draw, fill=red!50]
\tikzstyle{line} = [draw, -stealth, thick]
\tikzstyle{lined} = [draw, bend right=45, thick, dashed]
\tikzstyle{elli}=[draw, ellipse, top color=white, bottom color=white ,minimum height=8mm, text width=6.3em, text centered]
\tikzstyle{elli2}=[draw, ellipse ,minimum height=8mm, text width=6.3em, text centered]
\tikzstyle{elli3}=[draw, ellipse, top color=white, bottom color=blue!90 ,minimum height=8mm, text width=6.3em, text centered]
\tikzstyle{block} = [draw, rounded corners, rectangle, text width=8em, text centered, minimum height=15mm, node distance=7em]
\tikzstyle{block2} = [draw, rounded corners, rectangle, top color=white!80, bottom color=white, text width=8em, text centered, minimum height=15mm, node distance=7em]
\begin{tikzpicture}[scale=0.7, transform shape]
\GraphInit[vstyle = Shade]
\node[block] (graph) {\bf Geometry of the graph $\pmb E$};
\node[elli2, below of=graph, xshift=-12em, yshift=-3em] (ring) { \bf Algebraic structure of $\pmb {L_F(E)}$};
\node[elli2, below of=graph, xshift=12em, yshift=-3em] (monoid) {\bf Monoid structure of $\pmb \Mn$};
\tikzset{
EdgeStyle/.append style = {<->, thin, dotted}
}
\Edge (ring)(monoid)
\tikzset{
EdgeStyle/.append style = {<->, bend right, dashed}
}
\Edge (graph)(ring)
\tikzset{
EdgeStyle/.append style = {<->, bend left}
}
\Edge (graph)(monoid)
\end{tikzpicture}
\end{center}
In turn, this shows that if the graded monoids of two Leavitt path algebras are isomorphic, then the algebraic properties of one algebra induce the same properties in the other, via the graded monoid $M^{\gr}$ as a bridge.
Any monoid is equipped with a pre-ordering: $a \leq b$ if $b=a+c$ for some $c$. We will see that the pre-order structure of $\Mn$ of the graph $E$ determines the graded structure of the Leavitt path algebra $L_F(E)$, whereas the action of the group $\mathbb Z$ gives information about the non-graded structure.
We show that the property of a graph $E$ having cycles with or without exits can be translated into properties of the orbits of the action of the group $\mathbb Z$ on $\Mn$. Specifically, we prove that a graph $E$ has a cycle without an exit if and only if there is an element $a\in \Mn$ such that ${}^na = a$, for some nonzero $n\in \mathbb Z$ (Proposition~\ref{goldenprop}). Consequently, the graph has Condition (L) if and only if $\mathbb Z$ acts freely on $\Mn$ (Corollary~\ref{conLm}).
We go further to show that the (non-graded) algebraic structure of $L_F(E)$ (such as simplicity or purely infinite simplicity) can be described completely by the orbits of the $\mathbb Z$-action on $\Mn$ (Corollary~\ref{conKm}). We conclude the paper by proving that a $\mathbb Z$-module isomorphism between the monoids preserves important structures of the corresponding Leavitt path algebras (Theorem~\ref{mainthemethe}).
The fact that the algebraic structure of the Leavitt path algebra $L_F(E)$ can be read off from the monoid $\Mn$ should not come as a surprise if $\Mn$ were to be a complete invariant for these algebras. In fact it was conjectured in \cite[Conjecture~1]{roozbehhazrat2013} that the graded Grothendieck group $K_0^{\gr}$, along with its ordering and its module structure, is a complete invariant for the class of (finite) Leavitt path algebras (see also~\cite{arapardo}, \cite[\S~7.3.4]{AAS}).
\begin{conjecture}\label{conj1}
Let $E_1$ and $E_2$ be finite graphs and $F$ a field. Then the following are equivalent.
\begin{enumerate}[\upshape(1)]
\item There is a $\mathbb Z$-module isomorphism $\phi: M^{\gr}_{E_1} \rightarrow M^{\gr}_{E_2}$, such that $\phi\big (\sum_{v\in E_1^0} v\big)=\sum_{v\in E_2^0} v$;
\smallskip
\item There is an order-preserving $\Z[x,x^{-1}]$-module isomorphism
\begin{align*}
K_0^{\gr}(L_F(E_1)) &\longrightarrow K_0^{\gr}(L_F(E_2)),\\
[L_F(E_1)] &\longmapsto [L_F(E_2)].
\end{align*}
\medskip
\item There is a graded ring isomorphism $\varphi:L_F(E_1) \rightarrow L_F(E_2)$.
\end{enumerate}
\end{conjecture}
Note that since $K^{\gr}_0(L_F(E))$ is the group completion of $\Mn$, the equivalence (1) $\Leftrightarrow$ (2) is immediate.
\section{Graph monoids}
In this section we briefly introduce the notion of a directed graph and the monoid $M_E$ associated to it. We then introduce the ``graded'' version of this monoid, $\Mn$.
We refer the reader to the recent monograph \cite{AAS} for the theory of Leavitt path algebras and a comprehensive study of the monoid $M_E$. The monoid $\Mn$ was first considered in \cite{roozbehhazrat2013} (see also \cite[\S3.9.2]{hazi}) as the positive cone of the graded Grothendieck group $K^{\gr}_0(L_F(E))$ and further studied in~\cite{haz3,ahls}.
\subsection{Graphs}\label{graphsec}
A directed graph $E$ is a tuple $(E^{0}, E^{1}, r, s)$, where $E^{0}$ and $E^{1}$ are
sets and $r,s$ are maps from $E^1$ to $E^0$. A graph $E$ is finite if $E^0$ and $E^1$ are both finite. We think of each $e \in E^1$ as an edge
pointing from $s(e)$ to $r(e)$. We use the convention that a (finite) path $p$ in $E$ is
a sequence $p=\a_{1}\a_{2}\cdots \a_{n}$ of edges $\a_{i}$ in $E$ such that
$r(\a_{i})=s(\a_{i+1})$ for $1\leq i\leq n-1$. We define $s(p) = s(\a_{1})$ and $r(p) =
r(\a_{n})$.
If there is a path from a vertex $u$ to a vertex $v$, we write $u\ge v$. A subset $M$ of $E^0$ is \emph{downward directed} if for any two $u,v\in M$ there exists $w\in M$ such that $u\geq w$ and $v\geq w$ (\cite[\S4.2]{AAS}, \cite[\S2]{rangaswamy}).
We will use the following two notations throughout: for $v\in E^0$,
\[T(v):=\{w\in E^0 \mid v\geq w \} \, \text{ and } \, M(v):=\{w\in E^0 \mid w \geq v \}.\]
A graph $E$ is said to be \emph{row-finite} if for each vertex $u\in E^{0}$,
there are at most finitely many edges in $s^{-1}(u)$. A vertex $u$ for which $s^{-1}(u)$
is empty is called a \emph{sink}, whereas $u\in E^{0}$ is called an \emph{infinite
emitter} if $s^{-1}(u)$ is infinite. If $u\in E^{0}$ is neither a sink nor an infinite
emitter, then it is called a \emph{regular vertex}.
Following now the standard notation (see \cite[\S 2.9]{AAS}), we denote by $E^\infty$ the set of all infinite paths and by $E^{\leq \infty}$ the set $E^\infty$ together with the set of finite paths in $E$ whose range vertex is a singular vertex (i.e., a sink or an infinite emitter). For a vertex $v\in E^0$, we denote by $vE^{\leq \infty}$ the set of paths in $E^{\leq \infty}$ that start from the vertex $v$.
We define the ``local'' version of cofinality which we will use in \S\ref{seciv}. We say a vertex $v\in E^0$ is \emph{cofinal with respect to} $w\in E^0$ if for every $\alpha \in wE^{\leq \infty}$, there is a path from $v$ which connects to a vertex in $\alpha$. We say a vertex $v$ is \emph{cofinal} if it is cofinal with respect to every other vertex. Finally, we say the graph $E$ is \emph{cofinal} if every vertex is cofinal. The concept of a cofinal graph was originally used to give a criterion for the simplicity of graph algebras (see \cite[Theorem~2.9.7]{AAS} and \cite[\S5.6]{AAS}).
We say a vertex $v\in E^0$ has \emph{no bifurcation} if for any $u\in T(v)$, $| s^{-1}(u)| \leq 1$. We say $v$ is a \emph{line-point} if $v$ has no bifurcation and no vertex in $T(v)$ lies on a cycle. Note that by our definition, a sink is a line-point (see \cite[\S 2.6]{AAS}).
Throughout this note the graphs we consider are row-finite. The reason is twofold: the proofs are rather more transparent in this case, and the conjecture that the graded Grothendieck group for Leavitt path algebras is a complete invariant (Conjecture~\ref{conj1}) was originally formulated for finite graphs. We think one could generalise the results of the paper to arbitrary graphs and possibly other Leavitt path-like algebras, such as weighted Leavitt path algebras.
Throughout the paper we will heavily use the concept of the covering of a graph. The \emph{covering graph} $\overline E$ of $E$ (also denoted by $E\times_1 \mathbb Z$) is given by
\begin{gather*}
\overline E^0 = \big\{v_n \mid v \in E^0 \text{ and } n \in \Z \big\},\qquad
\overline E^1 = \big\{e_n \mid e\in E^1 \text{ and } n\in \Z \big\},\\
s(e_n) = s(e)_n,\qquad\text{ and } \qquad r(e_n) = r(e)_{n+1}.
\end{gather*}
As examples, consider the following graphs
\begin{equation*}
{\def\labelstyle{\displaystyle}
E : \quad \,\, \xymatrix{
u \ar@(lu,ld)_e\ar@/^0.9pc/[r]^f & v \ar@/^0.9pc/[l]^g
}} \qquad \quad
{\def\labelstyle{\displaystyle}
F: \quad \,\, \xymatrix{
u \ar@(ur,rd)^e \ar@(u,r)^f
}}
\end{equation*}
Then
\begin{equation}\label{level421}
\xymatrix@=15pt{
& \text{\bf Level -1} && \text{\bf Level 0} && \text{\bf Level 1} && \text{\bf Level 2}\\
\overline E: & \dots {u_{-1}} \ar[rr]^-{e_{-1}} \ar[drr]^(0.4){f_{-1}} && {u_{0}} \ar[rr]^-{e_0} \ar[drr]^(0.4){f_0} && {u_{1}} \ar[rr]^-{e_{1}} \ar[drr]^(0.4){f_{1}} && \cdots\\
& \dots {v_{-1}} \ar[urr]_(0.3){g_{-1}} && {v_{0}} \ar[urr]_(0.3){g_0} && {v_{1}} \ar[urr]_(0.3){g_{1}}&& \cdots
}
\end{equation}
and
\begin{equation}\label{level422}
\xymatrix@=15pt{
\overline{F}: \quad \,\,&\dots {u_{-1}} \ar@/^0.9pc/[rr]^{f_{-1}} \ar@/_0.9pc/[rr]_{e_{-1}} && {u_{0}} \ar@/^0.9pc/[rr]^{f_0} \ar@/_0.9pc/[rr]_{e_0} && {u_{1}} \ar@/^0.9pc/[rr]^{f_{1}} \ar@/_0.9pc/[rr]_{e_{1}} && \quad \cdots
}
\end{equation}
Notice that for any graph $E$, the covering graph $\overline E$ is an acyclic \emph{stationary} graph, meaning that the graph repeats the same pattern going from ``level'' $n$ to ``level'' $n+1$. This fact will be used throughout the article.
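For experimentation, the covering construction is easy to implement directly from the definition. The following Python sketch is our own illustration (the dictionary representation of graphs and all names are ours, not notation from the literature); it builds a finite window of $\overline E$ for the graph $E$ above and checks that every edge moves up exactly one level, which is why no cycle can close up in $\overline E$.

```python
# A minimal sketch (our own notation): a graph is a dict mapping each
# edge name to a (source, range) pair of vertices.  The covering graph
# keeps one copy of every vertex and edge per integer "level", with an
# edge at level n running from s(e)_n to r(e)_{n+1}.

def covering(graph, levels):
    """Return the finite window of the covering graph on the given levels."""
    cov = {}
    for n in levels:
        for e, (s, r) in graph.items():
            cov[(e, n)] = ((s, n), (r, n + 1))
    return cov

# The graph E from the example above: a loop e at u, and edges
# f: u -> v and g: v -> u.
E = {"e": ("u", "u"), "f": ("u", "v"), "g": ("v", "u")}
barE = covering(E, range(-1, 2))

# Every edge goes up exactly one level, so the covering is acyclic.
assert all(n2 == n1 + 1 for ((_, n1), (_, n2)) in barE.values())
```

Since each edge strictly increases the level, any walk in $\overline E$ visits pairwise distinct vertices; this is the acyclicity used throughout.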
Recall that a subset $H \subseteq E^0$ is said to be \emph{hereditary} if
for any $e \in E^1$ we have that $s(e)\in H$ implies $r(e)\in H$. A hereditary subset $H
\subseteq E^0$ is called \emph{saturated} if whenever $0 < |s^{-1}(v)| < \infty$, then $\{r(e)\mid
e\in E^1 \text{~and~} s(e)=v\}\subseteq H$ implies $v\in H$. Throughout the paper we work with hereditary saturated subsets of $E^0$.
For hereditary saturated subsets $H_1$ and $H_2$ of $E$ with $H_1 \subseteq H_2$, define the quotient graph $H_2 / H_1 $ as a graph such that
$(H_2/ H_1)^0=H_2\setminus H_1$ and $(H_2/H_1)^1=\{e\in E^1\;|\; s(e)\in H_2, r(e)\notin H_1\}$. The source and range maps of $H_2/H_1$ are restricted from the graph $E$. If $H_2=E^0$, then $H_2/H_1$ is the \emph{quotient graph} $E/H_1$ (\cite[Definition~2.4.11]{AAS}).
\subsection{Graph Monoids} \label{monsec}
Let $M$ be an abelian monoid with a group $\Gamma$ acting on it. For $\alpha \in \Gamma$ and $a\in M$, we denote the action of $\alpha$ on $a$ by ${}^\alpha a$.
A monoid homomorphism $\phi:M_1 \rightarrow M_2$ is called $\Gamma$-\emph{module homomorphism} if $\phi$ respects the action of $\Gamma$, i.e., $
\phi({}^\alpha a)={}^\alpha \phi(a)$. We define a pre-ordering on the monoid $M$ by $a\leq b$ if $b=a+c$, for some $c\in M$.
Throughout we write $a \parallel b$ if the elements $a$ and $b$ are not comparable. An element $a\in M$ is called an \emph{atom} if $a=b+c$ implies $b=0$ or $c=0$. An element $a\in M$ is called \emph{minimal} if $b\leq a$ implies $a\leq b$. When $M$ is conical and cancellative, these notions coincide with the more intuitive definition of minimality, i.e., $a$ is minimal if $0\not = b\leq a$ implies $a=b$. The monoid of interest in this paper, $\Mn$, is conical and cancellative and thus all these concepts coincide.
Throughout we assume that the group $\Gamma$ is abelian; indeed, in our setting of graph algebras, this group is the group of integers $\mathbb Z$. We use the following terminology throughout this paper: we call an element $a\in M$ \emph{periodic} if there is a nonzero $\alpha \in \Gamma$ such that ${}^\alpha a =a$. If $a\in M$ is not periodic, we call it \emph{aperiodic}. We denote the orbit of the action of $\Gamma$ on an element $a$ by $O(a)$, so $O(a)=\{{}^\alpha a \mid \alpha \in \Gamma \}$.
A $\Gamma$-\emph{order-ideal} of a monoid $M$ is a subset $I$ of $M$ such that for any $\alpha,\beta \in \Gamma$, ${}^\alpha a+{}^\beta b \in I$ if and only if
$a,b \in I$. Equivalently, a $\Gamma$-order-ideal is a submonoid $I$ of $M$ which is closed under the action of $\Gamma$ and it is
\emph{hereditary} in the sense that $a \le b$ and $b \in I$ imply $a \in I$. The set $\LL(M)$ of $\Gamma$-order-ideals of $M$ forms a (complete) lattice. We say $M$ is \emph{simple} if the only $\Gamma$-order-ideals of $M$ are $0$ and $M$.
For a ring $A$ with unit, the isomorphism classes of finitely generated projective (left/right) $A$-modules with direct sum as an operation form a monoid denoted by $\mathcal V(A)$.
This construction can be extended to non-unital rings via idempotents. For a $\Gamma$-graded ring $A$, considering graded finitely generated projective modules instead, one obtains the monoid $\mathcal V^{\gr}(A)$, which carries an action of $\Gamma$ via the shift operation on modules (see \cite[\S 3]{hazi} for the general theory).
In this article we consider these monoids when the algebra is a Leavitt path algebra.
Ara, Moreno and Pardo \cite{ara2006} showed that for a Leavitt path algebra associated to a row-finite
graph $E$, the monoid $\mathcal{V}(L_{F}(E))$ is entirely determined by elementary
graph-theoretic data. Specifically, for a row-finite graph $E$, we define $M_E$ to be the
abelian monoid generated by $E^{0}$ subject to
\begin{equation}\label{monoidrelation}
v=\sum_{e\in s^{-1}(v)}r(e),
\end{equation}
for every $v\in E^{0}$ that is not a sink. Theorem~3.5 of~\cite{ara2006} relates this monoid to the theory of Leavitt path algebras:
There is a monoid isomorphism $\mathcal{V}(L_{F}(E)) \cong M_E$.
There is an explicit description of the congruence on the free abelian
monoid given by the defining relations of $M_{E}$ \cite[\S 4]{ara2006}. Let $F_E$ be the free abelian monoid on
the set $E^{0}$. The nonzero elements of $F_E$ can be written in a unique form up to
permutation as $\sum_{i=1}^{n}v_{i}$, where $v_{i}\in E^{0}$. Define a binary relation
$\xra_{1}$ on $F_E\setminus\{0\}$ by
\begin{equation}\label{hfgtrgt655}
\sum_{i=1}^{n}v_{i}\longrightarrow_{1}\sum_{i\neq
j}v_{i}+\sum_{e\in s^{-1}(v_{j})}r(e),
\end{equation}
whenever $j\in \{1, \cdots, n\}$ and
$v_{j}$ is not a sink. Let $\xra$ be the transitive and reflexive closure of $\xra_{1}$
on $F_E\setminus\{0\}$ and $\sim$ the congruence on $F_E$ generated by the relation $\xra$.
Then $M_{E}=F_E/\sim$.
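The relation $\rightarrow_1$ of (\ref{hfgtrgt655}) is a rewriting step on multisets of vertices and can be simulated directly. Below is a minimal Python sketch (our own code; the names and the dictionary encoding of $s^{-1}$ are hypothetical): an element of the free abelian monoid is a multiset of vertices, and one step replaces a single occurrence of a non-sink vertex $v$ by the ranges of the edges it emits.

```python
from collections import Counter

# Sketch of the rewriting ->_1: replace one occurrence of a non-sink
# vertex v by the multiset of ranges of the edges leaving v.  Here
# `out` maps each vertex to the list of range vertices of s^{-1}(v);
# sinks are the vertices with an empty list.

def step(a, v, out):
    assert a[v] > 0 and out[v], "v must occur in a and must not be a sink"
    b = Counter(a)
    b[v] -= 1
    b.update(out[v])
    return +b  # unary + drops zero entries

# A graph with a loop at u and an edge to the sink w: u -> u, u -> w.
out = {"u": ["u", "w"], "w": []}
a = Counter({"u": 1})
b = step(a, "u", out)          # u  ->_1  u + w
c = step(b, "u", out)          # u + w  ->_1  u + 2w
assert b == Counter({"u": 1, "w": 1})
assert c == Counter({"u": 1, "w": 2})
```

Iterating `step` from two elements and comparing the results is exactly the search for the common element $c$ promised by the Confluence Lemma below.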
The following two-part lemma is crucial for our work and we use it frequently throughout the article. For the proof see \cite{ara2006} and \cite[\S 3.6]{AAS}.
\begin{lem}\label{aralem6}
Let $E$ be a row-finite graph, $F_E$ the free abelian monoid generated by $E^0$ and $M_E$ the graph monoid of $E$.
\begin{enumerate}[\upshape(i)]
\item If $a=a_1+a_2$ and $a\rightarrow b$, where $a, a_1, a_2, b \in F_E \backslash \{ 0 \}$, then $b$ can be written as $b=b_1+b_2$ with $a_1\rightarrow b_1$ and $a_2\rightarrow b_2$.
\medskip
\item (The Confluence Lemma) For $a, b \in F_E \backslash \{ 0 \}$, we have $a=b$ in $M_E$ if and only if there is $c \in F_E \backslash \{0 \}$ such that
$a \rightarrow c$ and $b\rightarrow c$.
\end{enumerate}
\end{lem}
For a row-finite graph $E$, we define the ``graded'' version of the monoid $M_E$, and denote it by $\Mn$, to be the
abelian monoid generated by $\{v(i) \mid v\in E^0, i\in \mathbb Z\}$ subject to
\begin{equation}\label{monoidrelation2}
v(i)=\sum_{e\in s^{-1}(v)}r(e)(i+1),
\end{equation}
for every $v\in E^{0}$ that is not a sink and every $i \in \mathbb Z$. The monoid $\Mn$ is equipped with a natural $\mathbb Z$-action: $${}^n v(k)=v(k+n)$$ for $n,k \in \mathbb Z$. Proposition~5.7 of \cite{ahls} relates this monoid to the theory of Leavitt path algebras: there is a $\mathbb Z$-module isomorphism
$\Mn \cong \mathcal{V}^{\gr}(L_{F}(E))$. In fact we have
\begin{equation}\label{yhoperagen}
\Mn \cong\mathcal V(L_F(\overline E))\cong \mathcal V^{\gr}(L_F(E))
\end{equation}
(see also Remark \ref{rmk}). Thus the monoid $\Mn$ is conical and cancellative (\cite[\S5]{ahls}). We will use these facts throughout this paper.
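As a quick illustration of the relation (\ref{monoidrelation2}) and the shift action, consider the following toy example (ours, not taken from the cited sources): for the graph $u \xrightarrow{e} v \xrightarrow{f} w$, where $w$ is a sink, the defining relations give

```latex
u(i) = v(i+1) = w(i+2), \qquad i \in \mathbb{Z},
```

so every generator reduces to some $w(j)$, and no relations remain among the $w(j)$. Hence $\Mn \cong \bigoplus_{\mathbb Z} \mathbb N$, with $\mathbb Z$ acting by shifting the summands; in particular no nonzero element is periodic, matching the fact that this graph is acyclic.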
Throughout the article, we simultaneously use $v\in E^0$ as a vertex of $E$, as an element of $L_F(E)$, and as the element $v=v(0)$ in $\Mn$; the meaning will be clear from the context. For a subset $H\subseteq E^0$, the ideal it generates in $L_F(E)$ is denoted by $I(H)$, whereas the $\mathbb Z$-order-ideal it generates in $\Mn$ is denoted by $\langle H \rangle $.
Let $I$ be a submonoid of the monoid $M$. Define an equivalence relation $\sim_{I}$ on $M$ as follows: for $a, b\in M$, $a\sim_{I} b$ if there exist $i,j\in I$ such that $a+i=b+j$ in $M$. The quotient monoid $M/I$ is defined as $M/\sim_I$. Observe that $a\sim_{I} 0$ in $M$ for any $a\in I$. If $I$ is an order-ideal, then $a\sim_I 0$ if and only if $a\in I$.
There is a natural relationship between the quotient monoids of $\Mn$ and quotient graphs of $E$ as the following lemma shows.
\begin{lem} \label{qmiso} Let $E$ be a row-finite graph. Suppose that $H_1\subseteq H_2$ with $H_1$ and $H_2$ two hereditary saturated subsets of $E^0$. We have a $\mathbb Z$-module isomorphism of monoids
\[M_{H_2/H_1}^{\gr}\cong M_{H_2}^{\gr}/M_{H_1}^{\gr}.\]
In particular, for a hereditary saturated subset $H$ of $E^0$, and the order-ideal $I=\langle H \rangle \subseteq \Mn$, we have a $\mathbb Z$-module isomorphism
\[M_{E/H}^{\gr}\cong M_{E}^{\gr}/I.\]
\end{lem}
\begin{proof}
One can establish this directly; the argument is similar to the non-graded version which has already been established in the literature (see \cite[Proposition~3.6.18]{AAS}).
\end{proof}
\begin{example}
Consider the graph $E$,
\[
\xymatrix{
E : &o \ar@/^0.9pc/[r]^{\alpha} & u \ar@/^0.9pc/[l]^{\beta} \ar@/^1.9pc/[rrr]^{\nu} \ar@/_0.9pc/[r]_{\gamma} & v \ar@/^0.9pc/[rr]^{\mu} && x \ar@/^0.9pc/[ll]^{\delta}}
\]
We do some calculations in the monoid $\Mn$ which prepare us for the theorems in the next section.
Using the relation~(\ref{monoidrelation2}) in $\Mn$,
$o=u(1)=o(2)+v(2)+x(2)$. Thus we have
\[{}^{-2} o >o.\]
This follows because the cycle $\alpha\beta$ in $E$ has an exit. In fact we show in Proposition~\ref{goldenprop} that if there is an $a\in \Mn$ such that ${}^n a > a$ with $n$ a negative integer then there is a cycle with an exit in the graph. On the other hand, we have $v=x(1)=v(2)$, i.e., ${}^2 v =v.$ This is because the cycle $\mu\delta$ has no exit. We further prove in Proposition~\ref{goldenprop} that if there is an $a\in \Mn$ with ${}^n a=a$ then there is a cycle in $E$ without an exit.
Consider now the hereditary saturated subset $\{v,x\}$ and consider the quotient graph $E/ \{v,x\}$ which is
\[
\xymatrix{
E/ \{v,x\} : &o \ar@/^0.9pc/[r]^{\alpha} & u \ar@/^0.9pc/[l]^{\beta}}
\]
Thus in $M^{\gr}_{E / \{v,x\}} \cong M_E^{\gr} / \langle v, x \rangle$ we have
\[{}^{-2} o =o.\]
This follows because the cycle $\alpha\beta$ has all its exits in the hereditary saturated subset $\{v,x\}$. By the theory of Leavitt path algebras this gives non-graded ideals in $L_F(E)$. In Proposition~\ref{propnongra} we show that if for an order-ideal $I$, the quotient $\Mn/I$ has a periodic element, then $L_F(E)$ has non-graded ideals.
\end{example}
\begin{example}
Consider the graph,
\[
\xymatrix@=13pt{
& & \bullet \ar@/^/[dr] \\
E: & \bullet \ar@/^/[ur] &&\bullet \ar@/^/[dl] & \\
& & \bullet \ar@/^/[ul]
}
\]
One can directly show that
\[\Mn \cong \mathbb N\oplus \mathbb N \oplus \mathbb N \oplus \mathbb N,\] with
\[ {}^1 (a,b,c,d)=(b,c,d,a),\]
(see also~\cite[Proposition~3.7.1]{hazi}). Thus every element is periodic of order 4. Clearly for any vertex $v$, the element $v\in \Mn$ is minimal, and the orbit of $v$ is $O(v)=\{{}^i v \mid i \in \mathbb Z \}=E^0$. Note that $|O(v)|=4$ is also the length of the cycle. In Theorem~\ref{mainthemethe} we will show that the set of cycles without exits in $E$ is in one-to-one correspondence with the orbits of minimal periodic elements of $\Mn$, and that the lengths of the cycles equal the orders of the corresponding orbits.
\end{example}
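The $\mathbb Z$-action in this example can be checked mechanically. The following Python sketch (our own illustration, with hypothetical names) realises $\Mn\cong\mathbb N^4$ with the cyclic shift action and verifies that every element is periodic of order $4$ and that the orbit of a vertex has exactly $4$ elements:

```python
# The Z-action on M^gr for the 4-cycle, realised on N^4 as the cyclic
# shift  {}^1 (a,b,c,d) = (b,c,d,a)  (a sketch in our own notation).

def act(n, x):
    """Apply the action of n in Z to the tuple x."""
    n %= len(x)
    return x[n:] + x[:n]

v = (1, 0, 0, 0)                       # the class of a vertex
orbit = {act(n, v) for n in range(8)}

assert act(4, v) == v                  # every element is periodic of order 4
assert len(orbit) == 4                 # |O(v)| equals the length of the cycle
```
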
\begin{example}
The following example shows that the monoid $\Mn$ can have a very rich structure. Consider the graph
\begin{equation*}
{\def\labelstyle{\displaystyle}
\xymatrix{
E: &\bullet \ar@(lu,ld)\ar@/^0.9pc/[r] & \bullet \ar@/^0.9pc/[l]
}}
\end{equation*}
\smallskip
with the adjacency matrix $A_E= \left(\begin{matrix} 1 & 1\\ 1 & 0
\end{matrix}\right)$. We know that the Leavitt path algebra $L_F(E)$ is strongly graded~\cite[Theorem~1.6.15]{hazi}. Thus by Dade's theorem (\cite[Theorem~1.5.1]{hazi}) there is an equivalence of categories $\Grr L_F(E) \cong \Modd L_F(E)_0$. Here $\Grr L_F(E)$ is the category of graded modules over $L_F(E)$ and $\Modd L_F(E)_0$ is the category of modules over the zero-component ring $L_F(E)_0$. This implies that $K^{\gr}_0(L_F(E)) \cong K_0(L_F(E)_0)$, with the positive cones mapping to each other. The zero-component ring $L_F(E)_0$ is the Fibonacci algebra; its $K_0$-group and positive cone are computed in \cite[Example~IV.3.6]{davidson} as the direct limits of
\[ \mathbb Z\oplus \mathbb Z
\stackrel{A_E}{\longrightarrow} \mathbb Z\oplus \mathbb Z
\stackrel{A_E}\longrightarrow \mathbb Z\oplus \mathbb Z
\stackrel{A_E}\longrightarrow \cdots,\]
and
\[ \mathbb N\oplus \mathbb N
\stackrel{A_E}{\longrightarrow} \mathbb N\oplus \mathbb N
\stackrel{A_E}\longrightarrow \mathbb N\oplus \mathbb N
\stackrel{A_E}\longrightarrow \cdots,\]
respectively.
Since $\Mn$ is the positive cone of $K^{\gr}_0(L_F(E))$, we have
\[\Mn = \varinjlim_{A_E} \mathbb N \oplus \mathbb N= \Big \{ (m,n)\in \mathbb Z \oplus \mathbb Z \, \, \Big \vert \, \, \frac{1+\sqrt{5}}{2} m +n \geq 0 \Big \},\]
with the action
\[{}^1 (m,n)=(m,n) \left(\begin{matrix} 1 & 1\\ 1 & 0
\end{matrix}\right)= (m+n,m).\]
In contrast, $M_E= \{0, u\}$, where $u=u+u$.
\end{example}
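One can verify that the action preserves this cone using the identity $\varphi^2=\varphi+1$ for the golden ratio $\varphi=\frac{1+\sqrt 5}{2}$, which gives $\varphi(m+n)+m=\varphi(\varphi m+n)$. The Python sketch below (a numerical illustration of our own, subject to floating-point tolerance) checks this on a few sample points:

```python
PHI = (1 + 5 ** 0.5) / 2   # golden ratio

def in_cone(m, n):
    """Membership in the positive cone {(m, n) : phi*m + n >= 0}
    (numerical check only, up to floating-point tolerance)."""
    return PHI * m + n >= -1e-9

def act(m, n):
    """The action {}^1 (m, n) = (m, n) * A_E = (m + n, m)."""
    return m + n, m

# phi*(m+n) + m = phi*(phi*m + n), so the action preserves the cone.
samples = [(1, 0), (0, 1), (1, -1), (2, -3), (-1, 2), (-1, 1)]
assert all(in_cone(*act(m, n)) == in_cone(m, n) for (m, n) in samples)
```
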
Denote by $\LL^{\gr}\big(L_F(E)\big)$ the lattice of graded ideals of $L_F(E)$. There is a lattice isomorphism between the set $\TT_E$ of hereditary saturated subsets of $E$ and the set $\LL^{\gr}\big(L_F(E)\big)$ (\cite[Theorem~2.5.9]{AAS}). The correspondence is
\begin{align}\label{latticeisosecideal}
\Phi: \TT_E&\longrightarrow \LL^{\gr}\big(L_F(E)\big),\\
H &\longmapsto I(H), \notag
\end{align}
where $H$ is a hereditary saturated subset and $I(H)$ is the graded ideal generated by the set $\big \{v \;|\; v\in H \big \}.$
On the other hand there is a lattice isomorphism between the set $\TT_E$ and the lattice of $\mathbb Z$-order-ideals of $\Mn$~\cite[Theorem 5.11]{ahls}. The correspondence is
\begin{align}\label{latticeisosecideal2}
\Phi: \TT_E&\longrightarrow \LL\big(\Mn\big),\\
H &\longmapsto \langle H \rangle, \notag
\end{align}
where $ \langle H \rangle$ is the order-ideal generated by the set $\big \{v \;|\; v\in H \big \}.$ Combining these two correspondences we obtain a lattice isomorphism
\begin{align}\label{latticeisosecideal3}
\Phi: \LL\big(\Mn\big) &\longrightarrow \LL^{\gr}\big(L_F(E)\big),\\
\langle H \rangle &\longmapsto I(H). \notag
\end{align}
Thus the Leavitt path algebra $L_F(E)$ is a graded simple ring if and only if $\Mn$ is a simple $\mathbb Z$-monoid.
We will frequently use the following two facts. The \emph{forgetful map}
\begin{align}\label{forgthryhr}
\Mn &\longrightarrow M_E,\\
v(i) &\longmapsto v, \notag
\end{align}
relates the graded monoid to the non-graded counterpart. In several of the proofs, we pass the equalities in $\Mn$ to $M_E$ and then use Lemma~\ref{aralem6}. The other key fact is the $\mathbb Z$-module isomorphism of monoids
\begin{align}\label{forgthryhr2}
\Mn &\longrightarrow M_{\overline E},\\
v(i) &\longmapsto v_i, \notag
\end{align}
where $\overline E$ is the covering of the graph $E$ as defined in \S\ref{graphsec}. Again the transition from the graded monoid to $M_{\overline E}$ is a crucial step in several of our proofs, as $\overline E$ is an acyclic stationary graph which repeats the same pattern going from level $n$ to level $n+1$ (see examples (\ref{level421}) and (\ref{level422})).
\begin{rmk}[{\bf The Abrams-Sklar treatment of the Mad Vet}]
As Alfred North Whitehead put it: ``the paradox is now fully established that the utmost
abstractions are the true weapons with which to control our
thought of concrete fact''.
In \cite{abramssklar} Abrams and Sklar demonstrated this abstract approach beautifully by realising that a recreational puzzle, called the Mad Veterinarian puzzle, can be solved by assigning a graph $E$ to the problem and then solving an equation (if possible) in the monoid $M_E$. We give one instance of the puzzle from \cite{abramssklar} and show how the puzzle can naturally be modified so that $\Mn$ becomes its model.
Suppose a Mad Veterinarian has three machines with the following properties.
\begin{itemize}
\item Machine 1 turns one ant into one beaver;
\item Machine 2 turns one beaver into one ant, one beaver and one cougar;
\item Machine 3 turns one cougar into one ant and one beaver.
\end{itemize}
It is also supposed that each machine can operate in reverse. The puzzle then asks, for example, whether one can start with one ant and, using the machines, produce 4 ants. To solve the puzzle, as observed in \cite{abramssklar}, one can naturally assign the graph $E$ below to the problem; the question then becomes whether $a=4a$ in $M_E$.
\[E: \xymatrix{ {} & a \ar[rd] & & {} \\
c \ar[ru]
\ar@/^{-15pt}/ [rr]& & b
\ar@(r,d)
\ar[ll] \ar@/^{-10pt}/
[lu] & }\]
\bigskip
\smallskip
We modify the puzzle as follows: if each machine produces animals that are one month older than the animal fed into it, and we are only allowed to feed animals of the same age into a machine, then the puzzle is modelled by the graded monoid $\Mn$ instead.
As an example, one can start with an ant and obtain two cougars, of ages 1 and 2 months respectively, because in $\Mn$ we have
\[a=b(1)=c(2)+b(2)+a(2)=c(2)+c(1)={}^{2}c+{}^{1}c.\]
\end{rmk}
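The graded computation in the remark can be replayed mechanically. In the Python sketch below (our own code; the names are hypothetical) the generators of $\Mn$ are pairs (species, age); a forward machine step follows relation (\ref{monoidrelation2}), and the reverse of Machine 3 merges an ant and a beaver of equal age into a cougar one month younger:

```python
from collections import Counter

# Graded rewriting sketch for the modified puzzle: generators are pairs
# (species, age).  Feeding an animal of age i into a machine produces
# the output animals with age i + 1.
out = {"a": ["b"], "b": ["a", "b", "c"], "c": ["a", "b"]}

def step(state, v, i):
    """Apply the machine at species v to one animal of age i."""
    s = Counter(state)
    assert s[(v, i)] > 0
    s[(v, i)] -= 1
    s.update((w, i + 1) for w in out[v])
    return +s

s = Counter({("a", 0): 1})
s = step(s, "a", 0)            # a -> b(1)
s = step(s, "b", 1)            # b(1) -> a(2) + b(2) + c(2)
# Machine 3 in reverse: a(2) + b(2) -> c(1), giving a = c(1) + c(2).
s[("a", 2)] -= 1; s[("b", 2)] -= 1
s.update([("c", 1)])
assert +s == Counter({("c", 1): 1, ("c", 2): 1})
```

The final assertion is exactly the identity $a={}^{2}c+{}^{1}c$ derived in the remark.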
\section{Orbits of the monoid $\Mn$}
Recall from \S\ref{monsec} the pre-ordering one can define on a monoid, i.e., $b\leq a$ if $a=b+x$ for some $x$. The following lemma shows that for $n\in \mathbb Z_{<0}$ and $a\in\Mn$, if ${}^na$ and $a$ are comparable, then either ${}^n a=a$ or ${}^n a > a$. Here $\mathbb Z_{<0}$ is the set of negative integers and ${}^n a$ is the result of the action of $n$ on the element $a\in \Mn$.
\begin{lem}\label{nalem}
Let $E$ be a row-finite graph. For any $a\in \Mn$, it is not possible that ${}^n a < a$, where $n\in \mathbb Z_{<0}$.
\end{lem}
\begin{proof} Suppose ${}^{n} a < a$. Then
\begin{equation}\label{sunghtgrexx}
{}^na+x=a,
\end{equation} where $x\not = 0$ and $n\in \mathbb Z_{<0}$. First assume that no sinks
appear in \emph{any} presentation of $a$. Let $a=v^1(i_1)+\dots+v^k(i_k)$ be a presentation of $a$, where $v^s\in E^0$ and $i_s \in \mathbb Z$, $1\leq s \leq k$.
Since by (\ref{monoidrelation2}), $v^s(i_s)=\sum_{e\in s^{-1}(v^s)} r(e)(i_s+1)$, we can shift each of the vertices enough times so that we re-write $a$ as $a=w^1(l)+\dots+w^p(l)$, for some $l\in \mathbb Z$ and $w^s\in E^0$, $1\leq s \leq p$.
Now without loss of generality we can assume $a=w^1+\dots+w^p$ and ${}^n a < a$.
We now pass the equality (\ref{sunghtgrexx}) to the monoid $\Mnb$, via the isomorphism~(\ref{forgthryhr2}), where $\overline E$ is the covering graph of $E$. By the confluence property, Lemma \ref{aralem6}, there is an element $c=u^1_m+\dots + u^q_m$, where $m\geq 0$ such that
\begin{align}\label{hgfgft}
w^1_0+\dots+w^p_0 &\longrightarrow u^1_m+\dots + u^q_m,\\
w^1_n+\dots+w^p_n + x &\longrightarrow u^1_m+\dots + u^q_m.\notag
\end{align}
Note that $c$ is an element of the free abelian monoid generated by
$\overline{E}^{0}$, and since we assumed all vertices in the presentation of $a$ are regular, we can arrange that all generators of $c$ appear on ``level'' $m$. One should visualise this by thinking of all the elements of $a$ as sitting on level 0, of ${}^n a$ as sitting to the left of $a$ on level $n$ (because $n$ is negative), and of $c$ as sitting to the right of $a$ on level $m$ (see~(\ref{level421})).
Since the graph $\overline E$ is stationary, namely the graph repeats going from level $i$ to level $i+1$, from (\ref{hgfgft}) we get
\begin{equation}\label{hfgftgftr63}
u^1_{n+m}+\dots+u^q_{n+m} + x \longrightarrow u^1_m+\dots + u^q_m.
\end{equation}
Since in each transformation $\rightarrow_{1}$ (see (\ref{hfgtrgt655})) the number of generators either increases or stays the same (and $c$ is a sum of independent generators),
it is not possible that the left-hand side of (\ref{hfgftgftr63}) transforms to the right-hand side, and therefore we cannot have ${}^n a < a$.
We are left to show that indeed no sinks
appear in any presentation of $a$. Let $v^1$ be a sink in the presentation $a=v^1(i_1)+\dots+v^k(i_k)$, where, as in the argument above,
all the shifts appearing in ${}^n a$ are less than the shifts $i_1,\dots,i_k$ in $a$. Now ${}^n a +x= a$ implies that
${}^n a +x\rightarrow c$ and $a \rightarrow c$ for some $c$. However, since $v^1$ is a sink, there is no transformation for $v^1$ and thus $v^1(n+i_1)$ in ${}^n a$ must appear in $c$. But the shifts under the transformation $\xra_1$ either increase or stay the same; since $a\xra c$, the element $c$ cannot contain $v^1(n+i_1)$. This is a contradiction.
\end{proof}
The following proposition is crucial for the rest of the results in the paper.
\begin{prop}\label{goldenprop}
Let $E$ be a row-finite graph.
\begin{enumerate}[\upshape(i)]
\item The graph $E$ has a cycle with no exit if and only if there is an $a\in \Mn$ such that ${}^n a=a$, where $n\in \mathbb Z_{<0}$.
\medskip
\item The graph $E$ has a cycle with an exit if and only if there is an $a\in \Mn$ such that ${}^n a > a$, where $n\in \mathbb Z_{<0}$.
\medskip
\item The graph $E$ is acyclic if and only if for any $a\in \Mn$ and $n \in \mathbb Z_{<0}$, ${}^n a \parallel a$, i.e., ${}^n a$ and $a$ are not comparable.
\end{enumerate}
\end{prop}
\begin{proof}
(i) Suppose the graph $E$ has a cycle $c$ of length $n$ with no exit. Writing $c=c_1c_2\dots c_n$, where $c_i \in E^1$, since $c$ has no exit, the relations in $\Mn$ show that $v=s(c_1)=r(c_1)(1)=r(c_2)(2)=\cdots =r(c_n)(n)=v(n)$. It follows that ${}^n v=v$ in $\Mn$ (i.e. ${}^{-n} v=v$). Alternatively, one can see that ${}^{-1} a=a $, where $a=\sum_{i=1}^n r(c_i) \in \Mn$.
Conversely, suppose that there is an $a\in \Mn$ and $n \in \mathbb Z_{<0}$ such that ${}^n a = a$.
Let $a=v^1(i_1)+\dots+v^k(i_k)$ be a presentation of $a$, where $v^s\in E^0$ and $i_s \in \mathbb Z$. Similar to the proof of Lemma~\ref{nalem}, we first assume that no sinks appear in \emph{any} presentation of $a$.
Thus $a=v^1(i_1)+\dots+v^k(i_k)$, where all $v^s$'s are regular. Since $v^s(i_s)=\sum_{e\in s^{-1}(v^s)} r(e)(i_s+1)$, we can shift each of the vertices enough times so that we re-write $a$ as $a=w^1(l)+\dots+w^p(l)$, for some $l\in \mathbb Z$ and $w^s\in E^0$, $1\leq s \leq p$.
Now without loss of generality we can assume $a=w^1+\dots+w^p$ and ${}^n a=a$.
We now pass $a$ to the monoid $\Mnb$, via the isomorphism~(\ref{forgthryhr2}), where $\overline E$ is the covering graph of $E$ which is acyclic by construction. Thus we have $w^1_0+\dots +w^p_0= w^1_n+\dots +w^p_n$ in $\Mnb$, where all $w^s_k \in \overline E^0$. By Lemma~\ref{aralem6}, there is an element $c=u^1_m+\dots + u^q_m$, where $m\geq 0$ such that $w^1_0+\dots +w^p_0 \rightarrow c$ and
$w^1_n+\dots +w^p_n \rightarrow c$. Note that since we assumed all elements in the presentation of $a$ are regular, we can arrange that all generators of $c$ appear on the ``level'' $m$. Since the graph $\overline E$ is stationary, namely the graph repeats going from level $i$ to level $i+1$, we have
\[w^1_n+\dots +w^p_n \longrightarrow u^1_{m+n}+\dots u^q_{m+n},\] and consequently
\begin{equation}\label{subgdtgee1}
u^1_{m+n}+\dots + u^q_{m+n} \longrightarrow u^1_m+\dots + u^q_m.
\end{equation}
Since the number of generators on the right-hand and left-hand sides of (\ref{subgdtgee1}) are the same and the relation $\rightarrow$ would increase the number of generators if there were more than one edge emitting from a vertex, it follows that there is only one edge emitting from each vertex in the list $A=\{u^1_{m+n}, \dots , u^q_{m+n}\}$ and from their subsequent vertices until the edges reach the list $B=\{u^1_m, \dots , u^q_m\}$. Thus we have a bijection $\rho: A\rightarrow B$. Consequently, there is an $l\in \mathbb N$ such that $\rho^l=1$. Then we have $u^i_{m+n} \rightarrow u^i_{m+n-ln}$ for all elements of $A$. Going back to the graph $E$, this means there is a path with no bifurcation from each such vertex $u^i$ to itself, namely there is a cycle with no exit based at $u^i$.
We are left to show that indeed no sinks appear in any presentation of $a$. Let $a=v^1(i_1)+\dots+v^k(i_k)$ be a presentation of $a$ with $v^1$ a sink. Since ${}^n a = a$, we can choose $n<0$ small enough that all the shifts appearing in ${}^n a$ are smaller than shifts $i_1,\dots,i_k$ in $a$. Now ${}^n a = a$ implies that
${}^n a \rightarrow c$ and $a \rightarrow c$.
However, since $v^1$ is a sink, there is no transformation for $v^1$ and thus $v^1(n+i_1)$ should appear in $c$. But the shifts appearing in $c$ are greater than or equal to those in $a$, as $a$ also transforms to $c$, whereas all the shifts in ${}^n a$ are smaller than those in $a$; this is impossible. Thus there are no sinks in any presentation of $a$.
\medskip
(ii) Suppose the graph $E$ has a cycle $c=c_1c_2\dots c_n$ of length $n$ with exits. Consider $a=\sum_{i=1}^n r(c_i) \in \Mn$. Now applying the transformation rule on each $r(c_i)$ we have $a= \sum_{i=1}^n r(c_i)(1)+x=a(1)+x$, where $x\not =0$ as the cycle has an exit and thus it branches out and other symbols appear in the transformation. This shows ${}^{-1} a>a$ as claimed.
Conversely, suppose that there is an $a\in \Mn$ and $n \in \mathbb Z_{<0}$ such that ${}^n a > a$.
Let $a=v^1(i_1)+\dots+v^k(i_k)$ be a presentation of $a$, where $v^s\in E^0$ and $i_s \in \mathbb Z$. Replacing $n$ by $-n$, we may choose a positive number $n$ such that ${}^n a < a$. Further, we can choose $n$ big enough such that all the shifts appearing in ${}^n a$ are bigger than the shifts $i_1,\dots, i_k$ in $a$. We then have $a={}^n a +x $, for some nonzero
$x\in \Mn$. We pass the equality to the monoid of the covering graph $\overline E$, via the isomorphism~(\ref{forgthryhr2}). By the confluence property, Lemma~\ref{aralem6}, there is $c$ such that $a\rightarrow c$ and ${}^n a +x \rightarrow c$. Using Lemma~\ref{aralem6}, we can then write $c=d+f$, where ${}^n a \rightarrow d$ and
$x \rightarrow f$. Suppose $d=u^1(j_1)+\dots+u^t(j_t)$. Since the graph $\overline E$ is stationary, applying the same transformations done on ${}^n a$ to $a$ we obtain $a\rightarrow d'$, where $d'=u^1(j'_1)+\dots+u^t(j'_t)$ and $d' \rightarrow c$.
Putting these together we have
\begin{equation}\label{hghgyhuhrfe}
u^1(j'_1)+\dots+u^t(j'_t) \longrightarrow u^1(j_1)+\dots+u^t(j_t) +f.
\end{equation}
Let $A_0$ be the list $\{u^1(j'_1),\dots,u^t(j'_t)\}$ appearing on the left-hand side of (\ref{hghgyhuhrfe}) and $B_0$ the list $\{u^1(j_1),\dots,u^t(j_t)\}$ on the right-hand side. All the paths emitting from $A_0$ end up in either $B_0$ or the list of vertices in $f$. Since the number of vertices (symbols) on the right-hand side of (\ref{hghgyhuhrfe}) is more than on the left-hand side, some of the paths emitting from the left-hand side have to have bifurcations. Now let $A_1:=\phi^{-1}(B_0)$ denote the list of all the vertices in $A_0$ which have a path ending in $B_0$ (here $\phi$ sends each path emitting from $A_0$ to its end vertex). Clearly $A_1 \subseteq A_0$. Similarly, there are paths coming from $A_1$ which have bifurcations. Denote by $B_1\subseteq B_0$ the list of the same vertices (with possibly different shifts) which appear in $A_1$. Consider $A_2:=\phi^{-1}(B_1)$, so that $A_2\subseteq A_1$, i.e., $A_2$ consists of the vertices in $A_1$ which have a path ending in $B_1$. Repeating this process, since $A_0$ is finite, there is a $k$ such that $A_{k+1}=\phi^{-1}(B_k)$ and $A_{k+1}=A_{k}$. Note that $A_k$ and $B_k$ consist of the same vertices (with possibly different shifts), for each vertex in $A_k$ there is a path ending in $B_k$, and for each vertex in $B_k$ there is a \emph{unique} path from a vertex in $A_k$ to this vertex. Since $A_k$ is finite, this assignment defines a bijective function $\rho: A_k \rightarrow B_k$. Thus there is an $l\in \mathbb N$ such that $\rho^l=1$. Since the graph $\overline E$ is stationary, this means that after $l$ repetitions all paths return to the same vertices they started from, i.e., there are cycles in the graph. However, as some of the paths had to have bifurcations, there exist cycles with exits.
\medskip
(iii) Suppose $E$ is acyclic. For any $a\in \Mn$ and $n\in \mathbb Z_{<0}$, we can't have ${}^na > a$ or ${}^na = a$, otherwise by (i) and (ii) of the proposition, $E$ has cycles. On the other hand, by Lemma \ref{nalem}, it is also not possible to have ${}^na < a$ for any $n\in \mathbb Z_{<0}$. Thus ${}^na$ and $a$ are not comparable.
Conversely, suppose no element of $\Mn$ is comparable with the elements of its orbit. Then (i) and (ii) immediately imply that $E$ is acyclic.
\end{proof}
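To illustrate (iii), let $E$ be the acyclic graph with a single edge from $u$ to a sink $w$. Then $u=w(1)$ in $\Mn$, and ${}^n u=w(1+n)$ for $n\in\mathbb Z$. Since $\Mn$ is the free abelian monoid on the generators $w(i)$, $i\in \mathbb Z$, the elements $w(1+n)$ and $w(1)$ are incomparable for $n\neq 0$, so indeed ${}^n u \parallel u$.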
Recall that if a group $\Gamma$ acts on a set $M$, then the action is \emph{free} if ${}^\gamma m=m$ implies that $\gamma$ is the identity of the group, i.e., all the isotropy groups of the action are trivial.
\begin{cor}\label{conLm}
Let $E$ be a row-finite graph.
\begin{enumerate}[\upshape(i)]
\item The graph $E$ satisfies Condition (L) if and only if $\mathbb Z$ acts freely on $\Mn$.
\medskip
\item The graph $E$ satisfies Condition (K) if and only if $\mathbb Z$ acts freely on any quotient of $\Mn$ by an order-ideal.
\end{enumerate}
\end{cor}
\begin{proof}
(i) Suppose the graph $E$ satisfies Condition (L). If ${}^n a= a$, for some $n \in \mathbb Z_{<0}$, and $a\in \Mn$, then $E$ has a cycle without exit by Proposition~\ref{goldenprop} which is not possible. Thus $\mathbb Z$ acts freely on $\Mn$.
Conversely, if $E$ has a cycle without exit, then there is an $a\in \Mn$ and $n \in \mathbb Z_{<0}$ such that ${}^n a=a$ (Proposition~\ref{goldenprop}) which contradicts the freeness of the action.
\medskip
(ii) Recall that the graph $E$ satisfies Condition (K) if and only if for every hereditary saturated subset $H$ the quotient graph $E/ H$ satisfies Condition (L).
Suppose $E$ satisfies condition (K) and $I$ is an order-ideal of $\Mn$. Then there is a hereditary saturated subset $H\subseteq E^0$ which generates $I$. Since by Lemma~\ref{qmiso}, there is a $\mathbb Z$-module isomorphism $\Mn /I \cong M^{\gr}_{E / H}$, and $E / H$ has condition (L), by part (i), $\mathbb Z$ acts freely on $M^{\gr}_{E / H}$ and thus on
$\Mn /I$.
Conversely, for any hereditary saturated subset $H$, and the corresponding order-ideal $I$ of $\Mn$, since $\mathbb Z$ acts freely on $\Mn /I \cong M^{\gr}_{E / H}$, it follows by part (i) that $E/ H$ has condition (L). It then follows that $E$ has condition (K).
\end{proof}
\section{The ideal structure of $L_F(E)$ via the monoid $\Mn$} \label{seciv}
There is an alluring proposition in the book of Abrams, Ara and Siles Molina~\cite[Proposition~6.1.12]{AAS}, which states that for a finite graph $E$, the Leavitt path algebra $L_F(E)$ is purely infinite simple if and only if $M_E \backslash \{0\}$ is a group. As the monoid $\Mn$ is a much richer object than $M_E$, it is expected to capture more of the structure of the Leavitt path algebra $L_F(E)$.
We start with an immediate corollary of the results on the periodicity of elements of $\Mn$ established in the previous section.
\begin{cor}\label{conKm}
Let $E$ be a row-finite graph and $L_F(E)$ its associated Leavitt path algebra.
\begin{enumerate}[\upshape(i)]
\item The algebra $L_F(E)$ is graded simple if and only if $\Mn$ is simple.
\medskip
\item The algebra $L_F(E)$ is simple if and only if $\Mn$ is simple and for any $a\in \Mn$, if ${}^n a$ and $a$ are comparable, $n \in \mathbb Z_{<0}$, then ${}^n a > a$.
\medskip
\item The algebra $L_F(E)$ is purely infinite simple if and only if $\Mn$ is simple, whenever ${}^n a$ and $a$ are comparable for some $n \in \mathbb Z_{<0}$ we have ${}^n a > a$, and there is
an $a\in \Mn$ such that ${}^n a > a$.
\medskip
\item
If $E$ is a finite graph, then $L_F(E)$ is purely infinite simple if and only if $\Mn$ is simple and for any $a\in \Mn$ there is $n \in \mathbb Z_{<0}$ such that ${}^n a > a$.
\end{enumerate}
\end{cor}
\begin{proof}
(i) This follows from the fact that there is a lattice isomorphism between the graded ideals of $L_F(E)$ and $\mathbb Z$-order-ideals of $\Mn$ via the lattice of hereditary saturated subsets of the graph $E$ (see \eqref{latticeisosecideal3}).
\medskip
(ii) The algebra $L_F(E)$ is simple if and only if $E$ has no non-trivial hereditary saturated subsets and $E$ satisfies Condition (L) (\cite[Theorem~2.9.1]{AAS}). By Corollary~\ref{conLm}, $E$ satisfies Condition (L) if and only if for any $a\in \Mn$ and nonzero $n\in \mathbb Z$, ${}^n a\neq a$. It follows that if
${}^n a$ and $a$ are comparable, $n \in \mathbb Z_{<0}$, then either ${}^n a > a$ or ${}^n a <a$. The proof now follows from combining this fact with part (i) and Lemma \ref{nalem}.
\medskip
(iii) The algebra $L_F(E)$ is purely infinite simple if and only if $E$ has no non-trivial hereditary saturated subsets, $E$ satisfies Condition (L) and $E$ has at least one cycle with an exit (\cite[Theorem~3.1.10]{AAS}). The proof now follows from combining this fact with parts (i) and (ii) and Corollary~\ref{conLm}.
\medskip
(iv) Suppose that $L_F(E)$ is purely infinite simple. We need to prove that for any $a\in\Mn$, ${}^n a$ and $a$ are comparable for some $n\in \mathbb Z_{<0}$.
We claim that for any $v\in E^0$ there is an $n\in\mathbb Z_{<0}$ such that ${}^n v>v$. For any $v\in E^0$, let $X_v=\{p\;|\; s(p)=v, p \text{~is a finite path not containing a cycle}, \text{~and~} r(p) \text{~is on a cycle}\}$. Then for each $p\in X_v$, the length of $p$ is at most $|E^0|$, as any path of length larger than $|E^0|$ contains a cycle. Thus the maximal number in $\{l(p)\;|\; p\in X_v\}$ is less than or equal to $|E^0|$. By induction on the maximal length of paths in $X_v$, we obtain a representation $v=\sum_{i=1}^n v_i(s_i)$ such that all $v_i$'s are on cycles $C_1, \cdots, C_n$ (possibly $C_i=C_{i'}$ with $i\neq i'$).
Let $l_i$ denote the length of the cycle $C_i$. Then $v_i >v_i(l_i)$ for $1\leq i\leq n$, as each cycle among $C_1,\cdots, C_n$ has an exit.
Let $l$ be the least common multiple of $l_1, \cdots, l_n$. Then $v_i >v_i(l)$ for $1\leq i\leq n$ and thus $v>v(l)$, i.e., ${}^{-l}v>v$, as claimed. Now for any $a\in \Mn$, write $a=\sum_{t=1}^k u_t(j_t)$ (possibly $u_t=u_{t'}$ with $t\neq t'$). For each vertex $u_t$ there exists an $m_t$ such that ${}^{-m_t}u_t> u_t$. Take $n=-\prod_{t=1}^k m_t$. Then we have ${}^{n} a={}^{-\prod_{t=1}^k m_t}\sum_{t=1}^k u_t(j_t)>\sum_{t=1}^k u_t(j_t)=a$. Conversely, the proof follows from (iii).
\end{proof}
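As a quick check of (iv), consider the graph $E$ with one vertex $v$ and two loops, so that $L_F(E)$ is the classical Leavitt algebra $L_F(1,2)$, which is purely infinite simple. Here $v(i)=2v(i+1)$, hence ${}^{-1}a=2a>a$ for every nonzero $a\in\Mn$, and $\Mn$ is simple as $E$ has no non-trivial hereditary saturated subsets.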
These results show that acyclicity, simplicity and purely infinite simplicity of the algebras are preserved under an order-isomorphism between their graded Grothendieck groups.
Although the monoid $\Mn$ is constructed from the graded projective modules, it can however detect the non-graded structure of a Leavitt path algebra. Here is the first instance in this direction. We will produce more evidence of this in Theorem~\ref{mainthemethe}.
\begin{prop}\label{propnongra}
Let $E$ be a row-finite graph and $L_F(E)$ its associated Leavitt path algebra. Then $L_F(E)$ has a non-graded ideal if and only if there is an order-ideal $I$ of $\Mn$ such that the quotient monoid $\Mn/I$ has a periodic element.
\end{prop}
\begin{proof}
Suppose $L_F(E)$ has a non-graded ideal. Then there is a hereditary saturated subset $H$ and a non-empty set $C$ of cycles which have all their exits in $H$ (see \cite[Proposition 2.8.11]{AAS}). Thus $E/ H$ is a graph for which cycles in $C$ have no exit. By Proposition~\ref{goldenprop}, $M^{\gr}_{E / H}$ has periodic elements. But by Lemma~\ref{qmiso},
$M^{\gr}_{E / H} \cong \Mn / I$, where $I$ is the order-ideal generated by $H$ and so it has periodic elements. The converse argument is similar.
\end{proof}
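The smallest example is the graph $E$ with one vertex $v$ and one loop, for which $L_F(E)\cong F[x,x^{-1}]$. Taking $I=0$, the element $v$ is periodic in $\Mn$, as ${}^{-1}v=v$, and correspondingly $F[x,x^{-1}]$ has non-graded ideals, e.g., the ideal generated by $1+x$.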
For $a\in \Mn$, we denote the smallest $\mathbb Z$-order-ideal generated by $a$ by $\langle a \rangle $. It is easy to see that
\begin{equation}\label{oridealhj}
\langle a \rangle=\big \{ x \in \Mn \mid x \leq \sum_{i\in F} {}^i a \ \text{for some finite subset}\ F\subseteq \mathbb Z \big \}.
\end{equation}
Thus for a vertex $v$, denoting $\overline v$ for the smallest hereditary saturated subset containing $v$, we have
\begin{equation}\label{oridealhj2}
\langle v \rangle=\langle \overline v \rangle.
\end{equation}
The following lemmas show how these ideals of the monoid capture the geometry of the graph. Recall the notion of ``local'' cofinality from \S\ref{graphsec}. The proof of the following lemma is a ``local'' version of \cite[Lemma 2.9.6]{AAS} and we leave it to the reader.
\begin{lem}\label{localcofin}
Let $E$ be a row-finite graph. For $v,w\in E^0$ the following are equivalent.
\begin{enumerate}[\upshape(i)]
\item $\langle w \rangle \subseteq \langle v \rangle$;
\medskip
\item the vertex $v$ is cofinal with respect to $w\in E^0$;
\medskip
\item if $v \in H$, then $w\in H$, where $H$ is a hereditary saturated subset of $E$.
\end{enumerate}
\end{lem}
\begin{lem}\label{dowdirtwo}
Let $E$ be a row-finite graph.
\begin{enumerate}[\upshape(i)]
\item For $u,v\in E^0$, $\langle u \rangle \cap \langle v \rangle \not = 0$ if and only if $u$ and $v$ are downward directed.
\medskip
\item The vertex $v$ is cofinal if and only if $\langle v \rangle=\Mn$.
\medskip
\item $E$ is cofinal if and only if $\langle v \rangle=\Mn$ for every $v\in E^0$.
\end{enumerate}
\end{lem}
\begin{proof}
(i) Suppose $u$ and $v$ are downward directed, i.e., there is a $w\in E^0$ such that $v\geq w$ and $u\geq w$. Thus there is a path $\alpha$ with $s(\alpha)=v$ and $r(\alpha)=w$. This gives that $v=w(k)+t$ for some $k\in \mathbb Z$ and $t\in \Mn$. Thus $w \in \langle v \rangle$. Similarly $w\in \langle u \rangle$ and so $\langle u \rangle \cap \langle v \rangle \not = 0$.
Conversely, suppose $0 \not= a\in \langle u \rangle \cap \langle v \rangle$. Since $a$ is a sum of vertices (with given shifts), and $\langle v \rangle$ and $\langle u \rangle$ are order-ideals, one can find a vertex $z \in \langle u \rangle\cap \langle v \rangle$.
Thus $\sum_{i} {}^i v = z+t$ and $\sum_{j} {}^j u = z+s$ in $\Mn$. Passing to $M_E$ via the forgetful function~\ref{forgthryhr}, we have $nv=z+ t'$ and $mu=z+s'$, for $t',s' \in M_E$ and $m,n\in \mathbb N$. By the confluence property, Lemma~\ref{aralem6}, there are $c, d \in F_E$ such that
$nv \rightarrow c$, $z+t' \rightarrow c$ and $mu \rightarrow d$, $z+s' \rightarrow d$. By Lemma~\ref{aralem6}, one can write $c=c_1+c_2$ and $d=d_1+d_2$, such that $z\rightarrow c_1$, $t'\rightarrow c_2$ and $z\rightarrow d_1$, $s'\rightarrow d_2$. Since in $M_E$ we have $z=c_1=d_1$, again by the confluence property, there is an $e\in F_E$ such that $c_1\rightarrow e$ and $d_1\rightarrow e$. Hence $z\rightarrow e$. Now
\begin{align*}
nv &\longrightarrow c_1+c_2 \longrightarrow e+c_2\\
mu &\longrightarrow d_1+d_2 \longrightarrow e+d_2.
\end{align*}
One more use of Lemma~\ref{aralem6} shows that all the vertices appearing in $e$ are in both the tree of $v$ and the tree of $u$. Thus $u$ and $v$ are downward directed.
\medskip
(ii) This follows immediately from Lemma~\ref{localcofin}.
\medskip
(iii) This follows from part (ii).
\end{proof}
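For example, let $E$ consist of vertices $v, w^1, w^2$ with two edges connecting $v$ to the sinks $w^1$ and $w^2$. Then $v=w^1(1)+w^2(1)$, so $v$ is cofinal and $\langle v\rangle=\Mn$, whereas $\langle w^1 \rangle$ consists of the elements supported on the shifts of $w^1$ and is thus a proper order-ideal. Moreover $\langle w^1\rangle \cap \langle w^2 \rangle = 0$, in line with the fact that the sinks $w^1$ and $w^2$ are not downward directed.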
For a $\Gamma$-monoid $M$, a $\Gamma$-order-ideal $N \subseteq M$ is called \emph{prime} if for any $\Gamma$-order-ideals $N_1, N_2 \subseteq M$, $N_1\cap N_2 \subseteq N$ implies that $N_1\subseteq N$ or $N_2\subseteq N$. In the case of $\Mn$, the lattice isomorphism~(\ref{latticeisosecideal3}) immediately implies that prime order-ideals of $\Mn$ are in one-to-one correspondence with the graded prime ideals of $L_F(E)$.
Recall that one can give an element-wise description of a prime ideal of a ring. Namely, an ideal $I$ of a ring $A$ is prime if for any $a, b\not \in I$, there is an $r\in A$ such that $arb \not \in I$. We have a similar description in the setting of $\Mn$, demonstrating how the monoid structure of $\Mn$ mimics the algebraic structure of $L_F(E)$.
\begin{lem} \label{restrictedlatticeiso} Let $E$ be a row-finite graph. Then the order-ideal $I$ of $\Mn$ is prime if and only if for any $a,b \not \in I$, there is a $c \not \in I$ and $n,m \in \mathbb Z$, such that ${}^n c \leq a$ and ${}^m c \leq b$.
\end{lem}
\begin{proof}
$\Rightarrow$ Suppose $I$ is a prime order-ideal. A combination of the correspondence~(\ref{latticeisosecideal3}) and \cite[Proposition~4.1.4]{AAS} gives that $I= \langle H \rangle$, where $H$ is a hereditary saturated subset such that $E^0 \backslash H$ is downward directed. Let $a,b \in \Mn$ such that $a \not \in I$ and $b\not \in I$. Since $a=\sum v_i(k_i)$ and $b=\sum w_j(k'_j)$ are sums of vertices (with given shifts), and $I$ is an order-ideal, (possibly after a re-arrangement) $v_1 \not \in H$ and $w_1 \not \in H$. Thus there is a $z\not \in H$ such that $v_1\geq z$ and $w_1\geq z$, and hence a path $\alpha$ with $s(\alpha)=v_1$ and $r(\alpha)=z$. This shows that in $\Mn$, for some $i\in \mathbb Z$, ${}^i z \leq v_1$ and consequently ${}^{i+k_1}z \leq v_1(k_1) \leq a $. Similarly, for a $j \in \mathbb Z$, ${}^j z \leq w_1$ and consequently ${}^{j+k'_1}z \leq w_1(k'_1) \leq b$.
$\Leftarrow$ Suppose $I_1$ and $I_2$ are order-ideals such that $I_1\cap I_2 \subseteq I$. If $I_1\nsubseteq I$ and $I_2\nsubseteq I$, then there are $a \in I_1 \backslash I$ and $b \in I_2 \backslash I$. By the property of $I$, there is a $c\not \in I$ such that ${}^n c \leq a$ and ${}^m c \leq b$. Since $I_1$ and $I_2$ are order-ideals, $c\in I_1\cap I_2$ and thus $c\in I$, a contradiction. Hence $I$ is prime.
\end{proof}
Recall the notions of a line-point from \S\ref{graphsec} and the minimal elements of monoids from \S\ref{monsec}. By~\cite[Proposition~2.6.11]{AAS}, a minimal left ideal of a Leavitt path algebra $L_F(E)$ is isomorphic to $L_F(E)v$, where $v$ is a line-point. Here we show that we can distinguish these vertices in the monoid $\Mn$.
\begin{lem}\label{tminimi}
Let $E$ be a row-finite graph and $L_F(E)$ its associated Leavitt path algebra.
\begin{enumerate} [\upshape(i)]
\item The vertex $v$ has no bifurcation if and only if $v\in \Mn$ is minimal.
\medskip
\item The vertex $v$ is a line-point if and only if $v\in \Mn$ is minimal and aperiodic.
\medskip
\item The left ideal $L_F(E)v$ is minimal if and only if $v\in \Mn$ is minimal and aperiodic.
\end{enumerate}
\end{lem}
\begin{proof}
(i) Suppose $v\in E^0$ has no bifurcation. If $v\in \Mn$ is not minimal then there is an $a \in \Mn$ such that $a < v$. Thus $a+x=v$ for some $x\not =0$.
Passing the equality to $M_E$ by the forgetful function~\ref{forgthryhr}, and invoking the confluence property of $M_E$, Lemma~\ref{aralem6}, we get a $c \in F_E$ such that $a+x \rightarrow c$ and $v\rightarrow c$. Since $v$ has no bifurcation, $c$ has to be a vertex; thus $a$ has to be this vertex and $x=0$, which is a contradiction.
Conversely, suppose that $v\in \Mn$ is minimal. If $v$ has a bifurcation, then pick the first $u \in T(v)$, where this bifurcation occurs. Then
\[v=u(k)=\sum_{\alpha \in s^{-1}(u)} r(\alpha)(k+1),\] where $k\in \mathbb N$ is the length of the path connecting $v$ to $u$. Since $|s^{-1}(u)| >1$, we have $r(\alpha)(k+1) < v$, which is a contradiction.
\medskip
(ii) Suppose $v$ is a line-point. Then by (i), $v \in \Mn$ is minimal. If $v$ is periodic, then there is $n<0$ such that ${}^n v =v$. Passing the equality to $\overline E$ via the isomorphism~(\ref{forgthryhr2}), since $v_0, v_n \in \overline E^0$
are also line-points, using the congruence~(\ref{monoidrelation}) for this case, we have $v_n\rightarrow w_{l+n}$ and $v_0\rightarrow w_{l+n}$ for some $w\in E^0$ and some $l \geq 0$. Since $\overline E$ is stationary, we have $v_0 \rightarrow w_l$ and $w_{l+n} \rightarrow w_{l}$. This implies that $w$ is on a cycle and thus $v$ connects to a cycle, which is a contradiction. Thus $v$ is aperiodic.
Conversely, suppose that $v\in \Mn$ is minimal and aperiodic. By part (i) $v$ has no bifurcation. If $v$ connects to a cycle, then we have
\[v=w(k)=w(k+l),\] where $k$ is the length of the path connecting $v$ to $w$, the base of the cycle, and $l$ is the length of the cycle. It follows that ${}^{-l} v=v$, which is a contradiction. Thus $v$ is a line-point.
\medskip
(iii) By \cite[Proposition~2.6.11]{AAS}, the left ideal $L_F(E)v$ is minimal if and only if $v$ is a line-point. Part (ii) now completes the proof.
\end{proof}
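For instance, in the graph with a single edge $u\to w$, both $u$ and $w$ are line-points and $u, w \in \Mn$ are minimal and aperiodic; indeed $L_F(E)\cong \mathbb M_2(F)$, the $2\times 2$ matrix algebra, has minimal left ideals. On the other hand, for the single loop at a vertex $v$, the element $v\in \Mn$ is minimal but periodic; accordingly $L_F(E)\cong F[x,x^{-1}]$, being a commutative domain which is not a field, has no minimal left ideals.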
We close the paper with the following theorem which justifies why a conjecture such as Conjecture~\ref{conj1} could be valid.
\begin{thm}\label{mainthemethe}
Let $E_1$ and $E_2$ be row-finite graphs. Suppose there is a $\mathbb Z$-module isomorphism $\phi: M^{\gr}_{E_1} \rightarrow M^{\gr}_{E_2}$. Then we have the following:
\begin{enumerate} [\upshape(i)]
\item $E_1$ has Condition (L) if and only if $E_2$ has Condition (L).
\medskip
\item $E_1$ has Condition (K) if and only if $E_2$ has Condition (K).
\medskip
\item there is a one-to-one correspondence between the graded ideals of $L_F(E_1)$ and $L_F(E_2)$.
\medskip
\item there is a one-to-one correspondence between the non-graded ideals of $L_F(E_1)$ and $L_F(E_2)$.
\medskip
\item there is a one-to-one correspondence between the isomorphism classes of minimal left/right ideals of $L_F(E_1)$ and $L_F(E_2)$.
\end{enumerate}
\end{thm}
\begin{proof}
(i) and (ii) follow from Corollary~\ref{conLm}.
\medskip
(iii) The correspondence between the graded ideals of Leavitt path algebras follows from~(\ref{latticeisosecideal3}) and that a $\mathbb Z$-module isomorphism of modules induces a lattice isomorphism between the set of their order-ideals.
\medskip
(iv) We first show that the set of cycles without exits in $E$ is in one-to-one correspondence with the orbits of minimal periodic elements of $\Mn$. Notice that a minimal element $a$ of $\Mn$ can be represented by some $v(k)$, for a $v\in E^0$ and $k\in \mathbb Z$. Consider the set $S$ of all minimal and periodic elements of $\Mn$ and partition this set into orbits, i.e., $S=\bigsqcup_{v} O(v)$, where $O(v)=\{{}^i v \mid i \in \mathbb Z \}$ and $v$ runs over a set of orbit representatives. Let $C$ be the set of all cycles without exits in $E$. For a cycle $c\in C$, denote by $c_v$ a vertex on the cycle (there is no need to fix this base vertex). We show that there is a bijection between the sets
\begin{align*}
\phi: C &\longrightarrow \{O(v) \mid v \in S \},\\
c &\longmapsto O(c_v)
\end{align*}
such that the length of the cycle $c$, $|c|$, is the same as the order of the corresponding orbit, $|O(c_v)|$.
First note that for a cycle $c$ without exit, $c_v$ is a vertex with no bifurcation, thus by Lemma~\ref{tminimi}, $c_v$ is minimal in $\Mn$. Furthermore, one can observe that
${}^{|c|} c_v =c_v$ in $\Mn$, which also shows that $|O(c_v)| = |c|$. Further, choosing another base point $w$ on the cycle $c$, we have $O(c_v)=O(c_w)$ and thus the map $\phi$ is well-defined.
Now suppose $c$ and $d$ are two distinct cycles without exits. If $O(c_v)=O(d_v)$, then there is a $k\in \mathbb Z$ such that for the vertices $c_v$ and $d_v$ we have ${}^k c_v=d_v$ in $\Mn$. Passing to $M_E$ via the forgetful function~\ref{forgthryhr}, we obtain $c_v=d_v$ in $M_E$, which implies that $T(c_v) \cap T(d_v) \not = \emptyset$; this can't be the case. Thus the map $\phi$ is injective. On the other hand, if $v$ is minimal and periodic, then, by Lemma~\ref{tminimi} and the proof of its part (ii), $v$ has no bifurcation and is connected to a cycle $c$ without an exit. We show that $O(c_v)=O(v)$. Since $v$ has no bifurcation, $v$ connects to $c_v$ by a path $\alpha$ which has no exit. Thus ${}^{-|\alpha|} v= c_v$ and the claim follows. This shows that $\phi$ is also surjective.
Applying this argument for the quotient graph $E/ H$, for a hereditary and saturated subset $H$, gives a one-to-one correspondence between cycles whose exits are in $H$ and the minimal periodic elements of $\Mn / I$, where $I =\langle H \rangle $.
Now since a $\mathbb Z$-module isomorphism of modules preserves the class of orbits of minimal and periodic elements, an isomorphism of the graded monoids of $E_1$ and $E_2$ gives a length-preserving one-to-one correspondence between the cycles without exits of $E_1$ and those of $E_2$. Furthermore, combining this with part (iii), we obtain a one-to-one correspondence between the cycles of $E_1$ whose exits are in $H\subseteq E_1^0$ and the cycles of $E_2$ whose exits are in the corresponding hereditary saturated subset of $E^0_2$.
Finally, by the Structure Theorem for ideals (\cite[Proposition 2.8.11]{AAS}, \cite{rangaswamy2}), the non-graded ideals in a Leavitt path algebra are characterised by the ``internal data'' of hereditary saturated subsets $H$, a non-empty set $C$ of cycles whose exits are in $H$ and the ``external'' data of polynomials $p_c(x) \in F[x], c\in C$. Combining this with the correspondences established above completes the proof.
\medskip
(v) Consider the set $S$ of all minimal and aperiodic elements of $\Mn$ and partition this set into orbits, i.e., $S=\bigsqcup_{v} O(v)$. We leave it to the reader to observe that there is a one-to-one correspondence between these orbits and the isomorphism classes of minimal left/right ideals of $L_F(E)$, by invoking Lemma~\ref{tminimi} and the fact (\cite[Proposition~2.6.11]{AAS}) that a minimal left ideal of a Leavitt path algebra $L_F(E)$ is isomorphic to $L_F(E)v$, where $v$ is a line-point. The proof now follows from the fact that a $\mathbb Z$-module isomorphism of modules preserves the class of orbits of minimal and aperiodic elements.
\end{proof}
Note that Theorem~\ref{mainthemethe} is in line with what we should obtain from the (conjectural) statement that if $M^{\gr}_{E_1}\cong M^{\gr}_{E_2}$ as $\mathbb Z$-modules (or, equivalently, $K_0^{\gr}(L(E_1)) \cong K_0^{\gr}(L(E_2))$) then the Leavitt path algebras $L_F(E_1)$ and $L_F(E_2)$ are graded Morita equivalent. Indeed by \cite[Theorem~2.3.8]{hazi}, the graded Morita equivalence lifts to Morita equivalence:
\begin{equation}
\xymatrix{
\Grr L_F(E_1) \ar[rr] \ar[d]_{U}&& \Grr L_F(E_2) \ar[d]^{U}\\
\Modd L_F(E_1) \ar[rr] && \Modd L_F(E_2).
}
\end{equation}
Combining this with the consequences of Morita theory, we have a one-to-one correspondence between the ideals of $L_F(E_1)$ and $L_F(E_2)$ as confirmed by Theorem~\ref{mainthemethe}.
\begin{rmk}
\label{rmk}
A graded version of $M_E$ \cite [\S 5C]{ahls} was defined to be the
abelian monoid generated by $\{v(i) \mid v\in E^0, i\in \mathbb Z\}$ subject to the relations
\begin{equation}\label{monoidrelation2}
v(i)=\sum_{e\in s^{-1}(v)}r(e)(i-1),
\end{equation}
for every $v\in E^{0}$ that is not a sink. We denote it by ${\Mn}'$. There are $\mathbb Z$-module isomorphisms \begin{align}\label{yhoperagen}
{\Mn}' &\cong M_{\overline{E}} \cong \mathcal V(L_F(\overline E))\cong\, \mathcal V^{\gr}(L_F(E)),
\\v(i) &\longmapsto v_i\longmapsto L_F(\overline E) v_i \longmapsto\big (L_F(E)v\big) (-i), \notag
\end{align} see \cite[Proposition~5.7]{ahls}. We correct here that the isomorphism ${\Mn}' \cong M_{\overline{E}}$ should be given by $v(i)\mapsto v_i$ and that the $\mathbb Z$-action on ${\Mn}'$ given by Equation (5-10) in \cite{ahls} should be ${}^n {v(i)}=v(i-n)$ for $n, i\in\mathbb Z$ and $v\in E^0$.
In this note the relations for the graded monoid $\Mn$ are slightly different from those for ${\Mn}'$. However, $\Mn$ and ${\Mn}'$ are isomorphic via $v(i)\mapsto v(-i)$ as $\Z$-modules. Note that $\Mn$ has a natural $\Z$-action ${}^n v(i)=v(i+n)$. If we follow the notation of the covering graph in \cite[\S 5B]{ahls}, we can obtain $\Mn\cong \VV^{\gr}(L(E))$ similarly to \cite[Proposition 5.7]{ahls}.
\end{rmk}
\section{Acknowledgements} The authors would like to acknowledge Australian Research Council grant DP160101481. They would like to thank Anthony Warwick from Western Sydney University who enthusiastically took part in the discussions related to this work. | 186,052 |
InfiniteSkills - Using Free Webmail by Guy Vaccaro
English | Audio: aac, 44100 Hz, mono (und)
MP4 | Video: h264, yuv420p, 960x720, 15.00 fps(r) (und) | 851.95 MB
Genre: Video Training
In this computer based training course, expert author Guy Vaccaro takes a look at the three most popular free Webmail services, Yahoo! Mail, Google Gmail and Microsoft Hotmail/Windows Live.
With Guy, you will explore the three top free email services on the internet, learning how to sign up for them; create, send and manage your emails; and use the additional services each offers, such as Calendars and Contacts. You will tour each interface and become comfortable with the idiosyncrasies of each program. This video-based training course lets you follow along as the author teaches each subject in an easy-to-learn manner.
By the completion of this video tutorial, you will be able to choose which service best suits your needs, and be familiar with the interface and use of the features of the free email service that you choose to use!
Table of Contents
01. Overview Of Free Web Based Email Systems
02. Using Windows Live/Hotmail
03. Windows Live/Hotmail Management
04. Windows Live/Hotmail Address Book
05. Windows Live/ Hotmail Calendar
06. Googles Gmail
07. Gmail Contacts
08. The Gmail Calendar
09. Using Yahoo! Mail
10. Yahoo! Contacts
11. The Yahoo! Calendar
12. Summary
TITLE: compact projections to infinite dimensional Banach spaces
QUESTION [3 upvotes]: If I consider $X$ to be an infinite dimensional Banach space and $P\in P(X)$, that is, $P$ is a continuous linear projection. How does one prove that $P$ is compact if and only if $\dim R(P)$ is finite?
Thank you.
REPLY [3 votes]: We will prove the implication $\Longrightarrow$ ad absurdum. Assume that $\mathrm{dim}(R(P))=+\infty$; then $R(P)$ is an infinite dimensional subspace of $X$. Using Riesz's lemma about almost perpendicular vectors, show that $R(P)$ contains a sequence $\{x_n:n\in\mathbb{N}\}$ in the unit ball of $R(P)$ such that
$$
m\neq n\Longrightarrow \Vert x_n-x_m\Vert>1/2
$$
Since $P$ acts as the identity on $R(P)$, the image of the unit ball of $X$ under the projection $P$ contains this sequence. Note that a relatively compact set cannot contain such a sequence, since it has no limit points. Thus the image of the unit ball under $P$ is not relatively compact, and hence $P$ is not a compact operator.
The implication $\Longleftarrow$ is easier. Assume that $\mathrm{dim}(R(P))<+\infty$, and consider the image of the unit ball under the projection $P$. This is a bounded set since $P$ is bounded, and it lies in the finite dimensional subspace $R(P)$. Since bounded subsets of finite dimensional spaces are relatively compact, $P$ is a compact operator.
REPLY [2 votes]: Here is an outline of one way to show this.
Prove that $P(X)$ is closed, hence a Banach space.
Note that the restriction of $P$ to $P(X)$ is the identity operator on the Banach space $P(X)$.
Note that the identity operator on a Banach space is compact if and only if the closed unit ball of the space is compact.
Prove that the closed unit ball of a Banach space is compact if and only if the space is finite dimensional. (francis-jamet and Norbert have mentioned Riesz's lemma, which is useful for this, perhaps the most important part.) | 145,551 |
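Both answers lean on Riesz's lemma; for reference, here is a standard formulation of the statement being invoked:

```latex
\textbf{Riesz's lemma.} Let $Y$ be a proper closed subspace of a normed space
$X$, and let $0<\alpha<1$. Then there exists $x_\alpha\in X$ with
$\Vert x_\alpha\Vert=1$ and
\[
  \operatorname{dist}(x_\alpha,Y)=\inf_{y\in Y}\Vert x_\alpha-y\Vert\geq\alpha.
\]
```

Applying this repeatedly with $\alpha=1/2$ to a strictly increasing chain of finite dimensional subspaces of $R(P)$ produces the sequence with $\Vert x_n-x_m\Vert>1/2$ used in the first answer.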
Find the most recent information on EU Funding activities in the field of Information and Communication Technologies (ICT) by visiting our ICT in FP7 website, which covers ICT in the 7th Framework Programme (FP7) 2007 - 2013.
Who is who
Director: Mr. Ulf Dahlsten
Communication Officer: Mr. Aymard de Touzalin
- Unit F1: Future and Emerging Technologies (FET)
- Head of unit: Mr. Thierry Van der Pyl
- Communication Officer: Mr. Fabrizio Sestini
- Unit F2: Grid Technologies
- Head of unit: Mr. Wolfgang Boch
- Communication Officer: Mr. Eoghan O'Neill
- Unit F3: Research Infrastructure
- Head of unit: Mr. Mario Campolargo
- Communication Officer: Nicholas Nicholson
- Unit F4: New Working Environments
- Head of unit: Mr. Bror Salmelin
- Communication Officer: Ms. Elena Leibbrand | 47,856 |
In Raleigh, North Carolina, a “Sexy Schoolgirl” race has been cancelled for being too creepy, among other things. The competition would’ve raised funds for the MathCounts Foundation while encouraging women to dress “Hot for Teacher” and run a 5K. Somewhere a dude who loves Warby Parker frames and hot pants is sadly putting away his lotion.
According to the Daily News, the fundraiser was scheduled for August 9 until community folks told local Raleigh City Councilwoman May-Ann Baldwin that they weren’t fans. Ms. Baldwin acted swiftly.
The race’s Twitter page describes the event as “the newest, most exciting themed co-ed 5k out there!” but there are no advertisements or images of hot bespectacled men on their website. Boo. Still, Sexy Schoolgirl's website lists upcoming events in other cities like Pittsburgh, Cincinnati and Tampa, and something tells me this just might fly in Florida.
Image via Sexy Schoolgirl. | 139,743 |
TITLE: Question about continuity
QUESTION [4 upvotes]: This is a question from my math book:
Let $a< b < c$. Suppose that $f$ is continuous on $[a,b]$, $g$ is continuous on $[b,c]$, and $f(b) = g(b)$. Define $h$ on $[a,c]$ by $h(x) = f(x)$ for $x\in [a,b]$ and $h(x) = g(x)$ for $x \in [b,c]$. Prove that $h$ is continuous on $[a,c]$.
What I want to do is prove that $h$ is continuous on $[a,c]$, but I am not sure how to handle the point $b$.
I'm thinking that I have to pick an $α$ such that $a<α< b$, and then show that $h(x)$ is continuous at $α$. So basically, I want to show that $∀\epsilon>0, ∃ δ>0$ such that if $x\in [a,c]$ and $|x-α|<δ$ then $|h(x)-h(α)|<\epsilon$. And from the given information, I know that $f:[a,b]\to \mathbb{R}$, and $∀ \epsilon>0, ∃ δ>0$ such that if $x\in [a,b]$ and $|x-α|<δ_f$ then $|f(x)-f(α)|<\epsilon$.
How do I connect these two definitions to find $δ$? And is there anything else I need to prove? For instance, are there any cases I should be considering?
Thanks in advance.
REPLY [5 votes]: What you have to prove is precisely that $h$ is continuous at $b$. Because at any point $d$ other than $b$, if $d<b$ then $h(x)=f(x)$ for all $x$ in a small interval around $d$, and so the continuity of $f$ implies the continuity of $h$ for every $d\in[a,b)$. Similarly one deduces that $h$ is continuous on $(b,c]$ by using the continuity of $g$.
For the continuity at $b$. Fix $\varepsilon>0$. By the continuity of $f$ at $b$, there exists $\delta_1>0$ such that if $x<b$ and $b-x<\delta_1$, then $|f(x)-f(b)|<\varepsilon$. Similarly, there exists $\delta_2>0$ such that if $x>b$ and $x-b<\delta_2$, then $|g(x)-g(b)|<\varepsilon$.
Let $\delta=\min\{\delta_1,\delta_2\}$. Now, if $|x-b|<\delta$, we consider two cases: first, if $x<b$, then $b-x=|x-b|<\delta\leq\delta_1$, and so
$$
|h(x)-h(b)|=|f(x)-f(b)|<\varepsilon;
$$
if $x>b$, then $x-b=|x-b|<\delta\leq\delta_2$, and so
$$
|h(x)-h(b)|=|g(x)-g(b)|<\varepsilon.
$$
So $h$ is continuous at $b$.
REPLY [1 votes]: We can prove that $h$ is continuous on $[a,c]$ in three steps:
First, we prove that $h$ is continuous on $[a,b)$. Since $f$ is continuous on $[a,b)$, given any $x\in [a,b)$ and $\epsilon>0$ we have some $\delta>0$ such that for $y\in [a,b)$ we have $|x-y|<\delta\implies |f(x)-f(y)|<\epsilon$. Let $\delta'=\min\{\delta,b-x\}$. Since $h=f$ on $[a,b)$, we have that for $y\in [a,c]$, $|x-y|<\delta'$ implies that $y\in [a,b)$ and $|x-y|<\delta$ so $|h(x)-h(y)|<\epsilon$, thus $h$ is continuous at $x$. Hence $h$ is continuous on $[a,b)$.
Second, we prove that $h$ is continuous on $(b,c]$. This proof is nearly identical to the previous one.
Finally, we prove that $h$ is continuous at $b$. Since $f$ and $g$ are continuous at $b$, given any $\epsilon>0$ we have some $\delta,\delta'>0$ such that for $y\in [a,b]$ if $|b-y|<\delta$ then $|f(b)-f(y)|<\epsilon$ and for $y\in [b,c]$ if $|b-y|<\delta'$ then $|g(b)-g(y)|<\epsilon$. Thus we can let $\delta''=\min\{\delta,\delta'\}$ and we get that if $y\in [a,c]$ and $|b-y|<\delta''$ then either $y\in [a,b]$ and $|b-y|<\delta$ in which case $|h(b)-h(y)|=|f(b)-f(y)|<\epsilon$, or $y\in [b,c]$ and $|b-y|<\delta'$ in which case $|h(b)-h(y)|=|g(b)-g(y)|<\epsilon$. Thus $h$ is continuous at $b$. | 153,213 |
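It may also help to know that this exercise is a special case of a standard topological result, often called the pasting (or gluing) lemma; a common formulation is:

```latex
\textbf{Pasting lemma.} Let $X=A\cup B$ with $A$ and $B$ closed in $X$.
If $f:A\to\mathbb{R}$ and $g:B\to\mathbb{R}$ are continuous and $f=g$ on
$A\cap B$, then the function
\[
  h(x)=\begin{cases} f(x), & x\in A,\\ g(x), & x\in B, \end{cases}
\]
is well defined and continuous on $X$.
```

Here $A=[a,b]$ and $B=[b,c]$ are closed in $[a,c]$, and $A\cap B=\{b\}$, where $f(b)=g(b)$ by hypothesis.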
U.S. Senator. Born Barbara Levy in Brooklyn, New York, on November 11, 1940. Her parents, Sophie Silverstein Levy and Ira Levy, were both first-generation Jewish immigrants to the United States. Barbara Levy met husband Stewart Boxer as a student at Brooklyn College...
| 326,262 |