text (string, lengths 0–6.23M) | __index_level_0__ (int64, 0–419k)
---|---
Updated: 06/09/2020 4:45 PM
PKS-RFI-20-012
Parks and Recreation
06/23/2020 2:00 PM
Pending Review
924-64 - Partnering Workshop Facilitation Services
The City of Phoenix (City), through its Parks and Recreation Department (PRD), is seeking information from a non-profit organization to operate various City facilities and provide recreation, educational and cultural services to the public.
(pdf) | 49,166 |
TITLE: combinatorics and sorting
QUESTION [2 upvotes]: There are 5 distinct computer science books, 3 distinct mathematics books, and 2 distinct art books. In how many ways can these books be arranged on a shelf if no two of the three mathematics books are together?
Attempt at solution:
First we found the number of ways to arrange the 7 comp/art books, 7!.
Assuming there is a space between each book for a math book, the first math book has 8 places (and 3 books to choose from), the second has 7 places (2 books to choose from), and the third has 6.
The answer is 7!*(8*3)*(7*2)*6.
Is this correct? If not, where did we go wrong?
EDIT: This question is not a duplicate because our books are distinct, whereas in the other question they are not.
REPLY [2 votes]: The non-math books can be arranged in $7!$ ways.
That leaves $8$ "gaps" (including the endgaps) to place the math books. The oldest math book can be placed in $8$ ways, and for each way the second oldest can be placed in $7$ ways, and then the youngest in $6$ ways, for a total of $(7!)(8)(7)(6)$.
Alternately, the places to be occupied by math books can be chosen in $\binom{8}{3}$ ways, and then the math books can be permuted among these places in $3!$ ways. Then the non-math books can be permuted in $7!$ ways, for a total of $\binom{8}{3}3!7!$.
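A quick brute-force check (not part of the original exchange) confirming the count $(7!)(8)(7)(6) = 1{,}693{,}440$ by direct enumeration of all $10!$ shelf arrangements; the book labels below are arbitrary placeholders.

```python
from itertools import permutations

# 5 distinct CS books, 2 distinct art books, 3 distinct math books.
books = [f"cs{i}" for i in range(5)] + ["art0", "art1"] + [f"math{i}" for i in range(3)]

count = 0
for shelf in permutations(books):
    # Keep the arrangement only if no two math books stand next to each other.
    if all(not (a.startswith("math") and b.startswith("math"))
           for a, b in zip(shelf, shelf[1:])):
        count += 1

print(count)                      # 1693440
print(count == 5040 * 8 * 7 * 6)  # True: 7! * 8 * 7 * 6
```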
Remark: The answer proposed in the OP is a hybrid of the two approaches above, and overcounts by a factor of $3!$. | 32,645 |
"Why have you brought that sword back up here, mortal?"
discuss
The alternatives are Gods (male) or Demons.
it's hard to think about ******* a girl who could kill you in thousands of ways.
have you read a lot of myths? those bitches can be pretty sadistic.
**** a goddess, die painfully
Get ****** by a god, die in process, or die painfully afterwards?
Get ****** by a demon, be forever painfully tormented
If you want to look at it like that, these are your choices. | 278,219 |
TITLE: local equation in the definition of transversal intersection
QUESTION [0 upvotes]: Let $X$ be a smooth surface and $C,D$ be two smooth curves on it. In Hartshorne, page 357, it is stated that if $p \in C \cap D$ is a point of intersection of $C$ and $D$, we say that $C$ and $D$ meet transversally at $p$ if the local equations $f,g$ of $C,D$ at $p$ generate the maximal ideal $m_p$ of $\mathcal O_{X,p}$.
My question is: what is the precise algebraic definition of the local equation of a curve at a point in this context?
Thanks in advance.
REPLY [2 votes]: An effective Cartier divisor is a closed immersion $i : D \rightarrow X$ that is locally cut out by a single equation which is not a zero divisor. That is: for every point $p$ there is an affine open set $U = Spec(R)$ such that the closed immersion $i$ on $U$ corresponds to $Spec(R / (a)) \rightarrow Spec(R)$ for some $a \in R$, and $a$ is not a zero divisor.
That $a$ is the local equation of the divisor. | 204,021 |
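As a concrete illustration (added here, not part of the original reply): take $X = \mathbb{A}^2_k = Spec(k[u,v])$ and let $p$ be the origin, so that $\mathcal O_{X,p}$ has maximal ideal $m_p = (u,v)$. The curves $C = \{v = 0\}$ and $D = \{u = 0\}$ have local equations $f = v$ and $g = u$ at $p$, and $(f,g) = (u,v) = m_p$, so $C$ and $D$ meet transversally at $p$. By contrast, for $C = \{v = 0\}$ and $D' = \{v - u^2 = 0\}$ the local equations generate $(v, v - u^2) = (v, u^2) \subsetneq m_p$, so this intersection (a tangency) is not transversal at $p$.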
Description: This call was an opportunity for projects to provide input to OSEP on the proposed SPDG Performance Measurement revisions that were outlined at the Regional Meetings in February 2011. See PPT below to review the list of measures.
Facilitator: Jennifer Coffey
SPDG Performance Measure PowerPoint Presentation PPT. [3/14/11 Jennifer Coffey]
Session Recording:
Categories: Past Events | 350,922 |
TITLE: Power-free values of reducible polynomials
QUESTION [4 upvotes]: Browning in this paper proves that if $f \in \mathbb{Z}[x]$ is an irreducible polynomial of degree $d \geq 3$ and $k$ is an integer $\geq 3d/4 + 1/4$, then we have $$\#\{n \in \mathbb{Z} \cap [1, X] : f(n) \text{ is $k$-free}\}\sim C_{1}(k, f)X$$ as $X \rightarrow \infty$, where $$C_{1}(k, f) = \prod_{p}\left(1 - \frac{\varrho_{f}(p^{k})}{p^{k}}\right)$$
and $\varrho_{f}(n) = \#\{a \pmod{n}: f(a) \equiv 0 \pmod{n}\}$. Is there anywhere that gives a similar asymptotic for the case when $f$ is reducible?
REPLY [1 votes]: The reducible case is not much harder than the irreducible case, if one uses an argument from Greaves's paper "Power-free values of binary forms" (used originally by Gouvea and Mazur). In particular, the main term can be defined just as easily for reducible polynomials $f(x) \in \mathbb{Z}[x]$. The error terms can be dealt with simply as follows. Write $f(x) = f_1(x) \cdots f_s(x)$, where each $f_i(x)$ is irreducible over $\mathbb{Z}$, and define the error terms $E_i(X) = \# \{n \in \mathbb{Z} \cap [1, X] : p^k \mid f_i(n) \text{ for some prime } p > \xi\}$, where $\xi = \frac{1}{k} \log X$. Then the overall error term is just $E(X) = \sum_{i=1}^s E_i(X)$. Thus one can use whatever techniques one likes (in this case, Browning's result is the strongest, although using a different determinant method I am able to reproduce the same result) to estimate each of the $E_i(X)$ and multiply the result by $s$, which is harmless since $s$ is an absolute constant and the error term is a power saving over the main term (assuming the constant in front of the main term is non-zero).
I should remark that the above estimates do not cover the case when, say, $f_1(x) = uv^{k-l}$ and $f_2(x) = u'v^l$, or when three factors of $f$ conspire to give a non-$k$-free value. However, this situation is easily shown to give a negligible contribution. To see this, consider the equations $f_1(x) = uv^{k-l}$, $f_2(x) = u'v^l$ and $f(x) = wv^k$. With $x$ and $v$ fixed, we see that $u, u'$ are divisors of $w$ and hence there are no more than $d(w) = O(w^\epsilon)$ choices. A simple analysis of the size of $w$ yields $w = O(X^d)$, and so $d(w) = O(X^\epsilon)$. We may then crudely estimate the number of points lying on the intersection of the varieties defined by $f_1(x) = uv^{k-l}$ and $f_2(x) = u'v^l$, where we take $u, u'$ to be fixed, as follows. For fixed $u, u'$, each of these equations defines a curve in $\mathbb{A}^2$, and the two curves have at most finitely many intersection points. Thus the overall contribution is $O(X^\epsilon)$ and is negligible. | 131,673 |
Background
These are my thoughts as of November 13th 2007. Jupiter is one of the best independent fund management groups in the UK. The management completed a buyout this summer - one of the last MBOs to benefit from a covenant-lite funding agreement with its bankers, before the credit crunch put paid to the whole idea of covenant-free borrowing.
Guess which story on the Independent Investor site has been read the most times to date
Those present at the meeting included Edward Bonham Carter, Jupiter's CEO, Tony Nutt, manager of its £4bn income fund, Ian McVeigh, who runs the Jupiter Growth fund, and John Chatfeild-Roberts, head of its fund of funds team. The meeting was off-the-record, so opinions could not be directly attributed to individuals. Jupiter does not have a house view, and its managers are free to disagree with each other (as they did, vehemently, on one big issue).
These are the things that I took away from the meeting:
While these views all seem sensible to me, and indeed confirm my own gut feeling, the biggest concern in the short term, as I said earlier, is the deteriorating technical action of the US stock market. If both the Dow and the Transport index break through their August lows (the Transport index has already done so, and the Dow is close to doing so), it could lead to a further downleg in the market.
For that reason investors would be wise to maintain a reasonable level of cash in their portfolios pending greater clarity about the way the market is likely to develop. This is not the time to take an all or nothing bet on the market, given rising volatility and the unusually polarised state of expert opinion (greater than I can remember for a long time). | 210,355 |
Content Count: 57
Joined
Last visited
Profile Information
- Gender: Female
Previous Fields
- Membership Type: Survivor
What’s wrong with me
Aliss replied to Mae72700's topic in Public: Welcome! There is nothing wrong with you, something wrong happened to you. It's very different. It's common to have mixed feelings and it can be very confusing. You could try writing down all of your different feelings about him, sometimes it helps clarify things. It's ok to feel affection for someone and also be angry at that person for what they've done to us. It just makes you human. Don't hesitate to reach out if you need help.
Hello everyone
Aliss replied to Aliss's topic in Public: Welcome! Thank you everyone for your kind replies and warm welcome! I look forward to getting to know you all.
- Aliss started following Hello everyone
Hello everyone
Aliss posted a topic in Public: Welcome! Hello everyone, I'm a PhD student and I'm turning 26 in a few days. I don't have many hobbies (work is pretty much my whole life), but I love cats and meditation. I decided to join this forum because I really struggle with telling people what has happened to me and I was hoping this would be a step in the right direction. I feel very isolated because I can't bring myself to tell anyone (except my T), and it made me realize how lonely I am. I hesitated to create an account because I feel like my abuse was probably not as bad as what most people here experienced, but I know I tend to minimize what happened to me so I'm trying to work on that. I'm just hoping to find people to talk to. | 83,214 |
It's hard to say which I liked best. My husband's mother had a great recipe for margaritas. In fact, because her recipe is so good, I just do not like margs made from a mix. Here's the recipe:
1 can frozen limeade concentrate
1 can (use the same juice can for proportions) water
1 can tequila
1/3-1/2 can triple sec
Mix well, and serve over ice. If you want to salt the rim, moisten with a lime wedge first. These margaritas are practically famous, thanks to the internet and a decorating forum I belong to.
I also love playing cards. My husband grew up playing poker with his parents. I know what you're thinking, but he was an only child, and this was an easy game for him to learn and more enjoyable for his parents to play than Candyland. There are lots of variations of poker, and we play many. If it's just two of us, we love playing gin. Love. It.
Finally, I love music. Our favorite patio CD's include Sade, Steve Winwood, John Mayer, the Thorns -- so many! If you ever need suggestions for good music, just take a look at What's Playing on the Patio over there ----->.
Here in the Hill Country, we have a sun porch instead of a patio. It is furnished similarly to our patio, though, and we still enjoy our favorite things. What are some of your favorite things?
14 comments:
One of my favorite things is sitting on your porch or patio and sipping one of Dorothy's margaritas! Looking forward to doing so again in October!
I told Mike just this evening that I was going to ask you to NOT let Chris make his jalapeno poppers. PLEASE. I am really trying to exercise and eat better. And I just cannot stop once I get started! LOL
Oh, man. We have to have stuffed jalapenos. I need to do a post about those, don't I? The post is nonfat. ;-)
I'm licking my lips that margarita looks sooo good!
I'll try the recipe, I didn't know about it.
I love sitting outside too, so relaxing and the weather is perfect now.
Ha Sandra...we tend to have happy hour at our house on Fridays. And DH's recipe for Margaritas is similar...just can't ever order them anywhere, his are so good!
Fans going candle burning into night...good conversation...kids all eating pizza and playing games. Homemade guacamole...Yep. Must be Friday!
Sitting in my courtyard, overlooking my mountains, drinking a glass of great French wine....
Hugs,
Penny
just came across your blog via something via cotedetexas. i noticed you are from amarillo tx. i don't know how old you are but i have a good friend who still lives there
she owns a flower shop ...secret garden.. more events nowadays. her name now is parie jones villyard
it used to be pam marie jones.
she has been part of texas film and in reading your post i thought you might know her
i live in san marcos just down the road
small space living in a 1920's cottage bungalow
we have a front and back porch
so when we feel social it is the front and when we want to hide we head for the back and turn on the fountain
my name is diane and i will bookmark your blog
enjoy the day
diane
I can speak from experience that those marg's are THE BEST! I used them for our all girls weekend a few years ago, Estrofest! :)
Diane, I don't know Parie personally, but I know of her. Your home sounds adorable. Maybe we can meet up sometime! Thanks for stopping by.
Oh, I miss my screened porches! One of my favs was to sit on the porch and sip (wine, margarita, dr pepper, tea, anything!) and listen to good music (I like Winwood, too!)
Our whole family has always played poker together from 9 to 90, everyone gets in on it!
Ha! That's pretty much our "house margarita recipe," too! That, along with our home made guacamole -- preferably sitting on our porch in rocking chairs -- would be a few of my favorite things. :-)
May I come over, Sandra? ha! It sounds delightful....sitting out drinking a tasty margarita, enjoying the views and playing cards - what fun!
Okay, maybe just a FEW stuffed jalapenos!
Linda's entertaining sounds just like the dinner party I have planned for tomorrow night...our standard template! Homemade guac and chips as an appetizer, steaks on the barbie, roasted potatoes, salad. And brownies w/ Starbucks coffee ice cream for dessert. All served poolside by the kiva. Only...I don't have Linda's uber-coolio pad. But yep...I heart livin' in the Sunbelt!
I really want to try those smash potatoes, or whatever they're called, that Christy posted. Yum!
Sandra, You have to try them! I swear they are my new favorite potato. They get all crispy & yummy & so easy! | 281,109 |
TITLE: Formula for the roots of a cubic equation
QUESTION [0 upvotes]: I know that you can derive a quadratic formula from the given complex roots $\alpha$ and $\beta$ if you simply put them into the formula $x^2-(\alpha+\beta)x+\alpha\beta=0$. Is there an equivalent for cubic equations?
REPLY [0 votes]: I think what you're asking is this: if you know the roots of a cubic polynomial, then can you get the expression for the polynomial itself? And the answer is you can: if a cubic has roots $\alpha, \beta, \gamma$, then the formula for the cubic is just $$(x - \alpha) (x - \beta) (x - \gamma)$$ which expands to $$ x^3 - (\alpha + \beta + \gamma)x^2 + (\alpha \beta + \beta \gamma + \gamma \alpha) x - \alpha \beta \gamma.$$
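A quick symbolic check of the expansion above (not part of the original answer), using sympy; the symbol names are arbitrary.

```python
import sympy as sp

x, alpha, beta, gamma = sp.symbols('x alpha beta gamma')

# Expand the monic cubic with the prescribed roots and group the result by powers of x:
# the coefficients are the elementary symmetric functions of the roots, with alternating signs.
cubic = sp.expand((x - alpha) * (x - beta) * (x - gamma))
print(sp.collect(cubic, x))
```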
The quadratic formula that you posted, likewise, is just the expansion of $(x - \alpha)(x - \beta)$. | 190,175 |
\begin{document}
\def\smfbyname{}
\begin{abstract}In this article, we consider an analogue of Arakelov theory of arithmetic surfaces over a trivially valued field. In particular, we establish an arithmetic Hilbert-Samuel theorem and study the effectivity, up to $\mathbb R$-linear equivalence, of pseudoeffective metrised $\mathbb R$-divisors.
\end{abstract}
\maketitle
\tableofcontents
\section{Introduction}
In Arakelov geometry, one considers an algebraic variety over the spectrum of a number field and studies various constructions and invariants on the variety, such as metrised line bundles, intersection products, height functions, etc. Although these notions have some similarities to those in classical algebraic geometry, their construction is often more sophisticated and involves analytic tools.
Recently, an approach via $\mathbb R$-filtrations has been proposed to study several invariants in Arakelov geometry, which makes it possible to get around analytic techniques in the study of some arithmetic invariants; see for example \cite{MR2768967,MR2722508}. Let us briefly recall this approach in the setting of Euclidean lattices for simplicity. Let $\overline E=(E,\norm{\ndot})$ be a Euclidean lattice, namely a free $\mathbb Z$-module of finite type $E$ equipped with a Euclidean norm $\norm{\ndot}$ on $E_{\mathbb R}=E\otimes_{\mathbb Z}\mathbb R$. We construct a family of vector subspaces of $E_{\mathbb Q}=E\otimes_{\mathbb Z}\mathbb Q$ as follows. For any $t\in\mathbb R$, let $\mathcal F^t(\overline E)$ be the $\mathbb Q$-vector subspace of $E_{\mathbb Q}$ generated by the lattice vectors $s$ such that $\norm{s}\leqslant \mathrm{e}^{-t}$. This construction is closely related to the successive minima of Minkowski. In fact, the $i$-th minimum of the lattice $\overline E$ is equal to
\[\exp\Big(-\sup\{t\in\mathbb R\,|\,\operatorname{rk}_{\mathbb Q}(\mathcal F^t(\overline E))\geqslant i\}\Big).\]
The family $(\mathcal F^t(\overline E))_{t\in\mathbb R}$ is therefore called the \emph{$\mathbb R$-filtration by minima} of the Euclidean lattice $\overline E$.
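As a simple illustration (an example added here, not taken from the original text), consider the standard Euclidean lattice $\overline E=(\mathbb Z^r,\norm{\ndot})$, where $\norm{\ndot}$ is the usual Euclidean norm on $\mathbb R^r$. Every non-zero lattice vector has norm at least $1=\mathrm{e}^{0}$, and the standard basis vectors have norm exactly $1$, so that
\[\mathcal F^t(\overline E)=\begin{cases}
\mathbb Q^r, & t\leqslant 0,\\
0, & t>0,
\end{cases}\]
and all the successive minima of $\overline E$ are equal to $1$, in accordance with the formula above.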
Classically in Diophantine geometry, one focuses on the lattice points of small norm, which are analogous to global sections of a vector bundle over a smooth projective curve. However, such points are in general not stable under addition. This phenomenon brings difficulties to the study of Arakelov geometry over a number field and prevents the direct transplantation of algebraic geometry methods to the arithmetic setting. In the $\mathbb R$-filtration approach, the arithmetic invariants are encoded in a family of vector spaces, which makes it possible to apply algebraic geometry methods directly to some problems in Arakelov geometry.
If we equip $\mathbb Q$ with the trivial absolute value $|\ndot|_0$ such that $|a|_0=1$ if $a$ belongs to $\mathbb Q\setminus\{0\}$ and $|0|_0=0$, then the above $\mathbb R$-filtration by minima can be considered as an ultrametric norm $\norm{\ndot}_0$ on the $\mathbb Q$-vector space $E_{\mathbb Q}$ such that \[\norm{s}_{0}=\exp(-\sup\{t\in\mathbb R\,|\,s\in\mathcal F^t(\overline E)\}).\]
Interestingly, finite-dimensional ultrametrically normed vector spaces over a trivially valued field are also similar to vector bundles over a smooth projective curve. This method is especially successful in the study of the arithmetic volume function. Moreover, $\mathbb R$-filtrations, or equivalently, ultrametric norms with respect to the trivial absolute value, are also closely related to the geometric invariant theory of the special linear group, as shown in \cite[\S6]{MR2496458}.
All these works suggest that there should be an Arakelov theory over a trivially valued field. From the philosophical point of view, the $\mathbb R$-filtration approach can be considered as a correspondence from the arithmetic geometry over a number field to that over a trivially valued field, which preserves some interesting arithmetic invariants. The purpose of this article is to build up such a theory for curves over a trivially valued field (which are actually analogous to arithmetic surfaces). Given the simplicity of the trivial absolute value, one might expect such a theory to be simple. On the contrary, the arithmetic intersection theory for adelic divisors in this setting is already highly non-trivial and has interesting interactions with convex analysis on infinite trees.
Let $k$ be a field equipped with the trivial absolute value and $X$ be a regular irreducible projective curve over $\operatorname{Spec} k$. We denote by $X^{\mathrm{an}}$ the Berkovich analytic space associated with $X$, which identifies with a tree of length $1$ whose leaves correspond to closed points of $X$ (see \cite[\S3.5]{MR1070709}).
\vspace{3mm}
\begin{center}
\begin{tikzpicture}
\filldraw(0,1) circle (2pt) node[align=center, above]{$\eta_0$};
\filldraw(-3,0) circle (2pt) ;
\draw (-1,0) node{$\cdots$};
\filldraw(-2,0) circle (2pt) ;
\filldraw(-0,0) circle (2pt) node[align=center, below]{$x_0$} ;
\filldraw(1,0) circle (2pt) ;
\draw (2,0) node{$\cdots$};
\filldraw(3,0) circle (2pt) ;
\draw (0,1) -- (0,0);
\draw (0,1) -- (-3,0);
\draw (0,1) -- (1,0);
\draw (0,1) -- (-2,0);
\draw (0,1) -- (3,0);
\end{tikzpicture}
\end{center}
\vspace{3mm}
For each closed point $x$ of $X$, we denote by $[\eta_0,x_0]$ the edge connecting the root and the leaf corresponding to $x$. This edge is parametrised by the interval $[0,+\infty]$ and we denote by $t(\ndot):[\eta_0,x_0]\rightarrow[0,+\infty]$ the parametrisation map.
Recall that an $\mathbb R$-divisor $D$ on $X$ can be viewed as an element in the free real vector space over the set $X^{(1)}$ of all closed points of $X$. We denote by $\operatorname{ord}_x(D)$ the coefficient of $x\in X^{(1)}$ in the writing of $D$ into a linear combination of elements of $X^{(1)}$. We call Green function of $D$ any continuous map $g:X^{\mathrm{an}}\rightarrow [-\infty,+\infty]$ such that there exists a continuous function $\varphi_g:X^{\mathrm{an}}\rightarrow\mathbb R$ which satisfies the following condition
\[\forall\,x\in X^{(1)},\;\forall\,\xi\in[\eta_0,x_0\mathclose{[},\quad
\varphi_g(\xi)=g(\xi)-\operatorname{ord}_x(D)t(\xi).\]
The pair $\overline D=(D,g)$ is called a metrised $\mathbb R$-divisor on $X$. Note that the set $\widehat{\operatorname{Div}}_{\mathbb R}(X)$ of metrised $\mathbb R$-divisors on $X$ actually forms a vector space over $\mathbb R$.
Let $D$ be an $\mathbb R$-divisor on $X$. We denote by $H^0(D)$ the subset of the field $\operatorname{Rat}(X)$ of rational functions on $X$ consisting of the zero rational function and all rational functions $s$ such that $D+(s)$ is effective as an $\mathbb R$-divisor, where $(s)$ denotes the principal divisor associated with $s$, whose coefficient of $x$ is the order of $s$ at $x$. The set $H^0(D)$ is actually a $k$-vector subspace of $\operatorname{Rat}(X)$. Moreover, the Green function $g$ determines an ultrametric norm $\norm{\ndot}_{g}$ on the vector space $H^0(D)$ such that
\[\norm{s}_g=\exp\Big(-\inf_{\xi\in X^{\mathrm{an}}}(g+g_{(s)})(\xi)\Big).\]
Let $\overline D_1=(D_1,g_1)$ and $\overline D_2=(D_2,g_2)$ be adelic $\mathbb R$-divisors on $X$ such that $\varphi_{g_1}$ and $\varphi_{g_2}$ are absolutely continuous with square-integrable densities. We define a pairing of $\overline D_1$ and $\overline D_2$ as follows (see \S\ref{Subsec: pairing} for details):
\begin{equation}\label{Equ: intersection pairing}\begin{split}(\overline D_1\cdot\overline D_2):=g_1&(\eta_0)\deg(D_2)+g_2(\eta_0)\deg(D_1)\\
&-\sum_{x\in X^{(1)}}[k(x):k]\int_{\eta_0}^{x_0}\varphi_{g_1}'(\xi)\varphi_{g_2}'(\xi)\,\mathrm{d}t(\xi).
\end{split}\end{equation}
Note that this pairing is similar to the local admissible pairing introduced in \cite[\S2]{MR1207481} or, more closely, similar to the Arakelov intersection theory on arithmetic surfaces with $L_1^2$-Green functions (see \cite[\S5]{MR1681810}). This construction is also naturally related to harmonic analysis on metrised graphs introduced in \cite{MR2310616} (see also \cite{MR1195689} for the capacity pairing in this setting), although the point $\eta_0$ is linked to infinitely many vertices. A more conceptual way to understand the above intersection pairing (under diverse extra conditions on Green functions) is to introduce a base change to a field extension $k'$ of $k$, which is equipped with a non-trivial absolute value extending the trivial absolute value on $k$. It is then possible to define a Monge-Amp\`{e}re measure over $X_{k'}^{\mathrm{an}}$ for the pull-back of $g_1$, either by the theory of $\delta$-forms \cite{MR3602767,MR3975640}, or by the non-Archimedean Bedford-Taylor theory developed in \cite{2Antoine}, or more directly, by the method of Chambert-Loir measures \cite{MR2244803,MR2543659}. It turns out that the push-forward of this measure to $X^{\mathrm{an}}$ does not depend on the choice of the valued extension $k'/k$ (see \cite[Lemma 7.2]{BE18}). We can then interpret the intersection pairing as the height of $D_2$ with respect to $(D_1,g_1)$ plus the integral of $g_2$ with respect to the push-forward of this Monge-Amp\`{e}re measure.
One
contribution of the article is to describe the asymptotic behaviour of the system of ultrametrically normed vector spaces $(H^0(nD),\norm{\ndot}_{ng})$ in terms of the intersection pairing, under the condition that the Green function $g$ is plurisubharmonic (see Definition \ref{Def: psh}). More precisely, we obtain an analogue of the arithmetic Hilbert-Samuel theorem as follows (see \S\ref{Sec:Hilbert-Samuel} \emph{infra}).
\begin{theo}
Let $\overline D=(D,g)$ be an adelic $\mathbb R$-divisor on $X$. We assume that $\deg(D)>0$ and $g$ is plurisubharmonic. Then one has
\[\lim_{n\rightarrow+\infty}\frac{-\ln\norm{s_1\wedge\cdots\wedge s_{r_n}}_{ng,\det}}{n^2/2}=(\overline D\cdot\overline D),\]
where $(s_i)_{i=1}^{r_n}$ is a basis of $H^0(nD)$ over $k$ (with $r_n$ being the dimension of the $k$-vector space $H^0(nD)$), $\norm{\ndot}_{ng,\det}$ denotes the determinant norm associated with $\norm{\ndot}_{ng}$, and $(\overline D\cdot\overline D)$ is the self-intersection number of $\overline D$.
\end{theo}
Diverse notions of positivity, such as bigness and pseudo-effectivity, are discussed in the article. We also study the effectivity up to $\mathbb R$-linear equivalence of pseudo-effective metrised $\mathbb R$-divisors. The analogue of this problem in algebraic geometry is very deep. It is the core of the non-vanishing conjecture, which has applications to the existence of log minimal models \cite{MR2831514}. It is also related to Keel's conjecture (see \cite[Question~0.9]{MR2007391} and \cite[Question~0.3]{MR3479310}) on the ampleness of divisors on a projective surface over a finite field. In the setting of an arithmetic curve associated with a number field, this problem can actually be interpreted via Dirichlet's unit theorem in algebraic number theory. In the setting of higher-dimensional arithmetic varieties, the above effectivity problem is very subtle. Both examples and obstructions have been studied in the literature; see for example \cite{MR3049312,MR3436160} for more details.
In this article, we establish the following result.
\begin{theo}
Let $(D,g)$ be a metrised $\mathbb R$-divisor on $X$. For any $x\in X^{(1)}$, we let \[\mu_{\inf,x}(g):=\inf_{\xi\in\mathopen{]}\eta_0,x_0\mathclose{[}}\frac{g(\xi)}{t(\xi)}.\]
Let
\[\mu_{\inf}(g):=\sum_{x\in X^{(1)}}\mu_{\inf,x}(g)[k(x):k].\]
Then the following assertions hold.
\begin{enumerate}[label=\rm(\arabic*)]
\item $(D,g)$ is pseudo-effective if and only if $\mu_{\inf}(g)\geqslant 0$.
\item $(D,g)$ is $\mathbb R$-linearly equivalent to an effective metrised $\mathbb R$-divisor if and only if $\mu_{\inf,x}(g)\geqslant 0$ for all but finitely many $x\in X^{(1)}$ and if one of the following conditions holds:
\begin{enumerate}[label=\rm(\alph*)]
\item $\mu_{\inf}(g)>0$,
\item $\sum_{x\in X^{(1)}}\mu_{\inf,x}(g)x$ is a principal $\mathbb R$-divisor.
\end{enumerate}
\end{enumerate}
\end{theo}
The article is organised as follows. In the second section, we discuss several properties of convex functions on a half-line. In the third section, we study Green functions on an infinite tree. The fourth section is devoted to a presentation of graded linear series on a regular projective curve. These sections prepare the tools needed to develop, in the fifth section, an Arakelov theory of metrised $\mathbb R$-divisors on a regular projective curve over a trivially valued field. In the sixth section, we discuss diverse notions of global and local positivity of metrised $\mathbb R$-divisors. Finally, in the seventh section, we prove the Hilbert-Samuel theorem for arithmetic surfaces in the setting of Arakelov geometry over a trivially valued field.
\vspace{3mm}
\noindent {\bf Acknowledgement:} We are grateful to Walter Gubler and Klaus K\"{u}nnemann for discussions.
\section{Asymptotically linear functions}
\subsection{Asymptotically linear functions on $\mathbb R_{>0}$}\label{Sec: function of Green type}
We say that a continuous function $g:\mathbb R_{>0}\rightarrow\mathbb R$ is \emph{asymptotically linear} (at infinity) if there exists a real number $\mu(g)$ such that the function \[\varphi_g:\mathbb R_{>0}\longrightarrow\mathbb R,\quad \varphi_g(t):=g(t)-\mu(g)t\] extends to a continuous function on $[0,+\infty]$. The real number $\mu(g)$ satisfying this condition is unique. We call it the \emph{asymptotic slope of $g$}. The set of asymptotically linear continuous functions forms a real vector space with respect to the addition and the multiplication by a scalar. The map $\mu(\ndot)$ is a linear form on this vector space.
We denote by $L^2_1(\mathbb R_{>0})$ the vector space of continuous functions $\varphi$ on $\mathbb R_{>0}$ such that the derivative (in the sense of distribution) $\varphi'$ is represented by a square-integrable function on $\mathbb R_{>0}$. We say that an asymptotically linear continuous function $g$ on $\mathbb R_{>0}$ is \emph{pairable} if
the function $\varphi_g$
belongs to $L_1^2(\mathbb R_{>0})$.
\begin{rema}
The functional space $L_1^2$ is a natural object of the potential theory on Riemann surfaces. In the classic setting of Arakelov geometry, it has been used in the intersection theory on arithmetic surfaces. We refer to \cite[\S3]{MR1681810} for more details.
\end{rema}
\subsection{Convex function on $[0,+\infty]$}
Let $\varphi$ be a convex function on $\mathbb R_{>0}$. Then $\varphi$ is continuous on $\mathbb R_{>0}$. Moreover, for any $t\in\mathbb R_{>0}$, the right derivative of $\varphi$ at $t$, given by
\[\lim_{\varepsilon\downarrow 0} \frac{\varphi(t+\varepsilon) - \varphi(t)}{\varepsilon},\]
exists in $\mathbb R$. By abuse of notation, we denote by $\varphi'$ the right derivative function of $\varphi$ on $\mathbb R_{>0}$. It is a right-continuous increasing function. We refer to \cite[Theorem~1.26]{SB} for more details. Moreover, for any $(a, b) \in \mathbb R_{>0}^2$, one has \begin{equation}\label{Equ: integral of derivativie}\varphi(b) - \varphi(a) = \int_{\mathopen{]}a,b\mathclose{[}} \varphi'(t)\, \mathrm{d}t.\end{equation} See \cite[Theorem~1.28]{SB} for a proof. In particular, the function $\varphi'$ represents the derivative of $\varphi$ in the sense of distributions.
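As a simple illustration (added here, not part of the original text), take $\varphi(t)=\max(1-t,0)$, which is convex and bounded on $\mathbb R_{>0}$. Its right derivative is
\[\varphi'(t)=\begin{cases}
-1, & 0<t<1,\\
0, & t\geqslant 1,
\end{cases}\]
which is indeed right-continuous and increasing, and one checks directly that $\varphi(b)-\varphi(a)=\int_{\mathopen{]}a,b\mathclose{[}}\varphi'(t)\,\mathrm{d}t$ for any $0<a<b$, in accordance with \eqref{Equ: integral of derivativie}.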
\begin{prop}\phantomsection\label{prop:conv:properties}
Let $\varphi$ be a convex function on $\mathbb R_{>0}$ which is bounded.
\begin{enumerate}[label=\rm(\arabic*)]
\item\label{Item: varphi prime negative} One has $\varphi' \leqslant 0$ on $\mathbb R_{>0}$ and $\lim_{t \rightarrow+\infty} \varphi'(t) = 0$.
In particular, the function $\varphi$ is decreasing and extends to a continuous function on $[0,+\infty]$.
\item\label{Item: limit of left derivative} We extend $\varphi$ continuously to $[0,+\infty]$. The function \[(t\in\mathbb R_{>0}) \longmapsto \frac{\varphi(t) - \varphi(0)}{t}\] is increasing. Moreover, one has \[\lim_{t\downarrow 0} \frac{\varphi(t) - \varphi(0)}{t}=\lim_{t\downarrow 0} \varphi'(t)\in \mathopen{[}{-\infty},{0}{]},\]
which is denoted by $\varphi'(0)$.
\end{enumerate}
\end{prop}
\begin{proof}
\ref{Item: varphi prime negative} We assume by contradiction that $\varphi'(a) > 0$ for some $a\in\mathbb R_{>0}$. By \eqref{Equ: integral of derivativie}, for any $x\in\mathbb R_{>0}$ such that $x>a$, one has
\[
\varphi(x) - \varphi(a) = \int_{\mathopen{]}a,x\mathclose{[}} \varphi'(t)\,\mathrm{d}t \geqslant \int_{\mathopen{]}a,x\mathclose{[}} \varphi'(a) \,\mathrm{d}t = (x-a) \varphi'(a),
\]
so that $\lim_{x\to+\infty} \varphi(x) = +\infty$. This is a contradiction. Thus $\varphi'(t) \leqslant 0$ for all $t \in \mathbb R_{>0}$.
Therefore, one has \[\lim_{t\rightarrow+\infty}\varphi'(t)=\sup_{t \in \mathbb R_{>0}} \varphi'(t) \leqslant 0.\]
To show that the equality $\lim_{t \rightarrow+\infty} \varphi'(t) = 0$ holds, we assume by contradiction that there exists $\varepsilon>0$ such that $\varphi'(t)\leqslant -\varepsilon$ for any $t\in\mathbb R_{>1}$. Then, by \eqref{Equ: integral of derivativie}, for any $x\in\mathbb R_{>1}$,
\[
\varphi(x) - \varphi(1) = \int_{1}^x \varphi'(t)\,\mathrm{d}t \leqslant\int_{\mathopen{]}1,x\mathclose{[}} (-\varepsilon)\,\mathrm{d}t = -\varepsilon (x-1).
\]
Therefore, $\lim_{x\to+\infty} \varphi(x) = -\infty$, which leads to a contradiction.
\medskip
\ref{Item: limit of left derivative} For $0 < a < b$, since
\[
\varphi(a) = \varphi((1-a/b) 0 + (a/b) b) \leq (1-a/b) \varphi(0) + (a/b)\varphi(b),
\]
one has
\[
\frac{\varphi(a) - \varphi(0)}{a} \leqslant \frac{\varphi(b) - \varphi(0)}{b}
\]
as required. Denote by $\varphi'(0)$ the limit $\lim_{s\downarrow 0}\varphi'(s)$. Note that the equality \eqref{Equ: integral of derivativie} actually holds for $(a,b)\in[0,+\infty]^2$ (by the continuity of $\varphi$ and the monotone convergence theorem). Therefore
\[\varphi'(0)\leqslant\frac{\varphi(t)-\varphi(0)}{t}=\frac{1}{t}\int_{\mathopen{]}{0},{t}\mathclose{[}}\varphi'(s)\,\mathrm{d}s\leqslant \varphi'(t).\]
By passing to the limit as $t\downarrow 0$, we obtain the result.
\end{proof}
\begin{prop}\label{Pro: computation of integral varphi psi}
Let $\varphi$ and $\psi$ be continuous functions on $[0,+\infty]$ which are convex on $\mathbb R_{>0}$. One has
\begin{equation}\label{Equ: integral of varphi and dpsi}
\int_{\mathopen{]}0,{+\infty}\mathclose{]}} \varphi \,\mathrm{d}\psi' = -\int_{\mathbb R_{>0}} \psi'(t) \varphi'(t) \,\mathrm{d}t - \varphi(0) \psi'(0)\in \mathopen{[}{-\infty},{+\infty}\mathclose{[}.
\end{equation}
In particular, if $\varphi(0)=\psi(0)=0$, then one has
\begin{equation}\label{Equ: symetry}\int_{\mathopen{]}0,{+\infty}\mathclose{]}}\varphi\,\mathrm{d}\psi'=\int_{\mathopen{]}0,{+\infty}\mathclose{]}}\psi\,\mathrm{d}\varphi'.\end{equation}
\end{prop}
\begin{proof}
By \eqref{Equ: integral of derivativie}, one has
\[\int_{\mathopen{]}0,{+\infty}\mathclose{]}}\varphi\,\mathrm{d}\psi'=\int_{\mathopen{]}0,{+\infty}\mathclose{]}}\int_{\mathopen{]}0,x\mathclose{[}}\varphi'(t)\,\mathrm{d}t\,\mathrm{d}\psi'(x)+\varphi(0)\int_{\mathopen{]}0,{+\infty}\mathclose{]}}\,\mathrm{d}\psi'.\]
By Fubini's theorem, the double integral is equal to
\[\int_{\mathbb R_{>0}}\varphi'(t)\int_{\mathopen{]}t,{+\infty}\mathclose{]}}\,\mathrm{d}\psi'\,\mathrm{d}t=-\int_{\mathbb R_{>0}}\varphi'(t)\psi'(t)\,\mathrm{d}t.\] Therefore, the equality \eqref{Equ: integral of varphi and dpsi} holds. In the case where $\varphi(0)=\psi(0)=0$, one has
\[\int_{\mathopen{]}0,{+\infty}\mathclose{]}}\varphi\,\mathrm{d}\psi'=-\int_{\mathbb R_{>0}}\psi'(t)\varphi'(t)\,\mathrm{d}t=\int_{\mathopen{]}0,{+\infty}\mathclose{]}}\psi\,\mathrm{d}\varphi'.\]
\end{proof}
\begin{prop}\label{prop:x:d:varphi:prime}
Let $\varphi$ be a continuous function on $[0,+\infty]$ which is convex on $\mathbb R_{>0}$. One has
\begin{equation}
\int_{\mathbb R_{>0}} x\,\mathrm{d}\varphi'(x)=\varphi(0)-\varphi(+\infty).
\end{equation}
\end{prop}
\begin{proof}
By Fubini's theorem
\[\begin{split}\int_{\mathbb R_{>0}} x\,\mathrm{d}\varphi'(x)&=\int_{\mathbb R_{>0}}\int_{\mathopen{]}0,x\mathclose{[}}\,\mathrm{d}t\,\mathrm{d}\varphi'(x)=\int_{\mathbb R_{>0}}\int_{\mathopen{]}t,{+\infty}\mathclose{[}}\,\mathrm{d}\varphi'(x)\,\mathrm{d}t\\&=-\int_{\mathbb R_{>0}}\varphi'(t)\,\mathrm{d}t=\varphi(0)-\varphi(+\infty),
\end{split}\]
where in the third equality we have used the fact that $\lim_{t \rightarrow+\infty} \varphi'(t) = 0$ proved in Proposition \ref{prop:conv:properties}.
\end{proof}
\subsection{Transform of Legendre type}
\begin{defi}
Let $\varphi$ be a continuous function on $[0,+\infty]$ which is convex on $\mathbb R_{>0}$. We denote by $\varphi^*$ the function on $[0,+\infty]$ defined as
\[\forall\,\lambda\in[0,+\infty],\quad \varphi^*(\lambda):=\inf_{x\in[0,+\infty]}(x\lambda+\varphi(x)-\varphi(0)).\]
Clearly the function $\varphi^*$ is increasing and non-positive. Moreover,
one has \[\varphi^*(0)=\inf_{x\in[0,+\infty]} \varphi(x)-\varphi(0)=\varphi(+\infty)-\varphi(0).\]
Therefore, for any $\lambda\in[0,+\infty]$, one has
\[\varphi(+\infty)-\varphi(0)\leqslant\varphi^*(\lambda)\leqslant 0.\]
\end{defi}
\begin{prop}\phantomsection\label{theorem:integral:Legendre:trans}
Let $\varphi$ be a continuous function on $[0, +\infty]$ which is convex on $\mathbb R_{>0}$.
For $p \in {\mathbb R}_{> 1}$, one has
\[
\int_{0}^{+\infty} (-\varphi'(x))^{p} \mathrm{d} x = -(p-1)p \int_{0}^{+\infty}\lambda^{p-2}\varphi^*(\lambda)\mathrm{d}\lambda.
\]
In particular,
\begin{equation}\label{Equ: key equality}
\int_{0}^{+\infty} \varphi'(x)^2 \mathrm{d}x = -2 \int_{0}^{+\infty} \varphi^*(\lambda)\, \mathrm{d}\lambda.
\end{equation}
\end{prop}
\begin{proof}
Since $\varphi'$ is increasing one has
\[
\varphi^*(\lambda) = \inf_{x \in [0, +\infty[} \int_{0}^{x}(\lambda + \varphi'(t))\, \mathrm{d}t =
\int_{0}^{+\infty} \min \{\lambda + \varphi'(t), 0\}\,\mathrm{d}t.
\]
Therefore, by Fubini's theorem,
\begin{align*}
\int_{0}^{+\infty}\lambda^{p-2}\varphi^*(\lambda)\,\mathrm{d}\lambda & =
\int_{0}^{+\infty}\left(\int_{0}^{+\infty} \lambda^{p-2}\min \{\lambda + \varphi'(t), 0\}\,\mathrm{d} \lambda\right)\,\mathrm{d}t \\
& = \int_{0}^{+\infty} \left( \int_{0}^{-\varphi'(t)} \lambda^{p-2} (\lambda + \varphi'(t))\, \mathrm{d} \lambda\right)\, \mathrm{d}t \\
& = \int_{0}^{+\infty} \left[ \frac{\lambda^{p}}{p} + \frac{\varphi'(t) \lambda^{p-1}}{p-1} \right]_0^{-\varphi'(t)} \mathrm{d}t \\
& = \frac{-1}{(p-1)p}\int_{0}^{+\infty}(-\varphi'(t))^{p}\,\mathrm{d}t,
\end{align*}
as required.
\end{proof}
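As a worked illustration of \eqref{Equ: key equality} (an example added here, not in the original text), take $\varphi(x)=\mathrm{e}^{-x}$, which is continuous on $[0,+\infty]$ (with $\varphi(+\infty)=0$) and convex on $\mathbb R_{>0}$. A direct computation gives
\[\varphi^*(\lambda)=\inf_{x\in[0,+\infty]}\big(x\lambda+\mathrm{e}^{-x}-1\big)=\begin{cases}
\lambda-\lambda\ln\lambda-1, & 0\leqslant\lambda\leqslant 1,\\
0, & \lambda\geqslant 1,
\end{cases}\]
the infimum being attained at $x=-\ln\lambda$ when $0<\lambda\leqslant 1$. Hence
\[-2\int_0^{+\infty}\varphi^*(\lambda)\,\mathrm{d}\lambda=-2\int_0^{1}\big(\lambda-\lambda\ln\lambda-1\big)\,\mathrm{d}\lambda=-2\Big(\frac{1}{2}+\frac{1}{4}-1\Big)=\frac{1}{2}=\int_0^{+\infty}\mathrm{e}^{-2x}\,\mathrm{d}x=\int_0^{+\infty}\varphi'(x)^2\,\mathrm{d}x,\]
as predicted.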
\subsection{Convex envelope of asymptotically linear functions}\label{Sec: convex envelop} Let $g:\mathbb R_{>0}\rightarrow\mathbb R$ be an asymptotically linear continuous function (see \S\ref{Sec: function of Green type}). We define the \emph{convex envelope} of $g$ as the largest convex function $\widebreve{g}$ on $\mathbb R_{>0}$ which is bounded from above by $g$. Note that $\widebreve{g}$ identifies with the supremum of all affine functions bounded from above by $g$.
\begin{prop}\phantomsection\label{Pro: convex envelop of function of Green type}
Let $g:\mathbb R_{>0}\rightarrow\mathbb R$ be an asymptotically linear continuous function. Then $\widebreve{g}$ is also an asymptotically linear continuous function. Moreover, one has $\mu(g)=\mu(\widebreve{g})$ and $g(0)=\widebreve{g}(0)$.
\end{prop}
\begin{proof}
Let $\varphi_g:[0,+\infty]\rightarrow\mathbb R$ be the continuous function such that $\varphi_g(t)=g(t)-\mu(g)t$ on $\mathbb R_{>0}$. Let $M$ be a real number such that $|\varphi_g(t)|\leqslant M$ for any $t\in[0,+\infty]$. One has \[\mu(g)t-M\leqslant g(t)\leqslant\mu(g)t+M.\] Therefore,
\[\mu(g)t-M\leqslant \widebreve{g}(t)\leqslant \mu(g)t+M.\]
By Proposition \ref{prop:conv:properties}, the function \[\varphi_{\widebreve{g}}:\mathbb R_{>0}\rightarrow\mathbb R,\quad \varphi_{\widebreve{g}}(t):=\widebreve{g}(t)-\mu(g)t\] extends continuously on $[0,+\infty]$. It remains to show that $g(0)=\widebreve{g}(0)$. Let $\varepsilon>0$. The function $t\mapsto (g(t)-g(0)+\varepsilon)/t$ is continuous on $\mathopen{]}{0},{+\infty}\mathclose{]}$ and one has
\[\lim_{t\downarrow0}\frac{g(t)-g(0)+\varepsilon}{t}=+\infty.\]
Therefore this function is bounded from below by a real number $\alpha$. Hence the function $g$ is bounded from below on $\mathbb R_{>0}$ by the affine function \[t\longmapsto \alpha t+g(0)-\varepsilon,\]
which implies that $\widebreve{g}(0)\geqslant {g}(0)-\varepsilon$. Since $g\geqslant \widebreve{g}$ and since $\varepsilon$ is arbitrary, we obtain $\widebreve{g}(0)=g(0)$.
\end{proof}
\section{Green functions on a tree of length $1$}
The purpose of this section is to establish a framework of Green functions on a tree of length $1$, which serves as a foundation for the arithmetic intersection theory of adelic $\mathbb R$-divisors on an arithmetic surface over a trivially valued field.
\subsection{Tree of length 1 associated with a set}\label{Subsec: tree of length 1}
Let $S$ be a non-empty set. We denote by $\mathcal T(S)$ the quotient set of the disjoint union $\coprod_{x\in S}[0,+\infty]$ obtained by gluing the points $0$ in the copies of $[0,+\infty]$. The quotient map from $\coprod_{x\in S}[0,+\infty]$ to $\mathcal T(S)$ is denoted by $\pi$. For each $x\in S$, we denote by $\xi_x:[0,+\infty]\rightarrow\mathcal T(S)$ the restriction of the quotient map $\pi$ to the copy of $[0,+\infty]$ indexed by $x$. The set $\mathcal T(S)$ is the union of $\xi_x([0,+\infty])$, $x\in S$.
\begin{enonce}{Notation}
{\rm Note that the images of $0$ in $\mathcal T(S)$ by all maps $\xi_x$ are the same, which we denote by $\eta_0$. The image of $+\infty$ by the map $\xi_x$ is denoted by $x_0$. If $a$ and $b$ are elements of $[0,+\infty]$ such that $a<b$, the images of the intervals $[a,b]$, $\mathopen{[}a,b\mathclose{[}$, $\mathopen{]}a,b\mathclose{]}$, $\mathopen{]}a,b\mathclose{[}$ by $\xi_x$ are denoted by $[\xi_x(a),\xi_x(b)]$, $\mathopen{[}\xi_x(a),\xi_x(b)\mathclose{[}$, $\mathopen{]}\xi_x(a),\xi_x(b)\mathclose{]}$, $\mathopen{]}\xi_x(a),\xi_x(b)\mathclose{[}$ respectively.}
\end{enonce}
\begin{defi}
We denote by $t:\mathcal T(S)\rightarrow [0,+\infty]$ the map which sends an element $\xi\in \xi_x([0,+\infty])$ to the unique number $a\in [0,+\infty]$ such that $\xi_x(a)=\xi$. In other words, for any $x\in S$, the restriction of $t(\ndot)$ to $[\eta_0,x_0]$ is the inverse of the injective map $\xi_x$. We call $t(\ndot)$ the \emph{parametrisation map} of $\mathcal T(S)$.
\end{defi}
\begin{defi}
We equip $\mathcal T(S)$ with the following topology. A subset $U$ of $\mathcal T(S)$ is open if and only if the conditions below are simultaneously satisfied:
\begin{enumerate}[label=\rm(\arabic*)]
\item for any $x\in S$, $\xi_x^{-1}(U)$ is an open subset of $[0,+\infty]$,
\item if $\eta_0\in U$, then $U$ contains $[\eta_0,x_0]$ for all but finitely many $x\in S$.
\end{enumerate}
By definition, all maps $\xi_x:[0,+\infty]\rightarrow\mathcal T(S)$ are continuous. However, if $S$ is an infinite set, then the parametrisation map $t(\ndot)$ is \emph{not} continuous.
Note that the topological space $\mathcal T(S)$ is compact. We can visualise it as an infinite tree of depth $1$ whose root is $\eta_0$ and whose leaves are $x_0$ with $x\in S$.
\end{defi}
\subsection{Green functions}\label{Subsec: Green functions}
Let $S$ be a non-empty set and $w:S\rightarrow\mathbb R_{>0}$ be a map. We call \emph{Green function} on $\mathcal T(S)$ any continuous map $g$ from $\mathcal T(S)$ to $[-\infty,+\infty]$ such that, for any $x\in S$, the composition of $g$ with $\xi_x|_{\mathbb R_{>0}}$ defines an asymptotically linear function on $\mathbb R_{>0}$. For any $x\in S$, we denote by $\mu_x(g)$ the unique real number such that the function
\[(u\in\mathbb R_{>0})\longmapsto
g(\xi_x(u))-\mu_x(g)u\]
extends to a continuous function on $[0,+\infty]$. We denote by $\varphi_g:\mathcal T(S)\rightarrow\mathbb R$ the continuous function on $\mathcal T(S)$ such that
\[\varphi_g(\xi)=g(\xi)-\mu_x(g)t(\xi) \text{ for any $\xi\in[\eta_0,x_0]$, $x\in S$.}\]
\begin{rema}\label{Rem: lambda est presque partout nul}
Let $g$ be a Green function on $\mathcal T(S)$. It takes finite values on $\mathcal T(S)\setminus\{x_0\,:\, x\in S\}$. Moreover, for any $x\in S$, the value of $g$ at $x_0$ is finite if and only if $\mu_x(g)=0$. As $g$ is a continuous map, $g^{-1}(\mathbb R)$ contains all but finitely many $x_0$ with $x\in S$. In other words, for all but finitely many $x\in S$, one has $\mu_x(g)=0$. Note that the Green function $g$ is bounded if and only if $\mu_x(g)=0$ for any $x\in S$.
\end{rema}
\begin{defi}\label{Def: canonical divisor}
Let $g$ be a Green function on $\mathcal T(S)$. We denote by $g_{\operatorname{can}}$ the map from $\mathcal T(S)$ to $[-\infty,+\infty]$ which sends $\xi\in[\eta_0,x_0]$ to $\mu_x(g)t(\xi)$. Note that the composition of $g_{\operatorname{can}}$ with $\xi_x|_{\mathbb R_{>0}}$ is a linear function on $\mathbb R_{>0}$. We call it the \emph{canonical Green function} associated with $g$. Note that there is a unique bounded Green function $\varphi_g$ on $\mathcal T(S)$ such that $g=g_{\mathrm{can}}+\varphi_g$. We call it the \emph{bounded Green function} associated with $g$. The formula $g=g_{\mathrm{can}}+\varphi_g$ is called the \emph{canonical decomposition} of the Green function $g$. If $g=g_{\mathrm{can}}$, we say that the Green function $g$ is \emph{canonical}.
\end{defi}
\begin{prop}\label{Pro: constant except countable}
Let $g$ be a Green function on $\mathcal T(S)$. For all but countably many $x\in S$, the restriction of $g$ on $[\eta_0,x_0]$ is a constant function.
\end{prop}
\begin{proof}
For any $n\in\mathbb N$ such that $n\geqslant 1$, let $U_n$ be the set of $\xi\in \mathcal T(S)$ such that \[|g(\xi)-g(\eta_0)|<n^{-1}.\] This is an open subset of $\mathcal T(S)$ which contains $\eta_0$. Hence there is a finite subset $S_n$ of $S$ such that $[\eta_0,x_0]\subset U_n$ for any $x\in S\setminus S_n$. Let $S'=\bigcup_{n\in\mathbb N,\,n\geqslant 1}S_n$. This is a countable subset of $S$. For any $x\in S\setminus S'$ and any $\xi\in[\eta_0,x_0]$, one has $g(\xi)=g(\eta_0)$.
\end{proof}
\begin{rema}
It is clear that, if $g$ is a Green function on $\mathcal T(S)$, then for any $a\in\mathbb R$ the function $ag:\mathcal T(S)\rightarrow [-\infty,+\infty]$ is a Green function on $\mathcal T(S)$. Moreover, the canonical decomposition of Green functions makes it possible to define the sum of two Green functions. Let $g_1$ and $g_2$ be two Green functions on $\mathcal T(S)$. We define $g_1+g_2$ as $(g_{1,\mathrm{can}}+g_{2,\mathrm{can}})+(\varphi_{g_1}+\varphi_{g_2})$.
Note that the set of all Green functions, equipped with the addition and the multiplication by a scalar, forms a vector space over $\mathbb R$.
\end{rema}
\subsection{Pairing of Green functions}\label{Subsec: pairing}
Let $S$ be a non-empty set and $w:S\rightarrow\mathbb R_{>0}$ be a map, called a \emph{weight function}. We say that a Green function $g$ on $\mathcal T(S)$ is \emph{pairable with respect to $w$} if the following conditions are satisfied:
\begin{enumerate}[label=\rm(\arabic*)]
\item for any $x\in S$, the function $\varphi_g\circ \xi_x|_{\mathbb R_{>0}}$ belongs to $L_1^2(\mathbb R_{>0})$ (see \S\ref{Sec: function of Green type}),
\item one has
\[\sum_{x\in S}w(x)\int_{\mathbb R_{>0}}(\varphi_g\circ \xi_x|_{\mathbb R_{>0}})'(u)^2\,\mathrm{d}u<+\infty.\]
\end{enumerate}
For each $x\in S$ we fix a representative of the function $(\varphi_g\circ\xi_x|_{\mathbb R_{>0}})'$ and we denote by \[\varphi_g':\bigcup_{x\in S}\,\mathopen{]}\eta_0,x_0\mathclose{[}\longrightarrow \mathbb R\] the function which sends $\xi\in\mathopen{]}\eta_0,x_0\mathclose{[}$
to $(\varphi_g\circ\xi_x|_{\mathbb R_{>0}})'(t(\xi))$.
We equip $\coprod_{x\in S}[0,+\infty]$
with the disjoint union of the weighted Lebesgue measure $w(x)\,\mathrm{d}u$, where $\mathrm{d}u$ denotes the Lebesgue measure on $[0,+\infty]$. We denote by $\nu_{S,w}$ the push-forward of this measure by the projection map
\[\coprod_{x\in S}[0,+\infty]\longrightarrow\mathcal T(S).\]
Then the function $\varphi_g'$ is square-integrable with respect to the measure $\nu_{S,w}$.
\begin{defi}\phantomsection\label{Def: pairing of Green functions} Note that pairable Green functions form a vector subspace of the vector space of Green functions.
Let $g_1$ and $g_2$ be pairable Green functions on $\mathcal T(S)$. We define the \emph{pairing} of $g_1$ and $g_2$ as
\[\sum_{x\in S}w(x)\Big(\mu_x(g_1){g_2}(\eta_0)+\mu_x(g_2){g_1}(\eta_0)\Big) - \int_{\mathcal T(S)}\varphi_{g_1}'(\xi)\varphi_{g_2}'(\xi)\,\nu_{S,w}(\mathrm{d}\xi),\]
which we denote by $\langle g_1,g_2\rangle_w$. Note that $\emptyinnprod_w$ is a symmetric bilinear form on the vector space of pairable Green functions.
\end{defi}
\subsection{Convex Green functions}\label{Sec:convex Green}
Let $S$ be a non-empty set. We say that a Green function $g$ on $\mathcal T(S)$ is \emph{convex} if, for any element $x$ of $S$, the function ${g\circ\xi_x}$ on $\mathbb R_{>0}$ is convex.
\begin{enonce}{Convention}
\rm If $g$ is a convex Green function on $\mathcal T(S)$, by convention we choose, for each $x\in S$, the right derivative of $\varphi_g\circ\xi_x|_{\mathbb R_{>0}}$ to represent the derivative of $\varphi_g\circ\xi_x|_{\mathbb R_{>0}}$ in the sense of distributions. In other words, $\varphi_g'\circ\xi_x|_{\mathbb R_{>0}}$ is given by the right derivative of the function $\varphi_g\circ\xi_x|_{\mathbb R_{>0}}$. Moreover, for any $x\in S$, we denote by $\varphi_g'(\eta_0;x)$ the element $\varphi_{g\circ\xi_x}'(0)\in[-\infty,0]$ (see Proposition \ref{prop:conv:properties} \ref{Item: limit of left derivative}). We emphasise that $\varphi_{g\circ\xi_x}'(0)$ may differ as $x$ varies.
\end{enonce}
\begin{defi}
Let $g$ be a Green function on $\mathcal T(S)$. We call \emph{convex envelope} of $g$, and we denote by $\widebreve{g}$, the continuous map from $\mathcal T(S)$ to $[-\infty,+\infty]$ such that, for any $x\in S$, $\widebreve{g}\circ\xi_x|_{\mathbb R_{>0}}$ is the convex envelope of $g\circ\xi_x|_{\mathbb R_{>0}}$ (see \S\ref{Sec: convex envelop}). By Proposition \ref{Pro: convex envelop of function of Green type}, the function $\widebreve{g}$ is well defined and is a convex Green function on $\mathcal T(S)$. Moreover, it is the largest convex Green function on $\mathcal T(S)$ which is bounded from above by $g$.
\end{defi}
\begin{prop}\phantomsection\label{Pro: convex envelop of Green functions}
Let $g$ be a Green function on $\mathcal T(S)$. The following equalities hold:
\[g_{\mathrm{can}}=\widebreve{g}_{\!\mathrm{can}},\quad
g(\eta_0)=\widebreve{g}(\eta_0),\quad \widebreve{\varphi}_{\!g}=\varphi_{\widebreve{g}}.\]
\end{prop}
\begin{proof}
The first two equalities follow from Proposition \ref{Pro: convex envelop of function of Green type}. The third equality comes from the first one and the fact that $\widebreve{g}=g_{\mathrm{can}}+\widebreve{\varphi}_{\!g}$.
\end{proof}
\subsection{Infimum slopes}
\label{Sec: infimum slopes}
Let $S$ be a non-empty set and $g$ be a Green function on $\mathcal T(S)$. For any $x\in S$, we denote by $\mu_{\inf,x}(g)$ the element
\[\inf_{\xi\in\mathopen{]}\eta_0,x_0\mathclose{[}}\frac{g(\xi)}{t(\xi)}\in\mathbb R\cup\{-\infty\}.\]
Clearly one has $\mu_{\inf,x}(g)\leqslant\mu_{x}(g)$. Therefore, by Remark \ref{Rem: lambda est presque partout nul} we obtain that $\mu_{\inf,x}(g)\leqslant 0$ for all but finitely many $x\in S$. If $w:S\rightarrow\mathbb R_{\geqslant 0}$ is a weight function, we define the \emph{global infimum slope} of $g$ with respect to $w$ as
\[\sum_{x\in S}\mu_{\inf,x}(g) w(x)\in\mathbb R\cup\{-\infty\}.\]
This element is well defined because $\mu_{\inf,x}(g)\leqslant 0$ for all but finitely many $x\in S$. If there is no ambiguity about the weight function (notably when $S$ is the set of closed points of a regular projective curve cf. Definition~\ref{def:mu:inf}), the global infimum slope of $g$ is also denoted by $\mu_{\inf}(g)$.
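As a simple illustration (an example added here, not in the original text), fix $x\in S$ and let $g$ be the Green function on $\mathcal T(S)$ which vanishes outside $[\eta_0,x_0]$ and satisfies $g(\xi_x(u))=\max(u-1,0)$ for every $u\in[0,+\infty]$. Then $\mu_x(g)=1$, while
\[\mu_{\inf,x}(g)=\inf_{u\in\mathbb R_{>0}}\frac{\max(u-1,0)}{u}=0,\]
so the inequality $\mu_{\inf,x}(g)\leqslant\mu_x(g)$ can be strict; moreover $\mu_{\inf,y}(g)=0$ for every $y\in S\setminus\{x\}$, so the global infimum slope of $g$ vanishes for any weight function $w$.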
\begin{prop}\label{Pro: relation between mu and varphig}
Let $g$ be a convex Green function on $\mathcal T(S)$. For any $x\in S$ one has
\[\mu_{\inf,x}(g-g(\eta_0))=\mu_x(g)+\varphi_g'(\eta_0;x).\]
\end{prop}
\begin{proof}
This is a direct consequence of Proposition \ref{prop:conv:properties} \ref{Item: limit of left derivative}.
\end{proof}
\section{Graded linear series}
Let $k$ be a field and $X$ be a regular projective curve over $\Spec k$. We denote by $X^{(1)}$ the set of closed points of $X$. By \emph{$\mathbb R$-divisor} on $X$, we mean an element in the free $\mathbb R$-vector space generated by $X^{(1)}$. We denote by $\operatorname{Div}_{\mathbb R}(X)$ the $\mathbb R$-vector space of $\mathbb R$-divisors on $X$. If $D$ is an element of $\operatorname{Div}_{\mathbb R}(X)$, the coefficient of $x$ in the expression of $D$ into a linear combination of elements of $X^{(1)}$ is denoted by $\operatorname{ord}_x(D)$. If $\operatorname{ord}_x(D)$ belongs to $\mathbb Q$ for any $x\in X^{(1)}$, we say that $D$ is a \emph{$\mathbb Q$-divisor}; if $\operatorname{ord}_x(D)\in\mathbb Z$ for any $x\in X^{(1)}$, we say that $D$ is a \emph{divisor} on $X$. The subsets of $\operatorname{Div}_{\mathbb R}(X)$ consisting of $\mathbb Q$-divisors and divisors are denoted by $\operatorname{Div}_{\mathbb Q}(X)$ and $\operatorname{Div}(X)$, respectively.
Let $D$ be an $\mathbb R$-divisor on $X$. We define the \emph{degree} of $D$ to be
\begin{equation}\label{Equ: degree of an r divisor}\deg(D):=\sum_{x\in X^{(1)}}[k(x):k]\operatorname{ord}_x(D),\end{equation}
where for $x\in X$, $k(x)$ denotes the residue field of $x$. Denote by $\operatorname{Supp}(D)$ the set of all $x\in X^{(1)}$ such that $\operatorname{ord}_x(D)\neq 0$, called the \emph{support} of the $\mathbb R$-divisor $D$. This is a finite subset of $X^{(1)}$. Although $X^{(1)}$ is an infinite set, \eqref{Equ: degree of an r divisor} is actually a finite sum: one has
\[\deg(D)=\sum_{x\in\operatorname{Supp}(D)}\operatorname{ord}_x(D)[k(x):k].\]
Denote by $\operatorname{Rat}(X)$ the field of rational functions on $X$. If $f$ is a non-zero element of $\operatorname{Rat}(X)$, we denote by $(f)$ the \emph{principal divisor} associated with $f$, namely the divisor on $X$ given by
\[\sum_{x\in X^{(1)}}\operatorname{ord}_x(f) x,\]
where $\operatorname{ord}_x(f)\in\mathbb Z$ denotes the valuation of $f$ with respect to the discrete valuation ring $\mathcal O_{X,x}$. The map $\mathrm{Rat}(X)^{\times}\rightarrow\mathrm{Div}(X)$, $f\mapsto(f)$, is a group homomorphism and hence induces an $\mathbb R$-linear map
\[\mathrm{Rat}(X)^{\times}_{\mathbb R}:=\mathrm{Rat}(X)^{\times}\otimes_{\mathbb Z}\mathbb R\longrightarrow\mathrm{Div}_{\mathbb R}(X),\]
which we still denote by $f\mapsto (f)$.
\begin{defi}We say that an $\mathbb R$-divisor $D$ is \emph{effective} if $\operatorname{ord}_x(D)\geqslant 0$ for any $x\in X^{(1)}$. We denote by $D\geqslant 0$ the condition ``\emph{$D$ is effective}''. For any $\mathbb R$-divisor $D$ on $X$, we denote by $H^0(D)$ the set
\[\{f\in\operatorname{Rat}(X)^{\times}\,:\,(f)+D\geqslant 0\}\cup\{0\}.\]
It is a finite-dimensional $k$-vector subspace of $\operatorname{Rat}(X)$. We denote by $\mathrm{genus}(X)$
the genus of the curve $X$ relative to $k$. The theorem of Riemann-Roch implies that, if $D$ is a divisor such that $\deg(D)>2\operatorname{genus}(X)-2$, then one has
\begin{equation}\label{Equ: Riemann-Roch}
\dim_k(H^0(D))=\deg(D)+1-\operatorname{genus}(X).
\end{equation}
We refer the readers to \cite[Lemma 2.2]{MR3299845} for a proof.
Let $D$ be an $\mathbb R$-divisor on $X$. We denote by $\Gamma(D)_{\mathbb R}^{\times}$ the set
\[\{f\in\mathrm{Rat}(X)_{\mathbb R}^{\times}\,:\,(f)+D\geqslant 0\}.\]
This is an $\mathbb R$-vector subspace of $\mathrm{Rat}(X)_{\mathbb R}^{\times}$. Similarly, we denote by $\Gamma(D)_{\mathbb Q}^{\times}$ the $\mathbb Q$-vector subspace
\[\{f\in\mathrm{Rat}(X)_{\mathbb Q}^{\times}\,:\,(f)+D\geqslant 0\}\]
of $\mathrm{Rat}(X)_{\mathbb Q}^{\times}$. Note that one has
\begin{equation}\label{Equ: Gamma D times}\Gamma(D)_{\mathbb Q}^{\times}=\bigcup_{n\in\mathbb N,\,n\geqslant 1}\{f^{\frac 1n}\,:\,f\in H^0(nD)\setminus\{0\}\}.\end{equation}
\end{defi}
\begin{defi}
Let $D$ be an $\mathbb R$-divisor on $X$. We denote by $\lfloor D\rfloor$ and $\lceil D\rceil$ the divisors on $X$ such that
\[\operatorname{ord}_x(\lfloor D\rfloor)=\lfloor \operatorname{ord}_x(D)\rfloor,\quad\operatorname{ord}_x(\lceil D\rceil)=\lceil\operatorname{ord}_x(D)\rceil.\]
Clearly one has $\deg(\lfloor D\rfloor)\leqslant\deg(D)\leqslant\deg(\lceil D\rceil)$. Moreover,
\begin{gather}\label{Equ: lower bound of floor D}
\deg(\lfloor D\rfloor)> \deg(D)-\sum_{x\in\operatorname{Supp}(D)}[k(x):k],\\
\label{Equ: upper bound of ceil D}
\deg(\lceil D\rceil)<\deg(D)+\sum_{x\in\operatorname{Supp}(D)}[k(x):k].
\end{gather}
Let $(D_i)_{i\in I}$ be a family of $\mathbb R$-divisors on $X$ such that \[\sup_{i\in I}\operatorname{ord}_{x}(D_i)=0\] for all but finitely many $x\in X^{(1)}$. We denote by $\sup_{i\in I}D_i$ the $\mathbb R$-divisor such that
\[\forall\,x\in X^{(1)},\quad \operatorname{ord}_x\big(\sup_{i\in I}D_i\big)=\sup_{i\in I}\operatorname{ord}_x(D_i).\]
\end{defi}
\begin{prop}\phantomsection\label{Pro: asymptotic RR}
Let $D$ be an $\mathbb R$-divisor on $X$ such that $\deg(D)\geqslant 0$. One has
\begin{equation}\label{Equ: limit of H0 is deg}\lim_{n\rightarrow+\infty}\frac{\dim_k(H^0(nD))}{n}=\deg(D).\end{equation}
\end{prop}
\begin{proof}
We first assume that $\deg(D)>0$. By \eqref{Equ: lower bound of floor D}, for every sufficiently large integer $n$, one has $\deg(\lfloor nD\rfloor)>2\operatorname{genus}(X)-2$.
Therefore, \eqref{Equ: Riemann-Roch} leads to
\[\dim_k(H^0(\lfloor nD\rfloor))= \deg(\lfloor nD\rfloor)+1-\mathrm{genus}(X).\]
Moreover, since $\deg(D)>0$ one has $\deg(\lceil nD\rceil)\geqslant n\deg(D)>2\operatorname{genus}(X)-2$ for sufficiently positive $n\in\mathbb N_{\geqslant 1}$. Hence
\eqref{Equ: Riemann-Roch} leads to
\[\dim_k(H^0(\lceil nD\rceil))= \deg(\lceil nD\rceil)+1-\mathrm{genus}(X).\]
Since $H^0(\lfloor nD\rfloor)\subseteq H^0(nD)\subseteq H^0(\lceil nD\rceil)$, we obtain
\[\frac{\deg(\lfloor nD\rfloor)+1-\operatorname{genus}(X)}{n}\leqslant\frac{\dim_k(H^0(nD))}{n}\leqslant\frac{\deg(\lceil nD\rceil)+1-\operatorname{genus}(X)}{n}.\]
Taking limit when $n\rightarrow+\infty$, by \eqref{Equ: lower bound of floor D} and \eqref{Equ: upper bound of ceil D} we obtain \eqref{Equ: limit of H0 is deg}.
We now consider the case where $\deg(D)=0$. Let $E$ be an effective $\mathbb R$-Cartier divisor such that $\deg(E)>0$. For any $\varepsilon>0$ one has
\[\begin{split}\limsup_{n\rightarrow+\infty}\frac{\dim_k(H^0(nD))}{n}&\leqslant\lim_{n\rightarrow+\infty}\frac{\dim_k(H^0(n(D+\varepsilon E)))}{n}\\
&=\deg(D+\varepsilon E)=\varepsilon\deg(E).
\end{split}\]
Since $\varepsilon$ is arbitrary, the equality \eqref{Equ: limit of H0 is deg} still holds.
\end{proof}
\begin{prop}\phantomsection\label{Pro: sup of s in Gamma D}
Let $D$ be an $\mathbb R$-divisor on $X$ such that $\deg(D)>0$. Then one has
\begin{equation}\label{Equ: supremum is D}\sup_{s\in\Gamma(D)^{\times}_{\mathbb Q}}(s^{-1})=D.\end{equation}
\end{prop}
\begin{proof}
For any $s\in\Gamma(D)_{\mathbb Q}^{\times}$ one has \[\operatorname{ord}_x(s)+\operatorname{ord}_x(D)\geqslant 0\] and hence $\operatorname{ord}_x(s^{-1})\leqslant\operatorname{ord}_x(D)$.
For any $x\in X^{(1)}$ and any $\varepsilon>0$, we pick an $\mathbb R$-divisor $D_{x,\varepsilon}$ on $X$ such that $D-D_{x,\varepsilon}$ is effective, $\operatorname{ord}_x(D_{x,\varepsilon})=\operatorname{ord}_x(D)$ and $0<\deg(D_{x,\varepsilon})<\varepsilon$. Since $\deg(D_{x,\varepsilon})>0$, the set $\Gamma(D_{x,\varepsilon})_{\mathbb Q}^{\times}$ is not empty (see \eqref{Equ: Gamma D times} and Proposition \ref{Pro: asymptotic RR}). This set is also contained in $\Gamma(D)^{\times}_{\mathbb Q}$ since $D_{x,\varepsilon}\leqslant D$. Let $f$ be an element of $\Gamma(D_{x,\varepsilon})_{\mathbb Q}^{\times}$. One has \[D_{x,\varepsilon}+(f)\geqslant 0\quad\text{ and }\quad\deg(D_{x,\varepsilon}+(f))=\deg(D_{x,\varepsilon})<\varepsilon.\] Therefore
\[\operatorname{ord}_x(D+(f))=\operatorname{ord}_x(D_{x,\varepsilon}+(f))\leqslant\frac{\varepsilon}{[\kappa(x):k]},\]
which leads to
\[\operatorname{ord}_x(f^{-1})\geqslant\operatorname{ord}_x(D)-\frac{\varepsilon}{[\kappa(x):k]}.\]
Since $\varepsilon>0$ is arbitrary, we obtain
\[\sup_{s\in\Gamma(D)_{\mathbb Q}^{\times}}\operatorname{ord}_x(s^{-1})=\operatorname{ord}_x(D).\]
\end{proof}
\begin{rema}\phantomsection\label{Rem: degree 0 effective}
Let $D$ be an $\mathbb R$-divisor on $X$. Note that one has
\[\sup_{s\in\Gamma(D)_{\mathbb R}^{\times}}(s^{-1})\leqslant D.\]
Therefore, the above proposition implies that, if $\deg(D)>0$, then
\[\sup_{s\in\Gamma(D)_{\mathbb R}^{\times}}(s^{-1})= D.\]
This equality also holds when $\deg(D)=0$ and $\Gamma(D)_{\mathbb R}^{\times}\neq\emptyset$. In fact, if $s$ is an element of $\Gamma(D)_{\mathbb R}^{\times}$, then one has $D+(s)\geqslant 0$. Moreover, since $\deg(D)=0$, one has $\deg(D+(s))=\deg(D)+\deg((s))=0$ and hence $D+(s)=0$. Similarly, if $D$ is an $\mathbb R$-divisor on $X$ such that $\Gamma(D)_{\mathbb Q}^{\times}\neq\emptyset$, then the equality
\[\sup_{s\in\Gamma(D)_{\mathbb Q}^{\times}}(s^{-1})=D\]
always holds.
\end{rema}
\begin{defi}\phantomsection\label{Def: generic point of graded linear series}
Let $\operatorname{Rat}(X)$ be the field of rational functions on $X$. By \emph{graded linear series} on $X$, we refer to a graded sub-$k$-algebra $V_\sbullet=\bigoplus_{n\in\mathbb N}V_nT^n$ of $\operatorname{Rat}(X)[T]=\bigoplus_{n\in\mathbb N}\operatorname{Rat}(X)T^n$ which satisfies the following conditions:
\begin{enumerate}[label=\rm(\arabic*)]
\item $V_0=k$,
\item there exists $n\in\mathbb N_{\geqslant 1}$ such that $V_n\neq\{0\}$
\item there exists an $\mathbb R$-divisor $D$ on $X$ such that $V_n\subseteq H^0(nD)$ for any $n\in\mathbb N$.
\end{enumerate}
If $W$ is a $k$-vector subspace of $\operatorname{Rat}(X)$, we denote by $k(W)$ the extension \[k(\{f/g\,:\,(f,g)\in (W\setminus\{0\})^2\})\]
of $k$.
If $V_\sbullet$ is a graded linear series on $X$, we set
\[k(V_\sbullet):=k\Big(\bigcup_{n\in\mathbb N_{\geqslant 1}}\{f/g\,:\,(f,g)\in (V_n\setminus\{0\})^2\}\Big). \]
If $k(V_\sbullet)=\operatorname{Rat}(X)$, we say that the graded linear series $V_\sbullet$ is \emph{birational}.
\end{defi}
\begin{exem}\phantomsection\label{Exe: birational graded linear series}
Let $D$ be an $\mathbb R$-divisor on $X$ such that $\deg(D)>0$. Then the total graded linear series $\bigoplus_{n\in\mathbb N}H^0(nD)$ is birational.
\end{exem}
\begin{prop}\label{Pro: corps engendre par un systeme lineare}
Let $V_\sbullet$ be a graded linear series on $X$. The set \[\mathbb N(V_\sbullet):=\{n\in\mathbb N_{\geqslant 1}\,:\,V_n\neq\{0\}\}\] equipped with the additive law forms a sub-semigroup of $\mathbb N_{\geqslant 1}$. Moreover, for any $n\in\mathbb N(V_\sbullet)$ which is sufficiently positive, one has $k(V_\sbullet)=k(V_n)$.
\end{prop}
\begin{proof}
Let $n$ and $m$ be elements of $\mathbb N(V_\sbullet)$. If $f$ and $g$ are respectively non-zero elements of $V_n$ and $V_m$, then $fg$ is a non-zero element of $V_{n+m}$. Hence $n+m$ belongs to $\mathbb N(V_\sbullet)$. Therefore, $\mathbb N(V_\sbullet)$ is a sub-semigroup of $\mathbb N_{\geqslant 1}$. In particular, if $d\geqslant 1$ is a generator of the subgroup of $\mathbb Z$ generated by $\mathbb N(V_\sbullet)$, then there exists $N_0\in\mathbb N_{\geqslant 1}$ such that $dn\in\mathbb N(V_\sbullet)$ for any $n\in\mathbb N$, $n\geqslant N_0$.
Since $k \subseteq k(V_\sbullet) \subseteq \operatorname{Rat}(X)$ and $\operatorname{Rat}(X)$ is finitely generated over $k$, the extension $k(V_\sbullet)/k$ is finitely generated (see \cite[Chapter~V, \S14, $\text{n}^{\circ}$7, Corollary~3]{Bourbaki_Alg}). Therefore,
there exist a finite family $\{n_1,\ldots,n_r\}$ of elements in $\mathbb N_{\geqslant 1}$, together with pairs $(f_i,g_i)\in (V_{dn_i}\setminus\{0\})^2$ such that $k(V_\sbullet)=k(f_1/g_1,\ldots,f_r/g_r)$. Let $N\in\mathbb N$ such that \[N-\max\{n_1,\ldots,n_r\}\geqslant N_0.\] For any $i\in\{1,\ldots,r\}$ and $M\in\mathbb N_{\geqslant N}$, let $h_{M,i}\in V_{d(M-n_i)}\setminus\{0\}$. Then \[(h_{M,i}f_i,h_{M,i}g_i)\in (V_{dM}\setminus\{0\})^2,\] which shows that $k(V_\sbullet)=k(V_{dM})$.
\end{proof}
\begin{defi}
If $V_\sbullet$ is a graded linear series, we define $\Gamma(V_\sbullet)^{\times}_{\mathbb Q}$ as
\[\bigcup_{n\in\mathbb N_{\geqslant 1}}\{f^{\frac 1n}\,|\,f\in V_n\setminus\{0\}\},\]
and let $D(V_\sbullet)$ be the following $\mathbb R$-divisor
\[\sup_{s\in\Gamma(V_\sbullet)^{\times}_{\mathbb Q}}(s^{-1}),\]
called the \emph{$\mathbb R$-divisor generated by $V_\sbullet$}.
The conditions (2) and (3) in Definition~\ref{Def: generic point of graded linear series}
show that the $\mathbb R$-divisor $D(V_\sbullet)$ is well defined and has non-negative degree.
\end{defi}
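\begin{exem}
As a basic consistency check, which follows directly from the statements already established, let $D$ be an $\mathbb R$-divisor on $X$ with $\deg(D)>0$ and let $V_\sbullet=\bigoplus_{n\in\mathbb N}H^0(nD)T^n$ be the total graded linear series of $D$ (see Example \ref{Exe: birational graded linear series}). By \eqref{Equ: Gamma D times} one has $\Gamma(V_\sbullet)_{\mathbb Q}^{\times}=\Gamma(D)_{\mathbb Q}^{\times}$, so that Proposition \ref{Pro: sup of s in Gamma D} gives $D(V_\sbullet)=D$. In this particular case, Proposition \ref{Pro: convergence of dim} below reduces to Proposition \ref{Pro: asymptotic RR}.
\end{exem}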
\begin{prop}\phantomsection\label{Pro: convergence of dim}
Let $V_\sbullet$ be a birational graded linear series on $X$. One has
\begin{equation}\label{Equ: Hilbert Samuel for graded linear series}\lim_{\begin{subarray}{c}n\in\mathbb N,\,V_n\neq\{0\}\\
n\rightarrow+\infty
\end{subarray}}\frac{\dim_k(V_n)}{n}=\deg(D(V_\sbullet))>0.\end{equation}
\end{prop}
\begin{proof}
By definition, for any $n\in\mathbb N$ one has $V_n\subseteq H^0(nD(V_\sbullet))$. Therefore Proposition \ref{Pro: asymptotic RR} leads to
\[\limsup_{n\rightarrow+\infty}\frac{\dim_k(V_n)}{n}\leqslant\deg(D(V_\sbullet)).\]
Let $p$ be a sufficiently positive integer (so that $\operatorname{Rat}(X)=k(V_p)$).
Let \[V^{[p]}_\sbullet:=\bigoplus_{n\in\mathbb N}\mathrm{Im}(S^nV_p\longrightarrow V_{np})T^n.\]
Clearly one has $D(V_\sbullet^{[p]})\leqslant pD(V_\sbullet)$. Moreover, since $\operatorname{Rat}(X)=k(V_p)$, $X$ identifies with the normalisation of $\mathrm{Proj}(V_\sbullet^{[p]})$, and the pull-back on $X$ of the tautological line bundle on $\mathrm{Proj}(V_\sbullet^{[p]})$ identifies with $\mathcal O(D(V_\sbullet^{[p]}))$. This leads to
\[\frac{1}{p}\deg(D(V_\sbullet^{[p]}))=\lim_{n\rightarrow+\infty}\frac{\dim_k(V^{[p]}_n)}{pn}\leqslant\liminf_{\begin{subarray}{c}n\in\mathbb N,\, V_n\neq \{0\}\\n\rightarrow+\infty
\end{subarray}}\frac{\dim_k(V_n)}{n}.\]
As the map $p\mapsto \frac 1p D(V_\sbullet^{[p]})$ is non-decreasing with respect to the divisibility order on $p$, the relation $D(V_\sbullet)=\sup_{p}\frac 1pD(V_\sbullet^{[p]})$ implies that
\[\deg(D(V_\sbullet))=\sup_{p}\frac 1p\deg(D(V_\sbullet^{[p]}))\leqslant\liminf_{\begin{subarray}{c}n\in\mathbb N,\, V_n\neq \{0\}\\n\rightarrow+\infty
\end{subarray}}\frac{\dim_k(V_n)}{n}.\]
Therefore the equality in \eqref{Equ: Hilbert Samuel for graded linear series} holds.
Finally, let $p$ be a positive integer such that $\mathrm{Rat}(X)=k(V_p)$. There exist $f$ and $g$ in $V_p\setminus\{0\}$ such that $f/g$ is transcendental over $k$ (otherwise the extension $k(V_\sbullet)=k(V_p)$ of $k$ would be algebraic, which contradicts the fact that $\operatorname{Rat}(X)$ has transcendence degree $1$ over $k$). For any $n\in\mathbb N$, the elements $f^ig^{n-i}$, $i\in\{0,\ldots,n\}$, belong to $\mathrm{Im}(S^nV_p\rightarrow V_{np})$ and are linearly independent over $k$, so that $\dim_k(V_n^{[p]})\geqslant n+1$. By the equality displayed above, this implies that $\deg(D(V_\sbullet^{[p]}))>0$, and hence
\[\deg(D(V_\sbullet))\geqslant\frac 1p\deg(D(V^{[p]}_\sbullet))>0,\]
which shows that the limit in \eqref{Equ: Hilbert Samuel for graded linear series} is positive.
\end{proof}
\section{Arithmetic surface over a trivially valued field}
In this section, we fix a commutative field $k$ and we denote by $|\ndot|$ the trivial absolute value on $k$. Let $X$ be a regular projective curve over $\Spec k$. We denote by $X^{\mathrm{an}}$ the \emph{Berkovich topological space} associated with $X$. Recall that, as a set, $X^{\mathrm{an}}$ consists of pairs of the form $\xi=(x,|\ndot|_\xi)$, where $x$ is a scheme point of $X$ and $|\ndot|_\xi$ is an absolute value on the residue field $\kappa(x)$ of $x$ which extends the trivial absolute value on $k$. We denote by $j:X^{\mathrm{an}}\rightarrow X$ the map sending any pair in $X^{\mathrm{an}}$ to its first coordinate. For any $\xi\in X^{\mathrm{an}}$, we denote by $\widehat{\kappa}(\xi)$ the completion of $\kappa(j(\xi))$ with respect to the absolute value $|\ndot|_\xi$, to which $|\ndot|_\xi$ extends in a unique way. For any regular function $f$ on a Zariski open subset $U$ of $X$, we let $|f|$ be the function on $j^{-1}(U)$ sending any $\xi$ to the absolute value of $f(j(\xi))\in\kappa(j(\xi))$ with respect to $|\ndot|_\xi$. The \emph{Berkovich topology} on $X^{\mathrm{an}}$ is defined as the coarsest topology making the map $j$ and all functions of the form $|f|$ continuous, where $f$ is a regular function on a Zariski open subset of $X$.
\begin{rema}
Let $X^{(1)}$ be the set of all closed points of $X$. The Berkovich topological space $X^{\operatorname{an}}$ identifies with the tree $\mathcal T(X^{(1)})$, where
\begin{enumerate}[label=(\alph*)]
\item the root point $\eta_0$ of the tree $\mathcal T(X^{(1)})$ corresponds to the pair consisting of the generic point $\eta$ of $X$ and the trivial absolute value on the field of rational functions on $X$,
\item for any $x\in X^{(1)}$, the leaf point $x_0\in\mathcal T(X^{(1)})$ corresponds to the closed point $x$ of $X$ together with the trivial absolute value on the residue field $\kappa(x)$,
\item for any $x\in X^{(1)}$, any $\xi\in\mathopen{]}\eta_0,x_0\mathclose{[}$ corresponds to the pair consisting of the generic point $\eta$ of $X$ and the absolute value $\mathrm{e}^{-t(\xi)\operatorname{ord}_x(\ndot)}$, with $\operatorname{ord}_x(\ndot)$ being the discrete valuation on the field of rational functions $\operatorname{Rat}(X)$ corresponding to $x$.
\end{enumerate}
\end{rema}
\subsection{Metrised divisors} We call \emph{metrised $\mathbb R$-divisor} on $X$ any pair $(D,g)$, where $D$ is an \emph{$\mathbb R$-divisor} on $X$ and $g$ is a Green function on $\mathcal T(X^{(1)})$ such that $\mu_x(g)=\operatorname{ord}_x(D)$ for any $x\in X^{(1)}$ (see \S\ref{Subsec: Green functions}). If in addition $D$ is a \emph{$\mathbb Q$-divisor} (resp. a divisor), we say that $(D,g)$ is a \emph{metrised $\mathbb Q$-divisor} (resp. \emph{metrised divisor}).
If $(D,g)$ is a metrised $\mathbb R$-divisor on $X$ and $a$ is a real number, then $(aD,ag)$ is also a metrised $\mathbb R$-divisor, denoted by $a(D,g)$. Moreover, if $(D_1,g_1)$ and $(D_2,g_2)$ are two metrised $\mathbb R$-divisors on $X$, then $(D_1+D_2,g_1+g_2)$ is also a metrised $\mathbb R$-divisor, denoted by $(D_1,g_1)+(D_2,g_2)$. The set $\widehat{\mathrm{Div}}_{\mathbb R}(X)$ of all metrised $\mathbb R$-divisors on $X$ then forms a vector space over $\mathbb R$.
If $(D,g)$ is a metrised $\mathbb R$-divisor on $X$, we say that $g$ is a \emph{Green function of the $\mathbb R$-divisor $D$}.
\begin{rema}
\begin{enumerate}[label=\rm(\arabic*)]
\item Let $(D,g)$ be a metrised $\mathbb R$-divisor on $X$. Note that the $\mathbb R$-divisor part $D$ is uniquely determined by the Green function $g$. Therefore the study of metrised $\mathbb R$-divisors on $X$ is equivalent to that of Green functions on the infinite tree $\mathcal T(X^{(1)})$. However, the notation of the pair $(D,g)$ facilitates the presentation of the study of the metrised linear series of $(D,g)$.
\item Let $D$ be an $\mathbb R$-divisor on $X$. There is a unique canonical Green function on $\mathcal T(X^{(1)})$ (see Definition \ref{Def: canonical divisor}), denoted by $g_{D}$, such that $(D,g_D)$ is a metrised $\mathbb R$-divisor. Note that, for any metrised $\mathbb R$-divisor $(D,g)$ which admits $D$ as its underlying $\mathbb R$-divisor, the difference $g-g_D$ coincides with the bounded function $\varphi_g$ associated with $g$ (see Definition \ref{Def: canonical divisor}). In particular, if $(D,g)$ is a metrised $\mathbb R$-divisor such that $D$ is the zero $\mathbb R$-divisor, then the Green function $g$ is bounded.
\end{enumerate}
\end{rema}
\begin{defi}
Let $\mathrm{Rat}(X)$ be the field of rational functions on $X$ and $\mathrm{Rat}(X)^{\times}_{\mathbb R}$ be the $\mathbb R$-vector space $\mathrm{Rat}(X)^{\times}\otimes_{\mathbb Z}\mathbb R$. For any $\phi$ in $\mathrm{Rat}(X)^{\times}_{\mathbb R}$, the pair $((\phi),g_{(\phi)})$ is called the \emph{principal metrised $\mathbb R$-divisor} associated with $\phi$ and is denoted by $\widehat{(\phi)}$.
\end{defi}
\begin{defi}
If $(D,g)$ is a metrised $\mathbb R$-divisor, for any $\phi\in\Gamma(D)^{\times}_{\mathbb R}$, we define
\begin{equation}\label{Equ: norm g}\norm{\phi}_{g}:=\exp\bigg(-\inf_{\xi\in\mathcal T(X^{(1)})}(g_{(\phi)}+g)(\xi)\bigg).\end{equation}
By convention, $\norm{0}_g$ is defined to be zero.
\end{defi}
\subsection{Ultrametrically normed vector spaces}
Let $E$ be a finite-dimensional vector space over $k$ (equipped with the trivial absolute value). By \emph{ultrametric norm} on $E$, we mean a map $\norm{\ndot}:E\rightarrow\mathbb R_{\geqslant 0}$ such that
\begin{enumerate}[label=\rm(\arabic*)]
\item for any $x\in E$, $\norm{x}=0$ if and only if $x=0$,
\item $\norm{ax}=\norm{x}$ for any $x\in E$ and $a\in k\setminus\{0\}$,
\item for any $(x,y)\in E\times E$, $\norm{x+y}\leqslant\max\{\norm{x},\norm{y}\}$.
\end{enumerate}
Let $r$ be the rank of $E$ over $k$. We define the \emph{determinant norm} associated with $\norm{\ndot}$ as the norm $\norm{\ndot}_{\det}$ on $\det(E)=\Lambda^r(E)$ such that
\[\forall\,\eta\in\det(E),\quad \norm{\eta}_{\det}=\inf_{\begin{subarray}{c}
(s_1,\ldots,s_r)\in E^r\\
s_1\wedge\ldots\wedge s_r=\eta
\end{subarray}}\norm{s_1}\cdots\norm{s_r}.\]
We define the \emph{Arakelov degree} of $(E,\norm{\ndot})$ as
\begin{equation}
\widehat{\deg}(E,\norm{\ndot})=-\ln\norm{\eta}_{\det},
\end{equation}
where $\eta$ is a non-zero element of $\det(E)$.
We then define the \emph{positive Arakelov degree} as
\[\widehat{\deg}_+(E,\norm{\ndot}):=\sup_{F\subset E}\widehat{\deg}(F,\norm{\ndot}_F),\]
where $F$ runs over the set of all vector subspaces of $E$, and $\norm{\ndot}_F$ denotes the restriction of $\norm{\ndot}$ to $F$.
\begin{exem} Let $(D,g)$ be a metrised $\mathbb R$-divisor on $X$.
Note that the restriction of $\norm{\ndot}_g$ to $H^0(D)$ defines an ultrametric norm on the $k$-vector space $H^0(D)$.
\end{exem}
Assume that $(E,\norm{\ndot})$ is a non-zero finite-dimensional ultrametrically normed vector space over $k$. We introduce a Borel probability measure $\mathbb P_{(E,\norm{\ndot})}$ on $\mathbb R$ such that, for any $t\in\mathbb R$,
\[\mathbb P_{(E,\norm{\ndot})}(\mathopen{]}{t},{+\infty}\mathclose{[})=\frac{\dim_k(\{s\in E\,:\,\norm{s}<\mathrm{e}^{-t}\})}{\dim_k(E)}.\]
Then, for any random variable $Z$ which follows $\mathbb P_{(E,\norm{\ndot})}$ as its probability law, one has
\begin{equation}\label{Equ: slope as expectation}\frac{\widehat{\deg}(E,\norm{\ndot})}{\dim_k(E)}=\mathbb E[Z]=\int_{\mathbb R}t\,\mathbb P_{(E,\norm{\ndot})}(\mathrm{d}t)\end{equation}
and
\begin{equation}\label{Equ: positive slope as expectation}
\frac{\widehat{\deg}_+(E,\norm{\ndot})}{\dim_k(E)}=\mathbb E[\max(Z,0)]=\int_{0}^{+\infty}t\,\mathbb P_{(E,\norm{\ndot})}(\mathrm{d}t).
\end{equation}
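\begin{exem}
The following rank-two computation is a straightforward verification, included only to illustrate the measure $\mathbb P_{(E,\norm{\ndot})}$; the basis $(e_1,e_2)$ and the real numbers $\alpha\leqslant\beta$ are introduced solely for this purpose. Let $E=k e_1\oplus k e_2$ and consider the ultrametric norm on $E$ defined by
\[\norm{a e_1+b e_2}:=\max\{|a|\,\mathrm{e}^{-\alpha},\,|b|\,\mathrm{e}^{-\beta}\},\]
where $|\ndot|$ denotes the trivial absolute value on $k$. One checks that $\norm{e_1\wedge e_2}_{\det}=\mathrm{e}^{-\alpha-\beta}$, so that $\widehat{\deg}(E,\norm{\ndot})=\alpha+\beta$, and that
\[\{s\in E\,:\,\norm{s}<\mathrm{e}^{-t}\}=\begin{cases} E & \text{if }t<\alpha,\\ k e_2 & \text{if }\alpha\leqslant t<\beta,\\ \{0\} & \text{if }t\geqslant\beta.\end{cases}\]
Hence $\mathbb P_{(E,\norm{\ndot})}=\frac 12(\delta_{\alpha}+\delta_{\beta})$, where $\delta_c$ denotes the Dirac measure at $c\in\mathbb R$. The equality \eqref{Equ: slope as expectation} then reads $\mathbb E[Z]=\frac{\alpha+\beta}{2}$, and \eqref{Equ: positive slope as expectation} gives $\widehat{\deg}_+(E,\norm{\ndot})=\max\{\alpha,0\}+\max\{\beta,0\}$.
\end{exem}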
\subsection{Essential infima} Let $(D,g)$ be a metrised $\mathbb R$-divisor on $X$ such that $\Gamma(D)_{\mathbb R}^{\times}$ is not empty. We define
\[\lambda_{\operatorname{ess}}(D,g):=\sup_{\phi\in\Gamma(D)_{\mathbb R}^{\times}}\inf_{\xi\in X^{\mathrm{an}}}(g_{(\phi)}+g)(\xi),\]
called the \emph{essential infimum} of the metrised $\mathbb R$-divisor $(D,g)$. By \eqref{Equ: norm g}, we can also write $\lambda_{\operatorname{ess}}(D,g)$ as
\[\sup_{\phi\in\Gamma(D)_{\mathbb R}^{\times}}\Big(-\ln\norm{\phi}_g\Big).\]
\begin{prop}\label{Pro: superadditivity}
Let $(D_1,g_1)$ and $(D_2,g_2)$ be metrised $\mathbb R$-divisors such that $\Gamma(D_1)_{\mathbb R}^{\times}$ and $\Gamma(D_2)_{\mathbb R}^{\times}$ are non-empty. Then one has
\begin{equation}\label{Equ: lambda ess superadditive}\lambda_{\mathrm{ess}}(D_1+D_2,g_1+g_2)\geqslant\lambda_{\mathrm{ess}}(D_1,g_1)+\lambda_{\mathrm{ess}}(D_2,g_2).\end{equation}
\end{prop}
\begin{proof}
Let $\phi_1$ and $\phi_2$ be elements of $\Gamma(D_1)_{\mathbb R}^{\times}$ and $\Gamma(D_2)_{\mathbb R}^{\times}$ respectively. One has $\phi_1\phi_2\in\Gamma(D_1+D_2)_{\mathbb R}^{\times}$. Moreover,
\[g_{(\phi_1\phi_2)}=g_{(\phi_1)}+g_{(\phi_2)}.\]
Therefore
\[g_{(\phi_1\phi_2)}+(g_1+g_2)=(g_{(\phi_1)}+g_1)+(g_{(\phi_2)}+g_2),\]
which leads to
\[\begin{split}&\quad\;\Big(\inf_{\xi\in X^{\mathrm{an}}}\big(g_{(\phi_1)}+g_1\big)(\xi)\Big)+\Big(\inf_{\xi\in X^{\mathrm{an}}}\big(g_{(\phi_2)}+g_2\big)(\xi)\Big)\\&\leqslant \inf_{\xi\in X^{\mathrm{an}}}\big(g_{(\phi_1\phi_2)}+(g_1+g_2)\big)(\xi)\\
&\leqslant\lambda_{\mathrm{ess}}(D_1+D_2,g_1+g_2).\end{split}\]
Taking the supremum with respect to $\phi_1\in\Gamma(D_1)_{\mathbb R}^{\times}$ and $\phi_2\in\Gamma(D_2)_{\mathbb R}^{\times}$, we obtain the inequality \eqref{Equ: lambda ess superadditive}.
\end{proof}
\begin{rema}In the literature, the essential infimum of height functions is studied in the number field setting. We can consider its analogue in the setting of Arakelov geometry over a trivially valued field. For any closed point $x$ of $X$, we define the height of $x$ with respect to $(D,g)$ as
\[h_{(D,g)}(x):=\varphi_g(x_0),\]
where $\varphi_g=g-g_{\mathrm{can}}$ is the bounded Green function associated with $g$ (see Definition \ref{Def: canonical divisor}), and $x_0$ denotes the point of $X^{\mathrm{an}}$ corresponding to the closed point $x$ equipped with the trivial absolute value on its residue field. In particular, for any element $x\in X^{(1)}$ outside of the support of $D$, one has \[h_{(D,g)}(x)=g(x_0).\] Then the \emph{essential infimum} of the height function $h_{(D,g)}$ is defined as
\[\mu_{\mathrm{ess}}(D,g):=\sup_{Z\subsetneq X}\inf_{x\in X^{(1)}\setminus Z}h_{(D,g)}(x),\]
where $Z$ runs over the set of closed subschemes of $X$ which are different from $X$ (namely a finite subset of $X^{(1)}$). If $\Gamma(D)_{\mathbb R}^{\times}$ is not empty, one has
\[\lambda_{\mathrm{ess}}(D,g)\leqslant\sup_{\phi\in\Gamma(D)_{\mathbb R}^{\times}}\inf_{x\in X^{(1)}}(g_{(\phi)}+g)(x_0).\] For each $\phi\in\Gamma(D)^{\times}_{\mathbb R}$, one has
\[\inf_{x\in X^{(1)}}(g_{(\phi)}+g)(x_0)\leqslant\inf_{x\in X^{(1)}\setminus(\mathrm{Supp}(D)\cup\mathrm{Supp}((\phi)))}g(x_0),\]
which is clearly bounded from above by $\mu_{\mathrm{ess}}(D,g)$. Therefore, one has
\begin{equation}\label{Equ: lambda r ess bounded}\lambda_{\mathrm{ess}}(D,g)\leqslant\mu_{\mathrm{ess}}(D,g).\end{equation}
\end{rema}
The following proposition implies that $\lambda_{\mathrm{ess}}(D,g)$ is actually finite.
\begin{prop}\phantomsection\label{Pro: essential minimu bounded}
Let $(D,g)$ be a metrised $\mathbb R$-divisor on $X$. One has $\mu_{\mathrm{ess}}(D,g)=g(\eta_0)$, where $\eta_0$ denotes the point of $X^{\mathrm{an}}$ corresponding to the generic point of $X$ equipped with the trivial absolute value on its residue field.
\end{prop}
\begin{proof}
Let $\alpha$ be a real number that is $>g(\eta_0)$. The set
\[\{\xi\in X^{\mathrm{an}}\,:\,g(\xi)<\alpha\}\]
is an open subset of $X^{\mathrm{an}}$ containing $\eta_0$ and hence there exists a finite subset $S$ of $X^{(1)}$ such that
$g(x_0)<\alpha$ for any $x\in X^{(1)}\setminus S$. Therefore we obtain $\mu_{\mathrm{ess}}(D,g)\leqslant\alpha$. Since $\alpha>g(\eta_0)$ is arbitrary, we get $\mu_{\mathrm{ess}}(D,g)\leqslant g(\eta_0)$.
Conversely, if $\beta$ is a real number such that $\beta<g(\eta_0)$, then
\[\{\xi\in X^{\mathrm{an}}\,:\,g(\xi)>\beta\}\]
is an open subset of $X^{\mathrm{an}}$ containing $\eta_0$. Hence there exists a finite subset $S'$ of $X^{(1)}$ such that $g(x_0)>\beta$ for any $x\in X^{(1)}\setminus S'$. Hence $\mu_{\mathrm{ess}}(D,g)\geqslant\beta$. Since $\beta<g(\eta_0)$ is arbitrary, we obtain $\mu_{\mathrm{ess}}(D,g)\geqslant g(\eta_0)$.
\end{proof}
\begin{lemm}\label{Lem: linear independence}
Let $r\in\mathbb N_{\geqslant 1}$ and $s_1,\ldots,s_r$ be elements of $\mathrm{Rat}(X)_{\mathbb Q}^{\times}$ and $a_1,\ldots,a_r$ be real numbers which are linearly independent over $\mathbb Q$. Let $s:=s_1^{a_1}\cdots s_r^{a_r}\in\mathrm{Rat}(X)_{\mathbb R}^{\times}$. Then for any $i\in\{1,\ldots,r\}$ one has $\operatorname{Supp}((s_i))\subset\operatorname{Supp}((s))$.
\end{lemm}
\begin{proof}
Let $x$ be a closed point of $X$ which does not lie in the support of $(s)$. One has
\[\sum_{i=1}^r\operatorname{ord}_x(s_i)a_i=0\]
and hence $\operatorname{ord}_x(s_1)=\ldots=\operatorname{ord}_x(s_r)=0$ since $a_1,\ldots,a_r$ are linearly independent over $\mathbb Q$.
\end{proof}
\begin{lemm}\phantomsection\label{Lem: approximation by rational solutions}
Let $n$ and $r$ be two positive integers, $\ell_1,\ldots,\ell_n$ be linear forms on $\mathbb R^r$ of the form
\[\ell_j(\boldsymbol{y})=b_{j,1}y_1+\cdots+b_{j,r}y_r,\text{ where $(b_{j,1},\ldots,b_{j,r})\in\mathbb Q^r$}\] and $q_1,\ldots,q_n$ be non-negative real numbers. Let $\boldsymbol{a}=(a_1,\ldots,a_r)$ be an element of $\mathbb R_{>0}^{r}$ which forms a linearly independent family over $\mathbb Q$, and such that $\ell_j(\boldsymbol{a})+q_j\geqslant 0$ for any $j\in\{1,\ldots,n\}$. Then, for any $\varepsilon>0$, there exists a sequence
\[\boldsymbol{\delta}^{(m)}=(\delta^{(m)}_1,\ldots,\delta^{(m)}_r),\quad m\in\mathbb N\]
in $\mathbb R_{>0}^r$, which converges to $(0,\ldots,0)$ and verifies the following conditions:
\begin{enumerate}[label=\rm(\arabic*)]
\item for any $j\in\{1,\ldots,n\}$, one has $\ell_j(\boldsymbol{\delta}^{(m)})+\varepsilon q_j\geqslant 0$,
\item for any $m\in\mathbb N$ and any $i\in\{1,\ldots,r\}$, one has $\delta_i^{(m)}+a_i\in\mathbb Q$.
\end{enumerate}
\end{lemm}
\begin{proof}Without loss of generality, we may assume that $q_1=\cdots=q_d=0$ and $\min\{q_{d+1},\ldots,q_n\}>0$.
Since $a_1,\ldots,a_r$ are linearly independent over $\mathbb Q$, for $j\in\{1,\ldots,d\}$, one has $\ell_j(\boldsymbol{a})>0$. Hence there exists an open convex cone $C$ in $\mathbb R_{>0}^r$ which contains $\boldsymbol{a}$, such that $\ell_j(\boldsymbol{y})>0$ for any $\boldsymbol{y}\in C$ and $j\in\{1,\ldots,d\}$. Moreover, if we denote by $\norm{\ndot}_{\sup}$ the norm on $\mathbb R^r$ (where $\mathbb R$ is equipped with its usual absolute value) defined as
\[\norm{(y_1,\ldots,y_r)}_{\sup}:=\max\{|y_1|,\ldots,|y_r|\},\] then there exists $\lambda>0$ such that, for any $\boldsymbol{z}\in C$ such that $\norm{z}_{\sup}<\lambda$ and any $j\in\{d+1,\ldots,n\}$, one has $\ell_j(\boldsymbol{z})+\varepsilon q_j\geqslant 0$. Let \[C_\lambda=\{\boldsymbol{y}\in C\,:\,\norm{\boldsymbol{y}}_{\sup}<\lambda\}.\]
It is a convex open subset of $\mathbb R^r$. For any $\boldsymbol{y}\in C_\lambda$ and any $j\in\{1,\ldots,n\}$, one has
\[\ell_j(\boldsymbol{y})+\varepsilon q_j\geqslant 0.\]
Since $C_\lambda$ is open and convex, so is its translate $\boldsymbol{a}+C_\lambda$. Note that the set of rational points in a non-empty open subset of $\mathbb R^r$ is dense in it. Therefore, the set of all points $\boldsymbol{\delta}\in C_\lambda$ such that $\boldsymbol{\delta}+\boldsymbol{a}\in\mathbb Q^r$ is dense in $C_\lambda$. Since $(0,\ldots,0)$ lies on the boundary of $C_\lambda$, it can be approximated by a sequence $(\boldsymbol{\delta}^{(m)})_{m\in\mathbb N}$ of elements in $C_\lambda$ such that $\boldsymbol{\delta}^{(m)}+\boldsymbol{a}\in\mathbb Q^r$ for any $m\in\mathbb N$.
\end{proof}
\begin{rema}\label{Rem: suite d'approximation}
We keep the notation and hypotheses of Lemma \ref{Lem: approximation by rational solutions}. For any $m\in\mathbb N$, and $j\in\{1,\ldots,n\}$ one has
\[\ell_j(\boldsymbol{a}+\boldsymbol{\delta}^{(m)})+(1+\varepsilon)q_j\geqslant 0,\]
or equivalently,
\[\ell_j\Big(\frac{1}{1+\varepsilon}(\boldsymbol{a}+\boldsymbol{\delta}^{(m)})\Big)+q_j\geqslant 0.\]
Therefore, one can find a sequence $(\boldsymbol{a}^{(p)})_{p\in\mathbb N}$ of elements in $\mathbb Q^r$ which converges to $\boldsymbol{a}$ and such that
\[\ell_j(\boldsymbol{a}^{(p)})+q_j\geqslant 0\]
holds for any $j\in\{1,\ldots,n\}$ and any $p\in\mathbb N$.
\end{rema}
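\begin{exem}
The construction in the proof of Lemma \ref{Lem: approximation by rational solutions} can be made completely explicit in simple situations; the following toy example is only meant as an illustration and is not used elsewhere. Take $r=2$, $\boldsymbol{a}=(\sqrt 2,\sqrt 3)$, $n=1$, $\ell_1(\boldsymbol y)=y_1-y_2$ and $q_1=1$, so that $\ell_1(\boldsymbol a)+q_1=\sqrt 2-\sqrt 3+1>0$. For $m\in\mathbb N$, let $\delta_i^{(m)}:=\lceil a_i 10^m\rceil 10^{-m}-a_i$ for $i\in\{1,2\}$. Then $\boldsymbol{\delta}^{(m)}\in\mathopen{]}0,10^{-m}\mathclose{[}^2$, the sequence $(\boldsymbol{\delta}^{(m)})_{m\in\mathbb N}$ converges to $(0,0)$, one has $\delta^{(m)}_i+a_i\in\mathbb Q$ for $i\in\{1,2\}$, and
\[\ell_1(\boldsymbol{\delta}^{(m)})+\varepsilon q_1\geqslant \varepsilon-10^{-m}\geqslant 0\]
as soon as $10^{-m}\leqslant\varepsilon$, so that the conditions (1) and (2) of the lemma are satisfied by the sequence obtained by discarding the finitely many first terms.
\end{exem}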
\begin{prop}\label{Pro:lambda ess sur Q}
Let $(D,g)$ be a metrised $\mathbb R$-divisor on $X$ such that $\Gamma(D)_{\mathbb Q}^{\times}\neq\emptyset$. One has \begin{equation}\label{Equ: interpretation of lambda ess}\begin{split}\lambda_{\mathrm{ess}}(D,g)&=\sup_{\phi\in\Gamma(D)_{\mathbb Q}^{\times}}\inf_{\xi\in X^{\mathrm{an}}}(g_{(\phi)}+g)(\xi)=\sup_{\phi\in\Gamma(D)_{\mathbb Q}^{\times}}\Big(-\ln\norm{\phi}_g\Big)\\
&=\sup_{n\in\mathbb N,\,n\geqslant 1}\frac 1n\sup_{s\in H^0(nD)\setminus\{0\}}\Big(-\ln\norm{s}_{ng}\Big).
\end{split}
\end{equation}
\end{prop}
\begin{proof}
By definition one has
\[\Gamma(D)_{\mathbb Q}^{\times}=\bigcup_{n\in\mathbb N,\,n\geqslant 1}\{s^{\frac 1n}\,:\,s\in H^0(nD)\setminus\{0\}\}.\]
Moreover, for $\phi\in\Gamma(D)_{\mathbb Q}^{\times}$, one has
\[\inf_{\xi\in X^{\mathrm{an}}}(g_{(\phi)}+g)(\xi)=-\ln\norm{\phi}_g.\]
Therefore the second and third equalities of \eqref{Equ: interpretation of lambda ess} hold.
To show the first equality, we denote temporarily by $\lambda_{\mathbb Q,\mathrm{ess}}(D,g)$ the second term of \eqref{Equ: interpretation of lambda ess}.
Let $a$ be an arbitrary positive rational number. The correspondence $\Gamma(D)_{\mathbb Q}^{\times}\rightarrow\Gamma(aD)_{\mathbb Q}^{\times}$ given by $\phi\mapsto \phi^a$ is a bijection. Moreover, for $\phi\in\Gamma(D)_{\mathbb Q}^{\times}$ one has $\norm{\phi^a}_{ag}=\norm{\phi}_g^a$. Hence the equality \begin{equation}\label{Equ: linearity of lambda ess}\lambda_{\mathbb Q,\mathrm{ess}}(aD,ag)=a\lambda_{\mathbb Q,\mathrm{ess}}(D,g)\end{equation} holds.
By our assumption, we can choose $\psi\in\Gamma(D)_{\mathbb Q}^{\times}$. For $\mathbb K\in\{\mathbb Q,\mathbb R\}$, the map
\[\alpha_{\psi}:\Gamma(D)^{\times}_{\mathbb K}\longrightarrow\Gamma(D+(\psi))_{\mathbb K}^{\times},\quad \phi\longmapsto\phi\psi^{-1}\]
is a bijection. Moreover, for any $\phi\in\Gamma(D)_{\mathbb K}^{\times}$,
\[\norm{\phi}_g=\norm{\alpha_{\psi}(\phi)}_{g+g_{(\psi)}}.\]
Hence one has
\begin{gather}\label{Equ: invariance under linear equivalence}\lambda_{\mathbb Q,\mathrm{ess}}(D,g)=\lambda_{\mathbb Q,\mathrm{ess}}(D+(\psi),g+g_{(\psi)}),\\\lambda_{\mathrm{ess}}(D,g)=\lambda_{\mathrm{ess}}(D+(\psi),g+g_{(\psi)}).\end{gather}
Furthermore, for any $c\in\mathbb R$, one has
\begin{gather}\lambda_{\mathbb Q,\mathrm{ess}}(D,g+c)=\lambda_{\mathbb Q,\mathrm{ess}}(D,g)+c,\label{Equ: add a constant}\\\label{Equ: add a constant2}\lambda_{\mathrm{ess}}(D,g+c)=\lambda_{\mathrm{ess}}(D,g)+c.\end{gather}
Therefore, to prove the proposition, we may assume without loss of generality that $D$ is effective and $\varphi_g\geqslant 0$.
By definition one has $\lambda_{\mathbb Q,\mathrm{ess}}(D,g)\leqslant\lambda_{\mathrm{ess}}(D,g)$. To show the converse inequality, it suffices to prove that, for any $s\in\Gamma(D)_{\mathbb R}^{\times}$, one has
\[-\ln\norm{s}_{g}\leqslant\lambda_{\mathbb Q,\mathrm{ess}}(D,g).\]
We choose $s_1,\ldots,s_r$ in $\mathrm{Rat}(X)_{\mathbb Q}^{\times}$ and $a_1,\ldots,a_r$ in $\mathbb R_{>0}$ such that $a_1,\ldots,a_r$ are linearly independent over $\mathbb Q$ and that $s=s_1^{a_1}\cdots s_r^{a_r}$. By Lemma \ref{Lem: linear independence}, for any $i\in\{1,\ldots,r\}$, the support of $(s_i)$ is contained in that of $(s)$. Assume that $\operatorname{Supp}((s))=\{x_1,\ldots,x_n\}$. Since $s\in\Gamma(D)_{\mathbb R}^{\times}$, for $j\in\{1,\ldots,n\}$, one has
\begin{equation}\label{Equ: sum of ai ord xi}a_1\operatorname{ord}_{x_j}(s_1)+\cdots+a_r\operatorname{ord}_{x_j}(s_r)+\operatorname{ord}_{x_j}(D)\geqslant 0.\end{equation}
By Lemma \ref{Lem: approximation by rational solutions}, for any rational number $\varepsilon>0$, there exists a sequence
\[(\delta_1^{(m)},\ldots,\delta_{r}^{(m)}),\quad m\in\mathbb N\]
in $\mathbb R^r$, which converges to $(0,\ldots,0)$, and such that,
\begin{enumerate}[label=\rm(\arabic*)]
\item for any $j\in\{1,\ldots,n\}$ and any $m\in\mathbb N$, one has
\[\delta_1^{(m)}\operatorname{ord}_{x_j}(s_1)+\cdots+\delta_r^{(m)}\operatorname{ord}_{x_j}(s_r)+\varepsilon\operatorname{ord}_{x_j}(D)\geqslant 0.\]
\item for any $i\in\{1,\ldots,r\}$ and any $m\in\mathbb N$, $\delta_i^{(m)}+a_i\in\mathbb Q$.
\end{enumerate}
For any $m\in\mathbb N$, let
\[s^{(m)}=s_1^{\delta_1^{(m)}}\!\!\cdots\; s_r^{\delta_r^{(m)}}\in\Gamma(\varepsilon D)_{\mathbb R}^{\times}.\]
The conditions (1) and (2) above imply that $s\cdot s^{(m)}\in\Gamma((1+\varepsilon)D)_{\mathbb Q}^{\times}$.
Hence one has
\[\inf_{\xi\in X^{\mathrm{an}}}\big((1+\varepsilon)g+g_{(s\cdot s^{(m)})}\big)(\xi)\leqslant\lambda_{\mathbb Q,\mathrm{ess}}((1+\varepsilon)D,(1+\varepsilon)g).\]
Since $D$ is effective and $\varphi_g\geqslant 0$, the relation $s^{(m)}\in\Gamma(\varepsilon D)_{\mathbb R}^{\times}$ implies that
\[\varepsilon g+g_{(s^{(m)})}\geqslant\varepsilon\varphi_g\geqslant 0.\]
Therefore we obtain
\[-\ln\norm{s}_g=\inf_{\xi\in X^{\mathrm{an}}}(g+g_{(s)})(\xi)\leqslant\lambda_{\mathbb Q,\mathrm{ess}}((1+\varepsilon)D,(1+\varepsilon)g)=(1+\varepsilon)\lambda_{\mathbb Q,\mathrm{ess}}(D,g),\]
where the last equality comes from \eqref{Equ: linearity of lambda ess}. Taking the limit when $\varepsilon\in\mathbb Q_{>0}$ tends to $0$, we obtain the desired inequality.
\end{proof}
\subsection{$\chi$-volume}
Let $(D,g)$ be a metrised $\mathbb R$-divisor on $X$.
We define the \emph{$\chi$-volume} of $(D,g)$ as
\[\widehat{\mathrm{vol}}_\chi(D,g):=\limsup_{n\rightarrow+\infty}\frac{\widehat{\deg}(H^0(nD),\norm{\ndot}_{ng})}{n^2/2}.\]
This invariant is similar to the $\chi$-volume function in the number field setting introduced in \cite{MR2425137}. Note that, if $\deg(D)<0$, then $H^0(nD)=\{0\}$ for all $n \in \mathbb{Z}_{>0}$: indeed, if $f \in H^0(nD) \setminus \{ 0 \}$, then $0 \leqslant \deg(nD + (f)) = n\deg(D) < 0$, which is a contradiction. Hence $\widehat{\operatorname{vol}}_{\chi}(D,g)=0$ whenever $\deg(D)<0$.
\begin{prop}\label{Pro:formula:avol:g:g:prime}
Let $D$ be an $\mathbb{R}$-divisor on $X$, and $g$ and $g'$ be Green functions of $D$. If $g \leqslant g'$, then $\widehat{\mathrm{vol}}_{\chi}(D, g) \leqslant \widehat{\mathrm{vol}}_{\chi}(D, g')$.
\end{prop}
\begin{proof}
Note that $\|\ndot\|_{ng} \geqslant \|\ndot\|_{ng'}$ on $H^0(X, nD)$, so that one can see that $\|\ndot\|_{ng, \det} \geqslant \|\ndot\|_{ng', \det}$
on $\det H^0(X, nD)$. Therefore we obtain
\[
\widehat{\deg}(H^0(X, nD), \|\ndot\|_{ng}) \leqslant \widehat{\deg}(H^0(X, nD), \|\ndot\|_{ng'})
\]
for all $n \geqslant 1$. Thus the assertion follows.
\end{proof}
\begin{prop}\label{Pro:volume chi translation}
Let $(D,g)$ be a metrised $\mathbb R$-divisor such that $\deg(D)\geqslant 0$. For any $c\in\mathbb R$, one has
\begin{equation}\label{Equ: volume chi translation}\widehat{\operatorname{vol}}_{\chi}(D,g+c)=2c\deg(D)+\widehat{\operatorname{vol}}_{\chi}(D,g).\end{equation}
\end{prop}
\begin{proof}
For any $n\in\mathbb N$, one has $\norm{\ndot}_{n(g+c)}=\norm{\ndot}_{ng+nc}=\mathrm{e}^{-nc}\norm{\ndot}_{ng}$. Therefore, one has
\[\widehat{\deg}(H^0(nD),\norm{\ndot}_{n(g+c)})=\widehat{\deg}(H^0(nD),\norm{\ndot}_{ng})+nc\dim_k(H^0(nD)).\]
Note that, by Proposition \ref{Pro: asymptotic RR},
\[\dim_k(H^0(nD))=\deg(D)n+o(n),\quad n\rightarrow+\infty.\]
Therefore, one has
\[\frac{\widehat{\deg}(H^0(nD),\norm{\ndot}_{n(g+c)})}{n^2/2}=\frac{\widehat{\deg}(H^0(nD),\norm{\ndot}_{ng})}{n^2/2}+2c\deg(D)+o(1),\quad n\rightarrow+\infty.\]
Taking the superior limit when $n
\rightarrow+\infty$, we obtain \eqref{Equ: volume chi translation}.
\end{proof}
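\begin{rema}
As a simple illustration of Proposition \ref{Pro:volume chi translation}, if one admits the equality $\widehat{\mathrm{vol}}_{\chi}(D,g_D)=0$ for the canonical Green function $g_D$ (this equality is used in the proof of Proposition \ref{prop:formula:avol:g:g:prime} below), then one obtains
\[\widehat{\operatorname{vol}}_{\chi}(D,g_D+c)=2c\deg(D)\]
for any $\mathbb R$-divisor $D$ on $X$ such that $\deg(D)\geqslant 0$ and any $c\in\mathbb R$.
\end{rema}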
\begin{defi}
Let $(D,g)$ be a metrised $\mathbb R$-divisor such that $\deg(D)>0$. We denote by ${\Gamma}(D,g)_{\mathbb R}^\times$ the set of $s\in\Gamma(D)_{\mathbb R}^{\times}$ such that $\norm{s}_g< 1$. Similarly, we denote by ${\Gamma}(D,g)_{\mathbb Q}^{\times}$ the set of $s\in\Gamma(D)_{\mathbb Q}^{\times}$ such that $\norm{s}_g< 1$.
For any $t \in\mathbb R$ such that $t<\lambda_{\operatorname{ess}}(D,g)$, we let $D_{g,t}$ be the $\mathbb R$-divisor
\[\sup_{s\in {\Gamma}(D,\,g-t)_{\mathbb Q}^{\times}}(s^{-1}).\]
For sufficiently negative number $t$ such that $\norm{s}_g<\mathrm{e}^{-t}$ for any $s\in\Gamma(D)_{\mathbb R}^{\times}$, one has \[{\Gamma}(D,g-t)_{\mathbb Q}^{\times}={\Gamma}(D)_{\mathbb Q}^{\times}\] and hence, by Proposition \ref{Pro: sup of s in Gamma D}, $D_{g,t}=D$. If $t\geqslant\lambda_{\mathrm{ess}}(D,g)$, by convention we let $D_{g,t}$ be the zero $\mathbb R$-divisor.
\end{defi}
\begin{prop}\phantomsection\label{Pro: lim of Vt is Dgt}
Let $(D,g)$ be a metrised $\mathbb R$-divisor such that $\deg(D)>0$, and $t\in\mathbb R$ such that $t<\lambda_{\mathrm{ess}}(D,g)$. Let \[V_\sbullet^t(D,g):=\bigoplus_{n\in\mathbb N}\{s\in H^0(nD)\,:\,\norm{s}_{ng}< \mathrm{e}^{-tn}\}T^n\subseteq \operatorname{Rat}(X)[T].\]
Then one has
\begin{equation}\label{Equ: deg Dgt}\lim_{n\rightarrow+\infty}\frac{\dim_k(V_n^t(D,g))}{n}=\deg(D_{g,t})>0.\end{equation}
\end{prop}
\begin{proof}By Proposition \ref{Pro: convergence of dim}, it suffices to show that the graded linear series $V^t_{\sbullet}(D,g)$ is birational (see Definition \ref{Def: generic point of graded linear series}). As $\deg(D)>0$, there exists $m\in\mathbb N_{\geqslant 1}$ such that $k(H^0(mD))=\mathrm{Rat}(X)$ (see Example \ref{Exe: birational graded linear series} and Proposition \ref{Pro: corps engendre par un systeme lineare}). Note that the norm $\norm{\ndot}_{mg}$ is a bounded function on $H^0(mD)$. In fact, if $(s_i)_{i=1}^{r_m}$ is a basis of $H^0(mD)$, as the norm $\norm{\ndot}_{mg}$ is ultrametric, for any $(\lambda_i)_{i=1}^{r_m}\in k^{r_m}$, one has
\[\norm{\lambda_1s_1+\cdots+\lambda_{r_m}s_{r_m}}_{mg}\leqslant\max_{i\in\{1,\ldots,r_m\}}\norm{s_i}_{mg}.\]
We choose $\varepsilon>0$ such that $t+\varepsilon<\lambda_{\mathrm{ess}}(D,g)$. By \eqref{Equ: interpretation of lambda ess} we obtain that there exist $n\in\mathbb N_{\geqslant 1}$ and $s\in H^0(nD)$ such that $\|s\|_{ng}\leqslant \mathrm{e}^{-n(t+\varepsilon)}$. Let $d$ be a positive integer such that
\[d>\frac{1}{n\varepsilon}\Big(tm+\max_{i\in\{1,\ldots,r_m\}}\ln\norm{s_i}_{mg}\Big).\]
Then, for any $s'\in H^0(mD)$, one has \[\norm{s^ds'}_{(dn+m)g}<\mathrm{e}^{-(dn+m)t},\]
which means that $s^ds'\in V_{dn+m}^t(D,g)$. Therefore we obtain $k(V_{dn+m}^t(D,g))=\mathrm{Rat}(X)$ since it contains $k(H^0(mD))$. The graded linear series $V_\sbullet^t(D,g)$ is thus birational and \eqref{Equ: deg Dgt} is proved.
\end{proof}
\begin{theo}\label{Thm: probabilistic interpretation}
Let $(D,g)$ be a metrised $\mathbb R$-divisor such that $\deg(D)>0$. Let $\mathbb P_{(D,g)}$ be the Borel probability measure on $\mathbb R$ such that
\begin{equation}\label{Equ: proba measure of a metrised divisor}\mathbb P_{(D,g)}(\mathopen{]}t,+\infty\mathclose{[})=\frac{\deg(D_{g,t})}{\deg(D)}\end{equation}
for $t<\lambda_{\mathrm{ess}}(D,g)$ and $\mathbb P_{(D,g)}(\mathopen{]}t,+\infty\mathclose{[})=0$ for $t\geqslant \lambda_{\mathrm{ess}}(D,g)$.
Then one has
\begin{equation}\label{Equ: vol chi as an integral}\frac{\widehat{\operatorname{vol}}_\chi(D,g)}{2\deg(D)}=\int_{\mathbb R}t\,\mathbb P_{(D,g)}(\mathrm{d}t).\end{equation}
\end{theo}
\begin{proof}
For any $n\in\mathbb N$, let $\mathbb P_n$ be the Borel probability measure on $\mathbb R$ such that
\[\mathbb P_n(\mathopen{]}t,+\infty\mathclose{[})=\frac{\dim_k(V_n^t(D,g))}{\dim_k(H^0(nD))}\]
for $t<\lambda_{\mathrm{ess}}(D,g)$ and $\mathbb P_n(\mathopen{]}t,+\infty\mathclose{[})=0$ for $t\geqslant \lambda_{\mathrm{ess}}(D,g)$. By Propositions \ref{Pro: lim of Vt is Dgt} and \ref{Pro: asymptotic RR}, one has
\[\forall\,t\in\mathbb R,\quad \lim_{n\rightarrow +\infty}\mathbb P_n(\mathopen{]}t,+\infty\mathclose{[})=\mathbb P_{(D,g)}(\mathopen{]}t,+\infty\mathclose{[}).\]
Therefore the sequence of probability measures $(\mathbb P_n)_{n\in\mathbb N}$ converges weakly to $\mathbb P_{(D,g)}$. Moreover, if we write $g$ as $g_D+f$, where $f$ is a continuous function on $X^{\mathrm{an}}$, then the supports of the probability measures $\mathbb P_n$ are contained in $[\inf f,g(\eta_0)]$. Therefore one has
\[\lim_{n\rightarrow+\infty}\int_{\mathbb R}t\,\mathbb P_n(\mathrm{d}t)=\int_{\mathbb R}t\,\mathbb P_{(D,g)}(\mathrm{d}t).\]
By \eqref{Equ: slope as expectation}, for any $n\in\mathbb N_{\geqslant 1}$ such that $H^0(nD)\neq\{0\}$, one has
\[\int_{\mathbb R}t\,\mathbb P_n(\mathrm{d}t)=\frac{\widehat{\deg}(H^0(nD),\norm{\ndot}_{ng})}{\dim_k(H^0(nD))}.\]
Therefore we obtain \eqref{Equ: vol chi as an integral}.
\end{proof}
\begin{rema}
Theorem \ref{Thm: probabilistic interpretation} and Proposition \ref{Pro: asymptotic RR} show that the sequence defining the $\chi$-volume function has a limit. More precisely, if $(D,g)$ is a metrised $\mathbb R$-divisor such that $\deg(D)>0$, then one has
\[\widehat{\mathrm{vol}}_{\chi}(D,g)=\lim_{n\rightarrow+\infty}\frac{\widehat{\deg}(H^0(nD),\norm{\ndot}_{ng})}{n^2/2}.\]
\end{rema}
\begin{defi}\label{Def: concave transform}
Let $(D,g)$ be a metrised $\mathbb R$-divisor on $X$ such that $\deg(D)>0$. We denote by $G_{(D,g)}:[0,\deg(D)]\rightarrow\mathbb R$ the function sending $u\in[0,\deg(D)]$ to \[\sup\{t\in\mathbb R_{<g(\eta_0)}\,:\,\deg(D_{g,t})>u\}.\] For any $\lambda\in\mathopen{[}0,\deg(D)\mathclose{[}$ one has
\[\mathbb P_{(D,g)}(\mathopen{]}G_{(D,g)}(\lambda),+\infty\mathclose{[})=\frac{\deg(D_{g,G_{(D,g)}(\lambda)})}{\deg(D)},\]
namely, the probability measure $\mathbb P_{(D,g)}$ coincides with the direct image of the uniform distribution on $[0,\deg(D)]$ by the map $G_{(D,g)}$.
\end{defi}
\begin{prop}\phantomsection\label{Pro: Dgt as R}
Let $(D,g)$ be a metrised $\mathbb R$-divisor such that $\deg(D)>0$. For any $t\in\mathbb R$ such that $t<\lambda_{\mathrm{ess}}(D,g)$, one has
\begin{equation}D_{g,t}=\sup_{s\in{\Gamma}(D,\,g-t)_{\mathbb R}^{\times}}(s^{-1}).
\end{equation}
\end{prop}
\begin{proof}Since $\deg(D)>0$, the set $\Gamma(D)_{\mathbb Q}^{\times}$ is not empty.
Let $\phi\in\Gamma(D)^{\times}_{\mathbb Q}$ and $(D',g')=(D,g)+\widehat{(\phi)}$. By \eqref{Equ: invariance under linear equivalence}, one has $\lambda_{\mathrm{ess}}(D,g)=\lambda_{\mathrm{ess}}(D',g')$. Moreover, the correspondence $s\mapsto s\cdot\phi^{-1}$ defines a bijection from $\Gamma(D,g-t)_{\mathbb K}^\times$ to $\Gamma(D',g'-t)_{\mathbb K}^{\times}$ for $\mathbb K=\mathbb Q$ or $\mathbb R$. Therefore, without loss of generality, we may assume that $D$ is effective. Moreover, by replacing $g$ by $g-t$ and $t$ by $0$ we may assume that $\lambda_{\mathrm{ess}}(D,g)>0$ and $t=0$.
It suffices to check that
$D_{g,0}\geqslant (s^{-1})$ for any $s\in{\Gamma}(D,g)_{\mathbb R}^{\times}$. We write $s$ as $s_1^{a_1}\cdots s_r^{a_r}$, where $s_1,\ldots,s_r$ are elements of $\operatorname{Rat}(X)_{\mathbb Q}^{\times}$, and $a_1,\ldots,a_r$ are positive real numbers which are linearly independent over $\mathbb Q$. Assume that $\operatorname{Supp}((s))=\{x_1,\ldots,x_n\}$. By Lemma \ref{Lem: linear independence}, for any $i\in\{1,\ldots,r\}$, the support of $(s_i)$ is contained in $\{x_1,\ldots,x_n\}$. For any $j\in\{1,\ldots,n\}$, one has
\[\operatorname{ord}_{x_j}(D)+\sum_{i=1}^r\operatorname{ord}_{x_j}(s_i)a_i\geqslant 0.\]
By Lemma \ref{Lem: approximation by rational solutions} and Remark \ref{Rem: suite d'approximation}, there exists a sequence of vectors \[\boldsymbol{a}^{(m)}=(a_1^{(m)},\ldots,a_r^{(m)}),\quad m\in\mathbb N\]
in $\mathbb Q^r$ such that
\begin{equation}\label{Equ: sm is ok}\operatorname{ord}_{x_j}(D)+\sum_{i=1}^r\operatorname{ord}_{x_j}(s_i)a_i^{(m)}\geqslant 0\end{equation}
and \begin{equation}\label{Equ: limit of sm}\lim_{m\rightarrow+\infty}\boldsymbol{a}^{(m)}=(a_1,\ldots,a_r).\end{equation} For any $m\in\mathbb N$, let \[s^{(m)}=s_1^{a_1^{(m)}}\cdots s_r^{a_r^{(m)}}.\]
By \eqref{Equ: sm is ok} one has $s^{(m)}\in\Gamma(D)_{\mathbb Q}^\times$. Moreover, by \eqref{Equ: limit of sm} and the fact that $\norm{s}_g<1$, for sufficiently positive $m$, one has $\norm{s^{(m)}}_g<1$ and hence $D_{g,0}\geqslant ((s^{(m)})^{-1})$. By taking the limit when $m\rightarrow+\infty$, we obtain $D_{g,0}\geqslant(s^{-1})$.
\end{proof}
\begin{coro}\label{Cor: linearility of vol chi}
Let $(D,g)$ be a metrised $\mathbb R$-divisor such that $\deg(D)>0$. For any $a>0$ one has
\[\widehat{\operatorname{vol}}_\chi(a D,a g)=a^2\,\widehat{\operatorname{vol}}_\chi(D,g).\]
\end{coro}
\begin{proof}
By Proposition \ref{Pro: Dgt as R} one has
\[(a D)_{a g,a t}=a D_{g,t}\]
for any $t<\lambda_{\mathrm{ess}}(D,g)$; since $\lambda_{\mathrm{ess}}(aD,ag)=a\lambda_{\mathrm{ess}}(D,g)$, this equality also holds (with both sides equal to the zero $\mathbb R$-divisor) when $t\geqslant\lambda_{\mathrm{ess}}(D,g)$. Let $M>0$ be such that $\deg(D_{g,t})=\deg(D)$ for any $t\leqslant -M$ (such an $M$ exists because the support of $\mathbb P_{(D,g)}$ is bounded, see the proof of Theorem \ref{Thm: probabilistic interpretation}). Since $\mathbb P_{(D,g)}$ is supported in $\mathopen{[}-M,+\infty\mathclose{[}$, the formula \eqref{Equ: vol chi as an integral} can be rewritten as
\[\widehat{\operatorname{vol}}_\chi(D,g)=2\int_{-M}^{+\infty}\deg(D_{g,t})\,\mathrm{d}t-2M\deg(D),\]
and similarly for $(aD,ag)$, with $M$ replaced by $aM$. Therefore
\[\begin{split}\widehat{\operatorname{vol}}_\chi(a D,a g)&=2\int_{-aM}^{+\infty}\deg((a D)_{a g,t})\,\mathrm{d}t-2aM\deg(aD)\\
&=2a\int_{-M}^{+\infty}\deg((a D)_{a g,a t})\,\mathrm{d}t-2a^2M\deg(D)\\
&=2a^2\int_{-M}^{+\infty}\deg(D_{g,t})\,\mathrm{d}t-2a^2M\deg(D)=a^2\,\widehat{\operatorname{vol}}_\chi(D,g).\end{split}\]
\end{proof}
\begin{theo}\label{thm:super:additive:vol:chi:deg}
Let $(D_1,g_1)$ and $(D_2,g_2)$ be metrised $\mathbb R$-divisors such that $\deg(D_1)>0$ and $\deg(D_2)>0$. One has
\[\frac{\widehat{\operatorname{vol}}_\chi(D_1+D_2,g_1+g_2)}{\deg(D_1)+\deg(D_2)}\geqslant\frac{\widehat{\operatorname{vol}}_\chi(D_1,g_1)}{\deg(D_1)}+\frac{\widehat{\operatorname{vol}}_\chi(D_2,g_2)}{\deg(D_2)}\]
\end{theo}
\begin{proof}
Let $t_1$ and $t_2$ be real numbers such that $t_1<\lambda_{\mathrm{ess}}(D_1,g_1)$ and $t_2<\lambda_{\mathrm{ess}}(D_2,g_2)$. For all $s_1\in{\Gamma}(D_1,g_1-t_1)_{\mathbb R}^{\times}$ and $s_2\in{\Gamma}(D_2,g_2-t_2)_{\mathbb R}^{\times}$ one has \[s_1s_2\in {\Gamma}(D_1+D_2,g_1+g_2-t_1-t_2)_{\mathbb R}^{\times}.\] Therefore, by Proposition \ref{Pro: Dgt as R} one has \begin{equation}\label{Equ: superadditives}(D_1+D_2)_{g_1+g_2,t_1+t_2}\geqslant (D_1)_{g_1,t_1}+(D_2)_{g_2,t_2}.\end{equation}
As a consequence, for any $(\lambda_1,\lambda_2)\in[0,\deg(D_1)]\times[0,\deg(D_2)]$, one has \begin{equation}\label{Equ: super additivity of G}G_{(D_1+D_2,g_1+g_2)}(\lambda_1+\lambda_2)\geqslant G_{(D_1,g_1)}(\lambda_1)+G_{(D_2,g_2)}(\lambda_2).\end{equation} Let $U$ be a random variable which follows the uniform distribution on $[0,\deg(D_1)]$. Let $f:[0,\deg(D_1)]\rightarrow[0,\deg(D_2)]$ be the linear map sending $u$ to $u\deg(D_2)/\deg(D_1)$. By Theorem \ref{Thm: probabilistic interpretation} one has
\[\frac{\widehat{\operatorname{vol}}_\chi(D_1+D_2,g_1+g_2)}{2(\deg(D_1)+\deg(D_2))}=\mathbb E[G_{(D_1+D_2,g_1+g_2)}(U+f(U))]\]
since $U+f(U)$ follows the uniform distribution on $[0,\deg(D_1)+\deg(D_2)]$. By \eqref{Equ: super additivity of G} we obtain
\[\begin{split}\frac{\widehat{\operatorname{vol}}_\chi(D_1+D_2,g_1+g_2)}{2(\deg(D_1)+\deg(D_2))}&\geqslant\mathbb E[G_{(D_1,g_1)}(U)]+\mathbb E[G_{(D_2,g_2)}(f(U))]\\
&\geqslant\frac{\widehat{\operatorname{vol}}_\chi(D_1,g_1)}{2\deg(D_1)}+\frac{\widehat{\operatorname{vol}}_\chi(D_2,g_2)}{2\deg(D_2)}.
\end{split}\]
The theorem is thus proved.
\end{proof}
Finally let us consider other properties of $\widehat{\mathrm{vol}}_{\chi}(\ndot)$.
\begin{prop}\label{prop:formula:avol:g:g:prime}
Let $D$ be an $\mathbb{R}$-divisor on $X$ such that $\deg(D)\geqslant 0$, and $g$ and $g'$ be Green functions of $D$. Then one has the following:
\begin{enumerate}[label=\rm(\arabic*)]
\item $2 \deg(D) \min\limits_{\xi \in X^{\mathrm{an}}} \{ \varphi_g(\xi) \} \leqslant \widehat{\mathrm{vol}}_{\chi}(D, g) \leqslant 2 \deg(D) \max\limits_{\xi \in X^{\mathrm{an}}} \{ \varphi_g(\xi) \}$.
\item $| \widehat{\mathrm{vol}}_{\chi}(D, g) - \widehat{\mathrm{vol}}_{\chi}(D, g')| \leqslant 2 \| \varphi_g - \varphi_{g'} \|_{\sup} \deg(D)$.
\item If $\deg(D) = 0$, then $\widehat{\mathrm{vol}}_{\chi}(D, g) =0$.
\end{enumerate}
\end{prop}
\begin{proof}
(1) If we set $m = \min\limits_{\xi \in X^{\mathrm{an}}} \{ \varphi_g(\xi) \}$ and $M = \max\limits_{\xi \in X^{\mathrm{an}}} \{ \varphi_g(\xi) \}$,
then
\[
g_D + m \leqslant g \leqslant g_D + M.
\] Note that $\widehat{\mathrm{vol}}_{\chi}(D, g_D) = 0$, so that the assertion follows from
Propositions \ref{Pro:formula:avol:g:g:prime} and \ref{Pro:volume chi translation}.
\medskip
(2) If we set $c = \| \varphi_g - \varphi_{g'} \|_{\sup}$, then $g - c \leqslant g' \leqslant g + c$, so that
(2) follows from Propositions \ref{Pro:formula:avol:g:g:prime} and \ref{Pro:volume chi translation}.
\medskip
(3) is a consequence of (1).
\end{proof}
\begin{prop}\label{Pro: continuity}
Let $V$ be a finite-dimensional vector subspace of $\widehat{\operatorname{Div}}_{\mathbb R}(X)$.
Then $\widehat{\mathrm{vol}}_{\chi}(\ndot)$ is continuous on $V$.
\end{prop}
\begin{proof}
We denote by $V_+$ the subset of $V$ consisting of the pairs $(D,g)\in V$ such that $\deg(D)>0$.
The function $V_+\rightarrow \mathbb R$ given by
$(D,g) \mapsto \widehat{\operatorname{vol}}_{\chi}(D,g)/\deg(D)$
is concave by Corollary~\ref{Cor: linearility of vol chi} and Theorem~\ref{thm:super:additive:vol:chi:deg}, and hence it
is continuous on $V_+$.
We fix $(D, g) \in V$. If $\deg(D)<0$, then there exists a neighbourhood $U$ of $(D,g)$ in $V$ such that $\deg(D')<0$ for any $(D',g')\in U$. Hence $\widehat{\operatorname{vol}}_{\chi}(\ndot)$ vanishes on $U$. If $\deg(D) > 0$, then the above observation shows the continuity at $(D,g)$,
so that we may assume that
$\operatorname{deg}(D) = 0$. Then, by Proposition \ref{prop:formula:avol:g:g:prime}\,(3),
$\widehat{\mathrm{vol}}_{\chi}(D, g) = 0$.
Therefore it is sufficient to show that
\[
\lim\limits_{(\varepsilon_{1,n}, \ldots, \varepsilon_{r,n})\to (0,\ldots,0)}\widehat{\mathrm{vol}}_{\chi}(\varepsilon_{1,n}(D_1, g_1) + \cdots + \varepsilon_{r,n}(D_r, g_r) + (D, g)) = 0,
\]
where $(D_1, g_1), \ldots, (D_r, g_r) \in V$.
By using Proposition \ref{prop:formula:avol:g:g:prime}\,(1),
\begin{multline*}
| \widehat{\mathrm{vol}}_{\chi}(\varepsilon_{1,n}(D_1, g_1) + \cdots + \varepsilon_{r,n}(D_r, g_r) + (D, g))| \\
\leqslant 2 \| \varepsilon_{1,n}\varphi_{g_1} + \cdots + \varepsilon_{r,n}\varphi_{g_r} + \varphi_g \|_{\sup} \deg(\varepsilon_{1,n}D_1 + \cdots + \varepsilon_{r,n}D_r + D).
\end{multline*}
On the other hand, note that
\[
\begin{cases}
\lim\limits_{(\varepsilon_{1,n}, \ldots, \varepsilon_{r,n})\to (0,\ldots,0)} \| \varepsilon_{1,n}\varphi_{g_1} + \cdots + \varepsilon_{r,n}\varphi_{g_r} + \varphi_g \|_{\sup} = \| \varphi_g \|_{\sup},\\
\lim\limits_{(\varepsilon_{1,n}, \ldots, \varepsilon_{r,n})\to (0,\ldots,0)} \deg(\varepsilon_{1,n}D_1 + \cdots + \varepsilon_{r,n}D_r + D) = \deg(D) = 0.
\end{cases}
\]
Thus the assertion follows.
\end{proof}
\subsection{Volume function}
Let $(D,g)$ be a metrised $\mathbb R$-divisor on $X$. We define the \emph{volume} of $(D,g)$ as
\[\widehat{\mathrm{vol}}(D,g):=\limsup_{n\rightarrow+\infty}\frac{\widehat{\deg}_+(H^0(nD),\norm{\ndot}_{ng})}{n^2/2}.\]
Note that this function is analogous to the arithmetic volume function introduced in \cite{MR2496453}.
\begin{prop}\phantomsection\label{Pro: vol hat dg}
Let $(D,g)$ be a metrised $\mathbb R$-divisor such that $\deg(D)>0$. Let $\mathbb P_{(D,g)}$ be the Borel probability
measure on $\mathbb R$ defined in Theorem \ref{Thm: probabilistic interpretation}. Then one has
\begin{gather}\frac{\widehat{\mathrm{vol}}(D,g)}{2\deg(D)}=\int_{\mathbb R}\max\{t,0\}\,\mathbb P_{(D,g)}(\mathrm{d}t),\\
\widehat{\mathrm{vol}}(D,g)=2\int_0^{+\infty}\deg(D_{g,t})\,\mathrm{d}t.\end{gather}
\end{prop}
\begin{proof}
We keep the notation introduced in the proof of Theorem \ref{Thm: probabilistic interpretation}. By \eqref{Equ: positive slope as expectation}, for any $n\in\mathbb N_{\geqslant 1}$ one has
\[\frac{\widehat{\deg}_+(H^0(nD),\norm{\ndot}_{ng})}{\dim_k(H^0(nD))}=\int_{\mathbb R}\max\{t,0\}\,\mathbb P_n(\mathrm{d}t).\]
By passing to the limit when $n\rightarrow+\infty$, we obtain the first equality. The second equality comes from the first one and \eqref{Equ: proba measure of a metrised divisor} by integration by parts.
\end{proof}
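\begin{rema}
Since $\max\{t,0\}\geqslant t$ for any $t\in\mathbb R$, comparing Proposition \ref{Pro: vol hat dg} with Theorem \ref{Thm: probabilistic interpretation} yields
\[\widehat{\mathrm{vol}}(D,g)\geqslant\widehat{\mathrm{vol}}_{\chi}(D,g)\]
for any metrised $\mathbb R$-divisor $(D,g)$ such that $\deg(D)>0$.
\end{rema}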
\section{Positivity}
The purpose of this section is to discuss several positivity conditions of metrised $\mathbb R$-divisors. We fix in this section a field $k$ equipped with the trivial absolute value $|\ndot|$ and a regular integral projective curve $X$ over $\Spec k$.
\subsection{Bigness and pseudo-effectivity}
Let $(D,g)$ be a metrised $\mathbb R$-divisor on $X$. If $\widehat{\mathrm{vol}}(D,g)>0$, we say that $(D,g)$ is \emph{big}; if for any big metrised $\mathbb R$-divisor $(D_0,g_0)$ on $X$, the metrised $\mathbb R$-divisor $(D+D_0,g+g_0)$ is big, we say that $(D,g)$ is \emph{pseudo-effective}.
\begin{rema}
Let $(D,g)$ be a metrised $\mathbb R$-divisor. Let $n\in\mathbb N$, $n\geqslant 1$. If $H^0(nD)\neq\{0\}$, then $\Gamma(D)_{\mathbb Q}^{\times}$ is not empty. Moreover, for any non-zero element $s\in H^0(nD)$, one has
\[-\ln\norm{s}_{ng}\leqslant n\lambda_{\mathrm{ess}}(D,g)\]
by \eqref{Equ: interpretation of lambda ess}, \eqref{Equ: lambda r ess bounded} and Proposition \ref{Pro: essential minimu bounded}. In particular, one has
\[\widehat{\deg}_+(H^0(nD),\norm{\ndot}_{ng})\leqslant n\max\{\lambda_{\mathrm{ess}}(D, g),0\}\dim_k(H^0(nD)).\]
Therefore, if $\widehat{\mathrm{vol}}(D,g)>0$, then one has $\deg(D)>0$ and $\lambda_{\mathrm{ess}}(D,g)>0$. Moreover, in the case where $(D,g)$ is big, one has
\begin{equation}\frac{\widehat{\mathrm{vol}}(D,g)}{2\deg(D)}\leqslant \lambda_{\mathrm{ess}}(D,g).\end{equation}
\end{rema}
\begin{prop}\phantomsection\label{Pro: criterion big}
Let $(D,g)$ be a metrised $\mathbb R$-divisor on $X$. The following assertions are equivalent.
\begin{enumerate}[label=\rm(\arabic*)]
\item $(D,g)$ is big.
\item $\deg(D)>0$ and $\lambda_{\mathrm{ess}}(D,g)>0$
\item $\deg(D)>0$ and there exists $s\in\Gamma(D)_{\mathbb R}^{\times}$ such that $\norm{s}_g<1$.
\item $\deg(D)>0$ and there exists $s\in\Gamma(D)_{\mathbb Q}^{\times}$ such that $\norm{s}_g<1$.
\end{enumerate}
\end{prop}
\begin{proof}
``(1) $\Leftrightarrow$ (2)'' We have seen in the above Remark that, if $(D,g)$ is big, then $\deg(D)>0$ and $\lambda_{\mathrm{ess}}(D,g)>0$. The converse comes from the equality
\[\operatorname{\widehat{\mathrm{vol}}}(D,g)=2\int_0^{+\infty}\deg(D_{g,t})\,\mathrm{d}t\]
proved in Proposition \ref{Pro: vol hat dg}. Note that the function $t\mapsto\deg(D_{g,t})$ is decreasing. Moreover, by Proposition \ref{Pro: lim of Vt is Dgt}, one has $\deg(D_{g,t})>0$ once $t<\lambda_{\mathrm{ess}}(D,g)$. Therefore, if $\lambda_{\mathrm{ess}}(D,g)>0$, then $\widehat{\mathrm{vol}}(D,g)>0$.
``(2)$\Leftrightarrow$(3)'' comes from the definition of $\lambda_{\mathrm{ess}}(D,g)$.
``(2)$\Leftrightarrow$(4)'' comes from Proposition \ref{Pro:lambda ess sur Q}.
\end{proof}
\begin{coro}
\begin{enumerate}[label=\rm(\arabic*)]
\item If $(D,g)$ is a big metrised $\mathbb R$-divisor on $X$, then, for any positive real number $\varepsilon$, the metrised $\mathbb R$-divisor $\varepsilon(D,g)=(\varepsilon D,\varepsilon g)$ is big.
\item If $(D_1,g_1)$ and $(D_2,g_2)$ are two metrised $\mathbb R$-divisors on $X$ which are big, then $(D_1+D_2,g_1+g_2)$ is also big.
\end{enumerate}
\end{coro}
\begin{proof}
The first assertion follows from Proposition \ref{Pro: criterion big} and the equalities $\deg(\varepsilon D)=\varepsilon\deg(D)$ and $\lambda_{\mathrm{ess}}(\varepsilon (D,g))=\varepsilon\lambda_{\mathrm{ess}}(D,g)$.
We then prove the second assertion. Since $(D_1,g_1)$ and $(D_2,g_2)$ are big, one has $\deg(D_1)>0$, $\deg(D_2)>0$, $\lambda_{\mathrm{ess}}(D_1,g_1)>0$, $\lambda_{\mathrm{ess}}(D_2,g_2)>0$. Therefore, $\deg(D_1+D_2)=\deg(D_1)+\deg(D_2)>0$. Moreover, by \eqref{Equ: lambda ess superadditive} one has
\[\lambda_{\mathrm{ess}}(D_1+D_2,g_1+g_2)\geqslant\lambda_{\mathrm{ess}}(D_1,g_1)+\lambda_{\mathrm{ess}}(D_2,g_2)>0.\]
Therefore $(D_1+D_2,g_1+g_2)$ is big.
\end{proof}
\begin{coro}\label{Cor: pseudoeffective implies lambda ess positive}
Let $(D,g)$ be a metrised $\mathbb R$-divisor on $X$ such that $\deg(D)>0$. Then $(D,g)$ is pseudo-effective if and only if $\lambda_{\mathrm{ess}}(D,g)\geqslant 0$.
\end{coro}
\begin{proof}
Suppose that $(D,g)$ is pseudo-effective. Since $\deg(D)>0$, by \eqref{Equ: add a constant2} there exists $c>0$ such that $\lambda_{\mathrm{ess}}(D,g+c)>0$ (and thus $(D,g+c)$ is big by Proposition \ref{Pro: criterion big}). Hence for any $\varepsilon\in\mathopen{]}0,1\mathclose{[}$,
\[(1-\varepsilon)(D,g)+\varepsilon(D,g+c)=(1-\varepsilon)\Big((D,g)+\frac{\varepsilon}{1-\varepsilon}(D,g+c)\Big)\]
is big. Therefore,
\[\lambda_{\mathrm{ess}}\big((1-\varepsilon)(D,g)+\varepsilon(D,g+c)\big)=\lambda_{\mathrm{ess}}(D,g+\varepsilon c)=\lambda_{\mathrm{ess}}(D,g)+\varepsilon c>0.\]
Since $\varepsilon\in\mathopen{]}0,1\mathclose{[}$ is arbitrary, we obtain $\lambda_{\mathrm{ess}}(D,g)\geqslant 0$.
In the following, we assume that $\lambda_{\mathrm{ess}}(D,g)\geqslant 0$ and we prove that $(D,g)$ is pseudo-effective. For any big metrised $\mathbb R$-divisor $(D_1,g_1)$ one has
\[\deg(D+D_1)=\deg(D)+\deg(D_1)>0\]
and, by \eqref{Equ: lambda ess superadditive},
\[\lambda_{\mathrm{ess}}(D+D_1,g+g_1)\geqslant\lambda_{\mathrm{ess}}(D,g)+\lambda_{\mathrm{ess}}(D_1,g_1)>0.\]
Therefore $(D+D_1,g+g_1)$ is big.
\end{proof}
\begin{prop}\phantomsection\label{Pro: pseudo effective, mu inf positive}
Let $(D,g)$ be a metrised $\mathbb R$-divisor on $X$ which is pseudo-effective. Then one has $\deg(D)\geqslant 0$ and $g(\eta_0)\geqslant 0$.
\end{prop}
\begin{proof}
Let $(D_1,g_1)$ be a big metrised $\mathbb R$-divisor. For any $\varepsilon>0$, the metrised $\mathbb R$-divisor $(D+\varepsilon D_1,g+\varepsilon g_1)$ is big. Therefore, by Proposition \ref{Pro: criterion big}, one has
\[\deg(D+\varepsilon D_1)=\deg(D)+\varepsilon\deg(D_1)>0.\]
Moreover, by Proposition \ref{Pro: criterion big}, the inequality \eqref{Equ: lambda r ess bounded} and Proposition \ref{Pro: essential minimu bounded}, one has
\[g(\eta_0)+\varepsilon g_1(\eta_0)\geqslant\lambda_{\mathrm{ess}}(D+\varepsilon D_1,g+\varepsilon g_1)>0.\]
Since $\varepsilon>0$ is arbitrary, we obtain $\deg(D)\geqslant 0$ and $g(\eta_0)\geqslant 0$.
\end{proof}
\subsection{Criteria of effectivity up to $\mathbb R$-linear equivalence}
Let $(D,g)$ be a metrised $\mathbb R$-divisor on $X$. We say that $(D,g)$ is \emph{effective} if $D$ is effective and $g$ is a non-negative function. We say that two metrised $\mathbb R$-divisors $(D_1,g_1)$ and $(D_2,g_2)$ are \emph{$\mathbb R$-linearly equivalent} if there exists an element $\varphi\in\mathrm{Rat}(X)_{\mathbb R}^{\times}$ such that \[(D_2,g_2)=(D_1,g_1)+\widehat{(\varphi)}.\]
By Proposition \ref{Pro: criterion big}, if $(D,g)$ is big, then it is $\mathbb R$-linearly equivalent to an effective metrised $\mathbb R$-divisor.
\begin{defi}\label{def:mu:inf}
Let $(D,g)$ be a metrised $\mathbb R$-divisor on $X$. We denote by $\mu_{\inf}(g)$ the value
\[\sum_{x\in X^{(1)}}\mu_{\inf,x}(g)[k(x):k]\in\mathbb R\cup\{-\infty\},\]
where by definition (see \S\ref{Sec: infimum slopes})
\[\mu_{\inf,x}(g)=\inf_{\xi\in\mathopen{]}\eta_0,x_0\mathclose[}\frac{g(\xi)}{t(\xi)}.\]
Note that \[\mu_{\inf,x}(g)\leqslant\lim_{\xi\rightarrow x_0}\frac{g(\xi)}{t(\xi)}=\operatorname{ord}_x(D).\] Therefore,
\begin{equation}\label{Equ: mu inf}\mu_{\inf}(g)\leqslant\sum_{x\in X^{(1)}}\operatorname{ord}_x(D)[k(x):k]=\deg(D).\end{equation}
Moreover, if $D_1$ is an $\mathbb R$-divisor and $g_{D_1}$ is the canonical Green function associated with $D_1$, then one has
\begin{equation}\label{Equ: mu inf x translated by canonical green}
\forall\,x\in X^{(1)},\quad \mu_{\inf,x}(g+g_{D_1})=\mu_{\inf,x}(g)+\operatorname{ord}_x(D_1)
\end{equation}
and hence
\begin{equation}\label{Equ: mu inf translated by canonical green}\mu_{\inf}(g+g_{D_1})=\mu_{\inf}(g)+\deg(D_1).\end{equation}
\end{defi}
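\begin{rema}\label{Rem: canonical Green function plus constant}
As a simple illustration of this definition (which will not be needed in the sequel), consider the Green function $g_D+c$, where $g_D$ is the canonical Green function associated with $D$ and $c\in\mathbb R$ is a constant, the latter being viewed as a Green function of the zero $\mathbb R$-divisor. Since $\mu_{\inf,x}(c)=\inf_{t>0}c/t$ equals $0$ if $c\geqslant 0$ and $-\infty$ if $c<0$, the relation \eqref{Equ: mu inf x translated by canonical green} gives
\[\mu_{\inf,x}(g_D+c)=\begin{cases}
\operatorname{ord}_x(D),&\text{if $c\geqslant 0$},\\
-\infty,&\text{if $c<0$},
\end{cases}
\qquad
\mu_{\inf}(g_D+c)=\begin{cases}
\deg(D),&\text{if $c\geqslant 0$},\\
-\infty,&\text{if $c<0$}.
\end{cases}\]
\end{rema}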
The invariant $\mu_{\inf}(\ndot)$ is closely related to the effectivity of a metrised $\mathbb R$-divisor.
\begin{prop}\label{Pro: effefctive implies mu inf}
Let $(D,g)$ be a metrised $\mathbb R$-divisor. Assume that there exists an element $\phi\in\Gamma(D)_{\mathbb R}^{\times}$ such that $g+g_{(\phi)}\geqslant 0$. Then for all but a finite number of $x\in X^{(1)}$ one has $\mu_{\inf,x}(g)= 0$. Moreover, $\mu_{\inf}(g)\geqslant 0$.
\end{prop}
\begin{proof}
By \eqref{Equ: mu inf x translated by canonical green}, for any $x\in X^{(1)}$ one has
\[\mu_{\inf,x}(g+g_{(\phi)})=\mu_{\inf,x}(g)+\operatorname{ord}_x(\phi).\]
Therefore, for all but a finite number of $x\in X^{(1)}$, one has
\[\mu_{\inf,x}(g)=\mu_{\inf,x}(g+g_{(\phi)})\geqslant 0.\]
Note that $\mu_{\inf,x}(g)\leqslant\mathrm{ord}_x(D)$ for any $x\in X^{(1)}$, and hence $\mu_{\inf,x}(g)\leqslant 0$ for $x\in X^{(1)}\setminus\operatorname{Supp}(D)$. We then deduce that $\mu_{\inf,x}(g)$ vanishes for all but finitely many $x\in X^{(1)}$.
Moreover, by \eqref{Equ: mu inf translated by canonical green} one has
\[\mu_{\inf}(g)=\mu_{\inf}(g+g_{(\phi)})\geqslant 0.\]
\end{proof}
\begin{prop}\label{Pro: criterion of effecitivty up to R linear equivalence}
Let $(D,g)$ be a metrised $\mathbb R$-divisor on $X$.
\begin{enumerate}[label=\rm(\arabic*)]
\item\label{Item: r linearly equivalent} $(D,g)$ is $\mathbb R$-linearly equivalent to an effective metrised $\mathbb R$-divisor if and only if there exists $s\in\Gamma(D)_{\mathbb R}^{\times}$ with $\norm{s}_g\leqslant 1$.
\item\label{Item: effective implies pseudoeffecgive} If $(D,g)$ is $\mathbb R$-linearly equivalent to an effective metrised $\mathbb R$-divisor, then $(D,g)$ is pseudo-effective.
\item\label{Item: criterion for deg 0} If $\mu_{\inf,x}(g)\geqslant 0$ for all but finitely many $x\in X^{(1)}$ and $\mu_{\inf}(g)> 0$, then $(D,g)$ is $\mathbb R$-linearly equivalent to an effective metrised $\mathbb R$-divisor.
\item\label{Item: criterion of effecitvity 2} If $\mu_{\inf,x}(g)\geqslant 0$ for all but finitely many $x\in X^{(1)}$ and $\mu_{\inf}(g)=0$, then $(D,g)$ is $\mathbb R$-linearly equivalent to an effective metrised $\mathbb R$-divisor if and only if the $\mathbb R$-divisor $\sum_{x\in X^{(1)}}\mu_{\inf,x}(g)x$ is principal.
\end{enumerate}
\end{prop}
\begin{proof}
\ref{Item: r linearly equivalent} For any element $s$ of $\Gamma(D)_{\mathbb R}^{\times}$, one has
\[(D,g)+\widehat{(s)}=(D+(s),g_{(s)}+g).\]
By definition, $D+(s)$ is effective. Moreover,
\[-\ln\norm{s}_g=\inf(g_{(s)}+g).\]
Therefore, $\norm{s}_g\leqslant 1$ if and only if $g_{(s)}+g\geqslant 0$.
\ref{Item: effective implies pseudoeffecgive} Since there exists $s\in\Gamma(D)_{\mathbb R}^{\times}$ such that $\norm{s}_g\leqslant 1$, one has $\lambda_{\mathrm{ess}}(D,g)\geqslant 0$ and $\deg(D)\geqslant 0$. Let $(D_1,g_1)$ be a big metrised $\mathbb R$-divisor. By Proposition \ref{Pro: criterion big}, one has $\deg(D_1)>0$ and $\lambda_{\mathrm{ess}}(D_1,g_1)>0$. Therefore,
\[\deg(D+D_1)=\deg(D)+\deg(D_1)>0,\]
and, by Proposition \ref{Pro: superadditivity},
\[\lambda_{\mathrm{ess}}(D+D_1,g+g_1)\geqslant \lambda_{\mathrm{ess}}(D,g)+\lambda_{\mathrm{ess}}(D_1,g_1)>0.\]
Still by Proposition \ref{Pro: criterion big}, we obtain that $(D+D_1,g+g_1)$ is big.
\ref{Item: criterion for deg 0} Let $S$ be a finite subset of $X^{(1)}$ which contains $\mathrm{Supp}(D)$ and all $x\in X^{(1)}$ such that $\mu_{\inf,x}(g)<0$, and which satisfies the inequality
\[\sum_{x\in S}\mu_{\inf,x}(g)[k(x):k]>0.\]
Since the $\mathbb R$-divisor $\sum_{x\in S}\mu_{\inf,x}(g)x$ has a positive degree, there exists an element $\varphi$ of $\mathrm{Rat}(X)_{\mathbb R}^{\times}$ such that
\begin{equation}\label{Equ: bound of ord x varphi}\mathrm{ord}_x(\varphi)\geqslant\begin{cases}
-\mu_{\inf,x}(g),&\text{if $x\in S$},\\
0,&\text{if $x\in X^{(1)}\setminus S$}.
\end{cases}\end{equation}
Note that $\mu_{\inf,x}(g)\leqslant\operatorname{ord}_x(D)$ for any $x\in X^{(1)}$. Hence $\varphi\in\Gamma(D)_{\mathbb R}^{\times}$. Moreover, by \eqref{Equ: bound of ord x varphi} one has
\[g+g_{(\varphi)}\geqslant 0.\]
Hence $(D,g)+\widehat{(\varphi)}$ is effective.
\ref{Item: criterion of effecitvity 2} Since $\mu_{\inf,x}(g)\leqslant\operatorname{ord}_x(D)=0$ for any $x\in X^{(1)}\setminus\operatorname{Supp}(D)$, and since by hypothesis $\mu_{\inf,x}(g)\geqslant 0$ for all but finitely many $x\in X^{(1)}$, we obtain that $\mu_{\inf,x}(g)=0$ for all but finitely many $x\in X^{(1)}$. Therefore $\sum_{x\in X^{(1)}}\mu_{\inf,x}(g)x$ is well-defined as an $\mathbb R$-divisor on $X$.
Assume that the $\mathbb R$-divisor $\sum_{x\in X^{(1)}}\mu_{\inf,x}(g)x$ is principal, namely of the form $(\varphi)$ for some $\varphi\in\mathrm{Rat}(X)_{\mathbb R}^{\times}$. Then the metrised $\mathbb R$-divisor \[(D,g)-\widehat{(\varphi)}\] is effective. Conversely, if $\phi$ is an element of $\operatorname{Rat}(X)_{\mathbb R}^{\times}$ whose divisor $(\phi)$ is different from $-\sum_{x\in X^{(1)}}\mu_{\inf,x}(g)x$, then there exists $x\in X^{(1)}$ such that $\operatorname{ord}_x(\phi)<-\mu_{\inf,x}(g)$ since
\[\sum_{x\in X^{(1)}}\operatorname{ord}_x(\phi)[k(x):k]=-\sum_{x\in X^{(1)}}\mu_{\inf,x}(g)[k(x):k]=0.\]
Therefore the function $g+g_{(\phi)}$ cannot be non-negative.
\end{proof}
Combining Propositions \ref{Pro: effefctive implies mu inf} and \ref{Pro: criterion of effecitivty up to R linear equivalence}, we obtain the following criterion of effectivity up to $\mathbb R$-linear equivalence for metrised $\mathbb R$-divisors.
\begin{theo}\label{Thm: criterion of effecitivity}
Let $(D,g)$ be a metrised $\mathbb R$-divisor on $X$. Then $(D,g)$ is $\mathbb R$-linearly equivalent to an effective metrised $\mathbb R$-divisor if and only if $\mu_{\inf,x}(g)=0$ for all but finitely many $x\in X^{(1)}$ and if one of the following conditions holds:
\begin{enumerate}[label=\rm(\alph*)]
\item $\mu_{\inf}(g)>0$,
\item $\sum_{x\in X^{(1)}}\mu_{\inf,x}(g)x$ is a principal $\mathbb R$-divisor on $X$.
\end{enumerate}
\end{theo}
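\begin{rema}
To see how Theorem \ref{Thm: criterion of effecitivity} reads in a concrete case, consider again $g=g_D+c$ as in Remark \ref{Rem: canonical Green function plus constant}. If $c\geqslant 0$, then $\mu_{\inf,x}(g)=\operatorname{ord}_x(D)$ vanishes outside of $\operatorname{Supp}(D)$ and $\sum_{x\in X^{(1)}}\mu_{\inf,x}(g)x=D$, so the theorem asserts that $(D,g_D+c)$ is $\mathbb R$-linearly equivalent to an effective metrised $\mathbb R$-divisor if and only if $\deg(D)>0$ or $D$ is principal; if $c<0$, the first condition of the theorem already fails.
\end{rema}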
\subsection{Criterion of pseudo-effectivity} By using the criteria of effectivity up to $\mathbb R$-linear equivalence in the previous subsection, we prove a numerical criterion of pseudo-effectivity in terms of the invariant $\mu_{\inf}(\ndot)$.
\begin{lemm}\label{Lem: closedness of pseudoeffective}
Let $(D,g)$ be a metrised $\mathbb R$-divisor.
Assume that $(D,g+\varepsilon)$ is pseudo-effective for any $\varepsilon>0$. Then $(D,g)$ is also pseudo-effective.
\end{lemm}
\begin{proof}
Let $(D_1,g_1)$ be a big metrised $\mathbb R$-divisor. By Proposition \ref{Pro: criterion big}, one has $\deg(D_1)>0$ and $\lambda_{\mathrm{ess}}(D_1,g_1)>0$. Let $\varepsilon$ be a positive number such that $\varepsilon<\lambda_{\mathrm{ess}}(D_1,g_1)$. By \eqref{Equ: add a constant2} one has
\[\lambda_{\mathrm{ess}}(D_1,g_1-\varepsilon)=\lambda_{\mathrm{ess}}(D_1,g_1)-\varepsilon>0.\]
Hence $(D_1,g_1-\varepsilon)$ is big (by Proposition \ref{Pro: criterion big}). Therefore,
\[(D,g)+(D_1,g_1)=(D+D_1,g+g_1)=(D,g+\varepsilon)+(D_1,g_1-\varepsilon)\]
is big.
\end{proof}
\begin{prop}\phantomsection\label{Pro: pseudoeffective}
A metrised $\mathbb R$-divisor $(D,g)$ on $X$ is pseudo-effective if and only if $\mu_{\inf}(g)\geqslant 0$.
\end{prop}
\begin{proof}``$\Longleftarrow$'':
For any $\varepsilon>0$, one has $\mu_{\inf}(g+\varepsilon)>0$. By Theorem \ref{Thm: criterion of effecitivity}, $(D,g+\varepsilon)$ is $\mathbb R$-linearly equivalent to an effective metrised $\mathbb R$-divisor, and hence is pseudo-effective (see Proposition \ref{Pro: criterion of effecitivty up to R linear equivalence}
\ref{Item: effective implies pseudoeffecgive}). By Lemma \ref{Lem: closedness of pseudoeffective}, we obtain that $(D,g)$ is pseudo-effective.
``$\Longrightarrow$'': We begin with the case where $\deg(D)>0$. If $(D,g)$ is pseudo-effective, then by Corollary \ref{Cor: pseudoeffective implies lambda ess positive}, one has $\lambda_{\mathrm{ess}}(D,g)\geqslant 0$. Hence $(D,g+\varepsilon)$ is big for any $\varepsilon>0$ (by \eqref{Equ: add a constant2} and Proposition \ref{Pro: criterion big}). In particular, one has $\mu_{\inf}(g+\varepsilon)\geqslant 0$ for any $\varepsilon>0$. For each $x\in X^{(1)}$, the function $(\varepsilon>0)\mapsto\mu_{\inf,x}(g+\varepsilon)$ is non-decreasing in $\varepsilon$ and bounded from below by $\mu_{\inf,x}(g)$. Moreover, for any $\xi\in \mathopen{]}\eta_0,x_0\mathclose{[}$ one has
\[\inf_{\varepsilon>0}\frac{g(\xi)+\varepsilon}{t(\xi)}=\frac{g(\xi)}{t(\xi)}\]
and hence
\[\inf_{\varepsilon>0}\mu_{\inf,x}(g+\varepsilon)\leqslant\frac{g(\xi)}{t(\xi)}.\]
Therefore we obtain
\[\inf_{\varepsilon>0}\mu_{\inf,x}(g+\varepsilon)=\mu_{\inf,x}(g).\]
By the monotone convergence theorem we deduce that
\[\mu_{\inf}(g)=\inf_{\varepsilon>0}\mu_{\inf}(g+\varepsilon)\geqslant 0.\]
We now treat the general case. Let $y$ be a closed point of $X$. We consider $y$ as an $\mathbb R$-divisor on $X$ and denote it by $D_y$. Let $g_y$ be the canonical Green function associated with $D_y$. As $D_y$ is effective and $g_y\geqslant 0$, we obtain that $(D_y,g_y)$ is effective and hence pseudo-effective. Therefore, for any $\delta>0$,
\[(D,g)+\delta(D_y,g_y)=(D+\delta D_y,g+\delta g_y)\]
is pseudo-effective. Moreover, one has $\deg(D+\delta D_y)>0$. Therefore, by what we have shown above, one has
\[\mu_{\inf}(g+\delta g_y)=\mu_{\inf}(g)+\delta [k(y):k]\geqslant 0.\]
Since $\delta> 0$ is arbitrary, one obtains $\mu_{\inf}(g)\geqslant 0$.
\end{proof}
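\begin{rema}
Combined with the computation of Remark \ref{Rem: canonical Green function plus constant}, Proposition \ref{Pro: pseudoeffective} shows for instance that $(D,g_D+c)$ is pseudo-effective if and only if $c\geqslant 0$ and $\deg(D)\geqslant 0$.
\end{rema}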
\subsection{Positivity of Green functions}
Let $D$ be an $\mathbb R$-divisor on $X$ such that $\Gamma(D)_{\mathbb R}^{\times}$ is not empty. For any Green function $g$ of $D$, we define a map
\[\widetilde g:X^{\mathrm{an}}\setminus\{x_0\,:\,x\in X^{(1)}\}\longrightarrow \mathbb R\] as follows. For any $\xi\in X^{\mathrm{an}}\setminus\{x_0\,:\,x\in X^{(1)}\}$, let
\begin{equation}\label{Equ: definition of Pg}\widetilde g(\xi):=\sup_{\begin{subarray}{c}s\in\Gamma(D)_{\mathbb R}^{\times}\end{subarray}}\big(\ln|s|(\xi)-\ln\norm{s}_{g}\big).\end{equation}
\begin{prop}
Let $D$ be an $\mathbb R$-divisor on $X$ such that $\Gamma(D)_{\mathbb Q}^{\times}$ is not empty. For any $\xi\in X^{\mathrm{an}}\setminus\{x_0\,:\,x\in X^{(1)}\}$ one has
\begin{equation}
\widetilde{g}(\xi)=\sup_{s\in\Gamma(D)_{\mathbb Q}^{\times}}\big(\ln|s|(\xi)-\ln\norm{s}_g\big).
\end{equation}
\end{prop}
\begin{proof}
Without loss of generality, we may assume that $D$ is effective. To clarify the presentation, we temporarily write
\[\widetilde g_0(\xi):=\sup_{s\in\Gamma(D)_{\mathbb Q}^{\times}}\big(\ln|s|(\xi)-\ln\norm{s}_g\big).\]
Let $s$ be an element of $\Gamma(D)_{\mathbb R}^{\times}$, which is written in the form $s_1^{a_1}\cdots s_r^{a_r}$, where $s_1,\ldots,s_r$ are elements of $\mathrm{Rat}(X)_{\mathbb Q}^{\times}$ and $a_1,\ldots,a_r$ are positive real numbers, which are linearly independent over $\mathbb Q$. Let $\{x_1,\ldots,x_n\}$ be the support of $(s)$. By Lemma \ref{Lem: linear independence}, for any $i\in\{1,\ldots,r\}$, the support of $(s_i)$ is contained in $\{x_1,\ldots,x_n\}$. Since $s$ belongs to $\Gamma(D)_{\mathbb R}^{\times}$, for $j\in\{1,\ldots,n\}$, one has
\[a_1\operatorname{ord}_{x_j}(s_1)+\cdots+a_r\operatorname{ord}_{x_j}(s_r)+\operatorname{ord}_{x_j}(D)\geqslant 0.\]
By Lemma \ref{Lem: approximation by rational solutions} and Remark \ref{Rem: suite d'approximation}, there exist a sequence $(\varepsilon^{(m)})_{m\in\mathbb N}$ in $\mathbb Q_{>0}$ and a sequence \[\boldsymbol{\delta}^{(m)}=(\delta_1^{(m)},\ldots,\delta_r^{(m)}),\quad m\in\mathbb N\] of elements of $\mathbb R_{>0}^r$ which satisfy the following conditions
\begin{enumerate}[label=\rm(\arabic*)]
\item the sequence $(\varepsilon^{(m)})_{m\in\mathbb N}$ converges to $0$,
\item the sequence $(\boldsymbol{\delta}^{(m)})_{m\in\mathbb N}$ converges to $(0,\ldots,0)$,
\item if we denote by $u^{(m)}$ the element
\[s_1^{\delta_1^{(m)}}\!\!\cdots\, s_r^{\delta_r^{(m)}}\]
in $\mathrm{Rat}(X)_{\mathbb R}^{\times}$, one has $u^{(m)}\in\Gamma(\varepsilon^{(m)}D)^{\times}_{\mathbb R}$ and \[s^{(m)}:=(su^{(m)})^{(1+\varepsilon^{(m)})^{-1}}\in\mathrm{Rat}(X)^{\times}_{\mathbb Q},\] and hence it belongs to $\Gamma(D)_{\mathbb Q}^{\times}$.
\end{enumerate}
Note that one has
\[\norm{su^{(m)}}_{(1+\varepsilon^{(m)})g}\leqslant\norm{s}_g\cdot\norm{u^{(m)}}_{\varepsilon^{(m)}g}.\]
Since $u^{(m)}\in\Gamma(\varepsilon^{(m)}D)^{\times}_{\mathbb R}$, one has
\[-\ln\norm{u^{(m)}}_{\varepsilon^{(m)}g}=\inf
\Big(\varepsilon^{(m)}g+\sum_{i=1}^r\delta_i^{(m)}g_{(s_i)}\Big)\geqslant\varepsilon^{(m)}\inf\varphi_g.
\]
Therefore,
\[-\ln\norm{s}_g\leqslant -(1+\varepsilon^{(m)})\ln\norm{s^{(m)}}_g-\varepsilon^{(m)}\inf\varphi_g.\]
Thus
\[\begin{split}\ln|s|(\xi)-\ln\norm{s}_g&=
(1+\varepsilon^{(m)})\ln|s^{(m)}|(\xi)-\sum_{i=1}^r\delta_i^{(m)}\ln|s_i|(\xi)-\ln\norm{s}_g\\
&\leqslant(1+\varepsilon^{(m)})\widetilde g_0(\xi)-\sum_{i=1}^r\delta_i^{(m)}\ln|s_i|(\xi)-\varepsilon^{(m)}\inf\varphi_g.
\end{split}\]
Taking the limit when $m\rightarrow+\infty$, we obtain
\[\ln|s|(\xi)-\ln\norm{s}_g\leqslant\widetilde g_0(\xi).\]
The proposition is thus proved.
\end{proof}
\begin{prop}\phantomsection\label{Pro: tilde g}
Let $D$ be an $\mathbb R$-divisor on $X$ such that $\Gamma(D)^{\times}_{\mathbb R}$ is not empty. For any Green function $g$ of $D$, the function $\widetilde g$ extends on $X^{\mathrm{an}}$ to a convex Green function of $D$ which is bounded from above by $g$.
\end{prop}
\begin{proof}
We first show that $\widetilde g$ is bounded from above by $g$. For any $s\in\Gamma(D)_{\mathbb R}^{\times}$ one has
\[\forall\,\xi\in X^{\mathrm{an}},\quad -\ln\norm{s}_g=\inf(g_{(s)}+g)\leqslant g(\xi)-\ln|s|(\xi),\]
so that
\[\forall\,\xi\in X^{\mathrm{an}},\quad \ln|s|(\xi)-\ln\norm{s}_g\leqslant g(\xi).\]
It remains to check that $\widetilde g$ extends by continuity to a convex Green function of $D$.
We first treat the case where $\deg(D)=0$. By Remark \ref{Rem: degree 0 effective} we obtain that $\Gamma(D)_{\mathbb R}^{\times}$ contains a unique element $s$ and one has $D=-(s)$. Therefore \[\widetilde g=\ln|s|-\ln\norm{s}_g=g_D-\ln\norm{s}_g,\]
which clearly extends to a convex Green function of $D$.
In the following, we assume that $\deg(D)>0$. Let $x$ be an element of $X^{(1)}$. The function $\widetilde g\circ \xi_x|_{\mathbb R_{>0}}$ (see \S\ref{Subsec: tree of length 1}) can be written as
\[(t\in\mathbb R_{>0})\longmapsto\sup_{s\in\Gamma(D)_{\mathbb R}^{\times}} \big(-t\operatorname{ord}_x(s)-\ln\norm{s}_g\big),\]
which is the supremum of a family of affine functions of $t$ on $\mathbb R_{>0}$. Therefore $\widetilde g\circ\xi_x|_{\mathbb R_{>0}}$ is a convex function on $\mathbb R_{>0}$. This expression also shows that, for any $s\in\Gamma(D)_{\mathbb R}^{\times}$, one has
\[\liminf_{\xi\rightarrow x_0}\frac{\widetilde g(\xi)}{t(\xi)}\geqslant\operatorname{ord}_x(s^{-1}).\]
By Proposition \ref{Pro: sup of s in Gamma D} (see also Remark \ref{Rem: degree 0 effective}), one has
\[\liminf_{\xi\rightarrow x_0}\frac{\widetilde g(\xi)}{t(\xi)}\geqslant \sup_{s\in\Gamma(D)_{\mathbb R}^{\times}}\operatorname{ord}_x(s^{-1})=\operatorname{ord}_x(D).\]
Moreover, since $\widetilde g\leqslant g$ and since $g$ is a Green function of $D$, one has
\[\limsup_{\xi\rightarrow x_0}\frac{\widetilde g(\xi)}{t(\xi)}\leqslant \lim_{\xi\rightarrow x_0}\frac{g(\xi)}{t(\xi)}=\operatorname{ord}_x(D).\]
Therefore one has
\[\lim_{\xi\rightarrow x_0}\frac{\widetilde g(\xi)}{t(\xi)}=\operatorname{ord}_x(D).\]
The proposition is thus proved.
\end{proof}
\begin{defi}\label{Def: psh}
Let $(D,g)$ be a metrised $\mathbb R$-divisor on $X$ such that $\Gamma(D)_{\mathbb R}^{\times}$ is not empty. We call $\widetilde g$ the \emph{plurisubharmonic envelope} of the Green function $g$. In the case where the equality $g=\widetilde g$ holds, we say that the Green function $g$ is \emph{plurisubharmonic}. Note that $\widetilde g$ is bounded from above by the convex envelope $\widebreve{g}$ of $g$.
\end{defi}
\begin{rema}
If we set $\varphi = g - \widetilde{g}$, then $\varphi$ is a non-negative continuous function on $X^{\mathrm{an}}$,
so that, in some sense, the decomposition $(D, g) = (D, \widetilde{g}) + (0, \varphi)$ gives rise to
a Zariski decomposition of $(D,g)$ on $X$.
\end{rema}
\begin{theo}\label{Thm:criterion of g tilde}
Let $(D,g)$ be an adelic $\mathbb R$-Cartier divisor on $X$ such that $\Gamma(D)_{\mathbb R}^{\times}$ is not empty. Then $\widetilde{g}(\eta_0)=g(\eta_0)$ if and only if $\mu_{\inf}(g-g(\eta_0))\geqslant 0$. Moreover, in the case where these equivalent conditions are satisfied, $\widetilde{g}$ identifies with the convex envelope $\widebreve{g}$ of $g$.
\end{theo}
\begin{proof}
{\bf Step 1:} We first treat the case where $\deg(D)=0$. In this case $\Gamma(D)_{\mathbb R}^{\times}$ contains a unique element $s$ (with $D=-(s)$) and one has (see the proof of Proposition \ref{Pro: tilde g}) \[\widetilde g=g_D-\ln\norm{s}_g.\] Hence \[\widetilde g(\eta_0)=-\ln\norm{s}_g=\inf(g_{(s)}+g)=\inf\varphi_g.\]
Note that $g(\eta_0)=\varphi_g(\eta_0)$. Therefore, the equality $\widetilde g(\eta_0)=g(\eta_0)$ holds if and only if $\varphi_g$ attains its minimal value at $\eta_0$, or equivalently
\[\forall\,x\in X^{(1)},\quad\mu_{\inf,x}(g-g(\eta_0))=\operatorname{ord}_x(D).\]
In particular, if $\widetilde g(\eta_0)=g(\eta_0)$, then
\[\mu_{\inf}(g-g(\eta_0))=\sum_{x\in X^{(1)}}\operatorname{ord}_x(D)[k(x):k]=\deg(D)=0.\]
Conversely, if $\mu_{\inf}(g-g(\eta_0))\geqslant 0$, then by \eqref{Equ: mu inf} one obtains that \[\mu_{\inf}(g-g(\eta_0))= 0\]
and the equality $\mu_{\inf,x}(g-g(\eta_0))=\operatorname{ord}_x(D)$ holds for any $x\in X^{(1)}$. Hence $\widetilde g(\eta_0)=g(\eta_0)$.
If $\varphi$ is a bounded Green function on $X^{\mathrm{an}}$, which is bounded from above by $\varphi_g$, by Proposition \ref{prop:conv:properties} one has \[\varphi(\xi)\leqslant\varphi(\eta_0)\leqslant\varphi_g(\eta_0)=g(\eta_0)\] for any $\xi\in X^{\mathrm{an}}$.
In the case where the equality $\widetilde g(\eta_0)=g(\eta_0)$ holds, the function $\widetilde g=g_D+g(\eta_0)$ is the largest convex Green function of $D$ which is bounded from above by $g$, namely the equality $\widetilde g=\widebreve{g}$ holds.
{\bf Step 2:}
In the following, we assume that $\deg(D)>0$. By replacing $g$ by $g-g(\eta_0)$ it suffices to check that, in the case where $g(\eta_0)=0$, the equality $\widetilde g(\eta_0)=0$ holds if and only if $\mu_{\inf}(g)\geqslant 0$. By definition one has
\[\widetilde{g}(\eta_0)=\sup_{s\in\Gamma(D)_{\mathbb R}^{\times}}(-\ln\norm{s}_g).\]
\noindent{\it Step 2.1:} We first assume that $\widetilde g(\eta_0)=0$ and show that $\mu_{\inf}(g)\geqslant 0$. Let $s$ be an element of $\Gamma(D)_{\mathbb R}^{\times}$. By definition one has
\[-\ln\norm{s}_g=\inf_{\xi\in X^{\mathrm{an}}}(g+g_{(s)})(\xi).\]
Let $(D_1,g_1)$ be a big metrised $\mathbb R$-divisor. We fix $s_1\in\Gamma(D_1)_{\mathbb R}^{\times}$ such that $\norm{s_1}_{g_1}<1$ (see Proposition \ref{Pro: criterion big} for the existence of $s_1$). Since $\widetilde g(\eta_0)=0$, there exists $s\in\Gamma(D)_{\mathbb R}^{\times}$ such that
\[\norm{ss_1}_{g+g_1}\leqslant\norm{s}_g\cdot\norm{s_1}_{g_1}<1.\]
Therefore $\lambda_{\mathrm{ess}}(D+D_1,g+g_1)>0$ and hence $(D+D_1,g+g_1)$ is big (see Proposition \ref{Pro: criterion big}). We then obtain that $(D,g)$ is pseudo-effective and hence $\mu_{\inf}(g)\geqslant 0$ (see Proposition \ref{Pro: pseudoeffective}).
\medskip
\noindent{\it Step 2.2:} We now show that $\mu_{\inf}(g)>0$ implies $\widetilde g(\eta_0)=0$. For $\varepsilon>0$, let \[U_\varepsilon:=\{\xi\in X^{\mathrm{an}}\,:\,g(\xi)>-\varepsilon\}.\] This is an open subset of $X^{\mathrm{an}}$ which contains $\eta_0$. Hence there exists a finite set $X_\varepsilon^{(1)}$ of closed points of $X$, which contains the support of $D$ and such that, for any closed point $x$ of $X$ lying outside of $X_\varepsilon^{(1)}$, one has $g|_{[\eta_0,x_0]}> -\varepsilon$. Moreover, for any $x\in X^{(1)}\setminus\operatorname{Supp}(D)$ one has $\mu_{\inf,x}(g)\leqslant 0$ since $g$ is bounded on $[\eta_0,x_0]$. Therefore, the condition $\mu_{\inf}(g)>0$ implies that
\begin{equation}\label{Equ:mu total positive}\sum_{x\in X^{(1)}_{\varepsilon}}\mu_{\inf,x}(g)[k(x):k]> 0.\end{equation}
We let $s_\varepsilon$ be an element of $\mathrm{Rat}(X)^{\times}_{\mathbb R}$ such that $\operatorname{ord}_x(s_\varepsilon)\geqslant -\mu_{\inf,x}(g)$ for any $x\in X_\varepsilon^{(1)}$ and that $\operatorname{ord}_x(s_{\varepsilon})\geqslant 0$ for any $x\in X^{(1)}\setminus X_\varepsilon^{(1)}$. This is possible by the inequality \eqref{Equ:mu total positive}. In fact, the $\mathbb R$-divisor
\[E=\sum_{x\in X_{\varepsilon}^{(1)}}\mu_{\inf,x}(g)\cdot x\]
has a positive degree, and hence $\Gamma(E)_{\mathbb R}^{\times}$ is not empty.
Note that $\mu_{\inf,x}(g)\leqslant\operatorname{ord}_x(D)$ for any $x\in X^{(1)}$. Therefore $D+(s_\varepsilon)$ is effective. Moreover, for any $x\in X^{(1)}\setminus X_\varepsilon^{(1)}$ and $\xi\in[\eta_0,x_0[$ one has
\[(g-\ln|s_\varepsilon|)(\xi)\geqslant g(\xi)\geqslant -\varepsilon.\]
Therefore we obtain $\norm{s_\varepsilon}_g\leqslant \mathrm{e}^{\varepsilon}$ since $g-\ln|s_\varepsilon|\geqslant 0$ on $\mathopen{[}\eta_0,x_0\mathclose{[}$ for any $x\in X_\varepsilon^{(1)}$. This leads to $\widetilde g(\eta_0)=0$ since $\varepsilon$ is arbitrary.
\medskip
\noindent{\it Step 2.3:} We assume that $\mu_{\inf}(g)>0$ and show that $\widebreve{g}=\widetilde g$. By definition, for any $x\in X^{(1)}$, the function $\widetilde g\circ\xi_x|_{\mathbb R_{>0}}$ can be written as the supremum of a family of affine functions, hence it is a convex function on $\mathbb R_{>0}$ bounded from above by $g$. In the following, we fix a closed point $x$ of $X$.
Without loss of generality, we may assume that $x$ belongs to $X_\varepsilon^{(1)}$ for any $\varepsilon>0$. Note that for any $\xi\in[\eta_0,x_0]$ one has
\[\widetilde g(\xi)\geqslant\ln|s_\varepsilon|(\xi)-\ln\norm{s_\varepsilon}_g\geqslant \mu_{\inf,x}(g)t(\xi)-\varepsilon. \]
Since $\varepsilon>0$ is arbitrary, one has $\widetilde g(\xi)\geqslant \mu_{\inf,x}(g)t(\xi)$.
Let $a$ and $b$ be real numbers such that $at(\xi)+b\leqslant g(\xi)$ for any $\xi\in[\eta_0,x_0[$. Then one has $b\leqslant 0$ since $g(\eta_0)=0$. Moreover, one has
\[a=\lim_{\xi\rightarrow x_0}\frac{at(\xi)+b}{t(\xi)}\leqslant\lim_{\xi\rightarrow x_0}\frac{g(\xi)}{t(\xi)}=\operatorname{ord}_x(D). \] We will show that $at(\xi)+b\leqslant \widetilde g(\xi)$ for any $\xi\in[\eta_0,x_0[$. This inequality is trivial when $a\leqslant\mu_{\inf,x}(g)$ since $\widetilde g(\xi)\geqslant\mu_{\inf,x}(g)t(\xi)$ and $b\leqslant 0$. In the following, we assume that $a>\mu_{\inf,x}(g)$.
For any $\varepsilon>0$, we let $s_{\varepsilon}^{a,b}$ be an element of $\mathrm{Rat}(X)_{\mathbb R}^{\times}$ such that
\[\mathrm{ord}_y(s_{\varepsilon}^{a,b})\geqslant\begin{cases}
-a&\text{if $y=x$,}\\
-\mu_{\inf,y}(g)&\text{if $y\in X_\varepsilon^{(1)}$, $y\neq x$,}\\
0&\text{if $y\in X^{(1)}\setminus X_\varepsilon^{(1)}$.}
\end{cases}\]
This is possible since $\mu_{\inf}(g)> 0$ and $a>\mu_{\inf,x}(g)$. Note that $s_\varepsilon^{a,b}$ belongs to $\Gamma(D)^{\times}_{\mathbb R}$. Moreover, for $\xi\in [\eta_0,x_0[$, one has
\[g(\xi)-\ln|s_\varepsilon^{a,b}|(\xi)\geqslant g(\xi)-at(\xi)\geqslant b;\]
for any $y\in X_\varepsilon^{(1)}\setminus\{x\}$ and $\xi\in[\eta_0,y_0[$, one has
\[g(\xi)-\ln|s_\varepsilon^{a,b}|(\xi)\geqslant g(\xi)-\mu_{\inf,y}(g)t(\xi)\geqslant 0;\]
for any $y\in X^{(1)}\setminus X_\varepsilon^{(1)}$ and $\xi\in[\eta_0,y_0[$, one has $g(\xi)-\ln|s_{\varepsilon}^{a,b}|(\xi)\geqslant g(\xi)\geqslant -\varepsilon$. Therefore we obtain
\[-\ln\norm{s_{\varepsilon}^{a,b}}_g\geqslant \min\{-\varepsilon,b\}.\]
As a consequence, for any $\xi\in[\eta_0,x_0[$, one has
\[\widetilde g(\xi)\geqslant\ln|s_\varepsilon^{a,b}|(\xi)-\ln\norm{s_\varepsilon^{a,b}}_g=at(\xi)+\min\{-\varepsilon,b\}.\]
Since $b\leqslant 0$ and since $\varepsilon>0$ is arbitrary, we obtain $\widetilde g(\xi)\geqslant at(\xi)+b$.
\medskip
\noindent{\bf Step 3:}
In this step, we assume that $\deg(D)>0$ and $\mu_{\inf}(g-g(\eta_0))=0$. We show that $\widetilde g(\eta_0)=g(\eta_0)$ and that $\widebreve{g}=\widetilde g$. Without loss of generality, we assume that $g(\eta_0)=0$. Since
\[\deg(D)=\sum_{x\in X^{(1)}}\mu_x(g)[k(x):k]>0,\]
there exists $y\in X^{(1)}$ such that \[\mu_{\inf,y}(g)<\operatorname{ord}_y(D)=\mu_y(g).\]
We let $g_0$ be the bounded Green function on $\mathcal T(X^{(1)})$ such that $g_0(\xi)=0$ for \[\xi\in\bigcup_{x\in X^{(1)},\,x\neq y}\mathopen{[}\eta_0,x_0\mathclose{]},\]
and
\[g_0(\xi)=\min\{t(\xi),1\},\quad \text{for $\xi\in\mathopen{[}\eta_0,y_0\mathclose{]}$}.\]
One has $g_0\geqslant 0$, and
\[\sup_{\xi\in X^{\mathrm{an}}}g_0(\xi)\leqslant 1.\]
For any $\varepsilon>0$, we denote by $g_\varepsilon$ the Green function $g+\varepsilon g_0$. One has \[\mu_{\inf}(g_\varepsilon)>\mu_{\inf}(g)\geqslant 0.\] Moreover, by definition $g_\varepsilon(\eta_0)=0$. Therefore, by what we have shown in Step 2.2, one has
\[\widetilde g_\varepsilon(\eta_0)=\sup_{s\in\Gamma(D)_{\mathbb R}^{\times}}\big(-\ln\norm{s}_{g_\varepsilon}\big)=0.\]
Note that for any $s\in\Gamma(D)_{\mathbb R}^{\times}$ one has
\[\mathrm{e}^\varepsilon\norm{s}_{g_\varepsilon} \geqslant\norm{s}_g\geqslant\norm{s}_{g_\varepsilon}. \]
Hence we obtain
\[\widetilde g_{\varepsilon}-\varepsilon\leqslant\widetilde g\leqslant\widetilde g_{\varepsilon}.\]
Since $\widetilde g_\varepsilon(\eta_0)=0$ for any $\varepsilon>0$, we obtain $\widetilde g(\eta_0)=0$. Finally, the inequalities
\[g_\varepsilon-\varepsilon\leqslant g\leqslant g_\varepsilon\]
lead to
\[\widebreve{g}_{\!\varepsilon}-\varepsilon\leqslant \widebreve{g}\leqslant\widebreve{g}_{\!\varepsilon}.\]
By what we have shown in Step 2.3, one has $\widetilde{g}_\varepsilon=\widebreve{g}_{\!\varepsilon}$ for any $\varepsilon>0$. Therefore the equality $\widetilde g=\widebreve{g}$ holds.
\medskip
\end{proof}
\begin{coro}
Let $(D,g)$ be a metrised $\mathbb R$-divisor on $X$ such that $\Gamma(D)_{\mathbb R}^{\times}\neq\emptyset$. Then $g$ is plurisubharmonic if and only if it is convex and $\mu_{\inf}(g-g(\eta_0))\geqslant 0$.
\end{coro}
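\begin{rema}
For instance, if $\deg(D)>0$, then $\Gamma(D)_{\mathbb R}^{\times}$ is not empty and, for any constant $c\in\mathbb R$, the Green function $g_D+c$ is plurisubharmonic: it is a convex Green function of $D$ (see the proof of Proposition \ref{Pro: tilde g}), and, since $g_D(\eta_0)=0$, the computation of Remark \ref{Rem: canonical Green function plus constant} gives
\[\mu_{\inf}\big((g_D+c)-(g_D+c)(\eta_0)\big)=\mu_{\inf}(g_D)=\deg(D)>0.\]
\end{rema}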
\subsection{Global positivity conditions under metric positivity} Let $X$ be a regular projective curve over $\Spec k$ and $(D,g)$ be a metrised $\mathbb R$-divisor. In this section, we consider global positivity conditions under the hypothesis that $g$ is plurisubharmonic.
\begin{prop}
Let $(D,g)$ be a metrised $\mathbb R$-divisor such that $\Gamma(D)_{\mathbb R}^{\times}$ is not empty and that the Green function $g$ is plurisubharmonic.
\begin{enumerate}[label=\rm(\arabic*)]
\item\label{Item: pseudo effective if g eta positive} $(D,g)$ is pseudo-effective if and only if $g(\eta_0)\geqslant 0$.
\item\label{Item: lambda ess is g eta 0} One has $\lambda_{\mathrm{ess}}(D,g)=g(\eta_0)$.
\item\label{Item: critere de bigness} The metrised $\mathbb R$-divisor $(D,g)$ is big if and only if $\deg(D)>0$ and $g(\eta_0)>0$.
\end{enumerate}
\end{prop}
\begin{proof}
\ref{Item: pseudo effective if g eta positive} We have already seen in Proposition \ref{Pro: pseudo effective, mu inf positive} that, if $(D,g)$ is pseudo-effective, then $g(\eta_0)\geqslant 0$. It suffices to prove that $g(\eta_0)\geqslant 0$ implies that $(D,g)$ is pseudo-effective. Since $g$ is plurisubharmonic, by Theorem \ref{Thm:criterion of g tilde} one has
\[\mu_{\inf}(g)\geqslant\mu_{\inf}(g-g(\eta_0))\geqslant 0.\]
By Proposition \ref{Pro: pseudoeffective}, one obtains that $(D,g)$ is pseudo-effective.
\ref{Item: lambda ess is g eta 0}
By \eqref{Equ: lambda r ess bounded} and Proposition \ref{Pro: essential minimu bounded}, it suffices to prove that $g(\eta_0)\leqslant\lambda_{\mathrm{ess}}(D,g)$. In the case where $\deg(D)=0$, the hypotheses that $\Gamma(D)_{\mathbb R}^{\times}$ is not empty and $g$ is plurisubharmonic imply that $D$ is a principal $\mathbb R$-divisor, $\Gamma(D)_{\mathbb R}^{\times}$ contains a unique element $s$ with $D=-(s)$, and $g-g(\eta_0)$ is the canonical Green function of $D$ (see the first step of the proof of Theorem \ref{Thm:criterion of g tilde}). Therefore one has \[\lambda_{\mathrm{ess}}(D,g)=-\ln\norm{s}_g=g(\eta_0).\]
In the following we treat the case where $\deg(D)>0$. Since $g$ is plurisubharmonic, by Theorem \ref{Thm:criterion of g tilde} one has $\mu_{\inf}(g-g(\eta_0))\geqslant 0$, so that $(D,g-g(\eta_0))$ is pseudo-effective (see Proposition \ref{Pro: pseudoeffective}). As $\deg(D)>0$, by Corollary \ref{Cor: pseudoeffective implies lambda ess positive} and \eqref{Equ: add a constant2}, one has
\[\lambda_{\mathrm{ess}}(D,g-g(\eta_0))=\lambda_{\mathrm{ess}}(D,g)-g(\eta_0)\geqslant 0.\]
\ref{Item: critere de bigness} follows from \ref{Item: lambda ess is g eta 0} and Proposition \ref{Pro: criterion big}.
\end{proof}
\section{Hilbert-Samuel formula on curves}
\label{Sec:Hilbert-Samuel}
Let $k$ be a field equipped with the trivial valuation.
Let $X$ be a regular and irreducible projective curve over $k$. The purpose of this section is to prove a Hilbert-Samuel formula for metrised $\mathbb R$-divisors on $X$.
\begin{defi}
We identify $X^{\mathrm{an}}$ with the infinite tree $\mathcal T(X^{(1)})$ and consider the weight function $w:X^{(1)}\rightarrow\mathopen{]}0,+\infty\mathclose{[}$ defined as $w(x)=[k(x):k]$. If $\overline D_1=(D_1,g_1)$ and $\overline{D}_2=(D_2,g_2)$ are metrised $\mathbb R$-divisors on $X$ such that $g_1$ and $g_2$ are both pairable (see Definition \ref{Def: pairing of Green functions}), we define $(\overline D_1\cdot\overline D_2)$ as the pairing $\langle g_1,g_2\rangle_w$, namely
\begin{equation}\label{Equ: coupling Di}\begin{split}(\overline D_1\cdot\overline D_2)=g_2(\eta_0)\deg(&D_1)+g_1(\eta_0)\deg(D_2)\\&-\sum_{x\in X^{(1)}}[k(x):k]\int_0^{+\infty}\varphi_{g_1\circ\xi_x}'(t)\varphi_{g_2\circ\xi_x}'(t)\,\mathrm{d}t.\end{split}\end{equation}
\end{defi}
\begin{rema}
Assume that $s$ is an element of $\operatorname{Rat}(X)^{\times}_{\mathbb R}$ such that \[\overline D_2=\widehat{(s)}=((s),g_{(s)}).\] One has (see Definition \ref{Def: pairing of Green functions})
\[\begin{split}&\quad\;(\overline D_1\cdot\overline D_2)=\langle g_1,g_{(s)}\rangle_w=g_1(\eta_0)\deg((s))=0.\end{split}\]
\end{rema}
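\begin{rema}
As another elementary example, suppose that $g_i=g_{D_i}+c_i$, where $g_{D_i}$ is the canonical Green function associated with $D_i$ and $c_i\in\mathbb R$, for $i\in\{1,2\}$. Then the functions $\varphi_{g_i\circ\xi_x}$ occurring in \eqref{Equ: coupling Di} are constant, so the integral term vanishes, and $g_i(\eta_0)=c_i$, so that
\[(\overline D_1\cdot\overline D_2)=c_2\deg(D_1)+c_1\deg(D_2).\]
\end{rema}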
\begin{theo}\label{thm:Hilbert:Samuel:semipositive}
Let $\overline D=(D, g)$ be a metrised $\mathbb R$-divisor on $X$ such that $\Gamma(D)_{\mathbb R}^{\times}\neq\emptyset$ and $g$ is plurisubharmonic.
Then $\widehat{\mathrm{vol}}_{\chi}(\overline D) = (\overline D\cdot\overline D)$.
\end{theo}
\begin{rema}Let $g_D$ be the canonical admissible Green function of $D$ and $\varphi_g := g - g_D$ (considered as a continuous function on $X^{\mathrm{an}}$).
Note that a plurisubharmonic Green function is convex (see Proposition \ref{Pro: tilde g}). Therefore, by Proposition \ref{Pro: relation between mu and varphig}, one has
\[\mu_{\inf,x}(g-g(\eta_0))=g'(\eta_0;x)=\operatorname{ord}_x(D)+\varphi_g'(\eta_0;x).\]
Theorem \ref{Thm:criterion of g tilde} shows that
\begin{equation}\mu_{\inf}(g-g(\eta_0))=\deg(D)+\sum_{x\in X^{(1)}}\varphi_g'(\eta_0;x)[k(x):k]\geqslant 0.\end{equation}
In the case where $\deg(D)=0$, one has $g=g(\eta_0)+g_D$ (see Step 1 in the proof of Theorem \ref{Thm:criterion of g tilde}). Therefore one has
\[(\overline D\cdot\overline D)=2g(\eta_0)\deg(D)=0=\widehat{\operatorname{vol}}_\chi(\overline D),\]
where the last equality comes from (3) of Proposition \ref{prop:formula:avol:g:g:prime}. Therefore, to prove Theorem \ref{thm:Hilbert:Samuel:semipositive}, it suffices to treat the case where $\deg(D)>0$.
\end{rema}
\begin{enonce}{Assumption}\rm\label{assumption:thm:Hilbert:Samuel:semipositive}
Let $\Sigma$ be the set consisting of closed points $x$ of $X$ such that $\varphi_g$ is not a constant function on $[\eta_0, x_0]$. Note that $\Sigma$ is countable by Proposition \ref{Pro: constant except countable}.
Here we consider additional assumptions (i) -- (iv).
\begin{enumerate}[label=\rm(\roman*)]
\item $D$ is a divisor.
\item $\Sigma$ is finite.
\item $\varphi_g(\eta_0) = 0$.
\item $\mu_{\inf}(g-g(\eta_0))> 0$.
\end{enumerate}
\end{enonce}
These assumptions actually describe a special case of the setting of the above theorem, but it is an essential case because the theorem in general is a consequence of its assertion under these assumptions by using the continuity of $\widehat{\mathrm{vol}}_{\chi}(\ndot)$.
Before starting the proof of Theorem~\ref{thm:Hilbert:Samuel:semipositive} under the above assumptions, we need to prepare several facts.
For the moment, we proceed with arguments under Assumption~\ref{assumption:thm:Hilbert:Samuel:semipositive}.
Let $L = \mathcal O_X(D)$ and $h$ be the continuous metric of $L$ given by $\exp(-\varphi_g)$.
For $x\in\Sigma$, let \[a_x := \varphi_g'(\eta_0;x)\quad\text{and}\quad\varphi_x' := \varphi_{g\circ\xi_x}'.\]
For $x\in\Sigma$ and $n\in\mathbb N_{\geqslant 1}$,
we set $a_{x,n} = \lfloor -n a_x \rfloor$. One has \[a_{x,n} \leqslant -na_x < a_{x,n} + 1\quad\text{and}\quad
\lim_{n\to\infty} \frac{a_{x,n}}{n} = -a_x.\] Moreover, as \[\sum_{x \in \Sigma} a_x[k(x):k] + \deg(L) > 0\] by our assumptions, there exists $n_0\in\mathbb N_{\geqslant 1}$ such that
\[\begin{split}
&\quad\;\frac{2(\operatorname{genus}(X)-1) + \sum_{x \in \Sigma} a_{x, n}[k(x):k] +\sum_{x\in\Sigma}[k(x):k]}{n} \\
&\leqslant \frac{2(\operatorname{genus}(X)-1) + \sum_{x\in\Sigma}[k(x):k]}{n} - \sum_{x \in \Sigma} a_x[k(x):k] < \deg(L)
\end{split}\]
holds for any integer $n\geqslant n_0$, that is,
\begin{equation}\forall\,n\in\mathbb N_{\geqslant n_0},\quad 2(\operatorname{genus}(X)-1) + \sum_{x \in \Sigma} (a_{x,n}+1)[k(x):k] < n \deg(L).\end{equation}
We set
\[
D_n = \sum_{x \in \Sigma} (a_{x, n}+1) x\quad\text{and}\quad
D_{x, n} = D_n - (a_{x, n}+1) x.
\]
Note that
\[
\begin{cases}
H^0(X, nL \otimes \mathcal O_X(-D_n)) = \{ s \in H^0(X, nL) \,:\, \text{$\operatorname{ord}_x(s) \geqslant a_{x, n}+1$ ($\forall x \in \Sigma$)} \}, \\[1ex]
H^0(X, nL \otimes \mathcal O_X(-D_{x,n} - i x)) \\
\hskip3em = \left\{ s \in H^0(X, nL) \,:\, \text{$\operatorname{ord}_y(s) \geqslant a_{y, n}+1$ ($\forall y \in \Sigma \setminus \{x\}$) and
$\operatorname{ord}_x(s) \geqslant i$}\right\}
\end{cases}
\]
\begin{lemm}\label{lemm:sum:subspace}For any integer $n$ such that $n\geqslant 0$, the following assertions hold.
\begin{enumerate}[label=\rm(\arabic*)]
\item
$\sum_{x \in \Sigma} H^0(X, nL \otimes \mathcal O_X(-D_{x, n}))
= H^0(X, nL)$.
\item One has
\begin{multline*}
\hskip2em H^0(X, nL)/H^0(X, nL \otimes \mathcal O_X(-D_n)) \\
= \bigoplus_{x \in \Sigma} H^0(X, nL \otimes \mathcal O_X(-D_{x,n}))/H^0(X, nL \otimes \mathcal O_X(-D_n))
\end{multline*}
\end{enumerate}
\end{lemm}
\begin{proof}
(1) Let us consider a natural homomorphism
\[
\bigoplus_{x \in \Sigma}nL \otimes \mathcal O_X(-D_{x, n}) \to nL.
\]
Note that the above homomorphism is surjective and the kernel is isomorphic to
$(nL \otimes \mathcal O_X(-D_n))^{\oplus \operatorname{card}(\Sigma) - 1}$. Moreover, by Serre's duality,
\[
H^1(X, nL \otimes \mathcal O_X(-D_n)) \simeq H^0(X, \omega_X \otimes -n L \otimes \mathcal O_X(D_n))^{\vee}
\]
and
\[\begin{split}
&\quad\;\deg(\omega_X \otimes -n L \otimes \mathcal O_X(D_n))\\& = 2(\operatorname{genus}(X)-1) - n \deg(L) + \sum_{x \in \Sigma} (a_{x, n}+1)[k(x):k] < 0,
\end{split}
\]
so that $H^1(X, nL \otimes \mathcal O_X(-D_n)) = 0$. Therefore one has (1).
\medskip
(2) By (1), it is sufficient to see that if \[\sum_{x \in \Sigma} s_x \in H^0(X, nL \otimes \mathcal O_X(-D_n))\] and
\[\forall\,x\in\Sigma,\quad s_x \in H^0(X, nL \otimes \mathcal O_X(-D_{x,n})),\] then \[s_x \in H^0(X, nL \otimes \mathcal O_X(-D_n))\] for all $x \in \Sigma$.
Indeed, as \[\forall\,y\in\Sigma\setminus\{x\},\quad s_y \in H^0(X, nL \otimes \mathcal O_X(-(a_{x,n}+1) x))\] and \[\sum_{y \in \Sigma} s_y \in H^0(X, nL \otimes \mathcal O_X(-(a_{x,n}+1) x)),\]
we obtain \[s_x \in H^0(X, nL \otimes \mathcal O_X(-(a_{x,n}+1) x)),\] so that $s_x \in H^0(X, nL \otimes \mathcal O_X(-D_n))$, as required.
\end{proof}
\begin{lemm}\label{lem:quotient:one:dim}
For $x \in \Sigma$ and $i \in \{ 0, \ldots, a_{x,n}\}$,
\[
\dim_k \Big( H^0(X, nL \otimes \mathcal O_X(-D_{x,n}-ix))/H^0(X, nL \otimes\mathcal O_X(-D_{x,n}-(i+1)x)) \Big)= [k(x):k].
\]
\end{lemm}
\begin{proof} Let us consider an exact sequence
\[
0 \to nL \otimes \mathcal O_X(-D_{x,n}-(i+1)x) \to nL \otimes \mathcal O_X(-D_{x, n} -ix) \to k(x) \to 0,
\]
so that, since
\[\begin{split}
&\quad\;\deg(\omega_X \otimes -n L \otimes \mathcal O_X(D_{x, n} + (i+1)x)) \\
&= 2(\operatorname{genus}(X)-1) - n \deg(L) + \big( (i+1) - (a_{x, n}+1)\big)[k(x):k] \\
&\qquad +
\sum_{y \in \Sigma} (a_{y, n}+1)[k(y):k] \\
&\leqslant 2(\operatorname{genus}(X)-1) - n \deg(L) + \sum_{y \in \Sigma} (a_{y, n}+1)[k(y):k] < 0,
\end{split}\]
one has the assertion as before.
\end{proof}
By Lemma~\ref{lem:quotient:one:dim}, for each $x \in \Sigma$, there are
\[
s_{x,0}^{(\ell)}, \ldots, s_{x, a_{x,n}}^{(\ell)} \in H^0(X, nL \otimes \mathcal O_X(-D_{x, n})),\quad \ell\in\{1,\ldots,[k(x):k]\}
\]
such that the classes of $s_{x,0}^{(\ell)}, \ldots, s_{x,a_{x,n}}^{(\ell)}$ form a basis of
\[
H^0(X, nL \otimes \mathcal O_X(-D_{x, n})) / H^0(X, nL \otimes \mathcal O_X(-D_n))
\]
and
\[
s_{x, i}^{(\ell)} \in H^0(X, nL \otimes \mathcal O_X(-D_{x, n} -ix)) \setminus H^0(X, nL \otimes\mathcal O_X(-D_{x,n}-(i+1)x))
\]
whose classes form a basis of
\[H^0(X, nL \otimes \mathcal O_X(-D_{x, n} -ix)) / H^0(X, nL \otimes\mathcal O_X(-D_{x,n}-(i+1)x))\]for $i=0, \ldots, a_{x,n}$. Moreover we choose a basis $\{ t_{1}, \ldots, t_{e_n} \}$ of $H^0(X, nL \otimes \mathcal O_X(-D_n))$. Then, by Lemma~\ref{lemm:sum:subspace},
\[
\Delta_n := \{ t_1, \ldots, t_{e_n} \} \cup \bigcup_{x \in \Sigma}
\big\{ s_{x,0}^{(\ell)}, \ldots, s_{x, a_{x,n}}^{(\ell)}\,:\,\ell\in\{1,\ldots,[k(x):k]\} \big\}
\]
forms a basis of $H^0(X, nL)$.
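In particular, since $\Delta_n$ is a basis of $H^0(X, nL)$, counting its elements yields
\[
\dim_k H^0(X, nL) = e_n + \sum_{x \in \Sigma} (a_{x,n}+1)[k(x):k].
\]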
\begin{lemm}\label{lemma:orthogonal:basis:nL}
\begin{enumerate}[label=\rm(\arabic*)]
\item The equality \[\| s_{x,i}^{(\ell)} \|_{nh} =
\exp(-n\varphi_x^*(i/n))\] holds for $x \in \Sigma$, $\ell\in\{1,\ldots,[k(x):k]\}$ and $i \in \{ 0, \ldots, a_{x, n}\}$.
Moreover $\|t_j \|_{nh}= 1$ for all $j\in\{1, \ldots, e_n\}$.
\item
The basis $\Delta_n$ of $H^0(X, nL)$ is orthogonal with respect to $\|\ndot\|_{nh}$.
\end{enumerate}
\end{lemm}
\begin{proof}
First of all, note that, for $s \in H^0(X, nL) \setminus \{ 0 \}$ and $\xi \in X^{\mathrm{an}}$,
\[
- \ln |s|_{nh}(\xi)
= \begin{cases}
t(\xi) \operatorname{ord}_x(s) \geq 0 & \text{if $\xi \in [\eta_0, x_0]$ and $x \not\in \Sigma$}, \\
n\big( t(\xi)(\operatorname{ord}_{x}(s)/n) + \varphi_x(t(\xi))\big) & \text{if $\xi \in [\eta_0, x_0]$ and $x \in \Sigma$},
\end{cases}
\]
so that
\begin{equation}\label{leqn:emma:orthogonal:basis:nL:01}
\| s \|_{nh} = \max \left\{ 1, \ \max_{x \in \Sigma} \{ \exp(-n \varphi_x^*(\operatorname{ord}_x(s)/n)) \}\right\}.
\end{equation}
(1) The assertion follows from \eqref{leqn:emma:orthogonal:basis:nL:01} because $\varphi_x^{*}(\lambda) = 0$ if $\lambda \geqslant -a_x$.
\medskip
(2) Fix $s \in H^0(X, nL) \setminus \{ 0 \}$. We set
\[
s = b_1 t_1 + \cdots + b_{e_n }t_{e_n} + \sum_{x \in \Sigma} \sum_{i=0}^{a_{x, n}}\sum_{\ell=1}^{[k(x):k]} c_{x, i}^{(\ell)} s_{x, i}^{(\ell)}.
\]
If $s \in H^0(X, nL \otimes \mathcal O_X(-D_n))$, then $c_{x, i}^{(\ell)} = 0$ for all $x, i$ and $\ell$.
Thus
\[
1 = \max_{j\in\{1, \ldots, e_n\}} \{ |b_j|\cdot\| t_j \|_{nh} \} = \|s\|_{nh}.
\]
Next we assume that $s \not\in H^0(X, nL \otimes \mathcal O_X(-D_n))$. If we set
\[T = \{ x \in \Sigma \,:\, \operatorname{ord}_x(s) \leqslant a_{x, n} \},\] then
$T \not= \emptyset$ and, for $x \in \Sigma$ and $\ell\in\{1,\ldots,[k(x):k]\}$,
\[
\begin{cases}
c_{x,0}^{(\ell)} = \cdots = c_{x,a_{x,n}}^{(\ell)} = 0 & \text{if $x \not\in T$}, \\
c_{x,0}^{(\ell)} = \cdots = c_{x, \operatorname{ord}_x(s) - 1}^{(\ell)} = 0,\quad
(c_{x, \operatorname{ord}_x(s)}^{(\ell)})_{\ell=1}^{[k(x):k]} \not= (0,\ldots,0) & \text{if $x \in T$}.
\end{cases}
\]
Therefore, by \eqref{leqn:emma:orthogonal:basis:nL:01},
\begin{multline*}
\max \left\{ \max_{j=1, \ldots, e_n }\{ |b_j|\cdot \|t_j \|_{nh} \} , \max_{\substack{x \in \Sigma,\ \ell,\\i=0, \ldots, a_{x, n}}} \{ |c_{x, i}^{(\ell)}|\cdot \| s_{x, i}^{(\ell)} \|_{nh} \} \right\}
= \max_{x\in T,\,\ell} \{ \| s_{x, \operatorname{ord}_x(s) }^{(\ell)} \|_{nh} \} \\ = \max_{x\in \Sigma,\,\ell} \{ \| s_{x, \operatorname{ord}_x(s) }^{(\ell)} \|_{nh} \} = \max_{x\in \Sigma} \{ \exp(-n \varphi_x^*(\operatorname{ord}_x(s)/n)) \} = \| s \|_{nh},
\end{multline*}
as required.
\end{proof}
Let us begin the proof of Theorem~\ref{thm:Hilbert:Samuel:semipositive} under Assumption~\ref{assumption:thm:Hilbert:Samuel:semipositive}.
By Lemma~\ref{lemma:orthogonal:basis:nL} together with Definition \ref{Def: pairing of Green functions} and Proposition~\ref{theorem:integral:Legendre:trans},
\begin{multline*}
\lim_{n\to\infty}\frac{\widehat{\deg}\left( H^0(X, nL), \|\ndot\|_{nh} \right)}{n^2/2} = 2 \sum_{x \in \Sigma} \lim_{n\to\infty}[k(x):k] \sum_{i=0}^{a_{x,n}} \frac{1}{n} \varphi_x^*(i/n) \\
= 2\sum_{x \in \Sigma}[k(x):k] \int_{0}^{-a_x} \varphi_x^*(\lambda) \mathrm{d}\lambda = -\sum_{x \in \Sigma} [k(x):k]\int_{0}^{\infty} (\varphi'_x)^2 \mathrm{d}t = (\overline D\cdot\overline D),
\end{multline*}
as required.
\begin{proof}[Proof of Theorem~\ref{thm:Hilbert:Samuel:semipositive} without additional assumptions]
First of all, note that $\Sigma$ is a countable set (cf. Proposition \ref{Pro: constant except countable}).
\medskip
{\bf Step~1}: (the case where $D$ is a Cartier divisor, $\Sigma$ is finite and $\varphi_g'(\eta_0) + \deg(D) > 0$).
By the previous observation, \[\widehat{\mathrm{vol}}_{\chi}(D, g - g(\eta_0)) = \big((D, g-g(\eta_0))\cdot(D, g-g(\eta_0))\big).\] On the other hand, by Proposition \ref{Pro:volume chi translation}, one has
\[\widehat{\mathrm{vol}}_{\chi}(D, g) = \widehat{\mathrm{vol}}_{\chi}(D, g - g(\eta_0)) + 2 \deg(D) g(\eta_0).\]
Moreover, by the bilinearity of the arithmetic intersection pairing, one has \[\big(\overline D\cdot\overline D\big) = \big((D, g-g(\eta_0))\cdot (D, g-g(\eta_0))\big) + 2 \deg(D) g(\eta_0).\]
Thus the assertion follows.
\medskip
{\bf Step~2}: (the case where $D$ is a Cartier divisor and $\Sigma$ is finite).
For $0 < \varepsilon < 1$, we set $g_{\varepsilon} = g_D^{\mathrm{can}} + \varepsilon \varphi_g$. If $\varphi_g'(\eta_0) = 0$,
then $\Sigma = \emptyset$, so that the assertion is obvious. Thus we may assume that $\varphi_g'(\eta_0) < 0$.
As $\varphi_g'(\eta_0) + \deg(D) \geqslant 0$, we have $\varepsilon \varphi_g'(\eta_0) + \deg(D) > 0$.
Therefore, by Step~1, \[\widehat{\mathrm{vol}}_{\chi}(D, g_{\varepsilon}) = \big((D, g_{\varepsilon})\cdot(D, g_{\varepsilon})\big).\] Thus the assertion follows by Proposition~\ref{Pro: continuity}.
\medskip
{\bf Step~3}: (the case where $D$ is a Cartier divisor and $\Sigma$ is infinite). We write $\Sigma$ in the form of a sequence $\{ x_1, \ldots, x_n, \ldots \}$. For any $n\in\mathbb Z_{\geqslant 1}$, let $g_n$ be the Green function defined as follows:
\[\forall\,\xi\in X^{\mathrm{an}},\quad
g_n(\xi) = g_D (\xi)+ \begin{cases}
\varphi_g(\xi) & \text{if $\xi\in\bigcup_{i=1}^n[\eta_0,x_{i,0}]$},\\
g(\eta_0) & \text{otherwise}.
\end{cases}
\]
Note that
\[\lim_{n\to\infty}\sup_{\xi\in X^{\mathrm{an}}} | \varphi_{g_n}(\xi) - \varphi_{g}(\xi)| = 0.\] Indeed, as $\varphi_g$ is
continuous at $\eta_0$, for any $\varepsilon > 0$, there is an open set $U$ of $X^{\mathrm{an}}$ such that
$\eta_0 \in U$ and $| \varphi_g(\xi) - \varphi_g(\eta_0) | \leqslant \varepsilon$ for any $\xi \in U$.
Since $\eta_0 \in U$, one can find $N$ such that $[\eta_0, x_{n,0}] \subseteq U$ for all $n \geqslant N$.
Then, for $n \geqslant N$,
\[
|\varphi_{g}(\xi) - \varphi_{g_n}(\xi)| \begin{cases}
\leqslant \varepsilon & \text{if $\xi \in [\eta_0, x_{i,0}]$ for some $i > n$},\\
= 0 & \text{otherwise},
\end{cases}
\]
as required. Thus, by (2) in Proposition~\ref{prop:formula:avol:g:g:prime},
the assertion is a consequence of Step~2.
\medskip
{\bf Step~4}: (the case where $D$ is a $\mathbb{Q}$-Cartier divisor).
Choose a positive integer $a$ such that $aD$ is a Cartier divisor. Then, by Steps~2 and 3,
\[\widehat{\mathrm{vol}}_{\chi}(a\overline D)=(a\overline D\cdot a\overline D)=a^2(\overline D\cdot\overline D).\] By Corollary \ref{Cor: linearility of vol chi},
one has $\widehat{\mathrm{vol}}_{\chi}(a\overline D)=a^2\widehat{\mathrm{vol}}_{\chi}(\overline D)$. Hence the equality \[\widehat{\mathrm{vol}}_{\chi}(\overline D)=(\overline D\cdot\overline D)\] holds.
\medskip
{\bf Step~5}: (general case).
By our assumption, there are adelic $\mathbb{Q}$-Cartier divisors $(D_1, g_1), \ldots, (D_r, g_r)$ and $a_1, \ldots, a_r \in \mathbb R_{>0}$ such that
$D_1, \ldots, D_r$ are semiample, $g_1, \ldots, g_r$ are plurisubharmonic, and $(D, g) = a_1(D_1, g_1) + \cdots + a_r(D_r, g_r)$.
We choose sequences $\{ a_{1, n} \}_{n=1}^{\infty}, \ldots, \{a_{r, n}\}_{n=1}^{\infty}$ of positive rational numbers such that
$\lim_{n\to\infty} a_{i, n} = a_i$ for $i=1, \ldots, r$.
We set $(D_n, g_n) = a_{1, n} (D_1, g_1) + \ldots + a_{r, n}(D_r, g_r)$. Then we may assume that $\deg(D_n) > 0$.
By Step~4, one has $\widehat{\mathrm{vol}}_{\chi}(D_n, g_n) = \big((D_n, g_n)\cdot(D_n, g_n)\big)$.
On the other hand, by Proposition~\ref{Pro: continuity},
$\widehat{\mathrm{vol}}_{\chi}(D, g) = \lim_{n\to\infty} \widehat{\mathrm{vol}}_{\chi}(D_n, g_n)$.
Moreover, \[\big((D, g)\cdot(D, g)\big) = \lim_{n\to\infty} \big((D_n, g_n)\cdot (D_n, g_n)\big).\]
Thus the assertion follows.
\end{proof}
\begin{rema}
Let $\overline D_1=(D_1,g_1)$ and $\overline D_2=(D_2,g_2)$ be adelic $\mathbb R$-divisors such that $\deg(D_1)>0$ and $\deg(D_2)>0$. Let $\overline D=(D_1+D_2,g_1+g_2)$. If $g_1$ and $g_2$ are plurisubharmonic, then Theorems \ref{thm:super:additive:vol:chi:deg} and \ref{thm:Hilbert:Samuel:semipositive} lead to the following inequality
\begin{equation}\label{Equ: Hodge index inequality}\frac{(\overline D\cdot\overline D)}{\deg(D)}\geqslant\frac{(\overline D_1\cdot\overline D_1)}{\deg(D_1)}+\frac{(\overline D_2\cdot\overline D_2)}{\deg(D_2)}.\end{equation}
This inequality actually holds without the plurisubharmonicity condition (namely, it suffices that $g_1$ and $g_2$ are pairable). In fact, by \eqref{Equ: coupling Di} one has
\[\frac{(\overline D_i\cdot\overline D_i)}{\deg(D_i)}=2g_i(\eta_0)-\sum_{x\in X^{(1)}}\frac{[k(x):k]}{\deg(D_i)}\int_0^{+\infty}\varphi_{g_i\circ\xi_x}'(t)^2\,\mathrm{d}t\]
for $i\in\{1,2\}$, and
\[\begin{split}&\frac{(\overline D\cdot\overline D)}{\deg(D)}=2(g_1(\eta_0)+g_2(\eta_0))\\
&\qquad-\sum_{x\in X^{(1)}}\frac{[k(x):k]}{\deg(D_1)+\deg(D_2)}\int_0^{+\infty}(\varphi_{g_1\circ\xi_x}'(t)+\varphi_{g_2\circ\xi_x}'(t))^2\,\mathrm{d}t,
\end{split}\]
which leads to
\[\begin{split}&\quad\;(\deg(D_1)+\deg(D_2))\bigg(\frac{(\overline D\cdot\overline D)}{\deg(D)}-\frac{(\overline D_1\cdot\overline D_1)}{\deg(D_1)}-\frac{(\overline D_2\cdot\overline D_2)}{\deg(D_2)}\bigg)\\
&=\sum_{x\in X^{(1)}}[k(x):k]\bigg(\frac{\deg(D_2)}{\deg(D_1)}\int_0^{+\infty}\varphi_{g_1\circ\xi_x}'(t)^2\,\mathrm{d}t+\frac{\deg(D_1)}{\deg(D_2)}\int_0^{+\infty}\varphi_{g_2\circ\xi_x}'(t)^2\,\mathrm{d}t\\
&\qquad\quad-2\int_{0}^{+\infty}\varphi_{g_1\circ\xi_x}'(t)\varphi_{g_2\circ\xi_x}'(t)\,\mathrm{d}t\bigg)\geqslant 0,
\end{split}
\]by using Cauchy-Schwarz inequality and the inequality of arithmetic and geometric means.
The inequality \eqref{Equ: Hodge index inequality} leads to
\[2(\overline D_1\cdot\overline D_2)\geqslant \frac{\deg(D_2)}{\deg(D_1)}(\overline D_1\cdot\overline D_1)+\frac{\deg(D_1)}{\deg(D_2)}(\overline D_2\cdot\overline D_2).\]
In the case where $(\overline D_1\cdot\overline D_1)$ and $(\overline D_2\cdot\overline D_2)$ are non-negative, by the inequality of arithmetic and geometric means, we obtain that
\[(\overline D_1\cdot\overline D_2)\geqslant \sqrt{(\overline D_1\cdot\overline D_1)(\overline D_2\cdot\overline D_2)},\]
where the equality holds if and only if $\overline D_1$ and $\overline D_2$ are proportional up to $\mathbb R$-linear equivalence.
This could be considered as an analogue of the arithmetic Hodge index inequality of Faltings \cite[Theorem 4]{MR740897} and Hriljac \cite[Theorem 3.4]{MR778087}, see also \cite[Theorem 7.1]{MR1189866} and \cite[\S5.5]{MR1681810}.
\end{rema}
\bibliography{intersection}
\bibliographystyle{plain}
\end{document} | 137,395 |
\begin{document}
\begin{frontmatter}
\title{Simultaneous approximation terms and functional accuracy for diffusion problems discretized with multidimensional summation-by-parts operators}
\author{Zelalem Arega Worku \corref{cor1}}
\cortext[cor1]{Corresponding author: }
\ead{[email protected]}
\author{David W. Zingg}
\ead{[email protected]}
\address{Institute for Aerospace Studies, University of Toronto, Toronto, Ontario, M3H 5T6, Canada}
\addcontentsline{toc}{section}{Abstract}
\begin{abstract}
Several types of simultaneous approximation term (SAT) for diffusion problems discretized with \blue{diagonal-norm} multidimensional summation-by-parts (SBP) operators are analyzed based on a common framework. Conditions under which the SBP-SAT discretizations are consistent, conservative, adjoint consistent, and energy stable are presented. For SATs leading to primal and adjoint consistent discretizations, the error in output functionals is shown to be of order $h^{2p}$ when a degree $p$ multidimensional SBP operator is used to discretize the spatial derivatives. SAT penalty coefficients corresponding to various discontinuous Galerkin fluxes developed for elliptic partial differential equations are identified. We demonstrate that the original method of Bassi and Rebay, the modified method of Bassi and Rebay, and the symmetric interior penalty method are equivalent when implemented with SBP diagonal-E operators that have diagonal norm matrix, \eg, the Legendre-Gauss-Lobatto SBP operator in one space dimension. Similarly, the local discontinuous Galerkin and the compact discontinuous Galerkin schemes are equivalent for this family of operators. The analysis remains valid on curvilinear grids if a degree $\le p+1$ bijective polynomial mapping from the reference to physical elements is used. Numerical experiments with the two-dimensional Poisson problem support the theoretical results.
\end{abstract}
\begin{keyword}
Simultaneous approximation term, Summation-by-parts, Functional superconvergence, Adjoint consistency, Unstructured grid, Curvilinear coordinate
\end{keyword}
\end{frontmatter}
\section{Introduction}
High-order methods can provide superior solution accuracy for a given computational cost. Furthermore, when used with unstructured and discontinuous elements, they enable efficient $ hp $-adaptation and high code parallelization while still being consistent, locally conservative, and stable for a wide range of fluid flow problems. Many of these powerful features can be attributed to the solution discontinuity between adjacent elements. The manner in which elements are coupled affects most essential properties of discretizations, such as accuracy, consistency, conservation, stability, adjoint consistency, functional convergence, conditioning, stiffness, sparsity, symmetry, and so on. Therefore, the coupling procedure at interfaces between adjacent elements is a critical aspect of discontinuous high-order methods. In this paper, we analyze the numerical properties of discretizations arising from the use of one such coupling procedure, simultaneous approximation terms (SATs) \cite{carpenter1994time}, for diffusion problems.
Discontinuous high-order methods developed in the past few decades include summation-by-parts (SBP) methods coupled with SATs, discontinuous Galerkin (DG), and flux reconstruction (FR) methods. In the DG and FR methods, element coupling and boundary conditions are enforced via numerical fluxes. A unified framework of DG fluxes for elliptic problems is analyzed by Arnold \etal \cite{arnold2002unified}, \blue{a review of the SBP-SAT method is presented in \cite{fernandez2014review,svard2014review}, and the connections between SBP-SAT, DG, and FR methods can be found, for example, in \cite{gassner2013skew,ranocha2016summation,chan2018discretely,montoya2021unifying}}. Motivated by developments in the DG method, Carpenter, Nordstr\"{o}m, and Gottlieb \cite{carpenter1999stable} introduced the Carpenter-Nordstr\"{o}m-Gottlieb (CNG) SAT to solve the multi-domain problem for high-order finite difference methods \cite{carpenter2007revisiting}. In later works \cite{carpenter2010revisiting, carpenter2007revisiting}, they showed that SATs are closely related to DG fluxes and introduced the Baumann-Oden (BO)\cite{baumann1999discontinuous} and local discontinuous Galerkin (LDG) \cite{cockburn1998local} type SATs for one-dimensional classical SBP operators. Although these SATs are consistent, conservative, and energy stable, not all of them possess other desired properties such as symmetry and adjoint consistency. Hicken and Zingg \cite{hicken2011superconvergent} presented conditions that SATs must satisfy for SBP-SAT discretizations to be adjoint consistent. Furthermore, they showed that, under mild assumptions, linear functionals superconverge for adjoint consistent discretizations. Adjoint consistency and functional superconvergence properties are further studied in \cite{berg2012superconvergent,hicken2014dual,hicken2012output,eriksson2018dual,eriksson2018finite,nikkar2019dual,nordstrom2017relation,ghasemi2020conservation}. Recently, Craig Penner and Zingg \cite{penner2020superconvergent} showed that functional superconvergence is retained in curvilinear coordinates for adjoint consistent discretizations of hyperbolic PDEs with generalized SBP operators \cite{fernandez2014generalized}.
Multidimensional SBP operators were introduced by Hicken, Del Rey Fern{\'a}ndez, and Zingg \cite{hicken2016multidimensional}. The SBP operators constructed in \cite{hicken2016multidimensional} are classified as SBP-$ \Gamma $ operators -- a family of operators that have $ {p+d-1 \choose d-1} $ volume nodes on each facet, where $ d $ is the spatial dimension of the problem. Later, the SBP-$ \Omega $ \cite{fernandez2018simultaneous} and SBP diagonal-E\footnote{Abbreviated as SBP-E in all figures and tables.} \cite{chen2017entropy} operator families were introduced. SBP-$ \Omega $ operators have volume nodes strictly in the interior domain of the element, while the SBP diagonal-E operators are characterized by two features: facet nodes that are collocated with volume nodes and diagonal surface integral matrices. A broader classification of the operators that is based on the dimensions spanned by the volume to facet node extrapolation matrices\footnote{Also known as the interpolation/extrapolation matrix.}, $ \R $, categorizes the SBP-$ \Omega $, SBP-$ \Gamma $, and SBP diagonal-E operators under the $ \R^{d} $, $ \R^{d-1} $, and $ \R^{0} $ operator families, respectively, where the superscript on $ \R $ indicates the dimensions spanned by the extrapolation matrices \cite{marchildon2020optimization}. For a degree $ p $ multidimensional SBP operator that has a diagonal norm matrix\footnote{The norm matrix is known as the mass matrix in the DG literature.}, the diagonal entries of the norm matrix and the corresponding volume nodes define a degree $ 2p-1 $ quadrature rule, and this connection simplifies the construction of multidimensional SBP operators as quadrature rules are readily available in the literature \cite{fernandez2018simultaneous}. The analysis presented in this paper is restricted to multidimensional SBP operators that have a diagonal norm matrix.
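For concreteness, the facet-node count quoted above evaluates in two and three space dimensions to
\[
\binom{p+d-1}{d-1}\bigg|_{d=2}=p+1, \qquad \binom{p+d-1}{d-1}\bigg|_{d=3}=\frac{(p+1)(p+2)}{2},
\]
so, for example, a degree $ p $ SBP-$ \Gamma $ operator on triangles has $ p+1 $ nodes on each facet.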
SATs for hyperbolic problems discretized with SBP-$ \Gamma $ and SBP-$ \Omega $ operators were studied in \cite{hicken2016multidimensional,fernandez2018simultaneous}. A framework to implement SATs with multidimensional SBP operators for second-order partial differential equations (PDEs) was subsequently proposed by Yan, Crean, and Hicken \cite{yan2018interior}. The framework presented in \cite{yan2018interior} is flexible enough to construct compact\footnote{If SATs couple only immediate neighbor elements, they are referred to as compact stencil SATs; otherwise, they are referred to as wide or extended stencil SATs.} stencil SATs that lead to consistent, conservative, adjoint consistent, and energy stable SBP-SAT discretizations. Furthermore, it was shown in \cite{yan2018interior,yan2020immersed} that the modified method of Bassi and Rebay (BR2) \cite{bassi1997highbr2}, the symmetric interior penalty (SIPG) \cite{douglas1976interior}, and the compact discontinuous Galerkin (CDG) \cite{peraire2008compact} methods fall under this framework. Numerical properties of discretizations of the two-dimensional heat equation with SBP-$ \Gamma $ and SBP-$ \Omega $ operators coupled with the BR2 and SIPG SATs were also investigated in \cite{yan2018interior}. \red{For tensor-product SBP discretizations in multiple dimensions, wide stencil DG fluxes, such as the LDG method, are widely used for coupling of viscous terms \cite{carpenter2014entropy,parsani2015entropy,gassner2018br1}. However, many numerical properties of discretizations resulting from the use of wide stencil SATs and multidimensional SBP operators have not been analyzed so far. In light of this, we study properties of compact and wide stencil SATs under a general SAT framework for multidimensional SBP operators.}
The three main objectives of the present work are: (1) to extend the framework in \cite{yan2018interior} to allow construction of wide stencil SATs and study their numerical properties, (2) to demonstrate that when diffusion problems are discretized with degree $ p $ multidimensional SBP operators in a primal and adjoint consistent manner, the error in output functionals is of order $ h^{2p} $, and (3) to show the equivalence of different types of DG-based SATs when implemented with SBP diagonal-E operators that have a diagonal norm matrix. We also specify SAT coefficients that correspond to the consistent DG fluxes in \cite{arnold2002unified,peraire2008compact} and provide stability analysis for the SATs that are not studied in \cite{yan2018interior}. All results are presented in two space dimensions; however, generalization to three space dimensions is straightforward.
The paper is organized as follows: In \cref{sec:notation}, we introduce our notation and present important definitions. After describing the model problem in \cref{sec:model problem}, the SBP-SAT discretization and the generic SAT framework are provided in \cref{sec:SBP-SAT discretization}. We analyze properties of SBP-SAT discretizations in \cref{sec:Properties of SBP-SAT discretization} and present SATs corresponding to popular DG methods in \cref{sec:Existing and DG SATs}. In \cref{sec:Practial issues}, we demonstrate the equivalence of various types of SATs when implemented with the diagonal-norm $ \R^{0} $ SBP family and study the sparsity of system matrices arising from SBP-SAT discretizations. In \cref{sec:Numerical Results}, we investigate numerical properties of various SBP-SAT discretizations of the steady version of the model problem. Finally, we present concluding remarks in \cref{sec:conclusions}.
\section{Notation and definitions}
\label{sec:notation}
We closely follow the notation in \cite{hicken2016multidimensional, yan2018interior, crean2018entropy}. A $d$-dimensional compact domain and its boundary are denoted by $\Omega \subset \mathbb{R}^d$ and $\partial\Omega$, respectively. The domain is tessellated into $n_e$ non-overlapping elements, $ {\mathcal T}_h \equiv \{\{ \Omega_k\}_{k=1}^{n_e}: \Omega=\cup_{k=1}^{n_e} {\Omega}_k\}$. The boundaries of each element will be referred to as facets or interfaces, and we denote their union by $ \Gamma_k \equiv \partial\Omega_k $. A reference element, $\hat{\Omega}$, and its boundary, $\hat{\Gamma}$, are used to construct SBP operators which are then mapped to each physical element. The reference triangle is a right angle triangle with vertices $ \hat{v}_1 =(-1,-1) $, $ \hat{v}_2 = (1, -1) $, and $ \hat{v}_3 = (-1,1) $ and facets $ f_1=\overrightarrow{\hat{v}_2 \hat{v}_3} $, $ f_2=\overrightarrow{\hat{v}_3 \hat{v}_1} $, and $ f_3=\overrightarrow{\hat{v}_1 \hat{v}_2} $. The boundaries $\partial\Omega$, $\Gamma_k$, and $\hat{\Gamma}$ are assumed to be piecewise smooth. The set of all interior interfaces is denoted by $\Gamma ^I \equiv \{\Gamma_k \cap \Gamma_v : k,v=1,\dots,n_e, k\neq v \}$. The set of facets of $ \Omega_k $ that are also interior facets is denoted by $ \Gamma^I_k \equiv \Gamma^I\cap\Gamma_k $, and $ \Gamma ^B \equiv \{\partial\Omega \cap \Gamma_k : k = 1,\dots, n_e\}$ delineates the set of all boundary facets. Finally, the set containing all facets is denoted by $ \Gamma \equiv \Gamma^I \cup \Gamma^B$. The set of $n_p$ volume nodes in element $ \Omega_k $ is represented by $ S_k=\{(x_i,y_i)\}_{i=1}^{n_p} $, while the set of $n_f$ nodes on facet $ \gamma \in {\Gamma_k}$ is denoted by $S_\gamma=\{(x_i,y_i)\}_{i=1}^{n_f} $. Similarly, we represent the set of volume nodes in the reference element, $ \hat{\Omega} $, and facet nodes on $ \gamma \in \hat{\Gamma} $ by $ \hat{S}=\{(\xi_i,\eta_i)\}_{i=1}^{n_p} $, and $\hat{S}_\gamma=\{(\xi_i,\eta_i)\}_{i=1}^{n_f} $, respectively. Operators associated with the reference element bear a hat, $ (\hat\cdot) $.
Scalar functions are written in uppercase script type, \eg, $\fnc{U}_k \in \cont{\infty}$, and vector-valued functions of dimension $ d $ are represented by boldface uppercase script letters, \eg, $\vecfnc{W}_k \in \vecLtwo{d}$. The space of polynomials of total degree $ p $ is denoted by $\polyref{p} $, and $ n_p^* = {p+d \choose d} $ is the cardinality of the polynomial space. The restrictions of functions to grid points are denoted by bold letters, \eg, $ \uk \in \IR{n_p}$ is the evaluation of $ \fnc{U}_k $ at grid points $ S_k $, while vectors resulting from numerical approximations have subscript $ h $, \eg, $ \uhk \in \IR{n_p}$. When dealing with error estimates, we define $h\equiv \max_{a, b \in S_k} \norm{a - b}_2$ as the diameter of an element. Matrices are denoted by sans-serif uppercase letters, \eg, $\V \in \IRtwo{n_p}{n_p^*}$; $ \bm{1} $ denotes a vector consisting of all ones, $ \bm {0} $ denotes a vector or matrix consisting of all zeros. The sizes of $ \bm{1} $ and $ \bm {0} $ should be clear from context. Finally, $ \I_{n} $ represents the identity matrix of size $ n \times n $ unless specified otherwise.
The definition of multidimensional SBP operators first appeared in \cite{hicken2016multidimensional}, and is presented below on the reference element.
\begin{definition}[Two-dimensional SBP operator \cite{hicken2016multidimensional}] \label{def:sbp}
The matrix $\Dxi \in \IRtwo{n_p}{n_p}$ is a degree $ p $ SBP operator approximating the first derivative $ \pdv{\xi} $ on the set of nodes $ \hat{S}=\{(\xi_i,\eta_i)\}_{i=1}^{n_p} $ if
\begin{enumerate}
\item $ \Dxi \bm{p} = \pdv{\fnc{P}}{\xi}$ for all $\fnc{P} \in \polyref{p} $
\item $ \Dxi=\hat{\H}^{-1} \Qxi $ where $ \hat{\H} $ is a symmetric positive definite (SPD) matrix, and
\item $ \Qxi = \Sxi + \frac{1}{2} \Exi $ where $ \Sxi = - \Sxi^T $, $ \Exi = \Exi^T $ and $ \Exi $ satisfies
\[ \bm{p}^T \Exi \bm {q} = \int_{\hat{\Gamma}} \fnc{P}\fnc{Q} \nxi \dd{\Gamma}\]
for all $ \fnc{P},\fnc{Q} \in \polyref{r} $, where $ r \ge p $, and $ \nxi $ is the $ \xi $-component of the outward pointing unit normal vector, $ \hat{\bm{n}} = [\nxi, \neta]^T $, on $ \hat{\Gamma} $.
\end{enumerate}
\end{definition}
An analogous definition applies for operators in the $ \eta $ direction. Properties $ 2 $ and $ 3 $ in \cref{def:sbp} give
\begin{equation} \label{eq:SBP property}
\begin{aligned}
\Qxi + \Qxi^T &= \Exi,
\end{aligned}
\end{equation}
which will be referred to as the SBP property. Throughout this paper, the matrix $ \hat{\H} $ is assumed to be diagonal. The set of nodes $ \hat{S} $ and the diagonal entries of $ \hat{\H} $ define a quadrature rule of at least degree $ 2p-1 $; thus, the inner product of two functions $ \fnc{P} $ and $ \fnc{Q} $ is approximated by \cite{hicken2016multidimensional, fernandez2018simultaneous}
\[ \bm{p}^T\hat{\H} \bm{q} = \int_{\hat{\Omega}}\fnc{P}\fnc{Q}\dd{\Omega} +\order{h^{2p}}.\]
Together with the fact that $ \hat{\H} $ is SPD, the above approximation can be used to define the norm
\[ \bm{u}^T\hat{\H} \bm{u} = \norm{\bm {u}}_{\hat{\H}}^2= \int_{\hat{\Omega}} \fnc{U}^2\dd{\Omega} + \order{h^{2p}}, \]
which is a degree $ 2p-1 $ approximation of the $ L^2 $ norm.
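For illustration, a minimal NumPy sketch of the discrete inner product and norm defined by $ \hat{\H} $ is given below; the quadrature weights (the diagonal of $ \hat{\H} $) are assumed to be supplied by an SBP operator library, and the function names are illustrative only.
\begin{verbatim}
import numpy as np

# `w` is assumed to hold the diagonal of H-hat (a degree 2p-1 volume
# quadrature rule); it is not constructed here.
def h_inner_product(w, p, q):
    """Discrete inner product p^T H q with H = diag(w)."""
    return np.dot(w, p * q)

def h_norm(w, u):
    """Discrete norm ||u||_H = sqrt(u^T H u)."""
    return np.sqrt(h_inner_product(w, u, u))
\end{verbatim}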
Under the assumption that a quadrature rule exists on $ \gamma \in \hat{\Gamma} $ with nodes $ \hat{S}_\gamma $, the surface integral matrix $ \Exi $ can be decomposed as \cite{fernandez2018simultaneous}
\begin{equation} \label{eq: Exi}
\Exi = \sumfhat \Rg^T \Bghat \Nxig \Rg,
\end{equation}
where $ \Bghat \in \IRtwo{n_f}{n_f} $ is a diagonal matrix whose diagonal holds the positive weights of a facet quadrature rule of degree at least $ 2p $ on $ \gamma $, $ \Nxig \in \IRtwo{n_f}{n_f} $ contains the $ \xi $ component of $ \hat{\bm{n}}_\gamma $ along its diagonal, and $ \Rg \in \IRtwo{n_f}{n_p} $ is a matrix that extrapolates the solution from the volume nodes to the facet nodes. The quadrature accuracy requirement on $ \Bghat $ is a sufficient but not necessary condition to construct SBP operators \cite{shadpey2020entropy}. In this paper, we consider SBP operators with facet quadrature based on the Legendre-Gauss (LG) rule, which is accurate to degree $ 2p+1 $. The extrapolation matrix, $ \Rg $, is exact for polynomials of degree $ p $ on the reference element. \violet{For SBP-$ \Omega $ operators}, it is constructed as \cite{fernandez2018simultaneous}
\begin{equation} \label{eq:extrapolation matrix}
\Rg=\hat{\V}_\gamma \hat{\V}_\Omega^+ = \hat{\V}_\gamma \left(\hat{\V}_\Omega^T \hat{\V}_\Omega\right)^{-1}\hat{\V}_{\Omega}^T,
\end{equation}
where $ \hat{\V}_\gamma \in \IRtwo{n_f}{n_p^*} $ and $ \hat{\V}_\Omega \in \IRtwo{n_p}{n_p^*} $ are Vandermonde matrices constructed using \blue{the orthonormalized canonical basis discussed in \cref{sec:Curvilinear Transformation}} and the set of nodes $ \hat{S}_\gamma $ and $ \hat{S} $, respectively, and $ (\cdot)^+$ represents the Moore-Penrose pseudoinverse. \violet{For SBP-$ \Gamma $ operators, $ \R_{\gamma} $ is obtained using a $\hat{\V}_\Omega $ matrix constructed using the basis evaluated at the $ p+1 $ volume nodes that lie on facet $ \gamma $ \cite{fernandez2018simultaneous}. Finally, for SBP diagonal-E operators, $ \Rg $ contains unity at each entry $ (i,j) $ if $ i+(n - 1)n_f = j $, where $ n=\{1, 2, 3\} $ is the facet number; all other entries are zero \cite{shadpey2020entropy}.}
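As an illustration of \cref{eq:extrapolation matrix}, the sketch below assembles $ \Rg $ from Vandermonde matrices; the orthonormal basis and the volume and facet node sets are assumed inputs, and the function name is illustrative only.
\begin{verbatim}
import numpy as np

def extrapolation_matrix(V_omega, V_gamma):
    """R_gamma = V_gamma * pinv(V_omega), cf. the SBP-Omega construction.

    V_omega : (n_p, n_p_star) Vandermonde matrix at the volume nodes
    V_gamma : (n_f, n_p_star) Vandermonde matrix at the facet nodes,
              built with the same basis.
    """
    return V_gamma @ np.linalg.pinv(V_omega)

# Sanity check (assuming the basis spans the constants):
# np.allclose(extrapolation_matrix(V_omega, V_gamma) @ np.ones(n_p), 1.0)
\end{verbatim}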
Some definitions that are used in DG formulations of diffusion problems will prove useful for later discussions. Following \cite{arnold2002unified}, we introduce the broken finite element spaces associated with the tessellation $ \fnc{T}_h$ of $ \Omega $. The spaces of scalar and vector functions, $ V_h $ and $ \Sigma_h $ respectively, whose restrictions to each element, $ \Omega_k $, belong to the space of polynomials are defined by
\begin{equation} \label{FEM space}
\begin{aligned}
V_h & \equiv \{\fnc{P}\in L^2(\Omega): \fnc{P}|_{\Omega_k} \in \poly{p} \;\; \forall\Omega_k \in \fnc{T}_h\},\\
\Sigma_h & \equiv \{\vecfnc{V}\in [L^2(\Omega)]^2: \vecfnc{V}|_{\Omega_k} \in \vecpoly{p}{2} \;\; \forall\Omega_k \in \fnc{T}_h\},\\
\end{aligned}
\end{equation}
and the set in which traces\footnote{Traces define the restriction of functions along the boundaries of each element; thus, functions in $ T(\Gamma) $ are double-valued on $ \Gamma^I $ and single valued on $ \Gamma^B$ \cite{arnold2002unified}. See \cite{riviere2008discontinuous} for trace theorems which affect the function spaces in which the solution and test functions are sought.} of the functions in $\fnc{T}_h$ lie is defined by
\begin{equation} \label{FEM space trace}
T(\Gamma) \equiv \Pi_{\Omega_k\in\fnc{T}_h} L^2(\Gamma_k).
\end{equation}
The jump, $ \jump{\cdot}$, and average, $ \avg{\cdot} $, operators for scalar and vector-valued functions are defined as
\begin{equation} \label{eq:Jump and Average}
\begin{aligned}
\jump{\fnc{P}} &=\fnc{P}_k\bm{n}_k + \fnc{P}_v \bm{n}_v, & \avg{\fnc{P}} &= \frac{1}{2} (\fnc{P}_k + \fnc{P}_v), & & \forall \fnc{P}\in T(\Gamma),\\
\jump{\vecfnc{V}} &=\vecfnc{V}_k\cdot\bm{n}_k + \vecfnc{V}_v \cdot\bm{n}_v, & \avg{\vecfnc{V}} &= \frac{1}{2} (\vecfnc{V}_k + \vecfnc{V}_v), & & \forall \vecfnc{V}\in [T(\Gamma)]^2.\\
\end{aligned}
\end{equation}
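At the discrete level, these operators act on element solutions extrapolated to matched facet quadrature nodes; a minimal sketch, in which the extrapolation matrices and nodal unit normals are assumed given and the facet nodes of the two abutting elements are assumed to coincide, reads
\begin{verbatim}
import numpy as np

def facet_jump_and_average(R_k, u_k, n_k, R_v, u_v, n_v):
    """Discrete jump and average of a scalar at the facet nodes.

    R_k, R_v : (n_f, n_p) extrapolation matrices of the abutting elements
    u_k, u_v : (n_p,) nodal solutions
    n_k, n_v : (n_f, 2) outward unit normals at the facet nodes
    """
    t_k, t_v = R_k @ u_k, R_v @ u_v                 # traces at facet nodes
    jump = n_k * t_k[:, None] + n_v * t_v[:, None]  # vector-valued jump
    avg = 0.5 * (t_k + t_v)
    return jump, avg
\end{verbatim}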
At the boundaries, $ \jump{\fnc{P}}=\fnc{P}_k\bm{n}_k $ and $ \avg{\vecfnc{V}}=\vecfnc{V}_k $, while $ \avg{\fnc{P}} $ and $ \jump{\vecfnc{V}} $ are left undefined \cite{arnold2002unified}. Surface integral terms that appear in the DG flux formulation\footnote{The flux formulation is obtained by transforming second-order PDEs into a system of first-order PDEs.} are converted to volume integrals via lifting operators. For vector-valued functions, the global lifting operator for interior facets, $ \fnc{L} : [L^2(\Gamma^I)]^2 \rightarrow \Sigma_h$, and the local lifting operator for interior facets, $ \fnc{L}^\gamma : [L^2(\gamma)]^2 \rightarrow \Sigma_h$, are defined by \cite{peraire2008compact}
\begin{align}
\int_{\Omega}\liftVglob{\vecfnc{V}}\cdot \vecfnc{Z} \dd{\Omega}
&= -\int_{\Gamma^I}\vecfnc{V}\cdot \avg{\vecfnc{Z}} \dd{\Gamma}
&& \forall \vecfnc{Z}\in \Sigma_h, \label{eq: lift global vector} \\
\int_{\Omega_\gamma}\liftVloc{\vecfnc{V}}\cdot \vecfnc{Z} \dd{\Omega}
&= -\int_{\gamma}\vecfnc{V}\cdot \avg{\vecfnc{Z}} \dd{\Gamma}
&& \forall \vecfnc{Z}\in \Sigma_h,\; \gamma\in\Gamma^I, \label{eq: lift local vector}
\end{align}
where $ \Omega_{\gamma} = \Gamma_k \cup \Gamma_v$. Similarly, for scalar functions, the global lifting operator, $ \fnc{S}: L^2(\Gamma^I) \rightarrow \Sigma_h$, the local lifting operator, $ \fnc{S}^\gamma: L^2(\gamma) \rightarrow \Sigma_h $, and the lifting operators at Dirichlet boundary facets, $ \fnc{S}^D : L^2(\gamma) \rightarrow \Sigma_h$, are defined by
\begin{align}
\int_{\Omega} \liftSglob{\fnc{P}} \cdot \vecfnc{Z} \dd{\Omega}
&= -\int_{\Gamma^I}\fnc{P}\jump{\vecfnc{Z}} \dd{\Gamma} && \forall \vecfnc{Z}\in \Sigma_h, \label{eq: lift global scalar}
\\
\int_{\Omega_\gamma} \liftSloc{\fnc{P}} \cdot \vecfnc{Z} \dd{\Omega}
&= -\int_{\gamma}\fnc{P}\jump{\vecfnc{Z}} \dd{\Gamma} && \forall \vecfnc{Z}\in \Sigma_h,\; \gamma\in \Gamma^I, \label{eq: lift local scalar}
\\
\int_{\Omega_\gamma}\fnc{S}^D({\fnc{P}})\cdot \vecfnc{Z} \dd{\Omega}
&= -\int_{\gamma}\fnc{P}\vecfnc{Z}\cdot\bm{n} \dd{\Gamma}
&& \forall \vecfnc{Z}\in \Sigma_h,\; \gamma\in \Gamma^D \label{eq: lift Dirichlet},
\end{align}
respectively. \violet{Note that the surface integrals on the right-hand side (RHS) of \cref{eq: lift global vector} and \cref{eq: lift global scalar} do not include boundary facets; hence, these global lifting operators differ from similar definitions, \eg, in \cite{arnold2002unified, bassi2005discontinuous}. The consequence of such definitions of the global lifting operators is that the boundary conditions are enforced using compact SATs only, \ie, extended stencil SATs are applied exclusively on interior facets. This is important for adjoint consistency of discretizations of problems with non-homogeneous Dirichlet boundary conditions, as explained in \cref{sec:Adjoint Consistency}.} Moreover, the lifting operator at Dirichlet boundaries is defined locally; however, a global lifting operator definition would give the same final SBP-SAT discretization of the PDEs we are interested in.
\section{Model problem} \label{sec:model problem}
We consider the two-dimensional diffusion equation,
\begin{equation} \label{eq:diffusion problem}
\begin{aligned}\pdv{\fnc{U}}{t}+L(\fnc{U})& =\fnc{F}\;\text{in}\;\Omega, & & \fnc{U}=\fnc{U}_{0}\;\text{at}\;t=0, & &
B_D(\fnc{U}) =\fnc{U}_{D}\;\text{on}\;\Gamma^{D}, & & B_N(\fnc{U})=\fnc{U}_{N}\;\text{on}\;\Gamma^{N},
\end{aligned}
\end{equation}
where the linear differential operators in $ \Omega $, on $ \Gamma^D $, and on $ \Gamma^N $ are given, respectively, by $ L(\fnc{U})=-\nabla\cdot\left(\lambda\nabla\fnc{U}\right) $, $ B_D(\fnc{U}) = \fnc{U} $, and $ B_N(\fnc{U})= \left(\lambda\nabla\fnc{U}\right)\cdot\bm{n} $, $ \fnc{F}\in L^2(\Omega) $ is the source term, $ \lambda= \bigl[ \begin{smallmatrix} \lambda_{xx} & \lambda_{xy} \\
\lambda_{yx} & \lambda_{yy} \end{smallmatrix} \bigr]$ is an SPD tensor with diffusivity coefficients in each combination of directions, and we assume that $ \Gamma^{D} \neq \emptyset$. \blue{The energy stability analysis presented in this work applies to SBP-SAT discretizations of the unsteady model problem given in \cref{eq:diffusion problem}.}
\blue{In order to study adjoint consistency and superconvergence of functionals, we will consider the steady version of \cref{eq:diffusion problem}, the Poisson problem}. We also consider a linear functional of the form
\begin{equation} \label{eq:linear functional}
\fnc{I}(\fnc{U})=\innerprod{\fnc{G}_\Omega}{\fnc{U}}_{\Omega}
+\innerprod{\fnc{G}_{\Gamma^{D}}}{C_D(\fnc{U})}_{\Gamma^{D}}
+\innerprod{\fnc{G}_{\Gamma^{N}}}{C_N(\fnc{U})}_{\Gamma^{N}},
\end{equation}
where $ \fnc{G}_\Omega \in L^2(\Omega)$, $ \fnc{G}_{\Gamma^D} \in L^2({\Gamma^D})$, $ \fnc{G}_{\Gamma^N} \in L^2({\Gamma^N})$, \blue{$ C_D $ and $ C_N $ are linear differential operators at the Dirichlet and Neumann boundaries, respectively}, and $\innerprod{\cdot}{\cdot}_{\Omega} $, $ \innerprod{\cdot}{\cdot}_{\Gamma^D} $, and $ \innerprod{\cdot}{\cdot}_{\Gamma^N} $ represent the $ L^2(\Omega) $, $ L^2(\Gamma^D) $, and $ L^2(\Gamma^N) $ inner products, respectively. Such a functional is said to be compatible with the steady version of \cref{eq:diffusion problem} if \cite{hartmann2007adjoint}
\begin{equation}\label{eq:compatibility condition}
\innerprod{L(\fnc{U})}{\psi}_{\Omega} + \innerprod{B_D(\fnc{U})}{C_D^*(\psi)}_{\Gamma^{D}} +
\innerprod{B_N(\fnc{U})}{C_N^*(\psi)}_{\Gamma^{N}}
=
\innerprod{\fnc{U}}{L^*(\psi)}_{\Omega} + \innerprod{C_D (\fnc{U})}{B_D^*(\psi)}_{\Gamma^{D}} +
\innerprod{C_N (\fnc{U})}{B_N^*(\psi)}_{\Gamma^{N}},
\end{equation}
where $ L^* $, $ B^*_D $, $ C^*_D $, $ B^*_N $, and $ C^*_N $ are the adjoint operators to the linear differential operators $ L $, $ B_D $, $ C_D $, $ B_N $, and $ C_N $, respectively, and \blue{$ \psi $ is a unique adjoint variable in an appropriate function space, \eg, we assume $ \fnc{U},\fnc{\psi} \in H^2 $}. A compatible linear functional satisfies the relations \cite{giles1997adjoint,hartmann2007adjoint}
\begin{equation}\label{eq:Adjoint relation}
\begin{aligned}
\fnc{I}(\fnc{U})
&=\innerprod{\fnc{U}}{\fnc{G}_\Omega}_{\Omega}
+\innerprod{C_{D}(\fnc{U})}{\fnc{G}_{\Gamma^{D}}}_{\Gamma^{D}}
+\innerprod{C_{N}(\fnc{U})}{\fnc{G}_{\Gamma^{N}}}_{\Gamma^{N}}
=\innerprod{\fnc{U}}{L^*(\psi)}_\Omega
+\innerprod{C_D(\fnc{U})}{B_D^*(\psi)}_{\Gamma^{D}}
+\innerprod{C_N(\fnc{U})}{B_N^*(\psi)}_{\Gamma^{N}}\\
&=\innerprod{L(\fnc{U})}{\psi}_\Omega
+\innerprod{B_D(\fnc{U})}{C_D^*(\psi)}_{\Gamma^D}
+\innerprod{B_N(\fnc{U})}{C_N^*(\psi)}_{\Gamma^N}
=\innerprod{\fnc{F}}{\psi}_\Omega
+\innerprod{B_D(\fnc{U})}{C_D^*(\psi)}_{\Gamma^D}
+\innerprod{B_N(\fnc{U})}{C_N^*(\psi)}_{\Gamma^N}.
\end{aligned}
\end{equation}
In the subsequent analysis, we will consider the compatible linear functional
\begin{equation} \label{eq:Functional}
\fnc{I} (\fnc{U})= \int_{\Omega}\fnc{G}\fnc{U}\dd{\Omega}
- \int_{\Gamma^D}\psi_{D} (\lambda\nabla\fnc{U})\cdot\bm{n}\dd{\Gamma}
+ \int_{\Gamma^N}\psi_{N}\fnc{U}\dd{\Gamma},
\end{equation}
where $ \fnc{G}\in L^2{(\Omega)}$, $\psi_{N}=[\bm{n}\cdot(\lambda\nabla\psi)] \in L^2{(\Gamma^N)}$, and $\psi_{D}\in L^2{(\Gamma^D)}$. The functional given in \cref{eq:Functional} is obtained by substituting $ \fnc{G}_\Omega = \fnc{G} $, $ \fnc{G}_{\Gamma^{D}} = \psi_D $, $ \fnc{G}_{\Gamma^{N}} = \psi_N $, $ C_D(\fnc{U})=-(\lambda\nabla\fnc{U})\cdot\bm{n} $, $ C_N(\fnc{U})=\fnc{U} $, $ B_D(\fnc{U}) = \fnc{U}_D $, and $ B_N(\fnc{U}) = (\lambda\nabla\fnc{U})\cdot\bm{n} = \fnc{U}_N$ in \cref{eq:Adjoint relation}, \ie,
\begin{align}
\fnc{I}(\fnc{U})&=\innerprod{\fnc{G}}{\fnc{U}}_{\Omega}
+\innerprod{\psi_{D}}{-(\lambda\nabla\fnc{U})\cdot\bm{n}}_{\Gamma^D}
+\innerprod{\psi_{N}}{\fnc{U}}_{\Gamma^N}
\label{eq:Adjoint relation 2-1}
\\
&= \innerprod{L(\fnc{U})}{\psi}_{\Omega} + \innerprod{\fnc{U}_D}{C^*_D(\psi)}_{\Gamma^D}
+\innerprod{\fnc{U}_N}{C^*_N(\psi)}_{\Gamma^N}
\label{eq:Adjoint relation 2-2}
\\
&= \innerprod{\fnc{U}}{L^*(\psi)}_{\Omega}
+\innerprod{-(\lambda\nabla\fnc{U})\cdot\bm{n}}{B^*_D(\psi)}_{\Gamma^D}
+\innerprod{\fnc{U}}{{B^*_N(\psi)}}_{\Gamma^N}.
\label{eq:Adjoint relation 2-3}
\end{align}
Following \cite{hartmann2007adjoint}, we apply integration by parts to $ \innerprod{L(\fnc{U})}{\psi}_{\Omega} $ twice and rearrange terms to find
\begin{equation} \label{eq:IBP on L(U)}
\begin{aligned}
\innerprod{L(\fnc{U})}{\psi}_{\Omega}
&-\innerprod{\fnc{U}}{(\lambda\nabla\psi)\cdot\bm{n}}_{\Gamma^D}
+\innerprod{(\lambda\nabla\fnc{U})\cdot \bm{n}}{\psi}_{\Gamma^N}
\\
&= -\innerprod{\fnc{U}}{\nabla\cdot(\lambda \nabla\psi)}_{\Omega}
-\innerprod{(\lambda\nabla\fnc{U})\cdot \bm{n}}{\psi}_{\Gamma^D}
+\innerprod{\fnc{U}}{(\lambda\nabla\psi)\cdot\bm{n}}_{\Gamma^N},
\end{aligned}
\end{equation}
where symmetry of the inner product is used assuming the problem is real-valued. Equations \cref{eq:Adjoint relation 2-2,eq:Adjoint relation 2-3,eq:IBP on L(U)} imply
\begin{equation}\label{eq:Adjoint operators}
\begin{aligned}
L^*(\psi) &= -\nabla\cdot\left(\lambda\nabla{\fnc{\psi}}\right),
&& B^*_D(\psi) = \psi,
&& B^*_N(\psi) = (\lambda\nabla\psi)\cdot\bm{n},
&& C^*_D(\psi) = -(\lambda\nabla\psi)\cdot\bm{n},
&& C^*_N(\psi) = \psi.
\end{aligned}
\end{equation}
Using the result in \cref{eq:Adjoint operators} and subtracting \cref{eq:Adjoint relation 2-3} from \cref{eq:Adjoint relation 2-1} we have
\begin{equation}
\innerprod{\fnc{G}+\nabla\cdot(\lambda\nabla\psi)}{\fnc{U}}_{\Omega}
+\innerprod{\psi-\psi_{D}}{(\lambda\nabla\fnc{U})\cdot\bm{n}}_{\Gamma^D}
+\innerprod{\psi_{N} - (\lambda\nabla\fnc{U})\cdot\bm{n}}{\fnc{U}}_{\Gamma^N} = 0.
\end{equation}
Thus, the adjoint for the model problem satisfies
\begin{equation}\label{eq:Adjoint problem}
\begin{aligned}
L^*(\psi)- \fnc{G} &= 0 \; \text{in} \; \Omega, &&
\psi = \psi_{D} \; \text{on} \; \Gamma^D, && (\lambda\nabla\psi)\cdot\bm{n} = \psi_{N} \; \text{on} \; \Gamma^N.
\end{aligned}
\end{equation}
\section{SBP-SAT discretization} \label{sec:SBP-SAT discretization}
In this section, the discretization of the model equation \cref{eq:diffusion problem} with multidimensional SBP operators is presented. Notation and definitions of operators follow \cite{yan2018interior}. The following assumption is used in the construction of SBP operators on curved elements, which is presented in \cref{sec:Curvilinear Transformation}.
\begin{assumption} \label{assu: mapping}
We assume that there exists a bijective and time-invariant polynomial mapping, $ \fnc{M}_k:{ \hat{\Omega}}\rightarrow{ {\Omega}_k} $, of degree $ p_{\rm map} \le p+1 $ for all $ \Omega_k \in \fnc{T}_h $. Furthermore, volume and facet quadrature rules with the set of nodes in the reference element exist such that \blue{diagonal-norm} SBP operators satisfying \cref{def:sbp} can be constructed on the reference element.
\end{assumption}
The extrapolation matrix is exact for constant functions in the physical element, $ \Omega_k $; in particular, $ \Rgk \bm{1} = \bm{1} $. Polynomials in $ \Omega_k $ are not necessarily polynomials in the reference element, $ \hat{\Omega} $; thus, SBP operators in the physical domain are not exact for polynomials in $ \Omega_k $. However, under \cref{assu: mapping}, the accuracy of the derivative operators in the physical elements is not compromised \cite{crean2018entropy}. We state, without proof, Theorem 9 in \cite{crean2018entropy}, which establishes the accuracy of SBP derivative operators on physical elements.
\begin{theorem}\label{thm:Accuracy of Dx}
Let \cref{assu: mapping} hold and the metric terms be computed exactly using \cref{eq:grid metrics volume} and \cref{eq:grid metrics facet}. Then for $ \bm{u}_k\in\IR{n_p} $ holding the values of $ \fnc{U}\in\contref{p+1} $ at the nodes in $ \hat{S} $, the derivative operators given by \cref{eq:Dx and Dy} are order $ p $ accurate, \ie,
\begin{equation*}
[\Dxk\bm{u}_k]_i = {\pdv{\fnc{U}}{x}} (\xi_i,\eta_i) +\order{h^{p}}, \quad \text{and} \quad [\Dyk\bm{u}_k]_i = {\pdv{\fnc{U}}{y}} (\xi_i,\eta_i) +\order{h^{p}}.
\end{equation*}
\end{theorem}
Furthermore, if the SBP operators on physical elements are constructed as described in \cref{sec:Curvilinear Transformation} and \cref{assu: mapping} holds, then $ \Dxk \bm{1} = \bm{0} $ and $ \Dyk \bm{1} = \bm{0} $, which are the conditions required to satisfy the discrete metric identity/freestream preservation condition \cite{crean2018entropy, shadpey2020entropy}.
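These conditions are inexpensive to verify for assembled physical-element operators; a minimal sketch (the derivative operators are assumed given, and the function name is illustrative only) is
\begin{verbatim}
import numpy as np

def satisfies_metric_identity(Dx, Dy, tol=1e-12):
    """Check the discrete metric identity D_x 1 = 0 and D_y 1 = 0."""
    ones = np.ones(Dx.shape[1])
    return (np.max(np.abs(Dx @ ones)) < tol and
            np.max(np.abs(Dy @ ones)) < tol)
\end{verbatim}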
The diffusivity coefficients are evaluated at the volume nodes and stored in an SPD block matrix,
\begin{equation}\label{eq:Lambda}
\Lambda = \begin{bmatrix}
\Lambda_{xx} & \Lambda_{xy}\\
\Lambda_{yx} & \Lambda_{yy}
\end{bmatrix},
\end{equation}
where each block is diagonal, \eg, $\Lambda_{xx} = \mydiag(\lambda_{xx}(x_1,y_1), \dots, \lambda_{xx}(x_{n_p}, y_{n_p}))$.
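A minimal sketch of assembling $ \Lambda_k $ from the nodal diffusivity values (the nodal arrays are assumed inputs, and the function name is illustrative only) is
\begin{verbatim}
import numpy as np

def assemble_lambda(lxx, lxy, lyx, lyy):
    """Block matrix of nodal diffusivities; each block is diagonal."""
    return np.block([[np.diag(lxx), np.diag(lxy)],
                     [np.diag(lyx), np.diag(lyy)]])
\end{verbatim}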
The second derivative is approximated by applying the first derivative twice,
\begin{equation} \label{eq:D2 1st form}
[\nabla \cdot (\lambda \nabla)]_k \approx \D^{(2)}_k = \left[\begin{array}{cc}
\Dxk & \Dyk\end{array}\right] \Lambda_k \left[\begin{array}{c}
\Dxk\\
\Dyk
\end{array}\right],
\end{equation}
and the normal derivative at facet $ \gamma $ is given by
\begin{equation} \label{eq:D_gamma k}
[\bm{n}\cdot(\lambda\nabla)]_k \approx \Dgk = \N_{\gamma k}^T \Rbargk \Lambda_k\left[\begin{array}{c}
\Dxk\\
\Dyk
\end{array}\right],
\end{equation}
where
\begin{equation*}
\begin{aligned}
\N_{\gamma k} &= \left[\begin{array}{c}
\N_{x\gamma k} \\
\N_{y\gamma k}
\end{array}\right], \quad \text{and}
& &
\Rbargk = \left[\begin{array}{cc}
\Rgk\\
& \Rgk
\end{array}\right].
\end{aligned}
\end{equation*}
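For concreteness, the following sketch assembles $ \D^{(2)}_k $ and $ \Dgk $ from these definitions; the first-derivative operators, the extrapolation matrix, the nodal facet normal components, and $ \Lambda_k $ are assumed to be available, and the function names are illustrative only.
\begin{verbatim}
import numpy as np
from scipy.linalg import block_diag

def assemble_second_derivative(Dx, Dy, Lam):
    """D2_k = [Dx Dy] Lambda_k [Dx; Dy]: the first derivative applied twice."""
    return np.hstack([Dx, Dy]) @ Lam @ np.vstack([Dx, Dy])

def assemble_normal_derivative(R_g, nx, ny, Dx, Dy, Lam):
    """D_gamma_k = N^T Rbar Lambda_k [Dx; Dy] on one facet."""
    Rbar = block_diag(R_g, R_g)                # (2 n_f, 2 n_p)
    N = np.vstack([np.diag(nx), np.diag(ny)])  # (2 n_f, n_f)
    return N.T @ Rbar @ Lam @ np.vstack([Dx, Dy])
\end{verbatim}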
Furthermore, a discrete analogue of application of integration by parts to the term $ \int_{\Omega_k}\fnc{V}\nabla\cdot(\lambda\nabla\fnc{U}) \dd{\Omega} $ yields the relation (see \cite[Proposition 1]{yan2018interior}),
\begin{equation} \label{eq:D2 identity}
\D^{(2)}_k=-\H_{k}^{-1}\M_{k}+ \H_{k}^{-1} \sum_{\gamma \subset \Gamma_k} \R_{\gamma k}^{T}\B_{\gamma}\Dgk,
\end{equation}
where $ \M_k $ is a symmetric positive semidefinite matrix given by
\begin{equation} \label{eq:M_k matrix}
\M_{k}=\left[\begin{array}{cc}
\D_{xk}^{T} & \D_{yk}^{T}\end{array}\right]\bar{\H}_k\Lambda_k\left[\begin{array}{c}
\Dxk\\
\Dyk
\end{array}\right],
\end{equation}
and
\begin{equation*}
\bar{\H}_{k}=\left[\begin{array}{cc}
\H_{k}\\
& \H_{k}
\end{array}\right].
\end{equation*}
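In an implementation, identity \cref{eq:D2 identity} provides a useful consistency check on the assembled element operators; a sketch of such a check, with all matrices assumed given and the function name illustrative only, is
\begin{verbatim}
import numpy as np

def check_d2_identity(D2, H, M, facet_data, tol=1e-10):
    """Verify D2 = -H^{-1} M + H^{-1} sum_gamma R^T B D_gamma.

    facet_data : list of (R_g, B_g, D_g) tuples, one per facet.
    """
    Hinv = np.linalg.inv(H)
    rhs = -Hinv @ M
    for R_g, B_g, D_g in facet_data:
        rhs += Hinv @ R_g.T @ B_g @ D_g
    return np.max(np.abs(D2 - rhs)) < tol
\end{verbatim}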
A further decomposition of the $ \D^{(2)}_k $ matrix can be obtained by applying the SBP property twice:
\begin{proposition}
If SBP operators on physical elements are constructed as discussed in \cref{sec:Curvilinear Transformation}, then the second derivative operator in \cref{eq:D2 1st form}, which is constructed by applying the first derivative twice, has the decomposition
\begin{equation} \label{eq:D2 identity 2}
\D_{k}^{(2)}=\H_{k}^{-1}\left(\D_{k}^{(2)}\right)^{T}\H_{k}-\H_{k}^{-1}\sum_{\gamma\subset\Gamma_{k}}\D_{\gamma k}^{T}\B_{\gamma}\R_{\gamma k}+\H_{k}^{-1}\sum_{\gamma\subset\Gamma_{k}}\R_{\gamma k}^{T}\B_{\gamma}\D_{\gamma k}.
\end{equation}
\end{proposition}
\begin{proof}
Using the SBP property in \cref{eq:SBP property}, we have
\begin{equation*}
\begin{aligned}
\D_{xk}=\H_{k}^{-1}\H_{k}\D_{xk}\H_{k}^{-1}\H_{k}&=\H_{k}^{-1}\Q_{xk}\H_{k}^{-1}\H_{k} = \H_{k}^{-1}\left(-\Q_{xk}^{T}+\E_{xk}\right)\H_{k}^{-1}\H_{k},
\end{aligned}
\end{equation*}
therefore,
\begin{equation} \label{eq:Dxk decompostion}
\D_{xk}=-\H_{k}^{-1}\D_{xk}^{T}\H_{k}+\H_{k}^{-1}\E_{xk}, \quad \text{and} \quad \D_{xk}^{T}=-\H_{k}\D_{xk}\H_{k}^{-1}+\E_{xk}\H_{k}^{-1}.
\end{equation}
Furthermore,
\begin{equation} \label{eq:Exk Dgk decomposition}
\begin{bmatrix}
\E_{xk} & \E_{yk}\end{bmatrix}\Lambda_{k}\begin{bmatrix}
\D_{xk}\\
\D_{yk}
\end{bmatrix}
= \sum_{\gamma \subset \Gamma_{k}} \Rgk^T\B_{\gamma}\N_{\gamma k}^T \bar{\R}_{\gamma k} \Lambda_k \begin{bmatrix}
\D_{xk}\\
\D_{yk}
\end{bmatrix}
= \sum_{\gamma \subset \Gamma_{k}} \Rgk^T\B_{\gamma} \Dgk.
\end{equation}
Using \cref{eq:Dxk decompostion} and the fact that $ \bar{\H}_k\Lambda_k\bar{\H}^{-1}_k = \Lambda_k$ (since $ \H_k $ and components of $ \Lambda_k $ are diagonal), we arrive at
\begin{align}
\D_k^{(2)} &= \begin{bmatrix}
\D_{xk} & \D_{yk}\end{bmatrix}\Lambda_{k}\begin{bmatrix}
\D_{xk}\\
\D_{yk}
\end{bmatrix}
= -\H_{k}^{-1}\begin{bmatrix}
\D_{xk}^{T} & \D_{yk}^{T}\end{bmatrix}\Lambda_{k}\begin{bmatrix}
\D_{xk}\\
\D_{yk}
\end{bmatrix}\H_{k}
+\H_{k}^{-1}\begin{bmatrix}
\E_{xk} & \E_{yk}\end{bmatrix}\Lambda_{k}\begin{bmatrix}
\D_{xk}\\
\D_{yk}
\end{bmatrix} \nonumber\\
&
=\H_{k}^{-1}\begin{bmatrix}
\D_{xk}^{T} & \D_{yk}^{T}\end{bmatrix}\Lambda_{k}\begin{bmatrix}
\D_{xk}^{T}\\
\D_{yk}^{T}
\end{bmatrix}\H_{k}
-\H_{k}^{-1}\begin{bmatrix}
\D_{xk}^{T} & \D_{yk}^{T}\end{bmatrix}\Lambda_{k}\begin{bmatrix}
\E_{xk}\\
\E_{yk}
\end{bmatrix}
+\H_{k}^{-1}\begin{bmatrix}
\E_{xk} & \E_{yk}\end{bmatrix}\Lambda_{k}\begin{bmatrix}
\D_{xk}\\
\D_{yk}
\end{bmatrix}.\label{eq:D2 last line}
\end{align}
Noting that $ \Lambda_{k} = \Lambda_{k}^T $, $ \E_{xk}=\E_{xk}^T $, $ \E_{yk}=\E_{yk}^T $, and substituting \cref{eq:Exk Dgk decomposition} in \cref{eq:D2 last line} yields the desired result.
\end{proof}
\begin{remark}
Identity \cref{eq:D2 identity 2} mimics applying integration by parts twice to $ \int_{\Omega}\fnc{V}\nabla\cdot\left(\lambda\nabla\fnc{U}\right) \dd{\Omega} $.
\end{remark}
The SBP-SAT semi-discretization of \cref{eq:diffusion problem} for element $ \Omega_k $ can now be written as
\begin{equation} \label{eq:SBP-SAT discretization}
\dv{\uhk}{t}=\D^{(2)}_{k}\uhk+\bm{f}_{k}-\H_{k}^{-1}\bm{s}_{k}^{I}(\uhk) -\H_{k}^{-1}\bm{s}_{k}^{B}(\uhk,\bm{u}_{\gamma k}, \bm{w}_{\gamma k}),
\end{equation}
where the interior facet SATs and the boundary SATs are given, respectively, by
\begin{equation} \label{eq:Interface SATs}
\begin{aligned}
\bm{s}_{k}^{I}(\uhk)&=\sum_{\gamma\subset\Gamma_{k}^{I}}\left[\begin{array}{cc}
\R_{\gamma k}^{T} & \D_{\gamma k}^{T}\end{array}\right]\left[\begin{array}{cc}
\T_{\gamma k}^{(1)} & \T_{\gamma k}^{(3)}
\\
\T_{\gamma k}^{(2)} & \T_{\gamma k}^{(4)}
\end{array}\right]\left[\begin{array}{c}
\R_{\gamma k}\bm{u}_{h,k}-\R_{\gamma v}\bm{u}_{h,v}
\\
\D_{\gamma k}\bm{u}_{h,k}+\D_{\gamma v}\bm{u}_{h,v}
\end{array}\right]
\\
& \quad +\sum_{\gamma\subset\Gamma_{k}^{I}}
\left\{ \sum_{\epsilon\subset\Gamma_{k}^{I}}\R_{\gamma k}^{T}\T_{\gamma\epsilon k}^{(5)}\left[\R_{\epsilon k}\bm{u}_{h,k}-\R_{\epsilon g}\bm{u}_{h,g}\right]
+\sum_{\delta\subset\Gamma_{v}^{I}}\R_{\gamma k}^{T}\T_{\gamma\delta v}^{(6)}\left[\R_{\delta v}\bm{u}_{h,v}-\R_{\delta q}\bm{u}_{h,q}\right]\right\},
\end{aligned}
\end{equation}
and
\begin{equation} \label{eq:Boundary SATs}
\begin{aligned}
\bm{s}_{k}^{B}(\uhk,\bm{u}_{\gamma k}, \bm{w}_{\gamma k}) &=\sum_{\gamma\subset\Gamma^{D}}\left[\begin{array}{cc}
\R_{\gamma k}^{T} & \D_{\gamma k}^{T}\end{array}\right]\left[\begin{array}{c}
\T_{\gamma}^{(D)}\\
-\B_{\gamma}
\end{array}\right](\Rgk\uhk -\bm{u}_{\gamma k})
+\sum_{\gamma\subset\Gamma^{N}}\R_{\gamma k}^{T}\B_{\gamma}\left(\D_{\gamma k}\bm{u}_{h,k}-\bm{w}_{\gamma k}\right),
\end{aligned}
\end{equation}
where $ \bm{u}_{\gamma k} $ is the restriction of $ \fnc{U} $ on $ S_{\gamma}$, $ \bm{w}_{\gamma k} $ is the restriction of $ \bm{n}\cdot(\lambda\nabla\fnc{U}) $ on $ S_{\gamma}$, $ \epsilon \in \{\epsilon_{1},\epsilon_{2}\} $, $ \delta \in \{\delta_{1},\delta_{2}\} $, and the matrices $\T_{\gamma k}^{(1)}, \T_{\gamma k}^{(2)}, \T_{\gamma k}^{(3)}, \T_{\gamma k}^{(4)}, \T_{\gamma \epsilon k}^{(5)}, \T_{\gamma \delta k}^{(6)} \in \IRtwo{n_f}{n_f}$ are SAT penalty/coefficient matrices. Elements and facets are labeled as shown in \cref{fig:element and facet label}. \blue{To avoid computing the gradient of the solution, $ [\begin{smallmatrix} \Dxk\bm{u}_{h,k} \\ \Dyk\bm{u}_{h,k}\end{smallmatrix}] $, multiple times when forming terms such as $ \D_{\gamma k}\bm{u}_{h,k} $ in \cref{eq:Interface SATs}, one can compute the gradient once per element and store it in a vector, as illustrated in the sketch below.}
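A minimal sketch of this reuse, with the element operators and facet data assumed given and the function name illustrative only, is
\begin{verbatim}
import numpy as np

def facet_normal_derivatives(Dx, Dy, Lam, u, facet_data):
    """Compute D_gamma_k u for all facets while forming the gradient once.

    facet_data : list of (R_g, nx, ny) per facet, with R_g the extrapolation
                 matrix and nx, ny the nodal normal components on the facet.
    """
    grad = np.concatenate([Dx @ u, Dy @ u])    # [Dx u; Dy u], formed once
    theta = Lam @ grad                         # Lambda_k [Dx; Dy] u
    n_p = u.size
    out = []
    for R_g, nx, ny in facet_data:
        flux_x = R_g @ theta[:n_p]             # extrapolate each component
        flux_y = R_g @ theta[n_p:]
        out.append(nx * flux_x + ny * flux_y)  # N^T Rbar Lambda [Dx; Dy] u
    return out
\end{verbatim}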
\begin{figure}
\begin{centering}
\includegraphics[width=0.27\columnwidth]{mesh_labels.pdf}
\caption{\label{fig:element and facet label}Element and facet labeling. For convenience common labeling is often used, \eg, $\sum_{\delta \subset \Gamma_v}(\cdot) $ implies $\sum_{\delta_{1}}(\cdot) + \sum_{\delta_{2}}(\cdot)$.}
\end{centering}
\end{figure}
The structure of the \violet{interior facet} SATs given by \cref{eq:Interface SATs} differs from the form considered in \cite{yan2018interior} by the inclusion of the last two terms, which enable the study of wide stencil SATs that couple a target element with second neighbors, \eg, BR1 and LDG type SATs. \blue{We point out that, unlike wide stencil DG fluxes, the boundary SATs do not include extended stencil terms. This facilitates the design of adjoint consistent schemes for problems with non-homogeneous Dirichlet boundary conditions; however, it bears an adverse effect on energy stability of some of the DG-based SATs. The connection between the SATs and DG fluxes as well as the stability issues due to the form of the boundary SATs are discussed in \cref{sec:Existing and DG SATs}.} All of the SATs considered in this work have $ \T_{\gamma k}^{(4)}=\bm{0}$. While it is possible to construct SATs with a nonzero $ \T_{\gamma k}^{(4)}$ coefficient, this can decrease the global accuracy of the numerical solution and increase the condition number and stiffness of the arising system matrix \cite{carpenter2010revisiting,eriksson2018finite}. \blue{Indeed, for most of the SBP-SAT discretizations studied in \cref{sec:Numerical Results}, we observed that setting $ \T_{\gamma k}^{(4)}= 1/2 \B_\gamma$ decreases the accuracy of the solution and increases the condition number of the arising system matrix by two to three orders of magnitude}. For the analyses that follow, we do not make the assumption that $ \T_{\gamma k}^{(4)}$ is zero. The assumptions we make regarding the SAT coefficients are stated below.
\begin{assumption}\label{assu:Coefficient matrices}
For any element $ \Omega_k $ and facets $ a,b\in \{\gamma,\epsilon,\delta\} $, we assume that the coefficient matrices $ \T^{(1)}_{ak} $, $\T^{(2)}_{ak}, \T^{(3)}_{ak}, \T^{(4)}_{ak}$, and $ \T^{(D)}_{a} $ are symmetric, $ \T^{(5)}_{abk} = (\T^{(5)}_{bak}) ^T $, and $ \T^{(6)}_{abk} = (\T^{(6)}_{bak}) ^T $.
\end{assumption}
Premultiplying \cref{eq:SBP-SAT discretization} by $ \bm{v}^T_k \H_k $ and employing identity \cref{eq:D2 identity}, the weak form of the SBP-SAT discretization reads
\begin{equation} \label{eq:Weak form 1 elem}
\begin{aligned}
\bm{v}_{k}^{T}\H_{k}\dv{\uhk}{t}&=-\bm{v}_{k}^{T}\M_k\uhk
+\sum_{\gamma \subset \Gamma_k} \bm{v}_{k}^{T}\R_{\gamma k}^{T}\B_{\gamma}\Dgk \uhk
+ \bm{v}_{k}^{T}\H_{k}\bm{f}_{k}
-\bm{v}_{k}^{T}\bm{s}_{k}^{I} (\uhk)
-\bm{v}_{k}^{T}\bm{s}_{k}^{B}(\uhk,\bm{u}_{\gamma k}, \bm{w}_{\gamma k}),
\end{aligned}
\end{equation}
for all $ \bm{v}_k \in \IR{n_p}$. Summing \cref{eq:Weak form 1 elem} over all elements yields
\begin{equation} \label{eq:Discretization summed over all elements}
\sum_{\Omega_k\subset {\mathcal T}_h}\bm{v}_{k}^{T}\H_{k}\dv{\uhk}{t} = R_{h}(\bm{u}_{h},\bm{v}), \quad \forall\bm{v}\in\mathbb{R}^{\Sigma n_{e}},
\end{equation}
where
\begin{align}
R_{h}(\bm{u}_{h},\bm{v})&=-\sum_{\Omega_{k}\subset\mathcal{T}_{h}}\bm{v}_{k}^{T}\M_{k}\bm{u}_{h,k}
+\sum_{\Omega_{k}\subset\mathcal{T}_{h}}\bm{v}_{k}^{T}\H_{k}\bm{f}_{k} -\sum_{\gamma\subset\Gamma^{I}}\left[\begin{array}{c}
\R_{\gamma k}\bm{v}_{k}\\
\R_{\gamma v}\bm{v}_{v}\\
\D_{\gamma k}\bm{v}_{k}\\
\D_{\gamma v}\bm{v}_{v}
\end{array}\right]^{T}\left[\begin{array}{cccc}
\T_{\gamma k}^{(1)} & -\T_{\gamma k}^{(1)} & \T_{\gamma k}^{(3)}-\B_{\gamma} & \T_{\gamma k}^{(3)}\\
-\T_{\gamma v}^{(1)} & \T_{\gamma v}^{(1)} & \T_{\gamma v}^{(3)} & \T_{\gamma v}^{(3)}-\B_{\gamma}\\
\T_{\gamma k}^{(2)} & -\T_{\gamma k}^{(2)} & \T_{\gamma k}^{(4)} & \T_{\gamma k}^{(4)}\\
-\T_{\gamma v}^{(2)} & \T_{\gamma v}^{(2)} & \T_{\gamma v}^{(4)} & \T_{\gamma v}^{(4)}
\end{array}\right]\left[\begin{array}{c}
\R_{\gamma k}\bm{u}_{h,k}\\
\R_{\gamma v}\bm{u}_{h,v}\\
\D_{\gamma k}\bm{u}_{h,k}\\
\D_{\gamma v}\bm{u}_{h,v}
\end{array}\right]
\nonumber
\\& \quad-\sum_{\gamma\subset\Gamma^{I}}\left\{\sum_{\epsilon\subset\Gamma_{k}^{I}} \left[\begin{array}{c}
\R_{\gamma k}\bm{v}_{k}\\
\R_{\gamma v}\bm{v}_{v}
\end{array}\right]^{T}\left[\begin{array}{cc}
\T_{\gamma\epsilon k}^{(5)} & -\T_{\gamma\epsilon k}^{(5)}\\
\T_{\gamma\epsilon k}^{(6)} & -\T_{\gamma\epsilon k}^{(6)}
\end{array}\right]\left[\begin{array}{c}
\R_{\epsilon k}\bm{u}_{h,k}\\
\R_{\epsilon g}\bm{u}_{h,g}
\end{array}\right]
-\sum_{\delta\subset\Gamma_{v}^{I}}\left[\begin{array}{c}
\R_{\gamma k}\bm{v}_{k}\\
\R_{\gamma v}\bm{v}_{v}
\end{array}\right]^{T}\left[\begin{array}{cc}
\T_{\gamma\delta v}^{(6)} & -\T_{\gamma\delta v}^{(6)}\\
\T_{\gamma\delta v}^{(5)} & -\T_{\gamma\delta v}^{(5)}
\end{array}\right]\left[\begin{array}{c}
\R_{\delta v}\bm{u}_{h,v}\\
\R_{\delta q}\bm{u}_{h,q}
\end{array}\right] \right\}
\nonumber
\\&
\quad
-\sum_{\gamma\subset\Gamma^{D}}\left[\begin{array}{c}
\R_{\gamma k}\bm{v}_{k}\\
\D_{\gamma k}\bm{v}_{k}
\end{array}\right]^{T}\left[\begin{array}{cc}
\T_{\gamma}^{(D)} & -\B_{\gamma}\\
-\B_{\gamma} & \bm{0}
\end{array}\right]\left[\begin{array}{c}
\R_{\gamma k}\bm{u}_{h,k}-\bm{u}_{\gamma k}\\
\D_{\gamma k}\bm{u}_{h,k}
\end{array}\right]+\sum_{\gamma\subset\Gamma^{N}}\bm{v}_{k}^{T}\R_{\gamma k}^{T}\B_{\gamma}\bm{w}_{\gamma k}.\label{eq:Residual 1st form}
\end{align}
Instead of adding the terms responsible for extending the stencil (terms containing coefficients $ \T^{(5)} $ and $ \T^{(6)} $) facet by facet, we can add them element by element. That is, we regroup facet terms associated with $ \T^{(5)} $ and $ \T^{(6)} $ by element and rewrite the residual as
\begin{equation} \label{eq:Residual 2nd form}
\begin{aligned}
R_{h}(\bm{u}_{h},\bm{v})&=-\sum_{\Omega_{k}\subset\mathcal{T}_{h}}\bm{v}_{k}^{T}\M_{k}\bm{u}_{h,k}+\sum_{\Omega_{k}\subset\mathcal{T}_{h}}\bm{v}_{k}^{T}\H_{k}\bm{f}_{k} -\sum_{\gamma \subset \Gamma^I} (\bm{v}^{\star})^T \T^{\star} \bm{u}_h^{\star}
-\sum_{\Omega_{k}\subset\mathcal{T}_{h}}\sum_{\gamma,\epsilon\subset \Gamma_k^I} (\bm{v}^{\diamond})^T \T^{\diamond} \bm{u}_h^{\diamond}\\ &
\quad -\sum_{\gamma\subset\Gamma^{D}}\left[\begin{array}{c}
\R_{\gamma k}\bm{v}_{k}\\
\D_{\gamma k}\bm{v}_{k}
\end{array}\right]^{T}\left[\begin{array}{cc}
\T_{\gamma}^{(D)} & -\B_{\gamma}\\
-\B_{\gamma} & \bm{0}
\end{array}\right]\left[\begin{array}{c}
\R_{\gamma k}\bm{u}_{h,k}-\bm{u}_{\gamma k}\\
\D_{\gamma k}\bm{u}_{h,k}
\end{array}\right] +\sum_{\gamma\subset\Gamma^{N}}\bm{v}_{k}^{T}\R_{\gamma k}^{T}\B_{\gamma}\bm{w}_{\gamma k},
\end{aligned}
\end{equation}
where $ \sum_{\gamma \subset \Gamma^I} (\bm{v}^{\star})^T \T^{\star} \bm{u}_h^{\star} $ is equal to the third term on the RHS of \cref{eq:Residual 1st form}, and
\begin{equation}
\begin{aligned}
\sum_{\gamma,\epsilon\subset \Gamma_k^I} (\bm{v}^{\diamond})^T \T^{\diamond} \bm{u}_h^{\diamond} &=
\left[\begin{array}{c}
\R_{\gamma k}\bm{v}_{k}\\
\R_{\gamma v}\bm{v}_{v}\\
\R_{\epsilon_{1}k}\bm{v}_{k}\\
\R_{\epsilon_{1}g_{1}}\bm{v}_{g_{1}}\\
\R_{\epsilon_{2}k}\bm{v}_{k}\\
\R_{\epsilon_{2}g_{2}}\bm{v}_{g_{2}}
\end{array}\right]^T
\left[\begin{array}{cccccc}
\bm{0} & \bm{0} & \T_{\gamma\epsilon_{1}k}^{(5)} & -\T_{\gamma\epsilon_{1}k}^{(5)} & \T_{\gamma\epsilon_{2}k}^{(5)} & -\T_{\gamma\epsilon_{2}k}^{(5)}\\
\bm{0} & \bm{0} & \T_{\gamma\epsilon_{1}k}^{(6)} & -\T_{\gamma\epsilon_{1}k}^{(6)} & \T_{\gamma\epsilon_{2}k}^{(6)} & -\T_{\gamma\epsilon_{2}k}^{(6)}\\
\T_{\epsilon_{1}\gamma k}^{(5)} & -\T_{\epsilon_{1}\gamma k}^{(5)} & \bm{0} & \bm{0} & \T_{\epsilon_{1}\epsilon_{2}k}^{(5)} & -\T_{\epsilon_{1}\epsilon_{2}k}^{(5)}\\
\T_{\epsilon_{1}\gamma k}^{(6)} & -\T_{\epsilon_{1}\gamma k}^{(6)} & \bm{0} & \bm{0} & \T_{\epsilon_{1}\epsilon_{2}k}^{(6)} & -\T_{\epsilon_{1}\epsilon_{2}k}^{(6)}\\
\T_{\epsilon_{2}\gamma k}^{(5)} & -\T_{\epsilon_{2}\gamma k}^{(5)} & \T_{\epsilon_{2}\epsilon_{1}k}^{(5)} & -\T_{\epsilon_{2}\epsilon_{1}k}^{(5)} & \bm{0} & \bm{0}\\
\T_{\epsilon_{2}\gamma k}^{(6)} & -\T_{\epsilon_{2}\gamma k}^{(6)} & \T_{\epsilon_{2}\epsilon_{1}k}^{(6)} & -\T_{\epsilon_{2}\epsilon_{1}k}^{(6)} & \bm{0} & \bm{0}
\end{array}\right]
\left[\begin{array}{c}
\R_{\gamma k}\bm{u}_{h,k}\\
\R_{\gamma v}\bm{u}_{h,v}\\
\R_{\epsilon_{1}k}\bm{u}_{h,k}\\
\R_{\epsilon_{1}g_{1}}\bm{u}_{h,g_{1}}\\
\R_{\epsilon_{2}k}\bm{u}_{h,k}\\
\R_{\epsilon_{2}g_{2}}\bm{u}_{h,g_{2}}
\end{array}\right].
\end{aligned}
\end{equation}
Yet another form of the residual, and one that is important for energy analysis, is obtained by employing the ``borrowing trick'' of \cite{carpenter1999stable}, which is generalized for multidimensional SBP operators in \cite{yan2018interior}. The approach allows one to find conditions for energy stability by writing the volume term on the RHS of \cref{eq:Residual 1st form}, $ -\sum_{\Omega_{k}\subset\mathcal{T}_{h}}\bm{v}_{k}^{T}\M_{k}\bm{u}_{h,k}, $ as a facet contribution. The following lemma is an extension of Lemma 1 in \cite{yan2018interior}.
\begin{lemma}
Given facet-based weights $ \alpha_{\gamma k} > 0$ satisfying $ \sum_{\gamma \subset \Gamma_k} \alpha_{\gamma k} = 1$ for each element $ \Omega_k $, the residual of the SBP-SAT discretization for the homogeneous version of \cref{eq:diffusion problem}, \ie, $ \fnc{F}=0$, $\fnc{U}_D = 0$, and $ \fnc{U}_N =0$, can be written as
\begin{equation} \label{eq:Residual 3rd form}
\begin{aligned}
R_{h}(\bm{u}_{h},\bm{u}_{h})&=-\sum_{\gamma\subset\Gamma^{I}} \begin{bmatrix}
\D_{\gamma k}\bm{u}_{h,k}\\
\D_{\gamma v}\bm{u}_{h,v}
\end{bmatrix}^{T}\begin{bmatrix}
\T_{\gamma k}^{(4)} & \T_{\gamma k}^{(4)}\\
\T_{\gamma v}^{(4)} & \T_{\gamma v}^{(4)}
\end{bmatrix}
\begin{bmatrix}
\D_{\gamma k}\bm{u}_{h,k}\\
\D_{\gamma v}\bm{u}_{h,v}
\end{bmatrix}
-\sum_{\gamma\subset\Gamma^{I}} X_1
-\sum_{\Omega_{k}\in\mathcal{T}_{h}}\sum_{\gamma,\epsilon\in\Gamma^I_k} X_2
\\
&\quad
-\sum_{\gamma\subset\Gamma^{D}}\begin{bmatrix}
\R_{\gamma k}\bm{u}_{h,k}\\
\F_{k}\bm{u}_{h,k}
\end{bmatrix}^{T}
\begin{bmatrix}
\T_{\gamma}^{(D)} & -\B_{\gamma}\C_{\gamma k}\\
-\C_{\gamma k}^{T}\B_{\gamma} & \alpha_{\gamma k}\Lambda_{k}^{*}
\end{bmatrix}
\begin{bmatrix}
\R_{\gamma k}\bm{u}_{h,k}\\
\F_{k}\bm{u}_{h,k}
\end{bmatrix},
\end{aligned}
\end{equation}
where
\begin{equation*}
\begin{aligned}
X_1 &=\begin{bmatrix}
\R_{\gamma k}\bm{u}_{h,k}\\
\R_{\gamma v}\bm{u}_{h,v}\\
\F_{k}\bm{u}_{h,k}\\
\F_{v}\bm{u}_{h,v}
\end{bmatrix}^{T}
\begin{bmatrix}
\frac{1}{2}\T_{\gamma k}^{(1)} & -\frac{1}{2}\T_{\gamma k}^{(1)} & (\T_{\gamma k}^{(3)}-\B_{\gamma})\C_{\gamma k} & \T_{\gamma k}^{(3)}\C_{\gamma v}\\
-\frac{1}{2}\T_{\gamma v}^{(1)} & \frac{1}{2}\T_{\gamma v}^{(1)} & \T_{\gamma v}^{(3)}\C_{\gamma k} & (\T_{\gamma v}^{(3)}-\B_{\gamma})\C_{\gamma v}\\
\C_{\gamma k}^{T}\T_{\gamma k}^{(2)} & -\C_{\gamma k}^{T}\T_{\gamma k}^{(2)} & \alpha_{\gamma k}\Lambda_{k}^{*} & \bm{0}\\
-\C_{\gamma v}^{T}\T_{\gamma v}^{(2)} & \C_{\gamma v}^{T}\T_{\gamma v}^{(2)} & \bm{0} & \alpha_{\gamma v}\Lambda_{v}^{*}
\end{bmatrix}
\begin{bmatrix}
\R_{\gamma k}\bm{u}_{h,k}\\
\R_{\gamma v}\bm{u}_{h,v}\\
\F_{k}\bm{u}_{h,k}\\
\F_{v}\bm{u}_{h,v}
\end{bmatrix}
\end{aligned}
\end{equation*}
\begin{equation*}
\begin{aligned}
X_2 &=
\begin{bmatrix}
\R_{\gamma k}\bm{u}_{h,k}\\
\R_{\gamma v}\bm{u}_{h,v}\\
\R_{\epsilon_{1}k}\bm{u}_{h,k}\\
\R_{\epsilon_{1}g_{1}}\bm{u}_{h,g_{1}}
\end{bmatrix}^{T}
\begin{bmatrix}
\frac{1}{8}\T_{\gamma k}^{(1)} & -\frac{1}{8}\T_{\gamma k}^{(1)} & \T_{\gamma\epsilon_{1}k}^{(5)} & -\T_{\gamma\epsilon_{1}k}^{(5)}\\
-\frac{1}{8}\T_{\gamma v}^{(1)} & \frac{1}{8}\T_{\gamma v}^{(1)} & \T_{\gamma\epsilon_{1}k}^{(6)} & -\T_{\gamma\epsilon_{1}k}^{(6)}\\
\T_{\epsilon_{1}\gamma k}^{(5)} & -\T_{\epsilon_{1}\gamma k}^{(5)} & \frac{1}{8}\T_{\epsilon_{1}k}^{(1)} & -\frac{1}{8}\T_{\epsilon_{1}k}^{(1)}\\
\T_{\epsilon_{1}\gamma k}^{(6)} & -\T_{\epsilon_{1}\gamma k}^{(6)} & -\frac{1}{8}\T_{\epsilon_{1}g_1}^{(1)} & \frac{1}{8}\T_{\epsilon_{1}g_1}^{(1)}
\end{bmatrix}
\begin{bmatrix}
\R_{\gamma k}\bm{u}_{h,k}\\
\R_{\gamma v}\bm{u}_{h,v}\\
\R_{\epsilon_{1}k}\bm{u}_{h,k}\\
\R_{\epsilon_{1}g_{1}}\bm{u}_{h,g_{1}}
\end{bmatrix}
\\
&\quad
+\begin{bmatrix}
\R_{\gamma k}\bm{u}_{h,k}\\
\R_{\gamma v}\bm{u}_{h,v}\\
\R_{\epsilon_{2}k}\bm{u}_{h,k}\\
\R_{\epsilon_{2}g_{2}}\bm{u}_{h,g_{2}}
\end{bmatrix}^{T}
\begin{bmatrix}
\frac{1}{8}\T_{\gamma k}^{(1)} & -\frac{1}{8}\T_{\gamma k}^{(1)} & \T_{\gamma\epsilon_{2}k}^{(5)} & -\T_{\gamma\epsilon_{2}k}^{(5)}\\
-\frac{1}{8}\T_{\gamma v}^{(1)} & \frac{1}{8}\T_{\gamma v}^{(1)} & \T_{\gamma\epsilon_{2}k}^{(6)} & -\T_{\gamma\epsilon_{2}k}^{(6)}\\
\T_{\epsilon_{2}\gamma k}^{(5)} & -\T_{\epsilon_{2}\gamma k}^{(5)} & \frac{1}{8}\T_{\epsilon_{2} k}^{(1)} & -\frac{1}{8}\T_{\epsilon_{2} k}^{(1)}\\
\T_{\epsilon_{2}\gamma k}^{(6)} & -\T_{\epsilon_{2}\gamma k}^{(6)} & -\frac{1}{8}\T_{\epsilon_{2}g_2}^{(1)} & \frac{1}{8}\T_{\epsilon_{2}g_2}^{(1)}
\end{bmatrix}
\begin{bmatrix}
\R_{\gamma k}\bm{u}_{h,k}\\
\R_{\gamma v}\bm{u}_{h,v}\\
\R_{\epsilon_{2}k}\bm{u}_{h,k}\\
\R_{\epsilon_{2}g_{2}}\bm{u}_{h,g_{2}}
\end{bmatrix}\\
&\quad
+\begin{bmatrix}
\R_{\epsilon_{1}k}\bm{u}_{h,k}\\
\R_{\epsilon_{1}g_{1}}\bm{u}_{h,g_{1}}\\
\R_{\epsilon_{2}k}\bm{u}_{h,k}\\
\R_{\epsilon_{2}g_{2}}\bm{u}_{h,g_{2}}
\end{bmatrix}^{T}
\begin{bmatrix}
\frac{1}{8}\T_{\epsilon_{1}k}^{(1)} & -\frac{1}{8}\T_{\epsilon_{1}k}^{(1)} & \T_{\epsilon_{1}\epsilon_{2}k}^{(5)} & -\T_{\epsilon_{1}\epsilon_{2}k}^{(5)}\\
-\frac{1}{8}\T_{\epsilon_{1}g_1}^{(1)} & \frac{1}{8}\T_{\epsilon_{1}g_1}^{(1)} & \T_{\epsilon_{1}\epsilon_{2}k}^{(6)} & -\T_{\epsilon_{1}\epsilon_{2}k}^{(6)}\\
\T_{\epsilon_{2}\epsilon_{1}k}^{(5)} & -\T_{\epsilon_{2}\epsilon_{1}k}^{(5)} & \frac{1}{8}\T_{\epsilon_{2} k}^{(1)} & -\frac{1}{8}\T_{\epsilon_{2} k}^{(1)}\\
\T_{\epsilon_{2}\epsilon_{1}k}^{(6)} & -\T_{\epsilon_{2}\epsilon_{1}k}^{(6)} & -\frac{1}{8}\T_{\epsilon_{2}g_2}^{(1)} & \frac{1}{8}\T_{\epsilon_{2}g_2}^{(1)}
\end{bmatrix}
\begin{bmatrix}
\R_{\epsilon_{1}k}\bm{u}_{h,k}\\
\R_{\epsilon_{1}g_{1}}\bm{u}_{h,g_{1}}\\
\R_{\epsilon_{2}k}\bm{u}_{h,k}\\
\R_{\epsilon_{2}g_{2}}\bm{u}_{h,g_{2}}
\end{bmatrix}
\end{aligned}
\end{equation*}
and
\begin{equation*}
\begin{aligned}
\F_{k}&=\left[\begin{array}{cc}
\Lambda_{xx} & \Lambda_{xy}\\
\Lambda_{yx} & \Lambda_{yy}
\end{array}\right]_{k}\left[\begin{array}{c}
\D_{xk}\\
\D_{yk}
\end{array}\right],&\Lambda_{k}^{*}&=\left[\begin{array}{cc}
\Lambda_{xx} & \Lambda_{xy}\\
\Lambda_{yx} & \Lambda_{yy}
\end{array}\right]_{k}^{-1}\left[\begin{array}{cc}
\H_{k}\\
& \H_{k}
\end{array}\right],&\C_{\gamma k}&=\left[\begin{array}{cc}
\N_{x \gamma k}\R_{\gamma k} & \N_{y \gamma k}\R_{\gamma k}\end{array}\right],\\\F_{v}&=\left[\begin{array}{cc}
\Lambda_{xx} & \Lambda_{xy}\\
\Lambda_{yx} & \Lambda_{yy}
\end{array}\right]_{v}\left[\begin{array}{c}
\D_{xv}\\
\D_{yv}
\end{array}\right],&\Lambda_{v}^{*}&=\left[\begin{array}{cc}
\Lambda_{xx} & \Lambda_{xy}\\
\Lambda_{yx} & \Lambda_{yy}
\end{array}\right]_{v}^{-1}\left[\begin{array}{cc}
\H_{v}\\
& \H_{v}
\end{array}\right],&\C_{\gamma v}&=\left[\begin{array}{cc}
\N_{x\gamma v}\R_{\gamma v} & \N_{y\gamma v}\R_{\gamma v}\end{array}\right].
\end{aligned}
\end{equation*}
\end{lemma}
\begin{proof}
Since the result follows from straightforward algebraic manipulations, the complete proof is omitted, but we outline the main steps. We make use of the decomposition provided in \cite{yan2018interior},
\begin{equation}
\M_{k} = \sum_{\gamma \subset \Gamma_{k}} \alpha_{\gamma k}\F_{k}^T\Lambda_k^* \F_{k},
\end{equation}
apply the relations $ \C_{\gamma k}\F_{k}=\D_{\gamma k} $ and $ \C_{\gamma v}\F_{v} = \D_{\gamma v} $, subtract $ \frac{1}{2}\T^{(1)} $ terms from the first $ 2\times 2 $ block of $ \T^{\star}$, and add $ \frac{1}{4}\T^{(1)} $ terms to the $ 2\times 2 $ diagonal blocks of $ \T^{\diamond} $, which is then decomposed into the three terms in $ X_2 $. Note that $ \T^{\diamond} $ is summed element by element; therefore, we recover $ \frac{1}{2}\T^{(1)} $ terms at each interior facet due to contributions from the two abutting elements.
\end{proof}
\section{Properties of the SBP-SAT discretization} \label{sec:Properties of SBP-SAT discretization}
In this section, some numerical properties of the SBP-SAT discretization given in \cref{eq:SBP-SAT discretization,eq:Interface SATs,eq:Boundary SATs} are analyzed. We will make use of the three equivalent forms of the residual in \cref{eq:Residual 1st form}, \cref{eq:Residual 2nd form}, and \cref{eq:Residual 3rd form} depending on the property under consideration.
\subsection{Consistency} Consistency is a requirement that the SBP-SAT discretization of the primal problem represents the continuous PDE accurately. Consider the steady version of the model problem \cref{eq:diffusion problem}; then consistency requires that the SBP-SAT discretization be at least approximately satisfied by the exact solution \cite{arnold2015stability}. More precisely, we require that
\begin{equation}
\lim_{h\rightarrow0}\sum_{\Omega_{k}\in\mathcal{T}_{h}}\norm{L_{h,k}(\bm{u}_k)-\bm{f}_k}_{\H_k}=0,
\end{equation}
where $ \bm{u}_k \in \IR{n_p} $ is a grid function representing the exact solution, $ \fnc{U}_k\in\cont{p+1}$,
\begin{equation}\label{eq:Lhk}
L_{h,k}( \bm{u}_k)= - \D_{k}^{(2)}\bm{u}_k
+ \H^{-1}_k\bm{s}_k^I(\bm{u}_k)
+ \H^{-1}_k\bm{s}_k^B(\bm{u}_k,\bm{u}_{\gamma k}, \bm{w}_{\gamma k})
\end{equation}
is the discrete counterpart of the linear operator $ L_k $ applied to $ \fnc{U}_k $, and $ \norm{\cdot}_{\H_k} $ is the norm on $ \IR{n_p} $ defined by the matrix $ \H_k $.
\begin{theorem} \label{thm:Consistency}
Let \cref{assu: mapping} hold, the SBP operators be constructed as described in \cref{sec:Curvilinear Transformation}, the solution to the PDE \cref{eq:diffusion problem} be $ \fnc{U}\in\fnc{C}^{p+1}(\Omega) $, and $ \bm{u}_k\in\IR{n_p} $ be a grid function representing $ \fnc{U}_k $. Then, the discretization \cref{eq:SBP-SAT discretization,eq:Interface SATs,eq:Boundary SATs} is a consistent approximation of the diffusion problem given by \cref{eq:diffusion problem}, and $\norm{L_{h,k}(\bm{u}_k) - \bm{f}_k}_\infty = \fnc{O}(h^{p-1})$.
\end{theorem}
\begin{proof}
The result is a simple consequence of \cref{thm:Accuracy of Dx}, and the accuracy of the extrapolation matrix. It follows by substituting $ \bm{u} $ in place of $ \bm{u}_h $ in \cref{eq:SBP-SAT discretization,eq:Interface SATs,eq:Boundary SATs}, and noting that $ \D_{k}^{(2)} \bm{u}_k = [\nabla\cdot(\lambda\nabla\fnc{U})]_{S_{k}} + \order{h^{p-1}}$, $ \Dgk\bm{u}_k = -\Dgv\bm{u}_v = [\bm{n}_k\cdot(\lambda\nabla\fnc{U})]_{S_\gamma} +\order{h^{p}} $, and the extrapolation matrices are order $ p+1 $ accurate, \eg, $ \Rgk\bm{u}_k = \fnc{U}|_{S_\gamma} + \order*{h^{p+1}} $.
\end{proof}
\subsection{Conservation} A conservative discretization needs to satisfy the divergence theorem discretely. Multiplying \cref{eq:diffusion problem} by a test function, $ \fnc{V}\in H^{2}(\Omega) $, and integrating by parts we find
\begin{equation} \label{eq:Conservation IBP}
\int_{\Omega}\fnc{V}\pdv{\fnc{U}}{t} \dd\Omega=-\int_{\Omega}\nabla \fnc{V}\cdot(\lambda\nabla \fnc{U})\dd\Omega+\int_{\Gamma^B}\fnc{V}(\lambda\nabla \fnc{U})\cdot \bm{n}\dd\Gamma+\int_{\Omega} \fnc{V}\fnc{F}\dd\Omega,
\end{equation}
which is equivalent to applying the divergence theorem if we set $ \fnc{V}=1 $. Thus, for a problem with no source term and $ \bm{v} = \bm{1} $, we require all except the boundary terms on the RHS of \cref{eq:Residual 2nd form} to vanish.
\begin{theorem}\label{thm:Conservation}
Let \cref{assu: mapping} hold and the metric terms be evaluated exactly; then, the SBP-SAT discretization given in \cref{eq:SBP-SAT discretization,eq:Interface SATs,eq:Boundary SATs} is conservative if the penalty matrices satisfy
\begin{equation} \label{eq:Conditions for conservation}
\begin{aligned}
\T_{\gamma k}^{(1)}&=\T_{\gamma v}^{(1)},
&&
\T_{\gamma k}^{(3)}+\T_{\gamma v}^{(3)}=\B_{\gamma},
&&
\T_{abk}^{(5)}=-\T_{abk}^{(6)},
\end{aligned}
\end{equation}
where $ a,b\in\left\{ \gamma,\epsilon_{1},\epsilon_{2} \right\}$.
\end{theorem}
\begin{proof}
In \cref{eq:Discretization summed over all elements} and \cref{eq:Residual 2nd form}, we set $ \bm{v} = \bm{1} $ and $ \bm{f}_k =\bm{0} $. Applying $ \T_{\gamma v}^{(1)} =\T_{\gamma k}^{(1)}$ and $ \T_{\gamma k}^{(3)} - \B_{\gamma} = -\T_{\gamma v}^{(3)} $ in $ \T^{\star} $, $ \T_{abk}^{(5)}=-\T_{abk}^{(6)} $ for $ a,b\in\left\{ \gamma,\epsilon_{1},\epsilon_{2}\right\}$ in $ \T^{\diamond} $, and using the exactness of the extrapolation matrix for constants along with the identities $ \Dxk \bm{1} = \Dyk\bm{1}=\bm{0}$ and $\Dgk \bm{1} = \Dgv \bm{1} = \bm{0} $, we obtain
$ \sum_{\Omega_{k}\subset\mathcal{T}_{h}}\bm{v}_{k}^{T}\M_{k}\bm{u}_{h,k} = 0$, $\sum_{\gamma \subset \Gamma^I} (\bm{v}^{\star})^T \T^{\star} \bm{u}_h^{\star} = 0$, and $\sum_{\gamma,\epsilon\subset \Gamma_k^I} (\bm{v}^{\diamond})^T \T^{\diamond} \bm{u}_h^{\diamond} = 0$.
Therefore, \cref{eq:Discretization summed over all elements} becomes
\begin{align}
\sum_{\Omega_k\subset {\mathcal T}_h}\bm{1}^{T}\H_{k}\dv{\uhk}{t} =
\sum_{\gamma\subset\Gamma^{N}}\bm{1}^{T}\Rgk^T\B_{\gamma}\bm{w}_{\gamma k}
+\sum_{\gamma\subset\Gamma^{D}}\qty[\bm{1}^{T}\Rgk^T\B_{\gamma}\D_{\gamma k}\bm{u}_{h,k}
-\bm{1}^{T}\Rgk^T\T_{\gamma}^{(D)}\left(\R_{\gamma k}\bm{u}_{h,k}-\bm{u}_{\gamma k}\right)],
\end{align}
\ie, all interface terms in the residual vanish and $ \sum_{\Omega_k\subset {\mathcal T}_h}\bm{1}^{T}\H_{k}\dv{\uhk}{t} $ depends only on boundary terms, as desired.
\end{proof}
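In practice, the conditions \cref{eq:Conditions for conservation} can be checked directly for a chosen set of penalty matrices; a sketch of such a check (the penalty matrices are assumed given, and the function name is illustrative only) is
\begin{verbatim}
import numpy as np

def is_conservative(T1_k, T1_v, T3_k, T3_v, B, T5, T6, tol=1e-12):
    """Check the algebraic conservation conditions facet by facet.

    T5, T6 : dicts mapping facet pairs (a, b) to T^{(5)}_{abk} and
             T^{(6)}_{abk}, respectively.
    """
    ok = np.allclose(T1_k, T1_v, atol=tol)
    ok &= np.allclose(T3_k + T3_v, B, atol=tol)
    for key in T5:
        ok &= np.allclose(T5[key], -T6[key], atol=tol)
    return ok
\end{verbatim}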
\subsection{Adjoint Consistency} \label{sec:Adjoint Consistency}
Adjoint consistency refers to an accurate discrete representation of the continuous adjoint problem, \ie, the exact solution to the continuous adjoint problem \cref{eq:Adjoint problem} needs to satisfy
\begin{equation} \label{eq:Adjoint consitstency definition}
\lim_{h\rightarrow0}\sum_{\Omega_{k}\subset\mathcal{T}_{h}}\norm{L_{h,k}^* (\bm{\psi})-\bm{g}_k}_{\H_k}=0,
\end{equation}
where $ L_{h,k}^* $ is the discrete adjoint operator (see \cite{hicken2011superconvergent,hicken2014dual} for similar definitions). The discretization of the linear functional \cref{eq:Functional} needs to be modified in a consistent manner to attain an adjoint consistent discretization \cite{hartmann2007adjoint}. One possible modification is
\begin{equation}\label{eq:Functional discrete}
I_h(\bm{u}_h) = \sum_{\Omega_{k}\subset\mathcal{T}_{h}} \bm{g}^T_k \H_k \uhk - \sum_{\gamma \subset \Gamma^D} \bm{\psi}_{\gamma k}^T\B_{\gamma}\Dgk \uhk
+ \sum_{\gamma \subset \Gamma^N} \bm{z}_{\gamma k}^T\B_{\gamma}\Rgk\uhk + \sum_{\gamma \subset \Gamma^D} \bm{\psi}_{\gamma k}^T\T_{\gamma}^{(D)}(\Rgk\uhk - \bm{u}_{\gamma k}),
\end{equation}
\violet{where $ \bm{\psi}_{\gamma k} $ and $ \bm{z}_{\gamma k} $ are restrictions of $ \psi $ and $ \bm{n}\cdot(\lambda\nabla\psi) $ on $ S_{\gamma}$, respectively}. The last term in \cref{eq:Functional discrete} is added for adjoint consistency \cite{hartmann2007adjoint,hartmann2013higher,yan2018interior,hicken2011superconvergent,hicken2012output}. Similarly, we modify the discretization of another form of the functional that is given by the last equality in \cref{eq:Adjoint relation} as
\begin{equation}\label{eq:Functional discrete 2}
I_h(\bm{\psi}_h)
= \sum_{\Omega_{k}\subset\mathcal{T}_{h}} \bm{f}_k^{T} \H_k \bm{\psi}_{h,k}
- \sum_{\gamma \subset \Gamma^D} \bm{u}_{\gamma k}^T\B_{\gamma}\Dgk \bm{\psi}_{h,k}
+ \sum_{\gamma \subset \Gamma^N} \bm{w}_{\gamma k}^T\B_{\gamma}\Rgk\bm{\psi}_{h,k}
+ \sum_{\gamma \subset \Gamma^D} \bm{u}_{\gamma k}^T\T_{\gamma}^{(D)}(\violet{\Rgk\bm{\psi}_{h,k}} - \bm{\psi}_{\gamma k}).
\end{equation}
\blue{A general procedure to modify the functional for adjoint consistency of discretizations of problems with non-homogeneous Dirichlet boundary conditions is discussed in \cite{hartmann2013higher}. If the boundary SATs contain extended stencil terms, it is not clear whether a similar modification is applicable to retain adjoint consistency.}
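A sketch of evaluating the modified functional \cref{eq:Functional discrete}, with the element and facet operators assumed assembled as in \cref{sec:SBP-SAT discretization} and the data layout purely illustrative, is
\begin{verbatim}
import numpy as np

def discrete_functional(elems, dirichlet, neumann):
    """Evaluate I_h(u_h) from element and facet contributions.

    elems     : list of (g, H, u) per element
    dirichlet : list of (psi_g, B, D_g, u, TD, R_g, u_bc) per Dirichlet facet
    neumann   : list of (z_g, B, R_g, u) per Neumann facet
    """
    I = sum(g @ (H @ u) for g, H, u in elems)
    for psi_g, B, D_g, u, TD, R_g, u_bc in dirichlet:
        I -= psi_g @ (B @ (D_g @ u))              # - psi^T B D_gamma u
        I += psi_g @ (TD @ (R_g @ u - u_bc))      # adjoint-consistency term
    for z_g, B, R_g, u in neumann:
        I += z_g @ (B @ (R_g @ u))                # + z^T B R u
    return I
\end{verbatim}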
\violet{To find the discrete adjoint operator, we require that $ I_h(\bm{u}_h) - I_h(\bm{\psi}_h) =0$, which is a discrete analogue of $ \fnc{I}(\fnc{U})-\fnc{I}(\fnc{\psi})=0 $}. Subtracting $ \sum_{\Omega_{k}\subset\mathcal{T}_{h}}\bm{\psi}^T_{h,k}\H_k(L_{h,k}(\uhk)-\bm{f}_k) = 0 $ from \cref{eq:Functional discrete} gives
\begin{equation}
\begin{aligned}
I_h(\bm{u}_h)
&= \sum_{\Omega_{k}\subset\mathcal{T}_{h}} \bm{g}^T_k \H_k \uhk
- \sum_{\gamma \subset \Gamma^D} \bm{\psi}_{\gamma k}^T\B_{\gamma}\Dgk \uhk
+ \sum_{\gamma \subset \Gamma^N} \bm{z}_{\gamma k}^T\B_{\gamma}\Rgk\uhk\\
&\quad+ \sum_{\gamma \subset \Gamma^D} \bm{\psi}_{\gamma k}^T\T_{\gamma}^{(D)}(\Rgk\uhk - \bm{u}_{\gamma k}) - \sum_{\Omega_{k}\subset\mathcal{T}_{h}}\bm{\psi}^T_{h,k}\H_k( L_{h,k}(\uhk) - \bm{f}_k).\\
\end{aligned}
\end{equation}
Rearranging, adding, and subtracting terms we have
\begin{equation}\label{eq:Functional discrete 3}
\begin{aligned}
I_h(\bm{u}_h)
&=\sum_{\Omega_{k}\subset\mathcal{T}_{h}}\bm{\psi}^T_{h,k}\H_k\bm{f}_k
-\sum_{\gamma \subset \Gamma^D}\bm{u}_{\gamma k}^T\B_{\gamma}\Dgk\bm{\psi}_{h,k}
+\sum_{\gamma \subset \Gamma^N}\bm{w}_{\gamma k}^T\B_{\gamma}\Rgk \bm{\psi}_{h,k}
+ \sum_{\gamma \subset \Gamma^D} \bm{u}_{\gamma k}^T\T_{\gamma}^{(D)}(\Rgk\bm{\psi}_{h,k}
- \bm{\psi}_{\gamma k})
\\
&
\quad
-\sum_{\Omega_{k}\subset\mathcal{T}_{h}}\bm{\psi}^T_{h,k}\H_k L_{h,k}(\uhk)
+\sum_{\Omega_{k}\subset\mathcal{T}_{h}}\uhk^T\H_k\bm{g}_k
+\sum_{\gamma \subset \Gamma^D}\bm{u}_{\gamma k}^T\B_{\gamma}\Dgk\bm{\psi}_{h,k}
-\sum_{\gamma \subset \Gamma^D} \bm{u}_{\gamma k}^T\T_{\gamma}^{(D)}(\Rgk\bm{\psi}_{h,k} - \bm{\psi}_{\gamma k})
\\
&
\quad
-\sum_{\gamma \subset \Gamma^N}\bm{w}_{\gamma k}^T\B_{\gamma}\Rgk \bm{\psi}_{h,k}
- \sum_{\gamma \subset \Gamma^D} \bm{\psi}_{\gamma k}^T\B_{\gamma}\Dgk \uhk
+ \sum_{\gamma \subset \Gamma^N} \bm{z}_{\gamma k}^T\B_{\gamma}\Rgk\uhk
+ \sum_{\gamma \subset \Gamma^D} \bm{\psi}_{\gamma k}^T\T_{\gamma}^{(D)}(\Rgk\uhk - \bm{u}_{\gamma k}),
\end{aligned}
\end{equation}
where the sum of the first four terms on the RHS is equal to \violet{$ I_h(\bm{\psi}_h) $} due to \cref{eq:Functional discrete 2}. \violet{Rearranging terms, we find}
\begin{equation} \label{eq:Vanishing term for adjoint consistency}
-\sum_{\Omega_{k}\subset\mathcal{T}_{h}}\bm{\psi}^T_{h,k}\H_k L_{h,k}(\uhk)
+ \sum_{\Omega_{k}\subset\mathcal{T}_{h}}\uhk^T\H_k\bm{g}_k + B_{\rm terms} + {I_h(\bm{\psi}_h) - I_h(\bm{u}_h)} = 0,
\end{equation}
where $ B_{\rm terms} $ is the sum of the boundary terms in the last two lines of \cref{eq:Functional discrete 3}. Using \cref{eq:Lhk} and \cref{eq:D2 identity 2} we write
\begin{align}
-\bm{\psi}^T_{h,k}\H_k L_{h,k}( \bm{u}_{h,k}) &= \bm{\psi}^T_{h,k} \H_k \D_{k}^{(2)}\uhk
- \bm{\psi}^T_{h,k}\bm{s}_k^I(\bm{u}_{h,k})
- \bm{\psi}^T_{h,k}\bm{s}_k^B(\bm{u}_{h,k},\bm{u}_{\gamma k}, \bm{w}_{\gamma k})
\nonumber\\
\quad
&=\bm{\psi}^T_{h,k}\left(\D_{k}^{(2)}\right)^{T}\H_{k}\uhk
-\sum_{\gamma\subset\Gamma_{k}}\bm{\psi}^T_{h,k}\D_{\gamma k}^{T}\B_{\gamma}\R_{\gamma k}\uhk
+\sum_{\gamma\subset\Gamma_{k}}\bm{\psi}^T_{h,k}\R_{\gamma k}^{T}\B_{\gamma}\D_{\gamma k}\uhk
\nonumber \\
&\quad
- \bm{\psi}^T_{h,k}\bm{s}_k^I(\bm{u}_{h,k})
- \bm{\psi}^T_{h,k}\bm{s}_k^B(\bm{u}_{h,k},\bm{u}_{\gamma k}, \bm{w}_{\gamma k}).
\end{align}
Summing over all elements and transposing the result, we find
\begin{equation} \label{eq:Sum Lhk}
\begin{aligned}
-\sum_{\Omega_{k}\subset\mathcal{T}_{h}}\left(\bm{\psi}^T_{h,k}\H_k L_{h,k}( \bm{u}_{h,k})\right)^T &=
\sum_{\Omega_{k}\subset\mathcal{T}_{h}}\uhk^T\H_k\D_{k}^{(2)}\bm{\psi}_{h,k}
-\sum_{\gamma \subset \Gamma^I} (\bm{u}_h^{\star})^T \tilde{\T}^{\star} \bm{\psi}_h^{\star}
-\sum_{\Omega_{k}\subset\mathcal{T}_{h}}\sum_{\gamma,\epsilon\in \Gamma_k^I} (\bm{u}_h^{\diamond})^T \tilde{\T}^{\diamond} \bm{\psi}_h^{\diamond}
\\
& \quad
-\sum_{\gamma \subset \Gamma^D}(\Rgk\uhk - \bm{u}_{\gamma k})^T \T_{\gamma}^{(D)}\Rgk\bm{\psi}_{h,k}
-\sum_{\gamma \subset \Gamma^D} \bm{u}_{\gamma k}^T \B_\gamma \Dgk \bm{\psi}_{h,k}
+\sum_{\gamma \subset \Gamma^D} \uhk^T\Dgk^T\B_{\gamma}\Rgk\bm{\psi}_{h,k}
\\
&\quad
+\sum_{\gamma \subset \Gamma^N} \bm{w}_{\gamma k}^T \B_\gamma \Rgk \bm{\psi}_{h,k}
-\sum_{\gamma \subset \Gamma^N} \uhk^T\Rgk^T\B_{\gamma}\Dgk \bm{\psi}_{h,k},
\end{aligned}
\end{equation}
where $ \tilde{\T}^{\diamond} = (\T^{\diamond})^T$, and
\begin{equation}
\tilde{\T}^{\star}=\begin{bmatrix}
\T_{\gamma k}^{(1)} & -\T_{\gamma v}^{(1)} & \T_{\gamma k}^{(2)}+\B_{\gamma} & -\T_{\gamma v}^{(2)}\\
-\T_{\gamma k}^{(1)} & \T_{\gamma v}^{(1)} & -\T_{\gamma k}^{(2)} & \T_{\gamma v}^{(2)}+\B_{\gamma}\\
\T_{\gamma k}^{(3)}-\B_{\gamma} & \T_{\gamma v}^{(3)} & \T_{\gamma k}^{(4)} & \T_{\gamma v}^{(4)}\\
\T_{\gamma k}^{(3)} & \T_{\gamma v}^{(3)}-\B_{\gamma} & \T_{\gamma k}^{(4)} & \T_{\gamma v}^{(4)}
\end{bmatrix}.
\end{equation}
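For implementation purposes, $ \tilde{\T}^{\star} $ is simply a $ 4\times 4 $ block matrix assembled from the interface penalty coefficients. The following NumPy sketch is provided for illustration only; the function name is arbitrary, and dense coefficient blocks of equal size are assumed.
\begin{verbatim}
import numpy as np

def assemble_T_star_tilde(T1k, T1v, T2k, T2v, T3k, T3v, T4k, T4v, Bg):
    """Assemble the 4x4 block matrix that multiplies the interface
    states in the transposed form of the interior facet SATs."""
    return np.block([
        [ T1k,      -T1v,       T2k + Bg, -T2v     ],
        [-T1k,       T1v,      -T2k,       T2v + Bg],
        [ T3k - Bg,  T3v,       T4k,       T4v     ],
        [ T3k,       T3v - Bg,  T4k,       T4v     ]])
\end{verbatim}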
Substituting \cref{eq:Sum Lhk} \violet{into \cref{eq:Vanishing term for adjoint consistency}, enforcing $ I_h(\bm{u}_h) - I_h(\bm{\psi}_h) =0$, and simplifying, we obtain
\begin{equation} \label{eq:L^*hk + g}
\begin{aligned}
&\sum_{\Omega_{k}\subset{\mathcal T}_{h}}\bm{u}_{h,k}^T\H_{k}\left(\D_{k}^{(2)}\bm{\psi}_{h,k}+\bm{g}_{k}\right)
-\sum_{\gamma\subset\Gamma^{I}}(\bm{u}_{h}^{\star})^{T}\tilde{\T}^{\star}\bm{\psi}_{h}^{\star}
-\sum_{\Omega_{k}\subset\mathcal{T}_{h}}\sum_{\gamma,\epsilon\in\Gamma_{k}^{I}}(\bm{u}_{h}^{\diamond})^{T}\tilde{\T}^{\diamond}\bm{\psi}_{h}^{\diamond}
\\&
\quad
-\sum_{\gamma\subset\Gamma^{D}}\left[\begin{array}{c}
\R_{\gamma k}\bm{u}_{h,k}\\
\D_{\gamma k}\bm{u}_{h,k}
\end{array}\right]^{T}\left[\begin{array}{c}
\T_{\gamma}^{(D)}\\
-\B_{\gamma}
\end{array}\right]\left(\R_{\gamma k}\bm{\psi}_{h,k}-\bm{\psi}_{\gamma k}\right)
-\sum_{\gamma\subset\Gamma^{N}}\bm{u}_{h,k}^{T}\R_{\gamma k}^{T}\B_{\gamma}\left(\D_{\gamma k}\bm{\psi}_{h,k}-\bm{z}_{\gamma k}\right) = 0.
\end{aligned}
\end{equation}
Rewriting \cref{eq:L^*hk + g} as
$
\sum_{\Omega_{k}\subset{\mathcal T}_{h}} \bm{u}_{h,k}^T\H_k \left(L_{h,k}^*\left(\bm{\psi}_h\right) - \bm{g}_k \right) = 0
$,
we identify the discrete adjoint operator, $ L_{h,k}^* $, satisfying}
\begin{equation} \label{eq:Discrete adjoint problem}
L_{h,k}^*(\bm{\psi}_h) - \bm{g}_k = \bm{0},
\end{equation}
\violet{to be}
\begin{equation} \label{eq:Discrete Adjoint operator}
L_{h,k}^*(\bm{\psi}_{h}) = -\D_k^{(2)}\bm{\psi}_{h,k}
+ \H^{-1}_k(\bm{s}_k^I)^*(\bm{\psi}_{h,k})
+ \H^{-1}_k(\bm{s}_k^B)^*(\bm{\psi}_{h,k},\bm{\psi}_{\gamma k}, \bm{z}_{\gamma k}),
\end{equation}
\violet{where} the adjoint interior facet SATs and boundary SATs are given, respectively, by
\begin{equation} \label{eq:sI*}
\begin{aligned}
\left(\bm{s}_{k}^{I}\right)^{*} &=\sum_{\gamma\subset\Gamma_{k}^{I}}\begin{bmatrix}
\R_{\gamma k}^{T} & \D_{\gamma k}^{T}\end{bmatrix}
\begin{bmatrix}
\T_{\gamma k}^{(1)} & -\T_{\gamma v}^{(1)} & \T_{\gamma k}^{(2)}+\B_{\gamma} & -\T_{\gamma v}^{(2)}\\
\T_{\gamma k}^{(3)}-\B_{\gamma} & \T_{\gamma v}^{(3)} & \T_{\gamma k}^{(4)} & \T_{\gamma v}^{(4)}
\end{bmatrix}
\begin{bmatrix}
\R_{\gamma k}\bm{\psi}_{h,k}\\
\R_{\gamma v}\bm{\psi}_{h,v}\\
\D_{\gamma k}\bm{\psi}_{h,k}\\
\D_{\gamma v}\bm{\psi}_{h,v}
\end{bmatrix}\\
&\quad
+\sum_{\gamma\subset\Gamma_{k}^{I}}\left\{\sum_{\epsilon\subset\Gamma_{k}^{I}}\R_{\gamma k}^{T}\left[\T_{\gamma\epsilon k}^{(5)}\R_{\epsilon k}\bm{\psi}_{h,k}+\T_{\gamma\epsilon k}^{(6)}\R_{\epsilon g}\bm{\psi}_{h,g}\right]
-\sum_{\delta\subset\Gamma_{v}^{I}}\R_{\gamma k}^{T}\left[\T_{\gamma\delta v}^{(5)}\R_{\delta v}\bm{\psi}_{h,v}+\T_{\gamma\delta v}^{(6)}\R_{\delta q}\bm{\psi}_{h,q}\right] \right\},
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
\left(\bm{s}_{k}^{B}\right)^{*}&=\sum_{\gamma\subset\Gamma^{D}}\left[\begin{array}{cc}
\R_{\gamma k}^{T} & \D_{\gamma k}^{T}\end{array}\right]\left[\begin{array}{c}
\T_{\gamma}^{(D)}\\
-\B_{\gamma}
\end{array}\right]\left[\begin{array}{cc}
\R_{\gamma k}\bm{\psi}_{h,k}-\bm{\psi}_{\gamma k}\end{array}\right]
+\sum_{\gamma\subset\Gamma^{N}}\R_{\gamma k}^{T}\B_{\gamma}\left(\D_{\gamma k}\bm{\psi}_{h,k}-\bm{z}_{\gamma k}\right).
\end{aligned}
\end{equation}
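The adjoint boundary SATs above can be evaluated directly from the extrapolation and normal-derivative operators. A minimal NumPy sketch for the contribution of a single boundary facet is given below; the function and argument names are illustrative, and dense operators of compatible sizes are assumed.
\begin{verbatim}
import numpy as np

def adjoint_boundary_sat(Rk, Dk, Bg, TD, psi_h, psi_bc, z_bc, dirichlet=True):
    """One-facet contribution to (s_k^B)^*:
    Dirichlet facet: (Rk^T TD - Dk^T Bg)(Rk psi_h - psi_bc)
    Neumann facet:   Rk^T Bg (Dk psi_h - z_bc)"""
    if dirichlet:
        return (Rk.T @ TD - Dk.T @ Bg) @ (Rk @ psi_h - psi_bc)
    return Rk.T @ Bg @ (Dk @ psi_h - z_bc)
\end{verbatim}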
\violet{Note that the last term in $(\bm{s}_{k}^{I})^{*}$ is obtained by rewriting the extended interior facet SATs facet by facet, regrouping the contributions by element, \ie,
\begin{equation}
\begin{aligned}
\sum_{\Omega_{k}\subset\mathcal{T}_{h}}\sum_{\gamma,\epsilon\subset\Gamma_{k}^{I}}(\bm{\psi}_{h}^{\diamond})^{T}\T^{\diamond}\bm{u}_{h}^{\diamond}&=\sum_{\gamma\subset\Gamma^{I}}\bigg\{\sum_{\delta\subset\Gamma_{v}^{I}}\bm{\psi}_{h,v}^{T}\R_{\delta v}^{T}\T_{\delta\gamma v}^{(5)}[\R_{\gamma v}\bm{u}_{h,v}-\R_{\gamma k}\bm{u}_{h,k}]+\sum_{\epsilon\subset\Gamma_{k}^{I}}\bm{\psi}_{h,k}^{T}\R_{\epsilon k}^{T}\T_{\epsilon\gamma k}^{(5)}[\R_{\gamma k}\bm{u}_{h,k}-\R_{\gamma v}\bm{u}_{h,v}]\\&\quad+\sum_{\delta\subset\Gamma_{v}^{I}}\bm{\psi}_{h,q}^{T}\R_{\delta q}^{T}\T_{\delta\gamma v}^{(6)}[\R_{\gamma v}\bm{u}_{h,v}-\R_{\gamma k}\bm{u}_{h,k}]+\sum_{\epsilon\subset\Gamma_{k}^{I}}\bm{\psi}_{h,g}^{T}\R_{\epsilon g}^{T}\T_{\epsilon\gamma k}^{(6)}[\R_{\gamma k}\bm{u}_{h,k}-\R_{\gamma v}\bm{u}_{h,v}]\bigg\}\\&=\sum_{\Omega_{k}\subset\mathcal{T}_{h}}\sum_{\gamma\subset\Gamma_{k}^{I}}\bigg\{\sum_{\epsilon\subset\Gamma_{k}^{I}}\left[\bm{\psi}_{h,k}^{T}\R_{\epsilon k}^{T}\T_{\epsilon\gamma k}^{(5)}+\bm{\psi}_{h,g}^{T}\R_{\epsilon g}^{T}\T_{\epsilon\gamma k}^{(6)}\right]\R_{\gamma k}\bm{u}_{h,k} -\sum_{\delta\subset\Gamma_{v}^{I}}\left[\bm{\psi}_{h,v}^{T}\R_{\delta v}^{T}\T_{\delta\gamma v}^{(5)}+\bm{\psi}_{h,q}^{T}\R_{\delta q}^{T}\T_{\delta\gamma v}^{(6)}\right]\R_{\gamma k}\bm{u}_{h,k}\bigg\},
\end{aligned}
\end{equation}
and transposing to find
\begin{equation}
\begin{aligned}
\sum_{\Omega_{k}\subset\mathcal{T}_{h}}\sum_{\gamma,\epsilon\in\Gamma_{k}^{I}}(\bm{u}_{h}^{\diamond})^{T}\tilde{\T}^{\diamond}\bm{\psi}_{h}^{\diamond}&=\sum_{\Omega_{k}\subset\mathcal{T}_{h}}\sum_{\gamma\subset\Gamma_{k}^{I}}\bigg\{\sum_{\epsilon\subset\Gamma_{k}^{I}}\bm{u}_{h,k}^{T}\R_{\gamma k}^{T}\left[\T_{\gamma\epsilon k}^{(5)}\R_{\epsilon k}\bm{\psi}_{h,k}+\T_{\gamma\epsilon k}^{(6)}\R_{\epsilon g}\bm{\psi}_{h,g}\right]-\sum_{\delta\subset\Gamma_{v}^{I}}\bm{u}_{h,k}^{T}\R_{\gamma k}^{T}\left[\T_{\gamma\delta v}^{(5)}\R_{\delta v}\bm{\psi}_{h,v}+\T_{\gamma\delta v}^{(6)}\R_{\delta q}\bm{\psi}_{h,q}\right]\bigg\}.
\end{aligned}
\end{equation}
It is also possible to regroup the extended interior facet SATs as}
\begin{equation} \label{eq:Adjoint Interface SAT extended terms}
\sum_{\gamma\subset\Gamma_{k}^{I}}\sum_{\epsilon\subset\Gamma_{k}^{I}}\left[\R_{\epsilon k}^{T}\left(\T_{\epsilon\gamma k}^{(5)}\R_{\gamma k}\bm{\psi}_{h,k}+\T_{\epsilon\gamma k}^{(6)}\R_{\gamma v}\bm{\psi}_{h,v}\right)-\R_{\epsilon g}^{T}\left(\T_{\epsilon\gamma k}^{(5)}\R_{\gamma k}\bm{\psi}_{h,k}+\T_{\epsilon\gamma k}^{(6)}\R_{\gamma v}\bm{\psi}_{h,v}\right)\right],
\end{equation}
\violet{which can replace the last term in \cref{eq:sI*}}. We now state a theorem which is an extension of Theorem 1 in \cite{yan2018interior}.
\begin{theorem}\label{thm:Adjoint consistency}
Let \cref{assu: mapping,assu:Coefficient matrices} hold, and the metric terms be evaluated exactly. Then the SBP-SAT discretization given in \cref{eq:SBP-SAT discretization,eq:Interface SATs,eq:Boundary SATs} and the discrete functional \cref{eq:Functional discrete} are adjoint consistent with respect to the steady version of the continuous PDE \cref{eq:diffusion problem} and functional \cref{eq:Functional}, \ie, \cref{eq:Adjoint consitstency definition} holds, and $\norm{ L_{h,k}^* \bm{\psi}_k - \bm{g}_k}_\infty ={\cal{O}}(h^{p-1})$ if $ \psi_k \in \cont{p+1} $ and the coefficient matrices satisfy the following relations
\begin{equation} \label{eq:Coefficients for adjoint consistency}
\begin{aligned}
\T_{\gamma k}^{(1)}&=\T_{\gamma v}^{(1)},
&& \T_{\gamma k}^{(2)}+\T_{\gamma v}^{(2)}=-\B_{\gamma},
&&\T_{\gamma k}^{(3)}+\T_{\gamma v}^{(3)}=\B_{\gamma},
&&\T_{\gamma k}^{(4)}=\T_{\gamma v}^{(4)},
&& \T_{abk}^{(5)}=-\T_{abk}^{(6)},
\end{aligned}
\end{equation}
where $ a,b\in\left\{ \gamma,\epsilon_{1},\epsilon_{2}\right\}$.
\end{theorem}
\begin{proof}
The result is a consequence of the accuracies of the derivative and extrapolation operators, \eg, for the exact adjoint solution $ \D_{k}^{(2)}\bm{\psi}_{k}+\bm{g}_{k} = [\nabla\cdot(\lambda\nabla\psi_k)]_{S_k}+\fnc{G}_k|_{S_{k}} + {\mathcal{O}}(h^{p-1}) = {\mathcal{O}}(h^{p-1}) $. Similarly, $ \Dgk\bm{\psi}_{k} = -\D_{\gamma v}\bm{\psi}_{v}=[\bm{n}\cdot(\lambda\nabla\psi_k)]_{S_\gamma} +\order{h^{p}}$ and $ \Rgk\bm{\psi}_{k}=\psi_k|_{S_\gamma} + {\cal O}(h^{p+1}) $, which also hold for the extrapolations at the other facets. The desired result is obtained by substituting these approximations into \cref{eq:Discrete Adjoint operator} and using the coefficients in \cref{eq:Coefficients for adjoint consistency}.
\end{proof}
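In an implementation, the coefficient relations \cref{eq:Coefficients for adjoint consistency} can be verified numerically for a candidate set of SAT penalty matrices before a simulation is run. The following NumPy sketch is illustrative only; it assumes the coefficient matrices are available as dense arrays, and the function name is arbitrary.
\begin{verbatim}
import numpy as np

def check_adjoint_consistency(T1k, T1v, T2k, T2v, T3k, T3v,
                              T4k, T4v, T5, T6, Bg, tol=1e-12):
    """Check the relations T1k = T1v, T2k + T2v = -Bg,
    T3k + T3v = Bg, T4k = T4v, and T5 = -T6."""
    return all([np.allclose(T1k, T1v, atol=tol),
                np.allclose(T2k + T2v, -Bg, atol=tol),
                np.allclose(T3k + T3v, Bg, atol=tol),
                np.allclose(T4k, T4v, atol=tol),
                np.allclose(T5, -T6, atol=tol)])
\end{verbatim}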
\violet{All but the second and fourth} conditions in \cref{thm:Adjoint consistency} are required for conservation (see \cref{thm:Conservation}). A similar analysis in \cite{hicken2011superconvergent} shows that an additional condition is required for a conservative discretization to be adjoint consistent, and such a requirement can also be inferred from \cite{arnold2002unified,hartmann2013higher}. The following corollary follows from a comparison of the conditions presented in \cref{thm:Adjoint consistency,thm:Conservation}.
\begin{corollary}
Adjoint consistency is a sufficient but not necessary condition for conservation.
\end{corollary}
\begin{remark}
In the DG literature, conservation properties of the numerical fluxes associated with the solution and with the gradient of the solution are used to define conservation and adjoint consistency. A numerical discretization is conservative if the numerical flux associated with the gradient of the solution is conservative, and the discretization is adjoint consistent if the numerical fluxes associated with both the solution and the gradient of the solution are conservative \cite{arnold2002unified,hartmann2013higher}. Conservation of the numerical flux associated with the gradient amounts to applying integration by parts once, as in \cref{eq:Conservation IBP}, and requiring that all SATs except those corresponding to $ \int_{\Gamma^B}(\lambda\nabla\fnc{U})\cdot\bm{n} \dd{\Gamma}$ vanish for $ \bm{v=1} $ in the discretized equation. On the other hand, conservation of both numerical fluxes amounts to applying integration by parts twice on the diffusive term and requiring that all SATs in the discretization except those corresponding to $ \int_{\Gamma^B}\fnc{V}(\lambda\nabla\fnc{U})\cdot\bm{n} - \fnc{U}(\lambda\nabla\fnc{V})\cdot\bm{n}\dd{\Gamma} $ vanish for any smooth test function $ \fnc{V}$. This gives $ (\tilde{\T}^{\star})^T $ instead of $ \T^{\star} $ in \cref{eq:Residual 2nd form}, which in turn yields the same conditions for conservation as those stated for adjoint consistency in \cref{eq:Coefficients for adjoint consistency}. The ambiguity in the definition of conservation, \ie, whether to require the SATs to vanish after a single or a double application of integration by parts, is discussed in \cite{carpenter2010revisiting}. Furthermore, in \cite{ghasemi2020conservation,nordstrom2017relation} the latter definition is used to conclude that adjoint consistency is a necessary and sufficient condition for conservation.
\end{remark}
\begin{remark}
The discrete adjoint operator, \cref{eq:Discrete Adjoint operator}, is derived for compact \violet{stencil} SATs in \cite{yan2018interior} using the variational relation $ J_{h}^{\prime}[\bm{u}_h](\delta \bm{u})+R_{h}^{\prime}[\bm{u}_h]\left(\delta \bm{u},\psi\right) = 0$, where $ J_{h}^{\prime}[\bm{u}_h] $ and $ R_{h}^{\prime}[\bm{u}_h] $ denote the Fr\'{e}chet derivatives of $ J_h $ and $ R_h $ with respect to $ \bm{u}_h $, respectively, and $ \delta\bm{u}_h $ is the variation of $ \bm{u}_h $. This suggests that the modification of the functional in \cref{eq:Functional discrete 2} is necessary for adjoint consistency, since our approach to obtain \cref{eq:Discrete Adjoint operator} relies on this modification.
\end{remark}
\begin{remark}
\cref{thm:Adjoint consistency} implies that adjoint consistent schemes need not have a symmetric stiffness matrix resulting from the residual \cref{eq:Residual 3rd form}. However, as in \cite{yan2018interior}, we consider adjoint consistent SATs that yield a symmetric stiffness matrix by requiring $ \T_{\gamma k}^{(3)}-\T_{\gamma k}^{(2)}=\B_{\gamma} $.
\end{remark}
\subsection{Functional accuracy} \label{sec:Functional accuracy}
The accuracy of the target functional depends on the primal and adjoint consistency of the SBP-SAT discretization of the underlying PDE. We establish functional error estimates for primal and adjoint consistent SBP-SAT discretizations of the Poisson problem on unstructured curvilinear grids. The following assumption will be necessary to proceed with the analysis.
\begin{assumption} \label{assu:Solution error}
Unique numerical solutions to the primal and adjoint equations, $ \bm{u}_{h,k} $ and $ \bm{\psi}_{h,k} $ respectively, exist, and as $ h\rightarrow 0 $ they approximate the exact primal and adjoint solutions, $ \bm{u}_{k} $ and $ \bm{\psi}_{k} $ respectively, to order $ h^{p+1} $ in the infinity norm, \ie,
\begin{equation*}
\begin{aligned}
\norm{{\bm{u}_{h,k}-\bm{u}_{k}}}_\infty&=\order{h^{p+1}},
\;\text{\;and}
&&
\norm{\bm{\psi}_{h,k}-\bm{\psi}_{k}}_\infty=\order{h^{p+1}}.
\end{aligned}
\end{equation*}
\end{assumption}
The reason for invoking \cref{assu:Solution error} is related to difficulties in showing that the SBP-SAT discretization is pointwise stable (see, \eg, \cite{hicken2012output,hicken2011superconvergent,penner2020superconvergent} for similar assumptions, and \cite{gustafsson1981convergence,svard2006order,svard2019convergence} for analyses of convergence rates in a one-dimensional framework). Despite the fact that the truncation error for the primal and adjoint discretizations is $ {\mathcal{O}}(h^{p-1})$, numerical experiments show that the estimates in \cref{assu:Solution error} are usually attained.
In fact, the functional error estimate in \cref{thm:Functional accuracy} below holds even if the primal and adjoint solution errors are only of order $ h^p $. To simplify the analysis, we first consider the case where the discretization has only one element and later show that the error estimate holds for more general cases.
\subsubsection{Functional error estimate on a single element}
The residual for the discrete Poisson problem, RHS of \cref{eq:SBP-SAT discretization}, on a single element premultiplied by $ \bm{\psi}_{k}^T\H_k $ reads
\begin{equation} \label{eq:Residual 1elem 1st form}
\begin{aligned}
\bm{\psi}_{k}^{T}\H_{k}R_{h,u}
&=-\bm{\psi}_{k}^{T}\H_{k}\D_{k}^{(2)}\left(\bm{u}_{h,k}-\bm{u}_{k}\right)
-\bm{\psi}_{k}^{T}\H_{k}\D_{k}^{(2)}\bm{u}_{k}
-\bm{\psi}_{k}^{T}\H_{k}\bm{f}_{k}
+\sum_{\gamma\subset\Gamma^{D}}\bm{\psi}_{k}^{T}\R_{\gamma k}^{T}\T_{\gamma}^{(D)}\left(\R_{\gamma k}\bm{u}_{h,k}-\bm{u}_{\gamma k}\right)
\\
&\quad
-\sum_{\gamma\subset\Gamma^{D}}\bm{\psi}_{k}^{T}\D_{\gamma k}^{T}\B_{\gamma}\left(\R_{\gamma k}\bm{u}_{h,k}-\bm{u}_{\gamma k}\right)
+\sum_{\gamma\subset\Gamma^{N}}\bm{\psi}_{k}^{T}\R_{\gamma k}^{T}\B_{\gamma}\left(\D_{\gamma k}\bm{u}_{h,k}-\bm{w}_{\gamma k}\right) = 0,
\end{aligned}
\end{equation}
where we have added and subtracted the term $ \bm{\psi}_{k}^{T}\H_{k}\D_{k}^{(2)}\bm{u}_{k} $. Similarly, the residual of the discrete adjoint problem, \cref{eq:Discrete adjoint problem}, premultiplied with $ \bm{u}^T_{k}\H_k $ can be written as
\begin{equation}\label{eq:Residual 1elem 2nd form}
\begin{aligned}
\bm{u}_{k}^{T}\H_{k}R_{h,\psi}&=-\bm{u}_{k}^{T}\H_{k}\D_{k}^{(2)}\left(\bm{\psi}_{h,k}-\bm{\psi}_{k}\right)-\bm{u}_{k}^{T}\H_{k}\D_{k}^{(2)}\bm{\psi}_{k}-\bm{u}_{k}^{T}\H_{k}\bm{g}_{k}+\sum_{\gamma\subset\Gamma^{D}}\bm{u}_{k}^{T}\R_{\gamma k}^{T}\T_{\gamma}^{(D)}\left(\R_{\gamma k}\bm{\psi}_{h,k}-\bm{\psi}_{\gamma k}\right)\\&\quad-\sum_{\gamma\subset\Gamma^{D}}\bm{u}_{k}^{T}\D_{\gamma k}^{T}\B_{\gamma}\left(\R_{\gamma k}\bm{\psi}_{h,k}-\bm{\psi}_{\gamma k}\right)+\sum_{\gamma\subset\Gamma^{N}}\bm{u}_{k}^{T}\R_{\gamma k}^{T}\B_{\gamma}\left(\D_{\gamma k}\bm{\psi}_{h,k}-\bm{z}_{\gamma k}\right)=0.
\end{aligned}
\end{equation}
We will use the above forms of the residuals, \cref{eq:Residual 1elem 1st form} and \cref{eq:Residual 1elem 2nd form}, to prove the following \violet{theorem}.
\begin{theorem} \label{thm:Functional accuracy}
If \cref{assu: mapping,assu:Coefficient matrices,assu:Solution error} hold, \violet{$\fnc{U}, \psi\in \fnc{C}^{2p+2}(\Omega) $, $\lambda \in \fnc{C}^{2p+1}(\Omega) $}, and $ \bm{u}_{h,k} \in \IR{n_p}$ is a solution to a consistent and adjoint consistent SBP-SAT discretization of the form \cref{eq:SBP-SAT discretization}, then the discrete functional \cref{eq:Functional discrete} is an order $ h^{2p} $ approximation to the compatible linear functional \cref{eq:Functional}, \ie,
\begin{equation} \label{eq:Functional error estimate}
\fnc{I}(\fnc{U}) - I_h(\bm{u}_{h}) = \order{h^{2p}}.
\end{equation}
\end{theorem}
\begin{proof}
The \blue{diagonal} norm matrices $ \H_k $ and $ \B_{\gamma} $ contain quadrature weights that are accurate to at least order $h^{2p}$ and $ h^{2p+1} $, respectively. We discretize \cref{eq:Functional} using these quadratures as
\begin{align} \label{eq:Functional 1}
{\mathcal I}\left({\mathcal U}\right)&=\int_{\Omega}{\mathcal G}{\mathcal U}\dd{\Omega}-\int_{\Gamma^{D}}\psi_{D}\left(\lambda\nabla{\mathcal U}\right)\cdot\bm{n}\dd{\Gamma}+\int_{\Gamma^{N}}\psi_{N}{\mathcal {\mathcal U}}\dd{\Gamma}
\nonumber
\\&
=\bm{g}_k^{T}\H_{k}\bm{u}_{k}-\sum_{\gamma\subset\Gamma^{D}}\bm{\psi}_{\gamma k}^{T}\B_{\gamma}\bm{w}_{\gamma k}+\sum_{\gamma \subset \Gamma^{N}}\bm{z}_{\gamma k}^{T}\B_{\gamma}\bm{u}_{\gamma k}+{\mathcal O}\left(h^{2p}\right).
\end{align}
From \cref{eq:Adjoint relation}, the compatibility of the linear functional implies
\begin{align}
{\cal I}\left({\cal U}\right)=\fnc{I}\left(\psi\right)&=\int_{\Omega}{\cal F}\psi\dd{\Omega}-\int_{\Gamma^{D}}\mathcal{U}_{D}\left(\lambda\nabla{\cal \psi}\right)\cdot\bm{n}\dd{\Gamma}+\int_{\Gamma^{N}}{\cal {\cal U}}_{N}\psi\dd{\Gamma} \nonumber
\\&=\bm{f}_{k}^{T}\H_{k}\bm{\psi}_{k}-\sum_{\gamma\subset\Gamma^{D}}\bm{u}_{\gamma k}^{T}\B_{\gamma}\bm{z}_{\gamma k}+\sum_{\gamma\subset\Gamma^{N}}\bm{w}_{\gamma k}^{T}\B_{\gamma}\bm{\psi}_{\gamma k}+{\cal O}\left(h^{2p}\right).\label{eq:Compatiblility in proof}
\end{align}
Subtracting \cref{eq:Functional discrete} from \cref{eq:Functional 1} and rearranging, we obtain
\begin{equation} \label{eq:Functional in proof}
\begin{aligned}
{\cal I}\left({\cal U}\right)&={\cal I}_{h}\left(\bm{u}_{h}\right)-\bm{g}_{k}^{T}\H_{k}\left(\bm{u}_{h,k}-\bm{u}_{k}\right)+\sum_{\gamma\subset\Gamma^{D}}\bm{\psi}_{\gamma k}^{T}\B_{\gamma}\left(\D_{\gamma k}\bm{u}_{h,k}-\bm{w}_{\gamma k}\right)
\\&\quad -\sum_{\gamma\subset\Gamma^{N}}\bm{z}_{\gamma k}^{T}\B_{\gamma}\left(\R_{\gamma k}\bm{u}_{h,k}-\bm{u}_{\gamma k}\right)
-\sum_{\gamma\subset\Gamma^{D}}\bm{\psi}_{\gamma k}^{T}\T_{\gamma}^{(D)}\left(\R_{\gamma k}\bm{u}_{h,k}-\bm{u}_{\gamma k}\right)+{\cal O}\left(h^{2p}\right).
\end{aligned}
\end{equation}
\violet{Adding and subtracting terms, we can rewrite \cref{eq:Functional in proof} as
\begin{equation} \label{eq:Functional in proof 1}
\begin{aligned}
{\cal I}\left({\cal U}\right)&={\cal I}_{h}\left(\bm{u}_{h}\right)-\bm{g}_{k}^{T}\H_{k}\left(\bm{u}_{h,k}-\bm{u}_{k}\right)+\sum_{\gamma\subset\Gamma^{D}}\bm{\psi}_{\gamma k}^{T}\B_{\gamma}\left(\D_{\gamma k}\bm{u}_{h,k}-\D_{\gamma k}\bm{u}_{k}\right)+\sum_{\gamma\subset\Gamma^{D}}\bm{\psi}_{\gamma k}^{T}\B_{\gamma}\left(\D_{\gamma k}\bm{u}_{k}-\bm{w}_{\gamma k}\right)\\&\quad-\sum_{\gamma\subset\Gamma^{N}}\bm{z}_{\gamma k}^{T}\B_{\gamma}\left(\R_{\gamma k}\bm{u}_{h,k}-\R_{\gamma k}\bm{u}_{k}\right)-\sum_{\gamma\subset\Gamma^{N}}\bm{z}_{\gamma k}^{T}\B_{\gamma}\left(\R_{\gamma k}\bm{u}_{k}-\bm{u}_{\gamma k}\right)\\&\quad-\sum_{\gamma\subset\Gamma^{D}}\bm{\psi}_{\gamma k}^{T}\T_{\gamma}^{(D)}\left(\R_{\gamma k}\bm{u}_{h,k}-\R_{\gamma k}\bm{u}_{k}\right)-\sum_{\gamma\subset\Gamma^{D}}\bm{\psi}_{\gamma k}^{T}\T_{\gamma}^{(D)}\left(\R_{\gamma k}\bm{u}_{k}-\bm{u}_{\gamma k}\right)+{\cal O}\left(h^{2p}\right).
\end{aligned}
\end{equation}
Adding \cref{eq:Residual 1elem 1st form} to \cref{eq:Functional in proof 1} and simplifying, we have
\begin{equation} \label{eq:Functional in proof 2}
\begin{aligned}
{\cal I}\left({\cal U}\right)&={\cal I}_{h}\left(\bm{u}_{h}\right)+\bigg\{\sum_{\gamma\subset\Gamma^{D}}\bm{\psi}_{\gamma k}^{T}\B_{\gamma}\D_{\gamma k}\H_{k}^{-1}-\bm{g}_{k}^{T}-\sum_{\gamma\subset\Gamma^{N}}\bm{z}_{\gamma k}^{T}\B_{\gamma}\R_{\gamma k}\H_{k}^{-1}-\sum_{\gamma\subset\Gamma^{D}}\bm{\psi}_{k}^{T}\D_{\gamma k}^{T}\B_{\gamma}\R_{\gamma k}\H_{k}^{-1}
\\&\quad-\bm{\psi}_{k}^{T}\H_{k}\D_{k}^{(2)}\H_{k}^{-1}+\sum_{\gamma\subset\Gamma^{D}}\left(\bm{\psi}_{k}^{T}\R_{\gamma k}^{T}-\bm{\psi}_{\gamma k}^{T}\right)\T_{\gamma}^{(D)}\R_{\gamma k}\H_{k}^{-1}+\sum_{\gamma\subset\Gamma^{N}}\bm{\psi}_{k}^{T}\R_{\gamma k}^{T}\B_{\gamma}\D_{\gamma k}\H_{k}^{-1}\bigg\}\H_{k}\left(\bm{u}_{h,k}-\bm{u}_{k}\right)
\\&\quad+\sum_{\gamma\subset\Gamma^{D}}\bm{\psi}_{\gamma k}^{T}\B_{\gamma}\left(\D_{\gamma k}\bm{u}_{k}-\bm{w}_{\gamma k}\right)-\sum_{\gamma\subset\Gamma^{N}}\bm{z}_{\gamma k}^{T}\B_{\gamma}\left(\R_{\gamma k}\bm{u}_{k}-\bm{u}_{\gamma k}\right)-\sum_{\gamma\subset\Gamma^{D}}\bm{\psi}_{k}^{T}\D_{\gamma k}^{T}\B_{\gamma}\left(\R_{\gamma k}\bm{u}_{k}-\bm{u}_{\gamma k}\right)
\\&\quad+\sum_{\gamma\subset\Gamma^{N}}\bm{\psi}_{k}^{T}\R_{\gamma k}^{T}\B_{\gamma}\left(\D_{\gamma k}\bm{u}_{k}-\bm{w}_{\gamma k}\right)-\bm{\psi}_{k}^{T}\H_{k}\D_{k}^{(2)}\bm{u}_{k}-\bm{\psi}_{k}^{T}\H_{k}\bm{f}_{k}+{\cal O}\left(h^{2p}\right).
\end{aligned}
\end{equation}
Applying identity \cref{eq:D2 identity 2} in \cref{eq:Functional in proof 2} and simplifying gives
\begin{align} \label{eq:Functional in proof 3}
{\cal I}\left({\cal U}\right)&={\cal I}_{h}\left(\bm{u}_{h}\right)+\bigg\{-\bm{g}_{k}^{T}-\bm{\psi}_{k}^{T}\left(\D_{k}^{(2)}\right)^{T}-\sum_{\gamma\subset\Gamma^{D}}\left(\bm{\psi}_{k}^{T}\R_{\gamma k}^{T}-\bm{\psi}_{\gamma k}^{T}\right)\B_{\gamma}\D_{\gamma k}\H_{k}^{-1}+\sum_{\gamma\subset\Gamma^{N}}\left(\bm{\psi}_{k}^{T}\D_{\gamma k}^{T}-\bm{z}_{\gamma k}^{T}\right)\B_{\gamma}\R_{\gamma k}\H_{k}^{-1}
\nonumber
\\&\quad+\sum_{\gamma\subset\Gamma^{D}}\left(\bm{\psi}_{k}^{T}\R_{\gamma k}^{T}-\bm{\psi}_{\gamma k}^{T}\right)\T_{\gamma}^{(D)}\R_{\gamma k}\H_{k}^{-1}\bigg\}\H_{k}\left(\bm{u}_{h,k}-\bm{u}_{k}\right)+\sum_{\gamma\subset\Gamma^{D}}\bm{\psi}_{\gamma k}^{T}\B_{\gamma}\left(\D_{\gamma k}\bm{u}_{k}-\bm{w}_{\gamma k}\right)-\sum_{\gamma\subset\Gamma^{N}}\bm{z}_{\gamma k}^{T}\B_{\gamma}\left(\R_{\gamma k}\bm{u}_{k}-\bm{u}_{\gamma k}\right)
\nonumber
\\&\quad-\sum_{\gamma\subset\Gamma^{D}}\bm{\psi}_{k}^{T}\D_{\gamma k}^{T}\B_{\gamma}\left(\R_{\gamma k}\bm{u}_{k}-\bm{u}_{\gamma k}\right)+\sum_{\gamma\subset\Gamma^{N}}\bm{\psi}_{k}^{T}\R_{\gamma k}^{T}\B_{\gamma}\left(\D_{\gamma k}\bm{u}_{k}-\bm{w}_{\gamma k}\right)-\bm{\psi}_{k}^{T}\H_{k}\D_{k}^{(2)}\bm{u}_{k}-\bm{\psi}_{k}^{T}\H_{k}\bm{f}_{k}+{\cal O}\left(h^{2p}\right).
\end{align}
The sum of the terms in the curly braces is the transpose of $ L_{h,k}^* (\bm{\psi}_k) - \bm{g}_k$, which is order $ h^{p-1} $ due to \cref{thm:Adjoint consistency}; moreover, $ \H_{k}(\bm{u}_{h,k}-\bm{u}_{k}) = \fnc{O} (h^{p+3})$, since $ \H_k = \J_k\hat{\H} = \fnc{O}(h^2)$ and $ \norm*{\bm{u}_{h,k}-\bm{u}_{k}}_\infty = \fnc{O} (h^{p+1}) $ due to \cref{assu:Solution error}. The product of these two factors is therefore $ \fnc{O}(h^{2p+2}) $ and is absorbed into the $ \fnc{O}(h^{2p}) $ term. Hence, we have
\begin{equation} \label{eq:Functional in proof 4}
\begin{aligned}
{\cal I}\left({\cal U}\right)&={\cal I}_{h}\left(\bm{u}_{h}\right)+\sum_{\gamma\subset\Gamma^{D}}\bm{\psi}_k^T \R_{\gamma k}^T\B_{\gamma}\left(\D_{\gamma k}\bm{u}_{k}-\bm{w}_{\gamma k}\right)-\sum_{\gamma\subset\Gamma^{N}}\bm{\psi}_k^T \D_{\gamma k}^T\B_{\gamma}\left(\R_{\gamma k}\bm{u}_{k}-\bm{u}_{\gamma k}\right)-\sum_{\gamma\subset\Gamma^{D}}\bm{\psi}_{k}^{T}\D_{\gamma k}^{T}\B_{\gamma}\left(\R_{\gamma k}\bm{u}_{k}-\bm{u}_{\gamma k}\right)\\&\quad+\sum_{\gamma\subset\Gamma^{N}}\bm{\psi}_{k}^{T}\R_{\gamma k}^{T}\B_{\gamma}\left(\D_{\gamma k}\bm{u}_{k}-\bm{w}_{\gamma k}\right)-\bm{\psi}_{k}^{T}\H_{k}\D_{k}^{(2)}\bm{u}_{k}-\bm{\psi}_{k}^{T}\H_{k}\bm{f}_{k}+{\cal O}\left(h^{2p}\right),
\end{aligned}
\end{equation}
where the relations $ (\bm{\psi}_{\gamma k}^{T}- \bm{\psi}_k^T \R_{\gamma k}^T) \B_{\gamma}(\D_{\gamma k}\bm{u}_{k}-\bm{w}_{\gamma k}) = \fnc{O}(h^{p+2})$ and $ (\bm{z}_{\gamma k}^{T}- \bm{\psi}_k^T \D_{\gamma k}^T) \B_{\gamma}(\R_{\gamma k}\bm{u}_{k}-\bm{u}_{\gamma k}) = \fnc{O}(h^{p+2})$ are used to obtain the second and third terms on the RHS, respectively. We can further simplify \cref{eq:Functional in proof 4} as
\begin{equation} \label{eq:Functional in proof 5}
{\cal I}\left({\cal U}\right)={\cal I}_{h}\left(\bm{u}_{h}\right)-\bigg\{\bm{\psi}_{k}^{T}\H_{k}\bm{f}_{k}+\bm{\psi}_{k}^{T}\H_{k}\D_{k}^{(2)}\bm{u}_{k}-\sum_{\gamma\subset\Gamma^{B}}\bm{\psi}_{k}^{T}\R_{\gamma k}^T\B_{\gamma}\left(\D_{\gamma k}\bm{u}_{k}-\bm{w}_{\gamma k}\right)+\sum_{\gamma\subset\Gamma^{B}}\bm{\psi}_{k}^{T}\D_{\gamma k}^{T}\B_{\gamma}\left(\R_{\gamma k}\bm{u}_{k}-\bm{u}_{\gamma k}\right)\bigg\}+{\cal O}\left(h^{2p}\right).
\end{equation}
Moreover, using \cref{eq:Functional 1}, \cref{eq:Compatiblility in proof}, and \cref{eq:D2 identity 2}, a straightforward algebraic manipulation of \cref{eq:Functional in proof 4} yields
\begin{equation} \label{eq:Functional in proof 6}
{\cal I}\left({\cal U}\right)={\cal I}_{h}\left(\bm{u}_{h}\right)-\bigg\{\bm{u}_{k}^{T}\H_{k}\bm{g}_{k}+\bm{u}_{k}^{T}\H_{k}\D_{k}^{(2)}\bm{\psi}_{k}-\sum_{\gamma\subset\Gamma^{B}}\bm{u}_{k}^{T}\R_{\gamma k}^{T}\B_{\gamma}\left(\D_{\gamma k}\bm{\psi}_{k}-\bm{z}_{\gamma k}\right)+\sum_{\gamma\subset\Gamma^{B}}\bm{u}_{k}^{T}\D_{\gamma k}^{T}\B_{\gamma}\left(\R_{\gamma k}\bm{\psi}_{k}-\bm{\psi}_{\gamma k}\right)\bigg\}+{\cal O}\left(h^{2p}\right).
\end{equation}
Subtracting \cref{eq:Functional in proof 6} from \cref{eq:Functional in proof 5}, we find
\begin{equation} \label{eq:Functional difference}
\tau_{u}-\tau_{\psi}={\cal O}(h^{2p}),
\end{equation}
where
\begin{align}
\tau_u &\coloneqq \bm{\psi}_{k}^{T}\H_{k}\bm{f}_{k}+\bm{\psi}_{k}^{T}\H_{k}\D_{k}^{(2)}\bm{u}_{k}-\sum_{\gamma\subset\Gamma^{B}}\bm{\psi}_{k}^{T}\R_{\gamma k}^T\B_{\gamma}\left(\D_{\gamma k}\bm{u}_{k}-\bm{w}_{\gamma k}\right)+\sum_{\gamma\subset\Gamma^{B}}\bm{\psi}_{k}^{T}\D_{\gamma k}^{T}\B_{\gamma}\left(\R_{\gamma k}\bm{u}_{k}-\bm{u}_{\gamma k}\right),
\label{eq:tau_u}
\\
\tau_{\psi} &\coloneqq \bm{u}_{k}^{T}\H_{k}\bm{g}_{k}+\bm{u}_{k}^{T}\H_{k}\D_{k}^{(2)}\bm{\psi}_{k}-\sum_{\gamma\subset\Gamma^{B}}\bm{u}_{k}^{T}\R_{\gamma k}^{T}\B_{\gamma}\left(\D_{\gamma k}\bm{\psi}_{k}-\bm{z}_{\gamma k}\right)+\sum_{\gamma\subset\Gamma^{B}}\bm{u}_{k}^{T}\D_{\gamma k}^{T}\B_{\gamma}\left(\R_{\gamma k}\bm{\psi}_{k}-\bm{\psi}_{\gamma k}\right).
\label{eq:tau_psi}
\end{align}
}
\violet{For an affine mapping, the derivative and extrapolation operators in \cref{eq:tau_u} and \cref{eq:tau_psi} are exact for polynomials of degree $ p $, and since all the terms in $ \tau_{u} $ and $ \tau_{\psi} $ are integrals approximated by quadrature rules of degree $ \ge 2p-1 $, it is sufficient if we can show that \cref{eq:Functional error estimate} holds for polynomial integrands of total degree $ 2p-1 $. A similar technique is used to show quadrature accuracy in \cite{hicken2013summation}. If we set $ \fnc{U}\in \poly{p} $ such that $ (\psi \fnc{U}) \in \poly{2p+1} $, then we have $ \tau_{u}=\fnc{O}(h^{2p}) $ due to the accuracy of the SBP operators and the primal PDE. Similarly, if we set $ \psi \in \poly{p} $ such that $ (\psi \fnc{U}) \in \poly{2p+1} $, then we obtain $ \tau_{\psi}=\fnc{O}(h^{2p}) $ due to the accuracy of the SBP operators and the adjoint PDE. Hence, we obtain $ \fnc{I}(\fnc{U}) - I_h(\bm{u}_{h}) = \fnc{O}(h^{2p}) $. On curved elements, the SBP operators are not exact for polynomials of degree greater than zero. However, since \cref{eq:Functional difference} must hold for all combinations of $ \cal{U} $ and $ \psi $, we conclude that each of the error terms, $\tau_{u}$ and $ \tau_{\psi} $, must be $ \fnc{O}(h^{2p}) $, which gives $ \fnc{I}(\fnc{U}) - I_h(\bm{u}_{h}) = \fnc{O}(h^{2p}) $, as desired.}
\end{proof}
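In practice, the estimate \cref{eq:Functional error estimate} is verified by computing the functional error on a sequence of refined grids and measuring the observed order of convergence. A minimal sketch of such a rate computation is shown below; the error values are placeholders for illustration only and are not results of this work.
\begin{verbatim}
import numpy as np

# mesh sizes and corresponding functional errors |I(U) - I_h(u_h)|
# (placeholder values, for illustration only)
h   = np.array([1/4, 1/8, 1/16, 1/32])
err = np.array([2.1e-3, 1.4e-4, 8.9e-6, 5.6e-7])

# observed convergence rates between consecutive refinements;
# adjoint consistent schemes should approach 2p as h -> 0
rates = np.log(err[:-1] / err[1:]) / np.log(h[:-1] / h[1:])
print(rates)  # approximately 4 for p = 2 in this synthetic example
\end{verbatim}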
\subsubsection{Functional error estimate with interior facet SATs}
In this subsection, we will show that the functional estimate established in \cref{thm:Functional accuracy} holds for primal and adjoint consistent interior facet SATs. We consider two elements $ \Omega_k $ and $ \Omega_v $ sharing interface $ \gamma $ and introduce the following vectors and block matrices
\begin{equation}
\begin{aligned}
\bm{u}_{h}&=\begin{bmatrix}
\bm{u}_{h,k}^{T} & \bm{u}_{h,v}^{T} & \bm{u}_{h,g_{1}}^{T} & \bm{u}_{h,g_{2}}^{T} & \bm{u}_{h,q_{1}}^{T} & \bm{u}_{h,q_{2}}^{T}\end{bmatrix}^{T},&\mathbb{H}_{11}&=\H_{k},\\\bm{u}&=\left[\begin{array}{cccccc}
\bm{u}_{k}^{T} & \bm{u}_{v}^{T} & \bm{u}_{g_{1}}^{T} & \bm{u}_{g_{2}}^{T} & \bm{u}_{q_{1}}^{T} & \bm{u}_{q_{2}}^{T}\end{array}\right]^{T},&\mathbb{H}_{22}&=\H_{v},
\\\bm{\psi}_{h}&=\begin{bmatrix}
\bm{\psi}_{h,k}^{T} & \bm{\psi}_{h,v}^{T} & \bm{\psi}_{h,g_{1}}^{T} & \bm{\psi}_{h,g_{2}}^{T} & \bm{\psi}_{h,q_{1}}^{T} & \bm{\psi}_{h,q_{2}}^{T}\end{bmatrix}^{T},&\mathbb{D}_{11}&=\D_{k},
\\\bm{\psi}&=\begin{bmatrix}
\bm{\psi}_{k}^{T} & \bm{\psi}_{v}^{T} & \bm{\psi}_{g_{1}}^{T} & \bm{\psi}_{g_{2}}^{T} & \bm{\psi}_{q_{1}}^{T} & \bm{\psi}_{q_{2}}^{T}\end{bmatrix}^{T},&\mathbb{D}_{22}&=\D_{v},
\\\bm{f}&=\begin{bmatrix}
\bm{f}_{k}^{T} & \bm{f}_{v}^{T} & \bm{0} & \bm{0} & \bm{0} & \bm{0}\end{bmatrix}^{T},&\left(\mathbb{D}_{\Lambda}\right)_{11}&=\D_{k}\Lambda_{k},
\\\bm{g}&=\begin{bmatrix}
\bm{g}_{k}^{T} & \bm{g}_{v}^{T} & \bm{0} & \bm{0} & \bm{0} & \bm{0}\end{bmatrix}^{T},&\left(\mathbb{D}_{\Lambda}\right)_{22}&=\D_{v}\Lambda_{v},
\end{aligned}
\end{equation}
where the vectors in the left column are in $ \IR{6n_p} $, and the matrices in the right column are $ 6\times 6 $ block matrices whose blocks are each in $ \IRtwo{n_p}{n_p} $. Except for the blocks specified, all blocks of the matrices in the right column are zero.
We can write the primal residual for the two elements premultiplied by $ \bm{\psi}^{T}\mathbb{H} $ as
\begin{align} \label{eq:Residual 1st form 2elem}
\bm{\psi}^{T}\mathbb{H} R_{h,u}&=-\bm{\psi}^{T}\mathbb{H}\mathbb{D}\mathbb{D}_{\Lambda}\left(\bm{u}_{h}-\bm{u}\right)-\bm{\psi}^{T}\mathbb{H}\mathbb{D}\mathbb{D}_{\Lambda}\bm{u}-\bm{\psi}^{T}\mathbb{H}\bm{f}
+\bm{\psi}^{T}\mathbb{A}\left(\bm{u}_{h}-\bm{u}\right)
+\violet{\bm{\psi}^{T}\mathbb{A}\bm{u}}
+\bm{\psi}^{T}\mathbb{B}\left(\bm{u}_{h}-\bm{u}\right) +\violet{\bm{\psi}^{T}\mathbb{B}\bm{u}} = 0,
\end{align}
where we have dropped all SATs except those at the facet $ \gamma $, and the nonzero blocks of the block matrices $ \mathbb{A},\mathbb{B}\in \IRtwo{6n_p}{6n_p} $ are given by
\begin{equation}
\begin{aligned}
\mathbb{A}_{11}&=\R_{\gamma k}^{T}\T_{\gamma k}^{(1)}\R_{\gamma k}+\R_{\gamma k}^{T}\T_{\gamma\xi_{1}k}^{(5)}\R_{\xi_{1}k}+\R_{\gamma k}^{T}\T_{\gamma\xi_{2}k}^{(5)}\R_{\xi_{2}k},&\mathbb{A}_{13}&=-\R_{\gamma k}^{T}\T_{\gamma\xi_{1}g_{1}}^{(5)}\R_{\xi_{1}g_{1}},\\\mathbb{A}_{12}&=-\R_{\gamma k}^{T}\T_{\gamma k}^{(1)}\R_{\gamma v}+\R_{\gamma k}^{T}\T_{\gamma\delta_{1}v}^{(6)}\R_{\delta_{1}v}+\R_{\gamma k}^{T}\T_{\gamma\delta_{2}v}^{(6)}\R_{\delta_{2}v},&\mathbb{A}_{14}&=-\R_{\gamma k}^{T}\T_{\gamma\xi_{2}g_{2}}^{(5)}\R_{\xi_{2}g_{2}},\\\mathbb{A}_{21}&=-\R_{\gamma v}^{T}\T_{\gamma v}^{(1)}\R_{\gamma k}+\R_{\gamma v}^{T}\T_{\gamma\xi_{1}k}^{(6)}\R_{\xi_{1}k}+\R_{\gamma v}^{T}\T_{\gamma\xi_{2}k}^{(6)}\R_{\xi_{2}k},&\mathbb{A}_{15}&=-\R_{\gamma k}^{T}\T_{\gamma\delta_{1}v}^{(6)}\R_{\delta_{1}q_{1}},\\\mathbb{A}_{22}&=\R_{\gamma v}^{T}\T_{\gamma v}^{(1)}\R_{\gamma v}+\R_{\gamma v}^{T}\T_{\gamma\delta_{1}v}^{(5)}\R_{\delta_{1}v}+\R_{\gamma v}^{T}\T_{\gamma\delta_{2}v}^{(5)}\R_{\delta_{2}v},&\mathbb{A}_{16}&=-\R_{\gamma k}^{T}\T_{\gamma\delta_{2}v}^{(6)}\R_{\delta_{2}q_{2}},\\\mathbb{B}_{11}&=\D_{\gamma k}^{T}\T_{\gamma k}^{(2)}\R_{\gamma k}+\R_{\gamma k}^{T}\T_{\gamma k}^{(3)}\D_{\gamma k}+\D_{\gamma k}^{T}\T_{\gamma k}^{(4)}\D_{\gamma k},&\mathbb{A}_{23}&=-\R_{\gamma v}^{T}\T_{\gamma\xi_{1}k}^{(6)}\R_{\xi_{1}g_{1}},\\\mathbb{B}_{12}&=-\D_{\gamma k}^{T}\T_{\gamma k}^{(2)}\R_{\gamma v}+\R_{\gamma k}^{T}\T_{\gamma k}^{(3)}\D_{\gamma v}+\D_{\gamma k}^{T}\T_{\gamma k}^{(4)}\D_{\gamma v},&\mathbb{A}_{24}&=-\R_{\gamma v}^{T}\T_{\gamma\xi_{2}k}^{(6)}\R_{\xi_{2}g_{2}},\\\mathbb{B}_{21}&=-\D_{\gamma v}^{T}\T_{\gamma v}^{(2)}\R_{\gamma k}+\R_{\gamma v}^{T}\T_{\gamma v}^{(3)}\D_{\gamma k}+\D_{\gamma k}^{T}\T_{\gamma v}^{(4)}\D_{\gamma v},&\mathbb{A}_{25}&=-\R_{\gamma v}^{T}\T_{\gamma\delta_{1}v}^{(5)}\R_{\delta_{1}q_{1}},\\\mathbb{B}_{22}&=\D_{\gamma v}^{T}\T_{\gamma v}^{(2)}\R_{\gamma v}+\R_{\gamma v}^{T}\T_{\gamma v}^{(3)}\D_{\gamma v}+\D_{\gamma v}^{T}\T_{\gamma v}^{(4)}\D_{\gamma v},&\mathbb{A}_{26}&=-\R_{\gamma v}^{T}\T_{\gamma\delta_{2}v}^{(5)}\R_{\delta_{2}q_{2}}.
\end{aligned}
\end{equation}
For the discrete adjoint problem, \cref{eq:Discrete adjoint problem}, with extended interior facet SATs grouped as in \cref{eq:Adjoint Interface SAT extended terms}, the truncation error reads
\begin{equation} \label{eq:Residual adjoint 2elem}
e_{\psi}\equiv\mathbb{D}\mathbb{D}_{\Lambda}\bm{\psi}+\bm{g}-\mathbb{H}^{-1}\mathbb{K}\bm{\psi}-\mathbb{H}^{-1}\mathbb{L}\bm{\psi},
\end{equation}
where
\begin{equation}
\begin{aligned}
\mathbb{K}_{11}&=\R_{\gamma k}^{T}\T_{\gamma k}^{(1)}\R_{\gamma k}+\R_{\xi_{1}k}^{T}\T_{\xi_{1}\gamma k}^{(5)}\R_{\gamma k}+\R_{\xi_{2}k}^{T}\T_{\xi_{2}\gamma k}^{(5)}\R_{\gamma k},&\mathbb{K}_{31}&=-\R_{\xi_{1}g_{1}}^{T}\T_{\xi_{1}\gamma k}^{(5)}\R_{\gamma k},\\\mathbb{K}_{12}&=-\R_{\gamma k}^{T}\T_{\gamma v}^{(1)}\R_{\gamma v}+\R_{\xi_{1}k}^{T}\T_{\xi_{1}\gamma k}^{(6)}\R_{\gamma v}+\R_{\xi_{2}k}^{T}\T_{\xi_{2}\gamma k}^{(6)}\R_{\gamma v},&\mathbb{K}_{32}&=-\R_{\xi_{1}g_{1}}^{T}\T_{\xi_{1}\gamma k}^{(6)}\R_{\gamma v},\\\mathbb{K}_{21}&=-\R_{\gamma v}^{T}\T_{\gamma k}^{(1)}\R_{\gamma k}+\R_{\delta_{1}v}^{T}\T_{\delta_{1}\gamma v}^{(6)}\R_{\gamma k}+\R_{\delta_{2}v}^{T}\T_{\delta_{2}\gamma v}^{(6)}\R_{\gamma k},&\mathbb{K}_{41}&=-\R_{\xi_{2}g_{2}}^{T}\T_{\xi_{2}\gamma k}^{(5)}\R_{\gamma k},\\\mathbb{K}_{22}&=\R_{\gamma v}^{T}\T_{\gamma v}^{(1)}\R_{\gamma v}+\R_{\delta_{1}v}^{T}\T_{\delta_{1}\gamma v}^{(5)}\R_{\gamma v}+\R_{\delta_{2}v}^{T}\T_{\delta_{2}\gamma v}^{(5)}\R_{\gamma v},&\mathbb{K}_{42}&=-\R_{\xi_{2}g_{2}}^{T}\T_{\xi_{2}\gamma k}^{(6)}\R_{\gamma v},\\\mathbb{L}_{11}&=\R_{\gamma k}^{T}\left(\T_{\gamma k}^{(2)}+\B_{\gamma}\right)\D_{\gamma k}+\D_{\gamma k}^{T}\left(\T_{\gamma k}^{(3)}-\B_{\gamma}\right)\R_{\gamma k}+\D_{\gamma k}^{T}\T_{\gamma k}^{(4)}\D_{\gamma k},&\mathbb{K}_{51}&=-\R_{\delta_{1}q_{1}}^{T}\T_{\delta_{1}\gamma v}^{(6)}\R_{\gamma k},\\\mathbb{L}_{12}&=-\R_{\gamma k}^{T}\T_{\gamma v}^{(2)}\D_{\gamma v}+\D_{\gamma k}^{T}\T_{\gamma v}^{(3)}\R_{\gamma v}+\D_{\gamma k}^{T}\T_{\gamma v}^{(4)}\D_{\gamma v},&\mathbb{K}_{52}&=-\R_{\delta_{1}q_{1}}^{T}\T_{\delta_{1}\gamma v}^{(5)}\R_{\gamma v},\\\mathbb{L}_{21}&=-\R_{\gamma v}^{T}\T_{\gamma k}^{(2)}\D_{\gamma k}+\D_{\gamma v}^{T}\T_{\gamma k}^{(3)}\R_{\gamma k}+\D_{\gamma v}^{T}\T_{\gamma k}^{(4)}\D_{\gamma k},&\mathbb{K}_{61}&=-\R_{\delta_{2}q_{2}}^{T}\T_{\delta_{2}\gamma v}^{(6)}\R_{\gamma k},\\\mathbb{L}_{22}&=\R_{\gamma v}^{T}\left(\T_{\gamma v}^{(2)}+\B_{\gamma}\right)\D_{\gamma v}+\D_{\gamma v}^{T}\left(\T_{\gamma v}^{(3)}-\B_{\gamma}\right)\R_{\gamma v}+\D_{\gamma v}^{T}\T_{\gamma v}^{(4)}\D_{\gamma v},&\mathbb{K}_{62}&=-\R_{\delta_{2}q_{2}}^{T}\T_{\delta_{2}\gamma v}^{(5)}\R_{\gamma v}.
\end{aligned}
\end{equation}
\cref{thm:Adjoint consistency} ensures that $ e_{\psi} = {\cal O}(h^{p-1}) $ for adjoint consistent SATs.
Neglecting the boundary terms, we write the functional as
\begin{equation} \label{eq:Functional interface SATs}
{\mathcal I}\left({\mathcal U}\right)=\int_{\Omega}{\mathcal G}{\mathcal U}\dd{\Omega}=\int_{\Omega}\psi{\mathcal F}\dd{\Omega}=\bm{g}^{T}\mathbb{H}\bm{u}+ \order{h^{2p}} =\bm{f}^{T}\mathbb{H}\bm{\psi}+ \order{h^{2p}}.
\end{equation}
Adding \cref{eq:Residual 1st form 2elem} to $ \fnc{I}(\fnc{U})=\bm{g}^T\mathbb{H}\bm{u} + \fnc{O}(h^{2p}) = \bm{g}^T\mathbb{H}\bm{u}_h - \bm{g}^T\mathbb{H}(\bm{u}_h - \bm{u}) + \fnc{O}(h^{2p})$ and simplifying, we obtain
\begin{equation}\label{eq:Superconv with SATs}
\begin{aligned}
{\mathcal I}\left({\mathcal U}\right)&= I_{h}\left(\bm{u}_{h}\right)
+\left\{\bm{\psi}^{T}\mathbb{B}\mathbb{H}^{-1} -\bm{\psi}^{T}\mathbb{H}\mathbb{D}\mathbb{D}_{\Lambda}\mathbb{H}^{-1}-\bm{g}^{T}+\bm{\psi}^{T}\mathbb{A}\mathbb{H}^{-1}\right\} \mathbb{H}\left(\bm{u}_{h}-\bm{u}\right)
\\
&\quad
-\bm{\psi}^{T}\mathbb{H}\mathbb{D}\mathbb{D}_{\Lambda}\bm{u}-\bm{\psi}^{T}\mathbb{H}\bm{f} + \bm{\psi}^T\mathbb{A}\bm{u} + \bm{\psi}^T\mathbb{B}\bm{u} +\order{h^{2p}}.
\end{aligned}
\end{equation}
Identity \cref{eq:D2 identity 2} gives
\begin{equation} \label{eq:D2 idnetity 2 on SAT}
\mathbb{H}\mathbb{D}\mathbb{D}_{\Lambda}\mathbb{H}^{-1}=\mathbb{D}_{\Lambda}^{T}\mathbb{D}^{T}-\mathbb{C}\mathbb{H}^{-1},
\end{equation}
where the nonzero entries of $ \mathbb{C} $ are
\begin{equation}\label{eq: C matrix}
\begin{aligned}
\mathbb{C}_{11} & =\D_{\gamma k}^{T}\B_{\gamma}\R_{\gamma k}-\R_{\gamma k}^{T}\B_{\gamma}\D_{\gamma k}+\sum_{\epsilon\subset\Gamma_{k}}\left(\D_{\epsilon k}^{T}\B_{\epsilon}\R_{\epsilon k}-\R_{\epsilon k}^{T}\B_{\epsilon}\D_{\epsilon k}\right),\\
\mathbb{C}_{22} & =\D_{\gamma v}^{T}\B_{\gamma}\R_{\gamma v}-\R_{\gamma v}^{T}\B_{\gamma}\D_{\gamma v}+\sum_{\delta\subset\Gamma_{v}}\left(\D_{\delta v}^{T}\B_{\delta}\R_{\delta v}-\R_{\delta v}^{T}\B_{\delta}\D_{\delta v}\right).
\end{aligned}
\end{equation}
Substituting \cref{eq:D2 idnetity 2 on SAT} into \cref{eq:Superconv with SATs}, we find
\begin{equation} \label{eq:Functional interior SAT}
\begin{aligned}
{\mathcal I}\left({\mathcal U}\right)&={I}_{h}\left(\bm{u}_{h}\right)+
\left\{\bm{\psi}^{T}\left(\mathbb{B}+\mathbb{C}\right)\mathbb{H}^{-1}-\bm{\psi}^{T}\mathbb{D}_{\Lambda}^{T}\mathbb{D}^{T}-\bm{g}^{T}+\bm{\psi}^{T}\mathbb{A}\mathbb{H}^{-1}\right\} \mathbb{H}\left(\bm{u}_{h}-\bm{u}\right)
\\&\quad
-\bm{\psi}^{T}\mathbb{H}\mathbb{D}\mathbb{D}_{\Lambda}\bm{u}-\bm{\psi}^{T}\mathbb{H}\bm{f}
+\bm{\psi}^T\mathbb{A}\bm{u} + \bm{\psi}^T\mathbb{B}\bm{u}
+\order{h^{2p}}.
\end{aligned}
\end{equation}
We define $ \mathbb{C}_\gamma $ as the matrix $ \mathbb{C} $ without the terms on facets other than $ \gamma $, \violet{and replace $ \mathbb{C} $ in \cref{eq:Functional interior SAT} by $ \mathbb{C}_\gamma $}. We then note that $ \mathbb{A}=\mathbb{K}^{T} $ and $ \mathbb{B}+\mathbb{C}_{\gamma}=\mathbb{L}^{T} $, and the sum of the terms in the curly braces is equal to $ e_{\psi} ^T$, which is $\fnc{O}(h^{p-1}) $ for adjoint consistent interior facet SATs. \violet{Hence, the second term on the RHS of \cref{eq:Functional interior SAT} is order $ h^{2p+2} $. Therefore, the terms remaining due to the inclusion of interior SATs are $ \bm{\psi}^T\mathbb{A}\bm{u}$ and $ \bm{\psi}^T\mathbb{B}\bm{u} $. These terms are added to $ \tau_u $ given in \cref{eq:tau_u}. Similarly, using \cref{eq:Functional interface SATs} and \cref{eq:D2 idnetity 2 on SAT}, it is possible to show that $ \tau_{\psi} $ must include the adjoint interior SATs, $ \bm{u}^T\mathbb{K}\bm{\psi}$ and $ \bm{u}^T\mathbb{L}\bm{\psi} $. Then, applying the same argument used to establish the order of $\tau_u $ in the proof of \cref{thm:Functional accuracy}, we arrive at $ \fnc{I}(\fnc{U}) - I_h(\bm{u}_{h}) = \fnc{O}(h^{2p}) $ for discretizations with interior facet SATs.}
\begin{remark}
Dropping terms in the matrix $ \mathbb{C} $ that are associated with facets other than $ \gamma $ is not necessary if one considers all facets for the analysis, but this would require working with bigger matrices.
\end{remark}
\subsection{Energy stability analysis}\label{sec:Energy stability} In general, energy stability of SBP-SAT discretizations implies
\begin{equation}
\dv{}{t}\left(\norm{\bm{u}_h}^2_\H\right) =\bm{u}_h^T \H\dv{\bm{u}_h}{t}+\dv{\bm{u}_h^T}{t}\H\bm{u}_h = 2 R_{h}\left(\bm{u}_{h},\bm{u}_{h}\right) \le 0.
\end{equation}
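In computations, this estimate is typically monitored by recording the discrete energy $ \bm{u}_h^T\H\bm{u}_h $ at each time step and confirming that it does not grow for the homogeneous problem. A minimal sketch, assuming a time stepper that stores solution snapshots and that the assembled norm matrix is available as a dense or sparse array, is:
\begin{verbatim}
import numpy as np

def energy_history(snapshots, H):
    """Discrete energy u^T H u for each stored solution snapshot."""
    return np.array([u @ (H @ u) for u in snapshots])

# usage sketch: the energies should be non-increasing for an
# energy-stable discretization of the homogeneous diffusion problem
# E = energy_history(snapshots, H)
# assert np.all(np.diff(E) <= 1e-12 * E[0])
\end{verbatim}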
We analyze the time stability of the SBP-SAT discretization \cref{eq:SBP-SAT discretization,eq:Interface SATs,eq:Boundary SATs} of the homogeneous diffusion problem. A class of adjoint inconsistent SATs is considered first, and later we present conditions for stability of a class of adjoint consistent SATs. The following theorem, \violet{whose proof can be found in \cite{albert1969conditions, gallier2010schur}, is} useful for the energy analysis.
\begin{theorem} \label{thm:Positive semi-definiteness}
For a symmetric matrix of the form
$\Y =\bigl[\begin{smallmatrix}
\Y_{11} & \Y_{12} \\ \Y_{12}^T & \Y_{22}
\end{smallmatrix}\bigr]$,
\begin{enumerate}[i)]
\item $ \Y\succeq 0 $ if and only if $ \Y_{11}\succeq 0 $, $ (\I -\Y_{11}\Y_{11}^{+})\Y_{12} = \bm{0} $, and $ \Y_{22} - \Y_{12}^T \Y_{11}^{+}\Y_{12}\succeq 0 $,
\item $ \Y\succeq 0 $ if and only if $ \Y_{22}\succeq 0 $, $ (\I-\Y_{22}\Y_{22}^{+})\Y_{12}^T = \bm{0}$, and $ \Y_{11} - \Y_{12}\Y_{22}^{+}\Y_{12}^T \succeq 0$,
\end{enumerate}
where $ \Y\succeq 0 $ indicates that $ \Y $ is positive semidefinite.
\end{theorem}
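The conditions in \cref{thm:Positive semi-definiteness} are straightforward to test numerically for given blocks. The following NumPy sketch checks condition (i) and compares it with a direct eigenvalue test of the full matrix; the helper name and tolerance are illustrative choices.
\begin{verbatim}
import numpy as np

def psd_by_schur(Y11, Y12, Y22, tol=1e-12):
    """Condition (i): Y11 >= 0, (I - Y11 Y11^+) Y12 = 0, and
    Y22 - Y12^T Y11^+ Y12 >= 0 (generalized Schur complement)."""
    Y11p = np.linalg.pinv(Y11)
    c1 = np.all(np.linalg.eigvalsh(Y11) >= -tol)
    c2 = np.allclose((np.eye(Y11.shape[0]) - Y11 @ Y11p) @ Y12, 0.0, atol=tol)
    c3 = np.all(np.linalg.eigvalsh(Y22 - Y12.T @ Y11p @ Y12) >= -tol)
    return c1 and c2 and c3

Y11 = np.array([[2.0, 1.0], [1.0, 2.0]])
Y12 = np.eye(2)
Y22 = np.eye(2)
Y = np.block([[Y11, Y12], [Y12.T, Y22]])
print(psd_by_schur(Y11, Y12, Y22),
      np.all(np.linalg.eigvalsh(Y) >= -1e-12))  # True True
\end{verbatim}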
\subsection{Energy analysis for adjoint inconsistent SATs} None of the adjoint inconsistent SATs presented in this work couple second-neighbor elements. Therefore, we focus on compact stencil adjoint inconsistent SATs and prove the following statement.
\begin{theorem}\label{thm:Stability Adjoint Inconsistent}
A conservative but adjoint inconsistent SBP-SAT discretization, \cref{eq:SBP-SAT discretization,eq:Interface SATs,eq:Boundary SATs}, of the homogeneous diffusion problem \cref{eq:diffusion problem}, \ie, $ \fnc{F}=0$, $\fnc{U}_D = 0 $, and $ \fnc{U}_N =0$, is energy stable with respect to the \blue{diagonal norm matrix}, $ \H $, if
\begin{equation} \label{eq:Conditions for stabilty of adjoint inconsistent SATs}
\begin{aligned}
\T_{\gamma k}^{(3)}+\T_{\gamma k}^{(2)}-\B_{\gamma}&=\bm{0} ,&\T_{\gamma v}^{(3)}-\T_{\gamma k}^{(2)}&=\bm{0},&\T_{\gamma k}^{(1)}&\succeq0,&\T_{\gamma}^{(D)}-\frac{1}{\alpha_{\gamma k}}\B_{\gamma}\Upsilon_{\gamma \gamma k}\B_{\gamma}&\succeq0,\\\T_{\gamma v}^{(3)}+\T_{\gamma v}^{(2)}-\B_{\gamma}&=\bm{0},&\T_{\gamma k}^{(3)}-\T_{\gamma v}^{(2)}&=\bm{0},&\T_{\gamma k}^{(4)}&=\T_{\gamma v}^{(4)}\succeq 0,&\T_{abk}^{(5)}=-\T_{abk}^{(6)}&=\bm{0},
\end{aligned}
\end{equation}
where for element $ \Omega_k $, and facets $ a,b\in\{\gamma,\epsilon_{1},\epsilon_{2},\delta_{1},\delta_{2}\} $
\begin{equation} \label{eq:Upsilon definition}
\Upsilon_{abk}\equiv\C_{ak}\left(\Lambda_{k}^{*}\right)^{-1}\C_{bk}^{T}
=\N_{ak}^{T}\bar{\R}_{ak}\bar{\H}_{k}^{-1}\Lambda_{k}\bar{\R}_{bk}^{T}\N_{bk}.
\end{equation}
\end{theorem}
\begin{proof}
We must show that, under the conditions given in \cref{eq:Conditions for stabilty of adjoint inconsistent SATs}, the residual satisfies $ 2R_{h}\left(\bm{u}_{h},\bm{u}_{h}\right) = R_{h}\left(\bm{u}_{h},\bm{u}_{h}\right)+R_{h}^{T}\left(\bm{u}_{h},\bm{u}_{h}\right)\le0 $, which is the case if all the $ 4\times 4 $ block matrices on the RHS of \cref{eq:Residual 3rd form} are positive semidefinite. The positive semidefiniteness of the $ 4\times 4 $ block matrices in the first and last terms of \cref{eq:Residual 3rd form} is analyzed in \cite{yan2018interior}, where it is shown that these block matrices are positive semidefinite if $ \T_{\gamma k}^{(4)} \succeq 0$, $ \T_{\gamma}^{(D)} \succeq 0 $, and the Schur complement $ \T_{\gamma}^{(D)} - (1/\alpha_{\gamma k})\B_{\gamma}\Upsilon_{\gamma \gamma k}\B_{\gamma} \succeq 0$. Substituting $ \T_{abk}^{(5)}=-\T_{abk}^{(6)}=\bm{0} $ and $ \T_{\gamma k}^{(1)} =\T_{\gamma v}^{(1)}$ (due to conservation) in \cref{eq:Residual 3rd form}, and regrouping the $ \T_{\gamma k}^{(1)} $ terms, the last $ 4\times 4 $ block matrix that we need to show is positive semidefinite is $ \A + \A^T $, where
\begin{equation} \label{eq:Matrix A for compact SATs}
\A = \begin{bmatrix}
\T_{\gamma k}^{(1)}&-\T_{\gamma k}^{(1)}&\left(\T_{\gamma k}^{(3)}-\B_{\gamma}\right)\C_{\gamma k}&\T_{\gamma k}^{(3)}\C_{\gamma v}\\-\T_{\gamma k}^{(1)}&\T_{\gamma k}^{(1)}&\T_{\gamma v}^{(3)}\C_{\gamma k}&\left(\T_{\gamma v}^{(3)}-\B_{\gamma}\right)\C_{\gamma v}\\\C_{\gamma k}^{T}\T_{\gamma k}^{(2)}&-\C_{\gamma k}^{T}\T_{\gamma k}^{(2)}&\alpha_{\gamma k}\Lambda_{k}^{*}& \bm{0} \\-\C_{\gamma v}^{T}\T_{\gamma v}^{(2)}&\C_{\gamma v}^{T}\T_{\gamma v}^{(2)}& \bm{0} &\alpha_{\gamma v}\Lambda_{v}^{*}
\end{bmatrix}.
\end{equation}
The off-diagonal block matrices of $ \A + \A^T $ vanish for the conditions given in \cref{eq:Conditions for stabilty of adjoint inconsistent SATs}. Moreover, $ \bigl[\begin{smallmatrix}
\T_{\gamma k}^{(1)}&-\T_{\gamma k}^{(1)}\\-\T_{\gamma k}^{(1)}&\T_{\gamma k}^{(1)}
\end{smallmatrix}\bigr] = \bigl[ \begin{smallmatrix}
1&-1\\-1&1
\end{smallmatrix}\bigr] \otimes \T_{\gamma k}^{(1)}$ is positive semidefinite if $ \T_{\gamma k}^{(1)} \succeq 0 $ due to properties of Kronecker products and because $ \bigl[\begin{smallmatrix}
1&-1\\-1&1
\end{smallmatrix}\bigr] \succeq 0 $. Finally, $ \bigl[\begin{smallmatrix}
\alpha_{\gamma k}\Lambda_{k}^{*}& \bm{0}\\\bm{0}&\alpha_{\gamma v}\Lambda_{v}^{*}
\end{smallmatrix}\bigr] \succeq 0$ since $ \alpha_{\gamma k}>0 $, $ \alpha_{\gamma v}>0 $, $ \Lambda^*_k \succeq 0 $, and $ \Lambda^*_v \succeq 0 $. Therefore, $ \A + \A^T \succeq 0 $ which completes the proof.
\end{proof}
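The Kronecker-product argument used in the proof is easy to check numerically: if $ \T_{\gamma k}^{(1)}\succeq 0 $, then $ \bigl[\begin{smallmatrix} 1&-1\\-1&1 \end{smallmatrix}\bigr]\otimes\T_{\gamma k}^{(1)}\succeq 0 $. The sketch below does so for a randomly generated positive semidefinite matrix and is provided for illustration only.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4))
T1 = X @ X.T                          # random symmetric PSD matrix
J = np.array([[1.0, -1.0], [-1.0, 1.0]])

K = np.kron(J, T1)                    # [[T1, -T1], [-T1, T1]]
print(np.min(np.linalg.eigvalsh(T1)) >= -1e-12,
      np.min(np.linalg.eigvalsh(K)) >= -1e-12)   # True True
\end{verbatim}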
\subsection{Energy analysis for adjoint consistent SATs} We consider a class of adjoint consistent SATs for which $ \T_{\gamma k}^{(3)}-\T_{\gamma k}^{(2)}=\B_{\gamma} $ and $ \T_{a k}^{(1)} $ is SPD, where $ a\in\{\gamma, \epsilon_{1},\epsilon_{2}, \delta_{1},\delta_{2}\} $. This class covers all types of adjoint consistent SATs studied in this work, \red{and \cref{thm:Stability for Adjoint Consistent SATs} provides sufficient conditions for energy stability of discretizations involving these SATs.} A class of adjoint consistent SATs for which $ \T_{abk}^{(5)}=\T_{abk}^{(6)}=\bm{0} $ in addition to the above two conditions is studied in \cite{yan2018interior}.
\begin{theorem} \label{thm:Stability for Adjoint Consistent SATs}
An adjoint consistent SBP-SAT discretization, \cref{eq:SBP-SAT discretization,eq:Interface SATs,eq:Boundary SATs}, of the homogeneous diffusion problem \cref{eq:diffusion problem}, \ie, $ \fnc{F}=0$, $\fnc{U}_D = 0 $, and $ \fnc{U}_N =0$, for which \violet{\cref{assu:Coefficient matrices} holds}, $ \T_{\gamma k}^{(3)}-\T_{\gamma k}^{(2)}=\B_{\gamma} $, and $ \T_{a k}^{(1)} \succ 0 $ is energy stable with respect to the \blue{diagonal norm} matrix, $ \H $, if the SAT coefficient matrices satisfy
\begin{align}
\T_{\gamma k}^{(1)}-\frac{2}{\zeta}\left(\frac{1}{\alpha_{\gamma k}}\T_{\gamma k}^{(2)}\Upsilon_{\gamma\gamma k}\T_{\gamma k}^{(2)}+\frac{1}{\alpha_{\gamma v}}\T_{\gamma v}^{(2)}\Upsilon_{\gamma\gamma v}\T_{\gamma v}^{(2)}\right) &\succeq 0,
\label{eq:T1 stablility 1}
\\
\T_{a k}^{(1)}-64\T_{abk}^{(5)}\left(\T_{bk}^{(1)}\right)^{-1}\T_{ba k}^{(5)}&\succeq 0,
\label{eq:T1 stablility 2}
\\
\T_{\gamma}^{(D)}-\frac{1}{\alpha_{\gamma k}}\B_{\gamma}\Upsilon_{\gamma\gamma k}\B_{\gamma} &\succeq 0,
\label{eq:TD stablility}
\\
\T_{\gamma k}^{(4)} &\succeq 0,
\label{eq:T4 stablility}
\end{align}
where $ \Upsilon_{abk} $ is defined in \cref{eq:Upsilon definition}, $ a,b\in\{\gamma,\epsilon_{1},\epsilon_{2}\} $, $ \zeta = 2 $ \violet{for compact stencil SATs, \ie, SATs with $ \T_{abk}^{(5)}=\T_{abk}^{(6)}=\bm{0} $,} otherwise $ \zeta=1 $, and $ \T\succ 0 $ indicates $ \T $ is positive definite.
\end{theorem}
\begin{proof}
We apply the conditions for adjoint consistency given in \cref{eq:Coefficients for adjoint consistency} on the residual \cref{eq:Residual 3rd form}. Because the residual is symmetric under these conditions, it is sufficient to show that $ R_h({\bm{u}_h, \bm{u}_h}) \le 0$. The conditions in \cref{eq:TD stablility,eq:T4 stablility} are established in \cref{thm:Stability Adjoint Inconsistent}. We rewrite the $ 4\times 4 $ block matrix in $ X_1 $, the second term in \cref{eq:Residual 3rd form}, as $
\A= \bigl[\begin{smallmatrix}
\A_{11} & \A_{12}\\
\A_{12}^{T} & \A_{22}
\end{smallmatrix}\bigr],
$
where
\begin{equation}
\A_{11} = \frac{1}{2}\begin{bmatrix}
\T_{\gamma k}^{(1)}&-\T_{\gamma k}^{(1)}\\-\T_{\gamma k}^{(1)}&\T_{\gamma k}^{(1)}
\end{bmatrix},
\;
\A_{12} =\begin{bmatrix}
\T_{\gamma k}^{(2)}\C_{\gamma k}&-\T_{\gamma v}^{(2)}\C_{\gamma v}\\-\T_{\gamma k}^{(2)}\C_{\gamma k}&\T_{\gamma v}^{(2)}\C_{\gamma v}
\end{bmatrix},
\;
\A_{22} =\begin{bmatrix}
\alpha_{\gamma k}\Lambda_{k}^{*}&\\&\alpha_{\gamma v}\Lambda_{v}^{*}
\end{bmatrix}.
\end{equation}
Since $ \A_{22} $ is positive definite, \cref{thm:Positive semi-definiteness} ensures $ \A \succeq 0 $ if and only if $ \A_{11}-\A_{12}\A_{22}^{-1}\A_{12}^{T} \succeq 0 $, \ie,
\begin{equation}
\begin{bmatrix}
1 & -1\\
-1 & 1
\end{bmatrix}\otimes\left[\frac{1}{2}\T_{\gamma k}^{(1)}-\left(\frac{1}{\alpha_{\gamma k}}\T_{\gamma k}^{(2)}\Upsilon_{\gamma\gamma k}\T_{\gamma k}^{(2)}+\frac{1}{\alpha_{\gamma v}}\T_{\gamma v}^{(2)}\Upsilon_{\gamma\gamma v}\T_{\gamma v}^{(2)}\right)\right]\succeq 0,
\end{equation}
which gives the condition for stability in \cref{eq:T1 stablility 1} with $ \zeta=1 $. Setting $ \T^{(5)}_{abk} = \bm{0}$ and regrouping $ \T^{(1)}_{ak} $ terms as in \cref{eq:Matrix A for compact SATs}, the terms with extended SATs in \cref{eq:Residual 3rd form} vanish. Imposing $ \T_{\gamma k}^{(3)}-\T_{\gamma k}^{(2)}=\B_{\gamma} $ and the adjoint consistency conditions, we obtain $ \A_{11} = \bigl[\begin{smallmatrix}
\T_{\gamma k}^{(1)}&-\T_{\gamma k}^{(1)}\\-\T_{\gamma k}^{(1)}&\T_{\gamma k}^{(1)}
\end{smallmatrix}\bigr] $ while $ \A_{12} $ and $ \A_{22} $ remain unchanged. This yields the stability condition \cite{yan2018interior}
\begin{equation}\label{eq:Stability condition on Tgk1 compact SATs}
\T_{\gamma k}^{(1)}-\left(\frac{1}{\alpha_{\gamma k}}\T_{\gamma k}^{(2)}\Upsilon_{\gamma\gamma k}\T_{\gamma k}^{(2)}+\frac{1}{\alpha_{\gamma v}}\T_{\gamma v}^{(2)}\Upsilon_{\gamma\gamma v}\T_{\gamma v}^{(2)}\right) \succeq 0.
\end{equation}
After applying the conditions for adjoint consistency, the first $ 4 \times 4 $ block matrix in $ X_2 $, the third term in \cref{eq:Residual 3rd form}, reads
\begin{equation}
\begin{aligned}
\G = \begin{bmatrix}
\G_{11}&\G_{12}\\\G_{12}^{T}&\G_{22}
\end{bmatrix}
\equiv\begin{bmatrix}
\frac{1}{8}\T_{\gamma k}^{(1)}&-\frac{1}{8}\T_{\gamma k}^{(1)}&\T_{\gamma\epsilon_{1}k}^{(5)}&-\T_{\gamma\epsilon_{1}k}^{(5)}\\-\frac{1}{8}\T_{\gamma k}^{(1)}&\frac{1}{8}\T_{\gamma k}^{(1)}&-\T_{\gamma\epsilon_{1}k}^{(5)}&\T_{\gamma\epsilon_{1}k}^{(5)}\\\T_{\epsilon_{1}\gamma k}^{(5)}&-\T_{\epsilon_{1}\gamma k}^{(5)}&\frac{1}{8}\T_{\epsilon_{1}k}^{(1)}&-\frac{1}{8}\T_{\epsilon_{1}k}^{(1)}\\-\T_{\epsilon_{1}\gamma k}^{(5)}&\T_{\epsilon_{1}\gamma k}^{(5)}&-\frac{1}{8}\T_{\epsilon_{1}k}^{(1)}&\frac{1}{8}\T_{\epsilon_{1}k}^{(1)}
\end{bmatrix},
\end{aligned}
\end{equation}
where $ \G_{11} $, $ \G_{12} $, and $ \G_{22} $ are $ 2\times 2 $ block matrices. Energy stability requires that $ \G\succeq 0 $, which, using \cref{thm:Positive semi-definiteness} and noting that $ \G_{22} \succeq 0 $, means we need conditions under which $ \left({\I}-\G_{22}\G_{22}^{+}\right)\G_{12}^{T}=\bm{0} $ and $ \G_{11}-\G_{12}\G_{22}^{+}\G_{12}^{T} \succeq 0 $.
But
\begin{equation}
\G_{22}^{+}=\left(\begin{bmatrix}
1 & -1\\
-1 & 1
\end{bmatrix}\otimes\frac{1}{8}\T_{\epsilon_{1}k}^{(1)}\right)^{+}
=\begin{bmatrix}
1 & -1\\
-1 & 1
\end{bmatrix}^{+}\otimes\left(\frac{1}{8}\T_{\epsilon_{1}k}^{(1)}\right)^{+}
=\begin{bmatrix}
1 & -1\\
-1 & 1
\end{bmatrix}\otimes\left(\frac{1}{2}\T_{\epsilon_{1}k}^{(1)}\right)^{-1},
\end{equation}
where we have used $ \bigl[\begin{smallmatrix}
1&-1\\-1&1
\end{smallmatrix}\bigr]^{+} = \frac{1}{4}\bigl[\begin{smallmatrix}
1&-1\\-1&1
\end{smallmatrix}\bigr]$
and the fact that $ \T_{\epsilon_{1}k}^{(1)} $ is invertible for the class of SATs under consideration. Therefore,
\begin{equation}
\left(\I-\G_{22}\G_{22}^{+}\right)\G_{12}^{T}
=\left\{\begin{bmatrix}
\I\\
& \I
\end{bmatrix}-\left(\begin{bmatrix}
1 & -1\\
-1 & 1
\end{bmatrix}\otimes\frac{1}{8}\T_{\epsilon_{1}k}^{(1)}\right)\left(\begin{bmatrix}
1 & -1\\
-1 & 1
\end{bmatrix}\otimes\left(\frac{1}{2}\T_{\epsilon_{1}k}^{(1)}\right)^{-1}\right)\right\}\begin{bmatrix}
\T_{\epsilon_{1}\gamma k}^{(5)}&-\T_{\epsilon_{1}\gamma k}^{(5)}\\-\T_{\epsilon_{1}\gamma k}^{(5)}&\T_{\epsilon_{1}\gamma k}^{(5)}
\end{bmatrix}=\bm{0}.
\end{equation}
Furthermore, using properties of Kronecker products it can be shown that
\begin{equation}
\begin{aligned}
\G_{11}-\G_{12}\G_{22}^{+}\G_{12}^{T}&=\begin{bmatrix}
1 & -1\\
-1 & 1
\end{bmatrix}\otimes\left(\frac{1}{8}\T_{\gamma k}^{(1)}\right)
-\begin{bmatrix}
1 & -1\\
-1 & 1
\end{bmatrix}^{3}\otimes\left(\T_{\gamma\epsilon_{1}k}^{(5)}\left(\frac{1}{2}\T_{\epsilon_{1}k}^{(1)}\right)^{-1}\T_{\epsilon_{1}\gamma k}^{(5)}\right)
\\&=\begin{bmatrix}
1 & -1\\
-1 & 1
\end{bmatrix}\otimes\left(\frac{1}{8}\T_{\gamma k}^{(1)}-8\T_{\gamma\epsilon_{1}k}^{(5)}\left(\T_{\epsilon_{1}k}^{(1)}\right)^{-1}\T_{\epsilon_{1}\gamma k}^{(5)}\right),
\end{aligned}
\end{equation}
which implies $ \G_{11}-\G_{12}\G_{22}^{+}\G_{12}^{T}\succeq 0 $ if
\begin{equation}
\T_{\gamma k}^{(1)}-64\T_{\gamma\epsilon_{1}k}^{(5)}\left(\T_{\epsilon_{1}k}^{(1)}\right)^{-1}\T_{\epsilon_{1}\gamma k}^{(5)}\succeq 0.
\end{equation}
Similar energy analyses for the rest of the $ 4\times 4 $ block matrices in $ X_2 $ (or simple geometric arguments) reveal that all the $ 4\times 4 $ block matrices in $ X_2 $ are positive semidefinite if
\begin{equation} \label{eq:Stability condition on Tak1 extended SAT}
\T_{a k}^{(1)}-64\T_{abk}^{(5)}\left(\T_{bk}^{(1)}\right)^{-1}\T_{ba k}^{(5)}\succeq 0,
\end{equation}
for $ a,b\in\{\gamma,\epsilon_{1},\epsilon_{2}\} $. We have shown that the conditions stated in \cref{thm:Stability for Adjoint Consistent SATs} are sufficient for all the $ 4\times 4 $ block matrices in the residual \cref{eq:Residual 3rd form} to be positive semidefinite; therefore, $R_h(\bm{u}_h, \bm{u}_h) \le 0$ as desired.
\end{proof}
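Once candidate penalty matrices are chosen, the sufficient conditions \cref{eq:T1 stablility 1,eq:T1 stablility 2} reduce to positive semidefiniteness tests on small dense matrices, which can be checked directly. The following sketch is illustrative; the argument names mirror the notation above but are otherwise arbitrary inputs, and $ \T_{bk}^{(1)} $ is assumed to be invertible as in \cref{thm:Stability for Adjoint Consistent SATs}.
\begin{verbatim}
import numpy as np

def psd(M, tol=1e-12):
    return np.all(np.linalg.eigvalsh(0.5 * (M + M.T)) >= -tol)

def check_stability_conditions(T1g, T2k, T2v, Ups_k, Ups_v,
                               alpha_k, alpha_v, zeta,
                               T1a, T1b, T5ab, T5ba):
    # interface condition on T^(1) with the Upsilon penalty terms
    c1 = psd(T1g - (2.0 / zeta) * (T2k @ Ups_k @ T2k / alpha_k
                                   + T2v @ Ups_v @ T2v / alpha_v))
    # condition coupling T^(1) and the extended-stencil coefficients T^(5)
    c2 = psd(T1a - 64.0 * T5ab @ np.linalg.solve(T1b, T5ba))
    return c1 and c2
\end{verbatim}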
\section{Existing and DG based SATs} \label{sec:Existing and DG SATs}
The SAT coefficients associated with different types of DG fluxes are obtained by discretizing the residual of the DG primal formulation of the Poisson problem, which has the general form \cite{arnold2002unified,peraire2008compact}
\begin{equation} \label{eq:RHS of DG primal formulation}
\begin{aligned}
\fnc{R}(\fnc{U}_{h},\fnc{V})
&=-\int_{\Omega}\lambda\nabla\fnc{U}_{h}\cdot\nabla\fnc{V}\dd{\Omega}
+\int_{\Omega}\fnc{V}\fnc{F}\dd{\Omega}
-\int_{\Gamma^{I}}\jump{\widehat{\fnc{U}}-\fnc{U}_{h}}\cdot\avg{\lambda\nabla\fnc{V}}+\avg{\widehat{\fnc{U}}-\fnc{U}_{h}}\jump{\lambda\nabla\fnc{V}}\dd{\Gamma}
\\&\quad
+\int_{\Gamma^{I}}\jump{\fnc{V}}\cdot\avg{\widehat{\vecfnc{W}}}+\avg{\fnc{V}}\jump{\widehat{\vecfnc{W}}}\dd{\Gamma}
+\int_{\Gamma^{D}}(\fnc{U}_{h}-\fnc{U}_{D})\lambda\nabla\fnc{V}\cdot\bm{n}+\fnc{V}\widehat{\vecfnc{W}}\cdot\bm{n}\dd{\Gamma}
+\int_{\Gamma^{N}}\fnc{V}\fnc{U}_{N}\dd{\Gamma},
\end{aligned}
\end{equation}
where $ \widehat{\fnc{U}} $ and $ \widehat{\vecfnc{W}} $ are numerical fluxes of the solution, $ \fnc{U}_h $, and the auxiliary variable, $ \vecfnc{W}_h$, respectively. Equation \cref{eq:RHS of DG primal formulation} is obtained after setting the numerical flux of the solution as $ \widehat{\fnc{U}}= \fnc{U}_D $ on $ \Gamma^D $ and $ \widehat{\fnc{U}}= \fnc{U}_h $ on $ \Gamma^N $, and the normal component of the numerical flux of the auxiliary variable on $ \Gamma^N $ as $ \widehat{\vecfnc{W}}\cdot\bm{n} = \fnc{U}_N $. For schemes with global lifting operators, the auxiliary variable, the solution, and the flux of the solution are related by
\begin{equation} \label{eq:Auxiliary variable W}
\vecfnc{W}_{h}=\lambda\nabla\fnc{U}_{h}-\lambda\fnc{L}\left(\jump{\widehat{\fnc{U}}-\fnc{U}_{h}}\right)-\lambda\fnc{S}\left(\avg{\widehat{\fnc{U}}-\fnc{U}_{h}}\right)-\lambda\fnc{S}^{D}(\widehat{\fnc{U}}-\fnc{U}_{h}).
\end{equation}
For compact SATs, the global lifting operators in \cref{eq:Auxiliary variable W} are replaced by local lifting operators, \ie,
\begin{equation} \label{eq:Auxiliary variable W local}
\vecfnc{W}_{h}^{\gamma}=\lambda\nabla\fnc{U}_{h}-\lambda\fnc{L}^\gamma\left(\jump{\widehat{\fnc{U}}-\fnc{U}_{h}}\right)-\lambda\fnc{S}^\gamma \left(\avg{\widehat{\fnc{U}}-\fnc{U}_{h}}\right)-\lambda\fnc{S}^{D}(\widehat{\fnc{U}}-\fnc{U}_{h}).
\end{equation}
\blue{The forms of the interior facet SATs in \cref{eq:Interface SATs} and boundary SATs in \cref{eq:Boundary SATs} are closely related to the integral terms on the interior and boundary facets in the DG primal formulation. For example, to see how the boundary SATs and boundary integral terms in \cref{eq:RHS of DG primal formulation} are related, integrate by parts the first term on the RHS of \cref{eq:RHS of DG primal formulation} and substitute $\widehat{\vecfnc{W}}= \vecfnc{W}^{\gamma}_h $ on $ \Gamma^D $, to obtain
\begin{equation} \label{eq:RHS of DG primal formulation 2}
\begin{aligned}
\fnc{R}(\fnc{U}_{h},\fnc{V})
&=\int_{\Omega}\fnc{V}\nabla\cdot(\lambda\nabla\fnc{U}_{h}) \dd\Omega
+\int_{\Omega}\fnc{V}\fnc{F}\dd\Omega - \int_{\Gamma^{I}}\fnc{V}(\lambda\nabla\fnc{U}_{h})\cdot \bm{n}\dd \Gamma
\\&\quad -\int_{\Gamma^{I}}\jump{\widehat{\fnc{U}}-\fnc{U}_{h}}\cdot\avg{\lambda\nabla\fnc{V}}+\avg{\widehat{\fnc{U}}-\fnc{U}_{h}}\jump{\lambda\nabla\fnc{V}}\dd \Gamma
+\int_{\Gamma^{I}}\jump{\fnc{V}}\cdot\avg{\widehat{\vecfnc{W}}}+\avg{\fnc{V}}\jump{\widehat{\vecfnc{W}}}\dd\Gamma
\\&\quad
+\int_{\Gamma^{D}}(\fnc{U}_{h}-\fnc{U}_{D})\lambda\nabla\fnc{V}\cdot\bm{n}- \fnc{V}\lambda\fnc{S}^{D}(\widehat{\fnc{U}}-\fnc{U}_{h})\cdot \bm{n}\dd\Gamma
+\int_{\Gamma^{N}}\fnc{V}(\fnc{U}_{N} -(\lambda\nabla\fnc{U}_{h})\cdot \bm{n}) \dd\Gamma.
\end{aligned}
\end{equation}
The discrete analogue of the boundary integral terms in the last line of \cref{eq:RHS of DG primal formulation 2} is of the same form as $ \bm{v}_k^T\bm{s}_k^B $. The structure of the boundary SATs remains unchanged for DG fluxes based on global lifting operators due to the definition of the global lifting operators, \cref{eq: lift global vector,eq: lift global scalar}, which involve only interior facet integrals.} \red{The connection between the interface coupling terms in the DG formulation and the interior facet SATs can be shown by discretizing the interior surface integrals in \cref{eq:RHS of DG primal formulation 2}, which requires discretization of the lifting operators that appear in the numerical fluxes.} To find the discrete analogues of the global lifting operator for vector functions, we first write \cref{eq: lift global vector} for $ \Omega_k $ as
\begin{equation} \label{eq:Global lifting on Omegak}
\int_{\Omega_{k}}\lambda_{k}{\mathcal L}_{k}\left(\jump{\fnc{U}_h}\right)\cdot{\vecfnc{Z}}_{k}\dd{\Omega}=-\frac{1}{2}\int_{\Gamma_{k}^I}\lambda_{k}\jump{\fnc{U}_h}\cdot{\vecfnc{Z}}_{k}\dd{\Gamma}=-\frac{1}{2}\int_{\Gamma_{k}^I}\jump{\fnc{U}_h}\cdot\lambda_{k}^{T}{\vecfnc{Z}}_{k}\dd{\Gamma}.
\end{equation}
Note that the sum of the lifting operators defined by \cref{eq:Global lifting on Omegak} at an interface shared by two elements is
\begin{equation}
-\frac{1}{2}\int_{\gamma}\jump{\fnc{U}_h}\cdot\lambda_{k}^{T}\vecfnc{Z}_{k}\dd{\Gamma}-\frac{1}{2}\int_{\gamma}\jump{\fnc{U}_h}\cdot\lambda_{v}^{T}\vecfnc{Z}_{v}\dd{\Gamma}=-\int_{\gamma}\jump{\fnc{U}_h}\cdot\frac{1}{2}\left(\lambda_{k}^{T}\vecfnc{Z}_{k}+\lambda_{v}^{T}\vecfnc{Z}_{v}\right)\dd{\Gamma}=-\int_{\gamma}\jump{\fnc{U}_h}\cdot\avg{\lambda^{T}\vecfnc{Z}}\dd{\Gamma},
\end{equation}
which enables \cref{eq: lift global vector} to be recovered upon summing over all interfaces. Neglecting truncation error, the discretization of \cref{eq:Global lifting on Omegak} follows as
\begin{equation}
\begin{bmatrix}
{\bm{z}}_{x,k}\\
{\bm{z}}_{y,k}
\end{bmatrix}^{T}
\begin{bmatrix}
\H_{k}\\
& \H_{k}
\end{bmatrix}
\begin{bmatrix}
\mathscr{L}_{x,k}\\
\mathscr{L}_{y,k}
\end{bmatrix}=-\frac{1}{2}\sum_{\gamma\subset\Gamma_{k}^I}
\begin{bmatrix}
{\bm{z}}_{x,k}\\
{\bm{z}}_{y,k}
\end{bmatrix}^{T}
\begin{bmatrix}
\Lambda_{xx} & \Lambda_{xy}\\
\Lambda_{yx} & \Lambda_{yy}
\end{bmatrix}_{k}
\begin{bmatrix}
\R_{\gamma k}^{T}\\
& \R_{\gamma k}^{T}
\end{bmatrix}
\begin{bmatrix}
\N_{x,\gamma}\\
\N_{y,\gamma}
\end{bmatrix}\B_{\gamma}\left(\R_{\gamma k}\bm{u}_{k}-\R_{\gamma v}\bm{u}_{v}\right),
\end{equation}
and thus, the $ x $-coordinate discrete global lifting operator for vector functions, $ \mathscr{L}_{x,k} $, is given by
\begin{equation} \label{eq:Lift global vector}
\begin{aligned}
\mathscr{L}_{x,k}&=-\frac{1}{2}\sum_{\gamma\subset\Gamma_{k}^I}\H_{k}^{-1}\left(\Lambda_{xx}\R_{\gamma k}^{T}\N_{x,\gamma}+\Lambda_{xy}\R_{\gamma k}^{T}\N_{y,\gamma}\right)\B_{\gamma}\left(\R_{\gamma k}\bm{u}_{k}-\R_{\gamma v}\bm{u}_{v}\right).
\end{aligned}
\end{equation}
The $ y $-coordinate discrete global lifting operator, $ \mathscr{L}_{y,k} $, has an analogous expression; for brevity, we state only the $ x $-coordinate discrete operators for the remaining types of lifting operators presented below. The local lifting operator for vector functions on element $ \Omega_k $ and facet $ \gamma\in \Gamma_k^I $ is defined as
\begin{equation} \label{eq:Local lifting on Omegak}
\int_{\Omega_{k}} \lambda_{k}{\cal L}_{k}^{\gamma}\left(\jump{{\fnc{U}_h}}\right)\cdot{\vecfnc{Z}}_{k}\dd{\Omega}=-\frac{1}{2}\int_{\gamma}\jump{{\fnc{U}_h}}\cdot\lambda_{k}^{T}{\vecfnc{Z}}_{k}\dd{\Gamma},
\end{equation}
which upon discretization gives the $ x $-coordinate local lifting operator \cite{yan2018interior}
\begin{equation} \label{eq:Lift local vector}
\mathscr{L}_{x,k}^{\gamma}=-\frac{1}{2}\H_{k}^{-1}\left(\Lambda_{xx}\R_{\gamma k}^{T}\N_{x,\gamma}+\Lambda_{xy}\R_{\gamma k}^{T}\N_{y,\gamma}\right)\B_{\gamma}\left(\R_{\gamma k}\bm{u}_{k}-\R_{\gamma v}\bm{u}_{v}\right).
\end{equation}
Applying a similar approach, we write the global lifting operator for scalar valued functions, \cref{eq: lift global scalar}, on a single element as
\begin{align}
\int_{\Omega_{k}}\lambda_k{\cal S}_{k}\left(\jump{\fnc{U}_h}\cdot\bm{n}_{k}\right)\cdot\vecfnc{Z}_{k}\dd{\Omega}=-\int_{\Gamma_{k}^{I}}\left(\jump{\fnc{U}_h}\cdot\bm{n}_{k}\right)\lambda_k^T\vecfnc{Z}_{k}\cdot\bm{n}_{k}\dd{\Gamma}=-\sum_{\gamma\subset\Gamma_{k}^{I}}\int_{\gamma}\left({\fnc{U}}_{h,k}-{\fnc{U}}_{h,v}\right)\lambda_k^T\vecfnc{Z}_{k}\cdot\bm{n}_{k}\dd{\Gamma},
\end{align}
which gives the $ x $-coordinate discrete analogue of the global lifting operator for scalar functions as,
\begin{equation} \label{eq:Lift global scalar}
\mathscr{S}_{x,k}=-\sum_{\gamma\subset\Gamma_{k}^I}\H_{k}^{-1}\left(\Lambda_{xx}\R_{\gamma k}^{T}\N_{x,\gamma}+\Lambda_{xy}\R_{\gamma k}^{T}\N_{y,\gamma}\right)\B_{\gamma}\left(\R_{\gamma k}\bm{u}_{k}-\R_{\gamma v}\bm{u}_{v}\right).
\end{equation}
Moreover, the discretization of the local lifting operator for scalar functions at interior facet $ \gamma\in\Gamma_k^I $ gives
\begin{equation} \label{eq:Lift local scalar}
\mathscr{S}_{x,k}^{\gamma}=-\H_{k}^{-1}\left(\Lambda_{xx}\R_{\gamma k}^{T}\N_{x,\gamma}+\Lambda_{xy}\R_{\gamma k}^{T}\N_{y,\gamma}\right)\B_{\gamma}\left(\R_{\gamma k}\bm{u}_{k}-\R_{\gamma v}\bm{u}_{v}\right).
\end{equation}
Finally, at Dirichlet boundary facets, the $ x $-coordinate discrete lifting operator is given by
\begin{equation} \label{eq:Lift Dirichlet}
\mathscr{S}^D_{x,k}=-\H_{k}^{-1}\left(\Lambda_{xx}\R_{\gamma k}^{T}\N_{x,\gamma}+\Lambda_{xy}\R_{\gamma k}^{T}\N_{y,\gamma}\right)\B_{\gamma}\left(\R_{\gamma k}\bm{u}_{k}-\bm{u}_{\gamma k}\right).
\end{equation}
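All of the discrete lifting operators above share the same algebraic building block: an $ \H_{k}^{-1} $-weighted lift of a facet difference through $ \Lambda_{xx}\R_{\gamma k}^{T}\N_{x,\gamma}+\Lambda_{xy}\R_{\gamma k}^{T}\N_{y,\gamma} $ and $ \B_{\gamma} $, scaled by $ -1/2 $ or $ -1 $ and, for the global operators, summed over facets. The following sketch assembles this building block with dense NumPy arrays; all shapes and input values are hypothetical placeholders rather than operators from a particular SBP family.
\begin{verbatim}
# A minimal sketch of the x-coordinate discrete lifting operators above.
# scale = -1/2 gives the local vector-function lifting, scale = -1 the local
# scalar-function (and Dirichlet) lifting; summing the local results over the
# interior facets of the element gives the corresponding global operators.
import numpy as np

def lift_x(H, Lxx, Lxy, R_k, R_v, Nx, Ny, B, u_k, u_v, scale):
    jump = R_k @ u_k - R_v @ u_v                       # difference of facet traces
    facet_term = (Lxx @ R_k.T @ Nx + Lxy @ R_k.T @ Ny) @ (B @ jump)
    return scale * np.linalg.solve(H, facet_term)      # apply H^{-1}

# tiny synthetic example: 3 volume nodes per element, 2 facet quadrature nodes
n_vol, n_fct = 3, 2
rng  = np.random.default_rng(1)
H    = np.diag(rng.uniform(0.5, 1.0, n_vol))           # diagonal norm matrix
Lxx  = np.diag(rng.uniform(1.0, 2.0, n_vol))           # diffusivity blocks
Lxy  = np.zeros((n_vol, n_vol))
R_k  = rng.standard_normal((n_fct, n_vol))             # extrapolation matrices
R_v  = rng.standard_normal((n_fct, n_vol))
Nx   = np.diag([1.0, 1.0])                             # facet normal components
Ny   = np.diag([0.0, 0.0])
B    = np.diag([0.5, 0.5])                             # facet quadrature weights
u_k  = rng.standard_normal(n_vol)
u_v  = rng.standard_normal(n_vol)
print(lift_x(H, Lxx, Lxy, R_k, R_v, Nx, Ny, B, u_k, u_v, scale=-0.5))
\end{verbatim}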
Before proceeding to identify the SATs pertaining to known DG methods, we state two lemmas that will be useful for analyzing the energy stability of some of the schemes studied in the following subsections.
\begin{lemma}\label{lem:Inverse of sum of SPD matrices}
Let $ \X \in \IRtwo{n}{n} $ and $ \Y \in \IRtwo{n}{n} $ be two SPD matrices, then the inverse of the sum of the matrices satisfies
\begin{equation}\label{eq:Inverse of sum of SPD matrics}
\begin{aligned}
\X^{-1}+\Y^{-1}-(\X+\Y)^{-1} &\succ 0,
&&
\X^{-1}-(\X+\Y)^{-1} \succ 0,
&& \text{and} \quad
\Y^{-1}-(\X+\Y)^{-1} \succ 0,
\end{aligned}
\end{equation}
where $ \Y \succ 0 $ indicates that $ \Y $ is positive definite.
\end{lemma}
\begin{proof}
We start from the following result in \cite{henderson1981deriving},
\begin{equation}
\begin{aligned}
\left(\X+\Y\right)^{-1}&=\X^{-1}-\X^{-1}\Y\left(\X+\Y\right)^{-1}, && \text{or}\quad
\left(\X+\Y\right)^{-1}=\Y^{-1}-\Y^{-1}\X\left(\X+\Y\right)^{-1},
\end{aligned}
\end{equation}
and write
\begin{equation}\label{eq:Inverse of sum of SPD matrics 2}
\X^{-1}+\Y^{-1}-\left(\X+\Y\right)^{-1}=\X^{-1}+\Y^{-1}-\X^{-1}+\X^{-1}\Y\left(\X+\Y\right)^{-1}=\Y^{-1}+\X^{-1}\Y\left(\X+\Y\right)^{-1}.
\end{equation}
Note that $ \X^{-1}\Y\left(\X+\Y\right)^{-1}=\X^{-1}-\left(\X+\Y\right)^{-1} $ is symmetric (the difference of two symmetric matrices is symmetric, and the inverse of a symmetric matrix is symmetric). Furthermore, $ \X^{-1}\Y\left(\X+\Y\right)^{-1} $ is positive definite because $ \X^{-1}\succ0 $, $ \Y\succ0 $, and $ \left(\X+\Y\right)^{-1}\succ0 $, and a product of SPD matrices is positive definite whenever the product is itself symmetric. Therefore, we obtain $ \Y^{-1}+\X^{-1}\Y\left(\X+\Y\right)^{-1}\succ0 $, which implies that \cref{eq:Inverse of sum of SPD matrics 2} yields the first inequality in \cref{eq:Inverse of sum of SPD matrics}. By a similar argument we can write
\begin{align}
\X^{-1}-\left(\X+\Y\right)^{-1}&=\X^{-1}-\X^{-1}+\X^{-1}\Y\left(\X+\Y\right)^{-1}=\X^{-1}\Y\left(\X+\Y\right)^{-1}\succ0,
\\
\Y^{-1}-\left(\X+\Y\right)^{-1}&=\Y^{-1}-\Y^{-1}+\Y^{-1}\X\left(\X+\Y\right)^{-1}=\Y^{-1}\X\left(\X+\Y\right)^{-1}\succ0.
\end{align}
\end{proof}
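Although the lemma is proven above, a quick numerical spot check is easy to carry out; the sketch below (random SPD matrices, NumPy assumed) evaluates the smallest eigenvalue of each of the three differences in \cref{eq:Inverse of sum of SPD matrics}.
\begin{verbatim}
# Numerical spot check of the lemma with random SPD matrices (illustration only).
import numpy as np

rng = np.random.default_rng(2)
n = 5

def random_spd(n):
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)              # well-conditioned SPD

X, Y = random_spd(n), random_spd(n)
Xi, Yi, Si = np.linalg.inv(X), np.linalg.inv(Y), np.linalg.inv(X + Y)

min_eig = lambda M: np.linalg.eigvalsh(0.5 * (M + M.T)).min()
print(min_eig(Xi + Yi - Si) > 0,                # X^{-1} + Y^{-1} - (X+Y)^{-1}
      min_eig(Xi - Si) > 0,                     # X^{-1} - (X+Y)^{-1}
      min_eig(Yi - Si) > 0)                     # Y^{-1} - (X+Y)^{-1}
\end{verbatim}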
\begin{lemma} \label{lem:I-YXY}
Given an SPD matrix $ \X \in \IRtwo{n}{n} $ and a rectangular matrix $ \Y \in \IRtwo{n}{m} $ such that $ \X=\Y\Y^{T} $, we have
\begin{equation}
\I_{m}-\Y^{T}\X^{-1}\Y\succeq0,
\end{equation}
where $ \I_{m} $ is an $ m\times m $ identity matrix.
\end{lemma}
\begin{proof}
Consider the singular value decomposition $ \Y=\U\Sigma_{r}\V^{T} $, then
\begin{align}
\Y^{T}\X^{-1}\Y&=\Y^{T}\left(\Y\Y^{T}\right)^{-1}\Y=\Y^{T}\left(\Y^{+T}\Y^{+}\right)\Y=\left(\Y^{T}\Y^{+T}\right)\left(\Y^{+}\Y\right) \label{eq:SPD proof 2}\\
&=\left(\V\Sigma_{r}\U^{T}\U\Sigma_{r}^{+}\V^{T}\right)\left(\V\Sigma_{r}^{+}\U^{T}\U\Sigma_{r}\V^{T}\right)=\left(\V\I_{r}\V^{T}\right)\left(\V\I_{r}\V^{T}\right)=\V\I_{r}\V^{T},
\end{align}
where $ \I_r $ is the $ m\times m $ diagonal matrix whose first $ r $ diagonal entries are unity and the rest are zero, with $ r=n $ the rank of $ \Y $. In the second equality in \cref{eq:SPD proof 2} we made use of the property $ \left(\Y\Y^T\right)^{-1}=\left(\Y\Y^T\right)^{+}=\Y^{+T}\Y^{+} $ since $ \X=\Y\Y^T $ is invertible. Noting that the identity matrix can be written as $ \I_{m}=\V\I_{m} \V^{T} $, we have
\begin{equation}
\I_{m}-\Y^{T}\X^{-1}\Y=\V\I_{m}\V^{T}-\V\I_{r}\V^{T}=\V\left(\I_{m}-\I_{r}\right)\V^{T}\succeq0,
\end{equation}
which is the desired result.
\end{proof}
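The structure underlying this lemma is that $ \Y^{T}\X^{-1}\Y $ is an orthogonal projector when $ \X=\Y\Y^{T} $, so its eigenvalues are zeros and ones. The short sketch below checks this numerically for a random full-row-rank $ \Y $ used as an illustrative stand-in (NumPy assumed).
\begin{verbatim}
# Y^T X^{-1} Y is an orthogonal projector when X = Y Y^T, hence I_m minus it is PSD.
import numpy as np

rng  = np.random.default_rng(3)
n, m = 3, 6                                  # n < m, so Y has full row rank
Y = rng.standard_normal((n, m))
X = Y @ Y.T                                  # SPD since rank(Y) = n

P = Y.T @ np.linalg.solve(X, Y)              # Y^T X^{-1} Y
print(np.allclose(P @ P, P),                 # idempotent
      np.allclose(P, P.T),                   # symmetric
      np.linalg.eigvalsh(np.eye(m) - P).min() >= -1e-10)
\end{verbatim}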
\subsection{BR1 SAT: The first method of Bassi and Rebay}
The numerical fluxes for the BR1 method \cite{bassi1997highnavier} are $ \widehat{\fnc{U}}=\avg{\fnc{U}_h} $ and $ \widehat{\fnc{W}}=\avg{\fnc{W}_h} $. Substituting these fluxes in \cref{eq:RHS of DG primal formulation} and simplifying, the residual for the BR1 method becomes \cite{arnold2002unified}
\begin{equation} \label{eq:BR1 residual}
\begin{aligned}
{\cal R}\left({\fnc{U}_h},{\cal V}\right)&=-\int_{\Omega}\lambda\nabla{\fnc{U}_h}\cdot\nabla{\cal V}\dd{\Omega}
+\int_{\Omega}{\cal V}{\cal F}\dd{\Omega}
+\int_{\Gamma^I}\jump{{\fnc{U}_h}}\cdot\avg{\lambda\nabla{\cal V}}
+\avg{\lambda\nabla{\fnc{U}_h}}\cdot\jump{{\cal V}}\dd{\Gamma}
\\
& \quad
-\int_{\Omega} \lambda{\cal L}(\jump{{\fnc{U}_h}}){\cal L}(\jump{{\cal V}})\dd{\Omega} +\int_{\Gamma^{D}}(\fnc{U}_{h}-\fnc{U}_{D})\lambda\nabla\fnc{V}\cdot\bm{n} \dd{\Gamma}
+\int_{\Gamma^{D}}\fnc{V}(\lambda_k\nabla\fnc{U}_k)\cdot\bm{n}\dd{\Gamma}
\\
& \quad
+\int_{\Gamma^{D}}\fnc{V}\lambda\fnc{S}^D(\fnc{U}_{h}-\fnc{U}_D)\cdot\bm{n}\dd{\Gamma}
+\int_{\Gamma^{N}}\fnc{V}\fnc{U}_{N}\dd{\Gamma}.
\end{aligned}
\end{equation}
\blue{If the surface integrals on the RHS of the global lifting operators in \cref{eq: lift global vector,eq: lift global scalar} include all facets, then the discretization of the BR1 primal formulation gives the boundary SATs:
\begin{equation}
\begin{aligned}
\bm{s}_{k}^{B}(\uhk,\bm{u}_{\gamma k}, \bm{w}_{\gamma k}) &=\sum_{\gamma\subset\Gamma^{D}}\left[\begin{array}{cc}
\R_{\gamma k}^{T} & \D_{\gamma k}^{T}\end{array}\right]\left[\begin{array}{c}
\T_{\gamma}^{(D)}\\
-\B_{\gamma}
\end{array}\right](\Rgk\uhk -\bm{u}_{\gamma k})
+\sum_{\gamma\subset\Gamma^{N}}\R_{\gamma k}^{T}\B_{\gamma}\left(\D_{\gamma k}\bm{u}_{h,k}-\bm{w}_{\gamma k}\right)
\\
& \quad
+\frac{1}{2}\sum_{\gamma\subset\Gamma_{k}^{I}}\sum_{\epsilon\subset\Gamma_{k}^{D}}\R_{\gamma k}^{T}\B_{\gamma}\Upsilon_{\gamma\epsilon k}\B_{\epsilon}\left(\R_{\epsilon k}\bm{u}_{h,k}-\bm{u}_{\epsilon k}\right)
-\frac{1}{2}\sum_{\gamma\subset\Gamma_{v}^{I}}\sum_{\epsilon\subset\Gamma_{k}^{D}}\R_{\gamma v}^{T}\B_{\gamma}\Upsilon_{\gamma\epsilon k}\B_{\epsilon}\left(\R_{\epsilon k}\bm{u}_{h,k}-\bm{u}_{\epsilon k}\right)
\\
&\quad +\sum_{\gamma\subset\Gamma_{k}^{D}}\sum_{\epsilon\subset\Gamma_{k}^{D}}\R_{\gamma k}^{T}\B_{\gamma}\Upsilon_{\gamma\epsilon k}\B_{\epsilon}\left(\R_{\epsilon k}\bm{u}_{h,k}-\bm{u}_{\epsilon k}\right).
\end{aligned}
\end{equation}
Furthermore, the following term must be added to the interior facet SATs given by \cref{eq:Interface SATs},
\begin{equation*}
\frac{1}{2}\sum_{\gamma\subset\Gamma_{k}^{D}}\sum_{\epsilon\subset\Gamma_{k}^{I}}\bm{v}_{k}^{T}\R_{\gamma k}^{T}\B_{\gamma}\Upsilon_{\gamma\epsilon k}\B_{\epsilon}\left(\R_{\epsilon k}\bm{u}_{h,k}-\R_{\epsilon g}\bm{u}_{g}\right).
\end{equation*}
With these terms added to the interior and boundary facet SATs, it is possible to show that the SBP-SAT discretizations based on the primal and flux formulations of the BR1 method are identical. The BR1 SATs based on the flux formulation can be found, \eg, in Theorem 6.2 of \cite{chen2020review} by setting $ \beta=\alpha=0 $ therein. The extended boundary SATs affect adjoint consistency (and functional superconvergence) adversely, as discussed in \cref{sec:Adjoint Consistency}; however, not including them compromises the energy stability of the scheme. We now propose a modified BR1 type SAT that is stable but does not have extended boundary terms.}
\begin{proposition} \label{prop:BR1 SAT}
A stabilized version of the BR1 type SAT is recovered if the coefficient matrices in \cref{eq:Residual 1st form} are set as
\begin{equation*} \label{eq:BR1 coefficients}
\begin{aligned}
\T_{\gamma k}^{(1)}&=\T_{\gamma v}^{(1)}=\frac{1}{2}\B_{\gamma}\left[\frac{1}{\alpha_{\gamma k}}\Upsilon_{\gamma\gamma k}+\frac{1}{\alpha_{\gamma v}}\Upsilon_{\gamma\gamma v}\right]\B_{\gamma},&\T_{\gamma k}^{(3)}&=\T_{\gamma v}^{(3)}=\frac{1}{2}\B_{\gamma},&\T_{\gamma k}^{(2)}&=\T_{\gamma v}^{(2)}=-\frac{1}{2}\B_{\gamma},&\T_{\gamma k}^{(4)}&=\T_{\gamma v}^{(4)}=\bm{0},\\\T_{\gamma\epsilon k}^{(5)}&=-\T_{\gamma\epsilon k}^{(6)}=\frac{1}{16}\B_{\gamma}\Upsilon_{\gamma\epsilon k}\B_{\epsilon},&\T_{\gamma\delta v}^{(5)}&=-\T_{\gamma\delta v}^{(6)}=\frac{1}{16}\B_{\gamma}\Upsilon_{\gamma\delta v}\B_{\delta},&\T_{\gamma}^{(D)}&=\frac{1}{\alpha_{\gamma k}}\B_{\gamma}\Upsilon_{\gamma\gamma k}\B_{\gamma}.&&
\end{aligned}
\end{equation*}
Moreover, the BR1 SAT produces a consistent, conservative and adjoint consistent discretization.
\end{proposition}
\begin{proof}
Discretizing \cref{eq:BR1 residual} using SBP operators and the discrete lifting operators \cref{eq:Lift global vector} and \cref{eq:Lift Dirichlet}, and comparing the result with \cref{eq:Residual 1st form} yields all the coefficients in \cref{prop:BR1 SAT} except $ \T_{\gamma k}^{(1)} $, $ \T_{\gamma \epsilon k}^{(5)} $, $ \T_{\gamma \delta v}^{(5)} $, and $ \T_{\gamma}^{(D)} $ which are modified for stability reasons. Before modification these coefficients read $ \T_{\gamma k}^{(1)}= (1/4)\B_{\gamma}[\Upsilon_{\gamma\gamma k}+\Upsilon_{\gamma\gamma v}]\B_{\gamma} $, $ \T_{\gamma\epsilon k}^{(5)}=(1/4)\B_{\gamma}\Upsilon_{\gamma\epsilon k}\B_{\epsilon} $, $ \T_{\gamma\delta v}^{(5)}= (1/4)\B_{\gamma}\Upsilon_{\gamma\delta v}\B_{\delta} $, and $ \T_{\gamma}^{(D)}=\B_{\gamma}\Upsilon_{\gamma \gamma k}\B_{\gamma}, $ which do not lead to stable discretization according to \cref{thm:Stability for Adjoint Consistent SATs}. In order to prove that the coefficients presented in \cref{prop:BR1 SAT} lead to stable discretization, we have to show that all the conditions in \cref{thm:Stability for Adjoint Consistent SATs} are met. From \cref{eq:TD stablility} and \cref{eq:T4 stablility}, we immediately see that the conditions on $ \T^{(D)}_{\gamma} $ and $ \T_{\gamma k}^{(4)} $ are satisfied. Substituting $ \T_{\gamma k}^{(2)} $, $ \T_{\gamma v}^{(2)} $, and the modified $ \T_{\gamma k}^{(1)} $ in \cref{eq:T1 stablility 1} we have
\begin{equation}
\frac{1}{2}\B_{\gamma}\left[\frac{1}{\alpha_{\gamma k}}\Upsilon_{\gamma\gamma k}+\frac{1}{\alpha_{\gamma v}}\Upsilon_{\gamma\gamma v}\right]\B_{\gamma}-\frac{2}{4}\left(\frac{1}{\alpha_{\gamma k}}\B_{\gamma}\Upsilon_{\gamma\gamma k}\B_{\gamma}+\frac{1}{\alpha_{\gamma v}}\B_{\gamma}\Upsilon_{\gamma\gamma v}\B_{\gamma}\right)=\bm{0}.
\end{equation}
It remains to show that \cref{eq:T1 stablility 2} is satisfied. In \cref{thm:Stability for Adjoint Consistent SATs} we assumed $ \T_{ak}^{(1)} \succ 0 $ for $ a\in \{\gamma,\epsilon_{1},\epsilon_{2}\} $, which implies that $ \T_{ak}^{(1)} $ is invertible. This is achieved by the proposed $ \T_{\gamma k}^{(1)} $ coefficient since $ \Upsilon_{\gamma\gamma k}\succ 0 $. Note that in \cref{eq:Upsilon definition} we have $ \bar{\H}_k^{-1}\Lambda_k \succ 0 $, and the normals in both $ x $ and $ y $ directions cannot be zero simultaneously. Using the proposed coefficients, we have
\begin{align}
\T_{\gamma\epsilon_{1}k}^{(5)}\left(\T_{\epsilon_{1}k}^{(1)}\right)^{-1}\T_{\epsilon_{1}\gamma k}^{(5)}&=\frac{1}{16}\B_{\gamma}\Upsilon_{\gamma\epsilon_{1}k}\B_{\epsilon_{1}}\left(\frac{1}{2}\B_{\epsilon_{1}}\left[\frac{1}{\alpha_{\epsilon_{1}k}}\Upsilon_{\epsilon_{1}\epsilon_{1}k}+\frac{1}{\alpha_{\epsilon_{1}g_{1}}}\Upsilon_{\epsilon_{1}\epsilon_{1}g_{1}}\right]\B_{\epsilon_{1}}\right)^{-1}\frac{1}{16}\B_{\epsilon_{1}}\Upsilon_{\epsilon_{1}\gamma k}\B_{\gamma}
\nonumber
\\&=\frac{1}{128}\B_{\gamma}\Upsilon_{\gamma\epsilon_{1}k}\left[\frac{1}{\alpha_{\epsilon_{1}k}}\Upsilon_{\epsilon_{1}\epsilon_{1}k}+\frac{1}{\alpha_{\epsilon_{1}g_{1}}}\Upsilon_{\epsilon_{1}\epsilon_{1}g_{1}}\right]^{-1}\Upsilon_{\epsilon_{1}\gamma k}\B_{\gamma}
\preceq\frac{1}{128}\B_{\gamma}\Upsilon_{\gamma\epsilon_{1}k}\left[\frac{1}{\alpha_{\epsilon_{1}k}}\Upsilon_{\epsilon_{1}\epsilon_{1}k}\right]^{-1}\Upsilon_{\epsilon_{1}\gamma k}\B_{\gamma}, \label{eq:T5 T1 T5}
\end{align}
where we applied \cref{lem:Inverse of sum of SPD matrices} in the last step. But we can write
\begin{align}
\Upsilon_{\gamma\epsilon_{1}k}\left[\frac{1}{\alpha_{\epsilon_{1}k}}\Upsilon_{\epsilon_{1}\epsilon_{1}k}\right]^{-1}\Upsilon_{\epsilon_{1}\gamma k}&=\N_{\gamma k}^{T}\bar{\R}_{\gamma k}\bar{\H}_{k}^{-1}\Lambda_{k}\bar{\R}_{\epsilon_{1}k}^{T}\N_{\epsilon_{1}k}\left(\frac{1}{\alpha_{\epsilon_{1}k}}\N_{\epsilon_{1}k}^{T}\bar{\R}_{\epsilon_{1}k}\bar{\H}_{k}^{-1}\Lambda_{k}\bar{\R}_{\epsilon_{1}k}^{T}\N_{\epsilon_{1}k}\right)^{-1}\N_{\epsilon_{1}k}^{T}\bar{\R}_{\epsilon_{1}k}\bar{\H}_{k}^{-1}\Lambda_{k}\bar{\R}_{\gamma k}^{T}\N_{\gamma k}
\nonumber
\\&
=\alpha_{\epsilon_{1}k}\P^{T}\Y^{T}\left[\Y\Y^{T}\right]^{-1}\Y\P,
\end{align}
where $ \P=\left[\bar{\H}_{k}^{-1}\Lambda_{k}\right]^{\frac{1}{2}}\bar{\R}_{\gamma k}^{T}\N_{\gamma k} $ and $ \Y=\N_{\epsilon_{1}k}^{T}\bar{\R}_{\epsilon_{1}k}\left[\bar{\H}_{k}^{-1}\Lambda_{k}\right]^{\frac{1}{2}} $. \cref{lem:I-YXY} implies
$
(\I-\Y^{T}[\Y\Y^{T}]^{-1}\Y)\succeq0,
$
which gives
\begin{align} \label{eq:Lemma 3 applied}
\alpha_{\epsilon_{1}k}\P^{T}\I\P-\Upsilon_{\gamma\epsilon_{1}k}\left[\frac{1}{\alpha_{\epsilon_{1}k}}\Upsilon_{\epsilon_{1}\epsilon_{1}k}\right]^{-1}\Upsilon_{\epsilon_{1}\gamma k}=\alpha_{\epsilon_{1}k}\Upsilon_{\gamma\gamma k}-\Upsilon_{\gamma\epsilon_{1}k}\left[\frac{1}{\alpha_{\epsilon_{1}k}}\Upsilon_{\epsilon_{1}\epsilon_{1}k}\right]^{-1}\Upsilon_{\epsilon_{1}\gamma k}\succeq0.
\end{align}
Since \violet{$ 0 < \alpha_{\epsilon_{1}k} < 1 $}, $ \Upsilon_{\gamma\gamma k} \succ 0$, and $ \Upsilon_{\gamma\gamma v} \succ 0 $, we write
\begin{equation}
\left[\frac{1}{\alpha_{\gamma k}}\Upsilon_{\gamma\gamma k}+\frac{1}{\alpha_{\gamma v}}\Upsilon_{\gamma\gamma v}\right]-\Upsilon_{\gamma\epsilon_{1}k}\left[\frac{1}{\alpha_{\epsilon_{1}k}}\Upsilon_{\epsilon_{1}\epsilon_{1}k}\right]^{-1}\Upsilon_{\epsilon_{1}\gamma k}\succeq0,
\end{equation}
which, together with \cref{eq:T5 T1 T5}, yields the inequality
\begin{equation}
\T_{\gamma\epsilon_{1}k}^{(5)}\left(\T_{\epsilon_{1}k}^{(1)}\right)^{-1}\T_{\epsilon_{1}\gamma k}^{(5)}\preceq\frac{1}{128}\B_{\gamma}\Upsilon_{\gamma\epsilon_{1}k}\left[\frac{1}{\alpha_{\epsilon_{1}k}}\Upsilon_{\epsilon_{1}\epsilon_{1}k}\right]^{-1}\Upsilon_{\epsilon_{1}\gamma k}\B_{\gamma}\preceq\frac{1}{128}\B_{\gamma}\left[\frac{1}{\alpha_{\gamma k}}\Upsilon_{\gamma\gamma k}+\frac{1}{\alpha_{\gamma v}}\Upsilon_{\gamma\gamma v}\right]\B_{\gamma}.
\end{equation}
Therefore,
\begin{equation}
\T_{\gamma k}^{(1)}-64\T_{\gamma\epsilon_{1}k}^{(5)}\left(\T_{\epsilon_{1}k}^{(1)}\right)^{-1}\T_{\epsilon_{1}\gamma k}^{(5)}
\succeq
\T_{\gamma k}^{(1)}-\frac{1}{2}\B_{\gamma}\left[\frac{1}{\alpha_{\gamma k}}\Upsilon_{\gamma\gamma k}+\frac{1}{\alpha_{\gamma v}}\Upsilon_{\gamma\gamma v}\right]\B_{\gamma}=\bm{0},
\end{equation}
which is the result required for \cref{eq:T1 stablility 2} to hold. Note that the same analysis can be done for any combination of facets $ a,b \in \{\gamma,\epsilon_{1},\epsilon_{2}\} $ in \cref{eq:T1 stablility 2}. Finally, from \cref{thm:Consistency,thm:Conservation,thm:Adjoint consistency} it is easily seen that the BR1 SAT satisfies all the conditions required for consistency, conservation, and adjoint consistency.
\end{proof}
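The key inequality in the proof, \cref{eq:Lemma 3 applied}, can also be checked numerically. In the sketch below, the matrices $ \Upsilon_{ab} $ are built with the same algebraic structure as in \cref{eq:Upsilon definition} from random stand-ins for $ \bar{\H}_{k}^{-1}\Lambda_{k} $, the extrapolation matrices, and the normal matrices (NumPy assumed); the smallest eigenvalue of the resulting difference is nonnegative up to round-off.
\begin{verbatim}
# Sanity check of: alpha * U_gg - U_ge (U_ee / alpha)^{-1} U_eg >= 0,
# with U_ab = N_a^T R_a M R_b^T N_b and M symmetric positive definite.
# All sizes and matrices are synthetic stand-ins for the SBP quantities.
import numpy as np

rng = np.random.default_rng(4)
q, nf = 8, 3                                   # volume-block size, facet nodes
A = rng.standard_normal((q, q))
M = A @ A.T + q * np.eye(q)                    # stand-in for Hbar^{-1} Lambda
Rg, Re = rng.standard_normal((nf, q)), rng.standard_normal((nf, q))
Ng, Ne = np.diag(rng.uniform(0.3, 1.0, nf)), np.diag(rng.uniform(0.3, 1.0, nf))

ups = lambda Na, Ra, Nb, Rb: Na.T @ Ra @ M @ Rb.T @ Nb
Ugg, Uee = ups(Ng, Rg, Ng, Rg), ups(Ne, Re, Ne, Re)
Uge, Ueg = ups(Ng, Rg, Ne, Re), ups(Ne, Re, Ng, Rg)
alpha = 0.4                                    # facet weight, 0 < alpha < 1

S = alpha * Ugg - Uge @ np.linalg.inv(Uee / alpha) @ Ueg
print(np.linalg.eigvalsh(0.5 * (S + S.T)).min())   # nonnegative up to round-off
\end{verbatim}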
\begin{remark}
The proposed \violet{interior facet} BR1 SAT is equivalent to the consistent method of Brezzi \etal \cite{brezzi1999discontinuous}, the modified BR1 method in \cite{alhawwary2018accuracy}, the stabilized central flux in \cite{hesthaven2007nodal}, and the penalty approach in \cite{kannan2009study} in the sense that all of these methods can be reproduced by considering $ \sigma_1 \T_{\gamma k}^{(1)} $, $ \sigma_5 \T_{\gamma \epsilon k}^{(5)} $, and $ \sigma_D \T_{\gamma}^{(D)} $ in \cref{prop:BR1 SAT} for $ \sigma_1,\sigma_5,\sigma_D > 0 $.
\end{remark}
\begin{remark} \label{rem:Sigma reduced}
Assuming the source term is zero, it can be shown that the continuous energy estimate satisfies \cite{gassner2018br1}
\begin{equation} \label{eq:Energy BR1 continuous}
\frac{1}{2}\frac{{\rm d}}{{\rm d}t}\norm{\fnc{U}_{h}}{}^{2}=\fnc{R}(\fnc{U}_{h},\fnc{U}_{h})\le\sum_{\gamma\subset\Gamma}\int_{\gamma}\widehat{\fnc{U}}\jump{\vecfnc{W}_{h}}+\widehat{\vecfnc{W}}\cdot\jump{\fnc{U}_{h}}-\jump{\fnc{U}_{h}\vecfnc{W}_{h}}\dd{\Gamma}.
\end{equation}
Substituting the BR1 fluxes into \cref{eq:Energy BR1 continuous} and using the identity
\begin{equation}\label{eq:Jump-avg identity}
\avg{\fnc{U}_{h}}\jump{\vecfnc{W}_{h}}+\avg{\vecfnc{W}_{h}}\cdot\jump{\fnc{U}_{h}}-\jump{\fnc{U}_{h}\vecfnc{W}_{h}}=0
\end{equation}
gives $ {\rm d}/{{\rm d}t}(\norm{\fnc{U}_h}^2) \le 0 $, which establishes the energy stability of the BR1 method for diffusion problems. \red{A discrete energy stability analysis of the SBP-SAT discretization based on the flux formulation leads to a similar conclusion. Such a proof follows the same technique used to show the entropy stability of the LDG method in \cite{chen2020review} (see the proof of Theorem 6.2 therein). If the BR1 SAT is applied only on the interior facets, however, the identity \cref{eq:Jump-avg identity} cannot be applied, and the energy stability of the discretization is compromised unless the SAT coefficients are modified, \eg, as in \cref{prop:BR1 SAT}. Exceptions apply when the BR1 and LDG SATs are implemented with the SBP diagonal-E operators for which all the extended SATs vanish (see \cref{sec:Equivalence of SATs}). In this case, the discrete form of \cref{eq:Jump-avg identity} can be used to show the energy stability of the SBP-SAT discretization based on the flux formulation, which yields the same discretization as the primal formulation when implemented with the SBP diagonal-E operators.}
\end{remark}
\subsection{LDG SAT: The local discontinuous Galerkin method}
The LDG scheme \cite{cockburn1998local} is obtained by choosing the DG numerical fluxes as $ \fnc{\widehat{U}} = \avg{\fnc{U}_h} - \bm{\beta}\cdot\jump{\fnc{U}_h}$ and $ \widehat{\vecfnc{W}} = \avg{\vecfnc{W}_h} + \bm{\beta}\jump{\vecfnc{W}_h} - \mu h^{-1}\jump{\fnc{U}_h} $ on interior facets, and $ \widehat{\vecfnc{W}} = \vecfnc{W}_h - \mu h^{-1}(\fnc{U}_h - \fnc{U}_D)\bm{n} $ on $ \Gamma^D $ \cite{arnold2002unified,peraire2008compact}. The switch function, $ \bm{\beta} $, is defined on each interface as
\begin{equation}\label{eq:LDG switch}
\bm{\beta} = \frac{1}{2}(\beta_{\gamma k} \bm{n}_k + \beta_{\gamma v} \bm{n}_v),
\end{equation}
where $ \beta_{\gamma k},\beta_{\gamma v}\in\{0, 1\} $ are switches defined for $ \Omega_k $ and $ \Omega_v $ at their shared interface. Furthermore, the switches satisfy
\begin{equation}
\beta_{\gamma k} + \beta_{\gamma v} = 1.
\end{equation}
The values of the switches \violet{are set to zero at boundary facets, \ie, $ \beta_{\gamma k}=\beta_{\gamma v}=0 $ for $ \gamma\subset \Gamma^B $. For interior facets, the values} are determined based on the sign of the dot product $ \bm{n}\cdot \bm{g} $, where $ \bm{g} $ is an arbitrary global vector \cite{sherwin20062d}, \ie,
\begin{equation} \label{eq:LDG switch with g}
\begin{aligned}
\beta_{\gamma k}=\begin{cases}
1 & {\rm if}\;\bm{n}_{k}\cdot\bm{g}\ge0,\\
0 & {\rm if}\;\bm{n}_{k}\cdot\bm{g}<0.
\end{cases}
\end{aligned}
\end{equation}
Although it is possible to use other vectors as switch functions, the form in \cref{eq:LDG switch} is necessary to avoid a wider stencil \cite{sherwin20062d,peraire2008compact}. For instance, if we set $ \bm{\beta} = \bm{0} $ and $ \mu=0 $, we recover the BR1 fluxes. \violet{On curved elements, the normal vector varies along the facets; hence, $ \beta_{\gamma k} $ is not constant in general. This leads to cases where \cref{assu:Coefficient matrices} does not hold; in particular, $ \T_{\gamma k}^{(1)} \neq (\T_{\gamma k}^{(1)})^T$, $ \T_{abk}^{(5)} \neq (\T_{bak}^{(5)})^T $, $ \T_{abk}^{(6)} \neq (\T_{bak}^{(6)})^T $. Additionally, it increases the number of elements that are coupled, resulting in a denser system matrix. To remedy this, we calculate $ \beta_{\gamma k} $ using straight facets in 2D (or flat facets in 3D) regardless of whether or not the physical elements are curved.}
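As an illustration of the switch computation in \cref{eq:LDG switch with g}, the following sketch evaluates $ \beta_{\gamma k} $ and $ \beta_{\gamma v} $ for the interior facets of one element from straight-facet outward normals and a chosen global vector; the normals and sizes are made-up placeholders, and boundary facets, for which the switches are simply set to zero, are not included.
\begin{verbatim}
# Per-facet LDG/CDG switches for the interior facets of one element:
# beta_k = 1 if n_k . g >= 0 and 0 otherwise, with beta_v = 1 - beta_k.
import numpy as np

g = np.array([np.pi / 2, np.e])                 # arbitrary global vector

def ldg_switches(normals_k):
    """normals_k: (n_facets, 2) array of straight-facet outward normals."""
    beta_k = (normals_k @ g >= 0.0).astype(float)
    return beta_k, 1.0 - beta_k                 # neighbour takes the complement

normals_k = np.array([[1.0, 0.0], [-0.6, 0.8], [0.0, -1.0]])
print(ldg_switches(normals_k))                  # ([1., 1., 0.], [0., 0., 1.])
\end{verbatim}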
Substituting the numerical fluxes in \cref{eq:RHS of DG primal formulation} and simplifying, the residual of the LDG method reads
\begin{equation} \label{eq:LDG residual}
\begin{aligned}
\fnc{R}\left(\fnc{U}_{h},\fnc{V}\right)&=-\int_{\Omega}\lambda\nabla\fnc{U}_{h}\cdot\nabla\fnc{V}\dd{\Omega}+\int_{\Omega}\fnc{V}\fnc{F}\dd{\Omega}+\int_{\Gamma^{I}}\jump{\fnc{U}_{h}}\cdot\avg{\lambda\nabla\fnc{V}}+\jump{\fnc{V}}\cdot\avg{\lambda\nabla\fnc{U}_{h}}\dd{\Gamma}
\\&\quad
+\int_{\Gamma^{I}}\bm{\beta}\cdot\jump{\fnc{U}_{h}}\jump{\lambda\nabla\fnc{V}}+\jump{\lambda\nabla\fnc{U}_{h}}\bm{\beta}\cdot\jump{\fnc{V}}\dd{\Gamma}-\mu h^{-1}\int_{\Gamma^{I}}\jump{\fnc{V}}\cdot\jump{\fnc{U}_{h}}\dd{\Gamma}
\\&\quad-\int_{\Omega}\left[\fnc{L}\left(\jump{\fnc{V}}\right)+\fnc{S}\left(\bm{\beta}\cdot\jump{\fnc{V}}\right)\right]\cdot\left[\lambda\fnc{L}\left(\jump{\fnc{U}_{h}}\right)+\lambda\fnc{S}\left(\bm{\beta}\cdot\jump{\fnc{U}_{h}}\right)\right]\dd{\Omega}-\mu h^{-1}\int_{\Gamma^{D}}\fnc{V}\left(\fnc{U}_{h}-\fnc{U}_D\right)\dd{\Gamma}
\\&\quad +\int_{\Gamma^{D}}\left(\fnc{U}_{h}-\fnc{U}_{D}\right)\lambda\nabla\fnc{V}\cdot\bm{n}\dd{\Gamma}+\int_{\Gamma^{D}}\fnc{V}(\lambda\nabla\fnc{U}_{h})\cdot \bm{n}\dd{\Gamma}+\int_{\Gamma^{D}}\fnc{V}\lambda\fnc{S}^D(\fnc{U}_{h}-\fnc{U}_D)\cdot\bm{n}\dd{\Gamma}+\int_{\Gamma^{N}}\fnc{V}\fnc{U}_{N}\dd{\Gamma}.
\end{aligned}
\end{equation}
\red{The boundary terms resulting from the discretization of \cref{eq:LDG residual} are different from the LDG boundary coupling terms obtained using global lifting operators defined on all interfaces. We have also used $ -\mu h^{-1}\int_{\Gamma^D}\fnc{V}(\fnc{U}_h - \fnc{U}_D) \dd{\Gamma}$ as the boundary stabilizing term instead of $ -\mu h^{-1}\int_{\Gamma^D}\fnc{V}\fnc{U}_h \dd{\Gamma}$. If these changes are not applied, the LDG boundary SATs would include extended stencil terms, \ie,
\begin{equation}
\begin{aligned}
\bm{s}_{k}^{B}(\uhk,\bm{u}_{\gamma k}, \bm{w}_{\gamma k}) &=\sum_{\gamma\subset\Gamma^{D}}\left[\begin{array}{cc}
\R_{\gamma k}^{T} & \D_{\gamma k}^{T}\end{array}\right]\left[\begin{array}{c}
\T_{\gamma}^{(D)}\\
-\B_{\gamma}
\end{array}\right](\Rgk\uhk -\bm{u}_{\gamma k})
+\sum_{\gamma\subset\Gamma^{N}}\R_{\gamma k}^{T}\B_{\gamma}\left(\D_{\gamma k}\bm{u}_{h,k}-\bm{w}_{\gamma k}\right)
\\
& \quad
+\frac{1+\beta_{\gamma k}-\beta_{\gamma v}}{2}\sum_{\gamma\subset\Gamma_{k}^{I}}\sum_{\epsilon\subset\Gamma_{k}^{D}}\R_{\gamma k}^{T}\B_{\gamma}\Upsilon_{\gamma\epsilon k}\B_{\epsilon}\left(\R_{\epsilon k}\bm{u}_{h,k}-\bm{u}_{\epsilon k}\right)
+\sum_{\gamma\subset\Gamma_{k}^{D}}\sum_{\epsilon\subset\Gamma_{k}^{D}}\R_{\gamma k}^{T}\B_{\gamma}\Upsilon_{\gamma\epsilon k}\B_{\epsilon}\left(\R_{\epsilon k}\bm{u}_{h,k}-\bm{u}_{\epsilon k}\right)
\\
&\quad
-\frac{1+\beta_{\gamma k}-\beta_{\gamma v}}{2}\sum_{\gamma\subset\Gamma_{v}^{I}}\sum_{\epsilon\subset\Gamma_{k}^{D}}\R_{\gamma v}^{T}\B_{\gamma}\Upsilon_{\gamma\epsilon k}\B_{\epsilon}\left(\R_{\epsilon k}\bm{u}_{h,k}-\bm{u}_{\epsilon k}\right) + \mu h^{-1}\R_{\gamma k}^T\B_{\gamma}\bm{u}_{h,k},
\end{aligned}
\end{equation}
where $ \T_{\gamma k}^{(D)}=\B_{\gamma}\Upsilon_{\gamma\gamma k}\B_{\gamma} $. Moreover, the term
\begin{equation*}
\frac{1+\beta_{\epsilon k}-\beta_{\epsilon v}}{2}\sum_{\gamma\subset\Gamma_{k}^{D}}\sum_{\epsilon\subset\Gamma_{k}^{I}}\bm{v}_{k}^{T}\R_{\gamma k}^{T}\B_{\gamma}\Upsilon_{\gamma\epsilon k}\B_{\epsilon}\left(\R_{\epsilon k}\bm{u}_{h,k}-\R_{\epsilon g}\bm{u}_{g}\right).
\end{equation*}
must be added to the interior facet SATs given by \cref{eq:Interface SATs}. Similar to the unmodified BR1 SAT, it can be shown that the unmodified LDG SATs based on the primal and flux formulations are identical. As explained in \cref{rem:Sigma reduced}, the LDG SAT written in the flux formulation is energy stable (even when $ \mu=0 $); hence, it follows that the unmodified LDG SAT based on the primal formulation is energy stable.} The penalty coefficients corresponding to the \violet{interior LDG SATs} can be obtained by discretizing \cref{eq:LDG residual} and comparing the result with \cref{eq:Residual 1st form}. The coefficients $ \T_{\gamma k}^{(2)} $, $ \T_{\gamma v}^{(2)} $, $ \T_{\gamma k}^{(3)} $, $ \T_{\gamma v}^{(3)} $, $ \T_{\gamma k}^{(4)} $, and $ \T_{\gamma v}^{(4)} $ are the same as those presented in \cref{prop:LDG SAT} below. The rest of the coefficients are
\begin{equation} \label{eq:LDG original coefficients}
\begin{aligned}
\T_{\gamma k}^{(1)}&=\T_{\gamma v}^{(1)}=\violet{\B_{\gamma}\bigg[\frac{1+\left(\beta_{\gamma k}-\beta_{\gamma v}\right)^{2}+2(\beta_{\gamma k}-\beta_{\gamma v})}{4}\Upsilon_{\gamma\gamma k}+\frac{1+\left(\beta_{\gamma k}-\beta_{\gamma v}\right)^{2}-2(\beta_{\gamma k}-\beta_{\gamma v})}{4}\Upsilon_{\gamma\gamma v}\bigg]\B_{\gamma}}+\mu h^{-1}\B_{\gamma},
\\
\T_{\gamma\epsilon k}^{(5)}&=-\T_{\gamma\epsilon k}^{(6)}=\frac{\left[1+\beta_{\gamma k}-\beta_{\gamma v}\right]\left[1+\beta_{\epsilon k}-\beta_{\epsilon g}\right]}{4}\B_{\gamma}\Upsilon_{\gamma\epsilon k}\B_{\epsilon},\\\T_{\gamma\delta v}^{(6)}&=-\T_{\gamma \delta v}^{(5)}=\frac{\left[\beta_{\gamma k}-\beta_{\gamma v}-1\right]\left[1+\beta_{\delta v}-\beta_{\delta q}\right]}{4}\B_{\gamma}\Upsilon_{\gamma\delta v}\B_{\delta},\\\T_{\gamma k}^{(D)}&=\B_{\gamma}\Upsilon_{\gamma\gamma k}\B_{\gamma}+\mu h^{-1}\B_{\gamma}.
\end{aligned}
\end{equation}
As expected, the coefficients in \cref{eq:LDG original coefficients} are identical to the unmodified BR1 SAT when $ \bm{\beta}$ and $ \mu $ are set to zero. For mesh independent SAT penalties, \ie, $ \mu=0$, the coefficients in \cref{eq:LDG original coefficients} do not guarantee energy stability. To see this, consider the case where $ \beta_{\gamma k} = 1 $ and $ \beta_{\gamma v} = 0 $; the stability requirement in \cref{eq:T1 stablility 1} demands positive semidefiniteness of
\violet{
\begin{equation*}
\T_{\gamma k}^{(1)}-2\left(\frac{1}{\alpha_{\gamma k}}\T_{\gamma k}^{(2)}\Upsilon_{\gamma\gamma k}\T_{\gamma k}^{(2)}+\frac{1}{\alpha_{\gamma v}}\T_{\gamma v}^{(2)}\Upsilon_{\gamma\gamma v}\T_{\gamma v}^{(2)}\right)=\B_{\gamma}\Upsilon_{\gamma\gamma k}\B_{\gamma}-\frac{2}{\alpha_{\gamma k}}\B_{\gamma}\Upsilon_{\gamma\gamma k}\B_{\gamma}.
\end{equation*}
However, this cannot be achieved since $ 0< \alpha_{\gamma k}<1 $}. It is also clear from \cref{eq:TD stablility} that $ \T_{\gamma}^{(D)} $ is not large enough to ensure energy stability. To remedy this, we propose a stabilized form of the LDG scheme that does not have a mesh dependent stability parameter.
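Before stating the modified coefficients, the following sketch makes the violation above concrete: with $ \beta_{\gamma k}=1 $, $ \beta_{\gamma v}=0 $, and $ \mu=0 $, the unmodified coefficients give $ \T_{\gamma k}^{(2)}=-\B_{\gamma} $, $ \T_{\gamma v}^{(2)}=\bm{0} $, and $ \T_{\gamma k}^{(1)}=\B_{\gamma}\Upsilon_{\gamma\gamma k}\B_{\gamma} $, so the left-hand side of \cref{eq:T1 stablility 1} has a negative smallest eigenvalue. The matrices used are random stand-ins (NumPy assumed), not actual SBP operators.
\begin{verbatim}
# Unmodified LDG coefficients with beta_k = 1, beta_v = 0, mu = 0:
# T1 - 2( (1/a_k) T2_k Y_k T2_k + (1/a_v) T2_v Y_v T2_v ) is indefinite
# because T1 = B Y_k B, T2_k = -B, T2_v = 0 and 0 < a_k < 1.
import numpy as np

rng = np.random.default_rng(5)
nf  = 3
B   = np.diag(rng.uniform(0.2, 1.0, nf))      # stand-in for B_gamma

def spd(n):
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)

Yk, Yv   = spd(nf), spd(nf)                   # stand-ins for Upsilon_{gg,k} and _{gg,v}
a_k, a_v = 0.3, 0.3                           # facet weights, strictly below 1

T1       = B @ Yk @ B                         # unmodified LDG T^(1)
T2k, T2v = -B, np.zeros((nf, nf))
lhs = T1 - 2 * ((1/a_k) * T2k @ Yk @ T2k + (1/a_v) * T2v @ Yv @ T2v)
print(np.linalg.eigvalsh(0.5 * (lhs + lhs.T)).min())   # negative: condition violated
\end{verbatim}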
\begin{proposition}\label{prop:LDG SAT}
A consistent, conservative, adjoint consistent, and stable LDG type SAT with no mesh dependent stabilization parameter, \ie, $ \mu=0 $, is obtained if the penalty coefficients in \cref{eq:Residual 1st form} are chosen such that
\begin{equation*}
\begin{aligned}
\T_{\gamma k}^{(1)}&=\T_{\gamma v}^{(1)}=\B_{\gamma}\bigg[\frac{\beta_{\gamma k}-\beta_{\gamma v}+1}{\alpha_{\gamma k}}\Upsilon_{\gamma\gamma k}+\frac{\beta_{\gamma v}-\beta_{\gamma k}+1}{\alpha_{\gamma v}}\Upsilon_{\gamma\gamma v}\bigg]\B_{\gamma},&\T_{\gamma}^{(D)}&=\frac{1}{\alpha_{\gamma k}}\B_{\gamma}\Upsilon_{\gamma\gamma k}\B_{\gamma},&\T_{\gamma k}^{(4)}&=\T_{\gamma v}^{(4)}=\bm{0},\\\T_{\gamma\epsilon k}^{(5)}&=-\T_{\gamma\epsilon k}^{(6)}=\frac{\left[1+\beta_{\gamma k}-\beta_{\gamma v}\right]\left[1+\beta_{\epsilon k}-\beta_{\epsilon g}\right]}{16}\B_{\gamma}\Upsilon_{\gamma\epsilon k}\B_{\epsilon},&\T_{\gamma k}^{(2)}&=\frac{\beta_{\gamma v}-\beta_{\gamma k}-1}{2}\B_{\gamma},&\T_{\gamma k}^{(3)}&=\frac{\beta_{\gamma v}-\beta_{\gamma k}+1}{2}\B_{\gamma},\\\T_{\gamma\delta v}^{(6)}&=-\T_{\gamma\delta v}^{(5)}=\frac{\left[\beta_{\gamma k}-\beta_{\gamma v}-1\right]\left[1+\beta_{\delta v}-\beta_{\delta q}\right]}{16}\B_{\gamma}\Upsilon_{\gamma\delta v}\B_{\delta},&\T_{\gamma v}^{(2)}&=\frac{\beta_{\gamma k}-\beta_{\gamma v}-1}{2}\B_{\gamma},&\T_{\gamma v}^{(3)}&=\frac{\beta_{\gamma k}-\beta_{\gamma v}+1}{2}\B_{\gamma}.
\end{aligned}
\end{equation*}
\end{proposition}
\begin{proof}
The proofs for consistency, conservation, and adjoint consistency are straightforward. Moreover, the stiffness matrix arising from the discretization is symmetric since $ \T_{\gamma k}^{(3)}-\T_{\gamma k}^{(2)} = \B_{\gamma} $. We see that the energy stability conditions in \cref{eq:TD stablility,eq:T4 stablility} are met. It remains to show that the coefficients satisfy the energy stability requirements in \cref{eq:T1 stablility 1,eq:T1 stablility 2}. Note that if $ \beta_{\gamma v}=1 $ or $ \beta_{\epsilon g}=1 $ (or both), then $ \T_{\gamma\epsilon k}^{(5)}=\T_{\epsilon\gamma k}^{(5)}=\bm{0} $, and the scheme is stable since \cref{eq:T1 stablility 1} is satisfied, \ie,
\begin{equation} \label{eq:LDG stability with betak=0}
\violet{\T_{\gamma k}^{(1)}-2\left(\frac{1}{\alpha_{\gamma k}}\T_{\gamma k}^{(2)}\Upsilon_{\gamma\gamma k}\T_{\gamma k}^{(2)}+\frac{1}{\alpha_{\gamma v}}\T_{\gamma v}^{(2)}\Upsilon_{\gamma\gamma v}\T_{\gamma v}^{(2)}\right)
=\frac{2}{\alpha_{\gamma v}}\B_{\gamma}\Upsilon_{\gamma\gamma v}\B_{\gamma}-\frac{2}{\alpha_{\gamma v}}\B_{\gamma}\Upsilon_{\gamma\gamma v}\B_{\gamma}= \bm{0}}.
\end{equation}
Thus, we only need to consider the case where $ \beta_{\gamma k}=\beta_{\epsilon k}=1 $, which gives $ \T_{\gamma\epsilon k}^{(5)}=(1/4)\B_{\gamma}\Upsilon_{\gamma\epsilon k}\B_{\epsilon} $, $ \T_{\epsilon\gamma k}^{(5)}=(1/4)\B_{\epsilon}\Upsilon_{\epsilon\gamma k}\B_{\gamma} $, and $ \T_{\epsilon_{1}k}^{(1)}=(2/\alpha_{\epsilon_{1}k})\B_{\epsilon_{1}}\Upsilon_{\epsilon_{1}\epsilon_{1}k}\B_{\epsilon_{1}} $. Hence, we have
\begin{equation}
\T_{\gamma k}^{(1)}-2\left(\frac{1}{\alpha_{\gamma k}}\T_{\gamma k}^{(2)}\Upsilon_{\gamma\gamma k}\T_{\gamma k}^{(2)}+\frac{1}{\alpha_{\gamma v}}\T_{\gamma v}^{(2)}\Upsilon_{\gamma\gamma v}\T_{\gamma v}^{(2)}\right)
=\frac{2}{\alpha_{\gamma k}}\B_{\gamma}\Upsilon_{\gamma\gamma k}\B_{\gamma}-\frac{2}{\alpha_{\gamma k}}\B_{\gamma}\Upsilon_{\gamma\gamma k}\B_{\gamma}=\bm{0}.
\end{equation}
Furthermore, we find
\begin{equation}
\T_{\gamma\epsilon_{1}k}^{(5)}\left(\T_{\epsilon_{1}k}^{(1)}\right)^{-1}\T_{\epsilon_{1}\gamma k}^{(5)}=\frac{1}{16}\B_{\gamma}\Upsilon_{\gamma\epsilon_{1}k}\B_{\epsilon_{1}}\left[\frac{2}{\alpha_{\epsilon_{1}k}}\B_{\epsilon_{1}}\Upsilon_{\epsilon_{1}\epsilon_{1}k}\B_{\epsilon_{1}}\right]^{-1}\B_{\epsilon_{1}}\Upsilon_{\epsilon_{1}\gamma k}\B_{\gamma}=\frac{1}{32}\B_{\gamma}\Upsilon_{\gamma\epsilon_{1}k}\left[\frac{1}{\alpha_{\epsilon_{1}k}}\Upsilon_{\epsilon_{1}\epsilon_{1}k}\right]^{-1}\Upsilon_{\epsilon_{1}\gamma k}\B_{\gamma},
\end{equation}
but, as in \cref{eq:Lemma 3 applied}, application of \cref{lem:I-YXY} yields $ \frac{1}{\alpha_{\gamma k}}\Upsilon_{\gamma\gamma k}-\Upsilon_{\gamma\epsilon_{1}k}[\frac{1}{\alpha_{\epsilon_{1}k}}\Upsilon_{\epsilon_{1}\epsilon_{1}k}]^{-1}\Upsilon_{\epsilon_{1}\gamma k}\succeq0 $, which implies that
\begin{equation}
\T_{\gamma\epsilon_{1}k}^{(5)}\left(\T_{\epsilon_{1}k}^{(1)}\right)^{-1}\T_{\epsilon_{1}\gamma k}^{(5)} \preceq \frac{1}{32}\B_{\gamma}\left[\frac{1}{\alpha_{\gamma k}}\Upsilon_{\gamma\gamma k}\right]\B_{\gamma}.
\end{equation}
The last condition, \cref{eq:T1 stablility 2}, that we need to show for energy stability follows as
\begin{equation}
\T_{\gamma k}^{(1)}-64\T_{\gamma\epsilon_{1}k}^{(5)}\left(\T_{\epsilon_{1}k}^{(1)}\right)^{-1}\T_{\epsilon_{1}\gamma k}^{(5)}\succeq\T_{\gamma k}^{(1)}-2\B_{\gamma}\left[\frac{1}{\alpha_{\gamma k}}\Upsilon_{\gamma\gamma k}\right]\B_{\gamma}=\bm{0}.
\end{equation}
Therefore, the LDG SAT coefficients in \cref{prop:LDG SAT} lead to an energy stable SBP-SAT discretization.
\end{proof}
\subsection{CDG SAT: The compact discontinuous Galerkin method}
The CDG method \cite{peraire2008compact} uses numerical fluxes similar to those of the LDG scheme, but with local instead of global lifting operators. More precisely, the numerical fluxes for the CDG method are $ \fnc{\widehat{U}} = \avg{\fnc{U}_h} - \bm{\beta}\cdot\jump{\fnc{U}_h}$ and $ \widehat{\vecfnc{W}} = \avg{\vecfnc{W}_h^{\gamma}} + \bm{\beta}\jump{\vecfnc{W}_h^{\gamma}} - \mu h^{-1}\jump{\fnc{U}_h} $ on interior facets, and $ \widehat{\vecfnc{W}} = \vecfnc{W}_h^{\gamma} - \mu h^{-1}(\fnc{U}_h - \fnc{U}_D)\bm{n} $ on $ \Gamma^D $ \cite{peraire2008compact}. For this reason, the residual of the CDG method can be obtained from \cref{eq:LDG residual} by replacing the global lifting operators with the corresponding local lifting operators. The implication for the SBP-SAT discretization is the nullification of the SAT coefficients that lead to extended stencils.
\begin{proposition}\label{prop:CDG SAT}
A consistent, conservative, adjoint consistent, and energy stable version of the CDG scheme with no mesh dependent stabilization parameter, \ie, $ \mu=0 $, has the SAT coefficients
\begin{equation*}
\begin{aligned}
\T_{\gamma k}^{(1)}&=\T_{\gamma v}^{(1)}=\frac{1}{2}\B_{\gamma}\bigg[\frac{\beta_{\gamma k}-\beta_{\gamma v}+1}{\alpha_{\gamma k}}\Upsilon_{\gamma\gamma k}+\frac{\beta_{\gamma v}-\beta_{\gamma k}+1}{\alpha_{\gamma v}}\Upsilon_{\gamma\gamma v}\bigg]\B_{\gamma},&\T_{\gamma k}^{(3)}&=\frac{\beta_{\gamma v}-\beta_{\gamma k}+1}{2}\B_{\gamma},&\T_{\gamma k}^{(4)}&=\T_{\gamma v}^{(4)}=\bm{0},\\\T_{\gamma k}^{(2)}&=\frac{\beta_{\gamma v}-\beta_{\gamma k}-1}{2}\B_{\gamma},&\T_{\gamma v}^{(3)}&=\frac{\beta_{\gamma k}-\beta_{\gamma v}+1}{2}\B_{\gamma},&\T_{\gamma\epsilon k}^{(5)}&=\T_{\gamma\epsilon k}^{(6)}=\bm{0},\\\T_{\gamma v}^{(2)}&=\frac{\beta_{\gamma k}-\beta_{\gamma v}-1}{2}\B_{\gamma},&\T_{\gamma}^{(D)}&=\frac{1}{\alpha_{\gamma k}}\B_{\gamma}\Upsilon_{\gamma\gamma k}\B_{\gamma},&\T_{\gamma\delta v}^{(6)}&=\T_{\gamma\delta v}^{(5)}=\bm{0}.
\end{aligned}
\end{equation*}
\end{proposition}
\begin{proof}
The proofs for consistency, conservation, and adjoint consistency are straightforward. Energy stability follows if we can show that \cref{eq:T1 stablility 1} holds (note that \cref{eq:TD stablility} is satisfied). If $ \beta_{\gamma k} = 1 $ then $ \beta_{\gamma v}=0 $, and
\begin{equation}
\T_{\gamma k}^{(1)}-\left(\frac{1}{\alpha_{\gamma k}}\T_{\gamma k}^{(2)}\Upsilon_{\gamma\gamma k}\T_{\gamma k}^{(2)}+\frac{1}{\alpha_{\gamma v}}\T_{\gamma v}^{(2)}\Upsilon_{\gamma\gamma v}\T_{\gamma v}^{(2)}\right)=\frac{1}{\alpha_{\gamma k}}\B_{\gamma}\Upsilon_{\gamma\gamma k}\B_{\gamma}-\frac{1}{\alpha_{\gamma k}}\B_{\gamma}\Upsilon_{\gamma\gamma k}\B_{\gamma} = \bm{0}.
\end{equation}
Similarly, if $ \beta_{\gamma v} = 1$ then $ \beta_{\gamma k} = 0 $, and we have $ \T_{\gamma k}^{(1)}-((1/{\alpha_{\gamma k}})\T_{\gamma k}^{(2)}\Upsilon_{\gamma\gamma k}\T_{\gamma k}^{(2)}+(1/{\alpha_{\gamma v}})\T_{\gamma v}^{(2)}\Upsilon_{\gamma\gamma v}\T_{\gamma v}^{(2)}) = \bm{0} $. Hence, the stability condition in \cref{eq:T1 stablility 1} is satisfied.
\end{proof}
The SAT coefficients in \cref{prop:CDG SAT}, except for $ \T_{\gamma k}^{(1)} $ and $ \T_{\gamma}^{(D)} $, are found by discretizing the residual resulting from the CDG method and comparing the result with \cref{eq:Residual 1st form}. The coefficients $ \T_{\gamma k}^{(1)} $ and $ \T_{\gamma}^{(D)} $ for the original CDG method are the same as those stated in \cref{eq:LDG original coefficients}. Similar SAT coefficients for the CDG method are proposed in \cite{yan2020immersed}, where the stability issue with the original CDG method is discussed. In \cite{brdar2012compact}, numerical studies revealed that the original CDG method can be unstable for variable coefficient diffusion problems and for discretizations on quadrilateral grids.
\begin{remark}
As noted in \cite{peraire2008compact}, the LDG SAT and CDG SAT are identical in one space dimension. To see this, consider an arbitrary global vector, $ \bm{g} $, pointing to the right and two elements ordered from left to right, $ \Omega_k $ and $ \Omega_v $, respectively; then $ \beta_{\gamma k} = 1 $, $ \beta_{\gamma v} = 0 $, and $ \beta_{\epsilon k} =0 $. These values of the switches nullify all $ \T^{(5)} $ and $ \T^{(6)} $ SAT coefficients, and the LDG SAT remains stable with half the coefficient $ \T_{\gamma k}^{(1)} $ in \cref{prop:LDG SAT}; thus, the CDG SAT and the LDG SAT become identical.
\end{remark}
\subsection{BO SAT: The Baumann-Oden method}
Unlike the schemes presented so far, the BO method \cite{baumann1999discontinuous} leads to neither a symmetric stiffness matrix nor an adjoint consistent discretization. The numerical fluxes for the BO method are $ \widehat{\fnc{U}}=\avg{\fnc{U}_{h}}+\bm{n}\cdot\jump{\fnc{U}_{h}} $, and $ \widehat{\vecfnc{W}}=\avg{\lambda\nabla\fnc{U}_{h}} $ \cite{arnold2002unified}, and the residual is given by
\begin{align}\label{eq:BO Residual}
\fnc{R}\left(\fnc{U}_{h},\fnc{V}\right)&=-\int_{\Omega}\lambda\nabla\fnc{U}_{h}\cdot\nabla\fnc{V}\dd{\Omega}+\int_{\Omega}\fnc{V}\fnc{F}\dd{\Omega}-\int_{\Gamma^{I}}\jump{\fnc{U}_{h}}\cdot\avg{\lambda\nabla\fnc{V}}-\jump{\fnc{V}}\cdot\avg{\lambda\nabla\fnc{U}_{h}}\dd{\Gamma}-\int_{\Gamma^{D}}(\fnc{U}_{h}-\fnc{U}_{D})\lambda\nabla\fnc{V}\cdot\bm{n}\dd{\Gamma}
\nonumber
\\&\quad
+\int_{\Gamma^{D}}\fnc{V}\left(\lambda\nabla\fnc{U}_{h}\right)\cdot\bm{n}\dd{\Gamma}+\int_{\Gamma^{D}}\fnc{V}\lambda\fnc{S}^{D}(\fnc{U}_{h}-\fnc{U}_{D})\cdot\bm{n}\dd{\Gamma}+\int_{\Gamma^{N}}\fnc{V}\fnc{U}_{N}\dd{\Gamma}.
\end{align}
Discretization of \cref{eq:BO Residual} and comparison with \cref{eq:Residual 1st form} gives all of the SAT coefficients in \cref{prop:BO SAT} below, except for $ \T_{\gamma}^{(D)} $ which is modified for stability reasons. The coefficient $ 1/\alpha_{\gamma k} $ in $ \T_{\gamma}^{(D)} $ does not appear in the original BO method.
\begin{proposition} \label{prop:BO SAT}
The BO method is reproduced if the SAT coefficients in \cref{eq:Residual 1st form} are chosen such that
\begin{equation*}
\begin{aligned}
\T_{\gamma k}^{(1)}&=\T_{\gamma v}^{(1)}=\bm{0},& \T_{\gamma k}^{(2)}&=\T_{\gamma v}^{(2)}=\violet{\frac{1}{2}\B_{\gamma}},&\T_{\gamma k}^{(4)}&=\T_{\gamma v}^{(4)}=\bm{0},\\\T_{\gamma}^{(D)}&=\frac{1}{\alpha_{\gamma k}}\B_{\gamma}\Upsilon_{\gamma\gamma k}\B_{\gamma},&\T_{\gamma k}^{(3)}&=\T_{\gamma v}^{(3)}=\frac{1}{2}\B_{\gamma},&\T_{\gamma\epsilon k}^{(5)}&=\T_{\gamma\epsilon k}^{(6)}=\T_{\gamma\delta v}^{(5)}=\T_{\gamma\delta v}^{(6)}=\bm{0},
\end{aligned}
\end{equation*}
and the discretization arising from using these coefficients is consistent, conservative, and energy stable.
\end{proposition}
\begin{proof}
It can easily be verified that the SAT coefficients satisfy the conditions for consistency and conservation. The proof for energy stability follows from \cref{thm:Stability Adjoint Inconsistent}.
\end{proof}
\subsection{CNG SAT: The Carpenter-Nordstr{\"o}m-Gottlieb method}
The CNG SAT \cite{carpenter1999stable} was introduced to couple multi-domain high-order finite difference discretizations. Although it was originally presented for advection-diffusion problems, in this work we consider only the CNG SAT coefficients that couple the diffusive terms. The CNG SAT coefficients for multidimensional SBP operators are stated in \cref{prop:CNG SAT} below (see \cite{carpenter1999stable,carpenter2010revisiting,gong2011interface} for analogous coefficients in one-dimensional implementations).
\begin{proposition}\label{prop:CNG SAT}
The CNG SAT leads to a consistent, conservative, and energy stable discretization, and has the coefficients
\begin{equation*}
\begin{aligned}
\T_{\gamma k}^{(1)}&=\T_{\gamma v}^{(1)}=\frac{1}{16}\B_{\gamma}\left[\frac{1}{\alpha_{\gamma k}}\Upsilon_{\gamma\gamma k}+\frac{1}{\alpha_{\gamma v}}\Upsilon_{\gamma\gamma v}\right]\B_{\gamma},&\T_{\gamma k}^{(3)}&=\T_{\gamma v}^{(3)}=\frac{1}{2}\B_{\gamma},&\T_{\gamma}^{(D)}&=\frac{1}{\alpha_{\gamma k}}\B_{\gamma}\Upsilon_{\gamma\gamma k}\B_{\gamma},\\\T_{\gamma\epsilon k}^{(5)}&=\T_{\gamma\epsilon k}^{(6)}=\T_{\gamma\delta v}^{(5)}=\T_{\gamma\delta v}^{(6)}=\bm{0},&\T_{\gamma k}^{(2)}&=\T_{\gamma v}^{(2)}=\bm{0},&\T_{\gamma k}^{(4)}&=\T_{\gamma v}^{(4)}=\bm{0}.
\end{aligned}
\end{equation*}
\end{proposition}
\begin{proof}
Substituting the SAT coefficients in \cref{eq:Residual 3rd form} and evaluating $ 2R_h(\bm{u}_h, \bm{u}_h) = R_h(\bm{u}_h, \bm{u}_h) + R_h^T(\bm{u}_h, \bm{u}_h) $, stability follows if $ \A+\A^T \succeq 0$, where $ \A $ is given by \cref{eq:Matrix A for compact SATs} (with $ \T_{\gamma k}^{(2)}=\T_{\gamma v}^{(2)}=\bm{0} $). From \cref{thm:Positive semi-definiteness}, $ \A+\A^T $ is positive semidefinite if
\begin{equation}
\begin{aligned}
\left[\begin{array}{cc}
2\T_{\gamma k}^{(1)} & -2\T_{\gamma k}^{(1)}\\
-2\T_{\gamma k}^{(1)} & 2\T_{\gamma k}^{(1)}
\end{array}\right]-\left[\begin{array}{cc}
-\frac{1}{2}\B_{\gamma}\C_{\gamma k} & \frac{1}{2}\B_{\gamma}\C_{\gamma v}\\
\frac{1}{2}\B_{\gamma}\C_{\gamma k} & -\frac{1}{2}\B_{\gamma}\C_{\gamma v}
\end{array}\right]\left[\begin{array}{cc}
2\alpha_{\gamma k}\Lambda_{k}^{*} & 0\\
0 & 2\alpha_{\gamma v}\Lambda_{v}^{*}
\end{array}\right]^{-1}\left[\begin{array}{cc}
-\frac{1}{2}\C_{\gamma k}^{T}\B_{\gamma} & \frac{1}{2}\C_{\gamma k}^{T}\B_{\gamma}\\
\frac{1}{2}\C_{\gamma v}^{T}\B_{\gamma} & -\frac{1}{2}\C_{\gamma v}^{T}\B_{\gamma}
\end{array}\right]&\succeq0,
\end{aligned}
\end{equation}
which, after simplification, gives
\begin{equation}
\left[\begin{array}{cc}
1 & -1\\
-1 & 1
\end{array}\right]\otimes\left(2\T_{\gamma k}^{(1)}-\B_{\gamma}\left[\frac{1}{8\alpha_{\gamma k}}\Upsilon_{\gamma\gamma k}+\frac{1}{8\alpha_{\gamma v}}\Upsilon_{\gamma\gamma v}\right]\B_{\gamma}\right)\succeq0.
\end{equation}
The stability constraint $ \T_{\gamma k}^{(1)}\succeq\B_{\gamma}[1/(16\alpha_{\gamma k})\Upsilon_{\gamma\gamma k}+1/(16\alpha_{\gamma v})\Upsilon_{\gamma\gamma v}]\B_{\gamma} $ is, therefore, satisfied by the proposed SAT coefficient. The stability constraint on $ \T_{\gamma}^{(D)} $ is the same as the other methods presented earlier. Finally, it follows from \cref{thm:Consistency,thm:Conservation} that the coefficients in \cref{prop:CNG SAT} lead to consistent and conservative discretizations.
\end{proof}
\cref{tab:SATs} summarizes the SAT coefficients corresponding to eight different methods. The coefficients for BR2 and SIPG were first presented in \cite{yan2018interior}, and the analysis therein shows that the methods lead to consistent, conservative, adjoint consistent, and energy stable discretizations. The nonsymmetric interior penalty Galerkin (NIPG) SAT is obtained by modifying $ \T_{\gamma k}^{(1)} $ and $ \T_{\gamma}^{(D)} $ in \cref{prop:BO SAT}, and it leads to consistent, conservative, and energy stable discretizations. \blue{For the implementations of the SATs in \cref{tab:SATs}, we used the facet weight parameter, $ \alpha_{\gamma k} $, provided in \cite{yan2018interior},
\begin{equation} \label{eq:alpha_gamma k}
\alpha_{\gamma k}=\begin{cases}
\frac{{\cal A}\left(\gamma\right)}{{\cal A}\left(\Gamma_{k}^{I}\right)+2{\cal A}\left(\Gamma_{k}^{D}\right)}, & \text{if }\gamma\in\Gamma^{I},\\
\frac{2{\cal A}\left(\gamma\right)}{{\cal A}\left(\Gamma_{k}^{I}\right)+2{\cal A}\left(\Gamma_{k}^{D}\right)}, & \text{if }\gamma\in\Gamma^{D},
\end{cases}
\end{equation}
where $ {\cal A}(\gamma) $ denotes the length of facet $ \gamma $ in 2D (or its area in 3D).} \violet{Moreover, for the LDG and CDG SATs, we set the arbitrary global vector in \cref{eq:LDG switch with g} as $ \bm{g}=[\frac{\pi}{2}, e]^T $}.
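A small sketch of the facet-weight computation in \cref{eq:alpha_gamma k} is given below for a single 2D element; the facet lengths are illustrative placeholders, Neumann facets are not handled, and all non-Dirichlet facets are treated as interior.
\begin{verbatim}
# Facet weights alpha_{gamma k} for one element from facet lengths A(gamma);
# interior facets get A/denom and Dirichlet facets 2A/denom, where
# denom = A(Gamma_k^I) + 2 A(Gamma_k^D).  The weights sum to one.
def facet_weights(facet_lengths, is_dirichlet):
    denom = sum(L for L, d in zip(facet_lengths, is_dirichlet) if not d) \
          + 2.0 * sum(L for L, d in zip(facet_lengths, is_dirichlet) if d)
    return [(2.0 * L if d else L) / denom
            for L, d in zip(facet_lengths, is_dirichlet)]

# a triangle with one Dirichlet facet
print(facet_weights([1.0, 1.0, 1.4142], [False, False, True]))
\end{verbatim}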
\begin{table*}[t!]
\small
\centering
\begin{threeparttable}
\caption{\label{tab:SATs} Interior facet SAT coefficients for diffusion problems. In the cases where only one entry is provided, the two coefficients in the column heading are equal, \eg, for the BR1 SAT, we have $ \T_{\gamma k}^{(2)} = \T_{\gamma v}^{(2)} = -\frac{1}{2}\B_{\gamma} $. All the SATs considered have $ \T_{\gamma k}^{(4)}=\T_{\gamma v}^{(4)}=\bm{0} $.}
\setlength{\tabcolsep}{.5em}
\renewcommand\cellgape{\Gape[\jot]}
\begin{tabular}{l | c| c| c| c }
\toprule
\makecell[l]{SAT} & \makecell[c]{$\T_{\gamma k}^{(1)}$, $\T_{\gamma v}^{(1)}$} & \makecell[c]{$\T_{\gamma k}^{(2)}$, $\T_{\gamma v}^{(2)}$} & \makecell[c]{$\T_{\gamma k}^{(3)}$, $\T_{\gamma v}^{(3)}$}& \makecell[c]{$\T_{\gamma\epsilon k}^{(5)}$, $\T_{\gamma\delta v}^{(6)}$}\\
\midrule
\makecell[l]{BR1} & \makecell[c]{$\frac{1}{2}\B_{\gamma}\left[\frac{1}{\alpha_{\gamma k}}\Upsilon_{\gamma\gamma k}+\frac{1}{\alpha_{\gamma v}}\Upsilon_{\gamma\gamma v}\right]\B_{\gamma}$} & \makecell[c]{$-\frac{1}{2}\B_{\gamma}$} & \makecell[c]{$\frac{1}{2}\B_{\gamma}$} & \makecell[c]{$\T_{\gamma\epsilon k}^{(5)}=\frac{1}{16}\B_{\gamma}\Upsilon_{\gamma\epsilon k}\B_{\epsilon}$\vspace{3pt}\\
$\T_{\gamma\delta v}^{(6)}=\frac{1}{16}\B_{\gamma}\Upsilon_{\gamma\delta v}\B_{\delta}$}\\
\midrule
\makecell[l]{BR2} & \makecell[c]{$\frac{1}{4}\B_{\gamma}\left[\frac{1}{\alpha_{\gamma k}}\Upsilon_{\gamma\gamma k}+\frac{1}{\alpha_{\gamma v}}\Upsilon_{\gamma\gamma v}\right]\B_{\gamma}$} & \makecell[c]{$-\frac{1}{2}\B_{\gamma}$} & \makecell[c]{$\frac{1}{2}\B_{\gamma}$} & \makecell[c]{$\bm{0}$}\\
\midrule
\makecell[l]{SIPG} & \makecell[c]{$\left[\frac{\left(\lambda_{\max}\right)_{k}\parallel\B_{\gamma}^{\frac{1}{2}}\R_{\gamma k}\H_{k}^{-\frac{1}{2}}\parallel_{2}^{2}}{4\alpha_{\gamma k}}+\frac{\left(\lambda_{\max}\right)_{v}\parallel\B_{\gamma}^{\frac{1}{2}}\R_{\gamma v}\H_{v}^{-\frac{1}{2}}\parallel_{2}^{2}}{4\alpha_{\gamma v}}\right]\B_{\gamma}$} & \makecell[c]{$-\frac{1}{2}\B_{\gamma}$} & \makecell[c]{$\frac{1}{2}\B_{\gamma}$} & \makecell[c]{$\bm{0}$}\\
\midrule
\makecell[l]{LDG} & \makecell[c]{$\B_{\gamma}\left[\frac{\beta_{\gamma k}-\beta_{\gamma v}+1}{\alpha_{\gamma k}}\Upsilon_{\gamma\gamma k}+\frac{\beta_{\gamma v}-\beta_{\gamma k}+1}{\alpha_{\gamma v}}\Upsilon_{\gamma\gamma v}\right]\B_{\gamma}$} & \makecell[c]{$\T_{\gamma k}^{(2)}=\frac{\beta_{\gamma v}-\beta_{\gamma k}-1}{2}\B_{\gamma}$\vspace{3pt}\\
$\T_{\gamma v}^{(2)}=\frac{\beta_{\gamma k}-\beta_{\gamma v}-1}{2}\B_{\gamma}$} & \makecell[c]{$\T_{\gamma k}^{(3)}=\frac{\beta_{\gamma v}-\beta_{\gamma k}+1}{2}\B_{\gamma}$ \vspace{3pt}\\$\T_{\gamma v}^{(3)}=\frac{\beta_{\gamma k}-\beta_{\gamma v}+1}{2}\B_{\gamma}$} & \makecell[c]{$\T_{\gamma\epsilon k}^{(5)}=\frac{\left[1+\beta_{\gamma k}-\beta_{\gamma v}\right]\left[1+\beta_{\epsilon k}-\beta_{\epsilon g}\right]}{16}\B_{\gamma}\Upsilon_{\gamma\epsilon k}\B_{\epsilon}$\vspace{3pt}\\
$\T_{\gamma\delta v}^{(6)}=\frac{\left[\beta_{\gamma k}-\beta_{\gamma v}-1\right]\left[1+\beta_{\delta v}-\beta_{\delta q}\right]}{16}\B_{\gamma}\Upsilon_{\gamma\delta v}\B_{\delta}$}\\
\midrule
\makecell[l]{CDG} & \makecell[c]{$\frac{1}{2}\B_{\gamma}\left[\frac{\beta_{\gamma k}-\beta_{\gamma v}+1}{\alpha_{\gamma k}}\Upsilon_{\gamma\gamma k}+\frac{\beta_{\gamma v}-\beta_{\gamma k}+1}{\alpha_{\gamma v}}\Upsilon_{\gamma\gamma v}\right]\B_{\gamma}$} & \makecell[c]{$\T_{\gamma k}^{(2)}=\frac{\beta_{\gamma v}-\beta_{\gamma k}-1}{2}\B_{\gamma}$ \vspace{3pt}\\
$\T_{\gamma v}^{(2)}=\frac{\beta_{\gamma k}-\beta_{\gamma v}-1}{2}\B_{\gamma}$} & \makecell[c]{$\T_{\gamma k}^{(3)}=\frac{\beta_{\gamma v}-\beta_{\gamma k}+1}{2}\B_{\gamma}$
\vspace{3pt}\\ $\T_{\gamma v}^{(3)}=\frac{\beta_{\gamma k}-\beta_{\gamma v}+1}{2}\B_{\gamma}$} &\makecell[c]{$\bm{0}$}\\
\midrule
\makecell[l]{BO} & \makecell[c]{$\bm{0}$} & \makecell[c]{$\frac{1}{2}\B_{\gamma}$} & \makecell[c]{$\frac{1}{2}\B_{\gamma}$} & \makecell[c]{$\bm{0}$}\\
\midrule
\makecell[l]{NIPG} & \makecell[c]{$\left[\frac{\left(\lambda_{\max}\right)_{k}\parallel\B_{\gamma}^{\frac{1}{2}}\R_{\gamma k}\H_{k}^{-\frac{1}{2}}\parallel_{2}^{2}}{4\alpha_{\gamma k}}+\frac{\left(\lambda_{\max}\right)_{v}\parallel\B_{\gamma}^{\frac{1}{2}}\R_{\gamma v}\H_{v}^{-\frac{1}{2}}\parallel_{2}^{2}}{4\alpha_{\gamma v}}\right]\B_{\gamma}$} & \makecell[c]{$\frac{1}{2}\B_{\gamma}$} & \makecell[c]{$\frac{1}{2}\B_{\gamma}$} & \makecell[c]{$\bm{0}$}\\
\midrule
\makecell[c]{CNG} & \makecell[c]{$\frac{1}{16}\B_{\gamma}\left[\frac{1}{\alpha_{\gamma k}}\Upsilon_{\gamma\gamma k}+\frac{1}{\alpha_{\gamma v}}\Upsilon_{\gamma\gamma v}\right]\B_{\gamma}$} & \makecell[c]{$\bm{0}$} & \makecell[c]{$\frac{1}{2}\B_{\gamma}$} & \makecell[c]{$\bm{0}$}\\
\bottomrule
\end{tabular}
\begin{tablenotes}
\item[Note:] The Dirichlet boundary SAT coefficient is given by $ \T_{\gamma}^{(D)}={1}/{\alpha_{\gamma k}}\B_{\gamma}\Upsilon_{\gamma\gamma k}\B_{\gamma} $ for all the SATs except for the SIPG and NIPG SATs for which $ \T_{\gamma}^{(D)}= {(\lambda_{\max})_{k}}/{\alpha_{\gamma k}}\parallel\B_{\gamma}^{\frac{1}{2}}\R_{\gamma k}\H_{k}^{-\frac{1}{2}}\parallel_{2}^{2}\B_{\gamma} $, where $ (\lambda_{\max})_{k} $ is the largest eigenvalue of $ \Lambda_{k} $.
\end{tablenotes}
\end{threeparttable}
\end{table*}
\section{Practical issues} \label{sec:Practial issues}
In addition to the properties presented in \cref{sec:Properties of SBP-SAT discretization}, one needs to consider a few practical issues when deciding which type of SAT to use. In this section, we investigate the relation between the SATs when applied with SBP diagonal-E operators and quantify the sparsity of the system matrix arising from different SBP-SAT discretizations.
\subsection{Equivalence of SATs for diagonal-norm $ \R^{0} $ SBP operators} \label{sec:Equivalence of SATs}
Classification of SBP operators based on the dimensions spanned by the extrapolation matrix generalizes the SBP operator families introduced in \cite{fernandez2018simultaneous,chen2017entropy}. The SBP-$ \Omega $ operators \cite{fernandez2018simultaneous}, which fall under the $ \R^{d} $ SBP family, are characterized by having no volume nodes on element facets, \eg, the Legendre-Gauss (LG) operator. In general, however, the $ \R^{d} $ operator family allows volume nodes to be positioned on element facets as long as the $ \R $ matrix spans $ d $ dimensions \cite{marchildon2020optimization}. SBP-$ \Gamma $ \cite{hicken2016multidimensional,fernandez2018simultaneous} operators require $ {p+d-1 \choose d-1} $ volume nodes on each facet that are not collocated with facet quadrature nodes. In contrast, the $ \R^{d-1} $ family allows more volume nodes per facet; hence, it includes operators that cannot be categorized under the SBP-$ \Gamma $ family. SBP operators that have collocated volume and facet nodes on each facet, \eg, Legendre-Gauss-Lobatto (LGL), are classified under the SBP diagonal-E \cite{chen2017entropy} family which is equivalent to the $ \R^{0} $ SBP family. Examples of two-dimensional SBP operators in the $ \R^{0} $, $ \R^{d-1} $, and $ \R^{d} $ families are depicted in \cref{fig:SBP families}.
\begin{figure}[t!]
\centering
\begin{subfigure}{0.3\textwidth}
\centering
\includegraphics[scale=0.25]{diage_p2.pdf}
\caption{\label{fig:R0 family}$ \R^0 $ (SBP-E), $p=2$}
\end{subfigure}\hfill
\begin{subfigure}{0.3\textwidth}
\centering
\includegraphics[scale=0.25]{gamma_p2.pdf}
\caption{\label{fig:Rd1 family}$ \R^{d-1} $ (SBP-$ \Gamma $), $p=2$}
\end{subfigure}\hfill
\begin{subfigure}{0.3\textwidth}
\centering
\includegraphics[scale=0.25]{omega_p2.pdf}
\caption{\label{fig:Rd family}$ \R^{d} $ (SBP-$ \Omega $), $p=2$}
\end{subfigure}
\caption{\label{fig:SBP families}Examples of degree two SBP operators in the $ \R^{0} $, $ \R^{d-1} $, and $ \R^{d} $ families on the reference triangle. The circles indicate locations of volume nodes, and the squares indicate locations of facet quadrature nodes.}
\end{figure}
For diagonal-norm $ \R^{0} $ SBP operators, the matrix $ \Upsilon_{abk} $, defined in \cref{eq:Upsilon definition}, exhibits an interesting property. In particular, it vanishes if $ a \neq b $, where $ a,b\in\{\gamma,\epsilon_{1},\epsilon_{2},\delta_{1},\delta_{2}\} $. Note that for this operator family, the extrapolation matrix simply picks out the volume nodes that are collocated with facet quadrature nodes, \ie,
\begin{equation}
\left[\R_{f_{n}k}\right]_{ij}=\begin{cases}
1 & \text{if}\;i+(n-1)n_f=j,\\
0 & \text{if}\;i+(n-1)n_f\neq j,
\end{cases}
\end{equation}
where $ n \in \{1,2,3\}$ is the facet number, $ i\in\{ 1,\dots,n_{f}\} $, and $ j\in\{ 1,\dots,n_{p}\} $. Therefore, we have
\begin{equation} \label{eq:equivalence due to Rgk}
[{\R}_{\gamma k}{\H}_{k}^{-1}\Lambda_{xx,k}{\R}^T_{\epsilon_{1} k}]_{ij} = \sum_{m=1}^{n_p}\sum_{n=1}^{n_p} [{\R}_{\gamma k}]_{in} [{\H}_{k}^{-1}\Lambda_{xx,k}]_{nm} [{\R}^T_{\epsilon_{1} k}]_{mj} = \sum_{m=1}^{n_p} [{\R}_{\gamma k}]_{im} [{\H}_{k}^{-1}\Lambda_{xx,k}]_{mm} [{\R}_{\epsilon_{1} k}]_{jm} = 0,
\end{equation}
where the penultimate equality is a result of the fact that $ {\H}_{k}^{-1}\Lambda_{xx,k} $ is diagonal and $ [{\R}^T_{\epsilon_{1} k}]_{mj}=[{\R}_{\epsilon_{1} k}]_{jm} $. The last equality holds since $ {\R}_{\gamma k} $ and $ {\R}_{\epsilon_{1} k} $ do not contain $ 1 $ in the same column index, as $ \gamma $ and $ \epsilon_{1} $ have different facet numbers. This implies that for the $ \R^{0} $ operator family with diagonal norm matrix, $ \Upsilon_{\gamma \epsilon_{1} k}
=\N_{\gamma k}^{T}\bar{\R}_{\gamma k}\bar{\H}_{k}^{-1}\Lambda_{k}\bar{\R}_{\epsilon_{1} k}^{T}\N_{\epsilon_{1} k} = \bm{0} $, and more generally
\begin{equation}
\begin{aligned}
\Upsilon_{abk}
=\N_{ak}^{T}\bar{\R}_{a k}\bar{\H}_{k}^{-1}\Lambda_{k}\bar{\R}_{b k}^{T}\N_{bk}=\bm{0}\quad \text{if}\; a\neq b, \; \text{where}\; a,b\in\{\gamma,\epsilon_{1},\epsilon_{2},\delta_{1},\delta_{2}\}.
\end{aligned}
\end{equation}
Hence, the coefficients $ \T^{(5)} $ and $ \T^{(6)} $ for the BR1 SAT in \cref{prop:BR1 SAT} and the LDG SAT in \cref{prop:LDG SAT} vanish. We have therefore proven the following statement.
\begin{theorem}
When implemented with diagonal-norm $ \R^{0} $ SBP operators, the BR1, BR2 and SIPG SATs are equivalent in the sense that they can be reproduced by considering $ \sigma_1 \T_{\gamma k}^{(1)} $ and $ \sigma_D \T_{\gamma}^{(D)} $ in \cref{prop:BR1 SAT} for $ \sigma_1,\sigma_D > 0 $. Similarly, the LDG and CDG SATs are equivalent for this family of operators.
\end{theorem}
The equivalence of the SIPG SAT and BR2 SAT is established in \cite{yan2018interior}. Similarly, it is shown in \cite{manzanero2018bassi} that the BR1 and SIPG methods are equivalent when the discretization is restricted to the LGL nodal points, and the LGL quadrature is used to approximate integrals. In the same paper, this property is exploited to find a sharper estimate of the minimum penalty coefficient for stability of the SIPG method. Gassner \etal \cite{gassner2018br1} reported that most drawbacks of the BR1 method are not observed when the Navier-Stokes equations are solved using a DG discretization with LGL nodal points and quadrature. Since discretizations with LGL nodal points and quadrature satisfy the SBP property \cite{gassner2013skew,fernandez2014generalized,fernandez2014review} and the operator is in the diagonal-norm $ \R^{0} $ family, \violet{for one-dimensional implementations, the flux used} in \cite{gassner2018br1} can also be regarded as the BR2 SAT implemented with $ \T_{\gamma k}^{(1)}= (1/4)\B_{\gamma}[\Upsilon_{\gamma\gamma k}+\Upsilon_{\gamma\gamma v}]\B_{\gamma} $ and $ \T_{\gamma}^{(D)}=\B_{\gamma}\Upsilon_{\gamma \gamma k}\B_{\gamma} $ (or with the stabilization parameter for the BR2 flux set to $ \eta_0=1 $ in \cite{arnold2002unified}). \violet{For tensor-product implementations of the LGL operators in multiple dimensions, it can be shown, using the structure of the extrapolation matrices as in \cref{eq:equivalence due to Rgk}, that the BR1 SAT is not equivalent to the BR2 SAT. Despite this, when coupled with the BR1 SAT, the LGL operators lead to a smaller stencil width compared to operators that do not have nodes at the boundaries.}
Diagonal-norm SBP operators in the $ \R^{0} $ family have also found important application in nonlinear stability analyses. They simplify entropy stability analyses \cite{chen2020review,shadpey2020entropy,crean2018entropy}, and are computationally less expensive than operators in the $ \R^{d} $ family on conforming grids \cite{chan2019efficient,fernandez2019staggered}. However, they exhibit lower solution accuracy and have a larger number of degrees of freedom compared to operators of the same degree in the $ \R^{d} $ SBP family \cite{chen2020review}.
\begin{remark}
When implemented with the diagonal-norm $ \R^{0} $ SBP operators, the BR1 and LDG SATs are stable with the $ \T_{\gamma k}^{(1)} $ coefficients specified for the BR2 and CDG SATs in \cref{tab:SATs}, respectively. If such a modification is applied, then the BR1 and BR2 SATs as well as the LDG and CDG SATs become identical. We have not implemented this modification for the numerical results presented in \cref{sec:Numerical Results}.
\end{remark}
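For illustration, the vanishing of the cross terms in \cref{eq:equivalence due to Rgk} can be checked numerically with a few lines of Python. In the following sketch, the node ordering, the sizes $ n_f $ and $ n_p $, and the random diagonal matrix standing in for $ \H_{k}^{-1}\Lambda_{xx,k} $ are placeholders rather than an actual SBP diagonal-E operator.
\begin{verbatim}
import numpy as np

# Placeholder diagonal-E structure: facet n "owns" the volume nodes
# (n-1)*n_f, ..., n*n_f - 1 (assumed ordering, mirroring the definition of R_{f_n k}).
n_f, n_facets = 3, 3
n_p = n_f * n_facets

def extrapolation(n):
    """0/1 extrapolation matrix picking the volume nodes on facet n (1-based)."""
    R = np.zeros((n_f, n_p))
    for i in range(n_f):
        R[i, i + (n - 1) * n_f] = 1.0
    return R

rng = np.random.default_rng(0)
D = np.diag(rng.uniform(0.5, 2.0, size=n_p))   # stands in for H_k^{-1} Lambda_xx,k

R_gamma, R_epsilon = extrapolation(1), extrapolation(2)
print(np.allclose(R_gamma @ D @ R_epsilon.T, 0.0))   # True:  different facets
print(np.allclose(R_gamma @ D @ R_gamma.T, 0.0))     # False: same facet
\end{verbatim}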
\subsection{Sparsity and storage requirements}
It is desirable to reduce the number of nonzero entries of the matrix resulting from a spatial discretization of \cref{eq:diffusion problem} to minimize storage requirements and take advantage of efficient sparse matrix algorithms for implicit time-marching methods. More generally, fewer nonzero entries lead to fewer floating point operations, and thus lower computational cost. The sparsity of a matrix is equal to one minus the density of the matrix, which is defined as the ratio of the number of nonzero entries to the total number of entries.
The linear system of equations resulting from the SBP-SAT discretization on the RHS of \cref{eq:SBP-SAT discretization} is assembled into a global system matrix. This matrix is equivalent to the product of the inverse of the global mass matrix and the global stiffness matrix in the DG framework. An estimate of the number of nonzero entries of the system matrix depends on the type of SBP operator and SAT used. We first note that it has dense diagonal blocks with $ n_p^2 $ entries associated with each element in the domain. Furthermore, for SBP-$ \Omega $ operators the $ \R $ matrix is dense since it spans $ d $ dimensions. Therefore, the off-diagonal blocks containing terms such as $ \Rgk^T \T_{\gamma k}^{(1)}\Rgv $ are dense, \ie, they contain $ n_p^2 $ nonzero entries. Assuming simplices are used to tessellate the domain, each element has at most $ d+1 $ immediate neighbors. Thus, we can write an upper bound on the number of nonzero entries of the system matrix arising from the use of SBP-$ \Omega $ operators and any of the compact SATs as
\begin{equation}\label{eq:nnz omega compact}
nnz = n_e \left(n_p^2 + (d+1)n_p^2\right)= (d+2)n_p^2 n_e,
\end{equation}
where $ nnz $ denotes the number of nonzero entries. When SBP-$ \Omega $ operators are implemented with the BR1 SAT, each element is coupled with $ d^2 + 2d + 1 $ elements. Therefore, we have
\begin{equation}\label{eq:nnz omega BR1}
nnz = n_e\left(n_p^2 + ( d^2 + 2d + 1) n_p^2\right) = ( d^2 + 2d + 2)n_p^2 n_e.
\end{equation}
For the LDG SAT, the number of elements coupled with a target element depends on the switch function. The choice of $ \bm{\beta} $ in \cref{eq:LDG switch} and \cref{eq:LDG switch with g} ensures that there is no element for which all switches point inwards or outwards simultaneously \cite{sherwin20062d}. Using this fact with the expressions for $ \T_{\gamma \epsilon k}^{(5)} $ and $ \T_{\gamma \delta v}^{(6)} $ in \cref{prop:LDG SAT}, it can be shown that the maximum number of elements coupled with a target element by the LDG SAT is $ d^2 + 1 $. Moreover, for every element coupled with $ d^2 + 1 $ neighbors there are $ d $ neighbors that are coupled with fewer than $ d^2 + 1$ elements when $ d>1 $. Therefore, the number of elements that can have $ d^2 + 1 $ neighbors is limited to $ \lceil{n_e/(d+1)} \rceil$, where $ \lceil \cdot \rceil $ denotes the ceiling operator. Thus, an upper estimate of the number of nonzero entries of the system matrix resulting from the LDG SAT implemented with an SBP-$ \Omega $ operator is given by
\begin{equation}\label{eq:nnz omega LDG}
nnz = \left\lceil\frac{n_e}{d+1}\right\rceil(d^2 + 2)n_p^2 + \left\lfloor\frac{n_e d}{d+1}\right\rfloor(d^2 + 1)n_p^2 = \left[\left\lceil\frac{n_e}{d+1}\right\rceil(d^2+2) + \left\lfloor\frac{n_e d}{d+1}\right\rfloor(d^2 + 1)\right]n_p^2,
\end{equation}
where $ \lfloor\cdot\rfloor $ denotes the floor operator. We used affine mapping (or straight-edged elements) to obtain \cref{eq:nnz omega LDG}; otherwise, the LDG SATs may result in more nonzero entries than the estimate in \cref{eq:nnz omega LDG} since the switch function $ \bm{\beta} $ varies along curved facets.
Since the $ \R $ matrix spans $ d-1 $ dimensions for SBP-$ \Gamma $ operators, it has $ n_f $ nonzero columns. Therefore, for implementations with SBP-$ \Gamma $ operators, blocks containing terms such as $ \Dgk^T \T_{\gamma k}^{(2)}\Rgk $ have $ n_f $ nonzero columns. Similarly, blocks containing terms such as $ \Rgk^T \T_{\gamma k}^{(3)}\Dgk $ have $ n_f $ nonzero rows. Thus, the sum $ \Dgk^T \T_{\gamma k}^{(2)}\Rgk + \Rgk^T \T_{\gamma k}^{(3)}\Dgk$ has $2n_p n_f - n_f^2 $ nonzero entries. Identifying the structure of terms in blocks of the system matrix in a similar manner and using the number of coupled elements, we calculate upper estimates of the number of nonzero entries for different SBP-SAT discretizations of the Poisson problem. The estimates obtained are shown in \cref{tab:nnz}; similar results for DG implementation of the BR2 and CDG fluxes are presented in \cite{peraire2008compact}. In deriving the estimates, we assumed that all elements in the domain are interior; consequently, the number of nonzero entries is overestimated. This assumption, however, implies that the estimates in \cref{tab:nnz} get better with an increasing ratio of the number of interior to number of boundary elements in the domain.
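The upper bounds derived above can be evaluated directly. The following Python sketch covers only the SBP-$ \Omega $ column of \cref{tab:nnz}; the value $ n_p=15 $ assumed for the degree four SBP-$ \Omega $ operator and the element count of the finest grid used in \cref{sec:Numerical Results} are illustrative inputs.
\begin{verbatim}
import math

def nnz_upper_bound_omega(sat, d, n_p, n_e):
    """Upper bounds on nonzero entries for SBP-Omega operators (see the table)."""
    if sat in ("BR2", "SIPG", "BO", "NIPG", "CDG", "CNG"):
        return (d + 2) * n_p**2 * n_e
    if sat == "BR1":
        return (d**2 + 2*d + 2) * n_p**2 * n_e
    if sat == "LDG":
        return (math.ceil(n_e/(d + 1))*(d**2 + 2)
                + math.floor(n_e*d/(d + 1))*(d**2 + 1)) * n_p**2
    raise ValueError(sat)

# Degree p = 4 SBP-Omega operator in d = 2 (n_p = 15 assumed) on a 4352-element grid:
for sat in ("BR1", "BR2", "LDG"):
    print(sat, nnz_upper_bound_omega(sat, d=2, n_p=15, n_e=4352))
\end{verbatim}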
\begin{table*}[!t]
\small
\caption{\label{tab:nnz}Estimates of the number of nonzero entries of system matrices resulting from different SBP-SAT discretizations of \cref{eq:Poisson problem}. For LDG and CDG SATs, straight-edged elements are used. If $ d=1 $, the estimates for the SBP-E family apply for operators that have volume nodes on their facets, and the estimates for LDG SAT should be replaced by those presented for CDG SAT. The operators $ \lceil\cdot\rceil $ and $ \lfloor\cdot\rfloor $ denote the ceiling and floor functions, respectively.}
\centering
\setlength{\tabcolsep}{1em}
\renewcommand\cellgape{\Gape[\jot]}
\begin{tabular}{l|l|l|l}
\toprule
\makecell[l]{SAT} & \makecell[c]{SBP-$\Omega$} & \makecell[c]{SBP-$\Gamma$} & \makecell[c]{SBP-E} \\
\midrule
\makecell[l]{BR1} & \makecell[l]{$(d^{2}+2d+2)n_{p}^{2}n_{e}$} & \makecell[l]{$\left[n_{p}^{2}+(d+1)(2n_{p}n_{f} - n_{f}^2)+(d^2+d)n_{f}^{2}\right]n_{e}$} & \makecell[l]{$\left[n_{p}^{2}+(d+1)(2n_{p}n_{f}-n_{f}^{2})\right]n_{e}$}\\
\midrule
\makecell[l]{BR2, SIPG, \\ BO, NIPG}
& \makecell[l]{$\left(d+2\right)n_{p}^{2}n_{e}$} & \makecell[l]{$\left[n_{p}^{2}+(d+1)(2n_{p}n_{f}-n_{f}^{2})\right]n_{e}$} & \makecell[l]{$\left[n_{p}^{2}+(d+1)(2n_{p}n_{f}-n_{f}^{2})\right]n_{e}$}\\
\midrule
\makecell[l]{CDG, CNG} & \makecell[l]{$(d+2)n_{p}^{2}n_{e}$} & \makecell[l]{$\left[n_{p}^{2}+(d+1)n_{p}n_{f}\right]n_{e}$} & \makecell[l]{$\left[n_{p}^{2}+(d+1)n_{p}n_{f}\right]n_{e}$}\\
\midrule
\makecell[l]{LDG} & \makecell[l]{$\left\lceil\frac{n_e}{d+1}\right\rceil(d^2+2)n_p^2$ \vspace{2pt}\\$+ \left\lfloor\frac{n_e d}{d+1}\right\rfloor(d^2 + 1)n_p^2$} &
\makecell[l]{$\left[n_{p}^{2}+(d+1)n_{p}n_{f}\right] n_e$ \vspace{2pt}\\$+\left[(d^2+1)-\left\lceil\frac{n_e}{d+1}\right\rceil(d+1) - \left\lfloor\frac{n_e d}{d+1}\right\rfloor(d+2)\right]n_f^2$} &
\makecell[l]{$\left[n_{p}^{2}+(d+1)n_{p}n_{f}\right]n_{e}$}\\
\bottomrule
\end{tabular}
\end{table*}
From \cref{tab:nnz} it can be deduced that the BR1 SAT yields the largest number of nonzero entries for a given type of SBP operator. In contrast, the CDG and CNG SATs give the smallest number of nonzero entries. While it is fairly easy to rank the SATs based on the number of nonzero entries they produce for a given type of operator, such a comparison involving different types of SBP operator is not straightforward due to the varying number of volume nodes, $ n_p $.
\section{Numerical results} \label{sec:Numerical Results}
To verify the theoretical analyses presented in the previous sections, we consider the two-dimensional Poisson problem
\begin{equation}\label{eq:Poisson problem}
\begin{aligned}
-\nabla\cdot\left(\lambda\nabla\fnc{U}\right)&=\fnc{F}&&\text{in}\;\Omega=\left[0,20\right]\times\left[-5,5\right],\\\bm{n}\cdot\left(\lambda\nabla\fnc{U}\right)&=\fnc{U}_{N}&&\text{on}\;\Gamma^{N}=\left\{ \left(x,y\right):y\in\left[-5,5\right],x=20\right\} ,\\\fnc{U}&=\fnc{U}_{D}&&\text{on}\;\Gamma^{D}=\Gamma\backslash\Gamma^{N},
\end{aligned}
\end{equation}
where $ \lambda= \bigl[ \begin{smallmatrix} 4x+1 & y \\
y & y^{2} + 1 \end{smallmatrix} \bigr]$, and the source term and boundary conditions are determined via the method of manufactured solution, \ie, we choose the exact solution to be
\begin{equation}\label{eq:Exact solution}
\fnc{U} = \sin(\frac{\pi}{8} x) \sin(\frac{\pi}{8} y),
\end{equation}
and evaluate $ \fnc{F} $, $ \fnc{U}_N $, $ \fnc{U}_D $ from \cref{eq:Poisson problem}. Similarly, we specify the exact adjoint solution as
\begin{equation}\label{eq:Exact adjoint}
\psi = x+y
\end{equation}
and evaluate the source term and boundary conditions associated with the adjoint problem from
\begin{equation} \label{eq:Poisson adjoint problem}
\begin{aligned}
-\nabla\cdot\left(\lambda\nabla\fnc{\psi}\right)&=\fnc{G}&&\text{in}\;\Omega=\left[0,20\right]\times\left[-5,5\right],\\\bm{n}\cdot\left(\lambda\nabla\fnc{\psi}\right)&=\fnc{\psi}_{N}&&\text{on}\;\Gamma^{N}=\left\{ \left(x,y\right):y\in\left[-5,5\right],x=20\right\} ,\\\fnc{\psi}&=\fnc{\psi}_{D}&&\text{on}\;\Gamma^{D}=\Gamma\backslash\Gamma^{N}.
\end{aligned}
\end{equation}
Finally, a linear functional of the form\footnote{Note that $ \psi_{D} $ and $ \psi_{N} $ are evaluated from $ \psi $, but usually there is no need to know the adjoint solution. Thus, the functional simply contains $ \fnc{U} $ as an unknown, and the values of $ \psi_{D} $ and $ \psi_{N} $ are given as coefficients or functions in the expression for the functional.} \cref{eq:Functional}, \ie,
\begin{equation*}
\fnc{I} (\fnc{U})= \int_{\Omega}\fnc{G}\fnc{U}\dd{\Omega}
- \int_{\Gamma^D}\psi_{D} (\lambda\nabla\fnc{U})\cdot\bm{n}\dd{\Gamma}
+ \int_{\Gamma^N}\psi_{N}\fnc{U}\dd{\Gamma},
\end{equation*}
is considered. Since we know the primal solution, the adjoint, the boundary conditions, and the source terms, the linear functional can be evaluated exactly, and its value, accurate to fifteen significant figures, is \violet{$ \fnc{I}(\fnc{U}) = -27.0912595377575 $}.
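Since the data in \cref{eq:Poisson problem} follow mechanically from the chosen exact solution, the manufactured-solution step can be scripted symbolically. The following SymPy sketch is purely illustrative and independent of the discretization code; it produces the source term and the Neumann data on $ x=20 $ (where the outward normal is $ \bm{n}=[1,0]^T $), while the Dirichlet data is the trace of the exact solution itself.
\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y')
lam = sp.Matrix([[4*x + 1, y], [y, y**2 + 1]])     # diffusivity tensor
U = sp.sin(sp.pi/8*x) * sp.sin(sp.pi/8*y)          # chosen exact solution

grad_U = sp.Matrix([sp.diff(U, x), sp.diff(U, y)])
flux = lam * grad_U
F = sp.simplify(-(sp.diff(flux[0], x) + sp.diff(flux[1], y)))  # -div(lam grad U)
U_N = sp.simplify(flux[0].subs(x, 20))             # n.(lam grad U) on x = 20

print(F)
print(U_N)
\end{verbatim}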
The physical domain is tessellated with triangular elements, and the $ \alpha $-optimized Lagrange interpolation nodes on the reference element are mapped through an affine mapping to the physical elements. Then the triangular elements are curved by perturbing the coordinates of the $ \alpha$-optimized Lagrange interpolation nodes, $ \bm{x}_{L,k} $ and $ \bm{y}_{L,k} $, using the functions \cite{chan2019efficient}
\begin{equation}\label{eq:Mesh purturbation}
\begin{aligned}
\tilde{\bm{x}}_{k}&=\bm{x}_{L,k}+\frac{5}{4}\cos\left(\frac{\pi}{20}\bm{x}_{L,k}-\frac{\pi}{2}\right)\cos\left(\frac{3\pi}{10}\bm{y}_{L,k}\right),& \quad \tilde{\bm{y}}_{k}&=\bm{y}_{L,k}+\frac{5}{8}\sin\left(\frac{\pi}{5}\tilde{\bm{x}}_{k}-2\pi\right)\cos\left(\frac{\pi}{10}\bm{y}_{L,k}\right).
\end{aligned}
\end{equation}
The mesh Jacobian remains positive for each element under the curvilinear transformation. Examples of curvilinear grids with degree two SBP-$ \Gamma $ and SBP-$ \Omega $ operators are shown in \cref{fig:mesh SBP families}. A mapping degree of two is used for all numerical results presented. \blue{In all cases, the numerical solutions are obtained by solving the discrete equations using a direct method; specifically, the ``spsolve'' function from the SciPy sparse linear algebra library in Python is used.}
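For reference, the perturbation \cref{eq:Mesh purturbation} amounts to the following operation on arrays of Lagrange node coordinates; the names \texttt{A} and \texttt{b} in the commented solver call are placeholders for the assembled global system and are not part of the mesh routine.
\begin{verbatim}
import numpy as np

def perturb_nodes(x_L, y_L):
    """Curve the straight-sided elements by perturbing the Lagrange node
    coordinates (x_L, y_L are arrays of nodal coordinates)."""
    x_t = x_L + 1.25*np.cos(np.pi/20*x_L - np.pi/2)*np.cos(3*np.pi/10*y_L)
    y_t = y_L + 0.625*np.sin(np.pi/5*x_t - 2*np.pi)*np.cos(np.pi/10*y_L)
    return x_t, y_t

# Direct solve of the assembled global system (hypothetical A, b):
# from scipy.sparse.linalg import spsolve
# u_h = spsolve(A.tocsr(), b)
\end{verbatim}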
\begin{figure}[!t]
\centering
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[scale=0.52]{nelem_68_pmap_2_gamma.pdf}
\caption{\label{fig:mesh Rd1 family} SBP-$ \Gamma $ operator, $p=2$, $p_{\rm map}=2$}
\end{subfigure}\hfill
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[scale=0.52]{nelem_68_pmap_2_omega.pdf}
\caption{\label{fig:mesh Rd family} SBP-$ \Omega $ operator, $p=2$, $p_{\rm map}=2$}
\end{subfigure}
\caption{\label{fig:mesh SBP families} Physical domain tessellated with 68 curved triangular elements. The circles and squares indicate, respectively, the locations of volume nodes and facet quadrature points obtained with a degree two curvilinear mapping, and the lines define the facets of each element due to the perturbation given by \cref{eq:Mesh purturbation}.}
\end{figure}
\subsection{Accuracy} \label{sec:Accuracy}
The errors in the primal and adjoint solutions are computed, respectively, as
\begin{equation} \label{eq:Error}
\begin{aligned}
\norm{\bm{u}_h - \bm{u}}_{\H} =\sqrt{ \sum_{\Omega_k \in \fnc{T}_h}\left(\bm{u}_{h,k} - \bm{u}_k\right)^{T}\H_k \left(\bm{u}_{h,k} - \bm{u}_k\right)},\quad &&
\norm{\bm{\psi}_h - \bm{\psi}}_{\H} = \sqrt{\sum_{\Omega_k \in \fnc{T}_h}\left(\bm{\psi}_{h,k} - \bm{\psi}_k\right)^{T}\H_k \left(\bm{\psi}_{h,k} - \bm{\psi}_k\right)},
\end{aligned}
\end{equation}
and the functional error is calculated as $ \abs{I_h(\bm{u}_h) - \fnc{I}(\fnc{U})} $. To study the accuracy and convergence properties of the primal solution, adjoint solution, and functional under mesh refinement, we consider four successively refined grids with 68, 272, 1088, and 4352 elements. The nominal element size is calculated as $ h \equiv 20/\sqrt{n_e} $.
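The norms in \cref{eq:Error} and the reported convergence rates amount to the following computation; in this sketch the per-element solution vectors and norm matrices are assumed to be available in lists, and the rate is a least-squares fit of the logarithm of the error against the logarithm of $ h $ over the last three grids.
\begin{verbatim}
import numpy as np

def h_norm_error(u_h, u_exact, H):
    """Discrete H-norm of the error; u_h, u_exact, H are lists over elements."""
    total = sum((uh - ue) @ Hk @ (uh - ue)
                for uh, ue, Hk in zip(u_h, u_exact, H))
    return np.sqrt(total)

def convergence_rate(h, err, n_last=3):
    """Slope of log(err) versus log(h) over the last n_last grids."""
    return np.polyfit(np.log(h[-n_last:]), np.log(err[-n_last:]), 1)[0]

h = 20.0/np.sqrt(np.array([68, 272, 1088, 4352]))   # nominal element sizes
# err = np.array([...])                             # measured errors go here
# print(convergence_rate(h, err))
\end{verbatim}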
Figure \ref{fig:Solution error} shows the solution errors and convergence rates under mesh refinement for six types of SAT implemented with three types of SBP operators. Schemes with the BR1, BR2, LDG, and CDG SATs display solution convergence rates of $ p+1 $ and achieve very similar solution error values. In contrast, schemes with the BO and CNG SATs exhibit an even-odd convergence phenomenon; schemes with odd degree SBP operators converge at rates of $ p+1 $ while those with even degree operators converge at reduced rates of $ p $. The even-odd convergence property of the BO method is well-known, \eg, see \cite{shu2001different,kirby2005selecting,carpenter2010revisiting}. Furthermore, schemes with the BO SAT exhibit the largest solution error values in almost all cases considered (except for the case with degree three SBP diagonal-E operator).
Numerical experiments in the literature with odd degree, one-dimensional operators show that the BR1 flux results in suboptimal solution convergence rate of $ p $ \cite{kirby2005selecting,shu2001different,bassi2005discontinuous,hesthaven2007nodal}. However, as can be seen from \cref{fig:Solution error}, this characteristic is not observed when the BR1 SAT is implemented with SBP operators on unstructured triangular meshes. \violet{For the BR1 and LDG SATs, if $ \T_{\gamma k}^{(1)} $ and $ \T_{\gamma}^{(D)} $ are not modified and the extended boundary SATs are not included, then discretizations with the SBP-$ \Omega $ and SBP-$ \Gamma $ operators produce system matrices that have eigenvalues with positive real parts. For the unmodified\footnote{\violet{The unmodified BR1 and LDG SATs are denoted by BR1* and LDG*, respectively, in all figures and tables. If used without a qualifier, the names BR1 and LDG refer to the modified versions of the BR1 and LDG SATs.}} BR1 and LDG SATs, which include extended boundary SATs, positive eigenvalues are not produced with all types of the SBP operators. Despite being stable, however, functional superconvergence is not observed for the unmodified BR1 and LDG SATs except when used with the SBP diagonal-E operators. As noted in \cref{sec:Equivalence of SATs}, when used with SBP diagonal-E operators, the BR1 and LDG SATs (both modified and unmodified) have compact stencil width, and they are adjoint consistent for problems with non-homogeneous Dirichlet boundary conditions. When the unmodified LDG SAT is implemented with $ \mu =0$, suboptimal solution convergence rates are observed for some of the cases; hence, we implemented the unmodified LDG SAT with $ \T_{\gamma k}^{(D)} =\frac{3}{2}\B_{\gamma}\Upsilon_{\gamma\gamma k}\B_{\gamma} $, which corresponds to a nonzero value of $ \mu $ at Dirichlet boundary facets. It can be seen from \cref{fig:Solution error} that the unmodified BR1 and LDG SATs lead to solution convergence rates of $ p+1 $.}
\begin{figure}[!t]
\centering
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[scale=0.2]{errs_soln_VarOper_omega_p3.pdf}
\caption{\label{fig:Omega soln p=3} SBP-$ \Omega $ operator, $p=3$}
\end{subfigure}\hfill
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[scale=0.2]{errs_soln_VarOper_gamma_p3.pdf}
\caption{\label{fig:Gamma soln p=3} SBP-$ \Gamma $ operator, $p=3$}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[scale=0.2]{errs_soln_VarOper_diage_p3.pdf}
\caption{\label{fig:diage soln p=3} SBP-E operator, $p=3$}
\end{subfigure}
\\
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[scale=0.2]{errs_soln_VarOper_omega_p4.pdf}
\caption{\label{fig:Omega soln p=4} SBP-$ \Omega $ operator, $p=4$}
\end{subfigure}\hfill
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[scale=0.2]{errs_soln_VarOper_gamma_p4.pdf}
\caption{\label{fig:Gamma soln p=4} SBP-$ \Gamma $ operator, $p=4$}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[scale=0.2]{errs_soln_VarOper_diage_p4.pdf}
\caption{\label{fig:diage soln p=4} SBP-E operator, $p=4$}
\end{subfigure}
\caption{\label{fig:Solution error} Solution error under grid refinement. Solution convergence rates (shown in parentheses) are calculated by fitting a line through the last three error values on the refined meshes. \violet{The BR1* and LDG* SATs represent the unmodified BR1 and LDG SATs, which are implemented with the SBP diagonal-E operators only}.}
\end{figure}
Figure \ref{fig:Soln error all} shows the errors produced by the three types of SBP operator when implemented with the BR1 and BO SATs. In general, the solution error is not very sensitive to the type of SBP operator used except in a few cases, \eg, the cases where the degree three SBP diagonal-E operator is implemented with SATs other than the BO SAT. Except for the BO SAT, all of the other SATs show solution error convergence behavior very similar to that of the BR1 SAT.
\begin{figure}[!t]
\centering
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[scale=0.25]{errs_soln_VarSAT_all_BR1_modified.pdf}
\caption{\label{fig:Soln error all BR1 SAT} Solution error with BR1 SAT}
\end{subfigure}\hfill
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[scale=0.25]{errs_soln_VarSAT_all_BO_modified.pdf}
\caption{\label{fig:Soln error all BO SAT} Solution error with BO SAT}
\end{subfigure}
\caption{\label{fig:Soln error all} Variation of solution error under grid refinement with respect to three types of SBP operators. Slopes corresponding to $ p+1 $ convergence rates are shown by short, thin lines.}
\end{figure}
The errors and convergence rates of the adjoint solution under mesh refinement are presented in \cref{fig:Adjoint error}. All of the adjoint consistent SATs lead to schemes that converge to the exact adjoint at a rate of $p+1$ \violet{or larger}. In contrast, schemes with the BO and CNG SATs have error values of $ \fnc{O}(1) $. Similar properties as with the primal solution are observed regarding the sensitivity of the adjoint error values to the type of SBP operator used.
\begin{figure}[!t]
\centering
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[scale=0.2]{errs_adj_VarOper_omega_p3.pdf}
\caption{\label{fig:Omega adj p=3}SBP-$ \Omega $, $p=3$}
\end{subfigure}\hfill
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[scale=0.2]{errs_adj_VarOper_gamma_p3.pdf}
\caption{\label{fig:Gamma adj p=3} SBP-$ \Gamma $, $p=3$}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[scale=0.2]{errs_adj_VarOper_diage_p3.pdf}
\caption{\label{fig:diage adj p=3} SBP-E, $p=3$}
\end{subfigure}
\\
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[scale=0.2]{errs_adj_VarOper_omega_p4.pdf}
\caption{\label{fig:Omega adj p=4} SBP-$ \Omega $, $p=4$}
\end{subfigure}\hfill
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[scale=0.2]{errs_adj_VarOper_gamma_p4.pdf}
\caption{\label{fig:Gamma adj p=4} SBP-$ \Gamma $, $p=4$}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[scale=0.2]{errs_adj_VarOper_diage_p4.pdf}
\caption{\label{fig:diage adj p=4} SBP-E, $p=4$}
\end{subfigure}
\caption{\label{fig:Adjoint error} Adjoint error under grid refinement. Adjoint convergence rates (shown in parentheses) are calculated by fitting a line through the last three error values on the refined meshes except for the adjoint consistent SATs with the SBP-$ \Omega $ operator of degree $ p=4 $, for which the first three grids are used. \violet{The BR1* and LDG* SATs represent the unmodified BR1 and LDG SATs}.}
\end{figure}
Functional errors and convergence rates are displayed in \cref{fig:Functional error}. As expected, functional superconvergence rates of $ 2p $ are observed for schemes with primal and adjoint consistent SATs. The adjoint inconsistent SATs, BO and CNG, do not display functional superconvergence rates of $ 2p $. While the adjoint consistent schemes achieve comparable functional error values, the CNG SAT outperforms the BO SAT in this regard in most cases.
\begin{figure}[!t]
\centering
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[scale=0.2]{errs_func_VarOper_omega_p3.pdf}
\caption{\label{fig:Omega func p=3} SBP-$ \Omega $, $p=3$}
\end{subfigure}\hfill
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[scale=0.2]{errs_func_VarOper_gamma_p3.pdf}
\caption{\label{fig:Gamma func p=3} SBP-$ \Gamma $, $p=3$}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[scale=0.2]{errs_func_VarOper_diage_p3.pdf}
\caption{\label{fig:diage func p=3} SBP-E, $p=3$}
\end{subfigure}
\\
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[scale=0.2]{errs_func_VarOper_omega_p4.pdf}
\caption{\label{fig:Omega func p=4} SBP-$ \Omega $, $p=4$}
\end{subfigure}\hfill
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[scale=0.2]{errs_func_VarOper_gamma_p4.pdf}
\caption{\label{fig:Gamma func p=4} SBP-$ \Gamma $, $p=4$}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[scale=0.2]{errs_func_VarOper_diage_p4.pdf}
\caption{\label{fig:diage func p=4} SBP-E, $p=4$}
\end{subfigure}
\caption{\label{fig:Functional error} Functional error under grid refinement. Functional convergence rates (shown in parentheses) are calculated by fitting a line through the last three error values on the refined meshes except for the adjoint consistent SATs with the SBP-$ \Omega $ and SBP-E operators of degree $ p=4 $, for which the first three grids are used. \violet{The BR1* and LDG* SATs represent the unmodified BR1 and LDG SATs}.}
\end{figure}
\subsection{Eigenspectra} \label{sec:Eigenspectra}
The maximum time step that can be used with explicit time-marching schemes depends on the spectral radius of the system matrix. Figure \ref{fig:Eigenspectra} shows the eigenspectra of the system matrices arising from the SBP-SAT discretizations of \cref{eq:Poisson problem}. While the BO and CNG SATs produce eigenvalues with imaginary parts, all of the adjoint consistent SATs have eigenvalues on the negative real axis. The BO SAT leads to the smallest spectral radius, $ \rho $, except when used with SBP diagonal-E operators. SBP diagonal-E operators achieve their smallest spectral radii when used with the \violet{unmodified BR1 SAT}. The \violet{modified} LDG SAT produces the largest spectral radius regardless of the type of SBP operator it is used with. In fact, the spectral radius obtained with the LDG SAT is about four times larger than the spectral radius obtained with the BR2 SAT. In comparison, the BR1 and CDG SATs yield spectral radii about twice as large as that of the BR2 SAT. The spectral radii of the BR1, LDG and CDG SATs can be reduced by approximately a factor of $ {1}/{\sigma_{1}} $ if $ \T_{\gamma k}^{(1)} $ is multiplied by $ 0<\sigma_{1} < 1 $, but this would compromise the stability of the discretizations. \violet{The unmodified BR1 and LDG SATs have smaller $ \T_{\gamma k}^{(1)} $ coefficients compared to the rest of the adjoint consistent SATs, and they produce smaller spectral radii, as can be seen from \cref{fig:diage symmetric spectra p=3,fig:diage symmetric spectra p=4}.}
The variation of the spectral radius with respect to the SBP operators can also be inferred from \cref{fig:Eigenspectra}. The SBP-$ \Omega $ and SBP-$ \Gamma $ operators show comparable spectral radii in all cases. In contrast, the SBP diagonal-E operator produces larger spectral radii than the SBP-$ \Omega $ and SBP-$ \Gamma $ operators. It also exhibits the largest ratio of the magnitudes of the largest to the smallest eigenvalues for the $ p=3 $ case. It can be seen from \cref{fig:Eigenspectra} that the smallest-magnitude eigenvalue for the case with the $ p=3 $ SBP diagonal-E operator is approximately two orders of magnitude smaller than those produced with the SBP-$ \Omega $ and SBP-$ \Gamma $ operators. This is also reflected in the condition number of the system matrix presented in \cref{tab:Condition number}.
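Because the matrices considered in this subsection are assembled on a coarse mesh ($ n_e=14 $), their spectra can be computed densely; a Python sketch is given below, where \texttt{A} is a placeholder for the assembled system matrix.
\begin{verbatim}
import numpy as np
import scipy.sparse as sps

def spectrum(A):
    """Eigenvalues and spectral radius of a (small) assembled system matrix."""
    dense = A.toarray() if sps.issparse(A) else np.asarray(A)
    eigs = np.linalg.eigvals(dense)
    return eigs, np.max(np.abs(eigs))

# eigs, rho = spectrum(A)        # A: hypothetical assembled matrix, n_e = 14
# print(rho, eigs.real.max())    # spectral radius and largest real part
\end{verbatim}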
\begin{figure}[!t]
\begin{subfigure}{0.33\textwidth}
\includegraphics[scale=0.16, right]{spectrum_symmetric_omega_p3.pdf}
\caption{\label{fig:Omega symmetric spectra p=3} SBP-$ \Omega $, $p=3$}
\end{subfigure}\hfill
\begin{subfigure}{0.33\textwidth}
\includegraphics[scale=0.16, right]{spectrum_symmetric_gamma_p3.pdf}
\caption{\label{fig:Gamma symmetric spectra p=3} SBP-$ \Gamma $, $p=3$}
\end{subfigure} \hfill
\begin{subfigure}{0.33\textwidth}
\includegraphics[scale=0.16,right]{spectrum_symmetric_diage_p3.pdf}
\caption{\label{fig:diage symmetric spectra p=3} SBP-E, $p=3$}
\end{subfigure}
\\
\begin{subfigure}{0.33\textwidth}
\includegraphics[scale=0.16,right]{spectrum_asymmetric_omega_p3.pdf}
\caption{\label{fig:Omega asymmetric spectra p=3} SBP-$ \Omega $, $p=3$}
\end{subfigure}\hfill
\begin{subfigure}{0.33\textwidth}
\includegraphics[scale=0.16,right]{spectrum_asymmetric_gamma_p3.pdf}
\caption{\label{fig:Gamma asymmetric spectra p=3} SBP-$ \Gamma $, $p=3$}
\end{subfigure} \hfill
\begin{subfigure}{0.33\textwidth}
\includegraphics[scale=0.16,right]{spectrum_asymmetric_diage_p3.pdf}
\caption{\label{fig:diage asymmetric spectra p=3} SBP-E, $p=3$}
\end{subfigure}
\\
\begin{subfigure}{0.33\textwidth}
\includegraphics[scale=0.16,right]{spectrum_symmetric_omega_p4.pdf}
\caption{\label{fig:Omega symmetric spectra p=4} SBP-$ \Omega $, $p=4$}
\end{subfigure}\hfill
\begin{subfigure}{0.33\textwidth}
\includegraphics[scale=0.16,right]{spectrum_symmetric_gamma_p4.pdf}
\caption{\label{fig:Gamma symmetric spectra p=4} SBP-$ \Gamma $, $p=4$}
\end{subfigure} \hfill
\begin{subfigure}{0.33\textwidth}
\includegraphics[scale=0.16,right]{spectrum_symmetric_diage_p4.pdf}
\caption{\label{fig:diage symmetric spectra p=4} SBP-E, $p=4$}
\end{subfigure}
\\
\begin{subfigure}{0.33\textwidth}
\includegraphics[scale=0.16,right]{spectrum_asymmetric_omega_p4.pdf}
\caption{\label{fig:Omega asymmetric spectra p=4} SBP-$ \Omega $, $p=4$}
\end{subfigure}\hfill
\begin{subfigure}{0.33\textwidth}
\includegraphics[scale=0.16,right]{spectrum_asymmetric_gamma_p4.pdf}
\caption{\label{fig:Gamma asymmetric spectra p=4} SBP-$ \Gamma $, $p=4$}
\end{subfigure} \hfill
\begin{subfigure}{0.33\textwidth}
\includegraphics[scale=0.16,right]{spectrum_asymmetric_diage_p4.pdf}
\caption{\label{fig:diage asymmetric spectra p=4} SBP-E, $p=4$}
\end{subfigure}
\caption{\label{fig:Eigenspectra} Eigenspectra of the system matrix resulting from SBP-SAT discretization of \cref{eq:Poisson problem} with $ n_e = 14 $ elements. \violet{The BR1* and LDG* SATs represent the unmodified BR1 and LDG SATs}.}
\end{figure}
\subsection{Conditioning}
The condition number of a system matrix affects the solution accuracy and the convergence rate of iterative solvers for implicit methods. \Cref{tab:Condition number} shows the condition numbers of the system matrices resulting from various SBP-SAT discretizations of \cref{eq:Poisson problem}. The LDG SAT produces the largest condition numbers, and the BR2 SAT yields the smallest condition numbers among the adjoint consistent SATs. It can also be inferred from \cref{tab:Condition number} that, compared to the LDG SAT, the BO and CNG SATs yield approximately an order of magnitude smaller condition numbers. They also give significantly smaller condition numbers compared to the BR1, BR2, and CDG SATs. \violet{In contrast, the unmodified LDG and BR1 SATs yield smaller condition numbers than the rest of the adjoint consistent SATs when used with all but the degree three SBP diagonal-E operators.} A comparison of the condition numbers in \cref{tab:Condition number} by the type of SBP operator reveals that the SBP diagonal-E operators lead to larger condition numbers than the SBP-$ \Gamma $ and SBP-$ \Omega $ operators. As noted in \cref{sec:Accuracy}, the solution and adjoint errors are considerably larger for the case with the $ p=3 $ SBP diagonal-E operator compared to the solutions with the same degree SBP-$ \Gamma $ and SBP-$ \Omega $ operators, which yield system matrices with significantly smaller condition numbers, as can be seen from \cref{tab:Condition number}.
The growth of the condition number with mesh refinement for degree four SBP operators is depicted in \cref{fig:Condition number scaling}. The figure shows that the scaling factors between the condition numbers resulting from the use of the different types of SAT remain roughly the same under mesh refinement. This holds for the lower degree SBP operators as well. \blue{Similarly, \cref{fig:Condition number scaling with p} shows that, for the SATs considered, the condition number scales at approximately the same rate as the degree of the operators increases. For SBP-$ \Omega $ and SBP-$ \Gamma $ operators, the increase in condition number with the polynomial degree of the operators is roughly linear; a similar observation was made for DG operators in \cite{kirby2005selecting}. From \cref{fig:cond num diage}, we see that the condition number for the degree three SBP diagonal-E operator is larger than that of the degree four operator, and this trend is observed in a more pronounced manner for smaller mesh sizes.}
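The condition numbers and their growth under refinement are obtained in the same way; in the sketch below, \texttt{A}, the element sizes, and the measured condition numbers are again placeholders.
\begin{verbatim}
import numpy as np
import scipy.sparse as sps

def condition_number(A):
    """2-norm condition number of a (small) assembled system matrix."""
    dense = A.toarray() if sps.issparse(A) else np.asarray(A)
    return np.linalg.cond(dense, 2)

def growth_rate(h, kappa):
    """Empirical slope of log(kappa) versus log(1/h) under mesh refinement."""
    return np.polyfit(np.log(1.0/np.asarray(h)), np.log(np.asarray(kappa)), 1)[0]

# h     = 20.0/np.sqrt(np.array([68, 272, 1088, 4352]))
# kappa = [condition_number(A) for A in system_matrices]   # hypothetical list
# print(growth_rate(h, kappa))
\end{verbatim}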
\begin{table*} [!t]
\small
\caption{\label{tab:Condition number} Condition number of the system matrix arising from discretization of \cref{eq:Poisson problem} using $ n_e=14 $ elements. \violet{The BR1* and LDG* SATs represent the unmodified BR1 and LDG SATs}.}
\centering
\renewcommand*{\arraystretch}{1.1}
\begin{tabular}{ccccccccccc}
\toprule
$p$& Operator & BR1 & BR1* & BR2 & LDG & LDG* & CDG & BO & CNG \\
\midrule
& SBP-$\Omega$ & 5.09e+02 & -- & 2.55e+02 & 1.01e+03 & -- & 4.96e+02 & 1.32e+02 & 7.60e+01 \\
1 & SBP-$\Gamma$ & 5.05e+02 & -- & 3.02e+02 & 1.29e+03 & -- & 6.88e+02 & 1.01e+02 & 1.14e+02 \\
& SBP-E & 9.13e+02 & 3.65e+02 & 4.12e+02 & 2.01e+03 & 6.88e+02 & 9.62e+02 & 2.36e+02 & 1.57e+02 \\
\midrule
& SBP-$\Omega$ & 3.88e+03 & -- & 1.90e+03 & 8.59e+03 & -- & 4.30e+03 & 3.91e+02 & 5.33e+02 \\
2 & SBP-$\Gamma$ & 6.30e+03 & -- & 3.00e+03 & 1.70e+04 & -- & 8.36e+03 & 5.83e+02 & 8.28e+02 \\
& SBP-E & 1.06e+04 & 2.09e+03 & 4.82e+03 & 2.49e+04 & 3.84e+03 & 1.16e+04 & 2.86e+03 & 2.08e+03 \\
\midrule
& SBP-$\Omega$ & 1.85e+04 & -- & 8.86e+03 & 4.30e+04 & -- & 2.11e+04 & 1.98e+03 & 2.42e+03 \\
3 & SBP-$\Gamma$ & 2.66e+04 & -- & 1.26e+04 & 7.22e+04 & -- & 3.54e+04 & 2.76e+03 & 3.58e+03 \\
& SBP-E & 2.65e+06 & 3.63e+06 & 1.22e+06 & 6.69e+06 & 4.76e+06 & 3.15e+06 & 6.60e+05 & 5.26e+05 \\
\midrule
& SBP-$\Omega$ & 5.81e+04 & -- & 2.73e+04 & 1.46e+05 & -- & 7.05e+04 & 6.04e+03 & 7.26e+03 \\
4 & SBP-$\Gamma$ & 8.54e+04 & -- & 3.98e+04 & 2.27e+05 & -- & 1.10e+05 & 8.43e+03 & 1.09e+04 \\
& SBP-E & 1.49e+05 & 2.23e+04 & 6.71e+04 & 3.86e+05 & 5.30e+04 & 1.77e+05 & 4.29e+04 & 3.10e+04 \\
\bottomrule
\end{tabular}
\end{table*}
\begin{figure}[!t]
\centering
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[scale=0.2,right]{cond_omega_p4.pdf}
\caption{\label{fig:cond num omega p=4} SBP-$ \Omega $, $p=4$}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[scale=0.2,right]{cond_gamma_p4.pdf}
\caption{\label{fig:cond num gamma p=4} SBP-$ \Gamma $, $p=4$}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[scale=0.2,right]{cond_diage_p4.pdf}
\caption{\label{fig:cond num diage p=4} SBP-E, $p=4$}
\end{subfigure}
\caption{\label{fig:Condition number scaling} Growth of condition number with respect to mesh refinement. \violet{The BR1* and LDG* SATs represent the unmodified BR1 and LDG SATs}.}
\end{figure}
\begin{figure}[!t]
\centering
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[scale=0.2,right]{cond_omega_refine4.pdf}
\caption{\label{fig:cond num omega} SBP-$ \Omega $, $n_e=896$}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[scale=0.2,right]{cond_gamma_refine4.pdf}
\caption{\label{fig:cond num gamma} SBP-$ \Gamma $, $n_e=896$}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[scale=0.2,right]{cond_diage_refine4.pdf}
\caption{\label{fig:cond num diage} SBP-E, $n_e=896$}
\end{subfigure}
\caption{\label{fig:Condition number scaling with p} Growth of condition number with degree of SBP operators. \violet{The BR1* and LDG* SATs represent the unmodified BR1 and LDG SATs}.}
\end{figure}
\subsection{Verification of sparsity and storage requirement estimates} \label{sec:Varification of sparsity}
We verify estimates of the number of nonzero entries presented in \cref{tab:nnz} for system matrices resulting from different SBP-SAT discretizations of \cref{eq:Poisson problem}. The accuracy of the estimates is measured by the percent error with respect to the actual number of nonzero entries obtained numerically. We also compute the relative densities of the system matrices using the density due to the BR1 SAT as a reference for normalization. The results for degree four SBP operators are shown in \cref{tab:nnz numerical}. \violet{The largest errors in the estimated number of nonzero entries of the system matrices resulting from discretizations with SBP-$ \Omega $, SBP-$ \Gamma $, and SBP diagonal-E operators are $ 8.34\% $, $ 2.10\% $, and $ 0.74\% $, respectively}. For fewer elements the errors increase (\eg, $ \approx 20\% $ with 68 elements) because the ratio of the number of interior elements to boundary elements decreases.
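The entries of \cref{tab:nnz numerical} follow from simple bookkeeping on the assembled sparse matrices; the sketch below assumes SciPy sparse storage and treats the input matrices as placeholders.
\begin{verbatim}
import scipy.sparse as sps

def sparsity_report(A, nnz_estimated, A_br1):
    """Actual nnz, percent error of the estimate, and density relative to the
    BR1 system matrix of the same discretization."""
    A, A_br1 = sps.csr_matrix(A), sps.csr_matrix(A_br1)
    pct_error = 100.0*(nnz_estimated - A.nnz)/A.nnz
    rel_density = A.nnz/A_br1.nnz      # matrices have identical shapes
    return A.nnz, pct_error, rel_density
\end{verbatim}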
\begin{table*}[!t]
\small
\caption{\label{tab:nnz numerical} Number of nonzero entries of system matrices resulting from SBP-SAT discretization of \cref{eq:Poisson problem} with $ 4352 $ degree $ p=4 $ curved SBP elements, percent error of estimated number of nonzero entries compared to actual number of nonzero entries, and relative densities (rel. density) of system matrices with respect to nonzero entries obtained with BR1 SAT.}
\centering
\renewcommand*{\arraystretch}{1.1}
\begin{tabular}{l c c c c c c c c c}
\toprule
\multirow{2}{*}{SAT} & \multicolumn{3}{c}{SBP-$ \Omega $} & \multicolumn{3}{c}{SBP-$ \Gamma $} & \multicolumn{3}{c}{SBP-E} \\ \cmidrule(lr){2-4} \cmidrule(lr){5-7} \cmidrule(lr){8-10}
& $nnz$ & \%error& rel. density & $nnz$ & \%error& rel. density & $nnz$ & \%error & rel. density \\ \cmidrule(lr){1-1} \cmidrule(lr){2-4} \cmidrule(lr){5-7} \cmidrule(lr){8-10}
BR1 & $ 9,594,000 $ & $+2.06$ & $ 1.0000 $ & $ 4,041,648 $ & $+1.11$ & $ 1.0000 $ & $ 4,617,968 $ & $+0.74$ & $ 1.0000 $ \\
BR2 & $ 3,877,200 $ & $+1.02$ & $ 0.4041 $ & $ 3,406,448 $ & $+0.80$ & $ 0.8428 $ & $ 4,617,968 $ & $+0.74$ & $ 1.0000 $ \\
LDG & $ 4,820,400 $ & $+8.34$ & $ 0.5024 $ & $ 2,674,048 $ & $+2.10$ & $ 0.6616 $ & $ 3,523,168 $ & $+0.55$ & $ 0.7629 $ \\
CDG & $ 3,877,200 $ & $+1.02$ & $ 0.4041 $ & $ 2,569,248 $ & $+0.62$ & $ 0.6357 $ & $ 3,523,168 $ & $+0.55$ & $ 0.7629 $ \\
BO & $ 3,877,200 $ & $+1.02$ & $ 0.4041 $ & $ 3,406,448 $ & $+0.80$ & $ 0.8428 $ & $ 4,617,968 $ & $+0.74$ & $ 1.0000 $ \\
CNG & $ 3,877,200 $ & $+1.02$ & $ 0.4041 $ & $ 2,569,248 $ & $+0.62$ & $ 0.6357 $ & $ 3,523,168 $ & $+0.55$ & $ 0.7629 $ \\
\bottomrule
\end{tabular}
\end{table*}
The relative densities in \cref{tab:nnz numerical} show that the storage requirements for discretizations with SBP-$ \Omega $ operators coupled with the BR1 SAT can be reduced by up to $ \approx 60\% $ if instead compact SATs are used. For SBP-$ \Gamma $ operators, density reductions of up to $ \approx 35\% $ are observed if the CNG, CDG, or LDG SATs are used. The same set of SATs yields an $ \approx 23\% $ reduction in density when used with SBP diagonal-E operators. Compared to the BR2 and BO SATs, we observe an $\approx 20\% $ reduction in density when the LDG, CDG, and CNG SATs are used with the SBP-$ \Gamma $ operator.
Figure \ref{fig:nnz} shows the variation of the number of nonzero entries due to implementation of different types of degree four SBP operator with a single type of SAT. The SBP-$ \Gamma $ operator produces the fewest nonzero entries regardless of the choice of SAT. This trend is observed for lower degree operators as well, but for implementations with the BR2 SAT, the SBP-$ \Gamma $ and SBP-$ \Omega $ operators produce very similar numbers of nonzero entries. For SBP-$ \Omega $ and SBP diagonal-E operators, no conclusive statement can be made regarding which operator produces a smaller number of nonzero entries when implemented with the same SAT. Combining the observations from \cref{fig:nnz} and \cref{tab:nnz numerical}, we can conclude that the minimum number of nonzero entries (highest sparsity and lowest storage requirement) is obtained when SBP-$ \Gamma $ operators are used with the CNG SAT. While the CDG SAT also produces the same number of nonzero entries, it requires storing values of the switch functions for each facet in the discretization.
\begin{figure}[!t]
\centering
\begin{subfigure}{0.23\textwidth}
\centering
\includegraphics[scale=0.20]{nnz_BR1_p4.pdf}
\caption{\label{fig:nnz_BR1_p4} BR1 SAT, $p=4$}
\end{subfigure}\quad
\begin{subfigure}{0.23\textwidth}
\centering
\includegraphics[scale=0.20]{nnz_BR2_p4.pdf}
\caption{\label{fig:nnz_BR2_p4} BR2 SAT, $p=4$}
\end{subfigure}\quad
\begin{subfigure}{0.23\textwidth}
\centering
\includegraphics[scale=0.20]{nnz_LDG_p4.pdf}
\caption{\label{fig:nnz_LDG_p4} LDG SAT, $p=4$}
\end{subfigure}\quad
\begin{subfigure}{0.23\textwidth}
\centering
\includegraphics[scale=0.20]{nnz_CDG_p4.pdf}
\caption{\label{fig:nnz_CDG_p4} CDG SAT, $p=4$}
\end{subfigure}
\caption{\label{fig:nnz} Comparison of the number of nonzero entries when different types of SBP operators are implemented with the same SAT. Values obtained from the numerical experiment are denoted by ``num.'' and estimated values are denoted by ``est.'' in the legends.}
\end{figure}
\section{Conclusions}\label{sec:conclusions}
Using a general framework, we have analyzed the numerical properties of discretizations of diffusion problems \blue{with diagonal-norm multidimensional SBP operators and various types of SAT}. The framework enables implementation of SATs without writing diffusion problems as first-order systems of equations. This offers the flexibility to switch from one type of SAT to another through a simple selection of the SAT coefficients. The main theoretical results can be summarized as follows.
\begin{itemize}
\item Conditions required for consistency, conservation, adjoint consistency, and energy stability of SBP-SAT discretizations of diffusion problems with multidimensional SBP operators are established.
\item A functional error of order $ h^{2p} $ is attained when primal and adjoint consistent SATs are used with degree $ p $ multidimensional SBP operators in curvilinear coordinates.
\item Several types of SAT that correspond to known DG fluxes, including those leading to extended stencils, are identified. Instability issues observed with some of these SATs are addressed by modifying the SAT coefficients.
\item It is shown that the BR1, BR2, and SIPG SATs are equivalent when implemented with diagonal-norm $ \R^0 $ (SBP diagonal-E) operators, which include the frequently used LGL operator in one space dimension. For the same family of operators, the LDG and CDG SATs are shown to be equivalent.
\item Upper bounds on the number of nonzero entries in the system matrices arising from $ d $-dimensional SBP-SAT discretizations are derived.
\end{itemize}
Numerical experiments with the two-dimensional Poisson problem were conducted to study the accuracy, eigenspectra, conditioning, and sparsity of various SBP-SAT discretizations. The adjoint consistent SATs display primal and adjoint solution convergence rates of $ p+1 $. In contrast, the adjoint inconsistent SATs, BO and CNG, show solution convergence rates of $ p+1 $ and $ p $ for odd and even degree operators, respectively. Functional superconvergence rates of $ 2p $ are attained with the adjoint consistent SATs, while the adjoint inconsistent SATs converge at lower rates. The reduction in the functional error values is more notable than the reduction in the solution error values when adjoint consistent SATs are used instead of adjoint inconsistent SATs. We summarize the rest of our observations as follows.
\begin{itemize}
\item \violet{When used with SBP-$ \Omega $ and SBP-$ \Gamma $ operators}, the BR1 and LDG SATs couple second neighbor elements; hence, they are less amenable for code parallelization than the other types of SAT.
\item \violet{When used with the SBP-$ \Omega $ and SBP-$ \Gamma $ operators}, the BR2 SAT leads to a system matrix with the smallest spectral radius compared to the rest of the adjoint consistent SATs. In contrast, the LDG SAT leads to a system matrix with the largest spectral radius.
\item \violet{When used with the SBP diagonal-E operators, the unmodified BR1 and LDG SATs are compact, adjoint consistent, and energy stable. Except for the $ p=3 $ operator, they lead to smaller condition numbers compared to the other adjoint consistent SATs. Furthermore, the unmodified BR1 SAT leads to system matrices with the smallest spectral radius while the unmodified LDG SAT produces the sparsest system matrices.}
\item The BR2 SAT yields about half as large a spectral radius and condition number as the CDG SAT, but the CDG SAT produces system matrices with up to $ 25\% $ fewer nonzero entries when implemented with SBP-$ \Gamma $ and SBP diagonal-E operators.
\item Compared to the adjoint consistent SATs \violet{other than the unmodified BR1 and LDG SATs}, the BO and CNG SATs lead to system matrices with significantly smaller spectral radii. This is also reflected in the conditioning of their system matrices, which have $ 1.5$ to $20 $ times smaller condition numbers.
\item The CNG and CDG SATs produce system matrices with $ 20\%$ to $60\% $ fewer nonzero entries compared to the other types of SAT.
\item If functional superconvergence is not a priority, the CNG SAT offers interesting properties such as a reduced condition number (about half of that of the BR2 SAT), a larger time step, and a sparse system matrix. However, the scheme suffers from larger solution error and even-odd solution convergence behavior.
\end{itemize}
We acknowledge that the choice of SATs to solve diffusion problems is not straightforward due to the competing numerical properties, which can be problem dependent, but our observations indicate that \violet{when used with SBP-$ \Omega $ and SBP-$ \Gamma $ operators}, the BR2 and CDG SATs offer superior numerical properties in most cases, and the CNG SAT is a better alternative in some cases. \violet{For the SBP diagonal-E operators, the unmodified BR1 and LDG SATs show significantly better numerical properties compared to the rest of the SATs.} \blue{It is possible that other types of SAT with better numerical properties fall under the general framework, and this may be studied in the future.}
\appendix
\section{Construction of SBP operators on curved elements} \label{sec:Curvilinear Transformation} High-order methods require a sufficiently accurate representation of curved geometries to achieve optimal solution convergence rates \cite{hesthaven2007nodal,bassi1997higheuler}. One approach to generate curved elements is to reposition the facet nodes of linear meshes generated on physical elements such that they coincide with facet quadrature points on curved physical boundaries, and to propagate the curvature to the volume nodes \cite{hesthaven2007nodal}. In this work, however, we assume that a curvilinear mesh is available or an analytical relation is known such that the coordinates of the $ \alpha $-optimized Lagrange interpolation nodes discussed in \cite{hesthaven2007nodal} are accessible on each curved physical element. We then apply polynomial interpolation to find the SBP nodal locations and grid metrics in the physical space.
Crean \etal \cite{crean2018entropy} showed that SBP operators on curved physical elements preserve design order accuracy, freestream flow, and the SBP property if the curvilinear mapping satisfies \cref{assu: mapping}.
The geometric mapping from a point in the reference element, $ (\xi,\eta) \in \hat{\Omega}$, to a point in the physical element, $ (x,y) \in \Omega_k$, is defined by
\begin{align} \label{eq:mapping}
(x,y) = \fnc{M}_k(\xi,\eta) \equiv \sum_{j=1}^{n_s^*} c_j \hat{\phi}_j(\xi,\eta),
\end{align}
where $ c_j $ is the coordinate of the $ j $-th Lagrange interpolation node on the physical element, $ \hat{\phi}_j \in \polyref{p_{\rm map}}$ is the Lagrange polynomial basis function associated with the $ j $-th node on the reference element, and $ n_s^* = {p_{\rm map}+d \choose d} $ is the cardinality of the polynomial basis for the mapping. At the Lagrange nodes of the reference element, $ \hat{\phi}_{j} $ satisfies
\begin{equation} \label{eq:Lagrange basis}
\hat{\phi}_j(\xi_i, \eta_i) = \sum_{\ell=1}^{n_s^*} k_\ell^{(j)}\hat{\varphi}_\ell (\xi_i, \eta_i)=\delta_{ij}\qquad \text{for}\;j=1,\dots,n_s^*,
\end{equation}
where $ k_{\ell}^{(j)} \in \IR{}$, $ \hat{\varphi} $ is another basis function, and $ \delta_{ij} $ is the Kronecker delta operator. The basis $ \hat{\varphi} $ is chosen to be the orthonormalized canonical basis given in \cite{hesthaven2007nodal} as,
\begin{equation} \label{eq:orthonormal basis}
\hat{\varphi}_m(\xi, \eta)=\sqrt{2}\fnc{P}_i(a)\fnc{P}_j^{(2i+1, 0)}(b)(1-b)^i,
\end{equation}
where $ \fnc{P}_n^{(\alpha,\beta)}$ is the $ n $-th order Jacobi polynomial, $ m= j+(p_{\rm map}+1)i+1 - \frac{i}{2}(i-1) $, $ i,j\ge 0 $, $ i+j\le p_{\rm map} $, $ a=2({1+\xi})/({1-\eta}) - 1 $, and $ b=\eta $.
Writing \cref{eq:Lagrange basis} in matrix form, we have $ \hat{\V}_L \K = \I_{n_s^*} $, which yields $ \K = \hat{\V}_L^{-1} $, where the coefficient matrix, $ \K \in \IRtwo{n_s^*}{n_s^*}$, contains the coefficient $ k_\ell^{(j)} $ in the $ j$-th column and $ \ell$-th row, and $ \hat{\V}_L $ is the Vandermonde matrix constructed using the orthonormal basis in \cref{eq:orthonormal basis} and the $ \alpha $-optimized Lagrange nodes, $ \hat{S}_L = \{ \xi_i,\eta_i \}_{i=1}^{n_s^*}$, presented in \cite{hesthaven2007nodal}. The $ \alpha $-optimized Lagrange nodes minimize the Lebesgue constant and ensure the Vandermonde matrix is well-behaved \cite{hesthaven2007nodal}. Using the matrix forms of \cref{eq:mapping} and \cref{eq:Lagrange basis}, the coordinates of the SBP volume nodes, $ \bm{x}_k $, $ \bm{y}_k $, and facet nodes, $ \bm{x}_\gamma $, $ \bm{y}_\gamma $, in the physical element can be calculated as
\begin{equation} \label{eq:sbp nodes on physical element}
\begin{aligned}
\bm{x}_k = \hat{\V}_{\Omega} \hat{\V}_L^{-1}\tilde{\bm{x}}_{k}, && \bm{y}_k = \hat{\V}_{\Omega} \hat{\V}_L^{-1}\tilde{\bm{y}}_{k}, &&
\bm{x}_{\gamma} = \hat{\V}_{\gamma} \hat{\V}_L^{-1}\tilde{\bm{x}}_{k}, &&
\bm{y}_{\gamma} = \hat{\V}_{\gamma}\hat{\V}_L^{-1}\tilde{\bm{y}}_{k},
\end{aligned}
\end{equation}
where $ \tilde{\bm{x}}_k,\; \tilde{\bm{y}}_k \in \IR{n_s^*} $ are vectors of the $ x $ and $ y $ coordinates of the Lagrange interpolation nodes in $ \Omega_k $. Using the derivatives of \cref{eq:mapping} with respect to the reference coordinates,
\begin{equation} \label{eq:derivative of polynomial in basis}
\begin{aligned}
\pdv{\fnc{M}}{\xi} &= \sum_{j=1}^{n_{s}^*}\sum_{i=1}^{n_{s}^*}c_{j}k_i^{(j)}\pdv{\hat{\varphi}_{i}}{\xi},
&&
\pdv{\fnc{M}}{\eta} = \sum_{j=1}^{n_{s}^*}\sum_{i=1}^{n_{s}^*}c_{j}k_i^{(j)}\pdv{\hat{\varphi}_{i}}{\eta},
\end{aligned}
\end{equation}
we compute the exact grid metrics by forming the derivatives of the Vandermonde matrix on $ \hat{S} $, \ie,
\begin{equation} \label{eq:grid metrics volume}
\begin{aligned}
\bm{x}_{\xi,k} = \hat{\V}_{\xi,\Omega} \hat{\V}_L^{-1}\tilde{\bm{x}}_{k}, &&
\bm{y}_{\xi,k} = \hat{\V}_{\xi,\Omega} \hat{\V}_L^{-1}\tilde{\bm{y}}_{k}, &&
\bm{x}_{\eta,k} = \hat{\V}_{\eta,\Omega} \hat{\V}_L^{-1}\tilde{\bm{x}}_{k}, &&
\bm{y}_{\eta,k} = \hat{\V}_{\eta,\Omega} \hat{\V}_L^{-1}\tilde{\bm{y}}_{k},
\end{aligned}
\end{equation}
where the subscripts $ \xi $ and $ \eta $ denote partial derivatives with respect to $ \xi $ and $ \eta $, \eg, $ \bm{x}_{\xi,k} $ is the restriction of $ {\partial x}/{\partial \xi} $ on to the nodes $ S_k $. Similarly, the facet grid metrics are computed as
\begin{equation} \label{eq:grid metrics facet}
\begin{aligned}
\bm{x}_{\xi,\gamma k} = \hat{\V}_{\xi,\gamma} \hat{\V}_L^{-1}\tilde{\bm{x}}_{k}, &&
\bm{y}_{\xi,\gamma k} = \hat{\V}_{\xi,\gamma} \hat{\V}_L^{-1}\tilde{\bm{y}}_{k}, &&
\bm{x}_{\eta,\gamma k} = \hat{\V}_{\eta,\gamma} \hat{\V}_L^{-1}\tilde{\bm{x}}_{k}, &&
\bm{y}_{\eta,\gamma k} = \hat{\V}_{\eta,\gamma} \hat{\V}_L^{-1}\tilde{\bm{y}}_{k}.
\end{aligned}
\end{equation}
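In an implementation, \cref{eq:sbp nodes on physical element,eq:grid metrics volume,eq:grid metrics facet} reduce to a handful of dense matrix products. The following Python sketch illustrates the pattern for a single element; the Vandermonde matrices and their $\xi$- and $\eta$-derivative counterparts are assumed to have been assembled beforehand by evaluating the basis at the SBP volume nodes, the facet nodes, and the Lagrange nodes, and all variable names are illustrative rather than taken from any particular code:
\begin{verbatim}
import numpy as np

def nodes_and_metrics(V_vol, V_fac, dV_vol, dV_fac, V_L, x_lag, y_lag):
    # x_lag, y_lag: coordinates of the Lagrange nodes of this element
    K = np.linalg.solve(V_L, np.eye(V_L.shape[0]))      # K = inv(V_L)
    cx, cy = K @ x_lag, K @ y_lag                       # modal coefficients

    x_k, y_k = V_vol @ cx, V_vol @ cy                   # SBP volume nodes
    x_g, y_g = V_fac @ cx, V_fac @ cy                   # facet nodes

    Vxi, Veta = dV_vol                                  # volume derivative Vandermondes
    vol = dict(x_xi=Vxi @ cx, y_xi=Vxi @ cy,
               x_eta=Veta @ cx, y_eta=Veta @ cy)

    Gxi, Geta = dV_fac                                  # facet derivative Vandermondes
    fac = dict(x_xi=Gxi @ cx, y_xi=Gxi @ cy,
               x_eta=Geta @ cx, y_eta=Geta @ cy)
    return (x_k, y_k), (x_g, y_g), vol, fac
\end{verbatim}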
The mapping Jacobian matrices for the volume, $ \fnc{J}_k: \hat{\Omega} \rightarrow \IRtwo{d}{d} $, and facets, $ \fnc{J}_{f_\gamma}: \hat{l}_\gamma \rightarrow \IRtwo{d}{(d-1)} $, are given, respectively, by
\begin{equation}
\fnc{J}_{k} = \left[\begin{array}{cc}
\pdv{x}{\xi} & \pdv{x}{\eta}\\
\pdv{y}{\xi} & \pdv{y}{\eta}
\end{array}\right], \quad
\text{and} \quad
\fnc{J}_{f_\gamma} = \left[\begin{array}{c}
\pdv{x}{s}\\
\pdv{y}{s}
\end{array}\right] ,
\end{equation}
where $ s=s(\xi,\eta) $ is the parametric equation of the line, $ \hat{l}_\gamma $, connecting the end points of facet $ \gamma $ on the reference element. The outward pointing unit normal vectors on facet $ \gamma $ of element $ \Omega_k $ are given by
\begin{equation}
\bm{n}_{\gamma k} = \frac{\myabs{\fnc{J}_k}}{\myabs{\fnc{J}_{f_\gamma}}} \fnc{J}_k^{-T}\hat{\bm{n}}_\gamma
= \frac{1}{\myabs{\fnc{J}_{f_\gamma}}} \left[\begin{array}{cc}
\pdv{y}{\eta} & -\pdv{y}{\xi}\\
-\pdv{x}{\eta} & \pdv{x}{\xi}
\end{array}\right]\left[\begin{array}{c}
{\hat{n}}_{\gamma \xi}\\
{\hat{n}}_{\gamma \eta}
\end{array}\right],
\end{equation}
where $ \myabs{\fnc{J}_k} $ is the determinant of the Jacobian, and $ \myabs{\fnc{J}_{f_\gamma}} = \sqrt{[(\fnc{J}_{f_\gamma})_{1}]^2 + [(\fnc{J}_{f_\gamma})_{2}]^2}$. We evaluate $ \myabs{\fnc{J}_k} $ and $ \myabs{\fnc{J}_{f_\gamma}} $ at the volume and facet nodes of $ \Omega_k $ as
\begin{equation}
\begin{aligned}
\J_k &= \mydiag\qty(\myabs{\bm{x}_{\xi,k} \circ \bm{y}_{\eta,k} - \bm{x}_{\eta,k} \circ \bm{y}_{\xi,k} }),
&&
\J_{f_1} = \mydiag\qty(\frac{1}{\sqrt{2}}\sqrt{(\bm{x}_{\eta,\gamma k}-\bm{x}_{\xi,\gamma k})^2 + (\bm{y}_{\eta,\gamma k}-\bm{y}_{\xi,\gamma k})^2}),
\\
\J_{f_2} &= \mydiag\qty(\sqrt{(-\bm{x}_{\eta,\gamma k})^2 + (-\bm{y}_{\eta,\gamma k})^2}),
&&
\J_{f_3} = \mydiag\qty(\sqrt{(\bm{x}_{\xi,\gamma k})^2 + (\bm{y}_{\xi,\gamma k})^2}),
\end{aligned}
\end{equation}
respectively, where $ \circ $ denotes the Hadamard (element-wise) product of vectors, and the operator $ \mydiag(\cdot) $ takes in a vector and creates a diagonal matrix with the vector placed in the main diagonal.
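A minimal sketch of these evaluations, continuing the purely illustrative data layout of the previous listing, with \texttt{vol} holding the volume metric terms and \texttt{fac} the metric terms evaluated at the nodes of the facet under consideration, is:
\begin{verbatim}
import numpy as np

def jacobian_factors(vol, fac):
    # diagonal of J_k from the volume metric terms (Hadamard products)
    J_k = np.abs(vol['x_xi']*vol['y_eta'] - vol['x_eta']*vol['y_xi'])
    # facet Jacobians for the three reference-triangle facets
    J_f1 = np.sqrt((fac['x_eta'] - fac['x_xi'])**2
                   + (fac['y_eta'] - fac['y_xi'])**2) / np.sqrt(2.0)
    J_f2 = np.sqrt(fac['x_eta']**2 + fac['y_eta']**2)
    J_f3 = np.sqrt(fac['x_xi']**2 + fac['y_xi']**2)
    return J_k, (J_f1, J_f2, J_f3)
\end{verbatim}
In practice the facet metric terms differ from facet to facet, so only the entry corresponding to the facet whose metrics are supplied is meaningful for each facet Jacobian.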
The SBP operators on the physical element are constructed following \cite{shadpey2020entropy,crean2018entropy}. The norm matrices on the physical element read
\begin{equation} \label{eq:norm matrices}
\begin{aligned}
\H_k &= \J_k \hat{\H},
& &
\B_\gamma = \J_{f_\gamma} \hat{\B}_\gamma.
\end{aligned}
\end{equation}
The normals at facet $ \gamma $ are stored in the diagonal matrices
\begin{equation} \label{eq:normal matrices}
\begin{aligned}
\Nxgk &= \J_{f_\gamma}^{-1}[\mydiag(\bm{y}_{\eta,\gamma k}) \hat{n}_{\gamma \xi} - \mydiag(\bm{y}_{\xi,\gamma k}) \hat{n}_{\gamma \eta}],
&&
\Nygk = \J_{f_\gamma}^{-1}[-\mydiag(\bm{x}_{\eta,\gamma k}) \hat{n}_{\gamma \xi} + \mydiag(\bm{x}_{\xi,\gamma k}) \hat{n}_{\gamma \eta}].
\end{aligned}
\end{equation}
The surface integral matrices in the $ x $ and $ y $ directions are given by
\begin{equation} \label{eq:Ex and Ey matrices}
\begin{aligned}
\Exk &= \sumfk \Rgk^T\B_\gamma \Nxgk \Rgk,
&&
\Eyk = \sumfk \Rgk^T\B_\gamma \Nygk \Rgk,
\end{aligned}
\end{equation}
and the skew-symmetric matrices are constructed as
\begin{equation} \label{eq:Sxk and Syk}
\begin{aligned}
\Sxk &= \frac{1}{2}\qty(\mydiag(\bm{y}_{\eta,k})\Qxi - \Qxi^T\mydiag(\bm{y}_{\eta,k})) + \frac{1}{2} \qty(-\mydiag(\bm{y}_{\xi,k})\Qeta + \Qeta^T\mydiag(\bm{y}_{\xi,k})),
\\
\Syk &= \frac{1}{2}\qty(-\mydiag(\bm{x}_{\eta,k})\Qxi + \Qxi^T\mydiag(\bm{x}_{\eta,k})) + \frac{1}{2} \qty(\mydiag(\bm{x}_{\xi,k})\Qeta - \Qeta^T\mydiag(\bm{x}_{\xi,k})).
\end{aligned}
\end{equation}
Finally, the derivative operators are computed as
\begin{equation} \label{eq:Dx and Dy}
\begin{aligned}
\Dxk &= \H_k^{-1}\left(\Sxk + \frac{1}{2}\Exk\right),
&&
\Dyk = \H_k^{-1}\left(\Syk + \frac{1}{2}\Eyk\right).
\end{aligned}
\end{equation}
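To summarize the assembly, the following Python sketch builds the $x$-direction operators of \cref{eq:norm matrices,eq:normal matrices,eq:Ex and Ey matrices,eq:Sxk and Syk,eq:Dx and Dy} for one element; the $y$-direction operators follow by exchanging the roles of the $x$ and $y$ metric terms. The reference operators $\hat{\H}$, $\Qxi$, $\Qeta$, $\hat{\B}_\gamma$, $\Rgk$ and the reference normals are assumed given, and the data layout is again illustrative only:
\begin{verbatim}
import numpy as np

def build_x_operators(H_hat, Q_xi, Q_eta, B_hat, R, n_hat, vol, fac, J_k, J_f):
    # H_hat, Q_xi, Q_eta: reference volume operators
    # B_hat, R, n_hat, fac, J_f: per-facet lists (reference norm matrix,
    #   extrapolation matrix, reference normal, facet metrics, facet Jacobian)
    H_k = np.diag(J_k) @ H_hat                        # norm matrix

    E_x = np.zeros_like(H_hat)                        # surface integral matrix
    for B_g, R_g, (nxi, neta), f, Jf in zip(B_hat, R, n_hat, fac, J_f):
        B_gamma = np.diag(Jf) @ B_g
        N_x = np.diag((f['y_eta']*nxi - f['y_xi']*neta) / Jf)
        E_x += R_g.T @ B_gamma @ N_x @ R_g

    Yeta, Yxi = np.diag(vol['y_eta']), np.diag(vol['y_xi'])
    S_x = (0.5*(Yeta @ Q_xi - Q_xi.T @ Yeta)
           + 0.5*(-Yxi @ Q_eta + Q_eta.T @ Yxi))      # skew-symmetric part

    D_x = np.linalg.solve(H_k, S_x + 0.5*E_x)         # derivative operator
    return H_k, E_x, S_x, D_x
\end{verbatim}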
\section{Summary of notation}
\blue{The analysis presented in this work is notation heavy; hence, we tabulate some of the important notation in \cref{tab:notation} for quick referencing.}
\begin{table*}[!t]
\small
\caption{\label{tab:notation}Summary of important notation}
\centering
\setlength{\tabcolsep}{1em}
\renewcommand{\arraystretch}{1.25}
\begin{tabular}{l l l}
\toprule
\makecell[l]{Notation} & \makecell[l]{Equation} & \makecell[l]{Description}
\\ \midrule
\makecell[l]{$ \D_{xk} $, $ \D_{yk} $} & \makecell[l]{\cref{eq:Dx and Dy}} & \makecell[l]{First derivative operators in the $ x $ and $ y $ directions}
\\
\makecell[l]{$ \D_k^{(2)} $} & \makecell[l]{\cref{eq:D2 1st form}} & \makecell[l]{Second derivative operator approximating $ \nabla \cdot (\lambda \nabla) $ on element $ \Omega_k $}
\\
\makecell[l]{$ \H_{k} $, $ \B_{\gamma} $} & \makecell[l]{\cref{eq:norm matrices}} & \makecell[l] {Diagonal norm matrices of element $ \Omega_k $ and facet $ \gamma $, respectively }
\\
\makecell[l]{$\E_{x,k}$, $ \E_{y,k} $} & \makecell[l]{\cref{eq:Ex and Ey matrices}} & \makecell[l] {Surface integral matrix in the $ x $ and $ y $ directions}
\\
\makecell[l]{$ \R_{\gamma k} $} & \makecell[l]{\cref{eq:extrapolation matrix}} &\makecell[l]{Extrapolation matrix from volume nodes in element $ \Omega_k $ to facet nodes on facet $ \gamma $}
\\
\makecell[l]{$ \D_{\gamma k} $} & \makecell[l]{\cref{eq:D_gamma k}} & \makecell[l]{Normal derivative operator approximating $ \bm{n}\cdot(\lambda\nabla) $ on facet $ \gamma $ of element $ \Omega_k $ }
\\
\makecell[l]{$ \N_{x \gamma k} $, $ \N_{y \gamma k} $} & \makecell[l]{\cref{eq:normal matrices}} & \makecell[l]{Diagonal matrices with the $ x $ and $ y $ components of the normal vector on face $ \gamma $ of element $ \Omega_k $}
\\
\makecell[l]{$ \M_k $} & \makecell[l]{\cref{eq:M_k matrix}} & \makecell[l]{A positive semidefinite matrix used for the approximation $\bm{v}_k^T\M_k\bm{u}_k \approx \int_{\Omega} \nabla\fnc{V}\cdot (\lambda\nabla \fnc{U}) \dd \Omega$}
\\
\makecell[l]{$ \Lambda_k $} & \makecell[l]{\cref{eq:Lambda}} & \makecell[l]{A block matrix containing the diffusivity coefficients in all combinations of directions}
\\
\makecell[l]{$ \T_{ak}^{(i)} $, $ \T_{abk}^{(j)} $} & \makecell[l]{\cref{eq:Interface SATs,eq:Boundary SATs}} & \makecell[l]{SAT coefficient matrices for facets $ a,b\in\{\gamma,\epsilon,\delta\} $, $ i=\{1,2,3,4,D\}$, and $ j=\{5,6\} $}
\\
\makecell[l]{$ \Upsilon_{abk} $} & \makecell[l]{\cref{eq:Upsilon definition}} & \makecell[l]{Component of the SAT coefficient matrices defined for facet $ a,b\in\{\gamma,\epsilon,\delta\} $}
\\
\makecell[l]{$ \alpha_{\gamma k} $} & \makecell[l]{\cref{eq:alpha_gamma k}} & \makecell[l]{A facet weight parameter satisfying $ \sum_{\gamma \subset \Gamma_k} \alpha_{\gamma k} = 1$ }
\\
\makecell[l]{$ \beta_{\gamma k} $} & \makecell[l]{\cref{eq:LDG switch with g}} & \makecell[l]{Switch function defined at facet $ \gamma $ of element $ \Omega_k $}
\\
\bottomrule
\end{tabular}
\end{table*}
\section*{Declaration of competing interest}
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
\section*{Acknowledgments}
The authors would like to thank Professor Masayuki Yano for his insights on DG fluxes for elliptic problems, David Craig Penner for his helpful feedback on the functional superconvergence proofs, and the anonymous referees for their valuable comments. All figures are produced using Matplotlib \cite{hunter2007matplotlib}.
\addcontentsline{toc}{section}{Acknowledgments}
\bibliographystyle{model1-num-names}
\bibliography{references}
\addcontentsline{toc}{section}{\refname}
\end{document} | 132,481 |
The Single Best Strategy To Use For candy floss machine hire
If you still have a question, click here to browse answers to the most frequently asked questions, or don't hesitate to give us a call.
Tasty flavoured candy floss for your events, such as children's parties and other activities and performances. You can choose your flavour, and the candy floss machine is run for two hours.
Draught beer and soft drinks dispenser machines, installation and servicing in the South East. Our service area includes the following:
Whether it's a standalone machine or part of a candy cart or candy bike, hiring a candy floss machine is a wonderful idea for a range of occasions, from children's parties to wedding receptions and corporate events. Most suppliers will work with you to create a personalised experience with bespoke branding and flavouring options, and usually the candy floss hire will even include a dedicated server to make and serve the sugary goodness to your guests. To find out more and hear from local candy floss hire suppliers, just complete a request form now.
There's nothing quite like it, and when you choose popcorn on a traditional cart hire you can bring that delicious smell to your event!
Our candy cart hire packages include a large hand-crafted Victorian-style cart, with a big spindle wheel and hard-top pitched roof.
They both looked and tasted wonderful and went down a treat with everybody. The gentleman did such a fantastic job and we're so happy with how it all turned out.
Our standard hire includes 100 portions of candy floss. If you need more, please order the correct amount at checkout, as on the day our team will only bring the expected quantity and may not have any spares.
Our popcorn machine hire makes fresh, hot popcorn just like you'd get at the cinema, right before your eyes. Popcorn cart machines are a great addition to kids' parties, festivals, fetes, fundraisers, carnivals, movie-themed get-togethers or any occasion where you want to add a little fun.
You can pick the number of servings you'd like, from twenty-five for small get-togethers, or we can cater for larger functions with unlimited servings, perfect for weddings, fetes or corporate occasions.
I just wanted to drop a line or two to thank you for the chocolate fountain at Aimee and Q's wedding reception on June thirtieth.
Slush machine hire is an outstanding addition to any occasion, be it a children's party or an addition to the bar producing vodka slush or frozen party cocktails. In addition to this we can also supply energy drink slush; please enquire to find out more.
Welcome to A&K Events! Trying to book something last minute and can't check the availability online? Just give our office a call and we'll see if we can squeeze you in! 01923 270121
Minimal Weierstrass equation
\(y^2+xy=x^3+x^2-142808x+20696148\)
Mordell-Weil group structure
\(\Z\times \Z/{2}\Z\)
Infinite order Mordell-Weil generator and height
Torsion generators
\( \left(\frac{891}{4}, -\frac{891}{8}\right) \)
Integral points
\( \left(226, 56\right) \), \( \left(226, -282\right) \), \( \left(603, 12120\right) \), \( \left(603, -12723\right) \), \( \left(2254, 104498\right) \), \( \left(2254, -106752\right) \)
designate
[verb dez-ig-neyt; adjective dez-ig-nit, -neyt]
- to mark or point out; indicate; show; specify.
- to denote; indicate; signify.
- to name; entitle; style.
- to nominate or select for a duty, office, purpose, etc.; appoint; assign.
- named or selected for an office, position, etc., but not yet installed (often used in combination following the noun it modifies): ambassador-designate.
Origin of designate
Dictionary.com Unabridged Based on the Random House Unabridged Dictionary, © Random House, Inc. 2018
Examples from the Web for designator
Historical Examples
At funerals, his office corresponded with that of the Roman dominus funeris or designator, referred to by Horace, Ep.
Such was the usage in Rome, where the director was styled dominus funeris or designator.
An attendant in a gaily-colored holiday tunic, (designator) corresponds with our box-opener or usher.Quintus Claudius, Volume 2 of 2
Ernst Eckstein
The order of the procession was arranged by the designator, master of ceremonies, and it closely resembled a triumphal procession.The Historical Child
Oscar Chrisman
designate
- to indicate or specify
- to give a name to; style; entitle
- to select or name for an office or duty; appoint
- (immediately postpositive) appointed, but not yet in officea minister designate
Word Origin
C15: from Latin dēsignātus marked out, defined; see design
Collins English Dictionary - Complete & Unabridged 2012 Digital Edition © William Collins Sons & Co. Ltd. 1979, 1986 © HarperCollins Publishers 1998, 2000, 2003, 2005, 2006, 2007, 2009, 2012
Word Origin and History for designator
designate
v.
As a verb, from 1791, from designate (adj.) or else a back-formation from designation. Related: Designated; designating.
Online Etymology Dictionary, © 2010 Douglas Harper | 51,080 |
Chapman and Fry receive Award for Global Engagement
David Chapman, Ph.D., and Gerry Fry, Ph.D., professors in the Department of
Organizational Leadership, Policy, and Development (OLPD), have been selected
as recipients of the 2009 Award for Global Engagement, a prestigious honor
awarded by the Office of International Programs at the University of Minnesota. Both have contributed in numerous ways to support global education and international programs at the University of Minnesota and in the context of OLPD's graduate programs, and in particular in Comparative and International
Development Education. Information about the award ceremony will be
forthcoming.
| 46,781 |
TITLE: Symmetric groups and the "field with one element"
QUESTION [8 upvotes]: I have heard several times that one may regard the symmetric group on $n$ letters as the general linear group in dimension $n$ over the "field with one element". In particular this heuristic would imply, using the Barrat--Priddy--Quillen theorem, that the algebraic $K$-theory groups of the "field with one element" ought to be the stable homotopy groups of spheres.
Can someone give me some insight as to why the symmetric group is a reasonable choice for the general linear group?
REPLY [5 votes]: Let's count. There is a well-known formula for the number of ways of choosing an ordered set of $n$ linearly independent 1-dimensional subspaces of an $n$-dimensional vector space over $\mathbb{F}_q$, namely:
$$\frac{q^n - 1}{q - 1} \frac{q^n - q}{q - 1} \cdots \frac{q^n - q^{n-1}}{q - 1} = q^{\frac{1}{2} (n - 1) n} \frac{q^n - 1}{q - 1} \frac{q^{n-1} - 1}{q - 1} \cdots \frac{q - 1}{q - 1}$$
Equivalently, this is the number of flags in an $n$-dimensional vector space over $\mathbb{F}_q$. Now, observe that
$$\frac{q^m - 1}{q - 1} = q^{m-1} + \cdots + q + 1$$
and so, taking the limit $q \to 1$, we find that the number of all "ordered sets of $n$ linearly independent 1-dimensional subspaces of an $n$-dimensional vector space over $\mathbb{F}_1$" is just $n !$. But vector spaces over $\mathbb{F}_1$ are supposed to have no scalar multiplication, so this should also be the number of all "ordered bases for an $n$-dimensional vector space over $\mathbb{F}_1$". But the set of all ordered bases for an $n$-dimensional vector space over $\mathbb{F}_q$ has a canonical free and transitive $\mathrm{GL}_n (\mathbb{F}_q)$-action, so this suggests that $\mathrm{GL}_n (\mathbb{F}_1)$ should be a group of $n !$ elements, such as the symmetric group.
Of course, one could also ask why we don't just look at the formula for the cardinality of $\mathrm{GL}_n (\mathbb{F}_q)$ directly. Well, we could: it's just $(q - 1)^n$ times the number of ordered sets of $n$ linearly independent 1-dimensional subspaces of an $n$-dimensional vector space over $\mathbb{F}_q$. But that obviously goes to $0$ as $q \to 1$, which doesn't make sense for a group. So maybe the above story is just a post facto justification. | 36,762 |
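For a quick computational sanity check of the counting argument (this assumes SymPy is available; the function name is just for illustration), one can confirm that the $q \to 1$ limit of the flag-count product is indeed $n!$ for small $n$:

import sympy as sp

q = sp.symbols('q', positive=True)

def flag_count(n):
    # number of complete flags in an n-dimensional vector space over F_q
    return sp.Mul(*[(q**m - 1) / (q - 1) for m in range(1, n + 1)])

for n in range(1, 7):
    assert sp.limit(flag_count(n), q, 1) == sp.factorial(n)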
TITLE: Is the Steinberg representation always irreducible?
QUESTION [23 upvotes]: Let $\mathbb{F}$ be a field. The Tits building for $\text{SL}_n(\mathbb{F})$, denoted $T_n(\mathbb{F})$, is the simplicial complex whose $k$-simplices are flags
$$0 \subsetneq V_0 \subsetneq \cdots \subsetneq V_k \subsetneq \mathbb{F}^n.$$
The space $T_n(\mathbb{F})$ is $(n-2)$-dimensional, and the Solomon-Tits theorem
says that in fact $T_n(\mathbb{F})$ is homotopy equivalent to a wedge of $(n-2)$-dimensional spheres. The Steinberg representation of $\text{SL}_n(\mathbb{F})$, denoted $\text{St}_n(\mathbb{F})$, is $\widetilde{H}_{n-2}(T_n(\mathbb{F});\mathbb{C})$.
This is one of the most important representations of $\text{SL}_n(\mathbb{F})$; for instance, if $\mathbb{F}$ is a finite field of characteristic $p$, then $\text{St}_n(\mathbb{F})$ is the unique nontrivial irreducible representation of $\text{SL}_n(\mathbb{F})$ whose dimension is a power of $p$.
The only proof I know that $\text{St}_n(\mathbb{F})$ is an irreducible representation of $\text{SL}_n(\mathbb{F})$ when $\mathbb{F}$ is a finite field uses character theory, and thus does not work for $\mathbb{F}$ infinite (in which case $\text{St}_n(\mathbb{F})$ is an infinite-dimensional representation of the infinite group $\text{SL}_n(\mathbb{F})$).
Question: For an infinite field $\mathbb{F}$, is $\text{St}_n(\mathbb{F})$ an irreducible representation of $\text{SL}_n(\mathbb{F})$? If not, is it at least indecomposable?
EDIT 2: In the previous edit, I said that I accepted an answer that did not answer the question as stated. However, this has now changed since Andrew Snowden and I have written a paper giving a complete answer to this question.
EDIT: I accepted an answer, but I am particularly interested in the field $\mathbb{Q}$, which is not covered by that answer. This case is interesting to me because it arises when studying the cohomology of $\text{SL}_n(\mathbb{Z})$; indeed, in this case the Tits building forms the boundary of the Borel–Serre bordification of the associated symmetric space and the Steinberg representation (as I defined it above) provides the "dualizing module" for $\text{SL}_n(\mathbb{Z})$. See Section 2 of my paper
T. Church, B. Farb, A. Putman A stability conjecture for the unstable cohomology of $\text{SL}_n(\mathbb{Z})$, mapping class groups, and $\text{Aut}(F_n)$,
in "Algebraic Topology: Applications and New Directions", pp. 55–70,
Contemp. Math., 620, Amer. Math. Soc., Providence, RI. doi:10.1090/conm/620/12366, arXiv:1208.3216
for a discussion of this and references. It is also available on my webpage here.
REPLY [9 votes]: Andrew Snowden and I managed to finally answer this question in our paper "The Steinberg representation is irreducible", available here. As you might guess from the title, we prove that the Steinberg representation over an infinite field is always irreducible. In fact, we prove something much more general that applies to arbitrary reductive groups over infinite fields, and also allows arbitrary coefficients for the Steinberg module.
It's worth also mentioning another recent paper by Galatius--Kupers--Randal-Williams called "$E_{\infty}$-cells and general linear groups of infinite fields", available here. One of their results says that the Steinberg representation for $\text{GL}_n$ (as discussed in this question) is indecomposable, i.e. is not the nontrivial direct sum of two subrepresentations. For infinite fields, the Steinberg representation is infinite-dimensional, so this is weaker than being irreducible. However, I think their proof is quite beautiful and worth reading even if it gives a weaker result.
EDIT: My attention has been drawn to two earlier papers:
N. Xi, Some infinite dimensional representations of reductive groups with Frobenius maps, Sci. China Math. 57 (2014), no.~6, 1109--1120.
R. Yang, Irreducibility of infinite dimensional Steinberg modules of reductive groups with Frobenius maps, J. Algebra 533 (2019), 17--24.
These focus on connected reductive group $\mathbf{G}$ over the algebraic closure $k=\overline{\mathbb{F}}_q$ of a finite field $\mathbb{F}_q$. For instance, we could have $\mathbf{G}(k) = \text{GL}(n,k)$ as in the question. Their main theorem says that the Steinberg representation of $\mathbf{G}$ is irreducible with coefficients in any field. Xi's paper handles the case when the coefficients have characteristic $0$ or $\text{char}(k)$, and Yang's paper handles other characteristics. | 149,724 |
Local Disaster Relief at Friday's Game
RENTON, Wash. - The Seattle Seahawks and volunteers from the team's Spirit of 12 partners will collect cash donations for North Counties Family Services of Darrington and Okanogan County Community Action Council on Friday, August 15 at CenturyLink Field prior to kickoff of the Seahawks versus Chargers game.
The two non-profits will share the funds raised, along with the money collected at training camp from the sales of the 2014 Seahawks Yearbooks. The Paul G. Allen Family Foundation will match all donations collected on Friday.
North Counties Family Services Darrington (NCFSD) is a non-profit agency that is dedicated to serve parents, children, individuals and communities to achieve their highest potential through education, outreach and healthy activities.
The Okanogan County Community Action Council (OCCAC) fights poverty through education and empowerment and provides emergency services to low income families in crisis. OCCAC will allocate 100% of funds raised on Friday to assist fire victims in Okanogan County; no administration expenses will be deducted from the proceeds.
Volunteers from Spirit of 12 partners will be collecting donations at various gates, along Occidental Avenue and throughout the stadium and include the following: Boys & Girls Club Washington State Association, Camp Fire Snohomish County, Treehouse and YMCA of Greater Seattle. The Girl Scouts of Western Washington will be distributing the Seahawks Gameday Magazines. PGAFamilyFoundation.org.. | 227,355 |
Significant credit card debt is not unusual today, with many people carrying card balances of around $9,500 on average. With balances like that and interest rates running from 18-25%, debt management and debt consolidation services can be a good option if you ever want that debt to disappear. Debt consolidation can help you better manage debt owed to several creditors by combining those bills into one single loan and, therefore, one monthly payment. In addition, you will repay the debt at a lower interest rate.
Debt management includes much more than bill consolidation services. It covers a wide array of services, such as credit repair, debt reduction, education and counseling, negotiation and other assistance. Consolidating debt to get free of bad credit is a smart step toward repairing a negative credit reputation.
Debt consolidation refinancing is a very similar option for improving your debt situation. The consolidation company will negotiate with your creditors and make arrangements for you to repay the debt at a lower payoff amount and eliminate it more quickly. Consolidation refinancing is intended to help consumers with debt as high as $5,000. You will repay the debt at a lower interest rate and with the convenience of one monthly payment.
Debt consolidation companies can lighten the burden of numerous monthly payments, yet many people hesitate to use consolidation services because of others who have been misled by illegitimate consolidation firms. When picking a consolidation company, you should choose carefully. It is always a good idea to examine the company's record, negative customer reports or evidence of a poor reputation. There are many consolidation companies available today that charge no fee or, at most, a small one. The advantages of debt consolidation, however, outweigh any small charge connected with the services. Keep this in mind as you pick the best company for you.
Once you find a good consolidation company to work with, you can begin to benefit from debt consolidation. In particular, consolidation companies can help you reduce your high interest rates, waive late fees, lower your monthly payments, avoid bankruptcy and eliminate your debt more quickly. To overcome high debt and a financial emergency, pay off your credit cards and outstanding debt at a lower interest rate with the help of debt consolidation.
TITLE: Making sense of word problem
QUESTION [0 upvotes]: Suppose you begin with a pile of $n$ stones and split this pile into
$n$ piles of one stone each by successively splitting a pile of stones
into two smaller piles. Each time you split a pile you multiply the
number of stones in each of the two smaller piles you form, so that if
these piles have $r$ and $s$ stones in them, respectively, you compute
$rs$. Show, by strong induction, that no matter how you split the
piles, the sum of the products computed at each step equals
$n(n-1)/2$.
I'm not sure how to make sense of this question. Am I supposed to prove that $\Sigma rs = n(n-1)/2$? What of instances where $n \% 2 = 1$? I can't split 3 stones into piles $r$ and $s$ of equal size.
PS: I don't want the answer, I just can't comprehend how to begin.
REPLY [1 votes]: If you keep taking 1 stone off the pile of $n$, you find that the sum of products is $(n-1)+(n-2)+\cdots+1$, which is obviously $\frac{n(n-1)}{2}$. You then have to show that, eventually, every way of splitting gives the same result. As a starter, every pile must eventually undergo a 1-1 split. That means there must eventually be a split that produces a pile of 2. And so on.
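A quick empirical check (not a proof, and not the induction you are asked for) that the total really does not depend on how you split: simulate many random splitting sequences and confirm the sum of products always comes out to $n(n-1)/2$.

import random

def split_sum(n):
    # split a pile of n stones down to single stones, choosing each
    # split point at random, and accumulate r*s at every split
    total, piles = 0, [n]
    while piles:
        pile = piles.pop()
        if pile == 1:
            continue
        r = random.randint(1, pile - 1)
        s = pile - r
        total += r * s
        piles.extend([r, s])
    return total

for n in range(2, 12):
    assert {split_sum(n) for _ in range(200)} == {n * (n - 1) // 2}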
98,385 miles
Popular Options: Silver Coupe, 4 Cyl, Leather Seats, Keyless Entry, Fog Lights, Anti-theft System...
- No Accidents Reported
- 2-Owner
- Personal Use
- 19 Service History
- No Accidents Reported: No accidents reported to CARFAX.
- 2-Owner: 1st owner purchased in 03/20/08 and owned in NV from 03/20/08 to 02/22/11 • 2nd owner purchased in 03/28/11 and owned in TX from 03/28/11 to 06/24/16....
- Personal Use: Owner 1 drove an estimated 6,304 miles/year • Owner 2 drove an estimated 14,914 miles/year...
- 19 Service History: Last serviced at 94,733 miles in Devine, TX on 03/30/16 • Vehicle serviced • Emissions or safety inspection performed....
TITLE: Proving $t=(1+\sqrt{1+2hg/v^2 } ) (v/g)$ for a thrown ball
QUESTION [1 upvotes]: If we throw a ball upward from a height $h$ above the earth, with initial velocity $v'$, how can we prove that the time it takes the ball to reach the earth is given by:
$$t=\frac{v}{g}(1+\sqrt{1+\frac{2hg}{v^2} } )$$
REPLY [2 votes]: For a free falling object without air resistance you have two equations
$$ y = h + v'\,t - \frac{1}{2} g t^2 $$
$$ v = v' - g\,t $$
with $h$ the initial height, $v'$ the initial velocity (upwards is positive), $y$ the height at time $t$, and $v$ the velocity.
Solve them when $y=0$ for $v$ and $t$.
Reference: projectile motion. | 3,287 |
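A quick symbolic and numeric check of the claimed formula (assuming SymPy, and writing $v$ for the initial upward speed $v'$):

import sympy as sp

t = sp.symbols('t')
h, v, g = sp.symbols('h v g', positive=True)

y = h + v*t - sp.Rational(1, 2)*g*t**2          # height at time t
roots = sp.solve(sp.Eq(y, 0), t)                # the physical root is the positive one
claimed = (v/g)*(1 + sp.sqrt(1 + 2*h*g/v**2))

nums = {h: 7.3, v: 4.1, g: 9.81}
assert min(abs(float(r.subs(nums)) - float(claimed.subs(nums)))
           for r in roots) < 1e-12

Only the positive root is physical, and it matches $t=\frac{v}{g}\left(1+\sqrt{1+\frac{2hg}{v^2}}\right)$.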
Prim Leaves her Father’s House
From the Song of Maybe
There came a time when Lord Hansa entered the hollow and singing hall of the multicolored Akaroth, for lunisnight celebrations...
By Yisun, when did KSBD suddenly change from webcomic with pseudo-reli-spiritual texts to High-mythical prose with illustrations?
Still loving it though :).
Loved this story!
The demiurge do ath looked on as his ears fed his eyes the great story of voya, and smiled in the half way. “I see now that which is plain, that freedom is death.” Yet his last third frowned, for his title as lord of the none and king of the word alone felt shaken, and he inquired,”lest I ever be dethroned, may we see the hearts, minds, and physical embodiment of these vassals?”
And so UN’s White Son broke the stasis of IS that so forced the most honored daughter of Hands into servitude. The black knot has left her and she has arisen a paragon of IS NOT.
Magnificent. Excellent work.
Do these delays have anything to do with the unholy amalgam of consciousness behind the mind-rending entity C0DA who is know to I and I as Kirkbride?
Beautiful, but I wish you would collect these stories in a separate section. This new offering made me want to reread Prim’s previous story.
This is something I want to do, along with an issue section and an expanded Daemoniac.
“A good tale, Hansa would be well pleased. Mayhaps we the demiurges could truly glimpse these vassals of word?”
Prim and Hansa are both in form absent from the Multiverse as they self-annihilated by Division along with the rest of the multiplicity in the forging of the Wheel. Many still swear by them or pray to them for protection though.
Lovely though she be, Prim is terrifying.
Are Hansa and the Conquering King one and the same? Or do they merely both share a love of tobacco and the smell of smoke?
Hansa is UN-Hansa, of a divine order – he was of the Multiplicity – that is to say he was a god. The Conquering king was born mortal, though it is said he shared Hansa’s love of tobacco and philosophical poetry, yes.
I?
Curious how a message left adrift in a bottle can travel through the years and across so many worlds to turn up in the most unlikely of places.
A very nice story I say.
A beautiful fairytale of re-birth—this story made my night 😀
Love the comic, the worlds, the stories, everything.
You just tend to have very long winding sentences. Not a critique, many prefer to keep to a style rather than change for easier reading, just a suggestion.
Started yesterday.
I am loving this.
I was really impressed after reading this because of the quality work and informative thoughts. I just want to say thanks to the writer and wish you all the best for what's coming!
It was a very good post indeed. I thoroughly enjoyed reading it in my lunch time. Will surely come and visit this blog more often. Thanks for sharing.
This is a smart blog. I mean it. You have so much knowledge about this issue, and so much passion. You also know how to make people rally behind it, obviously from the responses. | 404,366 |
Turner Scores Well with AP Tests, Invests in Music Program
(Beloit, WI) Excerpts Courtesy of the Beloit Daily News
More Beloit Turner High School students are taking AP tests, earning a 3 or higher and earning AP Scholar recognition.
During the 2014-2015 school year, 105 students completed an AP course at Turner High School. It marked the fourth consecutive year where the number of students completing AP exams at Turner High School increased.
As a school, 22.9 percent of Turner’s total student body completed an AP exam during the 2014-2015 school year, exceeding the state average by 7.2 percent. Turner High School also experienced an increase in the number of AP tests administered to students. This is the sixth consecutive year where the number of AP exams administered to Turner High School has increased.
While the number of students participating in AP courses has increased, the percentage of Turner High School students earning a 3 or higher has consistently exceeded 60 percent over the past two years. According to the 2014-2015 AP results, 61 percent of Turner High School students earned a 3 or higher on their AP exam.
Sixteen students have been recognized individually by the College Board. The AP Scholar Awards recognize individual students for their exceptional achievement on AP Exams. The College Board recognizes several levels of achievement based on students’ performance on AP Exams.
Turner High School continues to experience significant growth related to the performance of their AP students and number of courses being offered.
During the 2014-2015 school year, Turner High School students completed exams in nine different courses. The classes included: AP Biology, AP Calculus AB, AP Calculus BC, AP Chemistry, AP English Literature and Composition, AP Psychology, AP Government and Politics, AP Statistics and AP U.S. History.
Students also have access to 11 additional AP courses being offered through the Beloit Turner Virtual School. These classes include: AP Art History, AP Computer Science, AP English Language & Composition, AP Environmental Science, AP European History, AP French, AP Macroeconomics, AP Microeconomics, AP Physics, AP Spanish and AP World History.
The district is also scoring well with regards to its music program, as the Board of Education recently approved a $60,000 per year plan to purchase new band equipment.
The school was looking to reduce fees for students and families, said Superintendent Dennis McCarthy at the meeting.
The district has five bands between the middle school and high school levels. Ultimately the plan would allow students to bypass additional costs and rent through the school.
Beloit Turner High School Band Director Will Brown said the plan was a wise investment.
Source: Rock County Dev Alliance | 373,877 |
The life of evangelical churches and their spiritual leaders has been portrayed in some recent films and series. Can they help us start conversations?
The U2 singer talks about the Psalms in the Bible, music and life in a Fuller seminary video series. Artists “please God being brutally honest”, he says.
To celebrate its first year anniversary, the Fuller studio has released five new video interviews between Bono and David Taylor, a Fuller Theology Seminary assistant professor of theology and culture.
The series have been called “Bono and David Taylor: Beyond the Psalms.”
“It was Bono, reflecting on his earlier conversation with Eugene Peterson in Montana, who requested more time to express how much the Psalms have influenced him”, the producers explained.
During the interviews, Bono and Taylor share more insights on the Psalms, songwriting, honesty in Christian art, life and death, justice, and more.
BEING INSPIRED BY THE PSALMS
In one of the videos, Bono said he has been looking at different kind of Psalms and all of them “have utility”, so he wonders “why I cannot find them in Christian music.”
When asked what Psalms would be good for someone who does not have a Christian faith or Bible knowledge, Bono said: “Psalm 82 is a good start. [It says] defend the rights of the poor and the orphans. Be fair to the needy and helpless. Rescue them from the power of evil people. See, this isn’t charity, this is justice.”
The Irish singer “loves that the Psalms have that.” He also talked about Psalm 9 and 12: “I will come because the needy are oppressed, I will give them the security they long for”, Bono quoted, and added: “This is Christ.”
“GOD IS NOT INTERESTED IN ADVERTISING”
“We [U2] have a hunch that God is not that interested in advertising. It’s art, rather than advertising, that the Creator of the universe is impressed by”, Bono pointed out in another video.
He believes that “the creation screams God's name, so you do not have to stick a sign in every tree.”
“I want to hear a song about the breakdown in your marriage, I want to hear songs of justice, hear about the rage for injustice, I want to hear a song so good that makes people want to do something about the subject.”
“BRUTALLY HONEST”
Bono emphasised how important honesty is, not just to a relationship with God, but as the root of a great song. “In fact, it is the only place where you can find a great song, any work of art, of merit”, he said.
KING DAVID
The U2 frontman also highlighted the relevance and significance of King David, the author of most of the Psalms.
In the Old Testament we read how the King falls in love with a married woman named Bathsheba and commits adultery with her. To cover the pregnancy of his lover, he devises a plan to kill his husband, who was a soldier.
“David's behaviour is mind-blowing , he has such a darkness in him”, Bono says, but he was forgiven “through grace and redemption. David's psalms are marked by honesty”, he added.
CHRISTIAN BACKGROUND
Bono has never hidden his Christian faith, and his statements about that subject have been followed with much interest in recent years.
His mother, who would take him and his brother to the Church of Ireland chapel every Sunday– while his father would go to mass at the Catholic Church in the district of Finglas, north Dublin–, planted the seed of faith which came to fruition during Bono’s adolescence.
“Bono managed to bring together the rage fuelling punk and the compassion of Christ. He had the courage of biblical prophetic denunciation, alongside a glorious vision of a future of Christian hope”, theologian and journalist Jose de Segovia wrote in an article about the Irish singer.
“To this day, I still haven't heard a modern song of praise with the strength of Gloria – the first song on their album October –, which combines a confession of human powerlessness with a declaration of divine exaltation”, De Segovia said.
“ONLY GOD'S LOVE FILLS MY HEART”
"I became an artist through the portal of grief (...) My mother died at her own father's grave site. As he was being lowered into the ground she had an aneurysm. I was 14”, Bono said in one of the videos.
“I began the journey trying to fill the hole in my heart with music, with my mates, my band mates. Finally, the only thing that can fill it is God's love, it's a big hole but luckily it's a big love”, he added.
U2 AT JIMMY KIMMEL SHOW
U2, who are currently on a world tour celebrating the 30th anniversary of the release of their seminal album "The Joshua Tree", appeared last on Jimmy Kimmel Live and made a surprise, intimate performance of “I Still Haven’t Found What I’m Looking For.”
Bono presented it as a “gospel song with a restless spirit”, and sang it with a Gospel choir.
The band also was briefly interviewed by Kimmel. They spoke about the terrorist attack in Manchester.
“They hate music, they hate women, they even hate little girls. They hate everything that we love, and the worst of humanity was on display in Manchester last night. But so was the best… Manchester has an undefeated spirit, I can assure you”, Bono said.
You can see all the “Bono and David Taylor: Beyond the Psalms” videos | 133,911 |
Bill Ackman, founder of activist hedge-fund manager Pershing Square Capital Management LP, said he would avoid investing in Hewlett-Packard Co. (HPQ) because the cost of evaluating the company would outweigh the benefits.
Ackman said he received five or six calls from investors in HP, the world’s biggest personal computer maker, in recent months “begging us to take a stake,” according to an interview conducted on Bloomberg TV’s “Inside Track” with Erik Schatzker.
“It looks cheap, but the future of the PC is a very, very difficult business to handicap,” said Ackman, 45. “It’s a big, complicated mess.” Paul de Lara, a spokesman for HP in London, declined to comment.
Ackman invests in companies he deems undervalued and then urges changes to increase shareholder returns. In the past year, he has bought stakes greater than 10 percent in Fortune Brands Inc. (FO), the maker of Jim Beam bourbon that changed its name to Beam Inc. after a spinoff, and J.C. Penney Co., the third- largest department store chain in the U.S.
HP Chief Executive Officer Meg Whitman, who succeeded Leo Apotheker last month, said yesterday the company will decide whether to spin off its personal-computer division by the end of the month.
Speaking to Schatzker, Ackman said the announcement of the spinoff had perhaps “irreparably damaged the brand.”
“Before I make an investment in something that requires ’brain damage,’ or a lot of work and energy, I figure out how much money I can make,” he said, referring to Palo Alto, California-based Hewlett-Packard. “And the higher the brain damage, the higher the profit has to be to justify it.”
The shares of Hewlett-Packard rose 82 cents, or 3.7 percent, to $23.02 yesterday in New York Stock Exchange composite trading. The stock has fallen 45 percent this year.
To contact the reporters on this story: Erik Schatzker in New York at [email protected]; Katie Linsell in London at klinsell. | 321,861 |
University Ranking for Year 2018-2019
Manipal Academy of Higher Education
Total Score: 1159
Birla Institute of Technology and Science (BITS), Pilani
Total Score: 1143
Vellore Institute of Technology (VIT) University
Total Score: 1141
Ashoka University, Sonipat
Total Score: 1133
Amity University, Noida
Total Score: 1124
Thapar Institute of Engineering & Technology, Patiala
Total Score: 1095
InternationaI Institute of Information Technology (IIIT), Hyderabad
Total Score: 1095
Amrita Vishwa Vidyapeetham, Coimbatore
Total Score: 1090
SRM Institute of Sceince and Technology, Chennai
Total Score: 1090
Shiv Nadar University, Gautam Buddha Nagar (UP)
Total Score: 1086
Shanmugha Arts, Science, Technology & Research Academy (SASTRA) Deemed University, Thanjavur
Total Score: 1083
Dhirubhai Ambani Institute of Information and Communication Technology, Gandhinagar
Total Score: 1062
Symbiosis International (Deemed University), Pune
Total Score: 1062
Narsee Monjee Institute of Management Studies (NMIMS), Mumbai
Total Score: 1030
Kalinga Institute of Industrial Technology (KIIT) University, Bhubaneswar
Total Score: 1024
CHRIST (Deemed University), Bangalore
Total Score: 1019
Azim Premji University, Bangalore
Total Score: 1016
Birla Institute of Technology (BIT), Mesra
Total Score: 1014
Sathyabama Institute of Sceince & Technology, Chennai
Total Score: 1012
JSS Academy of Higher Education & Research, Mysore
Total Score: 980
Indian Institute of Health Management Research (IIHMR) University, Jaipur
Total Score: 980
OP Jindal Global University, Sonipat
Total Score: 965
PES University, Bangalore
Total Score: 944
Sri Ramachandra Medical College and Research Institute, Chennai
Total Score: 941
MS Ramaiah University of Applied Sciences, Bangalore
Total Score: 939
Nirma University, Ahmedabad
Total Score: 938
Jaypee University of Information Technology, Noida
Total Score: 936
GITAM (Gandhi Institute of Technology and Management) Deemed University, Visakhapatnam
Total Score: 927
LNM Institute of Information Technology, Jaipur
Total Score: 927
Karunya Institute of Technology & Sciences, Coimbatore
Total Score: 921
ICFAI Foundation for Higher Education, Hyderabad
Total Score: 921
Bharath Institute of Higher Education & Research, Chennai
Total Score: 917
International Institute of Information Technology (IIIT), Bangalore
Total Score: 916
Dr. DY Patil Vidyapeeth, Navi Mumbai
Total Score: 911
KL (Deemed University), Guntur
Total Score: 905
BML Munjal University, Gurgaon
Total Score: 905
Datta Meghe Institute of Medical Sciences, Wardha
Total Score: 900
Manav Rachna International Institute of Research & Studies, Faridabad
Total Score: 897
Shoolini University of Biotechnology and Management Sciences, Bajol (Himachal Pradesh)
Total Score: 896
Bharati Vidyapeeth (Deemed University), Pune
Total Score: 896
FLAME University, Pune
Total Score: 891
Dayalbagh Educational Institute, Agra
Total Score: 885
Banasthali University, Jaipur
Total Score: 884
DY Patil Education Society, Kolhapur
Total Score: 878
Chitkara University, Chandigarh
Total Score: 878
Jagran Lakecity University, Bhopal
Total Score: 877
Sikkim Manipal University, Gangtok
Total Score: 874
The Northcap University (formerly ITM), Gurgaon
Total Score: 872
Hindustan Institute of Technology & Science, Chennai
Total Score: 868
Amity University, Lucknow
Total Score: 868
Lovely Professional University, Phagwara
Total Score: 864
Apeejay Stya University, Sohna (Gurgaon)
Total Score: 862
University of Petroleum and Energy Studies (UPES), Dehradun
Total Score: 856
Avinashilingam Institute for Home Science and Higher Education for Women University, Coimbatore
Total Score: 853
NITTE (Deemed University), Mangalore
Total Score: 852
REVA University, Bangalore
Total Score: 851
Alliance University, Bangalore
Total Score: 850
Saveetha Institute of Technology & Management, Chennai
Total Score: 844
Amity University, Manesar, Gurgaon
Total Score: 828
GD Goenka University, Sohna
Total Score: 828
Jamia Hamdard (Hamdard University), Delhi
Total Score: 827
Centurion University of Technology and Management, Bhubaneswar
Total Score: 827
Dr. KN Modi University, Tonk (Rajasthan)
Total Score: 826
SGT University, Gurgaon
Total Score: 825
Sharda University, Greater Noida
Total Score: 820
Ansal University, Gurgaon
Total Score: 818
DAV University, Jalandhar
Total Score: 818
Siksha 'O' Anusandhan Deemed University, Bhubaneswar
Total Score: 817
Galgotias University, Greater Noida
Total Score: 817
Maharishi Markandeshwar (Deemed University), Ambala
Total Score: 815
Graphic Era University, Dehradun
Total Score: 812
NIIT University, Neemrana
Total Score: 811
Vignan Foundation for Science, Technology and Research, Guntur
Total Score: 809
Amity University, Jaipur
Total Score: 806
MGR Educational and Research Institute, Chennai
Total Score: 806
Dr. DY Patil Vidyapeeth, Pune
Total Score: 803
Assam Don Bosco University, Guwahati
Total Score: 802
MGM Institute of Health Sciences, Navi Mumbai
Total Score: 801
Dehradun Institute of Technology (DIT) University
Total Score: 801
Vel Tech Dr. RR and Dr. SR R&D Institute of Science and Technology, Chennai
Total Score: 799
Chitkara University, Solan
Total Score: 798
JK Lakshmipat University, Jaipur
Total Score: 797
Kalasalingam Academy of Research and Education, Krishnankoil (Tamil Nadu)
Total Score: 795
Amity University, Gwalior
Total Score: 794
Jain University, Bangalore
Total Score: 793
ICFAI University, Dehradun
Total Score: 793
Sri Sathya Sai Institute of Higher Learning, Puttaparthi
Total Score: 789
Jaypee University of Information Technology, Solan
Total Score: 788
BLDE (Deemed University), Bijapur
Total Score: 788
Royal Global University, Guwahati
Total Score: 786
Deccan College Postgraduate and Research Institute, Pune
Total Score: 786
Babu Banarasi Das University, Lucknow
Total Score: 782
Jain Vishva Bharati Institute (Deemed University), Nagaur (Rajasthan)
Total Score: 778
Institute of Management Studies Unison University, Dehradun
Total Score: 774
Vels Institute of Science, Technology & Advanced Studies (VISTAS), Chennai
Total Score: 768
KLE Academy of Higher Education and Research, Belgaum
Total Score: 766
Amity University, Raipur
Total Score: 765
Navrachana University, Vadodara
Total Score: 762
Dr. CV Raman University, Bilaspur
Total Score: 759
Northern Institute of Integrated Learning in Management University, Kaithal (Haryana)
Total Score: 758
Presidency University, Bangalore
Total Score: 756 | 280,825 |
Tomorrow the international criminal court welcomes Fatou Bensouda as its new chief prosecutor. As Bensouda begins her nine-year term, it is also time for a more constructive dialogue on our future vision of international justice.
International justice is not cheap – the court's budget for 2012 is over €100m. But consider just some of the returns on our investment since 1993 when the UN security council created the international criminal tribunal for the former Yugoslavia – the first international war crimes tribunal since Nuremberg.
In 1993 impunity for wartime atrocities was the norm. Today international justice is a regular feature of conflict resolution processes, and amnesties are no longer readily traded for peace. In 1993, the Dutch jail cells of the fledgling Yugoslavian tribunal were empty with little hope that war crimes fugitives would ever be delivered. Today, of the 161 people indicted by the Yugoslavian tribunal, none remain at large. In 1993 the idea of trying a sitting head of state for war crimes seemed fanciful. Today Charles Taylor, the former president of Liberia, stands convicted by the special court for Sierra Leone of aiding and abetting a litany of gruesome crimes. In 20 years an innovative legal order has been designed, the infrastructure assembled and a new generation of specialised, multi-disciplinary professionals trained. It would be foolish to withdraw resources for international justice co-operation. Until we achieve this, the security council's power to refer country situations to the court will remain crucial. It should apply the same criteria for referrals, regardless of the country concerned.
International justice must also be twinned with building the capacity of national systems to prosecute crimes. In the immediate aftermath of atrocities, international action may be the only viable option. Over time we need strategies for redesigning this situation.
This is another key lesson from the Yugoslavian tribunal's work. Since 2004 when the tribunal's completion strategy took hold, we have transferred cases and investigation materials back to the former Yugoslavia and set up programmes for funneling our expertise in Balkans war crimes prosecutions to national authorities. Today liaison prosecutors from Serbia, Croatia and Bosnia-Herzegovina working in our office in The Hague function as an interface between national and international justice. By contrast, national capacity building is not happening in parallel to the international criminal court's work. It is a missed opportunity and should be reconsidered.
Ultimately it is unrealistic to think the ICC can respond to atrocities the world over. Since 1993 the UN has set up more than 50 international fact-finding or investigation commissions to look into incidents including the 2007 assassination of Benazir Bhutto and the violent 2009 crackdown on demonstrators in Guinea. The work ofand we need a permanent operational infrastructure.
As Bensouda begins her term, we should reaffirm our support for the court's work, while thinking constructively about the road ahead. The accountability vacuum of the past is not a viable alternative. We must craft a vision of international justice that maximises redress for millions of people worldwide who have suffered through unthinkable crimes. | 163,273 |
Theft In Children Tip: Make Yourself Accessible
It can be most common among 12-18 year olds. Feeding them is essential, but make sure they're safe and secure in their high chair first. For example, if you have a child who's very cranky by the end of the day, you may want to have him or her take a nap every afternoon. Later, when he has a quiet moment, he will take it out for a good look at what he got. Find out why they lied in the first place, as they may have lied because they were frightened of your reaction. It is quite possible that the report will not reveal that identity theft has taken place. Home education, common before the rise of formal school systems, is returning to the mainstream because some parents either do not approve of the school curriculum, are opposed to the idea of formal schooling altogether, or feel better able to teach their children themselves. It is also possible for people to impersonate your child on social media.
Be careful on social media. If identity thieves already know your child's Social Security number, they could also be monitoring social media accounts to steal other parts of the child's identity. Even doctors, who may ask for this information, don't really need it, since they are not extending credit to your child. And it's not just strangers. Sadly, children can be victims of their own relatives or family friends who have access to information they can use to steal the child's identity. The family had arrived in Haiti earlier this month and had been expected to remain for a few more months, Marks told CNN. On Friday, economic revitalization minister Daishiro Yamagiwa told reporters that 90% of local governments would begin distributing the payments to those aged 15 or younger, whose needs are being prioritized, by the end of the year. Sometimes it's related to the parents' own economic problems, but it may also result from the child's information being passed around between agencies and foster homes.
It is also very important to make sure the child feels that they are still loved despite what happened and that they deserve to be forgiven. Make sure your child is using strong, safe and unique passwords as well as two-factor authentication, as outlined in ConnectSafely's Tips for Strong, Secure Passwords. When a child is caught stealing, he may lie impulsively to avoid losing his stolen loot and to avoid discipline. A child may also lie to get something from someone else. Make them aware that phone calls, text messages and emails aren't always from who they claim to be, and that they should check with you before responding to anyone who asks for personal information. Research shows that kids who lie and steal might have an underlying condition, such as a conduct disorder, ODD or an emerging personality disorder, all of which can be helped by therapy and, in some cases, medication. Sociopathic behaviour in children is the result of antisocial personality disorders.
By Lynn Clark: an excellent set of tools for helping parents work with young children (ages 3-7) to master the art of self-control, cooperation and motivation to engage in age-appropriate activities. Teach them from a young age never to share passwords, even with trusted friends, and to keep certain confidential items private, including Social Security numbers, driver's license numbers, and bank and credit card information. You expect the truth from them, even when it's not what they want you to hear. This can give thieves plenty of time to open new credit card accounts, obtain driver's licenses, get a job, and even buy houses and vehicles. If you come home with stationery or pens from the office, or brag about a mistake in your favour at the supermarket checkout, your lessons about honesty will be much harder for your child to understand. Stealing usually causes parents more concern because it may occur outside the house and may affect other people. ConnectSafely has a series of guides to help parents understand privacy. These simple tasks help your child feel independent. Thieves may also abuse others in your child's name – another form of ID theft that can get your child into trouble.
Two-minute review
The iPhone 11 was a big step forward, packing more advanced technology (namely in the camera capabilities and the processing power under the hood) at a lower cost than the iPhone XR's price in 2018. It combines a large 6.1-inch display with a premium-feeling body, and comes in an array of colors too.
The iPhone 11 isn't Apple's newest smartphone - the iPhone 12 takes that crown, launched alongside the iPhone 12 mini, iPhone 12 Pro and Pro Max. They're pretty similar smartphones in terms of design but have improved camera sensors, a newer chipset and flat, not curved, edges.
The most eye-catching upgrade on the iPhone 11 is to the imaging capabilities: with two sensors on the rear, you can now take wider-angle snaps alongside the ‘normal’ main images. These sensors are 12MP each, and are raised from the rear of the phone in a square glass enclosure - which we're not enamored with visually.
The night mode is the most impressive part of the iPhone 11's imaging quality, bringing brightness and clarity to impossibly dark scenes, and the Portrait mode, defocusing the background, is improved on the new iPhone too.
The phone may lose some of the spotlight now that the iPhone 12 line is fully revealed. But not being the newest iPhone on the block likely means big discounts on the iPhone 11, so watch for incoming price drops during the deals season leading up to Black Friday on November 27 and Cyber Monday thereafter.
The design hasn't been updated much from the iPhone XR in 2018, although there are now six colors – including new purple and mint green shades to choose from. The edges of the iPhone 11 still have the same feel as the older iPhone 6, 7 and 8, although the larger 6.1-inch display takes up most of the front of the phone (although with slightly thick borders around the screen).
That display is bright enough and responds well under the finger, with bright sunlight visibility good and the overall movie and video streaming playback strong - although not in the same league as the OLED-toting iPhone 11 Pro range.
Apple claims that the phone needs roughly three hours before it's fully juiced up.
The overall speed and performance of the iPhone 11 is robust - and impressively so for the price. It's still one of the most powerful phones out there, according to our benchmarks.
In reality that just translates to a solid experience - particularly if you're a social media fan.
Overall the iPhone 11 is a triumph for Apple - if, for nothing else, the fact it's managed to lower the price year-on-year. We feel enough people are going to be won over by the hard-working camera (check the night mode samples further down this review to see what we mean) and the safety that buying a modern smartphone gives you.
You get a lot here for a price we wouldn't have expected from Apple.
iPhone 11 price and release date
- iPhone 11 launch date: September 10
- iPhone 11 release date: September 20
- iPhone 11 price started at $699 (£729, AU$1,199) at launch
The iPhone 11's pricing was hugely impressive in the US, where it starts at $699 for the 64GB storage model. We can't begin to call this phone 'cheap', but that's a drop of $50 over the iPhone XR, and it's an incredible thing for Apple to do here when most expected the price to keep going up and up.
In other regions the iPhone 11 costs a little more. There's a range of storage options to go for, with the aforementioned 64GB model joined by a 128GB ($749, £779, AU$1,279) and a 256GB ($849, £879, AU$1,449) model, if you're willing to spend more money to get extra capacity.
An additional bonus is you'll get a year free of Apple's TV Plus service when you buy the new iPhone. That gives you access to newly commissioned TV shows and films direct from Apple, and it's something you get with most new tech purchases from the company.
You'll likely be able to find the phone for a little less with some retailers, carriers and networks. That's even more so the case now that the phone has been out for some time and rumors are already growing about the iPhone 12. Below we've put together the best deals you can find today for the iPhone 11.
- Get the cheapest prices: best US iPhone 11 deals | best UK iPhone 11 deals
Camera
This isn't something we usually do, but we're going to get right to the simple fact that the iPhone 11 camera is easily the standout feature on this handset.
Apple has doubled the number of lenses on offer here: where the iPhone XR had one, porthole-like sensor on the rear, things are much more advanced for 2019, with a whole window on the rear containing two 12MP sensors.
Apple's clearly going for an iconic and uniform look with the iPhone 11 range, with the Pro and Pro Max packing the same square lens bump on the rear.
It takes some getting used to, occasionally to the point of it being too obtrusive visually, with your fingers playing across it far more when you're holding the iPhone in use, but it actually isn't as obtrusive as the bump on 2018's iPhone, thanks to being 'layered' up from the back - the glass housing around the lenses is raised a small amount from the rear glass, and the sensors themselves a little more.
It's a wide-angle array - that's to say you get the 'normal' lens you'll find on every phone, plus an ultra-wide-angle lens that brings more of the scene you're shooting into the frame.
It's a setup that's pretty easy to use: a toggle at the bottom of the camera interface enables you to move between focal lengths, and you can hold this down to activate a zoom wheel with which you can more precisely zoom in and out.
There's a slight judder when transitioning between the two lenses.
One thing that's supposed to be simple is fixing your too-narrow snaps when you could have been using the ultra-wide lens.
We saw in a demo how the iPhone 11 would be able to take a shot using the standard lens, but during our review we could not work out how to get access to the wider shot that's supposed to be taken at the same time, so you can change the composition post snap.
The detail of both ball and sky is well-captured here.
The sharpness of the varying scene is good, and the muted colors come across well.
The detail in the cobweb is clear, and the natural defocusing (this isn't Portrait mode) is pleasing.
The varying light levels are handled well here, but there could be more tonality under the ring in the background.
The inbuilt filters, applied at shooting, allow you to decide on the look of the snap before pressing the shutter button.
We activated all the right settings, but turning the picture wide after taking it is not something that's going to be easy to do for most.
Low-light shots have never been great on iPhones, but with its improved AI smarts the iPhone 11 is capable of rendering some impressive night snaps.
Whether you're in a sort-of-dark situation, or focusing a tripod-mounted phone at the night sky, there's a mode to suit; the phone then captures a number of photos at different exposures and sharpness levels, before merging the results to produce the very best photo possible.
If you've braced or mounted the phone securely, the capture time can be extended to up to 30 seconds - this is only really necessary if you're going to be taking photos of the night sky, and for general night shots we saw very little difference between the quality of photos taken over 5 seconds and 30 seconds.
This scene represents how we saw the tree at night - there wasn't a lot of light around.
However, the effect of brightening was startling and even the sky was well-improved.
Using a longer exposure lets in far more light - and in some ways enables the iPhone 11 to surpass its rivals. Night mode can make photos shot at 1am look as if they were taken in late afternoon, and if you can get your subjects to remain still, you'll take great snaps.
However, try to photograph a scene that includes motion - people dancing at a concert, for instance - and it's a world of blur. You'll need to manually turn off night mode, and that's a little bit of a nuisance when you're trying to get a quick snap.
Talking of speed, there's a neat new feature added to iOS 13 whereby tapping and holding on the shutter button will allow you to take a quick video, Instagram-style, instead of burst mode photos (you can still do this by sliding your finger left; if you slide right instead recording will be locked, allowing you to take your finger off the shutter button to adjust exposure and zoom).
Here's a standard photo for comparison - despite the slight stretching you can get in the wider shots, Apple has offset it well.
This is a nice touch, although we would need to flick into another mode (like video or slow-mo) to jolt the viewfinder into showing something. We'll keep an eye on this, as it's likely something that will be fixed soon via an update, but it seems like a bug when starting the camera app.
Deep Fusion
There was one feature Apple made a huge deal of at the iPhone launch event, and it could be the thing that propels the iPhone to the head of our list of best camera phones, or at least gets it very close: Deep Fusion.
This technology will take nine frames before you press the shutter button to take a snap, go through the information in each, and then on a pixel-by-pixel basis will decide how best to light and optimize the snap when you do take it. It was called "mad science" on stage - and if it works, we'll be happy to go along with Apple's description.
We've yet to properly test this feature on the iPhone 11 as it was only introduced alongside iOS 13.2, but we're planning to include a full look at Deep Fusion in the review once we've had time to test it.
Portrait mode benefits from the second sensor too - the extra lens gives more depth information to help.
It's not perfect - where a scene is divided into foreground subject and background, it sometimes leaves some blur around the object that's supposed to be in focus (especially with hair) but it can take some impressive snaps.
The Stage Light Mono mode works well if you've got a contrasting background and clear subject (and you've got a bow tie).
However, it's less effective with objects - using the stage light mode, this shows where the iPhone sees the foreground and background.
The High Key Light Mono mode is generally pretty accurate and allows you to appear in your very own Calvin Klein ad.
New to the Portrait mode effects in iOS 13 is High Key Light Mono, joining the Stage Light and Stage Light Mono options - at times it looks arty and professional, but if that foreground image isn't captured precisely, it looks a bit poor.
Shooting video at the higher frame rate brings a certain fluidity to the shot, although some might not enjoy the effect as it doesn't look like the footage you're used to seeing on TV.
We also noticed a definite improvement in exposure and contrast, even over the iPhone XS from last year, with more detail and color retained in the scene.
Slofies
We've had selfies, 'bothies' and 'groufies', and with the iPhone 11 Apple has added a new term to the lexicon of annoying front-facing camera slang: 'slofies'. The front-facing camera can now capture slow-motion video, which is where the name comes from.
Design
The design of the iPhone 11 is so similar to its predecessor's that the iPhone 11 and XR look identical from the front.
On the rear, things are a little different. We've talked already about the unsightly camera bump on the back of the phone, but the Apple logo has also been moved centrally and - in a new move - the word 'iPhone' is nowhere to be seen.
This is something we expected to happen, and it could herald the point in the next couple of years where we see the model name or number disappear completely - the iPhone 12 could well be just the new iPhone, as has become Apple's practice with the iPad.
(Or, it's just unnecessary. What else is a phone with an Apple logo going to be called? In retrospect, it's odd that it's taken this long for Apple to drop the iPhone wording on the rear.) The glass and aluminum combo might feel a little old, given it's been used by Apple for so long, but given the iPhone 11's price it certainly feels worth the cash.
On the bottom of the phone you'll still find the same old Lightning connector - we can't help feeling that this will be replaced by a USB-C port in the near future, as it allows for faster charging.
Display
Unlike the bigger 11 Pro, the iPhone 11 has a 6.1-inch display that uses LCD technology, with a resolution of 1792 x 828 pixels.
That's lower than the 2436 x 1125 of the iPhone 11 Pro, yet you don't feel like you're using a low-res screen here - the brightness and strong color reproduction see to that.
What's less attractive is the thicker border around the outside of the display - these days we're seeing a lot of phones, including ones with a lower price tag than the iPhone 11, come with edge-to-edge displays, and with no notch at the top.
However, Apple still clearly believes the notch is necessary to house the front-facing camera tech. The display reaches a decent level of brightness, according to Apple, and that's enough for us in most scenarios - blinding if you look at it on full brightness when rubbing your eyes in the morning.
In terms of cinematic content, there's no high dynamic range (HDR) playback here - instead you get something called 'Extended Dynamic Range' - which doesn't have the same visual impact as the higher-spec phone, but is still perfectly fine for streaming Netflix or live sport, as we found in our testing.
iOS 13 and performance
As ever, Apple's new operating system is shown off to the fullest in its new iPhones – this time around it’s iOS 13, and the iPhone 11 packs some nifty features as a result.
The first thing we really like is that the OS now has more well-rounded, intelligent mini-notifications. That means that when you switch the phone to silent, or you change the volume, the little element that pops up to tell you what's happening is more useful and also interactive.
It means, for example, that you can act on these pop-ups with a single press. Face ID has also been improved a fair bit, so you can glance at your phone from your seat and unlock it. While you may still need to move your face closer or lift the phone slightly, it's a big upgrade from what debuted on the iPhone X two years ago.
This feature isn't the result of new hardware though, and it'll be coming to all iPhones launched in recent years. Memoji stickers take a little finding (they're not easy to see when you first open your messaging app), but offer something more personal: an image of your own face to punctuate your witty prose with pals.
The ability to change Wi-Fi networks from within the Control Center is a really useful one - now you no longer need to open up the Settings menu to switch.
That's iOS 13 covered, so what about the iPhone 11 itself? The new handset packs Apple's new A13 Bionic chipset, and - according to spec leaks - pairs it with 4GB of RAM.
That's a powerful combo, and our Geekbench testing returned a score of 3186, a real improvement on last year. This power is evident throughout the user experience, with everything as quick under the finger as you'd hope for.
However, that's somewhat stating the obvious - smartphones became powerful enough for everyday tasks long ago. Playing games on the handset, whether demanding titles or other, less-powerful games, was just as we expected: everything looking bright and clear, with nothing in the way of slow-down, racing games with scenery re-rendering smoothly as we moved, and graphical elements like water splashing about attractively.
The iPhone 11 wasn't always quite so swift across all tasks - saving photos or video to the camera roll sometimes took a second or two, but then again you're processing large amounts of data there. Occasionally we also needed to prod apps (or wiggle a finger on the screen slightly) to get things moving, rather than it just happening instantly.
Battery life
One of the highlights of the iPhone XR was that it was easily one of the longest-lasting iPhones we’d seen, if not the longest-lasting.
We were actually worried that our testing process had gone wrong in some way, such was the staying power, but it was true - and the iPhone 11 carries on in that vein. We found it to be essentially as good as the XR in terms of stamina, easily making it through to the end of a working day in our testing.
On a low-use day we found that it held out for 27 hours - we took the phone off charge at 8.20am, and it finally gave up the ghost at 11am the next day when we employed it as a portable hotspot. This was still with roughly an hour of video streaming, some music playback, and about 45 minutes of photography thrown into the mix.
With harder use, including a lot of app downloading and music streaming over Bluetooth, as well as regularly checking email throughout the day, it was dead just after 10pm. The iPhone 11's battery life didn't impress as much as that of the iPhone XR, but that's because we've quickly become used to the idea that a phone from Apple doesn't have to have an infuriatingly short battery life. It's still impressive stamina, especially for the price.
There's no fast charger in the box with the iPhone 11, which is perhaps understandable at this price, although you can buy one separately. Getting a nearly-full battery in under an hour is great, and we recommend you upgrade to a fast charger when you buy the phone.
Also, if we're recommending things, remember that the iPhone 11 supports wireless charging too, so getting yourself a wireless pad for home and for work will see you rarely troubled by battery anxiety again - it's a worthwhile investment.
Buy it if...
You want long-lasting battery life
The battery performance on the iPhone XR was good, and that's continued with the iPhone 11. The iPhone 11 Pro Max is admittedly better here, but you can buy with confidence on the 11.
You want a phone with a very strong camera
The iPhone 11's night mode, two lenses and forthcoming Deep Fusion combine to make a very competent snapper.
Don't buy it if...
You need class-leading battery life
This might sound confusing given the above point about great battery, but while the iPhone 11 has good longevity, there are plenty of phones on the market that last longer.
You need a huge amount of storage for media and apps
The iPhone 11's storage options top out at 256GB - that's going to be fine for nearly everyone, but if you long for a terabyte of space, that's not on offer here.
First reviewed: September 2019 | 318,916 |
Job Information
Providence Health & Services Epic HB Applications Analyst – PSJH in Renton, Washington
Description:
Providence St. Joseph Health is calling an Epic HB Applications Analyst - PSJH to our location in one of the following locations: Renton WA, Seattle WA, Portland OR, Anaheim CA or Spokane WA
We are seeking an Epic Applications Analyst with strong technical and analytical skills.
Prepares documentation and assists in documenting and defining workflows in collaboration with relevant stakeholders.
Qualifications:
Required qualifications for this position include:
Bachelor’s Degree in Computer Science, Business Management, Information Services or an equivalent combination of education, skills and relevant experience.
2 to 4 years industry-related experience in contract reimbursement or contract administration (this is very important).
Preferred qualifications for this position include:
3 years Healthcare IS experience
Previous Epic experience
Any certifications relating to software applications, technology infrastructure and/or clinical specialties
Other Location(s): Washington-Seattle, Oregon-Portland, California-Anaheim, Washington-Spokane Valley
Req ID: 223589 | 117,821 |
UNITED STATES (OBSERVATORY) – Global oil prices rallied on Wednesday, adding to big gains from the previous session, as markets braced for heightened tension in the Middle East after Eurocontrol warned of possible air strikes in Syria within 72 hours.
Brent crude stood at $71.09 a barrel at 0104 GMT, up 7 cents from its last close. Brent had climbed more than 3 percent on Tuesday to its highest level since late 2014, when it hit $71.34.
US benchmark WTI hit $65.63 a barrel, up 12 cents from the last settlement.
The United States and other Western powers are considering military action to punish Syrian President Bashar al-Assad for what is believed to be a poison gas attack on Saturday in the opposition city of Duma.
Eurocontrol called on airlines to be cautious in the eastern Mediterranean for the possibility of air strikes in Syria within three days, warning that wireless navigation devices could be interrupted intermittently. | 135,090 |
TITLE: Proving Binomial Equivalence
QUESTION [0 upvotes]: How would I approach solving this problem. Could someone direct me in the right direction?
Prove:
$$\binom{n}{0} + \binom{n}{2} + \binom{n}{4} + \dots = \binom{n}{1} + \binom{n}{3} + \binom{n}{5} + \dots.$$
I can't seem to find the right identity to start on this.
REPLY [0 votes]: A combinatorial proof
We will count the number of subsets of even size of $[n] = \{1, 2,\ldots,n\}$. Fix $n-1$ of the elements of $[n]$ and choose an arbitrary subset $S$ of these $n-1$ elements. If $S$ has an even number of elements, then we already have a subset of $[n]$ of even size. Otherwise $|S|$ is odd, and we add the one remaining element, which was not among the fixed $n-1$, to obtain a subset of $[n]$ of even size.
Therefore, we have a bijection between the set of all subsets of $[n-1]$ and the set of all subsets of even size of $[n]$. Hence the cardinalities of these two sets are equal. Thus, there are $2^{n-1}$ subsets of $[n]$ of even size.
Note that in the given equation, the L.H.S. is the number of subsets of $[n]$ of even size, while the R.H.S. is the number of subsets of odd size. Since there are $2^n$ subsets in total, the number of subsets of odd size is $2^n - 2^{n-1} = 2^{n-1}$ as well, and the given identity follows.
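For completeness, here is the standard algebraic route as well (my own addition, assuming $n \ge 1$): expanding $(1-1)^n$ with the binomial theorem gives
$$0 = (1-1)^n = \sum_{k=0}^{n}\binom{n}{k}(-1)^k = \left[\binom{n}{0}+\binom{n}{2}+\cdots\right] - \left[\binom{n}{1}+\binom{n}{3}+\cdots\right],$$
which is exactly the identity to be proved.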
Karen Leah Krulevitch
Marriage & Family Therapist, MA
Verified by Psychology Today
Kristi Walsh
Marriage & Family Therapist, PhD
Verified by Psychology Today
Allycia Varr
Marriage & Family Therapist, MA, LMFT
Verified by Psychology Today
Kimberley Taylor Psyd
Psychologist, PsyD
Verified by Psychology Today
Sheila K O'Connor
Marriage & Family Therapist, PhD, (c), LMFT
Verified by Psychology Today
Danah A. Williams
Marriage & Family Therapist, MA, LMFT
Verified by Psychology Today
Ben Zimmer
Marriage & Family Therapist, MA, MFT, IMFT
Verified by Psychology Today
Jeffrey Jarrett
Marriage & Family Therapist, LMFT
Verified by Psychology Today
Clinton Doyle Hollister
Marriage & Family Therapist, MA, MFT
Verified by Psychology Today
Neil Friedman
Clinical Social Work/Therapist, LSCW
Verified by Psychology Today
John R Galaska
PsyD, BCIA, CHT
Nancy R Belknap
Drug & Alcohol Counselor, CAS-CAD, CCAPP, ACRPS
Verified by Psychology Today
Cindy Stark Winfrey Utt
Marriage & Family Therapist, MS, LMFT
Verified by Psychology Today
Turning Leaves Recovery Life and Wellness Coaching
Drug & Alcohol Counselor, CATCI, IMAC, NCIP, NCPCM, NCFAC
Roland Rotz
Psychologist, PhD
Verified by Psychology Today
Shelley N. Osborn
Psychologist, PsyD
Verified by Psychology Today
Susan H Lang
Marriage & Family Therapist, LMFT
Verified by Psychology Today
Jack T May
Marriage & Family Therapist, MA, LMFT
Verified by Psychology Today
Munalisa Moon - Bangladesh
About Munalisa Moon
Munalisa Moon is currently living in Bangladesh, and is interested in Other.
Not the person you're looking for?
Find more results for Munalisa Moon
Quick Profile Summary
Name: Munalisa Moon
Link:
Location: Bangladesh
Fringe Palms
Regular price $14.00
Free local pickup in Victoria/Vernon available. If you choose this option, an email will be sent to you within 5 days with pick-up address and information. If you choose shipping, a flat rate of $15 applies.
Create stunning arrangements with these unique palm suns. Each palm is natural and unique; slight color variation may occur and size may vary. The color is a natural soft sage.
I am the K-6 physical education teacher here at Albany Schools. I have been teaching in the district since 2005. I'm originally from Evansville and currently reside there as well. I went to UW-La Crosse and received my bachelor's degree in exercise and sports science. I obtained a certification in K-12 physical education and adaptive physical education. Watching and playing sports is a hobby of mine. I'm an avid Badgers and Packers fan. I spend my free time chasing my two sons, Nolan and Hudson, around! I love having the opportunity to teach kids the skills they need to maintain a healthy lifestyle. I thoroughly enjoy seeing the smiles on their faces when they are successful.
TITLE: Stochastic processes understanding of probabilty measure
QUESTION [0 upvotes]: Consider a Markov chain $\{X_n,n \geq 0\}$ with finite state space $S = \{1,2,\ldots ,m\}$ and transition probability matrix $P = (P_{ij})_{i,j\in S}$.
Let $P_{ij}^n$ be the probability that the process in state $i$ is in state $j$ after $n$ transitions.
Suppose that for any $i,j \in S$, the limit $\lim_{n\to \infty} P_{ij}^n = \pi_j > 0$ exists and is independent of $i$. Prove that the probability measure $\pi = (\pi_1, \pi_2, \ldots, \pi_m)$ satisfies $\pi P = \pi$.
By Chapman Kolmogorov I proved that $\pi_j= \sum_{i=1}^m P_{ij}^k \pi_i$ but am unsure where to go from here.
REPLY [2 votes]: It suffices to note that
$$
(\pi P)(j) =
\sum_{k=1}^m \left(\lim_{n \to \infty} P_{ik}^n\right) P_{kj} =
\left(\lim_{n \to \infty} \sum_{k=1}^mP_{ik}^nP_{kj}\right) =
\lim_{n \to \infty} P^{n+1}_{ij} =
\lim_{n \to \infty} P^{n}_{ij} = \pi(j).
$$
Interchanging the limit with the sum in the second equality is justified because the sum has only finitely many terms.
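As a quick numerical sanity check (my own addition, not part of the original argument), one can raise an arbitrary small transition matrix to a high power and verify that the common limiting row satisfies $\pi P = \pi$:

    import numpy as np

    # An arbitrary 3-state transition matrix (rows sum to 1).
    P = np.array([[0.5, 0.3, 0.2],
                  [0.2, 0.6, 0.2],
                  [0.3, 0.3, 0.4]])

    Pn = np.linalg.matrix_power(P, 50)  # P^n for large n
    pi = Pn[0]                          # each row approximates the limit (pi_1, ..., pi_m)

    print(pi)                           # approximate limiting distribution
    print(np.allclose(pi @ P, pi))      # True: pi P = pi, up to numerical error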
Searching for
Volvo S70 Coolant Level Sensor
Has your Volvo S70's Coolant Level Sensor failed? Then shop at 1A Auto for a high quality Low Coolant Level Sensor replacement for your Volvo S70, at a great price. 1A Auto has a large selection of Radiator Coolant Level Switches for your Volvo S70, and ground shipping in the continental U.S. is always free! Visit us online or call 888-844-3393 and order your Volvo S70 Engine Coolant Level Sensor today!
Volvo Coolant Level Sensor
Replaces:
- OE # 307411553
Ships Same Day for orders placed by 5 P.M. ET
- New
Free GROUND SHIPPING - $29.95
Part #: 1AZMX00052
Volvo is a registered trademark of Volvo Trademark Holding AB. 1A Auto is not affiliated with or sponsored by Volvo or Volvo Trademark Holding AB. See all trademarks. | 163,232 |
I am—almost definitely—going to say “Yes!” But, first I promised to go out with a few other guys. I know that may seem odd. But—to be fair—many of them have already made motel reservations.
But don’t worry. It won’t take long. Check back with me in a week or—just to be on the safe side—20 years from now. In the meantime, I’ll start looking at wedding gowns.
Meanwhile, back in the present…
See the revamped version of this art, with new, funny dialogue, in today's Last Kiss Comic.
Give me a few months to see if I can find somebody who doesn’t smoke in the friggin’ hay barn.
I was thinking along those lines, i.e. “I’m looking for someone with a longer life expectancy.” However, I was thinking more along the line of lung or throat cancer, while you pointed out a more immediate potential cause of his demise.
Why women live longer than men…
Any children they have will have pencil necks. Goodness me but that is a pair that shouldn’t breed. | 32,945 |
Staff and Board
Rev. Carl E. Zerweck, III, Executive Director
Carl is an ordained minister in the Christian Church (Disciples of Christ). Carl served for 9 ½ years as the Director of Disciples Volunteering for the Christian Church (Disciples of Christ). Prior to that he served in pastoral ministry, first for 11 years as Assoc. Pastor at Community Christian Church, Richardson, TX, and then for 5 years as Pastor of Disciples Christian Church, Plano, TX. He has also served as the Executive Director of Plano Area Habitat For Humanity. Over the course of the past 35 years, Carl has been involved in starting 3 Habitat For Humanity organizations, and has helped build 30+ Habitat homes around the country, in addition to 15 new church builds and rebuilds. Volunteer mission work has been central to his life and ministry for 30 years. Carl has personally led or participated in 100+ mission trips and projects. His passion is working for justice, building, working with volunteers and seeing how hands-on mission transforms lives and changes the world!
Robin Zerweck, Administrator and Hospitality Director
Robin has been an active layperson in Disciples of Christ congregations all of her adult life. She served for 5 years as the Hospitality Coordinator of Disciples Volunteering for the Christian Church (Disciples of Christ). Prior to that, for almost 20 years, she was a small business owner of her own Nail Salon in Plano, TX. Robin shares her gift of hospitality by providing the behind the scenes support needed to work and serve with volunteers in mission settings. She has a passion for helping others and loves cooking for groups and seeing the effect that their work has on others!
THE RIPPLING HOPE BOARD OF DIRECTORS
Rev. Kelly Gindlesberger, Board President, Pastor, First Christian Church, Shelton Memorial Christian Church, KS
Jeff Kerns, Vice President, Layperson, Capitol Hill Christian Church, Des Moines, IA
Karen Lovelock, Board Secretary, Lay Person, Park Place Christian Church, Hutchinson, KS
George Flower, Board Treasurer, Lay Person, Northwood Christian Church, Beaumont, TX
Bill Kabrich, Board President, Lay Person, United Church of Christ, Yakima, WA
Rev. Laurie Feille, Board Member, Pastor, First Christian Church, Minneapolis, MN
Jason Kixmiller, Board Member, Lay Person, Southport Christian Church, Indianapolis, IN
Rev. Tom Stephenson, Board Member, Pastor, First Christian Church, Wilmington, OH
Rev. Eugene James, Board Member, Regional Minister, Christian Church (Disciples of Christ) In Michigan, Lansing, MI
Pastor Gregg Runnion, Board Member, Pastor, Central Christian Church, Decatur, GA
Rep. David Nathan, Board Member, State Representative Michigan House of Representatives
Rev. Cyndy Twedell, Board Member, Associate Pastor, University Christian Church, Ft. Worth, TX
Carole Enwright, Board Member, Lay Person, Bethany Christian Church, Detroit, MI
Rippling Hope Partner Congregations
First Christian Church, Wilmington, OH
Grove Park Christian Church, Kinston, NC
Northwood Christian Church, Beaumont, TX
United Christian Church, Yakima, WA
Williamson Christian Church, Williamson, GA
University Christian Church, Ft. Worth, TX
Northwestern Christian Church, Detroit, MI
Davis Memorial Christian Church, Taylorsville, IL
East District Assembly of the Christian Church (Disciples of Christ) in Michigan
Capitol Hill Christian Church, Des Moines, IA
Northminister Presbyterian Church, Troy, MI
Central Woodward Christian Church, Troy, MI
University Christian Church, Fort Worth, TX
Bethany Christian Church, Detroit, MI
Cascade Christian Church, Grand Rapids, MI
Southport Christian Church, Indianapolis, IN
Troy Interfaith Group, Troy, MI
St. Stephens Episcopal Church, Troy, MI
The Rippling Hope Board of Directors meets monthly via teleconference. There are three working committees that comprise the Board: Administration, Projects/Partnerships and Development.
Rippling Hope website created and maintained by Jesse Stephenson. | 162,374 |
Listings in C-Arms, Rehabilitation, Reimbursement Support, Transcription Services
DFine
in Vertebral Augmentation Systems, Minimally Invasive Therapeutic Devices
3047 Orchard Pkwy
San Jose, California 95134-2024
United States of America
Kimberly-Clark
in Pain Management, Sterilization Equipment
1400 Holcomb Bridge Rd
Roswell, Georgia 30076-2190
United States of America
Patterson Medical
1000 Remington Blvd Ste 210
Bolingbrook, Illinois 60440-5116
United States of America
Radiological Imaging Services
328 S 3rd St
Hamburg, Pennsylvania 19526-1902
United States of America
Impulse Monitoring
in Revenue Management / Performance Consulting
10420 Little Patuxent Pky
Columbia, Maryland 21044
Adroit Medical Systems
in Durable Medical Equipment (DME)
1146 Carding Machine Road
Loudon, Tennessee 37774
United States of America
Ziehm Imaging, Inc.
6280 Hazeltine National Dr
Orlando, Florida 32822
United States of America
Game Ready
in Physical Therapy, Temperature Therapy Equipment
1201 Marina Village Pkwy Ste 200
Alameda, California 94501-3596
United States of America | 299,436 |
TITLE: small o and big O
QUESTION [4 upvotes]: Okay, so I need some help grasping the "big O, small o" concept. I'd put up what I think I've understood, and then maybe you can correct my mistakes and enlighten me further.
small o
If we let $x$ approach some $a$, then if $f(x)/g(x)$ approaches 0, we say $f(x) = o(g(x))$. Now, if $a = 0$, then this means what exactly? That $f(x)$ approaches $0$ faster than $g(x)$? But what if $a \neq 0$? Then how do we give meaning to the fraction going to zero? Why would that happen?
big o
So if $f(x) = O(g(x))$, then $$||f(x) || \le K||g(x)||$$ for some K (even though it only is true if we are near enough the limit $a$). But 1) how do you actually show the above? Most examples I see involve some polynomial of nth degree, but that's an easy example. What if we're working with sines, cosines, logs, exponentials, etc, and not just simple polynomials? 2) What does it actually mean, intuitively, to be big O of something, and how does it relate to being small o? If $f$ is big O of $g$, is it also small o? Or vice-versa?
REPLY [1 votes]: Using a fraction in the definitions reduces generality, because $a$ may be a limit point of the zeros of $g$.
Taking for simplicity the case $f:\mathbb{N}\longrightarrow\mathbb{R}_{\geq 0}$ and $g:\mathbb{N}\longrightarrow\mathbb{R}_{\geq 0}$, we have the following definitions:
$$O(g) = \left\lbrace f:\exists C > 0, \exists N \in \mathbb{N}, \forall n (n > N \& n \in \mathbb{N}) (f(n) \leqslant C \cdot g(n)) \right\rbrace$$
$$o(g) = \left\lbrace f:\exists \epsilon(n) \geqslant 0, \lim_{n \to \infty}\epsilon(n)=0, \exists N \in \mathbb{N}, \forall n (n > N \& n \in \mathbb{N}) (f(n) = \epsilon (n) \cdot g(n)) \right\rbrace$$
So both are sets: the first gives an upper bound (up to a constant factor) for its elements $f$, while the second gives a kind of representation of its elements $f$ for large $n$. The main relationship:
$$o(g) \subset O(g)$$ | 186,109 |
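To address the question about non-polynomial functions, here is a concrete illustration of my own (using the $x \to a$ formulation from the question with $a = 0$): since $\left|\frac{\sin x}{x}\right| \le 1$ near $0$, we have $\sin x = O(x)$, but $\sin x \neq o(x)$ because $\frac{\sin x}{x} \to 1 \neq 0$; on the other hand $1-\cos x = o(x)$, since $\frac{1-\cos x}{x} \to 0$. In general $f = o(g)$ always implies $f = O(g)$, but not conversely.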
TITLE: A group $G$ with a subgroup $H$ of index $n$ has a normal subgroup $K\subset H$ whose index in $G$ divides $n!$
QUESTION [33 upvotes]: I would be very thankful if someone could give me a hint with proving this. It's a very common exercise in abstract algebra textbooks.
If $G$ is a group with a subgroup $H$ of finite index $n$, then $G$ has a normal subgroup $K$ contained in $H$ whose index in $G$ is finite and divides $n!$.
I found the proof on this Wikipedia page at some point (although the proof appears to be no longer there), but I got lost in one of the details.
REPLY [10 votes]: A stronger result to the one in the question is the following. A characteristic subgroup is one which is fixed (setwise) by all automorphisms of $G$. Characteristic subgroups are normal (as conjugation corresponds to inner automorphisms).
Theorem. Let $G$ be a finitely generated group, with generating set of size $m$, and let $n$ be a positive integer. Then $G$ has a characteristic subgroup of index at most $n^{(n!)^{m}}$ which is contained in every subgroup of index $n$.
We start our proof with two lemmas.
Lemma 1. Let $G$ be a finitely generated group, with generating set of size $m$. Then $G$ has at most $(n!)^m$ subgroups of index $n<\infty$.
Proof.
Firstly note that by lhf's answer, every subgroup $K$ of index $n$ corresponds to a map $\phi_K: G\rightarrow \operatorname{Sym}(G/K)\cong S_n$.
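(Since lhf's answer is not reproduced here: the map in question is presumably the usual action of $G$ on the left cosets of $K$, that is
$$\phi_K(g)(xK) = gxK \quad\text{for } g, x \in G,$$
which is a homomorphism $G\rightarrow \operatorname{Sym}(G/K)$ whose kernel $\bigcap_{x\in G} xKx^{-1}$ is contained in $K$.)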
Now, if $K_1\neq K_2$ then $\phi_{K_1}\neq\phi_{K_2}$, and to see this consider $x\in K_1$ but $x\not\in K_2$. Then $\phi_{K_1}(x)(K_1)=xK_1=K_1$, so $\phi_{K_1}(x)$ corresponds to the identity of $S_n$, but $\phi_{K_2}(x)(K_2)=xK_2\neq K_2$, and so $\phi_{K_2}(x)$ is not the identity.
Such a map $\phi_K$ is defined by the image of the $m$ generators of $G$. There are $n!$ choices for each generator, and so there are at most $(n!)^m$ maps $\phi_K$. By the above, no two maps define the same subgroup, and so there are at most $(n!)^m$ subgroups $K$ of index $n$. QED
Lemma 2.
If $H, K$ are finite index subgroups of a group $G$ then $H\cap K$ has finite index, and indeed $|G:H\cap K|\leq |G:H|\cdot |G:K|$.
Proof.
Let $L=H\cap K$ and take any $b \in G$. If $a\in Hb\cap Kb$ then $ab^{-1}\in H$ and $ab^{-1}\in K$, so $ab^{-1}\in L$ and hence $a\in Lb$; conversely $Lb\subseteq Hb\cap Kb$. Thus $Lb = Hb\cap Kb$, so every right coset of $L$ is the intersection of a coset of $H$ with a coset of $K$. As there are at most $|G:H|\cdot |G:K|$ such intersections, the number of cosets of $L$ is $\leq |G:H|\cdot |G:K|$ as required. QED
We can now prove the theorem.
Proof of Theorem.
By combining Lemmas 1 and 2, intersecting all subgroups of a fixed index $n$ gives a subgroup of index at most $\text{index}^{\text{\# subgroups}}\leq n^{(n!)^{m}}$. Moreover, this subgroup is characteristic, because automorphisms preserve the index of subgroups. That is, if $\phi$ is an automorphism of $G$ then $|G:H|=|G:\phi(H)|$, so $\phi$ permutes the subgroups $H_i$ of index $n$ and hence $\cap \phi(H_i)=\cap H_i$; the intersection is therefore fixed by all automorphisms. QED
An application. The following application of the above is a theorem of Gilbert Baumslag from 1963. It is quite unusual, as it is surprising (residual finiteness is usually very hard to prove), and it has a very short proof (the paper is a page-and-a-half long, and contains three applications of this result. The half-page is all references.)
Theorem. If $G$ is a finitely generated residually finite group then $\operatorname{Aut}(G)$ is residually finite.
A group is residually finite if for any element $1 \neq g\in G$ there exists a homomorphism onto a finite group $F$, $\phi: G\rightarrow F$ say, such that $g\phi\neq 1$. This is a very strong finiteness condition. Equivalently, the finite index subgroups intersect trivially (we proved above that finitely many of them intersect in a subgroup of finite index, but there are infinitely many in general, so this is a genuine condition).
Proof. Let $1\neq \alpha\in \operatorname{Aut}(G)$. Then there exists $g\in G$ such that $\alpha(g)\neq g$; write $h=\alpha(g)g^{-1} (\neq 1)$. As the finite index subgroups intersect trivially, there is a finite index subgroup of $G$ not containing $h$, $K$ say. One can take $K$ to be characteristic, by intersecting all subgroups of that index, as above (if $h\not\in A$ then $h\not\in A\cap B$ whatever $B$ is). Now, $\operatorname{Aut}(G)$ induces a finite group ($A\cong \operatorname{Aut}(G)/L$) of automorphisms of $G/K$, as $G/K$ is finite and $K$ is characteristic. As $h\not\in K$ we have that $\alpha$ induces a non-trivial automorphism of $G/K$, so $\alpha\not\in L$, and so $\operatorname{Aut}(G)$ is residually finite, as required. QED
Grossman extended this result to $\operatorname{Out}(G)$ (Grossman, Edna K. "On the residual finiteness of certain mapping class groups." Journal of the London Mathematical Society 2.1 (1974): 160-164.). | 44,484 |
\begin{document}
\def\Apolar#1{\operatorname{Apolar}\left( #1 \right)}
\title{Irreducibility of the Gorenstein loci of Hilbert schemes via ray
families}
\author{Gianfranco Casnati,
Joachim Jelisiejew,
Roberto Notari\thanks{The first and third authors are supported by the framework of PRIN
2010/11 ``Geometria delle variet\`a algebriche'', cofinanced by MIUR, and are members of GNSAGA of INdAM. The
second author was partially supported by the project ``Secant varieties, computational
complexity, and toric degenerations''
realised within the Homing Plus programme of Foundation for Polish Science, co-financed from European Union, Regional
Development Fund. The second author is a doctoral fellow at the Warsaw Center of
Mathematics and Computer Science financed by the Polish program KNOW.
This paper is a part of ``Computational complexity,
generalised Waring type problems and tensor decompositions'' project
within ``Canaletto'', the executive program for scientific and
technological cooperation between Italy and Poland, 2013--2015.}}
\maketitle
\begin{abstract}
We analyse the Gorenstein locus of the Hilbert scheme of $d$ points on $\mathbb{P}^n$ i.e.~the open
subscheme parameterising zero-dimensional Gorenstein subschemes of
$\mathbb{P}^n$ of degree $d$.
We give new sufficient criteria for smoothability and smoothness of points of
the Gorenstein locus. In particular we
prove that this locus is irreducible when $d\leq 13$ and find
its components when $d = 14$.
The proof is relatively self-contained and it does not rely on a computer
algebra system. As a by--product, we give equations of the
fourth
secant variety to the $d$-th Veronese reembedding of $\mathbb{P}^n$ for
$d\geq 4$.
\end{abstract}
\section{Introduction and notation}\label{sIntrNot}
Let $k$ be an algebraically closed field of characteristic neither $2$ nor $3$
and denote by $\Hilb_{p(t)}\mathbb{P}^N $ the Hilbert scheme parameterising closed subschemes in $\mathbb{P}^{N}$ with fixed Hilbert polynomial $p(t)\in{\Bbb Q}[t]$. Since A.~Grothendieck proved the existence of such a parameter space in 1966 (see \cite{Gro}), the problem of dealing with $\Hilb_{p(t)}\mathbb{P}^N $ and its subloci has been a fruitful field attracting the interest of many researchers in algebraic geometry.
Only to quickly mention some of the classical results which deserve, in our opinion, a particular attention, we recall Hartshorne's proof of the connectedness of $\Hilb_{p(t)}\mathbb{P}^N $ (see \cite{har66connectedness}), the description of the locus of codimension
$2$ arithmetically Cohen--Macaulay subschemes due to G.~Ellingsrud and
J.~Fogarty (see \cite{fogarty} for the dimension zero case and \cite{ellingsrud} for larger dimension) and of the study of the locus of codimension $3$ arithmetically Gorenstein subschemes due to J.~Kleppe and R.M.~Mir\'o--Roig (see \cite{roig_codimensionthreeGorenstein} and \cite{kleppe_roig_codimensionthreeGorenstein}).
If we restrict our attention to the case of zero--dimensional subschemes of
degree $d$, i.e. subschemes with Hilbert polynomial $p(t)=d$, then the first
significant results are due to J.~Fogarty (see \cite{fogarty}) and to A.~Iarrobino (see \cite{iarrobino_reducibility}).
In \cite{fogarty}, the author
proves that $\Hilb_{d} \mathbb{P}^{2}$ is smooth, hence irreducible thanks to Hartshorne's connectedness result (the same result holds, when one substitutes $\mathbb{P}^2$ by any smooth surface).
On the other hand in \cite{iarrobino_reducibility}, A.~Iarrobino
deals with the reducibility when $d$ is large with respect to $N$. In order to
better understand the result, recall that the locus of reduced schemes
${\mathcal R}\subseteq\Hilb_{d}\mathbb{P}^N $ is birational to a suitable open
subset of the $d$-th symmetric product of $\mathbb{P}^{N}$, thus it is
irreducible of dimension $dN$. We will denote by $\Hilb_{d}^{gen}\mathbb{P}^N
$ its closure in $\Hilb_{d}\mathbb{P}^N $. It is a well--known and easy fact
that $\Hilb_{d}^{gen}\mathbb{P}^N $ is an irreducible component of dimension
$dN$, by construction.
In \cite{iarrobino_reducibility}, the author proves that
$\Hilb_{d}\mathbb{P}^N $ is never irreducible when $d\gg N\ge 3$, showing that
there is a large family of schemes supported on a single point and thus describing a locus
of dimension greater than $dN$ in $\Hilb_{d}\mathbb{P}^N $. Such a locus is
thus necessarily contained in a component different from
$\Hilb_{d}^{gen}\mathbb{P}^N$.
D.A.~Cartwright, D.~Erman, M.~Velasco,
B.~Viray proved that already for $d = 8$ and $N\geq 4$, the scheme $\Hilb_{d}\mathbb{P}^N$
is reducible (see \cite{CEVV}).
In view of these earlier works it seems reasonable to consider the
irreducibility and smoothness of open loci in $\Hilb_{d}\mathbb{P}^N $
defined by particular algebraic and geometric properties. In the present paper
we are interested in the locus $\HilbGor_{d}\mathbb{P}^N $ of points in
$\Hilb_{d}\mathbb{P}^N $ representing schemes which are Gorenstein. This is an
important locus: e.g.~it has an irreducible component
$\HilbGorGen_{d}\mathbb{P}^N :=\Hilb_{d}^{gen}\mathbb{P}^N \cap
\HilbGor_{d}\mathbb{P}^N $ of dimension $dN$ containing all the points
representing reduced schemes. Moreover it is open, but in general not dense,
inside $\Hilb_{d}\mathbb{P}^N$. Recently, interesting interactions between $\HilbGor_{d}\mathbb{P}^N $ and the geometry of secant varieties or topology have been found (see
for example \cite{bubu2010}, \cite{Michalek}).
Some results about $\HilbGor_{d}\mathbb{P}^N $ are known. The
irreducibility and smoothness of $\HilbGor_{d}\mathbb{P}^N $ when $N\le 3$ is
part of the folklore (see \cite[Cor~2.6]{cn09} for more precise references). When $N\ge4$, the properties of $\HilbGor_{d}\mathbb{P}^N $ have been object of an intensive study in recent years.
E.g., it is classically known that $\HilbGor_{d}\mathbb{P}^N $ is never
irreducible for $d\ge 14$ and $N\ge6$, at least when the characteristic of $k$
is zero (see \cite{emsalem_iarrobino_small_tangent_space} and \cite{iakanev}: see also \cite{cn10}). As reflected by the quoted papers, it is thus natural to ask if $\HilbGor_{d}\mathbb{P}^N $ is irreducible when $d\le 13$.
There is some evidence of an affirmative answer to the previous question.
Indeed the first and third authors studied the locus
$\HilbGor_{d}\mathbb{P}^N $ when $d\le 11$, proving its irreducibility and
dealing in detail with its singular locus in a series of papers \cite{cn09,
cn10, cn11, CN2stretched}.
A key point in the study of a zero--dimensional scheme
$X\subseteq\mathbb{P}^{N}$ is that it is abstractly isomorphic to $\Spec A$
where $A$ is an Artin $k$-algebra with $\dim_k(A)=d$. Moreover the
irreducible components of such an $X$ correspond bijectively to those direct
summands of $A$, which are local. Thus, in order to deal with
$\Hilb_{d}\mathbb{P}^N $, it suffices to deal with the irreducible schemes in
$\Hilb_{d'}\mathbb{P}^N $ for each $d'\le d$.
In all of the aforementioned papers, the methods used in the study of
$\HilbGor_{d}\mathbb{P}^N $ rely on an almost explicit classification of the
possible structure of local, Artin, Gorenstein $k$-algebras of length $d$.
Once such a classification is obtained, the authors prove that all the
corresponding irreducible schemes are smoothable, i.e.~actually lie in
$\HilbGorGen_{d}\mathbb{P}^N $. To this purpose they explicitly construct
a projective family flatly deforming the scheme they are interested
in (or, equivalently, the underlying algebra) to reducible schemes that they
know to be in $\HilbGorGen_{d}\mathbb{P}^N $ because their components have
lower degree.
Though such an approach sometimes seems to be too heavy in terms of calculations, only thanks to such a partial classification it is possible to state precise results about the singularities of $\HilbGor_{d}\mathbb{P}^N $.
However, in the papers \cite{cn10, cn11}, there
are families $H_d$ of schemes of degree $d$, where $d=10, 11$, for which an
explicit algebraic description in the above sense cannot be obtained (see
Section~3 of \cite{cn10} for the case $d=10$, Section~4 of \cite{cn11} for
$d=11$).
Nevertheless, using an alternative approach the authors are still able to
prove the irreducibility of $\HilbGor_{d}\mathbb{P}^N $ and study its
singular locus. Indeed, using Macaulay's theory of inverse systems, the authors
check the irreducibility of the aforementioned loci $H_d$ inside $\HilbGor_{d}\mathbb{P}^N $. Then they show the existence of
a smooth point in $H_d\cap \HilbGorGen_{d}\mathbb{P}^N $. Hence, it follows
that $H_d\subseteq \HilbGorGen_{d}\mathbb{P}^N $.
The aim of the present paper is to refine and generalise this method. First, we
avoid a case by case approach by analysing large classes of algebras.
Second, in \cite{cn10, cn11} a direct check (e.g. using a computer algebra
program) is required to compute the dimension of tangent space to the Hilbert
scheme at some specific points to conclude that they are smooth. We avoid the need
of such computations by exhibiting classes of points which are smooth, making
the paper self--contained.
Using this method, we finally prove the following two statements.
\begin{mainthm}\label{ref:mainthm13degree}
If the characteristic of $k$ is neither $2$ nor $3$, then $\HilbGor_{d}\mathbb{P}^N $ is irreducible of dimension $dN$ for each $d\le 13$ and for $d=14$ and $N\le 5$.
\end{mainthm}
\begin{mainthm}\label{ref:mainthm14degree}
If the characteristic of $k$ is $0$ and $N\ge6$, then
$\HilbGor_{14}\mathbb{P}^N $ is connected and it has exactly two irreducible components,
which are generically smooth.
\end{mainthm}
Theorem~\ref{ref:mainthm13degree} has an interesting consequence regarding secant varieties of
Veronese embeddings. In \cite{geramita} Geramita conjectures that
the ideal of the $2^{nd}$ secant variety (the variety of secant lines) of the
$d^{th}$ Veronese embedding of $\mathbb{P}^{n}$ is generated by the $3\times3$ minors of
the $i^{th}$ catalecticant matrix for $2\le i\le d-2$. Such a conjecture was
confirmed in \cite{Raicu_thesis}. As pointed out in \cite[Section~8.1]{bubu2010}, the above
Theorem~\ref{ref:mainthm13degree} allows us to extend the above result as follows: if $r\le 13$ and $2r\le
d$, then for every $r \leq i\leq d-r$ the set-theoretic equations of the $r^{th}$
secant variety of the $d^{th}$ Veronese embedding of $\mathbb{P}^{n}$ are given by the
$(r+1)\times(r+1)$ minors of the $i^{th}$ catalecticant matrix.
\vspace{1em}
The proofs of Theorem~\ref{ref:mainthm13degree} and
Theorem~\ref{ref:mainthm14degree} are highly interlaced and they follow from a
long series of partial results. In order to better explain the ideas and
methods behind their proofs we will describe in the following lines the
structure of the paper.
In our analysis we incorporate several tools. In Section~\ref{sec:prelim} we
recall the classical ones, most notably Macaulay's correspondence for local,
Artinian, Gorenstein algebras and Macaulay's Growth Theorem. Moreover we also
list some criteria for checking the flatness of a family of algebra which will
be repeatedly used throughout the whole paper.
In Section~\ref{sec:dualgen} we analyse Artin Gorenstein quotients of a power series ring and
exploit the rich automorphism group of this ring to put the quotient into
suitable \emph{standard} form, deepening a result by A.~Iarrobino.
In Section~\ref{sec:specialforms} we further
analyse the quotients, especially their dual socle
generators. We also
construct several irreducible subloci of the Hilbert scheme using the theory
of secant varieties. We give a small contribution to this theory, showing that
the fourth secant variety to a Veronese reembedding of $\mathbb{P}^n$ is
defined by minors of a suitable catalecticant matrix.
Section~\ref{sec:ray} introduces a central object in our study: a class of
families, called ray families, for which we have relatively good control of
the flatness and, in special cases, fibers. Most
notably, Subsection~\ref{subsec:tangentpreserving} gives a class of
\emph{tangent preserving} flat families, which enable us to construct smooth
points on the Hilbert scheme of points without the necessity of heavy computations.
Finally, in Section~\ref{sec:proof}, we give the proofs of
Theorem~\ref{ref:mainthm13degree} and ~\ref{ref:mainthm14degree}. It is worth
mentioning that these results are rather easy consequences of the introduced
machinery. In this section we also prove the following general smoothability
result (see Thm~\ref{ref:mainthmstretchedfive:thm}), which has no restriction
on the length of the algebra and generalises the smoothability results
from
\cite{SallyStretchedGorenstein}, \cite{CN2stretched} and
\cite{EliasVallaAlmostStretched}.
\begin{mainthm}
Let $k$ be an algebraically closed field of characteristic neither $2$ nor
$3$.
Let $A$ be a local Artin Gorenstein $k$-algebra with maximal ideal
$\mathfrak{m}$.
If $\dim_k(\mathfrak{m}^2/\mathfrak{m}^3)\le 5$ and
$\dim_k(\mathfrak{m}^3/\mathfrak{m}^4)\le 2$, then $\Spec A$ is smoothable.
\end{mainthm}
\subsection*{Notation}
\DDef{P}{P}
\DDef{S}{S}
\DDef{mmS}{\mathfrak{m}_{\DS}}
\def\Dx{\alpha}
\def\Dan#1{\annn{\DS}{#1}}
\def\Dhdvect#1{\Delta_{#1}}
All symbols appearing below are defined in Section~\ref{sec:prelim}.
\noindent\begin{longtable}{p{3.5cm} p{12cm}}
$k$ & an algebraically closed field of characteristic $\neq 2, 3$.\\
$\DP = k[x_1, \ldots ,x_n]$ & a polynomial ring in $n$ variables and fixed
basis.\\
$\DS = k[[\Dx_1, \ldots ,\Dx_n]]$ & a power series ring dual
(see~Subsection \ref{sss:apolarity}) to $\DP$, with a fixed (dual) basis.\\
$\DmmS$ & the maximal ideal of $\DS$.\\
$\DS_{poly} = k[\Dx_1, \ldots ,\Dx_n]$ & a polynomial subring of
$\DS$ defined by the choice of basis.\\
$H_A$ & the Hilbert function of a local Artin algebra $A$.\\
$\Dhdvect{A, i}$, $\Dhdvect{i}$ & the $i$-th row of the symmetric decomposition of the Hilbert function of
a local Artin Gorenstein algebra $A$ as in Theorem~\ref{ref:Hfdecomposition:thm}.\\
$e(a)$ & the $a$-th ``embedding dimension'', equal to $\sum_{t=0}^a \Dhdvect{t}(1)$, as in Definition~\ref{ref:standardform:def}.\\
$\Dan{f}$ & the annihilator of $f\in \DP$ with respect to the action of $\DS$.\\
$\Apolar{f}$ & the apolar algebra of $f\in \DP$, equal to $\DS/\Dan{f}$.
\end{longtable}
\section{Preliminaries}\label{sec:prelim}
\def\DSpoly{S_{poly}}
Let $n$ be a natural number. By $(\DS, \DmmS, k)$ we denote the power series ring
$k[[\Dx_1, \ldots ,\Dx_n]]$ of dimension $n$ with a fixed basis $\Dx_1, \ldots
,\Dx_n$. This choice of basis determines a polynomial ring $\DSpoly =
k[\Dx_1, \ldots ,\Dx_n] \subseteq \DS$. By $\DP$ we denote the polynomial ring
$k[x_1, \ldots ,x_n]$. We will later define a duality between $\DS$ and $\DP$,
see Subsection~\ref{sss:apolarity}. We usually think of $n$ being large enough, so that the
considered local Artin algebras are quotients of $\DS$.
For an element $f\in \DP$, we say that $f$
\emph{does not contain $x_i$} if $f\in k[x_1, \ldots , x_{i-1}, x_{i+1}, \ldots
,x_n]$; similarly for $\sigma\in \DS$ or $\sigma\in \DSpoly$. For $f\in \DP$,
by $f_d$ we denote the degree $d$ part of $f$, with respect to the total
degree; similarly for $\sigma\in \DS$.
By $\DP_{m}$ and $\DP_{\leq m}$ we denote the space of homogeneous polynomials of
degree $m$ and (not necessarily homogeneous) polynomials of degree at most $m$ respectively. These spaces are
naturally affine spaces over $k$, which equips them with a scheme structure.
\begin{remark}
For the reader's convenience we introduce numerous examples, which
illustrate the possible applications. In all these examples $k$ may have
arbitrary characteristic $\neq 2, 3$ unless otherwise stated. However, the characteristic zero case
is usually simpler to think of.
\end{remark}
\subsection{Artin Gorenstein schemes and algebras}
In this section we recall the basic facts about Artin Gorenstein algebras.
For a more throughout treatment we refer to \cite{iakanev},
\cite{EisView}, \cite{cn09}
and \cite{JelMSc}.
Finite type zero-dimensional schemes correspond to Artin algebras. Every
such algebra $A$ splits as a finite product of its localisations at maximal
ideals, which corresponds to the fact that the support of $\Spec A$ is
finite and totally disconnected.
Therefore, throughout this text we are mainly interested in \emph{local} Artin
$k$-algebras. Since $k$ is algebraically closed, such algebras
have residue field $k$.
Recall that an Artin local algebra $(A, \mathfrak{m}, k)$ is
\emph{Gorenstein} if the annihilator of $\mathfrak{m}$ is a
one-dimensional vector space over $k$, see \cite[Chap~21]{EisView}.
An important invariant of $A$ is its Hilbert function $H_A$ defined by
$H_A(l) = \dimk \mathfrak{m}^l/\mathfrak{m}^{l+1}$. Since $H_A(l) = 0$ for
$l \gg 0$ it is usual to write $H_A$ as the vector of its non-zero values.
The \emph{socle degree} of $A$ is the largest $l$ such that $H_A(l) \neq
0$.
Since $k$ is algebraically closed, we may write
each such algebra as a quotient of the power series ring $\DS =
k[[\Dx_1, \ldots ,\Dx_n]]$ when $n$ is large enough, in fact $n\geq
H_A(1)$ is sufficient. Since $\dimk A$ is
finite, such a presentation is the same as a presentation $A = \DSpoly/I$,
i.e.~a point $[\Spec A]$ of the Hilbert scheme of $\mathbb{A}^n = \Spec
\DSpoly$.
\subsection{Contraction map and apolar algebras}\label{sss:apolarity}
\DDef{aa}{\mathbf{a}}
\DDef{bb}{\mathbf{b}}
In this section we introduce the contraction mapping, which is closely related
to Macaulay's inverse systems. We refer to \cite{ia94} and
\cite[Chap~21]{EisView} for details and proofs.
Recall that $\DP = k[x_1, \ldots ,x_n]$ is a polynomial ring and $\DS =
k[[\Dx_1, \ldots ,\Dx_n]]$ is a power series ring. The $k$-algebra $\DS$ acts
on $\DP$ by \emph{contraction} (see \cite[Def~1.1]{iakanev}). This action is denoted by
$(\cdot)\hook(\cdot):\DS \times \DP \to \DP$ and defined as follows.
Let
$\mathbf{x}^{\Daa} = x_1^{a_1} \ldots x_{n}^{a_n}\in \DP$ and $\mathbf{\Dx}^{\Dbb} = \Dx_1^{b_1} \ldots \Dx_{n}^{b_n}\in
\DS$ be monomials. We write $\Daa \geq \Dbb$ if and only if $a_i\geq b_i$ for all
$1\leq i\leq n$. Then
\[
\mathbf{\Dx}^{\Dbb}\hook \mathbf{x}^{\Daa} :=
\begin{cases}
\mathbf{x}^{\Daa - \Dbb} & \mbox{if}\quad
\Daa\geq\Dbb\\
0 & \mbox{otherwise.}
\end{cases}\]
This action extends to $\DS\times \DP \to \DP$ by $k$-linearity on $\DP$ and
countable $k$-linearity on $\DS$.
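For example, in two variables we have
\[
\Dx_1\Dx_2\hook x_1^2x_2 = x_1,\qquad \Dx_1^3\hook x_1^2x_2 = 0,\qquad
(\Dx_1 + \Dx_2^2)\hook (x_1^3 + x_2^2) = x_1^2 + 1.
\]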
The contraction action induces a perfect pairing between
$\DS/\DmmS^{s+1}$ and $\DP_{\leq s}$, which restricts to a perfect pairing between the degree $s$
polynomials in $\DSpoly$ and $\DP$. These pairings are compatible for different
choices of $s$.
If $f\in \DP$ then a \emph{derivative} of $f$ is an element of the
$\DS$-module $\DS f$, i.e.~an
element of the form $\partial\hook f$ for $\partial\in \DS$. By definition, these elements form an
$\DS$-submodule of $\DP$, in particular a $k$-linear subspace.
Let $A = \DS/I$ be an Artin quotient of $\DS$, then $A$ is local. The contraction
action associates to $A$ an $\DS$-submodule $M \subseteq \DP$ consisting of elements
annihilated by $I$, so that $A$ and $M$ are dual. If $A$ is Gorenstein, then the $\DS$-module
$M$ is cyclic, generated by a polynomial $f$ of degree $s$ equal to the socle
degree of $A$.
We call every such $f$ a \emph{dual socle generator of the Artin
Gorenstein algebra $A$}.
Unlike $M$, the polynomial $f$ is \emph{not determined uniquely
by the choice of presentation $A = S/I$}, however if $f$ and $g$ are two dual socle generators,
then $g = \partial\hook f$, where $\partial\in \DS$ is invertible.
Conversely, let $f\in \DP$ be a polynomial of degree $s$. We can associate
to it the ideal $I := \Dan{f}$, so that $A := \DS/I$ is
a local Artin Gorenstein algebra of socle degree $s$. We call $I$ the
\emph{apolar ideal} of $f$ and $A$ the
\emph{apolar algebra} of $f$, which we denote as
\[A = \Apolar{f}.\] From the discussion above it follows that every
local Artin Gorenstein algebra is an apolar algebra of some polynomial.
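For example, for $f = x_1x_2\in k[x_1, x_2]$ one checks directly that $\Dan{f} = (\Dx_1^2, \Dx_2^2)$, so $\Apolar{f}$ is a local Artin Gorenstein algebra of socle degree $2$ with Hilbert function $(1, 2, 1)$.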
\begin{remark}\label{ref:dualautomorphisms:remark}
Recall that we may think of $\DS/\DmmS^{s+1}$ as the linear space dual
to $\DP_{\leq s}$. An automorphism $\psi$ of $\DS$ or
$\DS/\DmmS^{s+1}$ induces an automorphism
$\DPut{psistar}{\psi^*}$ of the
$k$-linear space $\DP_{\leq s}$.
If $f\in \DP_{\leq s}$ and $I$ is
the apolar ideal of $f$, then $\psi(I)$ is the apolar ideal of
$\psi^*(f)$. Moreover, $f$ and $\psi^*(f)$ have the same degree.
\end{remark}
\subsection{Iarrobino's symmetric decomposition of Hilbert function}
\def\Dhd#1#2{\Delta_{#1}\pp{#2}}
\def\iatf#1#2{(\DS f)_{#1}^{#2}}
\def\Ddegf{s}
One of the most important invariants possessed by a local Artin Gorenstein algebra is
the symmetric decomposition of the Hilbert function, due to Iarrobino~\cite{ia94}.
To state the theorem it is convenient to define addition of vectors of
different lengths position-wise: if $a = (a_0, \ldots ,a_n)$ and $b = (b_0,
\ldots ,b_m)$ are vectors, then $a+ b= (a_0 + b_0, \ldots , a_{\max(m, n)} +
b_{\max(m, n)})$, where $a_i = 0$ for $i >n $ and $b_i = 0$ for $i > m$.
In the following, all vectors are indexed starting from zero.
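For instance, with this convention $(1, 2, 1) + (0, 1) = (1, 3, 1)$.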
\begin{thm}[Iarrobino's symmetric decomposition of Hilbert function]\label{ref:Hfdecomposition:thm}
\def\Dmm{\mathfrak{m}}
\def\Dmmperp#1{(0: \mathfrak{m}^{#1})}
Let $(A, \Dmm, k)$ be a local Artin Gorenstein algebra of socle degree $s$ and Hilbert
function $H_A$. Then there exist vectors $\Dhdvect{0}, \Dhdvect{1},
\ldots ,\Dhdvect{s}$ such that
\begin{enumerate}
\item the vector $\Dhdvect{i}$ has length $s + 1 - i$ and satisfies
$\Dhdvect{i}(n) = \Dhdvect{i}(s - i - n)$ for all integers $n\in [0,
s-i]$.
\item the Hilbert function $H_A$ is equal to the sum
$\sum_{i=0}^{s} \Dhdvect{i}$.
\item the vector $\Dhdvect{0}$ is equal to the Hilbert function of a
local Artin Gorenstein \emph{graded} algebra of socle degree $s$.
\end{enumerate}
\end{thm}
Let $(A, \mathfrak{m}, k)$ be a local Artin Gorenstein algebra.
There are a few important remarks to make.
\begin{enumerate}
\item Since $\Dhdvect{0}$ is the Hilbert function of an algebra, we have $\Dhdvect{0}(0) =
1 = H_A(0)$. Thus for every $i > 0$ we have $\Dhdvect{i}(0) = 0$. From
symmetry it follows that $\Dhdvect{i}(s-i) = 0$. In particular
$\Dhdvect{s} = (0)$ and $\Dhdvect{s-1} = (0, 0)$, so we may ignore
these vectors. On the other hand $\Dhdvect{s-2} = (0, q, 0)$ is in
general non-zero and its importance is illustrated by
Proposition~\ref{ref:squares:prop}.
\item Suppose that $H_A = (1, n, 1, 1)$ for some $n > 0$. Then we have
$\Dhdvect{0} = (1, *, *, 1)$ and $\Dhdvect{1} = (0, *, 0)$, thus
$\Dhdvect{0} = (1, *, 1, 1)$, so that $\Dhdvect{0} = (1, 1, 1, 1)$
because of its symmetry. Then $\Dhdvect{1} = (0, n -1, 0)$. Similarly,
if $H_A = (1, n, e, 1)$ is the Hilbert
function of a local Artin Gorenstein algebra, then $n\geq e$. This is
a basic example on how Theorem~\ref{ref:Hfdecomposition:thm} imposes
restrictions on the Hilbert function of $A$.
\item If $A$ is graded, then $\Dhdvect{0} = H_A$ and all other
$\Dhdvect{\bullet}$ are zero vectors, see \cite[Prop~1.7]{ia94}.
\item For every $a\leq s$ the partial sum $\sum_{i=0}^a \Dhdvect{i}$ is the Hilbert
function of a local Artin graded algebra, see \cite[Def~1.3,
Thm~1.5]{ia94}, see also \cite[Subsection~1.F]{ia94}. In particular it satisfies Macaulay's Growth
Theorem, see Subsection~\ref{ref:MacGrowth:sss}. Thus e.g. there is no
local Artin Gorenstein algebra with Hilbert function decomposition
having $\Dhdvect{0} = (1, 1, 1, 1, 1, 1)$ and $\Dhdvect{1} = (0, 0, 1, 0, 0)$,
because then $(\Dhdvect{0} + \Dhdvect{1})(1) = 1$ and
$(\Dhdvect{0} + \Dhdvect{1})(2) = 2$.
\end{enumerate}
Let us now analyse the case when $A = \Apolar{f} = \DS/\Dan{f}$ is the apolar algebra of a
polynomial $f\in \DP$, where $f =
\sum_{i=0}^s f_i$ for some $f_i\in \DP_i$. Each local Artin Gorenstein algebra
is isomorphic to such algebra, see~Subsection~\ref{sss:apolarity}. For the
proofs of the following remarks, see~\cite{ia94}.
\begin{enumerate}
\item The vector $\Dhdvect{0}$ is the Hilbert function of $\Apolar{f_s}$, the apolar algebra of the leading form of $f$.
\item If $A$ is graded, then $\Dan{f} = \Dan{f_s}$, so that we may always
assume that $f = f_s$. Moreover, in this case $H_A(m)$ is equal to
$\dimk (\DS f_s)_m$, the dimension of the space of degree $m$ derivatives of $f_s$.
\item Let $f_1$, $f_2$ be polynomials of degree $s$ such that $f_1 - f_2$ is a
polynomial of degree $d < s$. Let $A_i = \Apolar{f_i}$ and let
$\Dhdvect{A_i, n}$ be the symmetric decomposition of the Hilbert
function $H_{A_i}$ of $A_i$ for $i=1,2$. Then $\Dhdvect{A_1, n} =
\Dhdvect{A_2, n}$ for all $n < s - d$, see
\cite[Lem~1.10]{ia94}.
\end{enumerate}
\subsection{Smoothability and unobstructedness}\label{sss:smoothability}
An Artin algebra $A$ is called \emph{smoothable} if it is a (finite flat) limit of
smooth algebras, i.e.~if there exists a finite flat family over an irreducible
base with a special fiber isomorphic to $\Spec A$ and general fiber smooth.
Recall that $A \simeq A_{\mathfrak{m}_1} \times \ldots \times
A_{\mathfrak{m}_r}$, where $\mathfrak{m}_i$ are the maximal ideals of $A$. The algebra
$A$ is smoothable if all its localisations $A_{\mathfrak{m}}$ at maximal ideals
are smoothable. The converse also holds, i.e.~if
an algebra $A \simeq B_1 \times B_2$ is smoothable, then the algebras
$B_1$ and $B_2$ are also smoothable, a complete and characteristic free proof of this fact will appear
shortly in \cite{jabu_jelisiejew_smoothability}.
We say that a zero-dimensional scheme $Z = \Spec A$
is \emph{smoothable} if the algebra $A$ is
smoothable.
It is crucial that every local Artin Gorenstein algebra $A$ with $H_{A}(1)
\leq 3$ is smoothable, see~\cite[Prop~2.5]{cn09}, which follows from the
Buchsbaum-Eisenbud classification of resolutions,
see~\cite{BuchsbaumEisenbudCodimThree}.
Also complete intersections are smoothable.
A complete intersection $Z \subseteq \mathbb{P}^n$ is smoothable by
Bertini's Theorem (see \cite[Example 29.0.1]{HarDeform}, but note that
Hartshorne uses a slightly weaker definition of smoothability, without
finiteness assumption). If $Z = \Spec A$ is a complete
intersection in $\mathbb{A}^n$, then $Z$ is a union of connected components
of a complete intersection $Z' = \Spec B$ in $\mathbb{P}^n$, so that $B
\simeq A \times C$ for some algebra $C$. The algebra
$B$ is smoothable since $Z'$ is. Thus also the algebra $A$ is
smoothable, i.e. $Z$ is smoothable.
\def\Hilb#1#2{\mathcal{H}ilb_{#2}(\mathbb{P}^{#1})}
\begin{defn}\label{ref:nonobstructed:def}
A smoothable Artin algebra $A$ of length $d$, corresponding to $\Spec A
\subseteq \mathbb{P}^n$, is \emph{unobstructed} if
the tangent space to $\Hilb{n}{d}$ at the $k$-point $[\Spec A]$ has dimension $nd$. If $A$ is
unobstructed, then $[\Spec A]$ is a smooth point of the Hilbert scheme.
\end{defn}
Unobstructedness is independent of $n$ and of the chosen embedding of $\Spec A$ into
$\mathbb{P}^n$; see the discussion before \cite[Lem 2.3]{cn09}.
The argument above shows that algebras corresponding to complete intersections in $\mathbb{A}^n$ and
$\mathbb{P}^n$ are unobstructed. Every local Artin Gorenstein algebra $A$
with $H_A(1)\leq 3$ is unobstructed, see~\cite[Prop~2.5]{cn09}. Moreover,
every local Artin Gorenstein algebra $A$ with $H_A(1) \leq 2$ is a complete
intersection in $\mathbb{A}^2$ by the Hilbert-Burch theorem.
\begin{defn}\label{ref:limitreducible:def}
An Artin algebra $A$ is \emph{limit-reducible} if there exists
a flat family (over an irreducible base) whose special fiber is isomorphic to $\Spec A$ and whose general fiber is reducible.
An Artin algebra $A$ is \emph{strongly non-smoothable} if it is not
limit-reducible.
\end{defn}
Clearly, strongly non-smoothable algebras (other than $A = k$) are non-smoothable.
The definition of strong non-smoothability is useful, because to
show that there is no non-smoothable algebra of length less than $d$ it is
enough to show that there is no strongly non-smoothable algebra of
length less than $d$.
\subsection{Macaulay's Growth Theorem}\label{ref:MacGrowth:sss}
We will recall
Macaulay's Growth Theorem and Gotzmann's Persistence Theorem, which provide strong restrictions on the possible
Hilbert functions of graded algebras.
Fix $n\geq 1$. Let $m$ be any natural number, then $m$ may be uniquely written
in the form
\[m =\binom{m_n}{n} + \binom{m_{n-1}}{n-1} + \ldots + \binom{m_1}{1},\] where
$m_n > m_{n-1} > \ldots > m_1$. We define
\[
m^{\langle i\rangle} := \binom{m_n+1}{n+1} + \binom{m_{n-1}+1}{n} + \ldots
+ \binom{m_1+1}{2}.
\]
It is useful to compute some initial values of the function defined above, e.g.
$1^{\langle n\rangle} = 1$ for all $n$, $3^{\langle 2 \rangle} = 4$,
$4^{\langle 2\rangle} = 5$, $6^{\langle 2\rangle} = 10$ or $4^{\langle
3\rangle} = 5$.
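For instance, to compute $5^{\langle 2\rangle}$ we write $5 = \binom{3}{2} + \binom{2}{1}$, so that $5^{\langle 2\rangle} = \binom{4}{3} + \binom{3}{2} = 7$.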
\begin{thm}[Macaulay's Growth Theorem]\label{ref:MacaulayGrowth:thm}
If $A$ is a graded quotient of a polynomial ring over $k$, then the
Hilbert function $H_A$ of $A$ satisfies $H_A(m+1) \leq H_{A}(m)^{\langle
m\rangle}$ for all $m$.
\end{thm}
\begin{proof}
See \cite[Thm 4.2.10]{BrunsHerzog}.
\end{proof}
Note that the assumptions of Theorem~\ref{ref:MacaulayGrowth:thm} are satisfied
for every local Artin $k$-algebra $(A, \mathfrak{m}, k)$, since its Hilbert
function is by definition equal to the
Hilbert function of the associated graded algebra.
\begin{remark}\label{ref:MacaulaysBoundedGrowth:rmk}
We will frequently use the following easy consequence of Theorem~\ref{ref:MacaulayGrowth:thm}.
Let $A$ be a graded quotient of a polynomial ring over $k$.
Suppose that $H_A(l) \leq l$ for some $l$.
Then $H_A(l) = \binom{l}{l} + \binom{l-1}{l-1} + \ldots $ and
$H_A(l)^{\langle l \rangle} = \binom{l+1}{l+1} + \binom{l}{l} + \ldots =
H_A(l)$, thus $H_A(l+1) \leq H_A(l)$. It follows
that the Hilbert function $H_A$ satisfies $H_A(l) \geq H_A(l+1)\geq
H_A(l+2) \geq \ldots $. In particular $H_A(m) \leq l$ for all $m \geq
l$.
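For example, no graded quotient of a polynomial ring has Hilbert function $(1, 3, 5, 2, 3)$: here $H_A(3) = 2 \leq 3$ would force $H_A(4) \leq 2$.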
\end{remark}
\begin{thm}[Gotzmann's Persistence Theorem]\label{ref:Gotzmann:thm}
Let $A = \DSpoly/I$ be a graded quotient of a polynomial ring $\DSpoly$ over $k$ and
suppose that for some $l$ we have $H_A(l+1) = H_{A}(l)^{\langle
l\rangle}$ and $I$ is generated by elements of degree at most $l$. Then
$H_A(m+1) = H_{A}(m)^{\langle m\rangle}$ for all $m\geq l$.
\end{thm}
\begin{proof}
See \cite[Thm 4.3.3]{BrunsHerzog}.
\end{proof}
In what follows we will mostly use the following consequence of
Theorem~\ref{ref:Gotzmann:thm}, for which we introduce some (non-standard) notation. Let
$I \subseteq \DSpoly = k[\Dx_1, \ldots ,\Dx_n]$ be a graded ideal in a polynomial ring and
$m\geq 0$. We say that $I$ is \emph{$m$-saturated} if for all $l\leq m$ and
$\sigma\in (\DSpoly)_l$ the condition $\sigma\cdot (\Dx_1, \ldots ,\Dx_n)^{m - l}\subseteq I$
implies $\sigma\in I$.
\begin{lem}\label{ref:P1gotzmann:lem}
\def\Dnn{\mathfrak{n}}
Let $\DSpoly = k[\Dx_1, \ldots ,\Dx_n]$ be a polynomial ring with a
maximal ideal $\Dnn = (\Dx_1,
\ldots ,\Dx_n)$. Let $I \subseteq \DSpoly$ be a graded ideal and $A =
\DSpoly/I$. Suppose that $I$ is $m$-saturated for some
$m\geq 2$.
Then
\begin{enumerate}
\item if $H_A(m) = m+1$ and $H_A(m+1) = m+2$, then $H_A(l) = l+1$ for
all $l\leq m$, in particular $H_A(1) = 2$.
\item if $H_A(m) = m+2$ and $H_A(m+1) = m+3$, then $H_A(l) = l+2$ for
all $l\leq m$, in particular $H_A(1) = 3$.
\end{enumerate}
\end{lem}
\begin{proof}
1. First, if $H_A(l) \leq l$ for some $l < m$, then by Macaulay's Growth
Theorem $H_A(m)\leq l < m+1$, a contradiction. So it suffices to prove
that $H_A(l) \leq l+1$ for all $l < m$.
Let $J$ be the ideal generated by elements of degree at most $m$ in $I$.
We will prove that the graded ideal $J$ of $\DSpoly$ defines a
$\mathbb{P}^1$ linearly embedded into $\mathbb{P}^{n-1}$.
Let $B = \DSpoly/J$. Then $H_B(m) = m+1$ and $H_B(m+1) \geq m+2$. Since $H_B(m)
= m+1 =
\binom{m+1}{m}$, we have $H_B(m)^{\langle m\rangle} = \binom{m+2}{m+1} =
m+2$ and by Theorem \ref{ref:MacaulayGrowth:thm} we get $H_B(m+1)\leq m+2$, thus $H_B(m+1) = m+2$. Then by
Gotzmann's
Persistence Theorem $H_B(l) = l+1$ for all $l > m$. This implies that the
Hilbert polynomial of $\Proj B \subseteq \mathbb{P}^{n-1}$ is $h_B(t) = t+1$, so
that $\Proj B \subseteq \mathbb{P}^{n-1}$ is a linearly embedded $\mathbb{P}^1$. In
particular the Hilbert function and Hilbert polynomial of $\Proj B$ are
equal
for all arguments.
By assumption, we have $J_l = J^{sat}_l$ for all $l < m$. Then $H_{A}(l)
= H_{\DSpoly/J}(l) = H_{\DSpoly/J^{sat}}(l) = l+1$ for all $l < m$ and the claim of
the lemma follows.
2. The proof is similar to the above one; we mention only the points,
where it changes. Let $J$ be the ideal generated
by elements of degree at most $m$ in $I$ and $B = \DSpoly/J$. Then $H_B(m) = m
+2 = \binom{m+1}{m} + \binom{m-1}{m-1}$, thus $H_B(m+1) \leq
\binom{m+2}{m+1} +
\binom{m}{m} = m+3$ and $B$ defines a closed subscheme of $\mathbb{P}^{n-1}$ with
Hilbert polynomial $h_B(t) = t+2$. There are two isomorphism types of such
subschemes: $\mathbb{P}^1$ union a point and $\mathbb{P}^1$ with an
embedded double point. One checks that for these schemes the Hilbert
polynomial is equal to the Hilbert function for all arguments and then
proceeds as in the proof of Point 1.
\end{proof}
\begin{remark}\label{ref:GorensteinSaturated:rmk}
\def\Dnn{\mathfrak{n}}
If $A = \DSpoly/I$ is a graded Artin Gorenstein algebra of socle degree $s$, then
it is $m$-saturated for every $m\leq s$.
Indeed, we may assume that $A = \Apolar{F}$ for some homogeneous $F\in
\DP$ of degree $s$, then $I = \Dan{F}$. Let $\Dnn = (\Dx_1, \ldots ,\Dx_n) \subseteq
k[\Dx_1, \ldots ,\Dx_n] = \DSpoly$.
Take $\sigma\in (\DSpoly)_l$, then $\sigma\in I$ if and only if $\sigma\hook F =
0$. Similarly, $\sigma\Dnn^{m-l} \subseteq I$ if and only if every element
of $\Dnn^{m-l}$ annihilates $\sigma\hook F$. Since $\sigma\hook F$ is
either a homogeneous polynomial of degree $s - l \geq m -l$ or it is zero, both conditions are
equivalent.
\end{remark}
\subsection{Flatness over $\Spec k[t]$}
For further reference we explicitly state a purely elementary flatness
criterion. Its formulation is a bit complicated, but this is precisely the
form which is needed for the proofs. This criterion relies on the easy
observation that the torsion-free modules over $k[t]$ are flat.
\begin{prop}\label{ref:flatelementary:prop}
Suppose $S$ is a $k$-module and $I \subseteq S[t]$ is a $k[t]$-submodule. Let $I_0 := I\cap S$.
If for every $\lambda\in k$ we have
\[(t-\lambda)S[t]\cap I \subseteq (t-\lambda)I + I_0[t],\] then $S[t]/I$ is a flat $k[t]$-module.
\end{prop}
\begin{proof}
The ring $k[t]$ is a principal ideal domain, thus a $k[t]$-module is flat if and only
if it is torsion-free, see \cite[Cor~6.3]{EisView}.
Since every polynomial decomposes into linear factors, to prove that $M = S[t]/I$ is
torsion-free it is enough to show that the elements $t-\lambda$ are non-zerodivisors on
$M$,~i.e.~that $(t-\lambda)x\in I$ implies $x\in I$ for all $x\in S[t]$,
$\lambda\in k$.
Fix $\lambda\in k$ and suppose that $x\in
S[t]$ is such that $(t-\lambda)x \in I$. Then by assumption $(t-\lambda)x\in
(t-\lambda)I + I_0[t]$, so that $(t-\lambda)(x-i) \in I_0[t]$ for some $i\in I$.
Since $S[t]/I_0[t] \simeq (S/I_0)[t]$ is a free $k[t]$-module, we have
$x-i\in I_0[t]\subseteq I$ and so $x\in I$.
\end{proof}
\begin{remark}\label{ref:flatnessremovet:remark}
Let $i_1, \ldots ,i_r$ be the generators of $I$. To check the inclusion
which is the assumption
of Proposition~\ref{ref:flatelementary:prop}, it is enough to
check that $s\in (t-\lambda)S[t]\cap I$ implies $s\in (t-\lambda)I + I_0[t]$
for all $s
= s_1 i_1 + \ldots + s_r i_r$, \emph{where $s_i\in S$}.
Indeed, take an arbitrary element $s\in I$ and write $s = t_1 i_1 + \ldots + t_r i_r$, where $t_1, \ldots ,t_r\in
S[t]$. Dividing $t_i$ by $t-\lambda$ we obtain $s = s_1 i_1 + \ldots +
s_r i_r + (t-\lambda)i$, where $i\in I$ and $s_i\in S$. Denote $s' = s_1
i_1 + \ldots + s_r i_r$, then
$s\in (t-\lambda)S[t]\cap I$ if and only if $s'\in (t-\lambda)S[t]\cap I$ and $s\in (t-\lambda)I +
I_0[t]$ if and only if $s'\in (t-\lambda)I + I_0[t]$.
\end{remark}
\begin{example}\label{ref:exampleflatcrit:example}
Consider $S = k[x,y]$ and $I = xyS[t] + (x^3 - tx)S[t] \subseteq S[t]$. Take an element $s_1 xy + s_2(x^3 - tx)\in
I$ and suppose $s_1 xy + s_2(x^3 - tx)\in (t-\lambda)S[t]$. We want to prove
that this element lies in $I_0[t] + (t-\lambda)I$.
As in Remark~\ref{ref:flatnessremovet:remark}, by subtracting an element of $I(t-\lambda)$ we may assume that $s_1, s_2$ lie in $S$.
Then $s_1 xy + s_2(x^3 - tx)\in (t-\lambda)S[t]$ if
and only if $s_1 xy + s_2(x^3 - \lambda x) = 0$. In particular we have $s_2
\in yS$ so that $s_2 (x^3 - tx)\in xyS[t]$, then $s_1 xy + s_2(x^3 - tx)\in xyS[t] \subseteq I_0[t]$.
\end{example}
\begin{remark}\label{ref:decompositionhomog:rmk}
Similarly as in Example~\ref{ref:exampleflatcrit:example}, in the
following we will frequently use the following easy observation. Consider
a ring $R = B[\Dx]$ graded by the degree of $\Dx$.
Let $d$ be a natural number and $I\subseteq R$ be a homogeneous ideal
generated in degrees less or equal to $d$.
Let $q\in B[\Dx]$ be an element of $\alpha$-degree less than $d$ and such
that
for every $b\in B$ satisfying $b\Dx^{d}\in I$, we have $bq\in I$.
Then for every $r\in R$
\[\mbox{the condition}\ \ r(\Dx^d - q)\in I\ \ \mbox{implies}\ \ r\Dx^d
\in I \ \ \mbox{and}\ \ rq\in I.\]
To prove this, write $r = \sum_{i=0}^{m} r_i \Dx^i$, where $r_i\in B$.
The leading form of $r(\Dx^d - q)$ is $r_m\Dx^{m+d}$ and it lies in $I$. Since $I$
is homogeneous and generated in degree at most $d$, we have $r_m \Dx^d\in I$. Then $r_m
q\in I$ by assumption, so that $\hat{r} := r - r_m\Dx^{m}$ satisfies
$\hat{r}(\Dx^{d} - q)\in I$. By induction on the $\alpha$-degree we may assume
$\hat{r}\Dx^d, \hat{r}q \in I$, then also $r\Dx^d, rq\in I$.
\end{remark}
\section{Standard form of the dual generator}\label{sec:dualgen}
\begin{defn}\label{ref:standardform:def}
Let $f\in \DP = k[x_1,\dots,x_n]$ be a polynomial of degree $s$. Let $I = \annn{\DS}{f}$
and $A = \DS/I = \Apolar{f}$. By $\Dhdvect{\bullet}$ we denote the
decomposition of the Hilbert function of $A$ and we set $e(a) := \sum_{t=0}^a
\Dhd{t}{1}$.
We say that $f$ \emph{is in the standard form} if
\[
f = f_0 + f_1 + f_2 + f_3 + \dots + f_s,\quad \mbox{where}\quad f_i\in \DP_i \cap
k[x_1,\dots,x_{e(s-i)}]\mbox{ for all } i .
\]
Note that if $f$ is in the standard form and $\partial\in \DmmS$ then
$f + \partial\hook f$ is also in the standard form.
We say that an Artin Gorenstein algebra $\DS/I$ is in the \emph{standard
form} if any (or every) dual socle generator of $\DS/I$ is in the
standard form, see Proposition \ref{ref:standardformconds:prop} below.
\end{defn}
\begin{example}\label{ref:standardformofpowersum:example}
If $f = x_1^6 + x_2^5 + x_3^3$, then $f$ is in the standard form.
Indeed, $e(0) = 1$, $e(1) = 2$, $e(2) = 2$, $e(3) =
3$ so that we should check that $x_1^6\in k[x_{1}]$, $x_2^5\in k[x_1,
x_2]$, $x_3^3\in k[x_1, x_2, x_3]$, which is true. In contrast, $g
= x_3^6 + x_2^5 + x_1^3$ is not in the standard form, but it may be put
in the standard form via a change of variables (here a permutation of the variables suffices).
\end{example}
The change of variables procedure of
Example~\ref{ref:standardformofpowersum:example} may be generalised to
prove that every
local Artin Gorenstein algebra can be put into a standard form, as the following
Proposition~\ref{ref:existsstandardform:prop} explains.
\begin{prop}\label{ref:existsstandardform:prop}
For every Artin Gorenstein algebra $\DS/I$ there is an automorphism
$\varphi:\DS\to \DS$ such that $\DS/\varphi(I)$ is in the standard form.
\end{prop}
\begin{proof}
See \cite[Thm 5.3AB]{ia94}, the proof is rewritten in \cite[Thm 4.38]{JelMSc}.
\end{proof}
The idea of the proof of Proposition~\ref{ref:existsstandardform:prop} is
to ``linearise'' some elements of $\DS$. This is quite technical and
perhaps it is best seen in the following example.
\begin{example}\label{ref:iarrfavorite:ex}
In this example we illustrate the proof of
Proposition \ref{ref:existsstandardform:prop}. Let $f = x_1^6 +
x_1^4x_2$. The annihilator of $f$ in $\DS$ is $(\Dx_2^2, \Dx_1^5 -
\Dx_1^3\Dx_2)$, the Hilbert function of $\Apolar{f}$ is $(1, 2, 2, 2,
1, 1, 1)$ and the symmetric decomposition is
\[\Dhdvect{0} = (1, 1, 1,
1, 1, 1, 1),\ \ \Dhdvect{1} = (0, 0, 0, 0, 0, 0),\ \ \Dhdvect{2} = (0, 1, 1,
1, 0).\]
This shows that $e(0) = 1$, $e(1) = 1$, $e(2) = 2$. If $f$ is in the
standard form we should have
$f_5 = x_1^4x_2 \in k[x_1, \ldots, x_{e(1)}] = k[x_1]$. This means that $f$ is not
in the standard form. The ``reason'' for $e(1) = 1$ is the fact that
$\Dx_1^{3}(\Dx_2 - \Dx_1^2)$ annihilates $f$, and the ``reason'' for
$f_5\not\in k[x_1]$ is that $\Dx_2 - \Dx_1^2$ is
not a linear form. Thus we make $\Dx_2 - \Dx_1^2$ a linear form by twisting by a
suitable automorphism of $\DS$.
We define an automorphism $\psi:\DS\to \DS$ by
$\psi(\Dx_1) = \Dx_1$ and $\psi(\Dx_2) = \Dx_2 + \Dx_1^2$, so that we
have
$\psi(\Dx_2 - \Dx_1^2) = \Dx_2$.
The automorphism maps the annihilator of $f$ to the ideal
${I := ((\Dx_2 + \Dx_1^2)^2, \Dx_1^3\Dx_2)}$. We will see that the
algebra $\DS/I$ is in the standard form and also find a particular dual generator obtained
from $f$.
As mentioned in Remark \ref{ref:dualautomorphisms:remark}, the automorphism $\psi$ induces an automorphism
$\DPut{psistar}{\psi^*}$ of the $k$-linear space $\DP_{\leq
6}$. This automorphism maps $f$ to a dual socle
generator $\Dpsistar{f}$ of $\DS/I$.
The element
$F := \Dpsistar{x_1^6}$ is the only element of $\DP$ such that
$\psi(\Dx_1^7)\hook F = \psi(\Dx_2)\hook F = 0$,
$\psi(\Dx_1^6)(F) = 1$ and
$\psi(\Dx_1^{l})(F) = 0$ for $l\leq 5$. Caution: in the last line we
use evaluation on the functional and not the induced action (see Remark
\ref{ref:dualautomorphisms:remark}).
One can compute that $\Dpsistar{x_1^6} = x_1^6 - x_1^4x_2 + x_1^2x_2^2
- x_2^3$ and similarly $\Dpsistar{x_1^4x_2} = x_1^4x_2 - 2x_1^2x_2^2 +
3x_2^3$ so that $\Dpsistar{f} = x_1^6 - x_1^2x_2^2 + 2x_2^3$. Now
indeed $x_1^6\in k[x_1], x_1^2x_2^2\in k[x_1, x_2]$ and $2x_2^3\in
k[x_1, x_2]$ so the dual socle generator is in the standard form.
\end{example}
We note the following equivalent conditions for a dual socle generator to
be in the standard form.
\begin{prop}\label{ref:standardformconds:prop}
In the notation of Definition \ref{ref:standardform:def}, the following
conditions are equivalent for a polynomial $f\in \DP$:
\begin{enumerate}
\item the polynomial $f$ is in the standard form,
\item for all $r$ and $i$ such that $r > e(s-i)$ we have
$\DmmS^{i - 1}\Dx_r \subseteq I = (f)^{\perp}$. Equivalently,
for all $r$ and $i$ such that $r > e(i)$ we have
$\DmmS^{s - i - 1}\Dx_r \subseteq I = (f)^{\perp}$.
\end{enumerate}
\end{prop}
\begin{proof}
Straightforward.
\end{proof}
\begin{cor}\label{ref:automorphismpresever:cor}
Let $f\in \DP$ be such that the algebra $\DS/I$ is in the standard
form, where $I =\Dan{f}$.
Let $\varphi$ be an automorphism of $\DS$ given by
\[
\varphi\pp{\Dx_i} = \kappa_i\Dx_i + q_i\mbox{ where }q_i\mbox{ is such
that } \deg(q_i \hook f) \leq \deg(\Dx_i \hook f)\mbox{ and
}\kappa_i\in k\setminus\left\{ 0 \right\}.
\]
Then the algebra $\DS/\varphi^{-1}(I)$ is also in the standard form.
\end{cor}
\begin{proof}
The algebras $\DS/I$ and $\DS/\varphi^{-1}(I)$ are isomorphic, in
particular they have equal functions $e(\cdot)$. By
Proposition \ref{ref:standardformconds:prop} it suffices to
prove that if for some $r, i$ we have $\DmmS^r \Dx_i \subseteq I$,
then $\DmmS^r \Dx_i \subseteq \varphi^{-1}(I)$. The latter condition is equivalent to
$\DmmS^r \varphi(\Dx_i) \subseteq I$.
If $\DmmS^r \Dx_i \hook f = 0$ then $\deg (\Dx_i \hook f) < r$
so, by assumption, $\deg (q_i \hook f) < r$ thus $\DmmS^r q_i \hook f
= 0$ and
$\DmmS^r \varphi(\Dx_i)\hook f = \DmmS^r (\kappa_i\Dx_i + q_i) \hook f = 0$.
\end{proof}
\begin{cor}\label{ref:basicautos:cor}
Suppose that $q\in\DmmS^2$ does not contain $\Dx_i$ and let $\varphi:
\DS\to \DS$ be an automorphism given by
\[\varphi(\Dx_j) = \Dx_j\mbox{ for all }j\neq i\mbox{ and
}\varphi(\Dx_i) = \kappa_i\Dx_i + q,\mbox{ where }\kappa_i\in
k\setminus\{0\}.\]
Suppose that $\DS/I$ is in the standard form, where $I = \Dan{f}$ and
that $\deg(q\hook f) \leq \deg(\Dx_i \hook f)$. Then the algebras
$\DS/\varphi(I)$ and $\DS/\varphi^{-1}(I)$ are also in the standard
form.
\end{cor}
\begin{proof}
Note that $\psi:\DS\to \DS$ given by $\psi(\Dx_j) = \Dx_j$ for $j\neq
i$ and $\psi(\Dx_i) = \kappa_i^{-1}(\Dx_i - q)$ is an automorphism of $\DS$
and furthermore $\psi(\kappa_i\Dx_i + q) = \Dx_i - q + q = \Dx_i$ so
that $\psi = \varphi^{-1}$.
Both $\varphi$ and $\psi$ satisfy assumptions of Corollary
\ref{ref:automorphismpresever:cor} so both $\DS/\varphi^{-1}(I)$ and
$\DS/\psi^{-1}(I) = \DS/\varphi(I)$ are in the standard form.
\end{proof}
\begin{remark}\label{ref:techbasicautos:remark}
The assumption $q\in\DmmS^2$ of Corollary~\ref{ref:basicautos:cor} is
needed only to ensure that $\varphi$ is an automorphism of $\DS$. On
the other hand the fact that $q$ does not contain $\Dx_i$ is
important, because it allows us to control $\varphi^{-1}$ and in
particular prove that $S/\varphi(I)$ is in the standard form.
\end{remark}
The following Corollary \ref{ref:semibasicautos:ref} is a straightforward generalisation of Corollary
\ref{ref:basicautos:cor}, but the notation is more involved. We first choose
a set $\mathcal{K}$ of variables. The automorphism sends each variable
from $\mathcal{K}$ to
(a multiple of) itself plus a suitable polynomial in variables not
appearing in $\mathcal{K}$.
\begin{cor}\label{ref:semibasicautos:ref}
Take $\mathcal{K} \subseteq \{1, 2, \dots, n\}$ and $q_i\in\DmmS^2$
for $i\in \mathcal{K}$
which do not contain any variables from the set $\{\Dx_i\}_{i\in
\mathcal{K}}$. Define $\varphi:
\DS\to \DS$ by
\[
\varphi(\Dx_i) = \begin{cases}
\Dx_i &\mbox{ if }i\notin \mathcal{K}\\
\kappa_i\Dx_i + q_i,\mbox{ where }\kappa_i\in
k\setminus\{0\} &\mbox{ if } i\in \mathcal{K}.
\end{cases}
\]
Suppose that $\DS/I$ is in the standard form, where $I = \Dan{f}$ and
that $\deg(q_i\hook f) \leq \deg(\Dx_i \hook f)$ for all $i\in
\mathcal{K}$. Then the algebras
$\DS/\varphi(I)$ and $\DS/\varphi^{-1}(I)$ are also in the standard
form.\qed
\end{cor}
\section{Special forms of dual socle generators}\label{sec:specialforms}
Recall that $k$ is an algebraically closed field of characteristic
neither $2$ nor $3$.
In the previous section we mentioned that for every local Artin Gorenstein
algebra there exists a dual socle generator in the standard form, see Definition
\ref{ref:standardform:def}. In this section we will see that in most cases we
can say more about this generator. Our main aim is to put the generator in the
form $x^n + f$, where $f$ contain no monomial divisible by a ``high'' power of
$x$. We will use it to prove that families arising from certain ray
decompositions (see Definition \ref{ref:raydecomposition:def}) are flat.
We begin with an easy observation.
\begin{remark}\label{ref:removinglinearpart:remark}
Suppose that a polynomial $f\in \DP$ is such that
$H_{\Apolar{f}}(1)$ equals the number of variables in $\DP$. Then any linear form
in $\DP$ is a derivative of $f$. If $\deg f > 1$ then the
$\DS$-submodules $\DS f$ and $\DS(f - f_1
- f_0)$ are equal, so analysing these modules we may assume $f_1 =
f_0 = 0$, i.e.~the linear part of $f$ is zero.
Later we use this remark implicitly.
\end{remark}
The following Lemma~\ref{ref:topdegreetwist:lem} provides a method to
slightly improve the given dual socle generator. This
improvement is the building block of all other results in this section.
\begin{lem}\label{ref:topdegreetwist:lem}
Let $f\in \DP$ be a polynomial of degree $s$ and $A$ be the apolar algebra of
$f$. Suppose that $\Dx_1^s\hook f\neq 0$. For every $i$ let $d_i :=
\deg(\Dx_1\Dx_i
\hook f) + 2$.
Then $A$ is isomorphic to the apolar algebra of a polynomial $\hat{f}$ of
degree $s$, such that
$\Dx_1^s \hook \hat{f} = 1$ and $\Dx_1^{d_i - 1}\Dx_i \hook \hat{f} = 0$ for all
$i\neq 1$. Moreover, the leading forms of $f$ and $\hat{f}$ are equal up to a
non-zero constant. If $f$ is in the standard form, then $\hat{f}$ is also in the standard form.
\end{lem}
\begin{proof}
By multiplying $f$ by a non-zero constant we may assume that $\DPut{xx}{\Dx_1}^s \hook f = 1$.
Denote $I := \Dan{f}$.
Since $\deg(\Dxx\Dx_i\hook f) = d_i - 2$, the polynomial $\Dxx^{d_i -
1}\Dx_i\hook f = \Dxx^{d_i - 2}\hook\pp{\Dxx\Dx_i \hook f}$ is constant; we denote it by $\lambda_i$. Then
\[\pp{\Dxx^{d_i - 1}\Dx_i - \lambda_i\Dxx^{s}}\hook f = 0,\mbox{ so that
} \Dxx^{d_i - 1}\pp{\Dx_i - \lambda_i \Dxx^{s - d_i + 1}}\in I.\]
Define an automorphism $\varphi:\DS\to \DS$ by
\[
\varphi(\Dx_i) = \begin{cases}
\Dxx &\mbox{ if }i = 1\\
\Dx_i - \lambda_i \Dxx^{s - d_i + 1} &\mbox{ if } i\neq 1,
\end{cases}
\]
then $\alpha_1^{d_i - 1}\alpha_i \in \varphi^{-1}(I)$ for all $i > 1$. The
dual socle generator $\hat{f}$ of the algebra $\DS/\varphi^{-1}(I)$ has the
required form. We can easily check that the graded algebras of
$\DS/\varphi^{-1}(I)$ and $\DS/I$ are equal, in particular $\hat{f}$ and $f$
have the same leading form, up to a non-zero constant.
Suppose now that $f$ is in the standard form.
Let $i\in \{1, \ldots ,n \}$. Then $d_i = \deg(\Dxx\Dx_i\hook f) + 2\leq \deg(\Dx_i
\hook f) + 1$, so that $\deg(\Dxx^{s-d_i +1} \hook
f) \leq d_i - 1\leq \deg(\Dx_i\hook f)$.
Since $\varphi$ is
an automorphism of $\DS$, by Remark~\ref{ref:techbasicautos:remark} we may
apply Corollary~\ref{ref:semibasicautos:ref}
to $\varphi$. Then $S/\varphi(I)$ is in
the standard form, so $\hat{f}$ is in the standard form by definition.
\end{proof}
\begin{example}\label{ref:topdegreeexample:ex}
Let $f\in k[x_1, x_2, x_3, x_4]$ be a polynomial of degree $s$. Suppose
that the leading form $f_s$ of $f$ can be written as $f_s = x_1^s + g_s$
where $g_s\in k[x_2, x_3, x_4]$. Then $\deg(\Dx_1\Dx_i\hook f) \leq s -
3$ for all $i > 1$. Using Lemma~\ref{ref:topdegreetwist:lem} we produce $\hat{f} =
x_1^s + h$ such that the apolar algebras of $f$ and $\hat{f}$ are
isomorphic and
$\Dx_1^{s-2}\Dx_i\hook h = 0$ for all $i\neq 1$. Then $\Dx_1^{s-2}\hook h
= \lambda_1 x_1 + \lambda_2$, where $\lambda_i\in k$ for $i=1, 2$.
After adding a suitable derivative to $\hat{f}$, we may assume $\lambda_1
= \lambda_2 = 0$, i.e.~$\Dx_1^{s-2}\hook h = 0$.
\end{example}
\begin{example}\label{ref:standardformofstretched:ex}
Suppose that a local Artin Gorenstein algebra $A$ of socle degree $s$ has
Hilbert function equal to $(1, H_1, H_2, \dots, H_c, 1, \dots, 1)$. The
standard form of the dual socle generator of $A$ is
\[f = \DPut{ys}{x_1}^s + \kappa_{s-1}\Dys^{s-1} + \dots +
\kappa_{c+2}\Dys^{c+2} + g,\] where $\deg g\leq c+1$ and
$\kappa_{\bullet}\in k$. By adding a suitable
derivative we may furthermore make all $\kappa_{i} = 0$ and assume
that $\Dx_1^{c+1}\hook g = 0$. Using Lemma \ref{ref:topdegreetwist:lem} we
may also assume that $\Dx_{1}^{c}\Dx_j\hook g = 0$ for every $j\neq 1$ so
we may assume $\Dx_{1}^c \hook g = 0$, arguing as in
Example~\ref{ref:topdegreeexample:ex}. This gives a dual socle generator
\[f = x_1^s + g,\]
where $\deg g \leq c+1$ and $g$ does not contain monomials divisible by
$\Dys^c$.
\end{example}
The following proposition was proved in \cite{CN2stretched}
under the assumption that $k$ is algebraically closed of characteristic
zero and in \cite[Thm
5.1]{JelMSc} under the assumption that $k = \mathbb{C}$. For completeness we include the
proof (with no assumptions on $k$ other than listed at the beginning of this
section).
\begin{prop}\label{ref:squares:prop}
Let $A$ be an Artin local Gorenstein algebra of socle degree $s\geq 2$ such that the Hilbert function decomposition from
Theorem~\ref{ref:Hfdecomposition:thm} has $\Dhdvect{A, s-2} =
(0, q, 0)$. Then $A$ is isomorphic to the apolar
algebra of a polynomial $f$ such that $f$ is in the standard form and the
quadric part $f_2$ of $f$ is a sum of
$q$ squares of variables not appearing in $f_{\geq 3}$ and a quadric
in variables appearing in $f_{\geq 3}$.
\end{prop}
\begin{proof}
Let us take a standard dual socle generator $f\in \DP := k[x_1,\dots,x_n]$
of the algebra $A$. Now we will twist $f$ to obtain the required form of
$f_2$. We may assume that $H_{\Apolar{f}}(1) = n$.
If $s = 2$, then the claim follows from the fact that the quadric $f$ may be
diagonalised. Assume $s\geq 3$.
Let
$\DPut{parte}{e} := e(s-3) = \sum_{t=0}^{s-3} \Dhd{A, t}{1}$. We have
$\DPut{tote}{n} = e(s-2) = \Dparte + q$,
so that $f_{\geq 3}\in k[x_1,\dots,x_{\Dparte}]$ and $f_2\in
k[x_1,\dots,x_{\Dtote}]$. Note that $f_{\geq 3}$ is also in the standard
form, so that every linear form in $x_1, \ldots ,x_{\Dparte}$ is a
derivative of $f_{\geq 3}$, see Remark~\ref{ref:removinglinearpart:remark}.
First, we want to ensure that $\Dx_{\Dtote}^2\hook f \neq 0$. If
$\Dx_{\Dtote}\hook f\in
k[x_1,\dots,x_{\Dparte}]$ then there exists an operator $\partial\in \DmmS^2$ such
that $\pp{\Dx_{\Dtote} - \partial}\hook f = 0$. This contradicts the fact
that $f$ was in the standard form (see the discussion in Example~\ref{ref:iarrfavorite:ex}).
So we get that $\Dx_{\Dtote} \hook f$ contains some $x_r$ for $r >
\Dparte$, i.e.~$f$
contains a monomial $x_rx_{\Dtote}$. A change
of variables involving only $x_r$ and $x_{\Dtote}$ preserves the standard form and gives
$\Dx_{\Dtote}^2 \hook f \neq 0$.
Applying Lemma \ref{ref:topdegreetwist:lem} to $x_{\Dtote}$ we see that $f$ may
be taken to be in the form $\hat{f} + x_{\Dtote}^2$, where $\hat{f}$ does not contain
$x_{\Dtote}$, i.e. $\hat{f}\in k[x_1, \ldots ,x_{n-1}]$. We repeat the argument for $\hat{f}$.
\end{proof}
\begin{example}
If $A$ is an algebra of socle degree $3$, then $H_A = (1, n, e, 1)$ for
some $n$, $e$. Moreover, $n\geq e$ and the symmetric decomposition of $H_A$ is $(1, e, e, 1)
+ (0, n-e, 0)$. By Proposition \ref{ref:squares:prop} we see that $A$ is
isomorphic to the apolar algebra of
\[
f + \sum_{e<i\leq n} x_i^2,
\]
where $f\in k[x_1, \ldots ,x_{e}]$.
This claim was first proved by Elias and Rossi, see~\cite[Thm
4.1]{EliasRossiShortGorenstein}.
\end{example}
\subsection{Irreducibility for fixed Hilbert function in two variables.}
Below we analyse local Artin Gorenstein algebras with Hilbert function $(1, 2,
2, \ldots )$. Such algebras are classified up to isomorphism in
\cite{EliasVallaAlmostStretched}, but rather than the classification itself we need
to know the geometry of their parameter space, which is analysed (among other
such spaces) in \cite{iarrobino_punctual}.
\begin{prop}\label{ref:irreducibleintwovariables:prop}
Let $H = (1, 2, 2, *, \dots, *, 1)$ be a vector of length
$s+1$. The set of
polynomials $f\in k[x_1, x_2]$ such that $H_{\Apolar{f}} = H$ constitutes
an irreducible, smooth subscheme of the affine space $k[x_1, x_2]_{\leq s}$.
A general member of
this set has, up to an automorphism of $\DP$ induced by an automorphism of $\DS$,
the form $f + \partial\hook f$, where $f = x_1^s + x_2^{s_2}$ for some
$s_2 \leq s$.
\end{prop}
\begin{proof}
The irreducibility and smoothness are proved in \cite[3.13]{iarrobino_punctual}.
In the case $H = (1, 1, 1, \ldots , 1)$ the claim (with $s_2 = 0$) follows directly from
the existence of the standard form of a polynomial. Further in the proof we assume $H(1) = 2$.
Let us take a general polynomial $f$ such that
$H_{\Apolar{f}} = H$. Then $\Dan{f} = (q_1, q_2)$ is a complete
intersection, where $q_1\in\DS$ has order $2$, i.e.~$q_1\in
\DmmS^2\setminus \DmmS^3$. Since
$f$ is general, we may assume that the quadric part of $q_1$ has maximal
rank, i.e. rank two, see also
\cite[Thm~3.14]{iarrobino_punctual}. Then after a change of variables $q_1
\equiv \Dx_1\Dx_2 \mod \DmmS^3$.
Since the leading form $\Dx_1\Dx_2$ of $q_1$ is reducible, $q_1 = \delta_1
\delta_2$ for some $\delta_1, \delta_2\in \DS$ such that $\delta_i \equiv
\Dx_i \mod \DmmS^2$ for $i=1,2$,
see~e.g.~\cite[Thm~16.6]{Kunz_plane_algebraic_curves}. After an
automorphism of $\DS$ we may assume $\delta_i = \Dx_i$, then $\Dx_1\Dx_2 =
q_1$ annihilates $f$, so that it has the required form.
\end{proof}
\subsection{Homogeneous forms and secant
varieties}\label{sec:homogeneousforms}
\def\kchar{\operatorname{char}\, k}
It is well-known that if $F\in \DP_s$ is a form such that $H_{\Apolar{F}} = (1, 2,
\dots, 2, 1)$ then the standard form of $F$ is either $x_1^{s} +
x_2^s$ or $x_1^{s-1}x_2$. In particular the set of such forms in $\DP$ is
irreducible
and in fact it is open in the so-called secant variety. This section is
devoted to some generalisations of this result for the purposes of
classification of leading forms of polynomials in $\DP$.
The following proposition is well-known if the base field is of characteristic
zero (see \cite[Thm 4]{BGIComputingSymmetricRank} or \cite{LO}), but we could
not find a reference for the positive characteristic case, so for completeness we
include the proof.
\begin{prop}\label{ref:thirdsecant:prop}
Suppose that $\DPut{ff}{F}\in k[x_1, x_2, x_3]$ is a homogeneous polynomial
of degree $\Ddegf\geq 4$.
The following conditions are equivalent
\begin{enumerate}
\item the algebra $\Apolar{\Dff}$ has Hilbert function $H$ beginning
with $H(1) = H(2) = H(3) = 3$, i.e.~$H = (1, 3, 3, 3,
\ldots )$,
\item after a linear change of variables $\Dff$ is in one of the forms
\[x_1^{\Ddegf} + x_2^{\Ddegf} + x_3^{\Ddegf},\qquad x_1^{\Ddegf -
1}x_2 + x_3^{\Ddegf},\qquad x_1^{\Ddegf - 2}(x_1 x_3 + x_2^2).\]
\end{enumerate}
Furthermore, the set of forms in $k[x_1, x_2, x_3]_{\Ddegf}$ satisfying
the above conditions is irreducible.
\end{prop}
\begin{proof}
For characteristic zero case see \cite{LO} and references therein.
\def\span#1{\langle #1 \rangle}
\def\Dtmpy{\theta}
Let $\DS = k[\Dx_1, \Dx_2,
\Dx_3]$ be a polynomial ring dual to $\DP$. This notation is inconsistent
with the global notation, but it is more readable than $\DSpoly$.
Let $I := \annn{\DS}{\Dff}$ and $I_2 := \span{\Dtmpy_1, \Dtmpy_2,
\Dtmpy_3}
\subseteq (\DS)_2$ be the linear space of operators of degree $2$
annihilating $\Dff$. Let $A := \DS/I$, $J := (I_2)
\subseteq \DS$ and $B := \DS/J$.
Since $A$ is a quotient of $B$ and has length greater than $3\cdot 3 > 2^3$, the ideal $J$ is not a complete
intersection.
We will prove that the graded ideal $J$ is saturated and defines a zero-dimensional scheme of degree
$3$ in $\mathbb{P}^2 = \Proj \DS$. First, $3 = H_A(3)
\leq H_{B}(3) \leq 4$ by Macaulay's Growth Theorem. If
$H_B(3) = 4$ then by Lemma~\ref{ref:P1gotzmann:lem} and
Remark~\ref{ref:GorensteinSaturated:rmk} we have
$H_A(1) = 2$, a contradiction. We have proved that $H_{B}(3) = 3$.
Now we want to prove that $H_B(4) = 3$. By Macaulay's Growth Theorem applied to
$H_B(3) = 3$ we have $H_B(4) \leq 3$. If $\Ddegf > 4$ then $H_A(4) = 3$,
so $H_B(4) \geq 3$. Suppose $\Ddegf = 4$. By Buchsbaum-Eisenbud result
\cite{BuchsbaumEisenbudCodimThree} we know that the minimal number of
generators of $I$ is odd. Moreover, we know that $A_n = B_n$ for $n < 4$,
thus the generators of $I$ have degree two or four. Since $I_2$ is not a
complete intersection, there are at least two generators of degree
$4$, so $H_B(4) \geq H_A(4) + 2 = 3$.
From $H_B(3) = H_B(4) = 3$ by Gotzmann's Persistence Theorem we see that $H_B(m) =
3$ for all $m\geq 1$. Thus the scheme $\Gamma := V(J) \subseteq \Proj
k[\Dx_1, \Dx_2, \Dx_3]$ is finite of degree $3$ and $J$ is saturated. In
particular, the ideal $J = I(\Gamma)$ is contained in $I$.
We will use $\Gamma$ to compute the possible forms of $F$, in the spirit
of Apolarity Lemma, see \cite[Lem~1.15]{iakanev}. There are four possibilities for $\Gamma$:
\begin{enumerate}
\item $\Gamma$ is a union of three distinct, non-collinear points. After a change of basis $\Gamma
= \left\{ [1:0:0] \right\} \cup \left\{ [0:1:0] \right\} \cup
\left\{ [0:0:1] \right\}$, then $I_2 = (\Dx_1\Dx_2, \Dx_2\Dx_3,
\Dx_3\Dx_1)$ and $\Dff = x_1^{\Ddegf} + x_2^{\Ddegf} + x_3^{\Ddegf}$.
\item $\Gamma$ is a union of a point and scheme of length two, such
that $\span{\Gamma} = \mathbb{P}^2$. After
a change of basis $I_{\Gamma} = (\Dx_1^2, \Dx_1\Dx_2, \Dx_2\Dx_3)$,
so that $\Dff = x_3^{\Ddegf-1}x_1 + x_2^{\Ddegf}$.
\item $\Gamma$ is irreducible with support $[1:0:0]$ and it is not a
$2$-fat point. Then $\Gamma$ is Gorenstein and so $\Gamma$ may
be taken as the curvilinear scheme defined by $(\Dx_3^2, \Dx_2\Dx_3,
\Dx_1\Dx_3 - \Dx_2^2)$. Then, after a linear change of variables,
$\Dff = x_1^{\Ddegf-1}x_3 +
x_2^2x_1^{\Ddegf-2}$.
\item $\Gamma$ is a $2$-fat point supported at $[1:0:0]$. Then
$I_{\Gamma} = (\Dx_2^2, \Dx_2\Dx_3, \Dx_3^2)$, so $F =
x_1^{\Ddegf-1}(\lambda_2 x_2 + \lambda_3 x_3)$ for some $\lambda_2,
\lambda_3\in k$. But then there is a degree one operator in $\DS$
annihilating $F$, a contradiction.
\end{enumerate}
The set of $\Dff$ which are sums of three powers of
linear forms is irreducible. To see that the forms satisfying the
assumptions of the Proposition constitute an irreducible
subset of $\DP_{\Ddegf}$ we observe that every $\Gamma$ as above is smoothable by
\cite{CEVV}. The flat family proving the smoothability of $\Gamma$ induces a family
$\Dff_t \to \Dff$, such that $\Dff_{\lambda}$ is a sum of three powers of linear
forms for $\lambda\neq 0$, see \cite[Corollaire in Section 2]{emsalem}. See also \cite{bubu2010} for a generalisation of
this method.
\end{proof}
\begin{prop}\label{ref:fourthsecant:prop}
Let $\Ddegf \geq 4$.
Consider the set $\DPut{set}{\mc{S}}$ of all forms $\DPut{ff}{F}\in k[x_1,
x_2, x_3, x_4]$ of degree $\Ddegf$
such that the apolar algebra of $\Dff$ has Hilbert function $(1, 4, 4,
4, \dots, 4, 1)$. This set is irreducible and its general member has the
form $\ell_1^{\Ddegf}
+ \ell_2^{\Ddegf} + \ell_3^{\Ddegf} + \ell_4^{\Ddegf}$, where $\ell_1$,
$\ell_2$, $\ell_3$, $\ell_4$
are linearly independent linear forms.
\end{prop}
\begin{proof}
First, the set $\DPut{setzero}{\mc{S}_0}$ of forms equal to $\ell_1^{\Ddegf}
+ \ell_2^{\Ddegf} + \ell_3^{\Ddegf} + \ell_4^{\Ddegf}$, where $\ell_1$, $\ell_2$, $\ell_3$, $\ell_4$
are linearly independent linear forms, is irreducible and contained in
$\Dset$. It would be enough
to prove that $\Dset$ lies in the closure of $\Dsetzero$.
We follow the proof of Proposition
\ref{ref:thirdsecant:prop}, omitting some details which can be found there.
Let $S = k[\Dx_1, \Dx_2, \Dx_3, \Dx_4]$, $I := \Dan{\Dff}$ and $J :=
(I_2)$. Set $A = S/I$ and $B = S/J$. Then
$H_B(2) = 4$ and $H_B(3)$ is either $4$ or $5$. If $H_B(3) = 5$, then by
Lemma~\ref{ref:P1gotzmann:lem} we have $H_B(1) = 3$, a contradiction. Thus
$H_B(3) = 4$.
Now we would like to prove $H_B(4) = 4$. By
Macaulay's Growth Theorem $H_B(4) \leq 5$. By
Lemma~\ref{ref:P1gotzmann:lem} $H_B(4) \neq 5$, thus $H_B(4) \leq 4$. If
$\Ddegf > 4$ then $H_B(4) \geq H_A(4) = 4$, so we concentrate on
the case $\Ddegf = 4$.
Let us write the minimal free resolution of $A$, which is symmetric by
\cite[Cor 21.16]{EisView}:
\[
0\to S(-8) \to S(-4)^{\oplus a}\oplus S(-6)^{\oplus 6} \to
S(-3)^{\oplus b}\oplus S(-4)^{\oplus c} \oplus S(-5)^{\oplus b}\to
S(-2)^{\oplus 6} \oplus S(-4)^{\oplus a}\to S.
\]
Calculating $H_A(3) = 4$ from the resolution, we get $b = 8$. Calculating
$H_A(4) = 1$ we obtain $6 - 2a + c = 0$. Since $1 + a = H_B(4) \leq 4$ we have $a\leq 3$,
so $a = 3$, $c = 0$ and $H_B(4) = 4$.
Now we calculate $H_B(5)$. If $\Ddegf > 5$ then $H_B(5) = 4$ as before.
If $\Ddegf = 4$ then extracting syzygies of $I_2$ from the above resolution
we see that $H_B(5) = 4 + \gamma$, where $0\leq \gamma\leq 8$; on the other hand
$H_B(4) = 4$, so Remark~\ref{ref:MacaulaysBoundedGrowth:rmk} gives $H_B(5)\leq 4$, thus
$H_B(5) = 4$ and $\gamma = 0$.
If $\Ddegf = 5$, then the resolution of $A$ is
\[
0\to S(-9)\to S(-4)^{\oplus 3}\oplus S(-7)^{\oplus 6}\to S(-3)^{\oplus
8}\oplus S(-6)^{\oplus 8}\to S(-5)^{\oplus 3}\oplus S(-2)^{\oplus 6}
\to S.
\]
So $H_B(5) = 56 - 6\cdot 20 + 8\cdot 10 - 3\cdot 4 = 4$. Thus, as in the previous case we see that $J$ is the
saturated ideal of a scheme $\Gamma$ of degree $4$. Then $\Gamma$
is smoothable by \cite{CEVV} and its smoothing induces a family $\Dff_t\to
\Dff$, where $\Dff_{\lambda}\in \Dsetzero$ for $\lambda\neq 0$.
\end{proof}
The following Corollary~\ref{ref:equationforsecant:cor} is a consequence of
Proposition~\ref{ref:fourthsecant:prop}. This corollary is not used in the
proofs of the main results, but it is of some independent interest and shows
another connection with secant varieties. For simplicity and to refer to some
results from \cite{LO}, we assume that $k = \mathbb{C}$, but
the claim holds for all fields of characteristic zero or large
enough.
\def\catal{\varphi_{a, \Ddegf-a}}
To formulate the claim we introduce catalecticant matrices.
Let $\varphi_{a, \Ddegf-a}: \DS_{a}\times \DP_{\Ddegf}\to\DP_{\Ddegf-a}$ be the contraction mapping
applied to homogeneous polynomials of degree $\Ddegf$. For $F\in \DP_\Ddegf$
we obtain $\catal(F): \DS_a \to \DP_{\Ddegf-a}$, whose matrix is called the \emph{$a$-catalecticant
matrix}. It is straightforward to see that $\rk \catal(F) =
H_{\Apolar{F}}(a)$.
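For instance, for $F = x_1^3 + x_2^3$ and $a = 1$ the map $\varphi_{1, 2}(F)$ sends $\Dx_1\mapsto x_1^2$ and $\Dx_2\mapsto x_2^2$, so the $1$-catalecticant matrix of $F$ has rank $2 = H_{\Apolar{F}}(1)$.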
\def\floor#1{\left\lfloor #1 \right\rfloor}
\begin{cor}\label{ref:equationforsecant:cor}
Let $\Ddegf \geq 4$ and $k = \mathbb{C}$.
The fourth secant variety to the $\Ddegf$-th Veronese reembedding of
$\mathbb{P}^n$ is the subset $\sigma_4(v_\Ddegf(\mathbb{P}^n)) \subseteq \mathbb{P}(\DP_\Ddegf)$ set-theoretically defined
by the condition $\rk \catal(F) \leq 4$, where $a = \floor{\Ddegf/2}$.
\end{cor}
\begin{proof}
\def\Dh#1{H(#1)}
Since $H_{\Apolar{F}}(a) \leq 4$ for $F$ which is a sum of four powers of
linear forms, by semicontinuity every $F\in \sigma_4(v_\Ddegf(\mathbb{P}^n))$ satisfies the
above condition.
Let $F\in \DP_\Ddegf$ be a form satisfying $\rk \catal(F) \leq 4$. Let $A = \Apolar{F}$ and $H = H_A$ be the Hilbert
function of $A$. We want to reduce to the case where $\Dh{n} = 4$ for all $0 < n < \Ddegf$.
First we show that $\Dh{n} \geq 4$ for all $0 < n < \Ddegf$.
If $\Dh{1}\leq 3$, then the claim follows from~\cite[Thm~3.2.1~(2)]{LO},
so we assume $\Dh{1} \geq 4$.
Suppose that for some $n$ satisfying $4 \leq n < \Ddegf$ we
have $\Dh{n} < 4$. Then by Remark~\ref{ref:MacaulaysBoundedGrowth:rmk} we
have $\Dh{m} \leq \Dh{n}$ for
all $m \geq n$, so that $\Dh{1} = \Dh{\Ddegf-1} < 4$, a contradiction. Thus $\Dh{n}\geq
4$ for all $n\geq 4$. Moreover, $\Dh{3}\geq 4$ by Macaulay's Growth Theorem. Suppose now
that $\Dh{2}
< 4$. By Theorem~\ref{ref:MacaulayGrowth:thm} the
only possible case is $\Dh{2} = 3$ and $\Dh{3} = 4$. But then $\Dh{1} =2 <
4$ by Lemma~\ref{ref:P1gotzmann:lem}, a contradiction. Thus we have proved
that
\begin{equation}\label{eq:lowerbound}
\Dh{n} \geq 4\quad \mbox{for all}\quad 0 < n < \Ddegf.
\end{equation}
We have $\Dh{a} = 4$. If $\Ddegf\geq 8$, then $a\geq 4$, so by
Remark~\ref{ref:MacaulaysBoundedGrowth:rmk} we have $\Dh{n}
\leq 4$ for all $n > a$. Then by the symmetry $\Dh{n} = \Dh{\Ddegf - n}$ we have $\Dh{n}
\leq 4$ for all $n$. Together with $\Dh{n}\geq 4$ for $0 < n < \Ddegf$, we have
$\Dh{n} = 4$ for $0 < n < \Ddegf$. Then $F\in \sigma_4(v_\Ddegf(\mathbb{P}^n))$ by
Proposition~\ref{ref:fourthsecant:prop}.
If $a = 3$ (i.e.~$\Ddegf = 6$ or $\Ddegf = 7$), then $\Dh{4} \leq 4$ by
Lemma~\ref{ref:P1gotzmann:lem} and we finish the proof as in the case
$\Ddegf
\geq 8$.
If $\Ddegf = 5$, then $a = 2$ and the Hilbert function of $A$ is $(1, n, 4, 4,
n, 1)$. Again by Lemma~\ref{ref:P1gotzmann:lem}, we have $n\leq 4$, thus
$n = 4$ by \eqref{eq:lowerbound} and
Proposition~\ref{ref:fourthsecant:prop} applies.
If $\Ddegf = 4$, then $H = (1, n, 4, n, 1)$. Suppose $n\geq 5$, then
Lemma~\ref{ref:P1gotzmann:lem} gives $n\leq 3$, a contradiction. Thus
$n = 4$ and Proposition~\ref{ref:fourthsecant:prop} applies also to this case.
\end{proof}
Note that for $s\geq 8$ the Corollary~\ref{ref:equationforsecant:cor} was also
proved, in the case $k = \mathbb{C}$, in \cite[Thm~1.1]{bubu2010}.
\section{Ray sums, ray families and their flatness}\label{sec:ray}
Recall that $k$ is an algebraically closed field of characteristic
neither $2$ nor $3$.
Since $k[[\Dx_i]]$ is a discrete valuation ring, all its non-zero ideals have the form
$\Dx_i^{\DPut{ord}{\nu}}k[[\Dx_i]]$ for some $\Dord \geq 0$. We use this property to construct certain
decompositions of ideals in the power series ring $\DS$.
\def\Drord#1#2{\operatorname{rord}_{#1}\left( #2 \right)}
\def\Dcomp#1{\mathfrak{p}_{#1}}
\begin{defn}\label{ref:order:def}
Let $I$ be an ideal of finite colength in the power series ring
$\DPut{tmpseries}{k[[\Dx_1,\dots,\Dx_n]]}$
and $\pi_i:\Dtmpseries \onto k[[\Dx_i]]$ be the projection defined by
$\pi_i(\Dx_j) = 0$ for $j\neq i$ and $\pi_i(\Dx_i) = \Dx_i$.
The $i$-th \emph{ray order} of $I$ is a non-negative
integer $\Dord = \Drord{i}{I}$ such that $\pi_i(I) = (\Dx_i^\Dord )$.
\end{defn}
By the discussion above, the ray order is
well-defined.
Below by $\Dcomp{i}$ we denote the kernel of $\pi_i$; this is the ideal
generated by all variables except for $\Dx_i$.
\begin{defn}\label{ref:raydecomposition:def}
Let $I$ be an ideal of finite colength in the power series ring
$\DS = \Dtmpseries$. A \emph{ray decomposition} of $I$ with respect to
$\Dx_i$ consists of
an ideal $J \subseteq S$, such that $J \subseteq I \cap \Dcomp{i}$, together with an
element $q\in \Dcomp{i}$ and $\Dord\in \mathbb{Z}_{+}$ such that
\[
I = J + (\Dx_i^{\Dord} - q)\DS.
\]
\end{defn}
Note that from Definition \ref{ref:order:def} it follows that for every
$I$ and $i$ a ray decomposition (with $J = I\cap \Dcomp{i}$) exists and that
$\Dord = \Drord{i}{I}$ for every ray decomposition.
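For example, let $\DS = k[[\Dx_1, \Dx_2]]$ and $I = (\Dx_1^3 - \Dx_2, \Dx_2^2)$. Then $\pi_1(I) = (\Dx_1^3)$, so $\Drord{1}{I} = 3$, and
\[
I = (\Dx_2^2)\DS + (\Dx_1^3 - \Dx_2)\DS
\]
is a ray decomposition of $I$ with respect to $\Dx_1$, with $J = (\Dx_2^2)\DS$ and $q = \Dx_2$.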
\begin{defn}\label{ref:rayfamily:def}
\def\DJpoly{J_{poly}}
Let $\DS = k[[\Dx_1, \ldots ,\Dx_n]]$ and $\DSpoly = k[\Dx_1, \ldots
,\Dx_n] \subseteq \DS$.
Let $I = J + \left( \Dx_{i}^\Dord - q \right)\DS$ be a ray
decomposition of a finite colength ideal $I \subseteq \DS$. Let $\DJpoly =
J\cap \DSpoly$. The associated \emph{lower ray family} is
\[
k[t] \to \frac{\DSpoly[t]}{\DJpoly[t] + (\Dx_i^\Dord -
t\cdot\Dx_i - q)\DSpoly[t]},
\]
and the associated \emph{upper ray family} is
\[
k[t] \to \frac{\DSpoly[t]}{\DJpoly[t] + (\Dx_i^\Dord -
t\cdot\Dx_i^{\Dord-1} -
q)\DSpoly[t]}.
\]
If the lower (upper) family is flat over $k[t]$ we will call it a \emph{lower (upper) ray degeneration}.
\end{defn}
Note that the lower and upper ray families agree for $\Dord = 2$.
\begin{remark}
In all considered cases the quotient $\DSpoly/J_{poly}$ will be finite over
$k$, so that every ray family will be finite (thus projective) over $k[t]$;
then every ray degeneration will give a morphism to the Hilbert scheme. We
implicitly leave this check to the reader.
\end{remark}
\begin{remark}\label{ref:fibers:rmk}
In this remark for simplicity we assume that $i = 1$ in
Definition~\ref{ref:rayfamily:def}. Below we
write $\Dx$ instead of $\Dx_1$.
Let us look at the fibers of the upper ray family from this definition
in a special case, when $\Dx\cdot q\in J$.
The fiber over $t=0$ is isomorphic to $\DS/I$.
Let us take $\lambda\neq 0$ and analyse the fiber at $t=\lambda$. This
fiber is supported at $(0, 0, \dots, 0)$ and at $(0, \dots, 0, \lambda, 0, \dots,
0)$, where $\lambda$ appears on the $i$-th position.
In particular, this shows that the
existence of an upper ray degeneration proves that the algebra $\DS/I$ is
limit-reducible; this is true also for the lower ray degeneration.
Now $\Dx^{\Dord+1} - \lambda\Dx^\Dord$ is in the ideal defining the fiber
of the upper ray family over $t = \lambda$. One
may compute that
near $(0, \dots, 0)$ the ideal defining the fiber is
$(\lambda\Dx^{\Dord-1} + q) + J$. Similarly
near $(0, \dots, 0, \lambda, 0, \dots, 0)$ it is $(\Dx - \lambda) + (q) +
J$. The argument is similar (though easier) to the proof of
Proposition~\ref{ref:fibersofray:prop}.
\end{remark}
Most of the families constructed in \cite{CEVV} and \cite{cn09} are ray
families.
\begin{defn}\label{ref:raysum:def}
For a non-zero polynomial $f\in \DP$ and $d\geq 2$ the $d$-th \emph{ray sum of $f$ with respect to
a derivation $\partial\in \DmmS$} is a polynomial $g\in \DP[x]$ given by
\[g = f + \DPut{specvar}{x}^{d}\cdot\partial \hook f +
\Dspecvar^{2d}\cdot\partial^2\hook f +
\Dspecvar^{3d}\cdot\partial^{3}\hook f + \dots\ .\]
\end{defn}
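For example, the second ray sum of $f = x_1^3$ with respect to $\partial = \Dx_1^2$ is
\[
g = x_1^3 + x^2\cdot\pp{\Dx_1^2\hook x_1^3} = x_1^3 + x^2x_1,
\]
since $\partial^2\hook f = \Dx_1^4\hook x_1^3 = 0$.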
The following proposition shows that a ray sum naturally induces a ray
decomposition, which can be computed explicitly.
\begin{prop}\label{ref:raysumideal:prop}
Let $g$ be the $d$-th ray sum of $f\neq 0$ with respect to $\partial\in \DmmS$,
see Definition~\ref{ref:raysum:def}. Let $\Dx$ be an
element dual to $x$, so that $P[x]$ and $\DPut{T}{T}:= \DS
[[\Dx]]$ are dual.
The annihilator of $g$ in $\DT$ is
given by the formula
\begin{equation}\label{eq:anndecomposition}
\annn{\DT}{g} = \annn{\DS}{f} + \pp{\sum_{i=1}^{d-1} k\Dx^i}
\annn{\DS}{\partial\hook f} + (\Dx^{d} - \partial)\DT,
\end{equation}
where the sum denotes the sum of $k$-vector spaces. In particular,
the ideal $\annn{\DT}{g} \subseteq \DT$ is generated by $\annn{\DS}{f}$,
$\Dx\annn{\DS}{\partial\hook f}$ and $\Dx^d - \partial$.
The formula \eqref{eq:anndecomposition} is a ray decomposition of
$\annn{\DT}{g}$ with respect to $\Dx$ and with $J = \annn{\DS}{f}\DT +
\Dx\annn{\DS}{\partial\hook f}\DT$ and $q = \partial$.
\end{prop}
\begin{proof}
It is straightforward to see that the right hand side of Equation
\eqref{eq:anndecomposition} lies in $\annn{\DT}{g}$.
Let us
take any $\partial'\in \annn{\DT}{g}$. Reducing the powers of $\Dx$ using
$\Dx^{d} - \partial$ we can write
\[
\partial' = \sigma_0 + \sigma_1\Dx + \dots +
\sigma_{d-1}\Dx^{d-1},
\]
where $\sigma_{\bullet}$ do not contain $\Dx$. The action of this derivation
on $g$ gives
\[
0 = \sigma_0 \hook f + \Dspecvar \sigma_{d-1}\partial \hook f +
\Dspecvar^{2} \sigma_{d-2}\partial\hook f + \dots +
\Dspecvar^{d-1}\sigma_1\partial \hook f + \Dspecvar^d\left( \dots \right).
\]
We see that $\sigma_0\in \annn{\DS}{f}$ and $\sigma_i \in
\annn{\DS}{\partial\hook f}$ for $i \geq 1$, so the equality
is proved. It is also clear that $J \subseteq \DmmS$ and $\annn{\DT}{g} = J + (\Dx^d -
\partial)\DT$, so that indeed we obtain a ray decomposition.
\end{proof}
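To make this explicit in a toy case (not used later), take $f = x_1^2x_2$,
$\partial = \Dx_2$ and $d = 2$, so that $g = x_1^2x_2 + x^2x_1^2$. Here
$\annn{\DS}{f} = (\Dx_1^3, \Dx_2^2)$ and $\annn{\DS}{\partial\hook f} =
\annn{\DS}{x_1^2} = (\Dx_1^3, \Dx_2)$, so Proposition~\ref{ref:raysumideal:prop}
gives
\[
\annn{\DT}{g} = \left(\Dx_1^3,\ \Dx_2^2,\ \Dx\Dx_2,\ \Dx^{2} - \Dx_2\right),
\]
the generator $\Dx\Dx_1^3$ being redundant; a direct check confirms this and
shows that the quotient has dimension $9$.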
\begin{remark}\label{ref:Hilbfunccouting:rmk}
It is not hard to compute the Hilbert function of the apolar algebra of
a ray sum in some special cases. We mention one
such case below.
Let $f\in \DP$ be a polynomial satisfying $f_2 = f_1 = f_0 = 0$ and
$\partial\in \DmmS^2$ be such that $\partial \hook f = \ell$ is a linear
form, so that $\partial^2\hook f = 0$. Let $A = \Apolar{f}$ and $B =
\Apolar{f + x^2\ell}$. Then $H_A$ and $H_B$ differ only in that
$H_B(m) = H_A(m) + 1$ for $m=1, 2$. The $f_2 = f_1 = f_0 = 0$
assumption is needed to ensure that the degrees of $\partial\hook f$ and
$\partial\hook (f + x^2\ell)$ are equal for all $\partial$ not
annihilating $f$.
\end{remark}
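For a concrete instance of the above (included only as an illustration): take
$f = x_1^3$ and $\partial = \Dx_1^2\in\DmmS^2$, so that $\ell = \partial\hook f
= x_1$ and $\partial^2\hook f = 0$. A direct computation gives
$H_{\Apolar{x_1^3}} = (1,1,1,1)$ and $H_{\Apolar{x_1^3 + x^2x_1}} = (1,2,2,1)$,
so the two Hilbert functions indeed differ exactly by one in degrees $1$ and
$2$.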
\subsection{Flatness of ray families}
\begin{prop}\label{ref:raysumflatness:prop}
\def\DTpoly{\DT_{poly}}
\def\DJpoly{J_{poly}}
Let $g$ be the $d$-th ray sum of $f$ with respect to $\partial$. Then the
corresponding upper and lower ray families are flat. Recall that these
families are explicitly given as
\begin{equation}\label{eq:upperrayfamily}
k[t] \to \frac{\DTpoly[t]}{\DJpoly[t] + (\Dx^{d} - t\Dx^{d-1} -
\partial)\DTpoly[t]}\quad
\mbox{ (upper ray family),}
\end{equation}
\begin{equation}\label{eq:lowerrayfamily}
k[t] \to \frac{\DTpoly[t]}{\DJpoly[t] + (\Dx^{d} -t\Dx - \partial)\DTpoly[t]}\quad\quad
\mbox{ (lower ray family),}
\end{equation}
where $\DTpoly$ is the fixed polynomial subring of $\DT$.
\end{prop}
\begin{proof}
\def\DTpoly{\DT_{poly}}
We will prove the flatness of the Family \eqref{eq:lowerrayfamily}. We leave
the case of Family \eqref{eq:upperrayfamily} to the reader.
We want to use Proposition~\ref{ref:flatelementary:prop}. To simplify
notation let $J := J_{poly}$.
Denote by $\DPut{tmpII}{\mathfrak{I}}$ the ideal defining the family and
suppose that its element lies in $(t-\lambda)$, for some $\lambda\in k$.
Write this element as $i + i_2\DPut{spec}{\pp{\Dx^{d} - t\Dx -
\partial}}$, where $i\in J[t]$, $i_2\in \DTpoly[t]$. Subtracting an element of
$(t-\lambda)\DtmpII$ we
may assume that $i\in J$, $i_2\in \DTpoly$, see
Remark~\ref{ref:flatnessremovet:remark}.
Then $i + i_2\DPut{spec}{(\Dx^d - \lambda\Dx - \partial)} = 0$.
By definition $J$ is homogeneous with respect to the grading by $\Dx$, see
Proposition~\ref{ref:raysumideal:prop}. More
precisely it is equal to $J_0 + J_1\Dx$, where $J_0, J_1$ are generated
by elements not containing $\Dx$. Moreover, $\partial J \subseteq J_0$.
Since $-i = i_2\Dspec \in J$, by Remark~\ref{ref:decompositionhomog:rmk} we
have $i_2\Dx^d\in J$. Then $i_2\Dx\in J$, thus
$i_2(\Dx^d - t\Dx)\in J[t] \subseteq(\DtmpII \cap \DTpoly)[t]$. Since $i_2 \partial\in
\DtmpII \cap \DTpoly$ by definition, this
implies that $i + i_2(\Dx^d - t\Dx - \partial)\in J[t] \subseteq (\DtmpII\cap
\DTpoly)[t]$. Now the flatness follows from
Proposition~\ref{ref:flatelementary:prop}.
\end{proof}
\begin{prop}\label{ref:fibersofray:prop}
Let us keep the notation of Proposition \ref{ref:raysumflatness:prop}.
Let $\lambda\in k\setminus\left\{ 0 \right\}$.
The fibers of the Family \eqref{eq:upperrayfamily} and Family
\eqref{eq:lowerrayfamily} over $t-\lambda$ are
reducible.
Suppose that $\partial^2\hook f = 0$ and the characteristic of $k$ does
not divide $d-1$. The fiber of the Family \eqref{eq:lowerrayfamily} over $t-\lambda$ is
isomorphic to \[\Spec \Apolar{f} \sqcup \left(\Spec \Apolar{\partial\hook
f}\right)^{\sqcup d-1}.\]
\end{prop}
\begin{proof}
Below we denote by $\alpha$ the coordinate corresponding to $\Dx$.
For both families the support of the fiber over $t - \lambda$ contains the
origin. The support of the fiber of Family \eqref{eq:upperrayfamily} contains
furthermore the point with $\alpha = \lambda$ and all other coordinates equal to
zero. The support of the fiber of Family \eqref{eq:lowerrayfamily} contains the
points with $\alpha = \omega$, where $\omega^{d-1} = \lambda$, and all other
coordinates equal to zero.
Now let us concentrate on Family \eqref{eq:lowerrayfamily} and on the case
$\partial^2\hook f = 0$.
The support of the fiber over $t-\lambda$ consists of $(0,\dots,0,0)$ and
the points $(0, \dots, 0, \omega)$, where $\omega$ runs over the $(d-1)$-th roots of
$\lambda$; these roots are pairwise distinct because of the characteristic assumption.
We will analyse the support point by point.
By hypothesis $\partial\in \Dan{\partial\hook f}$, so that $\alpha\cdot \partial\in J$, thus $\alpha^{d+1} -
\lambda\cdot \alpha^2$ is in the ideal $I$ of the fiber over $t = \lambda$.
\def\DTpoly{\DT_{poly}}
Near $(0,0,\dots,0)$ the element $\alpha^{d-1} - \lambda$ is invertible, so
$\alpha^2$ is in the localisation of the ideal $I$, thus $\alpha + \lambda^{-1}\partial$ is in the
ideal. Now we check that the localisation of $I$ is equal to $\Dan{f} +
(\alpha +
\lambda^{-1}\partial)\DTpoly$. Explicitly, one should check that
\[
\pp{\Dan{f} + (\alpha + \lambda^{-1}\partial)\DTpoly}_{(0, \ldots ,0)}
= \pp{\Dan{f} + (\Dx^{d}
-\lambda\Dx - \partial)\DTpoly}_{(0, \ldots ,0)}.
\]
Then the stalk of the fiber at $(0,
\ldots , 0)$ is isomorphic to $\Spec \Apolar{f}$.
Near $(0, 0,\dots, 0, \omega)$ the elements $\alpha$ and
$\frac{\alpha^{d+1} - \lambda\cdot \alpha^2}{\alpha - \omega}$
are invertible, so
$\Dan{\partial\hook f}$ and $\alpha - \omega$ are in the localisation of
$I$. This, along with
the other inclusion, proves
that this localisation is generated by $\Dan{\partial\hook f}$ and $\alpha -
\omega$ and thus the stalk of the fiber is isomorphic to $\Spec \Apolar{\partial\hook f}$.
\end{proof}
We make the most important corollary explicit:
\begin{cor}\label{ref:smoothabilityofrayiff:cor}
We keep the notation of Proposition \ref{ref:raysumflatness:prop}. Suppose
that $\kchar$ does not divide $d-1$ and $\partial^2\hook f = 0$.
If both apolar algebras of $f$ and $\partial\hook f$ are smoothable, then
the apolar algebra of the $d$-th ray sum of $f$ with respect to $\partial$
is also smoothable.\qed
\end{cor}
\begin{example}\label{ref:squareadding:example}
Let $f\in k[x_1, \ldots ,x_n]$ be a dual socle generator of an algebra
$A$. Then the algebra $B = \Apolar{f + x_{n+1}^{2}}$ is limit-reducible: it
is a limit of algebras of the form $A \times k$. In particular, if $A$ is
smoothable, then $B$ is also smoothable.
Combining this with Proposition~\ref{ref:squares:prop}, we see that every
local Gorenstein algebra $A$ of socle degree $s$ with $\Dhdvect{A, s-2} = (0, q,
0)$, where $q\neq 0$, is limit-reducible.
If $\deg f \geq 2$, then the Hilbert functions of $A = \Apolar{f}$ and $B = \Apolar{f + x_{n+1}^2}$
are related by $H_{B}(m) = H_A(m)$ for $m\neq 1$ and $H_B(1) = H_A(1) +
1$.
\end{example}
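For a minimal illustration of this example: if $f = x_1^3$, then $A =
\Apolar{x_1^3}$ has Hilbert function $(1,1,1,1)$, while $B = \Apolar{x_1^3 +
x_2^2}$ has Hilbert function $(1,2,1,1)$, in agreement with the last paragraph;
by the above, $B$ is a limit of algebras of the form $A\times k$.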
Above, we took advantage of the explicit form of ray decompositions coming
from ray sums to analyse the resulting ray families in depth. In
Proposition~\ref{ref:stretchedhavedegenerations:prop} below we prove
the flatness of the upper ray family without such knowledge. The price paid
is that we get no information about the fibers of this family.
\begin{prop}\label{ref:stretchedhavedegenerations:prop}
Let $f = x_1^\Ddegf + g\in \DP$ be a polynomial of degree $\Ddegf $ such that
$\Dx_1^{c} \hook g = 0$ for some $c$ satisfying $2c\leq \Ddegf $. Then
any ray decomposition $\Dan{f} = (\Dx_1^{\Dord} - q) + J$, where $J = \Dan{f}
\cap (\Dx_2, \ldots ,\Dx_n)$,
gives rise to an upper ray degeneration. In particular $\Apolar{f}$ is
limit-reducible.
\end{prop}
\begin{proof}
Let $\DPut{tmpII}{\mathfrak{I}} := \DPut{spec}{(\Dx_1^{\Dord} -
t\Dx_{1}^{\Dord-1}
- q)} + J$ be the ideal defining the
ray family and recall that $q, J \subseteq \Dcomp{1}$, where $\Dcomp{1} =
(\Dx_2,\dots, \Dx_n)$.
Since $\Dx_1^\Dord - q\in \Dan{f}$, we have $q\hook g = q\hook f =
\Dx_1^{\Dord}\hook f = x_1^{\Ddegf -\Dord} + \Dx_1^\Dord \hook g$. Then
$\Dx_1^{\Ddegf -\Dord}\hook(q\hook g) = \Dx_1^{\Ddegf -\Dord}\hook x_1^{\Ddegf -\Dord} + \Dx_1^\Ddegf \hook g = 1$, thus
$\Dx_1^{\Ddegf -\Dord}\hook g\neq 0$.
It follows that $\Ddegf - \Dord \leq c-1$, so $\Dord - 1\geq \Ddegf - c\geq c$, thus
$\Dx_{1}^{\Dord -1} \hook g = 0$. For all $\gamma\in \Dcomp{1}$, we claim that
\begin{equation}\label{eq:contstretched}
\gamma\cdot (\Dx_1^{\Dord } - t\Dx_1^{\Dord -1} - q)\in J[t].
\end{equation}
Note that $(\Dx_1^\Dord - q) \hook f = 0$ and $\Dx_1^{\Dord -1}\gamma\hook f =
\Dx_1^{\Dord -1}\gamma\hook g = 0$. This means that $\Dx_1^{\Dord -1}\gamma\in J$. Since always $(\Dx_1^{\Dord } - q)\gamma\in J$,
we have proved \eqref{eq:contstretched}.
Recall that $\DtmpII \subseteq \DS_{poly}[t]$ is the ideal
defining the upper ray family. Take any $\lambda\in k$ and an element
$i\in \DtmpII \cap (t-\lambda)$. We will prove that
$i\in\DtmpII(t-\lambda) + \DPut{tmpzero}{\DtmpII_0}[t]$, where $\Dtmpzero = \DtmpII \cap
\DS$, then Proposition~\ref{ref:flatelementary:prop} asserts that
$\DS[t]/\DtmpII$ is flat. Write $i = i_1 + i_2\Dspec$. As before, we may
assume $i_1\in J$, $i_2\in \DS$. Since $i\in (t-\lambda)$, we have
$i_1 + i_2(\Dx_1^\Dord - \lambda\Dx_1^{\Dord -1} - q) = 0$.
Since $i_1\in \Dcomp{1}$, we also have $i_2\in \Dcomp{1}$. But then by
Inclusion~\eqref{eq:contstretched} we have $i_2\Dspec \in
\Dtmpzero[t]$. Since clearly $i_1\in J \subseteq \Dtmpzero[t]$, the
assumptions of Proposition~\ref{ref:flatelementary:prop} are satisfied,
thus the upper ray family is flat.
Now, Remark~\ref{ref:fibers:rmk} shows that a general fiber of the upper
ray degeneration is reducible, thus $\Apolar{f}$ is a flat limit of
reducible algebras, i.e.~limit-reducible.
\end{proof}
\begin{example}\label{ref:quarticlimitreducible:example}
Let $f\in k[x_1, x_2, x_3, x_4]$ be a polynomial of degree $4$. Suppose
that the leading form $f_4$ of $f$ can be written as $f_4 = x_1^4 + g_4$
where $g_4\in k[x_2, x_3, x_4]$. We will prove that $\Apolar{f}$ is
limit-reducible.
By Example~\ref{ref:topdegreeexample:ex} we may
assume that $f = x_1^4 + g$, where $\Dx_1^2\hook g = 0$. By
Proposition~\ref{ref:stretchedhavedegenerations:prop} we see that
$\Apolar{f}$ is limit-reducible.
\end{example}
\begin{example}\label{ref:stretched:example}
Suppose that an Artin local Gorenstein algebra $A$ has Hilbert function
$H_A = (1, H_1,\dots, H_c, 1,\dots, 1)$ and socle degree $\Ddegf \geq 2c$.
By Example \ref{ref:standardformofstretched:ex} we
may assume that $A \simeq \Apolar{x_1^{\Ddegf } + g}$, where $\Dx_1^{c} \hook g =
0$ and $\deg g\leq c+1$. Then by Proposition \ref{ref:stretchedhavedegenerations:prop} we
obtain a flat degeneration
\begin{equation}\label{eq:exampledegeneration}
k[t] \to \frac{\DS[t]}{(\Dx_1^{\Dord } - t\Dx_1^{\Dord -1} - q) + J}.
\end{equation}
Thus $A$ is limit-reducible in the sense of Definition
\ref{ref:limitreducible:def}.
Let us take $\lambda\neq 0$.
By Remark \ref{ref:fibers:rmk} the fiber over $t = \lambda$ is
supported at $(0, 0,\dots, 0)$ and at $(\lambda, 0,\dots, 0)$ and
the ideal defining this fiber near $(0, 0,\dots, 0)$ is $I_0 = (\lambda\Dx_1^{\Dord -1} - q)
+ J$.
From the proof of Proposition~\ref{ref:stretchedhavedegenerations:prop} it follows that
$\Dx_1^{\Dord -1} \hook g = 0$. Then one can check that $I_0$ lies in the
annihilator of $\lambda^{-1} x_{1}^{\Ddegf -1} + g$.
Since $\sigma\hook (x_1^{\Ddegf} + g) = \sigma \hook(\lambda^{-1} x_1^{\Ddegf - 1} +
g)$ for every $\sigma\in (\Dx_2, \ldots ,\Dx_n)$, one calculates that the
apolar algebra of $\lambda^{-1} x_1^{\Ddegf -1} + g$ has Hilbert function
$(1, H_1,\dots, H_c, 1,\dots, 1)$ and socle degree $\Ddegf -1$. Then
$\dimk \Apolar{\lambda^{-1}x_1^{\Ddegf -1} + g} = \dimk \Apolar{x_1^{\Ddegf } + g} -
1$. Thus the fiber is a union of a point and
$\Spec\Apolar{\lambda^{-1}x_1^{\Ddegf -1} + g}$,
i.e.~degeneration~\eqref{eq:exampledegeneration} peels one point off $A$.
\end{example}
\subsection{Tangent preserving ray
degenerations}\label{subsec:tangentpreserving}
A (finite) ray degeneration gives a morphism from $\Spec k[t]$ to the Hilbert
scheme, i.e. a curve on the Hilbert scheme $\Hilb{n}{}$. In this section we prove
that in some cases
the dimension of the tangent space to $\Hilb{n}{}$ is constant along this curve.
This enables us to prove that certain points of this scheme are
smooth without the need for lengthy computations.
This section is the most technical part of the paper, so we include more
examples than elsewhere. The most
important results here are Theorem \ref{ref:nonobstructedconds:thm}
together with Corollary \ref{ref:CIarenonobstructed:cor}; see examples below
Corollary \ref{ref:CIarenonobstructed:cor} for applications.
Recall (\cite{cn09}) that the dimension of the tangent space to
$\Hilb{n}{}$ at a $k$-point corresponding to a Gorenstein scheme $\Spec S/I$ is
$\dimk S/I^2 - \dimk S/I$.
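As a quick illustration of this formula in the simplest case: for the double
point $\Spec S/I$ with $S = k[[\Dx_1]]$ and $I = (\Dx_1^2)$ we get
$\dimk S/I^2 - \dimk S/I = 4 - 2 = 2$, which agrees with the fact that
$\Hilb{2}{1}$ is smooth of dimension two.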
\begin{lem}\label{ref:tangentflatcondition:lem}
Let $d\geq 2$. Let $g$ be the $d$-th ray sum of $f\in \DP$ with respect to
$\partial\in
\DS$ such that $\partial^2 \hook f = 0$.
Denote $I := \annn{\DS}{f}$ and $J := \annn{\DS}{\partial\hook f}$.
Take $\DPut{T}{T} = \DS[[\Dx]]$ to be the ring dual to $\DP[x]$ and let
\[\DPut{II}{\mathfrak{I}} := \pp{I + J\Dx +
\DPut{spec}{(\Dx^{d} - t\Dx - \partial)}}\cdot \DT[t]\] be the ideal in $\DT[t]$ defining the
associated lower ray degeneration, see Proposition \ref{ref:raysumflatness:prop}.
Then the family $k[t] \to \DT[t]/\DII^2$ is flat if and only if $(I^2 :
\partial) \cap I \cap J^2 \subseteq I\cdot J$.
\end{lem}
\begin{proof}
To prove flatness we will use
Proposition~\ref{ref:flatelementary:prop}.
Take an element $i\in \DII^2\cap (t-\lambda)$. We want to prove that $i\in
\DII^2 (t-\lambda) + \DPut{zerocomp}{\DII_0[t]}$, where $\Dzerocomp =
\DII^2 \cap \DT$. Let $\DPut{JJ}{\mathcal{J}} := (I + J\Dx)\DT$.
Subtracting a suitable element of $\DII^2(t-\lambda)$ we
may assume that
\[i = i_1 + i_2\Dspec + i_3\Dspec^2,\] where $i_1\in
\DJJ^2$, $i_2\in \DJJ$ and $i_3\in \DT$.
We will in fact show that $i\in \DII^2(t-\lambda) + \DJJ^2[t]$.
To simplify notation denote $\DPut{ss}{\sigma} = \Dx^d - \lambda\Dx -
\partial$. Note that $J\Dss\subseteq \DJJ$.
We have $i_1 + i_2\Dss + i_3\Dss^2 = 0$. Let $j_3 := i_3\Dss$.
We want to apply Remark~\ref{ref:decompositionhomog:rmk}; below we check
its assumptions.
The ideal $\DJJ$ is homogeneous with
respect to $\Dx$, generated in degrees less than $d$. Let $s\in
\DT$ be an element satisfying $s\Dx^{d}\in \DJJ$.
Then $s\in J$, which implies $s(\lambda \Dx + \partial)\in \DJJ$.
By Remark~\ref{ref:decompositionhomog:rmk} and $i_3\Dss^2 = j_3\Dss\in \DJJ$ we
obtain
$j_3\Dx^{d} \in \DJJ$, i.e.~$i_3 \Dss \Dx^d\in \DJJ$.
Applying the same
argument to $i_3\Dx^d$ we obtain $i_3\Dx^{2d}\in \DJJ$, therefore $i_3\in J\DT$.
Then
\[
i_3 \Dspec^2 - i_3 \Dss \Dspec = i_3\Dx (\lambda - t) \Dspec \in
\DJJ(t-\lambda) \Dspec \subseteq \DII^2(t-\lambda).
\]
Subtracting this element from $i$ and substituting $i_2 := i_2 + i_3 \Dss$
we may assume $i_3 = 0$.
We obtain
\begin{equation}\label{eq:flatnesslemma}
0 = i_1 + i_2\Dss = i_1 + i_2(\Dx^d - \lambda\Dx - \partial).
\end{equation}
Let $i_2 = j_2 + v_2\Dx$, where $j_2\in \DS$, i.e.~it does not contain
$\Dx$. Since $i_2\in \DJJ$, we have $j_2\in I$. As before, we have $v_2\Dx (\Dspec - \Dss) = v_2\Dx^2(\lambda-t)\in
\DII^2(t-\lambda)$, so that we may assume $v_2 = 0$.
Comparing the top $\Dx$-degree terms of \eqref{eq:flatnesslemma} we see
that $j_2\in J^2$.
Comparing the terms of
\eqref{eq:flatnesslemma} not containing $\Dx$, we deduce that
$j_2\partial\in I^2$, thus $j_2\in (I^2:\partial)$. Jointly, $j_2\in I\cap
J^2\cap (I^2:\partial)$, thus $j_2\in IJ$ by assumption.
But then $j_2\Dx \in \DJJ^2$, thus $j_2\Dspec\in \DJJ^2[t]$ and since
$i_1\in \DJJ^2$, the
element $i$ lies in $\DJJ^2[t] \subseteq \Dzerocomp$. Thus the assumptions of
Proposition~\ref{ref:flatelementary:prop} are satisfied and the family
$\DT[t]/\DII^2$ is flat over $k[t]$.
The converse is easier: one takes $i_2\in I\cap
J^2\cap (I^2:\partial)$ such that $i_2\not\in IJ$. On one hand, the element $j:=i_2(\Dx^d -
\partial)$ lies in $\DJJ^2$ and we get that $j - i_2\Dspec = ti_2\Dx\in
\DII^2$. On the other hand if $i_2\Dx\in \DII^2$, then $i_2\Dx\in (\DII^2 + (t))
\cap \DT = (\DJJ + (\Dx^d - \partial))^2$, which is not the
case.
\end{proof}
\begin{remark}\label{ref:tangentfibers:rmk}
\def\tansp#1{\tan(#1)}
Let us keep the notation of Lemma~\ref{ref:tangentflatcondition:lem}. Fix
$\lambda\in k\setminus \{0\}$ and suppose that the characteristic of
$k$ does not divide $d-1$.
The supports of the fibers of $\DS[t]/\DII$, $\DII/\DII^2$ and
$\DS[t]/\DII^2$ over $t = \lambda$ are finite and equal.
In particular from Proposition \ref{ref:fibersofray:prop} it follows that
the dimension of the fiber of $\DII/\DII^2$ over $t-\lambda$ is equal to
$\tansp{f} + (d-1)\tansp{\partial\hook f}$, where $\tansp{h} = \dimk \Dan{h}/\Dan{h}^2$ is the dimension of
the tangent space to the point of the Hilbert scheme corresponding to
$\Spec \DS/\Dan{h}$.
\end{remark}
\begin{thm}\label{ref:nonobstructedconds:thm}
Suppose that a polynomial $f\in \DP$ corresponds to a smoothable, unobstructed algebra
$\Apolar{f}$. Let $\partial\in \DS$ be such that $\partial^2\hook f = 0$
and the algebra $\Apolar{\partial\hook f}$ is smoothable and unobstructed.
The following are equivalent:
\begin{enumerate}
\item[1.]\label{it:somenonob} the $d$-th ray sum of $f$ with respect to
$\partial$ is unobstructed for some $d$ such that $2\leq d \leq
\kchar$ (or $2\leq d$ if $\kchar = 0$).
\item[1a.]\label{it:allnonob} the $d$-th ray sum of $f$ with respect to
$\partial$ is unobstructed for all $d$ such that $2\leq d \leq \kchar
$ (or $2\leq d$ if $\kchar = 0$).
\item[2.]\label{it:tgflat} The $k[t]$-module $\DPut{defid}{\DII}/\Ddefid^2$ is flat,
where $\Ddefid$ is the ideal defining the lower ray family of the
$d$-th ray sum for some $2\leq d \leq \kchar$ (or $2\leq d$ if
$\kchar = 0$), see
Definition \ref{ref:rayfamily:def}.
\item[2a.]\label{it:tgflatevery} The $k[t]$-module $\DPut{defid}{\DII}/\Ddefid^2$ is flat,
where $\Ddefid$ is the ideal defining the lower ray family of the
$d$-th ray sum for every $2\leq d \leq \kchar$ (or $2\leq d$ if
$\kchar = 0$), see
Definition \ref{ref:rayfamily:def}.
\item[3.]\label{it:quoflat} The family $k[t] \to \DS[t]/\Ddefid^2$ is flat,
where $\Ddefid$ is the ideal defining the lower ray family of the
$d$-th ray sum for some $2\leq d \leq \kchar$ (or $2\leq d$ if
$\kchar = 0$).
\item[3a.]\label{it:quoflatevery} The family $k[t] \to \DS[t]/\Ddefid^2$ is flat,
where $\Ddefid$ is the ideal defining the lower ray family of the
$d$-th ray sum for every $2\leq d \leq \kchar$ (or $2\leq d$ if
$\kchar = 0$).
\item[4.] The following inclusion (equivalent to equality) of ideals in
$\DS$
holds: $I\cap J^2 \cap (I^2:\partial) \subseteq I\cdot J$, where
$I = \annn{\DS}{f}$ and $J = \annn{\DS}{\partial\hook f}$.
\end{enumerate}
\end{thm}
\begin{proof}
It is straightforward to check that the inclusion $I\cdot J\subseteq I\cap J^2 \cap
(I^2:\partial)$ always holds,
thus the inclusion in Point 4 is
equivalent to equality.\\
3. $\iff$ 4. $\iff$ 3a.
The equivalence of Point 3 and
Point 4 follows from Lemma
\ref{ref:tangentflatcondition:lem}. Since Point 4 is independent of $d$,
the equivalence of Point 4 and Point 3a also follows.
2. $\iff$ 3. and 2a. $\iff$ 3a.
We have an exact sequence of
$k[t]$-modules
\[0\to \Ddefid/\Ddefid^2 \to \DS[t]/\Ddefid^2 \to \DS[t]/\Ddefid
\to 0.\] Since $\DS[t]/\Ddefid$ is a flat
$k[t]$-module by Proposition \ref{ref:raysumflatness:prop}, we see from
the long exact sequence of $\operatorname{Tor}$ that
$\Ddefid/\Ddefid^2$ is flat if and only if $\DS[t]/\Ddefid^2$ is flat.
1. $\iff$ 2. and 1a. $\iff$ 2a.
Let $g\in \DP[x]$ be the $d$-th ray sum of $f$ with
respect to $\partial$. We may consider
$\Apolar{g}$, $\Apolar{f}$, $\Apolar{\partial\hook f}$ as quotients of a
polynomial ring $T_{poly}$, corresponding to points of the Hilbert scheme.
The dimension of the tangent space at $\Apolar{g}$ is given by $\dimk
\Ddefid/\Ddefid^2 \tensor k[t]/t = \dimk \Ddefid/(\Ddefid^2 + (t))$. By Remark \ref{ref:tangentfibers:rmk} it is
equal to the sum of the dimension of the tangent space at $\Apolar{f}$ and
$(d-1)$ times the dimension of the tangent space to $\Apolar{\partial\hook f}$. Since both
algebras are smoothable and unobstructed we conclude that $\Apolar{g}$ is also
unobstructed. On the other hand, if $\Apolar{g}$ is unobstructed, then
$\Ddefid/\Ddefid^2$ is a finite $k[t]$-module such that the length of
fiber $\Ddefid/\Ddefid^2\tensor k[t]/\mathfrak{m}$ does not depend on the
choice of maximal ideal $\mathfrak{m} \subseteq k[t]$. Then
$\Ddefid/\Ddefid^2$ is flat by \cite[Ex~II.5.8]{HarAG} or
\cite[Thm~III.9.9]{HarAG} applied to the associated sheaf.
\end{proof}
\begin{remark}
The condition from Point 4 of Theorem
\ref{ref:nonobstructedconds:thm} seems very technical. It is
enlightening to look at the images of $(I^2:\partial)\cap I$ and $I\cdot
J$ in $I/I^2$.
The image of $(I^2:\partial)\cap I$ is the annihilator of $\partial$ in
$I/I^2$. This annihilator clearly contains $(I:\partial)\cdot I/I^2 =
J\cdot I/I^2$. This shows that if the $S/I$-module $I/I^2$ is ``nice'', for
example free, we should have an equality $(I^2:\partial)\cap I = I\cdot J$.
More generally this equality is connected to the syzygies of
$I/I^2$.
\end{remark}
In the remainder of this subsection we will prove that in several situations
the conditions of Theorem~\ref{ref:nonobstructedconds:thm} are satisfied.
\begin{cor}\label{ref:CIarenonobstructed:cor}
We keep the notation and assumptions of
Theorem~\ref{ref:nonobstructedconds:thm}. Suppose further
that the algebra $\DS/I = \Apolar{f}$ is a complete intersection. Then the equivalent
conditions of Theorem~\ref{ref:nonobstructedconds:thm} are satisfied.
\end{cor}
\begin{proof}
Since $\DS/I$ is a complete intersection, the $\DS/I$-module $I/I^2$ is
free, see e.g. \cite[Thm~16.2]{Matsumura_CommRing} and discussion above it or
\cite[Ex~17.12a]{EisView}. It implies that $(I^2 : \partial) \cap I = (I : \partial)I
= JI$, because $J = \Dan{\partial\hook f} = \{ s\in \DS\ |\ s
\partial\hook f = 0\} = (\Dan{f} : \partial) = (I : \partial)$. Thus the
condition from Point 4 of Theorem
\ref{ref:nonobstructedconds:thm} is satisfied.
\end{proof}
\begin{example}\label{ref:14531case:example}
If $A = \DS/I$ is a complete intersection, then it is
smoothable and unobstructed
(see Subsection~\ref{sss:smoothability}). The apolar algebras
of monomials are complete intersections, therefore the assumptions of
Theorem~\ref{ref:nonobstructedconds:thm} are satisfied e.g.~for $f
=x_1^2x_2^2x_3$ and $\partial = \Dx_2^2$. Now
Corollary~\ref{ref:CIarenonobstructed:cor} implies that the equivalent
conditions of the Theorem are also satisfied, thus $x_1^2x_2^2x_3 +
x_4^{d}x_1^2x_3 = (x_1^2x_3)(x_2^2 + x_4^d)$ is unobstructed for every $d\geq
2$ (provided $\kchar = 0$ or $d \leq \kchar$).
Similarly, $x_1^2x_2x_3 + x_4^2x_1$ is unobstructed
and has Hilbert function $(1, 4, 5, 3, 1)$.
\end{example}
\begin{example}\label{ref:1441:example}
Let $f = (x_1^2 + x_2^2)x_3$, then $\Dan{f} = (\Dx_1^2 - \Dx_2^2,
\Dx_1\Dx_2, \Dx_3^2)$ is a complete intersection. Take $\partial =
\Dx_1\Dx_3$, then $\partial\hook f = x_1$ and
$\partial^2\hook f = 0$, thus $f + x_4^2\partial\hook f = x_1^2x_3 +
x_2^2x_3 + x_4^2x_1$ is unobstructed. Note that by
Remark~\ref{ref:Hilbfunccouting:rmk} the apolar algebra of this
polynomial has Hilbert function $(1, 4, 4, 1)$.
\end{example}
\begin{prop}\label{ref:unobstructeddoubleray:prop}
Let $f\in \DP$ be such that $\Apolar{f}$ is a complete
intersection.
Let $d$ be a natural number. Suppose that $\kchar = 0$ or $d\leq \kchar$.
Take $\partial\in \DPut{Sf}{\DS}$ such that $\partial^2\hook f = 0$ and
$\Apolar{\partial\hook f}$ is also a complete intersection.
Let $g\in \DP[y]$ be the $d$-th ray sum of $f$ with respect to $\partial$,
i.e.~$g = f + y^{d} \partial\hook f$.
Suppose that $\deg \partial\hook f > 0$.
Let $\beta$ be the variable dual to $y$ and $\sigma\in \DSf$ be such that
$\sigma\hook (\partial\hook f) = 1$. Take $\varphi := \sigma\beta\in
\DPut{Sg}{\DT} = \DS[[\beta]]$.
Let $h$ be any ray sum of $g$ with respect to $\varphi$, explicitly
\[
h = f + y^{d} \partial\hook f + z^my^{d-1}
\]
for some $m\geq 2$.
Then the algebra $\Apolar{h}$ is
unobstructed.
\end{prop}
\begin{proof}
First note that $\varphi\hook g = y^{d-1}$ and so $\varphi^2\hook g =
\sigma\hook y^{d-2} = 0$, since $\sigma\in
\mathfrak{m}_{\DSf}$. Therefore indeed $h$ has the presented form.
\def\DmmV{\mathfrak{m}_{\DSf}}
From Corollary \ref{ref:CIarenonobstructed:cor} it follows that
$\Apolar{g}$ is unobstructed. Since $\varphi\hook g = y^{d-1}$,
the algebra $\Apolar{\varphi\hook g}$ is unobstructed as well. Now by
Theorem \ref{ref:nonobstructedconds:thm} it remains to prove that
\begin{equation}\label{eq:maincontainment}
(I_g^2:\varphi) \cap I_g \cap J_g^2 \subseteq I_g J_g,
\end{equation}
where
$\DPut{Ig}{I_g} =
\annn{\DSg}{g}, \DPut{Jg}{J_g} = \annn{\DSg}{\varphi\hook g}$.
The rest of the proof is a technical verification of this claim.
Denote $\DPut{If}{I_f} := \annn{\DSf}{f}$ and $\DPut{Jf}{J_{f}} := \annn{\DSf}{\partial\hook f}$;
note that we take annihilators in $\DSf$.
By Proposition \ref{ref:raysumideal:prop} we have $\DIg = \DIf\DT +
\beta\DJf\DT + \DPut{spec}{(\beta^{d} - \partial)}\DT$.
Consider $\gamma\in \DSg$ lying in $(\DIg^2 : \varphi) \cap \DIg \cap
\DJg^2$. Write $\gamma = \gamma_0 + \gamma_1 \beta + \gamma_2 \beta^2 +
\dots$ where $\gamma_i\in \DSf$, so they do not contain $\beta$. We will
prove that $\gamma\in \DIg\DJg$.
First, since $\Dspec^2 \in \DIg\DJg$ we may reduce powers of $\beta$ in $\gamma$ using this
element and so we assume $\gamma_{i} = 0$ for $i\geq 2d$.
Let us take $i < 2d$. Since $\gamma\in \DJg^2 =
\pp{\annn{\DSg}{y^{d-1}}}^2 = \pp{\DmmV, \beta^d}^2$ we see that $\gamma_i\in
\DmmV \subseteq \DJg$. For $i > d$ we have $\beta^i \in \DIg$, so
that $\gamma_i \beta^i \in \DJg\DIg$ and we may
assume $\gamma_i = 0$.
Moreover, $\beta^d \gamma_d - \partial \gamma_d \in \DIg\DJg$ so we may also
assume $\gamma_d = 0$, obtaining
\[\gamma = \gamma_0 + \dots + \gamma_{d-1} \beta^{d-1}.\]
From the explicit description of $\DIg$ in
Proposition~\ref{ref:raysumideal:prop} it follows that $\gamma_i\in \DJf$
for all $i$.
Let $M = \DIg^{2} \cap \DJf\beta\DT$. Then for $\gamma$ as above we have
$\gamma \varphi\in M$: indeed $\gamma\varphi\in \DIg^2$ by the choice of
$\gamma$, and $\gamma\varphi = \sigma\beta\gamma\in \DJf\beta\DT$ since every
$\gamma_i$ lies in $\DJf$. So we will analyse the module $M$.
Recall that
\begin{equation}\label{eq:scarydecomposition}
\DIg^2 = \DIf^2\cdot \DT + \beta \DIf \DJf\cdot \DT + \beta^2 \DJf^2\cdot \DT +
\Dspec\DIf\cdot \DT + \Dspec\beta\DJf \cdot \DT + \Dspec^2\cdot \DT.
\end{equation}
We claim that
\begin{equation}\label{eq:contains}
M \subseteq \DIf^2\cdot \DT + \beta\DIf \DJf\cdot \DT + \beta^2
\DJf^2\cdot \DT + \Dspec\beta\DJf\cdot \DT.
\end{equation}
We have $\DIg^2 \subseteq
\DJf \cdot \DT + \Dspec^2\cdot\DT$, so
if an element of $\DIg^2$ lies in
$\DJf\cdot\DT$, then its coefficient standing next to $\Dspec^2$ in Presentation
\eqref{eq:scarydecomposition} is an element of $\DJf$ by
Remark~\ref{ref:decompositionhomog:rmk}.
Since $\DJf \cdot
\Dspec \subseteq \DIf + \beta\DJf$, we may ignore the term $\Dspec^2$:
\begin{equation}\label{eq:lessscdec}
M \subseteq \DIf^2\cdot \DT + \beta \DIf \DJf\cdot \DT + \beta^2 \DJf^2\cdot \DT +
\Dspec\DIf\cdot \DT + \Dspec\beta\DJf\cdot \DT.
\end{equation}
Choose an element of $M$ and let $i\in \DIf\cdot\DT$ be the coefficient of this
element standing next to $\Dspec$. Since $\DIf\DT \cap \beta \DT \subseteq
\DJf\DT$ we may assume that $i$ does not contain $\beta$, i.e. $i\in
\DIf$.
Now, if an element of the right hand side of \eqref{eq:lessscdec} lies in
$\beta\cdot\DT$, then the coefficient $i$ satisfies
$i\cdot \partial\in \DIf^2$, so that $i\in (\DIf^2 : \partial)$. Since
$\DIf$ is a complete intersection ideal the $\DS/\DIf$-module
$\DIf/\DIf^2$ is free, see Corollary~\ref{ref:CIarenonobstructed:cor} for
references. Then we have $(\DIf^2: \partial)\cap \DIf =
(\DIf:\partial)\DIf$ and $i\in (\DIf:\partial)\DIf = \DIf\DJf$. Then
$i\cdot \Dspec \in \DIf^2\cdot\DT + \beta\cdot \DIf \DJf\cdot\DT$ and so the
Inclusion \eqref{eq:contains} is proved. We come back to the proof of
the proposition.
From Remark~\ref{ref:decompositionhomog:rmk} applied to the ideal
$\DJf^2\DT$ and the element $\beta\Dspec$ and the fact that $\beta\partial\DJf^2
\subseteq I_g^2$ we compute
that $M\cap \{ \delta\ |\ \deg_{\beta} \delta \leq d\}$ is
a subset of $\DIf^2\cdot\DT + \beta\cdot \DIf \DJf\cdot\DT + \beta^2
\DJf^2\cdot \DT$. Then $\gamma \varphi = \gamma \beta\sigma$ lies in this set, so that
$\gamma_0 \in (\DIf\DJf : \sigma)$ and $\gamma_{n} \in (\DJf^2 : \sigma)$
for $n > 1$. Since $\Apolar{f}$ and $\Apolar{\partial\hook f}$ are
complete intersections, we have
$\gamma_0 \in \DIf\DmmV$ and $\gamma_i \in
\DJf\DmmV$ for $i \geq 1$.
It follows that $\gamma\in \DIg\DmmV \subseteq \DIg\DJg$.
\end{proof}
\begin{example}\label{ref:1551:example}
Let $f\in P$ be a polynomial such that $A = \Apolar{f}$ is a complete
intersection. Take
$\partial$ such that $\partial\hook f = x_1$ and $\partial^2\hook f = 0$.
Then the apolar algebra of $f + y_1^{d}x_1 + y_{2}^{m}y_1^{d-1}$ is unobstructed
for any $d, m\geq 2$ (less than or equal to $\kchar$ if $\kchar$ is non-zero). In particular $g = f + y_1^2x_1 + y_2^2y_1$ is
unobstructed.
Continuing Example \ref{ref:1441:example}, if $f =
x_1^2x_3 + x_2^2x_3$, then $x_1^2x_3 + x_2^2x_3 + x_4^2x_1 + x_5^2x_4$ is
unobstructed. The apolar algebra of this polynomial has Hilbert function
$(1, 5, 5, 1)$.
Let $g = x_1^2x_3 + x_2^2x_3 + x_4^2x_1$, then $x_1^2x_3 + x_2^2x_3 +
x_4^2x_1 + x_5^2x_4$ is a ray sum of $g$ with respect to $\partial =
\Dx_4\Dx_1$. Let $I := \Dan{g}$ and $J := (I : \partial)$.
In contrast with Corollary~\ref{ref:CIarenonobstructed:cor} and Example~\ref{ref:1441:example} one may
check that all three terms $I$, $J^2$ and $(I^2 : \partial)$ are necessary to
obtain equality in the inclusion \eqref{eq:maincontainment} for $g$ and $\partial$, i.e.~no two
of the ideals $I$, $J^2$, $(I^2 : \partial)$ have intersection equal to $IJ$.
\end{example}
\begin{example}\label{ref:144311:example}
Let $f = x_1^5 + x_2^4$. Then the annihilator of $f$ in $k[\Dx_1, \Dx_2]$
is a complete intersection, and this is true for every $f\in k[x_1, x_2]$. Let
$g = f + x_3^2x_1^2$ be the second ray sum of $f$ with respect to
$\Dx_1^3$ and $h = g + x_4^2x_3$ be the second ray sum of $g$ with
respect to $\Dx_3\Dx_1^2$.
Then the apolar algebra of
\[h = x_1^5 + x_2^4 + x_3^2x_1^2 + x_4^2x_3\] is smoothable and not
obstructed. It has Hilbert function $(1, 4, 4, 3, 1, 1)$.
\end{example}
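For orientation (an illustration only), the length of $\Apolar{h}$ can be
predicted from flatness together with Proposition~\ref{ref:fibersofray:prop}:
we have $\dimk\Apolar{x_1^5 + x_2^4} = 9$, the first ray sum adds
$\dimk\Apolar{x_1^2} = 3$ and the second adds $\dimk\Apolar{x_3} = 2$, so that
$\dimk\Apolar{h} = 9 + 3 + 2 = 14$, in agreement with the Hilbert function
above.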
\begin{remark}
The assumption $\deg \partial\hook f > 0$ in
Proposition~\ref{ref:unobstructeddoubleray:prop} is necessary:
the polynomial $h = x_1x_2x_3 + x_4^2 + x_5^2x_4$ is obstructed, with
length $12$ and tangent space dimension $67 > 12\cdot 5$ over
$k = \mathbb{C}$. The polynomial $g$ is the second ray sum of $x_1 x_2 x_3$
with respect to $\Dx_1\Dx_2\Dx_3$ and $h$ is the second
ray sum of $g = x_1 x_2 x_3 + x_4^2$ with respect to $\Dx_4$, thus this
example satisfies the assumptions of
Proposition~\ref{ref:unobstructeddoubleray:prop} except for $\deg
\partial\hook f > 0$. Note that in this case $\Dx_4^2\hook g \neq 0$.
\end{remark}
\section{Proof of Main Theorem and comments on the degree 14
case}\label{sec:proof}
\subsection{Preliminary results}\label{sss:parameterising}
Let $r\geq 1$ be a natural number and $V$ be a constructible subset of
$\DP_{\leq s}$. Assume that the apolar
algebra $\Apolar{f}$ has length $r$ for every closed
point $f\in V$. Then we may construct the incidence scheme $\{(f,
\Apolar{f})\}\to V$ which is a finite flat family over $V$ and thus we obtain a morphism from $V$ to the (punctual) Hilbert
scheme of $r$ points on an appropriate $\mathbb{P}^n$. See
\cite[Prop~4.39]{JelMSc} for details.
Then, the semicontinuity of dimensions of fibers implies the following
Remark~\ref{ref:semicontinuity:rmk}.
\begin{remark}\label{ref:semicontinuity:rmk}
Let $s$ be a positive integer and $V \subseteq \DP_{\leq s}$ be a constructible subset. Then
the set $U$, consisting of $f\in V$ such that the apolar algebra of $f$
has the maximal length (among the elements of $V$), is open in $V$. In
particular, if $V$ is irreducible then $U$ is also irreducible.
\end{remark}
\begin{example}\label{ref:semicondegthree:example}
Let $\DP_{\geq 4} = k[x_1, \ldots
,x_n]_{\geq 4}$ be the space of polynomials that are sums of
monomials of degree at least $4$.
Suppose that the set $V \subseteq \DP_{\geq 4}$ parameterising algebras
with fixed Hilbert function $H$ is irreducible. Then also the set $W$ of
polynomials $f\in \DP$ such that $f_{\geq 4}\in V$ is irreducible. Let
$e:= H(1)$ and suppose that the symmetric decomposition of $H$ has zero
rows $\Dhdvect{s-3} = (0, 0, 0, 0)$ and $\Dhdvect{s-2} = (0, 0, 0)$, where
$s = \deg f$.
We claim that a general element of $W$ corresponds to an algebra $B$ with Hilbert
function $H_{max} = H + (0, n-e, n-e, 0)$.
Indeed, since we may only vary the degree three part of the polynomial,
the function $H_B$ has the form $H + (0, a, a, 0) + (0, b, 0)$ for some
$a, b$ such that $a + b \leq n - e$. Therefore algebras with Hilbert
function $H_{max}$ are precisely the algebras of maximal possible length.
Since $H_{max}$ is attained for $f_{\geq 4} + x_{e+1}^3 +
\ldots + x_n^3$, the claim follows from
Remark~\ref{ref:semicontinuity:rmk}.
\end{example}
\subsection{Lemmas on Hilbert functions}
In the following $H_A$ denotes the Hilbert function of an algebra $A$.
\begin{lem}\label{ref:hilbertfunc:lem}
Suppose that $A$ is a local Artin Gorenstein algebra of socle degree $s\geq 3$ such that
$\Dhdvect{A, s-2} = (0, 0, 0)$. Then $\len A \geq 2\left(H_A(1) + 1\right)$.
Furthermore, equality occurs if and only if $s = 3$.
\end{lem}
\begin{proof}
Consider the symmetric decomposition $\Dhdvect{\bullet} = \Dhdvect{A,
\bullet}$ of $H_A$.
From symmetry we have $\sum_j \Dhd{0}{j} \geq 2 + 2\Dhd{0}{1}$ with
equality only if $\Dhdvect{0}$ has no terms in degrees strictly between $1$ and $s-1$, i.e.~when $s = 3$.
Similarly $\sum_j \Dhd{i}{j}\geq 2\Dhd{i}{1}$ for all $1 \leq i < s-2$.
Summing these inequalities we obtain
\[
\len A = \sum_{i<s-2} \sum_j \Dhd{i}{j} \geq 2 + \sum_{i<s-2} 2\Dhd{i}{1} = 2
+ 2H_A(1).\qedhere
\]
\end{proof}
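For example, if $H_A = (1,3,3,1)$ and $\Dhdvect{A,1} = (0,0,0)$, then $\len A =
8 = 2\left(H_A(1)+1\right)$, in accordance with $s = 3$. On the other hand, an
algebra with $H_A = (1,2,2,2,1)$ automatically satisfies $\Dhdvect{A,2} =
(0,0,0)$, and here the inequality is strict: $8 > 2(2+1) = 6$.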
\begin{lem}\label{ref:trikofHilbFunc:lem}
Let $A$ be a local Artin Gorenstein algebra of length at most $14$. Suppose
that $4\leq H_A(1) \leq 5$. Then $H_A(2)\leq 5$.
\end{lem}
\begin{proof}
Let $s$ be the socle degree of $A$.
Suppose $H_A(2) \geq 6$. Then $H_{A}(3) + H_{A}(4) + \dots \leq 3$, thus
$s\in \{3, 4, 5\}$. The cases $s = 3$ and $s = 5$ immediately lead to
contradiction -- it is impossible to get the required symmetric
decomposition. We will consider the case $s = 4$. In this case $H_A = (1,
*, *, *, 1)$ and its symmetric decomposition is $(1, e, q, e, 1) + (0, m,
m, 0) + (0, t, 0)$.
Then $e = H_A(3) \leq 14 - 2 - 4 - 6 = 2$.
Since $H_A(1) < H_A(2)$ we have $e < q$. This can
only happen if $e = 2$ and $q = 3$. But then $14\geq \len A = 9 + 2m + t$,
thus $m\leq 2$ and $H_A(2) = m + q \leq 5$. A contradiction.
\end{proof}
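To illustrate the argument on a single excluded function: $(1,4,6,2,1)$ has
length $14$ and socle degree $4$; here $e = H_A(3) = 2$, so $q = 3$ as above,
while $H_A(1) = e + m + t = 4$ forces $m\leq 2$, hence $H_A(2) = q + m\leq 5 <
6$ and no such algebra exists.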
\begin{lem}\label{ref:14341notexists:lem}
There does not exist a local Artin Gorenstein algebra with Hilbert
function \[(1, 4, 3, 4, 1, \ldots , 1).\]
\end{lem}
\begin{proof}
See \cite[pp.~99-100]{ia94} for the proof or \cite[Lem~5.3]{CJNpoincare} for a generalisation. We provide a sketch for
completeness.
Suppose such an algebra $A$ exists and fix its dual socle
generator $f\in k[x_1, \ldots, x_4]$ of degree $\Ddegf$ in the standard form. Let $I =
\Dan{f}$.
The proof relies on two observations. First, the leading term of $f$ is, up
to a constant, equal to $x_1^\Ddegf$ and in fact we may take $f = x_1^\Ddegf +
f_{\leq 4}$. Moreover from the symmetric decomposition it follows that the
Hilbert functions of $\Apolar{x_1^s + f_4}$ and $\Apolar{f}$ are equal. Second,
$h(3) = 4 = 3^{\langle 2\rangle} = h(2)^{\langle 2\rangle}$ is the maximal growth, so arguing similarly as in
Lemma~\ref{ref:P1gotzmann:lem} we may assume that the degree
two part $I_2$ of the ideal of $\gr A$ is equal to $((\Dx_3,
\Dx_4)\DS)_2$. Then any derivative of $\Dx_3\hook f_4$ is a derivative of
$x_1^s$, i.e. a power of $x_1$. It follows that $\Dx_3\hook f_4$ itself is a
power of $x_1$; similarly $\Dx_4\hook f_4$ is a power of $x_1$.
It follows that $f_4\in x_1^3\cdot k[x_1,x_2,x_3,x_4] + k[x_1,
x_2]$, but then $f_4$ is annihilated by a
linear form, which contradicts the fact that $f$ is in
the standard form.
\end{proof}
The following lemmas essentially deal with the limit-reducibility in the case
$(1, 4, 4, 3, 1, 1)$. Here the method is straightforward, but the cost
is that the proof is broken into several cases and quite long.
\begin{lem}\label{ref:144311Hilbfunc:lem}
Let $f = x_1^5 + f_4$ be a polynomial such that $H_{\Apolar{f}}(2) <
H_{\Apolar{f_4}}(2)$.
Let $\DPut{tmpQ}{\mathcal{Q}} = \DS_2 \cap
\Dan{x_1^5} \subseteq
\DS_2$. Then $x_1^2\in \DtmpQ f_4$ and $\Dan{f_4}_{2} \subseteq \DtmpQ$.
\end{lem}
\begin{proof}
Note that $\dim \DtmpQ f_4 \geq \dim \DS_2 f_4 - 1 = H_{\Apolar{f_4}}(2) -
1$. If $\Dan{f_4}_{2} \not\subseteq \DtmpQ$, then there is a $q\in \DtmpQ$
such that $\Dx_1^2 - q\in \Dan{f_4}$. Then $\DtmpQ f_4 = \DS_2 f_4$ and
we obtain a contradiction.
Suppose that $x_1^2\not\in \DtmpQ f_4$. Then the degree
two partials of $f$ contain a direct sum of $k x_1^2$ and $\DtmpQ f_4$,
thus they are at least $H_{\Apolar{f_4}}(2)$-dimensional, so that
$H_{\Apolar{f}}(2)\geq H_{\Apolar{f_4}}(2)$, a
contradiction.
\end{proof}
\begin{lem}\label{ref:144311caseCI:lem}
Let $f = x_1^5 + f_4\in \DP$ be a polynomial such that $H_{\Apolar{f}} = (1, 3, 3, 3, 1, 1)$
and $H_{\Apolar{f_4}} = (1, 3, 4, 3, 1)$. Suppose that $\Dx_1^3\hook f_4 =
0$ and that $\pp{\Dan{f_4}}_2$ defines a complete intersection. Then
$\Apolar{f_4}$ and $\Apolar{f}$ are complete intersections.
\end{lem}
\begin{proof}
Let $I := \Dan{f_4}$.
First we will prove that $\Dan{f_4} = (q_1, q_2, c)$, where $\langle q_1,
q_2\rangle = I_2$ and $c\in I_3$. Then of course $\Apolar{f_4}$ is a complete intersection.
By assumption, $q_1, q_2$ form a regular sequence. Thus there are no syzygies of
degree at most three in the minimal resolution of $\Apolar{f_4}$. By
the symmetry of the minimal resolution, see \cite[Cor 21.16]{EisView},
there are no generators of degree at least four in the minimal generating
set of $I$. Thus $I$ is generated in degree two and three. But
$H_{\DS/(q_1, q_2)}(3) = 4 = H_{\DS/I}(3) + 1$, thus there is a
cubic $c$, such that $I_3 = kc\oplus (q_1, q_2)_3$, then $(q_1, q_2, c) = I$, thus $\Apolar{f_4} = \DS/I$
is a complete intersection.
Let $\DPut{anntwo}{\mathcal{Q}}:= \Dan{x_1^5} \cap S_2 \subseteq S_2$.
By Lemma~\ref{ref:144311Hilbfunc:lem} we have $q_1, q_2\in \Danntwo$, so
that $\Dx_1^3\in I \setminus (q_1, q_2)$, then $I = (q_1, q_2, \Dx_1^3)$.
Moreover, by the same Lemma, there exists $\sigma\in \Danntwo$ such that $\sigma\hook f_4 =
x_1^2$.
Now we prove that $\Apolar{f}$ is a complete intersection.
Let $J := (q_1, q_2, \Dx_1^3 - \sigma) \subseteq \Dan{f}$.
We will prove that $\DS/J$ is a complete intersection.
Since $q_1$, $q_2$, $\Dx_1^3$ is a
regular sequence, the scheme $\Spec \DS/(q_1, q_2)$ is a cone over a scheme of
dimension zero and $\Dx_1^3$ does not
vanish identically on any of its components. Since $\sigma$ has degree two, $\Dx_1^3 - \sigma$
also does not vanish identically on any of the components of $\Spec
\DS/(q_1, q_2)$, thus $\Spec \DS/J$ has dimension zero,
so it is a complete intersection (see also \cite[Cor~2.4,
Rmk~2.5]{Valabrega_FormRings}).
Then the quotient by $J$ has length at most
$\deg(q_1)\deg(q_2)\deg(\Dx_1^3 - \sigma) = 12 = \dimk \DS/\Dan{f}$. Since
$J \subseteq \Dan{f}$, we have $\Dan{f} = J$ and
$\Apolar{f}$ is a complete intersection.
\end{proof}
\begin{lem}\label{ref:144311casenotCI:lem}
Let $f = x_1^5 + f_4 + g$, where $\deg g\leq 3$, be a polynomial such that
$H_{\Apolar{f_{\geq 4}}} = (1, 3, 3, 3, 1, 1)$
and $H_{\Apolar{f_4}} = (1, 3, 4, 3, 1)$. Suppose that $\Dx_1^3\hook f_4 =
0$ and that $\pp{\Dan{f_4}}_2$ does not define a complete intersection.
Then $\Apolar{f}$ is limit-reducible.
\end{lem}
\begin{proof}
\def\spann#1{\langle #1 \rangle}
Let $\langle q_1, q_2\rangle = \pp{\Dan{f_4}}_2$. Since $q_1, q_2$ do not
form a regular sequence, we have, after a linear transformation $\varphi$, two
possibilities: $q_1 = \Dx_1\Dx_2$ and $q_2 = \Dx_1\Dx_3$ or $q_1 =
\Dx_1^2$ and $q_2 = \Dx_1\Dx_2$. Let $\beta$ be the image of $\Dx_1$ under
$\varphi$, so that $\beta^3\hook f_4 = 0$.
Suppose first that $q_1 = \Dx_1\Dx_2$ and $q_2 = \Dx_1\Dx_3$. If $\beta$
is up to constant equal to $\Dx_1$, then $\Dx_1\Dx_2, \Dx_1\Dx_3,
\Dx_1^3\in \Dan{f_4}$, so that $\Dx_1^2$ is in the socle of
$\Apolar{f_4}$, a contradiction. Thus we may assume, after another change
of variables, that $\beta = \Dx_2$, $q_1 = \Dx_1\Dx_2$ and $q_2 = \Dx_1\Dx_3$.
Then $f = x_2^5 + f_4 + \hat{g} = x_2^5 + x_1^4 + \hat{h} + \hat{g}$,
where $\hat{h}\in k[x_2, x_3]$ and $\deg(\hat{g})\leq 3$. Then by
Lemma~\ref{ref:topdegreetwist:lem} we may assume that $\Dx_1^2\hook (f - x_1^4) =
0$, so $\Apolar{f}$ is limit-reducible by
Proposition~\ref{ref:stretchedhavedegenerations:prop}. See also
Example~\ref{ref:quarticlimitreducible:example} (the degree
assumption in the Example can easily be modified).
Suppose now that $q_1 = \Dx_1^2$ and $q_2 = \Dx_1\Dx_2$. If
$\beta$ is not a linear combination of $\Dx_1, \Dx_2$, then we may assume $\beta
= \Dx_3$. Let $m$ be any monomial of $f_4$ divisible by $x_1$. Since $q_1,
q_2\in \Dan{f_4}$, we see that $m =\lambda x_1x_3^3$ for some $\lambda\in
k$. But since $\beta^3\in
\Dan{f_4}$, we have $m = 0$. Thus $f_4$ does not contain $x_1$, so
$H_{\Apolar{f_4}}(1) < 3$, a contradiction. Thus $\beta\in \langle \Dx_1,
\Dx_2\rangle$. Suppose $\beta = \lambda\Dx_1$ for
some $\lambda\in k \setminus \{0\}$.
Applying Lemma~\ref{ref:144311Hilbfunc:lem} to $f_{\geq 4}$ we see
that $x_1^2$ is a derivative of $f_4$, so $\beta^2\hook f_4\neq 0$, but
$\beta^2\hook f_4 = \lambda^2q_1\hook f_4 = 0$, a contradiction. Thus
$\beta = \lambda_1 \Dx_1 + \lambda_2 \Dx_2$ and changing $\Dx_2$ we
may assume that $\beta = \Dx_2$. This substitution does not change $\langle \Dx_1^2,
\Dx_1\Dx_2\rangle$. Now we directly check that $f_4 =
x_3^2(\kappa_1 x_1x_3 + \kappa_2 x_2^2 + \kappa_3 x_2x_3 + \kappa_4
x_3^2)$, for some $\kappa_{\bullet}\in k$. Since $x_1$ is a derivative
of $f$, we have $\kappa_1\neq 0$. Then a non-zero element
$\kappa_2\Dx_1\Dx_3 - \kappa_1\Dx_2^2$ annihilates $f_4$. A contradiction
with $H_{\Apolar{f_4}}(2) = 4$.
\end{proof}
\begin{lem}\label{ref:144311addingpartial:lem}
Let $f = x_1^5 + f_4 + g$, where $\deg g\leq 3$, be a polynomial such that
$H_{\Apolar{f_4}} = (1, 3, 3, 3, 1)$. Suppose that $\Dx_1^3\hook f_4 =
0$. Then there exists $\lambda\in k$ such that the apolar algebra of $f_4 + \lambda x_1^4$ has
Hilbert function $(1, 3, 4, 3, 1)$.
\end{lem}
\begin{proof}
We will use the classification of possible $f_4$ given in
Proposition~\ref{ref:thirdsecant:prop}. After a linear change of
coordinates we may assume that $f_4$ is in one of the forms
\[
x_1^4 + x_2^4 + x_3^4,\quad x_1^3x_2 + x_3^4, \quad x_1^2(x_1x_3 +
x_2^2).
\]
Let $\beta$ be the image of $\Dx_1$ in the dual change of coordinates,
then $\beta^3\hook f_4 = 0$. If $f_4$ is of the first form, then no such
$\beta$ exists. If $f_4$ has the second form, then $\beta = \Dx_2$ up to a
constant. Then $H_{\Apolar{f_4 + x_2^4}} = (1, 3, 4, 3, 1)$ and we are
done. If $f_4$ has the third form, then we may assume that $\beta$ is some
linear combination of $\Dx_2$ and $\Dx_3$. We claim that the corresponding
$h = f_4 + (\lambda_2x_2 + \lambda_3x_3)^4$ satisfies $H_{\Apolar{h}} =
(1, 3, 4, 3, 1)$. Indeed, the second order derivatives of $h$ are
\[
x_1x_2,\quad x_1^2,\quad x_1x_3 + x_2^2,\quad cx_1^2 + (\lambda_2x_2 +
\lambda_3x_3)^2,
\]
where $c$ is a constant, and it is easy to see that they are linearly independent, provided that
at least one of $\lambda_2, \lambda_3$ is non-zero. Then
$H_{\Apolar{h}}(2) = 4$, so that $H_{\Apolar{h}} = (1, 3, 4, 3, 1)$.
\end{proof}
\subsection{Proofs}
The following Proposition~\ref{ref:mainthmthree:prop} generalises results about algebras with Hilbert function
$(1, 5, 5, 1)$, obtained in \cite{JJ1551} and \cite{bertone2012division}.
\begin{prop}\label{ref:mainthmthree:prop}
Let $A$ be a local Artin Gorenstein algebra of socle degree three and
${H_A(2)\leq 5}$. Then $A$ is smoothable.
\end{prop}
\begin{proof}
Suppose that the Hilbert function of $A$ is $(1, n, e, 1)$.
By Proposition \ref{ref:squares:prop} the dual socle generator of $A$ may
be put in the form $f + x_{e+1}^2 + \dots + x_n^2$, where $f\in
k[x_1,\dots,x_e]$. By repeated use of
Example~\ref{ref:squareadding:example} we see that
$A$ is a limit of algebras of the form $\Apolar{f} \times k^{\oplus n-e}$.
Thus it is smoothable if and only if $B = \Apolar{f}$ is.
Recall that $e = H_A(2)$; then $H_B = (1, e, e, 1)$.
If $H_B(1) = e \leq 3$ then $B$ is smoothable. It remains to consider $4\leq e\leq 5$.
The set of points corresponding to algebras with Hilbert function $(1, e,
e, 1)$ is irreducible in $\Hilb{2e+2}{e}$ by
Remark~\ref{ref:semicontinuity:rmk} applied to an obvious parameterisation (as mentioned in
\cite[Thm~I, p.~350]{iaCompressed}), thus it will be enough to find a
smooth point in this set which corresponds to a smoothable
algebra.
The cases $e = 4$ and
$e = 5$ are considered in Example~\ref{ref:1441:example} and Example~\ref{ref:1551:example} respectively.
\end{proof}
\begin{remark}
The claim of Proposition~\ref{ref:mainthmthree:prop} holds true if we
replace the assumption ${H_A(2) \leq 5}$ by $H_A(2) = 7$, thanks to the
smoothability of local Artin Gorenstein algebras with Hilbert function
$(1, 7, 7, 1)$, see~\cite{bertone2012division}. We will not use this
result.
\end{remark}
\begin{lem}\label{ref:14521case:lem}
Let $A$ be a local Artin Gorenstein algebra with Hilbert function $H_A$
beginning with $H_A(0) = 1$,
$H_A(1) = 4$, $H_A(2) = 5$, $H_A(3) \leq 2$. Then $A$ is smoothable.
\end{lem}
\begin{proof}
Let $f$ be a dual socle generator of $A$ in the standard form. From
Macaulay's Growth Theorem it follows that $H_A(m) \leq 2$ for all $m\geq 3$,
so that $H_A = (1, 4, 5, 2, 2, \ldots , 2, 1, \ldots , 1)$. Let $s$ be
the socle degree of $A$.
Let $\Dhdvect{A, s-2} = (0, q, 0)$ be the $(s-2)$-nd row of the symmetric
decomposition of $H_A$. If $q > 0$, then by
Example~\ref{ref:squareadding:example} we know that $A$ is limit-reducible; it
is a limit of algebras of the form $B \times k$, such that $H_{B}(1) =
H_A(1) - 1 = 3$. Then the algebra $B$ is smoothable
(see~\cite[Prop~2.5]{cn09}),
so $A$ is also smoothable. In the following we assume that $q = 0$.
We claim that $\DPut{ffour}{f_{\geq 4}} \in k[x_1, x_2]$. Indeed, the symmetric
decomposition of the Hilbert function is either $(1, 1, \ldots , 1) + (0,
1, \ldots , 1, 0) + (0, 0, 1, 0, 0) + (0, 2, 2, 0)$ or $(1, 2, \ldots ,
2, 1) + (0, 0, 1, 0, 0) + (0, 2, 2, 0)$. In particular $e(s-3) =
\sum_{i\geq 3} \Dhd{i}{1} = 2$, so that $\Dffour \in k[x_1,
x_2]$ and $H_{\Apolar{\Dffour}}(1) = 2$; in particular $x_1$ is a derivative of $\Dffour $, i.e.~there exists
a $\partial\in \DS$ such that $\partial\hook \Dffour = x_1$. Then we may
assume $\partial\in \DmmS^3$, so $\partial^2\hook f = 0$.
Let us fix $\Dffour $ and consider the set of all polynomials of the
form $h = \Dffour + g$, where $g\in k[x_1, x_2, x_3, x_4]$ has degree at
most three. By Example~\ref{ref:semicondegthree:example} the apolar
algebra of a general such polynomial will have
Hilbert function $H_A$. The set of polynomials $h$ with fixed
$h_{\geq 4} = \Dffour $, such that $H_{\Apolar{h}} = H_A$, is irreducible.
This set contains $h := \Dffour + x_3^2x_1 + x_4^2x_3$. To finish the
proof it is enough to show that $h$ is smoothable and unobstructed. Since
$\Apolar{\Dffour }$ is a complete intersection, this
follows from Example~\ref{ref:1551:example}.
\end{proof}
The following Theorem~\ref{ref:mainthmstretchedfive:thm} generalises numerous
earlier smoothability results on stretched (by Sally, see~\cite{SallyStretchedGorenstein}),
$2$-stretched (by Casnati and Notari, see \cite{CN2stretched}) and almost-stretched (by Elias and Valla, see
\cite{EliasVallaAlmostStretched}) algebras. It is important to understand
that, in contrast with the mentioned papers, we avoid a full classification of
algebras. In the course of the proof we give some partial classification.
\begin{thm}\label{ref:mainthmstretchedfive:thm}
Let $A$ be a local Artin Gorenstein algebra with Hilbert function $H_A$ satisfying
$H_A(2) \leq 5$ and $H_{A}(3)\leq 2$. Then $A$ is smoothable.
\end{thm}
\begin{proof}
We proceed by induction on $\len A$, the case $\len A = 1$ being trivial.
If $A$ has socle degree three, then the result follows from Proposition
\ref{ref:mainthmthree:prop}. Suppose that $A$ has socle degree $s\geq 4$.
Let $f$ be a dual socle generator of $A$ in the standard form.
If the symmetric decomposition of
$H_A$ has a term $\Dhdvect{s-2} = (0, q, 0)$ with $q\neq 0$, then by
Example~\ref{ref:squareadding:example}, we have that $A$ is a limit of
algebras of the form $B \times k$, where $B$ satisfies the assumptions
$H_B(2) \leq 5$ and $H_B(3) \leq 2$ on the Hilbert function. Then $B$ is
smoothable by induction, so also $A$ is smoothable. Further in the proof
we assume that $\Dhdvect{A, s-2} = (0, 0, 0)$.
We would like to understand the symmetric decomposition of the Hilbert
function $H_A$ of $A$. Since $H_A$ satisfies the Macaulay growth
condition (see Subsection \ref{ref:MacGrowth:sss}) it follows that $H_A =
(1, n, m, 2, 2, \dots, 2, 1, \dots, 1)$, where the number of ``$2$''s is
possibly zero. It follows that the possible symmetric decompositions
of the Hilbert function are
\begin{enumerate}
\item $(1, 2, 2, \ldots , 2, 1) + (0, 0, 1, 0, 0) + (0, n-3, n-3, 0)$,
\item $(1, 1, 1 \ldots , 1, 1) + (0, 1, 1, \ldots , 1, 0) + (0, 0, 1,
0, 0) + (0, n-3, n-3, 0)$,
\item $(1, 1, 1 \ldots , 1, 1) + (0, 1, 2, 1, 0) + (0, n-3, n-3, 0)$,
\item $(1, \ldots , 1) + (0, n-1, n-1, 0)$,
\item $(1, 2, \ldots ,2, 1) + (0, n-2, n-2, 0)$,
\item $(1, \ldots , 1) + (0,1, \ldots , 1, 0) + (0, n-2, n-2, 0)$,
\end{enumerate}
and that the decomposition is uniquely determined by the Hilbert function.
In all cases we have $H_A(1)\leq H_A(2)\leq 5$, so $f\in
k[x_1, \ldots ,x_5]$.
Let us analyse the first three cases. In each of them we have $H_A(2) = H_A(1) + 1$. If $H_A(1) \leq
3$, then $A$ is smoothable, see \cite[Cor 2.4]{cn09}. Suppose $H_A(1) \geq
4$. Since $H_A(2) \leq 5$,
we have $H_A(2) = 5$ and $H_A(1) = 4$. In this case the result follows from
Lemma~\ref{ref:14521case:lem} above.
It remains to analyse the three remaining cases. The proof is similar to
the proof of Lemma~\ref{ref:14521case:lem}, however here it
essentially depends on induction.
Let $\DPut{ffour}{f_{\geq 4}}$ be the sum of homogeneous components of $f$ which have
degree at least four. Since $f$ is in the standard form, we have
$\Dffour\in k[x_1, x_2]$. The symmetric decomposition of the Hilbert function of $\Apolar{\Dffour}$ is
one of $(1, \ldots ,1)$, $(1,2, \ldots ,2,1)$, $(1, \ldots ,1) + (0, 1, \ldots
, 1, 0)$, depending on the decomposition of the Hilbert function of $\Apolar{f}$.
Let us fix a vector $\hat{h} = (1, 2, 2, 2, \ldots , 2, 1, 1, \ldots , 1) $ and take the set
\[
V_1 := \left\{ f\in k[x_1, x_2]\ |\ H_{\Apolar{f}} = \hat{h}\right\}\mbox{
and } V_2 := \left\{ f\in k[x_1, \ldots ,x_n]\ |\ f_{\geq 4}\in V_1 \right\}.
\]
By Proposition~\ref{ref:irreducibleintwovariables:prop} the set $V_1$ is
irreducible and thus $V_2$ is also irreducible.
The Hilbert function of the apolar algebra of a general member of $V_2$ is,
by Example~\ref{ref:semicondegthree:example}, equal to $H_A$. It remains
to show that the apolar algebra of this general member is
smoothable.
Proposition~\ref{ref:irreducibleintwovariables:prop} implies that the general
member of $V_2$ has (after a nonlinear change of coordinates) the form
$f + \partial\hook f$, where $f = x_1^{s} + x_2^{s_2} + g$ for some $g$ of
degree at most three. Using Lemma \ref{ref:topdegreetwist:lem} we may assume (after another
nonlinear change of coordinates) that $\Dx_1^2\hook g = 0$.
Let $B := \Apolar{x_1^{s} + x_2^{s_2} + g}$. We will show that $B$ is
smoothable.
Since $s \geq 4 = 2\cdot 2$,
Proposition~\ref{ref:stretchedhavedegenerations:prop} shows that $B$
is limit-reducible. Analysing the fibers of
the resulting degeneration, as
in Example~\ref{ref:stretched:example}, we see that they have the form
$B'\times k$, where $B' = \Apolar{\hat{f}}$ and $\hat{f} = \lambda^{-1}x_1^{s-1} + x_2^{s_2} + g$.
Then $H_{B'}(3) = H_{\Apolar{\hat{f}_{\geq 4}}}(3) \leq 2$. Moreover,
$\hat{f}\in k[x_1, \ldots ,x_5]$, so that $H_{B'}(1) \leq 5$. Now
analysing the possible symmetric decompositions of $H_{B'}$, which are
listed above, we see that $H_{B'}(2)\leq H_{B'}(1) \leq 5$.
It follows from induction on the length that $B'$ is smoothable, thus
$B'\times k$ and $B$ are smoothable.
\end{proof}
\begin{prop}\label{ref:mainthmsfour:prop}
Let $A$ be a local Artin Gorenstein algebra of socle degree four satisfying $\len
A\leq 14$. Then $A$ is smoothable.
\end{prop}
\begin{proof}
We proceed by induction on the length of $A$. Then by
Proposition~\ref{ref:mainthmthree:prop} (and the fact that all algebras of
socle degree at most two are smoothable) we may assume that all algebras
of socle degree \emph{at most} four and length less than $\len A$ are
smoothable.
If $\Dhdvect{A, 2} = (0, q, 0)$ with $q\neq 0$, then by
Example~\ref{ref:squareadding:example} the algebra $A$ is a limit of
algebras of the form $A'
\times k$, where $A'$ has socle degree four. Hence $A$ is
smoothable. Therefore we assume $q = 0$. Then $H_A(1) \leq 5$ by Lemma
\ref{ref:hilbertfunc:lem}. Moreover, we may assume $H_A(1) \geq 4$ since
otherwise $A$ is smoothable by \cite[Cor 2.4]{cn09}.
The symmetric
decomposition of $H_A$ is $(1, n, m, n, 1) + (0, p, p, 0)$ for some $n, m,
p$. By the fact that $n\leq 5$ and Stanley's result
\cite[p.~67]{StanleyCombinatoricsAndCommutative} we have $n\leq m$, thus $n\leq 4$;
moreover $H_A(1)\leq 5$ and, by Lemma \ref{ref:trikofHilbFunc:lem}, $H_A(2) \leq 5$.
Due to $\len A \leq 14$ we have four cases: $n = 1$, $2$, $3$,
$4$ and five possible shapes of Hilbert functions: $H_A = (1, *, *, 1, 1)$,
$H_A = (1, *, *, 2, 1)$, $H_A = (1, 4, 4, 3, 1)$, $H_A = (1, 4, 4, 4,
1)$, $H_A = (1, 4, 5, 3, 1)$.
The conclusion in the first two cases follows from Theorem
\ref{ref:mainthmstretchedfive:thm}.
In the remaining cases we first look for a suitable irreducible set of
dual socle generators parameterising algebras with prescribed $H_A$.
We examine the case $H_A = (1, 4, 4, 3, 1)$. We claim that the set of
$f\in \DP = k[x_1, x_2, x_3, x_4]$ in the standard form which are dual socle
generators of algebras with Hilbert function $H_A$ is irreducible. Since the leading form $f_4$ of
such $f$ has Hilbert function $(1, 3, 3, 3, 1)$, the set of possible
leading forms is irreducible by Proposition~\ref{ref:thirdsecant:prop}.
Then the irreducibility follows from
Example~\ref{ref:semicondegthree:example}. The irreducibility in the cases
$H_A = (1, 4, 4, 4, 1)$ and $H_A = (1, 4, 5, 3, 1)$ follows similarly from
Proposition~\ref{ref:fourthsecant:prop} together with
Example~\ref{ref:semicondegthree:example}.
In the first two cases we see that $f_4$ is a sum of powers of variables,
then Example~\ref{ref:quarticlimitreducible:example} shows
that the apolar algebra $A$ of a general $f$ is limit-reducible. More
precisely,
$A$ is a limit of algebras of the form $A' \times k$, where $A'$ has socle
degree at most four (compare Example~\ref{ref:stretched:example}). Then $A$ is smoothable.
In the last case Example~\ref{ref:14531case:example} gives an unobstructed
algebra in this irreducible set.
This completes the proof.
\end{proof}
Now we are ready to prove Theorem~\ref{ref:mainthmsmoothable} which is the
algebraic counterpart of Theorems~\ref{ref:mainthm13degree}
and~\ref{ref:mainthm14degree}.
\begin{thm}\label{ref:mainthmsmoothable}
Let $A$ be an Artin Gorenstein algebra of length at most $14$. If the
Hilbert function of $A$ is not equal to
$(1, 6, 6, 1)$, then $A$ is smoothable. In particular, if $A$ has length
at most $13$, then $A$ is smoothable.
\end{thm}
\begin{proof}[Proof of Theorem~\ref{ref:mainthmsmoothable}]
\def\Dh#1{H(#1)}
Let $A$ be an algebra of length at most $14$ and of socle degree $s$.
By $H$ we denote the Hilbert function of $A$.
As mentioned in Subsection~\ref{sss:smoothability} it is enough to prove
$A$ is limit-reducible.
On the contrary, suppose that $A$ is strongly non-smoothable in the sense of Definition
\ref{ref:limitreducible:def}. By Example~\ref{ref:squareadding:example} we
have $\Dhdvect{A, s-2} = (0, 0, 0)$. Then by Lemma \ref{ref:hilbertfunc:lem} we see that either $H = (1,
6, 6, 1)$ or $\Dh{1} \leq 5$. It is enough to consider $\Dh{1}\leq 5$.
If $s = 3$ then $\Dh{2} \leq \Dh{1} \leq 5$, so by Proposition
\ref{ref:mainthmthree:prop} we may assume $s > 3$. By Proposition
\ref{ref:mainthmsfour:prop} it follows that
we may consider only $s\geq 5$.
If $\Dh{1}\leq 3$ then $A$ is smoothable by \cite[Cor 2.4]{cn09}, thus we
may assume $\Dh{1} \geq 4$. By Lemma \ref{ref:trikofHilbFunc:lem} we
see that $\Dh{2} \leq 5$. Then by Theorem
\ref{ref:mainthmstretchedfive:thm} we may reduce to the
case $\Dh{3} \geq 3$. By Macaulay's Growth Theorem we have $\Dh{2} \geq 3$.
Then $\sum_{i>3} \Dh{i} \leq 14 - 11$, so we are left with several
possibilities: $H = (1, 4, 3, 3, 1, 1, 1)$, $H = (1, 4, 3, 3, 2, 1)$ or
$H = (1, *, *, *, 1, 1)$.
In the first two cases it follows from the symmetric decomposition that
$\Dhdvect{A, s-2} \neq (0, 0, 0)$ which is a contradiction. We examine the
last case.
By Lemma \ref{ref:14341notexists:lem} there does not exist an algebra with Hilbert function $(1, 4, 3,
4, 1, 1)$. Thus the only possibilities are $(1, 4, 3, 3, 1, 1)$,
$(1, 5, 3, 3, 1, 1)$ and $(1, 4, 4, 3, 1, 1)$. Once more, it can be
checked directly
that in the first two cases $\Dhdvect{A, s-2} \neq (0, 0, 0)$, so we
focus on the last case. The only possible symmetric decomposition of $H$ with $\Dhdvect{A,
s-2} = (0, 0, 0)$ is
\begin{equation}\label{eq:hfdecomposition}
(1, 4, 4, 3, 1, 1) = (1, 1, 1, 1, 1, 1) + (0, 2, 2, 2, 0) + (0, 1, 1,
0).
\end{equation}
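Note that the summands on the right hand side of \eqref{eq:hfdecomposition} indeed add up componentwise to $(1, 4, 4, 3, 1, 1)$ and have total lengths $6 + 6 + 2 = 14$.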
Let us take a dual socle generator $f$ of $A$. We assume that $f$ is in
the standard form: $f = x_1^5 + f_4 + g$, where $\deg g\leq 3$, and analyse
the possible Hilbert functions of $B = \Apolar{f_4}$. By Lemma~\ref{ref:topdegreetwist:lem} we
may assume that $\Dx_1^3\hook f_4 = 0$. Suppose first that
$H_{B}(1) \leq 2$. From \eqref{eq:hfdecomposition} it follows that
$H_{\Apolar{f+f_4}}(1) = 3$, so that $H_B(1) = 2$ and we may assume that
$f_4\in k[x_2, x_3]$. Then by
Lemma~\ref{ref:topdegreetwist:lem} we may further assume $\Dx_1^2\hook (f
- x_1^5) = 0$, then Proposition~\ref{ref:stretchedhavedegenerations:prop}
asserts that $A = \Apolar{f}$ is limit-reducible.
Suppose now that $H_B(1) = 3$. Since $x_1^5$ is annihilated by a codimension
one space of quadrics, we have $H_B(2) \leq H_A(2) + 1$, so there are two
possibilities: $H_B = (1, 3, 3, 3, 1)$ or $H_B = (1,
3, 4, 3, 1)$. Applying Lemma~\ref{ref:144311addingpartial:lem} to $x_1^5 +
f_4$ and
replacing $f$ by $f + \lambda\Dx_1\hook f$ for some $\lambda\in k$, we may
assume that $H_B = (1, 3, 4, 3, 1)$. Now by Lemma~\ref{ref:144311casenotCI:lem} we may
consider only the case when $\pp{\Dan{f_4}}_2$ is a complete
intersection, then by Lemma~\ref{ref:144311caseCI:lem} we have that
$\Apolar{x_1^5 + f_4}$ is a complete intersection.
By Example~\ref{ref:semicondegthree:example} the set of algebras with fixed leading polynomial $f_{\geq 4}$
and Hilbert function $(1, 4, 4, 3, 1, 1)$ is irreducible.
It remains to find a smooth point of the Hilbert scheme in this set, for
every $f_{\geq 4}$.
By Corollary~\ref{ref:CIarenonobstructed:cor} we see that $x_1^5 + f_4 +
x_4^2x_1$ is such a smooth point.
\end{proof}
\begin{remark}\label{ref:1661:rmk}
Assume $\kchar = 0$.
In \cite{emsalem_iarrobino_small_tangent_space} Emsalem and Iarrobino
analysed the tangent space to the Hilbert scheme.
Iarrobino and Kanev claim that using Macaulay they are able to check
that the tangent space to $\Hilb{14}{6}$ has dimension
$76$ at a point corresponding to a general local Gorenstein algebra $A$ with Hilbert
function $(1, 6, 6, 1)$, see \cite[Lem~6.21]{iakanev}, see also \cite{cn10}
for further details. Since $76 < (1 + 6 + 6 + 1) \cdot 6$ this shows
that $A$ is non-smoothable. Moreover, since all algebras of degree at most
$13$ are smoothable, $A$ is strongly non-smoothable.
\end{remark}
To prove Theorem~\ref{ref:mainthm14degree}, we need to show that the
non-smoothable part of $\HilbGor_{14} \mathbb{P}^n$ (for $n\geq 6$) is
irreducible. The algebraic version of (a generalisation of) this statement is the following
lemma.
\begin{lem}\label{ref:1661irreducibility:lem}
Let $n\geq m$ be natural numbers and $V \subseteq \DP_{\leq 3} = k[x_1,
\ldots ,x_n]_{\leq 3}$ be the set of $f\in \DP$ such
that $H_{\Apolar{f}} = (1, m, m, 1)$. Then $V$ is constructible
and irreducible.
\end{lem}
\begin{proof}
Let $V_{gr} = V \cap \DP_{3}$ denote
the set of \emph{graded} algebras with Hilbert function $(1, m, m, 1)$.
This is a constructible subset of $\DP_3$.
To an element $f_3\in V_{gr}$ we may associate the tangent space to
$\Apolar{f_3}$, which is isomorphic to $S_2\hook f_3$. We
define
\[\{(f_3, [W])\in V_{gr} \times
\operatorname{Gr}(m, n) \ |\ W \supseteq S_2\hook f_3\},\]
which is an open subset in a vector bundle $\{(f_3, [W])\in \DP_{3} \times
\operatorname{Gr}(m, n) \ |\ W \supseteq S_2\hook f_3\}$ over
$\operatorname{Gr}(m, n)$, given by the condition $\dim S_2\hook f_3 \geq m$.
Let $f\in V$ and write it as $f = f_3 + f_{\leq 2}$,
where $\deg f_{\leq 2} \leq 2$. Then $H_{\Apolar{f_3}} = (1, m, m, 1)$.
Therefore we obtain a morphism $\varphi:V\to V_{gr}$ sending $f$ to $f_3$.
We will analyse its fibers. Let $f_3\in V_{gr}$ and $f = f_3 + f_{\leq
2}\in \DP_{\leq 3}$,
where $\deg f_{\leq 2}\leq 2$. Then $H_{\Apolar{f}} = (1, M, m, 1)$ for
some $M\geq m$. Moreover $M = m$ if and only if $\alpha\hook f_{\leq 2}$
is a partial of $f_3$ for every $\alpha$ annihilating $f_3$. The
fiber of $\varphi$ over $f_3$ is an affine subspace of $\DP_{\leq 2}$ defined by these conditions
and the morphism
\[
\{(f = f_3 + f_{\leq 2}, [W])\in V \times
\operatorname{Gr}(m, n) \ |\ W \supseteq S_2\hook f_3\}\to \{(f_3, [W])\in V_{gr} \times
\operatorname{Gr}(m, n) \ |\ W \supseteq S_2\hook f_3\}
\]
is a projection from a vector bundle, which is thus irreducible. Since $V$
admits a surjection from this bundle, it is irreducible as
well. Moreover, the above shows
that $V$ is constructible.
\end{proof}
\begin{proof}[Proof of Theorems~\ref{ref:mainthm13degree}
and~\ref{ref:mainthm14degree}]
The locus of points of the Hilbert scheme corresponding to smooth
(i.e.~reduced) algebras of length $d$ is irreducible,
as an image of an open subset of the $d$--symmetric product of
$\mathbb{P}^n$, and
smooth.
The locus of points corresponding to smoothable algebras is
the closure of the aforementioned locus, so it is also irreducible. If
$d\leq 13$ or $d\leq 14$ and $n\leq 5$, this locus is the whole Hilbert
scheme by Theorem~\ref{ref:mainthmsmoothable} and the claim follows.
Now consider
the case $d = 14$ and $n\geq 6$. Let $\mathcal{V}$ be the set of points of
the Hilbert scheme
corresponding to local Gorenstein algebras with Hilbert function $(1, 6,
6, 1)$. By Remark~\ref{ref:1661:rmk} these are the only non-smoothable algebras of length $14$, thus they
deform only to local algebras with the same Hilbert function. Therefore,
$\mathcal{V}$ is a sum of irreducible components of the Hilbert scheme. We
will prove that $\mathcal{V}$ is an irreducible set, whose general point
is smooth.
Let $\mathcal{V}_p \subseteq \mathcal{V}$ denote the subset
corresponding to schemes
supported at a fixed point $p\in\mathbb{P}^n$. Then
$\mathcal{V}$ is dominated by a set $\mathcal{V}_p \times
\mathbb{P}^n$. Note that an irreducible scheme supported at a point $p$
may be identified with a Gorenstein quotient of the power series ring having Hilbert
function $(1, 6, 6, 1)$. These quotients are parameterised by the dual
generators. More precisely, the set $V$ of $f\in k[x_1, \ldots ,
x_n]_{\leq 3}$ such that $H_{\Apolar{f}} = (1, 6, 6, 1)$ gives a morphism
\[V\to \mathcal{V}_p \subseteq \HilbGor_{14} \mathbb{P}^n\]
which sends $f$ to $\Spec \Apolar{f}$
supported at $p$ (see subsection~\ref{sss:parameterising}). Since $V\to \mathcal{V}_p$ is surjective and $V$ is
irreducible by Lemma~\ref{ref:1661irreducibility:lem}, we see that
$\mathcal{V}_p$ is irreducible. Then $\mathcal{V}$ is irreducible as well.
Take a smooth point of $\HilbGor_{14} \mathbb{P}^6$ which corresponds to
an algebra $A$ with Hilbert function $(1, 6, 6, 1)$. Then any point of
$\HilbGor_{14} \mathbb{P}^n$ corresponding to an embedding $\Spec A
\subseteq \mathbb{P}^n$ is smooth by \cite[Lem 2.3]{cn09}. This concludes the
proof.
\end{proof}
\section{Acknowledgements}
We wish to express our thanks to A.A.~Iarrobino and P.M.~Marques for
inspiring conversations. Moreover we are also sincerely grateful to
W.~Buczy\'nska and J.~Buczy\'nski for their care, support and hospitality during
the preparation of this paper. The examples were obtained with the help of
Magma computing software, see~\cite{Magma}. | 156,602 |
Academic Philosophy Events in the Netherlands
Wicked problems, co-production, and knowledge sharing: a worst case analysis
21 April @ 15:30 - 18:00
You are cordially invited to the upcoming talk
Wicked problems, co-production, and knowledge sharing: a worst case analysis
Albert Dzur
Distinguished Research Professor of Political Science, Bowling Green University
Details
April 21 2020
15.30 – 18.00
Janskerkhof 13
Room 0.06 (Stijlkamer)
3512 BL Utrecht
The Netherlands
Attendance is free, all are welcome! For further questions, please contact [email protected]
This event is hosted by Utrecht University. Funding is due to the ERC Starting Grant project The Enemy of the Good. Towards a Theory of Moral Progress (PROGRESS, 851043, PI: Hanno Sauer).
Your wedding day is supposed to be one of the happiest days of your life. Chances are, it’s one of the only times that most of your friends and family will be in one place together, not to mention you’re in the throes of promising yourself to the love of your life. At the same time, getting married can be one of the most expensive things you’ll ever do. People can be prone to overdoing it without even noticing. So when a Reddit user asked married people to recall the biggest waste of money at their wedding, the answers didn’t disappoint at all.
Weddings often feel like a showcase of overpriced trinkets but Reddit users agreed that nothing was a worse bang for your buck than centerpieces.
“Damn centrepieces,” Caz1542 wrote. “So pointless but you have to have them for some reason – hubby and I were fine with spending on a nice venue and good food/wine and things people actually enjoy, but we went bare minimum for centrepieces. Seriously, has anyone ever left a wedding thinking, ‘wow those centrepieces were really something!’?”
Flowers, party favors, and mediocre photographers were also put on blast by people who had been burned by the madness of the wedding industry. In fact, user HowardAndMallory may have summed it up best when they said “[e]verything but the certificate itself” was too damn expensive.
One user even had the audacity to go after the crown jewels of weddings: the bride’s dress. Though, in his defense, it sounds like his wife is just as upset about it as he is.
“The dress,” wrote PhilosophicalFarmer. “You spend a lot of time finding the right wedding dress, and even the cheap ones aren’t cheap. Then you wear it once, box it up, and put it in storage for the rest of your life. There aren’t a lot of social events where wearing a wedding dress is appropriate, and even if you have a daughter, odds are that it will be out of style or the wrong size by the time she needs it. My wife felt beautiful in hers, but she started regretting buying it shortly after we got it preserved and boxed.”
Weddings are so pricey, that once you start to tally up what exactly you spent money on, it becomes clear that the box of doves or raised lettering on your invitation was probably overkill. In 2017, the average wedding cost something in the vein of $26,000, which is more than half of the average household income in America. If that sounds crazy, then consider the fact that people today spend twice what they spent on them just a decade ago. | 208,625 |
TITLE: Is the topology induced by a norm an initial topology?
QUESTION [4 upvotes]: Let $(V,\mathcal{T})$ be a topological vector space where $\mathcal{T}$ is the topology induced by a norm
$$\Vert \cdot \Vert: V \to [0, \infty[$$
Is it true that $\mathcal{T}$ is the initial topology w.r.t. this norm?
Let $\mathcal{S}$ be the initial topology generated by this norm. I can see that $\mathcal{S}\subseteq \mathcal{T}$ must hold but does the other inclusion also hold?
EDIT: the other inclusion also seems to hold:
Consider the ball $B_{\Vert \cdot \Vert}(0,\epsilon)$. Since this is the inverse image of the open set $[0, \epsilon[$ under the norm map, we see that $B_{\Vert \cdot \Vert}(0, \epsilon) \in \mathcal{S}$. Since $(V, \mathcal{S})$ is also a topological vector space, all translates of this ball are also in $\mathcal{S}$. Thus $\mathcal{S}$ contains a basis of $\mathcal{T}$, and we must have $\mathcal{T}\subseteq \mathcal{S}$ as well.
Is this correct?
REPLY [2 votes]: Given the norm function $n(x)=\|x\|: V \to \Bbb R_0^+$, it's certainly true that $\mathcal{T}$, the induced topology by that norm on $V$, does make $n$ continuous, so for $\mathcal{T}_n$ (the initial topology induced by $n$ on $V$) we can surely say $$\mathcal{T}_n \subseteq \mathcal{T}$$ by minimality.
The reverse is certainly not true: Because we have an initial topology induced by a single function, $$\mathcal{T}_n = \{n^{-1}[O]: O \subseteq \Bbb R^+_0 \text{ open}\}$$ and this implies that if $n(x)=n(x')$ for $x,x' \in V$ and any open $O \in \mathcal{T}_n$: $x \in O \iff x' \in O$, so $V$ in $\mathcal{T}_n$ is not $T_0$ and so quite different from the metric topology $\mathcal{T}$.
However, if we use the notion of a weak vector-space topology, things change: the minimal vector space topology $\mathcal{T}_{n,v}$ (making the $+,-,\cdot$ operations continuous) that also makes $n$ continuous, has the property that all norm-open balls are open in $\mathcal{T}_{n,v}$ as you state and then the fact that all operations are continuous allows you to translate those to all other points as well. So trivially yes to your title question, if we work in the category of TVS's and trivially no if we work in the category Top. | 73,067 |
AF-9300-W Anchor Premium 3 stage Water filter with LCD display-White
The new Anchor 9300 series is an innovative countertop water filter with an advanced electrical filter performance monitor. It uses a 3-stage filtration process, described below.
Countertop Features:
- LCD displays remaining capacity
- Audible alarm for the end of filter life.
- Battery operated, automatic power-off memorization
Removes / Reduces:
* Unpleasant odors
* Chlorine * Chloramine
* Lead * Mercury
* Iron * Hydrogen sulfide
* Also controls scale, bacteria and algae.
Stage 1 - filter removes heavy metal and control micro-organisms
Stage 2 - filter removes smaller floating solids and separates the layers to preserve their integrity
Stage 3 - filter removes chemicals such as chlorine and its odor and improves taste, high chemical absorption capacity
TITLE: Solve the linear system Ax = B
QUESTION [4 upvotes]: I apologize about the formatting, but I'm stuck on this problem where I have to solve the linear system $Ax=b$; the vector $b$ is 4×1 while the matrix $A$ is 4×3. Is this type of question even solvable? I tried doing it after watching a video which featured a 3×1 vector and a 3×3 system.
$$A=\begin{bmatrix}
1&0&-2\\2&1&-3\\0&-2&1\\4&1&-2
\end{bmatrix}\qquad b=\begin{bmatrix}6\\9\\0\\11\end{bmatrix}$$
I've added the actual question from the sheet
REPLY [3 votes]: Remember that the clause $A \vec{x} = \vec{b}$ means that $\vec{x}$ is a vector that can be dot-multiplied on the right of $A$, so is a three component vector, $\vec{x} = (x_1, x_2, x_3)$. Then the clause means \begin{align*}
1\cdot x_1 + 0 \cdot x_2 -2 \cdot x_3 &= 6 \\
2\cdot x_1 + 1 \cdot x_2 -3 \cdot x_3 &= 9 \\
0\cdot x_1 - 2 \cdot x_2 +1 \cdot x_3 &= 0 \\
4\cdot x_1 + 1 \cdot x_2 -2 \cdot x_3 &= 11 \\
\end{align*}
So you have four equations in three unknowns: the system is overdetermined, so it may be inconsistent (i.e., have no solution).
But really, you should just try Gaussian elimination, using the first equation to eliminate the $x_1$s from the other three equations, then the reduced second equation to eliminate the $x_2$s from the following two reduced equations, then scale the third to remove the $x_3$ from the twice reduced last equation. Either the last equation becomes trivial ($0+0+0=0$) and the system has a solution which you can get by backsubstitution, or the last equation is not trivial and no solution exists.
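If you want to double-check numerically, here is a quick NumPy sketch (assuming NumPy is available); the fact that $Ax$ reproduces $b$ exactly confirms that the overdetermined system is consistent:

```python
import numpy as np

A = np.array([[1, 0, -2],
              [2, 1, -3],
              [0, -2, 1],
              [4, 1, -2]], dtype=float)
b = np.array([6, 9, 0, 11], dtype=float)

# least-squares solution of the overdetermined system
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)                      # [ 2. -1. -2.]
print(np.allclose(A @ x, b))  # True, so the system is consistent
```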
TITLE: $X_1,X_2,...,X_n$ independent random variables are uniformly distributed on $[0,1]$. $P(X_1<X_2<...<X_n)=?$
QUESTION [3 upvotes]: $X_1,X_2,...,X_n$ independent random variables are uniformly distributed on $[0,1]$.
So, we say that $P(X_1<X_2<...<X_n)=\frac{1}{n!}$
But why is that the correct answer? How do we calculate it?
According to the answer, my guess is: This case is like arranging $n$ people in a row. we have $n!$
permutations for that, and only one satisfies the requirement. Thus, we get $\frac{1}{n!}$.
Is that the way of thinking that should be?
REPLY [3 votes]: There is zero probability that two variables are equal. We can forget this case of zero measure.
Take a permutation of indices $\sigma$. Consider $p(\sigma)$ the probability that the variables are ordered accordingly.
By symmetry $p(\sigma)$ does not depend on $\sigma$ and the sum over all permutations is $1$, hence the conclusion that $p(\sigma)=1/n!$.
It does not even matter that the variables are uniformly distributed (e.g. they could be normally distributed R.V.s), as long as they are i.i.d. with a continuous distribution. The symmetry argument always applies.
Essentially the argument is very similar to the one you propose; one only has to reframe it in more formal probabilistic terms.
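If you want a quick numerical sanity check, here is a small Monte Carlo sketch (assuming Python with NumPy); for each $n$ the estimate should be close to $1/n!$:

```python
import math
import numpy as np

def estimate(n, trials=1_000_000, seed=0):
    """Monte Carlo estimate of P(X_1 < X_2 < ... < X_n) for i.i.d. Uniform(0,1)."""
    rng = np.random.default_rng(seed)
    x = rng.random((trials, n))                      # each row: one sample (X_1, ..., X_n)
    increasing = np.all(np.diff(x, axis=1) > 0, axis=1)
    return increasing.mean()

for n in range(2, 6):
    print(n, estimate(n), 1 / math.factorial(n))
```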
Let’s continue with our impromptu Cinco De Mayo Mixtape Fiesta with another one, and then there will be another one. I usually do not post mixtapes this frequently but these are all artists that I have been into so the good music has to be heard by everyone. The Kid Daytona gets Mick Boogie to assist him on this mixtape and comes with a much shorter project than the other two that are being posted today. The theme of the mixtape is with each song we are building more upon the machine that you can Come Fly With Me, well with him, since that is going to be the name of his album. Each producer took the sample from the intro track and chopped up to make an original beat and you can tell they are all “related” by listening to the bass groove in each song. He also was pretty original with concept of having each song with a title like “The Navigation” (the song above), “The Engine”, “The Body” etc. to go along with the album theme. The production is pretty nuts too, if I had to describe it I would say jazzy chaos on a spaceship. If you don’t get it then listen to the album because that’s all I got haha. Get the download after the break.
Download: The Kid Daytona – The Daytona 500 (Mixtape)
\begin{document}
\title{On weakly extremal structures in Banach spaces}
\author{Jarno Talponen}
\address{University of Helsinki, Department of Mathematics and Statistics, Box 68, (Gustaf H\"{a}llstr\"{o}minkatu 2b) FI-00014 University
of Helsinki, Finland}
\email{[email protected]}
\subjclass{Primary 46B20; Secondary 46A20}
\date{\today}
\begin{abstract}
This paper deals with the interplay of the geometry of the norm and the weak topology in Banach spaces.
Both dual and intrinsic connections between weak forms of rotundity and smoothness are discussed. Weakly locally uniformly rotund spaces,
$\omega$-exposed points, smoothness, duality and the interplay of all the above are studied.
\end{abstract}
\maketitle
\section{Introduction}
In introducing weaker forms of strong convexity in Banach spaces it is natural to apply the weak topology instead
of the norm topology to describe the size of the extremal sections of the closed unit ball. In this article we study
the weak geometry of the norm, that is, the interplay between the weak topology and the geometry of the norm.
While we investigate the duality between the smoothness and the convexity, it turns out
that also other types of interplay occur (see Theorem \ref{extreme}).
Our study involves $\omega$-locally uniformly rotundity, $\omega$-exposedness and the G-smoothness of the
points $x\in\S_{\X}$, as well as the properties of the duality mapping $J\colon \S_{\X}\longrightarrow \mathcal{P}(\S_{\X^{\ast}})$.
The concept of '$\omega$-exposedness' is natural in the sense that in a reflexive space $\X$ a point $x\in\S_{\X}$ is $\omega$-exposed
if and only if it is exposed.
Let us mention the main themes appearing in this article. First, we give a characterization for
$\omega$-exposed points $x\in\S_{\X}$ in terms of duality between the norm-attaining G-smooth functionals of the
dual space. This also leads to a sufficient condition for a point $x\in\S_{\X}$ to possess the $\omega$-LUR property.
We also give a characterization for the reflexivity of a Banach space $\X$ in terms of an equivalent bidual $\omega$-LUR renorming
of the bidual $\X^{\ast\ast}$. See e.g. \cite{Sullivan,90} for related research.
Secondly, we discuss the abundance of ($\omega$- or strongly) exposed points in various situations. For strongly exposed points this is
a classical topic in the literature (see \cite{LP}) and well-studied especially in connection with the Radon-Nikodym property
(see e.g. \cite[p.35]{JL}). In treating the question whether the $\omega$-exposed points are dense in $\S_{\X}$
we apply the topological properties of the duality mapping.
We will use the following notations. Real Banach spaces are denoted by $\X,\Y$ and $\Z$ unless otherwise stated.
We denote by $\B_{\X}=\{x\in\X:\ ||x||\leq 1\}$ the closed unit ball and by $\S_{\X}=\{x\in\X:\ ||x||=1\}$ the unit sphere.
In what follows $\tau$ is a locally convex topology on $\X$.
For a discussion of basic concepts and results concerning the geometry of the norm we refer to \cite{HHZ} and
to the first chapter of \cite{JL}.
The duality mapping $J\colon \S_{\X}\rightarrow \mathcal{P}(\S_{\X^{\ast}})$ is defined by
$J(x)=\{x^{\ast}\in\S_{\X^{\ast}}|x^{\ast}(x)=1\}$. If $A\subset \S_{\X}$ then we denote $J(A)=\bigcup_{a\in A}J(a)$.
Denote by $\mathrm{NA}(\S_{\X^{\ast}})=J(\S_{\X})$ the set of norm-attaining functionals of $\S_{\X^{\ast}}$.
Let us recall the following well known results:
\begin{theorem}\label{densesmooth}
If $\X$ is a separable Banach space, then the set of G-smooth points of $\S_{\X}$ is a dense $G_{\delta}$-set.
\end{theorem}
\begin{theorem}(Bishop-Phelps)\label{BPT}
The set $\mathrm{NA}(\S_{\X^{\ast}})$ is dense in $\S_{\X^{\ast}}$.
\end{theorem}
Recall that $\X$ is called an Asplund space if for any separable $\Y\subset\X$ it holds that $\Y^{\ast}$ is separable.
An Asplund space $\X$ satisfies that the F-smooth points of $\S_{\X}$ are a dense $G_{\delta}$-set (see \cite[Ch.1]{JL}).
If $\X$ satisfies this conclusion for G-smooth points respectively, then $\X$ is called weakly Asplund.
A point $x\in\S_{\X}$ is called \emph{very smooth} if $x\in \S_{\X}\subset \S_{\X^{\ast\ast}}$ is G-smooth
considered in $\X^{\ast\ast}$.
If $f\in \S_{\X^{\ast}}$ is such that $f^{-1}(1)\cap \S_{\X}=\{x\}$ then $f$ is said to \emph{expose} $x$.
We say that $x\in \S_{\X}$ is a \emph{$\omega$-exposed} point if there is $f\in \S_{\X^{\ast}}$ such that whenever
$(x_{n})\subset \B_{\X}$ is a sequence with $\lim_{n\rightarrow \infty}f(x_{n})=1$ then
$x_{n}\stackrel{\omega}{\longrightarrow}x$ as $n\rightarrow\infty$. If the same conclusion holds
for norm convergence then $x$ is called a \emph{strongly exposed} point. In such cases above $f$ is called
a \emph{$\omega$-} (resp. \emph{strongly}) \emph{exposing functional} for $x$.
We denote the $\omega$- (resp. strongly) exposed points of $\B_{\X}$ by $\omega\mathrm{-exp}(\B_{\X})$
(resp. $||\cdot||\mathrm{-exp}(\B_{\X})$).
A point $x\in\S_{\X}$ is called $\tau$-strongly extreme,
if for all sequences $(z_{n}),(y_{n})\subset \B_{\X}$ such that
$\frac{z_{n}+y_{n}}{2}\stackrel{||\cdot||}{\longrightarrow} x$ as $n\rightarrow\infty$ it holds that
$z_{n}-y_{n}\stackrel{\tau}{\longrightarrow}0$ as $n\rightarrow\infty$.
When $\tau$ is a locally convex topology on $\X$ we say that $x\in\S_{\X}$ is a $\tau$-Locally Uniformly Rotund point,
$\tau$-LUR point for short, if for all sequences $(x_{n})\subset\B_{\X}$ such that $\lim_{n\rightarrow\infty}||x+x_{n}||=2$
it holds that $x_{n}\stackrel{\tau}{\longrightarrow}x$ as $n\rightarrow\infty$. If each $x\in\S_{\X}$ is $\tau$-LUR
then $\X$ is said to be $\tau$-LUR.
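For instance, in a Hilbert space $H$ every point $x\in\S_{H}$ is a $||\cdot||$-LUR (and hence $\omega$-LUR) point: for $(x_{n})\subset\B_{H}$ the parallelogram law gives $||x+x_{n}||^{2}=2||x||^{2}+2||x_{n}||^{2}-||x-x_{n}||^{2}\leq 4-||x-x_{n}||^{2}$, so that $||x+x_{n}||\rightarrow 2$ forces $x_{n}\stackrel{||\cdot||}{\longrightarrow}x$.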
If $T$ is a topological space then a subset
$A\subset T$ is called \emph{comeager} provided that it contains a countable intersection of subsets open and dense in $T$.
\section{Weak topology and convexity}
The most essential concept in this article is $\omega$-exposed point $x\in\S_{\X}$, which by its definition is exposed by a
$\omega$-exposing functional $f\in\S_{\X^{\ast}}$. Let us begin by characterizing the $\omega$-exposing functionals.
\begin{theorem}\label{weak}
Let $\X$ be a Banach space and suppose that $x\in \S_{\X},\ f\in \S_{\X^{\ast}}$ are such that $f(x)=1$.
Then the following conditions are equivalent:
\begin{enumerate}
\item[(i)]{$f$ $\omega$-exposes $x$.}
\item[(ii)]{For each closed convex set $C\subset \B_{\X}$ such that $\sup_{y\in C}f(y)=1$ it holds that $x\in C$.}
\item[(iii)]{$f$ is a G-smooth point in $\X^{\ast}$, i.e. there is a unique $\psi\in \S_{\X^{\ast\ast}}$ such that $\psi(f)=1$.}
\end{enumerate}
\end{theorem}
Before giving the proof we will make some remarks. The above result must be previously known. For example
by applying \cite[Thm.1,Thm.3]{ZZ} one can deduce the equivalence (i)$\iff$(iii). However, we will give a more elementary proof below.
If above $\X$ is reflexive and $f$ exposes $x$ then one can see by the weak compactness of $\B_{\X}$
that actually $f$ $\omega$-exposes $x$.
\begin{proof}[Proof of Theorem \ref{weak}]
The equivalence (i)$\Leftrightarrow$(ii) follows easily by applying Mazur's theorem to $\conv(\{x_{n}|n\in\N\})$,
where $(x_{n})\subset \B_{\X}$ is a sequence such that $f(x_{n})$ tends to $1$ as $n\rightarrow\infty$.
Direction (iii)$\implies$(ii):
Towards this suppose that $C\subset \B_{\X}$ is a closed convex subset so that
$\sup_{y\in C}f(y)=1$. Fix a sequence $(y_{n})\subset C$ satisfying $f(y_{n})\rightarrow 1$ as $n\rightarrow \infty$.
Observe that $x\in \S_{\X}\subset \S_{\X^{\ast\ast}}$ is the unique norm-one functional supporting $f$, since $f$ is a G-smooth
point of $\X^{\ast}$. By applying the \u Smulyan lemma to $x$ and $(y_{n})$ considered in $\X^{\ast\ast}$
we get that $y_{n}\stackrel{\omega^{\ast}}{\longrightarrow}x$ in $\X^{\ast\ast}$ as $n\rightarrow\infty$. This means that
$h(y_{n})\rightarrow h(x)$ as $n\rightarrow \infty$ for all $h\in \X^{\ast}$, so that $y_{n}\stackrel{\omega}{\longrightarrow}x$
in $\X$ as $n\rightarrow \infty$. Thus $x\in\overline{\conv}(\{y_{n}|n\in \N\})\subset \overline{\conv}^{\omega}(C)=C$
by Mazur's theorem, since $C$ is a norm-closed convex set.
Conversely, suppose that (i) holds and let $\phi\in \B_{\X^{\ast \ast}}$ be an arbitrary point such that $\phi(f)=1$.
We claim that $\phi=x$, which yields that $f$ is a G-smooth point. Let $g\in \X^{\ast}$ be arbitrary and recall that
$\B_{\X}$ is $\omega^{\ast}$-dense in $\B_{\X^{\ast \ast}}$ by Goldstine's theorem. In particular,
$\phi\in\overline{\B_{\X}}^{\omega^{\ast}}$. Thus
$$\B_{\X}\cap \{\psi\in \B_{\X^{\ast\ast}}:\ |\psi(f)-1|<\frac{1}{n}\ \mathrm{and}\ |\psi(g)-\phi(g)|<\frac{1}{n}\}\neq\emptyset$$
for all $n\in \N$. Hence we may pick a sequence $(y_{n})\subset \B_{\X}$ for which $f(y_{n})\rightarrow 1$ and
$g(y_{n})\rightarrow \phi(g)$ as $n\rightarrow \infty$. Since $f$ $\omega$-exposes $x$ we know that
$y_{n}\stackrel{\omega}{\longrightarrow}x$ in $\X$ as $n\rightarrow\infty$.
This yields that $\phi(g)=\lim_{n\rightarrow\infty}g(y_{n})=g(x)$ and that $\phi=x$ as this equality holds for all $g\in \X^{\ast}$.
\end{proof}
\begin{proposition}
Let $\X$ be a Banach space, $x\in\S_{\X}$ an F-smooth point and $f\in\S_{\X^{\ast}}$ a G-smooth point such that
$f(x)=1$. Then $x$ is a $\omega$-LUR point.
\end{proposition}
\begin{proof}
Suppose $(x_{n})\subset\B_{\X}$ is a sequence such that $||x_{n}+x||\rightarrow 2$ as $n\rightarrow \infty$.
By Theorem \ref{weak} the functional $f$ $\omega$-exposes $x$. Thus it suffices to show that
$f(x_{n})\rightarrow 1$ as $n\rightarrow \infty$.
By the Hahn-Banach Theorem one can find a sequence of functionals $(g_{n})\subset \S_{\X^{\ast}}$ such that
$g_{n}\left(\frac{x+x_{n}}{2}\right)=\left|\left|\frac{x+x_{n}}{2}\right|\right|$ for each $n\in\N$.
Clearly $g_{n}(x)\rightarrow 1$ and $g_{n}(x_{n})\rightarrow 1$ as $n\rightarrow \infty$.
Hence the F-smoothness of $x$ together with the \u Smulyan Lemma yields that
$g_{n}\stackrel{||\cdot||}{\longrightarrow} f$ as $n\rightarrow\infty$. Thus we obtain that $f(x_{n})\rightarrow 1$
as $n\rightarrow\infty$.
\end{proof}
\begin{proposition}
Let $\X$ be a Banach space and suppose that $x^{\ast}\in \S_{\X^{\ast}}$ is a very smooth point. Then there exists
$x\in \S_{\X}\subset \S_{\X^{\ast\ast}}$ such that $x^{\ast}\in\S_{\X^{\ast}}\subset \S_{\X^{\ast\ast\ast}}$ $\omega$-exposes
$x$ in $\X^{\ast\ast}$.
\end{proposition}
\begin{proof}
Since by the definition $x^{\ast}$ is G-smooth in $\X^{\ast\ast\ast}$ it suffices to show that there exists $x\in \S_{\X}$
such that $x^{\ast}(x)=1$. Indeed, once this is established we may apply Theorem \ref{weak} to obtain the claim.
Let $x^{\ast\ast}\in \S_{\X^{\ast\ast}}$ be such that $x^{\ast\ast}(x^{\ast})=1$. By Goldstine's theorem $\B_{\X}\subset\B_{\X^{\ast\ast}}$
is $\omega^{\ast}$-dense. Pick a sequence $(x_{n})\subset\S_{\X}$ such that $x^{\ast}(x_{n})\rightarrow 1$ as $n\rightarrow \infty$.
Observe that according to Theorem \ref{weak} the functional $x^{\ast}$ considered in $\S_{\X^{\ast\ast\ast}}$ $\omega$-exposes
$x^{\ast\ast}$ in $\X^{\ast\ast}$. Hence $x_{n}\stackrel{\omega}{\longrightarrow} x^{\ast\ast}$ in $\X^{\ast\ast}$
as $n\rightarrow\infty$. Since $\X\subset \X^{\ast\ast}$ is $\omega$-closed by Mazur's theorem, we obtain that
$x^{\ast\ast}\in \S_{\X}\subset\S_{\X^{\ast\ast}}$.
\end{proof}
It is a natural idea to characterize reflexivity of Banach spaces in terms of suitable equivalent renormings
(see e.g. \cite{Hajek}).
\begin{theorem}
The following conditions are equivalent:
\begin{enumerate}
\item[(1)]{$\X$ is reflexive.}
\item[(2)]{$\X$ admits an equivalent renorming such that $\X^{\ast\ast}$ is $\omega$-LUR.}
\item[(3)]{$\X$ admits an equivalent renorming such that $\Lambda\subset\S_{\X^{\ast\ast}}$ given by
\[\Lambda=\left\{\phi\in \S_{\X^{\ast\ast}}| \forall\ (\phi_{n})_{n\in\N}\subset\S_{\X^{\ast\ast}}:
\sup_{n}||\phi+\phi_{n}||=2\ \implies\ \phi\in[(\phi_{n})_{n\in\N}]\right\}\]
satisfies that $[\Lambda]=\X^{\ast\ast}$.}
\end{enumerate}
\end{theorem}
\begin{proof}
If $\X$ is reflexive then it is weakly compactly generated and hence admits an equivalent LUR norm, see e.g. \cite[p.1784]{JL2}.
Thus, by using reflexivity again we obtain that $\S_{\X^{\ast\ast}}$ is LUR.
Direction (2)$\implies$(3) follows by using Mazur's theorem that for convex sets weak and norm closure coincide.
Since reflexivity is an isomorphic property we may assume without loss of generality in proving
direction (3)$\implies$(1) that $\X$ already satisfies $[\Lambda]=\X^{\ast\ast}$.
Fix $\phi\in\Lambda$. Select a sequence $(f_{n})\subset \S_{\X^{\ast}}$ such that
$\phi(f_{n})\rightarrow 1$ as $n\rightarrow\infty$. Pick a sequence $(x_{n})_{n\in\N}\subset \S_{\X}$ such that
$f_{n}(x_{n})\rightarrow 1$ as $n\rightarrow\infty$. This means that
\[||\phi+x_{n}||_{\X^{\ast\ast}}\geq (\phi+x_{n})(f_{n})\longrightarrow 2\ \mathrm{as}\ n\rightarrow\infty.\]
Hence by the definition of $\Lambda$ we obtain that $\phi\in [(x_{n})]$. Since $[\Lambda]=\X^{\ast\ast}$, this yields that
$[\X]=\X^{\ast\ast}$ and hence $\X=\X^{\ast\ast}$ as $\X\subset\X^{\ast\ast}$ is a closed subspace.
\end{proof}
It turns out below that a smoothness property (namely Asplund) together with a weak convexity property
(namely $\omega$-strongly extreme) yields in fact a stronger convexity property (namely the $\omega$-convergence),
which is analogous to the '$\omega$-exposed situation'.
\begin{theorem}\label{extreme}
Let $\X$ be an Asplund Banach space and let $x\in\S_{\X}$ be a $\omega$-strongly extreme point. Suppose that
$\{x_{n}|n\in\N\}\subset\B_{\X}$ is a set such that $x\in\overline{\conv}(\{x_{n}|n\in\N\})$.
Then there is a sequence $(x_{n_{k}})_{k}\subset \{x_{n}|n\in\N\}$ such that $x_{n_{k}}\stackrel{\omega}{\longrightarrow}x$ as
$k\rightarrow\infty$.
\end{theorem}
\begin{proof}
Consider convex combinations $y_{m}=\sum_{n\in J_{m}} a_{n}^{(m)}x_{n}\in\conv(\{x_{n}|n\in\N\})$
such that $y_{m}\rightarrow x$ as $m\rightarrow\infty$. Above $J_{m}\subset\N$ is finite and
$a_{n}^{(m)}\geq 0$ are the corresponding convex weights for $m\in\N$.
One shows easily that if $x_{n}^{\prime},x_{n}^{\prime\prime}\in \B_{\X},\ n\in\N,$ satisfy
$\lambda_{n}x_{n}^{\prime}+(1-\lambda_{n})x_{n}^{\prime\prime}\rightarrow x$ with $\lambda_{n}\rightarrow \lambda\in (0,1]$
as $n\rightarrow\infty$, then $x_{n}^{\prime}\stackrel{\omega}{\longrightarrow} x$ as $n\rightarrow\infty$.
Fix $f\in \X^{\ast},\ \epsilon>0$ and put
\[K_{m}=\{n\in J_{m}|\ f(x_{n})<f(x)-\epsilon\}\quad \mathrm{for}\ m\in\N.\]
We claim that $\lambda_{m}=\sum_{n\in K_{m}}a_{n}^{(m)}\rightarrow 0$ as $m\rightarrow\infty$. Indeed, assume to the contrary that
this is not the case. Then, by passing to a subsequence we may assume without loss of generality that
$\lambda_{m}\rightarrow \lambda\in (0,1]$ as $m\rightarrow\infty$. Write $y_{m}=\lambda_{m}y_{m}^{\prime}+(1-\lambda_{m})y_{m}^{\prime\prime}$ for $m\in\N$, where
\[y_{m}^{\prime}=\frac{\sum_{n\in K_{m}}a_{n}^{(m)}x_{n}}{\lambda_{m}}\ \mathrm{and}\ y_{m}^{\prime\prime}=\frac{\sum_{n\in J_{m}\setminus K_{m}}a_{n}^{(m)}x_{n}}{(1-\lambda_{m})}.\]
By the definition of the sequence $(y_{m}^{\prime})$ we have $f(y_{m}^{\prime})\leq f(x)-\epsilon$ for $m\in\N$ but this contradicts
the remark that $y_{m}^{\prime}\stackrel{\omega}{\longrightarrow} x$ as $m\rightarrow\infty$.
Thus $\sum_{n\in K_{m}}a_{n}^{(m)}\rightarrow 0$ as $m\rightarrow\infty$ and a similar argument for
$L_{m}=\{n\in J_{m}|f(x_{n})>f(x)+\epsilon\},\ m\in\N,$ gives that $\sum_{n\in L_{m}}a_{n}^{(m)}\rightarrow 0$ as $m\rightarrow\infty$.
These observations yield that
\[\lim_{m\rightarrow\infty}\sum_{\substack{n\in J_{m}:\\ |f(x)-f(x_{n})|<\epsilon}}a_{n}^{(m)}=1\]
for any $\epsilon>0$.
Since $f$ was arbitrary, we obtain that
\[\lim_{m\rightarrow\infty}\sum_{\substack{n\in J_{m}:\\ x_{n}\in U}}a_{n}^{(m)}=1,\]
where $U=\bigcap_{i=1}^{k}g_{i}^{-1}([g_{i}(x)-\epsilon,g_{i}(x)+\epsilon])$ with $g_{1},\ldots, g_{k}\in \X^{\ast}$ and $k\in\N$.
Recall that $x$ has a weak neighbourhood basis consisting of such sets $U$. In particular
\begin{equation}\label{eq: xom}
x\in\overline{\{x_{n}|n\in\N\}}^{\omega}.
\end{equation}
Let $\Y=\overline{\span}(\{x_{n}|n\in\N\})$. Since $\X$ is an Asplund space and $\Y$ is separable we obtain that $\Y^{\ast}$ is separable.
Fix a sequence $(h_{n})_{n}\subset \B_{\Y^{\ast}}$, which is dense in $\B_{\Y^{\ast}}$. Note that $(\B_{\Y},\omega)$ is metrizable
by the metric $d(x,y)=\sum_{n\in\N}2^{-n}|h_{n}(x-y)|,\ x,y\in\B_{\Y}$. Hence there is a sequence $(x_{n_{k}})_{k}\subset\{x_{n}|n\in\N\}$
such that $x_{n_{k}}\stackrel{\omega}{\longrightarrow} x$ as $k\rightarrow\infty$ in $\Y$. By the Hahn-Banach extension of functionals from $\Y^{\ast}$
to $\X^{\ast}$ it is straightforward to see that $x_{n_{k}}\stackrel{\omega}{\longrightarrow}x$ as $k\rightarrow\infty$ also in $\X$.
\end{proof}
\subsection{Density of $\omega$-exposed points}
\ \newline
Recall the following result due to Lindenstrauss and Phelps \cite[Cor.2.1.]{LP}: \\
\textit{If $C$ is a convex body in an infinite dimensional separable reflexive Banach space, then the extreme points
of $C$ are not isolated in the norm topology.}\\
This result is the starting point for the studies in this section.
Let us first consider a natural property of the duality mapping $J\colon \S_{\X}\rightarrow\mathcal{P}(\S_{\X^{\ast}})$. The following
fact is an elementary topological statement about $J$ and it is proved here for convenience.
\begin{proposition}\label{equivprop}
The following conditions are equivalent for a Banach space $\X$:
\begin{enumerate}
\item[(1)]{For each relatively open non-empty $U\subset\S_{\X}$ the set
$J(U)$ contains an interior point relative to $\mathrm{NA}(\S_{\X^{\ast}})$.}
\item[(2)]{For each relatively dense $A\subset \mathrm{NA}(\S_{\X^{\ast}})$ the subset
$\{x\in\S_{\X}|\ J(x)\cap A\neq\emptyset\}\subset \S_{\X}$ is dense.}
\end{enumerate}
\end{proposition}
\begin{proof}
Suppose that (1) holds. Let $A\subset \mathrm{NA}(\S_{\X^{\ast}})$ be a dense subset. Assume to the contrary that
$\{x\in\S_{\X}|\ J(x)\cap A\neq\emptyset\}\subset \S_{\X}$ is not dense. Thus there is
a non-empty set $U\subset \{x\in\S_{\X}|\ J(x)\cap A=\emptyset\}$, which is open in $\S_{\X}$. Hence
$J(U)\cap A=\emptyset$, where $J(U)$ contains an interior point relative to $\mathrm{NA}(\S_{\X^{\ast}})$ according to condition (1). This contradicts the assumption that
$A\subset \mathrm{NA}(\S_{\X^{\ast}})$ is dense and consequently condition (2) holds.
Suppose that (2) holds and $U\subset \S_{\X}$ is a non-empty open set. Assume to the contrary that $J(U)$ does not contain an interior
point relative to $\mathrm{NA}(\S_{\X^{\ast}})$. Then $\mathrm{NA}(\S_{\X^{\ast}})\setminus J(U)$ is dense in
$\mathrm{NA}(\S_{\X^{\ast}})$. Hence condition (2) yields that
$J^{-1}(\mathrm{NA}(\S_{\X^{\ast}})\setminus J(U))\subseteq\S_{\X}\setminus U$ is dense in $\S_{\X}$, and in particular so is $\S_{\X}\setminus U$. This contradicts the
assumption that $U$ is open and hence condition (1) holds.
\end{proof}
When $\X$ satisfies the above equivalent conditions of Proposition \ref{equivprop}, we say that $\X$ satisfies ($\ast$) for the sake of
brevity. The following result describes this condition.
\begin{proposition}\label{*prop}
Suppose that $\X^{\ast}$ is an Asplund space. Then $\X$ satisfies condition $(\ast)$ if and only if
the set of strongly exposed points of $\B_{\X}$ is dense in $\S_{\X}$.
\end{proposition}
\begin{proof}
The 'if' case. Fix non-empty $A\subset \mathrm{NA}(\S_{\X^{\ast}})$. Suppose $\S_{\X}\setminus \{x\in\S_{\X}|\ J(x)\cap A\neq\emptyset\}$
contains an interior point relative to $\S_{\X}$. We aim to show that in such case $A\subset\S_{\X^{\ast}}$ is not dense.
Indeed, by the density assumption regarding the strongly exposed points we obtain that there is
$x\in ||\cdot||\mathrm{-exp}(\B_{\X})$, which is not in the closure of $\{x\in\S_{\X}|\ J(x)\cap A\neq\emptyset\}$.
If $f\in\S_{\X^{\ast}}$ is a strongly exposing functional for $x$ then there is $\epsilon>0$ such that
$f(\{x\in\S_{\X}|\ J(x)\cap A\neq\emptyset\})\subset [-1,1-\epsilon]$. This gives that $||f-g||\geq \epsilon$ for all
$g\in A$, so that $A\subset\S_{\X^{\ast}}$ is not dense.
The 'only if' case. Let us apply the fact that $\X^{\ast}$ is Asplund. Denote by $\mathrm{F}$ the set of
F-smooth points $x^{\ast}\in \S_{\X^{\ast}}$. Recall that $F\subset\S_{\X^{\ast}}$ is dense since
$\X^{\ast}$ is Asplund. Condition ($\ast$) of $\X$ gives that $\{x\in\S_{\X}|\ J(x)\cap \mathrm{F}\neq\emptyset\}$ is dense in $\S_{\X}$.
By applying the \u Smulyan Lemma it is easy to see that each F-smooth functional $x^{\ast}\in \S_{\X^{\ast}}$ is norm-attaining and
in fact a strongly exposing functional.
\end{proof}
The following main result is a version of the above-mentioned result by Lindenstrauss and Phelps.
\begin{theorem}\label{th:isolated}
Let $\X$ be a Banach space, which satisfies $\dim(\X)\geq 2$ and suppose that the following conditions hold:
\begin{enumerate}
\item[(1)]{$\mathrm{NA}(\S_{\X^{\ast}})\subset \S_{\X^{\ast}}$ is comeager.}
\item[(2)]{$\X^{\ast}$ is weakly Asplund.}
\end{enumerate}
Then there does not exist a G-smooth point $x\in\S_{\X}$, which is $\omega$-isolated in $\omega\mathrm{-exp}(\B_{\X})$.\\
Moreover, if $\X$ satisfies additionally $(\ast)$, then $\omega\mathrm{-exp}(\B_{\X})\subset\S_{\X}$ is dense.
\end{theorem}
To comment briefly on the assumptions, observe that condition (1) above holds for instance if
$\X$ has the RNP (see \cite{Phelps} and \cite[Thm.8]{Bour}) and $\X^{\ast}$ is weakly Asplund for instance if $\X^{\ast}$ is separable
by Theorem \ref{densesmooth}. It follows that the Asplund property of $\X^{\ast}$ is sufficient for both the conditions (1) and (2)
to hold. Observe that we do not require $\X$ above to be separable, nor infinite dimensional. On the other hand, the assumption about
the G-smoothness of $x$ can not be removed. For example consider $\ell^{\infty}_{n}$ for $n\in\{2,3,\ldots\}$.
\begin{proof}[Proof of Theorem \ref{th:isolated}]
According to the weak Asplund property of $\X^{\ast}$ there is a dense $G_{\delta}$-set of G-smooth points in $\S_{\X^{\ast}}$.
We apply this fact together with assumption (1) as follows. By using the Baire category theorem we obtain that
\[\mathcal{NG}=\{x^{\ast}\in \mathrm{NA}(\S_{\X^{\ast}})|x^{\ast}\ \mathrm{is\ G-smooth}\}\]
is dense in $\S_{\X^{\ast}}$. Observe that by Theorem \ref{weak} all $f\in \mathcal{NG}$ are in fact $\omega$-exposing functionals.
Now, assume to the contrary that $x$ is a G-smooth $\omega$-isolated point in $\omega\mathrm{-exp}(\B_{\X})$.
Then $x$ is $\omega$-exposed by a unique support functional $f\in \mathcal{NG}$.
It is easy to see that since $f$ is a $\omega$-exposing and $\omega$-isolated functional, there is $\epsilon>0$ such that
$f(\omega\mathrm{-exp}(\B_{\X})\setminus \{x\})\subset [-1,1-\epsilon]$.
Consequently, by the uniqueness of $f$ we obtain that $||f-g||\geq \epsilon$ for all $g\in \mathcal{NG}\setminus \{f\}$.
Thus we obtain that the relatively open set
$\{h\in\S_{\X^{\ast}}:\ 0<||f-h||<\epsilon\}\subset\S_{\X^{\ast}}$ is non-empty as $\dim(\X)\geq 2$ and it does not intersect
the dense subset $\mathcal{NG}\subset\S_{\X^{\ast}}$, which provides a contradiction. Hence the first part of the claim holds.
Finally, let us assume that $\X$ satisfies ($\ast$). Hence $\{x\in \S_{\X}|J(x)\cap \mathcal{NG}\neq\emptyset\}=J^{-1}(\mathcal{NG})$
is dense in $\S_{\X}$. Recall that the points in $J^{-1}(\mathcal{NG})$ are $\omega$-exposed.
\end{proof}
\subsection*{Acknowledgments}
I thank the referee for many useful suggestions. This article is part of the writer's ongoing Ph.D. work,
which is supervised by H.-O. Tylli. The work has been supported financially by the Academy of Finland projects
\# 53968 and \# 12070 during the years 2003-2005 and by the Finnish Cultural Foundation in 2006. | 30,042 |
\begin{document}
\setcounter{secnumdepth}{1}
\title{Quantum Noise Filtering via Cross-Correlations}
\author{Boaz Tamir}
\email{[email protected]}
\affiliation{\mbox{Faculty of Interdisciplinary Studies, S.T.S Program, Bar-Ilan University, Ramat-Gan, Israel}\\{IYAR, Israel institute for advanced research}}
\author{Eliahu Cohen}
\email{[email protected]}
\affiliation{\mbox{School of Physics and Astronomy, Tel Aviv University, Tel Aviv, Israel}}
\date{\today}
\pacs{03.67.Ac;42.50.Lc}
\begin{abstract}
\textbf{Abstract}\\
Motivated by successful classical models for noise reduction, we suggest a quantum technique for filtering noise out of quantum states. The purpose of this paper is twofold: presenting a simple construction of quantum cross-correlations between two wave-functions, and presenting a scheme for quantum noise filtering. We follow a well-known scheme in classical communication theory that attenuates random noise, and show that one can build a quantum analog by using non-trace-preserving operators. By this we introduce a classically motivated signal processing scheme to quantum information theory, which can help reduce quantum noise, particularly phase flip noise.
\end{abstract}
\maketitle
\vspace{1cm}
\noindent
\section {Introduction and motivation}
\label{sec:intro}
In classical communication theory, the use of cross-correlation and autocorrelation functions is very common. There is a family of classical algorithms for the retrieval of information below noise. These are based on cross-correlations and use some type of referential wave function to detect the presence of a signal, or its shape \cite{Champeney}, \cite{Schwartz}. To name just a few, there is the phase sensitive detector that uses a synchronous referential wave form; the Boxcar detector that correlates a repetitive waveform with a pulse function as a gating function; the matched filter that detects the presence of a signal with known shape but unknown amplitude, and the more general correlator; the lock-in amplifier; the integrator which is a low-pass filter, etc. Cross-correlations are also used in spectrum analysis or for estimating the level of randomness, etc. see \cite{Sharma} - \cite{Cover}.
Here we wish to present a quantum analog for the classical correlator, that is, a quantum noise filter that utilizes cross-correlations to attenuate the noise.
The power of cross-correlation stems from a deep connection between the correlation and the energy or power spectrum density \cite{Wiener}. In classical theory this is known as the Wiener-Khinchin theorem \cite{Champeney}. Similar relations are also true in quantum optics \cite{Garrison}.\\
Quantum correlations are well-known in quantum optics following the work of Glauber \cite{Glauber}. The single photon interference in a two-slit experiment can be described using the first-order cross-correlation Glauber function $G^{(1)} (r_1,t_1,r_2,t_2)$. Moreover, the intensity-intensity correlation or the Hanbury-Brown-Twiss effect \cite{Hanbury} is explained via the second-order Glauber function $G^{(2)} (r_1,t_1,r_2,t_2)$. Cross-correlations are also used in photon detection, examples being the heterodyne and monodyne detection schemes \cite{Garrison}. Both are the quantum (optical) analogues of the classical schemes for the detection of weak radio frequency signals. For other uses of cross-correlations see for example \cite{Gerry}, \cite{Barnett}.
The paper is organized as follows. We start (section II) with a few essential preliminaries which will be used later on for the construction of the proposed method. The heart of the paper lies in section III. First we describe a simple way to construct correlation integrals. Given a discrete pure wave function with density matrix $\mathcal{S}$ and a `reference' (see below) discrete pure wave function $\mathcal{S}_0$ we present a non-trace-preserving operator $\mathcal{E}_{\mathcal{S}_0}$ such that $\mathcal{E}_{\mathcal{S}_0}(\mathcal{S})$ is a density matrix with the correlations $\langle S,S_0 \rangle$ of $S_0$ and $S$ as its coefficients. We use von Neumann measurement theory \cite{Neumann} and some techniques used in weak measurement theory \cite{Aharonov,ACE} to construct $\mathcal{E}$ (also discussed in the appendix).\\
Next we analyze the case of a quantum signal with noise; let $\rho= p\mathcal{S}+ (1-p)\mathcal{N}$ where $\mathcal{N}= \sum_i E_iSE_i^\dag$ and $\sum E^\dag E =1$. We prove that our operator $\mathcal{E}$ increases the fidelity between $\rho$ and $\mathcal{S}$, to be more precise:
\[ F(\mathcal{E}(\rho),\mathcal{E}(S))\geq F(\rho,\mathcal{S})\]
\noindent We also show that the increase in fidelity is paid for by the number of postselections (final projective measurements) in the construction of $\mathcal{E}_{\mathcal{S}_0}(\mathcal{S})$. We therefore reduce the amount of noise and pay in the number of measurements. The operator $\mathcal{E}_{\mathcal{S}_0}$ is the quantum parallel of a classical correlator or low-pass filter integrator. Another way to look at it is as a lock-in amplifier (recently discussed in \cite{Kotler}). \\
In section IV we discuss the results and suggest a few future research directions.
\section {Preliminaries}
In this section we outline the scope of our problem and define the correlator which will be later used for noise filtering.
Let $\ket{\phi} = \sum_{k=1}^N \phi(k) \ket{k}$ and $\ket{\psi} = \sum_{k=1}^N \psi(k) \ket{k}$ be two pure state vectors. We will also use the notation:
$\ket{\phi(k)} =\phi(k) \ket{k}$\\
\textbf{Definition:} Define the correlation coefficient as:
\begin{eqnarray}
C(\ket{\phi}, \ket{\psi}) = \frac{1}{N} \sum_{i=1}^N | \bra{\phi(k-i)} \psi(k) \rangle_k |^2,
\end{eqnarray}
\noindent where $\bra{\phi(k-i)} \psi(k) \rangle_k$ denotes the scalar product, i.e. the sum over the index $k$. \\
The following lemma is a simple consequence of the above definition, since by the Cauchy-Schwarz inequality each term $| \bra{\phi(k-i)} \psi(k) \rangle_k |^2$ lies in $[0,1]$ (the shifted state has norm at most one):\\
\textbf{Lemma:} $0 \leq C(\ket{\phi}, \ket{\psi})\leq 1$.\\
Consider the following density matrix describing a signal with some noise:
\begin{eqnarray}
\rho = p \ket{\phi} \bra{\phi} + (1-p) E \ket{\phi} \bra{\phi} E^\dag,
\end{eqnarray}
\noindent where $E^\dag E = 1$. Let $\mathcal{S}$ denote the signal $\ket{\phi} \bra{\phi}$ and $\mathcal{N}$ the noise $E \ket{\phi} \bra{\phi} E^\dag$, then we can extend the above definition of correlation coefficient to the densities:
\begin{eqnarray}
C(S,N) = \frac{1}{N} \sum_{i=1}^N | \bra{\phi(k-i)} E \ket{\phi(k)}_k |^2.
\end{eqnarray}
\textbf{Example:} For a phase flip type of noise $E = \frac{1}{n}\sum_{i=1}^n Z^{(i)}$ and a fixed amplitude signal $\ket{\phi} = \frac{1}{\sqrt{N}} \sum_{k=1}^N \ket{k}$ it is easy to see that: \[C(\mathcal{S},\mathcal{N})=0\]
\[C(\mathcal{S},\mathcal{S})=1.\]
Therefore, if we could somehow correlate $\rho$ with $\mathcal{S}$ we would eventually get rid of the noise. In other words, we are looking for a quantum counterpart of the classical scheme for filtering noise by correlation integrals. \\
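\noindent A minimal numerical sketch of this example (written here in Python/NumPy, under the assumption that the shifts $k\mapsto k-i$ are taken cyclically) evaluates both coefficients directly for a register of $n=3$ qubits:
\begin{verbatim}
import numpy as np

n = 3                                  # number of qubits
N = 2 ** n                             # dimension of the signal register
phi = np.ones(N) / np.sqrt(N)          # fixed-amplitude signal |phi>

# E = (1/n) sum_i Z^(i) is diagonal, with entry (1/n) sum_i (-1)^(i-th bit of k)
diag = np.array([sum((-1) ** int(b) for b in format(k, '0%db' % n)) / n
                 for k in range(N)])
E_phi = diag * phi                     # the noise component E|phi>

def C(ref, vec):
    # average over all cyclic lags i of |<ref(k-i)|vec(k)>|^2
    return np.mean([abs(np.vdot(np.roll(ref, i), vec)) ** 2 for i in range(N)])

print(C(phi, phi))    # ~ 1.0, i.e. C(S,S)
print(C(phi, E_phi))  # ~ 0.0, i.e. C(S,N)
\end{verbatim}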
Classically, if $\mathcal{S}$ is a signal, $\mathcal{N}$ is some random amplitude noise and $\langle\mathcal{S}+\mathcal{N},\mathcal{S}+\mathcal{N}\rangle$ is their autocorrelation, then:
\[ \langle\mathcal{S}+\mathcal{N},\mathcal{S}+\mathcal{N}\rangle = \langle\mathcal{S},\mathcal{S}\rangle + \langle\mathcal{S},\mathcal{N}\rangle+\]
\[+ \langle\mathcal{N},\mathcal{S}\rangle + \langle\mathcal{N},\mathcal{N}\rangle = \langle\mathcal{S},\mathcal{S}\rangle .\]
\noindent It is well-known that the spectrum of $\langle\mathcal{S},\mathcal{S}\rangle$ is very close to that of $\mathcal{S}$, and therefore we can analyze $\mathcal{S}$ by analyzing $\langle\mathcal{S},\mathcal{S}\rangle$, regardless of the noise $\mathcal{N}$ \cite{Hancock}. This scheme is not practical as a quantum process since it demands the cloning of $\mathcal{S}+\mathcal{N}$ \cite{Wootters}. Therefore we will use a similar scheme:
\[ \langle\mathcal{S}_0,\mathcal{S}+\mathcal{N}\rangle = \langle\mathcal{S}_0,\mathcal{S}\rangle + \langle\mathcal{S}_0,\mathcal{N}\rangle,\]
\noindent where $\mathcal{S}_0$ is some known referential function. \\
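\noindent As a point of reference, the classical version of this scheme can be sketched in a few lines (Python/NumPy; for brevity we use a sinusoidal reference, zero-mean white noise and only the zero-lag correlation):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N = 4096
t = np.arange(N)

signal = np.sin(2 * np.pi * t / 64)        # S
noise = 3.0 * rng.standard_normal(N)       # N: zero mean, swamps the signal
reference = np.sin(2 * np.pi * t / 64)     # S_0: known reference waveform

def corr(a, b):
    # normalised zero-lag correlation (1/N) sum_k a(k) b(k)
    return np.dot(a, b) / len(a)

print(corr(reference, signal))          # ~ 0.5, the signal term
print(corr(reference, noise))           # ~ 0.0, the noise term averages out
print(corr(reference, signal + noise))  # ~ 0.5, close to the signal term
\end{verbatim}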
To construct the correlation functions we will present a non-trace-preserving operator $\mathcal{E}= \mathcal{E}_{\ket{\phi_0}}$ where $\ket{\phi_0}\bra{\phi_0}$ is some referential signal $S_0$. Applying $\mathcal{E}$ to $\rho$ we have:
\begin{eqnarray}
\mathcal{E}_{\ket{\phi_0}}(\rho) = q \mathcal{E}_{\ket{\phi_0}}(\mathcal{S})+ (1-q) \mathcal{E}_{\ket{\phi_0}}(\mathcal{N}),
\end{eqnarray}
\noindent where $\mathcal{E}_{\ket{\phi_0}}(\mathcal{S})$ (resp. $\mathcal{E}_{\ket{\phi_0}}(\mathcal{N})$) is a density matrix with the correlations $\langle{\phi_0(k-i)}, \phi(k)\rangle$ (resp. $\bra{\phi_0(k-i)} E \ket{\phi(k)} $) as its coefficients. Hence the correspondence of quantum to classical signals is as follows:
\begin{eqnarray}
\end{eqnarray}
\[ \mathcal{E}_{\ket{\phi_0}}(\mathcal{S}) \sim \langle\mathcal{S}_0,\mathcal{S}\rangle\]
\[ \mathcal{E}_{\ket{\phi_0}}(\mathcal{N}) \sim \langle\mathcal{S}_0,\mathcal{N}\rangle\]
\[ \mathcal{E}_{\ket{\phi_0}}(\rho) \sim \langle\mathcal{S}_0,\mathcal{S+N}\rangle. \] \
\noindent The probability $q$ (resp. $1-q$) is a function of $p$ (resp. $1-p$) and the correlation coefficient $C(\mathcal{S}_0,\mathcal{S})$ (resp. $C(\mathcal{S}_0,\mathcal{N})$). In terms of the fidelity measure we will further show that:
\begin{eqnarray}
F( \mathcal{E}_{\ket{\phi_0}}(\rho), \mathcal{E}_{\ket{\phi_0}}(\mathcal{S})) \geq F(\rho, \mathcal{S}),
\end{eqnarray}
\noindent and therefore $\mathcal{E}_{\ket{\phi_0}}(\rho)$ can be looked at as a rotation in the direction of the signal and away from the noise.\\
\section {The construction of correlations and autocorrelations}
In this section we will provide a general argument showing how to construct correlations and autocorrelations between wave-functions. We will follow the general scheme presented below:\\
\setlength{\unitlength}{0.75mm}
\begin{picture}(150,80)(-20,-20)
\put(0,0){\framebox(30,30)}
\put(2,23){\makebox(0,0)[bl]{$A$}}
\put(2,3){\makebox(0,0)[bl]{$Q$}}
\put(-25,27){\makebox(0,0)[bl]{$\ket{\psi}\bra{\psi}$}}
\put(-25,7){\makebox(0,0)[bl]{$\ket{\phi}\bra{\phi}$}}
\put(15,37){\makebox(0,0)[bl]{$\frac{1}{N} \sum_{i,k,j,l} \ket{\phi(k-i)}\ket{i} \bra{j} \bra{\phi(l-j)}$}}
\put(13,13){\makebox(0,0)[bl]{$U\rho U^\dag$}}
\put(40,7){\makebox(0,0)[bl]{$\hat{M}=\ket{\phi_0}\bra{\phi_0}$}}
\put(-20,5){\line(1,0){20}}
\put(-20,25){\line(1,0){20}}
\put(30,25){\line(1,0){40}}
\put(30,5){\line(1,0){5}}
\put(30,5){\line(1,0){40}}
\put(45,5){\line(1,0){10}}
\put(-25,-15){\makebox(0,0)[bl]{{\bf Fig.1.} Schematic illustration of the correlator }}
\end{picture}\\
Given a system $Q$ described by the density matrix $\rho=\ket{\phi}\bra{\phi}$ of the pure state $\ket{\phi}$, and a system $A$ described by $\ket{\psi}\bra{\psi} = \frac{1}{N} \sum_{i,j} \ket{i}\bra{j}$, we will couple the two systems by a unitary operator that will entangle them (see the Appendix for further details). Thus we will get:
\begin{eqnarray}
U\rho U^\dag = \frac{1}{N}\sum_{i,k,j,l} \ket{\phi(k-i)}\ket{i} \bra{j} \bra{\phi(l-j)}.
\end{eqnarray}
\noindent Next we will post-select the referential function $\ket{\phi_0}$ using the measurement operator $\hat{M}= \ket{\phi_0}\bra{\phi_0}$. This will leave the system $A$ as a density matrix with the cross-correlations $\bra{\phi_0(k)} \phi(k-i)\rangle_k$ as matrix coefficients. Explicitly:
\begin{eqnarray}
\end{eqnarray}
\[ \mathcal{E}\ket{\phi}\bra{\phi} = \frac{1}{N} \sum_{i,j} \bra{\phi_0(k)} \phi(k-i)\rangle_k \bra{\phi(k-j)} \phi_0(k)\rangle_k \ket{i} \bra{j}.\]
\noindent Note the following trace formula:
\begin{eqnarray}
\label{trace}
\end{eqnarray}
\[ tr(\mathcal{E}\ket{\phi}\bra{\phi}) = \frac{1}{N} \sum_i | \bra{\phi_0(k)} \phi(k-i)\rangle_k|^2 = C(\ket{\phi_0},\ket{\phi}).\]
\noindent In other words, the probability to get the post-selection $\ket{\phi_0}\bra{\phi_0}$ is the average correlation between the reference state $\ket{\phi_0}$ and the original signal $\ket{\phi}$. If the correlation between the reference and the signal is high, then the probability to post-select the reference is also high.\\
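\noindent A short numerical sanity check of this trace formula (a sketch in Python/NumPy, modelling $U$ as a cyclic shift of $Q$ controlled by the ancilla $A$, which is the reading of Fig.~1 adopted here):
\begin{verbatim}
import numpy as np

N = 8
rng = np.random.default_rng(2)

# generic real normalised signal |phi> and reference |phi_0>
phi = rng.standard_normal(N);  phi /= np.linalg.norm(phi)
phi0 = rng.standard_normal(N); phi0 /= np.linalg.norm(phi0)

# After U the joint state is (1/sqrt(N)) sum_i S^i|phi> (x) |i>, with S the
# cyclic shift.  Post-selecting |phi_0> on Q leaves A in the unnormalised
# state with amplitudes c_i = <phi_0|S^i phi>/sqrt(N).
c = np.array([np.vdot(phi0, np.roll(phi, i)) for i in range(N)]) / np.sqrt(N)
rho_A = np.outer(c, c.conj())

# its trace equals the average correlation C(|phi_0>, |phi>)
C = np.mean([abs(np.vdot(np.roll(phi0, i), phi)) ** 2 for i in range(N)])
print(np.isclose(np.trace(rho_A).real, C))   # True
\end{verbatim}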
We will now show that the operator $\mathcal{E}= \mathcal{E}_{\ket{\phi_0}}$ increases the fidelity $F$.\\
\noindent \textbf{Theorem:} $F(\mathcal{E}(\rho),\mathcal{E}(S))\geq F(\rho,S)$.\\
Since $\mathcal{E}(\rho)$ is non-trace-preserving we have to normalize it by its trace. If we use the reference $\ket{\phi_0}$ then $\mathcal{E}= \mathcal{E}_{\ket{\phi_0}}$ and:
\begin{eqnarray}
\end{eqnarray}
\[ \frac{\mathcal{E}_{\ket{\phi_0}} (\rho)}{tr\mathcal{E}_{\ket{\phi_0}} (\rho)} = p \frac{\mathcal{E}_{\ket{\phi_0}} (\ket{\phi}\bra{\phi})}{tr\mathcal{E}_{\ket{\phi_0}} (\rho)}+(1-p) \frac{\mathcal{E}_{\ket{\phi_0}} (E\ket{\phi}\bra{\phi}E^\dag)}{tr\mathcal{E}_{\ket{\phi_0}} (\rho)}.\]
\noindent Next we normalize $\mathcal{E}_{\ket{\phi_0}} (\ket{\phi}\bra{\phi})$ and $\mathcal{E}_{\ket{\phi_0}} (E\ket{\phi}\bra{\phi}E^\dag)$ to get:
\begin{eqnarray}
\end{eqnarray}
\[ \frac{\mathcal{E}_{\ket{\phi_0}} (\rho)}{tr\mathcal{E}_{\ket{\phi_0}} (\rho)}= p \frac{tr \mathcal{E}_{\ket{\phi_0}}(\ket{\phi}\bra{\phi})}{tr\mathcal{E}_{\ket{\phi_0}} (\rho)} \frac{\mathcal{E}_{\ket{\phi_0}} (\ket{\phi}\bra{\phi})}{tr\mathcal{E}_{\ket{\phi_0}} (\ket{\phi}\bra{\phi})}+\]
\[+(1-p) \frac{tr \mathcal{E}_{\ket{\phi_0}}(E\ket{\phi}\bra{\phi}E^\dag)}{tr\mathcal{E}_{\ket{\phi_0}} (\rho)} \frac{\mathcal{E}_{\ket{\phi_0}} (E\ket{\phi}\bra{\phi}E^\dag)}{tr\mathcal{E}_{\ket{\phi_0}} (E\ket{\phi}\bra{\phi}E^\dag)}.\]
\noindent However by the above trace formula ($\ref{trace}$):
\begin{eqnarray}
\end{eqnarray}
\[ \tilde{\mathcal{E}}_{\ket{\phi_0}} (\rho)= p \frac{ C(\ket{\phi_0},\ket{\phi})}{tr\mathcal{E}_{\ket{\phi_0}} (\rho)} \cdot \tilde{\mathcal{E}}_{\ket{\phi_0}} (\ket{\phi}\bra{\phi})\]
\[+ (1-p) \frac{ C(\ket{\phi_0},E\ket{\phi})}{tr\mathcal{E}_{\ket{\phi_0}} (\rho)} \cdot
\tilde{\mathcal{E}}_{\ket{\phi_0}} (E\ket{\phi}\bra{\phi}E^\dag),\]
\noindent where $\tilde{\mathcal{E}}(\rho)$ denotes the normalization of $\mathcal{E}(\rho)$.\\
\noindent The density matrix $\tilde{\mathcal{E}}_{\ket{\phi_0}} (\ket{\phi}\bra{\phi})$ is now multiplied by the correlation coefficient $C(\ket{\phi_0},\ket{\phi})$, while the density matrix $\tilde{\mathcal{E}}_{\ket{\phi_0}} (E\ket{\phi}\bra{\phi}E^\dag)$ is multiplied by the correlation coefficient $C(\ket{\phi_0},E\ket{\phi})$. By the strong concavity of the fidelity (see \cite{Nielsen} theorem 9.2) we can write:
\begin{eqnarray}
F(\mathcal{E}(\rho),\mathcal{E}(S))\geq \sqrt{ p \frac{ C(\ket{\phi_0},\ket{\phi})}{tr\mathcal{E}_{\ket{\phi_0}} (\rho)}}.
\end{eqnarray}
\noindent We can also compute $F(\rho,S)$ directly:
\begin{eqnarray}
F(\rho,S) = \sqrt{p +(1-p) \bra{\phi} E \ket{\phi}^2}.
\end{eqnarray}
\noindent Whenever $C(\ket{\phi_0},\ket{\phi})$ is close to $1$ (by choosing the right referential function) and $C(\ket{\phi_0},E \ket{\phi})$ is close to $0$ (this will depend on the type of noise) we can guarantee that:
\begin{eqnarray}
F(\rho,S)\approx \sqrt{p},
\end{eqnarray}
\noindent and
\begin{eqnarray}
F(\mathcal{E}(\rho),\mathcal{E}(S))\approx \sqrt{ \frac{ p}{tr\mathcal{E}_{\ket{\phi_0}} (\rho)}}.
\end{eqnarray}
\hspace{75mm} $\blacksquare$\\
\textbf{Corollary:} The increase in fidelity due to the filter is proportional to:
\begin{eqnarray}
\frac{1}{\sqrt{tr\mathcal{E}_{\ket{\phi_0}} (\rho)}}.
\end{eqnarray}
\hspace{75mm} $\blacksquare$\\
It is important to note that filtering the noise has a cost in terms of the number of post-selection trials. This is the content of the above corollary. We will need several applications of the protocol until post-selection is achieved. This argument is similar to the one employed in signal amplification protocols using weak measurement methods \cite{Amp1,Amp2}.\\
\textbf{Example:} For $\ket{\phi}= \ket{\phi_0}=\frac{1}{\sqrt{N}} \sum_i \ket{i}$ and $E=\frac{1}{n}\sum_i Z^{(i)}$ a phase flip noise as above:
\[ F(\rho,S)= \sqrt{p}\]
\[ F(\mathcal{E}(\rho),\mathcal{E}(S))=1,\]
\noindent and the increase in fidelity is exactly $\frac{1}{\sqrt{p}}$.
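\noindent A minimal numerical sketch of this example (an illustration added here, not the original protocol implementation; it assumes $N=2^n$ basis states, cyclic-shift correlations as above, and glosses over the normalization of the noisy branch):
\begin{verbatim}
import numpy as np

# Constant signal, phase-flip noise E = (1/n) sum_i Z^(i), reference phi_0 = phi.
n = 3
N = 2 ** n
p = 0.7

phi = np.ones(N) / np.sqrt(N)          # constant signal
phi0 = phi.copy()                      # reference function

bits = (np.arange(N)[:, None] >> np.arange(n)) & 1        # binary digits of k
E = np.diag(((-1.0) ** bits).mean(axis=1))                # (1/n) sum_i Z^(i)

def filter_pure(v):
    # Filter of a pure state v: matrix (1/N) c_i conj(c_j),
    # with c_i = sum_k conj(phi0(k)) v(k - i)  (cyclic shift by i).
    c = np.array([np.vdot(phi0, np.roll(v, i)) for i in range(N)])
    return np.outer(c, c.conj()) / N

sig = filter_pure(phi)                                       # filtered pure signal
mix = p * filter_pure(phi) + (1 - p) * filter_pure(E @ phi)  # filtered noisy state
sig = sig / np.trace(sig)
mix = mix / np.trace(mix)

# sig is pure here, so F(mix, sig) = sqrt(<psi|mix|psi>), |psi> its top eigenvector.
w, V = np.linalg.eigh(sig)
psi = V[:, -1]
F_filtered = np.sqrt(np.real(psi.conj() @ mix @ psi))
F_unfiltered = np.sqrt(p + (1 - p) * np.real(np.vdot(phi, E @ phi)) ** 2)
print(F_unfiltered, F_filtered)        # approximately sqrt(p) and 1
\end{verbatim}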
\section{Discussion}
Motivated by the theory of cross-correlations in classical physics and its successful applications such as filters, correlators, integrators, etc. we extended its methods to the theory of quantum information processing. \\
We have constructed a simple method for creating cross-correlations using a quantum measurement scheme followed by post-selection. The cross-correlation integrals (for each of the lags) are represented as the amplitudes of the output vector of the quantum filter, whereas the average cross correlation over all lags corresponds to the post-selection probability. This immediately suggests the construction of a quantum noise filter. We have shown that such a filter can be defined by the use of a non-trace-preserving operator.\\
For the protocol to work we need to be sure that the correlation of the reference function with the original signal is high, and that the type of noise is such that the correlation of the noise with the signal is low. These conditions are similar to those required by its classical counterpart, the lock-in amplifier.
We applied the proposed method to the phase flip channel in the special case where the input signal was constant. The phase flip channel is closely related to the phase damping channel, one of the most subtle and important processes in the study of quantum information. The use of cross-correlation led to a significant reduction of this noise. \\
We believe that the above scheme (or a modification of it) could be generalized to different wave-functions and noise patterns. Moreover, this suggests the use of other classical signal processing techniques in quantum information theory.
\section{Appendix}
We shall describe the entanglement process implemented by $U$ in detail, first for a continuous and then for an almost discrete quantum pointer.
Consider a system $Q$ described by the density matrix $\ket{\eta}\bra{\eta}$. Let $V_Q$ be its Hilbert space and let $\ket{\eta}$ be described by a real continuous wave-function:
\begin{eqnarray}
\eta(x)= \left(\frac{1}{2\pi\sigma^2}\right)^{1/4} e^{-\frac{x^2}{4\sigma^2}}
\end{eqnarray}
We couple $Q$ to another system $A$ with $N$ eigenvectors $\ket{a_j}$ such that $\hat{A}\ket{a_j}= a_j \ket{a_j}$. The eigenvectors $\ket{a_j}$ will be used to shift the argument of the function $\eta(x)$. Consider also the state vector
\[ \ket{\psi} = \frac{1}{\sqrt{N}} \sum_j \ket{a_j} \]
\noindent in the system $A$. We will now couple the two systems by the von Neumann interaction Hamiltonian \cite{Aharonov,ACE}:
\begin{eqnarray}
\hat{H}= \hat{H}_{int}= g(t) \hat{A} \hat{P},
\end{eqnarray}
\noindent where $[\hat{X},\hat{P}]=i\hbar$ and the coupling function $g(t)$ satisfies:
\[ \int_0^T g(t)dt = 1 ,\]
\noindent during the coupling time $T$. We shall start with the vector:
\[ \ket{\Psi} = \ket{\psi} \ket{\eta}, \]
\noindent in the tensor space of the two systems. Applying the time evolution operator we get:
\[ e^{-i\hat{A}\hat{P}/\hbar} \ket{\Psi}.\]
\noindent It is easy to see that on the subspace $\ket{a_j} V_Q$ the Hamiltonian $\hat{H}$ takes $\hat{X}$ (i.e. $I \cdot \hat{X}$) to $\hat{X}+I a_j$, since by the time $T$ the coupling is already done, we have (Heisenberg equation):
\begin{eqnarray}
\hat{X}(T) - \hat{X}(0) = \int_0^T dt \frac{\partial \hat{X}}{\partial t} = \int_0^T \frac{i}{h}[\hat{H},\hat{X}] dt = a_j,
\end{eqnarray}
\noindent and therefore this Hamiltonian induces a transformation of the operator $\hat{X}$. The corresponding transformation of the coordinates of the wave function is (see \cite{Peres} section 8.4):
\begin{eqnarray}
e^{-i\hat{A}\hat{P}/h}\ket{\psi}\eta(x) = \frac{1}{\sqrt{N}}\sum_j \ket{a_j}\eta(x-a_j).
\end{eqnarray}
We now examine the case of almost discrete pointer by taking:
\[ \ket{k} = \left(\frac{1}{2\pi\epsilon^2}\right)^{1/4} e^{-\frac{(x-k)^2}{4\epsilon^2}}, \]
\noindent where $\epsilon$ is small enough so that $\ket{k}$ is a sharp Gaussian centered around $x=k$ (this will help us defining an almost discrete function in the variable $k$). Hence,
\[ e^{-i\hat{A}\hat{P}/h}\ket{a_j} \ket{k} = \ket{k-a_j}.\]
\noindent Therefore, for the state vector $\ket{\phi} = \sum \phi(k)\ket{k}$ we have:
\[ e^{-i\hat{A}\hat{P}/h}\ket{a_j} \ket{\phi} = \sum \phi(k) \ket{k-a_j}. \]
\noindent We choose
\[ \hat{A}= \sum_{l=1}^n 2^{l-1} \left({\frac{1-\sigma_z}{2}}\right)^{(l)}. \]
\noindent Let $j=i_n,...,i_1$ be the binary decomposition of $j$:
\[ j= \sum_{l=1}^n i_l 2^{l-1}, \]
\noindent then
\[\hat{A} \ket{i_n \otimes i_{n-1},...\otimes i_1}= j \ket{i_n \otimes i_{n-1},...\otimes i_1}.\]
\noindent Therefore we can write $\ket{a_j} = \ket{i_n \otimes i_{n-1},...\otimes i_1}$, $a_j =j$, and thus:
\[ e^{-i\hat{A}\hat{P}/h}\ket{a_j} \ket{\phi} = \sum \phi(k) \ket{k-j}. \]
\noindent The above formula is the entanglement we need for the correlation integrals in the main text.
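\noindent As a discrete illustration of this entangling operation (an added sketch under the simplifying assumption that the shift is cyclic modulo $N$, unlike the continuous pointer above), the coupling can be realized as a permutation matrix on $\mathbb{C}^{N}\otimes\mathbb{C}^{N}$:
\begin{verbatim}
import numpy as np

# Sketch: realize the coupling as the permutation |k>|a_j> -> |k - j mod N>|a_j>.
N = 8
U = np.zeros((N * N, N * N))
for j in range(N):              # eigenvalue a_j = j of A on the system register
    for k in range(N):          # pointer basis state |k>
        U[((k - j) % N) * N + j, k * N + j] = 1.0

print(np.allclose(U @ U.T, np.eye(N * N)))   # True: a (real) unitary permutation

# Action on |phi>|psi>; index convention |k>|a_j> <-> k*N + j, phi on the pointer.
phi = np.random.default_rng(1).normal(size=N); phi /= np.linalg.norm(phi)
psi = np.ones(N) / np.sqrt(N)
out = U @ np.kron(phi, psi)
print(np.isclose(out[((5 - 2) % N) * N + 2], phi[5] * psi[2]))   # True
\end{verbatim}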
\section {Acknowledgments}
E.C. was supported in part by the Israel Science Foundation Grant No. 1311/14.\\
\section {References} | 30,994 |
\begin{document}
\title{An Approach with Toric Varieties for Singular Learning Machines.}
\author{M.P. Castillo-Villalba}
\address{Program of Computational Genomics, Center for Genomic Sciences UNAM, C.P. 62210, Cuernavaca, Morelos, Mexico.}
\email{[email protected]}
\author{J.O. Gonz\'{a}lez -Cervantes.}
\address{Department of Mathematics, Superior School of Physic and Mathematics, National Polytechnical Institute, IPN, Zacatenco, CP 07738, Mexico City, Mexico.}
\email{[email protected]}
\subjclass[2010]{Primary}
\keywords{Toric variety; toric morphism; lattice polytope; Hilbert basis; learning curves; singular machines; Kullback distance.}
\ams{AMS classification codes.}
\date{19/Jul/2017.}
\begin{abstract}
Computational algebraic geometry, as applied in algebraic statistics, is beginning to explore new branches and applications in artificial intelligence and other areas. The development of mathematics is currently very extensive, and it is often difficult to see the immediate application of particular theorems in different areas; such is the case of Theorem 1, given in \cite{GEwald1993} and partially proved here. This work also intends to show that Hilbert bases are a powerful tool in data science, and for that reason we compile important results proved in the works of S. Watanabe \cite{SWatanabe2009}, D. Cox, J. Little and H. Schenck \cite{DCoxLittleSchenck2010}, and G. Ewald \cite{GEwald1993}. We first study the fundamental concepts of toric algebraic geometry. The principal contribution of this work is the application of Hilbert bases (as one realization of Theorem 1) to the resolution of singularities by means of toric varieties, with a background in lattice polytopes. In the second part we apply this theorem to problems in statistical learning, principally in the recent area of singular learning theory. We define singular machines and the problem of \textbf{singular learning} through the computation of learning curves for these statistical machines. We review and compile results from the work of S. Watanabe on \textbf{singular learning theory}, refs. \cite{SWatanabe12001}, \cite{SWatanabe42001}, \cite{SWatanabe52001}; we formalize this theory with toric resolution morphisms in a theorem proved here (Theorem 6), characterizing these singular machines as toric varieties, and we reproduce results previously published in singular statistical learning in \cite{SWatanabe32001}, \cite{SWatanabe42001}, \cite{SWatanabe72001}.
\end{abstract}
\maketitle
\section{Preliminaries.}
The paper is organized as follows. In the first part, we review a few concepts of convex geometry, the Gordan lemma and the separation lemma, as important preliminary results for subsequent developments such as Hilbert bases. In the second section we review the standard theory of toric algebraic geometry, \cite{BSturmfels1996}, \cite{DCoxLittleSchenck2010}, \cite{GEwald1993}, and make use of the definition of a toric variety as an affine algebraic scheme, a definition that permits the formalizations we give for singular machines and S-systems. In the third section, we state a proof of the Hilbert basis lemma and we compute toric ideals. The value of this result, together with computations in the Singular program \cite{DGPS}, enables us to compute toric ideals as the basis for applications in statistical learning. Furthermore, we define toric morphisms and gluing maps, which are of great importance for the proof of Theorem 1. These applications give evidence of the relevance of Theorem 1 and its potential benefit in facilitating solutions of problems in engineering.\\
We also give a formal definition of singularity, following Ewald \cite{GEwald1993}, and state two theorems on toric resolution; one of them is the theorem of Atiyah-Hironaka (S. Watanabe, \cite{SWatanabe12001}), which is applied to the resolution of singularities by S. Watanabe, \cite{SWatanabe12001}, \cite{SWatanabe32001}, \cite{SWatanabe62001}. This fact is our motivation to study toric varieties in singular machines and the embedding of their associated parameter spaces into projective spaces, as Theorem 3 proves.\\
In the fourth section, we study and summarize the main concepts of singular statistical learning (identifiable and non-identifiable machines, the Kullback distance, the Fisher information matrix, learning curves and singular machines), with the purpose of making a formal study of singular machines by means of toric resolutions and affine toric varieties; there we state and prove part of Theorem 6 applying the results of the first part. We also discuss the effect of singularities in statistical learning and their importance for the performance and training of singular machines, \cite{SWatanabe32001}, which is resolved and studied by means of Theorem 6. We conclude this section with applications to three different statistical machines (a two-layer perceptron, a mixture of binomial distributions, and a three-layer perceptron) and compute their learning curves by means of Hilbert bases, reproducing the results of S. Watanabe, \cite{SWatanabe52001}, \cite{SWatanabeYamazaki2002}, \cite{KYamazakiWatanabe2004}.
\section{Background of Convex Combinatorial Geometry.}
The definitions and concepts compiled in this section can be consulted in G. Ewald, \cite{GEwald1993}.
A non-empty set $ S \, \subset \, \mathbb{R}^{n} $ is a convex set if every convex combination of elements of $S$ belongs to $S$; that is, if $\alpha_i \in S$ and $ \lambda_{i} \geq \, 0 $ for all $i =1,\dots, r$, with $ \displaystyle \sum_{i=1}^{r} \lambda_{i}=1 $, then $\displaystyle \alpha= \sum_{i=1}^{r} \lambda_{i} \alpha_{i} \in S$. \\
Given $M \subset \mathbb{R}^{n} $, by \textbf{conv}$ M $ we mean the \textbf{convex hull} of $M$, which is the set of all convex combinations of elements of $M$. Moreover, if $ M $ is a finite set then \textbf{conv}$ M $ is called a \textbf{convex polytope} or \textbf{polytope}. \\
A \textbf{lattice} $ N $ is a free abelian group of finite rank, and if its rank is $ n \in\mathbb N $, then $N$ is isomorphic to $ \mathbb{Z}^{n} $.\\
Let $ M $ and $ N $ be two lattices, both of rank $n$. Consider $\: \langle \cdot, \cdot \rangle : M \times N \, \longrightarrow \, \mathbb{Z} $, the usual pairing of lattices coming from the inner product in $\mathbb R^n$, and identify $ N $ with $ Hom_{\mathbb{Z}}(M,\mathbb{Z}) $; then we say that $ N $ is the dual lattice of the lattice $ M $, and reciprocally. In either case one denotes $ N=M^{\vee} $; for more details of this formalism see \cite{DCoxLittleSchenck2010}.\\
Given $ M $ and $ N $ as dual lattices, denote $ M_{\mathbb{R}} = M\otimes_{\mathbb{Z}}\mathbb{R}$ and $ N_{\mathbb{R}} = N\otimes_{\mathbb{Z}}\mathbb{R} $, and set $ \sigma =Con( S ) \subseteq \, M_{\mathbb{R}} $ for some finite set $ S \, \subseteq \, M $; then $ \sigma $ is called a \textbf{rational polyhedral cone} or \textbf{lattice cone}, \cite{DCoxLittleSchenck2010}.\\
Also a \textbf{lattice cone} is a cone $ \sigma= Con(\alpha_{1},...,\alpha_{r}) \subset \, \mathbb{R}^{n} $ generated by vectors $\alpha_{1},...,\alpha_{r} \, \in \mathbb{Z}^{n}$. If the coordinates of each $\alpha_i$, $i=1,\dots,r$, are relatively prime, then $\alpha_{1},...,\alpha_{r} $ are called \textbf{primitive vectors}, and the cone $ \sigma $ is called a \textbf{regular cone}. It is well known that if $ \alpha_{1},...,\alpha_{r} $ are primitive, then there exist $ \alpha_{r+1},...,\alpha_{n} \, \in \mathbb{Z}^{n} $ such that:
\begin{center}
\textbf{Det}$ (\alpha_{1},...,\alpha_{n})= \pm 1 $.
\end{center}
Also, if the $ \alpha_{1},...,\alpha_{n} \, \in \mathbb{Z}^{n} $ are linearly independent, then the cone $ \sigma $ is a \textbf{simplex cone or simplicial cone.}\\
A \textbf{face} $ \tau $ of a cone $ \sigma $ is a set $ H_{p}\cap \sigma $, where $ H_{p} \subset \, \mathbb{R}^{n} $ is a supporting hyperplane of $\sigma$ at $ p \in \sigma $; this is usually denoted by $ \tau \, \preceq \, \sigma $, and it is well known that $ \preceq $ is an order relation.\\
The relative interior of $ \sigma $ is $Relint(\sigma)= \sigma^{\vee} \setminus \sigma^{\perp} $, where
\begin{center}
$\sigma^{\vee} = \lbrace m \in \mathbb{R}^{n}: \langle m,u \rangle \geq 0, \, \forall u \, \in \sigma \rbrace $,\\
$ \sigma^{\perp}= \lbrace m \in \mathbb{R}^{n}: \langle m,u \rangle =0, \, \forall u \, \in \sigma \rbrace$.
\end{center}
Let $ P \, \subseteq \, M_{\mathbb{R}} $ be a lattice polytope. A set of cones $ \sum_{F} = \lbrace \sigma_{F} \, \vert \, F \, \preceq \, P \rbrace$, is called a \textbf{Fan} if and only if:
\begin{itemize}
\item If $\tau \preceq \sigma_{F} $ for some $\sigma_{F} \, \in \, \sum_{F} $, then $ \tau \in \sum_{F} $.\\
\item If $ \sigma_{1}, \sigma_{2} \, \in \, \sum_{F}$ and $\tau = \sigma_{1}\cap \sigma_{2} $, then $ \tau \, \preceq \, \sigma_{1} $ and $ \tau \, \preceq \, \sigma_{2} $.
\end{itemize}
Recalling the following facts:
\begin{enumerate}
\item Separation Lemma: If $ \sigma_{1} $ and $ \sigma_{2} $ are lattice cones in $ M $, whose intersection $ \tau =\sigma_{1} \cap \sigma_{2} $ is a face of both, then there exists a hyperplane $ H_{m} $ such that:
\begin{center}
$ \tau =\sigma_{1} \cap H_{m}=\sigma_{2} \cap H_{m}$,
\end{center}
for any $ m \, \in \, Relint(\sigma)= \sigma_{1}^{\vee} \cap (-\sigma_{2})^{\vee}$.\\
\item Lemma. Set $\tau \, \preceq \, \sigma$ and $ m \,\in Relint(\tau^{\perp} \cap \sigma^{\vee} ) \setminus \lbrace 0 \rbrace $. Then
\begin{center}
$ \tau^{\perp}= \sigma^{\vee} \oplus\{ \lambda (-m)\ \mid \ \lambda \, \in \, \mathbb{R} \}$.
\end{center}
\item Lemma. Let $ \sigma \, \subset \, \mathbb{R}^{n} $ be a lattice cone, then $ \sigma \cap \mathbb{Z}^{n} $ is a \textbf{monoid}.
\item Gordan lemma. Let $ \sigma \, \subset \, \mathbb{R}^{n} $ be a lattice cone, then the monoid $ \sigma \cap \mathbb{Z}^{n} $ is finitely generated.
\item Theorem. Let $ \sigma \subset \, \mathbb{R}^{n} $ be an $ n $-dimensional lattice cone with apex $ 0 $, i.e., $0 \preceq \sigma $, and let $ b_{1},...,b_{r} $ be the inner facet normals of $ \sigma $. Then
\begin{center}
$ \sigma^{\vee}= Con(b_{1},...,b_{r})$.
\end{center}
\end{enumerate}
\subsection{About toric algebraic geometry.}
The affine variety $(\mathbb{C}^{*})^{n} =(\mathbb{C}\setminus \lbrace 0 \rbrace)^{n} $ is a group under coordinatewise complex multiplication, and it is called the \textbf{complex algebraic n-torus}. A \textbf{torus T} is an affine variety isomorphic to $(\mathbb{C}^{*})^{n}$.\\
A \textbf{character} of a torus \textbf{T} is a homomorphism of groups, $ \chi : T\longrightarrow \mathbb{C}^{*} $. For example, set $ m=(a_{1},...,a_{n}) \, \in \mathbb{Z}^{n} $, then $ \chi^m :(\mathbb{C}^{*})^{n} \longrightarrow \mathbb{C}^{*} $ given by:
\begin{center}
$ \chi^{m}(t_{1},...,t_{n})=t_{1}^{a_{1}}*...*t_{n}^{a_{n}} $,
\end{center}
is a \textbf{character} of $(\mathbb{C}^{*})^{n} $. Moreover, it is well known that any character of $(\mathbb{C}^{*})^{n}$ is given in this form.
Note that given a lattice $ M $ and $m\in M$, then it is possible to define a character of $T$ by $ \chi^{m}:T \longrightarrow \mathbb{C}^{*} $.\\
By a \textbf{uni-parametric subgroup} of a torus \textbf{T} we mean a homomorphism of groups $ \lambda : \mathbb{C} ^{*} \longrightarrow T$. Given $ u =(b_{1},...,b_{n}) \, \in \mathbb{Z}^{n}$, define $ \lambda^{u}: \mathbb{C} ^{*} \longrightarrow (\mathbb{C}^{*})^{n} $ by:
\begin{center}
$ \lambda^{u}(t)=(t^{b_{1}}, \dots, t^{b_{n}})$.
\end{center}
Then $\lambda^{u}$
is a uni-parametric subgroup of $(\mathbb{C}^{*})^{n}$, and any uni-parametric subgroup of $(\mathbb{C}^{*})^{n}$ is given in the same form.
One sees that, given a torus \textbf{T}, all the uni-parametric subgroups of \textbf{T} form a free abelian group $ N $ of rank equal to the dimension of \textbf{T}. The same fact holds for the characters of \textbf{T}.
The ring:
\begin{center}
$ \mathbb{C}[t,t^{-1}]=\mathbb{C}[t_{1},...,t_{n},t_{1}^{-1},...,t_{n}^{-1}] $
\end{center}
is called the \textbf{ ring of Laurent polynomials} and the monomials,
\begin{center}
$ \lambda \, t^{a}= \lambda \, t_{1}^{a_{1}}\cdots t_{n}^{a_{n}}$ with $ a=(a_{1},...,a_{n}) \, \in \mathbb{Z}^{n}, \, \lambda \, \in \, \mathbb{C^{*}}$.
\end{center}
are called \textbf{Laurent monomials}.\\
The \textbf{support} of a Laurent polynomial $ f=\sum_{i=1}^{r}\lambda_{i}t^{a_{i}} $, is
\begin{center}
\textbf{supp}(f)$ =\lbrace a_i \, \in \, \mathbb{Z}^{n}: \, \lambda_{i} \neq 0 \rbrace$.
\end{center}
It is known that, given $\mathbb{C}[t,t^{-1}] $ as above and a lattice cone $ \sigma $, the set
\begin{center}
$ R_{\sigma}=\lbrace f \, \in \, \mathbb{C}[t,t^{-1}] :$ \textbf{supp} $ (f) \subset \, \sigma \rbrace$
\end{center}
is a finitely generated monomial $ \mathbb{C}$-algebra.\\
\begin{definition}\label{Definition.} An \textbf{affine toric variety} is an irreducible affine variety $ X $ containing a torus $ T_{N}\simeq (\mathbb{C^{*}})^{n} $ as Zariski open subset, such that the action of $ T_{N} $ on itself, is extended to an algebraic action of $ T_{N} $ on $ X $; that is, there exists a morphism from $ T_{N} \times X$ to $ X $, \cite{DCoxLittleSchenck2010} .\\
Let $\sigma$ be a lattice cone, the
\textbf{affine algebraic scheme:}
\begin{center}
$ X_{\sigma}=$ \textbf{Spec} $ (R_{\sigma}) $.
\end{center}
is called \textbf{abstract toric affine variety} or \textbf{embedding of torus.}\\
\end{definition}
For example, set $ 0\leq \, r \, \leq n $, and let $ \sigma \subset \, \mathbb{R}^{n}$ be the lattice cone $ \sigma= Con(e_{1},...,e_{r}) $, where $ e_{i} $ are the canonical vectors in $ \mathbb{R}^{n} $ for $ i=1,...,r $. Then, computing its dual cone, one has $ \sigma^{\vee}= Con(e_{1},...,e_{r}, \pm e_{r+1},...,\pm e_{n}) $, and the affine toric variety is
\begin{center}
$ X_{\sigma}=$ \textbf{Spec} $ \mathbb{C}[t_{1},...,t_{r},t_{r+1}^{\pm 1},...,t_{n}^{\pm 1}] \simeq \mathbb{C}^{r} \times (\mathbb{C}^{*})^{n-r}$.
\end{center}
This example can be seen in \cite{DCoxLittleSchenck2010}.
\section{Hilbert Basis.}
The theory of \textbf{Hilbert bases} is an important tool of algebraic geometry. The major contribution of this work
is the employment of the \textbf{Hilbert basis} associated to a monoid $ N \subset \mathbb{Z}^{n} $ to give explicitly a toric resolution in a set of new coordinates, solving a problem in singular statistical learning in an original way; see also \cite{BSturmfels1996}.
\\
Let $ N \simeq \mathbb{Z}^{n}$ be a lattice and set $ M= N^{\vee} $, its dual lattice. Let $ \sigma $ be a lattice cone in $ N $ and let $ \sigma^{\vee} $ be its dual cone in $ M $. Denote $ S_{\sigma} = \sigma^{\vee} \cap M$ and note that this monoid is finitely generated (see the Gordan lemma).
\begin{lemma}\label{Lemma 2.1.1.} (Hilbert basis). Set $ \sigma \subseteq \, N $; then $\sigma$ is an $n$-dimensional cone if and only if it is a strongly convex cone, i.e., $ \sigma\cap (-\sigma^{\vee}) = \lbrace 0 \rbrace$. In this case the monoid $ S_{\sigma} $ has a finite minimal set of generators \textbf{H} $ \subseteq \, M \simeq \mathbb{Z}^{d}$; for details of the proof, see \cite{DCoxLittleOshea2004}, \cite{DCoxLittleSchenck2010}.\end{lemma}
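As a small computational illustration of this lemma (an added sketch for a toy two-dimensional cone; the brute-force search box is an assumption that suffices for this example, and for serious computations one would use a dedicated system such as Singular \cite{DGPS}):
\begin{verbatim}
import itertools
import numpy as np

# Toy cone sigma = Con((1,0), (1,2)) in R^2; columns of `gens` are its generators.
gens = np.array([[1, 1],
                 [0, 2]])

def in_dual(m):
    # m lies in sigma^vee  iff  <m, u> >= 0 for every generator u of sigma.
    return np.all(gens.T @ np.asarray(m) >= 0)

# Nonzero monoid points of sigma^vee inside a small box (enough for this example).
B = 6
points = [p for p in itertools.product(range(-B, B + 1), repeat=2)
          if any(p) and in_dual(p)]
point_set = set(points)

# Hilbert basis: monoid elements that are NOT a sum of two nonzero monoid elements.
def irreducible(m):
    return not any(tuple(np.subtract(m, p)) in point_set
                   for p in points if p != m)

print(sorted(p for p in points if irreducible(p)))
# [(0, 1), (1, 0), (2, -1)] : the minimal generators of sigma^vee intersect Z^2
\end{verbatim}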
\begin{definition}\label{Definition 2.1.1} Set $ \omega = (\omega_{1},...,\omega_{n}) \, \in \mathbb{R}^{n} $, and for a polynomial $ f = \sum_{i=1}^m \lambda_{i}t^{a_{i}} $ define its \textbf{initial form} $ in_{\omega} (f)$ as the sum of those terms $\lambda_{i}t^{a_{i}}$ for which the inner product $ \langle \, \omega, a_{i} \, \rangle $ is maximal. \\
For an ideal $ I $, we mean the \textbf{initial ideal} as the ideal generated from the initial forms
\begin{center}
$ in_{\omega} (I) = \langle \, in_{\omega}(f) : f \, \in I \, \rangle$.
\end{center}
\end{definition}
\begin{definition}\label{Definition 2.1.2.} Each polynomial $ f = \sum_{i =1}^{m}\lambda_{i}t^{a_{i}} $
in the ring $ \mathbb{C}[t, t^{-1}] $ is associated to a convex polytope (a convex hull) in $ \mathbb{R}^{n} $ as follows:
\begin{center}
$ New(f)= $ \textbf{Conv} $ \lbrace a_{i} : i=1,...,m \rbrace \, \subset \mathbb{R}^{n} $.
\end{center}
$ New(f)$ is called the \textbf{Newton polytope} associated to $ f $; in the literature on singular learning machines it is known as the \textbf{exponent space} generated by the Newton polytope; see ref. \cite{BSturmfels1996}.\\
\end{definition}
\begin{lemma}\label{Lemma 2.1.2.} Given $f,g$ two polynomials, then $ New(f*g)= New(f) + New(g) $, where $*$ is the usual product of polynomials and the sum is the \textbf{Minkowski sum} defined for polytopes, see ref., \cite{BSturmfels1996}.
\end{lemma}
\begin{proposition}\label{Proposition 2.0.0.} Let $ I $ be an ideal of the affine toric variety $ X_{\sigma} \, \subseteq \, \mathbb{C}^{n}$. Then define
\begin{center}
$ I(X_{\sigma})= \langle \, t^{l_{+}} - t^{l_{-}} \vert l \, \in L \, \rangle =\langle \, t^{\alpha} - t^{\beta} \vert \alpha - \beta \, \in L, \, \alpha, \, \beta \in \, \mathbb{Z}_{+}^{n} \, \rangle $,
\end{center}
where $ L $ is the kernel of the morphism $ \mathbb{Z}^{n} \longrightarrow M $ in the sequence $ 0 \longrightarrow L \longrightarrow \mathbb{Z}^{n} \longrightarrow M $, and $ M $ is a monoid such that $ M \simeq \mathbb{Z}^{d} $. The elements $ l \, \in L $ satisfy $ \sum_{i=1}^{n} l_{i}m_{i}=0 $.\end{proposition}
\begin{definition}\label{Definition 2.1.3.} Let $ L \, \subseteq \, \mathbb{Z}^{n} $ be a sub-lattice.\\
\textbf{(a).} The ideal $ I_{L} = \langle \, t^{\alpha} - t^{\beta} \vert \alpha - \beta \, \in L, \, \alpha, \, \beta \in \, \mathbb{Z}_{+}^{n} \, \rangle$, is called a \textbf{lattice ideal}.\\
\textbf{(b).} A prime lattice ideal is called a \textbf{toric ideal.}\\\end{definition}
\begin{proposition}\label{Proposition 2.1.0.} An ideal $ I \, \subseteq \, \mathbb{C}[t_{1},...,t_{n}] $ is toric if and only if it is prime and it is generated by binomials.
\end{proposition} For the details of the proof, see \cite{DCoxLittleSchenck2010}.
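A quick symbolic sketch of this fact (an added toy example, the twisted cubic, which is not discussed in the text; it only verifies that lattice vectors in the kernel give binomials vanishing under the monomial parametrization):
\begin{verbatim}
import sympy as sp

# Twisted cubic t -> (t, t^2, t^3): kernel of Z^3 -> Z, l1 + 2*l2 + 3*l3 = 0.
t, x1, x2, x3 = sp.symbols('t x1 x2 x3')
param = {x1: t, x2: t**2, x3: t**3}
for l in [(2, -1, 0), (1, 1, -1), (3, 0, -1)]:
    lp = [max(li, 0) for li in l]          # l_+
    lm = [max(-li, 0) for li in l]         # l_-
    binom = x1**lp[0] * x2**lp[1] * x3**lp[2] - x1**lm[0] * x2**lm[1] * x3**lm[2]
    print(binom, '->', sp.expand(binom.subs(param)))   # each maps to 0
\end{verbatim}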
\subsection{Toric Morphisms and Gluing Maps.}
\begin{definition}\label{Definition 2.3.0.} Let $ \Phi :\mathbb{C}^{k} \longrightarrow \Phi(\mathbb{C}^{k}) $ be a monomial map, i.e., each nonzero component of $ \Phi $ is a monomial in the coordinates of $ \mathbb{C}^{k} $, and let $ X_{\sigma} \hookrightarrow \mathbb{C}^{k}$ and $ X_{\sigma'} \hookrightarrow \mathbb{C}^{m}$ be inclusions of affine toric varieties. If $ \Phi (X_{\sigma}) \, \subset \, X_{\sigma'} $, then $ \varphi := \Phi \vert _{X_{\sigma}}$ is called an \textbf{affine toric morphism} from $ X_{\sigma} $ to $ X_{\sigma'} $. If $ \varphi $ is bijective and its inverse map $ \varphi^{-1} :X_{\sigma'} \longrightarrow X_{\sigma} $ is also a toric morphism, then $ \varphi $ is called an \textbf{affine toric isomorphism} and one writes $ X_{\sigma} \simeq X_{\sigma'} $, \cite{GEwald1993}.
\end{definition}
\begin{proposition}\label{Proposition 2.3.0.} Every toric morphism $ \varphi : X_{\sigma} \longrightarrow X_{\sigma'}$ determines a monomial homomorphism $ \varphi^{*} : R_{\sigma'} \longrightarrow R_{\sigma} $ and reciprocally, \cite{GEwald1993}.\end{proposition}
\begin{definition}\label{Definition 2.3.1.} For two lattice cones $ \sigma \, \subset \mathbb{R}^{n}= lin(\sigma) $ and $ \sigma' \, \subset \mathbb{R}^{m}= lin(\sigma') $, we say that $ \sigma $ and $ \sigma' $ are isomorphic, and write $ \sigma \, \simeq \, \sigma'$, if $ m= n $ and there exists a unimodular transformation $ L : \mathbb{R}^{n} \longrightarrow \mathbb{R}^{n} $ such that $ L(\sigma')=\sigma $. In this case the monoids $ \sigma \cap \mathbb{Z}^{n} $ and $ \sigma' \cap \mathbb{Z}^{n} $ are also isomorphic.\end{definition}
\begin{definition}\label{Definition 2.3.2.} Two $ \mathbb{C}$-algebras $ R_{\sigma} $ and $ R_{\sigma'} $ are \textbf{monomially isomorphic}, written $ R_{\sigma} \simeq R_{\sigma'} $, if there exists an invertible monomial homomorphism $ R_{\sigma} \longleftrightarrow R_{\sigma'} $.\end{definition}
\begin{theorem}\label{Theorem 2.3.0.} Set $ \sigma \, \subset \mathbb{R}^{n}= lin(\sigma) $ and $ \sigma' \, \subset \mathbb{R}^{m}= lin(\sigma') $, then the following conditions are equivalent, \cite{GEwald1993}:
\begin{center}
\textbf{(a).} $ \sigma \, \simeq \, \sigma' $ {} {} {} \textbf{(b).} $ R_{\sigma} \simeq R_{\sigma'} $ {} {} {} \textbf{(c).} $ X_{\sigma} \simeq X_{\sigma'} $.
\end{center}
\end{theorem}
\begin{proof} The implications a) $ \Rightarrow $ b) $ \Rightarrow $ c) are proven by means of the following diagram, which we show to be commutative.
\begin{center}
$ \qquad \qquad \sigma \quad \longrightarrow \quad R_{\sigma} \quad \hookrightarrow\quad X_{\sigma}=$\textbf{Spec}$(R_{\sigma})$\\
$ \qquad \qquad \downarrow \uparrow L^{-1} \qquad $ $ \downarrow \uparrow \psi^{-1} \qquad \quad \quad\quad\quad $ $\downarrow \uparrow \varphi^{-1} $\\
$ \qquad \qquad \sigma^{'} \quad \longrightarrow \quad R_{\sigma^{'}} \quad \hookrightarrow\quad X_{\sigma^{'}}=$\textbf{Spec}$(R_{\sigma^{'}})$
\end{center}
We define the monomial homomorphisms $ h_{\sigma}:\sigma \longrightarrow R_{\sigma}$, $ h_{\sigma^{'}}:\sigma^{'} \longrightarrow R_{\sigma^{'}} $, $ j_{\sigma}:R_{\sigma} \longrightarrow X_{\sigma} $, $ j_{\sigma^{'}}:R_{\sigma'} \longrightarrow X_{\sigma'} $. By hypothesis $ \sigma \backsimeq \sigma^{'} $, so there exists a uni-modular transformation $ L $ such that $ L(\sigma)=\sigma^{'} $, and its inverse transformation $ L^{-1}(\sigma^{'})=\sigma $ is well defined. Then one defines the following monomial homomorphisms:
\begin{center}
$ h_{\sigma}(a)= \sum \lambda_{a}t^{a} \, \in \, R_{\sigma}$ and $ a\in \, supp(h_{\sigma}) \subset \sigma $,\\
$ h_{\sigma^{'}}(a')= \sum \lambda_{a'}t^{a'} \, \in \, R_{\sigma^{'}} $ and $a' \in \, supp(h_{\sigma^{'}}) \subset \sigma^{'} $,\\
$ \psi (h_{\sigma}(a))=\sum \lambda_{a'}t^{L(a)} \, \in \, R_{\sigma^{'}}$ and $ L(a)=a' \in \, supp(\psi) \subset \sigma^{'} $,\\
$ \psi^{-1} (h_{\sigma^{'}}(a'))=\sum \lambda_{a}t^{L^{-1}(a')} \, \in \, R_{\sigma}$ and $ L^{-1}(a')=a \in \, supp(\psi^{-1}) \subset \sigma $.\\
\end{center}
Choosing the prime generators $ t^{a} \, \in \, R_{\sigma}, \, a \,\in\, \sigma$ and $ t^{a'} \, \in \, R_{\sigma^{'}}, \, a' \,\in\, \sigma^{'} $, define:
\begin{center}
$ j_{\sigma}(t^{a})=\langle t^{a} \rangle \, \in \, X_{\sigma}=$\textbf{Spec}$(R_{\sigma})$,\\
$ j_{\sigma^{'}}(t^{a'})=\langle t^{a'} \rangle \, \in \, X_{\sigma^{'}}=$\textbf{Spec}$(R_{\sigma^{'}})$,\\
$ \varphi(\langle t^{a} \rangle)=\langle t^{L(a)} \rangle \, \in \, X_{\sigma^{'}}=$\textbf{Spec}$(R_{\sigma^{'}})$,\\
$ \varphi^{-1}(\langle t^{'a} \rangle)=\langle t^{L^{-1}(a')} \rangle \, \in \, X_{\sigma}=$\textbf{Spec}$(R_{\sigma})$.
\end{center}
where $ \langle t^{L(a)} \rangle $ and $ \langle t^{L^{-1}(a')} \rangle $ are prime ideals like a realization, respectively, of the spectrum \textbf{Spec} of the coordinate rings $ R_{\sigma} $ and $ R_{\sigma^{'}} $.
One easily sees that these monomial homomorphisms satisfy the following identities (taking, without loss of generality, $ \lambda_{a}=\lambda_{a'}=1 $): $ L\circ L^{-1}=id_{\sigma} $, $ L^{-1}\circ L=id_{\sigma^{'}} $, $ \psi \circ \psi^{-1}=id_{R_{\sigma^{'}}} $, $ \psi^{-1} \circ \psi=id_{R_{\sigma}} $, $ \varphi^{-1}\circ \varphi = id_{X_{\sigma}} $, $\varphi \circ \varphi^{-1} = id_{X_{\sigma^{'}}} $.
The maps $ L $, $ \psi $, $ \varphi $ are, respectively, an isomorphism of cones, an isomorphism of coordinate-ring algebras, and an isomorphism of toric varieties (a toric morphism); the first two are well defined, since one is a uni-modular transformation and the other is an isomorphism of algebras. It only remains to prove that $ \varphi $ is a toric morphism. Define the monomial homomorphism $ \Phi:\mathbb{C}^{n} \longrightarrow \mathbb{C}^{n} $ by $ \Phi(\langle t^{a} \rangle)=\langle t^{a'} \rangle $, so that $ \Phi(X_{\sigma}) \subset X_{\sigma^{'}} $. This homomorphism induces the morphism $ \varphi $, which is bijective: for the generator $ t^{0}=1_{X_{\sigma}} $, corresponding to the lattice vector $ a=0\, \in \, \sigma $, one has $ \varphi(t^{0})=t^{L(0)}=1_{X_{\sigma^{'}}} $, so $ \varphi $ is injective; and given a generator $ t^{a'}\, \in \, X_{\sigma^{'}} $, since $ L(a)=a' $ there exists $ t^{a} \, \in \, X_{\sigma} $ with $ \varphi(t^{a})=t^{L(a)}=t^{a'} $, so $ \varphi $ is surjective. Note that $ \varphi=\Phi\mid _{X_{\sigma}} $; in the same way one sees that $ \varphi^{-1} $ is a toric morphism too. Therefore, $ \varphi $ is a toric isomorphism. On the other hand, $ \psi(h_{\sigma}(a))=h_{\sigma^{'}}(L(a)) $ and $ \varphi(j_{\sigma}(t^{a}))=j_{\sigma^{'}}(t^{a'}) $, which proves that the diagram commutes and yields the desired isomorphisms. For details of the implication c) $ \Rightarrow $ a), see \cite{GEwald1993}. \textbf{q.e.d.} \end{proof}
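In practice, one direction of this equivalence can be checked numerically for simplicial full-dimensional cones: a candidate lattice map sending chosen generators of $\sigma'$ to those of $\sigma$ must be unimodular. The following added sketch fixes one particular matching of generators (in general, the finitely many matchings would all have to be tried):
\begin{verbatim}
import numpy as np

# Columns are the generators of two simplicial cones sigma and sigma'.
A_sigma  = np.array([[1, 0],
                     [0, 1]])        # sigma  = Con(e1, e2)
A_sigmap = np.array([[1, 1],
                     [0, -1]])       # sigma' = Con(e1, e1 - e2)   (toy choice)

# Candidate L with L(sigma') = sigma, matching generator i to generator i:
# L @ A_sigmap = A_sigma  =>  L = A_sigma @ inv(A_sigmap).
L = A_sigma @ np.linalg.inv(A_sigmap)
L_int = np.round(L)

is_integer    = np.allclose(L, L_int)
is_unimodular = np.isclose(abs(np.linalg.det(L_int)), 1.0)
print(L_int.astype(int), is_integer, is_unimodular)   # unimodular => sigma ~ sigma'
\end{verbatim}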
\begin{definition}\label{Definition 2.3.3.} Recall that the \textbf{complex projective n-space $ \mathbb{C}P^{n} $} is the space of equivalence classes $ \mathbb{C}P^{n}=(\mathbb{C}^{n+1}\setminus \lbrace 0 \rbrace) / \sim $, whose points correspond to lines through the origin. The relation $ \sim $ is the following: any vector $ v:= (\eta_{0},...,\eta_{n}) $ defines a line $ \mathbb{C}\cdot v $, and two such vectors $ v, v' \in \mathbb{C}^{n+1}\setminus \lbrace 0 \rbrace $ define the same line, i.e., $ v \sim v' $, if and only if one is a scalar multiple of the other.\end{definition}
In the next example we point out the important relationship between Hilbert bases and Theorem \ref{Theorem 2.3.0.}; this connection is of great importance for the applications in the following sections. The example of the Hirzebruch surface can be consulted in Ewald, \cite{GEwald1993}.
\begin{example}\label{Example}. By $ H_{k} $ we mean the \textbf{Hirzebruch surface}. We consider a hyper surface in $ \mathbb{C}P^{1}\times \mathbb{C}P^{2}=\lbrace ([\eta_{0},\eta_{1}],[\zeta_{0},\zeta_{1},\zeta_{2}]) : (\eta_{0},\eta_{1}) \neq (0,0),(\zeta_{0},\zeta_{1},\zeta_{2})\neq (0,0,0) \rbrace$ determined by the equation, see example given in \cite{GEwald1993},\\
\begin{center}
$ \eta^{k}_{0}\zeta_{0}=\eta^{k}_{1}\zeta_{1},\quad k \in \mathbb{Z}. $
\end{center}
Applying Theorem \ref{Theorem 2.3.0.}, as in the previous examples, one obtains isomorphic coordinate rings for each one of the affine charts associated to this surface. Determining the Newton polytopes of the fan $ \Sigma $, as well as its dual cones, it follows that there are four planes which are affine charts, and their gluing depends on $ k $; thus,\\
\begin{center}
$ R_{\sigma^{\vee}_{0}}= \mathbb{C}[z^{e_{1}},z^{e_{2}}] = \mathbb{C}[z_{1},z_{2}]$,\\
$ R_{\sigma^{\vee}_{1}}= \mathbb{C}[z^{-e_{1}},z^{e_{1}+ke_{2}}]=\mathbb{C}[z^{-1}_{2},z_{1}z^{k}_{2}]$,\\
$ R_{\sigma^{\vee}_{2}} = \mathbb{C}[z^{-e_{1}},z^{e_{2}}]=\mathbb{C}[z^{-1}_{1},z_{2}]$,\\
$ R_{\sigma^{\vee}_{3}} =\mathbb{C}[z^{-e_{1}-ke_{2}},z^{-e_{2}}]=\mathbb{C}[z^{-1}_{1}z^{-k}_{2},z^{-1}_{2}] $;\\
which implies the following toric varieties:\\
$X_{\sigma^{\vee}_{0}}=$\textbf{Spec}$ (\mathbb{C}[z_{1},z_{2}])$ ;\\
$X_{\sigma^{\vee}_{1}}=$\textbf{Spec}$ (\mathbb{C}[z^{-1}_{2},z_{1}z^{k}_{2}])$ ;\\
$X_{\sigma^{\vee}_{2}}=$\textbf{Spec}$ (\mathbb{C}[z^{-1}_{1},z_{2}]) $;\\
$X_{\sigma^{\vee}_{3}}=$\textbf{Spec}$ (\mathbb{C}[z^{-1}_{1}z^{-k}_{2},z^{-1}_{2}]) $.
\end{center}
Here the cones $\sigma^{\vee}_{0}=Con(e_{1},e_{2}) $, $ \sigma^{\vee}_{1}=Con(-e_{1},e_{1}+ke_{2}) $, $ \sigma^{\vee}_{2}= Con(-e_{1},e_{2}) $, $ \sigma^{\vee}_{3}=Con(-e_{1}-ke_{2},-e_{2} )$ forming the fan $ \Sigma^{\vee} $ give the Hilbert bases $ H_{\Sigma} $ associated to the fan $ \Sigma=\lbrace \sigma_{0}, \sigma_{1}, \sigma_{2} , \sigma_{3} \rbrace $, with $ \sigma_{0}= $Con$ (e_{1}, e_{2}) $, $ \sigma_{1}= $Con$ (-e_{2}, -ke_{1} + e_{2}) $, $ \sigma_{2}= $Con$ (-e_{2}, e_{1}) $, $ \sigma_{3} =$Con$ (-ke_{1} + e_{2}, e_{1})$. This technique will be applied in the example of the three-layer perceptron.\\
Following this reparametrization of polynomials, one can see that it is convenient to work in complex projective spaces, in accordance with Theorem 1.\end{example}
\textbf{Gluing Maps.}\\
\begin{lemma}\label{Lemma 2.3.0.} Let $ \sigma $ be a lattice cone and set $ \tau \, \preceq \, \sigma $. There is a natural identification
\begin{center}
$ X_{\tau^{\vee}} \: \simeq \: X_{\sigma^{\vee}} \setminus \lbrace u_{k}= 0 \rbrace$.
\end{center}
where $ u_{k} $ is the last generator of the representation of the coordinate ring associated to $ X_{\sigma^{\vee}} $; for details of the proof see \cite{GEwald1993}.\end{lemma}
\begin{definition} \label{Definition 2.3.3.1.} The isomorphism,
\begin{center}
$ \psi_{\sigma, \sigma'} : \, X_{\sigma^{\vee}} \setminus \lbrace u_{k}= 0 \rbrace \longrightarrow X_{\sigma'^{\vee}} \setminus \lbrace v_{l}= 0 \rbrace $.
\end{center}
is called \textbf{gluing morphism}, which glues the varieties $ X_{\sigma^{\vee}} $ and $ X_{\sigma'^{\vee}} $ in the variety $ X_{\tau^{\vee}} $.\end{definition}
\subsection{Toric Resolution.}
\begin{definition}\label{Definition 2.4.0.} (Singularity) Let $ X_{\sum} $ be an $n$-dimensional toric variety and let $ \sum $ be a regular fan. A point $ p \, \in \, X_{\sum} $ is called \textbf{singular}, or a \textbf{singularity} of $ X_{\sum} $, if $p$ belongs to an affine chart $ X_{\sigma^{\vee}} $, $ \sigma \, \in \, \sum $, which is not of the form $ \mathbb{C}^{k} \times (\mathbb{C}^{*})^{n-k} $. For details, see ref. \cite{GEwald1993}.\\
\end{definition}
\begin{theorem}\label{Theorem of Hironaka-Atiyah} (Hironaka-Atiyah) Let $ f $ be a real analytic function in a neighborhood of $ \omega = (\omega_{1},....,\omega_{n})\in \mathbb{R}^{n} $ such that $ f(\omega)= 0 $. Then there exist an open set $ V \subset\mathbb R^{n}$ containing $\omega$, a real analytic variety $ U $ and a proper analytic map $ g :U \to V$ such that:\\
(a) $ g :U\setminus\epsilon \longrightarrow V\setminus f^{-1}(0) $ is an isomorphism, where $ \epsilon =g^{-1}(f^{-1}(0)) $,\\
(b) For each $ u \in U $, there exist local analytic coordinates $ (u_{1},....,u_{n}) $ such that $ f(g(u))=\pm u^{s_{1}}_{1}u^{s_{2}}_{2}\cdots u^{s_{n}}_{n} $, where $ s_{1},....,s_{n} $ are non-negative integers; see ref. \cite{SWatanabe52001}.
\end{theorem}
The previous theorem is a version of the well-known theorem of resolution of singula\-ri\-ties established by Hironaka in algebraic geometry, see, ref. \cite{HHironaka1964}, \cite{SWatanabe22001}.\\
\begin{theorem}\label{Theorem of toric modification} Let $ X_{\sum} $ be a regular toric variety, and let $ X_{\sum_{0}} $ be a toric invariant sub-variety defined by the star $ st(\sigma,\sum) \backsimeq \sum_{0} $ of $ \sigma $ in $ \sum $, with $ 1 < k:=\dim \sigma \leqslant n $.\\
(a) Under the toric blow up $ \psi^{-1}_{\sigma} $, any point $ x \in X_{\sum_{0}} $ is replaced by a \textbf{$(k-1)$-dimensional projective space}.\\
(b) The blow down $ \psi_{\sigma} $ is a toric morphism which is bijective outside of $ \psi^{-1}_{\sigma}(X_{\sum_{0}}) $.\\
See ref. for the proof of this fact, \cite{GEwald1993}.
\end{theorem}
\section{Singular Statistical Learning.}
In this section we will focus on statistical learning machines. Given a probability space $ (\Omega, \mathbb{F}, P) $, where $ \Omega $ is a set of events, $ \mathbb{F} $ is a $ \sigma$-algebra on $ \Omega $, and $ P $ is a Kolmogorov probability measure, one can compute the predictive probability $ P(y \vert x,\omega) $ of the output variable $ y \in \mathbb{R}^{N}$ given an input $ x \in \mathbb{R}^{M}$ and a parameter vector $ \omega \in \Theta \subseteq \mathbb{R}^{d}$. This probability factorizes according to the Bayes theorem and the theorem of Hammersley and Clifford; that is, $ P(X \vert Y, \Theta) $ satisfies the local Markov property and can be factorized through an undirected graph $ G =(E,V)$; it can also be represented by a toric variety. \\
\begin{definition}\label{Definition 4.1.7} (Identifiable and non-identifiable machines) Let $ (\Omega, \mathbb{F}, P) $ be a probability space and let $ P(y \vert x,\omega) $ be the probabilistic inference or prediction probability of a statistical machine, where $ y \in \mathbb{R}^{N} $ is an output vector and $ x \in \mathbb{R}^{M} $ is an input vector. If $ \omega \mapsto P(y \vert x,\omega) $ is an injective mapping, we say the machine is \textbf{identifiable}, see Watanabe \cite{SWatanabe42001}. If the mapping is not injective, then we say that the machine is a \textbf{non-identifiable machine}. \end{definition}
The probability densities of the learning machines are defined in the probability space $ (\Omega, \mathbb{F}, P) $ and are denoted as follows:
\begin{itemize}
\item \textbf{prediction probability of the output vector:} $P(y \vert x, \omega) $, $ y \in \, \mathbb{R}^{N}$,
\item \textbf{true inference of the machine} $ q(y\vert x) $,
\item $ q(y \vert x)q(x) $ is the \textbf{probability distribution from which the training examples of the inference machines are drawn independently}.
\end{itemize}
\begin{definition}\label{Definition 4.1.8} Let $ \omega_{0} \in \Theta \, \subset \,\Omega$ be a parameter such that $ P(y \vert x,\omega_{0})=q(y \vert x) $; this means that the predictive probability density of the output vector $ y \in \mathbb{R}^{N} $ at $\omega_{0}$ equals the true inference $ q(y \vert x) $ of the statistical machine. For non-identifiable machines this parameter is not unique. Moreover, the set of these parameters is called the \textbf{space of true parameters} and is denoted by \\
\begin{center}
$ W_{0}= \lbrace \omega_{0} \in \Theta \, \subset \, \Omega :P(y \vert x,\omega_{0})=q(y \vert x) \rbrace $.
\end{center}
It is well known that $W_{0}$ is a sub-variety which in general contains singular points. If these probability densities are analytic functions, then $W_0$ is called an \textbf{analytic set}; if they are polynomials, then $W_0$ is called an \textbf{algebraic set}. These sets are very important for our study.
\end{definition}
\textbf{Watanabe Theorems.}\\
\begin{theorem}\label{(1) Theorem, Watanabe,} (1) (Watanabe, \cite{SWatanabe52001}, \cite{SWatanabe2005}). Suppose that $ f $ is an analytic function and $ \varphi$ is a probability density function, both defined on $ \mathbb{R}^{d} $. Then there exists a real constant $ C $ such that
\begin{center}
$ G(n) \leq \lambda_{1} \log n - (m_{1} - 1)\log (\log n) + C$,
\end{center}
for any natural number $ n $. The rational number $ -\lambda_{1}$ $(\lambda_{1} > 0) $ is the largest pole, and the natural number $ m_{1} $ its multiplicity, of the meromorphic function which is the analytic continuation of
\begin{center}
$ J(\lambda)= \displaystyle \int_{f(\omega)< \epsilon} f(\omega)^{\lambda}\varphi'(\omega) d\omega, \quad (Re(\lambda)> 0), $
\end{center}
where $ \epsilon > 0 $ is a constant, and $ \varphi'(\omega) $ is a function of class $C^{\infty}_{0} $ satisfying $ 0 \leqslant \varphi'(\omega) \leqslant \varphi(\omega)$.\end{theorem}
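As an elementary added illustration of this construction (a one-parameter toy model, not one of the machines studied below): for $H(\omega)=\omega^{2}$ on $[0,1]$ with a uniform prior, $J(\lambda)=\int_{0}^{1}\omega^{2\lambda}d\omega=1/(2\lambda+1)$ for $Re(\lambda)>-1/2$, so the analytic continuation has a single pole at $\lambda=-1/2$ of multiplicity $1$, i.e. $\lambda_{1}=1/2=d/2$, as expected for a regular model. A numerical sanity check:
\begin{verbatim}
import numpy as np

# Riemann-sum check of J(lam) = 1/(2*lam + 1) for H(w) = w**2, uniform prior on [0, 1].
w = np.linspace(0.0, 1.0, 200001)[1:]       # drop w = 0 to avoid 0**0
for lam in (0.0, 0.5, 2.0):
    J_numeric = np.mean(w ** (2 * lam))
    print(lam, J_numeric, 1.0 / (2 * lam + 1))
\end{verbatim}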
\begin{definition}\label{Definition 4.3.1.} The poles of the function $ J $ belong to the intersection of the negative real semi-axis with the set $ \lbrace m+\nu;\ m=0,-1,-2,...,\ b(\nu)=0 \rbrace $. Denoting these poles in decreasing order by $ -\lambda_{1},-\lambda_{2},-\lambda_{3},...,-\lambda_{k},\ldots $, each $ \lambda_{k} $ is a rational number, and the multiplicity of $ -\lambda_{k} $ is denoted by $ m_{k} $.
\end{definition}
\textbf{ Condition (A)}. Let $ \psi(x,\omega) $ be a real valued function, where $ (x,\omega) \in \mathbb{R}^{M} \times \mathbb{R}^{d} $, such that:\\
(1) $ \psi(x,\cdot) $ is an analytic function on $ W=supp(\varphi) \subset \mathbb{R}^{d} $ which can be extended to a holomorphic function on some open set $ W^{*} $, where $ W\subset W^{*} \subset \mathbb{C}^{d} $, and $ W^{*} $ is independent of $ x \in supp(q) \subset \mathbb{R}^{M}$.\\
(2) $ \psi(\cdot,\omega) $ is a measurable function on $ \mathbb{R}^{M} $, which satisfies:
\begin{center}
$ \displaystyle \int sup_{\omega \in W^{*}} \Vert \psi(x,\omega) \Vert^{2}q(x) dx < \infty $,
\end{center}
where $ \Vert \bullet \Vert $ is the norm of the vector $ \psi(x,\omega) $.\\
\begin{theorem}\label{(2) Theorem, Watanabe} (2) (Watanabe, \cite{SWatanabe42001}, \cite{SWatanabe52001}). Fix a constant $ \sigma > 0 $ and let $ \varphi $ be a probability density of class $ C^{\infty}_{0} $. We consider the statistical learning machines characterized by the following predictive probability:
\begin{center}
$ P(y \vert x,\omega) =\dfrac{1}{(2\pi \sigma^{2})^{N/2}} \exp \left( \dfrac{-\Vert y - \psi(x,\omega)\Vert^{2}}{2\sigma^{2}}\right) $,
\end{center}
where both $ \psi(x,\omega) $ and $ \Vert \psi(x,\omega) \Vert^{2} $ satisfy condition (A). Then there exists a constant $ C' > 0 $ such that \begin{center}
$ \vert G(n) - \lambda_{1} \log n + (m_{1} - 1)\log \log n \vert \leqslant C'$,
\end{center}
for any natural number $ n $, where the rational number $ -\lambda_{1} (\lambda_{1} > 0) $ is the largest pole, and the natural number $ m_{1} $ its multiplicity, of the meromorphic function which is the analytic continuation of
\begin{center}
$\displaystyle J(\lambda)= \int _{f(\omega) < \epsilon} f(\omega)^{\lambda} \varphi(\omega) d\omega, (Re (\lambda)> 0) $,
\end{center}
where $ \epsilon > 0 $ is a constant.\end{theorem}
\textbf{Learning Curves and Resolution of Singularities.}\\
It is well known that regular statistical models, in which $ \lambda_{1}=d/2 $ and $ m_{1}=1 $, are special cases of Theorem (2) (Watanabe). Non-identifiable models, such as Bayesian neural networks, generally have different values, $ \lambda_{1} \leq d/2 $ and $ m_{1} \geq 1 $. It is the task of algebraic geometry to find $ \lambda_{1} $ and $ m_{1} $ for the meromorphic function $ J(\lambda) $ defined in Theorems (1) and (2) (Watanabe), by means of the techniques of resolution of singularities suggested by Watanabe, \cite{SWatanabe42001}, \cite{SWatanabe52001}, \cite{SWatanabe62001}, such as toric modification and blow up, applied to the algebraic set $ \lbrace \lambda \in W : H(\lambda)=J(\lambda) =0 \rbrace $.\\
\begin{corollary}\label{Corollary 4.3.1} Suppose the hypothesis of Theorem \ref{(2) Theorem, Watanabe}. If $ c(n+1) - c(n)= o\left( \dfrac{1}{n\log n}\right) $, then the \textbf{learning curve} is given by,
\begin{center}
$ K(n)= \dfrac{\lambda_{1}}{n} + \dfrac{m_{1} - 1}{n \log n} + o\left( \dfrac{1}{n\log n}\right) $.
\end{center}
Using this formula, in regular models one has that $ \lambda_{1} =d/2$ and $ m_{1}=1 $. For non-identifiable models, such as Bayesian neural networks, the corresponding values are $ \lambda_{1} \leq d/2 $ and $ m_{1} \geq 1 $.\\
\end{corollary}
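For concreteness, the following added snippet evaluates this asymptotic formula for a regular model ($\lambda_{1}=d/2$, $m_{1}=1$) and for a hypothetical singular model with smaller $\lambda_{1}$ and larger $m_{1}$ (the numerical values are purely illustrative and are not derived from any specific machine):
\begin{verbatim}
import numpy as np

def K(n, lam1, m1):
    # Leading terms of the learning curve K(n) ~ lam1/n + (m1 - 1)/(n log n).
    return lam1 / n + (m1 - 1) / (n * np.log(n))

d = 10                                       # illustrative parameter dimension
for n in (100, 1000, 10000):
    print(n, K(n, d / 2, 1), K(n, 1.5, 2))   # regular vs. hypothetical singular model
\end{verbatim}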
\begin{corollary}
\label{Corollary 4.3.2.} Suppose the hypothesis of Theorem \ref{(1) Theorem, Watanabe,}. If $ \varphi'(\omega_{0}) > 0 $ for some $ \omega_{0} \in W_{0} $, then $ \lambda_{1} \leq d/2 $, where $ d $ is the \textbf{dimension of the parameter space}. See refs. \cite{SWatanabe42001}, \cite{SWatanabe52001}.\end{corollary}
\begin{definition}\label{Definition 4.1.9} The \textbf{Kullback distance}, or \textbf{relative entropy}, of a statistical machine
quantifies the distance between the predictive probability $ P(y \vert x,\omega) $ of the output variable $ y \in \mathbb{R}^{N}$ and the true statistical inference of the machine $q(y \vert x) $. \\
\begin{center}
\textbf{(Kullback distance)} $\displaystyle H(\omega) = \int \log \dfrac{q(y \vert x)}{P(y \vert x,\omega)}q(y \vert x)q(x) dxdy$.
\end{center}
where $ q(x) $ is the true probability of the input variable $ x $.\end{definition}
The Kullback distance induces other important definitions.
\begin{definition}\label{Definition 4.1.10.} The \textbf{learning curve} of a statistical machine or \textbf{generalization of the error}, Watanabe \cite{SWatanabe32001}, is given by\\
\begin{center}
$ \displaystyle K(n)= E_{n} \left\lbrace \int \log \dfrac{q(y \vert x)}{P_{n}(y \vert x,\omega)}q(y \vert x)q(x) dxdy \right\rbrace $,
\end{center}
where $ E_{n} \lbrace \bullet \rbrace $ is the expectation over all sets of training example pairs, and $ P_{n}(y \vert x,\omega) $ is the predictive density averaged over the posterior distribution of the machine.
\end{definition}
\textbf{Algebraic geometry of Statistical machines.}\\
\begin{definition}\label{Definition 4.1.11} It is important to comment that an \textbf{algebraic set} $ W_{0} $ is, equivalently, defined by
\begin{center}
$ W_{0}= \lbrace H(\omega)=0 : \omega \in \Theta \rbrace$.
\end{center}
This set is not empty and is the principal object of our study related to singular machines.
\end{definition}
\section{Singular Machines.}
\begin{definition}\label{Definition 4.2.1} Let $(\Omega, \mathbb{F}, P) $ be a probability space and let $ y \in \mathbb{R}^{N}$ be a random vector. Define the \textbf{Fisher information matrix}, as follows, see A.S. Poznyak, \cite{ASPoznyak2009}:
\begin{center}
$\displaystyle I(\omega) = E_{n} \left\lbrace \nabla_{\omega} \log P_{n}(y \vert x,\omega) \nabla_{\omega}^{\intercal} \log P_{n}(y \vert x,\omega) \right\rbrace = \, \, \int \left\lbrace \nabla_{\omega} \log P_{n}(y \vert x,\omega) \nabla_{\omega}^{\intercal} \log P_{n}(y \vert x,\omega) \right\rbrace P_{n}(y \vert x,\omega)q(x) dxdy $,
\end{center}
where $ P_{n} $ is given in Definition \ref{Definition 4.1.10.} In general terms, this expression can be understood as a metric on the parameter space whenever the matrix is positive definite.
\end{definition}
\begin{definition}\label{Definition 4.2.2.} A statistical learning machine is called a \textbf{regular learning machine} if the Fisher information matrix is positive definite; otherwise, if there exists a parameter $ \omega \in \Theta $ (called a singularity of the Fisher information matrix) such that \textbf{det}$ I(\omega)=0 $, it is called a \textbf{singular learning machine}. There are in general several such singularities, and near them the probability of the parameter $ \omega $ cannot be approximated by a quadratic form in the sense of differential geometry, in contrast with regular statistical machines; see ref. \cite{SWatanabe42001}.\end{definition}
\subsection{Effect of the Singularities in the Statistical Learning.}
In the following we define the mean empirical Kullback distance as:
\begin{center}
$ H_{n} = \dfrac{1}{n} \sum_{i=1}^{n} \log \dfrac{q(y_{i} \vert x_{i})}{P(y_{i} \vert x_{i},\omega)}$,
\end{center}
and let $ H(\omega) $ be the usual Kullback distance introduced previously. If there exists a parameter $ \omega_{0} $ such that $ H(\omega_{0})=0 $, then $ H(\omega) $ satisfies the statement of Theorem \ref{Theorem of Hironaka-Atiyah}. Therefore, there exists a variety $ U $ and a resolution map $ g: U \mapsto W $ such that
\begin{center}
$ H(g(u))=A(u)^{2} $ with $ A(u)= u^{k_{1}}_{1}\cdots u^{k_{d}}_{d} $,
\end{center}
and the empirical distance can be written in the same form; for more details see refs. \cite{SWatanabe32001}, \cite{SWatanabe42001}, \cite{SWatanabe62001}.
\begin{definition}\label{Definition (Synaptic Function)} According to the previous notations, the synaptic function of a statistical learning machine is given as follows:
\begin{center}
$ \psi(x,y,u)=\dfrac{1}{A(u)}\left( H(g(u)) - \log\dfrac{q(y \vert x)}{P(y \vert x,g(u))}\right) $.
\end{center}
The function $ \psi(x,y,u) $ can be written as $ \psi(x,y,g^{-1}(u)) $ if $ H(\omega)\neq 0 $; however, it is well defined, in general, even when $ H(\omega)=0 $. On the other hand, it can be proved that $ \psi(x,y,u) $ is an analytic function of $ u $ whenever $ H(g(u))=0 $. From the normal crossing property of $ A(u) $, one can see that $ \psi(x,y,u) $ is well defined in the variable $ u $; see refs. \cite{SWatanabe42001}, \cite{SWatanabe52001}.\end{definition}
\textbf{Learning Coefficient.} In this part of our work, we compute the learning coefficient of the following statistical learning machine:\\
\begin{center}
$P(y \vert x,a,b) = \dfrac{1}{\sqrt{2\pi}} \exp(-\dfrac{1}{2}(y - af(b,x))^{2})$.
\end{center}
The true statistical inference of the machine is given by:
\begin{center}
$ q(y \vert x) =\dfrac{1}{\sqrt{2\pi}} \exp\left( \dfrac{-1}{2}(y -\dfrac{a_{0}f(b_{0},x)}{\sqrt{n}})^{2}\right) $,
\end{center}
where
\begin{center}
$ \int \dfrac{\psi(b)db}{\Vert f(b) \Vert} < \infty $.
\end{center}
Then, the learning curve of this machine can be expanded asymptotically by:
\begin{center}
$ G(n)=\dfrac{\lambda(a_{0},b_{0})}{n} + o\left( \dfrac{1}{n}\right) $.
\end{center}
The \textbf{learning coefficient}, which is independent of $ n $, is given by:
\begin{center}
$ \lambda(a_{0},b_{0})=\dfrac{1}{2} \left\lbrace 1 + a^{2}_{0}\Vert f(b_{0})\Vert^{2} - \sum_{j=1}^{J}a_{0}f_{j}(b_{0})E_{g}\left[ \dfrac{\partial}{\partial g_{j}} \log Z(g)\right] \right\rbrace $,
\end{center}
see Watanabe, \cite{SWatanabe42001}, where $ g=\lbrace g_{j} \rbrace$ is a random variable subject to the $J$-dimensional Gaussian distribution whose mean is zero and whose covariance matrix is the identity. By $ E_{g} $ we mean the expectation over $ g $, and we set
$ Z(g) = \int \exp(L(b))\dfrac{\psi(b)db}{\Vert f(b) \Vert},$
with $ L(b)=\dfrac{m((g + a_{0}f(b_{0}))*f(b))^{2}}{2\Vert f(b) \Vert^{2}} .$
Now, considering, as at the beginning of our example, the synaptic function $ f(b,x) $ expanded in an orthonormal basis $ e_{j} $, one has that
\begin{center}
$ P(y \vert x,a,b)=\dfrac{1}{\sqrt{2\pi}} \exp \left( \dfrac{-1}{2}(y - \sum_{j=1}^{N} ab_{j}e_{j}(x))^{2}\right). $
\end{center}
This is the synaptic function $ \Psi $ in this case. When $ N \geqslant 2 $, in a model whose true regression function is zero (see ref. \cite{SWatanabe42001}), this machine is singular, with space of true parameters $ W_{0}=\lbrace (a,b) \in \mathbb{R}\times\mathbb{R}^{N} : ab_{j}=0,\ j=1,\dots,N \rbrace$, i.e., $a=0$ or $b=0$. Using the toric resolution, it is parametrized by means of $ \omega_{i}=ab_{i} $, and substituting in the model one has:
\begin{center}
$ P(y \vert x,a,b)=\dfrac{1}{\sqrt{2\pi}} \exp\left( \dfrac{-1}{2}(y - \sum_{j=1}^{N} \omega_{j}e_{j}(x))^{2}\right) $.
\end{center}
With this resolution of the parameter space, the model has become a regular model, with learning coefficient given by $ \lambda(\omega_{0}) = N/2$ for an arbitrary parameter $ \omega_{0} $. Meanwhile, without this toric resolution the learning coefficient would be given by the expression $ \lambda( a_{0} ,b_{0}) $ above. Clearly, $ \lambda(a_{0},b_{0}) \neq \lambda(\omega_{0}) $ in general. From the previous facts, one sees that \textbf{the singularities in the parameter space play an important role in statistical learning, \cite{SWatanabe22001}, \cite{SWatanabe32001}.}
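The effect of the two parametrizations on the Fisher information matrix of Definition \ref{Definition 4.2.1} can be illustrated numerically. The sketch below is an added toy computation (the $e_{j}$ are taken to be Fourier modes on a grid and the expectation over $q(x)$ is replaced by a grid average; these choices are assumptions for illustration): $\det I(a,b)$ vanishes for the $(a,b)$ parametrization, while $\det I(\omega)$ does not for the reparametrized model.
\begin{verbatim}
import numpy as np

# Unit-variance Gaussian regression: I(theta) = E_x[ grad_theta f  grad_theta f^T ].
Nx, J = 400, 3
x = np.linspace(0, 2 * np.pi, Nx, endpoint=False)
e = np.stack([np.sqrt(2) * np.cos((j + 1) * x) for j in range(J)])  # ~orthonormal rows

def fisher(grads):
    # grads: (num_params, Nx) array with the gradient of f evaluated on the grid.
    return grads @ grads.T / Nx

a, b = 0.5, np.array([1.0, -0.3, 0.2])
I_ab = fisher(np.vstack([(b @ e)[None, :], a * e]))   # parameters (a, b_1, ..., b_J)
I_w  = fisher(e)                                      # parameters (w_1, ..., w_J)

print(np.linalg.det(I_ab))   # ~ 0 : the (a, b) model is singular
print(np.linalg.det(I_w))    # ~ 1 : the reparametrized model is regular
\end{verbatim}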
\begin{theorem}\label{Corollary 4.3.3} Let $P(y\vert x, \omega) $ be a non-singular statistical learning machine, see Definition \ref{Definition 4.2.2.}, in the probability space $ (\Omega, \mathbb{F}, P) $, and consider the Kullback distance associated with its parameter space, i.e., $ \lbrace \lambda \in W \, \subset \, \Omega: H(\lambda)=J(\lambda) =0 \rbrace $. Then the following polynomial is a parametrization such that, for each $ \Theta \, \subset \, \Omega$, there exists $ \, \omega \,\in \, \Theta \subset \Omega \subset \mathbb{C}^{n}$ with, as we have seen, $ H(\omega)=\sum_{i=1}^{n} c_{i}\omega^{a} $, where $ c_{i} \,\in \, \mathbb{R}$, $ \omega^{a} \, \in \, \mathbb{C^{*}}$ for each $i$, and $ a \, \in \, \mathbb{Z}^{n} $ is the corresponding lattice vector (see Definition \ref{Definition 2.1.2.}, Newton polytope), if and only if the lattice cone $ \sigma \subseteq \, supp(H(\omega)) $, generated by the lattice vectors of the support of $ H(\omega) $, is not singular; i.e.,
\textbf{Det} $ (\sigma)=1 $.\end{theorem}
\begin{proof} The necessity is a consequence of the following facts. Since $ H(\omega) $ is a reparametrization of the singular polynomial $ H(\lambda) $, there exists a resolution map $ g: H(\lambda) \, \longrightarrow \, H(\omega) $, by Theorem \ref{Theorem of Hironaka-Atiyah}, such that $ H(\omega) $ is not singular. Now, let $ \sigma' \subset \, supp(H(\lambda)) $ and $ \sigma \subset \, supp(H(\omega)) $ be the lattice cones generated by the lattice vectors of the supports of $ H(\lambda) $ and $ H(\omega) $, respectively. We claim that $ g $ induces a morphism of the finitely generated monomial $ \mathbb{C}$-algebras $ R_{\sigma'}= \lbrace f \vert \, supp(f)\subset \, \sigma' \rbrace $ and $ R_{\sigma}= \lbrace f \vert \, supp(f)\subset \, \sigma \rbrace $, such that $ H(\lambda)\,\in \, R_{\sigma'} $ and $ H(\omega) \, \in \, R_{\sigma}$; the proof of this follows from its definition as a resolution map. Theorem \ref{Theorem 2.3.0.} then implies that $ R_{\sigma'} \simeq R_{\sigma} \, \Longrightarrow \, \sigma' \simeq \sigma$. Therefore there exists a unimodular transformation $ L \, \in \, \mathbb{Z}^{n} \times \mathbb{Z}^{n} $, whose matrix is associated to the Hilbert basis $ H_{(\sigma')^{\vee}} $ of the dual cone $ (\sigma')^{\vee} $, with $ L(\sigma')=\sigma$; by the unimodularity of this transformation, \textbf{Det} $L(\sigma')=$ \textbf{Det} $ \sigma=1 \, \Longrightarrow \,\sigma$ is not singular.\\
Reciprocally, let $ \sigma \subset \, supp(H(\omega)) $ be a lattice cone with \textbf{Det} $ \sigma =1 $; then there exists a unimodular transformation $ L \in \, \mathbb{Z}^{n} \times \mathbb{Z}^{n} $ such that $ L(\sigma')=\sigma $ for some cone $ \sigma' \subset \, \mathbb{Z}^{n} $, and hence $ \sigma'\simeq \sigma$. Then, by Theorem \ref{Theorem 2.3.0.}, this isomorphism lifts to a toric morphism $ \psi $ such that $ R_{\sigma'}\simeq R_{\sigma} $; to make it compatible with the previous notation, we set $ g=\psi $. Since morphisms of finitely generated monomial $ \mathbb{C}$-algebras are toric, as has been said before, $ g $ is a resolution map of $ H(\omega) $, and therefore this polynomial is not singular. We now prove that the Fisher information matrix $ I(\omega) $ associated to the model is then non-singular, i.e., \textbf{Det} $ I(\omega)\neq 0 $.\\
\textbf{Proof of this claim.} If $ H(\omega) $ is singular, then the inference machine $ P(y \vert x,\omega) $ is a non-identifiable model, by Definition \ref{Definition 4.1.8}. Therefore, $ P(y \vert x,\omega)=q(y\vert x) $, where $ q(y \vert x) $ is the true statistical inference of the model. Applying the operator $ \nabla_{\omega} $ to $P(y \vert x,\omega) $, and using the definition of the Fisher information matrix (Definition \ref{Definition 4.2.1}), one has that $\nabla_{\omega} P(y \vert x,\omega)= \nabla_{\omega} q(y\vert x) = 0 \,\Longrightarrow$ \textbf{Det}
$ I(\omega)=0 $, which concludes the proof. \textbf{q.e.d.} \end{proof}
\section{Applications in singular machines.}
We present an application to the learning curve in the following singular machine.\\
\textbf{Application A.} Consider the polynomial representing the learning curve of a two-layer perceptron, $ H(a,b,c)= a^{2}b^{2}+ 2abc + c^{2} + 3a^{2}b^{4}, \ (a,b,c) \in \mathbb{R}^{3} $, which is singular in its parameter space at $ (0,0,0) \in \mathbb{R}^{3} $. From the Newton polytope defined by $ supp(H) $, one obtains the lattice cone $ \sigma=Con ((2,2,0),(1,1,1),(0,0,2),(2,4,0)) $.\\
Now, we get the following associated dual cone, see Theorem \ref{Theorem 2.3.0.}:
\begin{center}
$ \sigma^{\vee}= Con(2e_{1} - e_{2} - e_{3},-e_{1} + e_{2}, e_{3}) $,
\end{center}
which gives the Hilbert basis associated to monoid $ \sigma \cap \mathbb{Z}^{3}$.
\begin{center}
$ \mathrm{H}_{\sigma^{\vee}} = \lbrace e_{1} + e_{2},e_{1} + e_{2} + e_{3}, e_{1} + 2e_{2} \rbrace $.
\end{center}
From Theorem \ref{Theorem 2.3.0.}, one obtains the geometric realization of the affine toric variety $ X_{\sigma'^{\vee}} $,\\
\begin{center}
$X_{\sigma'^{\vee}} = \textbf{Spec}(\mathbb{C}[S_{\sigma'^{\vee}} \cap \mathbb{Z}^{3}])= \textbf{Spec}(\mathbb{C}[u_{1}u_{2},u_{1}u_{2}u_{3},u_{1}u^{2}_{2}])$.
\end{center}
One chooses the set of generators of $ X_{\sigma'^{\vee}} $, which forms a unimodular matrix $ A=Columns((1,1,0)',(1,1,1)',(1,2,0)') $, and parametrizes this system by means of Laurent monomials.\\
By Theorem \ref{Theorem of Hironaka-Atiyah}, one obtains the resolution map,
\begin{center}
$ g_{1}: (a,b,c) \longrightarrow (u_{1}u_{2},u_{1}u_{2}u_{3},u_{1}u^{2}_{2})/(0,0,0) $,
\end{center}
such that,\\
\begin{center}
$ H(g_{1}(u_{1},u_{2},u_{3}))= u^{4}_{1}u^{4}_{2}u^{2}_{3} + 2u^{3}_{1}u^{4}_{2}u_{3} + u^{2}_{1}u^{4}_{2} + 3u^{6}_{1}u^{6}_{2}u^{4}_{3}$\\
$\qquad \quad\quad\quad\quad\quad=u^{2}_{1}u^{4}_{2}(u^{2}_{1}u^{2}_{3} + 2u_{1}u_{3} + 1 + 3u^{4}_{1}u^{2}_{2}u^{4}_{3}) $\\
$\quad\quad\quad\quad\quad=u^{2}_{1}u^{4}_{2}((u_{1}u_{3} + 1)^{2} + 3u^{4}_{1}u^{4}_{3}u^{2}_{2}) $\\
$\qquad =c^{2}_{1}((b_{1} + 1)^{2} + 3b^{4}_{1}d^{2}_{1}) $\\
$ \quad\quad=c^{2}_{1}(b'^{2}_{1} + 3(b'_{1} - 1)^{4}d^{2}_{1}) $\\
$=c^{2}_{1}(b'^{2}_{1} + 3e^{4}_{1}d^{2}_{1}) $.
\end{center}
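The following computer-algebra check is ours and not part of the original derivation (a minimal sketch in Python/SymPy; the variable names are assumptions): it verifies that substituting the monomial map $ g_{1} $ into $ H $ produces the factorization displayed above.
\begin{verbatim}
# Hedged sketch: verify that H(g_1(u1,u2,u3)) factors as
# u1^2 u2^4 * ( (u1*u3 + 1)^2 + 3*u1^4*u2^2*u3^4 ), where
# g_1: (a,b,c) -> (u1*u2, u1*u2*u3, u1*u2^2).
import sympy as sp

a, b, c, u1, u2, u3 = sp.symbols('a b c u1 u2 u3')
H = a**2*b**2 + 2*a*b*c + c**2 + 3*a**2*b**4
Hres = H.subs({a: u1*u2, b: u1*u2*u3, c: u1*u2**2})
expected = u1**2*u2**4*((u1*u3 + 1)**2 + 3*u1**4*u2**2*u3**4)
print(sp.expand(Hres - expected))   # prints 0
\end{verbatim}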
Applying the Hilbert-basis resolution technique a second time to the polynomial defined by $ h(b'_{1},d_{1},e_{1})=b'^{2}_{1} + 3e^{4}_{1}d^{2}_{1} $, the Hilbert basis associated to the Newton polytope of $ supp(h) $ is $ \mathrm{H_{\sigma^{\vee}}}=\lbrace e_{1},e_{1} + e_{2},e_{1} + 2e_{2} \rbrace $, and as a consequence one obtains the resolution map,
\begin{center}
$g_{2}:(b'_{1},d_{1},e_{1}) \longrightarrow (s_{1},s_{1}s_{2},s_{1}s^{2}_{2})/(0,0,0)$.
\end{center}
Then one obtains the affine toric variety $ X_{\sigma^{\vee}}= \textbf{Spec}(\mathbb{C}[s_{1},s_{1}s_{2},s_{1}s^{2}_{2}]) $, and so,
\begin{center}
$ H(g_{1}(g_{2}(s_{1},s_{2},s_{3})))=c^{2}_{1}s^{2}_{1}(1+ 3s^{4}_{1}s^{8}_{2}s^{2}_{2}) $\\
$ =c^{2}_{1}s^{2}_{1}(1 + 3s^{4}_{1}s^{10}_{2}) $,
\end{center}
which is non-singular at $ (0,0,0) \in \mathbb{R}^{3} $. The same fact is proved by S. Watanabe; see refs. \cite{SWatanabe42001}, \cite{SWatanabe62001}.\\
\textbf{Application (B). Mixture of binomial distributions.} This kind of statistical learning is used in the spectral analysis of mutations, see \cite{MAoyaguiWatanabe2005}, and the statistical machine is characterized by the following probabilities: \\
True probability of $ x $, $ q(x=k)=Bin_{N} (x;p^{*})= \binom {N}{x}p^{*x} (1 - p^{*})^{N - x}$.\\
Probabilistic inference of the model, $ P(x=k \vert w)= aBin_{N}(x,p_{1}) + (1 - a) Bin_{N}(x,p_{2}) $.\\
Parameter space $ w$ is defined by:
\begin{center}
$ w = (\lbrace a_{i} \rbrace^{K}_{i=1}, \lbrace p_{i} \rbrace^{K+1}_{i=1}) $,
\end{center}
where coordinates of the parameters $ p_{i} $ are defined in the range $ 0 < p_{i} <1/2 $ and,
\begin{center}
$ a_{K+1}= 1 - \sum^{K}_{i=1} a_{i} $.
\end{center}
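As an illustration only (a hedged sketch of ours, not part of the original formulation; the numerical values of $ N $, $ a $, $ p_{1} $, $ p_{2} $ below are arbitrary assumptions), the probabilistic inference of the model for $ K=1 $, i.e., a two-component mixture of binomials, can be evaluated as follows:
\begin{verbatim}
# Hedged sketch: two-component mixture of binomial distributions,
# P(x | w) = a*Bin_N(x; p1) + (1 - a)*Bin_N(x; p2).
import numpy as np
from scipy.stats import binom

N = 20                       # number of trials (illustrative)
a, p1, p2 = 0.3, 0.1, 0.4    # mixture weight and component parameters (illustrative)
x = np.arange(N + 1)

P = a*binom.pmf(x, N, p1) + (1 - a)*binom.pmf(x, N, p2)
print(P.sum())               # sums to 1: a valid probability distribution
\end{verbatim}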
The following theorem was proved by Watanabe, see ref. \cite{KYamazakiWatanabe2004}; below, the same result is proved using Hilbert bases and toric morphisms: \\
\begin{theorem}\label{Theorem} Consider a learning machine characterized by the probabilities defined above. Then, for a sufficiently large number $ n $ of training examples, in accordance with Corollary \ref{Corollary 4.3.1}, its learning curve is given by:
\begin{center}
$ K(n)= \dfrac{3}{4} \log(n) + C $,
\end{center}
where $C$ is independent of $ n $.\\
\end{theorem}
\textbf{Proof.} The Kullback information distance is given by:
\begin{center}
$ \mathrm{H}(x,a,b_{1},b_{2})= \sum^{N}_{x=0} q(x) \log\left( \dfrac{q(x)}{P(x \vert \omega)} \right) $
$ =(ap_{1} + (1 - a)p_{2})^{2} + (ap^{2}_{1} + (1 - a)p^{2}_{2})^{2} +\cdots$\\
$ =b^{2}_{2} + (ab^{2}_{1} + (b_{2} - ab_{1})^{2})^{2} +\cdots+$ higher-order terms,
\end{center}
which is singular at $ (0,0,0) \in \mathbb{R}^{3}$. According to Theorem \ref{Theorem of Hironaka-Atiyah}, the Hilbert basis lemma, and our technique with toric morphisms, one can see that the previous polynomial is generated by the ideal $ \mathrm{I} \subset \mathbb{C}[a,b_{1},b_{2}] $\\
\begin{center}
$ \mathrm{I}:= < b_{2}^{2}; ab_{1}^{2}; ab_{1}b_{2}> $
\end{center}
and the lattice cone
\begin{center}
$ \sigma = Con((0,0,2);(1,2,0);(1,1,1)). $
\end{center}
Computing the geometric realization of the affine toric variety $ X_{\sigma^{\vee}} $, associated with monoid $ S_{\sigma}= \sigma^{\vee} \cap \mathbb{Z}^{3} $ and with Hilbert basis, one obtains \\
\begin{center}
$ \mathrm{H}_{\sigma^{\vee}}= \lbrace e_{3}; e_{1} + e_{2} + e_{3}; e_{1} + 2e_{2} \rbrace $,
\end{center}
where the toric variety is:
\begin{center}
$ X_{\sigma^{\vee}} =$\textbf{Spec}$\mathbb{C}[w_{3},w_{1}w_{2}w_{3},w_{1}w_{2}^{2}] $,
\end{center}
and the coordinate system:
\begin{center}
$a= w_{3} ; $\\
$ b_{1}=w_{1}w_{2}w_{3}; $\\
$ b_{2}=w_{1}w_{2}^{2}$,
\end{center}
and using this parametrization we get the resolution map $ g : X_{\sum'} \longrightarrow X_{\sum} $ such that $ \mathrm{H}(g(w))$, $ w=(w_{1},w_{2},w_{3}) \in \mathbb{R}^{3}$, is non-singular at $ (0,0,0) $; from Theorem \ref{Theorem of Hironaka-Atiyah},
\begin{center}
$ \mathrm{H}(g(w))= w_{1}^{2}w_{2}^{4} + (w_{3}w_{1}^{2}w_{2}^{2}w_{3}^{2}+ (w_{1}w_{2}^{2} - w_{1}w_{2}w_{3}^{2})^{2})^{2} +\cdots+$ higher-order terms\\
$=w_{1}^{2}w_{2}^{4} + [w_{1}^{2}w_{2}^{2} + (w_{3}^{3}+(w_{2} -w_{3}^{2})^{2})]^{2} +\cdots+$ higher-order terms\\
$ =w_{1}^{2}w_{2}^{4} + w_{1}^{4}w_{2}^{4}[w_{3}^{3}(w_{2}-w_{3}^{2})^{2}]^{2} +\cdots+$ higher-order terms\\
$ =w_{1}^{2}w_{2}^{4}(1 + w_{1}^{2}(w_{3}^{6} +2w_{3}^{3}(w_{2}- w_{3}^{2})^{2}+(w_{2}-w_{3}^{2})^{4}) +\cdots) +$ higher-order terms.
\end{center}
Writing out the integrand of $ J(z) $, we get,
\begin{center}
$ \displaystyle J(z)=\int H(g(w))^{z} \vert g'(w) \vert \,dw$
$ \displaystyle =\int ((1 + w_{3}^{2}w_{1}^{2}+...)w_{2}^{4}w_{1}^{2})^{z} \vert w_{2}^{2}w_{1} \vert dw_{1}dw_{2}dw_{3} $
$ =\dfrac{f(z)}{4z + 3} $,
\end{center}
where the largest pole of $ J(z) $ is $ \lambda_{1}=\dfrac{3}{4} $ with multiplicity $ m_{1}=1 $; then the learning curve is given by: \\
\begin{center}
$ K(n)=\dfrac{3}{4} \log(n) + C $;
\end{center}
\textbf{q.e.d.} see ref. \cite{KYamazakiWatanabe2004}.\\
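As a side check (ours, not part of the proof above; the reduction to the leading monomial part of the integrand is an assumption made for illustration), the pole structure of $ J(z) $ can be confirmed symbolically:
\begin{verbatim}
# Hedged sketch: the leading monomial factor w1^2 w2^4 of H(g(w)), together
# with the Jacobian factor |w2^2 w1|, gives on the unit cube
#   int_0^1 w1^(2z+1) dw1 * int_0^1 w2^(4z+2) dw2 = 1/((2z+2)(4z+3)),
# whose largest pole (after analytic continuation) is z = -3/4,
# i.e. lambda_1 = 3/4 with multiplicity 1.
import sympy as sp

z = sp.symbols('z', positive=True)   # positivity only to ensure convergence here
w1, w2 = sp.symbols('w1 w2', positive=True)
J = sp.integrate(w1**(2*z + 1), (w1, 0, 1)) * sp.integrate(w2**(4*z + 2), (w2, 0, 1))
print(sp.factor(J))                  # 1/(2*(z + 1)*(4*z + 3))
\end{verbatim}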
\textbf{Application (C).} The following application is a toric resolution for a three-layer perceptron and its learning curve; for details of the computation of this learning curve, see Watanabe \cite{SWatanabe42001}. We define the machine in the probability space $ (\Omega, \mathbb{F}, P) $ as follows: \\
\begin{enumerate}
\item \textit{A priori} probability distribution, $ \varphi(\omega) > 0 $.
\item Predictive probability of the vector $ y \in \mathbb{R}^{N} $,
\begin{center}
$P(y \vert x,\omega)=\dfrac{1}{(2\pi s^{2})^{N/2}}\exp \left( \dfrac{-1}{2s^{2}} \Vert y - f_{k}(x,\omega)\Vert^{2} \right)$,
\end{center}
with $ x \in \mathbb{R}^{M} $ and $ s > 0 $ is the standard deviation.
\item True probability distribution of model,
\begin{center}
$ q(y \vert x)q(x) = \dfrac{1}{(2\pi s^{2})^{N/2}}\exp \left( -\dfrac{1}{2s^{2}}\Vert y \Vert^{2}\right) q(x)$.
\end{center}
\end{enumerate}
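Before computing the Kullback distance, the following numerical sketch (ours, not taken from the references; all sizes and parameter values are illustrative assumptions) shows how the hidden-unit function $ f_{K}(x,\omega)=\sum_{k=1}^{K} a_{k}\sigma(b_{k}x + c_{k}) $ with $ \sigma=\tanh $ and the Gaussian predictive density of the machine can be evaluated:
\begin{verbatim}
# Hedged sketch of the three-layer perceptron and its predictive density
# p(y | x, w) = (2 pi s^2)^(-N/2) exp( -||y - f_K(x,w)||^2 / (2 s^2) ).
import numpy as np

K, M, N, s = 3, 4, 2, 1.0          # hidden units, input dim, output dim, std. dev.
rng = np.random.default_rng(0)
a = rng.normal(size=(K, N))        # output weights a_k in R^N
b = rng.normal(size=(K, M))        # input weights  b_k in R^M
c = rng.normal(size=K)             # biases         c_k in R

def f(x):                          # f_K(x, w) = sum_k a_k * tanh(b_k . x + c_k)
    return np.tanh(b @ x + c) @ a

def p(y, x):                       # Gaussian predictive density
    r = y - f(x)
    return (2*np.pi*s**2)**(-N/2) * np.exp(-r @ r / (2*s**2))

x, y = rng.normal(size=M), rng.normal(size=N)
print(f(x), p(y, x))
\end{verbatim}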
First we compute the Kullback distance of this machine, which is defined as follows,
\begin{center}
$ \displaystyle H(a,b,c)= \dfrac{1}{2s^{2}}\int \Vert f_{K}(x,a,b,c) \Vert^{2} q(x) dx $
$ =\sum^{N}_{p=1} \sum^{K}_{h,k=1} B_{hk}(b,c) a_{hp} b_{kp} $.
\end{center}
with associated parameter space, and with the hidden-unit function given by $ f_{K}(x,\omega)= \sum^{K}_{k=1} a_{k} \sigma(b_{k} x + c_{k}) $:
\begin{center}
$ a=\lbrace a_{k} \in \mathbb{R}^{N}; k=1,2...,K \rbrace $\\
$ b= \lbrace b_{k} \in \mathbb{R}^{M}; k=1,2,...,K \rbrace $\\
$ c=\lbrace c_{k} \in \mathbb{R}; k=1,2,...,K \rbrace $,\\
$ a_{k}=\lbrace a_{kp} \in \mathbb{R}; p=1,2...,N \rbrace $\\
$ b_{k}=\lbrace b_{kp} \in \mathbb{R};q=1,2,...,M \rbrace $.
\end{center}
where we define $ \displaystyle B_{hk}(b,c)=\dfrac{1}{2s^{2}}\int \sigma(b_{h}*x + c_{h})\sigma(b_{k}*x + c_{k})q(x)dx $, with $ \sigma(x)=\tanh(x) $ the synaptic (activation) function; see Watanabe \cite{SWatanabe42001} for details of the formulation. Expanding terms,
\begin{center}
$ H(a,b,c)=\sum_{p=1}^{N}(B_{11}(b,c)a_{1p}a_{1p} + B_{22}(b,c)a_{2p}a_{2p}+...+B_{KK}a_{Kp}a_{Kp})$\\
$ =B_{11}a_{11}^{2}+B_{22}a_{21}^{2}+....+B_{KK}a_{K1}^{2}$\\
$+B_{11}a_{12}^{2}+B_{22}a_{22}^{2}+....+B_{KK}a_{K2}^{2} $\\
$ +B_{11}a_{13}^{2}+B_{22}a_{23}^{2}+....+B_{KK}a_{K3}^{2} $\\
$ +B_{11}a_{14}^{2}+B_{22}a_{24}^{2}+...+B_{KK}a_{K4}^{2}+ $\\
$ .............+......+......$\\
$ +B_{11}a_{1N}^{2}+B_{22}a_{2N}^{2}+....+B_{KK}a_{KN}^{2}. $
\end{center}
This polynomial, seen in the coordinates $ a_{hk}\in\mathbb{R} $, is singular at $ (0,0,...,0)\in\mathbb{R} ^{KN} $; we construct a toric resolution in these coordinates using the concept of projective sets, as seen previously. We define the affine charts using the following projective set,
\begin{center}
$ U_{j}=\lbrace[a_{11},...,a_{1N},a_{21},...,a_{2N},...,a_{K1},...,a_{KN}]\in \mathbb{R} P^{KN-1}:a_{jj}\neq 0\rbrace$.
\end{center}
where $ \mathbb{R} P^{KN-1}$ is the $ (KN-1) $-dimensional real projective space; as we have seen, there is also a bijection with the $ (KN-1) $-dimensional real affine space $ \mathbb{R}^{KN-1} $, given by,
\begin{center}
$ U_{j}:\mathbb{R} P^{KN-1}\longmapsto \mathbb{R} ^{KN-1} $
\end{center}
\begin{center}
$ U_{j}:[a_{11},...,a_{1N},a_{21},...,a_{2N},...,a_{K1},...,a_{KN}]\longmapsto
(1,a_{12}a_{11}^{-1},...,a_{1N}a_{11}^{-1},a_{21}a_{11}^{-1},...,a_{2N}a_{11}^{-1},...
,a_{K1}a_{11}^{-1},...,a_{KN}a_{11}^{-1})$
\end{center}
Now, in these projective coordinates, we rewrite $ H(a,b,c)=u_{11}^{2}H_{1}(a,b,c) $, since:
\begin{center}
$ H(a,b,c)=a_{12}^{2}u_{11}^{2}B_{11}+a_{13}^{2}u_{11}^{2}B_{11}+...+a_{1N}^{2}u_{11}^{2}B_{11}+a_{21}^{2}u_{11}^{2}B_{22} +...+ a_{2N}^{2}u_{11}^{2}B_{22}+...+
a_{k1}^{2}u_{11}^{2}B_{KK}+...+a_{KN}^{2}u_{11}^{2}B_{KK}$
\end{center}
where the new coordinates in the affine space $ \mathbb{R} ^{KN-1} $ are $ (u_{11},a_{12},...,a_{1N},...,a_{K1},...,a_{KN}) \in \mathbb{R} ^{KN-1} $. In this new coordinate ring we construct the lattice cone of the Newton polytope associated to the reparametrized polynomial, which gives $ \sigma=Con(2e_{1}+2e_{2},...,2e_{1}+2e_{1N},...,
2e_{1}+2e_{K1},...,2e_{1}+2e_{KN}) $; in matrix form this gives rise to the following array associated to the cone,
$ A_{\sigma}= $
\begin{center}
\begin{tabular}{c c c c c c c}
2&2&0&.&.&.&0 \\
2&0&2&.&.&.&0\\
2&0&0&2&.&.&0\\
.&.&.&.&.&.&.\\
.&0&.&.&.&2&0\\
2&0&.&.&.&0&2\\\\\\
\end{tabular}
\end{center}
It is possible to show inductively, using the Singular program \cite{DGPS}, that the Hilbert basis associated to this lattice cone is given by the following matrix array,\\
$ H_{\sigma^{\vee}}= $
\begin{center}
\begin{tabular}{c c c c c c c}
1&0&0&.&.&.&1 \\
1&0&.&.&.&1&0\\
1&0&.&.&1&0&0\\
.&.&.&.&.&.&.\\
.&.&1&0&.&.&0\\
1&1&0&.&.&.&0\\\\
\end{tabular}
\end{center}
where the lattice vectors of this array represent a regular lattice cone; by Theorem \ref{Theorem 2.3.0.} and Theorem \ref{Theorem of toric modification}, we have a toric blow-up or toric resolution, and we also obtain the corresponding toric variety $ X_{\sigma^{\vee}} $, taking as exponents the elements of this basis to construct the monomial homomorphisms (Theorem \ref{Theorem 2.3.0.}), which yields the following transformation of monomial coordinates:
\begin{center}
$ a_{11}=v_{11}; $\\
$ u_{11}=v_{11}^{-1}$,\\
$ a_{hp}=u_{11}*u_{hp}$; $ \forall $ $h\neq 1 $ or $p\neq 1$.
\end{center}
which is the reparametrization shown in Watanabe \cite{SWatanabe42001}. Furthermore, we have an extra coordinate $ u_{11} $, since we work with the projective set $ U_{j} $, which we construct from the affine chart;
\begin{center}
$ A_{0}=\lbrace(1,a_{12}a_{11}^{-1},...,a_{1N}a_{11}^{-1},a_{21}a_{11}^{-1},...,a_{2N}a_{11}^{-1},...
,a_{K1}a_{11}^{-1},...,a_{KN}a_{11}^{-1}) \vert a_{11} \neq 0 \in\mathbb{R}^{KN-1} \rbrace $,
\end{center}
so explicitly we have the toric variety as the affine algebraic scheme
\begin{center}
$ X_{\sigma^{\vee}} = $\textbf{Spec}$(\mathbb{C}[\sigma^{\vee} \cap \mathbb{Z}^{KN-1}])=A_{0} $;
\end{center}
It is enough to realize the toric resolution in this chart, since toric morphisms are proper and extend to the whole variety. Finally, by Watanabe's theorems, we compute the largest pole of the zeta function,
\begin{center}
$\displaystyle J(z)=\int_{U(\delta)} H(g(u),b,c)^{z} \varphi_{0} \vert g'(u) \vert \,du'\,db\,dc $.\\
\end{center}
In Watanabe it is shown that this toric resolution is not complete and that another resolution of the Kullback distance $ H(g(u),b,c) $ is necessary, applying Hilbert bases again; now the monomial transformation is given by,
\begin{center}
$ g :\lbrace u_{kp},v_{k}; 1\leqslant k \leqslant K; 1 \leqslant p \leqslant M \rbrace \mapsto \lbrace b_{kp},c_{k}; 1 \leqslant k \leqslant K; 1 \leqslant p \leqslant M \rbrace$.
\end{center}
that is defined by,
\begin{center}
$ b_{11}=u_{11} $\\
$\qquad \qquad \qquad \qquad b_{kp}=u_{11}u_{kp}, \quad (k\neq 1)$ or $ (p\neq 1) $,\\
$ c_{k}= u_{11}v_{k} $.
\end{center}
Then, by the Atiyah-Hironaka theorem, there exists an analytic function $ H_{2}(a,u',v) $ such that,
\begin{center}
$ H(a,b,c)= u^{2}_{11}H_{2}(a,u',v) $,
\end{center}
which implies that $ \lambda_{1} \leqslant (M + 1)K/2 $.\\
Combining the above results, the largest pole $ -\lambda_{1} $ of $ J(z)$ satisfies the inequality,
\begin{center}
$ \lambda_{1} \leqslant \dfrac{K}{2} min\lbrace N,M +1 \rbrace $,
\end{center}
With this information and Corollary \ref{Corollary 4.3.1}, we obtain the learning curve associated to the perceptron:\\
\begin{center}
$ K(n) \leqslant \dfrac{K}{2} \min\lbrace N,M+1 \rbrace \log(n) + o\left( \log (n)\right) $.
\end{center}
We reproduced the first toric resolution, in the first reparametrization of the Kullback distance, and its associated toric variety by means of Hilbert bases; the second resolution is given in Watanabe \cite{SWatanabe42001}. It is possible to apply the Hilbert-basis technique as many times as necessary until the desired resolution is obtained, in accordance with Hironaka's theorem, \cite{HHironaka1964}.
\thispagestyle{plain}
\section*{Conclusions}
The principal conclusion of this work is the use of Theorem \ref{Theorem 2.3.0.} and, as a consequence, Theorem \ref{Corollary 4.3.3}, which are fundamental for the formalization and for reproducing results previously reported in singular statistical learning, S. Watanabe \cite{SWatanabe12001}, \cite{SWatanabe42001}, \cite{SWatanabe52001}. The practical application of these theorems is obtained by means of Hilbert bases with the Singular program, ref. \cite{DGPS}. This opens the door to other perspectives of investigation for machines with a high-dimensional parameter space, which are important in data science. It should be clear that the algorithmic complexity of computing Hilbert bases is a topic of current interest in computational algebraic geometry, but its solution for lattice polytopes with thousands of vertices may well help in the solution of many other problems beyond the examples presented here.\\\\
\input amsart-template.bbl
\end{document} | 34,766 |
Debyer 0.1
Debyer is a software for calculation of diffraction patterns.
It can be run on any modern platform, from typical Linux or MS Windows workstation to large computer clusters. The parallel version uses MPI library.
Debyer is distributed under the terms of GNU General Public License.
- last updated on: September 27th, 2006, 17:47 GMT
- price: FREE!
- developed by: Marcin Wojdyr
- license type: GPL (GNU General Public License)
- category: ROOT \ Science and Engineering \ Chemistry
Emergency Preparations
Summary: This outlines an emergency preparedness plan developed by Times Square Church in New York City, covering food selection and storage, conservation, loss of electricity, money, water, hygiene, first aid, cooking and baby care.
Find articles by Larry Fox about the biblical end times
Page Contents:
Thoughts on preparation
Food storage
Helpful hints
Times Square Church Emergency Preparedness Plan
From God’s Plan to Protect His People in the Coming Depression, by David Wilkerson (1998)
The scriptures make it clear that preparation for an emergency does not negate trust in God but rather reinforces it.
- If used sparingly, the suggested items should last approximately sixty days.
- Regular store-bought meat, milk and other perishables will not generally be available to consumers.
- Suggested items commonly sold in cans, bottles or storable, packaged form have a shelf life typically measured in months or, in some cases, years.
- People who require special diets should determine whether the levels of sodium, sugar, fat or cholesterol in the suggested items might adversely affect their health.
- Careful attention should be paid to instructions concerning the reconstituting of dehydrated items with water. Choices may be made from all the suggested food groups based on personal taste and availability of items.
- Besides:
- Vegetables. Corn flakes, raisin bran.
- Beans and peas. Twenty-four 15-ounce cans.
- Assorted fruit. Twenty-four 15-ounce cans. For example: apples, plums, cherries, peaches, pineapples, figs.
- Assorted fruit juices. Ten l6-ounce cans. For example: orange, apple, grape, fruit punch, prune.
- Fruit blends and dried fruits. Twenty 16-ounce cans. For example: fruit cocktail, cranberry sauce, applesauce. Also ten 4-ounce packages of assorted dried fruits. For example: raisins, dates, prunes.
- Milk. Fifty 10-ounce packages of any long-lasting milk product, such as Parmalat or powdered milk.
- Pasta. Ten 1-pound packages of assorted pasta.
- Rice. One 10-pound bag or ten 1-pound bags of minute rice.
- Crackers. Five 16-ounce boxes of all kinds.
- Combination foods. Five 16-ounce cans. For example: beef stew, ravioli, spaghetti and meatballs, macaroni and cheese, chicken and dumplings.
- Eye-wear, eye care, hearing aids, prosthetic and orthopedic devices, etc. And begin storing items now – don’t procrastinate.
Helpful Hints
Conservation
- Use stored items sparingly; extend their use for the duration of their shelf life.
- Practice now using less water for showers, washing, cooking, etc.
- Learn to use leftovers; do not discard unused but storable food.
- Plan all meals ahead of time in order to economize and avoid waste.
Loss of electricity
- A crisis may mean loss of cooking gas and home heating, so it’s important to make contingency plans.
- Regular shopping for refrigerated and perishable items should continue until loss of electricity or food shortages forces you to revert to your emergency supply.
- Keep a supply of long-burning candles.
- If
- Reduce your spending levels now and avoid incurring new debts of any kind.
- Set aside $10-$20 per week at home in a safe place.
- It is vitally important to keep as much cash on hand as you can reasonably secure.
- Keep canned foods away from direct sunlight.
- Never use a swollen or punctured can; it may cause botulism.
- Frequent opening of containers could cause items to spoil. Keeping food in glass containers
- Although we cannot be certain, experts do not expect water shortages to last more than thirty days.
- Plan to store enough water for drinking, cooking and hygiene for one month – perhaps 25 gallons.
- Use water for personal hygiene sparingly (for example, three times per week).
- Prepare food in batches in order to save time, fuel and water.
- Reduce washing of pots by making one-pot meals, such as stews, rich soups, stir-fry dishes, etc.
- Use disposable eating ware to save on dish and utensil washing.
- Baby wipes and antibacterial cleansers can reduce the need for water used in washing hands.
Hygiene
- Store hygiene products in quantity, including soap, toothpaste, mouthwash, deodorant, tissue, feminine supplies, baby diapers, Depends, bed pan with plastic liner, lye, disinfectant, air freshener, chlorine bleach.
First aid
- Fill two months’ worth of prescriptions if possible.
- Buy an ample supply of vitamins and mineral supplements.
- Purchase first-aid items, including bandages, pain relievers, antacids, laxatives, antiseptics, etc.
Cooking
- Build main dishes around pasta or grains, with meals such as rice and beans.
- Stock up on canned heat, such as Sterno.
- Budget.
- Do not use fried, greasy, brined (pickles, sauerkraut), processed (sausages) or high-calorie (candy, sodas, cakes) foods.
- Avoid using honey due to possible salmonella contamination.
- Fresh or canned juice is preferred over powdered or packaged beverages.
- Formula: Ready-to-eat: Twelve 8-ounce cans. Concentrate: Four 8-ounce cans. Powder: Eight 15-ounce cans.
Following is a suggested sixty-day food storage plan for an infant at least one year old:
- Iron-enriched baby cereal. Two 16-ounce boxes of the following: rice, barley, wheat or toasted oats.
- Vegetables. Sixty 6-ounce jars of any of the following: squash, sweet potato or mixed vegetable.
- Fruit. Sixty 6-ounce jars of any of the following: pears, peaches, apples, plums or strawberries.
- Meat and dairy. Sixty 6-ounce jars of any of the following: turkey and rice, beef, chicken or pasta; forty 8-ounce packages of any long-lasting milk product, such as Parmalat or powdered milk.
Lindsey Stirling Dances While Playing Violin in Mind-Blowing Freestyle in Night 1 of 'DWTS' Finals
Lindsey Stirling's freestyle on Monday's Dancing With the Stars was one for the ages.
The acclaimed violinist hit the stage during the first night of DWTS' two-night finale, and performed a number tailored specifically to her amazing talents.
Set to the powerful string composition of "Palladio," Stirling and partner Mark Ballas hit the stage in stylized leather conductor's outfits and delivered a show-stealing number that blew the roof off the theater.
Incorporating elements from every style of dance Stirling and Ballas have performed throughout the season, their lengthy freestyle saw them dancing intimately together, then suddenly surrounded by other performers who lifted Stirling high in the air, and then performing side by side yet again, without ever missing a beat.
At one point, the 31-year-old musician danced while playing her violin, and continued to play as she was hoisted in the air and spun around by her imaginative partner.
"The [idea] for our freestyle is that we are conductors, and then we get swept away by the music," Stirling explained during a pre-taped segment before the performance.
"This is meant to reflect you to a tee," Ballad explained. "The whole thing represents how you embody music, and how you can kind of command it and own it."
The bombastic, larger-than-life number received nothing but effusive praise from all the judges. Guest judge Julianne Hough said it was "like a Tim Burton quirky musical masterpiece of awesomeness," while Bruno Tonioli called it a "modern classic extravaganza, conducted and orchestrated to perfection."
The pair got a perfect 40 out of 40 for their efforts, which is the same score they received for the first dance of the night -- a quickstep set to "Barflies at the Beach" by Royal Crown Revue.
Between the two numbers, the couple earned a flawless score of 80, tying them with Jordan Fisher and his pro partner Lindsay Arnold.
Stirling and Ballas will be facing off against Fisher and Arnold -- as well as Frankie Muniz and his partner Witney Carson -- when the second night of the DWTS finals kicks off on Tuesday at 8 p.m. ET/PT on ABC.
TITLE: Expectation value of the anticommutator of the bosonic creation and annihilation operator
QUESTION [0 upvotes]: The number operator is given by:
$$\hat{n}= a^{\dagger} a.$$
For a presentation, I have to derive the expectation value of the anticommutator of the bosonic operators $a$ and $a^{\dagger}$ :
$$\langle \{a , a^{\dagger} \} \rangle = \langle 2 \, \hat{n} + 1 \rangle $$
How can I do this?
REPLY [1 votes]: $[a,a^\dagger]=1$ gives $aa^\dagger=1+a^\dagger a$
So $\{a,a^\dagger\}=aa^\dagger+a^\dagger a=2a^\dagger a+1=2\hat{n}+1$
To calculate the expectation value $\langle 2\hat{n}+1 \rangle$ we have (take $\hbar=1$ )
\begin{align}
\langle 2\hat{n}+1 \rangle&=\text{Tr}[\hat{\rho} (2\hat{n}+1)] \\
&=\text{Tr}[\frac{e^{-\beta \omega (\hat{n}+1/2)}}{\text{Tr}[e^{-\beta \omega (\hat{n}+1/2)}]} (2\hat{n}+1)] \\
\end{align}
First let's compute $Z=\text{Tr}[e^{-\beta \omega (\hat{n}+1/2)}]=e^{-\beta \omega/2}\sum_{n}\langle n|e^{-\beta \omega \hat{n}}|n \rangle=e^{-\beta \omega/2}\sum_{n}e^{-\beta \omega n} $
Then
\begin{align}
\text{Tr}[\frac{e^{-\beta \omega (\hat{n}+1/2)}}{Z} (2\hat{n}+1)]&=1+2\frac{e^{-\beta \omega/2} \sum_{n}n e^{-n \beta \omega}}{Z} \\
&=1+2\frac{ \sum_{n}n e^{-n \beta \omega}}{\sum_{n}e^{-\beta \omega n}} \\
\langle 2\hat{n}+1 \rangle&=1+\frac{2}{e^{\beta \omega}-1}
\end{align}
Hope this is helpful. | 195,739 |
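As a quick numerical sanity check (a sketch of mine, not part of the original derivation; the value of $\beta\omega$ is arbitrary), a truncated sum reproduces the closed form $1+\frac{2}{e^{\beta\omega}-1}$:

import numpy as np
bw = 0.7                            # beta*omega, arbitrary positive value
n = np.arange(200)                  # truncated sum; converges quickly
w = np.exp(-bw*n)                   # Boltzmann weights e^{-beta*omega*n}
print(1 + 2*np.sum(n*w)/np.sum(w))  # thermal average of 2*n + 1
print(1 + 2/np.expm1(bw))           # closed form; agrees to machine precision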
A LIFESAVING crew which has been flooded with calls since the onset of the warm weather is warning pleasure-seekers to steer clear of the banks of the Dee.
Flint Lifeboat is already bracing itself for a flurry of requests for help after the Environment Agency gave the go-ahead to cockle fishing in Flintshire this week.
The Mostyn Bank and Thurstaston beds were opened on Tuesday, with only Thurstaston on the English side of the estuary open on Wednesday.
Flint coastguard Garry Jones said within hours of returning home from the Talacre call-out on Tuesday, he was attending another incident.
'We received another call at 5.15pm on Tuesday reporting a boat which had broken down between Mostyn and the Wirral,' he said. 'There were 12 people on board.'
The Hoylake off-shore lifeboat was required to carry out the operation due to the number of people involved.
'They took 10 people off the boat and left two on board for it be towed back to Mostyn, where they all returned home safe and well.'
But it's not only cocklers who are stretching the Flint crew's already limited resources.
Crew member Alwyn Dunn said people venturing to the riverside to enjoy the warm weather are also putting a strain on the lifeboat.
'We've had a particularly busy spell this week with the warm weather bringing so many people to the river's edge,' he said.
'They don't realise that although it's warm, the treacherous currents of the river are still as dangerous as ever.' On Sunday, two adults and a child from Neston were spotted by a member of the public near Parkgate.
The crew launched at Connah's Quay Ski Club in a bid to offer assistance to the party who were picking edible seaweed.
But help was declined by the walkers who said they had a good local knowledge of the area, and made their way back of their own accord.
The crew were back out again on Monday following reports of three children and an adult in difficulty.
The coastguard received a call from a woman claiming she had spotted the party attempting to cross from the English to the Welsh side of the estuary near the blue bridge in Queensferry.
When the Flint rescuers arrived at 2.30pm, there was no sign of them.
But after speaking to some fishermen, it was evident that the walkers had crossed the river safely and made their way home.
Alwyn said: 'The message to the public is don't put yourself in any danger.'
Honorary secretary Alan Forrester is concerned that a culmination of hot weather and an increased number of people working on the banks of the Dee could end in tragedy.
'We anticipate an even busier period with the cockle beds opening,' he said. 'But we are urging them not to block slipways because we can't get to them or other people in difficulty.
'They need to think about the consequences when parking because they're putting their own lives at risk.'
An Environment Agency spokesperson said the situation was being assessed yesterday to decide which beds would remain open. | 354,998 |
ROCHESTER, Minn. (WCCO) –.
After years of research, doctors at the Mayo Clinic discovered they could learn a lot about a person’s risk of colon cancer from their poop.
“The way this test works is to detect cancerous or precancerous [cells], along with blood, within the stool specimen,” Shah said, “thereby providing some prediction about the presence of cancer or precancerous lesions.”
Clinical trials show that cancer detection rates of the at-home test are close to those reported for a colonoscopy.
“I would say 90 percent would be a number that, in a very general way, is quite a good test.” Shah said.
With this new test kit, doctors are optimistic more people will get tested for colon cancer, increasing early detection and decreasing the number of people who die from the disease.
The Cologuard test will be available nationally next month.
You have to get a prescription from your doctor to purchase it and results take about two weeks to get back. | 248,038 |
The annual Herring migration along Vancouver Island is an exciting time for everyone! The fishing fleets are all out ready for the action, as well as the seagulls, sea lions, seals, brant geese, and of course people. Such an awesome time to observe nature. The water turns a turquoise blue as the herring migrate along the way, which reveals their whereabouts. The colours, the sights and the action make this a memorable experience for all. This takes place in early March if you are wanting to make the trip to see for yourself. The best places to view are Qualicum Beach, French Creek and Parksville, British Columbia, Canada.
12 MUKHI RUDRAKSHA With Certificate (RUC12-006), $175 (out of stock)
Product Specification:
- PRODUCT: DWADASH MUKHI (TWELVE FACE) RUDRAKSHA With Certificate
- SKU: RUC12-006
- REPORT NUMBER: N/A
- WEIGHT: 02.116 gm
- SIZE: 18.30*18.61*15.72
- COLOR: Brown
- MATERIAL: Natural Rudraksh
- SHAPE & CUT: Round Bead
- OPTIC CHARACTER: N/A
- REFRACTIVE INDEX: N/A
- SPECIFIC GRAVITY: N/A
40+ Modern Meeting Room Designs With Glass Walls
The individual pods that are used as meeting rooms were created to incorporate all types of interesting features from all over the world. The massive meeting room is going to be set up according to the demands of each group…
Financing Resources
Fast and convenient mortgage loan financing. Just click here to submit an online application or call ERA Mortgage at 1-888-308-3499.
Get discounts or special offers from national brands, or names and information on local vendors who are properly licensed and insured to practice in their trade.
The commercial sales and marketing division has a long history of proven success. The firm provides expertise in the areas of Market Analysis and Site Selection for Shopping Centers, Restaurants, Motels, Manufacturing, Office, Multifamily Apartments, and Retail.
Korean Spicy Crab – Gye Muchim
I love taking my time to browse the aisles of the wonderful markets in Las Vegas Chinatown. Often, something new to me will catch my eye that I can’t resist trying. This treasure, from the fresh kimchi deli-counter in the Greenland Korean Market might be the most remarkable find yet.
Blue crabs (the kind used for Maryland steamed crabs) are quartered and marinated in an intense chili sauce fragrant with ginger, garlic and sesame oil. The crabs are sweet, spicy, salty and finger licking good. Along with some plain rice, and an icy cold bottle of soju, a Korean beverage similar to vodka, the chilled crabs made for one of the most extraordinary lunches I have had in recent memory. They are insanely delicious, and I fear, very addictive.
Wow, that looks spicy all right! Hope you're doing great; long time no see on the web… have a wonderful holiday season!
Satellite-TV

Are you experiencing pixelated, hazy pictures on your television screen, even after installing your new TV set? Well, you don’t have to worry much. What’s the solution? Get rid of your analog aerial, buy a digital TV antenna and get it installed on your rooftop. The result? You will be able to watch all your favourite shows without the slightest disturbance.

How much does a digital TV antenna cost? Well, these antennas used to be expensive. But over the past few years, the cost of digital aerials has gone down drastically. Of course, the price of antennas may vary depending on models and features. Contact a shop dealing with digital aerials near your locality to get a holistic view of TV aerial prices, antenna installation charges, etc. Listed below are some of the reasons why you should opt for a digital TV antenna.

Uninterrupted viewing: Digital aerials are resistant to signal disturbance. Unlike analogue antennas, digital aerials are built to withstand heavy rain or storms. Certain aerials come with a waterproof coating. Those without water-resistant facilities can be transformed into one. All you need to do is dial technicians specializing in antenna repair, aerial installation and the like. They will apply a special type of coating on the base of your antenna to make it waterproof.

The death of analogue: Every nation has gone digital. Today, the existence of analogue aerials is minimal. Hence, to view pay-per-view and Freeview channels, you need to get a digital TV antenna installed at your home. However, in order to experience the desired picture quality, you have to ensure that the digital aerial is installed properly. Or else, you may encounter the same problem that you faced with your old analogue antenna. In case you are not familiar with the process involved in TV antenna installation, call up technicians specializing in antenna installation to get the job done. They will take care of all the technicalities associated with TV aerial installation. Here’s what they can do for you:

- Recommend to you the best aerial that is perfectly suited for your TV
- Ensure that there are no sharp bends in the cable. A ninety-degree bend somewhere can hamper transmission of signals, thereby adversely affecting the performance of the entire setup

For your information, these technicians can also help you out with home theatre setup, TV wall mounting, set top box setup, etc. In case you are facing problems in setting up any one of these, you can always take the help of experts specialized in such tasks. So what are you waiting for? Replace your old, worn-out analogue aerial with a digital one to experience crystal clear picture and superior sound quality!
Tattoos

According to the National Institutes of Health (NIH), nearly 16 million American adults suffer from depression. The World Health Organization (WHO) says depression carries the heaviest burden of disability among mental and behavioral disorders.
TITLE: Is a topology sandwiched between two norms compactly generated?
QUESTION [1 upvotes]: Recall that a Hausdorff topological space $X$ is called compactly generated if any set whose intersections with compacts are compact is closed. Locally compact and first countable spaces are compactly generated.
Let $E$ be a Banach space with the norm $\|\cdot\|$ and the unit ball $B_E$. Let $|||\cdot|||\le \|\cdot\|$ be another norm, and let $\tau$ be a linear (or even locally convex) topology which is stronger than the $|||\cdot|||$-topology, but weaker than the $\|\cdot\|$-topology on $B_E$. Does it follows that $B_E$ with $\tau$ is compactly generated?
REPLY [3 votes]: Let $\tau$ be the weak topology on the Banach space $\ell_1$. It is known that each weakly convergent sequence in $\ell_1$ is norm convergent (i.e., $\ell_1$ has the Schur property). This property implies that $\tau$ is not compactly generated (otherwise it would be equal to the norm topology). Now consider the norm
$$|||(x_n)_{n\in\omega}|||=\sum_{n=0}^\infty\frac{|x_n|}{2^n}$$and observe that the topology $\tau$ is stronger that the topology generated by the norm $|||\cdot|||$ on the unit ball of $\ell_1$.
If we want the topology $\tau$ to be stronger than the topology generated by the norm $|||\cdot|||$ we can replace $\tau$ by the supremum of the weak topology and the topology generated by the norm $|||\cdot|||$. This modified topology still will not be compactly generated (because of the same Schur property).
XENIA — A group of employees from the City of Xenia have filed a Petition for Representation Election with the State Employment Relations Board.
The petition seeks certification of a bargaining unit including several positions under the operations of the City Manager’s Office and Finance Office.
According to the City of Xenia, the Petition was filed on March 7 of this year.
“We had heard some employees were investigating forming a union last December, but we only found out for sure when we received notification from SERB,” Xenia City Manager Brent Merriman said. “There are about 20 employees, one third under the City Manager’s umbrella and the rest in the Finance Office.”
The union seeking to represent the bargaining unit is Ohio Council 8, American Federation of State, County and Municipal Employees, AFL-CIO (AFSCME).
The union currently represents the City’s public service maintenance, wastewater and water employees.
Filing the Petition for Representation Election is the first step in the process to seek union representation.
Upon certification of the bargaining unit, SERB will determine the date and time for conducting a secret ballot election. The majority of votes cast in the secret ballot election will determine whether or not AFSCME becomes the representative for the bargaining unit.
“The process is regulated by SERB to have a ballot,” Merriman said. “We have to post a notice for employees that there will be a vote to form a union. If the majority of those who cast a ballot vote for the measure, the union will be certified.”
William Duffield can be reached at 937-372-4444 ext. 133 or on Twitter @WilliamDuffield | 207,800 |
Coco Keeno – Coco Goldrush
June 18th, 2012 by Rosa
I got this box of Coco Keeno as a free sample from their booth at Sweets and Snacks. They were sampling all kinds of chocolate-covered things, and I asked for a box of their Coco Goldrush to take home for review. I thought they were the most delicious and unique of their offerings.
The box describes them as “organic dried goldenberry covered in natural cocoa.” I’d never heard of goldenberries before – I can’t decide if they sound magical or like a juvenile euphemism.
Apparently they’re dried gooseberries, which I’ve had before in England. I remember fresh gooseberries tasting like super tart grapes.
The dried goldenberries were all shriveled up. They sort of looked like giant golden raisins studded with seeds, like raspberries.
They were seedy and chewy and became increasingly tart as I chewed them, and they finished with a slight bitterness. Alone, they were too intensely sour to be enjoyable.
Covered in dark chocolate, however, they found a great flavor foil that balanced them out. The dark chocolate was dry with a light cocoa flavor that tamped down the goldenberry’s tartness.
When all chomped together, these were a uniquely tasty treat. I don’t buy into all the superfood hubbub about them, but I will give them an OM.
\begin{document}\maketitle
\section{Introduction}\label{intro}
Nahm's equations are a system of ordinary differential equations for three functions $\vec\phi=(\phi_1,\phi_2,\phi_3)$ of a real variable $y$
that take values in the Lie algebra $\frak g$ of a compact Lie group $G$. These functions satisfy
\begin{equation}\label{tofu} \frac{\d \phi_1}{\d y}+[\phi_2,\phi_3] =0,\end{equation}
along with cyclic permutations of these equations. More succinctly, we write
\begin{equation}\label{ofu}\frac{\d\vec\phi}{\d y}+\vec\phi\times \vec\phi = 0 \end{equation}
or
\begin{equation}\label{zofu}\frac{\d \phi_i}{\d y}+\frac{1}{2}\sum_{j,k}\epsilon_{ijk}[\phi_j,\phi_k]=0,\end{equation}
where $\epsilon_{ijk}$ is the antisymmetric tensor with $\epsilon_{123}=1$. These ways of writing the equation show that if we view $\vec\phi$
as an element of $\frak{g}\otimes \R^3$, then Nahm's equation is invariant under the action of $SO(3)$ on $\R^3$.
In Nahm's work on magnetic monopole solutions of gauge theory \cite{Nahm}, a key role was played by a special singular solution of Nahm's
equations on the open half-line $y>0$. The solution reads
\begin{equation}\label{dofo} \vec\phi(y)=\frac{\vec\tt}{y},\end{equation}
where $\vec\tt=(\tt_1,\tt_2,\tt_3)$ is a triplet of elements of $\frak g$, obeying
\begin{equation}\label{nofo} [\tt_1,\tt_2]=\tt_3, \end{equation}
and cyclic permutations thereof. In other words, the $\tt_i$ obey the commutation relations of
the Lie algebra $\frak{su}(2)$; we can think of them as the images of a standard basis of
$\frak{su}(2)$ under a homomorphism\footnote{We are primarily interested in the case that $\varrho$
is non-zero and hence is an embedding of Lie algebras, but our considerations also apply for $\varrho=0$.
See Appendix \ref{groups} for some background and examples concerning homomorphisms from $\frak{su}(2)$ to
a simple Lie algebra $\frak g$.} $\varrho:\frak{su}(2)\to\frak{g}$.
We will call this solution the Nahm pole solution. The Nahm pole
solution has been important in many applications of Nahm's equations;
for example, see \cite{K}, which is also relevant as background for the present paper.
Nahm's work on monopoles was embedded in D-brane physics in \cite{Diaconescu}.
The Nahm pole therefore plays a role in D-brane physics,
and this was explained conceptually in \cite{fuzzy}. Results about D-branes often have
implications for gauge theory, and in the case at hand,
by translating the D-brane results to gauge theory language, one learns \cite{GW} that the
Nahm pole should be used to define a natural boundary condition not just for Nahm's
1-dimensional equation but for certain gauge theory equations in higher dimensions.
The equations in question include second order equations of supersymmetric Yang-Mills
theory, and associated first-order equations that are relevant to the geometric Langlands correspondence
\cite{KW} and the Jones polynomial and Khovanov homology of knots \cite{WittenK,WittenKtwo}.
Our aim in this paper is to elucidate the Nahm pole boundary condition. Though we
will also discuss generalizations (including a five-dimensional
equation \cite{haydys,WittenK} that is important in the application to Khovanov homology), we
will primarily study a certain system of first-order equations in four
dimensions for a pair $A,\phi$. Here $A$ is a connection on a $G$-bundle $E\to M$, with
$M$ an oriented Riemannian four-manifold
with metric $g$, and $\phi$ is a 1-form on $M$ valued in
$\ad(E)$ (the adjoint bundle associated to $E$). The equations read
\begin{align}\label{zobo} F-\phi\wedge\phi+\star\,\d_A\phi &=0 \cr
\d_A\star \phi & = 0,\end{align}
where $\star$ is the Hodge star and $\d_A=\d+[A,\cdot]$ is the gauge-covariant
extension of the exterior derivative. Alternatively, in local coordinates $x^1,\dots,x^4$,
\begin{align} \label{robo} F_{ij}-[\phi_i,\phi_j]+\epsilon_{ij}{}^{kl}D_k\phi_l &= 0 \cr D_i\phi^i& = 0, \end{align}
where $D_i=D/D x^i$ is the covariant derivative (defined using the connection
$A$ and the Riemannian connection on the tangent bundle of $M$), $\epsilon_{ijkl}$ is the
Levi-Civita antisymmetric tensor, and indices are raised and lowered using the metric $g$. (Summation over repeated indices is understood.)
These equations (or their generalization to $t\not=1$; see eqn.\ (\ref{noxo}) below) have sometimes been called the KW equations and we will use this name for lack
of another one. For recent work on these equations, see \cite{Taubes,Taubestwo,GU}.
To explain the relation to the Nahm pole, take $M$ to be the half-space $x^4\geq 0$ in a copy of $\R^4$ with Euclidean coordinates $x^1,\dots,x^4$ (oriented with
$\epsilon_{1234}=1$). We denote this half-space as $\R^4_+$ and write $\vec x=(x^1,x^2,x^3)$ and $y=x^4$.
The KW equations have a simple exact solution
\begin{equation}\label{telmo} A=0,~~ \phi=\frac{\sum_{a=1}^3 \tt_a\,\d x^a}{y}, \end{equation}
where the $\tt_a$ obey the $\frak{su}(2)$ commutation relations (\ref{nofo}).
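(As an aside, the following symbolic check is ours and not part of the original text; it uses the conventions of the previous sketch, $\tt_a=-\tfrac{i}{2}\sigma_a$, and verifies that this configuration solves eqns.\ (\ref{robo}) on $\R^4_+$ with the flat metric, where $F=0$ since $A=0$.)
\begin{verbatim}
# Hedged sketch: check that A = 0, phi = sum_a t_a dx^a / y solves
# F_ij - [phi_i, phi_j] + eps_ij^kl D_k phi_l = 0 and D_i phi^i = 0
# on R^4_+ with Euclidean metric (indices raised/lowered trivially).
import itertools
import sympy as sp

y = sp.symbols('y', positive=True)
s1 = sp.Matrix([[0, 1], [1, 0]])
s2 = sp.Matrix([[0, -sp.I], [sp.I, 0]])
s3 = sp.Matrix([[1, 0], [0, -1]])
t = [-sp.I*s/2 for s in (s1, s2, s3)]             # su(2): [t1, t2] = t3, cyclic

phi = [t[0]/y, t[1]/y, t[2]/y, sp.zeros(2)]       # phi_4 = 0; x^4 = y
def D(k, f):                                      # covariant derivative, A = 0
    return f.diff(y) if k == 3 else sp.zeros(2)   # fields depend only on y
def eps(i, j, k, l):                              # Levi-Civita, eps_1234 = +1
    return sp.LeviCivita(i + 1, j + 1, k + 1, l + 1)

comm = lambda X, Y: X*Y - Y*X
for i, j in itertools.combinations(range(4), 2):
    V = -comm(phi[i], phi[j]) + sum(
        (eps(i, j, k, l)*D(k, phi[l]) for k in range(4) for l in range(4)),
        sp.zeros(2))
    assert sp.simplify(V) == sp.zeros(2)          # first KW equation, F = 0
print(sum((D(i, phi[i]) for i in range(4)), sp.zeros(2)) == sp.zeros(2))  # True
\end{verbatim}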
This gives an embedding of the basic Nahm pole solution (\ref{zobo}) in four-dimensional gauge theory, for any choice of the homomorphism
$\varrho:\frak{su}(2)\to \frak g$. However, in many applications, the basic case is that $\varrho$ defines a principal embedding of $\frak{su}(2)$ in $\frak g$,
in the sense of Kostant. For $G=SU(N)$,
this means that the $N$-dimensional representation of $G$ is an irreducible representation of $\varrho(\frak{su}(2))$; in general, the principal embedding is the closest
analog of this for any $G$.
If $\varrho $ is a principal embedding, we also say that $\varrho$ is regular or that $\phi$ has a regular Nahm pole. The motivation for this terminology
is that if $\varrho$ is a principal embedding, then any nonzero complex linear combination of the $\tt_a$ is a regular element of the complex
Lie algebra ${\frak g}_\C=\frak g\otimes _\R\C$; for instance, $\tt_1+i\tt_2$ is a regular nilpotent element.
For every $\varrho$, one defines \cite{GW} a natural boundary condition on the KW equations that we call the Nahm pole boundary condition, but in this introduction,
we consider only the case of a principal embedding. (For more detail and the generalization to any $\varrho$, see section \ref{nonregular}.)
For $M=\R^4_+$ and $\varrho$ a principal embedding, the Nahm pole boundary condition is defined by saying
that one only allows solutions that coincide with the Nahm pole solution (\ref{telmo}) modulo terms that are less singular for $y\to 0$; the equation then implies
that in a suitable gauge these less
singular terms actually vanish for $y\to 0$.
The Nahm pole boundary condition can be generalized, with some care, to a more general four-manifold with boundary.
See section 3.4 of \cite{WittenK} and also section \ref{fourmanifold} of the present paper.
There are two main results of the present paper. The first is that the Nahm pole boundary condition is elliptic. Since the equation and solutions
contain singular terms, this is not the standard notion of ellipticity of boundary problems, formulated for example using the Lopatinski-Schapiro
conditions, but is the analog of this in the framework of uniformly degenerate operators \cite{M-edge}. In fact, we verify the ellipticity of the
linearization of this problem. The data prescribing the Nahm pole boundary condition are inherently discrete, so the linearization measures the
fluctuations of the solution relative to this principal term. The boundary conditions for this linear operator simply require solutions to
blow up less quickly than the Nahm pole; see section 2.4 for a precise statement. The steps needed to verify that this linearization with
such boundary conditions is elliptic involve first computing the indicial roots of the problem, and then showing that the linear operator
in the model setting of the upper half-space $\R^4_+$ is invertible on a certain space of pairs $(a,\varphi)$ satisfying these boundary conditions.
The indicial roots measure the formal rates of growth or decay of solutions as $y \to 0$. One of the key consequences of ellipticity
is that the actual solutions of this linearized problem, and eventually also the nonlinear equations, possess asymptotic expansions
with exponents determined by these initial roots. This is a strong regularity statement which allows us to manipulate solutions
to these equations rather freely.
The second main result here is a uniqueness theorem for the KW equations with Nahm pole boundary condition. This states that a solution of these
equations on $M=\R^4_+$ which satisfies the Nahm pole condition at $y=0$ and which is also asymptotic at a suitable rate to the Nahm pole
solution for $(\vec x, y)$ large must actually be the Nahm pole solution. This uniqueness theorem is important in the application
to the Jones polynomial \cite{WittenK} and corresponds to the expected result that the Jones polynomial of the empty link is trivial.
The proof of the uniqueness theorem involves finding a suitable Weitzenbock formula adapted to the Nahm pole solution, and showing that
the fluctuations around the Nahm pole solution decay at a rate sufficient to justify that boundary terms in the Weitzenbock formula vanish.
Essentially the same reasoning leads to an analogous uniqueness theorem for the related five-dimensional equation that is expected to give a
description of Khovanov homology. In this case, the uniqueness theorem corresponds to the statement that the Khovanov homology of the
empty link is of rank 1. This Weitzenbock formula can be linearized, and this version of it is used to establish the second part of the proof
that the linearized boundary problem is elliptic. The uniqueness theorem and the ellipticity both hold for arbitrary $\varrho$.
The uniqueness theorem means roughly that solutions of the KW equations with Nahm pole boundary condition do not exhibit ``bubbling'' along the boundary.
The basis for this statement is that on $\R_+^4$, the KW equations and
also their Nahm pole solution are scale-invariant, that is invariant under $(\vec x,y)\to (\lambda \vec x,\lambda y)$, $\lambda>0$. If there were a non-trivial
solution on $\R^4_+$ with the appropriate behavior at infinity, it could be ``scaled down'' by taking $\lambda$ very small and
glued into any given solution that obeys the Nahm pole boundary conditions. Ths would give a new approximate
solution that obeys the same boundary conditions and coincides with the given solution except in a very small region near the boundary; the behavior
for $\lambda\to 0$ would be somewhat similar to bubbling of a small Yang-Mills instanton.
The Nahm pole boundary condition can be naturally generalized to include knots.
In the framework of \cite{WittenK}, this is done by modifying the boundary conditions in the equations (\ref{zobo})
along a knot or link $K\subset\partial M$. The appropriate general procedure for this is only known if $\varrho$ is a principal embedding.
The model case is that $M=\R_+^4$ and $K$ is a straight line $\R\subset \R^3=\partial M$. To every irreducible
representation $R^\vee$ of the Langlands or GNO dual group $G^\vee$ of $G$, one associates a model solution of eqns. (\ref{zobo}) that coincides with the Nahm pole solution
away from $K$ and has a more complicated singular behavior along $K$. This more complicated behavior depends on $R^\vee$. Solutions for the model case
were found in section 3.6 of \cite{WittenK} for
$G$ of rank 1 and in \cite{Mikhaylov} for any $G$. A boundary condition on eqns. (\ref{zobo}) is then defined by saying that a
solution should be asymptotic to this model solution along $K$, and to have a Nahm pole singularity elsewhere along $\partial M$.
This boundary condition can again be extended naturally, with some care, to
the case that $M$ has a product structure $W\times \R_+$ near its boundary, with an arbitrary embedded knot or link in
$W=\partial M$. (In the case of a link with several connected components, each component can be labeled by a different representation of $G^\vee$, corresponding to
a different singular model solution.) The Nahm pole boundary condition in the presence of a knot is again subject to a uniqueness theorem, which
says that for $M=\R^4_+$, with $K=\R\subset\partial M$, and for any representation $R^\vee$, a solution that agrees with the
model solution near $\partial M$ and has appropriate behavior at infinity must actually coincide with the model solution. This more general type of uniqueness
theorem and the closely related ellipticity of the boundary condition in the presence of a knot will be described elsewhere.
\section{Uniqueness Theorem For The Nahm Pole Solution}\label{second}
In this section we lay out the strategy for proving uniqueness of the Nahm pole solution. The centerpiece of this is the introduction of
the Nahm pole boundary condition, and the analysis which shows that this is an elliptic boundary condition, so that solutions have well
controlled asymptotics near the boundary. A proper statement of this boundary condition requires a somewhat elaborate calculation of the
indicial roots of the problem. These are the formal growth rates of solutions, but without further analysis, there is no guarantee
that solutions grow at these precise rates. This further analysis rests on the verification of the ellipticity of the linearized KW operator acting on
fields with a certain imposed growth rate at the boundary. The second main result here is a Weitzenbock formula for these
equations. There are a number of such formulas, in fact, and the subtlety is to choose one which is well adapted to solutions with
Nahm pole singularities. A linearization of this formula plays an important role in understanding ellipticity of the Nahm
pole boundary condition.
These results and ideas are somewhat intertwined, and we present them in a way that is perhaps not the most logical from a strictly
mathematical point of view, but which emphasizes the essential points as quickly as possible. Thus we first explain the Weitzenbock formula,
and then proceed to the calculation of indicial roots. We are then in a position to give a precise definition of the Nahm pole boundary conditions.
At this point, we use the Weitzenbock formula to prove the uniqueness theorem. This is only a formal calculation unless we prove
that solutions do have these asymptotic rates. This is established in the remainder of the paper.
\subsection{Solutions On A Four-Manifold Without Boundary}\label{background}
We first review how to characterize the solutions of the KW equations when formulated on an oriented four-manifold $M$ without boundary.
(See section 3.3 of \cite{WittenK}.) The details are not needed in the rest of the paper. This material is included only to motivate the way we will search for
a uniqueness theorem in the presence of the Nahm pole.
As a preliminary, we give a brief proof of ellipticity of the KW equations. By definition, a nonlinear partial differential equation is called elliptic if its linearization
is elliptic. For a gauge-invariant equation, this means that the linearization is elliptic if supplemented with a suitable gauge-fixing condition. In the case of the KW
equations linearized around a solution $A_{(0)},\phi_{(0)}$, a suitable gauge-fixing condition is $\d_{A_{(0)}}\star (A-A_{(0)})=0$, or equivalently
\begin{equation}\label{zelob}\sum_i\frac{D}{D x^i} (A-A_{(0)})^i=0, \end{equation}
where $D/D x^i =\partial_i +[A_{(0)\,i},\cdot]$ is the covariant derivative defined using the connection $A_{(0)}$. Any other gauge condition
that differs from this one by lower order terms also gives an elliptic gauge-fixing condition; a convenient choice turns out to be
\begin{equation}\label{elob}\sum_i\frac{D}{Dx^i}(A-A_{(0)})^i+\sum_i[\phi_{(0)\,i},\phi^i-\phi_{(0)}^i]=0. \end{equation}
It is convenient to regard the linearized
KW equations as equations for a pair $\Phi=(A-A_{(0)},\star(\phi-\phi_{(0)}))$ consisting of a 1-form and 3-form on $M$ both
valued in $\ad(E)$. With this interpretation, the symbol of
the linearized and gauge-fixed KW equations is the same as the symbol of the operator $\d+\d^*$ mapping odd-degree differential forms on $M$ valued in $\ad(E)$
to even-degree forms valued in $\ad(E)$. This is a standard example of an elliptic operator, so the KW equations are elliptic.
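Recall the standard argument: up to inessential factors of $i$, the symbol of $\d+\d^*$ in the direction of a covector $\xi$ is $\sigma_\xi=\xi\wedge\,\cdot\,-\iota_\xi$, where $\iota_\xi$ denotes contraction with the vector dual to $\xi$, and the identity $\xi\wedge\iota_\xi\omega+\iota_\xi\left(\xi\wedge\omega\right)=|\xi|^2\omega$ gives
\begin{equation}\sigma_\xi^2=-\left(\xi\wedge\iota_\xi+\iota_\xi\,\xi\wedge\right)=-|\xi|^2,\end{equation}
which is invertible for $\xi\not=0$.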
For future reference, we observe that the $\d+\d^*$ operator admits two standard and very simple elliptic boundary conditions, which are much
more straightforward than the Nahm pole boundary
condition which is our main interest in the present paper.\footnote{As we explain in section \ref{nonregular}, these
boundary conditions
can be viewed as special cases of the Nahm pole boundary condition with $\varrho=0$.} These conditions are respectively
\begin{equation}\label{tively}i^*(\Lambda)=0 \end{equation}
and
\begin{equation}\label{sively}i^*(\star\Lambda)=0,\end{equation}
where $i:\partial M\to M$ is the inclusion and, for a differential form $\omega$ on $M$, $i^*(\omega)$ is the pullback of $\omega$ to $\partial M$.
Now set
\begin{equation}\label{telbo}\V_{ij}=F_{ij}-[\phi_i,\phi_j]+\epsilon_{ij}{}^{kl}D_k\phi_l,~~~~\V^0=D_i\phi^i, \end{equation}
so that the KW equations are
\begin{equation}\label{elboxoc} \V_{ij}=\V^0=0. \end{equation}
These equations arise in a twisted supersymmetric gauge theory in which the bosonic part of the action is
\begin{equation}\label{baction}I=-\int_M\d^4x \sqrt g\Tr\left(\frac{1}{2}F_{ij}F^{ij}+D_i\phi_j D^i\phi^j+R_{ij}\phi^i\phi^j+\frac{1}{2}[\phi_i,\phi_j][\phi^i,\phi^j]\right), \end{equation}
where sums over repeated indices are understood and $R_{ij}$ is the Ricci tensor. Also $\Tr$ is an invariant, nondegenerate, negative-definite quadratic form
on the Lie algebra $\frak g$ of $G$. For example, for $G=SU(N)$, $\Tr$ can be the trace in the $N$-dimensional representation; the precise normalization of the
quadratic form will not be important in this paper.
The simplest way to find a vanishing theorem for the KW equations is to form a Weitzenbock formula. We take the sum of the squares of the equations and
integrate over $M$. After some integration by parts, and without assuming the boundary of $M$ to vanish, we find
\begin{equation}\label{zoffbo} -\int_M\d^4x \sqrt g\Tr\left(\frac{1}{2}\V_{ij}\V^{ij}+(\V^0)^2\right)=I+\int_{\partial M}\d^3x\, \epsilon^{abc}\Tr\left(\frac{1}{3}\phi_a[\phi_b,\phi_c]-\phi_a
F_{bc}\right). \end{equation} (We write $i,j,k=1,\dots,4$ for indices tangent to $M$ and $a,b,c=1,\dots,3$ for indices tangent to $\partial M$.) In evaluating the boundary term, we
assume that near its boundary, $M$ is a product $\partial M\times [0,1)$. We also assume that, if $n$ is the normal vector to $\partial M$, then $n\, \llcorner \phi=0$ along
$\partial M$, or equivalently, that the pullback of the 3-form $\star\phi$ to $\partial M$ vanishes:
\begin{equation}\label{llomigo} i^* (\star\phi) =0. \end{equation}
This condition is needed to get a useful form for the boundary contribution in the Weitzenbock formula (or alternatively because of supersymmetric
considerations explained in \cite{GW}), so it will be part of the Nahm pole boundary condition. However, for the rest of this introductory discussion, we
assume that $\partial M$ is empty.
If the KW equations $\V_{ij}=\V^0=0$ hold and $\partial M$ vanishes, it follows from the formula above that $I=0$. This immediately leads to a vanishing theorem:
if the Ricci tensor of $M$ is
non-negative, then each term in (\ref{baction}) must separately vanish. Thus, the curvature $F$ must vanish; $\phi$ must be covariantly constant and its components
must commute, $[\phi_i,\phi_j]=0$; and finally $\phi$ must be annihilated by the Ricci tensor, $R_{ij}\phi^j=0$.
Still on an oriented four-manifold without boundary, the KW equations are
actually subject to a stronger vanishing theorem than we have just explained, because of a fact that is related to the underlying supersymmetry:
modulo a topological invariant,
the functional $I$ can be written as a sum of squares in multiple ways. To explain this, we first generalize the KW equations to depend on a real parameter $t$.
Given a two-form $\Lambda$ on $M$, we write $\Lambda=\Lambda^++\Lambda^-$, where $\Lambda^+$ and $\Lambda^-$ are
the selfdual and anti-selfdual projections of $\Lambda$. Then we define
\begin{align}\V^+_{ij}(t)&=(F_{ij}-[\phi_i,\phi_j]+t(D_i\phi_j-D_j\phi_i))^+ \cr \V^-_{ij}(t)&=(F_{ij}-[\phi_i,\phi_j]-t^{-1}(D_i\phi_j-D_j\phi_i))^-\cr
\V^0&= D_i\phi^i.
\end{align}
The equations \begin{equation}\label{noxo}\V^+_{ij}(t)=\V^-_{ij}(t)=\V^0=0\end{equation}
are a one-parameter family\footnote{One can naturally
think of $t$ as taking values in $\R\cup\infty=\RP^1$. For $t\to 0$, one should multiply $\V^-(t)$ by $t$, and for $t\to\infty$, one should multiply $\V^+(t)$ by $t^{-1}$.
The proof of ellipticity given above at $t=1$ can be extended to all $t$. One approach to this uses the formula (\ref{zelg}) below, supplemented by some special
arguments at $t=0,\infty$.}
of elliptic differential equations that reduce to (\ref{robo}) for $t=1$.
All considerations of this paper can be extended to generic\footnote{The Nahm pole boundary condition is defined for generic $t$ starting with a model solution in which
the Nahm pole appears in $A$ as well as $\phi$, with $t$-dependent coefficients.} $t$, but to keep the formulas simple and because this case has the closest relation to
Khovanov homology, we will generally focus on the case $t=1$.
The generalization of eqn.\ (\ref{zoffbo}) to generic $t$ reads
\begin{align}\label{zelg} -\int_M\d^4x \sqrt g&\Tr\left(\frac{t^{-1}}{t+t^{-1}}\V^+_{ij}(t)\V^{+\,ij}(t)+\frac{t}{t+t^{-1}}\V_{ij}^-(t)\V^{-\,ij}(t)+(\V^0)^2\right)\cr &
=I +\frac{t-t^{-1}}{4(t+t^{-1})}\int_M \d^4 x \,\epsilon^{ijkl}\Tr F_{ij}F_{kl}. \end{align}
In writing this formula, we have assumed that the boundary of $M$ vanishes. (For a more general formula for $\partial M$ non-empty, see eqn.\ (2.60) of \cite{WittenK}.)
Notably, the expression $I$ that appears on the right hand side of (\ref{zelg}) is the functional defined in eqn.\ (\ref{baction}), independent of $t$. This immediately
leads to very strong results about possible solutions.
Suppose, for example, that we find $A,\phi$ obeying the original KW equations (\ref{robo})
at $t=1$. Then setting $t=1$ in (\ref{zelg}), the left hand side vanishes, and
\begin{equation}\label{elg}\P= \frac{t-t^{-1}}{4(t+t^{-1})}\int_M \d^4 x \,\epsilon^{ijkl}\Tr F_{ij}F_{kl} \end{equation}
certainly also vanishes at $t=1$, so therefore $I=0$. Now suppose that the integral
$\int_M\d^4 x \epsilon^{ijkl}\Tr\,F_{ij}F_{kl}$ -- a multiple of which is the first Pontryagin class $p_1(E)$ -- is nonzero. Then we can choose $t\not=1$
to make $\P<0$, and we get a contradiction: the left hand side of (\ref{zelg}) is non-negative, and the right hand side is negative.
Hence any solution of the original equations at $t=1$ is on a bundle $E$ with $p_1(E)=0$. The same is actually true for a solution of the more general eqn.\ (\ref{noxo}) at any
value of $t$ other than 0 or $\infty$. To show this, starting with a solution of (\ref{noxo}) at, say, $t=t_0$, one observes that unless $p_1(E)=0$,
one would reach the same contradiction as before by considering eqn.\ (\ref{zelg}) at a value $t=t_1$ at which $\P$ is more negative than it is at $t=t_0$.
Such a $t_1$ always exists for $t_0\not= 0,\infty$ unless $p_1(E)=\P=0$.
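One way to see that such a $t_1$ exists is to note that, as a function of $t$,
\begin{equation}\frac{t-t^{-1}}{4\left(t+t^{-1}\right)}=\frac{1}{4}\cdot\frac{t^2-1}{t^2+1}\end{equation}
is strictly increasing on $(0,\infty)$ with range the open interval $(-1/4,1/4)$. Neither endpoint is attained, so when $\int_M\d^4x\,\epsilon^{ijkl}\Tr\,F_{ij}F_{kl}\not=0$, the infimum of $\P$ over $t\in(0,\infty)$ is not attained, and a $t_1$ with $\P(t_1)<\P(t_0)$ can always be found.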
Once we know that $\P=0$, it follows that the right hand side of (\ref{zelg}) is independent of $t$, and hence vanishes for all $t$ if it vanishes for any $t$. But the left
hand side of (\ref{zelg}) vanishes if and only if the eqns. (\ref{noxo}) are satisfied. So if $A,\phi$
obey the eqns. (\ref{noxo}) at any $t\not=0,\infty$, they satisfy those equations for all $t$. This leads to a simple description of all the solutions (away from $t=0,\infty$).
Combine $A,\phi$ to a complex connection $\A=A+i\phi$. We view $\A$ as a connection on a $G_\C$-bundle $E_\C\to M$; here $G_\C$ is a complex
simple Lie group that is the complexification of $G$, and $E_\C\to M$ is the $G_\C$ bundle that is obtained by complexifying the $G$-bundle $E\to M$. We also
define the curvature of $\A$ as $\F=\d\A+\A\wedge \A$.
The condition that eqns. (\ref{noxo}) are satisfied for all $t$ is that $\F=0$ and $\d_A\star \phi=0$. By a well-known result \cite{corlette}, solutions of these equations are in 1-1 correspondence
with homomorphisms $\uppsi:\pi_1(M)\to G_\C$ that satisfy a certain condition of semistability. (This condition says roughly that if the holonomies of $\uppsi$
are triangular, then they are actually block-diagonal.)
The key to getting these simple results was the fact that (modulo a multiple of $p_1(E)$) the same functional $I$ can be written as a sum of squares in more than
one way. This fact is related to the underlying supersymmetry.
We will look for something similar to find a uniqueness theorem associated to the Nahm pole.
\subsection{A Weitzenbock Formula Adapted To The Nahm Pole}\label{beyond}
Now suppose that $M$ has a non-empty boundary, and consider a solution with a Nahm pole along $\partial M$. The formulas above do not lead to a useful
conclusion directly because the Nahm pole causes the boundary term in (\ref{zoffbo}) to diverge.
To make this more precise, let us specialize to the case $M=\R_+^4$. As in the introduction, introduce coordinates $\vec x=(x^1,x^2,x^3)$ and $y=x^4$ on $\R^4$,
with $M$ the half-space $y\geq 0$. The familiar Nahm pole solution is given by
\begin{equation}\label{zerr}A=0,~~~\phi=\sum_{a=1}^3\frac{\tt_a\cdot \d x^a}{y}. \end{equation}
(Indices $i,j,k=1,\dots,4$ will refer to all four coordinates $x^1,\dots,x^4$, and indices $a,b,c=1,\dots,3$ will refer to $x^1,x^2,x^3$ only.)
For this solution, the commutators $[\phi_a,\phi_b]$ and covariant derivatives $D_y\phi$ are all of order $1/y^2$, hence not square-integrable
near $y=0$ (or as $|(\vec x,y)| \to \infty$), and thus the functional $I$ in (\ref{baction}) diverges. Accordingly, the boundary terms
in the Weitzenbock formula, which we repeat here for convenience (omitting the factor of $\sqrt{g}$ on the left because $M$ is Euclidean),
\begin{equation}\label{zoffboz} -\int_M\d^4x \Tr\left(\frac{1}{2}\V_{ij}\V^{ij}+(\V^0)^2\right)=I+\int_{\partial M}\d^3x \,\epsilon^{abc}\Tr\left(\frac{1}{3}\phi_a[\phi_b,\phi_c]-\phi_a
F_{bc}\right),\end{equation}
are also divergent. A standard way to regularize such divergences is to replace $M$ by $M_\epsilon = \{ y > \epsilon, \ |(\vec x,y)| < 1/\epsilon\}$,
carry out the integrations by parts, and discard the terms which diverge as $\epsilon \to 0$. For the purposes of the present exposition, let us focus
only on the portion of the boundary where $y = \epsilon$; arguments are given in section \ref{zobot} to show that the contributions from the other part of the boundary
are negligible. The bulk and boundary terms on the right hand side of (\ref{zoffboz}) are both of order $1/\epsilon^3$ near this lower boundary,
and since the left side vanishes (for the Nahm pole solution), these various diverging contributions on the right must cancel. However, when such a
cancellation comes into play, it is very difficult to deduce any positivity of the remaining terms on the right, so this formula is not well-suited
to deduce a vanishing theorem.
It is inevitable that the boundary contribution in (\ref{zoffboz}) is at least nonzero for the Nahm pole solution, since otherwise, we could prove
that $I=0$ for this solution, contradicting the fact that $I$ is a sum of squares of quantities (such as $[\phi_a,\phi_b]$) not all of which vanish for the Nahm pole solution.
Observe that once we know that both $I$ and the boundary term are nonvanishing, scale-invariance implies that they must diverge as $\epsilon\to 0$.
To learn something in the presence of the Nahm pole, we need a different way to write the left hand side of (\ref{zoffboz}) as a sum of squares plus a boundary term,
where the boundary term will vanish for any solution that obeys the Nahm pole boundary condition. This will imply a vanishing theorem for such solutions.
Of course, for this to be possible, the objects whose squares appear on the right hand side of the new formula must vanish in the Nahm pole solution.
So let us write down a set of quantities that vanish in the Nahm pole solution. It is convenient to expand $\phi=\sum_{a=1}^3 \phi_a\d x ^a+\phi_y \d y$.
The Nahm pole solution is characterized by $A=\phi_y=0$ and hence trivially
\begin{equation}\label{zon} F= D_i\phi_y =[\phi_i,\phi_y]=0. \end{equation}
Somewhat less trivially, the Nahm pole solution also satisfies
\begin{equation}\label{ozon} W_a = 0 =D_a\phi_b, \end{equation}
where we define
\begin{equation}\label{cozon} W_a=D_y\phi_a+\frac{1}{2}\epsilon_{abc}[\phi_b,\phi_c]. \end{equation}
Conversely, these equations characterize the Nahm pole solution, in the following sense. The equations (\ref{zon}) and (\ref{ozon}) imply immediately that in a suitable
gauge $A=0$ and $\phi_a$ and $\phi_y$ are functions of $y$ only. Moreover, $\partial_y\phi_y=0$ (in the gauge with $A=0$), so if $\phi_y$ is required to vanish at $y=0$
(which will be part of the Nahm pole boundary condition) then it vanishes identically. Finally, the condition $W_a=0$ means that the functions $\phi_a(y)$ obey
the original 1-dimensional Nahm equation $\d\phi_a/\d y+(1/2)\epsilon_{abc}[\phi_b,\phi_c]=0$.
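As a quick check, the Nahm pole solution itself satisfies $W_a=0$: using $[\tt_b,\tt_c]=\epsilon_{bcd}\tt_d$ and $\epsilon_{abc}\epsilon_{bcd}=2\delta_{ad}$,
\begin{equation}\partial_y\left(\frac{\tt_a}{y}\right)+\frac{1}{2}\epsilon_{abc}\left[\frac{\tt_b}{y},\frac{\tt_c}{y}\right]=-\frac{\tt_a}{y^2}+\frac{\tt_a}{y^2}=0.\end{equation}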
This discussion motivates us to replace the functional $I$ of eqn.\ (\ref{baction}) by a new functional $I'$ which is the sum of squares of objects which vanish for
the Nahm pole solution:
\begin{align}\label{longsum}I'=-\int_{\R^3\times \R_+}\d^4x\,\Tr\left(\frac{1}{2}\sum_{i,j}F_{ij}^2+\sum_{a,b}(D_a\phi_b)^2 +\sum_i(D_i\phi_y)^2+\sum_a[\phi_y,\phi_a]^2
+\sum_a W_a^2\right).\end{align}
The only difference between $I$ and $I'$ is that we have replaced $\sum_a(D_y\phi_a)^2+\frac{1}{2}\sum_{a,b}[\phi_a,\phi_b]^2$ by $\sum_aW_a^2$.
Since
\begin{equation}\label{gosum}\Tr\,\left( \sum_a(D_y\phi_a)^2+\frac{1}{2}\sum_{a,b}[\phi_a,\phi_b]^2\right)=\sum_a\Tr \,W_a^2-\frac{1}{3}\partial_y\epsilon^{abc}\Tr \,\phi_a[\phi_b,\phi_c], \end{equation}
the sole effect of this is to change the boundary term in \eqref{zoffbo}, in fact to cancel the cubic terms in $\phi$ that cause the
divergence as $y\to 0$ in the Nahm pole solution. Eqn.\ (\ref{zoffbo}) is now replaced by the new identity:
\begin{equation}\label{zoffbox}-\int_{\R^3\times \R_+}\d^4x\, \Tr\left(\frac{1}{2}\V_{ij}\V^{ij}+(\V^0)^2\right)=I'-\left(\int_{y=0}-\int_{y=\infty}\right)\d^3x \,\epsilon^{abc}\Tr\,\phi_a F_{bc}+\Delta,
\end{equation}
where
\begin{equation}\label{mofobox}\Delta=\int_{\R^3\times\R_+}\d^4x\,\frac{\partial}{\partial x^i}\Tr\left(\phi_j D^j\phi^i-\phi^iD_j\phi^j\right). \end{equation}
For the time being, we do not replace $\Delta$ by a boundary integral. If $M$ is a compact manifold with boundary, then $\Delta$ vanishes for a solution
that is regular along $\partial M$ and satisfies (\ref{llomigo}), which explains why $\Delta$ does not appear in eqn.\ (\ref{zoffbo}). However, for $M=\R^4_+$,
the use of (\ref{llomigo}) in eliminating the boundary contribution is less simple in the presence of the Nahm pole, so the term $\Delta$ cannot be dropped trivially
and will be analyzed later.
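Incidentally, the identity (\ref{gosum}) used above follows simply by expanding the square in the definition (\ref{cozon}) of $W_a$:
\begin{equation}\sum_a\Tr\,W_a^2=\Tr\left(\sum_a(D_y\phi_a)^2+\frac{1}{2}\sum_{a,b}[\phi_a,\phi_b]^2\right)+\epsilon^{abc}\Tr\, D_y\phi_a\,[\phi_b,\phi_c],\end{equation}
together with the fact that, since $\Tr$ is invariant, the last term equals $\frac{1}{3}\partial_y\left(\epsilon^{abc}\Tr\,\phi_a[\phi_b,\phi_c]\right)$.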
Now there is a clear strategy for proving a uniqueness theorem for the Nahm pole solution. We must show that any solution that is
asymptotic to the Nahm pole solution for $y\to 0$ and for $|(\vec x,y)| \to \infty$ approaches the Nahm pole solution quickly enough
that the boundary terms in eqn.\ (\ref{zoffbox}) (including $\Delta$) vanish. It will then follow that $I'=0$ for any such solution. Since $I'$ is a sum of squares
of quantities that vanish only for a solution derived from the 1-dimensional Nahm solution, the given solution will coincide with the Nahm pole solution everywhere.
\subsection{The Indicial Equation}\label{smally}
\subsubsection{Overview}\label{overview}
Our next task is to examine in detail the possible behavior of a solution of the KW equation that is asymptotic to the Nahm pole solution (with some $\varrho$)
as $y\to 0$. This analysis is necessary before we can properly define the Nahm pole boundary condition, and will also be essential for showing that the boundary
terms in eqn.\ (\ref{zoffbox}) vanish.
In making this analysis, we need to supplement the KW equation with a gauge condition.
In the Nahm pole boundary condition, we only allow gauge transformations that are trivial\footnote{At the end of section \ref{nonregular}, we explain that in the case of a nonregular
Nahm pole, one can define a more general boundary condition in which gauge transformations are not required to be trivial at $y=0$.}
at $y=0$, and we are interested in a gauge condition that fixes this gauge invariance.
A gauge transformation that vanishes at $y=0$ can be chosen in a unique fashion to make $A_y=0$, and for understanding the asymptotic behavior of perturbations
of the Nahm pole solution near $y=0$, this is a natural boundary condition. However, for other purposes (including proving that the Nahm pole boundary condition
is well-posed, but also studying the boundary terms at infinity in the Weitzenbock formula), it is necessary to choose an elliptic gauge condition, i.e.\ one which
augments the KW equations to an elliptic system. Two examples of elliptic gauge conditions were given in equations (\ref{zelob}) and (\ref{elob}). The Nahm pole
solution is $A_{(0)}=0$, $\phi_{(0)}=\tt\cdot \d x/y$, and we consider nearby solutions, which we write as $A=a$, $\phi=\tt\cdot \d x/y+\varphi$, so $a$ and $\varphi$
are the fluctuations about the Nahm pole. The gauge conditions (\ref{zelob}) and (\ref{elob}) are $\partial_i a^i=0$ and
\begin{equation}\label{zurimo}\partial_i a^i+ \frac{1}{y}[\tt_a,\varphi_a]=0, \end{equation}
respectively. Both of these gauge conditions are elliptic, but we use (\ref{zurimo}) as it simplifies the later analysis considerably.
Technically, we assume that $a$ and $\varphi$ admit asymptotic expansions as $y \to 0$, and consider solutions of the KW equations (with
a gauge condition) such that $a$ and $\varphi$ are less singular than $1/y$ there.
Writing the putative expansion around the Nahm pole solution as
\begin{equation}\label{donzo}
A=y^\lambda a_0(\vec x)+\dots , \qquad \phi=\frac{\sum_{a=1}^3\tt_a\,\d x^a}{y}+ y^\lambda \varphi_0(\vec x)+\dots,
\end{equation}
where $a_0(\vec x)$, $\varphi_0(\vec x)$ depend only on $\vec x$, and the ellipses refer to terms that are less singular than $y^\lambda$ for $y\to 0$,
we ask which exponents $\lambda > -1$ are allowed if this expression satisfies the equations formally.
In greater detail, write the KW equations along with a fixed gauge condition as
\begin{equation}
{\KW}(A, \phi) = 0.
\label{nlkw}
\end{equation}
Expanding this about the Nahm pole solution yields
\begin{equation}
{\KW}(a, \phi_{(0)} + \varphi) = \LKW (a,\varphi) + Q(a,\varphi),
\label{texpkw}
\end{equation}
where $\L $ is the linearization of $\KW$ at $(0, \tt \cdot \d x/y)$ and the remainder term $Q$ vanishes quadratically in a suitable sense. Assuming that
$a$ and $\varphi$ have expansions as above, and that these expansions may be differentiated, multiplied, etc., we see that the most singular terms in $\L(a,\varphi)$ are of order $y^{\lambda-1}$, while $Q(a,\varphi)$ is no more singular than $y^{2\lambda}$. Since $\lambda>-1$, this is less
singular than $y^{\lambda-1}$. Furthermore, only certain terms in $\L( y^\lambda a_0, y^\lambda \varphi_0)$ are as
singular as $y^{\lambda-1}$. Specifically, the terms which
include a $\del_y$ yield a singular factor $y^{\lambda - 1}$, as do the terms containing a commutator with the unperturbed Nahm pole solution.
On the other hand, terms containing $\del_{x^a}$ are $\calO(y^\lambda)$ and hence may be dropped for these considerations. What remains is
a linear algebraic equation involving $a_0, \varphi_0$ and the exponent $\lambda$. This is known as the indicial equation for the problem.
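Concretely, acting on the leading terms in (\ref{donzo}), a $y$-derivative or a commutator with the Nahm pole singularity produces a factor of $y^{\lambda-1}$, while an $\vec x$-derivative does not:
\begin{equation}\partial_y\left(y^\lambda\varphi_0\right)=\lambda\, y^{\lambda-1}\varphi_0,\qquad \left[\frac{\tt_a}{y},y^\lambda a_0\right]=y^{\lambda-1}\left[\tt_a,a_0\right],\qquad \partial_{x^a}\left(y^\lambda\varphi_0\right)=y^\lambda\,\partial_{x^a}\varphi_0.\end{equation}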
We have been somewhat pedantic about separating the steps of first passing to the linearization and then the indicial operator of this
linearization. The same sets of equations can be obtained by directly inserting the putative expansions for $a$ and $\varphi$ into
the nonlinear equations and retaining only the leading terms. The reason for our emphasis will become clear later.
At this level, the dependence of these coefficients on $\vec x$ is irrelevant, and because of this, the indicial equation respects the symmetry
$A_a\to -A_a$, $\phi_y\to -\phi_y$, with $\phi_a$ and $A_y$ left unchanged. This means that the indicial equation uncouples into a system of equations
for $\varphi_a, a_y$ and another for $a_a,\varphi_y$. These read
\begin{align}\label{lembo}\lambda a_a+[\tt_a,\varphi_y]-\epsilon_{abc}[\tt_b,a_c]&=0 \cr
\lambda\varphi_y-[\tt_a,a_a]&=0, \end{align}
and
\begin{align}\label{wembo}\lambda \varphi_a-[\tt_a,a_y]+\epsilon_{abc}[\tt_b,\varphi_c]&=0 \cr \lambda a_y+[\tt_a,\varphi_a]&=0,
\end{align}
respectively. (Had we used the gauge condition $\partial_i a^i=0$ instead of eqn.\ (\ref{zurimo}), the only difference would be that
the $[\tt_a,\varphi_a]$ term would be missing in the second line of (\ref{wembo}).)
The determination of the indicial roots of the problem, which are defined as the values of $\lambda$ for which these equations have nontrivial solutions, requires a foray into group theory.
\subsubsection{Some Useful Group Theory}\label{group}
\def\T{{\eusm T}}
\def\S{{\eusm S}}
\def\M{{\eusm M}}
\def\ss{{\frak s}}
\def\ff{{\frak f}}
\def\F{{\eusm F}}
One obvious ingredient in these equations is the $\frak{su}(2)$ subalgebra of $\frak g$ that is generated by the $\tt_a$. This depends on the choice of
homomorphism $\varrho:\frak{su}(2)\to \frak g$; we call its image $\frak{su}(2)_\tt \subset \frak g$.
From the representation theory of $\frak{su}(2)$, we know that up to isomorphism, $\frak{su}(2)$ has one irreducible complex module of dimension $n$ for
every positive integer $n$. It is convenient to write $n=2j+1$, where $j$ (which is a non-negative half-integer) is called the spin.
In particular under the action of $\frak{su}(2)_\tt$, the complexification $\frak g_\C=\frak g\otimes_\R\C$ of $\frak g$ decomposes as a direct sum of irreducible
modules $\frak r_\sigma$, of dimension $n_\sigma=2j_\sigma+1$.\footnote{When the $j_\sigma$
are all integers, for example in the case of a principal $\frak{su}(2)$ embedding, this statement is true without having to replace $\frak g$ by its complexification
$\frak g_\C$. The complexification is needed in case some $j_\sigma$ are half-integers. It can happen in general that several of the $\frak r_\sigma$'s are isomorphic
and in that case the decomposition of $\frak g_\C$ as a direct sum of irreducible $\frak{su}(2)_\tt$ submodules $\frak r_\sigma$ is not unique. This does not affect the following analysis.} For a principal embedding, the $j_\sigma$ are positive integers of which
precisely one is equal to 1 (the submodule of $\frak g$ of spin $j_\sigma=1$ is precisely $\frak{su}(2)_\tt\subset \frak g$). For example, $G=SU(N)$ has rank $N-1$
and the values of the $j_\sigma$ are\footnote{For this and additional group-theoretic background, see
Appendix \ref{groups}.}
$1,2,3,\dots,N-1$. At the opposite extreme, if $\varrho=0$, so that the $\tt_a$ all vanish, then $\frak g$ is the direct sum of trivial 1-dimensional
$\frak{su}(2)_\tt$ modules, all of spin 0.
The indicial equation does not intertwine the $\frak{su}(2)_\tt$ submodules $\frak r_\sigma\subset \frak g_\C$ since none of the terms in the equation do;
hence the equation can be restricted to any one of the $\frak r_\sigma$. For example, in \eqref{lembo}, it suffices to consider $\varphi_y$ and all the $a_a$ taking values in the same
submodule $\frak r_\sigma$. A general solution of the indicial equation is a sum of $\frak r_\sigma$-valued solutions over all the different $\sigma$.
This is useful because for solutions taking values in a given $\frak r_\sigma$, the endomorphisms appearing in \eqref{lembo} and \eqref{wembo}
reduce to diagonal operators, so that the equations then completely decouple.
The calculation making this explicit occupies the remainder of this subsection.
An important property of the algebra $\frak{su}(2)$ is the existence of a quadratic Casimir operator that commutes with the algebra.
In general, given any $\frak{su}(2)$ algebra with a basis $\frak b_a$, $a=1,\dots,3$, obeying the $\frak{su}(2)$ relations
\begin{equation}\label{tildno}[\frak b_a,\frak b_b]=\epsilon_{abc}\frak b_c, \end{equation}
we define the Casimir as
\begin{equation}\label{dinko}\Delta=-\sum_{a=1}^3 \frak b_a^2. \end{equation} On a module of spin $j$, one has
\begin{equation}\label{inko}\Delta=j(j+1). \end{equation}
In the case of $\frak{su}(2)_\tt\subset\frak g$, we usually write the action of the generators on $\frak g$ as $w\to [\tt_a,w]$ (rather than $w\to \tt_a(w)$). So we can write the Casimir $\Delta_\T$ as
\begin{equation}\label{inco}
\Delta_\T=-\sum_{a=1}^3[\tt_a,[\tt_a,\cdot]],
\end{equation}
or more abstractly,
\begin{equation}\label{pilo}\Delta_\T=-\sum_{a=1}^3\tt_a^2,\end{equation}
as in (\ref{dinko}).
It is often best to think of the triple $\vec a=(a_1,a_2,a_3)$ or similarly the triple $\vec\varphi=(\varphi_1,\varphi_2,\varphi_3)$
as a single element of $\frak g\otimes N$ where $N\cong \R^3$.
Another useful $\frak{su}(2)$ algebra acts on the three-dimensional vector space $N$. (This is simply inherited from invariance of the original KW equations under
rotations of $\vec x=(x^1,x^2,x^3)$.) Explicitly, we define $3\times 3$ matrices $\ss_a$, $a=1,\dots,3$
by
\begin{equation}\label{donkey} (\ss_a)_{bc}=-\epsilon_{abc}.\end{equation}
These matrices obey the $\frak{su}(2)$ commutation relations
\begin{equation}\label{monkey} [\ss_a,\ss_b]=\epsilon_{abc}\ss_c, \end{equation} and generate an $\frak{su}(2)$ algebra
that we call $\frak{su}(2)_\ss$.
We define the quadratic Casimir $\Delta_\S=-\sum_{a=1}^3\ss_a^2$ and find that $\Delta_\S=2$.
The value 2 is $j(j+1)$ with $j=1$, and reflects the fact that $N$ is an irreducible $\frak{su}(2)_\ss$ module of spin 1.
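Indeed, with $(\ss_a)_{bc}=-\epsilon_{abc}$ and the identity $\sum_a\epsilon_{abc}\epsilon_{ade}=\delta_{bd}\delta_{ce}-\delta_{be}\delta_{cd}$, one computes
\begin{equation}\left(\sum_a\ss_a^2\right)_{bd}=\sum_{a,c}\epsilon_{abc}\epsilon_{acd}=\delta_{bd}-3\delta_{bd}=-2\,\delta_{bd},\end{equation}
so that $\Delta_\S=-\sum_a\ss_a^2=2$.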
Finally, we can define a third $\frak{su}(2)$ algebra that we call $\frak{su}(2)_\ff$, generated by $\ff_a=\tt_a+\ss_a$.
The importance of $\frak{su}(2)_\ff$ is that, since the Nahm pole solution is invariant under $\frak{su}(2)_\ff$ but not under $\frak{su}(2)_\tt$ or $\frak{su}(2)_\ss$,
it is only $\frak{su}(2)_\ff$ that is a symmetry of the indicial equation. To be more exact, to make $\frak{su}(2)_\ff$ a symmetry of the indicial equation,
we let $\frak{su}(2)_\ff$ act on $\frak g\otimes N$ as just described, while in acting on $\frak g$ itself, we declare that $\ss_a=0$ and $\ff_a=\tt_a+\ss_a=\tt_a$.
Then interpreting $a_a$ and $\varphi_a$ as elements of $\frak g\otimes N$ and $a_y$ and $\varphi_y$ as elements of $\frak g$, with the $\frak{su}(2)_\ff$
action just described, the indicial equation is invariant under $\frak{su}(2)_\ff$.
To exploit this, it is useful to again define a quadratic Casimir $\Delta_\F=-\sum_{a=1}^3\ff_a^2$. Now we have a very useful formula for the $\frak{su}(2)_\ff$-invariant
operator $\ss\cdot \tt=\sum_a\ss_a\cdot \tt_a$:
\begin{equation}\label{mozzo} \ss\cdot \tt=-\frac{1}{2}\left(\Delta_\F-\Delta_\T-\Delta_\S\right). \end{equation}
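This is just the expansion of the square: the $\tt_a$ and $\ss_a$ act on different factors of $\frak g\otimes N$ and hence commute, so
\begin{equation}\Delta_\F=-\sum_{a=1}^3\left(\tt_a+\ss_a\right)^2=\Delta_\T+\Delta_\S-2\sum_{a=1}^3\ss_a\cdot\tt_a.\end{equation}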
To make this formula explicit for the module $\frak r_\sigma\otimes N$, we need to know how to decompose this module under $\frak{su}(2)_\ff$. The answer is given
by the representation theory of $\frak{su}(2)$.
Provided that $j_\sigma\geq 1$,
the tensor product $\frak r_\sigma\otimes N$
decomposes under $\frak{su}(2)_\ff$ as a direct sum of modules $\frak r_{\sigma,\eta}$ of spin $f_{\sigma,\eta}=j_\sigma+\eta$ where $\eta\in\{1,0,-1\}$.
For $j_\sigma<1$, the decomposition is the same except that the range of values of $\eta$ is smaller; for $j_\sigma=1/2$, one has only $\eta\in\{1,0\}$, and for $j_\sigma=0$
one has only $\eta=1$.
In any event, it follows from (\ref{mozzo}) that in acting on $\frak r_{\sigma,\eta}$, the value of $\ss\cdot\tt$ is
\begin{equation}\label{mexico}\ss\cdot\tt=\begin{cases}-j_\sigma & {\text{if}}\;\; \eta=1\cr 1 & \text{if}\;\;\eta=0\cr j_\sigma+1 & \text{if}\;\;\eta=-1.\end{cases}\end{equation}
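For instance, for $\eta=1$ one has $\Delta_\F=(j_\sigma+1)(j_\sigma+2)$, while $\Delta_\T=j_\sigma(j_\sigma+1)$ and $\Delta_\S=2$, so (\ref{mozzo}) gives
\begin{equation}\ss\cdot\tt=-\frac{1}{2}\Big((j_\sigma+1)(j_\sigma+2)-j_\sigma(j_\sigma+1)-2\Big)=-j_\sigma;\end{equation}
the cases $\eta=0,-1$ are evaluated in the same way.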
This result is useful because the object $\ss\cdot\tt$ appears in the indicial equation. For example, understanding $\vec a=(a_1,a_2,a_3)$ as an element of $\frak g\otimes N$,
so that $\ss\cdot \tt\, (\vec a)$ is also a triple of elements $(\ss\cdot \tt\,(\vec a))_a$, $a=1,2,3$ of $\frak g$,
we have from the definitions
\begin{equation}\label{exico} (\ss\cdot \tt \,(\vec a))_a=\epsilon_{abc} [\tt_b,a_c].\end{equation}
The right hand side appears in (\ref{lembo}), and now we have a convenient way to evaluate it. Similarly, the analogous object $\epsilon_{abc} [\tt_b,\varphi_c]$ appears
in (\ref{wembo}).
When we decompose the $\frak r_\sigma$-valued part of the indicial equation (\ref{lembo}) under the action
of the symmetry group $\frak{su}(2)_\ff$,
modules with spin $j_\sigma\pm 1$ (in other words $\eta=\pm 1$) appear only in the $\frak{su}(2)_\ff$ decomposition of
$a_a$, while
spin $j_\sigma$ (or $\eta=0$) appears both in $a_a$ and in $\varphi_y$. It follows that the terms in \eqref{lembo} involving
$\varphi_y$, and likewise, the terms in \eqref{wembo} involving $a_y$, only appear when $\eta=0$.
\subsubsection{The Indicial Roots}\label{throots}
It is now straightforward to determine the indicial roots. First we consider the pair $a_a,\varphi_y$ and we restrict to the $\frak r_\sigma$-valued part of the equation.
For $\eta\not=0$, we can set $\varphi_y=0$, as explained at the end of section \ref{group},
and so the equation (\ref{lembo}) reduces to $\lambda a_a=\epsilon_{abc}[\tt_b,a_c]$. The right hand side was analyzed in eqn.\ (\ref{mexico}) and (\ref{exico}),
and so $\lambda=-j_\sigma$ for $\eta=1$ and $\lambda=j_\sigma+1$ for $\eta=-1$. For $\eta=0$, we have to work a little harder. We solve
the second equation in (\ref{lembo}) with\footnote{This solution is not valid if $j_\sigma=0$, because of the factor of $j_\sigma$ in the denominator.
For $j_\sigma=0$, eqns. (\ref{lembo}) and (\ref{wembo}) become trivial, since all commutator terms vanish, and tell us that all modes have $\lambda=0$.
This agrees with the result we find in eqn.\ (\ref{indroots}) below, except that some modes -- the ones with $\lambda=j_\sigma+1$ -- do not exist
for $j_\sigma=0$.}
\begin{equation}\label{delf}a_a=-\frac{\lambda}{j_\sigma(j_\sigma+1)}[\tt_a,\varphi_y]\end{equation}
and then after also using the Jacobi identity and the $\frak{su}(2)$ commutation relations, the first equation in (\ref{lembo}) becomes
\begin{equation}\label{urmo}-\frac{\lambda^2}{j_\sigma(j_\sigma+1)}+1+\frac{\lambda}{j_\sigma(j_\sigma+1)}=0.\end{equation}
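Clearing denominators, eqn.\ (\ref{urmo}) is equivalent to
\begin{equation}\lambda^2-\lambda-j_\sigma(j_\sigma+1)=\left(\lambda-j_\sigma-1\right)\left(\lambda+j_\sigma\right)=0.\end{equation}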
So for $\eta=0$, the possible values of $\lambda$ are $j_\sigma+1$ and $-j_\sigma$. In sum for $a_a,\varphi_y$, the indicial roots are
\begin{equation}\label{indroots}\lambda=\begin{cases}-j_\sigma & {\text{if}}\;\; \eta=1\cr j_\sigma+1,\,-j _\sigma& \text{if}\;\;\eta=0\cr j_\sigma+1 & \text{if}\;\;\eta=-1.\end{cases}\end{equation}
These results need correction for $j_\sigma<1$, since some modes are missing. For $j_\sigma=1/2$, the $\lambda=j_\sigma+1=3/2$ mode
with $\eta=-1$ should be dropped, and for $j_\sigma=0$, both modes with $\lambda=j_\sigma+1=1$ should be dropped.
Inspection of eqns. (\ref{lembo}) and (\ref{wembo}) shows that the indicial roots for the pair $\varphi_a,a_y$ are obtained
from those for $a_a,\varphi_y$ by just changing the sign of $\lambda$.
So with no need for additional calculations, the indicial roots for the pair $\varphi_a,a_y$ are as follows:
\begin{equation}\label{indrootstwo}\lambda=\begin{cases}j_\sigma & {\text{if}}\;\; \eta=1\cr j_\sigma,~-j_\sigma-1& \text{if}\;\;\eta=0\cr -j_\sigma-1 & \text{if}\;\;\eta=-1.\end{cases}\end{equation}
Again, some modes should be omitted for $j_\sigma<1$.
It is notable that all of these modes have $a_y=0$, and therefore make sense in the gauge $A_y=0$,
except the $\eta=0$ modes in (\ref{indrootstwo}). Those particular modes are spurious in the sense that they are pure gauge: they are of the form
$a_y=\partial_y u$, $\varphi_a=[\tt_a/y,u]$ with $u(\vec x,y)=y^{\lambda+1}v(\vec x)$.
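Indeed, taking $u=y^{\lambda+1}v$ with $v$ valued in a given $\frak r_\sigma$, one has $a_y=(\lambda+1)y^\lambda v$ and $\varphi_a=y^\lambda[\tt_a,v]$; the first equation in (\ref{wembo}) is then satisfied identically, while the second reduces to
\begin{equation}\lambda(\lambda+1)-j_\sigma(j_\sigma+1)=0,\end{equation}
whose roots $\lambda=j_\sigma$ and $\lambda=-j_\sigma-1$ are precisely the $\eta=0$ entries in (\ref{indrootstwo}).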
After finding a solution of the KW equations, one can always make a gauge transformation that sets $A_y=0$ and eliminates these modes.
However, this can only be usefully done after finding a global solution:
to develop a general theory of solutions of the KW equations, which can predict the existence of solutions,
one needs an elliptic gauge condition, and such a gauge condition will always allow
pure gauge modes, such as the ones we have identified. Unlike the pure gauge modes with $a_y\not=0$, the perturbations we have found with $a_y=0$ have gauge-invariant content; they cannot
be removed by a gauge transformation that is trivial at $y=0$, since there are no gauge transformations that
are trivial at $y=0$ and preserve the condition $a_y=0$. So the gauge-invariant content of the possible perturbations of the Nahm pole solution near $y=0$
is precisely contained in the modes in (\ref{indroots}) and those in (\ref{indrootstwo}) with $\eta\not=0$.
One more mode in (\ref{indrootstwo}) has a simple interpretation. Nahm's equations have the familiar Nahm pole solution $\phi_a=\tt_a/y$,
but since Nahm's equations are invariant under shifting $y$ by a constant, they equally well have a solution $\phi_a=\tt_a/(y-y_0)$ for any constant $y_0$.
Differentiating with respect to $y_0$ and setting $y_0=0$, we find that the linearization of Nahm's equations around the Nahm pole solution can be satisfied
by $\varphi_a=\tt_a/y^2$. This accounts for the mode in (\ref{indrootstwo}) with $j_\sigma=1$, $\lambda=-2$, and $\eta=-1$.
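Explicitly, linearizing the equation $W_a=0$ about the Nahm pole solution (in the gauge $A=0$) gives $\partial_y\varphi_a+\epsilon_{abc}[\tt_b/y,\varphi_c]=0$, and indeed
\begin{equation}\partial_y\left(\frac{\tt_a}{y^2}\right)+\epsilon_{abc}\left[\frac{\tt_b}{y},\frac{\tt_c}{y^2}\right]=-\frac{2\tt_a}{y^3}+\frac{2\tt_a}{y^3}=0.\end{equation}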
Each value of $\lambda$ that is indicated in (\ref{indroots}) or (\ref{indrootstwo}) represents a space of fluctuations of dimension $2(j_\sigma+\eta)+1$,
transforming as an irreducible $\frak{su}(2)_\ff$ module. Allowing for these multiplicities, the sum of all indicial roots is 0 for
$a_a,\varphi_y$ and likewise for $\varphi_a,a_y$.
This is a check on the calculations: the indicial roots are eigenvalues of matrices that appear in (\ref{lembo}) and (\ref{wembo}) and are readily seen to be traceless.
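For example, take $\frak g=\frak{su}(2)$ with $\varrho$ the principal embedding, so that the only value is $j_\sigma=1$. Then (\ref{indroots}) gives $\lambda=-1$ with multiplicity $5$, the pair $\lambda=2,\,-1$ each with multiplicity $3$, and $\lambda=2$ with multiplicity $1$, and indeed
\begin{equation}5\cdot(-1)+3\cdot 2+3\cdot(-1)+1\cdot 2=0.\end{equation}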
\subsection{The Nahm Pole Boundary Condition}\label{nonregular}
We are now in a position to give a precise formulation of the Nahm pole boundary condition. This boundary condition depends on
the choice of homomorphism $\varrho:\frak{su}(2)\to \frak g$, which determines the most singular behavior of the
solution at $\del M$. As explained below, in favorable cases (for example, when $\varrho$ is a regular embedding), the Nahm pole
boundary condition simply requires that a solution coincides with the Nahm pole solution modulo less singular terms, but in general
(when $j_\sigma=0$ appears in the decomposition of $\frak g$) the formulation of the boundary condition involves some further details.
Without exploring any of the questions concerning existence of solutions that obey the Nahm pole boundary condition, the expectation
is that any such solution has an asymptotic expansion at $\del M$, where the leading term is precisely the Nahm pole singularity, and
that all fluctuations are well-behaved lower order terms in the expansion with strictly less singular rates of blowup. As already
suggested by the discussion in section \ref{overview}, the growth rates of these fluctuation terms are governed by the indicial roots,
which are themselves determined by the linearization of the KW equations at the Nahm pole solution, as in \eqref{texpkw}. In fact,
one might try to construct solutions of ${\KW}(A,\phi) = 0$ by fixing the Nahm pole singularity, then setting the right side of
\eqref{texpkw} to zero to solve for the fluctuation terms. This would involve inverting the linearized operator $\L $, and it is thus
important to understand the possible invertibility properties of this operator. In summary, the actual boundary condition we want
to discuss is one for the linear operator $\L $ which requires solutions to blow up at some rate strictly less than $y^{-1}$.
The indicial root calculation above is what leads us to specify a growth (or decay) rate such that $\L $ is as close to invertible
as possible when acting on fields with this growth rate.
\subsubsection{The Regular Case}\label{simple}
First assume that $\varrho$ is a principal embedding of $\frak{su}(2)$ in $\frak g$, or more generally, that in the decomposition of $\frak g_\C$ under
$\frak{su}(2)_\tt$, the minimum value of $j_\sigma$ is $1$. (The two conditions are equivalent for $G=SU(N)$ but
not in general, as explained in Appendix \ref{groups}.) In this case there is a simplification stemming from the fact that there are no indicial roots with
$-1<\lambda<1$. We certainly want to exclude fluctuations around the Nahm pole solution with $\lambda\leq -1$, and allow fluctuations with $\lambda>0$.
Accordingly, for a principal embedding and more generally whenever $j_\sigma=1$ is the smallest value in the decomposition of $\frak g_\C$, we can state the Nahm pole
boundary condition in either of the following two equivalent ways:
\begin{itemize}\item[(1)] A solution satisfies the Nahm pole boundary condition if in a suitable gauge it has an asymptotic expansion as $y \to 0$
with leading term the Nahm pole solution, and all remaining terms less singular than $1/y$.
\item[(2)] A solution satisfies the Nahm pole boundary condition if in a suitable gauge it has an asymptotic expansion as $y \to 0$ with leading term the Nahm
pole solution, and with all remaining terms vanishing as $y\to 0$. \end{itemize}
Condition (1) is {\it a priori} weaker than condition (2), but they are equivalent because under our assumption
on the values of $j_\sigma$, there
are no indicial roots in the range $(-1,0]$.
The full explanation of ellipticity of the Nahm pole boundary condition is in section \ref{analysis}. However,
a preliminary observation that plays an important role
is that this boundary condition
allows half of the perturbations near $y=0$: those with $\eta=1$ in $\varphi_a,a_y$, those with $\eta=-1$ in $a_a,\varphi_y$, and
half of the $\eta=0$ perturbations in
both sets of fields. (One of the two pure gauge modes that appear at $\eta=0$ in eqn.\ (\ref{indrootstwo}) vanishes
at $y=0$ and one diverges, so the pure gauge modes
did not affect this counting.)
\subsubsection{The General Case}\label{gencase}
As explained in Appendix \ref{groups}, if any value $j_\sigma<1$ occurs in the decomposition of $\frak g_\C$
under $\frak{su}(2)$, then in fact $j_\sigma=0$ occurs in this decomposition. (If $j_\sigma=0$ occurs
in the decomposition, then $j_\sigma=1/2$ may or may not occur.)
So
let us consider the case that $j_\sigma=0$ does occur in the decomposition. This simply means that there is a nonzero subspace $\frak c$ of $\frak g$ that commutes
with the $\tt_a$; $\frak c$ is automatically a Lie subalgebra of $\frak g$, the Lie algebra of a subgroup $C\subset G$. We write $\varphi^{\frak c}$, $a^{\frak c}$ for the $\frak c$-valued
parts of $\varphi$ and $a$. The formulas (\ref{indroots}), (\ref{indrootstwo}) for the indicial roots show that all
modes of $j_\sigma=0$ have $\lambda=0$. Actually, this is clear without a detailed calculation; for $j_\sigma=0$, the commutator terms can be dropped
in (\ref{lembo}) and (\ref{wembo}), which just say that $\lambda=0$.
This also shows that for $j_\sigma=0$,
the equations for $a_a$ and $\varphi_y$ decouple and the modes with $\eta=1$ or $0$ describe fluctuations of only $a_a$ or
$\varphi_y$, respectively; a similar remark applies, of course, for $\varphi_a$ and $a_y$. Exactly what one means by the Nahm pole boundary conditions depends on
how one treats these $\lambda=0$ modes; this will be discussed momentarily.
When both $j_\sigma=1/2$ and $j_\sigma=0$ occur in the decomposition of $\frak g_\C$, there is a further subtlety.
In this case, to preserve the counting mentioned at the end of section \ref{simple},
we have to define the Nahm pole boundary condition to not allow the perturbation with an indicial root $\lambda=-1/2$ that appears in (\ref{indroots})
for $j_\sigma=1/2$ and $\eta=0$. In other words, we have to require that a solution departs from the Nahm pole solution by
a correction that is less singular than $1/y^{1/2}$.
As for the modes with $j_\sigma=0$,
there are different physically motivated choices of how to treat them \cite{GW}, and these correspond to different boundary conditions.
The general possibility
is explained at the end of this subsection. However, for every $\varrho$, there is a natural boundary condition that we call the strict Nahm pole
boundary condition; for $G=SU(N)$, this is the half-BPS boundary condition that can be naturally realized via D-branes. For this boundary condition,
we want to leave $\phi_a^{\frak c}$ unconstrained at $y=0$ but to constrain $\phi_y^{\frak c}$, $A_a^{\frak c}$ to vanish at $y=0$.
Thus, we can state the strict Nahm pole
boundary condition for general $\varrho$ in either of the following two equivalent ways:
\begin{itemize}\item[(1)] A solution satisfies the strict Nahm pole boundary condition if in a suitable gauge it has an asymptotic expansion as $y \to 0$
with leading term the Nahm pole solution and with remaining terms less singular than $1/y^{1/2}$ (or $1/y$ if $j_\sigma=1/2$ does not occur in the decomposition of $\frak g_\C$),
with the further restriction that $\phi_y^{\frak c}$ and $A_a^{\frak c}$ vanish at $y=0$.
\item[(2)] A solution satisfies the strict Nahm pole boundary condition if in a suitable gauge it has an asymptotic expansion with leading term the Nahm pole solution
and with remaining terms vanishing as $y\to 0$, except that $\phi_a^{\frak c}$ is regular at $y=0$ but does not necessarily vanish there. \end{itemize}
To illustrate the strict Nahm pole boundary condition when $\frak c\not=0$, let us consider the extreme case $\varrho=0$, so that there is no Nahm pole and $\frak c
=\frak g$. In this case, the strict Nahm pole boundary condition just means that $\phi_a$ is regular at $y=0$ while $\phi_y$ and $A_a$ are constrained to vanish. This is
an elementary elliptic boundary condition on the KW equation, already formulated in eqn.\ (\ref{tively}) above. For general $\varrho$, the strict Nahm pole boundary condition
is a sort of hybrid of this case with the opposite case of a principal embedding. Of course, the well-posedness of the strict Nahm pole boundary
condition is elementary at $\varrho=0$, while understanding it for $\varrho\not=0$ is the main goal of the present
paper.
Finally, we describe a generalized Nahm pole boundary condition associated to
a more general treatment of the $j_\sigma=\lambda=0$ modes. For this, we pick an arbitrary subalgebra $\frak h\subset \frak c$, corresponding to a subgroup $H\subset C\subset G$, and we denote
as $\frak h^\perp$ the orthocomplement of $\frak h$ in $\frak c$ ($\frak h^\perp$ is a linear subspace of $\frak c$ but generically not a subalgebra).
Then, in addition to allowing only perturbations that are less singular than $1/y^{1/2}$,
we declare that the $\frak h^\perp$-valued parts of $A_a$ and $\phi_y$ vanish at $y=0$, while the $\frak h$-valued part of these fields is unconstrained; and reciprocally,
we place no constraint on the $\frak h^\perp$-valued part of $\phi_a$, but require the $\frak h$-valued part of $\phi_a$ to vanish at $y=0$.
The strict Nahm pole
boundary condition is the case that $\frak h=0$. To get the generalized Nahm pole boundary condition for
$\frak h\not=0$, we relax the requirement that a gauge transformation should be trivial at $y=0$; instead, we allow gauge transformations that are $H$-valued at
$y=0$.
(For $G=SU(N)$, any $\varrho$, and some specific choices of $\frak h$,
this boundary condition can be realized via a combination of D-branes with an NS5-brane \cite{GW}.)
For a simple illustration of this more general boundary condition, take $\varrho=0$ and
$\frak h=\frak c=\frak g$. Then the boundary condition is simply that $i^* \phi=0$.
This is actually a second elementary elliptic boundary condition on the KW equations, which was formulated
in eqn.\ (\ref{sively}). We will show in section \ref{genebc} that the boundary condition described in this paragraph is
well-posed for all $\varrho$ and $\frak h$.
\subsection{The Boundary Terms And The Vanishing Theorem}\label{largey}
We can now easily show the vanishing of the boundary terms at $y=0$ in the Weitzenbock-like formula (\ref{zoffbox}).
(In doing this, we can ignore the spurious modes that can be removed by a gauge transformation -- the $\eta=0$ modes
in (\ref{indrootstwo}). Since the boundary terms are gauge-invariant, they do not receive a contribution from the spurious modes.)
For example, let us first look at the boundary contribution $\int\d^3 x \,\epsilon_{abc} \Tr\, F_{ab}\phi_c$. The dominant part of $\phi_c$
is the Nahm pole term $\tt_c/y$. To avoid a contribution involving this term, we need the part of $F_{ab}$ that is valued in
$\frak{su}(2)_\tt$ (and thus not orthogonal to the coefficient of this Nahm pole) to vanish faster than $y$ for $y\to 0$.
The relevant part of $F_{ab}$ is of order $y^2$ for $y\to 0$, since for example the part of $F$ that is $\frak{su}(2)_\tt$-valued and
linear in the connection $A$ is a $j_\sigma=1$ mode with an indicial root $j_\sigma+1=2$. The part of $F$ that is quadratic in $A$
vanishes equally fast. (Note that since all of this appears in a boundary integral, we discard the component $F_{ay}$, which only vanishes like $y$.)
So the contribution to $\epsilon_{abc}\Tr\,F_{ab}\phi_c$ involving the Nahm pole in $\phi_c$ is of order $y^2\cdot y^{-1}=y$.
We can also consider contributions to $\epsilon_{abc}\Tr \,F_{ab}\phi_c$ that come from fluctuations in $\phi_c$ around the Nahm
pole solution (as well as fluctuations in the connection $A_a$). As long as we do not consider modes with $j_\sigma=0$, all fluctuations
in either $\phi_c$ or $A_a$ are controlled by strictly positive indicial roots, so the fluctuations vanish at $y=0$ and their contribution
to $\epsilon_{abc}\Tr \,F_{ab}\phi_c$ vanishes. The last case to consider is the case of $j_\sigma=0$ fluctuations in both $\phi_c$
and $A_a$. The general boundary condition formulated in the last paragraph of section \ref{nonregular} ensures that the contributions
of these fluctuations to $\epsilon_{abc}\Tr\,F_{ab}\phi_c$ vanish, because $F_{ab}$ when restricted to $y=0$ is valued in a subalgebra
$\frak h\subset\frak c$, while $\phi_c$ is valued in an orthogonal subspace $\frak h^\perp\subset \frak c$. Thus the general construction
with arbitrary $\varrho$, $\frak h$ ensures the vanishing of $\epsilon_{abc}\Tr\,F_{ab}\phi_c$ at $y=0$.
The other possible boundary term in the Weitzenbock formula at $y=0$ comes from the expression
$\Delta$, defined in eqn.\ (\ref{mofobox}). Here the integral we have to consider at $y=0$ is $\int \d^3x \Tr\, (\phi_a D_a\phi_y-\phi_y D_a\phi_a).$
Again, we first consider the Nahm pole contribution with $\phi_a=\tt_a/y$. A contribution from this term is avoided for reasons similar to what we found in the
last paragraph. Indeed, the only contribution to $\Tr\, (\phi_a D_a\phi_y-\phi_y D_a\phi_a)$ that is linear in fluctuations around the Nahm pole solution comes from the $j_\sigma=1$
part of $\phi_y$. The corresponding indicial root is 2, so the relevant piece of $\phi_y$ vanishes as $y^2$, too quickly to contribute for $y\to 0$ even when multiplied
by the Nahm pole $\tt_a/y$. Alternatively, we can consider contributions to $\Tr\,(\phi_a D_a\phi_y-\phi_y D_a\phi_a)$ that are bilinear in fluctuations around the
Nahm pole. Here, for group-theoretic reasons, only modes with $j_\sigma>0$ are relevant. Modes with $j_\sigma>0$ have indicial roots of at least
3/2 for $a_a,\varphi_y$
or $1/2$ for $\varphi_a,a_y$, so an expression bilinear in
such modes and linear in the Nahm pole part of
$\phi_a$ vanishes at least as fast as $y^{3/2}y^{1/2}\cdot y^{-1}\sim y$. Finally,
we can consider contributions to $\Tr\, (\phi_a D_a\phi_y-\phi_y D_a\phi_a)$ that do not involve the Nahm pole part of $\phi_a$ at all. Since all relevant indicial
roots are nonnegative, the only possible contributions come from the $j_\sigma=0$ modes for which the indicial root vanishes. These contributions vanish for
much the same reason as in the last paragraph: the restriction of $A_a$ and $\phi_y$ to $y=0$ is valued in a subalgebra $\frak h$, while the $j_\sigma=0$ part of
$\phi_a$ is valued in an orthogonal subspace $\frak h^\perp$.
To complete the proof of the uniqueness theorem for the Nahm pole solution, we need to know that the surface terms in (\ref{zoffbox})
also vanish for $\vec x$ and/or $y$ going to $\infty$. Let $r=\sqrt{|\vec x|^2+y^2}$. To ensure vanishing of the surface terms for
$\vec x ,y\to\infty$, we need $\epsilon_{abc}\Tr\,F_{ab}\phi_c$ and $\Tr\, (\phi_i D_i\phi_j-\phi_j D_i\phi_i)$ to vanish at infinity
faster than $1/r^3$. For example, this is so if the deviation of $A$ and $\phi$ from the Nahm pole
solution $A=0$, $\phi=\tt\cdot \d \vec x/y$ vanishes at infinity faster than $1/r$ and the curvature $F$ and the covariant derivatives
of $\phi$ vanish faster than $1/r^2$. For a solution of the KW equations with this property, the boundary terms for
$\vec x,y\to\infty$ vanish, so if such a solution obeys the Nahm pole boundary condition, it actually is the Nahm pole solution.
\subsection{Behavior At Infinity}\label{zobot}
To decide if the uniqueness result stated in section \ref{largey}
is strong enough to be useful, we need to know if the rate of approach to the Nahm pole solution
that we had to assume
for $\vec x,y\to\infty$
is natural. The goal of the following analysis is to show that it is.
We start with the same reasoning with which we began the analysis for $y\to 0$. If a solution of the KW equations does approach
the Nahm pole solution for $r\to\infty$, then its leading deviation from that solution satisfies, at large $r$,
the linear equation obtained by linearizing around the Nahm pole solution. We will show that
any solution of that linear equation that vanishes for $r\to\infty$ vanishes at least as fast as $1/r^2$ (and its derivative vanishes at least as
fast as $1/r^3$). This holds irrespective of what singularities the solution might have if continued in to small $r$ (where we do not assume
the linearized KW equations to be valid). Vanishing of the perturbations as $1/r^2$ for $r\to\infty$
is more than was needed in section \ref{largey}.
As before, we will write $a_i$ and $\varphi_i$ for the perturbations of $A_i$ and $\phi_i$ around the Nahm pole solution.
To explain the idea of the analysis, we first describe the simplest case, which is the behavior of $\varphi_y$. The linearized KW equations
imply a linear equation for $\varphi_y$ independent of all other modes. This perhaps surprising fact can be proved by taking a certain linear
combination of derivatives of the KW equations. However, a quicker route is to go back to eqn.\ (\ref{zoffbo}). At a solution of the KW
equations $\V_{ij}=\V^0=0$, the left hand side of (\ref{zoffbo}) is certainly stationary under variations of $A$ and $\phi$, so the right hand
side is also. If we consider variations of $A$ and $\phi$ whose support is in the interior of $M$, the boundary term can be ignored
and therefore the functional $I$ is stationary at a solution of the KW equations. In other words, the KW equations imply the Euler-Lagrange equations
$\delta I/\delta A=\delta I/\delta\phi=0$. (This statement is part of the relation of the KW equations to
a four-dimensional supersymmetric gauge theory.)
It is straightforward to work out the Euler-Lagrange equation for
$\varphi_y$ from the explicit formula for the action $I$ in eqn.\ (\ref{baction}). A convenient way to do this is to expand the action in powers
of the fluctuations $a$ and $\varphi$, around the Nahm pole solution on $\R^4_+$. There are no linear terms -- since the Nahm pole solution is a solution -- but there are quadratic terms.
The part of $I$ that is second order in the fluctuations and
has a nontrivial dependence on $\varphi_y$ is
\begin{equation}\label{udz}-\int \d^4x \Tr\left(\sum_i (\partial_i\varphi_y)^2+\sum_a [\phi_a,\varphi_y]^2\right).\end{equation}
Importantly, there are no terms that are bilinear in $\varphi_y$ and the other fluctuations $a$ and $\varphi_a$; that is why at the linearized
level one finds an equation that involves only $\varphi_y$ and not the other fields.
This equation is just the Euler-Lagrange equation that arises in varying the functional (\ref{udz})
with respect to $\varphi_y$. We write
\begin{equation}\label{tenso}\Delta=-\sum_{i=1}^4\partial^2_{x^i} \end{equation}
for the Laplacian on $\R_+^4$ (with the gauge connection $A$ taken to vanish, as in
the Nahm pole solution), and of course we set $\phi_a=\tt_a/y$. The equation for $\varphi_y$ is
\begin{equation}\label{benso}\left(\Delta -\frac{1}{y^2}\sum_a[\tt_a,[\tt_a,\cdot]]\right)\varphi_y=0. \end{equation}
In four dimensions,
\begin{equation}\label{lenso}\Delta=-\frac{\partial^2}{\partial r^2}-\frac{3}{r}\frac{\partial}{\partial r}+\frac{\Delta_{S^3}}{r^2},\end{equation}
where $\Delta_{S^3}$ is the Laplacian on the three-sphere $r=1$. Since we are working on the half-space $\R_+^4$ rather
than all of $\R^4$, we consider $\Delta_{S^3}$ as an operator defined on a hemisphere in $S^3$. It is convenient
to introduce the polar angle $\psi$ where $y/r=\cos \psi$, so that $\psi=0$ along the positive $y$-axis where $\vec x=0$, and the hemisphere
is defined by $\psi\leq \pi/2$. One has
\begin{equation}\label{dsthree}\Delta_{S^3}=-\frac{1}{\sin^2\psi}\partial_\psi \sin^2\psi \partial_\psi +\frac{1}{\sin^2\psi}\Delta_{S^2},\end{equation}
where $\Delta_{S^2}$ is the Laplacian on a unit two-sphere.
The boundary condition at $\psi=\pi/2$ is determined by the four-dimensional boundary condition
at $y=0$ and hence
can be read off from section \ref{nonregular}. In our context, this generally means that $\varphi_y$ vanishes at $\psi=\pi/2$,
except possibly if $j_\sigma=0$, in which case $\varphi_y^{\frak h}$ is not required to vanish at $y=0$ or $\psi=\pi/2$. (Rather,
the KW equation $D_y\phi_y+D_a\phi_a=0$, where $\phi_a^{\frak h}=0$ at $y=0$, and $A$ is $\frak h$-valued at $y=0$,
implies that $D_y\phi_y^{\frak h}=0$
at $y=0$, so for $\varphi_y^{\frak h}$, eqn.\ (\ref{benso}) should be supplemented with Neumann boundary conditions
at $y=0$.) Actually, as we will see in a moment, for $j_\sigma>0$, there is a potential that enforces the vanishing of $\varphi_y$ at $\psi=\pi/2$.
To make (\ref{benso}) more explicit, we replace $-\sum_a[\tt_a,[\tt_a,\cdot]]$ with $j_\sigma(j_\sigma+1)$,
where $j_\sigma$ is defined as in section \ref{group}. The equation
for $\varphi_y$ becomes
\begin{equation}\label{enso}\left(-\frac{\partial^2}{\partial r^2}-\frac{3}{r}\frac{\partial}{\partial r}+\frac{W}{r^2}\right)\varphi_y=0,
\end{equation}
where
\begin{equation}\label{tensox} W=\Delta_{S^3} + \frac{j_\sigma(j_\sigma+1)}{\cos^2\psi} = -\frac{1}{\sin^2\psi}\partial_\psi \sin^2\psi \partial_\psi +\frac{1}{\sin^2\psi}\Delta_{S^2}+ \frac{j_\sigma(j_\sigma+1)}{\cos^2\psi} . \end{equation}
$W$ is a self-adjoint operator on the hemisphere with a discrete and non-negative spectrum
(strictly positive except for $j_\sigma=0$
and $\varphi_y\in\frak h$).
Any solution of (\ref{enso}) is a linear combination of solutions of the form $\varphi_y=r^s f$, where $f$ is an eigenfunction of $W$,
obeying $Wf=\gamma f$ for some $\gamma\geq0$, and $s(s+2)-\gamma=0$ or
\begin{equation}\label{zom} s=-1\pm \sqrt{1+\gamma}.\end{equation}
Actually, the spectrum of the operator $W$ can be found in closed form. In particular, for $j_\sigma>0$, the ground state (which is the unique
everywhere positive
eigenfunction) is $f=\cos^{j_\sigma+1} \psi$,
with eigenvalue $\gamma=(j_\sigma+1)(j_\sigma+3)$.
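As a check, for modes that are independent of the $S^2$ directions, a short computation with $f=\cos^p\psi$ gives
\begin{equation}Wf=\left(j_\sigma(j_\sigma+1)-p(p-1)\right)\cos^{p-2}\psi+p(p+2)\cos^{p}\psi,\end{equation}
so the potentially singular first term cancels precisely for $p=j_\sigma+1$, leaving $Wf=(j_\sigma+1)(j_\sigma+3)f$.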
So from (\ref{zom}), if $s$ is negative -- as it must be if $\varphi_y$ is to vanish for $r\to\infty$ -- then $s=-3-j_\sigma$. The perturbations thus decay for
large $r$ as $r^{-3-j_\sigma}$. Thus for example
if $\varrho$ is principal so that $j_\sigma\geq 1$ for all modes, then in a solution that is asymptotic to the Nahm pole solution,
$\varphi_y$ vanishes for $r\to\infty$ as $1/r^4$.
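As a quick check on the statement about the ground state (restricting to functions independent of the $S^2$ angles, so that the $\Delta_{S^2}$ term drops out), one computes, with $n=j_\sigma+1$,
\begin{equation}
-\frac{1}{\sin^2\psi}\partial_\psi\left(\sin^2\psi\,\partial_\psi\cos^n\psi\right)=-n(n-1)\cos^{n-2}\psi+n(n+2)\cos^n\psi.
\end{equation}
The singular term $-n(n-1)\cos^{n-2}\psi$ is cancelled by the potential term $j_\sigma(j_\sigma+1)\cos^{n-2}\psi$ precisely because $n=j_\sigma+1$, leaving $Wf=(j_\sigma+1)(j_\sigma+3)f$. Setting $\gamma=(j_\sigma+1)(j_\sigma+3)$ in (\ref{zom}) gives $s=-1\pm(j_\sigma+2)$, whose negative root is $s=-3-j_\sigma$, as used above.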
This analysis of the fluctuations is valid at large $r$ even for $y\to 0$, so we can
compare to our study of the indicial roots. The wavefunction $\varphi_y=\cos^{j_\sigma+1}\psi/r^{j_\sigma+3}$ vanishes for $y\to 0$
as $y^{j_\sigma+1}$, in agreement with (\ref{indroots}), where the positive indicial roots are $\lambda=j_\sigma+1$.
As usual, for $j_\sigma=0$, there is more to say as there are actually two types of mode. For $\varphi_y\in \frak h^\perp$, $\varphi_y$ obeys Dirichlet boundary
conditions at $y=0$, and all the previous formulas are valid, including the asymptotic behavior $\varphi_y\sim 1/r^{3+j_\sigma}=1/r^3$.
But for $\varphi_y\in \frak h$, we want Neumann boundary conditions at $\psi=\pi/2$. The lowest eigenvalue of $W$ is $\gamma=0$,
with eigenfunction 1, leading to $s=-2$ and $\varphi_y\sim 1/r^2$.
We have analyzed here a second order equation, not all of whose solutions are necessarily associated to solutions of the linearized KW equation, which is first order.
In practice, we need not explore this issue here in detail since all modes we have found decay more rapidly at infinity than was needed for the vanishing
argument of section \ref{largey}. The same remarks apply in what follows.
Fluctuations in the other fields can be analyzed along the same lines. For this, it is convenient to write a general formula for the expansion of the action
$I$ around the Nahm pole solution. We write $I_2$ for the part of $I$ that is quadratic in the fluctuations $a,\varphi$, and compute that
\begin{equation}\label{monx} I_2=I_{2,0}+I_{2,1}+I_{2,2} \end{equation} with
\begin{align}\label{tonx} I_{2,0}&=-\int \d^4x\,\Tr\left(\sum_{i,j}\left((\partial_i a_j)^2+(\partial_i\varphi_j )^2\right)+\frac{1}{y^2}\sum_{a,i}\left([\tt_a,a_i]^2+[\tt_a,\varphi_i]^2\right)\right)\cr
I_{2,1} &=-\int \d^4 x \,\Tr \left(\frac{2}{y^2}\epsilon_{abc} [\tt_a,\varphi_b]\varphi_c+\frac{4}{y^2}a_y[\tt_a,\varphi_a]
\right)
\cr I_{2,2}&= ~~\int \d^4x\, \Tr\left(\sum_i\partial_i a^i+\frac{1}{y}[\tt_a,\varphi_a]\right)^2=\int \d^4x\, \Tr\,S^2, \end{align}
where the gauge condition (\ref{zurimo}) was $S=0$. This illuminates one of the advantages of that gauge condition: because $I_{2,2}$ is
homogeneous and quadratic in $S$, it is automatically stationary when $S=0$ and hence
does not contribute to the Euler-Lagrange equations. This significantly simplifies the analysis.
Since the spatial part $a_a$ of the connection does not appear in $I_{2,1}$, it obeys an Euler-Lagrange equation that comes entirely from $I_{2,0}$. This
equation coincides with the equation for fluctuations of $\varphi_y$, which we have already analyzed. This is in keeping with the fact that
$\varphi_y$ and $a_a$ have the same indicial roots, so their fluctuations must have the same behavior for $\psi\to \pi/2$.
The equation for fluctuations of $\varphi_a,a_y$ does receive a contribution from $I_{2,1}$. This contribution, which only arises for $j_\sigma>0$ (since
$I_{2,1}=0$ for $j_\sigma=0$), slightly modifies
the behavior of the perturbations for $r\to\infty$. It can
be analyzed using methods similar to those that we used in computing the indicial roots.
For the same reason as in that analysis, the term in $I_{2,1}$ that involves $a_y$ contributes only for $\eta=0$.
For $\eta=\pm 1$, we only have to consider the term involving
$\epsilon_{abc} \Tr\,[\tt_a,\varphi_b]\varphi_c$ which we evaluate using (\ref{exico}) and (\ref{mexico}), to find that
\begin{equation}\label{pilon}I_{2,1}=-\int \d^4 x \frac{1}{r^2\cos^2\psi}\Tr \,\varphi_a \varphi_a \cdot \begin{cases} -2j_\sigma & \eta=1\cr
2(j_\sigma+1) & \eta=-1. \end{cases}\end{equation}
The equation (\ref{enso}) is modified only by a shift in $W$,
\begin{equation}\label{dosox}W\to W+\frac{1}{\cos^2\psi}\begin{cases} -2j_\sigma & \eta=1\cr 2(j_\sigma+1) &\eta=-1. \end{cases}\end{equation}
This is equivalent to replacing $j_\sigma$ by $j_\sigma-\eta$ in the definition (\ref{tensox}) of $W$, so that the perturbations in $\phi_a$ with $\eta=\pm 1$
vanish
at infinity as $1/r^{3+j_\sigma-\eta}$. The modes that decay most slowly are those with $\eta=1$; they decay for $r\to\infty$
as $1/r^{2+j_\sigma}$ and for $\psi\to \pi/2$ as $\cos^{j_\sigma}\psi$. The last statement is in accord with the value found in (\ref{indrootstwo})
for the indicial root at $\eta=1$.
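The equivalence used here is just the elementary identity
\begin{equation}
j_\sigma(j_\sigma+1)-2j_\sigma=(j_\sigma-1)j_\sigma,\qquad\qquad j_\sigma(j_\sigma+1)+2(j_\sigma+1)=(j_\sigma+1)(j_\sigma+2),
\end{equation}
which shows that the shifted potential in (\ref{dosox}) is the one that (\ref{tensox}) would assign to $j_\sigma-1$ (for $\eta=1$) or to $j_\sigma+1$ (for $\eta=-1$).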
For $\eta=0$, we can express $\varphi_a$ in terms of a new field $u$ by
$\varphi_a=[\tt_a,u]/\sqrt{j_\sigma(j_\sigma+1)}$ (recall that
we can assume that $j_\sigma>0$, since otherwise the perturbation vanishes). In terms of these variables, the Euler-Lagrange equation turns out to be
\begin{equation}\label{zongo} \left( -\frac{\partial^2}{\partial r^2}-\frac{3}{r}\frac{\partial}{\partial r}+\frac{W}{r^2}+\frac{2}{r^2\cos^2\psi}M\right)\begin{pmatrix}
u \cr a_y\end{pmatrix}=0,\end{equation}
where the contribution of $I_{2,1}$ is the term proportional to
\begin{equation}\label{ofty}M =\begin{pmatrix} 1 & \sqrt{j_\sigma(j_\sigma+1) }\cr \sqrt{j_\sigma(j_\sigma+1)}&0\end{pmatrix}.\end{equation}
The eigenvalues of $M$ are $j_\sigma+1$ and $-j_\sigma$. Upon substituting one of these eigenvalues for $M$ in (\ref{zongo}), one gets precisely
the same shifts of $W$ as described in eqn.\ (\ref{dosox}). So the two modes with $\eta=0$ obey precisely the same equations as the two
modes with $\eta=\pm 1$. In particular, the mode that decays most slowly for $r\to\infty$ again decays as $1/r^{2+j_\sigma}$, while vanishing
as $\cos^{j_\sigma}\psi$ for $\psi\to \pi/2$. The last statement corresponds to the indicial root
$\lambda=j_\sigma$ found at $\eta=0$ in (\ref{indrootstwo}).
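The eigenvalues of $M$ quoted above follow from its characteristic polynomial,
\begin{equation}
\det(\lambda-M)=\lambda(\lambda-1)-j_\sigma(j_\sigma+1)=\bigl(\lambda-(j_\sigma+1)\bigr)\bigl(\lambda+j_\sigma\bigr),
\end{equation}
so that substituting $M\to j_\sigma+1$ or $M\to -j_\sigma$ in (\ref{zongo}) reproduces the two shifts of $W$ in (\ref{dosox}).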
Though we motivated this analysis by asking if the conditions needed to get a uniqueness theorem for the Nahm pole solution are reasonable,
the results are applicable more widely. For example, we may be interested in a solution of the KW equations in which the Nahm pole
boundary condition is modified by inclusion of knots at $y=0$. As long as the knots are compact, one can look for solutions of the KW equation
that coincide with the Nahm pole solution for $r\to \infty$. Their rate of approach to that solution will be as we have just described.
We observed in section \ref{background} that the symbol $\sigma$ of the linearized KW equations is the symbol of the operator $D=\d+\d^*$ mapping odd
degree forms to even degree forms. Let $D^\dagger$ be the adjoint of $D$ and let $\sigma^{\dagger}$ be the adjoint of $\sigma$.
Since $D^\dagger D$ is the Laplacian on odd degree differential forms, it follows that $\sigma^\dagger\sigma$ is the symbol of
the Laplacian. This is reflected in the above formulas: the fluctuations are annihilated by a second order differential operator that is
equal to the Laplacian plus corrections of lower order.
\subsection{Extension To Five Dimensions}\label{exfive}
The four-dimensional KW equations are closely related to a certain system of elliptic differential equations in five dimensions \cite{haydys}, \cite{WittenK,WittenKtwo}
and this relationship is crucial in the application to Khovanov homology.
To explain the relationship, we first specialize the KW equations to a four-manifold of the form $M=W\times \I$, with $W$ an oriented
three-manifold, and
$\I$ an oriented one-manifold, possibly with boundary. We endow $M$ with a product metric $g_{ab}(x)\d x^a \d x^b+\d y^2$, where the $x^a$, $a=1,\dots,3$, parametrize $W$ and $y$ parametrizes $\I$, and as usual we expand
$\phi=\sum_a \phi_a \d x^a+\phi_y\d y$. The KW equations have the property
that $\phi_y$ enters only in commutators -- either covariant derivatives $D_a\phi_y=[D_a,\phi_y]$, or commutators $[\phi_a,\phi_y]$ with other components
of $\phi$. This enables us to do the following. We replace the four-manifold $M$ by the five-manifold $Y=\R\times M=\R\times W\times \I$, where
$\R$ is parametrized by a new
coordinate $x^0$. Then wherever a commutator with $\phi_y$ appears in the KW equations, we simply replace it by a commutator with $D_0=D/Dx^0$.
So we replace $D_a\phi_y$ with $[D_a,D_0]=F_{a0}$, and $[\phi_a,\phi_y]$ with $[\phi_a,D_0]=-D_0\phi_a$.
In this way, we get some partial differential equations in five dimensions. Most
of what we have said in this paper about the KW equations carries over to them. For example,
the five-dimensional equations have a Weitzenbock formula quite analogous to (\ref{zelg}).
One simply has to replace $[\phi_y,\cdot]$ with $[D_0,\cdot]$ in the formula
for the action functional $I$ of equation (\ref{baction}). This Weitzenbock formula can be used to prove that the five-dimensional equations
are elliptic for $t\not=0,\infty$. (See eqn.\ (5.44) of \cite{WittenK} for the Weitzenbock formula at $t=1$.) The proof of ellipticity
in the interior amounts to showing that -- similarly to what
was explained for the KW equations at the end of section \ref{zobot} -- if $\sigma$ is
the symbol of the five-dimensional equations, then $\sigma^\dagger\sigma$ is the symbol of the Laplacian (times the identity matrix) and in particular is invertible; hence $\sigma$
is invertible and the equations are elliptic.
Though the equations are elliptic for generic $t$, something nice happens precisely for $t=1$ (or $t=-1$, which is equivalent to $t=1$ modulo $\phi\to -\phi$). Just in this case,
the equations acquire four-dimensional symmetry. From the way we described these equations, they are formulated on a five-manifold of the particular
form $Y=\R\times W\times \I$. However, at $t=1$, there is more symmetry, a fact that is essential in the application to Khovanov homology.
One can replace $\R\times W$ by a general oriented Riemannian four-manifold $X$
with no additional structure, and formulate
the equations on\footnote{Still more generally, one can formulate these equations -- and they remain elliptic --
on an arbitrary five-manifold $Y$ with an everywhere
non-zero vector field \cite{haydys}.}
$Y=X\times \I$. We take on $Y$ a product metric
$\sum_{\mu,\nu=0}^3 g_{\mu\nu}\d x^\mu\d x^\nu+\d y^2$, where $x^\mu$, $\mu=0,\dots,3$ are local coordinates on $X$ and $\I$ is parametrized
by $y$. The equations on $Y$ can be described
as follows. Let $\Omega^{2,+}\to X$ be the bundle of self-dual two-forms, and using the natural projection $X\times \I\to X$, pull this bundle
back to a bundle over $Y=X\times \I$ that we also denote as $\Omega^{2,+}$.
The fields appearing in the five-dimensional equations are a connection
$A$ on a $G$-bundle $E\to Y$, and a section $B$ of $\Omega^{2,+}(\ad(E))=\Omega^{2,+}\otimes \ad(E)$. (For $X=\R\times W$, the relation between $B$ and the object
$\vec\phi=\sum_a\phi_a\d x^a$ that appears in the KW equations is $B_{0a}=\phi_a$, $B_{ab}=\epsilon_{abc}\phi_c$.) The five-dimensional equations can be written
\begin{equation}\label{moniko}
\begin{split}F^+- \frac{1}{4}B\times B-\frac{1}{2}D_y B &=0 \\
F_{y\mu}+ D^\nu B_{\nu\mu}&=0. \end{split}\end{equation}
Here $F^+$ is the orthogonal projection of the curvature $F$ onto the part valued in $\Omega^{2,+}(\ad(E))$, and $B\times B$ is defined
as follows. Since $\Omega^{2,+}$ is a rank 3 real bundle with structure group $SO(3)$, there is a natural isomorphism\footnote{For a vector space $V$,
we denote the symmetric and antisymmetric parts of $V\otimes V$ as $\mathrm{Sym}^2V$ and $\wedge^2V$, respectively.} $\wedge^2\Omega^{2,+}\cong
\Omega^{2,+}$. By composing this with the Lie bracket $\wedge^2\frak g\to \frak g$, we get a natural map $\mathrm{Sym}^2\Omega^{2,+}(\ad(E))
\to \Omega^{2,+}(\ad(E))$. The image of $B\otimes B$ under this map is what we call $B\times B$. An explicit formula, viewing $B$ as a self-dual
two-form valued in $\ad(E)$, is
\begin{equation}\label{omen}(B\times B)_{\mu\nu}=\sum_{\sigma,\tau=0}^3 g^{\sigma\tau}[B_{\mu\sigma},B_{\nu\tau}]. \end{equation}
In view of all this, any solution of the KW equations on $W\times \R_+$ can be viewed as a ``time''-independent solution of the five-dimensional
equations on $\R\times W\times \R_+$ (where we identify $x^0$ as time, as is natural in the application to Khovanov homology), with
$\phi_y$ reinterpreted as $A_0$. In particular, the Nahm pole solution on $\R^3\times \R_+$ can be regarded
as a solution of the five-dimensional equations on $\R^4\times \R_+$. The solution is simply \begin{equation}\label{ford} A=0,~~~B_{0a}=\tt_a/y,~~~
~B_{ab}=\epsilon_{abc}\tt_c/y.\end{equation}
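As a consistency check, one can verify directly that (\ref{ford}) satisfies (\ref{moniko}); here we use the normalization $[\tt_a,\tt_b]=\epsilon_{abc}\tt_c$, which is the one for which $\tt_a/y$ solves Nahm's equations. From (\ref{omen}),
\begin{equation}
(B\times B)_{0a}=\sum_b\left[B_{0b},B_{ab}\right]=\frac{1}{y^2}\epsilon_{abc}[\tt_b,\tt_c]=\frac{2\tt_a}{y^2}=\frac{2}{y}B_{0a},
\end{equation}
and a similar computation gives $(B\times B)_{ab}=\frac{2}{y}B_{ab}$. Since $A=0$ and $B$ depends only on $y$, we have $F^+=0$ and $D_yB=\partial_yB=-B/y$, so the first equation of (\ref{moniko}) reduces to $-\frac{1}{4}\cdot\frac{2}{y}B+\frac{1}{2y}B=0$, while the second holds because $F_{y\mu}=0$ and $B$ is independent of the $x^\mu$.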
Given the classical Nahm pole solution, we then ask if we can define a boundary condition on the five-dimensional equations by allowing only solutions
that are asymptotic for $y\to 0$ to the Nahm pole. The first step is to compute the indicial roots. These are precisely the same for the five-dimensional
equation as for the four-dimensional KW equations, since the indicial roots are defined in terms of solutions that depend only on $y$ and so in particular
are time-independent. The only real difference between the computation of indicial roots in five dimensions and the four-dimensional computation described in
section \ref{throots} is that the five-dimensional interpretation, with $\phi_y$ reinterpreted as $A_0$, gives a better explanation of the symmetry of the equations
between $a_a$ and $\varphi_y$. (This symmetry is visible in the formula (\ref{tonx}) for $I_2$, and accounts for the fact that in eqn.\ (\ref{indroots}), the indicial roots
for different values of $\eta$ are pairwise equal.) Given the indicial roots, one can imitate the discussion in section \ref{nonregular} to define precisely
the Nahm pole boundary condition, and as in section \ref{largey} it follows that the boundary terms at $y=0$ in the Weitzenbock formula vanish.
The Nahm pole solution on $\R^4\times \R_+$ is therefore unique if one requires sufficiently fast convergence as $r=\sqrt{x^2+y^2}$ becomes large.
The analysis in section \ref{zobot} can be repeated to show that the expected rate of convergence at infinity of a solution that does converge to the Nahm
pole solution is fast enough to make the uniqueness result concerning the Nahm pole solution relevant. Just a few modifications are needed. In
equation (\ref{benso}), $\Delta$ should now be the five-dimensional Laplacian on a half-space,
\begin{equation}\label{umongo}\Delta =-\frac{\partial^2}{\partial r^2}-\frac{4}{r}\frac{\partial}{\partial r}+\frac{\Delta_{S^4}}{r^2}. \end{equation}
We expand the fluctuation in the connection around the Nahm pole as $a=\sum_{s=0}^3 a_s\d x^s +a_y \d y$. The equation obeyed by $a_s$ is now
\begin{equation}\label{tumongo}\left(-\frac{\partial^2}{\partial r^2}-\frac{4}{r}\frac{\partial}{\partial r}+\frac{W}{r^2}\right)a_s=0,~~s=0,\dots,3\end{equation}
where
\begin{equation}\label{rumongo}W=\Delta_{S^4}+\frac{j_\sigma(j_\sigma+1)}{\cos^2\psi}=-\frac{1}{\sin^3\psi}\partial_\psi \sin^3\psi\partial_\psi+\frac{1}{\sin^2\psi}\Delta_{S^3}
+\frac{j_\sigma(j_\sigma+1)}{\cos^2\psi}. \end{equation}
The lowest eigenvalue of $W$ is now $(j_\sigma+1)(j_\sigma+4)$, again with eigenfunction $\cos^{j_\sigma+1}\psi$, and now the fluctuations decay as
$r^{-4-j_\sigma}$ for $r\to\infty$, with precisely one extra power of $1/r$ compared to the four-dimensional case.
The corresponding formulas for $B$ and $a_y$ are similar to the analysis of $\varphi_a$ and $a_y$ in the four-dimensional case,
and again, the fluctuations in five dimensions
decay with one extra power of $r$ compared to what we found in four dimensions.
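The computation behind these statements parallels the four-dimensional one. For functions independent of the $S^3$ angles and with $n=j_\sigma+1$,
\begin{equation}
-\frac{1}{\sin^3\psi}\partial_\psi\left(\sin^3\psi\,\partial_\psi\cos^n\psi\right)=-n(n-1)\cos^{n-2}\psi+n(n+3)\cos^n\psi,
\end{equation}
so the singular term is again cancelled by the potential $j_\sigma(j_\sigma+1)/\cos^2\psi$, leaving the eigenvalue $(j_\sigma+1)(j_\sigma+4)$. The radial ansatz $a_s=r^sf$ in (\ref{tumongo}) gives $s(s+3)=\gamma$, and for $\gamma=(j_\sigma+1)(j_\sigma+4)$ the decaying root is $s=-(4+j_\sigma)$.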
\section{The Linearized Operator On A Half-Space}\label{third}
\subsection{Overview}\label{overview2}
A nonlinear partial differential equation is said to be elliptic if its linearization is elliptic. An important property of a linear elliptic differential operator on
a closed manifold is that its kernel and cokernel are finite-dimensional.
If a linear elliptic differential equation is considered on a manifold $M$ with a nonempty boundary, then it is necessary to impose some sort of boundary condition
to make the problem similarly well posed. A boundary condition such that the accompanying problem has a finite dimensional kernel and cokernel, and so that in addition
solutions enjoy optimal regularity properties, is called an elliptic boundary condition. As before, for a nonlinear elliptic differential equation on a manifold $M$
with boundary, a choice of boundary condition is called elliptic if it (or its linearization if the boundary condition is also nonlinear) is an elliptic boundary
condition for the linearized operator. For standard nondegenerate elliptic operators, the theory of elliptic boundary conditions is now classical, and the criterion
for ellipticity of a boundary condition (which involves both the interior and boundary operators) is called the Lopatinski-Schapiro condition. The linearized operator
in our setting cannot be treated with this classical theory since its coefficients of order $0$ are singular at $y=0$. It is, however, a uniformly degenerate
elliptic operator, as introduced in \cite{M-edge}. There is a suitable notion of ellipticity for boundary conditions in this setting as well, to which we shall be appealing here.
As a way to motivate the definition of ellipticity of a given boundary condition, consider the model case where $M$ is the half-space $\R^n_+ = \{x^n\geq 0\}$.
If a linear differential operator on this half-space and a boundary condition on the boundary $\R^{n-1}$ are both invariant under rotations and translations
in the boundary variables, then the kernel and cokernel of this boundary problem (amongst tempered solutions on $\R^n_+$) are finite-dimensional if and
only if they are actually trivial. Conversely, a boundary condition with this property, and so that the accompanying linear operator
as a map between appropriate Sobolev spaces has closed range, is elliptic. We have stated this formulation for operators and boundary conditions
with substantial symmetry; more generally, if $\L $ is a general linear elliptic operator with variable coefficients and $B$ a possibly non-constant operator giving the boundary conditions, then we may apply
this condition to the constant coefficient problem on $\R^n_+$ obtained by freezing the coefficients of $\L $ and $B$ at $q \in \R^{n-1}$.
These remarks are relevant to the Nahm pole boundary condition on the KW equations. Our task now is to show that the linearization $\L $ of the KW operator
around a solution with Nahm pole boundary data on a four-manifold $M$ with boundary satisfies this ellipticity condition. By the remarks above,
this is actually equivalent to proving the corresponding property for the linearization of the KW operator around the actual model Nahm pole solution
on the half-space $\R^4_+$. We shall explain in section \ref{analysis} how this fits into the analytic theory which justifies the main consequences
of this paper, namely the regularity at $y = 0$ of more general solutions satisfying Nahm pole boundary conditions and the uniqueness theorem.
Thus the aim of this section is to study the linearized KW operator on the half-space, and to show that it is an isomorphism on the space of $L^2$
fields which satisfy the Nahm pole boundary conditions. As suggested above, this consists of two rather separate parts: one involves showing
that the kernel and cokernel of $\L $ vanish, while the other requires showing that $\L $ has closed range as a map between the appropriate
function spaces. We undertake the first of these in the present section. Section \ref{vanishing} contains a proof that the kernel of $\L $ vanishes. This turns
out to be a rather direct consequence of the formulas that were used in section \ref{second} to establish the uniqueness theorem. As for vanishing of
the cokernel, a standard strategy, once the kernel is known to vanish, is to show, after reducing to an ODE via a Fourier transform, that the index of $\L $ (defined as the difference
in dimension between the kernel and cokernel) vanishes. We do this in two essentially separate ways. The first involves some fairly elementary
algebraic considerations; see section \ref{index}.
The second is more direct (and much more useful in generalizations). It turns out, see section \ref{pseudo}, that the adjoint of $\L $ is
conjugate to $-\L $, a property that we call pseudo skew-adjointness. This immediately gives an isomorphism between the kernel and
cokernel of $\L $, so the cokernel vanishes if the kernel does. Finally, in section \ref{extension}, we show that these arguments carry over more or
less immediately to the five-dimensional extension of the KW equations that is relevant to Khovanov homology.
The remaining task, to show that the range of $\L $ is closed, turns out to follow using general machinery that will be explained in section \ref{analysis}.
\subsection{Vanishing Theorem For The Kernel}\label{vanishing}
The uniqueness theorem for the Nahm pole solution
on a half-space was deduced from an identity (\ref{zoffbox}) which reads schematically
\begin{equation}\label{terrfo}-\int \d^4x \,\Tr\,\sum_\lambda\V_\lambda^2 =-\int \d^4x \,\Tr\,\sum_\sigma \W_\sigma^2 ,\end{equation}
where we omit surface terms since we have shown them to vanish in a solution of the KW equations that is asymptotic to the Nahm pole solution for $y\to 0$ and at infinity. The $\V_\lambda$ are the $\frak g$-valued quantities $\V_{ij}$ and $\V^0$ that appear on the left hand side
of (\ref{zoffbox}). The $\W_\sigma$ are the objects $F_{ij}$, $D_a\phi_b$, $D_i\phi_y$, $[\phi_y,\phi_a]$, and $W_a$ whose squares appear on the right hand
side of the definition (\ref{longsum}).
If the KW equations $\V_\lambda=0$ are obeyed, then the identity (\ref{terrfo}) shows
that the $\W_\sigma$ vanish, which implies that the solution is
constructed from a solution of Nahm's equations.
Now let us see what this formula tells us about the linearization of the KW equations about the Nahm pole solution. Schematically, let us combine
the fields $A,\phi$ to an object $\Phi$ (one can think of $\Phi=A+\star\phi$ as an odd-degree differential form on $\R^4_+$ valued in $\mathrm{ad}(E)$).
We write $\Phi_0$ for the Nahm pole solution, and we consider a family of fields $\Phi_s=\Phi_0+s \Phi_1$, where $s$ is a parameter and $\Phi_1$ is a perturbation.
Expanding the identity (\ref{terrfo}) in powers of $s$, the linear term vanishes because $\V_\lambda = \W_\sigma=0$ at $s=0$. Taking the second derivative
with respect to $s$, terms such as $\V_\lambda \partial_s^2 \V_\lambda$ vanish at $s=0$ since $\V_\lambda=0$ at $s=0$. So we get
\begin{equation}\label{terrf}-\int \d^4x \,\Tr\,\sum_\lambda\left(\frac{\partial\V_\lambda}{\partial s}\right)^2 =-\int \d^4x \,\Tr\,\sum_\sigma\left(\frac{\partial \W_\sigma}{\partial s}
\right)^2.\end{equation}
Hence the equations $\partial \V_\lambda/\partial s=0$ are satisfied if and only if the equations $\partial \W_\sigma/\partial s=0$ are satisfied.
The equations $\partial \V_\lambda/\partial s=0$ are the linearization of the KW equations around the Nahm pole solution. In other words, these equations
are $\L \Phi_1=0$, where $\L $ is the linearization of the KW equations and $\Phi_1$ is the perturbation around the Nahm pole solution.
The equations $\partial \W_\sigma/\partial s=0$ imply that $\Phi_1$ actually vanishes if it vanishes at infinity. For example, the equation $\partial_s F_{ij}=0$
implies that the fluctuation in the connection $A$ can be gauged away, and upon doing so, the equations $\partial_s(D_a\phi_b)=\partial_s(D_a\phi_y)=0$ imply
that the perturbation in $\phi$ is independent of $\vec x$ and so vanishes if it vanishes at infinity. The other conditions $\partial_s\W_\sigma=0$ imply
that $\Phi_1$ actually comes from a solution of the linearization of Nahm's equation. Of course, such a perturbation (or any perturbation that is independent of $\vec x$)
is not square-integrable in four dimensions.
Hence if the linearization $\L $ of the KW equations is understood as an operator acting on a Hilbert space of square-integrable wavefunctions, its kernel vanishes.
\subsection{Index}\label{index}
Here we will sketch a standard strategy to prove that the cokernel of $\L $ vanishes once one knows that the kernel vanishes. We only provide a sketch
since in the particular case of the KW equations, there is a more powerful and direct method that we explain in section \ref{pseudo}.
First of all, using the translation symmetries of $\R^3=\partial(\R^4_+)$, we can look for a momentum eigenstate, that is a
perturbation of the form $\Phi_1(\vec x,y)=e^{i\vec k \cdot \vec x}F(y)$
where $F$ depends on $y$ only and $\vec k$ is a real ``momentum'' vector. $F(y)$ is a function on $\R_+$ with values in a finite-dimensional complex vector space $Y$. Let $d=\dim\,Y$;
for the KW equations,
$d=8\dim \frak g$. To show that the cokernel of $\L $ is trivial for square-integrable wavefunctions, it suffices to show that it vanishes for momentum eigenstates with
$\vec k\not=0$.
On momentum eigenstates, the linearized KW equation $\LKW \Phi_1=0$ reduces to an equation $\L _1(\vec k)F(y)=0$, with
\begin{equation}\label{redy}\L _1(\vec k)=\frac{d}{d y} + B(y,\vec k), \end{equation}
where $B(y,\vec k)$ is a self-adjoint matrix-valued function of $y$ and $\vec k$. In fact,
\begin{equation}\label{edy} B(y,\vec k)=\frac{B_0}{y}+B_1(\vec k), \end{equation}
where $B_0$ is a constant matrix (independent of $y$ and $\vec k$) and $B_1$ is independent of $y$ and homogeneous and linear in $\vec k$. Actually, $B_0$
is the matrix whose eigenvalues are the indicial roots, which we computed in section \ref{throots}. $B_1(\vec k)$ is the symbol of the operator $\d+\d^*$ on
$\mathrm{ad}(E)$-valued
differential forms on $\R^3$ of all possible degrees; in other words, $B_1(\vec k)$ is the momentum space version of the $\d+\d^*$ operator.
Let $\Y$ be the space of all solutions of the linear equation $\L _1F(y)=0$ on $\R_+$, with no condition on the behavior near $y=0$ or $\infty$.
We can identify $\Y$ with $Y$ by, for example, mapping a solution $F(y)\in \Y$ to its
value $F(1)\in Y$, so $\Y$ has dimension $d$. Let $\Y_0$ be the subspace of $\Y$ consisting of solutions that obey the Nahm pole boundary condition at $y=0$ (and
in particular are square-integrable near $y=0$),
and let $\Y_\infty$ be the subspace consisting of solutions that are square-integrable at infinity. Also, let $d_0=\dim\,\Y_0$, $d_\infty=\dim\,\Y_\infty$.
Finally, let $\Y^*$ be the space of solutions of (\ref{redy}) that obey the Nahm pole boundary condition at $y=0$, and in addition are square-integrable,
and set $d^*=\dim \,\Y^*$.
$\Y^*$ is simply the intersection $\Y_0\cap\Y_\infty$ of the spaces of solutions that are well-behaved at 0 and at $\infty$. So
if $d_0+d_\infty-d\geq 0$ and the subspaces $\Y_0,\Y_\infty\subset \Y$ are generic, then the dimension of $\Y^*$ is $d^*=d_0+d_\infty-d$. Even if these conditions
do not hold, the index of the operator $\L _1$ is $d_0+d_\infty-d$.
In the case of the KW equations with Nahm pole boundary conditions, $d_0=d_\infty=d/2$ and therefore the index of $\L _1$ is 0. Hence if the kernel of $\L _1$
vanishes -- as shown in section \ref{vanishing} -- then the cokernel also vanishes. The fact that $d_0=d/2$ was explained in section \ref{nonregular}, basically as
a consequence of the fact that the indicial roots of $\varphi_a,a_y$ are the negatives of those of $a_a,\varphi_y$ (with some care when some roots vanish).
Since $B(y,\vec k)$ can be approximated by $B_1(\vec k)$ for $y\to\infty$, solutions of $\L _1F(y)=0$ that are square-integrable for $y\to \infty$ correspond
to positive eigenvalues of $B_1(\vec k)$. So to show that $d_\infty=d/2$, we must
show that $B_1(\vec k)$ has equal numbers of positive and negative eigenvalues. This follows from the fact that $B_1(\vec k)$ is the symbol of the $\d+\d^*$ operator; its
square is $|\vec k|^2$, so its eigenvalues are $\pm |\vec k|$, and as it is traceless, precisely half of the eigenvalues are positive.
Alternatively, by rotation symmetry, the number of positive eigenvalues of $B_1(\vec k)$ is invariant under $\vec k\to -\vec k$; but since $B_1(-\vec k)=-B_1(\vec k)$,
it has equally many positive and negative eigenvalues.
We have omitted various details here,
since we turn next to a more direct (and much more widely applicable) proof of the vanishing of the cokernel of the linearized KW operator $\LKW$.
There are two specific reasons to have included the above material.
First, this explains why the counting of positive
indicial roots in section \ref{nonregular} is important. Second, even without a direct calculation of the index of the operator $\L _1$,
the fact that the problem can be formulated in terms of the vanishing of
the cokernel of this 1-dimensional operator will make it easy to go from 4 to 5 dimensions in section \ref{extension}.
\subsection{Pseudo Skew-Adjointness}\label{pseudo}
Inspection of the equations (\ref{lembo}) and (\ref{wembo}) that determine the indicial roots shows that these roots (in the gauge (\ref{zurimo}), which we assume in what
follows) are odd under exchange of $a_a,\varphi_y$ with $\varphi_a,a_y$. We make this exchange via the linear transformation
\begin{align}\label{roggo}N\begin{pmatrix}a_a \cr \varphi_y\end{pmatrix}&=\begin{pmatrix} \varphi_a \cr a_y \end{pmatrix} \cr
N\begin{pmatrix} \varphi_a \cr a_y\end{pmatrix}&=-\begin{pmatrix} a_a \cr \varphi_y \end{pmatrix}.\end{align}
The minus sign in the second line does not affect what we have said so far, but will be important shortly. Taking this minus sign into account, we have
\begin{equation}\label{donzott}N^2=-1,~~~~N^\dagger=-N ,\end{equation}
where $N^\dagger$ is the transpose of $N$ in the usual basis given by $a_i$ and $\varphi_i$, or more invariantly the
adjoint of $N$ with respect to the quadratic form
\begin{equation}\label{elbo} -\Tr\,\sum_{i=1}^4\left( a_i^2+\varphi_i^2\right).\end{equation}
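In block form, grouping the fields into the quadruplets $u=(a_a,\varphi_y)$ and $v=(\varphi_a,a_y)$, eqn.\ (\ref{roggo}) simply says that
\begin{equation}
N\begin{pmatrix}u\cr v\end{pmatrix}=\begin{pmatrix}0&-1\cr 1&0\end{pmatrix}\begin{pmatrix}u\cr v\end{pmatrix}=\begin{pmatrix}-v\cr u\end{pmatrix},
\end{equation}
from which (\ref{donzott}) is immediate.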
Perturbations of the Nahm pole solution that depend only on $y$ are governed by an equation that we schematically write
$\L _1(0)\Phi=0$, where $\Phi$ combines all the fields and
\begin{equation}\label{zorbo}\L _1(0)=\frac{\d}{\d y}+\frac{B_0}{y} \end{equation}
is obtained from $\L _1(\vec k)$ (eqn.\ (\ref{redy})) by setting $\vec k=0$. The matrix $B_0$ can be read off from (\ref{lembo}) and (\ref{wembo}) (which are derived from the equation $\L _1(0)\Phi=0$
by replacing $\d/\d y$ with $\lambda/y$), and by inspection we see that $B_0$ is a real symmetric matrix in the usual basis, or in other words is real and self-adjoint
for the quadratic form (\ref{elbo}). On the other hand, $\d/\d y$ is skew-adjoint. The adjoint of $\L _1(0)$ is thus
\begin{equation}\label{morbo} \L _1(0)^\dagger=-\frac{\d}{\d y} +\frac{B_0}{y}.\end{equation} Since the indicial roots are the eigenvalues of $-B_0$,
the statement that the matrix $N$ reverses the sign of the indicial roots is equivalent to $N B_0=-B_0N$. We can combine this with
(\ref{morbo}) as the statement that
\begin{equation}\label{orbo} \L _1(0)^\dagger=- N \L _1(0)N^{-1}. \end{equation}
So far we have just reformulated the symmetry that changes the sign of the indicial roots. It turns out, however, that (\ref{orbo}) holds without change
for $\vec k\not=0$:
\begin{equation}\label{porbo}\L _1(\vec k)^\dagger = -N\L _1(\vec k)N^{-1}. \end{equation}
Once one knows that the kernel of $\L _1(\vec k)$ is trivial, it immediately follows from (\ref{porbo}) that the cokernel of this operator is also trivial.
Indeed, the cokernel of $\L _1(\vec k)$ is the kernel of $\L _1(\vec k)^\dagger$, but (\ref{porbo}) implies that the kernel of $\L _1(\vec k)^\dagger$ is obtained
by acting with $N$ on the kernel of $\L _1(\vec k)$.
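In more detail, if $\L _1(\vec k)F=0$, then (\ref{porbo}) gives
\begin{equation}
\L _1(\vec k)^\dagger\left(NF\right)=-N\L _1(\vec k)N^{-1}NF=-N\L _1(\vec k)F=0,
\end{equation}
and since $N$ is invertible, it identifies the kernel of $\L _1(\vec k)$ with the kernel of $\L _1(\vec k)^\dagger$, that is, with the cokernel of $\L _1(\vec k)$.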
Since eqn.\ (\ref{porbo}) holds for any $\vec k$, this statement can be formulated without introducing momentum eigenstates. If $\L $ is the linearization of the KW equations
around the Nahm pole solution, then
\begin{equation}\label{homebo}\L ^\dagger =-N \L N^{-1}, \end{equation}
a property that we describe by saying that $\L $ is pseudo skew-adjoint.
We can write $\L =\partial_y +B$, where $B$ is a self-adjoint\footnote{Self-adjointness of $B$ is not hard to verify by inspection, and is clear in the relation \cite{WittenKtwo}
of the KW equations on $W\times \I$, for a one-manifold $\I$, to gradient
flow equations for the complex Chern-Simons functional on $W$. Linearization of the gradient flow equation for any Morse function $h$ on a Riemannian manifold $X$
always produces
a differential operator $\d/\d y+B$, where $B$ (which is derived from the matrix of second derivatives of the function $h$ and the metric of $X$) is self-adjoint. In the present example, $X$ is
essentially the space of complex-valued connections on the bundle $E_\C\to W$, where $E_\C$ is the complexification of $E$, and $h$ is the imaginary
part of the Chern-Simons
functional of such a connection.} first-order differential operator that contains derivatives only along $W$. Then $\L ^\dagger=-\partial_y+B$
and (\ref{homebo}) amounts to the statement that
\begin{equation}\label{zomebo} N BN^{-1}=-B. \end{equation}
Another equivalent statement is that $\t \L =N\L $ is actually self-adjoint.
These assertions hold in much greater generality than perturbing around the Nahm pole solution on $\R^3\times \R_+$. They hold, as we will see, in perturbing
around any solution of the KW equations on $W\times \I$ for any oriented three-manifold $W$ and one-manifold $\I$ (endowed with a product metric
$g_{ab}\d x^a \d x^b+\d y^2$), provided only that $\phi_y$ (which as usual is the component of $\phi$ in the $\I$ direction) vanishes.
This claim can be verified by inspection of the linearized KW equations. In doing this, to minimize clutter,
we write simply $A, \phi$ (rather than $A_{(0)}, \phi_{(0)}$ as in section
\ref{background}) for a solution of the KW equations about which we wish to perturb. As usual, we denote the perturbations about this solution
as $a,\varphi$, so we consider the condition that $A+s a, \phi+s\varphi$ obeys the KW equations to first order in the small parameter $s$. The symbol $D_i$ will
denote a covariant derivative defined using the unperturbed connection $A$ (and the Levi-Civita connection of $W$ if $W$ is not flat). As already explained,
we assume that the solution about which we expand obeys $\phi_y=0$, and we describe the background in the gauge $A_y=0$. However, the gauge condition
that we impose on the fluctuations is that of eqn.\ (\ref{zurimo}):
\begin{equation}\label{consoo} D_i a^i+[\phi_a,\varphi^a]=0. \end{equation}
Given that $A_y=0$, this is equivalent to
\begin{equation}\label{bonso} \frac{\partial a_y}{\partial y} + D_a a^a+[\phi_a,\varphi^a]=0. \end{equation}
Now let us compare this gauge condition to the linearization of one of the KW equations, namely the condition $D_i\phi^i=0$. With $A_y=\phi_y=0$, the linearization
of this equation gives
\begin{equation}\label{conso}\frac{\partial \varphi_y}{\partial y} + D_a\varphi^a -[\phi_a,a^a]=0. \end{equation}
When we transform $a$ and $\varphi$ via (\ref{roggo}), these two equations are exchanged except that the signs are reversed
for all terms not proportional to $\partial_y$, as predicted in eqn.\ (\ref{zomebo}).
The other KW equations $F_{ij}-[\phi_i,\phi_j]+\epsilon_{ijkl}D^k\phi^l=0$ behave similarly. We write the linearization of
these equations in detail\footnote{Our orientation convention
is such that the antisymmetric
tensors $\epsilon_{ijkl}$ and $\epsilon_{abc}$ obey $\epsilon_{abcy}=\epsilon_{abc}=-\epsilon_{yabc}$. } in a way adapted to the split $M=W\times \I$:
\begin{align}\label{tellme}\partial_y a_a-D_a a_y-[\varphi_y,\phi_a]-\epsilon_{abc}D^b\varphi^c-\epsilon_{abc}[a^b,\phi^c] & = 0 \cr
\partial_y\varphi_a+[a_y,\phi_a]-D_a\varphi_y-\epsilon_{abc}D^b a^c+\epsilon_{abc}[\phi^b,\varphi^c] &=0. \end{align}
Again when we transform $a$ and $\varphi$ via (\ref{roggo}), these two equations are exchanged except that the signs are reversed
for all terms not proportional to $\partial_y$.
\subsection{Extension To Five Dimensions}\label{extension}
As explained in section \ref{exfive}, the Nahm pole solution can also be used to define a boundary condition on certain elliptic differential equations in five
dimensions that are expected to be relevant to Khovanov homology. We simply replace $\R^3\times \R_+$ by $\R\times \R^3\times\R_+$, where the first
factor is parametrized by a new ``time'' coordinate $x^0$. We
reinterpret $\phi_y$ as the component $A_0$ of the connection in the $x^0$ direction, and replace $[\phi_y,\,\cdot\,]$ with $D/D x^0$.
The equations acquire a rotation symmetry
in $\R^4=\R\times \R^3$.
For the Nahm pole boundary condition in five dimensions to be elliptic, the linearization $\h \L $ of the five-dimensional equations
around the Nahm pole solution must have trivial kernel and cokernel. Using the translation symmetries of $\R^4$, we can consider momentum eigenstates,
proportional to $\exp\left(i\sum_{j=0}^3 k_jx^j\right)$ with a real four-vector $k=(k_0,\dots,k_3)$. Acting on wavefunctions of this kind, $\h \L $ becomes a 1-dimensional operator
$\h \L _1(k)$, acting on functions that depend only on $y$, and it suffices to show that for $k\not=0$, $\h \L _1(k)$ has trivial kernel and cokernel.
Using the rotation symmetries of $\R^4$, we can assume that the ``time'' component $k_0$ of $k$ vanishes. In that case, we are dealing with a time-independent perturbation.
By definition, the five-dimensional equations reduce to the KW equations in the time-independent case (with $A_0$ interpreted as $\phi_y$), so for $k_0=0$,
$\h \L _1(k)$ coincides precisely with the
corresponding operator $\L _1(\vec k)$ of the four-dimensional problem.
But we already know that the kernel and cokernel of $\L _1(\vec k)$ vanish;
the kernel vanishes by the vanishing result of section \ref{vanishing}, and the cokernel vanishes because of pseudo skew-adjointness. So in expanding around the
Nahm pole solution in five dimensions, the kernel and cokernel of $\hat \L $ vanish, as we aimed to show.
\section{The Nahm Pole Boundary Condition On A Four-Manifold}\label{fourmanifold}
So far, we have described the Nahm pole boundary condition for certain four- or five-dimensional equations
on a half-space $\R^4_+$ or $\R^5_+$. The purpose of the present
section is to explore the Nahm pole boundary condition on a general manifold with boundary.
In section \ref{induced}, we formulate the Nahm pole boundary condition for the KW equation on an oriented
four-manifold $M$ with boundary $W$.
The five-dimensional case is similar but will not be described
here.
Once the KW equation with
Nahm pole boundary condition is defined on a four-manifold with boundary, one can inquire about the index of the linearization $\L$
of this equation. Assuming certain foundational results that we postpone to section \ref{analysis} (such as the fact that $\L$ is Fredholm),
a simple formal computation that we present in section \ref{indexcalc} determines this index.
The analogous index problem on a five-manifold with boundary will not be treated in the present paper.
The index of an elliptic operator on a five-manifold without boundary is always 0, but this is not necessarily the case on a five-manifold with boundary.
\subsection{Boundary Conditions On The Connection}\label{induced}
For a homomorphism $\varrho:\frak{su}(2)\to \frak g$, let us say that $\varrho$ is quasiregular if $j_\sigma=0$ does
not occur in the decomposition of $\frak g$ or equivalently if the commutant $C$ is a finite group. (This is so if $\varrho$ is principal; for additional examples see Appendix \ref{groups}.)
On a half-space, for quasiregular $\varrho$,
the Nahm pole boundary condition on the KW equations implies that the connection $A$ vanishes along the boundary, since
the relevant indicial roots are strictly positive.
More generally, on any four-manifold $M$ with boundary $W=\del M$,
the leading order behavior of the connection $A$ along $W$ is
coupled by the KW equations to the leading order behavior of $\phi$. In particular, if ${\KW}(A,\phi) = 0$ and
$\phi$ and $A$ have expansions
$\phi = y^{-1}(\sum \frak t_a \d x^a) + \dots$, $A = A_{(0)} + \dots$ near the boundary, then for quasiregular
$\varrho$, the restriction $A_{(0)}$ of the connection to
$W$ is uniquely determined, as we will explain. In expanding the solution near $y=0$ and analyzing $A_{(0)}$, we
implicitly use the regularity theorem of section \ref{nonlinreg}, that solutions $(A,\phi)$ have asymptotic
expansions as $y \to 0$. In the calculations below we only use the first few terms of this expansion.
The KW equations determine an entire sequence of relationships between the higher coefficients in the expansion of $\phi$ and $A$, with a certain
number of these left undetermined because of the gauge freedom. However, we focus
here on the leading order relationships, which signify the rigidity of the Nahm pole boundary condition.
The main subtlety is to understand the generalization of the formula $\phi=\sum_a\frak t_a\d x^a/y+\dots$ to the case that
the boundary manifold $W$ is not flat. For this,
we view $\phi$ as a section of the bundle $\mathrm{Hom}(TW, \ad(E))$. For a point $\vec x\in W$,
let $\{e_a\}$ be any orthonormal basis of $T_{\vec x}W$.
Suppose that the leading term in the expansion of $\phi$ is of order $y^{-1}$. This leading term must be $\sum_a \frak t_a e_a^*/y$
for some $\frak t_a\in {\mathrm{ad}}(E)_{\vec x}$, and the $y^{-2}$ term in the first of eqns. (\ref{kwe}) below implies that the $\frak t_a$
satisfy the commutation relations of $\frak {su}(2)$. The classification of $\frak{su}(2)$ subalgebras of $\frak g$ up to conjugacy is discrete
and hence the conjugacy class of $\frak t_a$ is independent of $\vec x$; this is the conjugacy class of some homomorphism $\varrho:\frak{su}(2)\to
\frak g$. It also follows from the theory of $\frak{su}(2)$ that $\phi_{\varrho}=\sum_a \frak t_a e_a^*$, viewed as a homomorphism from $T_{\vec x}W$ to
the subspace of $\mathrm{ad}(E)_{\vec x}$ spanned by the $\frak t_a$, is an isometry.
For $G ={SO}(3)$ or ${SU}(2)$, assuming that $\varrho\not=0$,
the $\frak t_a$ span all of $\mathrm{ad}(E)$ and we have learned that the polar
part of $\phi$ determines an isomorphism between $\mathrm{ad}(E)$ and $TW$. (We assume that $\varrho\not=0$ to avoid many
exceptions in the following remarks, but the discussion below of the non-quasiregular case applies in particular to $\varrho=0$.)
For $G={SO}(3)$, a knowledge of
$\mathrm{ad}(E)$ is equivalent to a knowledge of the principal bundle $E$. For $G=SU(2)$, this is not quite true; the possible
choices of $E$ -- once the identification of $\mathrm{ad}(E)$ with $TW$ is known -- correspond to spin structures on $W$.
For $G$ of higher rank, the full story is more complicated, and includes the possibility of twisting by a $C$-bundle,
where $C$ is the commutant of $\varrho(\frak{su}(2))$ in $G$. The possibility of this twisting will be reflected in eqn.\ (\ref{zub}) below.
We henceforth fix $\phi_{\varrho}$ and consider any solution pair $(A,\phi)$ with Nahm pole given by $\phi_\varrho=\sum_a \frak t_a e_a^*$.
We assume that both $\phi$ and $A$ are polyhomogeneous (this will be proved in section \ref{nonlinreg}),
where $\phi$ has leading term $\phi_{(0)} = y^{-1} \phi_{\varrho}$ and $A$ has leading term $A_{(0)}$.
Insert the expansions for $A$ and $\phi$ into the two equations in \eqref{zobo} and collect the terms with like powers.
We discuss the coefficients of the powers $y^{-2}$ and $y^{-1}$ in turn.
We first discuss the quasiregular case, which means that
\begin{equation}
\phi \sim y^{-1}\phi_{\varrho} + y\varphi_1 + \ldots, \qquad A \sim A_{(0)} + y a_1 + \ldots.
\end{equation}
We compute
\begin{equation}
\begin{split}
F_A &= \mathcal O(1), \\
\phi \wedge \phi & = y^{-2} \phi_{\varrho} \wedge \phi_{\varrho} + \mathcal O(1), \\
\star \d_A \phi & = - y^{-2} \star (\d y \wedge \phi_{\varrho}) + y^{-1} \star \d_{A_{(0)}} \phi_{\varrho} + \mathcal O(1), \\
\d_A \star \phi & = - y^{-2} \d y \wedge \star \phi_{\varrho} + y^{-1} \d_{A_{(0)}}(\star \phi_\varrho)+ \mathcal O(1),
\end{split}
\end{equation}
so the KW equations become
\begin{equation}\label{kwe}
\begin{split}
- y^{-2}( \star \d y \wedge \phi_\varrho + \phi_\varrho \wedge \phi_\varrho) + y^{-1} (\star \d_{A_{(0)}} \phi_\varrho)
+ \ldots & = 0 \\ - y^{-2} \d y \wedge \star \phi_{\varrho} + y^{-1} \d_{A_{(0)}}(\star \phi_\varrho) + \ldots & = 0.
\end{split}
\end{equation}
The gauge condition \eqref{elob} does not include any terms with $y^{-2}$ or $y^{-1}$. The coefficient of $y^{-2}$ in the
first of eqns. (\ref{kwe}) is just Nahm's equation, forcing the $\frak t_a$ to generate an $\frak{su}(2)$ subalgebra of $\mathrm{ad}(E)$,
while in the second, $\star \phi_{\varrho}$ already contains a $\d y$,
so both terms vanish.
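Concretely, writing $\phi_\varrho=\sum_a\frak t_a\,e_a^*$ in an orthonormal coframe and adopting the orientation convention $\epsilon_{abcy}=\epsilon_{abc}$ used earlier (a different convention changes only an overall sign), the quantity that must vanish at order $y^{-2}$ in the first equation is
\begin{equation}
\star(\d y\wedge\phi_\varrho)+\phi_\varrho\wedge\phi_\varrho=\frac{1}{2}\left([\frak t_a,\frak t_b]-\epsilon_{abc}\frak t_c\right)e_a^*\wedge e_b^*,
\end{equation}
and its vanishing is precisely the statement that $[\frak t_a,\frak t_b]=\epsilon_{abc}\frak t_c$, that is, that the $\frak t_a$ generate an $\frak{su}(2)$ subalgebra.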
The coefficients of $y^{-1}$ reduce to
\begin{equation}
i)\ \d_{A_{(0)}} \phi_\varrho = 0, \qquad \mbox{and}\quad ii)\ \d_{A_{(0)}} \star \phi_{\varrho} = 0
\label{harmonicphi}
\end{equation}
Since $\star \phi_\varrho$ along $W$ is proportional to $\d y$, eqn.\ $ii)$ involves only tangential derivatives and does not
involve the $ y$ component $A_{(0)}$. Equation $i)$, on the other hand, may have a $\d y$ term, but let us first
examine its pullback to $W$. This restricted equation implies that $\phi_\varrho$
intertwines the Levi-Civita connection on $TW$ and the connection $A_{(0)}$ on $\ad(E)$. Indeed, this equation shows
that the part of this connection induced on the image of $\phi_\varrho$ is torsion-free, and since it is a $\frak g$ connection,
it is also compatible with the Killing metric on $\ad(E)$. Hence its pullback to $TW$ must be the Levi-Civita connection:
i.e.\ in terms of the orthonormal frame $e_a$,
\begin{equation*}
\phi_{\varrho}\left( \nabla_{e_a} e_b \right) = \nabla_{e_a} \left(\phi_\varrho(e_b)\right)
\end{equation*}
for all $a, b$. Equivalently, with respect to the product connection on $T^*W \times \ad(E)$, $\nabla \phi_{\varrho} = 0$.
The second equation in \eqref{harmonicphi} is then automatically satisfied. Eqn.\ $i)$ further implies that $A_{(0)}$ is
valued in $\varrho(TW)\subset \mathrm{ad}(E)$; its projection onto the orthocomplement of this space vanishes. Thus,
if $\varrho$ is quasiregular or equivalently if the commutant $C$ is a finite group, the restriction of
$\mathrm{ad}(E)$ to $W$ and its connection $A_{(0)}$ are uniquely determined locally in terms of $TW$ with its Levi-Civita connection.
(Globally, depending on $C$, there may be some discrete choices that generalize
the choice of a spin structure for $G=SU(2)$.)
The discussion up to this point is independent of how $\phi_\varrho$ or $A_{(0)}$ is extended into $M$. The $\d y$ component
of eqn.\ $i)$ in \eqref{harmonicphi} states that $\nabla^{A_{(0)}}_{\del_y} \phi_\varrho = 0$ at $y=0$. This is not only a gauge-invariant
condition, but it is also independent of the extension of $A_{(0)}$ into the interior. In particular, if we choose the gauge so that $(A_{(0)})_y = 0$,
then $\del_y \phi_\varrho = 0$ (which vindicates the fact that we have omitted the $y^0$ term in the expansion for $\phi$).
In any case, the normal derivative of $\phi_\varrho$ at $W$ is pure gauge. The remaining coefficients in the expansions of $A$
and $\phi$ are then determined by these leading coefficients
and the successive equalities determined by the vanishing of the coefficients of each $y^\lambda$. However, it is increasingly hard
to extract information from the higher terms; even the coefficients of $y^0$ are not so easy to interpret.
Finally, consider the non-quasiregular case. Let $C$, with Lie algebra $\frak c$, be the
commutant in $G$ of $\varrho(\frak{su}(2))$. The KW equations
would allow the above description of $\phi$ and $A$ to be modified by $\frak c$-valued terms that would be $\O(1)$ for $y\to 0$.
However, as in section \ref{gencase}, we pick an arbitrary subgroup $H\subset C$, with Lie algebra $\frak h$, and we write $\frak h^\perp$
for the orthocomplement of $\frak h$ in $\frak c$. Then
we restrict the expansion to take the form
\begin{equation}\label{zub}
\phi \sim y^{-1}\phi_{\varrho} + \varphi_0 + \ldots, \qquad A \sim A_{(0)} + a_0 + \ldots,
\end{equation}
where $(\varphi_0)_a \in \frak h^\perp$ and $(\varphi_0)_y, a_a \in \frak h$, and $\phi_{\varrho}$, $A_{(0)}$ are as above.
(In general, in the non-quasiregular case, the next term in the expansion is of order $y^{1/2}$.) In this expansion, we have set to
0 the $\frak h^\perp$-valued part of $a_a, \,(\varphi_0)_y$ and the $\frak h$-valued part of $(\varphi_0)_a$, and
then, in an appropriate global setting, the KW equations will determine globally the $\frak h$-valued part of $a_a,\,(\varphi_0)_y$
and the $\frak h^\perp$-valued part of $(\varphi_0)_a$. The justification for this assertion is provided in section \ref{analysis}, where we show that the boundary problem just stated
is well-posed.
Calculating the first few terms in the expansions of $F_A$, $\phi \wedge \phi$, $\star \d_A \phi$ and $\d_A \star \phi$ as before,
we see that the coefficient of $1/y^2$ in ${\KW}(A,\phi)$ vanishes just as before. The $y^{-1}$ coefficient in the term $\phi \wedge \phi$ now
equals $\phi_\varrho \wedge \varphi_0$. This involves terms of the form
$[\frak t_a, (\varphi_0)_b]$ or $[\frak t_a, (\varphi_0)_y]$, and these vanish since
$\varphi_0$ is valued in the commutant $ \frak c$ of $\varrho(\frak s \frak u(2))$. Similarly, $A_{(0)}$ must be replaced by $A_{(0)} + a_0$
in each of the two equations in \eqref{harmonicphi}, but
again this does not modify the above considerations, since $a_0$ is valued in $\frak c$.
\subsection{The Index}\label{indexcalc}
Here, assuming foundational results proved in section \ref{analysis}, we calculate the index of the linearization $\LKW$ of the KW equations.
We do this first on a closed four-manifold, and then on a compact
four-manifold with boundary, with arbitrary Nahm pole boundary conditions. The first case is included
simply to isolate the contribution from the interior topology of $M$. The computation in the second
case relies on a short argument to show that the index is actually independent of the choice of Nahm pole
boundary condition, or in other words, of the representation $\varrho$. This reduces the computation
to one for the special case $\varrho = 0$, where the computation reduces to a well-known one.
\begin{prop}
Let $(M^4,g)$ be closed, and suppose that ${\KW}(A_{(0)} ,\phi_{(0)}) = 0$. Writing ${\LKW}$ for the linearization of $\KW$ at
this solution, then
\begin{equation*}
\mathrm{ind} ({\LKW}) = - (\dim \frak g ) \, \chi(M).
\end{equation*}
\label{index1}
\end{prop}
We have already remarked in section \ref{background} that the symbol of $\LKW$ is the same as that of the twisted
Gauss-Bonnet operator $(\d + \d^*) \otimes \mathrm{Id}_{\ad(E)} : \wedge^{\mathrm{odd}}M \otimes \ad(E) \to
\wedge^{\mathrm{even}}M \otimes \ad(E)$. The formula here follows directly from the fact that the
index of $\d + \d^*$, acting from even forms to odd forms, equals $\chi(M)$, the Euler characteristic of $M$. (Twisting by $E$ does
not affect the index of the twisted Gauss-Bonnet operator even
if $E$ is topologically non-trivial.)
\begin{prop}
Let $(M^4,g)$ be a manifold with boundary, with $g$ cylindrical near $\del M$.
Then fixing the Nahm pole boundary conditions at $\del M$
with $\varrho = 0$ and $\frak h = \{0\}$, the index of the linearization about any solution is given by
\begin{equation*}
\mathrm{ind}({\LKW}) = - (\dim \frak g) \, \chi(M).
\end{equation*}
\label{index2}
\end{prop}
This choice of Nahm pole condition is the same as the absolute boundary condition for $\d + \d^*$.
This is again a classical formula. We could equally well have chosen $\frak h = \frak g$, corresponding
to relative boundary conditions for the Gauss-Bonnet operator, in which case the index equals $- (\dim \frak g) \, \chi(M, \del M)$,
but by Poincar\'e duality, $\chi(M, \del M) = \chi(M)$. This is a special case of the next result,
which is the main one of this section.
\begin{prop}
Let $(M^4,g)$ be an arbitrary compact manifold with boundary, with $g$ cylindrical near $\del M$, and fix any choice of Nahm pole boundary
condition $\varrho$ at $W = \del M$. Then once again
\begin{equation*}
\mathrm{ind}({\LKW}) = - (\dim \frak g) \, \chi(M).
\end{equation*}
\label{index3}
\end{prop}
We prove this in two steps. First consider the special case where $M = W \times I$ with a product metric, and with Nahm pole
boundary condition given by any $\varrho$ at $y = 0$ and with trivial Nahm pole boundary condition ($\varrho = 0$,
$\frak h = 0$) at $y = 1$. We see immediately, using the pseudo skew-adjointness property of $\LKW$ in this product setting,
that the index vanishes.
Now let $M$ and $\varrho$ be arbitrary and denote by ${\LKW}_\varrho$ the linearized KW operator
about any solution satisfying the Nahm pole boundary condition associated to $\varrho$, and similarly let
${\LKW}_{\mathrm{rel}}$ denote the linearized KW operator relative to $\varrho = 0$ and $\frak h = \frak g$,
i.e.\ with relative boundary conditions. We apply a standard excision theorem for the index, for example \cite[Prop 10.4]{BBW},
which shows that
\begin{equation*}
\mathrm{ind}({\LKW}_\varrho) - \mathrm{ind}({\LKW}_{\mathrm{rel}}) = \mathrm{ind}({\LKW}_{\varrho, \mathrm{rel}}),
\end{equation*}
where the operator on the right is the linearized KW operator on the cylinder $\del M \times I$ with Nahm
pole boundary condition $\varrho$ at one end and with relative boundary conditions at the other end.
We have already shown that the index on the right vanishes, whence the claim.
\section{Analytic Theory}\label{analysis}
We now turn to a more careful description of the analytic theory which underpins many of the preceding
considerations. More specifically, we briefly describe some aspects of the theory of linear elliptic uniformly degenerate
equations, all taken from \cite{M-edge} and \cite{MVer}, explain how the results and calculations obtained above
fit into this theory, and then prove a regularity theorem for solutions of the nonlinear KW equations which
justifies the calculations in the uniqueness theorem.
Let us make two comments before proceeding. The first is that in the special case where $\varrho = 0$, the
Nahm pole boundary condition reduces to a classical elliptic boundary problem, and it is well-known that
solutions are then smooth up to $W$. The theory described below is the natural extension of those ideas
which allows us to handle the case where $\varrho \neq 0$ and ${\LKW}$ is no longer a uniformly elliptic operator.
The second is that to keep the exposition simpler, we focus on the problem in four dimensions.
All of the theory below generalizes immediately to the linearized problem in five dimensions, as does the
application of these results to the regularity of solutions of the nonlinear equations as in section \ref{nonlinreg}.
The changes required are strictly notational.
\subsection{Uniformly Degenerate Operators}\label{unifdegg}
Let $M$ be a manifold with boundary, and choose coordinates $(\vec x, y)$ near a boundary point, where
$\vec x \in U \subset \R^{n-1}$ and $y \geq 0$. A differential operator ${\LKW}_0$ is called uniformly degenerate if in any
such coordinate chart near the boundary it takes the form
\begin{equation}\label{unifdeg}
{\LKW}_0 = \sum_{j + |\alpha| \leq m} A_{j \alpha}(\vec x, y) (y\del_y)^j (y\del_x)^{\alpha};
\end{equation}
here $(y\del_x)^\alpha = (y\del_{x^1})^{\alpha_1} \ldots (y\del_{x^{n-1}})^{\alpha_{n-1}}$. Such operators are also
called $0$-differential operators. The key point in this definition is that every derivative is accompanied by a factor
of $y$. In our setting, the order $m$ equals $1$ and the coefficients $A_{j \alpha}$ are matrices. Observe that a
uniformly degenerate operator can never be uniformly elliptic in the standard sense at $\partial M$
because all the coefficients of the highest order terms vanish when $y=0$. However, there is an extended notion of ellipticity
for such operators: ${\LKW}_0$ is said to be an elliptic uniformly degenerate operator if it is elliptic in the standard sense at
interior points, where $y > 0$, and if in addition, near points of the boundary, the matrix-valued polynomial obtained by replacing
each $y\del_{x^a}$ and $y\del_y$ with multiplication by the linear variables $-i k_a$ and $-i k_n$ is invertible when
$(k_1, \ldots, k_n) \neq 0$. (This formal replacement actually has an invariant meaning; see \cite{M-edge}.)
The linearized KW operator $\LKW$ is not quite of the form \eqref{unifdeg}; instead, $y\LKW = \LKW_0$ is an elliptic
uniformly degenerate operator. This is close enough so that the methods described here can be applied to its analysis equally well.
To put this into perspective, note that if $\Delta$ is the standard Laplacian on a half-space, then $y^2 \Delta$ is elliptic
uniformly degenerate, which indicates that the uniformly degenerate theory for the latter operator must therefore reflect the
well-known properties of the former. In other words, the theory described below subsumes and generalizes the classical
theory of boundary problems for nondegenerate elliptic operators. Unlike $\Delta = y^{-2}( y^2 \Delta)$, however,
the operator $\LKW = y^{-1}\LKW_0$ is still degenerate because of the presence of terms involving $1/y$; hence (contrary
to the study of $\Delta$), it is necessary to draw on this uniformly degenerate theory.
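To make this comparison concrete, for the flat Laplacian one checks directly, using $(y\del_y)^2 = y^2\del_y^2 + y\del_y$, that
\begin{equation*}
y^2 \Delta = y^2\Big( \del_y^2 + \sum_{a=1}^{n-1} \del_{x^a}^2 \Big) = (y\del_y)^2 - (y\del_y) + \sum_{a=1}^{n-1} (y\del_{x^a})^2,
\end{equation*}
which is of the form \eqref{unifdeg} with $m = 2$; replacing $y\del_y$ and $y\del_{x^a}$ by $-ik_n$ and $-ik_a$ in the leading
terms gives $-|k|^2$, which is invertible when $k \neq 0$, so $y^2\Delta$ is indeed elliptic in this extended sense.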
The mapping and regularity properties of solutions of an elliptic uniformly degenerate operator ${\LKW}_0$ hinge on the study of
two simpler model operators. The first of these:
\begin{equation}\label{normalop}
N({\LKW}_0) = \sum_{j + |\alpha| \leq m} A_{j \alpha}( \vec x, 0) (s \del_s)^j (s\del_{\vec w})^\alpha,
\end{equation}
is called the normal operator. This is invariantly defined (up to a linear change of variables) as an operator on the half-space $\R^n_+$,
naturally identified with the inward-pointing tangent space at the boundary point $(\vec x, 0) \in \partial M$.
To emphasize that it acts on functions defined on this entire half-space, rather than just on a coordinate chart, we write it
using the linear variables $s \geq 0$, $\vec w \in \R^{n-1}$, which are globally defined on this half-space. The global
behavior of $N({\LKW}_0)$ on $\R^n_+$ plays a central role in the analysis below. As a matter of notation, we
define the normal operator of the linearized KW operator as
\begin{equation}
\label{normalopL}
N({\LKW}) = s^{-1} N({\LKW}_0).
\end{equation}
Notice that $N({\LKW}_0)$ only depends on $\vec x \in \del M$ as a parameter; there is a different normal operator
at each point of the boundary, each of which is again uniformly degenerate and elliptic in this sense. This whole collection of
operators is called the normal family of $\LKW_0$. In some
cases, certain crucial features of each $N_{\vec x}(\LKW_0)$ vary with the parameter $\vec x$. Fortunately this does not
happen in our setting; the normal operators at different boundary points all `look the same', and so we shall usually
omit the dependence on $\vec x$. The normal operator $N({\LKW}_0)$ enjoys considerably more symmetries than
${\LKW}_0$ itself; namely, it is translation invariant in $w \in \R^{n-1}$ and invariant under dilations $(s,w) \mapsto (\lambda s,
\lambda w)$, $\lambda > 0$. Because of these symmetries, it is relatively elementary to study directly, and the goal of
this entire theory is to show that key properties of these normal operators carry over to ${\LKW}_0$ itself.
The second model operator is a further reduction, called the indicial operator
\begin{equation}\label{indicialop}
I({\LKW}_0) = \sum_{j \leq m} A_{j 0} (\vec x, 0) (s\del_s)^j,
\end{equation}
obtained from $N({\LKW}_0)$ by dropping all of the terms $(s\del_{w_j})^{\alpha_j}$ with $\alpha_j > 0$. Following
the convention above, we also write $I({\LKW}) = s^{-1}I({\LKW}_0)$, when $\LKW$ is the KW operator.
There is a further, purely algebraic, reduction of the indicial operator obtained by letting $I({\LKW}_0)$ act on the
elementary functions $s^\lambda$. This yields the indicial family:
\begin{equation}\label{indfam}
I({\LKW}_0, \lambda) = \sum_{j \leq m} A_{j0}(\vec x, 0) \lambda^j = s^{-\lambda} I({\LKW}_0) s^\lambda,
\end{equation}
where each $s\del_s$ has been replaced by a factor of $\lambda$.
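Returning to the example $y^2\Delta$ above, its indicial operator and indicial family are
\begin{equation*}
I(y^2\Delta) = (s\del_s)^2 - (s\del_s), \qquad I(y^2\Delta, \lambda) = \lambda^2 - \lambda = \lambda(\lambda - 1),
\end{equation*}
so the indicial roots are $0$ and $1$; these are precisely the orders at which the Dirichlet and Neumann data of a solution
appear in the classical boundary expansion.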
The reader will notice that the normal operator $N({\LKW})$ was effectively already introduced in section \ref{third}
when we considered the linearized KW operator at the special Nahm pole solution on $\R^4_+$. Moreover,
we also encountered the indicial family of the linearized KW operator in the matrices on the left hand side of \eqref{lembo}
and \eqref{wembo}. The indicial roots of ${\LKW}_0$ (or equivalently, of ${\LKW} = s^{-1}{\LKW}_0$) are the finite set of
values of $\lambda$ for which $I({\LKW}_0, \lambda)$ is not invertible. As we have seen, their computation
in our setting involves nontrivial algebraic subtleties.
For the rest of this discussion, let us consider only the case where ${\LKW}$ is the linearized KW operator.
Everything we say here has analogues for operators of higher order. With this assumption, one obvious
simplification is that the equation characterizing the indicial roots
is a simple (generalized) eigenvalue problem, namely that the matrix
\begin{equation}\label{indrts}
A_{10} \lambda + A_{00}
\end{equation}
has nontrivial kernel. Writing the fields on which ${\LKW}_0$ acts as $\Psi$, then corresponding to each
indicial root $\lambda$ there is an eigenvector $\Psi_\lambda$. Equivalently, there is a solution of
the indicial operator of the form $\Psi_\lambda s^\lambda$. In general the indicial roots
may depend on the basepoint $\vec x \in \partial M$, and this introduces substantial analytic
complications. Fortunately, in our case, this does not occur and we assume henceforth that
the indicial roots are constant in $\vec x$.
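To restate the eigenvalue problem \eqref{indrts} in more familiar terms: assuming that $A_{10}$ is invertible, as is the case
for the linearized KW operator considered here,
\begin{equation*}
\det\big( A_{10}\lambda + A_{00} \big) = 0 \quad \Longleftrightarrow \quad \lambda \in \mathrm{spec}\big( -A_{10}^{-1}A_{00}\big),
\end{equation*}
and $\Psi_\lambda$ is then a corresponding eigenvector of $-A_{10}^{-1}A_{00}$.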
The importance of these indicial roots can be explained at various levels. At the simplest level,
they provide the expected growth or decay rates of solutions to the equation ${\LKW}_0 \Psi = 0$.
There is no a priori guarantee that actual solutions to this linear PDE actually do grow or
decay at these precise rates, and the fact that they do in some cases is a regularity theorem.
The discussion in the earlier part of this paper assumes that these growth rates
are legitimate, and we are now in the process of showing that this is so for the linearized KW equation
with the Nahm pole boundary condition.
To proceed further, we pass from the normal operator $N({\LKW})$ to the same operator conjugated
by the Fourier transform in $\vec w$, just as in section \ref{index}. This leads to the matrix-valued
ordinary differential operator
\begin{equation}\label{ftnormal}
\widehat{N}({\LKW}) = A_{10} \del_s - i A_{0a} k^a + \frac{1}{s}A_{00}
\end{equation}
The factor of $i$ appears since the Fourier transform in $\vec w$ replaces each $\del_{w^a}$ by multiplication by $-ik^a$. There are two key
facts about solutions of this operator, each following from elementary considerations:
\begin{itemize}
\item[i)] Any solution $\hat \Psi(s,\vec k)$ to $\widehat{N}({\LKW}) \hat \Psi = 0$ either
decays exponentially or else grows exponentially as $s \nearrow \infty$;
\item[ii)] Any solution of this equation near $s = 0$ has a complete (and in fact convergent) asymptotic
expansion
\begin{equation}\label{expnorsoln}
\hat \Psi(s,\vec k) = \sum_\lambda \sum_{j=0}^\infty \hat \Psi_{\lambda j} s^{\lambda + j},
\end{equation}
where the first sum is over indicial roots of ${\LKW}$. (In exceptional cases, where the difference between different indicial
roots is an integer, this sum may include extra logarithmic factors. This actually happens in the case of the KW equations.)
\end{itemize}
The second of these assertions is an immediate consequence of the classical theory of Frobenius series of
solutions of equations with analytic coefficients near regular singular points. The first assertion is slightly more subtle
in that it depends on the ellipticity of the normal operator $N({\LKW})$. The dominant terms in \eqref{ftnormal} as $s \to \infty$
are the first two on the right. Dropping the third term $A_{00}$, we obtain the constant coefficient operator
$A_{10} \del_s - i A_{0a}k^a$, which has solutions of the form $\tilde \Psi_\lambda e^{\lambda s}$ where $\lambda$ and $\tilde \Psi_\lambda$
satisfy the algebraic eigenvalue equation $(A_{10} \lambda - i A_{0a}k^a )\tilde \Psi_\lambda = 0$. The fact that there are no solutions
of this equation with
purely imaginary $\lambda$ (or with $\lambda=0$, $\vec k\not=0$) is a restatement of the ellipticity, in the
ordinary sense, of the operator $A_{10} \del_s + A_{0a} \del_{w^a}$.
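For orientation, the scalar second order analogue of this dichotomy is transparent: conjugating the flat Laplacian
$\del_s^2 + \sum_a \del_{w^a}^2$ on the half-space by the Fourier transform in $\vec w$ gives
\begin{equation*}
\del_s^2 - |\vec k|^2,
\end{equation*}
whose solutions $e^{\pm |\vec k| s}$ either decay or grow exponentially as $s \nearrow \infty$, exactly as in i).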
Before proceeding with the formulation of boundary conditions, we recall the general notions of conormality and polyhomogeneity
of a field $\Psi$ near $\partial M$; these are simple and useful extensions of the notion of smoothness up to the
boundary of $\Psi$. We say that $\Psi$ is conormal of order $\lambda_0$, and write $\Psi \in \mathcal A^{\lambda_0}$,
if $y^{-\lambda_0}|\Psi| \leq C$ with a similar estimate for all its derivatives, i.e.\ $y^{-\lambda_0}| (y\del_y)^j (\del_{\vec x})^\alpha
\Psi| \leq C_{j \alpha}$ for all $j, \alpha$. Any such field is smooth in the interior of $M$,
but these estimates give only a very limited sort of smoothness near the boundary: for example, both $y^{\sqrt{-1}}$ and
$1/\log y$ lie in $\mathcal A^0$. A more tractable subclass consists of the space of polyhomogeneous fields $\Psi$.
Here $\Psi$ is said to be polyhomogeneous at $y=0$ if it is conormal and in addition has an asymptotic expansion
\begin{equation}
\Psi \sim \sum y^{\gamma_j} (\log y)^p \Psi_{jp}(\vec x).
\label{phg}
\end{equation}
The exponents $\gamma_j$ on the right lie in some discrete set $E \subset \mathbb C$, called the index set of $\Psi$,
which has the following properties: $\mathrm{Re}\, \gamma_j \to \infty$ as $j \to \infty$, the powers $p$ of
$\log y$ are all nonnegative integers, and there are only finitely many such log terms accompanying any $y^{\gamma_j}$.
Notice that the conormality of $\Psi$ implies that each
$\Psi_{jp}(\vec x) \in \mathcal C^\infty(\del M)$. The meaning of $\sim$ is the classical one for an asymptotic
expansion: namely,
\begin{equation*}
| \Psi - \sum_{j \leq N} y^{\gamma_{j}} (\log y)^p \Psi_{jp} | \leq C y^{\mathrm{Re} \gamma_{N+1}}(\log y)^q,
\end{equation*}
where the term on the right is the next most singular term in the expansion. The corresponding statement must
hold for the series obtained by differentiating any finite number of times. If the $\gamma_j$ are all nonnegative integers
and the log terms are absent, this is just the standard notion of smoothness up to $y = 0$.
Solutions of uniformly degenerate equations ${\LKW}_0 \Psi = 0$ are typically polyhomogeneous (at least in favorable
circumstances), but essentially never smooth in the classical sense. In our specific problem, the exponents
$\gamma_j$ are of the form $\gamma_j = j/2$, $j = 0, 1, 2, \ldots$;
log terms, if they appear at all, do not occur in the leading terms.
\subsection{Elliptic Weights}\label{ellweights}
We now turn to the various sorts of boundary conditions that can be imposed on the operator ${\LKW}$ and a description of what
makes a boundary condition elliptic (relative to ${\LKW}$). General types of boundary conditions can be local and of `mixed' Robin
type, or nonlocal, such as an Atiyah-Patodi-Singer type boundary condition. We shall focus, however, on the particular
local algebraic boundary conditions which arise in the Nahm pole setting. In this section we describe the simplest of these
boundary conditions, where ${\LKW}$ acts on fields with a prescribed rate of vanishing or blowup at $y = 0$. This is
analogous to a homogeneous Dirichlet condition (which is tantamount to considering solutions
which vanish like $y^\epsilon$ at the boundary for any $0 < \epsilon < 1$). This type of boundary condition is relevant in our
setting only when none of the $j_\sigma = 0$; in particular, this is the correct type of boundary condition when $\varrho$ is a
regular representation. This case is simpler to state, and considerably simpler to analyze, than the more general one when
some of the $j_\sigma=0$, which we come to only in section \ref{genebc}. As before, we continue to focus exclusively on the
linearized KW operator ${\LKW}$, or its uniformly degenerate associate ${\LKW}_0 = y{\LKW}$, although all the results below
have analogs for more general elliptic uniformly degenerate operators.
We shall study the action of $\LKW$ on weighted $0$-Sobolev spaces $y^{\lambda_0 + 1/2}H^k_0(M)$, and so we
start by defining these. First consider $y^{\lambda_0 + 1/2}L^2(M)$, which consists of all fields $\Psi = y^{\lambda_0 + 1/2} \Psi_1$
where $\Psi_1 \in L^2(M)$. The measure is always assumed to equal $\d\vec x \,\d y$ up to a smooth nonvanishing multiple.
Next, for $k \in \mathbb N$, let
\begin{equation*}
H^k_0(M) = \{ \Psi \in L^2(M): (y \del_{\vec x})^\alpha (y\del_y)^j \Psi \in L^2(M),\ \forall\ j + |\alpha| \leq k\},
\end{equation*}
and finally, define $y^{\lambda_0 + 1/2} H^k_0 = \{\Psi = y^{\lambda_0 + 1/2}\Psi_1: \Psi_1 \in H^k_0\}$. The spaces $s^{\lambda_0 + 1/2} H^k_0(\R^n_+)$
are defined similarly. The subscript $0$ on these Sobolev spaces indicates that they are defined relative to the $0$-vector fields
$y\del_y$ and $y\del_{x^a}$; it does not connote that the fields have compact support. A key feature of these spaces is that their norms have
a scale invariance coming from the invariance of $y\partial_y$ and $y\partial_{x^a}$ under dilations $(y,\vec x) \mapsto
(cy, c\vec x)$, $c > 0$. In fact, $N({\LKW}_0)$ does not act naturally on the more standard Sobolev spaces defined
using the vector fields $\partial_y$, $\partial_{x^a}$. The shift by $1/2$ in these weight factors is for notational convenience only and
corresponds to the fact that the function $y^{\lambda_0}$ lies in $y^{\lambda_0 + 1/2 + \epsilon}L^2$ locally near $y = 0$
when $\epsilon < 0$ but not when $\epsilon \geq 0$. In other words, $y^{\lambda_0}$ just marginally fails to lie
in $y^{\lambda_0+1/2}L^2$ (near $y=0$).
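This is just the elementary computation
\begin{equation*}
\int_0^{1} \big| y^{-(\lambda_0 + 1/2 + \epsilon)}\, y^{\lambda_0} \big|^2 \, \d y = \int_0^1 y^{-1-2\epsilon}\, \d y,
\end{equation*}
which is finite precisely when $\epsilon < 0$.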
It is evident that
\begin{equation}
{\LKW}: y^{\lambda_0 + 1/2}H^1_0(M) \longrightarrow y^{\lambda_0 - 1/2} L^2(M)
\label{map11}
\end{equation}
is a bounded mapping for any $\lambda_0 \in \R$. Notice that the weight on the right has dropped by $1$, reflecting that the operator
${\LKW}$ involves the terms $\del_y$ and $1/y$; if we were formulating this using ${\LKW}_0$, then it would be appropriate to
use the same weight on the left and the right. Our main concern is whether this mapping is Fredholm, i.e.\ has closed
range and a finite dimensional kernel and cokernel, and to describe the regularity of solutions of ${\LKW} \Psi = 0$ (or ${\LKW} \Psi = f$
for fields $f$ which have better regularity and decay as compared to $y^{\lambda_0 - 1/2}L^2$). It is not hard to show that \eqref{map11} does
not have closed range when $\lambda_0$ is an indicial root of $\LKW$. Indeed, in this case, an appropriate sequence of compactly
supported cutoffs of the function $y^{\lambda_0}$ can be used to create a Weyl sequence $\Psi_j$, i.e.\ an orthonormal sequence of fields
such that
\begin{equation*}
||\Psi_j||_{y^{\lambda_0+1/2}H^1_0} =1, \qquad || {\LKW} \Psi_j||_{y^{\lambda_0 - 1/2}L^2} \to 0.
\end{equation*}
Hence \eqref{map11} is certainly not Fredholm then. When $\lambda_0 \ll 0$, then \eqref{map11} has
an infinite dimensional kernel, while if $\lambda_0 \gg 0$, its cokernel is infinite dimensional. Thus the only chance for \eqref{map11}
to be Fredholm is when $\lambda_0$ is nonindicial and lies in some intermediate range. In some cases (as described in
section \ref{genebc}), it is not Fredholm for any weight $\lambda_0$. Closely related is the fact that fields in the kernel of \eqref{map11}
when $\lambda_0 \ll 0$ are, in general, not regular, i.e.\ polyhomogeneous; indeed, for such $\lambda_0$, most solutions are quite
rough at $\del M$. All of this motivates the following definition.
\begin{definition}\label{ellbc}
The weight $\lambda_0$ is called elliptic for the linearized KW operator ${\LKW}$ if its normal operator defines an invertible mapping:
\begin{equation}\label{noropmap}
N({\LKW}): s^{\lambda_0 + 1/2} H^1_0( \R^n_+; \d\vec w \,\d s) \longrightarrow s^{\lambda_0 - 1/2} L^2(\R^n_+; \d\vec w\, \d s).
\end{equation}
\end{definition}
The two main consequences of the ellipticity of a weight are stated in the following propositions.
\begin{prop}\label{fred1}
Let ${\LKW}$ be the linearized KW operator on a compact manifold with boundary $M$, and suppose that $\lambda_0$
is an elliptic weight. Then the mapping \eqref{map11} is Fredholm.
\end{prop}
Recalling that we are in the case where no $j_\sigma = 0$, let $\underline{\lambda}$ and $\overline{\lambda}$ be the
largest negative and smallest positive indicial roots of $\LKW$, respectively. Thus (see Appendix \ref{groups}) $\underline{\lambda} = -1$
and $\overline{\lambda} =1$. We assert that any $\lambda_0 \in (\underline{\lambda}, \overline{\lambda})$
is an elliptic weight. We shall prove this later; in fact, all of the necessary facts for the proof come from the
considerations in sections \ref{second} and \ref{third}. It will follow from this proof that if some $\lambda_0$ is an elliptic
weight, then so is any other $\lambda_0'$ which lies in a maximal interval around $\lambda_0$ containing
no indicial roots.
\begin{prop}\label{regularity1}
With all notation as above, let $\lambda_0$ be an elliptic weight for ${\LKW}$, and suppose that ${\LKW}\Psi = f \in y^{\lambda_0 - 1/2}L^2$
where $\Psi \in y^{\lambda_0 + 1/2} L^2$. If $f$ is smooth in a neighborhood of some point $q \in \del M$, or slightly more generally,
if it has a polyhomogeneous asymptotic expansion as $y \to 0$, then in that neighborhood, $\Psi$ admits a polyhomogeneous expansion
\begin{equation}
\Psi \sim \sum y^{\gamma_j} \Psi_j,
\label{expansion}
\end{equation}
where the exponents $\gamma_j$ are of the form $\lambda + \ell$, $ \ell \in \mathbb N$, where either $\lambda$
is an indicial root of ${\LKW}$ or else is an exponent occurring in the expansion of $f$.
\end{prop}
The expansion \eqref{expansion} may contain terms of the form $y^{\gamma_j} (\log y)^p$, $p > 0$. These can only appear when
the differences between indicial roots are integers (as happens in our setting), or when there is a coincidence between the
indicial roots of ${\LKW}$ and the terms in the expansion of the inhomogeneous term $f$. However, the key fact is simply
that $\Psi$ has an expansion at all; once we know this, then the precise terms in its expansion can be determined by matching
like terms on both sides of the equation ${\LKW} \Psi = f$.
The existence of such an asymptotic expansion for solutions should be regarded as a satisfactory replacement for smoothness
up to the boundary. For an ordinary (nondegenerate) elliptic operator ${\LKW}_0$, if the standard Dirichlet condition (requiring
solutions to vanish at $y=0$) is an elliptic boundary condition in the classical sense, then solutions of ${\LKW}_0 \Psi = 0$ with
$\Psi(0,\vec x) = 0$ are necessarily smooth and vanish to order $1$ at the boundary. Proposition \ref{regularity1} is the exact
analogue of this. For the linearized KW operator, the nonnegative indicial roots lie in the set $\{j/2: j = 0, 1, 2, \ldots\}$,
so if ${\LKW} \Psi = 0$ in some neighborhood of a boundary point, then
\begin{equation*}
\Psi \sim \sum y^{j/2} \Psi_j,
\end{equation*}
(we are neglecting log terms which might appear); in general, there are half-integral exponents, so $\Psi$ is genuinely not smooth at $y=0$.
We now verify the invertibility of \eqref{noropmap} for every $\lambda_0 \in (\underline{\lambda}, \overline{\lambda})$
in our particular example, which proves the assertion that every such $\lambda_0$ is an elliptic weight. First
conjugate $N({\LKW})$ with the Fourier transform in $\vec w$, thus passing to the simpler ordinary differential operator
$\widehat{N}({\LKW})$ as in \eqref{ftnormal}. We must show that
\begin{equation}
\widehat{N}({\LKW}): s^{\lambda_0+1/2} H^1_0(\R^+; \d s) \longrightarrow s^{\lambda_0 - 1/2} L^2( \R^+; \d s),
\label{noropmapft}
\end{equation}
is invertible for each $\vec k$, and that the norm of its inverse is bounded independently of $\vec k$.
The first step is to use the scaling properties of $\widehat{N}({\LKW})$ to reduce to the case $|\vec k| = 1$. Indeed,
set $t = s|\vec k|$ and write $B({\LKW}) = A_{10} \partial_t - i A_{0a} k^a/|\vec k| + \frac{1}{t}A_{00}$. (The ``$B$'' refers to the
fact that this operator has many features in common with the Bessel equation, and so we call $B({\LKW})$
the model Bessel operator of $\LKW$.) Applying this change of variables replaces $B({\LKW})$ by $|\vec k|^{-1} \widehat{N}({\LKW})$.
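Explicitly, since $\del_t = |\vec k|^{-1}\del_s$ and $t^{-1} = |\vec k|^{-1}s^{-1}$ under this substitution,
\begin{equation*}
B({\LKW}) = A_{10}\del_t - i A_{0a}\frac{k^a}{|\vec k|} + \frac{1}{t}A_{00}
= |\vec k|^{-1}\Big( A_{10}\del_s - i A_{0a}k^a + \frac{1}{s}A_{00}\Big) = |\vec k|^{-1}\, \widehat{N}({\LKW}).
\end{equation*}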
Suppose that we have already shown that the version of \eqref{noropmapft} with $B({\LKW})$ replacing $\widehat{N}({\LKW})$
is invertible for every $\vec k$ with $|\vec k| = 1$, and let $B(G)(t, t', \vec k)$ denote the Schwartz kernel of this inverse.
We then recover the Schwartz kernel $G(s,s', \vec k)$ of the inverse of \eqref{noropmapft} for any $\vec k \neq 0$ as
\begin{equation*}
\widehat{N}(G)(s, s', \vec k) = B(G)(s|\vec k|, s' |\vec k|, \vec k/|\vec k|).
\end{equation*}
To see that this is the case, we first compute that
\begin{equation*}
\begin{split}
\widehat{N}({\LKW}) \int & B(G)(s|\vec k|, s'|\vec k|, \vec k/|\vec k|) f(s', \vec k)\, \d s' =
\int (B({\LKW}) B(G)) (s|\vec k|, s'|\vec k|, \vec k/|\vec k|) |\vec k| f(s', \vec k) \, \d s' \\
& = \int \delta(s|\vec k| - s' |\vec k|) |\vec k| f(s', \vec k)\, \d s' = f(s, \vec k),
\end{split}
\end{equation*}
since the $\delta$ function in one dimension is homogeneous of degree $-1$. This result may seem counterintuitive
since one expects that $||\widehat{N}({\LKW})^{-1}|| \sim 1/|\vec k|$, but that expectation is false because we are
letting $\widehat{N}({\LKW})$ act between spaces with different weight factors. In fact, the norm of
$\widehat{N}(G)$ is bounded uniformly in $\vec k$, but does not decay as $|\vec k| \to \infty$. To this end,
observe that we must estimate the norm of
\begin{equation*}
H(s, s', \vec k) := s^{-\lambda_0 - 1/2} \widehat{N}(G)(s,s', \vec k) (s')^{\lambda_0 - 1/2}: L^2 \to L^2.
\end{equation*}
This is done by calculating
\begin{multline*}
\int \left| \int H(s, s', \vec k) f(s', \vec k)\, \d s' \right| ^2 \, \d s \\
= \int \left| \int B(G)(s|\vec k|, s'|\vec k|, \vec k/|\vec k|) (s|\vec k|)^{-\lambda_0 - 1/2} (s' |\vec k|)^{\lambda_0 - 1/2}
|\vec k| f(s', \vec k)\, \d s'\right|^2 \, \d s \\ = \int \left| \int B(G)(t, t', \vec k/|\vec k|)
t^{-\lambda_0 - 1/2}(t')^{\lambda_0 - 1/2} |\vec k|^{-1/2} f(t'/|\vec k|, \vec k)\, \d t' \right|^2 \, \d t \\
\leq C \, \big\| |\vec k|^{-1/2} f( \cdot/|\vec k|, \vec k) \big\|^2 = C\, \|f (\cdot, \vec k)\|^2.
\end{multline*}
The inequality in the fourth line reflects the boundedness of $B(G): t^{\lambda_0 - 1/2} L^2 \to t^{\lambda_0 + 1/2}L^2$.
Beyond all this, compactness of the unit sphere in $\vec k$ shows that the norm of $B(G)$ can be bounded
independently of $\vec k$.
As for showing that \eqref{noropmapft} is invertible for each $\vec k$, we first show that it is Fredholm. This can be
done by a standard ODE analysis of the operator. First construct approximate local inverses near $s=0$ and $s=\infty$;
the existence of these shows that \eqref{noropmapft} is Fredholm precisely when $\lambda_0$ is {\it not}
an indicial root of ${\LKW}_0$. (As noted earlier, when $\lambda_0$ is an indicial root, this mapping does not have closed range.)
Now fix $\lambda_0$ to be any nonindicial value and recall the fact i) that any element of the kernel either grows or decays exponentially
as $s \to \infty$. Then injectivity of this map means precisely that the solutions which decay exponentially as $s \to \infty$
must blow up faster than $s^{\lambda_0}$ as $s \to 0$. One can perform a similar analysis for the adjoint operator,
or by showing by other methods that the index vanishes, to show that \eqref{noropmapft} is surjective too.
For the linearized KW operator ${\LKW}$, the discussion in section \ref{nonregular} implies directly that
$\widehat{N}({\LKW})$ has only trivial kernel on $s^{\lambda_0 + 1/2}L^2$ when $\lambda_0 > 0$. Indeed,
perform the integrations by parts (which are now only in the $s$ variables), using the decay of solutions both
as $s \to 0$ and as $s \to \infty$ to rule out contributions from the boundary terms. We can extend this to allow
any $\lambda_0 > \underline{\lambda}$ simply by observing, using the fact ii) about solutions, that if $\widehat{N}({\LKW})
\hat\Psi = 0$ and $\hat\Psi \in s^{\lambda_0 + 1/2}L^2$, then $\hat\Psi$ vanishes like $s^{\overline{\lambda}}$, so we may
integrate by parts as before. The fact that the index vanishes when $\lambda_0 \in (\underline{\lambda}, \overline{\lambda})$
follows by using the pseudo skew-adjointness established in section \ref{pseudo}. Note that those arguments are for ${\LKW}$ on the model
space $\R^4_+$, which is canonically identified with the normal operator $N({\LKW})$ of the linearized KW equations on
any manifold with boundary, and the pseudo skew-adjointness passes directly to $\widehat{N}({\LKW})$ as well.
We have now proved that in the quasiregular case, when no $j_\sigma = 0$, any $\lambda_0 \in (\underline{\lambda}, \overline{\lambda})$
is an elliptic weight for $\LKW$.
The results just stated are not well suited for our nonlinear problem simply because these weighted $L^2$ spaces do not behave
well under nonlinear operations. One hope might be to use $L^2$- (or $L^p$-) based scale-invariant Sobolev spaces with
sufficiently high regularity. These do have good multiplicative properties locally in the interior, but not near the boundary.
This leads us to introduce several related H\"older-type spaces, and then describe the mapping properties of ${\LKW}$ acting on them.
We start with the spaces $\mathcal C^k_0$, which consist of all fields $\Psi$ such that
$(y\partial_y)^j (y\partial_{\vec x})^\alpha \Psi$ is bounded on $M$ and continuously differentiable in
the interior of $M$ for all $j + |\alpha| \leq k$. The H\"older seminorm is defined by
\begin{equation*}
[ \Psi]_{0; 0,\gamma} = \sup_{(y,\vec x) \neq (y', \vec x\, ')} \frac{ |\Psi(y, \vec x) - \Psi(y', \vec x\, ')|(y+y')^\gamma}{
|y-y'|^\gamma + |\vec x - \vec x\, '|^\gamma}
\end{equation*}
Then $\mathcal C^{k,\gamma}_0$ consists of all $\Psi \in \mathcal C^k_0$ such that
$ [ (y\partial_y)^j (y\partial_{\vec x})^\alpha \Psi]_{0; 0,\gamma} < \infty$. Finally, $y^{\lambda_0} \mathcal C^{k,\gamma}_0$
consists of all $\Psi = y^{\lambda_0}\Psi_1$ with $\Psi_1 \in \mathcal C^{k,\gamma}_0$.
These spaces capture no information about regularity in the $\vec x$ directions at $y=0$, so we also introduce hybrid spaces
\begin{equation*}
\mathcal C^{k,\ell, \gamma}_0 = \{ \Psi \in \mathcal C^{k,\gamma}_0: (\partial_{\vec x})^\alpha \Psi \in
\mathcal C^{k-|\alpha|,\gamma}_0,\ \mbox{for all}\ |\alpha| \leq \ell\}.
\end{equation*}
Note that all of these spaces contain elements like $y^\lambda$ or $y^\lambda (\log y)^p$, provided
$\lambda > \lambda_0$ (or $\lambda = \lambda_0$ if $p = 0$).
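For instance, the fact that $y^\lambda \in y^{\lambda_0}\mathcal C^{k}_0$ when $\lambda \geq \lambda_0$ reduces to the observation that
\begin{equation*}
y^{-\lambda_0}\, (y\partial_y)^j (y\partial_{\vec x})^\alpha \, y^{\lambda} = \lambda^j\, y^{\lambda - \lambda_0}\ \ \mbox{if}\ \alpha = 0,
\ \mbox{and vanishes otherwise},
\end{equation*}
which is bounded near $y = 0$; the H\"older seminorms and the factors of $\log y$ are handled by equally elementary estimates.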
The mapping property of $\LKW$ on these spaces is much the same as in Proposition \ref{fred1}.
\begin{prop}
Let ${\LKW}$ be the linearized KW operator. Suppose that no $j_\sigma = 0$ and let $\lambda_0 \in (\underline{\lambda},
\overline{\lambda})$ be an elliptic weight. Then the mapping
\begin{equation}
{\LKW}: y^{\lambda_0} \mathcal C^{k,\ell, \gamma}_0 \longrightarrow y^{\lambda_0 - 1} \mathcal C^{k-1, \ell, \gamma}_0
\label{Fredholder}
\end{equation}
is Fredholm for $0 \leq \ell \leq k-1$ and $k \geq 1$.
\label{maphold}
\end{prop}
We explain in the next section how this result is essentially a corollary of Proposition~\ref{fred1}. More specifically,
both results are proved by parametrix methods; this parametrix is constructed using $L^2$ methods and it is initially
proved to be bounded between weighted Sobolev spaces, but it is also bounded between certain of these weighted
H\"older spaces, which leads directly to the proof of Proposition \ref{maphold}.
\subsection{Structure of the Generalized Inverse}\label{strgeninv}
We now briefly describe the technique behind the proofs of these results. The main step in each is the
construction and use of the generalized inverse $G$ for \eqref{map11}. The ellipticity of the weight enters
directly into this construction. By definition, a generalized inverse for \eqref{map11} is a bounded operator
$G: y^{\lambda_0 - 1/2}L^2 \to y^{\lambda_0 + 1/2}H^1_0$ which satisfies
\begin{equation}
G {\LKW} = \mbox{Id} - R_1, \qquad {\LKW} G = \mbox{Id} - R_2,
\label{geninv}
\end{equation}
where $R_1$ and $R_2$ are finite rank projections onto the kernel and cokernel of $\LKW$, respectively.
The nonuniqueness here is mild and results only from the different possible choices of projectors. Since
we are working on a specific weighted $L^2$ space, it is natural to demand that $R_1$ and $R_2$
be the orthogonal projectors onto the kernel and cokernel with respect to that inner product, and
we make this choice henceforth.
If we already know that \eqref{map11} is Fredholm, then general functional analysis tells us that a generalized
inverse exists. Conversely, the existence of an operator $G$ with these properties (the boundedness of
$G$ is particularly important) implies that \eqref{map11} is Fredholm. In fact, it is only necessary to find a
bounded operator $\tilde{G}$ such that the `error terms' $R_1$ and $R_2$ defined as in \eqref{geninv} are
compact operators, for then standard abstract arguments imply that \eqref{map11} is Fredholm and show that $\tilde{G}$
can be corrected to an operator such that \eqref{geninv} holds, with $R_1$ and $R_2$ the actual projectors.
This observation is important because it is certainly easier to construct an intelligently designed approximation
to the generalized inverse than to construct the precise generalized inverse directly. The criterion by which
one judges the approximation to be good enough is simply that the remainder terms $R_1$ and $R_2$ are compact.
An approximation of this type is called a parametrix for $\LKW$.
A parametrix can be constructed within the framework of geometric microlocal analysis, as carried out in full
detail in \cite{M-edge}. The key point is to work within a class of pseudodifferential operators on $M$
adapted to the particular type of singularity exhibited by $\LKW$. This is the class of $0$- (or uniformly degenerate)
pseudodifferential operators, $\Psi_0^*(M)$. We would like $\Psi^*_0(M)$ to be large enough to contain parametrices
of elliptic uniformly degenerate differential operators, but not so large that the individual operators in $\Psi^*_0(M)$
are too unwieldy to analyze. We describe these operators in sufficient
detail for the present purposes, but refer to \cite{M-edge} for further details.
The elements of $\Psi^*_0(M)$ are characterized by the singularity structure of their Schwartz kernels. Thus, an operator
$A \in \Psi^*_0(M)$ has a Schwartz kernel $\kappa_A(y, \vec x, y', \vec x\,')$, which is a distribution on $M^2$.
We expect it to have a standard pseudodifferential singularity (generalizing that of the Newtonian potential,
for example) along the diagonal $\{y = y', \vec x = \vec x\,'\}$, but we also require a very precise regularity along
the boundaries of $M^2$, $\{y = 0,\ \mbox{or}\ y' = 0\}$, and at the intersection of the diagonal with the boundary.
To formalize this, introduce the space $M^2_0$ obtained by taking the real blowup of the product $M^2$
at the boundary of the diagonal. In local coordinates this means that we replace each point $(0, \vec x, 0, \vec x)$
in the boundary of the diagonal with its inward-pointing normal sphere-bundle. Alternatively, in polar coordinates
\begin{equation*}
R = (y^2 + (y')^2 + |\vec x - \vec x\,'|^2)^{1/2}, \ \omega = (\omega_0, \omega_0', \hat\omega) =
(y, y', \vec x - \vec x\,')/R \in S^4_+,
\end{equation*}
where $S^4_+$ consists of the unit vectors in $\R^5$ with $\omega_0, \omega_0' \geq 0$, we replace each point
$(0,\vec x, 0, \vec x)$ by the quarter-sphere at $R = 0$. We can then use $(R, \omega, \vec x\,')$ as a full set of coordinates.
This new space is a manifold with corners up to codimension three, and has a new hypersurface boundary at $R=0$,
which is called the front face. Its two other codimension one boundaries
$\omega_0 = 0$ and $\omega_0' = 0$ are called its left and right faces. There is an obvious blowdown map $M^2_0 \to M^2$.
We now say that $A$ is a $0$-pseudodifferential operator if $\kappa_A$ is the pushforward under blowdown of a distribution
on $M^2_0$ (which we denote by the same symbol) which decomposes in the following fashion
as a sum $\kappa_A(R,\omega,\vec x\,') = \kappa_A' + \kappa_A''$.
Here $\kappa_A'$ is supported away from the left and right faces and has a pseudodifferential singularity of some order $m$
along the lifted diagonal $\{\omega_0 = \omega_0', \hat \omega = 0\}$, and if we factor $\kappa_A' = R^{-4} \hat\kappa_A'$,
then $\hat\kappa_A'$ (along with its conormal diagonal singularity) extends smoothly across the front face of $M^2_0$.
This exponent $-4$ is dimensional; in general it should be replaced by the dimension of $M$. On the other hand $\kappa_A''$ is
smooth in the interior of $M^2_0$ and has polyhomogeneous expansions at the left, right and front faces of this space,
with product type expansions at the corners. Altogether, if the expansions at these faces commence with the terms
$\omega_0^a$ (at the left face), $(\omega_0')^b$ (at the right face) and $R^{-4 + s}$ (at the front face), then we write
\begin{equation*}
A \in \Psi_0^{m, s, a, b}(M).
\end{equation*}
Slightly more generally we could replace the superscripts $a$, $b$, denoting the leading exponents of the polyhomogeneous
expansions at the left and right faces, by index sets, but we do not need this more refined notation here.
This elaborate notation simply specifies the precise vanishing or blowup properties of $\kappa_A$ in each of these regimes.
We have introduced it out of some necessity since at least some features of this precise structure will be used in an
important way below. Before proceeding, note one very special case: the identity operator $\mbox{Id}$
is an element in this class, and lies in $\Psi_0^{0, 0, \emptyset, \emptyset}$. The fact that it has order $0$ along
the diagonal is expected, and since its Schwartz kernel $\delta(y-y') \delta(\vec x - \vec x\,')$ is supported
on the diagonal, its expansion is trivial at the left and right faces, which explains the third and fourth
superscripts. Finally, the second superscript is explained by noting that in polar coordinates
\begin{equation*}
\delta(y-y') \delta(\vec x - \vec x\,') = R^{-4} \delta(\omega_0 - \omega_0')\delta(\hat\omega)
\end{equation*}
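To decode this notation in the one case needed below: the assertion that an operator lies in $\Psi_0^{-1, 1, \overline{\lambda}, b}(M)$
says that its Schwartz kernel has a pseudodifferential singularity of order $-1$ along the lifted diagonal, blows up like
$R^{-4+1} = R^{-3}$ at the front face, and has expansions at the left and right faces beginning with $\omega_0^{\overline{\lambda}}$
and $(\omega_0')^{\,b}$, respectively.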
Having introduced this general class of pseudodifferential operators, we now explain the parametrix construction.
We aim to find an operator $\tilde{G} \in \Psi_0^*(M)$ such that $\LKW \tilde{G}$ is equal to the identity
up to some compact remainder terms. Rewriting this as the distributional equation
\begin{equation*}
{\LKW} \kappa_{\tilde G} = R^{-4} \delta(\omega_0 - \omega_0') \delta(\hat\omega)
\end{equation*}
we see that the singularity of $\kappa_{\tilde{G}}$ along the diagonal can be obtained by classical methods
(the symbol calculus), and this construction is uniform as $R \to 0$ once we have removed the appropriate
factors of $R$. In fact, writing $\LKW$ in these same polar coordinates and noting that it lowers homogeneity
in $R$ by $1$, we expect $\kappa_{\tilde{G}}$ to only blow up like $R^{-3}$ at the front face. In addition,
$\LKW$ must kill the terms in the expansion of $\kappa_{\tilde{G}}$ at the left face, which means that
the terms in the expansion in this face should involve the indicial roots; in particular, the leading exponent at
this face must equal $\overline{\lambda}$.
It is not apparent here where the ellipticity of the weight $\lambda_0$ enters. The answer is as follows. After
first solving away the diagonal singularity using the symbol calculus, we must then improve the initial
guess for the parametrix to another one for which ${\LKW} \kappa_{\tilde{G}} - \delta_{\mathrm{Id}}$ vanishes at the front
face as well. Because the lift of $\LKW$ to $M^2_0$ acts tangentially to the boundary faces of that space,
this equation restricts to an elliptic equation on the front face. Using a natural identification of each quarter-sphere
fiber of the front face with a half-Euclidean space, and a few other steps which we omit, we are led to having to find
the {\it exact} solution to $N({\LKW}) \Psi = f$ where $f$ is some smooth compactly supported function on $\R^4_+$.
If we are able to do this, we can then correct the parametrix to all orders so that the remainder terms are clearly compact.
The natural identification used here is that the quarter-sphere $S^4_+$ fiber in the front face over each point $(0,\vec x, 0, \vec x)$
of the boundary of the diagonal can be identified with the half-space $\R^4_+$, where this identification is unique
up to a projective map, and the restriction to the front face of the lift of $\LKW$ to $M^2_0$ is transformed to
$N({\LKW})$ in this identification. Section 2 of \cite{M-edge} (especially around eqn.\ (2.10)) explains more about
these identifications. This explains why the exact invertibility of the normal operator plays a crucial role.
The vindication that this all works is that by carrying out this parametrix construction, one proves that the generalized
inverse $G$ for $\LKW$ is an element of $\Psi^{-1,1, \overline{\lambda}, b}_0(M)$, where the final index $b$ is some positive
number related to the indicial roots of the adjoint of $\LKW$, and that the remainder terms in \eqref{geninv} satisfy
$R_1 \in \Psi^{-\infty, \overline{\lambda}, b}(M)$, $R_2 \in \Psi^{-\infty, b, \overline{\lambda}}(M)$, where this notation (note the
absence of the subscript $0$) means that their Schwartz kernels are smooth in the interior and polyhomogeneous at
the two boundary hypersurfaces of $M^2$, rather than being polyhomogeneous on the blown up space $M^2_0$.
We now explain how to use all of this for our purposes. Granting this structure of the Schwartz kernel of the
generalized inverse $G$, the boundedness of the map
\begin{equation}
G: y^{\lambda_0 - 1/2} L^2 \longrightarrow y^{\lambda_0 + 1/2} H^1_0
\label{GL2}
\end{equation}
can be deduced from the standard local boundedness of pseudodifferential operators of order $-1$ between
$L^2$ and $H^1$, and the following inequalities for the orders of vanishing of $\kappa_G$ at the
various boundary faces. The fact that $\kappa_G$ blows up like $R^{-3}$ at the front face, one order
better than the Schwartz kernel of the identity, partly explains why $G$ raises the exponent in the weight factor by $1$.
The other aspect which affects the weight on the right in \eqref{GL2} is the leading exponent $\overline{\lambda}$
at the left face, since it is clear that the decay profile of
\begin{equation}
\int_M \kappa_G( y, \vec x, y', \vec x\,') f(y', \vec x\,') \, \d y' \d\vec x\,'
\label{intexp}
\end{equation}
as $y \to 0$ must incorporate that exponent. Recalling the earlier discussion that $\kappa_G$ decomposes
into the near-diagonal and off-diagonal parts, $\kappa_G'$ and $\kappa_G''$, a close analysis of the
integrals from each of these terms proves that
\begin{equation}
G: y^{\lambda_0 - 1/2}L^2 \longrightarrow y^{\lambda_0 + 1/2} H^1_0 + \bigcap_{\epsilon > 0 \atop k \geq 0 } y^{\overline{\lambda}+1/2 - \epsilon} H^k_0.
\label{refregsob}
\end{equation}
In other words, the near-diagonal part raises regularity and the weight parameter by exactly $1$, while
the off-diagonal part improves the $0$-regularity to an arbitrarily large amount, and has growth/decay
rate of the outcome dictated by the leading term of $\kappa_G$ on the left face. We must intersect
over all $\epsilon>0$ here simply because $y^{\overline{\lambda}} \in y^{\overline{\lambda} + 1/2-\epsilon}L^2$ when $\epsilon > 0$,
but not otherwise.
From all of this, and observing that when $\overline{\lambda} > \lambda_0$, the range of \eqref{refregsob} is contained
in $y^{\lambda_0 + 1/2}H^1_0$, we see that \eqref{map11} is Fredholm,
which is Proposition \ref{fred1}. The proof of Proposition \ref{regularity1} is obtained by a more refined examination of the
mapping properties of $G$, in particular the fact that if $f$ is polyhomogeneous, then so is the outcome of the integral \eqref{intexp}.
Finally, the proof of Proposition \ref{maphold} can be explained as follows. The preceding discussion has been
based on $L^2$ considerations, which is natural since, for example, Fourier analysis has been used
at several points. However, the precise pointwise behavior of the Schwartz kernel of $G$ makes it possible to
read off its mapping properties on other function spaces. In particular, we obtain the analog of \eqref{refregsob}
on weighted H\"older spaces:
\begin{equation}
G: y^{\lambda_0-1} \mathcal C^{k,\gamma}_0 \longrightarrow y^{\lambda_0}\mathcal C^{k+1,\gamma}_0 +
\bigcap_m y^{\overline{\lambda}} \mathcal C^{m,\gamma}_0.
\label{refreghold}
\end{equation}
As before, this range lies in $y^{\lambda_0} \mathcal C^{k+1,\gamma}_0$ when $\lambda_0 < \overline{\lambda}$.
There is a slight refinement of this which we shall need later, namely that
\begin{equation}
G: y^{\mu-1} \mathcal C^{k,\gamma}_0 \longrightarrow y^{\mu}\mathcal C^{k+1,\gamma}_0 +
\bigcap_m y^{\overline{\lambda}} \mathcal C^{m,\gamma}_0
\label{refreghold2}
\end{equation}
for any $\mu > \lambda_0$ (and for simplicity of the statement, $\mu$ not an indicial root).
Since the equations \eqref{geninv} are satisfied as distributions, and every operator in them is bounded between
the appropriate H\"older spaces, we see that \eqref{Fredholder} is in fact Fredholm, at least for $\ell = 0$.
To prove that \eqref{Fredholder} is Fredholm for $\ell \geq 0$, we need one extra fact, which is that each of
the commutators $[ G, \del_{x^a}]$ lies in $\Psi^{-1, 1, \overline{\lambda}, b}_0$, i.e.\ is a $0$-pseudodifferential operator
with the {\it same} indices, hence has the same mapping properties as $G$ itself.
This simple transition from Sobolev to H\"older spaces is a good exemplar of the parametrix method; if we were working
solely with a priori estimates, then it is no simple matter to deduce Fredholmness on one type of function space from
the corresponding property on another type of function space.
\subsection{Algebraic Boundary Conditions and Ellipticity}\label{genebc}
There are many natural operators, however, for which there are no elliptic weights, i.e.\ so that for any nonindicial $\lambda_0$,
the map \eqref{noropmapft} has either nontrivial kernel or cokernel, or both. This is the case for the linearized KW operator ${\LKW}$
when some of the $j_\sigma = 0$. We now describe the somewhat more complicated formulation of the ellipticity criterion
for boundary conditions in these cases. As before, we consider only the parts of this story relevant to the Nahm pole boundary
conditions for ${\LKW}_0$.
The argument in the last section (slightly after \eqref{noropmapft}) shows that even when the lowest nonnegative indicial
root $\lambda$ is $0$, the map $\hat{N}({\LKW}) : s^{\lambda_0 + 1/2}L^2 \to s^{\lambda_0-1/2}L^2$ has no kernel if $\lambda_0 > 0$,
although the cokernel of this mapping is nontrivial. On the other hand, when $\lambda_0 < 0$, the nullspace has positive dimension,
though the map is surjective. To be definite, suppose that $-1/2 < \lambda_0 < 0$, which rules out solutions which blow up like
$s^\lambda$ where $\lambda$ is any one of the strictly negative indicial
roots of ${\LKW}_0$. Using the fact ii) that solutions of $\widehat{N}({\LKW}) \hat{\Psi} = 0$ have (convergent)
expansions at $s = 0$, the leading coefficient $\hat\Psi_0 = (\hat{a}_a, \hat{\varphi}_y, \hat{a}_y, \hat{\varphi}_a)$,
i.e.\ the coefficient of $s^0$, is well-defined. This coefficient lies in the eigenspace corresponding to the indicial root
$\lambda = 0$, and hence, following the language at the end of section \ref{nonregular}, is an element of
$\frak c^{8}$. We call this the Cauchy data of $\hat{\Psi}$ and write it as $\mathcal C(\hat{\Psi})$.
At this ODE level, the Nahm pole boundary condition interpolates between the spaces $s^{\lambda_0 + 1/2}L^2$
with $-1/2 < \lambda_0 < 0$ and with $0 < \lambda_0 < 1/2$. Recall that for this boundary condition, we
fix a subalgebra $\frak h \subset \frak c$ and its orthogonal complement $\frak h^\perp$ in $\frak c$, and
then consider the linear map
\begin{equation}
\mathcal B_{\frak h} (\hat{a}_a, \hat{\varphi}_y, \hat{a}_y, \hat{\varphi}_a) = (\hat{a}_a^{\, \frak h^\perp}, \hat{\varphi}_y^{\, \frak h^\perp} ,
\hat{a}_y^{\, \frak h} , \hat{\varphi}_a^{\,\frak h}) \in (\frak h^\perp)^3 \oplus \frak h^\perp \oplus \frak h \oplus \frak h^3.
\label{bcs0}
\end{equation}
This determines a boundary condition for $\hat{N}(\LKW)$ and we shall study the problem
\begin{equation}
\hat{N}(\LKW) \hat \Psi = f, \qquad \hat\Psi \in s^{\lambda_0 + 1/2}H^1_0,\ \ \mathcal B_{\frak h}(\mathcal C(\hat \Psi)) = 0.
\label{bcs}
\end{equation}
As a first observation, following the same integrations by parts as above, there are no nontrivial fields $\hat{\Psi}$ which decay
at infinity and satisfy \eqref{bcs} with $f = 0$; indeed, the conditions $a_a^{\frak h^\perp} = 0$, $\varphi_y^{\frak h^\perp} = 0$,
$a_y^{\frak h} = 0$, $\varphi_a^{\frak h} = 0$ make all boundary terms at $s=0$ in this integration by parts vanish.
On the other hand, if we only assume that $f \in s^{\lambda_0 - 1/2}L^2$ for some $-1/2 < \lambda_0 < 0$, then this problem is
not well posed. Indeed, although there is a solution to the first equation in \eqref{bcs}, there is no reason for the
leading coefficient $\mathcal C(\hat \Psi)$ to be well defined, so the boundary condition need not make sense.
Thus it is necessary to suppose that $f$ lies in a slightly better space, as we now describe.
With $\lambda_0 \in (-1/2,0)$ as before, define
\begin{equation}
\hat{\mathcal H}_{\lambda_0} = \{\hat \Psi \in s^{\lambda_0 + 1/2}L^2(\R^+): \hat{N}(\LKW) \hat \Psi \in s^{\lambda_0 + 1/2}L^2(\R^+)\}.
\label{Hnor}
\end{equation}
The right hand side is one order less singular than might be expected, and using standard ODE techniques, one sees
that $\hat \Psi = s^0 \hat \Psi_0 + \mathcal O(s^{\lambda_0 + 3/2})$, and hence the leading coefficient $\hat \Psi_0$ is
well defined. We may now legitimately consider the mapping
\begin{equation}
\hat{N}(\LKW): \hat{\mathcal H}_{\lambda_0} \cap \{\hat \Psi: \mathcal B_{\frak h} (\mathcal C(\hat \Psi)) = 0\}
\longrightarrow s^{\lambda_0 + 1/2} L^2.
\end{equation}
Note that the weighted $L^2$ restriction on $\hat \Psi$ when $s$ is large precludes the exponentially growing solutions of
$\hat{N}(\LKW) \hat\Psi = 0$.
We have so far suppressed the dependence of $\hat{N}({\LKW})$, and hence also of the map $\mathcal B_{\frak h}$, on
the parameters $\vec x \in \partial M$ and $\vec k \neq 0$. As observed earlier, the only essential part of the dependence on
$\vec k$ is on the direction $\vec k/|\vec k|$, and hence we assume for the remainder of this discussion
that $|\vec k| = 1$, i.e.\ $\vec k \in S^2$. For this particular operator, the dependence on $\vec x$ is not
very serious in that the operator `looks the same' in appropriate local coordinates at any point of $\del M$.
However, formally we should be considering $(\vec x, \vec k)$ as a point
in $S^* \partial M$, the cosphere bundle of $\del M$. If $\pi: S^* \partial M \to \partial M$
is the natural projection, then $\mathcal B_{\frak h}$ is a bundle map between
$\pi^* \frak c^{8}$ and $\pi^* \left((\frak h^\perp)^3 \oplus \frak h^\perp \oplus \frak h \oplus (\frak h)^3\right)$.
The fact that $\mathcal B_{\frak h}$ is independent of $\vec k$ allows us to call this an algebraic boundary condition.
The additional ingredient we need in this discussion is the space
\begin{equation}
V = V_{\vec x, \vec k} = \{ \hat\Psi \in \hat{\mathcal H}_{\lambda_0}:\hat N(\LKW)(\hat\Psi) = 0\}
\label{defV}
\end{equation}
of homogeneous solutions in $s^{\lambda_0 + 1/2}L^2$ which do not necessarily
satisfy the boundary condition. The dependence
of $V$ on $\vec x$ is again negligible, but its dependence on $\vec k$ is genuine since $\vec k$ appears in the
coefficients of $N(\LKW)$ and these kernel elements do vary nontrivially with $\vec k$. However, the dimension
of $V_{\vec x, \vec k}$ does not depend on $\vec k$, and in fact $V$ varies smoothly with $\vec k$ (and $\vec x$).
This means that we can regard $V$ as a vector bundle over $S^* \partial M$. The injectivity of $\hat N({\LKW})$
on $s^{\lambda_0 + 1/2}L^2$ when $\lambda_0 > 0$ shows that the restriction of the Cauchy data map $\mathcal C$
to $V$ is injective. This means that $\mathcal C(V)$ is a subbundle of $\frak c^{8}$; this is sometimes
called the Calderon subbundle.
We can now finally state the property which makes $\mathcal B_{\frak h}$ an elliptic boundary condition.
\begin{definition}
Let $\mathcal B$ be any bundle map from $\pi^* \frak c^{8}$ to another vector bundle $W$ over $S^* \partial M$.
Then $\mathcal B$ is said to be an elliptic boundary condition for $\hat{N}(\LKW)$ (and hence ultimately for $\LKW$) if
the restriction of $\mathcal B$ to the subbundle $V$ is bijective onto $W$, i.e.\ so that
\begin{equation*}
\left. \mathcal B \right|_V: V \longrightarrow W
\end{equation*}
is a bundle isomorphism.
\label{bcs1}
\end{definition}
This condition can be phrased in various obviously equivalent ways; the one we use below is to require
that $\mathcal B$ is injective on $V$ and that the ranks of $V$ and $W$ are the same. However, in the end, this
condition is precisely what is
needed to construct a good parametrix for the actual boundary problem on $M$. We can see this rather
easily at the level of this ODE. If $f \in s^{\lambda_0 + 1/2}L^2 \subset s^{\lambda_0 - 1/2}L^2$, then there is
a solution $\hat \Psi \in s^{\lambda_0 + 1/2}H^1_0$ to $\hat{N}({\LKW})\hat\Psi = f$. This is not unique,
since there is a nullspace. In any case, this solution has a leading coefficient $\hat \Psi_0$, which
is however unlikely to satisfy $\mathcal B_{\frak h}(\hat \Psi_0 ) = 0$. Now modify $\hat\Psi$ by subtracting
an element $\hat\Phi \in V$. By definition, $\hat{N}({\LKW})(\hat\Psi - \hat \Phi) = f$, and provided we choose
$\hat \Phi$ so that $\mathcal B_{\frak h}(\hat\Phi_0) = \mathcal B_{\frak h}(\hat\Psi_0)$,
then $\hat \Psi - \hat \Phi$ satisfies the boundary condition too. The fact that there is a unique
such choice of $\hat \Phi$ is precisely the content of Definition~\ref{bcs1}.
Let us now check that this condition holds for the map $\mathcal B_{\frak h}$ which appears in the general Nahm
pole boundary condition. We have proved above that $\mathcal B_{\frak h}$ is injective on $V$. Furthermore,
it follows from the results of section \ref{third} that the rank of $V$ is half the rank of $\frak c^{8}$, i.e.\
$\mathrm{rk}(V) = 4\dim \frak c$. Since this is the same as $\dim ( (\frak h^\perp)^3 \oplus \frak h^\perp \oplus
\frak h \oplus \frak h^3)$, we see that $\left. \mathcal B_{\frak h}\right|_V$ is also surjective, and hence an isomorphism.
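Written out, the dimension count used here is simply
\begin{equation*}
\dim\big( (\frak h^\perp)^3 \oplus \frak h^\perp \oplus \frak h \oplus \frak h^3\big)
= 4\dim \frak h^\perp + 4\dim \frak h = 4\dim \frak c = \mathrm{rk}(V),
\end{equation*}
since $\frak c = \frak h \oplus \frak h^\perp$.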
As noted earlier, $\mathcal B$ is called an algebraic boundary condition if $W = \pi^* W'$ where $W'$ is a bundle over
$\del M$. If this is the case, then the analytic theory of the boundary problem for the actual operator $\LKW$ is simpler
because the boundary conditions are local (of mixed Dirichlet-Neumann type), rather than nonlocal (pseudodifferential).
It is clear that $\mathcal B_{\frak h}$ is an algebraic boundary condition.
Return now to the linearized KW operator, and assume that some $j_\sigma = 0$, so that we can augment
the operator $\LKW$ with the boundary condition $\mathcal B_{\frak h}$. Fix $\lambda_0 \in (-1/2, 0)$ and consider
\begin{equation}
\mathcal H_{\lambda_0} = \{\Psi \in y^{\lambda_0 + 1/2} H^1_0(M; \d\vec x\, \d y): {\LKW} \Psi \in y^{\lambda_0 + 1/2} L^2(M)\}.
\end{equation}
As before, the expected behavior for $\Psi \in y^{\lambda_0 + 1/2}L^2$ is that ${\LKW} \Psi \in y^{\lambda_0 - 1/2}L^2$.
This means that fields in $\mathcal H_{\lambda_0}$ must possess some special properties to ensure that ${\LKW}\Psi$
is one order less singular than $y^{\lambda_0 - 1/2}L^2$. Although we no longer have ODE arguments to fall
back upon, it is still possible to show that any $\Psi \in \mathcal H_{\lambda_0}$ has a {\it weak} partial expansion
\begin{equation*}
\Psi \underset{w}{\sim} \Psi_0 \, y^0 + \tilde{\Psi},
\end{equation*}
where $\Psi_0$ is a distribution of negative order which lies in the Sobolev space $H^{\lambda_0}(\partial M)$.
The remainder term $\tilde{\Psi}$ vanishes like $y^{\lambda_0 + 1}$ in a similar distributional sense. We do not
pause to make this more precise (see section 7 of \cite{M-edge}), but note only that the actual meaning of the
weak expansion above is that if we `test' $\Psi$ against some $\chi \in \mathcal C^\infty(\del M)$, then
\begin{equation*}
\int \Psi(y,\vec x) \chi(\vec x) \, \d \vec x = \langle \Psi_0, \chi \rangle y^0 + \langle \tilde{\Psi}, \chi \rangle,
\end{equation*}
where the second term on the right vanishes like $y^{\lambda_0 + 3/2}$.
The point of belaboring all of this is that it is possible to make sense of the leading coefficient $\Psi_0$ of
a general element $\Psi \in \mathcal H_{\lambda_0}$ as a $\frak c^{8}$-valued distribution of negative
order on $\del M$. Because the boundary condition is algebraic, we can then make sense of the projection
$\mathcal B_{\frak h}(\Psi_0)$, again as a distribution. In particular, when $\Psi \in \mathcal H_{\lambda_0}$,
there is now a good meaning of the condition $\mathcal B_{\frak h}(\mathcal C(\Psi)) = 0$.
We can now state analogs of all the main results.
\begin{prop}\label{fred2}
Let ${\LKW}$ be the linearized KW operator on a compact manifold $M$ with boundary and suppose that $\varrho$ is
not quasiregular, so that some $j_\sigma = 0$. Using the elliptic boundary condition given by the bundle map $\mathcal B_{\frak h}$,
then for $-1/2 < \lambda_0 < 0$, the mapping
\begin{equation}
{\LKW}: \{\Psi \in \mathcal H_{\lambda_0}: \mathcal B_{\frak h}(\Psi_0) = 0\} \longrightarrow y^{\lambda_0 + 1/2} L^2(M)
\label{map12}
\end{equation}
is Fredholm.
\end{prop}
\begin{prop}\label{regularity2}
With all notation as above, suppose that ${\LKW}\Psi = f$ where $f$ is smooth in a neighborhood of some point $q \in \del M$,
or slightly more generally, where $f$ has an asymptotic expansion as $y \to 0$, and in addition $\mathcal B_{\frak h}(\Psi_0) = 0$
near $q$. Then in that neighborhood, $\Psi$ has a polyhomogeneous expansion
\begin{equation}
\Psi \sim \sum y^{\gamma_j} \Psi_j,
\end{equation}
where the exponents $\gamma_j$ are of the form $\lambda + \ell$, $\ell \in \mathbb N$, where either $\lambda$
is an indicial root of ${\LKW}$ or else is an exponent occurring in the expansion of $f$. As before, this expansion may include log terms.
\end{prop}
\begin{prop}
Let ${\LKW}$ be the linearized KW operator. If some $j_\sigma = 0$, then for $-1/2 < \lambda_0 < 0$ and relative
to any choice of subalgebra $\frak h$, the mapping
\begin{equation}
{\LKW}: y^{\lambda_0} \mathcal C^{k,\ell, \gamma}_0 \longrightarrow y^{\lambda_0 - 1} \mathcal C^{k-1, \ell, \gamma}_0
\label{Fredholder2}
\end{equation}
is Fredholm when $0 \leq \ell \leq k-1$ and $k \geq 1$.
\label{maphold2}
\end{prop}
These results are proved, as before, using parametrix methods. Unlike the earlier case, this is a slightly
more involved process which requires the introduction of generalized Poisson and boundary trace operators;
see \cite{MVer}. We find along the way that the analog of the refined mapping property of the generalized
inverse \eqref{refreghold2} still holds.
\subsection{Regularity of Solutions of the KW Equations}\label{nonlinreg}
We have now described enough of the linear theory that we can formulate and prove the main result
needed earlier in the paper, that solutions of the full nonlinear gauge-fixed KW equations ${\KW}(A,\phi) = 0$
are also polyhomogeneous at $\partial M$. The implication of this regularity is that all the calculations
in section 2 which led to the uniqueness theorem when $M = \R^4_+$ are fully justified. (Of course,
this polyhomogeneity is far more than is really needed to carry out those calculations, but it is very
useful to have this sharp regularity for other purposes too.)
\begin{prop} \label{propnlreg}
Let ${\KW}(A,\phi) = 0$, and suppose that near $y = 0$, $A = A_{(0)} + a$, $\phi = \phi_{(0)} + \varphi$,
where $(a,\varphi)$ satisfy the Nahm pole boundary conditions. Then $(a,\varphi)$ is polyhomogeneous.
\end{prop}
The proof begins by using \eqref{texpkw} to write ${\KW}(A, \phi) = 0$ as ${\LKW}(a,\varphi) = -Q(a,\varphi)$.
Observe that since the terms in $\KW$ are at most quadratic, $Q$ is a bilinear form in $(a,\varphi)$. We
suppose from the beginning that $a$ and $\varphi$ lie in $y^{\lambda_0} \mathcal C^{1,\gamma}_0$, where the rate
of blowup (or decay) $\lambda_0$ is as dictated by the Nahm pole boundary condition. The details
of the proof are essentially the same in the simpler quasiregular case and in the more general
case where some $j_\sigma = 0$. In the former, $\overline{\lambda} =1$, $\underline \lambda=-1$, and we take
any elliptic weight $\lambda_0 \in
(\underline{\lambda}, \overline{\lambda})$, while in the latter, generically we take $\lambda_0 \in (-1/2, 0)$ (if
$j_\sigma=1/2$ does not occur in the decomposition of $\frak g_\C$, we can take $\lambda_0\in (-1,0)$),
and the generalized inverse is constructed using the more elaborate considerations of section \ref{genebc}.
The key facts that we use below, however, are the existence of a generalized inverse satisfying \eqref{geninv},
in particular so that the remainder term $R_1$ maps into a finite dimensional space of polyhomogeneous
fields and the fact that $G$ satisfies \eqref{refreghold2}. Although the construction of $G$ is more
complicated in the second case, we still end up with the result that both these properties hold then too.
There are two main steps. The first is to prove that $(a,\varphi)$ is conormal of order $\overline{\lambda}$, i.e.
$(a,\varphi) \in \mathcal A^{\overline{\lambda}}$, which we recall means that
\begin{equation}
(y\partial_y)^j \partial_{\vec x}^\alpha (a,\varphi) \in \bigcap_k y^{\overline{\lambda}} \mathcal C^{k,\gamma}_0
\label{conormal}
\end{equation}
for all $j$ and all multi-indices $\alpha$. In the second step we improve this to the existence of a polyhomogeneous expansion.
Since $\lambda_0$ is an elliptic weight, there exists a generalized inverse $G$ for ${\LKW}$ which provides an
inverse to \eqref{Fredholder} up to finite rank errors. Applying $G$ to ${\KW}(A,\phi) = 0$ gives
\begin{equation}
(a,\varphi) = - G Q(a, \varphi) + R_1 (a,\varphi).
\label{inteqn}
\end{equation}
The finite rank operator $R_1$ maps into a finite dimensional space of polyhomogeneous functions (with leading
term $y^{\overline{\lambda}}$), so the second term on the right is
polyhomogeneous and hence negligible. We are thus free to concentrate on proving the regularity of the first term
on the right in \eqref{inteqn}.
We first assert that $(a,\varphi) \in y^{\overline{\lambda}} \mathcal C^{k,\gamma}_0$ for all $k \geq 0$. (Note that
this is not conormality since we are not yet applying the tangential vector fields $\del_{x^a}$ without the
extra factor of $y$.) Since $Q$ is bilinear, we see first that $Q(a,\varphi) \in y^{2\lambda_0} \mathcal C^{1,\gamma}_0$,
so that from \eqref{refreghold2} with $k = 1$, and since $2\lambda_0 > \lambda_0 - 1$, we obtain
$(a,\varphi) \in y^{2\lambda_0 + 1} \mathcal C^{2,\gamma}_0 + y^{\overline{\lambda}}\mathcal C^{m,\gamma}_0$ for all $m$.
After a finite number of iterations, the right hand side is contained in $y^{\overline{\lambda}}\mathcal C^{m,\gamma}_0$
for some $m$, and bootstrapping further shows that it lies in this space for all $m$.
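Schematically, and as a rough accounting of the exponents only, each pass through this argument replaces the weight $\lambda_k$ by
\begin{equation*}
\lambda_{k+1} = \min \{ 2\lambda_k + 1, \overline{\lambda} \},
\end{equation*}
so the weight increases by at least $\lambda_0 + 1 > 0$ at each step until it reaches $\overline{\lambda}$; the precise bookkeeping is as described above.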
Revisiting this iteration, we can improve the regularity with respect to the vector fields $\partial_{x^a}$ as well.
This relies on a structural fact about $0$-pseudodifferential operators already quoted at the end of section \ref{strgeninv},
namely that
\begin{equation*}
[ \partial_{x^a} , G] \in \Psi^{-1, 1, \overline{\lambda}, b}_0,
\end{equation*}
and hence this commutator satisfies the same mapping properties as $G$ itself. We apply this as follows. Write
\begin{equation*}
\partial_{x^a} (a,\varphi) = - G( \partial_{x^a} Q(a,\varphi) ) - [ \partial_{x^a} , G] Q(a,\varphi).
\end{equation*}
We are discarding the term $R_1(a,\varphi)$ since it is already fully regular. It is convenient now to
regard $(a,\varphi)$ as lying in $y^{\lambda_0}\mathcal C^{m,\gamma}_0$ for $\lambda_0 = \overline{\lambda} - \epsilon$,
since we want to use the mapping properties of $G$ at a nonindicial weight. By the mapping properties
of the commutator, the second term on the right lies in $\cap_m y^{\lambda_0} \mathcal C^{m,\gamma}_0$.
On the other hand, we write $\partial_{x^a} Q(a,\varphi) = y\partial_{x^a} (y^{-1} Q(a,\varphi))$. This lies
in $\cap_m y^{2\lambda_0 - 1} \mathcal C^{m,\gamma}_0$, since $y^{-1}Q(a,\varphi) \in \cap_m y^{2\lambda_0 - 1} \mathcal C^{m,\gamma}_0$
and $y\del_{x^a}$ preserves this property. Since $G$ acts on this space, the entire first term lies in $\cap_m y^{\lambda_0}
\mathcal C^{m,\gamma}_0$. This proves that $(a,\varphi) \in \cap_m y^{\lambda_0} \mathcal C^{m,1, \gamma}_0$. The same argument improves the tangential
regularity incrementally, so $(a,\varphi) \in y^{\lambda_0} \mathcal C^{k,\ell,\gamma}_0$ for all $0 \leq \ell \leq k < \infty$.
Recalling \eqref{refreghold2} again, we can now replace $\lambda_0$ by $\overline{\lambda}$. This proves that
$(a,\varphi) \in \mathcal A^{\overline{\lambda}}$.
The second main step of the proof is easier. We wish to prove that $(a,\varphi)$ has an expansion. This relies on
the observation that we can treat the nonlinear equation ${\KW}(A_{(0)}+a, \phi_{(0)}+\varphi) = 0$ as a nonlinear
ODE in $y$, regarding the dependence on $\vec x$ as parametric. This is reasonable since $(a,\varphi)$
is completely smooth in this tangential variable. Thus we can decompose the linear term in \eqref{texpkw}
further using the indicial operator $I({\LKW})$ (introduced in eqn.\ (\ref{indicialop})) at any given boundary point to get
\begin{equation}
I({\LKW}) (a, \varphi) = (I({\LKW})-\LKW) (a,\varphi) - Q(a,\varphi).
\label{iter}
\end{equation}
The two terms on the right lie in $y^{\lambda_0 } \mathcal A$ and $y^{2\lambda_0} \mathcal A$, respectively.
(We recall that $I({\LKW}) - {\LKW}$ has no $\partial_y$ or $1/y$ terms; the coefficients of this
difference are smooth to $y=0$.) Integrating this ODE shows that $(a,\varphi)$ is a finite sum of terms
$(a_j, \varphi_j) y^{\lambda_j}$, where the $\lambda_j$ are the indicial roots of ${\LKW}$ which lie between $\lambda_0$
and $\mu = \min\{2\lambda_0+1, \lambda_0 + 1\}$, and an error term vanishing at this faster rate $y^\mu$.
At the next step, inserting this new information into \eqref{iter} shows that this is now an ODE where the right
side has a partial expansion up to order $\min\{\mu, 2\mu\}$ plus an error term vanishing at that rate,
and so $(a,\varphi)$ has a partial expansion up to order $\mu_2 = \min \{\mu + 1, 2\mu + 1\}$.
This completes the proof of the existence of the expansion of $(a,\varphi)$ in the case
where $\lambda_0 \in (0, \overline{\lambda})$ is an elliptic weight.
\appendix
\section{Some Group Theory}\label{groups}
\def\frak{\mathfrak}
The purpose of this appendix is to describe some basic facts and examples in group theory as background to the paper.
First of all, up to isomorphism, the group $SU(2)$ or equivalently the Lie algebra $\frak{su}(2)$ has precisely
one irreducible representation of dimension $n$, for each positive integer $n$.
It is convenient to write $n=2j+1$ where $j$ is a non-negative integer or half-integer called the spin.
If $v_j$ denotes an irreducible $\frak{su}(2)$ representation of spin $j$, then for $j\geq j'$, we have
\begin{equation}\label{decomp}v_j\otimes v_{j'}\cong \oplus_{j''=j-j'}^{j+j'}v_{j''}. \end{equation}
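As a quick illustration of eqn.\ (\ref{decomp}), taking $j=1$ and $j'=1/2$ gives $v_1\otimes v_{1/2}\cong v_{1/2}\oplus v_{3/2}$, consistent with the dimension count $3\cdot 2 = 2 + 4$.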
Now, for $N\geq 2$, we will describe group
homomorphisms $\varrho:SU(2)\to G=SU(N)$, or equivalently Lie algebra homomorphisms $\varrho:\frak{su}(2)\to \frak{su}(N)$.
To describe such a homomorphism amounts to describing how the fundamental $N$-dimensional representation of $SU(N)$,
which we denote $V$, transforms under $\varrho(SU(2))$. As an $SU(2)$-module, $V$
will have to be the direct sum of a number of irreducible $SU(2)$ modules $v_{j_i}$ of dimension $n_i=2j_i+1$, for some $j_i$.
The possibilities simply correspond to partitions of $N$, that is to ways of writing
$N$ as an (unordered) sum of positive integers,
\begin{equation}\label{ecomp}N=n_1+n_2+\dots+n_s. \end{equation}
For example, the trivial homomorphism $\varrho:\frak{su}(2)\to\frak{su}(N)$, which maps $\frak{su}(2)$ to 0,
corresponds to the partition $N=1+1+\dots+1$ with $N$ terms.
At the other extreme, a principal embedding of $\frak{su}(2) $ in $\frak{su}(N)$
(which is the most important example for the present paper) corresponds to the partition with only one term, the integer $N$.
In general, we define the commutant $C$ of $\varrho$ as the subgroup of $SU(N)$ that commutes with $\varrho(SU(2))$;
its Lie algebra $\frak c$ is the subalgebra of $\frak{su}(N)$ that commutes with $\varrho(\frak{su}(2))$.
If $\varrho$ corresponds as in eqn.\ (\ref{ecomp}) to a partition with $s$ terms, then $C$ is a Lie group of rank $s-1$.
(It is abelian if and only if the $n_i$ are all distinct.) In particular, for $G=SU(N)$, the only case that $C$ is a finite group (or equivalently
a group of rank 0) is
that $s=1$, meaning that $\varrho$ is a principal embedding. In this case, $C$ is simply the center of $G$. Whenever $s>1$,
$C$ has a non-trivial Lie algebra, and this means, in the language of section \ref{second}, that $j_\sigma=0$ occurs
in the decomposition of $\frak g_\C$ under $\frak{su}(2)$. Thus, for $G=SU(N)$, the only case that $j_\sigma=0$ does
not occur in this decomposition is the case that $\varrho$ is a principal embedding.
To explicitly decompose $\frak{su}(N)$ under $\varrho(\frak{su}(2))$, we use the fact that $\frak{su}(N)$ is the traceless
part of $\mathrm{Hom}(V,V)$; equivalently it can be obtained from $V\otimes V^\vee$ by omitting a 1-dimensional
trivial representation. (Here $V^\vee$ is the dual of $V$.) Any $\frak{su}(2)$-module is isomorphic to its own dual,
so as a $\frak{su}(2)$ module, $\frak{su}(N)$ is $V\otimes V$ with a copy of the trivial module $v_0$ removed.
For example, if $\varrho$ is a principal embedding, so that $V$ is an irreducible $\varrho(\frak{su}(2))$
module $v_j$ with $N=2j+1$, then we use (\ref{decomp}) to learn that $\frak{su}(N)$ is the direct sum of
$\frak{su}(2)$-modules of spins $j_\sigma=1,2,\dots, N-1$.
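As a check on this, take $N=3$, so that $V\cong v_1$. Then eqn.\ (\ref{decomp}) gives $v_1\otimes v_1\cong v_0\oplus v_1\oplus v_2$, and removing the trivial summand leaves
\begin{equation*}
\frak{su}(3)\cong v_1\oplus v_2
\end{equation*}
as an $\frak{su}(2)$-module, with $j_\sigma=1,2$ and matching dimensions $3+5=8=\dim \frak{su}(3)$.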
As a corollary, we note that if $j_\sigma=0$ does not occur in the decomposition of $\frak{su}(N)$ (which happens only
if $\varrho$ is principal), then the $j_\sigma$'s are integers and in particular $j_\sigma=1/2$ does not occur in the decomposition
of $\frak{su}(N)$. As explained below, this statement has an analog for any simple Lie group $G$.
A few additional facts that follow from the above discussion of $SU(N)$ hold for all $G$.
The number of summands in the decomposition of $\frak g$ under a principal $\frak{su}(2)$ subalgebra
is always the rank of $G$ (this rank is $N-1$ for $G=SU(N)$). Also, the minimum
value of $j_\sigma$ for a principal embedding is always
$j_\sigma=1$, and this value occurs with multiplicity 1, corresponding to the $\frak{su}(2)$ submodule
$\varrho(\frak{su}(2))\subset \frak g$.
With similar elementary methods, we can analyze the other classical groups $SO(N)$ and $Sp(2k)$.
Here the following is useful. An $SU(2)$ module $v$ is said to be real, or to admit a real structure, if there
is a symmetric, non-degenerate, and $SU(2)$-invariant map $v\otimes v\to \C$; it is said to be pseudoreal, or to admit
a pseudoreal structure, if there is an antisymmetric, non-degenerate, and $SU(2)$-invariant map $v\otimes v\to \C$.
The representation $v_j$ is real (but not pseudoreal) if $j$ is an integer, or equivalently the dimension
$n=2j+1$ is odd, and pseudoreal (but not real) if $j$ is a half-integer, or equivalently the dimension $n=2j+1$ is even.
If $w$ is a 2-dimensional complex vector space (with trivial $SU(2)$ action), then $w$ admits both a symmetric nondegenerate
map $w\otimes w\to \C$ and an antisymmetric one. So if $v$ is either real or pseudoreal, then $v\oplus v\cong v\otimes w$
admits both a real structure and a pseudoreal one.
Suppose that
\begin{equation}\label{morex}v=\oplus_{j\geq 0}a_j v_j,~~~ a_j\in \Bbb Z\end{equation}
is an $SU(2)$ module that is the direct sum of $a_j$ copies of $v_j$ (with almost all $a_j$ vanishing).
The criterion
for $v$ to be real or pseudoreal reduces to separate conditions on each $a_j$:
$v$ is real precisely if $a_j$ is even for half-integer $j$ (with no restriction for integer $j$), and $v$ is pseudoreal precisely if
$a_j$ is even for integer $j$ (with no restriction for half-integer $j$).
Now let us consider homomorphisms $\varrho:SU(2)\to G$ for $G=SO(N)$.
Such a homomorphism can be described by giving the decomposition of the fundamental
$N$-dimensional representation $V$ of $SO(N)$ as an $SU(2)$-module. Thus, such a homomorphism
again determines a partition of the integer $N$, as in (\ref{ecomp}). However, now we must impose the condition that
the representation $V$ of $SO(N)$ is real. In view of the statements in the last paragraph,
the condition that this imposes on the partition is simply that even integers $n_i$ in (\ref{ecomp}) must occur with even
multiplicity.
The condition that the commutant $C$ of $SU(2)$ -- or more precisely of $\varrho(SU(2))$ --
is a finite group, and hence that $j_\sigma=0$ does not
occur in the decomposition of the Lie algebra $\frak{so}(N)$, is\footnote{If an integer $n_i$ appears with multiplicity $d_i$ in the partition of $N$,
then the commutant of $\frak{su}(2)$ in $SO(N)$ contains a factor of $SO(d_i)$ if $n_i$ is odd or $Sp(d_i)$ if $n_i$ is even. (The last statement
makes sense because $d_i$ is always even when $n_i$ is even.)
Hence the group $C$ is finite if and only if the $n_i$ are all distinct, so that the $d_i$ are 0 or 1.
The last statement is true for $G=Sp(2k)$ for similar reasons: if an integer $n_i$ appears with multiplicity $d_i$ in the partition of $2k$, then the commutant contains a factor of
$SO(d_i)$ if $n_i$ is even and of $Sp(d_i)$ if $n_i$ is odd. (For $G=Sp(2k)$, $d_i$ is even when $n_i$ is odd.)} that the integers $n_i$ in the partition must be all distinct.
But since even integers must occur with even multiplicity, this implies that the $n_i$ must be odd. For example, if $N$ is odd,
a principal embedding of $\frak{su}(2)$ in $\frak{so}(N)$ corresponds to a partition with only one term, the integer $N$.
But if $N$ is even, a principal embedding corresponds to the two-term partition $N=1+(N-1)$. For $SO(N)$, in contrast to $SU(N)$,
an embedding that is not principal can still have a trivial commutant. For example, for $N=9$,
the partition $9=1+3+5$ represents 9 as the sum of distinct odd integers; this embedding is not principal, but its commutant
is a finite group. When the commutant is not a finite group, it has a Lie algebra of positive dimension and hence $j_\sigma=0$ occurs in the
decomposition of $\frak{so}(N)$ under $\frak{su}(2)$.
For $G=SO(N)$, rather as we found for $SU(N)$,
there is also a useful elementary criterion that ensures that $j_\sigma=1/2$ does not appear in the decomposition
of $\frak{so}(N)$. In fact, there is a criterion that ensures that no half-integer value of $j_\sigma$ occurs
in this decomposition. For this, we first recall that $\frak{so}(N)\cong \wedge^2 V\subset V\otimes V$.
For a given $\frak{su}(2)$ embedding, the decomposition of $V\otimes V$ under $\frak{su}(2)$ can be worked out using (\ref{decomp}).
One finds that half-integer values of $j$ occur in $V\otimes V$ (and also in $\wedge^2V$) if and only if both
odd and even integers $n_i$ occur in the chosen partition of $N$. But we have already observed that if even integers
appear in this partition, then $j_\sigma=0$ occurs in the decomposition of $\frak{so}(N)$ under $\frak{su}(2)$.
So if $j_\sigma=0$ does not occur in the decomposition of $\frak{so}(N)$, then $j_\sigma=1/2$ also does not occur.
The case that $G=Sp(2k)$ for some $k$ can be analyzed similarly, with the words ``even'' and ``odd'' exchanged in some statements.
A homomorphism from $SU(2)$ to $Sp(2k)$ can be described by giving the decomposition of the $2k$-dimensional representation
$V$ of $Sp(2k)$ as a direct sum of $SU(2)$ modules. Thus, such a homomorphism determines a partition $2k=n_1+n_2+\dots+n_s$.
Now the fact the representation $V$ of $Sp(2k)$ is pseudoreal implies that odd integers occur in this partition with even multiplicity.
The condition that the commutant $C$ is a finite group, so that $j_\sigma=0$ does not occur in the decomposition
of $\frak{sp}(2k)$ under $\frak{su}(2)$, is again that the integers appearing in the partition should be distinct.
But now this implies that these integers are all even. A principal embedding is the case that the partition consists of only a single
integer $2k$. Just as for $SO(N)$, there are non-principal embeddings with the property that $j_\sigma=0$ does not occur
in the decomposition of $\frak{sp}(2k)$; these correspond to partitions of $2k$ as the sum of distinct even integers, for example
$6=2+4$.
By the same argument as for $SO(N)$, one can show that if $j_\sigma=0$ does not occur in the decomposition of $\frak{sp}(2k)$,
then half-integer values of $j_\sigma$ do not occur and in particular $j_\sigma=1/2$ does not occur. For this, one uses
the fact that $\frak{sp}(2k)\cong {\mathrm{Sym}}^2V\subset V\otimes V$, along with the rule (\ref{decomp}) for decomposition of tensor
products.
To understand homomorphisms from $SU(2)$ to an exceptional Lie group $G$, it is probably best to use less elementary
methods, and we will not explore this here. We remark, however, that the following feature of the above examples
is actually true for any simple Lie group $G$: if $j_\sigma=0$ does not occur in the decomposition of $\frak g_\C$ under $\frak{su}(2)$,
then only integer values of $j_\sigma$ occur in this decomposition and in particular $j_\sigma=1/2$ does not occur. (For a proof,
see the next paragraph.)
Given this, it follows from the formulas of section \ref{throots} that if $j_\sigma=0$ does not occur in the decomposition
of $\frak g_\C$, then there are no indicial roots in the gap between $\underline\lambda=-1$ and $\overline\lambda=1$.
Both $-1$ and $1$ are always indicial roots in this situation,
since $j_\sigma=1$ always occurs in the decomposition of $\frak g_\C$, the corresponding subspace of $\frak g_\C$ being
$\varrho(\frak{su}(2))$.
A proof of a claim in the last paragraph was sketched for us by B. Kostant. The complexification of $\varrho$ is a homomorphism
of complex Lie algebras $\varrho:\frak{sl}_2(\C)\to \frak g_\C$. We take a standard basis $(h,e,f)$ of $\frak{sl}_2(\C)$ and
write simply $(h,e,f)$ for their images in $\frak g_\C$. The hypothesis that half-integer values of $j_\sigma$ occur in the decomposition
of $\frak g_\C$ means, in the terminology of \cite{Carter}, p. 165, that $e$ is not even.
In this case, according to Proposition 5.7.6 of that
reference, $e$ is not distinguished, and therefore, according to Proposition 5.7.4 of the same reference, the homomorphism $\mathrm{ad}(e):
\frak g(0)\to \frak g(2)$ has a non-trivial kernel. This kernel is the commutant $\frak c$ of $\varrho(\frak{sl}_2(\C))$.
\vskip1cm
\noindent {\it {Acknowledgements}} Research of RM supported in part by NSF Grant DMS-1105050.
Research of EW supported in part by NSF Grant PHY-1314311.
\bibliographystyle{unsrt} | 19,299 |
San Marino's Fortress (by paido)
Jaku 2004-09-02 19:59
Great composition. Nice landscape. A bit more contrast would make it even better - have a look at my workshop.
TITLE: Understanding Generalised Quadrangles
QUESTION [4 upvotes]: I have a project to do on Generalised Quadrangles, specifically GQ(2,2). The project needs to have information about the construction of GQ(2,2), to prove this construction meets the conditions of a generalised quadrangle and showing that $S_{6}$ acts "flag transitively" on this object.
I'm aware now of the definition of $GQ(2,2)$, and I have some idea on how to construct it. Essentially, it seems you take $6$ points, and then take all possible triplets of pairs that can be formed from this (and there are $\binom{6}{2} = 15$ of these unordered pairs) and then you just put lines between triplets who share points. However, I'm struggling to prove that this matches the definition of a generalized quadrangle. Furthermore, I don't really understand what flag transitivity means, or how to show that $S_{6}$ acts in this way.
REPLY [6 votes]: First I'll go over flag transitivity in general, flesh out your construction of $GQ(2,2)$, and then give a hint for how to prove that the action of $S_6$ on $GQ(2,2)$ is flag-transitive.
First, the definition:
Let $Q$ be a generalized quadrangle, and $G$ be a group acting on $Q$. We say that the action of $G$ on $Q$ is flag transitive if for every pair $((p_1,l_1), (p_2,l_2))$ where $p_i$ is a point incident with the line $l_i$, there exists some $g\in G$ such that $(p_1,l_1)^g = (p_2,l_2)$.
If you're familiar with point transitivity and/or line transitivity, this should feel a little similar. A flag is just an incident point-line pair, and being flag transitive means that the action takes one flag to any other flag. Note that this is stronger than both point and line transitivity.
Now, for your construction. I'm going to go over it in detail mostly for myself; if I am misunderstanding your idea, let me know!
Take $S = \{1, 2, 3, 4, 5, 6\}$ and consider the set $T = \{\{a,b\}: a,b\in S, a\ne b\}$. As you mentioned, since ${6\choose 2} = 15$, the set $T$ has $15$ elements. Take $T$ to be the point-set of $GQ(2,2)$. Then three points $A,B,C\in T$ are all pairwise collinear (equivalently, define a unique line) if and only if $A\cup B\cup C= S$. Thus, take the set $R = \{\{A,B,C\}:A,B,C\in T, (A\cup B\cup C) = S\}$ to be the set of lines of $GQ(2,2)$. It should be shown that $R$ also has $15$ elements (since $GQ(s,t)$ has $(t+1)(st+1)$ lines). To prove that this yields your generalized quadrangle, you should prove three things:
1. For every $A\in T$, there exist exactly three lines incident with $A$. This means you should find exactly three pairs $\{B_i,C_i\}$ where $\{A,B_i,C_i\}\in R$.
2. For every $L\in R$, there exist exactly three points incident with $L$. This comes more-or-less from the definition (unless I'm missing a subtlety).
3. For every pair $(A,L)$ where $A$ and $L$ are not incident, there exists a unique point $B$ incident with $L$ and collinear with $A$.
If these all hold, then (since $GQ(2,2)$ is unique), you will have truly constructed $GQ(2,2)$.
To prove that the action of $S_6$ on $GQ(2,2)$ is flag transitive, you should ask yourself first: what is the action? If $\sigma\in S_6$, then for $\{a,b\}\in T$, define $\{a,b\}^{\sigma}:= \{\sigma(a), \sigma(b)\}$. This gives a natural action on the points. Similarly, for $\{A,B,C\}\in R$, define $\{A,B,C\}^{\sigma}:= \{A^{\sigma}, B^{\sigma}, C^{\sigma}\}$. Lastly, for $A\in T$ and $L\in R$ where $A$ is incident to $L$, define $(A,L)^{\sigma}:= (A^{\sigma}, L^{\sigma})$. You should prove that these definitions are indeed group actions!
Hopefully, this gives a good foundation for how to show that this action of $S_6$ on flags is in fact transitive. You might want to show that for any flag $(A,L)$, there exists some $\sigma\in S_6$ such that $(A,L)^{\sigma} = (\{1,2\}, \{\{1,2\},\{3,4\},\{5,6\}\})$. This almost immediately implies transitivity (why?). | 119,187 |
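As a concrete sanity check on the first of the three conditions above: the lines through the point $\{1,2\}$ correspond exactly to the ways of splitting the remaining symbols $\{3,4,5,6\}$ into two pairs, namely
$$\{\{1,2\},\{3,4\},\{5,6\}\},\qquad \{\{1,2\},\{3,5\},\{4,6\}\},\qquad \{\{1,2\},\{3,6\},\{4,5\}\},$$
so there are exactly three lines through this point, and (by applying suitable permutations from $S_6$) the same count holds at every other point.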
Bella Zafrina Veksler
zafrib
rpi.edu
Rensselaer Polytechnic Institute
Publications:
Gartenberg, D., Veksler, B., Gunzelmann, G., & Trafton, J. G. (2014). An ACT-R process model of the signal duration phenomenon of vigilance. In Proceedings of 58th annual meeting of the Human Factors and Ergonomics Society.
Veksler, B. Z. (2010). Visual search strategies and the layout of the display. In D. D. Salvucci & G. Gunzelmann (Eds.), Proceedings of the 10th International Conference on Cognitive Modeling (pp. 323-324). Philadelphia, PA: Drexel University. | 354,501 |
Age: 13
City: Lehi
The Journey: Reese uploaded her first music video, “Glorious,” to YouTube when she was 9. It was a simple recording filmed on an iPad and edited in iMovie. Since then, Reese has produced 18 videos across two YouTube channels and a Facebook page and is quickly approaching 10 million collective views. Reese has also performed in local theater productions including Les Miserables, Joseph and the Amazing Technicolor Dreamcoat, and Annie. She is a ballet student at the Barlow Arts Conservatory and has performed in the Nutcracker. Reese’s signature sparkle can also be found in commercials, training videos and sitcoms. Her latest project is portraying Young Beth in the 2018 movie “Little Women,” produced by the same team that brought us “Once I Was A Beehive.”
Road Blocks: “Rapidly-changing technology and artificial intelligence being the deciding factor of which content is served to the end user can be frustrating. It’s tricky trying to balance content creation with the science of the latest algorithm changes.”
Vacation Destination: “Fukuoka, Japan. I participated in a small singing group tour under the direction of Masa Fukuda. It helped me realize music can transcend language barriers and foster lifelong friendships and connections.”
Local Destination: “The Tulip Festival at Thanksgiving Point.”
Must-Have: “Lip gloss! You never know when you will need it!”
Sheep farmer Jerry McCarthy set out to find a solution after he noticed that his sheep were becoming resistant to the usual showers, doses and pour-ons. The Kerryman also used to travel around the country driving trucks for different businesses and noticed that sheep everywhere were scratching.
He would see wool on fences and hedges all over the place and he knew that the sheep were losing their wool by scratching their bodies, trying to rid themselves of irritation.
Solution
The innovative farmer saw that a man called Neil Fell had started mobile sheep dipping in the UK after returning from Australia and Jerry thought “I could make that”.
The Kenmare native figured that he could buy a truck and get an engineer to convert it into something that he could use as a mobile dipping pool.
Jerry worked closely with Neil and learned from his business model; Neil is hoping to dip 200,000 sheep this year, having completed 175,000 in the latter half of last year alone.
He went ahead and purchased a 2008, MAN truck from the UK and proceeded to present it to a number of engineering companies for conversion – they all turned him away and even said that he was ‘mad’.
Not one to be defeated, he simply took his €130,000 over to the UK and had his bespoke dipping machine created there.
How it works
“If you imagine going into a chipper and you put the chips into the basket and lower it down into the grease – that’s how my system works,” said Jerry, with his rather clear explanation.
“They climb up the back of the trailer and into a 4 X 8 ft cage that holds on average 10 sheep. It’s all hydraulic and we just lower them down into it,” he continued.
While they are in the cage, the sheep are submerged for approximately one minute. Jerry controls the hydraulics from a button on the side of the machine and will briefly submerge the sheep's heads two or three times.
Afterwards, the sheep leave the cage and stand towards the front of the lorry for a few minutes while they drain off. They then leave the vehicle to the left or right, depending on the farm, and it's all over.
“I have somewhere around 20,000 sheep put through the machine since I put the truck on the road last September.”
Jerry will get through about 200 sheep per hour. “I dipped 2,000 lambs in nine hours before Christmas; they were store lambs weighing about 35kg each,” he smiled.
Challenges
One challenge that Jerry faces with his mobile service vehicle is filling the tank with 3,000-litres of water. The machine itself is fully fitted with pumps so that it can use water from a nearby lake, pond or other water sources.
“A lot of farmers that know that I’m coming, and they prepare ahead by having their water tanks full up for me,” he said appreciatively.
“You can shower 100 sheep on 15-litres of water. I’ve 300 sheep done this morning and I lost 700-litres of water on them animals,” he acknowledged.
It is Jerry’s argument; however, that soaking the sheep will garner the most effective results and that is where the plunge pool has the advantage.
Another challenge for the new dipping business is that when people see the Kerry registration plate, they believe that Jerry won’t go as far north as people might expect, but he says that he will travel anywhere.
“I'm up in Clifden and Westport this week now and it doesn’t matter how big or small your flock is, I’ll get to you alright”, he said with assurance.
Accreditation
It is believed that Tagline Mobile Dipping Service is the only one of its kind in the country. Jerry is also the only commercial dipper in Ireland approved by the Department of Agriculture, Food and the Marine.
Jerry has acted on behalf of the department on a small number of animal welfare cases and he is also Bord Bia-approved, which means that he is entitled to give VAT receipts.
The vehicle also incorporates a tag reader which means Jerry can supply a list of electronic IDs of the treated sheep.
The entrepreneur from Kenmare seems to have tapped into a market with great potential for success. He is already a busy man, rearing a flock of 40 pedigree Texels and 30 Mules.
He lives in Kenmare with his children - Danny (5) and Ava (6) and is currently preparing for the wedding of the year with his fiancée Joanne.
Information
For further information, click here or see Facebook
If you would like to share your story, email - [email protected] - with a short bio. | 3,642 |
I therefore set out to see what options there are in the world of coffee mugs for men of advancing age. Mugs, that is, with an "old" theme.
Unfortunately, I was unable to find my husband's Old Guys Rule coffee mug to share on this page. The Old Guys Rule series includes mugs for all types of men from golfers to fishermen to storytellers...you can see which mugs from that series are available right now on eBay by clicking here.
I did find this handsome black and orange mug on Amazon that proclaims the owner is of Premium Quality and crafted from A Very Old Recipe. That he is a Vintage Dude. That he is The Man, The Myth, The Legend, one composed of 60 Percent Courage and 40 Percent Ability. If that describes an old guy you know, you can find the mug on Amazon by clicking right here.
You might want to think carefully before you gift your dad or even your grandfather with one of these mugs. Will he love it or will he be offended? That is a very important consideration. I know my husband would love it, especially if he needed a new coffee mug...and it would be a perfect Father's Day, birthday or Christmas present for an "old" guy, don't you think?:
Order the Old Vintage Dude mug on Amazon.
See what Old Guys Rule mugs are available on eBay. | 8,220 |
\begin{document}
\title[Reverse mathematics and uniformity in proofs]{Reverse mathematics and uniformity in proofs without excluded middle}
\author{Jeffry L. Hirst}
\address{Department of Mathematical Sciences\\
Appalachian State University\newline
Boone, NC 28608, USA}
\email{[email protected]}
\urladdr{www.mathsci.appstate.edu/\urltilde jlh}
\author{Carl Mummert}
\address{Department of Mathematics\\
Marshall University\newline
One John Marshall Drive\\
Huntington, WV 25755, USA}
\email{[email protected]}
\urladdr{www.science.marshall.edu/mummertc}
\date{\today}
\begin{abstract}
We show that when certain statements are provable
in subsystems of constructive analysis using intuitionistic
predicate calculus, related sequential statements are provable in
weak classical subsystems. In particular, if a $\Pi^1_2$ sentence
of a certain form is provable using
E-HA${}^\omega$ along with the axiom of
choice and an independence of premise principle, the sequential form
of the statement is provable in the classical system RCA.
We obtain this and similar results using applications of modified
realizability and the \textit{Dialectica} interpretation. These
results allow us to use techniques of classical reverse mathematics
to demonstrate the unprovability of several mathematical principles in
subsystems of constructive analysis.
\end{abstract}
\maketitle
\section{Introduction}
We study the relationship between systems of intuitionistic arithmetic
in all finite types (without the law of the excluded middle) and weak
subsystems of classical second order arithmetic. Our theorems give
precise expressions of the informal idea that if a sentence $\forall
X\, \exists Y\, \Phi(X,Y)$ is provable without the law of the excluded
middle, then the proof should be sufficiently direct that the stronger
\textit{sequential form}
\[
\forall \langle X_n \mid n \in \setN \rangle\,
\exists \langle Y_n \mid n \in \setN\rangle\, \forall n\,
\Phi(X_n,Y_n)
\]
is provable in a weak subsystem of classical arithmetic. We call
our theorems ``uniformization results'' because the provability of
the sequential form demonstrates a kind of uniformity in the proof of
the original sentence.
The subsystems of classical arithmetic of interest are
$\rca_0$, which is well-known in Reverse
Mathematics~\cite{Simpson-SOSOA}, and its extension $\rca$ with
additional induction axioms. These systems are closely related to
computable analysis. In particular, both subsystems are satisfied in the
model $\REC$ that has the set $\omega$ of standard natural numbers as
its first order part and the collection of all computable subsets of
$\omega$ as its second order part. When the conclusions of our
uniformization results are viewed as statements about $\REC$, they
provide a link between constructive analysis and computable analysis.
Moreover, because $\rca_0$ is the base system most often employed in
Reverse Mathematics, our results also provide a link between the
fields of Reverse Mathematics and constructive analysis. Full
definitions of the subsystems of intuitionistic and classical arithmetic
that we study are presented in section~\ref{sec2}.
In section~\ref{sec3}, we prove uniformization results using modified
realizability, a well-known tool in proof theory. In particular, we
show there is a system $I_0$ of intuitionistic arithmetic in all finite types
such that whenever an $\forall\exists$ statement of a certain
syntactic form is provable in $I_0$, its sequential form is
provable in $\rca_0$ (Theorem~\ref{719J}). Moreover, the system $I_0$ contains the full scheme for the
axiom of choice in all finite types, which is classically much
stronger than $\rca_0$. We have attempted to make section~\ref{sec3}
accessible to a general reader who is familiar with mathematical logic
but possibly unfamiliar with modified realizability.
In section~\ref{sec4}, we give several examples of theorems in
classical mathematics that are provable in $\rca_0$ but not provable
in~$I_0$. These examples demonstrate empirically that the syntactic
restrictions within our uniformization theorems are not excessively
tight. Moreover, our uniformization theorems allow us to obtain these
unprovability results simply by showing that the sequential versions of the
statements are unprovable in $\rca_0$, which can be done using
classical techniques common in Reverse Mathematics. In this way, we
obtain results on unprovability in intuitionistic arithmetic solely through a
combination of our uniformization theorems and the study of classical
arithmetic. A reader who is willing to accept the results of
section~\ref{sec3} should be able to skim that section and then
proceed directly to section~\ref{sec4}.
In section~\ref{sec5}, we prove uniformization results for $\rca_0$
and $\rca$ using the {\it Dialectica} interpretation of G\"odel.
These results allow us to add a Markov principle to the system of
intuitionistic arithmetic in exchange for shrinking the class of
formulas to which the theorems apply.
We would like to thank Jeremy Avigad and Paulo Oliva for
helpful comments on these results. We began this work
during a summer school on proof theory taught by Jeremy
Avigad and Henry Towsner at Notre Dame in 2005. Ulrich
Kohlenbach generously provided some pivotal insight during
the workshop on Computability, Reverse Mathematics, and
Combinatorics at the Banff International Research Station in~2008,
and much additional assistance in later conversations.
\section{Axiom systems}\label{sec2}
Our results make use of subsystems of intuitionistic and classical
arithmetic in all finite types. The definitions of these systems rely on
the standard type notation in which the type of a natural number is
$0$ and the type of a function from objects of type $\rho$ to objects
of type $\tau$ is $\rho \to \tau$. For example, the type of a
function from numbers to numbers is $0 \to 0$. As is typical in the
literature, we will use the types $1$ and $0\to 0$ interchangeably,
essentially identifying sets with their characteristic functions. We
will often write superscripts on quantified variables to indicate
their type.
Full definitions
of the following systems are given by Kohlenbach~\cite{Koh-book}*{section~3.4}.
\begin{definition}
The system $\haw$ is a theory of intuitionistic arithmetic in all finite types first defined by Feferman~\cite{Feferman-1977}.
The language $\lang(\haw)$ includes the constant 0; the successor,
addition, and multiplication operations; terms for primitive recursion
on variables of type $0$; and the projection and substitution
combinators (often denoted $\Pi_{\rho,\tau}$ and
$\Sigma_{\delta,\rho,\tau}$ \cite{Koh-book}) which allow terms to be
defined using $\lambda$ abstraction. For example, given $x \in \setN$
and an argument list $t$, $\haw$ includes a term for $\lambda t.x$,
the constant function with value $x$.
The language includes equality as a primitive relation only for type
$0$ objects (natural numbers). Equality for higher types is defined
pointwise in terms of equality of lower types, using the following
extensionality scheme
\[ {\mathsf{E}}\colon \forall x^\rho \forall y^\rho \forall z^{\rho\to
\tau}\, ( x =_\rho y \to z(x) =_\tau z(y) ).
\]
The axioms of $\haw$ consist of this extensionality scheme, the
basic arithmetical axioms, the defining axioms for the term-forming
operators, and an axiom scheme for induction on quantifier-free
formulas (which may have parameters of arbitrary types).
\end{definition}
\begin{definition}[Troelstra~\cite{troelstra73}*{1.6.12}]
The subsystem $\hawfi$ is an extension of $\haw$ with additional terms
and stronger
induction axioms. Its language contains additional term-forming
recursors $R_\sigma$ for all types $\sigma$. Its new axioms include
the definitions of these recursors and the full induction scheme
\[\ia\colon A(0) \to (\forall n(A(n) \to A(n+1)) \to \forall n
A(n)),\]
in which $A$ may have parameters of arbitrary types.
\end{definition}
The following class of formulas will have an important role in our results. These are, informally, the formulas that have no existential commitments in intuitionistic systems.
\begin{definition}
A formula of $\lang(\haw)$
is \textit{$\exists$-free} if it is built from prime (that is, atomic)
formulas using only universal quantification and the connectives
$\land$ and $\to$. Here the symbol $\bot$ is treated as a prime
formula, and a negated formula $\lnot A$ is treated as an abbreviation for
$A \to \bot$; thus $\exists$-free formulas may include both $\bot$ and~$\lnot$.
\end{definition}
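For example, the formula $\forall x^0\, (x+0=x) \land (\lnot(0=1) \to \forall y^0\, (y\cdot 0 = 0))$ is $\exists$-free, while neither $\exists x^0\, (x=0)$ nor $\forall x^0\, (x=0 \lor \lnot(x=0))$ is, since these contain $\exists$ and $\lor$, respectively.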
We will consider extensions of $\haw$ and $\hawfi$ that include additional axiom schemes. The following schemes have been discussed by Kohlenbach~\cite{Koh-book}
and by Troelstra~\cite{troelstra73}.
\begin{definition} The following axiom schemes are defined in
$\lang(\hawfi)$. When we adjoin a scheme to $\haw$, we implicitly
restrict it to $\lang(\haw)$. The formulas in these schemes may have
parameters of arbitrary types.
\begin{list}{$\bullet$}{}
\item \textit{Axiom of Choice}. For any $x$ and $y$ of finite type,
\[ \ac \colon \forall x\, \exists y A(x,y) \to \exists Y\, \forall x\, A(x,Y(x)).\]
\item \textit{Independence of premise for $\exists$-free formulas}.
For $x$ of any finite type, if $A$ is $\exists$-free and
does not contain $x$, then
\[\ipwef\colon (A \to \exists x B(x)) \to \exists x (A \to B(x)).\]
\item \textit{Independence of premise for universal formulas}. If
$A_0$ is quantifier free, $\forall x$ represents a block
of universal quantifiers, and $y$ is of any type and is
not free in $\forall x A_0(x)$, then
\[\ipwa\colon (\forall x A_0(x) \to \exists y B(y)) \to \exists y (\forall x A_0 (x) \to B(y)).\]
\item \textit{Markov principle for quantifier-free formulas}. If $A_0$ is quantifier-free and
$\exists x$ represents a block of existential quantifiers
in any finite type, then
\[\markov\colon \neg\neg \exists x A_0 (x) \to \exists x A_0 (x).\]
\end{list}
\end{definition}
\subsection{Classical subsystems}
The full scheme $\ac$ for the axiom of choice in all finite types,
which is commonly included in subsystems of intuitionistic arithmetic,
becomes extremely strong in the presence of the law of the excluded
middle. For this reason, we will be interested in the restricted choice
scheme
\[
{\mathsf{QF}{\text-}\mathsf{AC}}^{\rho,\tau}\colon \forall x^\rho\, \exists y^\tau A_0(x,y) \to \exists Y^{\rho \to \tau}\, \forall x^\rho A_0 (x, Y(x) ),
\]
where $A_0$ is a quantifier-free formula that may have parameters.
We obtain subsystems of classical arithmetic by adjoining forms of this
scheme, along with the law of the excluded middle, to systems
of intuitionistic arithmetic. Because these systems include the law
of the excluded middle, they also include all of classical predicate
calculus.
\begin{definition} The system $\rcaw_0$ consists of $\haw$ plus ${\sf
QF{\text-}AC}^{1,0}$ and the law of the excluded middle.
The system $\rcaw$ consists of $\hawfi$ (which includes full
induction) plus ${\sf QF{\text-}AC}^{1,0}$ and the law of the
excluded middle.
\end{definition}
We are also interested in the following second order restrictions of
these subsystems. Let $\hatwo$ represent the restriction of $\haw$ to
formulas in which all variables are type $0$ or $1$, and let
$\hafitwo$ be the similar restriction of $\hawfi$ in which variables
are limited to types $0$ and $1$ and the recursor constants are
limited to those of type~$0$.
\begin{definition}
The system $\rca_0$ consists of $\hatwo$ plus ${\sf
QF{\text-}AC}^{0,0}$ and the law of the excluded middle.
The system $\rca$ consists of $\hafitwo$ (which includes the full
induction scheme for formulas in its language) plus ${\sf
QF{\text-}AC}^{0,0}$ and the law of the excluded middle.
\end{definition}
The system $\rca_0$ (and hence also $\rcaw_0$) is able to prove the
induction scheme for $\Sigma^0_1$ formulas using ${\sf QF}\text{-}{\sf
AC}^{0,0}$ and primitive recursion on variables of type~$0$,
as noted by Kohlenbach~\cite{Koh-HORM}.
The following conservation results show that the second order subsystems
$\rca$ and $\rca_0$ have the same deductive strength for
sentences in their restricted languages as the corresponding
higher-type systems $\rca^\omega$ and $\rca^\omega_0$,
respectively.
\begin{theorem}\label{consrcao}{\cite{Koh-HORM}*{Proposition~3.1}}
For every sentence $\Phi$ in $\lang(\rca_0)$,
if $\rcaw_0 \vdash \Phi$ then $\rca_0 \vdash \Phi$.
\end{theorem}
The proof of this theorem is
based on a formalization of the extensional model of the hereditarily
continuous functionals ($\mathsf{ECF}$), as presented in section~2.6.5
of Troelstra~\cite{troelstra73}. The central notion is that
continuous objects of higher type can be encoded by lower type objects.
For example, if $\alpha$ is a functional of type $1 \to 0$ and
$\alpha$ is continuous in the sense that the value of $\alpha (X)$
depends only on a finite initial segment of the characteristic
function of $X$, then there is an {\sl associated function}
\cite{Kleene} of type $0 \to 0$ that encodes all the information
needed to calculate values of~$\alpha$. Generalizing this notion,
with each higher-type formula $\Phi$ we can associate a second order
formula $\Phi_{\sf ECF}$ that encodes the same information. The proof
sketch for the following result indicates how this is applied to
obtain conservation results.
\begin{theorem}\label{consrca}
For each sentence $\Phi$\/ in $\lang(\rca)$, if\/ $\rcaw \vdash \Phi$
then $\rca \vdash \Phi$.
\end{theorem}
\begin{proof}
The proof proceeds in two steps. First, emulating section~2.6.5 and
Theorem~2.6.10 of Troelstra~\cite{troelstra73}, show that if $\rcaw
\vdash \Phi$ then $\rca \vdash \Phi_{\sf ECF}$. Second, following
Theorem~2.6.12 of Troelstra \cite{troelstra73}, prove that if $\Phi$
is in the language of $\rca$ then $\rca \vdash \Phi \leftrightarrow
\Phi_{\sf ECF}$.
\end{proof}
The classical axiomatization of $\RCAo$, presented by
Simpson~\cite{Simpson-SOSOA}, uses the set-based language $L_2$ with
the membership relation symbol~$\in$, rather than the language based
on function application used in~$\haw$. The system defined above as
$\RCAo$ is sometimes denoted $\RCAo^2$ to indicate it is a restriction
of $\RCAo^\omega$. As discussed by Kohlenbach~\cite{Koh-HORM},
set-based $\RCAo$ and function-based $\RCAo^2$ are each included in a
canonical definitional extension of the other, and the same holds for
set-based $\rca$ and function-based $\rca^2$. Throughout this paper,
we use the functional variants of $\RCAo$ and $\rca$ for convenience,
knowing that our results apply equally to the traditionally
axiomatized systems.
\section{Modified realizability}\label{sec3}
Our most broadly applicable uniformization theorems are
proved by an application of modified realizability, a
technique introduced by Kreisel~\cite{KR}. Excellent expositions on
modified realizability are given by
Kohlenbach~\cite{Koh-book} and
Troelstra~\cites{troelstra73,troelstra-HP}. Indeed, our proofs
make use of only minute modifications of results stated in these sources.
Modified realizability is a scheme for matching each formula $A$ with
a formula \mbox{$t \mr A$} with the intended meaning ``the sequence
of terms $t$ realizes~$A$.''
\begin{definition}\label{719A}
Let $A$ be a formula in $\lang ( \hawfi )$, and let $\seq x$ denote a possibly
empty tuple of terms whose variables do not appear free in $A$. The formula
$\seq x \mr A$ is defined inductively as follows:
\begin{list}{}{}
\item [(1)] $\seq x \mr A$ is $A$, if $\seq x$ is empty and $A$ is a prime formula.
\item [(2)] $\seq x , \seq y \mr (A \land B)$ is $\seq x \mr A \land \seq y \mr B$.
\item [(3)] $z^0, \seq x, \seq y \mr (A \lor B)$ is $(z = 0 \to \seq x \mr A) \land (z \neq 0 \to \seq y \mr B)$.
\item [(4)] $\seq x \mr (A \to B)$ is $\forall \seq y\, ( \seq y \mr A \to \seq x \seq y \mr B)$.
\item [(5)] $\seq x \mr (\forall y^\rho A(y))$ is $\forall y^\rho (\seq x \seq y \mr A(y))$.
\item [(6)] $z^\rho , \seq x \mr (\exists y^\rho A(y))$ is $\seq x \mr A(z)$.
\end{list}
Note that if $A$ is a prime formula then $A$ and $t \mr A$ are identical; this is even true for $\exists$-free formulas if we ignore dummy quantifiers.
\end{definition}
We prove each of our uniformization results in two steps. The
first step shows that whenever an $\forall \exists$ statement is
provable in a particular subsystem of intuitionistic arithmetic, we
can find a sequence of terms that realize the statement. The second
step shows that a classical subsystem is able to leverage the terms in
the realizer to prove the sequential version of the original
statement.
We begin with systems containing the full induction scheme.
For the first step, we require the following theorem.
\begin{theorem}[\cite{Koh-book}*{Theorem~5.8}]\label{719B}
Let $A$ be a formula in $\lang ( \hawfi )$. If\/
\[
\hawfi +\ac + \ipwef \vdash A
\]
then there is a tuple $t$ of terms of $\lang(\hawfi)$
such that $\hawfi \vdash t \mr A$.
\end{theorem}
For any formula $A$, $\hawfi + \ac + \ipwef$ is able to prove $A
\leftrightarrow \exists x ( x \mr A)$. However, the deduction of $A$
from $(t \mr A)$ directly in $\hawfi$ is only possible for some
formulas.
\begin{definition}\label{719C}
$\Gamma_1$ is the collection of formulas in $\lang ( \hawfi )$ defined inductively as follows.
\begin{list}{}{}
\item [(1)] All prime formulas are elements of $\Gamma_1$.
\item [(2)] If $A$ and $B$ are in $\Gamma_1$, then so are $A \land B$,
$A\lor B$, $\forall x A$, and $\exists x A$.
\item [(3)] If $A$ is
$\exists$-free and $B$ is in $\Gamma_1$, then $(\exists x
A \to B)$ is in $\Gamma_1$, where $\exists x$ may represent a block of
existential quantifiers.
\end{list}
\end{definition}
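For example, the sentence
\begin{equation*}
\forall x^1\, \exists y^1\, \bigl( \exists n^0\, (x(n)=0) \to \forall m^0\, (y(m)=x(m)) \bigr)
\end{equation*}
is in $\Gamma_1$: the conclusion is in $\Gamma_1$ by clauses (1) and (2), the premise is an existentially quantified prime (hence $\exists$-free) formula, so the implication is in $\Gamma_1$ by clause (3), and the outer quantifiers preserve membership in $\Gamma_1$ by clause (2). By contrast, an implication whose premise is not of the form $\exists x\, A$ with $A$ $\exists$-free is not covered by clause (3).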
The class $\Gamma_1$ is sometimes defined in terms of ``negative''
formulas~\cite{troelstra73}*{Definition~3.6.3}, those which can be constructed
from negated prime formulas by means of $\forall$, $\land$, $\to$, and
$\bot$. In all the systems studied in this paper, every $\exists$-free
formula is equivalent to the negative formula obtained by replacing
each prime formula with its double negation. Thus the distinction
between negative and $\exists$-free will not be significant.
The next lemma is proved by Kohlenbach~\cite{Koh-book}*{Lemma~5.20}
and by Troelstra~\cite{troelstra73}*{Lemma~3.6.5}.
\begin{lemma}
\label{719D}
For every formula $A$ in $\lang(\hawfi)$, if $A$ is in $\Gamma_1$,
then $\hawfi \vdash (t \mr A ) \to A$.
\end{lemma}
Applying Theorem~\ref{719B} and Lemma~\ref{719D}, we now prove the
following term extraction lemma, which is similar to the main
theorem on term extraction via modified realizability (Theorem 5.13) of
Kohlenbach~\cite{Koh-book}. Note that $\forall x\, \exists y\, A$ is in
$\Gamma_1$ if and only if $A$ is in~$\Gamma_1$.
\begin{lemma}\label{719E}
Let $\forall x^\rho\, \exists y^\tau A(x,y)$ be a sentence of $\lang(\hawfi)$
in $\Gamma_1$, where $\rho$ and $\tau$ are arbitrary types.
If
\[
\hawfi + \ac + \ipwef \vdash \forall x^\rho\, \exists y^\tau A(x,y),
\]
then $\rcaw \vdash \forall x^\rho A(x, t(x))$, where $t$
is a suitable term of\/~$\lang(\hawfi)$.
\end{lemma}
\begin{proof}
Assume that $\hawfi + \ac + \ipwef \vdash \forall x^\rho
\exists y^\tau A(x,y)$ where $A(x,y)$ is in $\Gamma_1$.
By Theorem~\ref{719B}, there is a tuple $t$ of terms of
$\lang(\hawfi )$ such that $\hawfi$ proves $t \mr \forall x^\rho
\exists y^\tau A (x,y)$. By clause (5) of
Definition~\ref{719A}, $\hawfi \vdash \forall x^\rho (
t(x) \mr \exists y^\tau A (x,y))$. By clause (6) of
Definition~\ref{719A}, $t$ has the form $t_0 , t_1$
and $\hawfi \vdash \forall x^\rho [t_1 (x) \mr A (
x, t_0 (x))]$. Because $A(x,y)$ is in~$\Gamma_1$,
Lemma~\ref{719D} shows that $\hawfi \vdash \forall x^\rho A ( x, t_0 (x))$.
Because $\rcaw $ is an extension of $\hawfi$, we see that
$\rcaw \vdash \forall x^\rho A (x, t_0 (x))$.
\end{proof}
We are now prepared to prove our first uniformization theorem.
\begin{theorem}\label{719F}
Let $\forall x \exists y A(x,y)$ be a sentence of $\lang(\hawfi)$ in $\Gamma_1$. If
\[
\hawfi + \ac + \ipwef \vdash \forall x\, \exists y\, A(x,y),
\]
then
\[
\rcaw \vdash \forall \seqx \, \exists \seqy \, \forall n\, A(x_n,y_n).
\]
Furthermore, if $x$ and $y$ are both type $1$
\textup{(}set\textup{)} variables, and the formula $\forall x\, \exists y
A(x,y)$ is in $\lang ( \rca )$, then $\rcaw$ may be replaced
by $\rca$ in the implication.
\end{theorem}
\begin{proof}
Assume that $\hawfi + \ac + \ipwef \vdash \forall x^\rho
\exists y^\tau A(x,y)$. We may apply Lemma~\ref{719E} to
extract the term $t$ such that $\rcaw \vdash \forall
x^\rho A(x, t(x))$. Working in $\rcaw$, fix any sequence
$\seqx$. This sequence is a function
of type $0 \to \rho$, so by $\lambda$ abstraction we can
construct a function of type $0 \to \tau$ defined by
$\lambda n . t(x_n )$. Taking $\seqy$ to be this sequence, we
obtain $\forall n\, A(x_n , y_n )$. The final sentence
of the theorem follows immediately from the fact that $\rcaw$ is
a conservative extension of $\rca$ for formulas in $\lang
(\rca )$.
\end{proof}
We now turn to a variation of Theorem~\ref{719F} that replaces
$\hawfi$ and $\rcaw$ with $\haw$ and $\rcaw_0$, respectively.
Lemmas~\ref{719G} and~\ref{719H} are proved by imitating the proofs of
Theorem~\ref{719B} and Lemma~\ref{719D}, respectively, as described in the
first paragraph of section 5.2 of Kohlenbach~\cite{Koh-book}.
\begin{lemma}\label{719G}
Let $A$ be a formula in $\lang ( \haw )$. If\/
$
\haw +\ac + \ipwef \vdash A
$,
then there is a tuple $t$ of terms of $\lang(\haw)$ such that $\haw \vdash t \mr A$.
\end{lemma}
\begin{lemma}\label{719H}
Let $A$ be a formula of $\lang(\haw)$. If $A$ is in $\Gamma_1$,
then $\haw \vdash (t \mr A ) \to A$.
\end{lemma}
\begin{lemma}\label{719I}
Let $\forall x^\rho\, \exists y^\tau A(x,y)$ be a sentence of $\lang(\haw)$
in $\Gamma_1$, where $\rho$ and $\tau$ are arbitrary types.
If
\[
\haw + \ac + \ipwef \vdash \forall x^\rho \exists y^\tau A(x,y),
\]
then $\rcaw_0 \vdash \forall x^\rho A(x, t(x))$, where $t$
is a suitable term of\/~$\lang(\haw)$.
\end{lemma}
\begin{proof}
Imitate the proof of Lemma~\ref{719E}, substituting Lemma~\ref{719G}
for Theorem~\ref{719B} and Lemma~\ref{719H} for Lemma~\ref{719D}.
\end{proof}
We now obtain our second uniformization theorem. This is the theorem discussed in the
introduction, where $I_0$ refers to the theory $\haw + \ac + \ipwef$.
\begin{theorem}\label{719J}
Let $\forall x\, \exists y A(x,y)$ be a sentence of $\lang(\haw)$ in $\Gamma_1$. If
\[
\haw + \ac + \ipwef \vdash \forall x \exists y\, A(x,y),
\]
then
\[
\rcaw_0 \vdash \forall \seqx \, \exists \seqy \, \forall n\, A(x_n,y_n).
\]
Furthermore, if $x$ and $y$ are both type $1$
\textup{(}set\textup{)} variables, and the formula $\forall x\, \exists y
A(x,y)$ is in $\lang ( \rca_0 )$, then $\rcaw_0$ may be replaced
by $\rca_0$ in the implication.
\end{theorem}
The proof is parallel to that of Theorem~\ref{719F}, which did not make
use of induction or recursors on higher types. Theorem~\ref{consrcao}
serves as the conservation result to prove the final claim.
\section{Unprovability results}\label{sec4}
We now demonstrate several theorems of core mathematics which are
provable in $\RCAo$ but have sequential versions that are not provable
in $\rca$. In light of Theorem~\ref{719F}, such theorems are not
provable in $\hawfi + \ac + \ipwef$. Where possible, we carry out
proofs using restricted induction, as this gives additional
information on the proof-theoretic strength of the principles being
studied. The terminology in the following theorem is well known; we
give formal definitions as needed later in the section.
\begin{theorem}\label{thm1}
Each of the following statements is provable in\/ $\RCAo$
but not provable in\/ $\hawfi + \ac + \ipwef$.
\begin{enumerate}
\item Every $2 \times 2$ matrix has a Jordan
decomposition.
\item Every quickly converging Cauchy sequence of
rational numbers can be converted to a Dedekind cut
representing the same real number.
\item Every enumerated filter on a countable poset can be
extended to an unbounded enumerated filter.
\end{enumerate}
\end{theorem}
There are many
other statements that are provable in $\RCAo$ but not
$\hawfi + \ac + \ipwef$; we have chosen these three to
illustrate what we believe to be the ubiquity of this
phenomenon in various branches of core mathematics.
We will show that each of the statements (\ref{thm1}.1)--(\ref{thm1}.3)
is unprovable in $\hawfi + \ac + \ipwef$ by noting that each statement is
in $\Gamma_1$ and showing that the sequential form of each statement
implies a strong comprehension axiom over $\RCAo$. Because these
strong comprehension axioms are not provable even with the added
induction strength of $\rca$, we may apply Theorem~\ref{719F} to
obtain the desired results. The stronger comprehension axioms include
weak K\"onig's lemma and the arithmetical comprehension scheme, which
are discussed thoroughly by Simpson~\cite{Simpson-SOSOA}.
We begin with statement (\ref{thm1}.1). We consider only
finite square matrices whose entries are complex numbers
represented by quickly converging Cauchy sequences. In
$\RCAo$, we say that a matrix $M$ \define{has a
Jordan decomposition} if there are matrices $(U, J)$ such
that $M = U J U^{-1}$ and $J$ is a matrix consisting of
Jordan blocks. We call $J$ the \textit{Jordan canonical
form} of $M$. The fundamental definitions and theorems
regarding the Jordan canonical form
are presented by Halmos~\cite{Halmos-FDVS}*{Section~58}.
Careful formalization of (\ref{thm1}.1) shows that this principle can
be expressed by a $\Pi^1_2$ formula in $\Gamma_1$; the key point is
that the assumptions on $M$, $U$, $J$, and $U^{-1}$ can be
expressed using only equality of
real numbers, which requires only universal
quantification.
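For instance, with the usual convention that a real number is coded by a quickly converging Cauchy sequence $\langle q_k \mid k \in \setN \rangle$ of rationals, equality of the reals coded by $\langle q_k \rangle$ and $\langle q'_k \rangle$ is expressed (as in Simpson~\cite{Simpson-SOSOA}) by the universal formula
\begin{equation*}
\forall k\, \bigl( |q_k - q'_k| \leq 2^{-k+1} \bigr),
\end{equation*}
so each entry-wise condition in $M = U J U^{-1}$ is expressed by a universal formula.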
\begin{lemma}\label{s3l1}
$\RCAo$ proves that every $2 \times 2$ matrix has a
Jordan decomposition.
\end{lemma}
\begin{proof}
Let $M$ be a $2 \times 2$ matrix. $\RCAo$ proves that the
eigenvalues of $M$ exist and that for each eigenvalue
there is an eigenvector. (Compare Exercise~II.4.11
of Simpson~\cite{Simpson-SOSOA}, which notes that the
basics of linear algebra, including fundamental properties
of Gaussian elimination, are provable in $\RCAo$.) If the
eigenvalues of $M$ are distinct, then the Jordan
decomposition is trivial to compute from the eigenvalues
and eigenvectors. If there is a unique eigenvalue and
there are two linearly independent eigenvectors then the
Jordan decomposition is similarly trivial to compute.
Suppose that $M$ has a unique eigenvalue $\lambda$ but not
two linearly independent eigenvectors. Let $u$ be any
eigenvector and let $\{u,v\}$ be a basis. It follows that
$(M - \lambda I)v = au + bv$ is nonzero. Now $(M -
\lambda I)(au + bv) = b(M-\lambda I)v$, because $u$ is an
eigenvector of $M$ with eigenvalue $\lambda$. This shows
that $(M - \lambda I)$ has eigenvalue~$b$; since $\lambda$ is the
unique eigenvalue of $M$, the only eigenvalue of $M - \lambda I$
is $0$, so $b = 0$, that is, $(M - \lambda I)v$ is a
scalar multiple of $u$. Thus $\{u,v\}$ is a chain of
generalized eigenvectors of $M$; the Jordan decomposition can be
computed directly from this chain.
\end{proof}
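The case analysis above is effectively algorithmic once the eigenvalue data
are given exactly. The following sketch (an informal illustration only, not
part of the formal development) mirrors the proof for a $2\times 2$ complex
matrix; the helper \texttt{eigvec}, the tolerance parameter, and the use of
floating-point arithmetic are our own choices, and the numerical test for
coincidence of the eigenvalues is precisely the case distinction that is not
effective when the entries are given only as approximations.
\begin{verbatim}
import numpy as np

def eigvec(N, tol=1e-12):
    """A nonzero vector in the kernel of a singular 2x2 matrix N."""
    r = N[0] if np.linalg.norm(N[0]) > tol else N[1]
    if np.linalg.norm(r) < tol:               # N = 0: every vector works
        return np.array([1.0, 0.0], dtype=complex)
    return np.array([-r[1], r[0]], dtype=complex)

def jordan_2x2(M, tol=1e-12):
    """Return (U, J) with M = U J U^{-1}, following the case split of the lemma."""
    a, b, c, d = M[0, 0], M[0, 1], M[1, 0], M[1, 1]
    tr, det = a + d, a * d - b * c
    disc = np.sqrt(complex(tr * tr - 4 * det))
    lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
    if abs(lam1 - lam2) > tol:                # distinct eigenvalues: diagonalizable
        U = np.column_stack([eigvec(M - lam1 * np.eye(2)),
                             eigvec(M - lam2 * np.eye(2))])
        J = np.diag([lam1, lam2])
    else:                                     # a unique eigenvalue lam
        lam = (lam1 + lam2) / 2
        N = M - lam * np.eye(2)
        if np.allclose(N, 0, atol=tol):       # two independent eigenvectors
            U, J = np.eye(2, dtype=complex), lam * np.eye(2, dtype=complex)
        else:                                 # chain {u, v} of generalized eigenvectors
            v = next(e for e in np.eye(2) if not np.allclose(N @ e, 0, atol=tol))
            u = N @ v                         # u is an eigenvector: N u = N^2 v = 0
            U = np.column_stack([u, v])
            J = np.array([[lam, 1], [0, lam]], dtype=complex)
    return U, J
\end{verbatim}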
It is not difficult to see that the previous proof makes
use of the law of the excluded middle.
\begin{remark}
Proofs similar to that of Lemma~\ref{s3l1}
can be used to show that for each standard natural number
$n$ the principle that every $n \times n$ matrix has a
Jordan decomposition is provable in $\RCAo$. We do not
know whether the principle that every finite matrix has a
Jordan decomposition is provable in~$\RCAo$.
\end{remark}
The next lemma is foreshadowed by previous research. It is well known
that the function that sends a matrix to its Jordan decomposition is
discontinuous. Kohlenbach~\cite{Koh-HORM} has shown that, over the
extension $\RCAo^\omega$ of $\RCAo$ to all finite types, the existence
of a higher-type object encoding a non-sequentially-continuous
real-valued function implies the principle $\exists^2$. In turn,
$\rcaw + \exists^2$ proves every instance of the arithmetical
comprehension scheme.
\begin{lemma}
The following principle implies arithmetical comprehension over\/ $\RCAo$
\textup{(}and hence over $\rca$\textup{)}. For every
sequence $\langle M_i \mid i \in \setN\rangle$ of $2
\times 2$ real matrices, such that each matrix $M_i$ has
only real eigenvalues, there are sequences $\langle U_i
\mid i \in \setN \rangle$ and $\langle J_i \mid i \in
\setN \rangle$ such that $(U_i,J_i)$ is a Jordan
decomposition of $M_i$ for all $i \in \setN$.
\end{lemma}
\begin{proof}
We first demonstrate a concrete example of the
discontinuity of the Jordan form. For any real $z$, let
$M(z)$ denote the matrix
\[
M(z) = \begin{pmatrix}1 & 0 \\
z & 1 \end{pmatrix}.\]
The matrix $M(0)$ is the identity matrix, and so is its Jordan
canonical form.
If $z \not = 0$ then $M(z)$ has the following Jordan decomposition:
\[
M(z) =
\begin{pmatrix}
1 & 0 \\
z & 1
\end{pmatrix}
=
\begin{pmatrix}
0 & 1\\
z & 0
\end{pmatrix}
\begin{pmatrix}
1 & 1 \\
0 & 1
\end{pmatrix}
\begin{pmatrix}
0 & 1\\
z & 0
\end{pmatrix}
^{-1}.
\]
The crucial fact is that the entry in the upper-right-hand
corner of the Jordan canonical form of $M(z)$ is $0$ if $z =
0$ and $1$ if $z \not = 0$.
Let $h$ be an arbitrary function from $\setN$ to $\setN$.
We will assume the principle of the theorem and show that
the range of $h$ exists; this is sufficient to establish the desired result.
It is well known that
$\mathsf{RCA}_0$ can construct a function $n \mapsto z_n$
that assigns each $n$ a quickly converging Cauchy sequence
$z_n$ such that, for all $n$, $z_n = 0$ if and only if $n$ is
not in the range of $h$. Form a sequence of matrices
$\langle M(z_n) \mid n \in \setN\rangle$; according to the
principle, there is an associated sequence of Jordan
canonical forms. The upper-right-hand entry of each of
these canonical forms is either $0$ or $1$, and it is possible
to effectively decide between these two cases. Thus, in
$\mathsf{RCA}_0$, we may form the range of $h$ using the
sequence of Jordan canonical forms as a parameter.
\end{proof}
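The discontinuity exploited in this proof is easy to observe with the
\texttt{jordan\_2x2} sketch given after Lemma~\ref{s3l1} (again purely as an
illustration; with floating-point input, the comparison against the fixed
tolerance plays the role of an oracle for ``$z=0$ or not''):
\begin{verbatim}
# Upper-right entry of the Jordan form of M(z): 0 at z = 0, and 1 for z != 0.
for z in [0.0, 1e-9, 1e-3, 1.0]:
    M = np.array([[1.0, 0.0], [z, 1.0]], dtype=complex)
    _, J = jordan_2x2(M)
    print(z, J[0, 1].real)
\end{verbatim}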
We now turn to statement (\ref{thm1}.2). Recall that the
standard formalization of the real numbers in $\RCAo$, as
described by Simpson~\cite{Simpson-SOSOA}, makes use of
quickly converging Cauchy sequences of rationals.
Alternative formalizations of the real numbers may be
considered, however. We define a \textit{Dedekind cut} to
be a subset $Y$ of the rational numbers such that both $Y$ and $\setQ \setminus Y$
are nonempty, and if $p \in Y$ and $q < p$ then
$q \in Y$. We say that a Dedekind cut $Y$ is
\textit{equivalent} to a quickly converging Cauchy sequence
$\langle a_i \mid i \in \setN\rangle$ if and only if the
equivalence
\[
q \in Y \Leftrightarrow q \leq \lim_{i\rightarrow \infty} a_i
\]
holds for every rational number $q$. Formalization of (\ref{thm1}.2) shows that
it is in $\Gamma_1$.
Hirst~\cite{Hirst-RRRM} has proved the following results
that relate Cauchy sequences with Dedekind cuts.
Together with Theorem~\ref{719F}, these results show that statement
(\ref{thm1}.2) is provable in $\RCAo$ but not $\hawfi + \ac + \ipwef$.
\begin{lemma}[Hirst~\cite{Hirst-RRRM}*{Corollary 4}] The
following is provable in $\RCAo$. For each quickly
converging Cauchy sequence $x$ there is an equivalent
Dedekind cut.
\end{lemma}
\begin{lemma}[Hirst~\cite{Hirst-RRRM}*{Corollary~9}]
The following principle is equivalent to weak K\"onig's lemma over
$\RCAo$ \textup{(}and hence over $\rca$\textup{)}.
For each sequence $\langle X_i \mid i \in \setN\rangle$ of
quickly converging Cauchy sequences there is a sequence
$\langle Y_i \mid i \in \setN\rangle$ of Dedekind cuts such that $X_i$ is
equivalent to $Y_i$ for each $i \in \setN$.
\end{lemma}
Statement (\ref{thm1}.3), which is our final application of
Theorem~\ref{719F}, is related to countable posets. In $\RCAo$, we
define a \textit{countable poset} to be a set $P \subseteq \setN$ with
a coded binary relation $\preceq$ that is reflexive, antisymmetric,
and transitive. A function $f \colon \setN \rightarrow P$ is called
an \textit{enumerated filter} if for every $i,j \in \setN$ there is a
$k \in \setN$ such that $f(k) \preceq f(i)$ and $f(k) \preceq f(j)$,
and for every $q \in P$ if there is an $i \in \setN$ such that $f(i)
\preceq q$ then there is a $k \in \setN$ such that $f(k) = q$. An
enumerated filter is called \textit{unbounded} if there is no $q \in
P$ such that $q \prec f(i)$ for all $i \in \setN$. An enumerated
filter $f$ \textit{extends} a filter $g$ if the range of $g$ (viewed
as a function) is a subset of the range of~$f$. If we modify the
usual definition of an enumerated filter to include an auxiliary
function $h\colon \setN^2 \to \setN$ such that for all $i$ and $j$,
$f(h(i,j))\preceq f(i)$ and $f(h(i,j))\preceq f(j)$, then
(\ref{thm1}.3) is in $\Gamma_1$.
Mummert has proved the following two lemmas about extending
filters to unbounded filters (see Lempp and Mummert~\cite{LM-FCP} and the
remarks after Lemma~4.1.1 of Mummert~\cite{Mummert-Thesis}). These
lemmas show that (\ref{thm1}.3) is provable in $\RCAo$ but
not $\hawfi + \ac + \ipwef$.
\begin{lemma}[Lempp and Mummert~\cite{LM-FCP}*{Theorem~3.5}]
$\RCAo$ proves that any enumerated filter on a countable
poset can be extended to an unbounded enumerated filter.
\end{lemma}
\begin{lemma}[Lempp and Mummert~\cite{LM-FCP}*{Theorem~3.6}]
The following statement is equivalent to arithmetical
comprehension over $\RCAo$ \textup{(}and
hence over $\rca$\textup{)}. Given a sequence $\langle P_i \mid i
\in \setN \rangle$ of countable posets and a sequence
$\langle f_i \mid i \in \setN\rangle$ such that $f_i$ is
an enumerated filter on $P_i$ for each $i \in \setN$,
there is a sequence $\langle g_i \mid i \in \setN \rangle$
such that, for each $i \in \setN$, $g_i$ is an unbounded
enumerated filter on $P_i$ extending~$f_i$.
\end{lemma}
We close this section by noting that the proof-theoretic results of
section~\ref{sec3} are proved by finitistic methods. Consequently,
constructivists might accept arguments like those presented here to
establish the non-provability of certain theorems from systems of
intuitionistic arithmetic.
\section{The {\it Dialectica} interpretation}\label{sec5}
In the proofs of section~\ref{sec3},
applications of G\"odel's {\it Dialectica} interpretation
can replace the applications of modified realizability.
One advantage of this substitution is that the constructive axiom
system can be expanded to include the scheme $\markov$, which formalizes
a restriction of the Markov principle.
This gain has associated costs. First, the class of formulas for which
the uniformization results hold is restricted from $\Gamma_1$ to
the smaller class $\Gamma_2$ defined below. Second, the independence
of premise principle $\ipwef$ is replaced with the weaker principle
$\ipwa$. Finally, the extensionality scheme~$\mathsf E$ is
replaced with a weaker rule of inference
\[
{\sf {QF{\text -}{ER}}}\colon \text{From~}A_0 \to s=_\rho t\text{~deduce~}A_0 \to r[s/x^\rho]
=_\tau r[t/x^\rho],
\]
where $A_0$ is quantifier free and $r[s/x^\rho]$ denotes the
result of replacing the variable $x$ of type $\rho$ by the
term $s$ of type $\rho$ in the term $r$ of type $\tau$.
We denote the systems based on this rule of inference as $\whaw$ and $\whawfi$.
Extended discussions of G\"odel's \textit{Dialectica}
interpretation are given by Avigad and
Feferman~\cite{AF-HPT}, Kohlenbach~\cite{Koh-book}, and
Troelstra~\cite{troelstra73}. The interpretation assigns to
each formula $A$ a formula $A^D$ of the form $\exists x
\forall y \,A_D$, where $A_D$ is quantifier free and each
quantifier may represent a block of quantifiers of the same
kind. The blocks of quantifiers in $A^D$ may include
variables of any finite type.
\begin{definition}
We follow Avigad and Feferman~\cite{AF-HPT} in defining the
\textit{Dialectica} interpretation inductively via the following six
clauses, in which $A^D = \exists x \forall y \,A_D$ and $B^D =
\exists u \forall v \,B_D$.
\begin{list}{}{}
\item [(1)] If $A$ is a prime formula then $x$ and $y$ are both empty
and $A^D = A_D = A$.
\item [(2)] $(A \land B )^D = \exists x \exists u \forall y \forall
v \,(A_D \land B_D)$.
\item [(3)] $(A \lor B )^D = \exists z \exists x \exists u \forall y
\forall v \,((z = 0 \land A_D) \lor (z=1 \land B_D))$.
\item [(4)] $(\forall z \,A (z))^D = \exists X \forall z \forall y
\,A_D (X(z) , y, z)$.
\item [(5)] $(\exists z \,A (z))^D = \exists z \exists x \forall y
\,A_D (x,y,z)$.
\item [(6)] $(A \to B)^D = \exists U \exists Y \forall x \forall v
\,(A_D (x, Y(x,v))\to B_D (U(x),v))$.
\end{list}
A negated formula $\neg A$ is treated as an abbreviation of $A \to
\bot$.
\end{definition}
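As a small orienting example (a routine computation from clauses (1), (4) and
(5), included here only for the reader's convenience), let $B_0(n,m)$ be a
prime formula. Clause (1) gives $B_0^D=B_0$ with empty quantifier blocks,
clause (5) then gives $(\exists m\, B_0(n,m))^D=\exists m\, B_0(n,m)$, and
clause (4) finally yields
\[
\big(\forall n\, \exists m\, B_0(n,m)\big)^D \;=\; \exists M\, \forall n\; B_0(n,M(n)),
\]
so the interpretation asks for an explicit functional $M$ witnessing $m$ as a
function of $n$; this is the kind of witnessing term produced by the soundness
theorem below (Theorem~\ref{722b}).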
We begin our derivation of the uniformization results with a soundness
theorem of G\"odel that is analogous to Theorem~\ref{719B}. A detailed
proof is given by Kohlenbach~\cite{Koh-book}*{Theorem 8.6}.
\begin{theorem}\label{722b}
Let $A$ be a formula in $\lang ( \whawfi )$. If
\[
\whawfi + \ac + \ipwa + \markov \vdash \forall x\, \exists y A(x,y),
\]
then $\whawfi \vdash \forall x A_D (x,t(x))$, where $t$ is a suitable term of $\whawfi$.
\end{theorem}
To prove our uniformization result, we will need to convert $A^D$ back
to~$A$. Unfortunately, $\rcaw$ can only prove $A^D \to A$ for certain
formulas. The class~$\Gamma_2$, as found in (for example) Definition
8.10 of Kohlenbach~\cite{Koh-book}, is a subset of these formulas.
\begin{definition}
$\Gamma_2$ is the collection of formulas in $\lang ( \whawfi )$
defined inductively as follows.
\begin{list}{}{}
\item [(1)] All prime formulas are elements of $\Gamma_2$.
\item [(2)] If $A$ and $B$ are in $\Gamma_2$, then so are $A \land B$,
$A\lor B$, $\forall x A$, and $\exists x A$.
\item [(3)] If $A$ is purely universal and $B \in \Gamma_2$, then
$(\exists x A \to B) \in \Gamma_2$, where $\exists x$ may represent
a block of existential quantifiers.
\end{list}
\end{definition}
Kohlenbach~\cite{Koh-book}*{Lemma 8.11} states the following result for
$\whawfi$. Since $\rcaw$ is an extension of $\whawfi$, this suffices for the proof
of the uniformization result, where it acts as an analog of Lemma~\ref{719D}.
\begin{lemma}\label{722d}
Let $A$ be a formula of $\lang(\whawfi)$ in $\Gamma_2$. Then\/ $\whawfi \vdash A^D \to A$.
This result also holds for $\whaw$ for formulas in $\lang(\whaw)$.
\end{lemma}
\begin{proof}
The proof is carried out by an external induction on formula complexity with
cases based on the clauses in the definition of $\Gamma_2$. For details,
see the proof of part~(iii) of~Lemma~3.6.5 in Troelstra \cite{troelstra73}.
The proof of each clause depends only on the definition of the \textit{Dialectica}
interpretation and intuitionistic predicate calculus. Consequently, the same
argument can be carried out in $\whaw$.
\end{proof}
We can adapt our proof of Lemma~\ref{719E} to obtain the following
term extraction result.
\begin{lemma}\label{722e}
Let\/ $\forall x^\rho \exists y^\tau A(x,y)$ be a sentence
of $\lang(\whawfi)$
in $\Gamma_2$ with arbitrary types
$\rho$ and~$\tau$. If\/
$
\whawfi + \ac + \ipwa + \markov \vdash \forall x^\rho \exists y^\tau A(x,y),
$
then $\rcaw \vdash \forall x^\rho A(x, t(x))$, where $t$
is a suitable term of $\whawfi$.
\end{lemma}
Substituting Lemma \ref{722e} for the use of Lemma \ref{719E} in
the proof of Theorem~\ref{719F}, we obtain a proof of the
{\it Dialectica} version of our uniformization result.
\begin{theorem}\label{722f}
Let\/ $\forall x \exists y A(x,y)$ be a sentence
of $\lang(\whawfi)$ in $\Gamma_2$. If
\[
\whawfi + \ac + \ipwa + \markov \vdash \forall x \exists y\, A(x,y),
\]
then
\[
\rcaw \vdash \forall \seqx \, \exists \seqy \forall n\, A(x_n, y_n).
\] Furthermore, if $x$ and $y$ are both type $1$
\textup{(}set\textup{)} variables, and $\forall x \exists y
A(x,y)$ is in $\lang ( \rca )$, then\/ $\rcaw$ may be
replaced by\/ $\rca$ in the implication.
\end{theorem}
As was the case in section~\ref{sec3}, these results can
be recast in settings with restricted induction. As noted by
Kohlenbach \cite{Koh-book}*{section 8.3}, Theorem \ref{722b}
also holds with $\whawfi$ replaced by $\whaw$. Applying the
restricted-induction version of Lemma \ref{722d} leads to the
restricted form of Lemma \ref{722e}. Combining this with
the conservation result for $\rcaw _0$ over $\RCAo$ (Theorem~\ref{consrcao}) leads to
a proof of the following version of Theorem \ref{722f}.
\begin{theorem}\label{restrdialectica}
Let\/ $\forall x \exists y A(x,y)$ be a sentence of $\lang(\whaw)$
in $\Gamma_2$. If
\[
\whaw + \ac + \ipwa + \markov \vdash \forall x\, \exists y\, A(x,y),
\]
then
\[
\rcaw_0 \vdash \forall \seqx \, \exists \seqy \forall n\, A(x_n, y_n).
\] Furthermore, if $x$ and $y$ are both type $1$
\textup{(}set\textup{)} variables, and $\forall x \exists y
A(x,y)$ is in $\lang ( \RCAo )$, then\/ $\rcaw_0$ may be
replaced by\/ $\RCAo$ in the implication.
\end{theorem}
Uniformization results obtained by the {\it Dialectica} interpretation
are less broadly applicable than those obtained by modified
realizability, due to the fact that $\Gamma_2$ is a proper subset of
$\Gamma_1$. In practice, however, the restriction to $\Gamma_2$ may
not be such a serious impediment. Examination of the statements in
Theorem \ref{thm1} shows that the hypotheses in their implications are
purely universal, and consequently each of the statements is in
$\Gamma_2$. Thus an application of Theorem~\ref{722f} shows that
Theorem~\ref{thm1} holds with $\hawfi + \ac + \ipwef$ replaced by
$\whawfi + \ac + \ipwa + \markov$.
While $\Gamma_2$ may not be the largest class of formulas for which an
analog of Theorem~\ref{restrdialectica} can be obtained, any class
substituted for $\Gamma_2$ must omit a substantial collection of
formulas. For example, imitating the proof of
Kohlenbach~\cite{Koh-goodman}, working in $\whaw + \ac$ one can deduce
the $\Pi^0_n$ collection schemes, also known as ${\sf B} \Pi^0_n$.
These schemes contain formulas that are not provable in $\RCAo$,
and any class of
formulas for which Theorem \ref{restrdialectica} holds must omit such
formulas. The same observation holds for Theorem \ref{719J}.
\bibliographystyle{asl}
\begin{bibsection}
\begin{biblist}
\bib{AF-HPT}{article}{
author={Avigad, Jeremy},
author={Feferman, Solomon},
title={G\"odel's functional \textup{(}\!``Dialectica''\textup{)} interpretation},
conference={
title={Handbook of proof theory},
},
book={
series={Stud. Logic Found. Math.},
volume={137},
publisher={North-Holland},
place={Amsterdam},
},
date={1998},
pages={337--405},
review={\MR{1640329 (2000b:03204)}},
}
\bib{Feferman-1977}{article}{
author={Feferman, S. },
title={Theories of finite type related to mathematical practice},
conference={
title={Handbook of mathematical logic},
},
book={
publisher={North-Holland},
place={Amsterdam},
},
date={1977},
pages={913--971},
}
\bib{Halmos-FDVS}{book}{
author={Halmos, Paul R.},
title={Finite-dimensional vector spaces},
series={The University Series in Undergraduate Mathematics},
note={2nd ed},
publisher={D. Van Nostrand Co., Inc., Princeton-Toronto-New York-London},
date={1958},
pages={viii+200},
review={\MR{0089819 (19,725b)}},
}
\bib{Hirst-RRRM}{article}{
author={Hirst, Jeffry L.},
title={Representations of reals in reverse mathematics},
journal={Bull. Pol. Acad. Sci. Math.},
volume={55},
date={2007},
number={4},
pages={303--316},
issn={0239-7269},
review={\MR{2369116}},
}
\bib{Kleene}{article}{
author={Kleene, S. C.},
title={Countable functionals},
conference={
title={Constructivity in mathematics: Proceedings of the colloquium
held at Amsterdam, 1957 (edited by A. Heyting)},
},
book={
series={Studies in Logic and the Foundations of Mathematics},
publisher={North-Holland Publishing Co.},
place={Amsterdam},
},
date={1959},
pages={81--100},
review={\MR{0112837 (22 \#3686)}},
}
\bib{Koh-goodman}{article}{
author={Kohlenbach, Ulrich},
title={A note on Goodman's theorem},
journal={Studia Logica},
volume={63},
date={1999},
number={1},
pages={1--5},
issn={0039-3215},
review={\MR{1742380 (2000m:03150)}},
}
\bib{Koh-HORM}{article}{
author={Kohlenbach, Ulrich},
title={Higher order reverse mathematics},
conference={
title={Reverse mathematics 2001},
},
book={
series={Lect. Notes Log.},
volume={21},
publisher={Assoc. Symbol. Logic},
place={La Jolla, CA},
},
date={2005},
pages={281--295},
review={\MR{2185441 (2006f:03109)}},
}
\bib{Koh-book}{book}{
author={Kohlenbach, Ulrich},
title={Applied proof theory: proof interpretations and their use in
mathematics},
series={Springer Monographs in Mathematics},
publisher={Springer-Verlag},
place={Berlin},
date={2008},
pages={xx+532},
isbn={978-3-540-77532-4},
review={\MR{2445721 (2009k:03003)}},
}
\bib{KR}{article}{
author={Kreisel, Georg},
title={Interpretation of analysis by means of constructive functionals of
finite types},
conference={
title={Constructivity in mathematics: Proceedings of the colloquium
held at Amsterdam, 1957 (edited by A. Heyting)},
},
book={
series={Studies in Logic and the Foundations of Mathematics},
publisher={North-Holland Publishing Co.},
place={Amsterdam},
},
date={1959},
pages={101--128},
review={\MR{0106838 (21 \#5568)}},
}
\bib{LM-FCP}{article}{
author={Lempp, Steffen},
author={Mummert, Carl},
title={Filters on computable posets},
journal={Notre Dame J. Formal Logic},
volume={47},
date={2006},
number={4},
pages={479--485},
issn={0029-4527},
review={\MR{2272083 (2007j:03084)}},
}
\bib{Mummert-Thesis}{thesis}{
author={Mummert, Carl},
title={On the reverse mathematics of general topology},
organization={The Pennsylvania State University},
type={Ph.D. Thesis},
date={2005},
}
\bib{Simpson-SOSOA}{book}{
author={Simpson, Stephen G.},
title={Subsystems of second order arithmetic},
series={Perspectives in Mathematical Logic},
publisher={Springer-Verlag},
place={Berlin},
date={1999},
pages={xiv+445},
isbn={3-540-64882-8},
review={\MR{1723993 (2001i:03126)}},
}
\bib{troelstra73}{book}{
title={Metamathematical investigation of intuitionistic arithmetic and
analysis},
series={Lecture Notes in Mathematics, Vol. 344},
editor={Troelstra, A. S.},
publisher={Springer-Verlag},
place={Berlin},
date={1973},
pages={xvii+485},
review={\MR{0325352 (48 \#3699)}},
}
\bib{troelstra-HP}{article}{
author={Troelstra, A. S.},
title={Realizability},
conference={
title={Handbook of proof theory},
},
book={
series={Stud. Logic Found. Math.},
volume={137},
publisher={North-Holland},
place={Amsterdam},
},
date={1998},
pages={407--473},
review={\MR{1640330 (99f:03084)}},
}
\end{biblist}
\end{bibsection}
\end{document} | 46,635 |
Pashion
By Mariah, age 15, Minnesota
Pashion [pa-shuhn] (n.) 1. An eclectic Victorian rocker-inspired fashion trend made famous by Pete Doherty. "My wardrobe requires a bit of Pashion in order to become one with Doherty." [see Doherty, Pete]
Pashion Revealed
In my opinion, Pashion is the hottest trend since sliced bread and cotton candy. I am in love with Pete Doherty's sense of (or lack of) style, however you want to describe it. Just as Pete knows how to put on a good show, he can also assemble a nice wardrobe full of various "dirty pretty things", as he becomes a style icon for the dirty glam approach on haute couture.
The Infamous Trilby
The trilby hat holds a special place in my heart as an utterly delectable accessory when paired with the right ensemble of clothing. Vastly mistaken for its predecessor, the fedora, the trilby hat has a deeply indented crown with a narrow brim and a pinch at the front, whereas a fedora has a longer brim and the back of a fedora is less sharply upturned.
Exhibit A: A Trilby
Exhibit B: A Fedora
A Crimson Masterpiece
The crimson scarf Pete is often seen wearing also has the ability to tie an outfit together, especially when paired with a trilby hat. Nothing breathes Pashion more than a striking crimson scarf.
The Cardigan
The cardigan that Pete is seen wearing here looks like it could belong to a cool, hip gran'pa. The cardigan emits a pleasantly chill attitude.
The Military Cap
The military cap is standalone evidence of Pete's eclectic sense of style. The dark blue military cap will long go down in history as a Pashion staple. This cap is a necessity for pulling off this look.
Give Your Wardrobe A Bi' O' Pashion
While the Pashion trend is more favored by the masculine audience, I have discovered a few key accessories that give a feminine approach to this furor. I also recommend paying a visit to vintage boutiques and thrift stores that offer a wide variety of miscellaneous embellishments.
The Trilby
Hat available for $9.00 at
The Crimson Scarf
Scarf available for $19.00 at
The Cardigan
Cardigan available for $39.99 at
The Jeans
Jeans available for $69.50 at | 397,206 |
Website Worth and Statistics for Domain: whatstheharm.net
Last update: Oct 10, 2015
Find the best content from whatstheharm.net right here. Direct fast access to whatstheharm.net. Our network of dedicated servers connects via a fast connection to any website. We do not host any files on our server. Visit whatstheharm.net. This website contains information about whatstheharm.net; the data come from various sources. whatstheharm.net is not affiliated with us in any way. We offer statistics, hosting information, server information and Google PageRank information. The Google PageRank of this site can be found below.
Whatstheharm.net's three-month global Alexa traffic rank is 860,428. Roughly 66% of visits to this site consist of only one pageview (i.e., are bounces). This site has attained a traffic rank of 451,330 among users in the US, where roughly 37% of its audience is located. Whatstheharm.net can be found in the “Opposing Views” category. The fraction of visits to the site referred by search engines is approximately 27%.
Jennifer Harris has the kind of enthusiasm about beet kvass and kimchi usually reserved for lottery winners and game show contestants. So it makes sense that she’s also the organizer of Sonoma’s largest celebration of fermentation — The Sonoma County Fermentation Festival, Sept. 2 at the Petaluma Fairgrounds.
Now in its seventh year, the event has grown from a small gathering of enthusiasts in Occidental to a true festival with 75 vendors, including beer, cider and wine tasting, cheesemakers, picklers, chocolatiers and kombuchists.
Riding a wave of probiotics and food preservation fascination, Harris’ ongoing event is an ode to the importance of microbiomes and healthy approaches to making fermented food at home. That, and a darn good spot to taste some of the most unique libations being made in Wine Country, from sake, mead and water kefir to shrubs and unfiltered ciders.
Classes and speakers at the Fermentation Festival include Jonas Ketterle of Firefly Chocolate, Wildbrine’s Rick Goldberg and Chris Glab, Karen Diggs of Krautsource, Veggie Queen Jill Nussinow and many others.
The Sonoma County Fermentation Festival runs from 11 a.m. to 5 p.m. Pre-sale tickets are $20 for general admission and $40 for VIP admission, which includes access to the coveted Libation Lounge (21+ only). The event is family-friendly, and kids under 16 are free. Details online at fermentfestival.com.
Can’t make it? Jennifer hosts monthly classes at the Sebastopol Grange Hall.
Photo of Jennifer Harris, courtesy of spoiled to perfection
The Sword in the Stone Clip Art 2
with images of King Arthur/Wart, Merlin, Archimedes, Sir Ector, Squirrel and Madam Mim as a dragon
Last updated on November 1st 2014.
Notice: The following images were drawn (by tracing) and/or colored and clipped by Disneyclips.com. They are meant for non-profit use only, and are not to be displayed elsewhere online without a visible source link. Please read the Terms of Use.
Land Rover Eden Prairie Takes 2003 TReK Title
FT. GARLAND, Colo., Aug. 11, 2003 -- Land Rover Eden Prairie of Eden Prairie, Minnesota has won the Land Rover TReK 2003. This triumphant three-man team bested thirty-eight other North American Land Rover retailer teams in the grueling off-road competition recently held at Forbes Trinchera Ranch, Colorado.
The Minnesota-based automotive retail team included Wayne Pisinski, Ross Corey and Ryan Krause. Their determination, teamwork and consistency through their regional selection trial and the National finals, proved to be the winning formula for scores in orienteering, kayaking and various off-road test drives. Of a possible 700 points in the final competition, Land Rover Eden Prairie scored a total of 630 points.
"The competition was very close and the point spread was very narrow," said Bob Burns, national training and development manager Land Rover North America. "To win, you can't just be a rocket scientist, or you can't just be a super-athlete, but when you combine the two, you have Land Rover Eden Prairie."
In North America, Land Rover's training unit, Land Rover University, created "TReK", an event that proves to be the ultimate team-building and brand awareness exercise including events such as orienteering, autocross, kayaking regatta, running, biking and an English trials course.
The main event was an off-road obstacle course called "TReK Test Drive" that required the teams to maneuver their 2004 TReK Discoverys through off-road situations such as a "pivot bridge" obstacle and a 45-degree angle final winch ascent.
The 2004 Discovery model used for TReK 2003 features a 4.6-liter engine with manually locking center differential. Burns and his crew feel it's the most capable Discovery ever built, and the terrain in Colorado was a perfect place to showcase the full extent of its capability.
"TReK is a unique motivational competition that gives Land Rover an edge in the marketplace," said Sally Eastwood, vice president, marketing Land Rover North America. "Competitors return to their retail facilities with the enthusiasm and spirit of adventure inherent in Land Rover brand values."
"The goal of TReK is not only to highlight the capabilities of Land Rover vehicles, but also to build excitement at the retailer level, which translates to our customers through the sales and service experience," added Eastwood.
Second place honors go to Land Rover Denver South with a score of 580 and Land Rover Buckhead with a score of 560 points. Fourth, fifth and sixth place go to Land Rover Naperville, The AutoMaster, and Land Rover of Richmond respectively.
In addition, the Canadian team from British Columbia, Land Rover of Richmond, takes home the "Spirit Award" for continuous encouragement throughout the competition.
Vehicle specifications and features are subject to change. For the latest Land Rover pricing and product information, contact Land Rover North America Corporate Communications at (949) 341-6800.
NOTE TO EDITORS: Go to for news releases and high-resolution photographs. | 291,907 |
5400 Atlanta Hwy
Montgomery, AL 36109
334-270-3050
4260 Carmichael Ct N Montgomery, AL
4121 Carmichael Rd Ste 100 Montgomery, AL
The employees are so friendly at this bank. Whenever I have a question, they are quick to reassure me and give me the correct information I need. Their services are excellent.
\begin{document}
\begin{center}{\Large \textbf{
Monitoring continuous spectrum observables: the strong measurement limit
}}\end{center}
\begin{center}
M. Bauer\textsuperscript{$\spadesuit,\diamondsuit$},
D. Bernard\textsuperscript{$\clubsuit$*},
T. Jin\textsuperscript{$\clubsuit$}
\end{center}
\begin{center}
{\bf $\spadesuit$} Institut de Physique Th\'eorique de Saclay, CEA-Saclay $\&$ CNRS, 91191 Gif-sur-Yvette, France.
\\
{\bf $\diamondsuit$} D\'epartement de math\'ematiques et applications, \'Ecole Normale Sup\'erieure, PSL Research University, 75005 Paris, France
\\
{\bf $\clubsuit$} Laboratoire de Physique Th\'eorique de l'\'Ecole Normale Sup\'erieure de Paris, CNRS, ENS, PSL University $\&$ Sorbonne Universit\'e, France.
\\
* [email protected]
\end{center}
\begin{center}
\today
\end{center}
\section*{Abstract}
{\bf
We revisit aspects of monitoring observables with continuous spectrum in a quantum system subject to dissipative (Lindbladian) or conservative (Hamiltonian) evolutions. After recalling some of the salient features of the case of pure monitoring, we deal with the case when monitoring is in competition with a Lindbladian evolution. We show that the strong measurement limit leads to a diffusion on the spectrum of the observable. For the case with competition between observation and Hamiltonian dynamics, we exhibit a scaling limit in which the crossover between the classical regime and a diffusive regime can be analyzed in details.
}
\vspace{10pt}
\noindent\rule{\textwidth}{1pt}
\tableofcontents\thispagestyle{fancy}
\noindent\rule{\textwidth}{1pt}
\vspace{10pt}
\section{Introduction}
Upon monitoring a quantum system, one progressively extracts random information and back-acts randomly on the system \cite{books}. There are of course various ways to monitor a quantum system \cite{Haroche}. Let us assume that the observation process is such that if the monitoring is strong enough the result of these back-actions is to project the system state close to one of the eigenstates of the monitored observable. Then, if the monitoring process and the system dynamics are compatible -- that is, if the flows they induce are commuting -- the net effect of the combined system dynamics plus monitoring is this projection mechanism. However, if the system dynamics is not compatible with the monitoring process, the resulting dynamical evolution is more subtle and depends on the relative competition between these two processes.
In the case of finite-dimensional systems, the effective dynamics is simple enough. The observable eigenstates are well-defined normalisable states, often called pointer states \cite{Zurek-point}. In the absence of system dynamics, strongly monitoring the system results in non-linearly projecting its state onto one of the pointer states according to Born's rules. This is usually referred to as non-demolition measurement \cite{Haroche,qnd}. Adding a non-trivial system dynamics, be it coherent or dissipative, induces quantum jumps between pointer states \cite{Jumps}. These jumps occur randomly and are governed by a Markov chain whose transition rates depend on the system dynamics and the nature of the monitored observable \cite{Markov}. These jump processes are dressed by quantum spikes but we shall here neglect these finer effects \cite{Qspikes}. One noticeable difference between coherent and dissipative system dynamics is that the former is Zeno frozen \cite{QZeno} under monitoring so that the transition rates of the quantum jump Markov chain asymptotically depend on the strength of the monitoring process while they do not in the latter case.
The picture is a priori more complicated in the case of the monitoring of an observable with continuous spectrum \cite{Old-classic}, because then the observable eigenstates are not normalisable -- they are generalised eigenstates --, so that the result of the monitoring process cannot be the projection of the system state onto one of those non-normalisable states and the result of the combined system plus monitoring dynamics cannot be a simple Markov chain on those generalised eigenstates.
The dynamical process and the statistical aspects involved in monitoring an observable with continuous spectrum has recently been precisely described in ref.\cite{Juerg-et-al}, in the absence of internal dynamics. The aim of this paper is to show that, at least for some classes of system dynamics, one can present a precise formulation of the resulting stochastic processes describing the combined system plus observation dynamics in the limit of strong monitoring of an observable with continuous spectrum. Naively, we may expect that those processes are some kind of diffusion processes on the spectrum of the monitored observable. The aim of this paper is to make this statement more precise. To fix the framework, we shall assume that the monitored observable is the position observable for a massive particle on flat space, but the generalisation to other observables is clear. We shall deal with dissipative and hamiltonian dynamics. In both the Lindbladian and the Hamiltonian case we start from an exact treatment of a simple model (related in both cases, though for different reasons, to an harmonic oscillator) and then extend our results to more general situations.
The case of dissipative dynamics is dealt with in Section \ref{sec:dissip}. There we shall prove that, in the limit of strong monitoring of the position observable, the resulting processes are driven by stochastic differential equations on the position space, if the Lindblad operators generating the system dynamics are quadratic in the momentum operators. In other words, under this hypothesis, quantum trajectories are mapped into stochastic trajectories -- a kind of quantum to classical transition induced by monitoring.
The case of hamiltonian dynamics is dealt with in Section \ref{sec:hamilton}. There we take a look at the well-documented problem of a quantum massive particle in a smooth potential whose position is monitored continuously \cite{Old-classic, Many}. As previously analysed \cite{Many,Kolokol,Bassi,BDK}, in such a case the resulting dynamics possesses three different regimes: (a) a collapse regime in which the wave function localizes in space in agreement with non-demolition measurement, (b) a classical regime in which the localized wave function moves in space according to classical dynamics, and (c) a diffusive regime in which the wave function diffuses randomly and significantly. However, as pointed out in ref.\cite{BDK}, ``it is not an easy task to spell out rigorously these regimes and their properties". The aim of this section is to argue that we may define a double scaling limit which maps quantum trajectories onto solutions of a Langevin equation describing precisely enough the cross-over from the classical regime to the diffusive regime.
How to formulate position monitoring in quantum mechanics and what is the associated notion of quantum trajectories is recalled in Section \ref{sec:qnd}. This Section is also used to describe fine statistical aspects of position monitoring. In particular we explain how one may adopt two different points of view to describe those statistical properties depending on the information one has access to. Some mathematical results related to this last point -- i.e. related to possible changes of filtrations in observing the monitoring process -- are given in the Appendix.
\section{QND measurement of a continuous spectrum observable.}
\label{sec:qnd}
The general rules of quantum mechanics (including the measurement postulates) enable to show that the evolution of the density matrix under continuous monitoring of the observable $X$ (later we shall take $X$ to be the position observable, hence the name) but in absence of any internal evolution read \cite{Old-classic} (see the lecture notes \cite{lecture} for a simple derivation):
\beqa \label{eq:QND-lind}
d\rho_t &=& -\frac{\gamma^2}{2} [X, [X,\rho_t]] dt + \gamma \big( X\rho_t + \rho_t X - 2 \vev{X}_t\rho_t\big)\, dW_t,\\
dS_t &=& 2 \gamma \vev{X}_t\, dt + dW_t, \nonumber
\eeqa
with $\vev{X}_t := \mathrm{Tr}(X\rho_t)$ and $W_t$ a normalized Brownian motion, with $dW_t^2=dt$, and $S_t$ the output signal. The dimensionful parameter $\gamma$ codes for the rate at which information is extracted. One may alternatively write the above evolution equation eq.(\ref{eq:QND-lind}) for the density matrix kernel $\rho_t(x,y)=\langle x|\rho_t| y \rangle$, with $|x\rangle$ the generalized eigenstate of $X$ with eigenvalue $x$, as:
\beq \label{eq:qnd-kernel}
d\rho_t(x,y) = -\frac{\gamma^2}{2} (x-y)^2\rho_t(x,y) dt + \gamma \big(( x+y - 2 \vev{X}_t)\rho_t(x,y)\big)\, dW_t,
\eeq
with $\vev{X}_t:=\int \dd x\, x\rho_t(x,x)$.
One way to derive these equations is to look at a discrete-time version where the monitoring is the effect of repeated interactions of probes with the system, with subsequent ``yes-no'' von Neumann measurement coupled to $X$ on the probes. The process $S$ is the bookkeeping device used to record the monitoring: in the discrete-time viewpoint, a ``yes'' (resp. a ``no'') for a probe measurement leads to an increase (resp. a decrease) of $S$ for the corresponding time step. Probe measurements with more than two possible outcomes lead one to introduce vectors $S$ and $W$.
The purpose of the next subsection is to exhibit two equivalent, but rather different looking, descriptions of the monitoring process. We state the main results, but the details are relegated to the appendix, together with a quick discussion of the discrete time situation which the reader should refer to for motivation.
\subsection{Diagonal evolution and probabilistic interpretation}
The point of view we develop now is the following: during the monitoring process of the observable $X$, the observer gains information little by little and is able at large times to infer a value, say $x$, that characterizes the asymptotic state of the system. It turns out that the description of the process is substantially simpler for a ``cheater'' who uses from the beginning the knowledge of the result of the measurement. Though we do not claim that this is ``the'' interpretation of the density matrix, it is convenient for the forthcoming discussion to talk about it (in particular about its diagonal with respect to the spectral decomposition of $X$) using a Bayesian vocabulary, i.e. view it as expressing the best guess for the observer with the information available for him at a certain time.
If $X$ is (as suggested by the name) the position of the particle, the ``eigenstates'' of $X$ are not normalizable, so that the matrix elements of the density matrix between those ``eigenstates'', also called pointer states, may not be well-defined: in general the diagonal of the density matrix is not a function of the position variable $x$ but a probability measure $\dd \mu_t (x)$. In terms of the density matrix kernel we have $\dd\mu_t(x)=\rho_t(x,x)\, \dd x$.
We choose to concentrate on the diagonal of the density matrix in the pointer states basis for the forthcoming discussion. Happily, the time evolution of the pair $(\dd \mu_t, S_t)_{t\geq 0}$, which is easily deduced from the general equations above, eqs.(\ref{eq:QND-lind},\ref{eq:qnd-kernel}), remains autonomous and reads~\footnote{Don't get confused between the notation $\dd$ associated to the integration of the variable $\alpha$ or $x$ and the notation $d$ related to differentiation with respect to time variable $t$.}:
\beqa
d \dd \mu_t(x) &=& 2 \gamma \left( x - \int y \, \dd \mu_t(y)\right)\dd \mu_t(x)\, dW_t, \label{eq:qm1}\\
dS_t &=& 2 \gamma \left( \int y \, \dd \mu_t(y)\right) \, dt + dW_t. \label{eq:qm2}
\eeqa
The measure $\dd \mu_0$, i.e. the diagonal of the density matrix before the monitoring starts, is assumed to be known to the experimenter (e.g. via preparation). We shall see that $\dd \mu_t$ can be interpreted as a best guess at time $t$ for the experimenter knowing only $\dd \mu_0$ and $S_u$, $u\in [0,t]$.
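For orientation, the pair of equations (\ref{eq:qm1},\ref{eq:qm2}) is easy to
integrate numerically on a discretized spectrum. The sketch below (an
illustration only; the Gaussian choice of $\dd\mu_0$, the grid and all
numerical parameters are arbitrary) uses a plain Euler scheme and exhibits the
progressive collapse of $\dd\mu_t$ onto a single value, at the rate discussed
in the next subsection.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Discretized spectrum of X and an (arbitrary) initial law d mu_0
x = np.linspace(-3.0, 3.0, 601)
mu = np.exp(-x**2 / 2.0); mu /= mu.sum()

gamma, dt, nsteps = 1.0, 4e-4, 50_000          # total time T = 20
S = 0.0
for _ in range(nsteps):
    dW = np.sqrt(dt) * rng.standard_normal()
    mean_x = np.dot(mu, x)
    S += 2.0 * gamma * mean_x * dt + dW                  # increment of the signal S_t
    mu = mu * (1.0 + 2.0 * gamma * (x - mean_x) * dW)    # Euler step for d mu_t
    mu = np.clip(mu, 0.0, None); mu /= mu.sum()          # control discretization error

var = np.dot(mu, x**2) - np.dot(mu, x)**2
print("posterior mean:", np.dot(mu, x), " posterior width:", np.sqrt(var))
# The width is of order 1/(2 gamma sqrt(T)), cf. the next subsection.
\end{verbatim}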
To simplify the notation a little bit, we set $A:=2 \gamma X$ and use $\alpha$ to denote the variables in the spectrum of $A$. The above coupled stochastic differential equations are not totally trivial to analyze. However, the following holds:
{\bf Proposition:} {\it
Let $(B_t)_{t \geq 0}$ be a Brownian motion and $A$ be an independent random variable with distribution $\dd \mu_0(\alpha)$ defined on some common probability space.\\
Let $S_t$ be the process $S_t:=B_t+At$. Let $\mathcal{G}_t:=\sigma \{A \text{ and } S_u, u\in [0,t]\}= \sigma \{A \text{ and } B_u, u\in [0,t]\}$ be the filtration describing the knowledge accumulated by knowing $A$ since $t=0$ and $B_u$ or $S_u$ for $u\in [0,t]$. Let $\mathcal{H}_t:=\sigma \{S_u, u\in [0,t]\}$ be the filtration describing the knowledge accumulated by knowing $S_u$ for $u\in [0,t]$. Then:\\
-- If $h$ is an arbitrary measurable function such that $h(A)$ is integrable then
\[\mathbb{E}[h(A) |\mathcal{H}_t]= \frac{\int \dd\mu_0(\beta) h(\beta)\, e^{\beta S_t-\beta^2 t/2}}{\int \dd\mu_0(\beta)\, e^{\beta S_t-\beta^2 t/2}}.\]
-- The process $(W_t)_{t \geq 0}$ defined by
\[W_t:=S_t- \int_0^t \mathbb{E}[A|\mathcal{H}_u] \, du\] is an $\mathcal{H}_t$-adapted Brownian motion.\\
-- The pair $(\dd \mu_t, S_t)_{t\geq 0}$ where $\mu_t$ is defined by
\beq \label{eq:qnd-diag}
\mathbb{E}[h(A) |\mathcal{H}_t]=: \int \dd \mu_t(\beta) h(\beta),\quad \text{ i.e. } \dd \mu_t(\alpha)=\frac{d\mu_0(\alpha) e^{\alpha S_t-\alpha^2 t/2}}{\int d\mu_0(\beta)e^{\beta S_t-\beta^2 t/2}}
\eeq
solves the system
\beqa
d \dd \mu_t(\alpha) &=& \left( \alpha - \int \beta \, \dd \mu_t(\beta)\right)\dd \mu_t(\alpha)\, dW_t \label{eq:ps1},\\
dS_t &=& \left( \int \beta \, \dd \mu_t(\beta)\right) \, dt + dW_t,\label{eq:ps2}
\eeqa
}
A detailed proof in given in the appendix, together with some motivations from the discrete time situation. The proof we give is a pedestrian one, based on explicit translations in Gaussian integrals. This is also the main intuition behind Girsanov's theorem, which is indeed the right general framework in which the proposition fits.
Let us now explain the meaning of this result. The important thing to realize is that the process $S_t$ can be analyzed under a number of filtrations, but its definition and properties are independent of the choice of filtration.
For instance, from the initial decomposition $S_t=B_t+At$ we infer that $\frac{S_t}{t}$ converges at large times to the random variable $A$ with probability $1$, because by the law of large numbers for Brownian motion $\frac{B_t}{t}$ converges at large times to $0$ with probability $1$. By the law of large numbers for Brownian motion again $\frac{W_t}{t}$ converges at large times to $0$ with probability $1$. Hence, the $\mathcal{H}_t$-measurable random variable $\frac{1}{t}\int_0^t \mathbb{E}[A|\mathcal{H}_u] \, du$ which, by eq.(\ref{eq:qnd-diag}), equals $\frac{1}{t}\int_0^t \left( \int \beta \, \dd \mu_u(\beta) \right) \, du$
converges at large times to the random variable $A$ with probability $1$.
Now $\mathbb{E}[A|\mathcal{H}_u]$ is, by the very definition of conditional expectations, the best $\mathcal{H}_u$-measurable approximation of $A$. Thus the connection between the definition $S_t=B_t+At$ and the system of eqs.(\ref{eq:ps1},\ref{eq:ps2}) has an interpretation in the field of statistical estimation: It is equivalent to sample $A$ at time $0$ (with $\dd \mu_0$ of course) and observe the process $(S_t)_{t\geq 0}$ (i.e. to use the filtration $\mathcal{G}_t$) or to observe only $(S_t)_{t\geq 0}$ (i.e. to use the filtration $\mathcal{H}_t$) and retrieve $A$ asymptotically using Bayesian inference. Another road to this result is to substitute $S_t=B_t+At$ in the formula for $\dd \mu_t(\alpha)$ to find that the numerator $e^{\alpha S_t-\alpha^2 t/2}$ is strongly peaked around $\alpha=A$ at large times.
The striking fact is that the systems (\ref{eq:qm1},\ref{eq:qm2}) and (\ref{eq:ps1},\ref{eq:ps2}) are the same (mutatis mutandi, with the substitution $A=2 \gamma X$). But the first system results from the application of the rules of quantum mechanics, while the second one has a purely statistical estimates content as explained above. From the point of view of quantum mechanics, the natural situation of an experimenter is that she/he observes only the result of monitoring i.e. the process $(S_t)_{t\geq 0}$ and infers/measures a more and more accurate value of $X$ when time gets large. But there is also an interpretation when a cheater measures $X$ at time $0$ (with the outcome distributed as $\dd \mu_0$ of course) and then gives the system to the experimenter. The cheater has $\mathcal{G}_t$ at its disposal, and in particular knows in advance which value for $X$ the experimenter will infer/measure after an infinite time from the sole result of monitoring.
The explicit formula for $\dd \mu_t$ as a function of $S_t$ can be quickly recovered by linearization of the system (\ref{eq:ps1},\ref{eq:ps2}), a trick that we recall below because it works in general (i.e. when the system has some intrinsic evolution while the monitoring is performed). Note also that $W_t$ is not simply the conditional expectation of $B_t$ knowing $\mathcal{H}_t$. The interested reader should consult the appendix for a detailed discussion.
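The equivalence between the two descriptions can also be checked numerically.
In the sketch below (again with an arbitrary discretized $\dd\mu_0$ and
illustrative parameters), the signal is generated as $S_t=B_t+At$ with $A$
sampled from $\mu_0$, and the posterior of eq.(\ref{eq:qnd-diag}) computed
from $S$ alone recovers $A$ at large times.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

alpha = np.linspace(-3.0, 3.0, 601)
w0 = np.exp(-alpha**2 / 2.0); w0 /= w0.sum()     # discretized d mu_0

T, n = 50.0, 5000
dt = T / n
t = np.linspace(0.0, T, n + 1)
A = rng.choice(alpha, p=w0)                      # the "cheater" samples A at t = 0
B = np.concatenate([[0.0], np.cumsum(np.sqrt(dt) * rng.standard_normal(n))])
S = B + A * t                                    # the observed signal

# Posterior: d mu_t(alpha) ~ d mu_0(alpha) exp(alpha S_t - alpha^2 t / 2)
logw = np.log(w0)[None, :] + alpha[None, :] * S[:, None] \
       - 0.5 * alpha[None, :]**2 * t[:, None]
w = np.exp(logw - logw.max(axis=1, keepdims=True))
w /= w.sum(axis=1, keepdims=True)
estimate = w @ alpha                             # E[A | H_t] on the grid

print("true A:", A, "  estimate at time T:", estimate[-1])
\end{verbatim}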
\subsection{Density matrix evolution}
Let us now give a brief description of the off-diagonal elements of the density matrix kernel $\rho_t(x,y)=\bra{x}\rho_t\ket{y}$ whose evolution is governed by eq.(\ref{eq:qnd-kernel}). These simple results will be useful in the following. As is well known \cite{Many}, the solutions of this equation are obtained by changing variables and defining an un-normalized density matrix kernel $\hat \rho_t(x,y)$, solution of the linear stochastic differential equation
\beq \label{eq:hat-rho}
d\hat \rho_t(x,y) = -\frac{\gamma^2}{2} (x-y)^2\hat \rho_t(x,y)\, dt + \gamma\, (x+y)\,\hat \rho_t(x,y)\, dS_t,
\eeq
driven by the output signal $S_t$. The normalized density kernel $\rho_t(x,y)$ is then reconstructed by setting
\[ \rho_t(x,y) = \frac{\hat \rho_t(x,y)}{Z_t} ,\]
with the normalization factor $ Z_t := \int dx\, \hat \rho_t(x,x)$ satisfying $dZ_t= 2\gamma\, \vev{X}_t\, Z_t\,d S_t$. Since $dS_t^2=dt$, the solution of eq.(\ref{eq:hat-rho}) reads
\beqs
\hat \rho_t(x,y) &=& \rho_0(x,y)\ e^{-\gamma^2t\,(x^2+y^2) + \gamma \,(x+y)\, S_t}.\\
Z_t &=& \int \dd\mu_0(x)\, e^{-2\gamma^2t\,x^2 + 2\gamma \, x\, S_t} .
\eeqs
For the diagonal elements one recovers the solution (\ref{eq:qnd-diag}) described above.
The mean position is then $2\gamma\, \vev{X}_t = (\partial_S\log Z)(S_t)$ so that $dS_t = (\partial_S\log Z)(S_t)\, dt + dW_t$.
The analysis of the previous subsection implies that the signal $S_t$ can also be decomposed as $S_t = 2\gamma\, \bar x\, t + B_t$, with $B_t$ another standard Brownian motion and $\bar x$ an independent random variable sampled with measure $\dd\mu_0(x)$. As a consequence,
\[ \hat \rho_t(x,y) = \rho_0(x,y)\, e^{2\gamma^2t\bar x^2}\ e^{-\gamma^2t\,\big((x-\bar x)^2+(y-\bar x)^2\big) +O(\gamma\sqrt{t})}, \]
and $\dd \mu_t(x)$ is a Gaussian distribution\footnote{But the kernel $\rho_t(x,y)$ is not a two-dimensional normalized Gaussian.} centred at the random position $\bar x$ with width of order $\sim 1/\sqrt{\gamma^2 t}$.
Alternatively, this tells us that under QND monitoring of the position the system state has approximately `collapsed' onto a Gaussian state of width of order $\ell$ after a time of order $1/\ell^2\gamma^2$.
The diagonal components of the density matrix kernel have a well-defined large $\gamma$ limit, in the sense that $\dd \mu_t(x)\to \delta(x-\bar x) \dd x$ in this limit, but the off-diagonal components do not. This is illustrated by the behaviour of the momentum distribution, $\vev{e^{iaP}}_t= e^{-\frac{1}{2}\gamma^2t\, a^2}$, which implies that $\vev{P^{2k}}_t\to\infty$ and $\vev{f(P)}_t\to 0$ for any function $f$ with bounded Fourier transform, at large $\gamma$, as expected from the Heisenberg principle.
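The Gaussian factor quoted above for the momentum distribution can be checked directly (we use the convention $e^{iaP}\ket{x}=\ket{x-a}$; the opposite sign convention merely exchanges $a\to -a$ and does not affect the result). Writing $\vev{e^{iaP}}_t=\int \dd x\, \rho_t(x,x-a)$ and inserting the large-$\gamma$ form of $\hat\rho_t$ displayed above, the completion of the square
\[
(x-\bar x)^2+(x-a-\bar x)^2 \,=\, 2\big(x-\bar x-\tfrac{a}{2}\big)^2+\frac{a^2}{2}
\]
shows that the Gaussian integration over $x$, once divided by $Z_t$, produces the factor $e^{-\frac{1}{2}\gamma^2 t\, a^2}$, up to a smooth, $\gamma$-independent prefactor inherited from $\rho_0$.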
\section{Monitoring continuous spectrum observables with dissipative dynamics}
\label{sec:dissip}
The aim of this section is to understand what dynamical processes emerge when strongly monitoring an observable with a continuous spectrum for a quantum system subject to a dissipative dynamics.
The simplest way to present the discussion is to consider a quantum particle on the real line, with quantum Hilbert space $\mathcal{H}={L}^2(\mathbb{R})$, and a monitoring of the position observable $X$. The (stochastic) dynamical equation for the density matrix is then
\beq \label{eq:rho-dyn}
d\rho_t = L(\rho_t)\, dt -\frac{\gamma^2}{2} [X, [X,\rho_t]] dt + \gamma \big( X\rho_t + \rho_t X - 2 \vev{X}_t\rho_t\big)\, dW_t,\eeq
with $W_t$ a normalized Brownian motion and $L$ some Lindblad operator. Alternatively, the evolution of the density kernel $\rho_t(x,y)=\bra{x}\rho_t\ket{y}$ reads
\[ d\rho_t(x,y) = L(\rho_t)(x,y)\, dt -\frac{\gamma^2}{2} (x-y)^2\rho_t(x,y) dt + \gamma \big(( x+y - 2 \vev{X}_t)\rho_t(x,y)\big)\, dW_t.\]
The aim of this section is to understand the limit of large $\gamma$.
If one wants to get a precise statement, one should not look directly at the dynamics of the limiting density kernel but at the dynamics of the measures associated to system observables induced by the density matrix. Let us recall that, given a system density matrix $\rho$, to any system observable $O=O^\dag$ is associated a measure $\mu^O$ on $Spec(O)$, the spectrum of $O$, via
\[ \mathrm{Tr}\big( \rho\, \varphi(O)\big) = \int \dd\mu^O[o]\, \varphi(o), \]
for any (bounded) function $\varphi$. Now, a time evolution $\rho\to \rho_t$, as specified by eq.(\ref{eq:rho-dyn}), induces a time evolution of the measure $\mu^O\to \mu_t^O$ via $\int \dd\mu_t^O[o]\, \varphi(o)=\mathrm{Tr}\big( \rho_t\, \varphi(O)\big)$. Since $\rho_t$ is random, so is $\mu_t^O$. The statements about the large $\gamma$ limit are going to be formulated in terms of the limiting behavior of those measures for appropriately chosen observables.
The measure we should look at is that associated to the position, the observable that is monitored, defined by
\[ \int \dd\mu_t^X(x)\, \varphi(x)=\mathrm{Tr}\big( \rho_t\, \varphi(X)\big),\]
for any function $\varphi$. To simplify notation, we drop the upper index $X$ on $\mu^X$ and let $\dd \mu_t(x):=\dd \mu_t^X(x)$. Alternatively, $\dd\mu_t(x)= \rho_t(x,x)\, \dd x$.
\subsection{A simple example: Monitoring quantum diffusion.}
Let us first look at the simplest model for which the Lindblad operator $L$ is the so-called quantum Laplacian
\[ L(\rho)= -\frac{D}{2}\big[P ,\big[P, \rho \big]\big] ,\]
with $D$ some diffusion constant, so that $L(\rho_t)(x,y)= \frac{D}{2}(\partial_x+\partial_y)^2\, \rho_t(x,y)$. With this choice, the SDE for the density kernel reads
\beq \label{eq:sde-X}
d\rho_t(x,y) = \frac{D}{2}(\partial_x+\partial_y)^2\, \rho_t(x,y)\, dt -\frac{\gamma^2}{2} (x-y)^2\rho_t(x,y) dt + \gamma \big(( x+y - 2 \vev{X}_t)\rho_t(x,y)\big)\, dW_t.
\eeq
Interestingly enough, this equation is closed on the diagonal elements so that the measure $\dd\mu_t(x):= \rho_t(x,x)\, \dd x$ satisfies
\[
d\,(\dd\mu_t(x)) = \frac{D}{2}\,\partial_x^2\, \dd\mu_t(x)\, dt + 2\gamma\, (x-\vev{X}_t)\, \dd\mu_t(x)\, dW_t,
\]
with $\vev{X}_t= \int \dd\mu_t(x)\, x = : \mu_t[x]$. Alternatively, introducing a test function $\varphi$ and denoting its ``moment'' by $\mu_t[\varphi]=\int \dd\mu_t(x)\, \varphi(x)$, the stochastic evolution reads:
\beq \label{eq:sde-muX}
d\,\mu_t[\varphi] = \frac{D}{2}\,\mu_t[\Delta\varphi]\, dt + 2\gamma\,\mu_t[x\cdot\varphi]^c\, dW_t,
\eeq
with $\Delta=\partial_x^2$ the Laplacian, and $\mu_t[x\cdot\varphi]^c:= \mu_t[x\varphi]-\mu_t[x]\mu_t[\varphi]$ the ``connected $[x\cdot\varphi]$-moment".
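Before analyzing the large-$\gamma$ limit, it may help to see eq.(\ref{eq:sde-muX})
at work numerically. The sketch below (an illustration only; the grid, the
initial law and all numerical parameters are arbitrary choices) integrates the
grid version of the equation for $\dd\mu_t$ and tracks the mean position
$\mu_t[x]$; for large $\gamma$ the measure stays sharply peaked while its
centre accumulates a quadratic variation close to $D\,t$, the diffusive scale
set by the semigroup $e^{t\frac{D}{2}\Delta}$ appearing in the proposition below.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

# Grid version of d mu = (D/2) mu'' dt + 2 gamma (x - <x>) mu dW
x = np.linspace(-10.0, 10.0, 2001); h = x[1] - x[0]
mu = np.exp(-x**2 / 0.5); mu /= mu.sum()

D, gamma = 1.0, 30.0
dt, nsteps = 2e-5, 50_000                     # explicit scheme: (D/2) dt / h^2 < 1/2
traj = []
for _ in range(nsteps):
    dW = np.sqrt(dt) * rng.standard_normal()
    mean_x = np.dot(mu, x)
    lap = np.zeros_like(mu)
    lap[1:-1] = (mu[2:] - 2.0 * mu[1:-1] + mu[:-2]) / h**2
    mu = mu + 0.5 * D * lap * dt + 2.0 * gamma * (x - mean_x) * mu * dW
    mu = np.clip(mu, 0.0, None); mu /= mu.sum()
    traj.append(mean_x)

traj = np.array(traj)
qv = np.sum(np.diff(traj)**2)                 # quadratic variation of <X>_t
width = np.sqrt(np.dot(mu, x**2) - np.dot(mu, x)**2)
print("quadratic variation:", qv, " vs D*T =", D * nsteps * dt,
      " final width:", width)
\end{verbatim}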
Eq.(\ref{eq:sde-muX}) defines a process on measures on the real line. It is specified by its transition kernel which is going to be generated by a second order differential operator. We have to spell out the class of functions (of the measure $\mu_t$) used to test the process on which this operator or kernel is acting. By construction, the functions we are going to consider are polynomials in the moments of the measure $\mu_t$ and their appropriate completions (that is, say, convergent series of the moments of the measure $\mu_t$, with a finite radius of convergence).
Let $\varphi_j$, $j=1,\cdots, n$, be test functions and let $\mu_t[\varphi_j]$ be the corresponding moments. Let $f$ be a function of $n$ variables, $f(\vec \mu)=f(\mu_1,\cdots,\mu_n)$, defined by its series expansion (with a finite radius of convergence). The class of functions (of the measure $\mu_t$) we consider are defined by
\[ F^{\vec{\varphi}}_f(\mu_t):= f(\mu_t[\varphi_1],\cdots,\mu_t[\varphi_n]).\]
We set $f^{\vec\varphi}= f\circ {\vec\varphi}$, that is $f^{\vec\varphi}(x)=f(\varphi_1(x),\cdots,\varphi_n(x))$.
We can then state:
{\bf Proposition:} {\it Let $\mu_0$ be the initial condition $\mu_0=\mu_{t=0}$.\\
At $\gamma\to \infty$ the limiting process for the measure $\mu_t$ is that of measures concentrated on a Brownian motion started at an initial position chosen randomly with distribution $\mu_0$. That is~\footnote{$\delta_y$ denotes the Dirac measure centered at $y$: $\delta_y:=\delta(x-y)\dd x$.}:
\beq \label{eq:mu-infty} \lim_{\gamma\to \infty} \dd \mu_t=\delta_{Y_t},\quad Y_t:=B_{D t}\ \mathrm{with}\ Y_{t=0} \ \mu_0\,\mathrm{-distributed},
\eeq
with $B_t$ a normalized Brownian motion. The limit is weak in the sense that it holds for all moments $F^{\vec{\varphi}}_f(\mu_t)$. Namely
\beq \label{eq:limiting}
\lim_{\gamma\to\infty} \mathbb{E}\big[ F^{\vec{\varphi}}_f(\mu_t) \big] = \mathbb{E}_{\mu_0}[f^{\vec{\varphi}}(Y_t)]
= \mu_0[ e^{t\,\frac{D}{2}\, \Delta}\cdot f^{\vec\varphi}] .
\eeq
where $\mathbb{E}_{\mu_0}$ refers to the measure on Brownian motion with initial conditions distributed according to $\mu_0$.
}
{\bf Proof:}
The proof goes in three steps: (i) first identify the differential operators (acting on $F^{\vec{\varphi}}_f$) generating the process; (ii) then identify the differential operator echoing the monitoring and look at its limiting action at large $\gamma$; and (iii) finally use perturbation theory to deal with the effect of Lindblad dynamics on top of the monitoring.
(i) Let $F^{\vec{\varphi}}_f(\mu_t)$ be some moments of the random measure $\mu_t$. Using eq.(\ref{eq:sde-muX}) and It\^o calculus, it is easy to check (as for any stochastic process generated by a stochastic differential equation) that these moments satisfy a stochastic differential equation of the form $d F^{\vec{\varphi}}_f(\mu_t) = \big(\mathcal{D}\cdot F^{\vec{\varphi}}_f\big)(\mu_t)\, dt + \mathrm{noise}$, with $\mathcal{D}$ a second order differential operator. Equivalently, their expectation reads
\[ \mathbb{E}_{\mu_0}\big[ F^{\vec{\varphi}}_f(\mu_t) \big] = \Big( [e^{t\,\mathcal{D}}]\cdot F^{\vec{\varphi}}_f\Big)(\mu_0).\]
The form of eq.(\ref{eq:sde-muX}) ensures that the second order differential operator $\mathcal{D}$ decomposes as $\mathcal{D}=\mathcal{D}_0+4\gamma^2\, \mathcal{D}_2$. Here $\mathcal{D}_0$ is a first order differential operator whose action on linear functions is such that $\mathcal{D}_0\cdot \mu[\varphi] := \mu[ \frac{D}{2}\Delta\varphi]$, by definition. It is extended to any functions $F^{\vec{\varphi}}_f(\mu)$ via Leibniz's rules:
\[ \big(\mathcal{D}_0\cdot F^{\vec{\varphi}}_f\big)(\mu)= \sum_{k=1}^n \mu[ \frac{D}{2}\Delta\varphi_k]\, (\nabla_k f)(\mu[\varphi_1],\cdots,\mu[\varphi_n]) .\]
The operator $ \mathcal{D}_2$ is a second order differential operator whose action on first and second moments is
$\mathcal{D}_2\cdot \mu[\varphi] =0$ and $\mathcal{D}_2\cdot \mu[\varphi_1]\, \mu[\varphi_2] = \mu[x\cdot\varphi_1]^c\, \mu[x\cdot\varphi_2]^c$, by definition. It is extended to any functions $F^{\vec{\varphi}}_f(\mu)$ as a second order differential operator:
\[ (\mathcal{D}_2\cdot F^{\vec{\varphi}}_f\big)(\mu)= \frac{1}{2} \sum_{k,l=1}^n \mu[x\cdot\varphi_k]^c\,\mu[x\cdot\varphi_l]^c\, (\nabla_k\nabla_l f)(\mu[\varphi_1],\cdots,\mu[\varphi_n]) .\]
Both operators $\mathcal{D}_2$ and $\mathcal{D}=\mathcal{D}_0+4\gamma^2\, \mathcal{D}_2$ are non-positive, because they are both associated to well defined stochastic differential equations.
(ii) The operator $4\gamma^2\, \mathcal{D}_2$ is the one associated to QND $X$-measurement (without extra evolution). From the analysis of the previous section, we know that in a pure QND monitoring of the position the measure $\mu_t$ behaves at large $\gamma$ as
\[ \dd\mu_t(x)\vert^\mathrm{qnd}={\frac{1}{\mathcal{Z}_t}}\, e^{-2\gamma^2t\,\big((x-\bar x)^2+O(\gamma^{-1})\big)}\, \dd \mu_0(x)\, ,\]
with $\bar x$ $\mu_0$-distributed and $\mathcal{Z}_t\simeq \tilde \mu_0(\bar x)\, \sqrt{\pi/2\gamma^2 t}$ (with $\dd \mu_0(x)=\tilde \mu_0(x)dx$). Thus we have
\[ \lim_{\gamma\to\infty}\dd \mu_t(x)\vert^\mathrm{qnd} = \delta_{\bar x},\]
with $\bar x$ $\mu_0$-distributed. Alternatively, this implies that $\mathbb{E}^\mathrm{qnd}\big[\mu_t[\varphi_1]\cdots \mu_t[\varphi_n]\big] \to \mu_0[\varphi_1\cdots\varphi_n]$, as $\gamma\to\infty$, which yields that
\[ \lim_{\gamma\to\infty}\, \mathbb{E}_{\mu_0}^\mathrm{qnd}\big[ F^{\vec{\varphi}}_f(\mu_t) \big] = \mu_0[ f^{\vec\varphi}] ,\]
for any function $f$. Since $[e^{4\gamma^2\, t\, \mathcal{D}_2}]$ is the transition kernel for QND measurements, this is equivalent to
\[ \lim_{\gamma\to\infty}\, \Big( [e^{4\gamma^2\,t\,\mathcal{D}_2}]\cdot F^{\vec{\varphi}}_f\Big)(\mu_0) = \mu_0[ f^{\vec\varphi}] .\]
From this we learn that:\\
-- The kernel of $\mathcal{D}_2$ is made of linear functions: $\mathrm{Ker}\mathcal{D}_2=\{ \mu[\varphi],\ \varphi\, \mathrm{test\ function}\}$.\\
-- Let $\Pi$ be the projector on $\mathrm{Ker}\mathcal{D}_2$ defined by $\Pi:= \lim_{a\to\infty} e^{a\,\mathcal{D}_2}$. Let $F^{\vec{\varphi}}_f\vert_{\parallel}$ be the projection of $F^{\vec{\varphi}}_f$ on $\mathrm{Ker}\mathcal{D}_2$, defined by $F^{\vec{\varphi}}_f\vert_{\parallel}=\Pi\cdot F^{\vec{\varphi}}_f$. Then $F^{\vec{\varphi}}_f\vert_{\parallel}(\mu) = \mu[f^{\vec\varphi}] $.
(iii) We now study the (original) process whose generator is $\mathcal{D}=\mathcal{D}_0+4\gamma^2\, \mathcal{D}_2$. By construction, we have
\[ \mathbb{E}_{\mu_0}\big[ F^{\vec{\varphi}}_f(\mu_t) \big] = \Big( [e^{t\,(\mathcal{D}_0+4\gamma^2\, \mathcal{D}_2)}]\cdot F^{\vec{\varphi}}_f\Big)(\mu_0).\]
It is clear that if $F^{\vec{\varphi}}_f\vert_{\parallel}=0$ (that is, $F^{\vec{\varphi}}_f$ has no linear component), then $\lim_{\gamma\to\infty}\mathbb{E}_{\mu_0}\big[ F^{\vec{\varphi}}_f(\mu_t) \big]=0$. That is: only expectations of functions in $\mathrm{Ker}\mathcal{D}_2$ have a non-trivial limit as $\gamma\to\infty$, and the limiting process is reduced to that space of zero modes (i.e. to that kernel).
We now use an algebraic strategy to prove convergence. Since $\mathcal{D}_0: \mathrm{Ker}\mathcal{D}_2 \to \mathrm{Ker}\mathcal{D}_2$, perturbation theory tells us (as recalled below) that the dynamics on $ \mathrm{Ker}\mathcal{D}_2$ reduces to
\[ \lim_{\gamma\to\infty}\mathbb{E}_{\mu_0}\big[ F^{\vec{\varphi}}_f(\mu_t) \big] = \Big( [e^{t\,\mathcal{D}_0}\circ \Pi]\cdot F^{\vec{\varphi}}_f\Big)(\mu_0),\]
or equivalently,
\beq \label{eq:limit-alpha}
\lim_{\gamma\to\infty}\mathbb{E}_{\mu_0}\big[ F^{\vec{\varphi}}_f(\mu_t) \big] = [e^{t\,\mathcal{D}_0}]\cdot \mu_0[f^{\vec\varphi}] .
\eeq
This last equation is equivalent to the claim (\ref{eq:limiting}) because $[e^{t\,\mathcal{D}_0}]\cdot \mu_0[f^{\vec\varphi}] = \mu_0[e^{t\,\frac{D}{2}\, \Delta}\cdot f^{\vec\varphi}] $.
It thus remains to argue for eq.(\ref{eq:limit-alpha}). Let $K^\gamma_t:=[e^{t\,(\mathcal{D}_0+4\gamma^2\, \mathcal{D}_2)}]$ and $\Pi^\gamma_t:= [e^{t\,4\gamma^2\,\mathcal{D}_2}]$. We have $\partial_t K_t^\gamma = K_t^\gamma\, (\mathcal{D}_0+4\gamma^2\, \mathcal{D}_2)$ and $\lim_{\gamma\to\infty}\Pi^\gamma_t=\Pi$ with $\Pi$ the projector on linear functions (i.e. on $\mathrm{Ker}\mathcal{D}_2$). Let $F\in \mathrm{Ker}\mathcal{D}_2$. Then $\partial_t (K^\gamma_t\cdot F)= K^\gamma_t \mathcal{D}_0\cdot F$ (because $\mathcal{D}_2\cdot F=0$). Equivalently $\partial_t (K^\gamma_t\,\Pi)= K^\gamma_t\, ( \mathcal{D}_0\,\Pi)$. Now because $\mathcal{D}_0$ maps $\mathrm{Ker}\mathcal{D}_2$ onto $\mathrm{Ker}\mathcal{D}_2$ we have $\mathcal{D}_0\,\Pi = \Pi\, \mathcal{D}_0\,\Pi$, and thus $\partial_t (K^\gamma_t\,\Pi)= (K^\gamma_t\,\Pi)\, ( \Pi\, \mathcal{D}_0\,\Pi)$. Integrating and taking the large $\gamma$ limit yields eq.(\ref{eq:limit-alpha}).
\hfill $\square$
Let us end this sub-section by a remark. It is easy to verify that the dynamical equation (\ref{eq:sde-X}) admits a separation of variables so that its general solutions are density kernels $\rho_t(x,y)$ of the following form
\[ \rho_t(x,y)= \sigma_0(\frac{x-y}{2})\cdot e^{-\frac{\gamma^2}{2} (x-y)^2\, t}\cdot \tilde \mu_t(\frac{x+y}{2}),\]
with $\sigma_0$ an arbitrary function (normalized to $\sigma_0(0)=1$) and $\tilde \mu_t$ the density (with respect to the Lebesgue measure) of the measure $\dd \mu_t$, i.e. $\dd \mu_t(x)= \tilde \mu_t(x)\, \dd x$. This gives the complete solution of the position monitoring of a simple quantum diffusion. It is clear that, except for the position (the observable $X$), the measures associated to any other system observables have no well-defined large $\gamma$ limit. This in particular holds true for the momentum observable $P$, as expected.
\subsection{Generalization : General stochastic diffusion}
We can reverse the logic and ask ourselves whether it is possible to obtain any stochastic differential equation, of the form $dY_t= U(Y_t)dt + V(Y_t)dB_t$, as the strong monitoring limit of a quantum dynamical system. That is: we ask whether, given two real functions $U(y)$ and $V(y)$ (sufficiently well-behaved), we can choose a Lindbladian $L$ such that the large $\gamma$ limit of the quantum trajectories
\beq \label{eq:sde-general}
d\rho_t = L(\rho_t)\, dt -\frac{\gamma^2}{2} [X, [X,\rho_t]] dt + \gamma \big( X\rho_t + \rho_t X - 2 \vev{X}_t\rho_t\big)\, dW_t,\eeq
leads to solutions of the stochastic differential equation $dY_t= U(Y_t)dt + V(Y_t)dB_t$.
{\bf Proposition:} {\it
Let $L=L_U+L_V$ be the sum of two Lindblad operators such that their duals $L_U^*$ and $L_V^*$ act on any observable $\hat \varphi$ on $\mathcal{H}=L^2(\mathbb{R})$, as follows (recall that $V(X)^*=V(X)$ and $U(X)^*=U(X)$)
\beqs
L^*_V(\hat \varphi) &=& V(X)P\, \hat \varphi\, PV(X) - \frac{1}{2}\big(\hat \varphi\, V(X)P^2V(X) + V(X)P^2V(X)\, \hat \varphi\big),\\
L^*_U(\hat \varphi) &=& \frac{i}{2}[ U(X)P+PU(X),\hat \varphi],
\eeqs
with $X$ the position observable and $P$ the momentum observable (such that $[X,P]=i$).\\
Let $\mu_t$ be the measure on the real line induced by $\rho_t$ via $\int \dd\mu_t(x)\varphi(x)=\mathrm{Tr}\big(\rho_t\,\varphi(X)\big)$ for any function $\varphi$ with $\rho_t$ solution of eq.(\ref{eq:sde-general}). \\
Then, in the large $\gamma$ limit, $\mu_t$ concentrates on solutions of the stochastic differential equation $dY_t= U(Y_t)dt + V(Y_t)dB_t$, in the sense that
\[ \lim_{\gamma\to\infty} \dd \mu_t = \delta_{Y_t}\quad \mathrm{with}\ dY_t= U(Y_t)dt + V(Y_t)dB_t. \]
}
{\bf Proof:}
Recall the definition $\mu_t[\varphi]:=\int \dd\mu_t(x)\varphi(x)=\mathrm{Tr}\big(\rho_t\,\varphi(X)\big)$. By duality, if $\rho_t$ evolves according to eq.(\ref{eq:sde-general}), then
\[ d\,\mu_t[\varphi] =\mu_t[\hat D\cdot\varphi]\, dt + 2\gamma\,\mu_t[x\cdot\varphi]^c\, dW_t,\]
with $\mu_t[x\cdot\varphi]^c= \mu_t[x\cdot\varphi]-\mu_t[x]\mu_t[\varphi]$ as before and $\hat {D}$ a linear operator on functions $\varphi$ such that
\[ \mu_t[\hat D\cdot\varphi]=\mathrm{Tr}\big(L(\rho_t)\,\varphi(X)\big)= \mathrm{Tr}\big(\rho_t\, L^*(\varphi(X))\big),\]
because $\mathrm{Tr}\big(L(\rho_t)\,\varphi(X)\big)= \mathrm{Tr}\big(\rho_t\, L^*(\varphi(X))\big)$ by definition of the dual Lindbladian $L^*$. The operator $\hat D$ exists and is well-defined because, as we shall see, our choice of $L$ ensures that $ L^*(\varphi(X))$ is again a function of the observable $X$. To prove the claim we need to check that $L^*$ is such that
\[ L^*(\varphi(X)) = \frac{1}{2}V^2(X)\partial_x^2\varphi(X) + U(X)\partial_x\varphi(X) =: ({D}_\mathrm{st}\cdot\varphi)(X) ,\]
because the differential operator associated to the SDE $dY_t= U(Y_t)dt + V(Y_t)dB_t$ is ${D}_\mathrm{st}=\frac{1}{2}V^2(x)\partial_x^2 + U(x)\partial_x$. Now, if $\hat \varphi =\varphi(X)$, so that it commutes with $V(X)$ and $U(X)$, we have
\beqs
L^*_V(\varphi(X)) &=& - \frac{1}{2}V(X)\, [P,[P,\varphi(X)]]\, V(X) = +\frac{1}{2} V(X)^2\, \partial_x^2\varphi(X),\\
L^*_U(\varphi(X)) &=& \frac{i}{2}\big( U(X) [P,\varphi(X)] + [P,\varphi(X)] U(X)\big) = U(X)\partial_x\varphi(X),
\eeqs
so that $(L^*_V + L^*_U)(\varphi(X)) = ({D}_\mathrm{st}\cdot\varphi)(X)$ as required.
The rest of the proof is as before. We look at the functions $F^{\vec{\varphi}}_f(\mu_t)$. By identical arguments (with ${D}_\mathrm{st}$ replacing the Laplacian $\Delta$), we then have
\beq \label{eq:limit-alpha-general}
\lim_{\gamma\to\infty}\mathbb{E}_{\mu_0}\big[ F^{\vec{\varphi}}_f(\mu_t) \big] =\mu_0 [e^{t\,{D}_\mathrm{st}}\cdot f^{\vec\varphi}] .
\eeq
In parallel, let $\dd \mu^\infty_t:=\delta_{Y_t}$ with $Y_t$ solution of $dY_t= U(Y_t)dt + V(Y_t)dB_t$ with initial condition $Y_{t=0}$ $\mu_0$-distributed. Then, $\mu_t^\infty[\varphi]=\varphi(Y_t)$ and $F^{\vec{\varphi}}_f(\mu^\infty_t) = f^{\vec{\varphi}}(Y_t)$, and we have
\beq \label{eq:infty-general}
\mathbb{E}\big[ F^{\vec{\varphi}}_f(\mu^\infty_t) \big] = \mathbb{E}_{\mu_0}[f^{\vec{\varphi}}(Y_t)] = \mu_0[ e^{t\,{D}_\mathrm{st}}\cdot f^{\vec\varphi}] .
\eeq
Comparing eq.(\ref{eq:limit-alpha-general}) and eq.(\ref{eq:infty-general}) proves the claim.
\hfill $\square$
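The operator identity $L^*(\varphi(X)) = \frac{1}{2}V^2(X)\partial_x^2\varphi(X) + U(X)\partial_x\varphi(X)$ used in the proof can also be checked symbolically. The following short script (a sketch in Python using the \texttt{sympy} library; it represents $P=-i\partial_x$ with $\hbar=1$ and lets the operators act on an auxiliary test function $f$, the variable names being ours and not part of the text) reproduces the computation above and prints $0$:
\begin{verbatim}
# Symbolic check that L*_V(phi(X)) + L*_U(phi(X)) = (1/2) V^2 phi'' + U phi',
# acting on a test function f, with P = -i d/dx and hbar = 1.
import sympy as sp

x = sp.symbols('x', real=True)
f, phi, U, V = (sp.Function(n)(x) for n in ('f', 'phi', 'U', 'V'))

P  = lambda g: -sp.I*sp.diff(g, x)        # momentum operator
P2 = lambda g: P(P(g))

# L*_V(phi) f = V P phi P V f - (1/2)(phi V P^2 V f + V P^2 V phi f)
LV = V*P(phi*P(V*f)) - sp.Rational(1, 2)*(phi*V*P2(V*f) + V*P2(V*phi*f))

# L*_U(phi) f = (i/2) [U P + P U, phi] f
A  = lambda g: U*P(g) + P(U*g)            # the operator U P + P U
LU = sp.I/2*(A(phi*f) - phi*A(f))

target = sp.Rational(1, 2)*V**2*sp.diff(phi, x, 2)*f + U*sp.diff(phi, x)*f
print(sp.simplify(sp.expand(LV + LU - target)))   # prints 0
\end{verbatim}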
Let us end this sub-section by a remark. This construction generalizes to higher dimensional systems. Indeed, consider a system of stochastic differential equations, $dY^j_t= U^j(Y_t)dt + V^j_a(Y_t)dB^a_t$ (with implicit summation over repeated indices), on $M$ variables $Y^j$ driven by $N$ Brownian motions $B^a_t$ with quadratic variations $dB^a_tdB^b_t= \kappa^{ab}\, dt$. We may then ask under which conditions a quantum system concentrates along trajectories that are solutions of these SDEs. Of course, the system has to be in dimension $M$ with Hilbert space $\mathcal{H}=L^2(\mathbb{R}^M)$. Let us consider the evolution equation (\ref{eq:sde-general}) generalized in dimension $M$ (with monitoring of the $M$ observables $X^j$) with Lindblad operator $L=L_U+L_V$ given by (with implicit summation on repeated indices)
\beqs
L^*_V(\hat \varphi) &=& \kappa^{ab}\big( V^j_a(X)P_j\, \hat \varphi\, P_kV^k_b(X) - \frac{1}{2}\big(\hat \varphi\, V^j_a(X)P_jP_kV^k_b(X) + V^j_a(X)P_jP_kV^k_b(X)\, \hat \varphi\big)\big),\\
L^*_U(\hat \varphi) &=& \frac{i}{2}[ U^j(X)P_j+P_jU^j(X),\hat \varphi],
\eeqs
with $P_j$ the momentum operator conjugate to the position observable $X^j$ (i.e. $[X^j,P_k]=i\delta^j_k$). It is then easy to check that the measure on $\mathbb{R}^M$ associated to $X^j$ and induced by the density matrix evolving according to the $M$-dimensional generalization of eq.(\ref{eq:sde-general}) concentrates in the large $\gamma$ limit along the trajectories that are solutions of $dY^j_t= U^j(Y_t)dt + V^j_a(Y_t)dB^a_t$.
It remains an open question to decipher which stochastic processes describe the strong monitoring limit for Lindbladians that are not quadratic in the momentum operators.
\section{Monitoring continuous spectrum observable with Hamiltonian dynamics}
\label{sec:hamilton}
The aim of this section is to analyze similarly the large monitoring limit for a system undergoing a non-dissipative Hamiltonian dynamics. We consider a particle on the real line with Hilbert space $\mathcal{H}=L^2(\mathbb{R})$ and monitor its position. The density matrix dynamical equation is (we put back the Planck constant for later convenience)
\beq \label{eq:rho-dyn-H}
d\rho_t = -\frac{i}{\hbar}[H,\rho_t]\, dt -\frac{\gamma^2}{2} [X, [X,\rho_t]] dt + \gamma \big( X\rho_t + \rho_t X - 2 \vev{X}_t\rho_t\big)\, dW_t,
\eeq
for some Hamiltonian $H$. As is well known, this equation preserves pure states (by construction, because monitoring preserves the state purity), so that we can equivalently write it on wave functions $\psi_t(x)$ \cite{Old-classic}:
\beq \label{eq:schro-sto}
d\psi_t(x) = -\frac{i}{\hbar}(H\psi_t)(x)\, dt -\frac{\gamma^2}{2} ( x - \vev{X}_t)^2\,\psi_t(x)\, dt + \gamma\, ( x - \vev{X}_t)\,\psi_t(x)\, dW_t ,
\eeq
with $\vev{X}_t=\int dx\, x |\psi_t(x)|^2$ and $(H\psi_t)(x)=-\frac{\hbar^2}{2m} \partial_x^2\psi_t(x) + V(x)\psi_t(x)$ for a (non-relativistic) particle of mass $m$ in a potential $V$.
As recalled in the introduction, equation (\ref{eq:schro-sto}) encodes three different regimes \cite{Many,Kolokol,Bassi,BDK}: (a) a collapse regime, (b) a classical regime, and (c) a diffusive regime. The aim of this section is to show that we may define a scaling limit which describes the cross-over from the classical regime to the diffusive regime.
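A direct way to explore these regimes is to integrate eq.(\ref{eq:schro-sto}) numerically. The sketch below (in Python, assuming only the \texttt{numpy} library; units $\hbar=m=1$, a harmonic potential, a split-step Fourier scheme for the unitary part and a crude Euler--Maruyama step with renormalization for the measurement terms; all numerical values are illustrative, not tied to the text) propagates a wave packet and records $\vev{X}_t$:
\begin{verbatim}
# Crude split-step integration of the stochastic Schrodinger equation
# d psi = -i H psi dt - (gamma^2/2)(x-<X>)^2 psi dt + gamma (x-<X>) psi dW,
# with hbar = m = 1 and V(x) = (1/2) Omega^2 x^2.
import numpy as np

N, Lbox = 512, 20.0
x = np.linspace(-Lbox/2, Lbox/2, N, endpoint=False)
dx = x[1] - x[0]
k = 2*np.pi*np.fft.fftfreq(N, d=dx)

Omega, gamma, dt, nsteps = 1.0, 2.0, 1e-3, 5000
V = 0.5*Omega**2*x**2
expV = np.exp(-1j*V*dt/2)          # half-step in the potential
expT = np.exp(-1j*(k**2/2)*dt)     # full kinetic step in Fourier space

psi = np.exp(-(x - 2.0)**2/2).astype(complex)   # initial packet centered at x0 = 2
psi /= np.sqrt(np.sum(np.abs(psi)**2)*dx)

mean_x = []
rng = np.random.default_rng(0)
for _ in range(nsteps):
    # unitary part: split-step Fourier
    psi = expV*np.fft.ifft(expT*np.fft.fft(expV*psi))
    # measurement back-action: Euler-Maruyama step followed by renormalization
    m = np.sum(x*np.abs(psi)**2)*dx
    dW = np.sqrt(dt)*rng.standard_normal()
    psi += (-0.5*gamma**2*(x - m)**2*dt + gamma*(x - m)*dW)*psi
    psi /= np.sqrt(np.sum(np.abs(psi)**2)*dx)
    mean_x.append(m)
\end{verbatim}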
\subsection{A simple case: Particle in a harmonic potential}
Let us start with this simple case, which includes a free particle. It will allow us to decipher what strong monitoring limit we may expect and which features may be valid in a more general setting. We closely follow methods of analysis used in refs.\cite{Bassi, BDK, Jacobs}.
Let $V(x)=\frac{1}{2}m\Omega^{2}x^{2}$ be the potential. As is well known, eq.(\ref{eq:schro-sto}) is better solved by representing the wave function as $\psi_t(x)= \phi_t(x)/\sqrt{Z_t}$ with the normalization $Z_t$ and $\phi_t(x)$ solution of the linear equation
\beq \label{eq:Omega-sto}
d\phi_t(x) = i\hbar^{-1}\,\Big( \frac{\hbar^2}{2m} \partial_x^2\phi_t(x) - V(x) \phi_t(x)\Big)\, dt -\frac{\gamma^2}{2}\, x^2\,\phi_t(x)\, dt + \gamma\, x \,\phi_t(x)\, dS_t ,
\eeq
where $S_t$ is the monitoring signal (with $dS_t^2=dt$), solution of $dS_t = 2 \gamma \vev{X}_t\, dt + dW_t$. The normalization factor $Z_t$ is such that $dZ_t= 2\gamma\, \langle X\rangle_t\, Z_t\, dS_t=2 \gamma\, \big(\int \dd x x |\phi_t(x)|^2\big)\, dS_t $.
Besides the frequency $\Omega$ associated to the harmonic potential, there is another frequency scale $\omega$ and a length scale $\ell$, both arising from the position monitoring, with
\[ \ell^4 := \frac{\hbar}{m\gamma^2}\ ,\quad \omega^2 := \frac{\hbar\gamma^2}{m} .\]
Eq.(\ref{eq:Omega-sto}) is a Schr\"odinger equation in a complex harmonic potential and can be exactly solved via superposition of Gaussian wave packets. Thus, as in \cite{BDK} we take a Gaussian ansatz for the un-normalized wave function written as
\begin{equation} \label{eq:ansatzphi}
\phi_{t}(x)=\phi_0\, \exp\big(-a_{t}(x-\bar{x}_{t})^{2}+i\bar{k}_{t}x+\alpha_{t}\big) ,
\end{equation}
where all the time-indexed quantities have to be thought as stochastic variables.
For a single Gaussian packet ansatz -- the case we shall consider --, $\bar{x}_{t}$ and $\bar{k}_{t}$
are the mean position and mean wave vector. This single Gaussian packet is then a solution of eq.(\ref{eq:schro-sto}) if \cite{Kolokol,Bassi,BDK}
\begin{align*}
da_{t} & =\ell^{-2}\, \big(1-i2\ell^4a_{t}^{2}+i\frac{\Omega^2}{2\omega^2}\big)\,\omega dt\\
d\bar{x}_{t} & =\bar{v}_{t}\, dt+\frac{\sqrt{\omega}}{2\ell a_{t}^{R}}\, dW_{t}\\
d\bar{v}_{t} & = -\Omega^2\bar{x}_{t}\,dt - \ell\omega^\frac{3}{2}\,\frac{a_{t}^{I}}{a_{t}^{R}}\, dW_{t}
\end{align*}
where $\bar{v}_{t}$ is the mean velocity.
Here $a_t^R\, (a_t^I)$ denotes the real (imaginary) part of $a_t$ respectively (i.e. $a_t=a_t^R+ia_t^I$).
From these equations, it is clear that $\tau_c=1/\omega$ is the typical time for the wave function to collapse. After a typical time of order $\tau_c$, the Gaussian packet reaches its minimum size with $a_t\simeq a_\infty$ for $t\gg \tau_c$ with
\begin{equation*}
a_\infty=\Big(\frac{1}{2i\ell^4}\big(1+i\frac{\Omega^2}{2\omega^2}\big)\Big)^{1/2}
\end{equation*}
Taking $\omega \to \infty$ while keeping $\Omega$ fixed allows us to simplify:
\begin{equation}
a_\infty=\frac{e^{-i\pi/4}}{\sqrt{2}}\, \ell^{-2} .
\end{equation}
In other words, monitoring stabilizes the wave function in a Gaussian wave packet with constant (minimal) width $\ell$. In this collapsed wave packet, the position and velocity dispersions are
\begin{align*}
\sigma_{x} = 2^{-1/4}\ell,\quad
\sigma_{v} = 2^{-3/4} \omega \ell .
\end{align*}
After this transient collapsing period, for $t\gg \tau_c$, the mean position and velocity evolve according to
\begin{align}
d\bar{x}_{t} & =\bar{v}_{t}\, dt+\sqrt{2\omega\ell^2}\, dW_t ,\label{eq:sto-position1} \\
d\bar{v}_{t} & = -\Omega^2\bar{x}_{t}\,dt + \ell\omega^\frac{3}{2}\, dW_{t} . \label{eq:sto-volicity1}
\end{align}
We now may wonder if there is a well defined strong monitoring limit (i.e. a limit $\gamma\to\infty$). On physical ground, this limit should be such that the time to collapse vanishes, that is $\tau_c\to 0$ or equivalently $\omega\to \infty$. It is then clear from eqs.(\ref{eq:sto-position1},\ref{eq:sto-volicity1}) above that $\ell$ should simultaneously vanish for this limit to make sense, so that the strong monitoring limit is the double scaling limit $\omega\to\infty$, $\ell\to 0$. A closer inspection of eqs.(\ref{eq:sto-position1},\ref{eq:sto-volicity1}) shows that we should take this double limit with $\varepsilon := \omega^{3}\ell^{2}$ fixed (so that $\sqrt{\ell^2\omega}\to 0$). Note that, in this limit, the wave packet is localized both in space and in velocity, $\sigma_x\to 0$ and $\sigma_v\to 0$, so it is actually a classical limit (i.e. $\gamma\to\infty$ and $\hbar\to 0$ with $\hbar\gamma/m$ fixed).
We can summarize this discussion:
{\bf Proposition:}
{\it In the double limit $\omega\to\infty$ and $\ell\to 0$ at $\varepsilon := \omega^{3}\ell^{2}=\hbar^2\gamma^2/m^2$ fixed, the solution of the quantum trajectory equation (\ref{eq:schro-sto}) in a harmonic potential $V(x)=\frac{1}{2} m\Omega^2 x^2$ localizes, in the sense that the probability density converges, $|\psi_t(x)|^2\,\dd x\to\delta_{\bar x_t}$, with $\bar x_t$ a solution of the stochastic equations
\beqa
d\bar{x}_{t} & =& \bar{v}_{t}\, dt, \label{eq:sde-langevinA}\\
d\bar{v}_{t} & =& -\Omega^{2}\bar{x}_{t}\, dt+\sqrt{\varepsilon}\, dW_{t} . \label{eq:sde-langevinB}
\eeqa}
This behavior describes the cross-over from a semi-classical behavior, which occurs just after the transient collapsing period, to the diffusion behavior due to monitoring back action. As is well known, eqs.(\ref{eq:sde-langevinA}, \ref{eq:sde-langevinB}) can be solved exactly with solution:
\begin{equation*}
\bar{x}_{t}=x_{0}\cos(\Omega t)+\sqrt{\frac{\varepsilon}{\Omega^2}}\int_{0}^{t}dW_{s}\sin(\Omega(t-s)),
\end{equation*}
where we chose for simplicity the initial conditions $x(t=0)=x_0,\ v(t=0)=0$.
It reflects the cross-over behavior from the classical solution $\bar{x}_{t}\simeq x_{0}\cos(\Omega t)$ at small time to the diffusion behavior $\bar{x}_{t}\simeq \sqrt{\frac{\varepsilon}{\Omega^2}}\int_{0}^{t}dW_s \sin(\Omega(t-s))$ at large time. The fuzziness of the trajectory can be quantified by computing the variance of the position. We have $(\Delta \bar x_t)^2=\frac{\epsilon}{\Omega^{2}}\big(\frac{2\Omega t-\sin(2\Omega t)}{4\Omega}\big)$, so that
$(\Delta \bar x_t)^2\simeq {\epsilon t}/{2\Omega^2}$ for $\Omega t \gg 1$, which is typical of a diffusive behavior, and $(\Delta \bar x_t)^2\simeq \frac{\epsilon t^3}{3}$ for $\Omega t \ll 1$, which can be interpreted as a trajectory that remains localized at small times, with a variance growing only like $t^3$. The two behaviors are shown in fig.\ref{fig:comparaison}.
\begin{figure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=1\linewidth]{smalltime.png}
\caption{Small time behavior}
\label{fig:sfig1}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=1\linewidth]{longtime.png}
\caption{Long time behavior}
\label{fig:sfig2}
\end{subfigure}
\caption{ Typical behaviors of a particle trapped in a harmonic oscillator in the strong monitoring regime. The plots correspond to the evolution of the position of the particle viewed at different time scales and evolving according to (\ref{eq:sde-langevinA},\ref{eq:sde-langevinB}) with $\Omega=1$, $\epsilon=1$, $x_0=2$, $v_0=0$. We clearly have a transition from an oscillatory behavior at times of order $1/\Omega$ to a diffusive regime for $t \gg 1/\Omega$.}
\label{fig:comparaison}
\end{figure}
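The curves of fig.\ref{fig:comparaison} are easy to reproduce. The following sketch (Python with \texttt{numpy}; an explicit Euler--Maruyama discretization of eqs.(\ref{eq:sde-langevinA},\ref{eq:sde-langevinB}) with the parameters of the caption, meant only as an illustration) generates such a trajectory:
\begin{verbatim}
# Euler-Maruyama integration of the limiting Langevin equations
#   d x = v dt,   d v = -Omega^2 x dt + sqrt(eps) dW,
# with Omega = 1, eps = 1, x0 = 2, v0 = 0 as in the figure caption.
import numpy as np

Omega, eps = 1.0, 1.0
dt, T = 1e-3, 200.0
n = int(T/dt)
rng = np.random.default_rng(1)

x, v = 2.0, 0.0
traj = np.empty(n)
for i in range(n):
    dW = np.sqrt(dt)*rng.standard_normal()
    x, v = x + v*dt, v - Omega**2*x*dt + np.sqrt(eps)*dW
    traj[i] = x
# traj[:int(10/dt)] shows the oscillatory (classical) regime,
# the full array shows the diffusive spreading at t >> 1/Omega.
\end{verbatim}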
\subsection{Generalization: A particle in a smooth potential}
We can now build on the previous observation to state what the strong monitoring limit could be for a particle in an arbitrary potential. As suggested some time ago \cite{ZZurek,Kolokol,Bassi}, after a transient time of order $\tau_c = 1/\omega$ with $\omega^2 := \hbar\gamma^2/m$, continuous monitoring of the position leads to a collapse of the wave function onto a Gaussian state with a minimal width of order $\ell$ with $\ell^4 := {\hbar}/{m\gamma^2}$.
In view of the previous analysis, we are led to suggest the following:
{\bf Conjecture:} {\it
In the double limit $\gamma\to\infty$ and $\hbar/m\to 0$ at $\varepsilon := \hbar^2\gamma^2/m^2$ fixed,
solution of the quantum trajectory equation (\ref{eq:schro-sto}) in a potential $V(x)$ localizes at a position $\bar x_t$ solution of the Langevin equation
\begin{align}
d\bar{x}_{t} & =\bar{v}_{t}\, dt \label{eq:sde-conjecA} \\
d\bar{v}_{t} & = -\frac{1}{m}\, \partial_xV(\bar{x}_{t})\, dt+\sqrt{\varepsilon}\, dW_{t} \label{eq:sde-conjecB}
\end{align}}
Let us give a few arguments in favour of this claim. After the transient collapse time, we can take the Gaussian wave packet (\ref{eq:ansatzphi}) for the (unnormalized) wave function. Taylor expanding the potential around the mean position $\bar x_t$ and keeping only the terms up to second order leads to the stochastic equations for the width, the mean position and the mean velocity:
\begin{align*}
da_{t} & =\big(\gamma^2-2i\frac{\hbar}{m}a_{t}^{2}+\frac{i}{2\hbar}\partial_{x}^{2}V(\bar{x}_{t})\big)\,dt \\
d\bar{x}_{t} & =\bar{v}_{t}\,dt+\frac{\gamma}{2a_{t}^{R}}\,dW_{t} \\
d\bar{v}_{t} & = -\frac{1}{m}\partial_{x}V(\bar{x}_{t})\,dt - \frac{\gamma\hbar a_{t}^{I}}{ma_{t}^{R}}\,dW_{t}
\end{align*}
with $a_t=a_t^R + ia_t^I$ and where we supposed that
\begin{equation}
(x-\bar{x}_{t})^{3}\partial_{x}^{3}V(\bar{x}_{t})\ll(x-\bar{x}_{t})^{2}\partial_{x}^{2}V(\bar{x}_{t}),\label{cubicapprox-1}
\end{equation}
for every $x$, $\bar{x}_t$. As for the harmonic potential, at time $t\gg \tau_c$ the width reaches its stationary values $a_{_{\infty}}=(\frac{1}{2i\ell^4}(1+\frac{i}{2\hbar\gamma^2}\partial_{x}^{2}V(\bar{x}_{t})))^{1/2}\label{eq:finalwidth}$. Demanding that this width be independent of dynamical aspects, that is, independent of the position, imposes $|\partial_{x}^{2}V(\bar{x}_{t})|\ll\hbar\gamma^{2}$. This is indeed satisfied in the double scaling limit (if the potential is smooth enough) and it coincides with the condition of ref.\cite{Jacobs} forcing a localisation of the wave packet. We then simply have $a_{\infty}=\frac{e^{-i\frac{\pi}{4}}}{\sqrt{2}}\ell^{-2}$ and, as in the harmonic case, the position and speed dispersions are given by $\sigma_{x}=2^{-1/4}\ell$, $\sigma_{v}\equiv2^{-3/4}\omega\ell$, in this limit.
Plugging these asymptotic values in the above equations and taking the scaling limit at $\varepsilon := \hbar^2\gamma^2/m^2$ fixed, leads to the Langevin equations (\ref{eq:sde-conjecA},\ref{eq:sde-conjecB}).
Remark that we can make an a posteriori self-consistent check for the approximation (\ref{cubicapprox-1}).
The typical length over which the wave function takes non-zero values
in the $\omega \to \infty$, $\ell \to 0$ limit at $\varepsilon$ fixed scales like $\ell$, so that condition (\ref{cubicapprox-1}) amounts
to $\ell\, \partial_{x}^{3}V(\bar{x}_{t})\ll \partial_{x}^{2}V(\bar{x}_{t})$. This is indeed valid for $\ell \to 0$ if the potential is smooth enough (with finite second and third spatial derivatives) everywhere in space.
\bigskip
\noindent {\bf Acknowledgements}:
This work was in part supported by the ANR project ``StoQ'', contract number
ANR-14-CE25-0003.
D.B. thanks C. Pellegrini for discussions and for his interest in this work.
\bigskip
\begin{appendix}
\section{Exchangeable processes and QND measurements}
We here give the details of the arguments needed to prove the main proposition of Section \ref{sec:qnd}.
Before discussing the continuous time limit, it is useful to have a look at the case of discrete time, which is more elementary. Our point is to show two equivalent descriptions of the monitoring during a non-demolition measurement process.
\subsection{Discrete time monitoring}
The evolution equation for the diagonal elements of the density matrix (this is a measure in general, a subtlety that becomes unavoidable for an observable with continuous spectrum) in the pointer states basis in repeated quantum non-demolition measurements in discrete time reads,
\[ \dd\mu_{n+1}(\alpha)= \frac{\dd\mu_{n}(\alpha)p(i|\alpha)}{\int \dd\mu_{n}(\beta)p(i|\beta)} \text{ with probability } \int \dd\mu_{n}(\beta)p(i|\beta), \]
where Greek letters $\alpha,\beta,...$ index pointer states and Latin letters $i,j,\cdots$ are the outcomes of the indirect measurements. The observation process is defined by
\[ S_n(i):= \#\{m \leq n, \, m^{th} \text{ measurement has given outcome } i\}=:\sum_{m=1}^n \varepsilon_m(i)\]
so that $\varepsilon_n(i)=1$ if the outcome of the $n^{th}$ measurement is $i$ (probability $\int \dd\mu_{n-1}(\beta)p(i|\beta)$) and $0$ otherwise.
The natural situation of the experimenter is to have access only to the observation process, and possibly to the initial condition. From this viewpoint, the fate of the observation process $S_n$ and of $\dd\mu_{n}$ at large $n$ is not so easy to decipher.
However the evolution equations are easily ``solved'' to yield the joint law of $\varepsilon_1,\cdots,\varepsilon_n$ which is
\[ \text{Prob}(\varepsilon_1,\cdots,\varepsilon_n)=\int \dd\mu_{0}(\alpha) \prod_i p(i|\alpha)^{S_{n}(i)},\]
and the value of $\dd\mu_n$ which is
\[ \dd\mu_{n}(\alpha)= \dd\mu_{0}(\alpha) \frac{\prod_i p(i|\alpha)^{S_{n}(i)}}{\int \dd\mu_{0}(\beta) \prod_i p(i|\beta)^{S_{n}(i)}}\]
This law, which involves only the random variables $S_n=\sum_{m=1}^n \varepsilon_m$ is obviously invariant under permutation of $\varepsilon_1,\cdots,\varepsilon_n$, which expresses the property that the sequence $\varepsilon_1,\cdots,\varepsilon_n$ is exchangeable. De Finetti's theorem expresses that if this holds for each $n=1,2,\cdots$ then the random variables $\varepsilon_1,\varepsilon_2,\cdots$ are conditionally independent and identically distributed. This is also apparent from the explicit formula for the law: conditionally on a choice of $\alpha$ (sampled with the law $\dd\mu_{0}(\alpha)$) the joint law of $\varepsilon_1,\cdots,\varepsilon_n$ is
\[ \prod_i p(i|\alpha)^{S_{n}(i)}=\prod_{m=1}^n \prod_{i_m} p(i_m|\alpha)^{\varepsilon_m(i_m)},\]
which says that the $m^{th}$ measurement yields observation of $i_m$ with probability $p(i_m|\alpha)$ independently of the other observations.
From this viewpoint, it is clear that the sequence of frequencies $\frac{S_{n}(i)}{n}$ converges almost surely to $p(i|\alpha)$ so that, if the conditional probability distributions $p(\cdot|\beta)$ are distinct for different $\beta$s, the asymptotics of the observation process allow one to recover the value $\alpha$ sampled initially.
To summarize, the natural situation of an experimenter is to have access at time $n$ to $\varepsilon_1,\cdots,\varepsilon_n$, but the law of the observation process is exactly the same as if, before the experiment begins, a ``cheater'' samples the state of the system with $\dd\mu_{0}$ so that he has access not only to $\varepsilon_1,\cdots,\varepsilon_n$ but also to $\alpha$. The cheater knows in advance the asymptotics (i.e. $\alpha$) while the experimenter discovers it only progressively as time goes by. To make the point again: the \textit{same} process has two different interpretations depending on the information you have at your disposal.
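The two descriptions can be made concrete in a small numerical experiment. In the sketch below (Python with \texttt{numpy}; a two-outcome measurement with five pointer states and made-up conditional probabilities $p(1|\alpha)$, chosen only for illustration), the ``cheater'' samples $\alpha$ once, the outcomes $\varepsilon_m$ are then drawn independently with law $p(\cdot|\alpha)$, and the experimenter updates $\dd\mu_m$ by the rule recalled above; the frequencies $S_n(1)/n$ approach $p(1|\alpha)$ and $\mu_n$ concentrates on the sampled $\alpha$:
\begin{verbatim}
# Repeated QND measurement in discrete time: Bayesian update of mu_n and
# convergence of the outcome frequencies S_n(1)/n to p(1|alpha).
import numpy as np

rng = np.random.default_rng(2)
alphas = np.arange(5)                        # pointer states
p1 = np.array([0.1, 0.3, 0.5, 0.7, 0.9])     # p(outcome 1 | alpha), illustrative
mu = np.full(5, 0.2)                         # uniform initial measure mu_0

alpha = rng.choice(alphas, p=mu)             # the "cheater" samples alpha once

n, S1 = 20000, 0
for m in range(1, n + 1):
    eps = rng.random() < p1[alpha]           # outcome of the m-th measurement
    S1 += eps
    like = p1 if eps else 1.0 - p1           # p(outcome | beta) for each beta
    mu = mu*like / np.sum(mu*like)           # update of d mu_m

print(S1/n, p1[alpha])                       # frequencies ~ p(1|alpha)
print(np.argmax(mu), alpha)                  # mu_n concentrates on alpha
\end{verbatim}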
De Finetti's theorem is also closely related to the notion of reverse martingales. Fix $0 < l < m$. Due to exchangeability, it is easy to see that knowing $\frac{S_{n}}{n}=(\frac{S_{n}(i)}{n})_i$ for $n \geq m$, the best estimate (i.e. the conditional expectation) for $\frac{S_{l}}{l}$ is $\frac{S_{m}}{m}$. In words, the best estimate for a value in the past knowing the future is the present. This is the notion of reverse martingales, to be contrasted with (usual) martingales for whom the best estimate for a value in the future knowing the past is the present. It turns out that the notion of reverse martingales is even more rigid than that of martingales: without any conditions (apart from the existence of conditional expectations implied by their very definition), reverse martingales converge at large times, almost surely and in $\mathbb{L}^1$. Of course, in the case at hand, we can rely on the explicit formula for the law of $S_{n}$ and the strong law of large numbers to be sure that $\frac{S_{n}}{n}$ has a limit at large $n$ but this is deceptive: if $K_n$, $n\geq 0$, is the sequence of partial sums of independent identically distributed integrable random variables, then $\frac{K_{n}}{n}$ is an example of a reverse martingale, so the reverse martingale convergence theorem immediately implies the strong law of large numbers and yields a conceptual proof of it (see e.g. page 484 in \cite{FristedtGray1997}).
\subsection{Continuous time monitoring}
Not only do the frequencies $\frac{S_{n}(i)}{n}$ converge at large $n$. In fact, a stronger property, a central limit theorem holds: if the limiting pointer state is $\alpha$, $\frac{S_{n}(i)-np(i|\alpha)}{n^{1/2}}$ converges to a Gaussian. Note that $\sum_i S_{n}(i)=n$ so there is one degree of freedom less than the number of possible measurement outcomes. To take a continuous time limit, the situation resembles that of random walks: one has to replace $p(i|\alpha)$ by $p_{\delta}(i|\alpha)$ where $\delta \searrow 0$ is the time increment, with $p_{\delta}(i|\alpha)=p_0(i) + O(\delta^{1/2})$ so that each observation has only a small effect (on the correct order of magnitude) on $\dd\mu$. Assuming for simplicity that the pointer state basis is indexed by a real number $A$ and that $i$ takes only two values (so that there is only one degree of freedom) one is naturally led to an observation process described up to normalization by $S_t=B_t+A t$, $t\in [0,+\infty[$ where $B_t$ is a standard Brownian motion and $A$ is sampled from an initial distribution $\dd\mu_{0}$. This is the description from the perspective of the ``cheater'', whose knowledge at time $t$ is $\alpha$ and $S_u$, $u\in [0,t]$, or $\alpha$ and $B_u$, $u\in [0,t]$. In more mathematical terms, the ``cheater'' observes the process via the filtration $\mathcal{G}_t:=\sigma \{A \text{ and } S_u, u\in [0,t]\}= \sigma \{A \text{ and } B_u, u\in [0,t]\}$. Let us note that a general theorem (see e.g. page 322 in \cite{Kallenberg2002}) based on a natural extension of the notion of exchangeability ensures that $C B_t+A t$ with $C,A$ random and $B_t$ $t\in [0,+\infty[$ an independent Brownian motion is the most general continuous exchangeable process on $[0,+\infty[$. A random conditional variance such as $C$ in the above formula plays no role in the forthcoming discussion.
Our goal is to get the description of the same process for the experimenter, who knows only $S_u$, $u\in [0,t]$ at time $t$, i.e. uses the filtration $\mathcal{H}_t:=\sigma \{S_u, u\in [0,t]\}$. It is also useful to introduce the filtration $\mathcal{F}_t:=\sigma \{B_u, u\in [0,t]\}$. The relations between the filtrations $\mathcal{F}_t,\mathcal{G}_t,\mathcal{H}_t$ are the clue to solve our problem. It is crucial that $\mathcal{F}_t$ is independent of $A$ and that $\mathcal{F}_t, \mathcal{H}_t \subset \mathcal{G}_t$, and we use these properties freely in conditional expectations in what follows. We let $\mathbb{E}$ denote the global (i.e. over both $A$ and the Brownian motion) expectation symbol.
\subsection{Change of filtration}
The crucial computation is an identity for the joint law of the random variable $A$ and the process $S_t$.
{\bf Proposition:} {\it
Let $f$ be a nice (measurable and such that the following expectations make sense, non-negative or bounded would certainly do) function from $\mathbb{R}^{k+1}$ to $\mathbb{R}$, and $0=t_0 < t_1 \cdots < t_k=t$. Then
\[ \mathbb{E}\left[f(A,S_{t_1},\cdots,S_{t_k})\right]= \mathbb{E}\left[f(A,B_{t_1},\cdots,B_{t_k})e^{AB_t-A^2 t/2}\right].\]
}
The general tool to understand such a formula is Girsanov's theorem, but in the case at hand, an easy (if tedious) explicit computation does the job. The idea is to write
\begin{eqnarray*} \mathbb{E}\left[f(A,S_{t_1},\cdots,S_{t_k})\right] & = & \mathbb{E}\left[f(A,B_{t_1}+At_1,\cdots,B_{t_k}+At_k)\right] \\ & = & \int \dd\mu_0(\alpha) \mathbb{E}\left[f(\alpha,B_{t_1}+\alpha t_1,\cdots,B_{t_k}+\alpha t_k)\right], \end{eqnarray*}
where the first equality is the definition of the observation process and the second makes use of the fact that the Brownian motion is independent of $A$. Thus we are left to prove the identity
\[ \mathbb{E}\left[f(\alpha,B_{t_1}+\alpha t_1,\cdots,B_{t_k}+\alpha t_k)\right]=\mathbb{E}\left[f(\alpha,B_{t_1},\cdots,B_{t_k})e^{\alpha B_t-\alpha^2 t/2}\right]\]
for every $\alpha \in \mathbb{R}$. This is done by writing the left-hand side using the explicit expression of the finite dimensional distributions of Brownian motion in terms of the Gaussian kernel and translating the integration variables $x_l$ associated to the positions at time $t_l$, $l=1,\cdots,k$, by $\alpha t_l$. Setting $x_l+\alpha t_l= y_l$ (with the convention $x_0=y_0=0$) one gets
\[ -\frac{(x_l-x_{l-1})^2}{2(t_l-t_{l-1})}=-\frac{(y_l-y_{l-1})^2}{2(t_l-t_{l-1})}+ \alpha (y_l-y_{l-1})-\alpha^2 (t_l-t_{l-1})/2 \]
which leads to a telescopic sum $\sum_{l=1}^k \alpha (y_l-y_{l-1})-\alpha^2 (t_l-t_{l-1})/2 = \alpha y_k- \alpha^2 t_k/2$ yielding an expression which is recognized as the right-hand side.
We use this identity to understand the conditional distribution of $A$ when the measurement has been observed up to time $t$, i.e. to have an explicit representation of $H_t:=\mathbb{E}[h(A) |\mathcal{H}_t]$ for an arbitrary measurable function $h$ such that $h(A)$ is integrable. Note that by construction $H_t$ is a closed $\mathcal{H}_t$-martingale. Note also that, at least if $h(A)$ is square integrable, a conditional expectation is a best mean square approximation so that $H_t$ is the best estimate of $h(A)$ (known exactly to the cheater) for someone whose knowledge is limited to the observations. To get a hold on this conditional expectation we introduce a bounded measurable function $g(S_{t_1},\cdots,S_{t_k})$ where $0=t_0 < t_1 \cdots < t_k=t$ and use the general formula to get
\beq \label{eq:compare}
\mathbb{E}\left[h(A)g(S_{t_1},\cdots,S_{t_k})\right]= \int \dd\mu_0(\alpha) h(\alpha) \mathbb{E}\left[g(B_{t_1},\cdots,B_{t_k})e^{\alpha B_t-\alpha^2 t/2}\right] .
\eeq
and for each $\beta \in \mathbb{R}$
\beqs \mathbb{E}\left[\frac{e^{\beta S_t-\beta^2 t/2}}{\int \dd\mu_0(\gamma)e^{\gamma S_t-\gamma^2 t/2}} g(S_{t_1},\cdots,S_{t_k})\right] & = & \\
& & \hspace{-5cm} \int \dd\mu_0(\alpha) \mathbb{E}\left[\frac{e^{\beta B_t-\beta^2 t/2}}{\int \dd\mu_0(\gamma)e^{\gamma B_t-\gamma^2 t/2}} g(B_{t_1},\cdots,B_{t_k})e^{\alpha B_t-\alpha^2 t/2}\right]
\eeqs
which simplifies to
\[ \mathbb{E}\left[\frac{e^{\beta S_t-\beta^2 t/2}}{\int \dd\mu_0(\gamma)e^{\gamma S_t-\gamma^2 t/2}} g(S_{t_1},\cdots,S_{t_k})\right] = \mathbb{E}\left[e^{\beta B_t-\beta^2 t/2}g(B_{t_1},\cdots,B_{t_k})\right].\]
Integrating this identity against $\int \dd\mu_0(\beta) h(\beta)$ and comparing with eq.(\ref{eq:compare}) we get
\[ \mathbb{E}\left[\frac{\int \dd\mu_0(\beta) h(\beta) e^{\beta S_t-\beta^2 t/2}}{\int \dd\mu_0(\gamma)e^{\gamma S_t-\gamma^2 t/2}} g(B_{t_1},\cdots,B_{t_k})\right] = \mathbb{E}\left[h(A)g(S_{t_1},\cdots,S_{t_k})\right].\]
As $\frac{\int \dd\mu_0(\beta) h(\beta) e^{\beta S_t-\beta^2 t/2}}{\int \dd\mu_0(\beta)e^{\beta S_t-\beta^2 t/2}}$ is $\mathcal{H}_t$-measurable and $g$ is arbitrary, we have obtained our major result, an explicit representation for the closed $\mathcal{H}_t$-martingale $H_t$:
\[ H_t=\mathbb{E}[h(A) |\mathcal{H}_t]= \frac{\int \dd\mu_0(\beta) h(\beta) e^{\beta S_t-\beta^2 t/2}}{\int \dd\mu_0(\beta)e^{\beta S_t-\beta^2 t/2}}.\]
This can be rephrased by saying that the measure $\dd\mu_0$ conditional on $\mathcal{H}_t$ is the measure (in fact a measure-valued $\mathcal{H}_t$-martingale)
\[ \dd\mu_t(\alpha) := \dd\mu_0(\alpha) \frac{e^{\alpha S_t-\alpha^2 t/2}}{\int \dd\mu_0(\beta)e^{\beta S_t-\beta^2 t/2}}.\]
As emphasized in the main text, the same expression for $\dd\mu_t$ holds for the diagonal of the density matrix at time $t$ as a functional of the diagonal of the density matrix at time $0$ when the results of the measurements have been taken into account. That this must be the case is strongly supported by the discrete time counterpart recalled above.
What remains to be deciphered is how $S_t$ can be analyzed from the point of view of stochastic processes under the filtration $\mathcal{H}_t$. The general formula for the joint distribution of the random variable $A$ and the process $S_t$ leads easily to
\[ \mathbb{E}[S_t-S_s|\mathcal{H}_s]=(t-s)\mathbb{E}[A|\mathcal{H}_s] \text{ for } 0 \leq s \leq t,\]
so that the process $W_t:=S_t-\int_0^t \mathbb{E}(A|\mathcal{H}_s) \, ds$ is an $\mathcal{H}_t$-martingale. This can also be checked as follows. As $\mathcal{H}_t \subset \mathcal{G}_t$ and $B_t$ is a $\mathcal{G}_t$-martingale, the process
\[ S_t-\mathbb{E}[A|\mathcal{H}_t]\,t=\mathbb{E}[S_t-At|\mathcal{H}_t]=\mathbb{E}[B_t|\mathcal{H}_t]\]
is an $\mathcal{H}_t$-martingale, and we are left to check that the process $\mathbb{E}[A|\mathcal{H}_t]\,t-\int_0^t \mathbb{E}[A|\mathcal{H}_s] \, ds$ is an $\mathcal{H}_t$-martingale. This is easy either by formal manipulations of conditional expectations or by integration by parts:
\[ \mathbb{E}[A|\mathcal{H}_t]\,t-\int_0^t \mathbb{E}[A|\mathcal{H}_s] \, ds = \int_0^t s \, d\mathbb{E}[A|\mathcal{H}_s] \]
which is a martingale as the stochastic integral of an adapted (in fact deterministic) integrand $s$ against the martingale integrator $d\mathbb{E}[A|\mathcal{H}_s]$.
The quadratic variation of $S_t$ (which has continuous trajectories) is $dS_t^2=dB_t^2=dt$ and
\[ \int_0^t \mathbb{E}[A|\mathcal{H}_s] \, ds = \int_0^t \frac{\int \dd\mu_0(\beta) \beta\, e^{\beta S_s-\beta^2 s/2}}{\int \dd\mu_0(\beta)e^{\beta S_s-\beta^2 s/2}} \, ds\] is a finite variation process with continuous trajectories. Hence $W_t:=S_t-\int_0^t \mathbb{E}[A|\mathcal{H}_s] \, ds$ is an $\mathcal{H}_t$-martingale with continuous trajectories and quadratic variation $dW_t^2=dS_t^2=dt$. By L\'evy's characterization theorem, $W_t$ is an $\mathcal{H}_t$ Brownian motion. Thus, from the point of view of the observer, the signal $S_t$ decomposes as an $\mathcal{H}_t$-semimartingale
\[S_t=W_t+\int_0^t \mathbb{E}[A|\mathcal{H}_s] \, ds.\]
With some tedious manipulations of the general formula allowing to go back and forth between expectations of the observation process $S_t$ and the Brownian motion $B_t$ we could do without L\'evy's characterization theorem, i.e. get an explicit formula for the finite dimensional distributions of $W_t$ and recognize those of a Brownian motion. It is worth noticing that $\mathbb{E}[B_t|\mathcal{H}_t]$ which is an $\mathcal{H}_t$-martingale with continuous trajectories is not a Brownian motion.
Let us note that setting $Z(x,t):=\int \dd\mu_0(\beta)e^{\beta x-\beta^2 t/2}$, so that $Z_t:= Z(S_t,t)$ is the normalization factor for $\mu_t$, we obtain $\mathbb{E}[A|\mathcal{H}_t]=(\partial_x \log Z)(S_t,t)$, which leads to the following form for the stochastic differential equation for $S_t$ as an $\mathcal{H}_t$-semi-martingale
\[ dS_t=dW_t + (\partial_x \log Z)(S_t,t)dt = dW_t + dt\, \big(\!\!\int \!\dd\mu_t(\alpha)\alpha\big).\]
The first expression of the drift term is typical for a so-called $h$-transform and points to a systematic (but less direct and less elementary) derivation of the above results via Girsanov's theorem. The second expresses the instantaneous drift term as the average of the observable $A$ at time $t$ (i.e. in a state described by a density matrix whose diagonal in the pointer states basis is the measure $\dd\mu_t$).
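These formulas are easily tested numerically. In the sketch below (Python with \texttt{numpy}; a discrete prior $\mu_0$ supported on a few values of $A$, chosen only for illustration), the signal $S_t=B_t+At$ is generated according to the ``cheater's'' description, the estimate $\mathbb{E}[A|\mathcal{H}_t]=(\partial_x \log Z)(S_t,t)$ is computed from the explicit formula, and the innovation $W_t=S_t-\int_0^t\mathbb{E}[A|\mathcal{H}_s]\,ds$ is accumulated along the way:
\begin{verbatim}
# Signal S_t = B_t + A t with A sampled from mu_0; the posterior mean of A
# given the observations uses the weights w_b(t) proportional to
# mu_0(b) exp(b S_t - b^2 t / 2).
import numpy as np

rng = np.random.default_rng(3)
avals = np.array([-1.0, 0.0, 0.5, 2.0])      # support of mu_0 (illustrative)
mu0 = np.array([0.25, 0.25, 0.25, 0.25])

A = rng.choice(avals, p=mu0)                 # the cheater's sample of A
dt, T = 1e-3, 20.0
n = int(T/dt)

S, w_int = 0.0, 0.0                          # signal and running drift integral
post_mean, W = np.empty(n), np.empty(n)
for i in range(n):
    S += A*dt + np.sqrt(dt)*rng.standard_normal()    # dS = A dt + dB
    t = (i + 1)*dt
    logw = np.log(mu0) + avals*S - avals**2*t/2
    w = np.exp(logw - logw.max()); w /= w.sum()       # posterior weights
    post_mean[i] = np.dot(w, avals)                   # E[A | H_t]
    w_int += post_mean[i]*dt
    W[i] = S - w_int                                  # innovation process

print(post_mean[-1], A)       # the estimate converges to the sampled A
\end{verbatim}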
One checks easily that $\mathbb{E}[h(A) |\mathcal{H}_t]$, which is automatically an $\mathcal{H}_t$-martingale, satisfies
\[ d \mathbb{E}[h(A) |\mathcal{H}_t]= \left(\mathbb{E}[Ah(A) |\mathcal{H}_t]- \mathbb{E}[h(A) |\mathcal{H}_t]\mathbb{E}[A |\mathcal{H}_t]\right) dW_t.\]
The combinatorics ensuring the absence of $dt$ terms is embodied in the relation
$\left(\partial_t+\frac{1}{2}\partial_x^2 \right) e^{\beta x-\beta^2 t/2}=0$ valid for every $\beta$.
This leads to
\[ d \mathbb{E}[B_t |\mathcal{H}_t]=\left(1-t\big(\mathbb{E}[A^2 |\mathcal{H}_t]-\mathbb{E}[A |\mathcal{H}_t]^2\big) \right)dW_t\]
where the conditional variance on the right-hand side can be rewritten as
\[ \mathbb{E}[A^2 |\mathcal{H}_t]-\mathbb{E}[A |\mathcal{H}_t]^2 = \frac{\int \dd\mu_0(\alpha) \dd\mu_0 (\beta) \, (\alpha-\beta)^2 e^{\alpha S_t-\alpha^2 t/2} e^{\beta S_t-\beta^2 t/2}} {2 \int \dd\mu_0(\alpha) d \mu_0(\beta) \, e^{\alpha S_t-\alpha^2 t/2} e^{\beta S_t-\beta^2 t/2}}=\int \dd\mu_t(\alpha) \dd\mu_t (\beta) \, \frac{(\alpha-\beta)^2}{2} .\]
At large times this conditional variance vanishes and $d\mathbb{E}[B_t |\mathcal{H}_t]$ approaches the Brownian motion increment $dW_t$.
\end{appendix} | 34,524 |
Insomimisanty
For the last few days I could not wake up without fighting an uphill battle with the desire to sleep for just a few more minutes. You know how those few minutes become a few more minutes and all of a sudden it's September.
I would be up until 3 in the morning for as long as I can remember, but I had a great alarm clock back when I was in school, it was called Dad. If you asked for a few more minutes, it was the snooze alarm and it worked twice a morning; the third time you asked for a few minutes he would come in and scream at you until you woke up. I can't believe that I wish he would still do that once in a while.
So tonight I am trying a sleeping pill, which is a bit scary because I am worried that I won't wake up. I might find that I enjoy sleeping and decide to stay that way. I gave in after trying to make it on half nights for a few weeks.
I can't write as much, I feel lazy, and in general am not as functional as a sleepwalker. When I am tired I am quiet and less annoying, but I need my brain back.
Wish me luck. | 92,323 |
The CCPA is one of Canada’s leading sources of progressive policy ideas. Our work is rooted in the values of social justice and environmental sustainability. As non corporate-funded policy think tanks continue to be silenced, the importance of the Centre has never been greater.
The implications for Canada of a fast-tracked Trans-Pacific Partnership
Who's the Boss?
"The Elf on the Shelf" and the normalization of surveillance
Safeguarding our Parliament
Lots of talk about trade barriers but deregulation lies between the lines
Why workers should unite against Canada's next-generation trade deals | 185,646 |
Election Eve Gets Nasty
It was the night before the election call; and all through the house; the parties were warring; and ready to pounce. Stephen Harper is widely expected to visit the Governor-General in the next couple of days to request the dissolution of Parliament, thus starting the official campaign. But on this election eve, the main federal parties are doing all they can to align their forces for what might be the longest, hardest and bloodiest campaign in Canadian political history.
Open Sources Show Notes for Thursday July 2, 2015
Happy.
6 Months to Election Day
Exactly six months from now, Canadians will go to the polls to elect a new federal government. Already, each of the major parties is telling us that this will be the most important election in Canadian history, and although the more meta-aware candidates recognize that this line is said almost every election year, could they actually be right this time? There are 30 new seats up for grabs thanks to the re-configuration of district lines, making a majority sweep just a little bit harder to achieve. On the issues, there’s the Stephen Harper legacy, whether or not a balanced budget is enough to make people forget the myriad of scandals his government’s incurred over the last decade. And in the Opposition: is Thomas Mulcair’s prosecutorial method of holding the government in line going to keep the Orange Crush crushing, or is the face of youth and vitality in Justin Trudeau going to be more appealing?
\begin{document}
\title[Kapranov, Fedosov and 1-loop exact BV on K\"ahler manifolds]{Kapranov's $L_\infty$ structures, Fedosov's star products, and one-loop exact BV quantizations on K\"ahler manifolds}
\author[Chan]{Kwokwai Chan}
\address{Department of Mathematics\\ The Chinese University of Hong Kong\\ Shatin\\ Hong Kong}
\email{[email protected]}
\author[Leung]{Naichung Conan Leung}
\address{The Institute of Mathematical Sciences and Department of Mathematics\\ The Chinese University of Hong Kong\\ Shatin \\ Hong Kong}
\email{[email protected]}
\author[Li]{Qin Li}
\address{Department of Mathematics\\ Southern University of Science and Technology\\ Shenzhen\\China}
\email{[email protected]}
\subjclass[2010]{53D55 (58J20, 81T15, 81Q30)}
\keywords{$L_\infty$ structure, deformation quantization, star product, BV quantization, algebraic index theorem, K\"ahler manifold}
\thanks{}
\begin{abstract}
We study quantization schemes on a K\"ahler manifold and relate several interesting structures.
We first construct Fedosov's star products on a K\"ahler manifold $X$ as quantizations of Kapranov's $L_\infty$-algebra structure.
Then we investigate the Batalin-Vilkovisky (BV) quantizations associated to these star products.
A remarkable feature is that they are all one-loop exact, meaning that the Feynman weights associated to graphs with two or more loops all vanish.
This leads to a succinct cochain level formula for the algebraic index.
\end{abstract}
\maketitle
\section{Introduction}
K\"ahler manifolds possess very rich structures because they lie at the crossroad of complex geometry and symplectic geometry. On the other hand, as symplectic manifolds which admit natural polarizations (namely, the complex polarization), K\"ahler manifolds provide a natural ground for constructing quantum theories. In particular, there have been extensive studies on deformation quantization on K\"ahler manifolds. A notable example is the Berezin-Toeplitz quantization, which is closely related to geometric quantization in the complex polarization \cite{Bordemann-Meinrenken, Bordemann, Karabegov96, Karabegov00, Karabegov07, Karabegov, Ma-Ma, Neumaier}.
This paper is another attempt to understand the relation between the K\"ahlerian condition and properties of quantum theories.
Our starting point is Kapranov's famous construction of an {\em $L_\infty$-algebra structure} for a K\"ahler manifold in \cite{Kapranov}, which was motivated by the study of Rozansky-Witten theory \cite{RW} (and also \cite{Kontsevich2}) and has since been playing important roles in many different subjects.
Let $X$ be a K\"ahler manifold. Using the Atiyah class of the holomorphic tangent bundle $TX$, Kapranov constructed a natural $L_\infty$-algebra structure on the Dolbeault complex $\A_X^{0,\bullet - 1}(TX)$, enabling us to view $TX[-1]$ as a Lie algebra object in the derived category of coherent sheaves on $X$.
In Section \ref{section: L-infty-structure}, we reformulate this structure as a flat connection $D_K$ (where the subscript ``K'' stands for ``Kapranov'') on the holomorphic Weyl bundle $\W_X$ over $X$.
On the other hand, {\em Fedosov abelian connections}, which give rise to {\em deformation quantizations} or {\em star products} on $X$, are connections of the form $D = \nabla-\delta+\frac{1}{\hbar}[I,-]_{\star}$ on the complexified Weyl bundle $\W_{X,\mathbb{C}}$ satisfying $D^2=0$; here $\nabla$ is the Levi-Civita connection, $I\in\A^1(X,\W_{X,\C})$ is a $1$-form valued section of $\W_{X,\C}$, and $\star$ is the fiberwise Wick product on $\W_{X,\mathbb{C}}$. The flatness condition $D^2 = 0$ is equivalent to the {\em Fedosov equation}
\begin{equation}\label{eqn:Fedosov-equation-Wick-intro}
\nabla I-\delta I+\frac{1}{\hbar}I\star I+ R_\nabla=\alpha.
\end{equation}
Our first main result says that Kapranov's $L_\infty$ structure can naturally be quantized to produce Fedosov abelian connections $D_{F,\alpha}$ (where the subscript ``F'' stands for ``Fedosov''):
\begin{thm}[=Theorem \ref{proposition: Fedosov-connection-general}]
For a representative $\alpha$ of any given formal cohomology class $[\alpha] \in \hbar H^2_{dR}(X)[[\hbar]]$ of type $(1,1)$, there exists a Fedosov abelian connection $D_{F,\alpha} = \nabla-\delta+\frac{1}{\hbar}[I_\alpha,-]_{\star}$
such that $D_{F,\alpha}|_{\W_X} = D_K$.
\end{thm}
There are some interesting features of the resulting star products $\star_\alpha$ on $X$.
First of all, they are of so-called {\em Wick type}, which roughly means that the corresponding bidifferential operators respect the complex polarization (Proposition \ref{proposition: Fedosov-quantization-1-1-class-Wick-type}). By computing the Karabegov form associated to our star products, we can also show that every Wick type star product arises from our construction (Corollary \ref{corollary:Wick-type}).
More importantly, our solutions of the Fedosov equation \eqref{eqn:Fedosov-equation-Wick-intro} satisfy a gauge condition (Proposition \ref{proposition-fedosov-gauge-conditions}) which is different from all previous constructions of Fedosov quantization on K\"ahler manifolds. Because of this, our construction is more consistent with the Berezin-Toeplitz quantization\footnote{Combining with the results in \cite{CLL-PartI}, we can show that these Fedosov star products actually coincide with the Berezin-Toeplitz star products \cite{CLL-PartIII}.} and also the local picture that $z$ acts as the {\em creation operator} (which is classical) while $\bar{z}$ acts as the {\em annihilation operator} $\hbar\frac{\partial}{\partial z}$ (which is quantum).
Furthermore, our Fedosov quantization is, in a certain sense, {\em polarized} because only half of the functions, namely, the anti-holomorphic ones, receive quantum corrections.
See Section \ref{section: Fedosov-quantization} for more details.
Next we go from deformation quantization to quantum field theory (QFT).
From the QFT viewpoint, the Fedosov quantization describes the local data of a quantum mechanical system, namely, the cochain complex
$$
(\A_X^\bullet(\mathcal{W}_{X,\C}), D_{F,\alpha})
$$
gives the {\em cochain complex of local quantum observables} of a sigma model from $S^1$ to the target manifold $X$.
To obtain {\em global quantum observables} and also define the {\em correlation functions}, we construct the {\em Batalin-Vilkovisky (BV) quantization} \cite{BV} of this quantum mechanical system, which comes with a map from local to global quantum observables called the {\em factorization map}.
We will mainly follow Costello's approach to the BV formalism \cite{Kevin-book} and rely on Costello-Gwilliam's foundational work on factorization algebras in QFT \cite{Kevin-Owen, Kevin-Owen-2}.
To construct a BV quantization, one wants to produce solutions of the quantum master equation (QME) (see Lemma \ref{lemma: BV-operator-differential}, Definition \ref{definition: QME} and the equation \eqref{equation: quantum-master-equation}):
\begin{equation}\label{equation: quantum-master-equation-intro}
Q_{BV}(e^{r/\hbar})=0,
\end{equation}
where $Q_{BV}:=\nabla+\hbar\Delta+\frac{1}{\hbar}d_{TX}R_{\nabla}$ is the so-called {\em BV differential}, by running the {\em homotopy RG flow} which is defined by choosing a suitable propagator. For general symplectic manifolds, it was shown in the joint work \cite{GLL} of the third author with Grady and Si Li that canonical solutions of the QME can be constructed by applying the homotopy RG flow operator to solutions of the Fedosov equation \eqref{eqn:Fedosov-equation-Wick-intro}.
In the K\"ahler setting, it is desirable to choose the propagator differently to make it more compatible with the complex polarization. This leads to what we call the {\em polarized propagator} (see Definition \ref{definition: propagator-polarized}) and hence a slightly different form of the canonical solutions to the QME. More precisely, in Theorem \ref{theorem: Fedosov-connection-RG-flow-QME}, we will construct the canonical solution $e^{\tilde{R}_\nabla/2\hbar}\cdot e^{\gamma_\infty/\hbar}$ of the QME \eqref{equation: quantum-master-equation-intro} from a solution $\gamma$ of the Fedosov equation \eqref{equation: Fedosov-equation-gamma} (which is equivalent to \eqref{eqn:Fedosov-equation-Wick-intro}); here the curvature term $\tilde{R}_\nabla$ appears precisely because of the different choice of the propagator.
The second main result of this paper is that the resulting BV quantization is {\em one-loop exact}, meaning that the above canonical solution of the QME admits a Feynman graph expansion that involves {\em only trees and one-loop graphs}:
\begin{thm}[=Theorem \ref{theorem: gamma-infty-one-loop}]\label{theorem: gamma-infty-one-loop-intro}
Let $\gamma$ be a solution of the Fedosov equation \eqref{equation: Fedosov-equation-gamma}.
Then the Feynman weight associated to a graph $\mathcal{G}$ with two or more loops vanishes, i.e.,
$$W_{\mathcal{G}}(P, d_{TX}\gamma) = 0\quad \text{ whenever $b_1(\mathcal{G}) \geq 2$}.$$
Hence, the graph expansion of the canonical solution of the QME associated to the Fedosov connection $D_{F,\alpha}$ by Theorem \ref{theorem: Fedosov-connection-RG-flow-QME} involves only trees and one-loop graphs:
$$
\gamma_\infty=\sum_{\mathcal{G}:\ \text{connected},\ b_1(\mathcal{G})=0,1}\frac{\hbar^{g(\mathcal{G})}}{|\text{Aut}(\mathcal{G})|}W_{\mathcal{G}}(P, d_{TX}\gamma).
$$
\end{thm}
This is in sharp contrast with the general symplectic case studied in \cite{GLL} where the BV quantization involves quantum corrections from graphs with any number of loops.
The same kind of one-loop exactness has been observed in a few cases before, including the holomorphic Chern-Simons theory studied by Costello \cite{Kevin-CS} and a sigma model from $S^1$ to the target $T^*Y$ (cotangent bundle of a smooth manifold $Y$) studied by Gwilliam-Grady \cite{Gwilliam-Grady}. Theorem \ref{theorem: gamma-infty-one-loop-intro} shows that K\"ahler manifolds provide a natural geometric ground for producing such one-loop exact QFTs.
\begin{rmk}
If the Feynman weights associated to graphs of higher genera ($\geq 2$) give rise to {\em exact} differential forms and thereby contribute trivially to the computation of correlation functions, we may call the quantization {\em cohomologically one-loop exact}. This is much more commonly found in the mathematical physics literature and should be distinguished from our notion of one-loop exactness here.
\end{rmk}
As in the symplectic case \cite{GLL}, from the canonical QME solution $e^{\tilde{R}_\nabla/2\hbar}\cdot e^{\gamma_\infty/\hbar}$, we obtain the local-to-global factorization map, which can be used to define the correlation function $\langle f\rangle$ of a smooth function $f \in C^\infty(X)[[\hbar]]$ (see Definition \ref{definition:correlation-functions} and Proposition \ref{proposition: leading-term-correlation-function}). The association $\Tr: f \mapsto \langle f\rangle$ then gives a {\em trace} of the star product $\star_\alpha$ associated to the Fedosov connection $D_{F, \alpha}$ (Corollary \ref{corollary:trace}).
As an application of the one-loop exactness of our BV quantization in Theorem \ref{theorem: gamma-infty-one-loop-intro}, we deduce a novel {\em cochain level} formula for the {\em algebraic index} $\Tr(1)$, which is the correlation function of the constant function $1$, and thus a particularly neat presentation of the {\em algebraic index theorem}:
\begin{thm}[=Theorem \ref{theorem:algebraic-index-theorem} \& Corollary \ref{corollary:algebraic-index-theorem}]
We have
$$\sigma\left(e^{\hbar\iota_{\Pi}} (e^{\tilde{R}_\nabla/2\hbar}e^{\gamma_\infty/\hbar}) \right)
= \hat{A}(X)\cdot e^{-\frac{\omega_\hbar}{\hbar}+\frac{1}{2}\Tr(\mathcal{R}^+)} = \text{Td}(X)\cdot e^{-\frac{\omega_\hbar}{\hbar}+\Tr(\mathcal{R}^+)},$$
where $\text{Td}(X)$ is the Todd class of $X$ and $\mathcal{R}^+$ is the curvature of the holomorphic tangent bundle defined in \eqref{equation: R-plus}. In particular, we obtain the {\em algebraic index theorem}, namely, the trace of the function $1$ is given by
\begin{align*}
\Tr(1) = \int_X\hat{A}(X)\cdot e^{-\frac{\omega_\hbar}{\hbar}+\frac{1}{2}\Tr(\mathcal{R}^+)}
= \int_X\text{Td}(X)\cdot e^{-\frac{\omega_\hbar}{\hbar}+\Tr(\mathcal{R}^+)}.
\end{align*}
\end{thm}
This theorem can be regarded as a cochain level enhancement of the result in \cite{GLL}.
In the forthcoming work \cite{CLL-PartIII}, by combining with the results in \cite{CLL-PartI}, we will prove that the star product $\star_\alpha$ associated to $\alpha = \hbar\cdot\Tr(\mathcal{R}^+)$ is precisely the {\em Berezin-Toeplitz star product} on a prequantizable K\"ahler manifold studied in \cite{Bordemann-Meinrenken, Bordemann, Karabegov}. In this case, the algebraic index theorem is of the simple form:
$$
\Tr(1)=\int_X\text{Td}(X)\cdot e^{\omega/\hbar}.
$$
\subsection*{Acknowledgement}
\
We thank Si Li and Siye Wu for useful discussions. The first named author thanks Martin Schlichenmaier and Siye Wu for inviting him to attend the conference GEOQUANT 2019 held in September 2019 in Taiwan, in which he had stimulating and very helpful discussions with both of them as well as Jørgen Ellegaard Andersen, Motohico Mulase, Georgiy Sharygin and Steve Zelditch.
K. Chan was supported by a grant of the Hong Kong Research Grants Council (Project No. CUHK14303019) and direct grant (No. 4053395) from CUHK.
N. C. Leung was supported by grants of the Hong Kong Research Grants Council (Project No. CUHK14301117 \& CUHK14303518) and direct grant (No. 4053400) from CUHK.
Q. Li was supported by a grant from National Natural Science Foundation of China for young scholars (Project No. 11501537).
\section{From Kapranov to Fedosov}
There have been extensive studies on the Fedosov quantization of K\"ahler manifolds. In this section, we give a new construction of the Fedosov quantization (i.e., solutions of the Fedosov equation for abelian connections on the Weyl bundle) as a natural quantization of Kapranov's $L_\infty$-algebra structure on a K\"ahler manifold.
The organization of this section is as follows: In Section \ref{section: kalher-geom}, we review some basic K\"ahler geometry, including the geometry of Weyl bundles on a K\"ahler manifold $X$. In Section \ref{section: L-infty-structure}, we recall the $L_\infty$-algebra structure introduced by Kapranov \cite{Kapranov}, reformulated using the geometry of Weyl bundles on $X$.
In Section \ref{section: Fedosov-quantization}, we construct Fedosov's flat connections as quantizations of Kapranov's $L_\infty$-algebra structure, which produce star products on $X$ of Wick type. We also compute the Karabegov forms associated to such star products and prove that every Wick type star product arises from our construction.
\subsection{Preliminaries in K\"ahler geometry}\label{section: kalher-geom}
\subsubsection{Some basic identities}
\
We first collect some basic identities in K\"ahler geometry, which are needed in later computations.
First of all, writing the K\"ahler form as
$$
\omega=\omega_{\alpha\bar{\beta}}dz^\alpha\wedge d\bar{z}^\beta=\sqrt{-1}g_{\alpha\bar{\beta}}dz^\alpha\wedge d\bar{z}^\beta,
$$
where we adopt the convention that $\omega^{\bar{\gamma}\alpha}\omega_{\alpha\bar{\beta}}=\delta_{\bar{\beta}}^{\bar{\gamma}}$, a simple computation shows that $g^{\alpha\bar{\beta}}=-\sqrt{-1}\omega^{\alpha\bar{\beta}}$.
In local coordinates, the curvature of the Levi-Civita connection is given by
$$
\nabla^2(\partial_{x^k})=R_{ijk}^ldx^i\wedge dx^j\otimes\partial_{x^l},
$$
or in complex coordinates:
$$
\nabla^2(\partial_{z^k})=R_{i\bar{j}k}^l dz^i\wedge d\bar{z}^j\otimes\partial_{z^l},
$$
where the coefficients can be written as the derivatives of Christoffel symbols:
$$
R_{i\bar{j}k}^l=-\partial_{\bar{z}^j}(\Gamma_{ik}^l).
$$
We define $R_{i\bar{j}k\bar{l}}$ by
\begin{align*}
R_{i\bar{j}k\bar{l}}:=g(\mathcal{R}(\partial_{z^i},\partial_{\bar{z}^j})\partial_{z^k},\partial_{\bar{z}^l})
=g(R_{i\bar{j}k}^m\partial_{z^m},\partial_{\bar{z}^l})
=R_{i\bar{j}k}^m g_{m\bar{l}}.
\end{align*}
We can also compute the curvature on the cotangent bundle:
\begin{align*}
\mathcal{R}(\partial_{z^i},\partial_{\bar{z}^j})(y^k) & = -\mathcal{R}(\partial_{\bar{z}^j},\partial_{z^i})(y^k) = -\partial_{\bar{z}^j}(-\Gamma_{il}^kdz^i\otimes y^l)\\
& = \partial_{\bar{z}^j}(\Gamma_{il}^k)d\bar{z}^j\wedge dz^i\otimes y^l = R_{i\bar{j}l}^kdz^i\wedge d\bar{z}^j\otimes y^l.
\end{align*}
The following computation shows that the curvature operator $\mathcal{R}$ can be expressed as a bracket:
\begin{align*}
\frac{\sqrt{-1}}{\hbar}[R_{i\bar{j}k\bar{l}}dz^i\wedge d\bar{z}^j\otimes y^k\bar{y}^l,y^m]_\star
& = -\frac{\sqrt{-1}}{\hbar}R_{i\bar{j}k\bar{l}}dz^i\wedge d\bar{z}^j\otimes\left(y^m\star y^k\bar{y}^l-y^my^k\bar{y}^l\right)\\
& = -\sqrt{-1}R_{i\bar{j}k\bar{l}}\omega^{m\bar{l}}dz^i\wedge d\bar{z}^j\otimes y^k\\
& = -\sqrt{-1}R_{i\bar{j}k}^ng_{n\bar{l}}\omega^{m\bar{l}}dz^i\wedge d\bar{z}^j\otimes y^k\\
& = -\sqrt{-1}R_{i\bar{j}k}^n(-\sqrt{-1}\omega_{n\bar{l}})\omega^{m\bar{l}}dz^i\wedge d\bar{z}^j\otimes y^k\\
& = R_{i\bar{j}k}^mdz^i\wedge d\bar{z}^j\otimes y^k\\
& = \nabla^2(y^m).
\end{align*}
For later computations, we also use the notation $\mathcal{R}^+$ to denote the curvature of the holomorphic tangent bundle:
\begin{equation}\label{equation: R-plus}
\mathcal{R}^+:=R_{i\bar{j}k}^mdz^i\wedge d\bar{z}^j\otimes(\partial_{z^m}\otimes y^k).
\end{equation}
In particular, we have an explicit formula for its trace:
\begin{equation}\label{equation: trace-R-plus}
\Tr(\mathcal{R}^+)=R_{i\bar{j}k}^k=R_{i\bar{j}k\bar{l}}g^{k\bar{l}}=-\sqrt{-1}R_{i\bar{j}k\bar{l}}\omega^{k\bar{l}}.
\end{equation}
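Although we will not need it explicitly, it is perhaps worth recalling the standard coordinate expression for this trace under the above conventions: using the K\"ahler identity $\Gamma_{ik}^k=\partial_{z^i}\log\det(g_{m\bar{n}})$, we have
$$
\Tr(\mathcal{R}^+)=-\partial_{z^i}\partial_{\bar{z}^j}\log\det(g_{m\bar{n}})\,dz^i\wedge d\bar{z}^j=-\partial\bar{\partial}\log\det(g_{m\bar{n}}),
$$
so that, with the usual Chern-Weil normalization, the class $\left[\frac{\sqrt{-1}}{2\pi}\Tr(\mathcal{R}^+)\right]$ is the first Chern class $c_1(X)$.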
\subsubsection{Weyl bundles on K\"ahler manifolds}
\
Here we recall the definitions and basic properties of various types of Weyl bundles on symplectic and K\"ahler manifolds.
\begin{defn}\label{definition: real-Weyl-bundle}
For a symplectic manifold $(M,\omega)$, its {\em (real) Weyl bundle} is defined as
$$\W_{M,\R} := \widehat{\Sym}(T^*M_{\mathbb{R}})[[\hbar]],$$
where $\widehat{\Sym}(T^*M_{\mathbb{R}})$ is the completed symmetric power of the cotangent bundle $T^*M_{\mathbb{R}}$ of $M$ and $\hbar$ is a formal variable.
A (smooth) section $a$ of this infinite rank bundle is given locally by a formal series
\[
a(x,y)=\sum_{k, l\geq 0} \sum_{i_1,\dots,i_l} \hbar^k a_{k,i_1\cdots i_l}(x)y^{i_1}\cdots y^{i_l},
\]
where the $a_{k,i_1\cdots i_l}(x)$'s are smooth functions on $M$.
\end{defn}
\begin{rmk}\label{remark: symmetric-tensor-product}
We use the following notation for the symmetric tensor power:
$$
y^{i_1}\cdots y^{i_l}:=\sum_{\tau\in S_l}y^{i_{\tau(1)}}\otimes\cdots\otimes y^{i_{\tau(l)}}.
$$
Here the product on the tensor algebra is given by
$$
(y^{i_1}\otimes\cdots\otimes y^{i_k})\cdot (y^{i_{k+1}}\otimes\cdots\otimes y^{i_{k+l}}):=\sum_{\tau\in\text{Sh}(k,l)}y^{i_{\tau(1)}}\otimes\cdots\otimes y^{i_{\tau(k+l)}},
$$
where $\text{Sh}(k,l)$ denotes the set of all $(k,l)$-shuffles.
\end{rmk}
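For instance, with these conventions we have $y^{i_1}\cdot y^{i_2}=y^{i_1}\otimes y^{i_2}+y^{i_2}\otimes y^{i_1}=y^{i_1}y^{i_2}$, while
$$
(y^{i_1}\otimes y^{i_2})\cdot y^{i_3}=y^{i_1}\otimes y^{i_2}\otimes y^{i_3}+y^{i_1}\otimes y^{i_3}\otimes y^{i_2}+y^{i_3}\otimes y^{i_1}\otimes y^{i_2},
$$
corresponding to the three $(2,1)$-shuffles.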
There is a canonical (classical) fiberwise multiplication, denoted as $\cdot$, which makes $\mathcal{W}_{M,\R}$ an (infinite rank) algebra bundle over $M$. We will also consider differential forms with values in $\mathcal{W}_{M,\R}$, i.e., $\A_M^\bullet(\mathcal{W}_{M,\R})$.
\begin{rmk}
In this subsection, we are only concerned with the classical geometry of Weyl bundles; the formal variable $\hbar$ is included in Definition \ref{definition: real-Weyl-bundle} for discussing their quantum geometry in later sections.
\end{rmk}
\begin{defn}\label{defn:complexified-Weyl-bundle}
There are two important operators on $\A_M^\bullet(\mathcal{W}_{M,\R})$ which are $\A_M^\bullet$-linear:\footnote{The Einstein summation rule will be used throughout this paper.}
\begin{equation*}\label{equation: delta-delta-star}
\delta a=dx^k\wedge\frac{\partial a}{\partial y^k},\quad \delta^*a=y^k\cdot \iota_{\partial_{x^k}}a.
\end{equation*}
Here $\iota_{\partial_{x^k}}$ denotes the contraction of differential forms by the vector field $\frac{\partial}{\partial x^k}$. We can normalize the operator $\delta^*$ by letting $\delta^{-1} := \frac{1}{l+m}\delta^*$ when acting on the monomial
$
y^{i_1}\cdots y^{i_l}dx^{j_1}\wedge\cdots\wedge dx^{j_m}.
$
Then for any form $a\in\A_M^\bullet(\mathcal{W}_{M,\R})$, we have
\begin{equation*}\label{equation: Hodge-de-Rham-decomposition-Moyal}
a=\delta\delta^{-1}a+\delta^{-1}\delta a+a_{00},
\end{equation*}
where $a_{00}$ denotes the constant term, i.e., the term without any $dx^i$'s or $y^j$'s in $a$.
\end{defn}
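As a simple illustration of these operators and of the normalization of $\delta^{-1}$, consider the monomial $a=y^1\,dx^2$ (so that $l=m=1$). Then
\begin{align*}
\delta a&=dx^1\wedge dx^2, & \delta^{-1}a&=\frac{1}{2}\,y^1y^2,\\
\delta\delta^{-1}a&=\frac{1}{2}\left(y^2\,dx^1+y^1\,dx^2\right), & \delta^{-1}\delta a&=\frac{1}{2}\left(y^1\,dx^2-y^2\,dx^1\right),
\end{align*}
so that $\delta\delta^{-1}a+\delta^{-1}\delta a=a$ and $a_{00}=0$, as expected.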
\begin{defn}
For a K\"ahler manifold $X$, we define the {\em holomorphic and anti-holomorphic Weyl bundles} respectively by
\begin{align*}
\mathcal{W}_X := \widehat{\Sym}(T^*X)[[\hbar]],\quad \overline{\mathcal{W}}_X := \widehat{\Sym}(\overline{T^*X})[[\hbar]],
\end{align*}
where $T^*X$ and $\overline{T^*X}$ are the holomorphic and anti-holomorphic cotangent bundles of $X$ respectively. With respect to a local holomorphic coordinate system $\{z^1,\dots, z^n\}$, we let $y^i$'s and $\bar{y}^j$'s denote the local frames of $T^*X$ and $\overline{T^*X}$ respectively. A local section of the complexification $\W_{X,\mathbb{C}} := \W_{X,\R} \otimes_\mathbb{R} \mathbb{C}$ of the real Weyl bundle is then of the form:
$$
\sum_{k,m,l\geq 0}\sum_{i_1,\dots,i_m}\sum_{j_1,\dots,j_l}a_{k,i_1,\cdots,i_m,j_1,\cdots,j_l} \hbar^k y^{i_1}\cdots y^{i_m}\bar{y}^{j_1}\cdots\bar{y}^{j_l},
$$
from which we see that $\W_{X,\mathbb{C}}= \mathcal{W}_X\otimes_{\mathcal{C}^{\infty}_X}\overline{\mathcal{W}}_{X}$. The {\em symbol map}
\begin{equation*}\label{equation: definition-symbol-map}
\sigma: \A_X^\bullet\otimes\W_{X,\mathbb{C}}\rightarrow \A_X^\bullet[[\hbar]]
\end{equation*}
is defined by setting $y^i$'s and $\bar{y}^j$'s to be $0$.
\end{defn}
\begin{notn}
We will use the notation $\mathcal{W}_{p,q}$ to denote the component of $\mathcal{W}_{X,\mathbb{C}}$ of type $(p,q)$.
Also we will often abuse the names ``Weyl bundle'' and ``symbol map'' when there is no ambiguity.
\end{notn}
We introduce several operators on $\A_X^\bullet(\mathcal{W}_{X,\mathbb{C}})$, similar to those in Definition \ref{defn:complexified-Weyl-bundle}.
\begin{defn}
There are 4 natural operators acting as derivations on $\A_X^\bullet(\mathcal{W}_{X,\mathbb{C}})$:
\begin{align*}
\delta^{1,0} a = dz^i\wedge\frac{\partial a}{\partial y^i},\quad
\delta^{0,1}a = d\bar{z}^j\wedge\frac{\partial a}{\partial\bar{y}^j},
\end{align*}
as well as
\begin{align*}
(\delta^{1,0})^*a = y^k\cdot \iota_{\partial_{z^k}}a, \quad
(\delta^{0,1})^*a = \bar{y}^j\cdot \iota_{\partial_{\bar{z}^j}}a.
\end{align*}
We define the operators $(\delta^{1,0})^{-1}$ and $(\delta^{0,1})^{-1}$ by normalizing $(\delta^{1,0})^{*}$ and $(\delta^{0,1})^{*}$ respectively:
\begin{equation*}\label{equation: delta-1-0-inverse}
(\delta^{1,0})^{-1}:=\frac{1}{p_1+p_2}(\delta^{1,0})^*\ \text{on $\A_X^{p_1,q_1}(\mathcal{W}_{p_2,q_2})$},
\end{equation*}
\begin{equation*}\label{equation: delta-0-1-inverse}
(\delta^{0,1})^{-1}:=\frac{1}{q_1+q_2}(\delta^{0,1})^*\ \text{on $\A_X^{p_1,q_1}(\mathcal{W}_{p_2,q_2})$}.
\end{equation*}
\end{defn}
\begin{rmk}
We use the same notation for the operator $(\delta^{1,0})^{-1}$ as in Fedosov's original paper \cite{Fed}, although it could be confusing since it is actually {\em not} inverse to $\delta^{1,0}$.
\end{rmk}
\begin{lem}
We have the following identities:
\begin{align*}
\delta & = \delta^{1,0} + \delta^{0,1},\\
\text{id}-\pi_{0,*} & = \delta^{1,0}\circ(\delta^{1,0})^{-1}+(\delta^{1,0})^{-1}\circ\delta^{1,0},\\
\text{id}-\pi_{*,0} & = \delta^{0,1}\circ(\delta^{0,1})^{-1}+(\delta^{0,1})^{-1}\circ\delta^{0,1},
\end{align*}
where $\pi_{0,*}$ and $\pi_{*,0}$ denote the natural projections from $\A_X^\bullet(\mathcal{W}_{X,\mathbb{C}})$ to $\A_X^{0,\bullet}(\overline{\mathcal{W}}_X)$ and $\A_X^{\bullet,0}(\mathcal{W}_X)$ respectively.
\end{lem}
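As a quick check of the second identity, take the monomial $y^1\in\A_X^{0,0}(\mathcal{W}_{1,0})$: we have $\pi_{0,*}(y^1)=0$, while
$$
\delta^{1,0}\circ(\delta^{1,0})^{-1}(y^1)=0,\qquad (\delta^{1,0})^{-1}\circ\delta^{1,0}(y^1)=(\delta^{1,0})^{-1}(dz^1)=y^1,
$$
so both sides give $y^1$.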
\subsection{A reformulation of Kapranov's $L_{\infty}$ structure on a K\"ahler manifold}\label{section: L-infty-structure}
\
\noindent In this subsection, we reformulate Kapranov's $L_\infty$-algebra structure \cite{Kapranov} on a K\"ahler manifold $X$ in terms of the holomorphic Weyl bundle $\W_X$ on $X$. Let us start with Kapranov's original theorem:
\begin{thm}[Theorem 2.6 and Reformulation 2.8.1 in \cite{Kapranov}]\label{thm:Kapranov-L-infinity}
Let $X$ be a K\"ahler manifold. Then there exist
$$
R_n^*\in\mathcal{A}^{0,1}_X(\Hom(T^*X,\Sym^n(T^*X))),\qquad n\geq 2
$$
such that their extensions $\tilde{R}^*_n$ to the holomorphic Weyl bundle $\W_X$ by derivation satisfy
$$\left(\bar{\partial}+\sum_{n\geq 2}\tilde{R}_n^*\right)^2 = 0,$$
or equivalently,
\begin{equation}\label{equation: square-zero-0-1-part}
\bar{\partial}\tilde{R}_n^*+\sum_{j+k=n+1}\tilde{R}_j^*\circ\tilde{R}_k^* = 0
\end{equation}
for any $n \geq 2$.
\end{thm}
The $R_n^*$'s are defined as partial transposes of the higher covariant derivatives of the curvature tensor:
$$
R_2^*=\frac{1}{2}R_{i\bar{j}k}^m d\bar{z}^j\otimes (y^iy^k\otimes\partial_{z^m}),\qquad R_n^*=(\delta^{1,0})^{-1}\circ\nabla^{1,0}(R_{n-1}^*),
$$
where, by abuse of notations, we use $\nabla$ to denote the Levi-Civita connection on the (anti)holomorphic (co)tangent bundle of $X$, as well as their tensor products including the Weyl bundles.
We can write these $R_n^*$'s locally in a more consistent way as:
\begin{equation}\label{equation: terms-L-infty-structure}
R_n^*=R_{i_1\cdots i_n,\bar{l}}^j d\bar{z}^l\otimes (y^{i_1}\cdots y^{i_n}\otimes \partial_{z^j}).
\end{equation}
Readers are referred to \cite[Section 2.5]{Kapranov} for a detailed exposition.
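For instance, the lowest relation ($n=2$) in \eqref{equation: square-zero-0-1-part} simply reads $\bar{\partial}\tilde{R}_2^*=0$, as there are no pairs $j,k\geq 2$ with $j+k=3$; unwinding the definition of $R_2^*$, this amounts to the second Bianchi identity of the K\"ahler metric,
$$
\nabla_{\bar{l}}R_{i\bar{j}k}^m=\nabla_{\bar{j}}R_{i\bar{l}k}^m,
$$
where $\nabla_{\bar{l}}$ denotes covariant differentiation in the direction $\partial_{\bar{z}^l}$.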
The $L_\infty$ relations \eqref{equation: square-zero-0-1-part} can be reformulated as the flatness of a natural connection $D_K$ (where the subscript ``K'' stands for ``Kapranov'') on the holomorphic Weyl bundle, whose $(0,1)$-part is exactly the differential operator in Theorem \ref{thm:Kapranov-L-infinity}:
\begin{prop}
The operator
\begin{equation*}\label{eqn:classical-flat-connection-holomorphic}
D_K=\nabla-\delta^{1,0}+\sum_{n\geq 2}\tilde{R}_n^*
\end{equation*}
defines a flat connection on the holomorphic Weyl bundle $\mathcal{W}_X$, which is compatible with the classical (commutative) product.
\end{prop}
\begin{proof}
To show the vanishing of the $(2,0), (0,2)$ and $(1,1)$ parts of $D_K^2$, we let $D_K^{1,0}$ and $D_K^{0,1}$ denote the $(1,0)$ and $(0,1)$ parts of the connection $D_K$ respectively, and note that
the Levi-Civita connection $\nabla$ on $\mathcal{W}_X$ has the type decomposition $\nabla=\nabla^{1,0}+\bar{\partial}$.
First, it is clear that $D_K^{0,1}$ is exactly Kapranov's differential in Theorem \ref{thm:Kapranov-L-infinity}. Thus the vanishing of the $(0,2)$ part of $D_K^2$ follows from Theorem \ref{thm:Kapranov-L-infinity}.
Next, the vanishing of the square of $D_K^{1,0}=\nabla^{1,0}-\delta^{1,0}$ follows from the following computation:
\begin{align*}
(\nabla^{1,0}-\delta^{1,0})^2
& = (\nabla^{1,0})^2+(\delta^{1,0})^2-(\nabla^{1,0}\circ\delta^{1,0}+\delta^{1,0}\circ\nabla^{1,0})\\
& = -(\nabla^{1,0}\circ\delta^{1,0}+\delta^{1,0}\circ\nabla^{1,0}) = 0,
\end{align*}
where the last equality follows from the torsion-freeness of $\nabla$. Finally, for the $(1,1)$ part, we have
$$
D_K^{1,0}\circ D_K^{0,1}+D_K^{0,1}\circ D_K^{1,0}=(\nabla^{1,0}-\delta^{1,0})\left(\sum_{n\geq 2}R_n^*\right)+\nabla^2.
$$
Also, $\delta^{1,0}(R_2^*)=\nabla^2$. So we only need to show that $\nabla^{1,0}(R_n^*)=\delta^{1,0}(R_{n+1}^*)$ for $n\geq 2$.
Recall that $R_n^*$ is inductively defined by $R_{n+1}^*=(\delta^{1,0})^{-1}\circ\nabla^{1,0}(R_n^*)$ for $n\geq 2$. It follows that
\begin{align*}
\delta^{1,0}(R_{n+1}^*) & = \delta^{1,0}\circ(\delta^{1,0})^{-1}\left(\nabla^{1,0}(R_n^*)\right)\\
& = \nabla^{1,0}(R_n^*)-(\delta^{1,0})^{-1}\circ\delta^{1,0}\left(\nabla^{1,0}(R_n^*)\right)\\
& = \nabla^{1,0}(R_n^*),
\end{align*}
as desired; here the last equality follows from the fact that $\nabla^{1,0}(R_n^*)$ is symmetric in all lower subscripts, which was shown in \cite{Kapranov}.
\end{proof}
Now if $\alpha$ is a local flat section of $\mathcal{W}_X$ under $D_K$, it is easy to see that $\sigma(\alpha)$ must be a holomorphic function. Actually, the symbol map defines an isomorphism:
\begin{prop}\label{proposition:flat-sections-holomorphic-Weyl}
The space of (local) flat sections of the holomorphic Weyl bundle with respect to the connection $D_K$ is isomorphic to the space of holomorphic functions. More precisely, the symbol map
$
\sigma: \Gamma^{\text{flat}}(U,\mathcal{W}_X)\rightarrow\mathcal{O}_X(U)[[\hbar]]
$
is an isomorphism for any open subset $U \subset X$.
\end{prop}
To prove this proposition, we mimic Fedosov's arguments in \cite{Fed}. First we need a lemma:
\begin{lem}\label{lemma: symmetry-property-Taylor-expansion}
Let $\alpha_0\in\A_X^{0,q}$ be a smooth differential form on $X$ of type $(0,q)$. Define a sequence $\{\alpha_k\}$ by $\alpha_k := ((\delta^{1,0})^{-1}\circ\nabla^{1,0})^k(\alpha_0) \in \A_X^{0,q}(\mathcal{W}_{k,0})$ for $k \geq 1$. Then we have $(\delta^{1,0}\circ\nabla^{1,0})(\alpha_k)=0$ for all $k\geq 1$, and hence $D_K^{1,0}\left(\sum_{k\geq 0}\alpha_k\right)=0$.
\end{lem}
\begin{proof}
Without loss of generality we can assume that $q=0$, i.e., $\alpha_0$ is a function on $X$. Let us write $\alpha_k$ as
$
\alpha_k=a_{i_1\cdots i_k}y^{i_1}\otimes\cdots\otimes y^{i_k},
$
where the coefficients $a_{i_1\cdots i_k}$ are totally symmetric with respect to all indices. We also write
$
\nabla^{1,0}(\alpha_k)=b_{i_1\cdots i_{k+1}}dz^{i_1}\otimes (y^{i_2}\otimes\cdots\otimes y^{i_{k+1}}).
$
We will show by induction that the coefficients $b_{i_1\cdots i_{k+1}}$ are totally symmetric with respect to all indices $i_1,\cdots,i_{k+1}$. This is clearly true for $k=1$. For general $k\geq 1$, it is clear that $\nabla^{1,0}(\alpha_k)=b_{i_1\cdots i_{k+1}}dz^{i_1}\otimes (y^{i_2}\otimes\cdots\otimes y^{i_{k+1}})$ is symmetric in $i_2,\cdots,i_{k+1}$, so we only need to show that it is symmetric in the first two indices $i_1$ and $i_2$. We know from construction that $\nabla^{1,0}(\alpha_{k-1}) =
a_{i_1\cdots i_k}dz^{i_1}\otimes y^{i_2}\otimes\cdots\otimes y^{i_k}$.
Since $(\nabla^{1,0})^2 = 0$, we have
\begin{equation}\label{eqn:nabla-square-vanish}
-dz^{i_1}\wedge \nabla^{1,0}(a_{i_1\cdots i_k}y^{i_2}\otimes\cdots\otimes y^{i_k})
= \nabla^{1,0}(a_{i_1\cdots i_k}dz^{i_1}\otimes y^{i_2}\otimes\cdots\otimes y^{i_k})
= 0.
\end{equation}
We then compute the covariant derivative:
\begin{align*}
\nabla^{1,0}(\alpha_k)=&\nabla^{1,0}(a_{i_1\cdots i_k}y^{i_1}\otimes\cdots\otimes y^{i_k})\\
=&y^{i_1}\otimes\nabla^{1,0}(a_{i_1\cdots i_k}y^{i_2}\otimes\cdots\otimes y^{i_k})+\nabla^{1,0}(y^{i_1})\otimes(a_{i_1\cdots i_k}y^{i_2}\otimes\cdots\otimes y^{i_k}).
\end{align*}
Since the Levi-Civita connection is torsion-free, the second term in the last expression is symmetric in the first two indices. Now the first term has the same symmetric property by \eqref{eqn:nabla-square-vanish}. Hence we have $(\delta^{1,0}\circ\nabla^{1,0})(\alpha_k)=0$.
Now we have
\begin{align*}
\nabla^{1,0}(\alpha_k)
& = (\delta^{1,0}\circ(\delta^{1,0})^{-1}+(\delta^{1,0})^{-1}\circ\delta^{1,0})(\nabla^{1,0}(\alpha_k))\\
& = (\delta^{1,0}\circ(\delta^{1,0})^{-1})(\nabla^{1,0}(\alpha_k))
= \delta^{1,0}(\alpha_{k+1}).
\end{align*}
Therefore
\begin{align*}
D_K^{1,0}\left(\sum_{k\geq 0}\alpha_k\right)
& = (\nabla^{1,0}-\delta^{1,0})\left(\alpha_0+\alpha_1+\alpha_2+\cdots\right)\\
& = -\delta^{1,0}(\alpha_0)+(\nabla^{1,0}(\alpha_0)-\delta^{1,0}(\alpha_1))+(\nabla^{1,0}(\alpha_1)-\delta^{1,0}(\alpha_2))+\cdots\\
& = 0.
\end{align*}
\end{proof}
\begin{proof}[Proof of Proposition \ref{proposition:flat-sections-holomorphic-Weyl}]
The injectivity follows by observing that any nonzero section $s$ of $\mathcal{W}_X$ with zero constant term cannot be flat: if $s_0$ denotes the nonzero term of least weight in $s$, then the least weight component of $D_K(s)$ is $-\delta^{1,0}(s_0)$, which is nonzero since $\delta^{1,0}$ is injective on elements of $\mathcal{W}_X$ of positive weight.
For the surjectivity, we claim that given a holomorphic function $f$, the section
\begin{equation}\label{equation: flat-section-D_K}
J_f := \sum_{k\geq 0}((\delta^{1,0})^{-1}\circ\nabla^{1,0})^k(f),
\end{equation}
with leading term $f$ is flat with respect to $D_K$. From Lemma \ref{lemma: symmetry-property-Taylor-expansion}, we know that $D_K^{1,0}(J_f)=0$. It follows that $D_K^{1,0}\circ D_K^{0,1}(J_f)=-D_K^{0,1}\circ D_K^{1,0}(J_f)=0$.
Observe that $\sigma(D_K^{0,1}(J_f))=\bar{\partial}f=0$. Thus, by the same reason as in the proof of the injectivity of $\sigma$, we must have $D_K^{0,1}(J_f)=0$.
\end{proof}
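For example, when $X=\mathbb{C}^n$ is equipped with a flat K\"ahler metric, so that $\nabla^{1,0}=\partial$, the flat section \eqref{equation: flat-section-D_K} is (under the identification of $\widehat{\Sym}(T^*X)$ with fiberwise formal power series in the commuting variables $y^i$) simply the fiberwise holomorphic Taylor expansion of $f$:
$$
J_f=\sum_{k\geq 0}\frac{1}{k!}\,\frac{\partial^kf}{\partial z^{i_1}\cdots\partial z^{i_k}}\,y^{i_1}\cdots y^{i_k}.
$$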
Similar arguments give the following proposition, which will be useful later:
\begin{prop}\label{proposition: flat-section-closed-0-q-form}
Let $\alpha$ be a $\bar{\partial}$-closed $(0,q)$ form on $X$. Then the section
$$
\alpha+\sum_{k\geq 1}((\delta^{1,0})^{-1}\circ\nabla^{1,0})^k(\alpha)
$$
is flat with respect to the connection $D_K$.
\end{prop}
\subsection{Fedosov quantization from Kapranov's $L_\infty$ structure}\label{section: Fedosov-quantization}
\
\noindent In \cite{Fed}, Fedosov gave an elegant geometric construction of deformation quantization on a general symplectic manifold (see also \cite{Fedbook, Fed-index}).
The goal of this section is to show that Fedosov's quantization arises naturally as a quantization of Kapranov's $L_\infty$ structure. The key is to adapt Fedosov's construction in the K\"ahler setting by incorporating the complex structure, or complex polarization, on $X$.
We begin by considering the complexified Weyl bundle $\mathcal{W}_{X,\mathbb{C}}$, equipped with the {\em fiberwise Wick product} induced by the K\"ahler form $\omega$: if $\alpha,\beta$ are sections of $\mathcal{W}_{X,\mathbb{C}}$, the Wick product is explicitly defined by:
\begin{equation*}\label{equation: Wick-product}
\alpha\star\beta:=\sum_{k\geq 0}\frac{\hbar^k}{k!}\cdot\omega^{i_1\bar{j}_1}\cdots\omega^{i_k\bar{j}_k}\cdot\frac{\partial^k\alpha}{\partial y^{i_1}\cdots\partial y^{i_k}}\frac{\partial^k\beta}{\partial \bar{y}^{j_1}\cdots\partial \bar{y}^{j_k}}.
\end{equation*}
With respect to this product, the operator $\delta$ on $\mathcal{W}_{X,\C}$ can be expressed as:
$$
\delta =\frac{1}{\hbar}\left[\omega_{i\bar{j}}(dz^i\otimes\bar{y}^j-d\bar{z}^j\otimes y^i),-\right]_{\star},
$$
where $[-,-]_{\star}$ denotes the bracket associated to the Wick product.
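For example, since the only possible contractions pair the $y^i$'s of the first factor with the $\bar{y}^j$'s of the second, we have $y^i\star\bar{y}^j=y^i\bar{y}^j+\hbar\,\omega^{i\bar{j}}$ while $\bar{y}^j\star y^i=\bar{y}^jy^i$, so that the only nontrivial fiberwise commutation relations among the generators are
$$
[y^i,\bar{y}^j]_{\star}=\hbar\,\omega^{i\bar{j}}.
$$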
\begin{defn}
A connection on $\W_{X,\C}$ of the form
$$
D=\nabla-\delta+\frac{1}{\hbar}[I,-]_{\star}
$$
is called a {\em Fedosov abelian connection} if $D^2=0$. Here $\nabla$ is the Levi-Civita connection, and $I\in\A^1(X,\W_{X,\C})$ is a $1$-form valued section of $\W_{X,\C}$.
\end{defn}
Recall the following calculation in \cite{Bordemann}:
\begin{lem}[Proposition 4.1 in \cite{Bordemann}]\label{lemma: curvature-on-Weyl-bundle}
The curvature of the Levi-Civita connection on the Weyl bundle is given by
\begin{equation*}\label{equation: curvature-on-Weyl-bundle}
\nabla^2=\frac{1}{\hbar}[R_\nabla,-]_{\star},
\end{equation*}
where $R_\nabla:=\sqrt{-1}\cdot R_{i\bar{j}k\bar{l}}dz^i\wedge d\bar{z}^j\otimes y^k\bar{y}^l$.
\end{lem}
A simple computation together with Lemma \ref{lemma: curvature-on-Weyl-bundle} shows that the flatness of $D$ is equivalent to the {\em Fedosov equation}:
\begin{equation}\label{eqn:Fedosov-equation-Wick}
\nabla I-\delta I+\frac{1}{\hbar}I\star I+ R_\nabla=\alpha,
\end{equation}
where $\alpha=\sum_{k\geq 1}\hbar^k \omega_k\in\A_X^{2}[[\hbar]]$ is a closed formal $2$-form on $X$; such scalar-valued forms are precisely the central elements with respect to $[-,-]_{\star}$, which is why they can appear on the right hand side.
Letting $\gamma:=I+\omega_{i\bar{j}}(dz^i\otimes \bar{y}^j-d\bar{z}^j\otimes y^i)$ and $\omega_\hbar:=-\omega+\alpha$, equation \eqref{eqn:Fedosov-equation-Wick} is equivalent to
\begin{equation}\label{equation: Fedosov-equation-gamma}
\nabla \gamma+\frac{1}{\hbar}\gamma\star\gamma+ R_\nabla=\omega_\hbar.
\end{equation}
We will show that the flat connection $D_K$ on $\mathcal{W}_X$ gives rise to a Fedosov abelian connection $D_F$ (where the subscript ``F'' stands for ``Fedosov''), which is a {\em quantization} (or {\em quantum extension}) because the Wick product $\star$ is non-commutative. First recall that $$D_K=\nabla-\delta^{1,0}+\sum_{n\geq 2}\tilde{R}_n^*,$$
where each $\tilde{R}_n^*$ is an $\A_X^\bullet$-linear derivation on $\mathcal{W}_{X,\C}$ via $R_n^*\in\A_X^{0,1}\left(\Sym^n(T^*X)\otimes TX\right)$ (see equation \eqref{equation: terms-L-infty-structure}).
Consider the $\A_X^{\bullet}$-linear operator
$$L:\A_X^\bullet\left(\widehat{\Sym}(T^*X)\otimes TX\right)\rightarrow \A_X^\bullet\left(\widehat{\Sym}(T^*X)\otimes \overline{T^*X}\right)$$
given by contracting the $TX$-factor with the K\"ahler form $\omega$, i.e., ``lowering the last upper index'' via $\partial_{z^j}\mapsto\omega_{j\bar{k}}\bar{y}^k$. Then we can define
$$
I_n:=L(R_n^*)=R_{i_1\cdots i_n,\bar{l}}^j\omega_{j\bar{k}}d\bar{z}^l\otimes (y^{i_1}\cdots y^{i_n}\bar{y}^{k})\in\mathcal{A}_X^{0,1}(\mathcal{W}_{X,\mathbb{C}}).
$$
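For instance, the lowest piece is
$$
I_2=L(R_2^*)=\frac{\sqrt{-1}}{2}\,R_{i\bar{j}k\bar{l}}\,d\bar{z}^j\otimes y^iy^k\bar{y}^l,
$$
obtained from $R_2^*$ by replacing $\partial_{z^m}$ with $\omega_{m\bar{l}}\bar{y}^l=\sqrt{-1}g_{m\bar{l}}\bar{y}^l$.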
\begin{lem}\label{lemma: commutation-relations}
We have
\begin{equation}\label{equation: connection-compatible-with-lifting-subscript}
\tilde{R}_n^*=\frac{1}{\hbar}[I_n,-]_\star|_{\mathcal{W}_{X}};
\end{equation}
\begin{equation}\label{equation: connection-commutes-with-L}
\nabla\circ L=L\circ\nabla;
\end{equation}
\begin{equation}\label{equation: L-commutes-with-delta}
L\circ\delta^{1,0}=\delta^{1,0}\circ L;
\end{equation}
\begin{equation}\label{equation: curvature-first-term-L-infty}
\delta^{1,0}\circ L(R_2^*)=R_\nabla;
\end{equation}
\begin{equation}\label{equation: L-Lie-algebra-homomorphism}
L([R_m^*,R_n^*])=[L(R_m^*),L(R_n^*)]_{\star}.
\end{equation}
\end{lem}
\begin{proof}
The construction of $I_n$'s implies equation \eqref{equation: connection-compatible-with-lifting-subscript}.
Equation \eqref{equation: connection-commutes-with-L} follows from the fact that $\nabla(\omega)=0$. Equation \eqref{equation: L-commutes-with-delta} is obvious. Equation \eqref{equation: curvature-first-term-L-infty} follows from a straightforward computation:
\begin{align*}
\delta^{1,0}\circ L(R_2^*)=&L\circ \delta^{1,0}\left(\frac{1}{2}R_{i\bar{j}k}^m d\bar{z}^j\otimes(y^iy^k\otimes\partial_{z^m})\right)\\
=&L\left(R_{i\bar{j}k}^m dz^i\wedge d\bar{z}^j\otimes(y^k\otimes\partial_{z^m})\right)\\
=&R_{i\bar{j}k}^m\omega_{m\bar{l}}dz^i\wedge d\bar{z}^j\otimes y^k\bar{y}^l\\
=&R_{i\bar{j}k}^m\sqrt{-1}g_{m\bar{l}}dz^i\wedge d\bar{z}^j\otimes y^k\bar{y}^l\\
=&R_\nabla.
\end{align*}
To show equation \eqref{equation: L-Lie-algebra-homomorphism}, notice that $L([R_m^*,R_n^*])=L\left(\tilde{R}_m^*(R_n^*)+\tilde{R}_n^*(R_m^*)\right)$.
We then have the explicit computation:
\begin{align*}
L\left(\tilde{R}_m^*(R_n^*)\right)
& = L\left(R_{i_1\cdots i_m,\bar{l}}^jd\bar{z}^l\otimes(y^{i_1}\cdots y^{i_m}\otimes\partial_{z^j})(R_{i'_1\cdots i'_n,\bar{l}'}^{j'}d\bar{z}^{l'}\otimes(y^{i'_1}\cdots y^{i'_n}\otimes\partial_{z^{j'}}))\right)\\
& = L\left(\sum_{1\leq \alpha\leq n}R_{i_1\cdots i_m,\bar{l}}^{i'_\alpha}R_{i'_1\cdots i'_n,\bar{l}'}^{j'}d\bar{z}^l\wedge d\bar{z}^{l'}\otimes(y^{i_1}\cdots y^{i_m}y^{i'_1}\cdots\widehat{y^{i'_\alpha}}\cdots y^{i'_n}\otimes\partial_{z^{j'}})\right)\\
& = \sum_{1\leq \alpha\leq n}\omega_{j'\bar{k}}R_{i_1\cdots i_m,\bar{l}}^{i'_\alpha}R_{i'_1\cdots i'_n,\bar{l}'}^{j'}d\bar{z}^l\wedge d\bar{z}^{l'}\otimes(y^{i_1}\cdots y^{i_m}y^{i'_1}\cdots\widehat{y^{i'_\alpha}}\cdots y^{i'_n}\bar{y}^k).
\end{align*}
On the other hand, we have
\begin{align*}
&L(R_n^*)\star L(R_m^*)\\
=&\left(R_{i'_1\cdots i'_n,\bar{l}'}^{j'}\omega_{j'\bar{k}'}d\bar{z}^{l'}\otimes(y^{i'_1}\cdots y^{i'_n} \bar{y}^{k'})\right)\star\left(R_{i_1\cdots i_m,\bar{l}}^j\omega_{j\bar{k}}d\bar{z}^l\otimes(y^{i_1}\cdots y^{i_m} \bar{y}^{k})\right)\\
=&L(R_n^*)\cdot L(R_m^*)\\
&\qquad + \sum_{1\leq \alpha\leq n}\omega_{j\bar{k}}\omega_{j'\bar{k}'}\omega^{i'_\alpha\bar{k}}R_{i_1\cdots i_m,\bar{l}}^jR_{i'_1\cdots i'_n,\bar{l}'}^{j'}d\bar{z}^{l'}\wedge d\bar{z}^{l}\otimes(y^{i_1}\cdots y^{i_m}y^{i'_1}\cdots\widehat{y^{i'_\alpha}}\cdots y^{i'_n}\bar{y}^{k'}) \\
=&-L(R_m^*)\cdot L(R_n^*)\\
&\qquad + \sum_{1\leq \alpha\leq n}(-1)\omega_{j'\bar{k}'}R_{i_1\cdots i_m,\bar{l}}^{i'_\alpha}R_{i'_1\cdots i'_n,\bar{l}'}^{j'}d\bar{z}^{l'}\wedge d\bar{z}^{l}\otimes(y^{i_1}\cdots y^{i_m}y^{i'_1}\cdots\widehat{y^{i'_\alpha}}\cdots y^{i'_n}\bar{y}^{k'})\\
=&-L(R_m^*)\cdot L(R_n^*)+L\left(\tilde{R}_m^*(R_n^*)\right).
\end{align*}
\end{proof}
Let $I := \sum_{n\geq 2}I_n$. Then
\begin{equation*}\label{equation: Fedosov-connection-Wick-type}
D_F:=\nabla-\delta+\frac{1}{\hbar}[I,-]_{\star}
\end{equation*}
defines a connection on $\mathcal{W}_{X,\mathbb{C}}$.
Equation \eqref{equation: connection-compatible-with-lifting-subscript} says that $D_F$ is an extension of $D_K$, namely,
\begin{equation*}\label{equation: Fedosov-connection-equals-Kapranov-connection-holomorphic-Weyl}
D_F|_{\mathcal{W}_X}=D_K.
\end{equation*}
\begin{lem}\label{lemma: delta-0-1-annihilates-I}
We have $\delta^{0,1}I=0$.
\end{lem}
\begin{proof}
Recall that $I_2=\frac{\sqrt{-1}}{2}R_{i\bar{j}k\bar{l}}d\bar{z}^l\otimes y^iy^k\bar{y}^j$, where $R_{i\bar{j}k\bar{l}}$ is symmetric in $\bar{j}$ and $\bar{l}$, from which we know that $\delta^{0,1}I_2=0$. The statement for the general $I_n$'s follows from the iterative equation $I_n=(\delta^{1,0})^{-1}\circ\nabla^{1,0}(I_{n-1})$ and the commutativity relations
$[\delta^{0,1},(\delta^{1,0})^{-1}]=[\delta^{0,1},\nabla^{1,0}]=0$.
\end{proof}
We now come to the first main result of this paper:
\begin{thm}\label{theorem: quantization-of-Kapranov-is-Fedosov-connection}
The connection $D_F$ is flat.
\end{thm}
\begin{proof}
Notice that $I=\sum_{n\geq 2}I_{n}$ is a $(0,1)$-form valued in $\mathcal{W}_{X,\mathbb{C}}$. We only need to show the vanishing of the $(2,0)$, $(0,2)$ and $(1,1)$ parts of $D_F^2$.
The vanishing of the $(2,0)$ part of $D_F^2$ follows from that of $D_K$. The $(0,2)$ part of $D_F^2$ is given by
\begin{equation*}
\frac{1}{\hbar}\left[\nabla^{0,1}I-\delta^{0,1}I+\frac{1}{\hbar}[I,I]_{\star},-\right]_{\star}.
\end{equation*}
By Lemma \ref{lemma: commutation-relations} and the flatness of $D_K$, we have
\begin{align*}
\nabla^{0,1}I-\delta^{0,1}I+\frac{1}{\hbar}[I,I]_{\star}
= \nabla^{0,1}I+\frac{1}{\hbar}[I,I]_{\star}
= L\left(\nabla^{0,1}(\sum_{k\geq 2}\tilde{R}_k^*)+\left[\sum_{k\geq 2}\tilde{R}_k^*, \sum_{k\geq 2}\tilde{R}_k^*\right]\right)
= 0;
\end{align*}
here the first equality follows from Lemma \ref{lemma: delta-0-1-annihilates-I}.
The $(1,1)$ part of $D_F^2$ is given by
\begin{align*}
&\hspace{5mm}\nabla^2+\frac{1}{\hbar}\left[\nabla^{1,0}I-\delta^{1,0}I, -\right]_{\star}\\
& = \frac{1}{\hbar}\left[R_\nabla+\nabla^{1,0}I-\delta^{1,0}I,\ -\right]_{\star}\\
& = \frac{1}{\hbar}\left[R_\nabla+(\nabla^{1,0}-\delta^{1,0})\circ L\left(\sum_{k\geq 2}R_k^*\right),\ -\right]_\star\\
& \overset{(*)}{=} \frac{1}{\hbar}\left[\delta^{1,0}\circ L(R_2^*)+(\nabla^{1,0}-\delta^{1,0})\circ L\left(\sum_{k\geq 2}R_k^*\right),\ -\right]_\star\\
& = \frac{1}{\hbar}\left[L\circ\delta^{1,0}(R_2^*)+L\circ(\nabla^{1,0}-\delta^{1,0})\left(\sum_{k\geq 2}R_k^*\right),\ -\right]_\star\\
& = \frac{1}{\hbar}\left[L\left(\delta^{1,0}(R_2^*)-\delta^{1,0}(R_2^*)+(\nabla^{1,0}R^*_2-\delta^{1,0}R^*_3)+\cdots+(\nabla^{1,0}R^*_k-\delta^{1,0}R^*_{k+1})+\cdots\right),-\right]_\star\\
& = 0;
\end{align*}
where we have used equation \eqref{equation: curvature-first-term-L-infty} in the equality $(*)$, and also the fact that $L$ commutes with both $\delta^{1,0}$ and $\nabla^{1,0}$. Hence $D_F$ is a Fedosov abelian connection. A simple computation shows that $I$ actually satisfies the Fedosov equation \eqref{eqn:Fedosov-equation-Wick} with $\alpha = 0$:
$$
\nabla I-\delta I+\frac{1}{\hbar}I\star I+ R_\nabla=0.
$$
\end{proof}
\begin{rmk}
It is worth pointing out that this flat connection $D_F$ only has a ``classical'' part, i.e., the sections $I_n\in\A_X^{\bullet}(\mathcal{W}_{X,\C})$ have no higher order terms in the $\hbar$-power expansion. This is very different from Fedosov's original solutions of his equation.
\end{rmk}
More generally, given any closed formal $2$-form $\alpha$ of type $(1,1)$ on $X$ with $[\alpha]\in\hbar H^2_{dR}(X)[[\hbar]]$, the following theorem produces an explicit solution of the Fedosov equation \eqref{eqn:Fedosov-equation-Wick}:
\begin{thm}\label{proposition: Fedosov-connection-general}
Let $\alpha$ be a representative of a formal cohomology class in $\hbar H^2_{dR}(X)[[\hbar]]$ of type $(1,1)$. Then there exists a solution of the Fedosov equation of the form $I_\alpha = I+ J_\alpha \in \mathcal{A}_X^{0,1}(\mathcal{W}_{X,\mathbb{C}})$:
\begin{equation*}
\nabla I_\alpha - \delta I_\alpha + \frac{1}{\hbar} I_\alpha\star I_\alpha + R_\nabla=\alpha.
\end{equation*}
We denote the corresponding Fedosov abelian connection by $D_{F,\alpha}$.
\end{thm}
\begin{proof}
The $\partial\bar{\partial}$-lemma guarantees the local existence of a formal function $g$ such that $\alpha=\bar{\partial}{\partial}(g)$. We define a section $J_\alpha$ of $\mathcal{A}_X^{0,1}(\mathcal{W}_X)$ by
\begin{equation*}\label{equation: I-alpha-formula}
J_\alpha:=\sum_{k\geq 1}\left((\delta^{1,0})^{-1}\circ\nabla^{1,0}\right)^k(\bar{\partial}g).
\end{equation*}
Such a function $g$ is unique up to a sum of purely holomorphic and purely anti-holomorphic functions. It follows that $J_\alpha$ is independent of the choice of $g$: indeed, if $\bar{h}$ is anti-holomorphic, then $\nabla^{1,0}(\bar{\partial}\bar{h})=0$ because the mixed Christoffel symbols of a K\"ahler metric vanish, so replacing $g$ by $g+h+\bar{h}$ does not change $J_\alpha$. In particular, the local sections $J_\alpha$ patch together to give a global section over $X$.
By Proposition \ref{proposition: flat-section-closed-0-q-form}, $\bar{\partial}g + J_\alpha$ is closed under $D_K$, so it is also closed under $D_F$ by the fact that $D_F|_{\mathcal{W}_X}=D_K$. Now
\begin{align*}
D_F(\bar{\partial}g + J_\alpha)
= \partial\bar{\partial}g+\nabla J_\alpha-\delta(J_\alpha)+\frac{1}{\hbar}[I, J_\alpha]_{\star}
= -\alpha+\nabla J_\alpha-\delta J_\alpha +\frac{1}{\hbar}[I, J_\alpha]_{\star},
\end{align*}
so $\nabla J_\alpha-\delta J_\alpha + \frac{1}{\hbar}[I, J_\alpha]_{\star}=\alpha$.
Together with Theorem \ref{theorem: quantization-of-Kapranov-is-Fedosov-connection} and the fact that $J_\alpha$ has only holomorphic components in $\mathcal{W}_{X,\mathbb{C}}$, which implies that $J_\alpha \star J_\alpha=0$ for type reasons, we deduce that
$\nabla I_\alpha -\delta I_\alpha + \frac{1}{\hbar} I_\alpha \star I_\alpha + R_\nabla
= \nabla J_\alpha - \delta J_\alpha + \frac{1}{\hbar}\left([I, J_\alpha]_{\star} + J_\alpha\star J_\alpha\right) = \alpha$.
\end{proof}
We now briefly recall the construction of star products from Fedosov connections.
\begin{prop}[Theorem 3.3 in \cite{Fed}]\label{proposition: iteration-equation-quantum-flat-section}
Let $D=\nabla-\delta+\frac{1}{\hbar}[I,-]_{\star}$ be a Fedosov abelian connection. Then there is a one-to-one correspondence induced by the symbol map $\sigma$:
$$
\sigma: \Gamma^{flat}(X,\mathcal{W}_{X,\C})\xrightarrow{\sim} C^\infty(X)[[\hbar]].
$$
The flat section $O_f$ corresponding to $f\in C^\infty(X)$ is the unique solution of the iterative equation:
\begin{equation}\label{equation: iteration-equation-quantum-flat-section}
O_f=f+\delta^{-1}\left(\nabla O_f+\frac{1}{\hbar}[I,O_f]_{\star}\right).
\end{equation}
\end{prop}
For the Fedosov connection $D_{F,\alpha}$ defined in Theorem \ref{proposition: Fedosov-connection-general}, the associated deformation quantization (star product) $\star_{\alpha}$ on $C^\infty(X)[[\hbar]]$ is given by
\begin{equation}\label{equation: formal-star-product}
f\star_{\alpha} g:=\sigma(O_f\star O_g).
\end{equation}
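To illustrate \eqref{equation: iteration-equation-quantum-flat-section} and \eqref{equation: formal-star-product}, let us record the lowest order terms (this is only a consistency check and will not be needed later). Since $\delta^{-1}$ always raises the weight and $[I_\alpha,f]_{\star}=0$ for a function $f$, a short induction on weights gives
$$
O_f=f+y^i\frac{\partial f}{\partial z^i}+\bar{y}^j\frac{\partial f}{\partial\bar{z}^j}+\cdots,
$$
where the omitted terms are of weight at least $2$ or of order $\hbar$. Consequently,
$$
f\star_{\alpha} g=f\cdot g+\hbar\,\omega^{i\bar{j}}\,\frac{\partial f}{\partial z^i}\frac{\partial g}{\partial\bar{z}^j}+O(\hbar^2),
$$
so that, at first order, only holomorphic derivatives of $f$ and anti-holomorphic derivatives of $g$ appear.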
\begin{defn}\label{definition: Wick-type-deformation-quantization}
We say that a deformation quantization $(C^\infty(X)[[\hbar]],\star)$ is of {\em Wick type} (or {\em with separation of variables}) if we have $f \star g = f\cdot g$ whenever $f$ is antiholomorphic or $g$ is holomorphic.
This is equivalent to requiring the corresponding bi-differential operators $C_i(-,-)$ (defined by writing $f\star g=\sum_{i\geq 0}\hbar^iC_i(f,g)$) to take only holomorphic derivatives of the first argument and only anti-holomorphic derivatives of the second argument.
\end{defn}
\begin{prop}\label{proposition: Fedosov-quantization-1-1-class-Wick-type}
For every closed formal differential form $\alpha$ of type $(1,1)$, the star product $\star_{\alpha}$ defined in \eqref{equation: formal-star-product} is of Wick type.
\end{prop}
\begin{proof}
To show that $\star_{\alpha}$ is of Wick type, it suffices to show that $f\star_{\alpha} g=f\cdot g$ whenever $g$ is holomorphic (with $f$ arbitrary) or $f$ is anti-holomorphic (with $g$ arbitrary).
Suppose first that $g$ is holomorphic. Now $D_{F,\alpha}=\nabla-\delta+\frac{1}{\hbar}[I_\alpha,-]_\star$ where $I_\alpha = I + J_\alpha$, and since $J_\alpha\in\A_X^{0,1}\otimes\mathcal{W}_X$, we have $D_{F,\alpha}|_{\mathcal{W}_X}=D_F|_{\mathcal{W}_X}=D_K$. It follows that $O_g=J_g$ (where $J_g$ is defined in \eqref{equation: flat-section-D_K}), which is a section of the holomorphic Weyl bundle $\W_X$. Since $O_g$ contains no $\bar{y}$'s, no Wick contraction with $O_g$ is possible, and hence $\sigma(O_f\star O_g)=f\cdot g$ for any smooth $f$.
Suppose next that $f$ is anti-holomorphic; then we are in the opposite situation. By a simple induction using equation \eqref{equation: iteration-equation-quantum-flat-section}, we can see that, apart from its leading term $f$, the section $O_f$ does {\em not} contain any term that has only holomorphic components in $\mathcal{W}_{X,\mathbb{C}}$. Since the Wick contractions in $O_f\star O_g$ only remove holomorphic components from the first factor, only the leading term $f$ of $O_f$ survives after applying the symbol map, and hence $\sigma(O_f\star O_g)=f\cdot g$ for any smooth $g$.
\end{proof}
\begin{rmk}
The fact that $O_f=J_f$ for any (local) holomorphic function $f$ says that holomorphic functions do {\em not} receive any quantum corrections in these Fedosov quantizations.
\end{rmk}
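As a basic sanity check, consider $X=\mathbb{C}^n$ equipped with a flat K\"ahler metric and take $\alpha=0$. Then all the $R_n^*$'s (and hence $I$) vanish, each $O_f$ is the full fiberwise Taylor expansion of $f$, and formula \eqref{equation: formal-star-product} reduces to the standard Wick (normal-ordered) star product
$$
f\star_0 g=\sum_{k\geq 0}\frac{\hbar^k}{k!}\,\omega^{i_1\bar{j}_1}\cdots\omega^{i_k\bar{j}_k}\,\frac{\partial^kf}{\partial z^{i_1}\cdots\partial z^{i_k}}\,\frac{\partial^kg}{\partial\bar{z}^{j_1}\cdots\partial\bar{z}^{j_k}},
$$
in which only holomorphic derivatives of $f$ and anti-holomorphic derivatives of $g$ appear.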
\subsubsection{Gauge fixing conditions and comparison with previous works}
\
\noindent Fedosov's original solutions \cite{Fed} of his equation satisfy the gauge fixing condition that $\delta^{-1}(I)=0$. On the other hand, by a simple type reason argument, it is easy to see that our solutions of the Fedosov equation satisfy instead the following gauge fixing condition:
$$
(\delta^{1,0})^{-1}(I)=0.
$$
This condition alone cannot guarantee uniqueness, but we have the following:
\begin{prop}\label{proposition-fedosov-gauge-conditions}
The solution $I$ of the Fedosov equation \eqref{eqn:Fedosov-equation-Wick} satisfying the two conditions:
\begin{equation}\label{equation: holomorphic-gauge-fixing}
(\delta^{1,0})^{-1}(I)=0,
\end{equation}
\begin{equation}\label{equation: normalization-condition}
\pi_{0,*}(I)=0
\end{equation}
is unique.
\end{prop}
\begin{proof}
Equation \eqref{equation: normalization-condition} implies that $(\delta^{1,0}\circ(\delta^{1,0})^{-1}+(\delta^{1,0})^{-1}\circ\delta^{1,0})(I)=I$.
Together with the gauge fixing condition \eqref{equation: holomorphic-gauge-fixing}, we have
$I=(\delta^{1,0})^{-1}\circ\delta^{1,0} (I)$.
By applying the operator $(\delta^{1,0})^{-1}$ to equation \eqref{eqn:Fedosov-equation-Wick}, we see that $I$ must satisfy the following iterative equation:
\begin{align*}
I = (\delta^{1,0})^{-1}\left(\nabla I+\frac{1}{\hbar}I\star I+R_\nabla-\alpha-\delta^{0,1}I\right)
= (\delta^{1,0})^{-1}\left(\nabla I+\frac{1}{\hbar}I\star I+R_\nabla-\alpha\right),
\end{align*}
where the second equality uses the fact that the operators $\delta^{0,1}$ and $(\delta^{1,0})^{-1}$ commute with each other, together with the gauge condition \eqref{equation: holomorphic-gauge-fixing}. This iterative equation clearly has a unique solution (by induction on the weights).
\end{proof}
It is clear that our solution $I = \sum_{n\geq 2}I_n$ of the Fedosov equation \eqref{eqn:Fedosov-equation-Wick} is exactly the unique one satisfying the conditions \eqref{equation: holomorphic-gauge-fixing} and \eqref{equation: normalization-condition}.
There were a number of works on the Fedosov construction of Wick type deformation quantizations on (pseudo-)K\"ahler manifolds \cites{Karabegov00, Bordemann, Neumaier}. Notice that, in all these works, the authors were using the same gauge condition, namely, Fedosov's original condition $\delta^{-1}I=0$, when solving the Fedosov equation.
For the purpose of deformation quantization, there is no essential difference between these two choices of gauge conditions.
However, here are some interesting features of our construction which were not found in previous ones:
\begin{enumerate}
\item As we have emphasized, our Fedosov connection is a quantization of Kapranov's $L_\infty$-algebra structure on a K\"ahler manifold.
\item In our construction, the sections $O_f$ corresponding to holomorphic functions $f$ do not receive any quantum corrections. This is consistent with the Berezin-Toeplitz quantization, and also the local picture where $z$ acts as the {\em creation operator} which is classical while $\bar{z}$ acts as the {\em annihilation operator} $\hbar\frac{\partial}{\partial z}$ which is quantum.
\item Our quantization is in a certain sense ``polarized'': only half of the functions, i.e., the anti-holomorphic ones receive quantum corrections.
\end{enumerate}
\subsubsection{The Karabegov form}\label{section: Karabegov-form}
\
In \cite{Karabegov96}, Karabegov gives a complete classification of deformation quantizations of Wick type on a K\"ahler manifold:\footnote{Actually the roles of the holomorphic and anti-holomorphic variables in \cite{Karabegov96} were reversed, but this does not affect the results.}
\begin{thm}[Theorem 2 in \cite{Karabegov96}]\label{theorem: Karabegov-classification-Wick-star-product}
Deformation quantizations of Wick type (or with separation of variables) on a K\"ahler manifold $X$ are in one-to-one correspondence with formal deformations of the K\"ahler metric $\omega$ on $X$.
\end{thm}
To see how a formal deformation of $\omega$ is constructed from a Wick type star product, let us recall the following proposition in \cite{Karabegov96}:
\begin{prop}[Proposition 1 in \cite{Karabegov96}]\label{proposition: function-u-k}
Let $X$ be a K\"ahler manifold with K\"ahler form $\omega$ and $\star$ be a formal star product with separation of variables. Then, on each contractible coordinate chart $U\subset X$ with any holomorphic coordinate system $(z^1,\cdots, z^n)$, there exist formal functions $u_1,\cdots, u_m\in C^\infty(U)[[\hbar]]$ such that $[u_k, z^{k'}]_\star = \hbar\delta_{kk'} $.
\end{prop}
This proposition gives a locally defined formal differential form $\bar{\partial}(-u_kdz^k)$ of type $(1,1)$ on each chart $U$; these local forms patch together to give a globally defined closed formal differential form, called the {\em Karabegov form} associated to $\star$. Moreover, this formal $(1,1)$-form is a deformation of $\omega$, i.e., of the form $\omega+O(\hbar)$.
\begin{rmk}
In \cite{Karabegov96}, the Karabegov form is defined as $\sqrt{-1}\cdot\bar{\partial}(-u_kdz^k)$. We are using a different normalization since our star products satisfy the condition that $C_1(f,g)-C_1(g,f)=\{f,g\}$, instead of $C_1(f,g)-C_1(g,f)=\sqrt{-1}\cdot\{f,g\}$.
\end{rmk}
Now let $\alpha=\sum_{i\geq 1}\hbar^i\omega_i$ be a closed formal differential form of type $(1,1)$, and $D_{F,\alpha}$ and $\star_\alpha$ be respectively the associated Fedosov abelian connection and Wick type star product constructed in Theorem \ref{proposition: Fedosov-connection-general}. To calculate the Karabegov form associated to $\star_{\alpha}$, we begin with a lemma.
\begin{lem}\label{lemma: function-u-k}
Let $\alpha=\sum_{i\geq 1}\hbar^i\omega_i$ be a closed formal differential form of type $(1,1)$, let $-\sum_{i\geq 1}\hbar^i\rho_i$ be a (local) potential of $\alpha$ (i.e., $\bar{\partial}\partial\rho_i=\omega_i$), and let $\rho$ be a (local) potential of $\omega$ (i.e., $\omega_{k\bar{l}}=\partial_{z^k}\partial_{\bar{z}^l}\rho$). For the (locally defined) formal functions
$$
u_k=\frac{\partial}{\partial z^k}\left(\rho + \sum_{i\geq 1}\hbar^i \rho_i\right),
$$
the terms in $O_{u_k}$ which live in $\overline{\mathcal{W}}_X$ (which we call ``terms of purely anti-holomorphic type'') are given by
$$
u_k+\omega_{k\bar{m}}\bar{y}^m.
$$
\end{lem}
\begin{proof}
Recall that $O_{u_k}$ is the unique solution of the iterative equation:
\begin{align*}
O_{u_k} = u_k+\delta^{-1}\circ\left(\nabla+\frac{1}{\hbar}[I_\alpha,-]_{\star}\right)(O_{u_k}).
\end{align*}
Observe that if a monomial $A$ does not live in $\A_X^{\bullet}(\overline{\mathcal{W}}_{X})$, then $\nabla A+\frac{1}{\hbar}[I_\alpha,A]_{\star}$ does not have terms living in $\A_X^{\bullet}(\overline{\mathcal{W}}_{X})$. So we can prove the lemma by an induction on the weights of the ``terms of purely anti-holomorphic type'' in $O_{u_k}$.
The terms in $O_{u_k}$ of weight $1$ are given by
$$
\frac{\partial^2\rho}{\partial\bar{z}^l\partial z^k}\bar{y}^l=\omega_{k\bar{l}}\bar{y}^l.
$$
We know from the iterative equation that the weight $2$ terms are given by
$$
\delta^{-1}\circ\nabla^{0,1}(\omega_{k\bar{l}}\bar{y}^l),
$$
which vanish since the Levi-Civita connection is compatible with both the symplectic form and the complex structure. The next terms are
\begin{align*}
&\delta^{-1}\left(\nabla^{0,1}\left(\hbar\frac{\partial\rho_1}{\partial z^k}\right)+\frac{1}{\hbar}\left[\hbar\frac{\partial^2\rho_1}{\partial\bar{z}^n\partial z^m}d\bar{z}^n\otimes y^m,\omega_{k\bar{l}}\bar{y}^l\right]_{\star}\right)\\
=&\delta^{-1}\left(\hbar\frac{\partial^2\rho_1}{\partial z^k\partial\bar{z}^l}d\bar{z}^l+\hbar\frac{\partial^2\rho_1}{\partial\bar{z}^n\partial z^m}d\bar{z}^n\omega_{k\bar{l}}\omega^{m\bar{l}}\right)=0.
\end{align*}
Thus the weight $3$ terms in $O_{u_k}$ of purely anti-holomorphic type vanish. This argument can be generalized to all such terms of higher weights.
\end{proof}
\begin{thm}\label{theorem: Karabegov-form-Fedosov-deformation-quantization}
For every closed formal differential form $\alpha$ of type $(1,1)$, the star product $\star_{\alpha}$ defined in \eqref{equation: formal-star-product} has Karabegov form $\omega-\alpha$.
\end{thm}
\begin{proof}
Let $U$ be any contractible coordinate chart in $X$, with local holomorphic coordinates $(z^1,\cdots, z^n)$. We define the functions $u_k$ as in Lemma \ref{lemma: function-u-k}. Then the flat section $O_{z^k}$ can be explicitly written as $O_{z^k}=z^k+y^k+\cdots$, where all the terms live in $\mathcal{W}_X$. From the definition of the fiberwise Wick product on $\mathcal{W}_{X,\C}$ and that of the symbol map, we only need those terms in $O_{u_k}$ which are of ``purely anti-holomorphic type'' for the following computation:
$$O_{u_k}\star O_{z^{k'}}-O_{z^{k'}}\star O_{u_k}
= \omega_{k\bar{m}}\bar{y}^m\star y^{k'}-y^{k'}\star (\omega_{k\bar{m}}\bar{y}^m)
= -y^{k'}\star (\omega_{k\bar{m}}\bar{y}^m)
= \hbar\delta_{kk'}.$$
Thus we have shown that the functions $u_k$ satisfy the condition in Proposition \ref{proposition: function-u-k}. By construction, the Karabegov form is then given by
$$
\bar{\partial}(-u_kdz^k)
= -\frac{\partial u_k}{\partial\bar{z}^l}d\bar{z}^l\wedge dz^k
= \left(\frac{\partial^2\rho}{\partial z^k\partial\bar{z}^l}+\sum_{i\geq 1}\hbar^i\frac{\partial^2\rho_i}{\partial z^k\partial\bar{z}^l}\right)dz^k\wedge d\bar{z}^l
= \omega-\alpha.
$$
\end{proof}
Combining Theorems \ref{theorem: Karabegov-classification-Wick-star-product} and \ref{theorem: Karabegov-form-Fedosov-deformation-quantization}, we see that any star product of Wick type on a K\"ahler manifold arises from our construction:
\begin{cor}\label{corollary:Wick-type}
On a K\"ahler manifold $X$, any deformation quantization of Wick type is of the form $\star_{\alpha}$ for some closed formal $(1,1)$ form $\alpha$.
\end{cor}
\section{From Fedosov to Batalin-Vilkovisky}
Let us first recall the definition of traces of a star product:
\begin{defn}
Let $(C^\infty(X)[[\hbar]],\star)$ denote a deformation quantization of $X$. A {\em trace} of the star product $\star$ is a linear map $\Tr:C^\infty(X)[[\hbar]]\rightarrow\C[[\hbar]]$ such that
\begin{enumerate}
\item $\Tr(f\star g)=\Tr(g\star f)$;
\item $\Tr(f)=\int_X f\cdot\omega^n+O(\hbar)$.
\end{enumerate}
In particular, $\Tr(1)$ is called the {\em algebraic index} of $\star$.
\end{defn}
From the point of view of quantum field theory (QFT), traces are defined by correlation functions of local quantum observables. The Fedosov quantization describes the local data of a quantum mechanical system, namely, the cochain complex
$$
(\A_X^\bullet(\mathcal{W}_{X,\C}), D_{F,\alpha})
$$
gives the {\em cochain complex of local quantum observables} of a sigma model from $S^1$ to the target $X$. To get global quantum observables and define the correlation functions properly, we study the Batalin-Vilkovisky (BV) quantization \cite{BV} of this quantum mechanical system, from which we obtain a factorization map from local to global quantum observables.
For a detailed explanation of the physical background, we refer the readers to \cite{GLL}.
Mathematically, a BV quantization can be formulated as a solution of the quantum master equation (QME). Our main result in this section is that the canonical solution of the QME associated to the Fedosov abelian connection $D_{F,\alpha}$ is one-loop exact. This leads to a very neat cochain level formula for the algebraic index.
The organization of this section is as follows:
Section \ref{section: BV-integration} is a review of the construction of BV quantization from Fedosov abelian connections. We mainly follow the treatment in \cite[Sections 2.3-2.5]{GLL}, but there is one key difference, namely the choice of the propagator, which is made in order to reflect the K\"ahlerian condition.
In Section \ref{section: computation-Feynman-graphs}, we prove the main result of this section saying that our solutions of the QME are all one-loop exact, and we explain how this can lead to the cochain level formula for the trace.
\subsection{BV quantization}\label{section: BV-integration}
\
This subsection is largely a review of the construction of BV quantizations from Fedosov quantizations in \cite[Sections 2.3-2.5]{GLL}. The main difference lies in the choice of the propagator -- we will give a construction of the so-called {\em polarized propagator}, which is more compatible with the K\"ahlerian condition and leads to some special features of the resulting BV quantizations.
\subsubsection{Geometry of BV bundles and the QME}
\
The cochain complex of global quantum observables can be described in a differential geometric way. We start with the BV bundle on $X$:
\begin{defn}[cf. Definition 2.19 in \cite{GLL}]\label{definition: BV-bundle}
The {\em BV bundle} of a K\"ahler manifold $X$ is defined to be
$$
\widehat{\Omega}^{-\bullet}_{TX}:=\widehat{\Sym}(T^*X_{\C})\otimes\wedge^{-\bullet}(T^*X_{\C}),\quad \wedge^{-\bullet}(T^*X_{\C}):=\bigoplus_k\wedge^k(T^*X_{\C})[k],
$$
where $\wedge^k(T^*X_{\C})$ has cohomological degree $-k$.
\end{defn}
For any tensor power of the BV bundle, we have the canonically defined {\em multiplication}:
$$
\text{Mult}:(\widehat{\Omega}^{-\bullet}_{TX})^{\otimes k} := \widehat{\Omega}^{-\bullet}_{TX}\otimes\cdots\otimes\widehat{\Omega}^{-\bullet}_{TX}\rightarrow\widehat{\Omega}^{-\bullet}_{TX},
$$
which can be extended $\A_X^{\bullet}$-linearly to $\text{Mult}: \A_X^{\bullet}(\widehat{\Omega}^{-\bullet}_{TX})^{\otimes k}\rightarrow \A_X^{\bullet}(\widehat{\Omega}^{-\bullet}_{TX})$.
To describe the differential on the BV bundle, we consider the fiberwise de Rham operator
$d_{TX}:\widehat{\Omega}_{TX}^{-\bullet}\rightarrow\widehat{\Omega}^{-(\bullet+1)}_{TX}$,
and the contraction
$\iota_{\Pi}:\widehat{\Omega}_{TX}^{-\bullet}\rightarrow\widehat{\Omega}^{-(\bullet-2)}_{TX}$
by the Poisson tensor. We also have similarly defined operators $\partial_{TX}, \bar{\partial}_{TX}$.
There is also the {\em BV operator} defined by
$$
\Delta:=[d_{TX},\iota_{\Pi}].
$$
The operators $d_{TX},\iota_{\Pi}$ and $\Delta$ all extend $\A_X^{\bullet}$-linearly to operators on $\A_X^{\bullet}(\widehat{\Omega}^{-\bullet}_{TX})^{\otimes k}$.
The failure of the BV operator $\Delta$ being a derivation is known as the {\em BV bracket}:
\begin{equation*}
\{A,B\}_\Delta:=\Delta(A\cdot B)-\Delta(A)\cdot B\pm A\cdot\Delta(B).
\end{equation*}
It is clear that the operators $d_{TX}, \iota_{\Pi}$ and $\Delta$ all commute with the multiplication map $\text{Mult}$. We also have $[\Delta, \nabla]=0$ since $\nabla(\omega)=0$, and $\Delta^2=0$ by the Jacobi identity for the Poisson tensor.
\begin{lem}[Lemma 2.21 in \cite{GLL}]\label{lemma: BV-operator-differential}
The operator
$$Q_{BV}:=\nabla+\hbar\Delta+\frac{1}{\hbar}d_{TX}R_{\nabla}$$
is a differential on the BV bundle (i.e., $Q_{BV}^2=0$), which we call the {\em BV differential}.
\end{lem}
\begin{defn}[Definition 2.22 in \cite{GLL}]\label{definition: QME}
A section $r$ of the BV bundle is said to {\em satisfy the quantum master equation (QME)} if
\begin{equation}\label{equation: quantum-master-equation}
Q_{BV}(e^{r/\hbar})=0.
\end{equation}
\end{defn}
This is equivalent to $\nabla r+\hbar\Delta r+\{r,r\}_{\Delta}+d_{TX}R_{\nabla}=0$ since
\begin{align*}
Q_{BV}(e^{r/\hbar}) =\left(\nabla+\hbar\Delta+\frac{1}{\hbar}d_{TX}R_{\nabla}\right)(e^{r/\hbar})
=\frac{1}{\hbar}\left(\nabla r+\hbar\Delta r+\{r,r\}_{\Delta}+d_{TX}R_{\nabla}\right)\cdot e^{r/\hbar}.
\end{align*}
\begin{lem}[Lemma 2.23 in \cite{GLL}]\label{lemma: global-quantum-observable}
Given a solution $\gamma_\infty$ of the QME \eqref{equation: quantum-master-equation}, the operator
\begin{equation}\label{equation: differential-global-quantum-observable}
\nabla + \hbar\Delta + \{\gamma_\infty,-\}_\Delta
\end{equation}
is a differential on the BV bundle.
The {\em cochain complex of global quantum observables} is defined as $(\A_X^\bullet\left(\hat{\Omega}_{TX}^{-\bullet}\right)[[\hbar]], \nabla + \hbar\Delta + \{\gamma_\infty,-\}_\Delta)$.
\end{lem}
\begin{lem}[Lemma 2.24 in \cite{GLL}]
The {\em fiberwise Berezin integration}, defined by taking the top degree component in odd variables and setting the even variables to $0$:
\begin{equation*}\label{equation: Berezin-integration}
\int_{Ber}:\A_X^\bullet\left(\hat{\Omega}_{TX}^{-\bullet}\right) \rightarrow \A_X^\bullet,
\qquad a \mapsto\frac{1}{n!}(\iota_{\Pi})^n(a)\bigg|_{y^i=\bar{y}^j=0},
\end{equation*}
is a cochain map, with respect to the BV differential $Q_{BV}$ on $\A_X^\bullet\left(\hat{\Omega}_{TX}^{-\bullet}\right)$ and the de Rham differential on $\A_X^\bullet$.
\end{lem}
From this lemma we get a well-defined composition map on cohomology classes:
\begin{equation*}\label{equation: BV-integration-composite-ordinary-integration}
H^*(\A_X^\bullet\left(\hat{\Omega}_{TX}^{-\bullet}\right)[[\hbar]])\overset{\int_{Ber}}{\longrightarrow} H^*_{dR}(X)[[\hbar]]\overset{\int_{X}}{\longrightarrow}\mathbb{C}[[\hbar]].
\end{equation*}
We can thus define the correlation functions (or expectation values) of global quantum observables:
\begin{defn}\label{definition: correlation-function}
Let $\gamma_\infty$ be a solution of the QME \eqref{equation: quantum-master-equation} and let $O$ be a global quantum observable. The {\em correlation function of $O$} is defined as
\begin{equation*}\label{equation: correlation-function}
\langle O\rangle:=\int_X \int_{Ber} O \cdot e^{\gamma_\infty/\hbar}.
\end{equation*}
\end{defn}
\subsubsection{A polarized propagator and the homotopy group flow}\label{section: homotopy-group-flow}
\
To construct solutions of the QME and define the local-to-global factorization map, we need the {\em homotopy group flow operator}. Our formulation here follows that in \cite[Section 2.4]{GLL} but with a significant modification of the definition of the propagator in order to adapt to the K\"ahler setting.
First of all, for any positive integer $k$, let $S^1[k]$ be the compactified configuration space of $k$ ordered points on the circle $S^1$.
\footnote{For the construction and basic facts of compactified configuration spaces, see \cite{AS2}.}
Then the function $P$ on $S^1[2]$ defined by
\begin{equation}\label{definition: propagator-no-polarization}
P(\theta,u) := u-\frac{1}{2}
\end{equation}
is called the {\em propagator} (see \cite[Definition 2.30]{GLL}) because it is the derivative of the Green's function on $S^1$ with respect to the standard flat metric, and thus represents the propagator of topological quantum mechanics on $S^1$ (see \cite[Remarks 2.31 and B.6]{GLL}).
When restricted to the open subset $S^1\times S^1\setminus\Delta \subset S^1[2]$, the propagator $P$ is anti-symmetric in the two copies of $S^1$.
We can now define the {\em polarized propagator} in the K\"ahler setting as a combination of $P$ and the K\"ahler form on $X$:
\begin{defn}[cf. Definition 2.32 in \cite{GLL}]\label{definition: propagator-polarized}
We define the $\A_{S^1[k]}^{\bullet}$-linear operators
$$\partial_P, D: \A_{S^1[k]}^{\bullet}\otimes_{\mathbb{C}}\A_X^{\bullet}(\widehat{\Omega}_{TX}^{-\bullet})^{\otimes k}\rightarrow \A_{S^1[k]}^{\bullet}\otimes_{\mathbb{C}}\A_X^{\bullet}(\widehat{\Omega}_{TX}^{-\bullet})^{\otimes k}$$
by
\begin{enumerate}
\item
$\partial_P(a_1\otimes\cdots\otimes a_k)$
\begin{align*}
\hspace{2cm}:=&\sum_{1\leq\alpha<\beta\leq k}\pi_{\alpha\beta}^*(P+\frac{1}{2})\otimes_{\mathbb{C}}(\omega^{i\bar{j}}(x)a_1\otimes\cdots\otimes\mathcal{L}_{\partial_{z^i}}a_\alpha\otimes\cdots\otimes\mathcal{L}_{\partial_{\bar{z}^j}}a_\beta\otimes\cdots\otimes a_k)\\
&\ -\sum_{1\leq\alpha<\beta\leq k}\pi_{\alpha\beta}^*(P-\frac{1}{2})\otimes_{\mathbb{C}}(\omega^{i\bar{j}}(x)a_1\otimes\cdots\otimes\mathcal{L}_{\partial_{\bar{z}^j}}a_\alpha\otimes\cdots\otimes\mathcal{L}_{\partial_{z^i}}a_\beta\otimes\cdots\otimes a_k);
\end{align*}
\item $\displaystyle D(a_1\otimes\cdots\otimes a_k):=\sum_{1\leq i\leq k}\pm d\theta_{i}\otimes_{\mathbb{C}} (a_1\otimes\cdots d_{TX}a_i\otimes\cdots a_k)$.
\end{enumerate}
Here $a_i \in \A_X^{\bullet}(\widehat{\Omega}_{TX}^{-\bullet})$, $\pi_{\alpha\beta}: S^1[k] \to S^1[2]$ is the forgetful map to the two points indexed by $\alpha, \beta$, $\theta_i \in [0,1)$ is the parameter on the $S^1$ indexed by $1 \leq i \leq k$ and $d\theta_i$ is a 1-form on $S^1[k]$ via the pullback $\pi_i:S^1[k] \to S^1$, and finally $\pm$ are appropriate Koszul signs.
\end{defn}
We have the following decomposition of the operator $\partial_P$:
\begin{lem}\label{remark: propagator-comparision-with-symplectic-case}
The operator $\partial_P$ in Definition \ref{definition: propagator-polarized} can be written as the sum $\partial_{P}=\partial_{P_1}+\partial_{P_2}$, where
\begin{align*}
\partial_{P_1}(a_1&\otimes\cdots\otimes a_k)=\sum_{1\leq\alpha<\beta\leq k}\pi_{\alpha\beta}^*(P)\otimes_{\mathbb{R}}(\omega^{i\bar{j}}\cdot a_1\otimes\cdots\otimes\mathcal{L}_{\partial_{z^i}}a_\alpha\otimes\cdots\otimes\mathcal{L}_{\partial_{\bar{z}^j}}a_\beta\otimes\cdots\otimes a_k)\\
&-\sum_{1\leq\alpha<\beta\leq k}\pi_{\alpha\beta}^*(P)\otimes_{\mathbb{R}}(\omega^{i\bar{j}}\cdot a_1\otimes\cdots\otimes\mathcal{L}_{\partial_{\bar{z}^j}}a_\alpha\otimes\cdots\otimes\mathcal{L}_{\partial_{z^i}}a_\beta\otimes\cdots\otimes a_k),
\end{align*}
and
\begin{align*}
\partial_{P_2}(a_1&\otimes\cdots\otimes a_k)=\frac{1}{2}\sum_{1\leq\alpha<\beta\leq k}(\omega^{i\bar{j}}\cdot a_1\otimes\cdots\otimes\mathcal{L}_{\partial_{z^i}}a_\alpha\otimes\cdots\otimes\mathcal{L}_{\partial_{\bar{z}^j}}a_\beta\otimes\cdots\otimes a_k)\\
&+\frac{1}{2}\sum_{1\leq\alpha<\beta\leq k}(\omega^{i\bar{j}}\cdot a_1\otimes\cdots\otimes\mathcal{L}_{\partial_{\bar{z}^j}}a_\alpha\otimes\cdots\otimes\mathcal{L}_{\partial_{z^i}}a_\beta\otimes\cdots\otimes a_k).
\end{align*}
\end{lem}
Notice that both $\partial_{P_1}$ and $\partial_{P_2}$ are symmetric in the indices $\alpha,\beta$, and that $\partial_{P_1}$ coincides with the propagator used in \cite{GLL}.
\begin{lem}[cf. Lemma 2.33 in \cite{GLL}]\label{lemma: differential-propogator-equal-BV}
As operators on the BV bundle, we have
\begin{align*}
[d_{S^1}, \partial_P]=[\Delta, D], \qquad D^2=0,\qquad
[\partial_P, D]=0.
\end{align*}
\end{lem}
Lemma \ref{lemma: differential-propogator-equal-BV} says that the operators $\hbar\partial_P$ and $D$ commute, so we can formally define the following operator on the BV bundle:
\begin{equation*}\label{equation: homotopy-group-flow-operator}
e^{\hbar\partial_P+D} := \sum_{k\geq 0}\frac{1}{k!}(\hbar\partial_P+D)^k.
\end{equation*}
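Note that, by Lemma \ref{lemma: differential-propogator-equal-BV}, the operators $\hbar\partial_P$ and $D$ commute and $D^2=0$, so this exponential factors as
\begin{equation*}
e^{\hbar\partial_P+D}=e^{\hbar\partial_P}\circ e^{D}=e^{\hbar\partial_P}\circ(\mathrm{id}+D),
\end{equation*}
a form which is convenient when organizing the graph expansions below.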
Here is the definition of the {\em homotopy group flow operator} on the BV bundle:
\begin{defn}[cf. Definition 2.34 in \cite{GLL}]\label{definition: gamma-infty}
Given any $\gamma\in\A_X^\bullet(\mathcal{W}_{X,\C})$, we define $\gamma_\infty\in\A_X^\bullet(\hat{\Omega}_{TX}^{-\bullet})[[\hbar]]$ by
$$
e^{\gamma_{\infty}/\hbar}:=\text{Mult}\int_{S^1[*]}e^{\hbar\partial_P+D}e^{\otimes\gamma/\hbar},
$$
where $e^{\otimes\gamma/\hbar}:=\sum_{k\geq 0}\frac{1}{k!\hbar^k}\gamma^{\otimes k}$ with $\gamma^{\otimes k}\in\A_X^\bullet(\hat{\Omega}_{TX}^{-\bullet})^{\otimes k}$, and $\int_{S^1[*]}:\A_{S^1[k]}^{\bullet}\otimes_{\mathbb{C}}\A_X^{\bullet}(\widehat{\Omega}_{TX}^{-\bullet})^{\otimes k}\rightarrow \A_X^{\bullet}(\widehat{\Omega}_{TX}^{-\bullet})^{\otimes k}$ is the integration $\int_{S^1[k]}$ over the appropriate configuration space $S^1[k]$.
\end{defn}
\begin{lem}[cf. Lemma 2.37 in \cite{GLL}]\label{lemma: BV-action-boundary-configuration-space}
For any $\gamma\in\A_X^\bullet(\mathcal{W}_{X,\C})$, we have
$$
\hbar\Delta e^{\gamma_\infty/\hbar}=\text{Mult}\int_{S^1[*-1]}e^{\hbar\partial_P+D}\frac{1}{\hbar}(\gamma\star\gamma)\otimes e^{\otimes\gamma/\hbar}.
$$
\end{lem}
\begin{proof}
The proof is very similar to that of \cite[Lemma 2.37]{GLL}, except that here we use the polarized propagator $\partial_P$ instead of the propagator $\partial_{P_1}$ in \cite{GLL}.
Using the commutation relations in Lemma \ref{lemma: differential-propogator-equal-BV}, we have
\begin{align*}
\hbar\Delta e^{\gamma_\infty/\hbar}&=\text{Mult}\int_{S^1[*]}\hbar\Delta e^{\hbar\partial_P+D}e^{\otimes\gamma/\hbar}\\
&=\text{Mult}\int_{S^1[*]}d_{S^1}e^{\hbar\partial_P+D}e^{\otimes\gamma/\hbar}+\text{Mult}\int_{S^1[*]}(\hbar\Delta-d_{S^1})e^{\hbar\partial_P+D}e^{\otimes\gamma/\hbar}\\
&=\text{Mult}\int_{\partial S^1[*]}e^{\hbar\partial_P+D}e^{\otimes\gamma/\hbar}+\text{Mult}\int_{S^1[*]}e^{\hbar\partial_P+D}(\hbar\Delta-d_{S^1})e^{\otimes\gamma/\hbar}\\
&=\text{Mult}\int_{\partial S^1[*]}e^{\hbar\partial_P+D}e^{\otimes\gamma/\hbar},
\end{align*}
where in the last step we used the fact that $\hbar\Delta \left(e^{\otimes\gamma/\hbar}\right)=d_{S^1}\left(e^{\otimes\gamma/\hbar}\right)=0$ (here $d_{S^1}$ annihilates $e^{\otimes\gamma/\hbar}$ because $e^{\otimes\gamma/\hbar}$ does not depend on the configuration space $S^1[*]$ and $\Delta$ annihilates $e^{\otimes\gamma/\hbar}$ by type reasons).
We now consider the term $\text{Mult}\int_{\partial S^1[*]}e^{\hbar\partial_P+D}e^{\otimes\gamma/\hbar}$. Recall that there is an explicit description of the boundary (i.e., codimension $1$ strata) of the configuration space:
$$
\partial S^1[k]=\bigcup_{I\subset\{1,\cdots,k\}, |I|\geq 2}\pi^{-1}(D_I),
$$
where $D_I\subset(S^1)^k$ is the small diagonal where points in those $S^1$'s indexed by $I$ coincide (see e.g. \cite[Appendix B]{GLL} for more details). Similar to the real symplectic case, only components corresponding to those indices with $|I|=2$ will contribute non-trivially to the boundary integral.
Every such index $I$ corresponds to a pair $\alpha<\beta\in\{1,\cdots,k\}$. The corresponding boundary stratum has two connected components, each isomorphic to $S^1[*-1]$. So we have
\begin{align*}
\text{Mult}\int_{\partial S^1[*]}e^{\hbar\partial_P+D}e^{\otimes\gamma/\hbar}&=\text{Mult}\int_{S^1[*-1]}e^{\hbar\partial_P+D}\frac{1}{2\hbar}[\gamma,\gamma]_{\star}\otimes e^{\otimes\gamma/\hbar}\\
&=\text{Mult}\int_{S^1[*-1]}e^{\hbar\partial_P+D}\frac{1}{\hbar}(\gamma\star\gamma)\otimes e^{\otimes\gamma/\hbar}.
\end{align*}
We emphasize that the polarized propagator plays a key role here: the operators on the two connected components precisely correspond to $A\star B$ and $B\star A$, and this explains why the bracket $[-,-]_\star$ shows up.
\end{proof}
\subsubsection{Solutions of the QME from Fedosov abelian connections}\label{section: Fedosov-to-QME}
\
From the perspective of BV quantization, a solution of the QME \eqref{equation: quantum-master-equation} is the $\infty$-scale effective interaction of the quantum mechanical system on $X$. Such a solution can be constructed by running the homotopy group flow.
We first consider the following section of the BV bundle:
\begin{equation*}\label{equation: curvature-BV-bundle}
\tilde{R}_\nabla := (\partial_{TX}\circ\bar{\partial}_{TX})(R_\nabla)=\sqrt{-1}\cdot R_{i\bar{j}k\bar{l}}dz^i\wedge d\bar{z}^j\otimes dy^{k}\wedge d\bar{y}^{l}.
\end{equation*}
\begin{lem}\label{lemma: tilde-R-properties}
We have $\Delta(\tilde{R}_\nabla) = \nabla(\tilde{R}_\nabla) = 0$.
\end{lem}
\begin{proof}
The vanishing of the first term follows from the type of the BV operator $\Delta$, and the second one is the Bianchi identity.
\end{proof}
\begin{lem}\label{lemma: partial-P-equal-BV-bracket}
We have $\partial_{P_2}(d_{TX}R_\nabla\otimes\gamma)=-\frac{1}{2}\{\tilde{R}_\nabla,\gamma\}_{\Delta}$.
\end{lem}
\begin{proof}
This is a straightforward computation. For the LHS, we have
\begin{align*}
&\partial_{P_2}(d_{TX}R_\nabla\otimes\gamma)\\
=&\frac{1}{2}\omega^{p\bar{q}}\left(\mathcal{L}_{\partial_{z^p}}(d_{TX}R_\nabla)\otimes\mathcal{L}_{\partial_{\bar{z}^q}}\gamma+\mathcal{L}_{\partial_{\bar{z}^q}}(d_{TX}R_\nabla)\otimes\mathcal{L}_{\partial_{z^p}}\gamma\right) \\
=&\frac{\sqrt{-1}}{2}\omega^{p\bar{q}}\left(\mathcal{L}_{\partial_{z^p}}(R_{i\bar{j}k\bar{l}}dz^i\wedge d\bar{z}^j\otimes y^kd\bar{y}^l)\otimes\mathcal{L}_{\partial_{\bar{z}^q}}\gamma+\mathcal{L}_{\partial_{\bar{z}^q}}(R_{i\bar{j}k\bar{l}}dz^i\wedge d\bar{z}^j\otimes \bar{y}^ldy^k)\otimes\mathcal{L}_{\partial_{z^p}}\gamma\right)\\
=&\frac{\sqrt{-1}}{2}\omega^{p\bar{q}}\left((R_{i\bar{j}p\bar{l}}dz^i\wedge d\bar{z}^j\otimes d\bar{y}^l)\otimes\mathcal{L}_{\partial_{\bar{z}^q}}\gamma+(R_{i\bar{j}k\bar{q}}dz^i\wedge d\bar{z}^j\otimes dy^k)\otimes\mathcal{L}_{\partial_{z^p}}\gamma\right).
\end{align*}
On the other hand, recall that
$\Delta=\omega^{p\bar{q}}\left(\mathcal{L}_{\partial_{z^p}}\iota_{\partial_{\bar{z}^q}}-\mathcal{L}_{\partial_{\bar{z}^q}}\iota_{\partial_{z^p}}\right)$.
Hence the RHS is given by
\begin{align*}
&\{\tilde{R}_\nabla,\gamma\}_\Delta\\
=&\sqrt{-1}\omega^{p\bar{q}}\left(\iota_{\partial_{\bar{z}^q}}(R_{i\bar{j}k\bar{l}}dz^i\wedge d\bar{z}^j\otimes dy^k\wedge d\bar{y}^l)\otimes\mathcal{L}_{\partial_{z^p}}(\gamma)-\iota_{\partial_{z^p}}(R_{i\bar{j}k\bar{l}}dz^i\wedge d\bar{z}^j\otimes dy^k\wedge d\bar{y}^l)\otimes \mathcal{L}_{\partial_{\bar{z}^q}}(\gamma)\right)\\
=&-\sqrt{-1}\omega^{p\bar{q}}\left((R_{i\bar{j}k\bar{q}}dz^i\wedge d\bar{z}^j\otimes dy^k)\otimes\mathcal{L}_{\partial_{z^p}}(\gamma)+(R_{i\bar{j}p\bar{l}}dz^i\wedge d\bar{z}^j\otimes d\bar{y}^l)\otimes \mathcal{L}_{\partial_{\bar{z}^q}}(\gamma)\right)
\end{align*}
Comparing the two expressions gives $\partial_{P_2}(d_{TX}R_\nabla\otimes\gamma)=-\frac{1}{2}\{\tilde{R}_\nabla,\gamma\}_{\Delta}$, as claimed.
\end{proof}
We are now ready to construct our solutions of the QME:
\begin{thm}[cf. Theorem 2.26 in \cite{GLL}]\label{theorem: Fedosov-connection-RG-flow-QME}
Suppose that $\gamma$ is a solution of the Fedosov equation \eqref{equation: Fedosov-equation-gamma} and let $\gamma_\infty$ be defined as in Definition \ref{definition: gamma-infty}.
Then $e^{\tilde{R}_\nabla/2\hbar}\cdot e^{\gamma_\infty/\hbar}$ is a solution of the QME \eqref{equation: quantum-master-equation}, i.e., $Q_{BV}\left(e^{\tilde{R}_\nabla/2\hbar}\cdot e^{\gamma_\infty/\hbar}\right)=0$.
\end{thm}
\begin{proof}
Recall that $Q_{BV}=\nabla+\hbar\Delta+\hbar^{-1}d_{TX}R_\nabla$. We compute each term in $Q_{BV}(e^{\gamma_\infty/\hbar})$:
\begin{enumerate}
\item The first term is given by:
\begin{align*}
\nabla(e^{\gamma_\infty/\hbar}) & = \nabla\left(\text{Mult}\int_{S^1[*]}e^{\hbar\partial_P+D}e^{\otimes\gamma/\hbar}\right)
= \text{Mult}\int_{S^1[*]}e^{\hbar\partial_P+D}\nabla(e^{\otimes\gamma/\hbar})\\
& = \text{Mult}\int_{S^1[*]}e^{\hbar\partial_P+D}(\nabla\gamma\otimes e^{\gamma/\hbar})
= (\nabla\gamma)\cdot e^{\gamma_\infty/\hbar},
\end{align*}
where we are using the relations
$[\nabla, \hbar\partial_P+D]=\left[\nabla, \int_{S^1[*]}\right]=0$.
\item Using Lemma \ref{lemma: BV-action-boundary-configuration-space}, the second term is given by
\begin{align*}
\hbar\Delta(e^{\gamma_\infty/\hbar})=\text{Mult}\int_{S^1[*-1]}e^{\hbar\partial_P+D}\frac{1}{\hbar}(\gamma\star\gamma)\otimes e^{\otimes\gamma/\hbar}
=\frac{1}{\hbar}(\gamma\star\gamma-\omega_\hbar)\cdot e^{\gamma_\infty/\hbar}.
\end{align*}
Note that the $-\omega_\hbar$ term appears because $D(-\omega_\hbar)=0$ by type reasons.
\item For the third term, we consider the following term
$$
\text{Mult}\int_{S^1[*]}e^{\hbar\partial_P+D}\frac{R_{\nabla}}{\hbar}\otimes e^{\otimes\gamma/\hbar}=\text{Mult}\int_{S^1[*]}e^{\hbar\partial_P+D}(\frac{1}{\hbar}d\theta d_{TX}R_{\nabla})\otimes e^{\otimes\gamma/\hbar}
$$
Notice that $\hbar\partial_P$ can be applied to $d_{TX}R_\nabla$ once, which gives rise to the following difference:
\begin{align*}
&\text{Mult}\int_{S^1[*]}e^{\hbar\partial_P+D}(\frac{1}{\hbar}d\theta d_{TX}R_{\nabla})\otimes e^{\otimes\gamma/\hbar}- \text{Mult}\int_{S^1[*]}(\frac{1}{\hbar}d\theta d_{TX}R_{\nabla})e^{\hbar\partial_P+D}\left(e^{\otimes\gamma/\hbar}\right)\\
=&\text{Mult}\int_{S^1[*]}e^{\hbar\partial_P+D}(\frac{1}{\hbar}d\theta d_{TX}R_{\nabla})\otimes e^{\otimes\gamma/\hbar}- \text{Mult}\int_{S^1[*]}(\frac{1}{\hbar}d_{TX}R_{\nabla})e^{\hbar\partial_P+D}\otimes e^{\otimes\gamma/\hbar}\\
=&\text{Mult}\int_{S^1[*]}e^{\hbar\partial_P+D}d\theta_1\cdot\frac{1}{\hbar}\cdot(-\frac{1}{2})\cdot\{\tilde{R}_{\nabla},d_{TX}\gamma\}_{\Delta}\otimes e^{\otimes\gamma/\hbar}\\
=&-\frac{1}{2\hbar}\{\tilde{R}_\nabla, \gamma_\infty\}_\Delta\cdot e^{\gamma_\infty/\hbar},
\end{align*}
where we have used Lemma \ref{lemma: partial-P-equal-BV-bracket} and the relations
$\left[\{\tilde{R}_{\nabla},-\}_\Delta,\hbar\partial_P\right]=\left[\{\tilde{R}_{\nabla},-\}_\Delta, D\right]=0$ in the last equality.
\end{enumerate}
By the above computations and Lemma \ref{lemma: tilde-R-properties}, we obtain that
\begin{align*}
&Q_{BV}\left(e^{\tilde{R}_\nabla/2\hbar}\cdot e^{\gamma_\infty/\hbar}\right)\\
=&e^{\tilde{R}_\nabla/2\hbar}\cdot Q_{BV}(e^{\gamma_\infty/\hbar})+\left((\nabla+\hbar\Delta)e^{\tilde{R}_\nabla/2\hbar}\right)\cdot e^{\gamma_\infty/\hbar}+\frac{1}{2\hbar}\{\tilde{R}_\nabla,\gamma_\infty\}_\Delta\cdot\left(e^{\tilde{R}_\nabla/2\hbar}\cdot e^{\gamma_\infty/\hbar}\right)\\
=&e^{\tilde{R}_\nabla/2\hbar}\cdot Q_{BV}(e^{\gamma_\infty/\hbar})+\frac{1}{2\hbar}\{\tilde{R}_\nabla,\gamma_\infty\}_\Delta\cdot\left(e^{\tilde{R}_\nabla/2\hbar}\cdot e^{\gamma_\infty/\hbar}\right)\\
=&\frac{1}{\hbar}\left(\nabla\gamma+\frac{1}{\hbar}\gamma\star\gamma-\omega_\hbar+R_\nabla-\frac{1}{2}\{\tilde{R}_\nabla,\gamma_\infty\}_\Delta+\frac{1}{2}\{\tilde{R}_\nabla,\gamma_\infty\}_\Delta\right)\cdot\left(e^{\tilde{R}_\nabla/2\hbar}\cdot e^{\gamma_\infty/\hbar}\right)\\
=&\frac{1}{\hbar}\left(\nabla\gamma+\frac{1}{\hbar}\gamma\star\gamma-\omega_\hbar+R_\nabla\right)\cdot\left(e^{\tilde{R}_\nabla/2\hbar}\cdot e^{\gamma_\infty/\hbar}\right) = 0.
\end{align*}
\end{proof}
\begin{rmk}
Compared with \cite[Theorem 2.26]{GLL}, our QME solution has the additional factor $e^{\tilde{R}_\nabla/2\hbar}$ due to our different choice of the propagator. We will see in the next subsection that this leads to nontrivial contribution from the tadpole graph in computing the partition function, which we do not see in the general symplectic case in \cite{GLL}.
\end{rmk}
We are now ready to define the local-to-global factorization map of quantum observables:
\begin{thm}[cf. Theorem 2.40 in \cite{GLL}]\label{theorem: local-to-global-factorization-map}
The {\em factorization map} defined by
\begin{align*}
[-]_\infty: \A_X^{\bullet}(\mathcal{W}_{X,\C})&\rightarrow\A_X^\bullet(\hat{\Omega}_{TX}^{-\bullet})[[\hbar]]\\
O&\mapsto [O]_\infty:=e^{-\gamma_\infty/\hbar}\cdot \left(\text{Mult}\int_{S^1[*]}e^{\hbar\partial_P+D}(O d\theta_1\otimes e^{\otimes\gamma/\hbar})\right)
\end{align*}
is a cochain map from the complex $(\A_X^{\bullet}(\mathcal{W}_{X,\C}), D_{F,\alpha})$ of local quantum observables to the complex $(\A_X^\bullet(\hat{\Omega}_{TX}^{-\bullet})[[\hbar]], \nabla +\{\gamma_\infty,-\}_\Delta+\hbar\Delta)$ of global quantum observables.
\end{thm}
\begin{cor}
Let $f\in C^\infty(X)[[\hbar]]$ be any smooth function and $O_f$ be the corresponding flat section under the Fedosov connection. Then $Q_{BV}\left([O_f]_\infty\cdot e^{\tilde{R}_\nabla/2\hbar}\cdot e^{\gamma_\infty/\hbar}\right)=0$.
\end{cor}
\subsection{The trace}\label{subsection: trace-star-products}
\
For the above BV quantization, we can define the correlation function of a local quantum observable:
\begin{defn}\label{definition:correlation-functions}
The {\em correlation function} of a formal smooth function $f\in C^\infty(X)[[\hbar]]$ is defined by
$$
\langle f\rangle := \int_X \int_{Ber} [O_f]_\infty\cdot e^{\tilde{R}_\nabla/2\hbar}\cdot e^{\gamma_\infty/\hbar}.
$$
\end{defn}
The proofs of the following propositions, which we omit, are the same as that in \cite{GLL}:
\begin{prop}[cf. Proposition 2.43 in \cite{GLL}]\label{proposition: correlation-function-vanish-on-commutators}
Let $f,g\in C^\infty(X)[[\hbar]]$ be two smooth functions on $X$, and let $h=[f,g]_{\star_\alpha}$ denote the commutator of $f,g$ under the Fedosov star product $\star_\alpha$. Then $[O_h]_\infty$ is exact under the differential \eqref{equation: differential-global-quantum-observable} and hence $\langle f\star_\alpha g\rangle=\langle g\star_\alpha f\rangle$.
\end{prop}
\begin{prop}[cf. Proposition 2.44 in \cite{GLL}]\label{proposition: leading-term-correlation-function}
The correlation function of $f\in C^\infty(X)[[\hbar]]$ is of the form
$$
\langle f\rangle=\int_X f\cdot\omega^n+O(\hbar).
$$
\end{prop}
\begin{cor}\label{corollary:trace}
The association $\Tr: f \mapsto \langle f\rangle$ is the trace of the Fedosov star product $\star_\alpha$.
\end{cor}
To summarize, the BV quantization method gives rise to a way to explicitly compute an integral density of the trace of a Wick type star product $\star_\alpha$: for any smooth function $f\in C^\infty(X)$, we
\begin{enumerate}
\item compute $O_f$ using the iterative equation \eqref{equation: iteration-equation-quantum-flat-section},
\item obtain $\gamma_\infty$ as a Feynman graph expansion (see Lemma \ref{lemma: gamma-infty-graph-expression} below),
\item compute $[O_f]_\infty$ via Theorem \ref{theorem: local-to-global-factorization-map}, and
\item finally take the Berezin integral of $[O_f]_\infty\cdot e^{\tilde{R}_\nabla/2\hbar}\cdot e^{\gamma_\infty/\hbar}$ to obtain the density for $\Tr(f)$.
\end{enumerate}
It follows easily that this density for $\Tr(f)$ satisfies a locality property: at every point $x\in X$, it depends only on the Taylor expansion of $f$, the curvature of $X$ and the formal cohomology class $[\alpha]$ at $x$.
\subsection{One-loop exactness and the algebraic index}\label{section: computation-Feynman-graphs}
\
We present the explicit computation of $\Tr(1)$ as an example, whose formula is also known as the algebraic index theorem. First of all, a solution $\gamma_\infty$ of the QME \eqref{equation: quantum-master-equation} is obtained by running the homotopy group flow. This can be expressed as a Feynman graph expansion:\footnote{For some basic facts on Feynman graph expansion and Feynman weights, see Appendix \ref{section: Feynman graphs}. For more details and a proof of this lemma, see Costello's book \cite{Kevin-book}.}
\begin{lem}\label{lemma: gamma-infty-graph-expression}
Let $\gamma\in\A_X^\bullet(\mathcal{W}_{X,\C})$ and $\gamma_\infty$ be defined as in Definition \ref{definition: gamma-infty}. Then $\gamma_\infty$ can be expressed as a sum of Feynman weights:
\begin{equation*}\label{equation: gamma-infty-Feynman-weights}
\gamma_\infty=\sum_{\mathcal{G}:\ \text{connected}}\frac{\hbar^{g(\mathcal{G})}}{|\text{Aut}(\mathcal{G})|}W_{\mathcal{G}}(P, d_{TX}\gamma),
\end{equation*}
where the sum is over all connected, stable graphs $\mathcal{G}$, and $g(\mathcal{G})$ denotes the genus of $\mathcal{G}$.
\end{lem}
It turns out that, in the K\"ahler case, if $\gamma$ is the solution of the Fedosov equation \eqref{equation: Fedosov-equation-gamma} obtained by quantizing Kapranov's $L_\infty$-algebra structure, then the Feynman graph expansion of the QME solution $\gamma_\infty$ involves {\em only trees and one-loop graphs}; in other words, $\gamma_\infty$ gives a {\em one-loop exact BV quantization} of the K\"ahler manifold $X$. This is in sharp contrast with the general symplectic case \cite{GLL}, in which the BV quantization involves all-loop quantum corrections.
The same kind of one-loop exactness was observed for the holomorphic Chern-Simons theory by Costello \cite{Kevin-CS} and for a sigma model from $S^1$ to the target $T^*Y$ (cotangent bundle of a smooth manifold $Y$) by Gwilliam-Grady \cite{Gwilliam-Grady}.
As a corollary, we obtain a succinct explicit expression of the algebraic index $\Tr(1) = \int_X\int_{Ber}e^{\tilde{R}_\nabla/2\hbar}e^{\gamma_\infty/\hbar}$ of the star product $\star_\alpha$. The latter is a cochain level enhancement of the result in \cite{GLL}: there the technique of equivariant localization was applied to show that all graphs of higher genera ($\geq 2$) give rise to exact differential forms after the Berezin integration and thus do not contribute after integration over $X$, while the Feynman weights associated to these graphs in our QME solution $\gamma_\infty$ vanish already on the cochain level.
\subsubsection{A weight on the BV bundle and one-loop exactness}
\
The key to one-loop exactness lies in the existence of a suitable weight on the BV bundle:
\begin{defn}
We define a {\em weight} on the BV bundle $\A_X^\bullet(\hat{\Omega}_{TX}^{-\bullet})[[\hbar]]$ by setting
\begin{itemize}
\item $|\bar{y}^i|=|d\bar{y}^i|=1$, and
\item $|y^i|=|dy^i|=|dz^i|=|d\bar{z}^i|=|\hbar|=0$.
\end{itemize}
\end{defn}
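For instance, under this assignment the curvature term $\tilde{R}_\nabla=\sqrt{-1}\cdot R_{i\bar{j}k\bar{l}}dz^i\wedge d\bar{z}^j\otimes dy^{k}\wedge d\bar{y}^{l}$ has weight $1$, coming from the single factor $d\bar{y}^{l}$, while a typical cubic term such as $R_{i\bar{j}k\bar{l}}d\bar{z}^j\otimes y^iy^k\bar{y}^l$ also has weight $1$, coming from the single factor $\bar{y}^{l}$.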
The following two lemmas are shown by simple computations.
\begin{lem}\label{lemma: gamma-weight}
For every closed $(1,1)$-form $\alpha$, let $D_{F,\alpha}=\nabla-\delta+\frac{1}{\hbar}[\gamma,-]_\star$ be the associated Fedosov connection obtained in Theorem \ref{proposition: Fedosov-connection-general}. Then in the $\hbar$ power expansion of $\gamma$:
$$
\gamma=\sum_{i\geq 0}\hbar^i\cdot\gamma_i,
$$
the weight of each term in $\gamma_0$ is either $0$ or $1$.
\end{lem}
\begin{proof}
The only term in $\gamma_0$ of weight $0$ is $\omega_{i\bar{j}}d\bar{z}^j\otimes y^i$. All the other monomials in $\gamma_0$ contain exactly one $\bar{y}$ variable, and are thus of weight $1$.
\end{proof}
\begin{lem}\label{lemma: operators-preserving-weight}
The following operators preserve the weight:
$$
\hbar\Delta, \hbar\partial_{P},\hbar\{-,-\}_\Delta.
$$
In particular, the homotopy group flow operator also preserves the weight.
\end{lem}
Here is one of the main discoveries in this paper:
\begin{thm}[One-loop exactness]\label{theorem: gamma-infty-one-loop}
Let $\gamma$ be a solution of the Fedosov equation \eqref{equation: Fedosov-equation-gamma}.
Then
$$W_{\mathcal{G}}(P, d_{TX}\gamma) = 0\quad \text{ whenever $b_1(\mathcal{G}) \geq 2$}.$$
As a result, the graph expansion of the QME solution $\gamma_\infty$, as defined in Definition \ref{definition: gamma-infty}, involves only trees and one-loop graphs:
$$
\gamma_\infty=\sum_{\mathcal{G}:\ \text{connected},\ b_1(\mathcal{G})=0,1}\frac{\hbar^{g(\mathcal{G})}}{|\text{Aut}(\mathcal{G})|}W_{\mathcal{G}}(P, d_{TX}\gamma).
$$
\end{thm}
\begin{proof}
Let $\mathcal{G}$ be a connected graph with first Betti number $b_1(\mathcal{G})\geq 2$. Since every term in $\gamma$ has weight at most $1$, the total weight of the decorated vertices is bounded above by $|V(\mathcal{G})|$.
On the other hand, each propagator $P$ has weight $-1$. Since a connected graph has $|V(\mathcal{G})|+b_1(\mathcal{G})-1$ internal edges, the internal edges of $\mathcal{G}$ decorated by $P$ have total weight $-(|V(\mathcal{G})|+b_1(\mathcal{G})-1)$. Hence we must have
$$
|V(\mathcal{G})|-(|V(\mathcal{G})|+b_1(\mathcal{G})-1)\geq 0,
$$
which implies that $b_1(\mathcal{G})\leq 1$.
\end{proof}
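To illustrate the counting in the above proof: a connected graph with two vertices and $b_1(\mathcal{G})=2$ (a ``theta'' graph) has $|V(\mathcal{G})|+b_1(\mathcal{G})-1=3$ internal edges, so its vertices contribute weight at most $2$ while its propagators contribute weight $-3$; the resulting Feynman weight would have to be concentrated in negative weight, which is impossible since all generators of the BV bundle have non-negative weight, and hence it vanishes.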
\begin{rmk}
The argument here is similar to the proof of one-loop exactness of the holomorphic Chern-Simons theory by Costello \cite{Kevin-CS}.
\end{rmk}
\subsubsection{A cochain level formula for the trace}
\
To derive a formula for the algebraic index $\Tr(1) = \int_X\int_{Ber}e^{\tilde{R}_\nabla/2\hbar}e^{\gamma_\infty/\hbar}$ of the star product $\star_\alpha$, we first extend the symbol map to the BV bundle
\begin{equation*}\label{equation: symbol-map-BV-bundle}
\begin{aligned}
\sigma: \A_X^\bullet(\hat{\Omega}_{TX}^{-\bullet})&\rightarrow\A_X^\bullet\\
y^i, \bar{y}^j, dy^i,d\bar{y}^j&\mapsto 0.
\end{aligned}
\end{equation*}
\begin{lem}\label{lemma: Berezin-integral-graph}
The Berezin integral $\int_{Ber}e^{\tilde{R}_\nabla/2\hbar}e^{\gamma_\infty/\hbar}$ can be expressed as follows:
$$
\int_X\int_{Ber}e^{\tilde{R}_\nabla/2\hbar}e^{\gamma_\infty/\hbar}=\int_X\sigma\left(e^{\hbar\iota_{\Pi}}(e^{\tilde{R}_\nabla/2\hbar}e^{\gamma_\infty/\hbar})\right)
$$
\end{lem}
\begin{proof}
A simple observation is that every term in $\tilde{R}_\nabla$ and $\gamma_\infty$ has the same degree in $dz^i,d\bar{z}^j$'s and $dy^i,d\bar{y}^j$'s. Since the integration $\int_X$ only takes the top degree term in $dz^i,d\bar{z}^j$'s, the only term that matters in $e^{\hbar\iota_{\Pi}}$ is $\frac{1}{n!}(\hbar\iota_{\Pi})^n$.
\end{proof}
Then we compute:
\begin{align*}
e^{\hbar\iota_{\Pi}}(e^{\tilde{R}_\nabla/2\hbar}e^{\gamma_\infty/\hbar})
& = e^{\hbar\iota_{\Pi}}\left(e^{\tilde{R}_\nabla/2\hbar}\cdot \text{Mult}\int_{S^1[*]}e^{\hbar\partial_P+D}e^{\otimes\gamma/\hbar}\right)\\
& \overset{(*)}{=} e^{\hbar\iota_{\Pi}}\left(\text{Mult}\int_{S^1[*]}e^{\hbar\partial_P+D}(e^{\tilde{R}_\nabla/2\hbar}d\theta_1)\otimes e^{\otimes\gamma/\hbar}\right)\\
& = \text{Mult}\int_{S^1[*]}e^{\hbar(\iota_{\Pi}+\partial_P)}\circ e^{D}\left((e^{\tilde{R}_\nabla/2\hbar}d\theta_1)\otimes e^{\otimes\gamma/\hbar}\right)\\
& = \exp\left(\hbar^{-1}\cdot\sum_{\mathcal{G}\ \text{connected}}\frac{\hbar^{g(\mathcal{G})}}{|\text{Aut}(\mathcal{G})|}W(P+\iota_{\Pi}, d\theta\otimes (d_{TX}\gamma+\frac{1}{2}\tilde{R}_\nabla))\right);
\end{align*}
here the equality $(*)$ follows from the facts that $\partial_P$ cannot be applied to $\tilde{R}_\nabla/2$ by type reasons and that $D(\tilde{R}_\nabla)=0$.
This computation says that the integrand whose integral gives rise to $\Tr(1)$ can be expressed as a sum of Feynman weights. But note that they are different from those for $\gamma_\infty$ because the propagator is now $\hbar(\partial_P+\iota_{\Pi})$. Similar to Theorem \ref{theorem: gamma-infty-one-loop}, this graph sum also involves only trees and one-loop graphs:
\begin{prop}\label{prop:one-loop-exact-algebraic-index}
In the Feynman graph expansion
$$
\sum_{\mathcal{G}\ \text{connected}}\frac{\hbar^{g(\mathcal{G})}}{|\text{Aut}(\mathcal{G})|}W(P+\iota_{\Pi}, d_{TX}\gamma+\frac{1}{2}\tilde{R}_\nabla),
$$
only terms which correspond to graphs with first Betti number $0$ and $1$ are non-vanishing.
\end{prop}
We now proceed to compute the Feynman weights associated to trees and one-loop graphs.
To clarify the computation, we first decompose the terms labeling the vertices and edges respectively. For the vertices, recall that the term $\gamma$ in the Fedosov connection has the $\hbar$-power expansion:
$$
\gamma=\sum_{i\geq 0}\hbar^i\gamma_i.
$$
For later computations, we give a detailed description of these $\gamma_i$'s:
\begin{itemize}
\item For $\gamma_0$, we know that
$$
\gamma_0=\omega_{i\bar{j}}(dz^i\otimes\bar{y}^j-d\bar{z}^j\otimes y^i)+\gamma_0',
$$
where all terms of $\gamma_0'$ are at least cubic, with the leading term $R_{i\bar{j}k\bar{l}}d\bar{z}^j\otimes y^iy^k\bar{y}^l$.
\item For every $i>0$, the leading term of $\gamma_i$ is given by $(\delta^{1,0})^{-1}(\alpha_i)$.
\end{itemize}
For the propagators, notice that $\partial_{P}$ and $\iota_{\Pi}$ are respectively tensor products of forms on $S^1[*]$ and tensors on the BV bundle of $X$. They correspond to the {\em analytic} and {\em combinatorial} parts of the propagators respectively.
We assign colors to both the vertices and the edges, according to the previous decomposition of the functionals and propagators. For the edges and the vertices, we have
\begin{enumerate}
\item a blue edge is labeled by $\partial_{P_1}$;
\item a black edge is labeled by $\partial_{P_2}$;
\item a red edge is labeled by the operator $\iota_{\Pi}$;
\item a yellow vertex is labeled by $\omega_{i\bar{j}}(dz^i\otimes d\bar{y}^j-d\bar{z}^j\otimes dy^i)$;
\item a purple vertex is labeled by $d_{TX}\gamma_0'$;
\item a green vertex is labeled by $\sum_{i>0}d_{TX}\gamma_i$;
\item a blue vertex is labeled by $\frac{1}{2}\tilde{R}_\nabla$.
\end{enumerate}
In a graph, when we do not distinguish $\partial_{P_1}$ or $\partial_{P_2}$, we assign black color to this edge. Moreover, since the analytic parts of $\partial_{P_1}, \partial_{P_2}$ and $\iota_{\Pi}$ are all given by contractions using the inverse of the K\"ahler form, we assign an orientation on each internal edge, going from the holomorphic derivatives to the anti-holomorphic derivatives.
Some sample pictures are listed here:
$$
\figbox{0.39}{sample}
$$
\begin{lem}\label{lemma: properties-vertex-edges}
The vertices in our Feynman graphs have the following properties:
\begin{enumerate}
\item A purple vertex is at least trivalent with exactly one incoming tail and at least two outgoing tails.
\item The tails on a green vertex must be outgoing.
\item A blue vertex has exactly one incoming tail and one outgoing tail.
\item Every yellow, purple or green vertex can be connected to at most one red internal edge.
\end{enumerate}
\end{lem}
\begin{proof}
All the statements follow by considering the types of the sections of the BV bundle labeling the corresponding colored vertices. For instance, $(2)$ follows from the fact that the Weyl bundle component of $\gamma_i$ $(i>0)$ lives in $\mathcal{W}_X$.
\end{proof}
In terms of Feynman weights, the procedure of taking the symbol map $\sigma$ in Lemma \ref{lemma: Berezin-integral-graph} corresponds to taking only those connected graphs with no tails (external edges) but only internal edges. Let us first consider trees.
\begin{prop}\label{proposition: tree-external-edges}
Suppose $\mathcal{G}$ is a tree which contains at least one vertex labeled by $\gamma_0'$ (i.e., a purple vertex). Then this tree has at least one outgoing tail (external edge) which can be contracted by $\partial_{P_1}$ or $\partial_{P_2}$ (i.e., blue or black).
\end{prop}
\begin{proof}
We have seen that a purple vertex has to be at least trivalent. It is clear that every univalent vertex can only be connected by an internal edge labeled by $\iota_{\Pi}$. Let $v_i$ denote the number of vertices in $\mathcal{G}$ which are $i$-valent. Let $\mathcal{G}'$ denote the graph obtained from $\mathcal{G}$ by deleting those blue and yellow vertices, and also red edges (both internal and external). In particular, every internal and external edge must be either blue or black, and the number of $n$-valent vertices in $\mathcal{G}'$ is $v_{n+1}$, for $n\geq 1$.
Suppose $v_2=0$. Then a simple counting shows that there exists in $\mathcal{G}'$ at least one outgoing external edge. If $v_2>0$, we choose one of these univalent vertices in $\mathcal{G}'$, whose tail must be outgoing. We can draw a path starting from this vertex along the edges, following their directions, and go as far as possible. Now we need the following simple fact: if a vertex is at least bivalent in $\mathcal{G}$, then either all of its tails are outgoing, or it has exactly one incoming tail and at least one outgoing tail. This says that the path we are drawing has to stop at an outgoing tail on a vertex which is at least bivalent.
\end{proof}
As a corollary, we have a full list of trees {\em without} external edges:
\begin{cor}\label{cor:trees}
The only trees which survive under the symbol map are listed below, which exactly give rise to $\omega_\hbar$.
$$
\omega_{\hbar}=\figbox{0.22}{tree}
$$
\end{cor}
\begin{proof}
According to Proposition \ref{proposition: tree-external-edges}, these trees cannot include a purple vertex. We first show that they cannot include blue vertices either: A blue vertex has exactly one incoming tail and one outgoing tail. Since a green vertex has only outgoing tails, the outgoing tail of a blue vertex has to be connected to a yellow vertex, as shown in the following picture:
$$
\figbox{0.22}{yellow-connect-blue}=\ \frac{\sqrt{-1}}{2}R_{i\bar{j}k\bar{l}}dz^i\wedge d\bar{z}^j\wedge dz^k\otimes d\bar{y}^l.
$$
But then this has to vanish since $R_{i\bar{j}k\bar{l}}$ is symmetric in $i$ and $k$. Therefore such a graph can only contain yellow and green vertices. These green vertices must be univalent because there is no way to contract its blue or black tails. Hence the graphs satisfying the condition in this corollary have to be as stated.
\end{proof}
Next we turn to one-loop graphs. Since every one-loop graph can be obtained by attaching trees to a {\em wheel} (see Definition \ref{definition: wheel}), we first look at all the possible wheels (possibly with tails). It is clear that there are three types of wheels, according to the labeling of the internal edges of the wheel:
\begin{enumerate}
\item All the edges on the wheel are labeled by either $P_1$ or $P_2$.
\item All the edges on the wheel are labeled by $\iota_{\Pi}$.
\item The labelings of edges on the wheel include both $P$ (either $P_1$ or $P_2$) and $\iota_{\Pi}$.
\end{enumerate}
\begin{prop}\label{proposition: all-possible-wheels}
The symbol of a one-loop Feynman weight coming from a wheel of type $(3)$ vanishes.
\end{prop}
\begin{proof}
Let $\mathcal{G}$ denote such a one-loop graph. A simple observation is that every vertex on the wheel of $\mathcal{G}$ is either labeled by $\gamma_0'$ or $\tilde{R}_\nabla$. We now choose a vertex $v$ such that the two edges on the wheel adjacent to $v$ are labeled by $\iota_{\Pi}$ and $P$ respectively, as shown in the following picture:
$$
\figbox{0.28}{type-3-wheel}
$$
In particular, $v$ must be labeled by $\gamma_0'$, and is at least trivalent. Thus $v$ must have at least one outgoing tail (the dotted line in the above picture) which is {\em not} on the wheel. In order for the symbol of this Feynman integral to be non-vanishing, we have to attach a tree to this outgoing tail. Such a tree must include at least one vertex labeled by $\gamma_0'$, so that this vertex can be connected to the dotted line in the above picture. Thus by Proposition \ref{proposition: tree-external-edges}, this tree must also contain an outgoing tail. But this implies that the symbol of the Feynman integral of $\mathcal{G}$ vanishes.
\end{proof}
\begin{cor}
A one-loop graph which survives under the symbol map (equivalently, without external edges) must be of the following two types:
$$
\figbox{0.5}{nonvanishingwheel}
$$
\end{cor}
\begin{proof}
Let $\mathcal{G}$ be such a one-loop graph. If $\mathcal{G}$ is of type $(2)$, then every vertex on the wheel must be labeled by $\tilde{R}_\nabla$, as shown on the left.
If $\mathcal{G}$ is of type $(1)$, then every vertex on the wheel must be labeled by $\gamma_0'$. A similar argument as in Proposition \ref{proposition: all-possible-wheels} shows that these vertices must be exactly trivalent, as shown on the right.
\end{proof}
We can also see that if the edges of the wheel on such a one-loop graph are labeled by $\partial_P$, then they have to be of the same color:
\begin{prop}
If a wheel contains edges labeled by both $\partial_{P_1}$ and $\partial_{P_2}$, then the corresponding Feynman weights vanish. In other words, the following Feynman weights vanish:
$$
\figbox{0.22}{one-loop-color-type-2}
$$
\end{prop}
\begin{proof}
We focus on the vertex $v_1$ in the above picture. Notice that both the red and green edges incident to $v_1$ only contribute constants to the analytic part of the propagator. The analytic part of the propagator $P_1$ labeling the blue edge between $v_1$ and $v_2$ is $u-1/2$ as in \eqref{definition: propagator-no-polarization}. Thus the Feynman weight of the above graph is a multiple of the following integral
$$
\int_{u=0}^1 (u-\frac{1}{2})du=0,
$$
and hence vanishes.
\end{proof}
To summarize, the only one-loop graphs which contribute non-trivially to the integrand whose integral gives rise to $\Tr(1)$ are of the following 4 types:
\begin{equation}\label{picture:1_loop_graphs_1}
\figbox{0.58}{nonvanishing-1}
\end{equation}
\begin{equation}\label{picture:1_loop_graphs_2}
\figbox{0.58}{nonvanishing-2}
\end{equation}
In order to have a more precise computation of the Feynman weights, we first compute the Feynman weights of the following two types of line graphs with $m\geq 2$ vertices ($m$ purple vertices in the left picture):
\begin{equation}\label{picture: two-vertices-cancellation}
\figbox{0.56}{wheel-two-vertices}
\end{equation}
We label both the internal edges and vertices to avoid the issue of automorphisms of graphs and ordering of the contraction of propagators with vertices.
\begin{lem}
The Feynman weight for the graph on the left of \eqref{picture: two-vertices-cancellation} is given by
$$
-\frac{1}{2^m}\cdot (\omega^{k_1\bar{l}_2}\omega^{k_2\bar{l}_3}\cdots \omega^{k_{m-1}\bar{l}_m})(R_{i_1\bar{j}_1k_1\bar{l}_1}R_{i_2\bar{j}_2k_2\bar{l}_2}\cdots R_{i_m\bar{j}_mk_m\bar{l}_m})(dz^{i_1}d\bar{z}^{j_1}\cdots dz^{i_m}d\bar{z}^{j_m})\otimes (d\bar{y}^{l_1}\wedge dy^{k_m})
$$
while that for the graph on the right of \eqref{picture: two-vertices-cancellation} is given by
$$
\frac{1}{2^{m-1}}\cdot (\omega^{k_1\bar{l}_2}\omega^{k_2\bar{l}_3}\cdots \omega^{k_{m-1}\bar{l}_m})(R_{i_1\bar{j}_1k_1\bar{l}_1}R_{i_2\bar{j}_2k_2\bar{l}_2}\cdots R_{i_m\bar{j}_mk_m\bar{l}_m})(dz^{i_1}d\bar{z}^{j_1}\cdots dz^{i_m}d\bar{z}^{j_m})\otimes (\bar{y}^{l_1}y^{k_m}).
$$
\end{lem}
\begin{proof}
When $m=2$, the Feynman weight of the left picture can be explicitly computed as follows:
\begin{align*}
&\omega^{p_1\bar{q}_1}\left(\iota_{\partial_{z^{p_1}}}(\tilde{R}_\nabla/2)\otimes\iota_{\partial_{\bar{z}^{q_1}}}(\tilde{R}_\nabla/2)\right)\\
=&\ \frac{1}{4}\omega^{p_1\bar{q}_1}\cdot\text{Mult}\left(\iota_{\partial_{z^{p_1}}}(R_{i_1\bar{j}_1k_1\bar{l}_1}dz^{i_1}\wedge d\bar{z}^{j_1}\otimes dy^{k_1}\wedge d\bar{y}^{l_1})\otimes\iota_{\partial_{\bar{z}^{q_1}}}(R_{i_2\bar{j}_2k_2\bar{l}_2}dz^{i_2}\wedge d\bar{z}^{j_2}\otimes dy^{k_2}\wedge d\bar{y}^{l_2})\right)\\
=&\ -\frac{1}{4}\omega^{k_1\bar{l}_2}\cdot\text{Mult}\left((R_{i_1\bar{j}_1k_1\bar{l}_1}dz^{i_1}\wedge d\bar{z}^{j_1}\otimes d\bar{y}^{l_1})\otimes(R_{i_2\bar{j}_2k_2\bar{l}_2}dz^{i_2}\wedge d\bar{z}^{j_2}\otimes dy^{k_2})\right)\\
=&\ -\frac{1}{4}\omega^{k_1\bar{l}_2}\cdot(R_{i_1\bar{j}_1k_1\bar{l}_1}R_{i_2\bar{j}_2k_2\bar{l}_2}dz^{i_1}\wedge d\bar{z}^{j_1}\wedge dz^{i_2}\wedge d\bar{z}^{j_2}\otimes(d\bar{y}^{l_1}\wedge dy^{k_2})).
\end{align*}
The statement for general $m>2$ follows by a simple induction.
A simple computation shows that if the purple vertex is trivalent, then the Feynman weight of the following picture is $R_{i\bar{j}k\bar{l}}dz^i\wedge d\bar{z}^j\otimes y^k\bar{y}^l$:
$$
\figbox{0.5}{curvature}
$$
And the Feynman weight of the right picture of \eqref{picture: two-vertices-cancellation} follows from a straightforward computation which we omit here.
Thus we indeed get the cancellation. For wheels with more vertices, the cancellation can be proved by an induction on the number of vertices on the wheels.
\end{proof}
A simple consequence of this computation is the following
\begin{prop}\label{prop:1-loop-graphs}
The first type (i.e., the left picture in \eqref{picture:1_loop_graphs_1}) and third type (i.e., the left picture in \eqref{picture:1_loop_graphs_2}) one-loop Feynman weights cancel with each other.
\end{prop}
\begin{proof}
The first type and third type one-loop graphs are exactly obtained by connecting the starting and ending tails of the graphs shown in the picture \eqref{picture: two-vertices-cancellation}. It follows from the previous lemma that the Feynman weight for the first type is of the form
$$
-C\cdot \frac{1}{2^m}\cdot (\omega^{k_1\bar{l}_2}\omega^{k_2\bar{l}_3}\cdots \omega^{k_{m-1}\bar{l}_m}\omega^{k_m\bar{l}_1})(R_{i_1\bar{j}_1k_1\bar{l}_1}R_{i_2\bar{j}_2k_2\bar{l}_2}\cdots R_{i_m\bar{j}_mk_m\bar{l}_m})(dz^{i_1}d\bar{z}^{j_1}\cdots dz^{i_m}d\bar{z}^{j_m}),
$$
where the constant $C$ arises from the combinatorics of graphs, while that for the third type is of the form
$$
C\cdot \frac{1}{2^m}\cdot (\omega^{k_1\bar{l}_2}\omega^{k_2\bar{l}_3}\cdots \omega^{k_{m-1}\bar{l}_m}\omega^{k_m\bar{l}_1})(R_{i_1\bar{j}_1k_1\bar{l}_1}R_{i_2\bar{j}_2k_2\bar{l}_2}\cdots R_{i_m\bar{j}_mk_m\bar{l}_m})(dz^{i_1}d\bar{z}^{j_1}\cdots dz^{i_m}d\bar{z}^{j_m}).
$$
These two expressions differ exactly by a sign, so the corresponding Feynman weights cancel.
\end{proof}
The second type one-loop graphs, i.e., connected wheels with only blue edges on the wheel (the right picture in \eqref{picture:1_loop_graphs_1}), give rise precisely to the logarithm of $\hat{A}$ genus, as was shown by Grady and Gwilliam \cite{Gwilliam-Grady}:
\begin{prop}[Corollary 8.6 in \cite{Gwilliam-Grady}]\label{prop: one-loop-logarithm-A-roof-genus}
The Feynman weights corresponding to the following one-loop graphs give rise to the logarithm of the $\hat{A}$ genus of $X$.
$$
\figbox{0.58}{one-loop-color}
$$
\end{prop}
Finally, the contribution from the fourth type one-loop graphs, i.e., a tadpole graph (the right picture in \eqref{picture:1_loop_graphs_2}), is described by the following lemma:
\begin{lem}\label{lemma: weight-tadple-graph}
The Feynman weight of the tadpole graph is given by $\frac{1}{2}\Tr(\mathcal{R}^+)$, i.e.,
$$
\frac{1}{2}\Tr(\mathcal{R}^+)=\figbox{0.66}{tadpole},
$$
where $\mathcal{R}^+$ is given in \eqref{equation: R-plus}, \eqref{equation: trace-R-plus}.
\end{lem}
\begin{proof}
The Feynman weight of the tadpole graph is explicitly given by
\begin{align*}
\hbar\iota_{\Pi}\left(\frac{1}{2\hbar}\tilde{R}_\nabla\right)& = \iota_{\Pi}\left(\frac{\sqrt{-1}}{2}R_{i\bar{j}k\bar{l}}dz^i\wedge d\bar{z}^j\otimes dy^k\wedge d\bar{y}^l\right)\\
& = -\frac{\sqrt{-1}}{2}\omega^{k\bar{l}}R_{i\bar{j}k\bar{l}}dz^i\wedge d\bar{z}^j = \frac{1}{2}\Tr(\mathcal{R}^+).
\end{align*}
\end{proof}
Combining Proposition \ref{prop:one-loop-exact-algebraic-index}, Corollary \ref{cor:trees}, Proposition \ref{prop:1-loop-graphs}, Proposition \ref{prop: one-loop-logarithm-A-roof-genus} and Lemma \ref{lemma: weight-tadple-graph}, we arrive at our second main result, which is a cochain level formula for the algebraic index:
\begin{thm}\label{theorem:algebraic-index-theorem}
Let $\gamma$ be a solution of the Fedosov equation \eqref{equation: Fedosov-equation-gamma} and $\gamma_\infty$ be the associated solution of the QME as defined in Definition \ref{definition: gamma-infty}. Then we have
$$\sigma\left(e^{\hbar\iota_{\Pi}} (e^{\tilde{R}_\nabla/2\hbar}e^{\gamma_\infty/\hbar}) \right)
= \hat{A}(X)\cdot e^{-\frac{\omega_\hbar}{\hbar}+\frac{1}{2}\Tr(\mathcal{R}^+)} = \text{Td}(X)\cdot e^{-\frac{\omega_\hbar}{\hbar}+\Tr(\mathcal{R}^+)},$$
where $\text{Td}(X)$ is the Todd class of $X$.
\end{thm}
\begin{proof}
We only need to show the second equality, which follows from the formula
$\text{Td}(X)=\hat{A}(X)\cdot e^{-\frac{1}{2}\Tr(\mathcal{R}^+)}$.
\end{proof}
Applying Lemma \ref{lemma: Berezin-integral-graph} gives the {\em algebraic index theorem}:
\begin{cor}\label{corollary:algebraic-index-theorem}
The trace of the function $1$ is given by
\begin{align*}
\Tr(1) = \int_X\hat{A}(X)\cdot e^{-\frac{\omega_\hbar}{\hbar}+\frac{1}{2}\Tr(\mathcal{R}^+)}
= \int_X\text{Td}(X)\cdot e^{-\frac{\omega_\hbar}{\hbar}+\Tr(\mathcal{R}^+)}.
\end{align*}
\end{cor}
As we mentioned in the introduction, when $\alpha = \hbar\cdot\Tr(\mathcal{R}^+)$ or $\omega_\hbar=-\omega+\hbar\cdot\Tr(\mathcal{R}^+)$, we will prove in the forthcoming work \cite{CLL-PartIII} that the associated star product $\star_\alpha$ is exactly equal to the {\em Berezin-Toeplitz star product} studied in \cite{Bordemann-Meinrenken, Bordemann, Karabegov}. In this case, the algebraic index theorem is formulated as:
$$
\Tr(1)=\int_X\text{Td}(X)\cdot e^{\omega/\hbar}.
$$
\appendix
\section{Feynman graphs}\label{section: Feynman graphs}
In this section, we describe the basics of Feynman graphs. For more details, we refer the reader to \cite{Kevin-book}.
\begin{defn}\label{definition: Feynman-graphs}
A {\em graph} $\mathcal{G}$ consists of the following data:
\begin{enumerate}
\item A finite set $V(\mathcal{G})$ of {\em vertices},
\item A finite set $H(\mathcal{G})$ of {\em half edges},
\item An involution $\sigma: H(\mathcal{G})\rightarrow H(\mathcal{G})$. The set of fixed points of this map is called the set of {\em tails} of $\mathcal{G}$, denoted by $T(\mathcal{G})$; a tail is also called an {\em external edge}. The set of two-element orbits of this map is called the set of {\em internal edges} of $\mathcal{G}$, denoted by $E(\mathcal{G})$,
\item A map $\pi: H(\mathcal{G})\rightarrow V(\mathcal{G})$ sending a half-edge to the vertex to which it is attached,
\item A map $g: V(\mathcal{G})\rightarrow\Z_{\geq 0}$ assigning a {\em genus} to each vertex.
\end{enumerate}
\end{defn}
\begin{rmk}
In a picture of a graph, we will use solid lines and dotted lines to denote internal edges and tails respectively.
\end{rmk}
It is obvious how to construct a topological space $|\mathcal{G}|$ associated to a graph $\mathcal{G}$. A graph $\mathcal{G}$ is called {\em connected} if $|\mathcal{G}|$ is connected. The {\em genus} of a graph is defined by
\begin{equation*}\label{equation: genus-graph}
g(\mathcal{G}):=b_1(\mathcal{G})+\sum_{v\in V(\mathcal{G})}g(v).
\end{equation*}
Here $b_1(\mathcal{G})$ denotes the first Betti number of $|\mathcal{G}|$.
\begin{rmk}
We call a graph $\mathcal{G}$ a {\em tree} if $b_1(\mathcal{G})=0$, or a {\em one-loop graph} if $b_1(\mathcal{G})=1$.
\end{rmk}
We also introduce the notion of a wheel graph:
\begin{defn}\label{definition: wheel}
A one-loop graph $\mathcal{G}$ is called a {\em wheel} if removing any one of its internal edges gives rise to a tree, as in the following picture:
$$
\figbox{0.28}{wheel}
$$
\end{defn}
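For instance, a wheel all of whose vertices have genus $0$ has first Betti number $1$ and hence genus $1$, so in the graph expansions considered in this paper it contributes with exactly one power of $\hbar$.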
Let $\E$ be a graded vector space over a base ring $R$. Let $\E^*$ denote its $R$-linear dual (or continuous dual when there is a topology on $\E$). Let
$$
\hat{\mathcal{O}}(\E):=\prod_{k\geq 0}\Sym^k_R(\E^*)
$$
denote the space of formal functions on $\E$. We define a subspace
$$
\mathcal{O}^+(\E)\subset\hat{\mathcal{O}}(\E)[[\hbar]]
$$
consisting of those formal functions which are at least cubic modulo $\hbar$ and the nilpotent ideal $\mathcal{I}$ in $R$. Let $F\in\mathcal{O}^+(\E)$, which we expand as
$$
F=\sum_{g,k\geq 0}F_g^{(k)},
$$
where $F_g^{(k)}:\E^{\otimes k}\rightarrow R$ is an $S_k$-invariant (continuous) map.
We will fix an element $P\in\Sym^2(\E)$ which we call the {\em propagator}. With $F$ and $P$, we will describe for every connected stable graph $\mathcal{G}$ the {\em Feynman weight}:
$$
W_{\mathcal{G}}(P,F)\in\mathcal{O}^+(\E).
$$
Explicitly, we label each vertex $v\in V(\mathcal{G})$ of genus $g(v)$ and valency $k$ by $F_{g(v)}^{(k)}$. This defines an element:
$$
F_v:\E^{\otimes H(v)}\rightarrow R,
$$
where $H(v)$ denotes the set of half-edges incident to $v$. We label each internal edge by the propagator
$$
P_e=P\in\E^{\otimes H(e)},
$$
where $H(e)$ denotes the two half edges which together give rise to the internal edge $e$. We can then contract the tensor product of the propagators $P_e$ (one for each internal edge in $E(\mathcal{G})$) with the tensor product of the vertex maps $F_v$ (one for each vertex in $V(\mathcal{G})$) to yield an $R$-linear map:
$$
W_{\mathcal{G}}(P,F):\E^{T(\mathcal{G})}\rightarrow R.
$$
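For example, if $\mathcal{G}$ is the connected graph consisting of two genus-$0$ trivalent vertices joined by a single internal edge (and hence carrying four tails), then $W_{\mathcal{G}}(P,F)$ is the map $\E^{\otimes 4}\rightarrow R$ obtained by feeding the two legs of the propagator $P$ into one input of the copy of $F_0^{(3)}$ at each vertex.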
\begin{defn}\label{definition: homotopic-group-flow}
We define the {\em homotopic renormalization group flow operator (HRG)} with respect to the propagator $P$
$$
W(P,-): \mathcal{O}^+(\E)\rightarrow\mathcal{O}^+(\E)
$$
by
$$
W(P,F):=\sum_{\mathcal{G}}\frac{\hbar^{g(\mathcal{G})}}{|\text{Aut}(\mathcal{G})|}W_{\mathcal{G}}(P,F),
$$
where the sum is over all connected graphs, and $\text{Aut}(\mathcal{G})$ denotes the automorphism group of $\mathcal{G}$. We can equivalently describe the HRG operator formally by the simple formula:
$$
e^{W(P,F)/\hbar}=e^{\hbar\partial_P}(e^{F/\hbar}),
$$
where $\partial_P$ denotes the second order differential operator on $\mathcal{O}(\E)$ given by contracting with $P$.
\end{defn}
\end{document} | 141,423 |
8 days in Kapaa Itinerary
Created using Inspirock United States trip maker
Route & dates
View / edit route
Kapaa— 7 nights
Edit
View full calendar
8
days
Kapaa
8
days
Noted for its low-key beaches, Kapaa is a small town with a high concentration of tourist-oriented hotels, shops, bars, and restaurants. Get out of town with these interesting Kapaa side-trips: Hanalei Beach (in Hanalei), Poipu (Poipu Beach Park, Makawehi Bluff, & more) and Dolphin & Whale Watching (in Eleele). There's lots more to do: explore the stunning scenery at Waimea Canyon State Park, take an unforgettable tour with a helicopter tour, capture your vacation like a pro with an inspiring photography tour, and look for gifts at Kapaia Stitchery.
To see traveler tips, where to stay, photos, and tourist information, you can read our Kapaa online day trip planner.
Expect a daytime high around 30°C in December, and nighttime lows around 23°C. Cap off your sightseeing on the 9th (Sat) early enough to travel back home.
Things to do in Kapaa
Side Trips: Lihue, Poipu, Na Pali Coast State Park, Lydgate State Park, Dolphin & Whale Watching
Highlights from your plan:
Waimea Canyon State Park: 2:30pm, Thu Dec 7
Poipu Beach Park: 10:30am, Sat Dec 2
Helicopter Tours: 10:00am, Mon Dec 4
This is a vintage Avon Topaz Cologne collectible (the bottle is empty): the 1976 Princess of Yorkshire, a 3 1/2" Yorkie dog 1-ounce bottle.
The body of the Yorkshire Terrier bottle is glass and the head is plastic. It keeps its original Avon seal on the bottom. Very good condition!
Item ID: 5703
Only 1 available. Don't miss out!
Our Own Family Estate Heirlooms! ~4 Generations of Avid Collectors~ALL Fair Offers Considered~
~All from Our Family ESTATE ! ~LAY AWAY Available~Great Vintage Dolls.~ FAST Shipping ~ New Items Added Weekly! | 148,583 |
Inpathy
Inpathy is an online behavioral health platform.
Interested in learning more? Visit Inpathy.com to get started!
Inpathy For Providers
Inpathy works with all types of behavioral healthcare providers to deliver a range of psychiatric, behavioral and mental health care services to consumers online, anytime, anywhere. Both Inpathy providers and consumers access sessions and other services through a HIPAA-compliant online system.
Inpathy For Organizations
Inpathy works with organizations to provide a range of services and support that streamline and expedite care, resulting in better clinical, financial and operational outcomes... | 375,047 |
Converse nude rose gold
But please contact me if you have any problems with your order. I don't accept returns, exchanges or cancellations. Overview: Handmade item. Materials:
I will refund the custom purchase 5. Please inquire with me about your pair if you have any concerns. Contact the shop to find out about available shipping options. Please reply to this message if I've asked questions!
Reviews: 5 out of 5 stars. Love them so much I contacted Saundra to custom design another pair for me.
We love the contrast of high top sneakers and an ankle sock peeking out over the top with an adorable little mini dress in a fun pattern. From classic high-top Converse All-Stars to fun All-American red, white, and blue colored ones, Converse sneakers know how to bring the fun back into sneakers. Running shoes from Free People give you the support and speed you need to run that mile or run that errand on the weekend.
Converse x Office spring blossom line. We are no stranger to the Holiday Nude Collection by Converse released a few months back.
With a total of six new colour ways made with premium leather and super soft suedes, deciding which colour to get will probably be much harder than it sounds.
Items eligible for return are at the buyer's expense. Shipping quotes are for AirMail service via Canada Post. I will send a Convo directly following notification of your order requesting some of these custom selections. My approval is required for these situations. Cariris is not responsible for the shoe pair; I use a patented bonding formula designed by chemistry and not available on the consumer market.
Reviewed by Debra Errico. Order came quickly and beautifully packaged.
A black pair of studded Converse sneakers with denim and a rock tee is a no-fail combination we can't ever give up. Their muted shade might have been the perfect accompaniment to your fall get-up, but Converse has just recently dropped a brand new Spring Blossom Collection.
A little bit childish, a little bit tomboy, and totally chic. Free People knows what it means to be athletic and fashionable at the same time, and our sneaker collection has got it all right. With luxurious leather donned in luscious pastel hues, who could resist the chic charm of the UK exclusive collection. Others in the collection include the Lilac Mouse and Vapour Pink Mouse, which comes in various shades of pink, making it great to wear on Wednesdays, or any other day.
Size: Select an option 5 US Women's 5. If "Custom" is not mentioned on any particular listing
Any placed order is a binding contract and will remain as sold. Size selections are the responsibility of the buyer. Any sandal pair that is purchased in an incorrect size may be considered for exchange, but only with my personal prior approval.
I am not responsible for these visual monitor alterations.
It has none of the brightness associated with its name, but full quiet drama from the gentle pastel blue that exudes all shades of cute.
| 35,277 |
Tank looks sexy, makes sexy music, and just is sexy, so who better to give women tips on how to keep it sexy? Take a look at our exclusive interview with Tank to find out what's on his top 5 list, and check out his latest album This Is How I Feel!
RELATED: Tank Feat. Chris Brown “Lonely” [NEW MUSIC]
RELATED: Tank “Next Breath” [MUSIC]
#38 Shinzo Abe
Prime Minister, Japan
- Shinzo Abe won the October 2017 parliamentary election and his fourth 4-year term as Japan's highest elected official.
- In early 2018 a series of scandals broke, with allegations that Abe had used his position of power to grant favors, resulting in large protests.
- Since becoming prime minister in 2012 he has employed economic policies dubbed "Abenomics," pumping billions of dollars into Japan's economic growth.
- Under constant threat of North Korean missiles, Abe has broken away from Japan's famous military neutrality and said "all options are on the table."
- Hailing from a political family (his grandfather and great uncle served as prime ministers) Abe began his career in Japan's lower house of parliament.
On Forbes lists
Stats
Age: 63
Residence: Tokyo, Japan
Citizenship: Japan
Education: Bachelor of Arts/Science, Seikei University
Did you know
After winning the 2017 election, Abe became the third-longest-serving Japanese leader in the postwar era. If he's still PM in 2019, he'll be the longest serving ever.
Abe became the first Japanese prime minister to visit Pearl Harbor accompanied by a U.S. President, when he visited the naval base with then-President Obama in 2016.
Connections
Masayoshi Son
Related by residence country: Japan
Tadashi Yanai & family
Related by residence country: Japan
Takemitsu Takizaki
Related by residence country: Japan
Akira Mori & family
Related by residence country: Japan
Hiroshi Mikitani
Related by residence country: Japan
TITLE: Are determinants supposed to be constant for square matrices?
QUESTION [1 upvotes]: I have seen in a lot of texts and websites that the determinant of a square matrix is obtained by taking minors along the first row, which seems to be the norm. Recently, though, I have seen it computed by taking minors along the last column, or across the diagonal (I see this a lot for determinants of 4x4 matrices). So I decided to try this for the following 3x3 matrix $A$ (the superscript signs mark the usual checkerboard of cofactor signs):
$$A = \begin{pmatrix}
1^+&2^-&1^+\\
3^-&-4^+&-2^-\\
5^+&3^-&5^+\\
\end{pmatrix}$$
$(i)$ taking minors across the first row of $A$(the usual norm)
$$\det(A) = \begin{array}{c|cc|}
1&-4&-2\\
& 3&5\\
\end{array}
+
\begin{array}{c|cc|}
-2&3&-2\\
& 5&5\\
\end{array}
+
\begin{array}{c|cc|}
1&3&-4\\
& 5&3\\
\end{array} = -35
$$
$(ii)$ taking minors across the last column of $A$
$$\det(A) = \begin{array}{c|cc|}
1&3&-4\\
& 5&3\\
\end{array}
+
\begin{array}{c|cc|}
-(-2)&1&2\\
& 5&3\\
\end{array}
+
\begin{array}{c|cc|}
5&1&2\\
& 3&-4\\
\end{array} = -35
$$
$(iii)$ taking minors across the diagonal of $A$
$$\det(A) = \begin{array}{c|cc|}
1&-4&-2\\
& 3&5\\
\end{array}
+
\begin{array}{c|cc|}
-(-4)&1&1\\
& 5&5\\
\end{array}
+
\begin{array}{c|cc|}
5&1&2\\
& 3&-4\\
\end{array} = -64
$$
Why do I get a different value across the diagonal, if the determinant of a given matrix is constant irrespective of where the minors are obtained? Or are all of these correct?
P.S.: the result in case $(ii)$ has been corrected.
REPLY [5 votes]: So to answer your question: you get different values for two different reasons. You are right that the value of the determinant is independent of the row or column along which we expand. Note however that choosing the diagonal is not a valid way of doing this. There are ways to compute the determinant using the diagonal, but not in the way you use here.
Secondly, your computation of (ii) is wrong, as pointed out in the comments. The reason the first row is usually chosen is ease of methodology. If however you are working with a matrix that has zeros as entries, it can be beneficial to choose the row or column containing them.
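For reference, here is a minimal statement of the rule (a sketch of the general fact, not tied to this particular matrix): the cofactor (Laplace) expansion is valid along any single row $i$ or any single column $j$,
$$\det(A)=\sum_{j=1}^{n}(-1)^{i+j}a_{ij}M_{ij}\quad\text{(fixed row } i\text{)},\qquad \det(A)=\sum_{i=1}^{n}(-1)^{i+j}a_{ij}M_{ij}\quad\text{(fixed column } j\text{)},$$
where $M_{ij}$ is the minor obtained by deleting row $i$ and column $j$. The diagonal entries $a_{11},a_{22},\dots,a_{nn}$ do not all lie in one row or one column, so pairing them with their cofactors as in (iii) is not one of these expansions, which is why it need not return $\det(A)$.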
\chapter{Compressible Fluids and Non-linear Wave Equations}
\section{The Euler Equations}
The mathematical description of the state of a moving fluid is determined by the distribution of the fluid velocity
$\textbf{v}=\textbf{v}(\textbf{x},t)$ and of any two thermodynamic quantities pertaining to the fluid, for instance the pressure $p=p(\textbf{x},t)$
and the mass density $\rho=\rho(\textbf{x},t)$. All the thermodynamic quantities are determined by the values of any two of them, together with the equation of state.
The equation of motion of a perfect fluid can be derived as follows.
We begin with the equation which expresses the conservation of mass. We consider the domain $\Omega(t)$ in $\mathbb{R}^3$ occupied by some fluid at time $t$.
The mass of fluid in $\Omega(t)$ is $\int_{\Omega(t)}\rho dV$. This integral should not depend on $t$ due to the mass conservation. So we get
\begin{equation}
\frac{d}{dt}\int_{\Omega(t)}\rho dV=0
\end{equation}
This means
\begin{equation}
\int_{\Omega(t)}\frac{\partial\rho}{\partial t}dV+\int_{\partial\Omega(t)}
\rho\textbf{v}\cdot\textbf{n}dS=0
\end{equation}
where $\textbf{n}$ is the outward unit normal of $\partial\Omega({t})$. By Green's formula, we have
\begin{equation}
\int_{\Omega({t})}[\frac{\partial\rho}{\partial{t}}+\textrm{div}(\rho\textbf{v})]{dV} =0
\end{equation}
Since $\Omega({t})$ is arbitrary, we have
\begin{equation}
\frac{\partial\rho}{\partial{t}}+\textrm{div}(\rho\textbf{v})=0
\end{equation}
This is the equation of continuity.
Then we consider the conservation of momentum. By Newton's law,
\begin{equation}
\frac{d\textbf{P}}{dt}=\textbf{F}
\end{equation}
where $\textbf{P}$ is the momentum, $\textbf{F}$ is the force. In our case, equation (1.5) becomes
\begin{equation}
\frac{d}{dt}\int_{\Omega({t})}\rho\textbf{v} {dV}=-\int_{\partial\Omega({t})}{p}\textbf{n}{dS}
\end{equation}
i.e.
\begin{equation}
\frac{d}{dt}\int_{\Omega({t})}\rho\textbf{v} {dV}=-\int_{\Omega({t})}\nabla{p}{dV}
\end{equation}
A component of this equation is
\begin{equation}
\frac{d}{dt}\int_{\Omega({t})}\rho{v}^i {dV}=-\int_{\Omega({t})}\frac{\partial{p}}{\partial {x}^i}{dV}
\end{equation}
Then we get
\begin{equation}
\int_{\Omega({t})}[\frac{\partial(\rho{v}^i)} {\partial{t}}+\textrm{div}(\rho{v}^i \textbf{v})]{dV}=
-\int_{\Omega({t})}\frac{\partial{p}}{\partial {x}^i}{dV}
\end{equation}
Since $\Omega({t})$ is arbitrary, using also the equation of continuity, we get
\begin{equation}
\frac{\partial{v}^i}{\partial{t}}+\textbf{v}\cdot\nabla{v}^i=-\frac{1}{\rho}\frac{\partial {p}}{\partial{x}^i}
\end{equation}
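For clarity, the passage from (1.9) to (1.10) can be sketched as follows (a routine expansion using the product rule and the equation of continuity (1.4)):
\begin{align*}
\frac{\partial(\rho{v}^i)}{\partial{t}}+\textrm{div}(\rho{v}^i\textbf{v})
=\rho\Big(\frac{\partial{v}^i}{\partial{t}}+\textbf{v}\cdot\nabla{v}^i\Big)
+{v}^i\Big(\frac{\partial\rho}{\partial{t}}+\textrm{div}(\rho\textbf{v})\Big)
=\rho\Big(\frac{\partial{v}^i}{\partial{t}}+\textbf{v}\cdot\nabla{v}^i\Big)
\end{align*}
the last bracket vanishing by (1.4); dividing by $\rho$ gives (1.10).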
In a perfect fluid, heat exchange between different parts of the fluid is absent. So the motion is adiabatic throughout the fluid.
In adiabatic motion the entropy of any particle of fluid remains constant as that particle moves about in space.
Denoting by ${s}$ the entropy per unit mass, we can express the condition for adiabatic motion as
\begin{equation}
\frac{ds}{dt}=0
\end{equation}
which can be written as
\begin{equation}
\frac{\partial{s}}{\partial{t}}+\textbf{v}\cdot\nabla{s}=0
\end{equation}
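Here $\frac{d}{dt}$ denotes the material derivative along the flow lines; spelled out (a standard identity, which is exactly the passage from (1.11) to (1.12)):
\begin{align*}
\frac{d}{dt}=\frac{\partial}{\partial{t}}+\textbf{v}\cdot\nabla
\end{align*}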
\section{Irrotational Flow and the Nonlinear Wave Equation}
The adiabatic condition may take a much simpler form. If the entropy is constant throughout the fluid at some initial instant,
it retains the same constant value everywhere at all times. In this case, we can write the adiabatic condition simply as
\begin{equation}
{s}=constant
\end{equation}
Such a motion is said to be isentropic.
We may in this case put (1.10) into a different form. To do this, we employ the thermodynamic relation
\begin{equation}
{dh}={V}{dp}+\theta{ds}
\end{equation}
where ${h}$ is the enthalpy per unit mass, defined in terms of the internal energy per unit mass ${e}$:
\begin{align*}
h=e+pV
\end{align*}
${V}=\frac{1}{\rho}$ is the specific volume, and $\theta$ is the temperature. Since $s$ is constant, we have:
\begin{equation}
{dh}={V}{dp}=\frac{1}{\rho}{dp}
\end{equation}
So (1.10) becomes
\begin{equation}
\frac{\partial{v}^i}{\partial {t}}+\textbf{v}\cdot\nabla{v}^i=
-\frac{\partial{h}}{\partial{x}^i}
\end{equation}
This implies
\begin{equation}
\frac{\partial\omega}{\partial{t}}+\textbf{v}\cdot\nabla
\omega=\omega\cdot\nabla\textbf{v}-(\textrm{div}\textbf{v})\omega
\end{equation}
where $\omega=\nabla\times\textbf{v}$.
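A sketch of how (1.17) follows from (1.16): using the identity $\textbf{v}\cdot\nabla\textbf{v}=\nabla(\frac{1}{2}|\textbf{v}|^{2})-\textbf{v}\times\omega$ and taking the curl of (1.16), the gradient terms drop out and
\begin{align*}
\frac{\partial\omega}{\partial{t}}-\nabla\times(\textbf{v}\times\omega)=0
\end{align*}
Expanding $\nabla\times(\textbf{v}\times\omega)=\textbf{v}\,\textrm{div}\,\omega-\omega\,\textrm{div}\,\textbf{v}+\omega\cdot\nabla\textbf{v}-\textbf{v}\cdot\nabla\omega$ and using $\textrm{div}\,\omega=0$ then gives (1.17).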
If $\omega\mid_{{t}=0}=0$, then by (1.17) $\omega\equiv0$.
A flow for which $\omega=\nabla\times\textbf{v}\equiv0$ in all space and time is called an irrotational flow. In this case, since $\mathbb{R}^3$ is simply-connected,
there exists a function $\phi$, such that $\textbf{v}=-\nabla\phi$. The equation (1.16) becomes
\begin{equation}
\frac{\partial^2 \phi}{\partial{x}^i\partial{t}}-\sum_{j}\frac{\partial \phi}{\partial{x}^j}\frac{\partial^2 \phi}
{\partial {x}^j \partial {x}^i}=\frac{\partial {h}}{\partial {x}^i}
\end{equation}
i.e.
\begin{equation}
\frac{\partial}{\partial {x}^i}[\frac{\partial \phi}{\partial {t}}-\frac{1}{2}|\nabla\phi|^{2}-{h}]=0
\end{equation}
Since $\phi$ is defined up to a constant which may depend on ${t}$, without loss of generality, we may set
\begin{equation}
{h}=\frac{\partial\phi}{\partial{t}}-\frac{1}{2}|\nabla\phi|^{2}
\end{equation}
Then only the continuity equation remains.
Since $\frac{dh}{dp}={V}>0$, by the inverse function theorem ${p}$ can be viewed as a function of ${h}$.
This is because ${h}$ can be viewed as a function of ${p}$ and ${s}$, and in the isentropic case,
${h}$ is a function of ${p}$ alone. Then $\rho=\frac{dp}{dh}$ is also a function of ${h}$.
The continuity equation is
\begin{equation}
\frac{\partial\rho}{\partial{t}}-\textrm{div}(\rho\nabla\phi)=0
\end{equation}
i.e.
\begin{equation}
\frac{{d}\rho}{dh}\frac{\partial}{\partial{t}}(\frac{\partial \phi}{\partial{t}}
-\frac{1}{2}\sum_{i}(\frac{\partial\phi}{\partial{x}^i})^2)-
\sum_{j}\frac{{d}\rho}{dh}\frac{\partial}{\partial {x}^j}(\frac{\partial \phi}{\partial{t}}-
\frac{1}{2}\sum_{i}(\frac{\partial\phi}{\partial{x}^i})^2)\frac{\partial \phi}{\partial{x}^j}-\rho\Delta\phi=0
\end{equation}
i.e.
\begin{equation}
\frac{\partial^2\phi}{\partial{t}^2}-
2\sum_{i}\frac{\partial\phi}{\partial {x}^i}\frac{\partial^2\phi}{\partial{t}\partial {x}^i }+
\sum_{ij}\frac{\partial\phi}{\partial {x}^i}\frac{\partial\phi}{\partial {x}^j}\frac{\partial^2\phi}{\partial {x}^i\partial {x}^j}-\eta^2\Delta\phi=0
\end{equation}
where we have used the fact that $\frac{dp}{dh}=\rho$ and the definition of $\eta$, the sound speed, which is the following:
\begin{equation}
\eta^2=\eta^2(h):=(\frac{\partial{p}}{\partial\rho})_{s}
\end{equation}
We define the function
\begin{equation}
{H}={H}({h}):=-\eta^2-2{h}
\end{equation}
From (1.24) we know that
\begin{equation}
\eta^{-2}=(\frac{{d}\rho}{dp})_{s}=
-\frac{1}{{V}^2}(\frac{dV}{dp})_{s}
\end{equation}
Since ${V}=(\frac{dh}{dp})_{s}$, by direct calculation we obtain
\begin{equation}
\frac{dH}{dh}=-\frac{\eta^4}{{V}^3}(\frac{{d}^2{V}}{{dp}^2})_{{s}}
\end{equation}
So $\frac{dH}{dh}$ vanishes if and only if ${V}$ is linear in ${p}$. This fact will be used later.
The meaning of the function $H$ will become apparent in the sequel.
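One way to check (1.27) (a direct computation in the isentropic setting, where $dh={V}dp$ so that $\frac{d}{dh}=\frac{1}{V}\frac{d}{dp}$): by (1.26), $\eta^{2}=-{V}^{2}\big(\frac{dV}{dp}\big)_{s}^{-1}$, hence
\begin{align*}
\frac{d(\eta^{2})}{dh}=\frac{1}{V}\frac{d}{dp}\Big(-\frac{V^{2}}{(dV/dp)_{s}}\Big)
=-2+\frac{V\,(d^{2}V/dp^{2})_{s}}{(dV/dp)_{s}^{2}}
\end{align*}
and therefore
\begin{align*}
\frac{dH}{dh}=-\frac{d(\eta^{2})}{dh}-2=-\frac{V\,(d^{2}V/dp^{2})_{s}}{(dV/dp)_{s}^{2}}
=-\frac{\eta^{4}}{V^{3}}\Big(\frac{d^{2}V}{dp^{2}}\Big)_{s}
\end{align*}
which is (1.27).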
Equation (1.21) is the Euler-Lagrange equation of the Lagrangian
\begin{equation}
{L}={p}({h})
\end{equation}
In general, ${L}={L}({t},\textbf{x};\phi,\frac{\partial\phi}{\partial {t}},\nabla\phi)$, and the Euler-Lagrange equation is
\begin{equation}
\sum_{\alpha}\frac{\partial}{\partial{x}^\alpha}(\frac{\partial{L}({t},\textbf{x};\phi,\frac{\partial\phi}{\partial{t}},\nabla\phi) }{\partial {u}_{\alpha}})
=\frac{\partial{L}}{\partial{q}}(t,\textbf{x};\phi,\frac{\partial\phi}{\partial t},\nabla\phi)
\end{equation}
where ${q}=\phi$, ${x}^0={t}$, ${u}_{0}=\frac{\partial\phi}{\partial{t}}$, ${u}_{i}=\frac{\partial\phi}{\partial{x}^i}$, $i=1,2,3$.
In the case of ${L}={p}({h})$,
\begin{equation}
\frac{\partial{L}}{\partial{u}_{\alpha}}=\frac{{dp}}{{dh}}\frac{\partial{h} }{\partial{u}_{\alpha}}=\rho\frac{\partial{h}}{\partial{u}_{\alpha}}
\end{equation}
where ${h}={u}_{0}-\frac{1}{2}\sum_{i}({u}_{i})^2$
So we have
\begin{align*}
\frac{\partial{h}}{\partial{u}_{0}}=1, \frac{\partial{h}}{\partial{u}_{i}}=-{u}_{i}, \frac{\partial{h}}{\partial{q}}=0
\end{align*}
Then the Euler-Lagrange equation is
\begin{equation}
\frac{\partial\rho}{\partial{t}}-\sum_{i}\frac{\partial}{\partial{x}^i}(\rho{u}_{i})=0
\end{equation}
This is equation (1.21).
\section{The Equation of Variations and the Acoustical Metric}
Next, we shall discuss the linearized equation corresponding to (1.21).
Let $\phi$ be a given solution of (1.21) and let $\{\phi_{\tau}:\tau\in I\}$, $I$ an open interval of the real line containing $0$,
be a differentiable 1-parameter family of solutions such that $\phi_{0}=\phi$.
Then
\begin{equation}
\psi=(\frac{{d}\phi_{\tau}}{{d}\tau})_{\tau=0}
\end{equation}
is a variation of $\phi$ through solutions.
Consider the Lagrangian of the unknown $\psi$:
\begin{equation}
{L}_{\tau}[\psi]:={L}[\phi+\tau\psi]
\end{equation}
Then the linearized Lagrangian of ${L}[\phi]$ is
\begin{equation}
\dot{{L}}[\psi]:=\frac{1}{2}\frac{{d}^2{L}_{\tau}[\psi]}{{d}\tau^2}\mid_{\tau=0}
\end{equation}
In this case, ${L}_{\tau}[\psi]={p}({h}_{\tau})$, where
\begin{align*}
{h}_{\tau}=\frac{\partial}{\partial{t}}(\phi+\tau\psi)-\frac{1}{2}\sum_{i}[\frac{\partial}{\partial{x}^i}(\phi+\tau\psi)]^2
\end{align*}
By direct calculation, we have
\begin{equation}
\frac{{d}^2{L}_{\tau}}{{d}\tau^2}[\psi]\mid_{\tau=0}=\frac{{d}^2{p}}{{dh}^2}
({h})[\frac{\partial\psi}{\partial{t}}-\sum_{i}\frac{\partial\phi}{\partial{x}^i}
\frac{\partial\psi}{\partial{x}^i}]^2+\frac{{dp}}{{dh}}({h})(-\sum_{i}(\frac{\partial\psi}{\partial{x}^i})^2)
\end{equation}
i.e.
\begin{equation}
\frac{{d}^2{L}_{\tau}}{{d}\tau^2}[\psi]\mid_{\tau=0}=
\frac{{d}\rho}{{dh}}({h})(\frac{\partial\psi}{\partial{t}}
-\sum_{i}\frac{\partial\phi}{\partial{x}^i}\frac{\partial\psi}{\partial{x}^i})^2-\rho(\sum_{i}(\frac{\partial\psi}{\partial{x}^i})^2)
\end{equation}
i.e.
\begin{equation}
\frac{{d}^2{L}_{\tau}}{{d}\tau^2}[\psi]\mid_{\tau=0}=
\rho[\eta^{-2}(\partial_{t}\psi-\sum_{i}\partial_{i}\phi\partial_{i}\psi)^{2}-\sum_{i}(\partial_{i}\psi)^2]
\end{equation}
i.e.
\begin{equation}
\frac{{d}^2{L}_{\tau}}{{d}\tau^2}[\psi]\mid_{\tau=0}=-\rho({g}^{-1})^{\mu\nu}\partial_{\mu}\psi\partial_{\nu}\psi
\end{equation}
where the metric $g$ is:
\begin{equation}
{g}=-\eta^2{dt}^2+\sum_{i}({dx}^{i}-{v}^{i}{dt})^2
\end{equation}
and
\begin{equation}
{g}^{-1}=-\eta^{-2}(\frac{\partial}{\partial t}+v^{i}\frac{\partial}{\partial x^{i}})\otimes(\frac{\partial}{\partial t}+v^{j}\frac{\partial}{\partial x^{j}})
+\sum_{i}\frac{\partial}{\partial x^{i}}\otimes \frac{\partial}{\partial x^{i}}
\end{equation}
Here, ${v}^i=-\partial_{i}\phi$.
We have used the fact:
\begin{align*}
\frac{\rho'}{\rho}=\frac{{d}\rho}{{dh}}\frac{1}{\rho}=\frac{{d}\rho}{{dp}}\frac{{dp}}{{dh}}\frac{1}{\rho}=\eta^{-2}
\end{align*}
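As a consistency check (a sketch; Euclidean indices are raised and lowered with $\delta_{ij}$, so $v_{i}=v^{i}$), the components of (1.39) are $g_{00}=-\eta^{2}+|\textbf{v}|^{2}$, $g_{0i}=-v_{i}$, $g_{ij}=\delta_{ij}$, while (1.40) reads $(g^{-1})^{00}=-\eta^{-2}$, $(g^{-1})^{0i}=-\eta^{-2}v^{i}$, $(g^{-1})^{ij}=\delta^{ij}-\eta^{-2}v^{i}v^{j}$. A direct computation then gives
\begin{align*}
(g^{-1})^{\mu\alpha}g_{\alpha\nu}=\delta^{\mu}_{\ \nu},\qquad \det{g}=-\eta^{2}
\end{align*}
the second identity being the fact $\sqrt{-\det{g}}=\eta$ used below.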
Consider the conformal metric
\begin{equation}
\tilde{g}_{\mu\nu}=\Omega{g}_{\mu\nu}
\end{equation}
which satisfies the following condition:
\begin{equation}
\rho({g}^{-1})^{\mu\nu}\partial_{\mu}\psi\partial_{\nu}\psi=(\tilde{g}^{-1})^{\mu\nu}
\partial_{\mu}\psi\partial_{\nu}\psi\sqrt{-{\det}\tilde{g}}
\end{equation}
Since $\sqrt{-{\det g}}=\eta$, we get $\sqrt{-{\det}\tilde{{g}}}=\Omega^2\eta$, hence $\Omega=\frac{\rho}{\eta}$.
Since the linearized equation of (1.21) is the Euler-Lagrange equation of the linearized Lagrangian $\dot{{L}}[\psi]$, it reads
\begin{equation}
\Box_{\tilde{{g}}}\psi=0
\end{equation}
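Here $\Box_{\tilde{g}}$ denotes the Laplace-Beltrami operator of the Lorentzian metric $\tilde{g}$, which in local coordinates takes the standard form
\begin{align*}
\Box_{\tilde{g}}\psi=\frac{1}{\sqrt{-\det\tilde{g}}}\frac{\partial}{\partial{x}^{\mu}}\Big(\sqrt{-\det\tilde{g}}\,(\tilde{g}^{-1})^{\mu\nu}\frac{\partial\psi}{\partial{x}^{\nu}}\Big)
\end{align*}
so that, by (1.42), equation (1.43) is precisely the Euler-Lagrange equation of the linearized Lagrangian $\dot{{L}}[\psi]$.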
Since $\rho_{0},\eta_{0}$, which are the mass density and the sound speed corresponding to the constant state, are constants, and the Euler-Lagrange equations
are not affected if the Lagrangian is multiplied by a constant, we may choose
\begin{equation}
\Omega=\frac{\rho/\rho_{0}}{\eta/\eta_{0}}
\end{equation}
so that $\Omega=1$ in the constant state.
We may in fact choose the unit of time in relation to the unit of length so that
\begin{align*}
\eta_{0}=1
\end{align*}
and we may further choose the unit of mass in relation to the unit of length so that
\begin{align*}
\rho_{0}=1
\end{align*}
These choices shall be understood from the next chapter to the end of the book.
\section{The Fundamental Variations}
Now if $\phi$ is a solution of (1.21) and ${f}_{\tau}$ is a 1-parameter subgroup of the isometry group of $\mathbb{E}^3$, that is $\mathbb{R}^{3}$ with the standard
Euclidean metric
\begin{align*}
\sum_{i}(dx^{i})^{2}
\end{align*}
or the group of translations in the ${t}$-direction,
then for each $\tau$,
\begin{equation}
\phi_{\tau}=\phi\circ{f}_{\tau}
\end{equation}
is also a solution. It follows that
\begin{equation}
\psi=(\frac{{d}\phi\circ{f}_{\tau}}{{d}\tau})_{\tau=0}={X}\phi
\end{equation}
satisfies the linear equation (1.43).
Here ${X}$ can be any of the following:
\begin{equation}
\Omega_{ij}:={x}^{i}\partial_{j}-{x}^{j}\partial_{i}
\end{equation}
$1\leq i < j \leq 3$
and
\begin{equation}
{T}_{\mu}:=\frac{\partial}{\partial{x}^\mu}
\end{equation}
$\mu=0,1,2,3$, which are the generators of ${f}_{\tau}$.
Consider now the scaling group for equation (1.21). It is easy to see that the physical dimension of $\phi$ is $\frac{length^{2}}{time}$, so we consider
the transformation, with constants $a,b>0$:
\begin{align}
\tilde{\phi}(t,\textbf{x}):=\frac{b^{2}}{a}\phi(\frac{t}{a},\frac{\textbf{x}}{b})
\end{align}
A direct calculation implies
\begin{align}
\tilde{h}(t,\textbf{x}):=\frac{\partial\tilde{\phi}}{\partial t}-\frac{1}{2}|\nabla\tilde{\phi}|^{2}
=\frac{b^{2}}{a^{2}}h(\frac{t}{a},\frac{\textbf{x}}{b})
\end{align}
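In detail (a routine chain-rule computation under the transformation (1.49)):
\begin{align*}
\frac{\partial\tilde{\phi}}{\partial t}(t,\textbf{x})=\frac{b^{2}}{a^{2}}\frac{\partial\phi}{\partial t}\Big(\frac{t}{a},\frac{\textbf{x}}{b}\Big),\qquad
\nabla\tilde{\phi}(t,\textbf{x})=\frac{b}{a}\nabla\phi\Big(\frac{t}{a},\frac{\textbf{x}}{b}\Big)
\end{align*}
so that both terms of $\tilde{h}$ scale by the factor $\frac{b^{2}}{a^{2}}$, which is (1.50).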
In general, $p$ is an arbitrary function of $h$:
\begin{align}
p=p(h)
\end{align}
so
\begin{align}
\frac{dp}{dh}=\rho>0\quad\textrm{i.e.}\quad p^{\prime}>0\quad\textrm{and also}\quad p>0
\end{align}
and
\begin{align}
\frac{dp}{d\rho}=\eta^{2}=\frac{dp}{dh}\frac{dh}{d\rho}=\rho/\frac{d^{2}p}{dh^{2}}>0
\end{align}
Under the transformation (1.49), we have:
\begin{align}
\tilde{p}(t,\textbf{x})=p(\tilde{h}(t,\textbf{x}))=p(\frac{b^{2}}{a^{2}}h(\frac{t}{a},\frac{\textbf{x}}{b}))
\end{align}
and then
\begin{align}
\tilde{\rho}(t,\textbf{x})=\frac{d\tilde{p}}{d\tilde{h}}=p^{\prime}(\frac{b^{2}}{a^{2}}h(\frac{t}{a},\frac{\textbf{x}}{b}))=\rho(\frac{b^{2}}{a^{2}}h(\frac{t}{a},
\frac{\textbf{x}}{b}))
\end{align}
Now we can transform the Euler-Lagrange equation (1.21) under the transformation (1.49).
We can see that, in general, only in the case $a=b$ is the Euler-Lagrange equation invariant under the transformation (1.49). Still, there are some
special cases in which we have a 2-dimensional scaling group.
First we consider the case:
\begin{align}
p(h)=\frac{1}{2}h^{2}
\end{align}
Note that this function $p(h)$ satisfies (1.51)-(1.53).
Now we may take $a\neq b$:
\begin{align*}
\tilde{h}(t,\textbf{x})=\frac{b^{2}}{a^{2}}h(\frac{t}{a},\frac{\textbf{x}}{b})
\end{align*}
Then
\begin{align}
p(\tilde{h}(t,\textbf{x}))=\frac{1}{2}\tilde{h}^{2}(t,\textbf{x})=\frac{b^{4}}{a^{4}}\frac{1}{2}h^{2}(\frac{t}{a},\frac{\textbf{x}}{b})
=\frac{b^{4}}{a^{4}}p(h(\frac{t}{a},\frac{\textbf{x}}{b}))
\end{align}
That is
\begin{align}
\tilde{p}(t,\textbf{x})=\frac{b^{4}}{a^{4}}p(\frac{t}{a},\frac{\textbf{x}}{b})
\end{align}
and also:
\begin{align}
\tilde{\rho}(t,\textbf{x})=\tilde{h}(t,\textbf{x})=\frac{b^{2}}{a^{2}}h(\frac{t}{a},\frac{\textbf{x}}{b})
=\frac{b^{2}}{a^{2}}\rho(\frac{t}{a},\frac{\textbf{x}}{b})
\end{align}
So now we have:
\begin{align}
\frac{\partial\tilde{\rho}}{\partial t}-\textrm{div}(\tilde{\rho}\nabla\tilde{\phi})\\\notag
=\frac{b^{2}}{a^{3}}\frac{\partial\rho}{\partial t}-\frac{b^{2}}{a^{2}}\frac{1}{a}\rho\Delta\phi-\frac{b}{a^{2}}\nabla\rho\cdot\frac{b}{a}\nabla\phi
=\frac{b^{2}}{a^{3}}(\frac{\partial\rho}{\partial t}-\textrm{div}(\rho\nabla\phi))
\end{align}
This means that equation (1.21) is invariant under the transformation (1.49). So in this case, the scaling group is 2-dimensional.
More generally, we can consider the case
\begin{align}
p(h)=Ch^{\alpha}, \alpha>1, C>0, h>0
\end{align}
Obviously in this case the function $p(h)$ satisfies (1.51)-(1.53) and we have:
\begin{align}
p=C(\frac{\rho}{\alpha C})^{\frac{\alpha}{\alpha-1}}
\end{align}
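Indeed (a short computation using $\rho=\frac{dp}{dh}$):
\begin{align*}
\rho=\frac{dp}{dh}=\alpha{C}h^{\alpha-1}
\quad\Longrightarrow\quad
h=\Big(\frac{\rho}{\alpha{C}}\Big)^{\frac{1}{\alpha-1}}
\quad\Longrightarrow\quad
p=Ch^{\alpha}=C\Big(\frac{\rho}{\alpha{C}}\Big)^{\frac{\alpha}{\alpha-1}}
\end{align*}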
If we set:
\begin{align*}
\gamma=\frac{\alpha}{\alpha-1},\quad k=\frac{C}{(\alpha C)^{\frac{\alpha}{\alpha-1}}}
\end{align*}
then
\begin{align}
p=k\rho^{\gamma}
\end{align}
with $\gamma>1$ and $k$ depending on $\gamma$.
This is the polytropic case. We can calculate as above to see that in this case, we also have a 2-dimensional scaling group. Also, one can easily see
that we have a 2-dimensional scaling group only when the function $p$ is a homogeneous function of $h$.
In the present book, we shall consider the general case $p=p(h)$, so the scaling group is 1-dimensional. In this case, we consider the 1-parameter group of
dilations of $\mathbb{R}\times\mathbb{E}^{3}$, given by:
\begin{align}
(t,\textbf{x})\mapsto (e^{\tau}t, e^{\tau}\textbf{x})
\end{align}
where $\tau\in\mathbb{R}$.
Let $\phi$ be a solution of (1.21) and define for each $\tau\in\mathbb{R}$ the function
\begin{align}
\phi_{\tau}:=e^{-\tau}\phi(e^{\tau}t, e^{\tau}\textbf{x})
\end{align}
As we have seen before, $\phi_{\tau}$ is also a solution of (1.21). Then
\begin{align}
\psi=(\frac{d\phi_{\tau}}{d\tau})|_{\tau=0}:=S\phi
\end{align}
satisfies the linear equation (1.43). Here $S$ is the differential operator
\begin{align}
S=D-I,\quad D=x^{\mu}\frac{\partial}{\partial x^{\mu}}
\end{align}
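For completeness, a sketch of the two computations behind this: first, with $a=b=e^{-\tau}$ in (1.49) we have
\begin{align*}
\frac{\partial\phi_{\tau}}{\partial t}(t,\textbf{x})=\frac{\partial\phi}{\partial t}(e^{\tau}t,e^{\tau}\textbf{x}),\qquad
\nabla\phi_{\tau}(t,\textbf{x})=\nabla\phi(e^{\tau}t,e^{\tau}\textbf{x})
\end{align*}
so that $h_{\tau}(t,\textbf{x})=h(e^{\tau}t,e^{\tau}\textbf{x})$, $\rho_{\tau}(t,\textbf{x})=\rho(e^{\tau}t,e^{\tau}\textbf{x})$, and
\begin{align*}
\frac{\partial\rho_{\tau}}{\partial t}-\textrm{div}(\rho_{\tau}\nabla\phi_{\tau})
=e^{\tau}\Big[\frac{\partial\rho}{\partial t}-\textrm{div}(\rho\nabla\phi)\Big](e^{\tau}t,e^{\tau}\textbf{x})=0
\end{align*}
confirming that $\phi_{\tau}$ solves (1.21). Second,
\begin{align*}
\frac{d\phi_{\tau}}{d\tau}\Big|_{\tau=0}
=\frac{d}{d\tau}\Big[e^{-\tau}\phi(e^{\tau}t,e^{\tau}\textbf{x})\Big]_{\tau=0}
=-\phi+t\frac{\partial\phi}{\partial t}+\textbf{x}\cdot\nabla\phi=(D-I)\phi
\end{align*}
which is the formula $S=D-I$ above.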